apm-nodejs-http-client's Introduction

Note: This repository will no longer be updated. This APM HTTP client code was moved into the elastic/apm-agent-nodejs repository in 2023-07. The code now lives at: https://github.com/elastic/apm-agent-nodejs/tree/main/lib/apm-client/http-apm-client. Please open any issues, questions, or PRs for the APM HTTP client there.

elastic-apm-http-client

A low-level HTTP client for communicating with the Elastic APM intake API version 2. For support for version 1, use version 5.x of this module.

This module is meant for building other modules that need to communicate with Elastic APM.

If you are looking to use Elastic APM in your app or website, you'd most likely want to check out the official Elastic APM agent for Node.js instead.

Installation

npm install elastic-apm-http-client --save

Example Usage

const Client = require('elastic-apm-http-client')

const client = new Client({
  serviceName: 'My App',
  agentName: 'my-nodejs-agent',
  agentVersion: require('./package.json').version,
  userAgent: 'My Custom Elastic APM Agent'
})

const span = {
  name: 'SELECT FROM users',
  duration: 42,
  start: 0,
  type: 'db.mysql.query'
}

client.sendSpan(span)

API

new Client(options)

Construct a new client object. Data given to the client will be converted to ndjson, compressed using gzip, and streamed to the APM Server.

Arguments:

  • options - An object containing config options (see below). All options are optional, except those marked "(required)".

Data sent to the APM Server as part of the metadata object. See also the "Cloud & Extra Metadata" section below.

  • agentName - (required) The APM agent name
  • agentVersion - (required) The APM agent version
  • agentActivationMethod - An enum string (spec) that identifies the way this agent was activated/started
  • serviceName - (required) The name of the service being instrumented
  • serviceNodeName - Unique name of the service being instrumented
  • serviceVersion - The version of the service being instrumented
  • frameworkName - If the service being instrumented is running a specific framework, use this config option to log its name
  • frameworkVersion - If the service being instrumented is running a specific framework, use this config option to log its version
  • configuredHostname - A user-configured hostname, if any, e.g. from the ELASTIC_APM_HOSTNAME envvar. See https://github.com/elastic/apm/blob/main/specs/agents/metadata.md#hostname.
  • environment - Environment name (default: process.env.NODE_ENV || 'development')
  • containerId - Docker container id, if not given will be parsed from /proc/self/cgroup
  • kubernetesNodeName - Kubernetes node name
  • kubernetesNamespace - Kubernetes namespace
  • kubernetesPodName - Kubernetes pod name, if not given will be the hostname
  • kubernetesPodUID - Kubernetes pod id, if not given will be parsed from /proc/self/cgroup
  • globalLabels - An object of key/value pairs to use to label all data reported (only applied when using APM Server 7.1+)

HTTP client configuration:

  • userAgent - (required) The HTTP user agent that your module should identify itself as
  • secretToken - The Elastic APM intake API secret token
  • apiKey - Elastic APM API key
  • serverUrl - The APM Server URL (default: http://127.0.0.1:8200)
  • headers - An object containing extra HTTP headers that should be used when making HTTP requests to the APM Server
  • rejectUnauthorized - Set to false if the client shouldn't verify the APM Server TLS certificates (default: true)
  • serverCaCert - The CA certificate used to verify the APM Server's TLS certificate. It has the same requirements as the ca option of tls.createSecureContext.
  • serverTimeout - HTTP request timeout in milliseconds. If no data is sent or received on the socket for this amount of time, the request will be aborted. It's not recommended to set a serverTimeout lower than the time config option. That might result in healthy requests being aborted prematurely. (default: 15000 ms)
  • keepAlive - If set to false the client will not reuse sockets between requests (default: true)
  • keepAliveMsecs - When using the keepAlive option, specifies the initial delay for TCP Keep-Alive packets. Ignored when the keepAlive option is false or undefined (default: 1000 ms)
  • maxSockets - Maximum number of sockets to allow per host (default: Infinity)
  • maxFreeSockets - Maximum number of sockets to leave open in a free state. Only relevant if keepAlive is set to true (default: 256)
  • freeSocketTimeout - A number of milliseconds of inactivity on a free (kept-alive) socket after which to timeout and recycle the socket. Set this to a value less than the HTTP Keep-Alive timeout of the APM server to avoid ECONNRESET exceptions. This defaults to 4000ms to be less than the node.js HTTP server default of 5s (useful when using a Node.js-based mock APM server) and the Go lang Dialer KeepAlive default of 15s (when talking to the Elastic APM Lambda extension). (default: 4000)

Cloud & Extra Metadata Configuration. Zero or one of the following three options may be used.

  • cloudMetadataFetcher - An object with a getCloudMetadata(cb) method for fetching metadata related to the current cloud environment. The callback is of the form function (err, cloudMetadata) and the returned cloudMetadata will be set on metadata.cloud for intake requests to APM Server. If provided, this client will not begin any intake requests until the callback is called.
  • expectExtraMetadata - A boolean option to indicate that the client should not allow any intake requests to begin until client.setExtraMetadata(...) has been called. It is the responsibility of the caller to call client.setExtraMetadata(). If it is never called, the Client will never perform an intake request.
  • extraMetadata - An object with extra metadata to merge into the metadata object created from the individual fields above.
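As a sketch, a cloudMetadataFetcher needs only the documented getCloudMetadata(cb) method. The fetcher below returns hard-coded values where a real one would query the cloud provider's metadata endpoint:

```javascript
// Hypothetical fetcher for illustration: a real one would query the cloud
// provider's metadata service; here the callback gets canned values.
const cloudMetadataFetcher = {
  getCloudMetadata (cb) {
    cb(null, { provider: 'aws', region: 'us-east-1' })
  }
}

// The client calls getCloudMetadata and sets the result on metadata.cloud;
// intake requests are held until the callback fires.
let cloud
cloudMetadataFetcher.getCloudMetadata((err, cloudMetadata) => {
  if (err) throw err
  cloud = cloudMetadata
})
console.log(cloud.region) // us-east-1
```

The fetcher would then be passed to the constructor, e.g. new Client({ ..., cloudMetadataFetcher }).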

APM Agent Configuration via Kibana:

  • centralConfig - Whether or not the client should poll the APM Server regularly for new agent configuration. If set to true, the config event will be emitted when there's an update to an agent config option (default: false). Requires APM Server v7.3 or later and that the APM Server is configured with kibana.enabled: true.

Streaming configuration:

  • size - The maximum compressed body size (in bytes) of each HTTP request to the APM Server. An overshoot of up to the size of the internal zlib buffer should be expected as the buffer is flushed after this limit is reached. The default zlib buffer size is 16kB. (default: 768000 bytes)
  • time - The maximum number of milliseconds a streaming HTTP request to the APM Server can be ongoing before it's ended. Set to -1 to disable (default: 10000 ms)
  • bufferWindowTime - Objects written in quick succession are buffered and grouped into larger clusters that can be processed as a whole. This config option controls the maximum time that buffer can live before it's flushed (counted in milliseconds). Set to -1 for no buffering (default: 20 ms)
  • bufferWindowSize - Objects written in quick succession are buffered and grouped into larger clusters that can be processed as a whole. This config option controls the maximum size of that buffer (counted in number of objects). Set to -1 for no max size (default: 50 objects)
  • maxQueueSize - The maximum number of buffered events (transactions, spans, errors, metricsets). Events are buffered when the agent can't keep up with sending them to the APM Server or if the APM server is down. If the queue is full, events are rejected which means transactions, spans, etc. will be lost. This guards the application from consuming unbounded memory, possibly overusing CPU (spent on serializing events), and possibly crashing in case the APM server is unavailable for a long period of time. A lower value will decrease the heap overhead of the agent, while a higher value makes it less likely to lose events in case of a temporary spike in throughput. (default: 1024)
  • intakeResTimeout - The time (in milliseconds) by which a response from the APM Server events intake API is expected after all the event data for that request has been sent. This allows a smaller timeout than serverTimeout to handle an APM server that is accepting connections but is slow to respond. (default: 10000 ms)
  • intakeResTimeoutOnEnd - The same as intakeResTimeout, but used when the client has ended, hence for the possible last request to APM server. This is typically a lower value to not hang an ending process that is waiting for that APM server request to complete. (default: 1000 ms)
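The bufferWindowTime / bufferWindowSize behavior can be modeled roughly like this. This is a simplified stand-in, not the client's code; makeBuffer is a hypothetical helper:

```javascript
// Simplified model of write buffering: objects accumulate until the buffer
// reaches `windowSize` objects or `windowTime` ms elapse, then the batch is
// flushed as a whole.
function makeBuffer (windowSize, windowTime, onFlush) {
  let buf = []
  let timer = null
  function flush () {
    if (timer) { clearTimeout(timer); timer = null }
    if (buf.length > 0) { onFlush(buf); buf = [] }
  }
  return {
    write (obj) {
      buf.push(obj)
      if (buf.length >= windowSize) flush()
      else if (!timer) timer = setTimeout(flush, windowTime)
    },
    flush
  }
}

const flushed = []
const buffer = makeBuffer(2, 20, (batch) => flushed.push(batch))
buffer.write({ n: 1 })
buffer.write({ n: 2 }) // hits windowSize: flushes a batch of 2
buffer.write({ n: 3 })
buffer.flush() // manual flush of the remainder
console.log(flushed.length) // 2 batches
```

Batching like this trades a small delay (at most bufferWindowTime) for fewer, larger writes to the compression stream.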

Data sanitizing configuration:

  • truncateKeywordsAt - Maximum size in unicode characters for strings stored as Elasticsearch keywords. Strings larger than this will be truncated (default: 1024)
  • truncateLongFieldsAt - The maximum size in unicode characters for a specific set of long string fields. String values above this length will be truncated. Default: 10000. This applies to the following fields:
    • transaction.context.request.body, error.context.request.body
    • transaction.context.message.body, span.context.message.body, error.context.message.body
    • span.context.db.statement
    • error.exception.message (unless truncateErrorMessagesAt is specified)
    • error.log.message (unless truncateErrorMessagesAt is specified)
  • truncateStringsAt - The maximum size in unicode characters for strings. String values above this length will be truncated (default: 1024)
  • truncateErrorMessagesAt - DEPRECATED: prefer truncateLongFieldsAt. The maximum size in unicode characters for error messages. Messages above this length will be truncated. Set to -1 to disable truncation. This applies to the following properties: error.exception.message and error.log.message. (default: 2048)
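Counting in Unicode characters (code points) rather than UTF-16 code units matters here. A rough sketch, with a hypothetical truncate helper rather than the client's code:

```javascript
// Truncate to a maximum number of Unicode code points. Array.from splits a
// string on code points, so an astral character (e.g. an emoji) counts as
// one character instead of two UTF-16 units.
function truncate (str, max) {
  const chars = Array.from(str)
  return chars.length > max ? chars.slice(0, max).join('') : str
}

console.log(truncate('x'.repeat(2000), 1024).length) // 1024
console.log(truncate('😀😀😀', 2)) // two whole emoji, no broken surrogate pair
```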

Other options:

  • logger - A pino logger to use for trace and debug-level logging.
  • payloadLogFile - Specify a file path to which a copy of all data sent to the APM Server should be written. The data will be in ndjson format and will be uncompressed. Note that using this option can impact performance.
  • apmServerVersion - A string version to assume is the version of the APM Server at serverUrl. This option is typically only used for testing. Normally this client will fetch the APM Server version at startup via a GET / request. Setting this option avoids that request.

Event: config

Emitted every time a change to the agent config is pulled from the APM Server. The listener is passed the updated config options as a key/value object.

Each key is the lowercase version of the environment variable, without the ELASTIC_APM_ prefix, e.g. transaction_sample_rate instead of ELASTIC_APM_TRANSACTION_SAMPLE_RATE.

If no central configuration is set up for the given serviceName / environment when the client is started, this event will be emitted once with an empty object. This will also happen after central configuration for the given serviceName / environment is deleted.
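The key naming can be expressed as a simple transform (envVarToConfigKey is a hypothetical helper for illustration):

```javascript
// Map an agent environment variable name to the central-config key used by
// the 'config' event: drop the ELASTIC_APM_ prefix and lowercase the rest.
function envVarToConfigKey (envVar) {
  return envVar.replace(/^ELASTIC_APM_/, '').toLowerCase()
}

console.log(envVarToConfigKey('ELASTIC_APM_TRANSACTION_SAMPLE_RATE'))
// transaction_sample_rate

// A listener then reads those keys off the passed object, e.g.:
// client.on('config', (remoteConf) => {
//   // remoteConf.transaction_sample_rate, etc.
// })
```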

Event: close

The close event is emitted when the client and any of its underlying resources have been closed. The event indicates that no more events will be emitted, and no more data can be sent by the client.

Event: error

Emitted if an error occurs. The listener callback is passed a single Error argument when called.

Event: finish

The finish event is emitted after the client.end() method has been called, and all data has been flushed to the underlying system.

Event: request-error

Emitted if an error occurs while communicating with the APM Server. The listener callback is passed a single Error argument when called.

The request to the APM Server that caused the error is terminated and the data included in that request is lost. This is normally only important to consider for requests to the Intake API.

If a non-2xx response was received from the APM Server, the status code will be available on error.code.

For requests to the Intake API where the response is a structured error message, the error object will have the following properties:

  • error.accepted - An integer indicating how many events were accepted as part of the failed request. If 100 events were sent to the APM Server as part of the request, and the error reports only 98 as accepted, it means that two events either were not received or could not be processed for some reason
  • error.errors - An array of error messages. Each element in the array is an object containing a message property (String) and an optional document property (String). If the document property is given it will contain the failed event as it was received by the APM Server

If the response contained an error body that could not be parsed by the client, the raw body will be available on error.response.

The client is not closed when the request-error event is emitted.
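A listener can inspect the documented error properties. In this sketch, describeIntakeError is a hypothetical helper, and the error object is constructed locally for illustration:

```javascript
// Summarize a 'request-error' Error using the properties documented above.
function describeIntakeError (err) {
  const parts = [err.message]
  if (typeof err.code === 'number') parts.push('status=' + err.code)
  if (typeof err.accepted === 'number') parts.push('accepted=' + err.accepted)
  if (Array.isArray(err.errors)) {
    for (const e of err.errors) parts.push(e.message)
  }
  return parts.join('; ')
}

// Typical wiring:
// client.on('request-error', (err) => console.error(describeIntakeError(err)))

const example = Object.assign(new Error('Unexpected APM Server response'), {
  code: 400,
  accepted: 98,
  errors: [{ message: "validation error: 'metadata' required" }]
})
console.log(describeIntakeError(example))
```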

client.sent

An integer indicating the number of events (spans, transactions, errors, or metricsets) sent by the client. An event is considered sent when the HTTP request used to transmit it has ended. Note that errors in requests to APM server may mean this value is not the same as the number of events accepted by the APM server.

client.config(options)

Update the configuration given to the Client constructor. All configuration options can be updated except:

  • size
  • time
  • keepAlive
  • keepAliveMsecs
  • maxSockets
  • maxFreeSockets
  • centralConfig

client.supportsKeepingUnsampledTransaction()

This method returns a boolean indicating whether the remote APM Server (per the configured serverUrl) is of a version that requires unsampled transactions to be sent.

This defaults to true if the remote APM server version is not known -- either because the background fetch of the APM Server version hasn't yet completed, or the version could not be fetched.

client.supportsActivationMethodField()

This method returns a boolean indicating whether the remote APM Server (per the configured serverUrl) is of a version that supports the metadata.service.agent.activation_method field. This is true for APM server versions >=8.7.1. It defaults to true if the APM server version is not (yet) known.
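The ">=8.7.1" check can be sketched as a plain semver comparison. This is illustrative only, not the client's implementation:

```javascript
// true for APM Server >= 8.7.1; defaults to true when the version is unknown.
function supportsActivationMethodField (serverVersion) {
  if (!serverVersion) return true
  const [maj, min, pat] = serverVersion.split('.').map(Number)
  return maj > 8 || (maj === 8 && (min > 7 || (min === 7 && pat >= 1)))
}

console.log(supportsActivationMethodField('8.7.1')) // true
console.log(supportsActivationMethodField('8.6.2')) // false
console.log(supportsActivationMethodField(undefined)) // true (unknown version)
```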

client.addMetadataFilter(fn)

Add a filter function for the "metadata" object sent to APM server. This will be called once at client creation, and possibly again later if client.config() is called to reconfigure the client or client.addMetadataFilter(fn) is called to add additional filters.

Here is an example of a filter that removes the metadata.process.argv field:

apm.addMetadataFilter(function dropArgv(md) {
  if (md.process && md.process.argv) {
    delete md.process.argv
  }
  return md
})

It is up to the user to ensure the returned object conforms to the metadata schema, otherwise APM data ingest will be broken. An example of that (when used with the Node.js APM agent) is this in the application's log:

[2021-04-14T22:28:35.419Z] ERROR (elastic-apm-node): APM Server transport error (400): Unexpected APM Server response
APM Server accepted 0 events in the last request
Error: validation error: 'metadata' required
  Document: {"metadata":null}

See the APM Agent addMetadataFilter documentation for further details.

client.setExtraMetadata([metadata])

Add extra metadata to be included in the "metadata" object sent to APM Server in intake requests. The given metadata object is merged into the metadata determined from the client configuration.

The reason this exists is to allow some metadata to be provided asynchronously, especially in combination with the expectExtraMetadata configuration option to ensure that event data is not sent to APM Server until this extra metadata is provided. For example, in an AWS Lambda function some metadata is not available until the first function invocation -- which is some async time after Client creation.

client.lambdaStart()

Tells the client that a Lambda function invocation has started. See Notes on Lambda Usage below.

client.lambdaShouldRegisterTransactions()

This returns a boolean indicating if the APM agent -- when running in a Lambda environment -- should bother calling client.lambdaRegisterTransaction(...). This can help the APM agent avoid the processing overhead of gathering transaction data when registration is not supported.

Typically the reason this would return false is when the Lambda extension is too old to support registering transactions.

client.lambdaRegisterTransaction(transaction, awsRequestId)

Tells the Lambda Extension about the ongoing transaction, so that data can be used to report the transaction in certain error cases -- e.g. a Lambda handler timeout. See Notes on Lambda Usage below.

Arguments:

  • transaction - A transaction object that can be serialized to JSON.
  • awsRequestId - The AWS request ID for this invocation. This is a UUID available on the Lambda context object.

client.sendSpan(span[, callback])

Send a span to the APM Server.

Arguments:

  • span - A span object that can be serialized to JSON
  • callback - Callback is called when the span has been flushed to the underlying system

client.sendTransaction(transaction[, callback])

Send a transaction to the APM Server.

Arguments:

  • transaction - A transaction object that can be serialized to JSON
  • callback - Callback is called when the transaction has been flushed to the underlying system

client.sendError(error[, callback])

Send an error to the APM Server.

Arguments:

  • error - An error object that can be serialized to JSON
  • callback - Callback is called when the error has been flushed to the underlying system

client.sendMetricSet(metricset[, callback])

Send a metricset to the APM Server.

Arguments:

  • metricset - A metricset object that can be serialized to JSON
  • callback - Callback is called when the metricset has been flushed to the underlying system

client.flush([opts,] [callback])

Flush the internal buffer and end the current HTTP request to the APM Server. If no HTTP request is in process nothing happens. In an AWS Lambda environment this will also initiate a quicker shutdown of the intake request, because the APM agent always flushes at the end of a Lambda handler.

Arguments:

  • opts:
    • opts.lambdaEnd - An optional boolean to indicate if this is the final flush at the end of the Lambda function invocation. The client will do some extra handling if this is the case. See notes in client.lambdaStart() above.
  • callback - Callback is called when the internal buffer has been flushed and the HTTP request ended. If no HTTP request is in progress the callback is called in the next tick.

client.end([callback])

Calling the client.end() method signals that no more data will be sent to the client. If the internal buffer contains any data, this is flushed before ending.

Arguments:

  • callback - If provided, the optional callback function is attached as a listener for the 'finish' event

client.destroy()

Destroy the client. After this call, the client has ended and subsequent calls to sendSpan(), sendTransaction(), sendError(), flush(), or end() will result in an error.

Notes on Lambda usage

To properly handle data flushing for instrumented Lambda functions this Client should be used as follows in a Lambda environment.

  1. Ensure that metadata is set before any of the following calls. Typically in Lambda this is done by (a) configuring the client with expectExtraMetadata and (b) calling setExtraMetadata() at the start of the first invocation.

  2. When a Lambda invocation starts, client.lambdaStart() must be called. The Client prevents intake requests to APM Server when in a Lambda environment when a function invocation is not active. This is to ensure that an intake request does not accidentally span a period when a Lambda VM is frozen, which can lead to timeouts and lost APM data.

  3. When the transaction for this Lambda invocation has been created, await client.lambdaRegisterTransaction(<transaction>, <awsRequestId>) should be called. This is used to pass transaction details to the Lambda Extension so a transaction can be reported in certain failure modes (e.g. a Lambda handler timeout).

    client.lambdaShouldRegisterTransactions() can be used to avoid gathering data for this call.

  4. When a Lambda invocation finishes, client.flush({lambdaEnd: true}, cb) must be called.

    The lambdaEnd: true tells the Client to (a) mark the lambda as inactive so a subsequent intake request is not started until the next invocation, and (b) signal the Elastic AWS Lambda Extension that this invocation is done. The user's Lambda handler should not finish until cb is called. This ensures that the extension receives tracing data and the end signal before the Lambda Runtime freezes the VM.
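The steps above can be sketched with a stand-in client object that only records calls, to show the required ordering. The real client would come from new Client({ ..., expectExtraMetadata: true }):

```javascript
// Stand-in client that records calls; the real methods are documented above.
const calls = []
const client = {
  setExtraMetadata (md) { calls.push('setExtraMetadata') },
  lambdaStart () { calls.push('lambdaStart') },
  lambdaShouldRegisterTransactions () { return true },
  lambdaRegisterTransaction (tx, awsRequestId) { calls.push('lambdaRegisterTransaction') },
  flush (opts, cb) { calls.push('flush(lambdaEnd=' + opts.lambdaEnd + ')'); cb() }
}

let extraMetadataSet = false

function handler (event, context, done) {
  if (!extraMetadataSet) { // step 1: first invocation only
    client.setExtraMetadata({ /* e.g. cloud/faas fields from context */ })
    extraMetadataSet = true
  }
  client.lambdaStart() // step 2: mark the invocation active
  if (client.lambdaShouldRegisterTransactions()) {
    // step 3: pass transaction details to the Lambda Extension
    client.lambdaRegisterTransaction({ name: 'GET /hello' }, context.awsRequestId)
  }
  // ... run the instrumented handler body, creating the transaction ...
  client.flush({ lambdaEnd: true }, done) // step 4: don't finish before done fires
}

handler({}, { awsRequestId: 'example-request-id' }, () => {
  console.log(calls.join(' -> '))
})
```

In real usage client.lambdaRegisterTransaction(...) returns a promise and should be awaited, per step 3 above; the stand-in here is synchronous for brevity.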

License

MIT

apm-nodejs-http-client's People

Contributors

alemagio, alexanderwert, astorm, axw, david-luna, gergof, kuisathaverat, psrebniak, qard, trentm, v1v, vigneshshanmugam, watson


apm-nodejs-http-client's Issues

Client Behavior in AWS Lambda Environment

We've observed a few inconsistencies in how the HTTP Client behaves in an AWS Lambda environment.

  1. The use of an unrefed socket/connection can lead to a lambda function freezing before all data has been sent. This, in turn, can lead to the data being lost when the lambda unfreezes due to a closed connection

  2. We've observed when using the lambda callback based API (vs. using an async function or a returned promise) that the client's stream can be rendered un-writable.

This Issue represents the work needed to reproduce these issues consistently and solve these problems such that the HTTP client can report data to the new Lambda extension.

consider changing payloadLogFile handling to *not* unzip the payload data

Currently the (intended for debugging only) payloadLogFile config value will unzip the gzipped stream of intake API data and write it to the given path. That unzipping is a perf hit for little reason. With the diff below we could avoid spending the CPU cycles to unzip it and just write the gzip'd data to disk. The developer could then watch the payload log file as follows:

tail -n0 -F payload.log | zcat

Note: per internal discussion today, it was decided this should wait for a major version bump on the outside chance that some user took this part of the current apm-agent-nodejs.git README:

[screenshot of the payloadLogFile portion of the apm-agent-nodejs README]

to imply that using payloadLogFile could be load-bearing. :)

Starter patch:

diff --git a/index.js b/index.js
index fd96cbf..91f93ae 100644
--- a/index.js
+++ b/index.js
@@ -912,7 +912,7 @@ function getChoppedStreamHandler (client, onerror) {
     // impact.
     if (client._conf.payloadLogFile) {
       const payloadLogStream = fs.createWriteStream(client._conf.payloadLogFile, { flags: 'a' })
-      gzipStream.pipe(zlib.createGunzip()).pipe(payloadLogStream)
+      gzipStream.pipe(payloadLogStream)
     }

     // Send the metadata object (always first) and hook up the streams.
diff --git a/test/config.js b/test/config.js
index 36d766b..5d33964 100644
--- a/test/config.js
+++ b/test/config.js
@@ -9,6 +9,7 @@ const path = require('path')
 const semver = require('semver')
 const test = require('tape')
 const URL = require('url').URL
+const zlib = require('zlib')

 const utils = require('./lib/utils')
 const pkg = require('../package')
@@ -432,7 +433,7 @@ test('payloadLogFile', function (t) {
   t.plan(6)

   const receivedObjects = []
-  const filename = path.join(os.tmpdir(), Date.now() + '.ndjson')
+  const filename = path.join(os.tmpdir(), Date.now() + '.zndjson')
   let requests = 0

   let client
@@ -453,7 +454,9 @@ test('payloadLogFile', function (t) {
         server.close()
         t.equal(receivedObjects.length, 5, 'should have received 5 objects')

-        const file = fs.createReadStream(filename).pipe(ndjson.parse())
+        const file = fs.createReadStream(filename)
+          .pipe(zlib.createGunzip())
+          .pipe(ndjson.parse())

         file.on('data', function (obj) {
           const expected = receivedObjects.shift()

Need to extend the .gitignore configuration

It is good practice to extend the .gitignore configuration with additional files that have no direct value to the application.

For example, add the following files:
yarn.lock
logs
.log
npm-debug.log
pids
*.pid
*.seed
.idea

Audit failure due to a vulnerability in semver

Library semver <7.5.2 has a vulnerability:
GHSA-c2qf-rxjj-qqgw

The npm audit script fails with the following error:

                       === npm audit security report ===                        
                                                                                
# Run  npm install [email protected]  to resolve 1 vulnerability
SEMVER WARNING: Recommended action is a potentially breaking change
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘


# Run  npm install --save-dev [email protected]  to resolve 1 vulnerability
SEMVER WARNING: Recommended action is a potentially breaking change
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ nyc [dev]                                                    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ nyc > test-exclude > read-pkg-up > read-pkg >                │
│               │ normalize-package-data > semver                              │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘


# Run  npm install --save-dev [email protected]  to resolve 1 vulnerability
SEMVER WARNING: Recommended action is a potentially breaking change
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ standard [dev]                                               │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ standard > eslint > cross-spawn > semver                     │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘


┌──────────────────────────────────────────────────────────────────────────────┐
│                                Manual Review                                 │
│            Some vulnerabilities require your attention to resolve            │
│                                                                              │
│         Visit https://go.npm.me/audit-guide for additional guidance          │
└──────────────────────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in    │ >=7.5.2                                                      │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ nyc [dev]                                                    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ nyc > istanbul-lib-instrument > semver                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in    │ >=7.5.2                                                      │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ nyc [dev]                                                    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ nyc > make-dir > semver                                      │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate      │ semver vulnerable to Regular Expression Denial of Service    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package       │ semver                                                       │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in    │ >=7.5.2                                                      │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ nyc [dev]                                                    │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path          │ nyc > caching-transform > make-dir > semver                  │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info     │ https://github.com/advisories/GHSA-c2qf-rxjj-qqgw            │
└───────────────┴──────────────────────────────────────────────────────────────┘
found 6 moderate severity vulnerabilities in 365 scanned packages
  3 vulnerabilities require semver-major dependency updates.
  3 vulnerabilities require manual review. See the full report for details.

Feature Request: Logging in the Client

The APM Client currently doesn't have access to a logger. This means we can't add logging for our customers, or move forward on our long-term plans for "logging as self instrumentation".

In order to fix this we should

  1. Consider a logger for the APM client
  2. Determine how, if at all, this logger interacts with the Agent's logger
  3. Be extra considerate of perf. concerns, as the client can often be a "hot path" for high throughput applications

Once the logger's in place, we should also consider places where we may want to add logs. For example, our cautious check on _encodedMetadata, when we're 95% certain it should already be set:

    if (!client._encodedMetadata) {
      client._encodedMetadata = client._encode({ metadata: client._conf.metadata }, Client.encoding.METADATA)
    }
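
A minimal shape for point 1 and 3, assuming the agent injects its logger and the client defaults to a no-op (all names here are hypothetical, not the actual client API):

```javascript
// Hypothetical sketch: accept an injected logger, defaulting to a no-op
// object so disabled logging costs only a property access on hot paths.
const noopLogger = {
  trace () {}, debug () {}, info () {}, warn () {}, error () {}
}

function resolveLogger (options) {
  return (options && options.logger) || noopLogger
}

// Usage: the client would call this once in its constructor.
const logger = resolveLogger({})
logger.debug('this is a no-op unless a real logger was injected')
```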

perf: Alternate Options to Initial Stream Corking

The current implementation of cloud metadata relies on corking the client stream until the cloud metadata is fetched (or the fetch fails), and then uncorking the client stream once the data is available. This can result in a small accumulation of data in the corked client stream. One potential workaround for this, from @watson:

It might be desirable to cork a bit further down the pipeline (e.g. the zlib or request stream), because the zlib stream already has an internal buffer that's used for compression. In an ideal situation, corking that would not add any delay to the pipeline at all, since the metadata would have time to be populated before the zlib stream had a chance to flush.

This would also mean that the buffered data would be compressed and that the client would not retain any references to the span/transaction objects, which in turn would allow them to be garbage collected.

Our initial attempts at this resulted in strange timing issues where the payload metadata was not the first item written to the request stream, which caused the APM Server to reject the data.

We should revisit this during our next perf. pass on the agent/client.

api-v2: add friendly error message in case of 404

If the client tries to connect to the APM Server but it doesn't respond, or it responds with a 404, the following error message should be reported:

<host-prefix>/intake/v2/events not found (requires APM Server 6.5.0 or newer)
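
A sketch of the check (the function name and wiring are hypothetical; only the message text comes from this issue):

```javascript
// Hypothetical sketch: map an intake response to a friendlier error message.
function intakeErrorMessage (statusCode, serverUrl) {
  if (statusCode === 404) {
    return `${serverUrl}/intake/v2/events not found (requires APM Server 6.5.0 or newer)`
  }
  return `Unexpected APM Server response code: ${statusCode}`
}

console.log(intakeErrorMessage(404, 'http://localhost:8200'))
```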

Throws error `Cannot read property 'length' of undefined` on StreamChopper.Writable.write()

I've discovered today that, under certain circumstances, this library causes readable-stream to throw the following errors:

 Uncaught Exception: TypeError: Cannot read property 'length' of undefined
     at writeOrBuffer (readable-stream/lib/_stream_writable.js:370:42)
     at StreamChopper.Writable.write (readable-stream/lib/_stream_writable.js:303:11)
     at Client._writevCleaned (elastic-apm-http-client/index.js:276:17)
     at processBatch (elastic-apm-http-client/index.js:249:12)
     at Client._writev (elastic-apm-http-client/index.js:267:3)
     at doWrite (readable-stream/lib/_stream_writable.js:405:97)
     at clearBuffer (readable-stream/lib/_stream_writable.js:501:5)
     at onwrite (readable-stream/lib/_stream_writable.js:451:7)
     at WritableState.onwrite (readable-stream/lib/_stream_writable.js:158:5)
     at afterWrite (readable-stream/lib/_stream_writable.js:465:3)
     at onwrite (readable-stream/lib/_stream_writable.js:457:7)
     at WritableState.onwrite (readable-stream/lib/_stream_writable.js:158:5)
     at StreamChopper.ondrain [as _ondrain] (stream-chopper/index.js:59:5)
     at StreamChopper._removeStream (stream-chopper/index.js:182:45)
     at StreamChopper._endStream (stream-chopper/index.js:165:8)
     at StreamChopper._maybeEndTransformSteam (stream-chopper/index.js:137:27)
     at Gzip._stream.on.chunk (stream-chopper/index.js:94:14)
     at emitOne (events.js:121:20)
     at Gzip.emit (events.js:211:7)
     at addChunk (_stream_readable.js:263:12)
     at readableAddChunk (_stream_readable.js:250:11)
     at Gzip.Readable.push (_stream_readable.js:208:10)
     at Gzip.Transform.push (_stream_transform.js:147:32)
     at Zlib.callback (zlib.js:474:14)
Uncaught Exception: TypeError: Cannot read property 'length' of undefined
    at writeOrBuffer (readable-stream/lib/_stream_writable.js:370:42)
    at StreamChopper.Writable.write (readable-stream/lib/_stream_writable.js:303:11)
    at Client._writevCleaned (elastic-apm-http-client/index.js:276:17)
    at processBatch (elastic-apm-http-client/index.js:249:12)
    at Client._writev (elastic-apm-http-client/index.js:267:3)
    at doWrite (readable-stream/lib/_stream_writable.js:405:97)
    at clearBuffer (readable-stream/lib/_stream_writable.js:501:5)
    at Client.Writable.uncork (readable-stream/lib/_stream_writable.js:317:94)
    at process.nextTick (elastic-apm-http-client/index.js:310:42)
    at _combinedTickCallback (internal/process/next_tick.js:132:7)
    at process._tickDomainCallback (internal/process/next_tick.js:219:9)

Both seemingly boil down to the same bug. This library (I use version 8.2.1) calls the write() method and skips the second (encoding) parameter (https://github.com/elastic/apm-nodejs-http-client/blob/v8.2.1/index.js#L276):

  this._chopper.write(chunk, cb)

@mcollina asked me to open this issue here in the hope you know better how to reproduce the problem (nodejs/readable-stream#421)

Pipeline job is failing to convert TAP to junit

The pipeline job seems broken; it could be related to the upgrade to Jenkins core 2.201:

[2019-10-25T04:56:49.483Z] process apparently never started in /var/lib/jenkins/workspace/apm-nodejs-http-client-mbp_PR-93@tmp/durable-b9d74e73
[2019-10-25T04:56:49.484Z] (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
script returned exit code -2

TypeError: Cannot convert undefined or null to object

I'm getting this error and I'm not sure why; please advise:

TypeError: Cannot convert undefined or null to object
    at Function.keys (<anonymous>)
    at truncateCustomKeys (/cache/node_modules/elastic-apm-http-client/lib/truncate.js:309:23)
    at truncateCustomKeys (/cache/node_modules/elastic-apm-http-client/lib/truncate.js:316:22)
    at Object.truncSpan [as span] (/cache/node_modules/elastic-apm-http-client/lib/truncate.js:136:12)
    at Client._encode (/cache/node_modules/elastic-apm-http-client/index.js:327:27)
    at Client.encodeObject (/cache/node_modules/elastic-apm-http-client/index.js:272:15)
    at Array.map (<anonymous>)
    at Client._writevCleaned (/cache/node_modules/elastic-apm-http-client/index.js:276:22)
    at processBatch (/cache/node_modules/elastic-apm-http-client/index.js:250:12)
    at Client._writev (/cache/node_modules/elastic-apm-http-client/index.js:268:3)
    at doWrite (/cache/node_modules/elastic-apm-http-client/node_modules/readable-stream/lib/_stream_writable.js:405:97)
    at clearBuffer (/cache/node_modules/elastic-apm-http-client/node_modules/readable-stream/lib/_stream_writable.js:501:5)
    at Client.Writable.uncork (/cache/node_modules/elastic-apm-http-client/node_modules/readable-stream/lib/_stream_writable.js:317:94)
    at Timeout._onTimeout (/cache/node_modules/elastic-apm-http-client/index.js:299:14)
    at listOnTimeout (internal/timers.js:531:17)
    at processTimers (internal/timers.js:475:7)
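
The trace suggests truncateCustomKeys calls Object.keys() on a null or undefined value. A defensive guard would avoid the throw (a hypothetical sketch, not the actual fix):

```javascript
// Hypothetical sketch: treat null/undefined like an empty object so
// Object.keys() can never throw here.
function safeKeys (obj) {
  return obj == null ? [] : Object.keys(obj)
}

console.log(safeKeys(null))      // []
console.log(safeKeys({ a: 1 }))  // [ 'a' ]
```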

Truncate keys

Today we just truncate object values, but we should probably truncate keys with the same algorithm we use to truncate values.
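
A minimal sketch of what key truncation could look like, using a plain slice for illustration (the real implementation would reuse the client's byte-aware truncation algorithm):

```javascript
// Hypothetical sketch: rebuild the object with truncated keys.
function truncateKeys (obj, maxLen) {
  const out = {}
  for (const key of Object.keys(obj)) {
    out[key.length > maxLen ? key.slice(0, maxLen) : key] = obj[key]
  }
  return out
}

console.log(truncateKeys({ averylongkey: 1, ok: 2 }, 4)) // { aver: 1, ok: 2 }
```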

Non-standard indentation and undefined catch scope

In the file "lib/ndjson.js", the tryJSONStringify function is declared below its call site, which causes an "Expected to return a value at the end of function 'tryJSONStringify'" lint conflict, and the function has nothing defined in its catch scope.

I suggest implementing the following code:

'use strict'

const stringify = require('fast-safe-stringify')

function tryJSONStringify (obj) {
  try {
    return JSON.stringify(obj)
  } catch (e) {
    // Swallow the error (e.g. circular references) and return undefined
    // so serialize() below can fall back to fast-safe-stringify.
  }
}

exports.serialize = function serialize (obj) {
  const str = tryJSONStringify(obj) || stringify(obj)
  return str + '\n'
}

CI implementations (in GitHub Actions and Jenkins) double the number of builds per PR

What

If GitHub Actions produces the same outcome as the JenkinsCI pipeline, can it be removed?

Why

  • Duplicated effort to maintain the same implementation in two different CI.
  • JenkinsCI is the official CI.
  • Builds cost twice as much.
  • Observing the CI/CD with Jenkins is something we have in place.
  • Get for free all the tooling we have in place and will put in place to improve the Development eXperience.
  • Observability Onboarding, the same development experience for everyone at Elastic who contributes to the observability solution.

Issues

The first implementation was done in Jenkins in #65

And the GitHub actions in #123

Fix CI issue on Node.js 6

For some reason, our tests often fail on Node.js 6. So far I've not been able to reproduce the issue locally; it only happens on Travis.

But this happens in maybe 19 out of 20 test runs, so it's basically a blocker at the moment.

`client.flush(cb)` callback may not be called in v9.7.0

The recent v9.7.0 release changed behaviour such that the cb callback in client.flush(cb) is not called in a specific case.

A minimal sample case with the APM agent (https://github.com/elastic/apm-agent-nodejs) is this:

require('elastic-apm-node').start({
  serviceName: 'use-agent-and-throw',
  centralConfig: false  // is necessary to trigger the issue
})

throw new Error('boom')

By default the elastic-apm-node agent is setting up an 'uncaughtException' handler that (eventually) effectively calls the following:

      client.sendError(error, function () {
        client.flush(function (err) {
          // ...
        })
      })

With previous versions of the http client, this would send the error to the APM server and call the callback to client.flush(). With v9.7.0, that client.flush() callback is not called. More details coming in subsequent comments.

Cannot get this to send out spans

I am sorry if I post this in the wrong way, or to the wrong forum. But I cannot get this package to work. Here is my complete node.js program:

const Client = require('elastic-apm-http-client')
const options = {
  agentName: 'my-nodejs-agent',
  agentVersion: require('elastic-apm-http-client/package.json').version,
  serviceName: 'Hybrid_cloudflow_jackets',
  serviceVersion: '1.0.0',
  userAgent: 'Hybrid Custom Elastic APM Agent',
  secretToken: 'XXXXXXXXX',
  serverUrl: 'https://528e6f7cd510472591f55b07d0d68130.apm.eu-west-3.aws.elastic-cloud.com',
  payloadLogFile: 'test_apm_debug.log'
}
const client = new Client(options)
const span = { name: 'SELECT FROM users', duration: 42, start: 0, type: 'db.mysql.query' }
client.sendSpan(span)
client.sendSpan(span)
console.log(`driver reported ${client.sent} records sent`)
client.flush()
console.log(`after flush: driver reported ${client.sent} records sent`)
client.end()
client.destroy()

Running this gives the following output:
C:\Program Files\nodejs\node.exe .\test_apm.js
driver reported 0 records sent
after flush: driver reported 0 records sent

Also, the logfile is empty (as is to be expected)
Please help,
Thank you so much in advance

Update remote config logic to match status code change

If no remote config is set up for the given agent/environment, the APM Server now returns 200 with an empty object instead of 404. This wrongly triggers the config event.

Our agent currently handles this ok, but it would still be best if we didn't emit the config event.

webpack dynamic require failed

At the moment this._transport is resolved via a require() call with a dynamic value. This breaks webpack builds: webpack cannot statically determine the value of the require, so at runtime we get a "cannot find 'http' module" error. I fixed this by using a switch statement instead of a dynamic require.

Kibana Instrumentation and `APM Server transport error (ECONNRESET): socket hang up` Log Messages

We've received reports that some users are seeing the following error message

APM Server transport error (ECONNRESET): socket hang up

These users are using the Elastic Node.js agent to instrument their Kibana development instances. This issue is a general catch-all thread for information about these errors and our attempts to get a stable, working reproduction in order to further diagnose the issue.

Add backoff algorithm in case of APM Server communication issues

This is needed both in case the APM Server is offline or otherwise isn't accepting new connections, or in case it responds with a 503 status code.

From the intake API spec:

If the HTTP response status code isn’t 2xx or if a request is prematurely closed (either on the TCP or HTTP level) the request MUST be considered failed.

When a request fails, the agent has no way of knowing exactly what data was successfully processed by the APM Server. And since the agent doesn’t keep a copy of the data that was sent, there’s no way for the agent to re-send any data. Furthermore, as the data waiting to be sent is already compressed, it’s impractical to recover any of it in a way so that it can be sent over a new HTTP request.

The agent should therefore drop the entire compressed buffer - both the internal zlib buffer, and potentially the already compressed data if such data is also buffered. From thereon new data written to the compression library can be directed to a new HTTP request.

The new HTTP request should not necessarily be started immediately after the previous HTTP request fails, as the reason for the failure might not have been resolved up-stream. Instead an incremental back-off algorithm SHOULD be used to delay new requests. The grace period should be calculated in seconds using the algorithm min(reconnectCount++, 6) ** 2 ± 10%, where reconnectCount starts at zero. So the delay after the first error is 0 seconds, then circa 1, 4, 9, 16, 25 and finally 36 seconds. We add ±10% jitter to the calculated grace period in case multiple agents entered the grace period simultaneously. This way they will not all try to reconnect at the same time.
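
The grace-period formula translates directly to code (a sketch; the jitter range and units follow the spec text quoted above):

```javascript
// Sketch of the spec'd backoff: min(reconnectCount, 6) ** 2 seconds, ±10% jitter.
function graceMillis (reconnectCount) {
  const seconds = Math.pow(Math.min(reconnectCount, 6), 2)
  const jitter = seconds * 0.1 * (Math.random() * 2 - 1) // ±10%
  return (seconds + jitter) * 1000
}

// Delays after successive failures: 0s, ~1s, ~4s, ~9s, ~16s, ~25s, then ~36s capped.
for (let n = 0; n <= 7; n++) {
  console.log(n, Math.round(graceMillis(n)))
}
```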

While the grace period is in effect, the agent MAY buffer the data that was supposed to be sent if the grace period wasn’t in effect. If buffering, the agent should ensure the memory used to buffer data doesn’t grow indefinitely. A reasonable limit MUST in that case be implemented.

Flush seems to complete before request is sent

I am using APM in a script and need to call flush to ensure all APM data has been persisted. However, I noticed that was not always happening.

The following script fails to persist the transaction unless I add an additional wait which can be done by calling flushAndWait instead of flush.

async function main() {
  const Client = require('elastic-apm-http-client')
  const { mainModule } = require('process')
  
  const client = new Client({
    serviceName: 'tyler-test',
    agentName: 'my-nodejs-agent',
    agentVersion: 'v7.10.0',
    userAgent: 'My Custom Elastic APM Agent',
    environment: 'test',

    serverUrl: 'https://b---1.apm.us-west1.gcp.cloud.es.io',
    secretToken: 'V---F',

    payloadLogFile: '/tmp/repro-apm-nodejs-http-client.log'
  })
  
  const trans = {
    timestamp: new Date().getTime() * 1000,
    name: 'ResourceHttpRequestHandler',
    type: 'http',
    id: '4340a8e0df1906ecbfa9',
    trace_id: '0acd456789abcdef0123456789abcdef',
    parent_id: 'abcdefabcdef01234567',
    span_count: {
      started: 20,
      dropped: 0
    },
    duration: 10,
  }
  
  client.on('error', err => console.error(err))
  client.on('request-error', err => console.error(err))

  client.sendTransaction(trans)

  const flush = () => new Promise((resolve) => client.flush(resolve))
  const flushAndWait = () => new Promise((resolve) => client.flush(setTimeout(resolve, 2000)))

  console.log('before')
  await flushAndWait()
  console.log('after')
}

main();

'npm run coverage' errors; breaks checks for node v15

Starting at least last Friday I'm seeing errors like the following from running code coverage: npm run coverage

With node v14 sometimes this:

% npm run coverage

> [email protected] coverage /Users/trentm/el/apm-nodejs-http-client2
> nyc report --reporter=text-lcov > coverage.lcov && codecov

  _____          _
 / ____|        | |
| |     ___   __| | ___  ___ _____   __
| |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
| |___| (_) | (_| |  __/ (_| (_) \ V /
 \_____\___/ \__,_|\___|\___\___/ \_/
                                v3.8.1
==> Detecting CI Provider
    No CI Detected. Using git/mercurial
==> Configuration:
    Endpoint: https://codecov.io
{
  commit: 'c7cbce96795405dc91aebd92389ae69fe1f4f626',
  branch: 'trentm/blocking-behavior',
  package: 'node-v3.8.1'
}
==> Building file structure
==> Generating gcov reports (skip via --disable=gcov)
    $ find /Users/trentm/el/apm-nodejs-http-client2 -type f -name '*.gcno'  -exec gcov  {} +
    Failed to run gcov command.
==> Scanning for reports
    + /Users/trentm/el/apm-nodejs-http-client2/coverage.lcov
==> Uploading reports
    Could not determine repo and owner

which is fine, I just don't have codecov.io creds setup.

With node v14 sometimes this:

% npm run coverage

> [email protected] coverage /Users/trentm/el/apm-nodejs-http-client2
> nyc report --reporter=text-lcov > coverage.lcov && codecov

  _____          _
 / ____|        | |
| |     ___   __| | ___  ___ _____   __
| |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
| |___| (_) | (_| |  __/ (_| (_) \ V /
 \_____\___/ \__,_|\___|\___\___/ \_/
                                v3.8.1
==> Detecting CI Provider
    No CI Detected. Using git/mercurial
==> Configuration:
    Endpoint: https://codecov.io
{
  commit: 'c7cbce96795405dc91aebd92389ae69fe1f4f626',
  branch: 'trentm/blocking-behavior',
  package: 'node-v3.8.1'
}
==> Building file structure
==> Generating gcov reports (skip via --disable=gcov)
    $ find /Users/trentm/el/apm-nodejs-http-client2 -type f -name '*.gcno'  -exec gcov  {} +
    Failed to run gcov command.
==> Scanning for reports
    + /Users/trentm/el/apm-nodejs-http-client2/coverage.lcov
==> Uploading reports
(node:34727) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'startsWith' of undefined
    at Object.getAgent (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/agents.js:30:22)
    at requestToFetchOptions (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:86:30)
    at teenyRequest (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:143:14)
    at /Users/trentm/el/apm-nodejs-http-client2/node_modules/codecov/lib/codecov.js:215:9
    at /Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:233:13
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:34727) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:34727) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

which is concerning, but doesn't break builds because the exit status is still 0.

However, on node v15 it does exit non-zero:

% npm run coverage

> [email protected] coverage
> nyc report --reporter=text-lcov > coverage.lcov && codecov

  _____          _
 / ____|        | |
| |     ___   __| | ___  ___ _____   __
| |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
| |___| (_) | (_| |  __/ (_| (_) \ V /
 \_____\___/ \__,_|\___|\___\___/ \_/
                                v3.8.1
==> Detecting CI Provider
    No CI Detected. Using git/mercurial
==> Configuration:
    Endpoint: https://codecov.io
{
  commit: 'c7cbce96795405dc91aebd92389ae69fe1f4f626',
  branch: 'trentm/blocking-behavior',
  package: 'node-v3.8.1'
}
==> Building file structure
==> Generating gcov reports (skip via --disable=gcov)
    $ find /Users/trentm/el/apm-nodejs-http-client2 -type f -name '*.gcno'  -exec gcov  {} +
    Failed to run gcov command.
==> Scanning for reports
    + /Users/trentm/el/apm-nodejs-http-client2/coverage.lcov
==> Uploading reports
/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/agents.js:30
    var isHttp = uri.startsWith('http://');
                     ^

TypeError: Cannot read property 'startsWith' of undefined
    at Object.getAgent (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/agents.js:30:22)
    at requestToFetchOptions (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:86:30)
    at teenyRequest (/Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:143:14)
    at /Users/trentm/el/apm-nodejs-http-client2/node_modules/codecov/lib/codecov.js:215:9
    at /Users/trentm/el/apm-nodejs-http-client2/node_modules/teeny-request/build/src/index.js:233:13
    at processTicksAndRejections (node:internal/process/task_queues:93:5)
npm ERR! code 1
npm ERR! path /Users/trentm/el/apm-nodejs-http-client2
npm ERR! command failed
npm ERR! command sh -c nyc report --reporter=text-lcov > coverage.lcov && codecov

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/trentm/.npm/_logs/2021-03-29T17_15_05_970Z-debug.log

Presumably because in node v15 the new default is to throw on unhandled rejections (--unhandled-rejections=throw).

This is codecov/codecov-node#284

Truncate Mutates Context Data

When using Elastic APM, any data being logged is mutated during the filter execution of truncate. This occurred when I tried to use APM with AWS Lambda. The result of the Lambda function is attached to the context in https://github.com/elastic/apm-agent-nodejs/blob/abb4f1af4a39e59415bf8977608d928e29e11c72/lib/lambda.js#L45, and this client mutates the output object, trimming all headers and the body to 1024 bytes. This results in an invalid AWS Lambda response.

Node 11 support

Does this library not work with Node 11? I get a yarn error when trying to install elastic-apm-node 3.3.0 with yarn when using Node 11.14.0, but I would expect the library to work under Node 11 if it works with Node 10 and 12:

sensorium:bagel yarn add elastic-apm-node
yarn add v1.15.2
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
error [email protected]: The engine "node" is incompatible with this module. Expected version "^8.6.0 || 10 || >=12". Got "11.14.0"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
sensorium:bagel node -v
v11.14.0

test failure with node v15.5.0

$ npm test
...
# polling
ok 1175 should make a GET request
ok 1176 should send request to /config/v1/agents
ok 1177 should encode query in query params
ok 1178 should add secret token
ok 1179 should add proper User-Agent
ok 1180 should not have If-None-Match header
ok 1181 should make a GET request
ok 1182 should send request to /config/v1/agents
ok 1183 should encode query in query params
ok 1184 should add secret token
ok 1185 should add proper User-Agent
ok 1186 should not have If-None-Match header
ok 1187 should make a GET request
ok 1188 should send request to /config/v1/agents
ok 1189 should encode query in query params
ok 1190 should add secret token
ok 1191 should add proper User-Agent
ok 1192 should not have If-None-Match header
ok 1193 should make a GET request
ok 1194 should send request to /config/v1/agents
ok 1195 should encode query in query params
ok 1196 should add secret token
ok 1197 should add proper User-Agent
ok 1198 should not have If-None-Match header
ok 1199 should make a GET request
ok 1200 should send request to /config/v1/agents
ok 1201 should encode query in query params
ok 1202 should add secret token
ok 1203 should add proper User-Agent
ok 1204 should not have If-None-Match header
ok 1205 should make a GET request
ok 1206 should send request to /config/v1/agents
ok 1207 should encode query in query params
ok 1208 should add secret token
ok 1209 should add proper User-Agent
ok 1210 should not have If-None-Match header
ok 1211 should emit config after 6th request
ok 1212 should be equivalent
ok 1213 should make a GET request
ok 1214 should send request to /config/v1/agents
ok 1215 should encode query in query params
ok 1216 should add secret token
ok 1217 should add proper User-Agent
ok 1218 should be equal
ok 1219 should make a GET request
ok 1220 should send request to /config/v1/agents
ok 1221 should encode query in query params
ok 1222 should add secret token
ok 1223 should add proper User-Agent
ok 1224 should be equal
not ok 1225 plan != count
  ---
    operator: fail
    expected: 59
    actual:   50
    at: Server.<anonymous> (/Users/trentm/el/travis-bye/apm-nodejs-http-client/test/central-config.js:87:11)
    stack: |-
      Error: plan != count
          at Test.assert [as _assert] (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:228:54)
          at Test.bound [as _assert] (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:80:32)
          at Test.fail (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:322:10)
          at Test.bound [as fail] (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:80:32)
          at Test._end (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:173:14)
          at Test.bound [as _end] (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:80:32)
          at Test.end (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:157:10)
          at Test.bound [as end] (/Users/trentm/el/travis-bye/apm-nodejs-http-client/node_modules/tape/lib/test.js:80:32)
          at Server.<anonymous> (/Users/trentm/el/travis-bye/apm-nodejs-http-client/test/central-config.js:87:11)
          at Server.emit (node:events:376:20)
  ...

Note that this did not fail for me with node v15.2.1, FWIW.

v2 API: How to handle invalid serverUrl

Today, if the client is given an invalid serverUrl that cannot be properly parsed by the url parser, an error will be thrown.

Is this ok behavior, or should we handle it differently?

Change truncate algorithm?

We currently have a pretty sophisticated truncate algorithm that ensures the truncated string isn't above a certain threshold measured in bytes, but it's clever enough not to cut a string in the middle of a unicode character. If the truncation point happens to fall in the middle of a unicode character, it will instead drop that character and truncate just before it. This guarantees that the string is never longer than a given size in bytes.

This, however, differs from how string max lengths are measured by the JSON Schema used by the APM Server, and from how Elasticsearch limits strings. There, the limit is measured in characters, not bytes.

It's my understanding that the other agents also truncate based on characters and not bytes.

So while I think our truncation algorithm is superior, to align with the rest of the stack and with the other agents, we should consider counting chars instead of bytes.
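
For comparison, a sketch of the two approaches (char-based is what the schema and other agents measure; byte-based is roughly what we do today; the function names are illustrative only):

```javascript
// Character-based truncation: what the APM Server JSON Schema measures.
// (Note: String.length counts UTF-16 code units, which matches the schema's
// notion of characters for the common case.)
function truncateChars (s, max) {
  return s.length <= max ? s : s.slice(0, max)
}

// Byte-based truncation that backs up over UTF-8 continuation bytes
// (0b10xxxxxx) so it never splits a multi-byte character.
function truncateBytes (s, maxBytes) {
  const buf = Buffer.from(s, 'utf8')
  if (buf.length <= maxBytes) return s
  let end = maxBytes
  while (end > 0 && (buf[end] & 0xc0) === 0x80) end--
  return buf.toString('utf8', 0, end)
}

console.log(truncateChars('aéé', 2)) // 'aé' (2 chars, but 3 bytes)
console.log(truncateBytes('aéé', 4)) // 'aé' (cutting at byte 4 would split the second 'é')
```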

Blocking Behavior under Benchmarking Load

We've received reports the client can block when both of the following conditions are true.

  1. The instrumented application is under a heavy, artificial/benchmarking load
  2. The APM Server is in a non-responsive state

We've also been able to reproduce this blocking behavior using a heavy autocannon run.

"Blocking" here means the entire application is blocked -- for example, an instrumented Express application will not respond to HTTP requests once this state is reached. The application typically blocks for 5 - 10 seconds before resuming normal operation.

The intent of this issue is to diagnose this problem and collect any data from folks seeing this in their own benchmarking or production systems in order to identify the root cause, as well as identify configurations that may alleviate this issue.

Uncaught Exception: RangeError: Invalid string length

After posting tons of very big span or transaction object, my process crashed by this exception:

Uncaught Exception: RangeError: Invalid string length
    at objs.reduce (./my-project/node_modules/elastic-apm-http-client/index.js:272:19)
    at Array.reduce (<anonymous>)
    at Client._writevCleaned (./my-project/node_modules/elastic-apm-http-client/index.js:271:22)
    at processBatch (./my-project/node_modules/elastic-apm-http-client/index.js:249:12)
    at Client._writev (./my-project/node_modules/elastic-apm-http-client/index.js:267:3)
    at doWrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:405:97)
    at clearBuffer (./my-project/node_modules/readable-stream/lib/_stream_writable.js:501:5)
    at onwrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:451:7)
    at WritableState.onwrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:158:5)
    at afterWrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:465:3)
    at onwrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:457:7)
    at WritableState.onwrite (./my-project/node_modules/readable-stream/lib/_stream_writable.js:158:5)
    at StreamChopper.ondrain [as _ondrain] (./my-project/node_modules/stream-chopper/index.js:59:5)
    at StreamChopper._removeStream (./my-project/node_modules/stream-chopper/index.js:182:45)
    at StreamChopper._endStream (./my-project/node_modules/stream-chopper/index.js:165:8)
    at StreamChopper._maybeEndTransformSteam (./my-project/node_modules/stream-chopper/index.js:137:27)
    at Gzip._stream.on.chunk (./my-project/node_modules/stream-chopper/index.js:94:14)
    at emitOne (events.js:121:20)
    at Gzip.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Gzip.Readable.push (_stream_readable.js:208:10)
    at Gzip.Transform.push (_stream_transform.js:147:32)
    at Zlib.callback (zlib.js:474:14)
