
lynx


A minimalistic Node.js client for StatsD servers. A fork of the original work by sivy.

lynx features:

  • Minimalistic — there is only a minimum of abstraction between you and statsd
  • Streams — You can stream in and out of a lynx connection
  • Re-usable UDP Connections — Keeps UDP connections open for a certain time
  • Errors — Pluggable error handling; by default errors are logged

Quick Start

$ npm install lynx
$ node
> var lynx = require('lynx');
//
// Options in this instantiation include:
//   * `on_error` function to be executed when we have errors
//   * `socket` if you wish to use an existing UDP socket
//   * `scope` to define a prefix for all stats, e.g. with `scope`
//     'product1' and stat 'somestat' the key would actually be
//     'product1.somestat'
//
> var metrics = new lynx('localhost', 8125);
{ host: 'localhost', port: 8125 }
> metrics.increment('node_test.int');
> metrics.decrement('node_test.int');
> metrics.timing('node_test.some_service.task.time', 500); // time in ms
> metrics.gauge('gauge.one', 100);
> metrics.set('set.one', 10);

This is equivalent to:

echo "node_test.int:1|c"  | nc -w 0 -u localhost 8125
echo "node_test.int:-1|c" | nc -w 0 -u localhost 8125
echo "node_test.some_service.task.time:500|ms" | nc -w 0 -u localhost 8125
echo "gauge.one:100|g"    | nc -w 0 -u localhost 8125
echo "set.one:10|s"       | nc -w 0 -u localhost 8125

The protocol is super simple, so feel free to check out the source code to understand how everything works.
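As the nc lines above show, the wire format is just one metric per line. Building such a packet takes nothing more than string concatenation; a minimal sketch (formatStat is a hypothetical helper written for illustration, not part of the lynx API):

```javascript
// StatsD wire format: "key:value|type".
// Types: c (counter), ms (timing), g (gauge), s (set).
function formatStat(key, value, type) {
  return key + ':' + value + '|' + type;
}

formatStat('node_test.int', 1, 'c');  // 'node_test.int:1|c'
formatStat('gauge.one', 100, 'g');    // 'gauge.one:100|g'
formatStat('set.one', 10, 's');       // 'set.one:10|s'
```

lynx wraps exactly this kind of formatting and ships each line as a UDP datagram.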

Advanced

Sampling

If you want to track something that happens really, really frequently, it can overwhelm StatsD with UDP packets. To work around that, use the optional sampling rate for metrics. This will only send packets a certain percentage of time. For very frequent events, this will give you a statistically accurate representation of your data.

Sample rate is an optional parameter to all of the metric API calls. A valid sample rate is between 0.0 and 1.0: a rate of 0.0 never sends any packets, and a rate of 1.0 sends every packet.

In these examples we are sampling at a rate of 0.1, meaning one in ten calls to send a sample will actually be sent to StatsD.

var metrics = new lynx('localhost', 8125);
metrics.increment('node_test.int', 0.1);
metrics.decrement('node_test.int', 0.1);
metrics.timing('node_test.some_service.task.time', 500, 0.1);
metrics.gauge('gauge.one', 100, 0.1);
metrics.set('set.one', 10, 0.1);
var timer2 = metrics.createTimer('node_test.some_service.task2.time', 0.1);
timer2.stop();
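Client-side sampling can be sketched roughly like this (illustrative only, not lynx's exact internals): draw a random number, send only when it falls under the rate, and tag the packet with the rate so StatsD can scale the counts back up.

```javascript
// Decide whether to send; `random` is injected so the logic is testable.
function maybeSample(sampleRate, random) {
  return sampleRate >= 1 || random < sampleRate;
}

// Sampled packets carry a "|@rate" suffix so the server can
// multiply counters by 1/rate to estimate the true total.
function tagSampleRate(packet, sampleRate) {
  return sampleRate < 1 ? packet + '|@' + sampleRate : packet;
}

maybeSample(0.1, Math.random());          // true roughly 1 in 10 times
tagSampleRate('node_test.int:1|c', 0.1);  // 'node_test.int:1|c|@0.1'
```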

Streams

You can stream to lynx:

fs.createReadStream('file.statsd')
  .pipe(new lynx('localhost', port))
  .pipe(fs.createWriteStream('file-fixed.statsd'))
  ;

Feel free to check the stream-test for more info.

Timers

If you wish to measure timing you can use the createTimer() functionality.

var metrics = new lynx('localhost', 8125)
  , timer   = metrics.createTimer('some.interval')
  ;

//
// Should send something like "some.interval:100|ms"
//
setTimeout(function () {
  timer.stop();
}, 100);

Timers use Date's getTime(), which is known for being imprecise at the millisecond level. If this is a problem for you, please submit a pull request and I'll take it.
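If Date's precision bothers you, a timer can be built on process.hrtime.bigint() instead, which has nanosecond resolution; a sketch under that assumption (createHrTimer and formatTiming are hypothetical names, not lynx API):

```javascript
function formatTiming(stat, ms) {
  return stat + ':' + ms + '|ms';
}

// High-resolution timer based on process.hrtime.bigint() (nanoseconds),
// truncated to whole milliseconds for the StatsD packet.
function createHrTimer(stat) {
  const start = process.hrtime.bigint();
  return {
    stop: function () {
      const elapsedNs = process.hrtime.bigint() - start;
      return formatTiming(stat, Number(elapsedNs / 1000000n));
    }
  };
}

const timer = createHrTimer('some.interval');
timer.stop(); // e.g. 'some.interval:100|ms' after ~100ms
```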

Batching

Batching is possible for increment, decrement, and count:

metrics.decrement(['uno', 'two', 'trezentos']);
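Under the hood, a batched call like this typically serializes each stat and joins them with newlines into a single datagram; a rough sketch (not lynx's exact code):

```javascript
// One counter decrement per key, newline-separated in a single packet.
function batchPacket(keys, value, type) {
  return keys.map(function (key) {
    return key + ':' + value + '|' + type;
  }).join('\n');
}

batchPacket(['uno', 'two', 'trezentos'], -1, 'c');
// 'uno:-1|c\ntwo:-1|c\ntrezentos:-1|c'
```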

If you want to mix more than one type of metric in a single packet, you can use send; however, you need to construct the values yourself. An example:

//
// This code is only to exemplify the functionality
//
// As of the current implementation the sample rate is processed per group
// of stats and not per individual stat, meaning either all of them will be
// sent or none will be sent.
//
metrics.send(
  { "foo" : "-1|c"    // count
  , "bar" : "15|g"    // gauge
  , "baz" : "500|ms"  // timing
  , "boaz": "40|s"    // set
  }, 0.1);            // sample rate at `0.1`

Closing your socket

You can close your open socket when you no longer need it by using metrics.close().

Errors

By default errors get logged. If you wish to change this behavior, simply specify an on_error function when instantiating the lynx client.

function on_error(err) {
  console.log(err.message);
}

var connection = new lynx('localhost', 1234, {on_error: on_error});

The source code is minimal; if you want to get familiar with when errors occur, check it out. If you would like to change how errors are handled, send a pull request justifying why and including the alterations you propose.

Tests

Run the tests with npm.

npm test

Meta

       `\.      ,/'
        |\\____//|
        )/_ `' _\(
       ,'/-`__'-\`\
       /. (_><_) ,\
       ` )/`--'\(`'  atc
         `      '

(oo)--',- in caos

Contributors

agnoster, dscape, fictorial, kbourgoin, lloyd, matthewbennink, mindreframer, mojodna, raynos, sivy, tmm1


lynx's Issues

Socket: Not running issue when disconnecting cluster

First off, great module. Unfortunately, we are seeing a high number of the error below when cycling boxes during deployment.

Error: Not running
    at Socket._healthCheck (dgram.js:420:11)
    at Socket.send (dgram.js:272:8)
    at Lynx.send (/usr/local/moonstick/node_modules/lynx/lib/lynx.js:470:10)
    at Lynx.count (/usr/local/moonstick/node_modules/lynx/lib/lynx.js:310:8)
    at Lynx.increment (/usr/local/moonstick/node_modules/lynx/lib/lynx.js:228:8)
    at module.exports (/usr/local/moonstick/lib/middleware/requestStats.js:8:13)

I have put together a simple example to reproduce the error:

var Lynx = require('./lib/lynx');
var cluster = require('cluster');

var metrics = new Lynx('localhost', 2056);

if (cluster.isMaster) {
    for (var i = 0; i < 1; i++) {
        cluster.fork();
    }

    cluster.on('exit', function (worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });

    cluster.on('online', function (worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' online');
        cluster.disconnect();
    });
} else {
    metrics.increment('node_test.int', 1);
}

We could add a custom error handler and filter these out, but I believe it is an issue with Lynx, as googling the stack trace shows similar issues in other modules that use dgram.

Allow a callback to be passed in to call a function when the message has been sent?

I would like to implement metrics in a child process and exit the process when it is complete.

timer.stop()
process.exit(0)

But what happens is the process exits before the message is sent. I can see a noop is being passed in as the callback parameter:

socket.send(buffer, 0, buffer.length, this.port, this.host, noop);

It would be great to be able to pass a callback into the .stop() function call so my code would be:

timer.stop(function (err) {
  process.exit(0)
})

Would you accept a pull request for this?

Lynx send overload UDP packets

The maximum UDP payload length on a typical 1500-byte MTU is 1472 bytes.
This happened when using the send API to send a large batch of metrics.

Is it ok if I submit a PR?
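One possible fix is to split the serialized stats across multiple datagrams so each stays under the payload limit; a sketch assuming a 1472-byte budget (chunkStats is a hypothetical helper, not current lynx behavior):

```javascript
// Greedily pack newline-separated stat lines into packets whose
// byte length stays at or below maxBytes.
function chunkStats(lines, maxBytes) {
  var packets = [];
  var current = '';
  lines.forEach(function (line) {
    var candidate = current ? current + '\n' + line : line;
    if (Buffer.byteLength(candidate) > maxBytes && current) {
      packets.push(current); // flush the full packet
      current = line;
    } else {
      current = candidate;
    }
  });
  if (current) packets.push(current);
  return packets;
}
```

Each returned string would then be sent as its own UDP datagram.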

Missing sample rate documentation

Most (all?) of the metrics support specifying a sample rate but this option is not documented other than through the 'send' API call.

"Nothing to send" spam

Why is it considered an error if there are no stats to send? If you're sending metrics with a sample rate lower than 1.0, you end up with the message "Nothing to send" spamming console.log. Sure, I could set on_error to a no-op, or manually filter out this error, but it seems extraneous. Can we remove it?
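The manual filter mentioned above could look like this (shouldIgnore is a hypothetical helper written for illustration):

```javascript
// Drop only the "Nothing to send" message; log everything else.
function shouldIgnore(err) {
  return Boolean(err) && err.message === 'Nothing to send';
}

function onError(err) {
  if (shouldIgnore(err)) return;
  console.error(err.message);
}

// var metrics = new lynx('localhost', 8125, { on_error: onError });
```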

[Feature request] Optional callback on send() method

Feature request: a callback on the send() method and the other metric-reporting methods.
Reason: error handling on UDP messages.

I know UDP calls are "fire-and-forget", which matches the Node.js asynchronous nature well. But I think lynx misses the common Node.js async function pattern, which is:

asyncFunction(function callback(err, result) {
  if (err) {
    // handle error ...
  } else {
    // process results
  }
});

This, applied to lynx, would look like:

const Lynx = require('lynx');
const metrics = new Lynx('localhost', 8125);

metrics.increment('node_test.int', function(err) { // this new callback here
  if (err) {
    console.error(err);
  }
});

And this can be achieved by replacing the noop callback of the UDP send() method by the user-defined callback.

I can submit a PR if necessary.

Doesn't work with cluster

Hello guys,
Very strange situation: I send timers from workers, but after a few timers (which statsd successfully receives) the master goes down with this error:

assert.js:98
  throw new assert.AssertionError({
        ^
AssertionError: false == true
    at SharedHandle.add (cluster.js:77:3)
    at queryServer (cluster.js:402:12)
    at Worker.onmessage (cluster.js:360:7)
    at ChildProcess.<anonymous> (cluster.js:611:8)
    at ChildProcess.EventEmitter.emit (events.js:123:20)
    at handleMessage (child_process.js:318:10)
    at Pipe.channel.onread (child_process.js:346:11)

Some stats not being delivered during load test

I'm locally running a web service that sends a timing metric (for each incoming request) to statsd (using this lynx client library), which then sends the metrics to carbon relay, which then routes the metrics to influx. I also have a grafana dashboard that pulls from influx and displays the number of requests/second based on the metrics described above. When load testing with under 200 requests/second, the graph I built in grafana accurately shows the number of requests/second. However, when load testing with around 350 requests/second, the grafana graph indicates that some metrics are missing from influx.

Does this client deliver all metrics under load?

Support readable and writable streams

It should be possible to do something like:

request.get("http://somedatasource.com/valid_statsd")
  .pipe(new lynx("localhost", 1234))
  .pipe(fs.createWriteStream("foo.txt"))
  ;
