
node-fast-ratelimit


Fast and efficient in-memory rate-limit, used to alleviate most common DOS attacks.

This rate-limiter was designed to be as generic as possible, usable in any Node.js project, whether you're using a framework or just vanilla code.

Rate-limit lists are stored in a native hashtable, which keeps them outside of the V8 heap and avoids garbage-collection pressure from collecting lost references. The hashtable native module is used for that purpose.

Who uses it?

Crisp Doctrine Anchor.Chat

👋 You use fast-ratelimit and you want to be listed there? Contact me.

How to install?

Include fast-ratelimit in your package.json dependencies.

Alternatively, you can run npm install fast-ratelimit --save.

Compilation note: make sure a C++11 compiler is available (GCC 4.9+ works). node-gyp needs it to build the hashtable dependency that fast-ratelimit relies on.

Windows users: you may have to install windows-build-tools globally (npm install -g windows-build-tools) to be able to compile.

How to use?

The fast-ratelimit API is pretty simple. Here are the keywords used in the docs:

  • ratelimiter: a ratelimiter instance, which acts as the storage for limits
  • namespace: the key under which limits are tracked (e.g. set the namespace to the client IP, or to the username)

You can create as many ratelimiter instances as you need in your application. This is useful if you need to rate-limit IPs on specific zones (e.g. in a chat application, you don't want the message send rate limit to affect the message composing notification rate limit).

Here's how to proceed (we take the example of rate-limiting message sending in a chat app):

1. Create the rate-limiter

The rate-limiter can be instantiated as such:

var FastRateLimit = require("fast-ratelimit").FastRateLimit;

var messageLimiter = new FastRateLimit({
  threshold : 20, // available tokens over timespan
  ttl       : 60  // time-to-live value of token bucket (in seconds)
});

This limiter allows 20 messages to be sent every minute per namespace. A user can send at most 20 messages within a 1-minute timespan, and the token counter is reset every minute for a given namespace.

The reset scheduling is done per namespace; e.g. if namespace user_1 sends 1 message at 11:00:32am, it has 19 messages remaining from 11:00:32am to 11:01:32am. Its limiter then resets at 11:01:32am, and won't schedule any more resets until another token is consumed.
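As noted above, you can run several independent limiters side by side; for instance, here is a separate limiter for the "user is composing" notifications from the chat example (the threshold and ttl values below are purely illustrative):

var composingLimiter = new FastRateLimit({
  threshold : 40, // illustrative: composing notifications fire more often than messages
  ttl       : 60
});

Consuming a token on composingLimiter never touches the counters of messageLimiter, since each instance keeps its own storage.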

2. Check by consuming a token

In the message send portion of our application code, we would add a call to the ratelimiter instance.

2.1. Consume token with asynchronous API (Promise catch/reject)

// This would be dynamic in your application, based on user session data or user IP
var namespace = "user_1";

// Check if user is allowed to send message
messageLimiter.consume(namespace)
  .then(() => {
    // Consumed a token
    // Send message
    message.send();
  })
  .catch(() => {
    // No more tokens for namespace in current timespan
    // Silently discard message
  });

2.2. Consume token with synchronous API (boolean test)

// This would be dynamic in your application, based on user session data or user IP
var namespace = "user_1";

// Check if user is allowed to send message
if (messageLimiter.consumeSync(namespace) === true) {
  // Consumed a token
  // Send message
  message.send();
} else {
  // consumeSync returned false since there are no more tokens available
  // Silently discard message
}
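Because the synchronous API is just a boolean check, it also drops easily into framework middleware. Here is a minimal sketch of an Express middleware keyed on the client IP; Express is not a dependency of fast-ratelimit, and the limiter values are purely illustrative:

var express = require("express");
var FastRateLimit = require("fast-ratelimit").FastRateLimit;

var app = express();

// Illustrative values: 100 requests per 60 seconds, per client IP
var requestLimiter = new FastRateLimit({
  threshold : 100,
  ttl       : 60
});

app.use(function(request, response, next) {
  if (requestLimiter.consumeSync(request.ip) === true) {
    // Consumed a token, let the request through
    return next();
  }

  // No more tokens for this IP in the current timespan
  response.status(429).send("Too Many Requests");
});

Note that behind a reverse proxy you would typically enable Express's trust proxy setting, so that request.ip reflects the real client address rather than the proxy's.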

3. Check without consuming a token

In some instances, such as password brute-force prevention, you may want to check the limit without consuming a token, and only consume one when password validation fails.

3.1. Check whether there are remaining tokens with asynchronous API (Promise catch/reject)

limiter.hasToken(request.ip)
  .then(() => {
    // At least one token is left for this IP; validate the credentials
    return authenticate(request.login, request.password);
  })
  .then(
    () => {
      // User is authenticated
    },

    () => {
      // User is not authenticated
      // Consume a token and reject the promise
      return limiter.consume(request.ip)
        .then(() => Promise.reject());
    }
  )
  .catch(() => {
    // Either invalid authentication or too many invalid logins
    return response.unauthorized();
  });
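If your codebase uses async/await, the same flow can be written more linearly. This is only a sketch under the same assumptions as above (authenticate and response.unauthorized are hypothetical application helpers, not part of fast-ratelimit):

async function handleLogin(request, response) {
  try {
    // Rejects when the IP has no tokens left (too many failed attempts)
    await limiter.hasToken(request.ip);
  } catch (error) {
    return response.unauthorized();
  }

  try {
    await authenticate(request.login, request.password);
  } catch (error) {
    // Authentication failed: consume a token so repeated failures get rate-limited
    await limiter.consume(request.ip).catch(() => {});

    return response.unauthorized();
  }

  // User is authenticated
}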

3.2. Check whether there are remaining tokens with synchronous API (boolean test)

if (!limiter.hasTokenSync(request.ip)) {
  throw new Error("Too many invalid logins");
}

const is_authenticated = authenticateSync(request.login, request.password);

if (!is_authenticated) {
  limiter.consumeSync(request.ip);

  throw new Error("Invalid login/password");
}

Notes on performance

This module is used extensively on edge WebSocket servers, handling thousands of connections every second with multiple rate-limit lists layered on top of each other. Everything works smoothly, I/O doesn't block, and memory usage barely moves with the rate-limiting module enabled.

On one core / thread of a 2.5 GHz Intel Core i7, the parallel asynchronous processing of 40,000 namespaces in the same limiter takes an average of 300 ms, which is fine (7.5 microseconds per operation).
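Here is a rough sketch of how such a measurement could be reproduced; the figures will of course vary with hardware and Node.js version:

var FastRateLimit = require("fast-ratelimit").FastRateLimit;

var limiter = new FastRateLimit({
  threshold : 20,
  ttl       : 60
});

var namespaces = [];

for (var i = 0; i < 40000; i++) {
  namespaces.push("user_" + i);
}

var timeStart = Date.now();

Promise.all(
  namespaces.map((namespace) => {
    // Out-of-token rejections are ignored; we only measure processing time
    return limiter.consume(namespace).catch(() => {});
  })
).then(() => {
  console.log(
    "Processed " + namespaces.length + " namespaces in " +
      (Date.now() - timeStart) + " ms"
  );
});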

Why not using existing similar modules?

I was looking for an efficient, yet simple, DOS-prevention technique that wouldn't hurt performance or consume tons of memory. All the solid modules I found rely on Redis as the key store for limits, which is definitely not great if you want to protect against DOS attacks: under DOS conditions such a module would in turn DOS Redis, since 1 (or more) Redis queries are made per limit check (1 attacker request = 1 limit check). Attacks should definitely not be alleviated this way, although a Redis-based solution is perfectly suited to limiting abusive users.

This module keeps all limits in-memory, which is much better for our attack-prevention concern. The only downside: since the limits database isn't shared, limits are per-process. This means that you should only use this module to prevent hard attacks at any level of your infrastructure. This works pretty well for micro-service infrastructures, which is where we use it.

