
fastify-rate-limit's Introduction




An efficient server implies lower infrastructure costs, better responsiveness under load, and happy users. How can you efficiently handle your server's resources, serving the highest number of requests possible, without sacrificing security validations and a pleasant development experience?

Enter Fastify. Fastify is a web framework highly focused on providing the best developer experience with the least overhead and a powerful plugin architecture. It is inspired by Hapi and Express and as far as we know, it is one of the fastest web frameworks in town.

The main branch refers to the Fastify v4 release. Check out the v3.x branch for v3.

Quick start

Create a folder and make it your current working directory:

mkdir my-app
cd my-app

Generate a fastify project with npm init:

npm init fastify

Install dependencies:

npm i

To start the app in dev mode:

npm run dev

For production mode:

npm start

Under the hood npm init downloads and runs Fastify Create, which in turn uses the generate functionality of Fastify CLI.

Install

To install Fastify in an existing project as a dependency:

Install with npm:

npm i fastify

Install with yarn:

yarn add fastify

Example

// Require the framework and instantiate it

// ESM
import Fastify from 'fastify'
const fastify = Fastify({
  logger: true
})
// CommonJs
const fastify = require('fastify')({
  logger: true
})

// Declare a route
fastify.get('/', (request, reply) => {
  reply.send({ hello: 'world' })
})

// Run the server!
fastify.listen({ port: 3000 }, (err, address) => {
  if (err) throw err
  // Server is now listening on ${address}
})

with async-await:

// ESM
import Fastify from 'fastify'
const fastify = Fastify({
  logger: true
})
// CommonJs
const fastify = require('fastify')({
  logger: true
})

fastify.get('/', async (request, reply) => {
  reply.type('application/json').code(200)
  return { hello: 'world' }
})

fastify.listen({ port: 3000 }, (err, address) => {
  if (err) throw err
  // Server is now listening on ${address}
})

Do you want to know more? Head to the Getting Started.

Note

.listen binds to the local host, localhost, interface by default (127.0.0.1 or ::1, depending on the operating system configuration). If you are running Fastify in a container (Docker, GCP, etc.), you may need to bind to 0.0.0.0. Be careful when deciding to listen on all interfaces; it comes with inherent security risks. See the documentation for more information.
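As a sketch of that distinction (the `listenOptions` helper and the `DOCKER` environment flag are illustrative, not part of Fastify's API):

```javascript
// Hypothetical helper: pick the listen host explicitly, so binding to all
// interfaces is an opt-in decision rather than a default.
function listenOptions (port, inContainer) {
  return inContainer
    ? { port, host: '0.0.0.0' } // all interfaces: needed in Docker, riskier
    : { port }                  // Fastify's default: localhost only
}

// Usage with Fastify (sketch):
// fastify.listen(listenOptions(3000, process.env.DOCKER === '1'), (err) => {
//   if (err) throw err
// })
```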

Core features

  • Highly performant: as far as we know, Fastify is one of the fastest web frameworks in town; depending on code complexity it can serve up to 76+ thousand requests per second.
  • Extensible: Fastify is fully extensible via its hooks, plugins, and decorators.
  • Schema based: even though it is not mandatory, we recommend using JSON Schema to validate your routes and serialize your outputs; internally, Fastify compiles the schema into a highly performant function.
  • Logging: logs are extremely important but costly; we chose the best logger to almost remove this cost: Pino!
  • Developer friendly: the framework is built to be very expressive and to help developers in their daily use, without sacrificing performance and security.

Benchmarks

Machine: EX41S-SSD, Intel Core i7, 4 GHz, 64 GB RAM, 4C/8T, SSD.

Method: autocannon -c 100 -d 40 -p 10 localhost:3000, run twice, taking the second average.

Framework    Version  Requests/sec
Express      4.17.3   14,200
hapi         20.2.1   42,284
Restify      8.6.1    50,363
Koa          2.13.0   54,272
Fastify      4.0.0    77,193
-
http.Server  16.14.2  74,513

Benchmarks taken using https://github.com/fastify/benchmarks. This is a synthetic, "hello world" benchmark that aims to evaluate the framework overhead. The overhead that each framework adds depends on your application; you should always benchmark if performance matters to you.

Documentation

Chinese documentation (中文文档)

Ecosystem

  • Core - Core plugins maintained by the Fastify team.
  • Community - Community supported plugins.
  • Live Examples - Multirepo with a broad set of real working examples.
  • Discord - Join our discord server and chat with the maintainers.

Support

Please visit Fastify help to view prior support issues and to ask new support questions.

Contributing

Whether reporting bugs, discussing improvements and new ideas or writing code, we welcome contributions from anyone and everyone. Please read the CONTRIBUTING guidelines before submitting pull requests.

Team

Fastify is the result of the work of a great community. Team members are listed in alphabetical order.

Lead Maintainers:

Fastify Core team

Fastify Plugins team

Great Contributors

Great contributors on a specific area in the Fastify ecosystem will be invited to join this group by Lead Maintainers.

Past Collaborators

Hosted by

We are an At-Large project in the OpenJS Foundation.

Sponsors

Support this project by becoming a SPONSOR! Fastify has an Open Collective page where we accept and manage financial contributions.

Acknowledgements

This project is kindly sponsored by:

Past Sponsors:

This list includes all companies that support one or more of the team members in the maintenance of this project.

License

Licensed under MIT.

For your convenience, here is a list of all the licenses of our production dependencies:

  • MIT
  • ISC
  • BSD-3-Clause
  • BSD-2-Clause

fastify-rate-limit's People

Contributors

bodinsamuel, cemremengu, climba03003, delvedor, dependabot-preview[bot], dependabot[bot], diogomarques2003, eomm, fdawgs, fox1t, frikille, gendronb, github-actions[bot], greenkeeper[bot], gurgunday, jsumners, kibertoad, leny32, leomp12, leonitousconforti, lknsi, mcollina, nherment, pc-jedi, rluvaton, salmanm, samstiles, saniol, svjard, uzlopak


fastify-rate-limit's Issues

Does redis store do atomic increments?

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the issue has not already been raised

Issue

Does the Redis store use atomic increments? That is, are all operations in memory, or does a distributed setup use atomic increments to guard against race conditions?

PR #123 is preventing use cases of multiple rate limiter usage

💥 Regression Report

Since PR #123 it is no longer possible to register two rate limiters, e.g. for a use case where a global rate limiter should be applied plus a second, stricter limiter on some specific routes, like this (NestJS registration of the rate limiter):

    // rate limiter for every route except /auth/password
    await app.register(fastifyRateLimit, {
      max: 150,
      timeWindow: 1000 * 60 * 5, // 150 req/5min
      redis: new Redis({ ...configService.redisConnectionData, keyPrefix: 'ratelimit:' }),
      whitelist: (req) => {
        return req.url === '/auth/password';
      },
      errorResponseBuilder: rateLimitErrorResponse,
    });

    // rate limiter for /auth/password route only
    await app.register(fastifyRateLimit, {
      max: 15,
      timeWindow: 1000 * 60 * 5, // 15 req/5min
      redis: new Redis({ ...configService.redisConnectionData, keyPrefix: 'ratelimit:auth:' }),
      whitelist: (req) => {
        return req.url !== '/auth/password';
      },
      errorResponseBuilder: rateLimitErrorResponse,
    });

Last working version

Worked up to version: 5.0.0

Stopped working in version: 5.0.1

To Reproduce

Steps to reproduce the behavior:

Register two rate limiters; the second one registered does not work.

Expected behavior

Multiple rate limiters should be usable for use cases where stricter rate limiting is needed on some routes.

Your Environment

  • node version: 14
  • fastify version: >3.0.0
  • os: WSL2

skipOnError does nothing

Hi everybody,

I just wanted to know how the skipOnError flag works. I expected that with this flag I could skip the rate limit if the Redis connection is lost. Is this wrong? The behaviour I see is that my endpoint takes a long time to answer when the connection to Redis is lost.

Thanks in advance.
Markus

Add .npmignore to reduce size of npm package

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the issue has not already been raised

Issue

The package contains unnecessary files. I recommend creating a .npmignore with the following content:

.github
example
test
.gitattributes
.taprc
README.md

This makes the final package around 15 kB instead of 115 kB.
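An alternative with the same effect is an explicit allow-list via the `files` field in package.json, which npm also honours. The file names below are an assumption about this package's layout; adjust them to the actual files shipped:

```json
{
  "files": [
    "index.js",
    "index.d.ts"
  ]
}
```

With an allow-list, newly added repository files stay out of the published package by default, instead of having to be added to .npmignore one by one.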

Cannot find module 'ioredis' during TS compile

Hi, I just updated to 2.1.1 and got this error during TS compilation.
It happens because ioredis is in devDependencies while you are using it in index.d.ts, which is just a typings file.
Let me know if you need help.

An in-range update of knex is breaking the build 🚨

The devDependency knex was updated from 0.20.4 to 0.20.6.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

knex is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 8 commits.

  • 4feefdf Enforce Unix (lf) line terminators (#3598)
  • a2a6660 Prepare 0.20.5 release
  • 3914bf5 Fix colors in debug logs (#3592)
  • 9b37c94 Return more information about empty updates (#3597)
  • c277edb Use more efficient algorithm for generating internal ids (#3595) (#3596)
  • a613fe2 Fix some spelling mistakes (#3572)
  • dcbe555 The project location has moved to knex/knex (#3573)
  • d5773f8 Use Buffer.alloc() instead of deprecated () (#3574)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

IP whitelist

What about adding an array with ips that are whiteslisted? I'm thinking about scenarios where there are some server to server calls that are under control of the deployer.

No rate limiting on not-found/undefined routes

🐛 Bug Report

Even if a client (based on its IP) has made too many requests in the given timeWindow, the server does not respond with 429 on not-found routes. This leaves the application vulnerable to attacks.

To Reproduce

Steps to reproduce the behavior:

  1. JS
const fastify = require('fastify')()

fastify.register(require('fastify-rate-limit'), {
  max: 2,
  timeWindow: '1 minute'
})

fastify.get('/', (req, reply) => {
  reply.send({ hello: 'world' })
})

fastify.listen(3000, err => {
  if (err) throw err
  console.log('Server listening at http://localhost:3000')
})
  2. Curl
 curl 'http://localhost:3000/foo'

Expected behavior

It should give a 429 "Too Many Requests" response, but instead gives this:

{ 
    "statusCode":404,
    "error":"Not Found",
    "message":"Not Found"
}

Feature: abuse usage

🚀 Feature Proposal

Add an abuse status that replies 403 Forbidden when the 429 response has been sent too many times.

Motivation

This would let you ban users that call the API without handling the 429 response.

Example

const banList = []
app.register(fastifyRateLimit, {
  abuse: 10, // how many 429 responses the plugin may send; it is reset when the rate limit resets
});

Headers are added even when set to false

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure it has not already been reported

Fastify version

3.19.1

Plugin version

5.5.0

Node.js version

14.15.1

Operating system

Windows

Operating system version (i.e. 20.04, 11.3, 10)

10

Description

Headers like 'x-ratelimit-limit' are always added to the response when the limit has not yet been exceeded, as seen here:

fastify-rate-limit/index.js

Lines 206 to 209 in b8bd789

if (current <= maximum) {
  res.header(params.labels.rateLimit, maximum)
    .header(params.labels.rateRemaining, maximum - current)
    .header(params.labels.rateReset, timeLeft)

Steps to Reproduce

  1. Implement fastify-rate-limit
  2. Set low 'max' and high 'timeWindow' values
  3. Set specific headers to 'false'
  4. Request a route

Expected Behavior

If set to false, those headers should never be added, whether or not the limit has been exceeded. Or there should be an option to enforce this behaviour.

An in-range update of fastify-plugin is breaking the build 🚨

The dependency fastify-plugin was updated from 1.6.0 to 1.6.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

fastify-plugin is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v1.6.1

📚 PR:

  • chore(package): update standard to version 13.0.1
  • chore(package): update standard to version 14.0.0
  • fix linting errors for standard@14
  • Updated deps
  • Update standard to the latest version 🚀 (#74)
  • Error name (#84)
Commits

The new version differs by 7 commits.

  • b9a7ede Bumped v1.6.1
  • df58357 Updated deps
  • 03cdeee Error name (#84)
  • 7d78d57 fix linting errors for standard@14
  • 3686925 chore(package): update standard to version 14.0.0
  • 3d3ca8c Update standard to the latest version 🚀 (#74)
  • 5204636 chore(package): update standard to version 13.0.1

See the full diff

Your Greenkeeper Bot 🌴

An in-range update of tiny-lru is breaking the build 🚨

The dependency tiny-lru was updated from 7.0.1 to 7.0.2.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

tiny-lru is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 11 commits.

  • 0cc846c Updating changelog script to include merges, generating new CHANGELOG
  • 1cf5b1d Version bump to release new type definition
  • c3d4a46 Merge pull request #33 from fox1t/master
  • 8797da5 Adds factory function to typings
  • 521bb55 Merge pull request #32 from avoidwork/revert-31-ts-port
  • 06dc95a Revert "fixes #30: auto generate type definitions from source"
  • e0ff63c Merge pull request #31 from osdevisnot/ts-port
  • 0386635 update dev dependencies
  • 6321fa8 auto generate type definitions
  • 0704851 Merge pull request #29 from avoidwork/avoidwork-patch-1
  • 26db9ab Create FUNDING.yml

See the full diff

Your Greenkeeper Bot 🌴

rename whitelist to allowedClient

🚀 Feature Proposal

Rename the whitelist option to allowedClient
This will be a breaking change.

Motivation

This wording is more inclusive.

Redis `fastify-rate-limit-*` key is deleted automatically after 1 minute despite timeWindow being set to 5 minutes

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure it has not already been reported

Fastify version

3.14.2

Plugin version

5.6.0

Node.js version

14.16.0

Operating system

macOS

Operating system version (i.e. 20.04, 11.3, 10)

11.4

Description

The Redis fastify-rate-limit-* key is deleted automatically after 1 minute even if the timeWindow is set to '5 minutes'. I've tried different timeWindows, in both string and number notation.
I do not have a TTL set anywhere for the keys in Redis itself, so AFAIK they should be stored until the timeWindow reaches its limit.
The issue occurs whether I'm connected to Redis on GCP or locally; something seems to be dropping the key generated by keyGenerator, but I really have no clue what that might be.
Could anyone help me with that?
It seems like a bug, but I am quite new in the field, so it's possible it's a rookie mistake made somewhere.

Steps to Reproduce

A. Register FastifyRateLimit on the global server configuration along with local Redis connection. Set global to false, so you're able to create custom keyGenerator functions for each route:

server.register(FastifyRateLimit, {
    global: false,
    redis: new Redis({ host: '127.0.0.1', port: 6379})
... })

B. Mount the rateLimit function on one of your routes (in my case a POST one). Return a hardcoded 45 for easier reproduction, then check whether a new key fastify-rate-limit-45 appeared, via: get fastify-rate-limit-45

preHandler: [
  fastify.rateLimit({
    max: 1,
    timeWindow: '5 minutes',
    keyGenerator: function (request) {
      return 45
    }
  })
]

C. Spin up the local env and hit the endpoint twice to trigger the rate-limit block
D. Wait for 1 minute
E. Hit the same endpoint again

Actual result:
-> The hit is not blocked, although it should be, since the timeWindow is set to 5 minutes
-> When executing get fastify-rate-limit-45 in telnet you'll get a -1 response: the key has been deleted

Expected Behavior

  • the hit is blocked while the timeWindow is not over
  • the key is not dumped after 1 minute

An in-range update of tiny-lru is breaking the build 🚨

The dependency tiny-lru was updated from 3.0.5 to 3.0.6.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

tiny-lru is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 3 commits.

  • d646940 Version bump
  • d0cdcfe Moving reset() into lexical scope & calling from constructor() & clear(), fixing / simplifying remove(), fixes #7
  • cd1d926 Updating test to validate there is only 1 null next & previous within cache items

See the full diff

Your Greenkeeper Bot 🌴

Wrong type for max function in RateLimitOptions

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure it has not already been reported

Fastify version

3.19.1

Plugin version

5.5.0

Node.js version

16.6.2

Operating system

macOS

Operating system version (i.e. 20.04, 11.3, 10)

Big Sur 11.4

Description

The docs page states that the max function can be an async function with the signature async (req, key) => {} where req is the Fastify request object and key is the value generated by the keyGenerator. The function must return a number.

However, TypeScript complains that I have the wrong type when I use an async function, because Promise<number> is not assignable to number.

Basically, I think the type definition for the max function needs to be changed from:

number | ((req: FastifyRequest, key: string) => number)

to:

number | ((req: FastifyRequest, key: string) => number) | ((req: FastifyRequest, key: string) => Promise<number>)

Steps to Reproduce

// Declare a config object
const config: RateLimitPluginOptions = {
    ban: 10,
    timeWindow: '2 minute',
    store: new Redis(process.env.REDIS_URL, { connectTimeout: 500, maxRetriesPerRequest: 1 }) as any,
    keyGenerator: function (req) {
        return req.headers['authorization'] || req.ip;
    },

    // This is where the error is
    async max(request, key) {
        const apiKeyObject = await ApiKey.findOne({ where: { apiKey: key } });
        if (!apiKeyObject) {
            return 10;
        }

        return apiKeyObject.rateLimit;
    }
};

// Register the plugin
app.register(fastifyRateLimit, config);

The exact error is:

Type '(request: FastifyRequest<RouteGenericInterface, Server, IncomingMessage>, key: string) => Promise<number>' is not assignable to type '(req: FastifyRequest<RouteGenericInterface, Server, IncomingMessage>, key: string) => number'.
    Type 'Promise<number>' is not assignable to type 'number'.

Expected Behavior

No response

An in-range update of fastify is breaking the build 🚨

The devDependency fastify was updated from 2.11.0 to 2.12.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

fastify is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes for v2.12.0

📚 PR:

  • fix: skip serialization for json string (#1937)
  • Added fastify-casl to Community Plugins (#1977)
  • feature: added validation context to validation result (#1915)
  • ESM support (#1984)
  • fix: adjust hooks body null (#1991)
  • Added mcollina's plugin "fastify-secure-session" (#1999)
  • Add fastify-typeorm-plugin to community plugins (#1998)
  • Remove Azure Pipelines (#1985)
  • Update validation docs (#1994)
  • Drop Windows-latest and Node 6 from CI as its failing. (#2002)
  • doc: fix esm-support anchor (#2001)
  • Docs(Fluent-Schema.md): fix fluent schema repo link (#2007)
  • fix - docs - hooks - error handling (#2000)
  • add fastify-explorer to ecosystem (#2003)
  • Add a recommendations doc (#1997)
  • Fix TOC typo in recommendations (#2009)
  • docs(Typescript): typo (#2016)
  • docs: fix onResponse parameter (#2020)
  • Update template bug.md (#2025)
  • fix replace way enum (#2026)
  • docs: update inject features (#2029)
  • Update Copyright Year to 2020 (#2031)
  • add generic to typescript Reply.send payload (#2032)
  • Shorten longest line (docs) (#2035)
  • docs: OpenJS CoC (#2033)
  • Workaround for using one schema for multiple routes (#2044)
  • docs: inject chainable methods (#1917) (#2043)
  • http2: handle graceful close (#2050)
  • chore(package): update fluent-schema to version 0.10.0 (#2057)
  • chore(package): update yup to version 0.28.1 (#2059)
  • Update README.md (#2064)
  • Added fastify-response-validation to ecosystem (#2063)
  • fix: use opts of onRoute hook (#2060)
  • Fixed documentation typo (#2067)
  • Add missing TS definition for ServerOptions.genReqId function arg (#2076)
  • fix: throw hooks promise rejection (#2070) (#2074)
  • Add docs to stop processing hooks with async (#2079)
Commits

The new version differs by 38 commits.

  • 7a37924 Bumped v2.12.0
  • aacefcd Add docs to stop processing hooks with async (#2079)
  • c052c21 fix: throw hooks promise rejection (#2070) (#2074)
  • 6b39870 Add missing TS definition for ServerOptions.genReqId function arg (#2076)
  • 7fa4bdd Fixed documentation typo (#2067)
  • 6b73e0a fix: use opts of onRoute hook (#2060)
  • 7bb9733 Added fastify-response-validation to ecosystem (#2063)
  • 0a1c1f0 Update README.md (#2064)
  • bd9f608 chore(package): update yup to version 0.28.1 (#2059)
  • e19d078 chore(package): update fluent-schema to version 0.10.0 (#2057)
  • d0c976e http2: handle graceful close (#2050)
  • af8a6ac docs: inject chainable methods (#1917) (#2043)
  • 62f21b1 Workaround for using one schema for multiple routes (#2044)
  • 5258f42 docs: OpenJS CoC (#2033)
  • ac46905 Shorten longest line (docs) (#2035)

There are 38 commits in total.

See the full diff

Your Greenkeeper Bot 🌴

Broken Example of Custom Store with Knex

I am fairly sure that the custom store example in /example/example-knex.js does not work. At a minimum there is a function-scope 'this' error, as well as errors in the logic for the inserts and updates. There should also be a read lock on the row before the updates are attempted. The /example-sequelize.js example seems to be okay; the logic is correct, although I am not sure whether a similar transaction is required.

Below is a working example for Knex.js and MySQL.

I'd be glad to submit a PR for /example/example-knex-mysql.js

'use strict'

// Example of a custom store using Knex.js and MySQL
//
// Note that the rate check should place a read lock on the row.
// For MySQL see:
// https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
// https://blog.nodeswat.com/concurrency-mysql-and-node-js-a-journey-of-discovery-31281e53572e
//
// Below is an example table to store rate limits that must be created
// in the database first.
// // Knex migration
// exports.up = async knex => {
//   await knex.schema.createTable('rate_limits', table => {
//     table.string('source').notNullable()
//     table.string('route').notNullable()
//     table.integer('count').unsigned()
//     table.bigInteger ('ttl')
//     table.primary(['route', 'source'])
//   })
// }
//
// exports.down = async knex => {
//   await knex.schema.dropTable('rate_limits')
// }
// // Above migration will create...
// CREATE TABLE `rate_limits` (
//   `source` varchar(255) NOT NULL,
//   `route` varchar(255) NOT NULL,
//   `count` int unsigned DEFAULT NULL,
//   `ttl` int unsigned DEFAULT NULL,
//   PRIMARY KEY (`route`,`source`)
// ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

// The example assumes `knex` and `fastify` instances exist; create them first:
const fastify = require('fastify')()
const knex = require('knex')({ client: 'mysql2', connection: { /* your connection */ } })

function KnexStore (options) {
  this.options = options
  this.route = ''
}

KnexStore.prototype.routeKey = function (route) {
  if (route) this.route = route
  return route
}

KnexStore.prototype.incr = function (key, cb) {
  const now = (new Date()).getTime()
  const ttl = now + this.options.timeWindow
  const cond = { route: this.route, source: key }
  knex.transaction(function (trx) {
    trx('rate_limits')
      .whereRaw('route = ? AND source = ? FOR UPDATE', [cond.route || '', cond.source]) // Create read lock
      .then(r => {
        const d = r[0]
        if (d && d.ttl > now) {
          trx
            .raw('UPDATE rate_limits SET count = ? WHERE route = ? AND source = ?', [d.count + 1, cond.route, key])
            .then(() => {
              cb(null, { current: d.count + 1, ttl: d.ttl })
            })
            .catch(err => {
              trx.rollback()
              cb(err, {
                current: 0,
              })
            })
        } else {
          trx
            .raw('INSERT INTO rate_limits(route, source, count, ttl) VALUES(?,?,1,?) ON DUPLICATE KEY UPDATE count = 1, ttl = ?', [cond.route, key, (d && d.ttl) || ttl, ttl])
            .then(() => {
              cb(null, {
                current: 1,
                ttl: (d && d.ttl) || ttl,
              })
            })
            .catch(err => {
              trx.rollback()
              cb(err, { current: 0 })
            })
        }
      })
      .then(trx.commit)
      .catch(err => {
        trx.rollback()
        cb(err, { current: 0 })
      })
  })
}

KnexStore.prototype.child = function (routeOptions = {}) {
  const options = { ...this.options, ...routeOptions }
  const store = new KnexStore(options)
  store.routeKey(routeOptions.routeInfo.method + routeOptions.routeInfo.url)
  return store
}

fastify.register(require('../../fastify-rate-limit'),
  {
    global: false,
    max: 10,
    store: KnexStore,
    skipOnError: false,
  }
)

fastify.get('/', {
  config: {
    rateLimit: {
      max: 10,
      timeWindow: '1 minute',
    },
  },
}, (req, reply) => {
  reply.send({ hello: 'from ... root' })
})

fastify.get('/private', {
  config: {
    rateLimit: {
      max: 3,
      timeWindow: '1 minute',
    },
  },
}, (req, reply) => {
  reply.send({ hello: 'from ... private' })
})

fastify.get('/public', (req, reply) => {
  reply.send({ hello: 'from ... public' })
})

fastify.listen(3000, err => {
  if (err) throw err
  console.log('Server listening at http://localhost:3000')
})

Can't apply whitelist per route

I am trying to whitelist some keys per route, but the whitelist does not apply.

To Reproduce

Steps to reproduce the behavior: Add rate limit config to the route and add whitelist param.

server.register(rateLimitPlugin, {
  global : false,
  redis,
  skipOnError: true
});
 server.get('/test', {
    preHandler: authHandler,
    config: {
      rateLimit: {
        max: 3,
        timeWindow: '1 minute',
        whitelist: ['testuser'],
        keyGenerator: function (req: any) {
          return req.params.authedUserId || 'test';
        },
        onExceeded: function (req: any) {
          console.log('Exceeded')
        },
      }
    }
  }, handler);

Expected behavior

As the documentation shows, it should be possible to apply a whitelist per route, not only globally.
I think the problem is in preHandler:

    // whitelist doesn't apply any rate limit
    if (pluginComponent.whitelist.indexOf(key) > -1) {
      next()
      return
    }

pluginComponent.whitelist is an empty array here; it should be params.whitelist in this case.
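A minimal sketch of the fix the reporter is pointing at (the `effectiveOptions` name is illustrative, not the plugin's actual internals): per-route options should be merged over the plugin-level ones before the whitelist check runs.

```javascript
// Illustrative only: merge route-level rate-limit options over plugin defaults,
// so a per-route whitelist takes effect in the preHandler check.
function effectiveOptions (pluginOpts, routeOpts) {
  return { ...pluginOpts, ...routeOpts } // route-level keys win
}

const pluginOpts = { max: 150, whitelist: [] }
const routeOpts = { max: 3, whitelist: ['testuser'] }
const params = effectiveOptions(pluginOpts, routeOpts)
// params.whitelist is ['testuser'], so the check no longer sees an empty array
```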

Your Environment

  • node version: 10.16
  • fastify version: >=2.5

Being able to async "max"

🚀 Feature Proposal

Being able to use async/await in max, so we can fetch details from our database/cache or similar.

Motivation

I would love to contribute this, and it's a very easy implementation. I have not yet measured how much performance might be lost by using async/await here.

Example

For one of my recent projects, users with different access levels had different rate limits, fetched asynchronously. I think this would be a very good addition to the fastify-rate-limit plugin.

More efficient limiting

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the feature has not already been requested

🚀 Feature Proposal

Description

I ​think the current rate limiter is missing the point. Rate limiters are made to prevent brute force attacks, right?
The thing is, fastify-rate-limiter only blocks most of brute force requests.
Let me give you an example:
I've reached the limit and now I'm locked for 2 minutes. I'll wait for 1 minute and will send a request to the server again.
It will says that I should wait for 1 more minute.

Nothing wrong right? 🧐

But imagine a script that sends 15 requests to the server every minute. It will reach the rate limit after 1 minute (let's assume the limit is 15 and the lock time is 2 minutes). It will keep sending requests until the rate limit window ends, and after that it will get another 15 requests through. So it gets 15 requests through every 3 minutes, even while the rate limiter is on!

Solution

I think the best solution is to renew the lock time on every request while the IP is locked.
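For reference, later releases of @fastify/rate-limit gained a `continueExceeding` option that behaves exactly like this, renewing the full time window on every request made while the client is blocked. A sketch, assuming a plugin version that supports it:

```javascript
fastify.register(require('fastify-rate-limit'), {
  max: 15,
  timeWindow: '2 minutes',
  // While a client is over the limit, every further request resets the
  // time window instead of letting the original window expire.
  continueExceeding: true
})
```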

Motivation

I discovered Fastify recently, and since then I've been in love with it 😁

I try to use the Fastify version of everything when I'm developing with Fastify! The rate limiter is one of those. Maybe the feature I'm requesting is not useful, but I needed it, so I shared my thoughts with you guys.

Example

No response

An in-range update of fast-json-stringify is breaking the build 🚨

The dependency fast-json-stringify was updated from 1.15.7 to 1.16.0.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

fast-json-stringify is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v1.16.0

Features:

  • Added BigInt support - #197

Commits

The new version differs by 2 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Per route limit not working

In my project I tried using the rate-limit, and it worked. Then I tried using the per-route limit, but it didn't work at all. I replicated the example to see if I made any error in my project, but it still didn't work.

Here is the code I used:

const fastify = require('fastify')()

fastify.register(require('fastify-rate-limit'),
  {
    global: false,
    max: 10,
    timeWindow: '1 minute',
    skipOnError: false,
    keyGenerator: function(req) {
        console.log(req.ip);
        return req.ip;
    }
  })

fastify.get('/rate-test', {
  config: {
    rateLimit: {
      max: 3,
      timeWindow: '1 minute'
    }
  }
}, (req, reply) => {
  reply.send({ hello: 'from ... root' })
});

fastify.listen(3000, err => {
    if (err) throw err
    console.log('Server listening at http://localhost:3000')
})

The server returns a 429 status only after 10 continuous requests, but not after 3.

allowList not working in Endpoint config

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure it has not already been reported

Fastify version

3.20.2

Plugin version

5.6.1

Node.js version

14.6.0

Operating system

Windows

Operating system version (i.e. 20.04, 11.3, 10)

10 x64

Description

I've read the docs and the examples; they're pretty clear IMO, so I must be missing something.
I cannot get allowList to work in the config for an endpoint. It does work when used in the global config, though.

Steps to Reproduce

This works for allowing 127.0.0.1 - via global config:

fastify.register(fastifyRateLimit,
    {
        global: false,
        max: 3000, // default max rate limit
        allowList: ['127.0.0.1'], // global allowList access ( ACL based on the key from the keyGenerator)
        skipOnError: false, // default false
    });

But, when I'm specifying 127.0.0.1 in the allowList directly for an endpoint, it does not kick in:

fastify.route({
        url: '/activate',
        method: 'POST',
        schema: {},
        handler: (request) => {
            return rt_activate(request);
        },
        config: {
            rateLimit: {
                max: 3,
                timeWindow: '1 minute',
                allowList: ['127.0.0.1'],     // <--------- never kicks in
                errorResponseBuilder: (req: FastifyRequest, context: any) =>
                    ({
                        code: 429, timeWindow: context.after, limit: context.max,
                        message: 'Too many requests sent. Please try again in 1 minute.'
                    }),
                onExceeding: function(req: FastifyRequest){
                    console.log('callback on exceeding ... executed before response to client. req is given as argument')
                },
                onExceeded: function(req: FastifyRequest){
                    console.log('callback on exceeded ... to block ip in a security group for example, req is given as argument')
                }
            }
        }
    }
);

What am I doing wrong?

Expected Behavior

To allow the specified IPs via the endpoint config's allowList array.

Thanks.

Dynamic max requests

I was looking at the code and it seems that the plugin allows for the specified max option to be a function, but when I try to use a function to generate the max value I get an error such as:

unhandled exception TypeError: The header content contains invalid characters
at storeHeader (_http_outgoing.js:436:13)
at ServerResponse._storeHeader (_http_outgoing.js:350:7)
at ServerResponse.writeHead (_http_server.js:257:8)
at onSendEnd (/usr/src/app/node_modules/fastify/lib/reply.js:329:7)
at onSendHook (/usr/src/app/node_modules/fastify/lib/reply.js:282:5)
at _Reply.Reply.send (/usr/src/app/node_modules/fastify/lib/reply.js:139:3)
at onIncr (/usr/src/app/node_modules/fastify-rate-limit/index.js:136:12)
at LocalStore.incr (/usr/src/app/node_modules/fastify-rate-limit/store/LocalStore.js:19:3)
at Object.preHandler (/usr/src/app/node_modules/fastify-rate-limit/index.js:110:27)
at hookIterator (/usr/src/app/node_modules/fastify/lib/hooks.js:124:10)

Is there support for a non-static max requests value and I'm missing something, or is this not implemented?

In my use case, different clients connecting to the API have different max limits, so I would like the max option to accept a function, provided by me, that returns the value to check against.
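For reference, the plugin's documented shape for this (in versions where it works) is `max` as a `(req, key) => number` function; the "invalid characters" error above suggests the installed version put the function itself into the rate-limit header instead of its return value. A sketch, where `clientLimits` and the `x-api-key` header are hypothetical:

```javascript
// Hypothetical per-client limits, e.g. loaded from configuration
const clientLimits = { 'key-gold': 1000, 'key-free': 50 }

fastify.register(require('fastify-rate-limit'), {
  timeWindow: '1 minute',
  keyGenerator: (req) => req.headers['x-api-key'] || req.ip,
  // max as a function of the request and the generated key;
  // it must return a number, not a function
  max: (req, key) => clientLimits[key] || 10
})
```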

Thanks for your time!

Slowing down of responses instead of returning 429

🚀 Feature Proposal

Have an option to slow down responses instead of returning 429 errors.

Motivation

Having the option to slow down responses would deter scraping of data from a server (which would otherwise put an additional load on the server), while still serving responses in cases where the user may not actually be scraping, but there are simply many machines browsing the site from behind the same IP.

Example

express-slow-down
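For context, express-slow-down's model is roughly: after `delayAfter` hits in a window, each extra hit adds `delayMs` of delay, capped at a maximum. A minimal, self-contained sketch of that schedule (the names and defaults below are illustrative, not the library's API):

```javascript
// Compute the delay (in ms) to apply to the nth hit in a window.
// After `delayAfter` hits, each extra hit adds `delayMs`, capped at `maxDelayMs`.
function delayFor (hits, { delayAfter = 5, delayMs = 500, maxDelayMs = 10000 } = {}) {
  if (hits <= delayAfter) return 0
  return Math.min((hits - delayAfter) * delayMs, maxDelayMs)
}
```

A preHandler could then await such a delay before continuing, instead of replying with 429.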

Per Route Rate Limit

Is it possible to use this rate-limiter per route instead of for every route?

Custom response

Currently you will always get a JSON in the following schema if you exceed the rate limit:

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded, retry in 1 minute",
  "statusCode": 429
}

Since I use a different JSON schema in my response, it would be great to have a way of customizing the response. E.g. Having a callback to format/send the response.
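For readers landing here: this was later addressed by the `errorResponseBuilder` option, which lets you return your own body for the 429 response. A sketch, assuming a plugin version that supports it:

```javascript
fastify.register(require('fastify-rate-limit'), {
  max: 5,
  timeWindow: '1 minute',
  // Shape the 429 body to match your own response schema;
  // context carries details such as `after` and `max`.
  errorResponseBuilder: (req, context) => ({
    status: 'error',
    code: 429,
    message: `Rate limit exceeded, retry in ${context.after}`
  })
})
```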

request to HEAD or GET route counted as two with activated `exposeHeadRoutes` in Fastify

🐛 Bug Report

Every request to a HEAD or GET route is counted as two when exposeHeadRoutes is activated in Fastify (the x-ratelimit-remaining header decreases by 2).
If you manually install the HEAD route before the GET one, with or without the option, the counter works properly.

After some minor investigation, it looks like the plugin's onRequest function gets installed on the route twice.

  // onRequest function that will be used for the current endpoint being processed
  function onRequest (req, res, next) {

To Reproduce

Reproduction steps are pretty simple:

  1. Get latest fastify from npm
  2. get latest fastify-rate-limit plugin from npm
  3. Activate exposeHeadRoutes: true option for fastify server
  4. Copypaste my minimal example
  5. Do 2 requests with Postman or curl to 127.0.0.1:1111/route -> the x-ratelimit-remaining header will be 118 in the first response and 116 in the second.

const fastify = require("fastify")({
	onConstructorPoisoning: "remove",
	onProtoPoisoning: "remove",
	ignoreTrailingSlash: true,
	exposeHeadRoutes: true
});

fastify.register(require("fastify-rate-limit"), {
	max: 120,
	timeWindow: "5 minutes",
	addHeaders: {
		"x-ratelimit-reset": false,
		"retry-after": false
	}
});


async function testGetRoute(fastify) {
	fastify.get("/route", async () => {
		throw new Error();
	});
}

fastify.register(testGetRoute);

const start = async () => {
	try {
		await fastify.listen(1111, "127.0.0.1");
	} catch (error) {
		process.exit(1);
	}
};
start();

Expected behavior

After the first request with Postman/curl to 127.0.0.1:1111/route, the x-ratelimit-remaining response header will be 119; after the second, 118.

Your Environment

  • node version: 14/15
  • fastify version: 3.12.0
  • os: Windows

Per-route ratelimits don't work.

🐛 Bug Report

As described in #113, using a per-route ratelimit doesn't work.

To Reproduce

Steps to reproduce the behavior:

  1. Create a new fastify instance
  2. Register the ratelimit plugin
  3. Add a ratelimit on a specific route
import fastify, { FastifyInstance, FastifyRequest } from 'fastify'

import ratelimit from 'fastify-rate-limit'

const server: FastifyInstance = fastify()

server.register(ratelimit,{
	global: false,
	max: 3000,
	keyGenerator: (req: FastifyRequest): string => {
		return req.headers.authorization
	},
})

server.get('/private', {
	config: {
	  rateLimit: {
		max: 3,
		timeWindow: '1 minute'
	  }
	}
  }, (req, reply) => {
	reply.send({ hello: 'from ... private' })
})

Expected behavior

I expect the ratelimit to be applied.

Your Environment

  • node version: 14.16.0
  • fastify version: ^3.14.12
  • fastify-rate-limit version: ^5.5.0
  • os: Mac
  • any other relevant information

An in-range update of knex is breaking the build 🚨


☝️ Important announcement: Greenkeeper will be saying goodbye 👋 and passing the torch to Snyk on June 3rd, 2020! Find out how to migrate to Snyk and more at greenkeeper.io


The devDependency knex was updated from 0.20.11 to 0.20.12.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

knex is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Commits

The new version differs by 12 commits.

  • 90eac8f Prepare for 0.20.12 release
  • 31c5b86 Fix: Transaction_OracleDB can use config.connection (#3731)
  • 37d9c30 Fix/method binding on knex proxy (#3717)
  • acf56b5 Fix incorrect type signature of Having (#3719)
  • c309c98 Cleanup/remove transaction stalling (#3716)
  • 0f523db Removed .should(..) syntax from test cases (#3713)
  • 9d07bc9 Removed some globals from tests (#3712)
  • d00bd8d Fix: Added missing call to _reject in Transactor#transaction (#3706)
  • 6e6b666 Rewrote Transaction#acquireConnection() methods to use async (#3707)
  • 4006bdd Fixed a few unhandled Promise rejections in a test case (#3705)
  • 2270c11 Speed up CI by tweaking db configs (#3688)
  • 1ae9312 Updated .gitignore files to ignore testing artifacts (#3709)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Feature: Ability to pass in a custom Store

🚀 Feature Proposal

It would be a nice enhancement to add the ability to pass in a Store interface that abides by the current constructs of the Local/Redis stores, i.e. an incr method and callback. An easy way to extend to other storage mechanisms without bloating the existing package.

Motivation

There are a lot of in-memory storage tools out there, as well as traditional ones such as databases, so adding a new technology into an already established stack can often be costly in terms of maintenance, hosting, etc...

Example

function theStore (timeWindow, key) {
  // ...
}
// etc...

fastify.register(require('fastify-rate-limit'), {
  store: require('./my-custom-store')
})

This means a slightly custom call into the custom store, but otherwise it should be a very doable feature.
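A minimal in-memory sketch of the store contract this proposal describes — an incr(key, cb) method plus a child(routeOptions) factory for per-route stores, which is the shape the plugin ended up documenting. Double-check the exact contract for your plugin version:

```javascript
// Minimal in-memory store following the custom-store contract:
// incr(key, cb) -> cb(null, { current, ttl }); child(routeOptions)
// returns a store for a specific route's merged options.
function CustomStore (options) {
  this.options = options
  this.hits = new Map()
}

CustomStore.prototype.incr = function (key, callback) {
  const now = Date.now()
  let entry = this.hits.get(key)
  if (!entry || entry.expiresAt <= now) {
    // Start a fresh window for this key
    entry = { current: 0, expiresAt: now + this.options.timeWindow }
    this.hits.set(key, entry)
  }
  entry.current++
  callback(null, { current: entry.current, ttl: entry.expiresAt - now })
}

CustomStore.prototype.child = function (routeOptions) {
  // Per-route stores inherit the global options merged with the route's own
  return new CustomStore(Object.assign({}, this.options, routeOptions))
}
```

Registration would then look roughly like `fastify.register(require('fastify-rate-limit'), { store: CustomStore, max: 10, timeWindow: 60000 })`.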

Route-level whitelist does not work

🐛 Bug Report

When a route-level whitelist is used, it does not take effect.

To Reproduce

Steps to reproduce the behavior:

fastify.get('/private', {
  config: {
    rateLimit: {
      max: 3,
      whitelist: ['127.0.0.1', '127.0.3.1'],
      timeWindow: '1 minute'
    }
  }
}, (req, reply) => {
  reply.send({ hello: 'from ... private' })
})

Your Environment

  • node version: v14.4.0
  • fastify version: ^3.8.0
  • os: Mac
  • any other relevant information

An in-range update of ioredis is breaking the build 🚨




The devDependency ioredis was updated from 4.16.0 to 4.16.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

ioredis is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v4.16.1

4.16.1 (2020-03-28)

Bug Fixes

Commits

The new version differs by 3 commits.

  • 0b4826f chore(release): 4.16.1 [skip ci]
  • 0013991 fix: abort incomplete pipelines upon reconnect (#1084)
  • 4bbdfd6 docs: fix README typo (#1068)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Rate limiting without a handler

🚀 Feature Proposal

It would be really helpful if one could add route based rate limiting without a handler function.

Motivation

I am using Fastify as a gateway in combination with fastify-gateway. With the gateway plugin I can delegate handling to external services. By being able to add a route-based rate limiter without a handler parameter, we can make the fastify-rate-limit plugin usable beyond typical route-based rate limiting.

Example

Expected Behaviour:

fastify.register(fastifyRateLimit, { //just as an option
  config: {
    route: '/routeX',
    rateLimit: {
      max: 3,
      timeWindow: '1 minute'
    }
  }
})

Return an error object instead of a POJO.

Currently, we return a POJO (and use a custom serializer) instead of an error object, which means the .setErrorHandler API won't be called.
This should be considered a bug, as users can't override the default response or perform custom actions when a 429 error happens.

Route handler registered twice

🐛 Bug Report

When using fastify with ignoreTrailingSlash: false (the default), creating routes with a prefix and defining rate limiting for / of the prefixed route, each request is counted twice. Looking deeper into this, the onRequest handler is added twice: first when fastify adds the route for '' (node_modules\fastify\lib\route.js:163), and again when it adds the route for '/' (node_modules\fastify\lib\route.js:166; the fastify-rate-limit onRequest handler seems to already exist in routeOptions by then).

To Reproduce

Steps to reproduce the behavior:

const subroute = async (fst) => {
  fst.register(FastifyRateLimit, {
    max: 60,
    timeWindow: 1000,
    keyGenerator: (req) => {
      console.log('call'); // will be called twice when POST /resource is called

      return 'test';
    },
  });

  fst.post('/', {}, (req, res) => {
    res.type('application/json').send({ data: 'test' });
  });
};

const app = fastify();

app.register(subroute, { prefix: '/resource' });

Expected behavior

onRequest called once and each request counted as one.

Your Environment

  • node version: 12
  • fastify version: 3.9.2
  • os: Windows

Rate limit 404s

🚀 Feature Proposal

It would be interesting to rate limit 404s as well. This might be controversial because it's faster to just return the 404 than to read the current limit from a database. However, not limiting 404s might be a security risk, as an attacker could probe for which routes are available.

Example

  const fastify = Fastify()
  await fastify.register(rateLimit, { global: true, max: 2, timeWindow: 1000 })
  fastify.setNotFoundHandler({
    onRequest: fastify.rateLimit
  }, function (request, reply) {
    reply.code(404).send({ hello: 'world' })
  })
  fastify.get('/', function (request, reply) {
    reply.send({ hello: 'world' })
  })
  await fastify.listen(3000)

Consider not using X-Forwarded-For header for detecting the IP address

I tried to use this rate limiter with fastify server behind the proxy and I found two major problems with the detection of the IP address in following code:

var ip = req.headers['X-Forwarded-For'] || req.connection.remoteAddress

  1. All keys in req.headers are lowercase, so req.headers['X-Forwarded-For'] is always undefined, even if the proxy sends the header upstream. That means req.connection.remoteAddress is always used as the IP, and it's always the same value (the IP of the proxy). This gives an attacker the possibility to make the entire server unavailable for every user.

  2. Even if you fix the casing, there is still another issue: you should never use an HTTP header as a cache key unless you trust the source. The X-Forwarded-For header can be crafted by the client. Imagine your fastify server is publicly available (no proxy in front of it); then the following requests from the same client are going to be treated as different:

$ curl -v -H "X-Forwarded-For: a" http://example.com/
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 4

$ curl -v -H "X-Forwarded-For: b" http://example.com/
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 4

$ curl -v -H "X-Forwarded-For: c" http://example.com/
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 4

With this you can bypass the entire rate limiter by varying the X-Forwarded-For header on each request. The same happens with a proxy in front of it, because proxies usually leave the original X-Forwarded-For header in place and just append their own IP to the end of the string.

There are a few candidates that might suit better as the cache key: e.g. nginx adds an X-Real-IP header which is always overridden, or you can use a session ID.

I'd like to propose a possible solution: use only req.connection.remoteAddress by default and provide a configurable option for cases where devs want to change the IP detection behavior:

fastify.register(require('fastify-rate-limit'), {
  max: 5,
  timeWindow: '1 minute',
  resolveIpAddress: function (req) {
	return req.headers['x-forwarded-for'];
	// or return req.headers['x-real-ip'];
	// or return req.session.id;
  }
});

And then the implementation:

var ip;
if (typeof opts.resolveIpAddress === "function") {
	ip = opts.resolveIpAddress(req);
}
if (!ip) {
	ip = req.connection.remoteAddress;
}

I can create a PR, but first I'd like to hear your opinion.
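One more data point: Fastify itself addresses the trust problem with the `trustProxy` server option. When it is set, `req.ip` is derived from X-Forwarded-For only for connections coming from a trusted proxy, so an untrusted client cannot spoof its own key. A sketch:

```javascript
const fastify = require('fastify')({
  // Only trust X-Forwarded-For on connections from the local reverse proxy;
  // for everyone else req.ip falls back to the socket address.
  trustProxy: '127.0.0.1'
})

fastify.register(require('fastify-rate-limit'), {
  max: 5,
  timeWindow: '1 minute',
  keyGenerator: (req) => req.ip
})
```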

An in-range update of fastify is breaking the build 🚨

The devDependency fastify was updated from 2.12.0 to 2.12.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

fastify is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v2.12.1

📚 PR:

  • Added fastify-esso plugin (#2084)
  • add comma to queryStringJsonSchema (#2085)
  • Throws error if an invalid status code is sent (#2082)
  • chore: greenkeeper ignore semver (#2095)
  • Fix inaccurate type of setErrorHandler (#2092)
  • Fixed typo res.res to reply.res (#2099)
  • docs: use computed key when assigning property to request (#2100)
  • types: add type of this to the not found handler (#2102)
  • docs: fix header and body generic types (#2103)
  • fix: multiple route same schema (#2108)
  • http2: fix req.hostname not set (#2113)
  • Added fastify-axios plugin (#2118)
  • docs: Clarify reply.redirect status code docs (#2121)
Commits

The new version differs by 14 commits.

  • c4a83ae Bumped v2.12.1
  • 350f00b docs: Clarify reply.redirect status code docs (#2121)
  • a378dd5 Added fastify-axios plugin (#2118)
  • 093947b http2: fix req.hostname not set (#2113)
  • 6f79a90 fix: multiple route same schema (#2108)
  • 98ca7fd docs: fix header and body generic types (#2103)
  • 772cf5e types: add type of this to the not found handler (#2102)
  • b44d859 docs: use computed key when assigning property to request (#2100)
  • da4c89d Fixed typo res.res to reply.res (#2099)
  • 743ad74 Fix inaccurate type of setErrorHandler (#2092)
  • f971751 chore: greenkeeper ignore semver (#2095)
  • 41cd02f Throws error if an invalid status code is sent (#2082)
  • 14b4e02 add comma to queryStringJsonSchema (#2085)
  • 1c52b9f Added fastify-esso plugin (#2084)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Per route limit with redis *and* pre-handler does not work

🐛 Bug Report

When you are using Redis and a custom preHandler in a route, the rate limiting basically fails.

To Reproduce

Steps to reproduce the behavior:

  1. Create a route
  2. Create a pre handler method for that route
  3. Configure rate limiting via config
  4. Fail

Paste your code here:

https://github.com/amir-hadi/fastify-rate-limit/blob/master/test/route-rate-limit.test.js

I forked this repo and added another test case that proves that something is wrong with Redis and a pre handler on the route. Please see the test case in my repo that is failing.

Expected behavior

It should also work with Redis and a pre handler.

Your Environment

  • node version: 10.16
  • fastify version: >=2.5.0
  • os: Mac
  • any other relevant information

Any help would be much appreciated. It looks like, in your preHandler, the callback does not wait long enough before calling next, so your preHandler exits/returns and your callback no longer modifies the response.

one route with limit

Hello, I try to use the limit on one route, but it does not work.
Example:

fastifyGlobal.register(require('fastify-rate-limit'),
{
    global : false, // default true
    whitelist: [], // default []
})


fastifyGlobal.get('/', {
    config: {
      rateLimit: {
        max: 3,
        timeWindow: '1 minute'
      }
    }
  }, (req, reply) => {
    reply.send({ hello: 'from ... root' })
  })

This code is the example from the repo, but it does not work; requests always go through normally.
If I use global limits, it WORKS! But I don't need global limits.
What's the problem?

Req.ip or req.raw.ip? Cookies?

In the docs we have:

fastify.register(require('fastify-rate-limit'), {
  keyGenerator: function(req) { /* ... */ }, // default (req) => req.ip
})

But I got undefined for req.ip. I think this was changed some time ago, but I'm not sure. Maybe the docs should be updated?

  keyGenerator: function(req) {
    console.log(req.ip) //undefined
    console.log(req.raw.ip) //ok
    return req.ip 
  }

In the code I can see:

const keyGenerator = typeof opts.keyGenerator === 'function'
    ? opts.keyGenerator
    : (req) => req.raw.ip

And a question: if the hooks are added onRequest, then is there no way to rate limit based on parsed cookies (fastify-cookie)?
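For what it's worth, recent @fastify/rate-limit releases expose a `hook` option that controls which lifecycle hook the limiter runs in, which makes cookie-based keys possible by running after cookie parsing. A sketch, assuming such a version; `sessionId` is a hypothetical cookie name:

```javascript
const fastify = require('fastify')()

fastify.register(require('@fastify/cookie'))
fastify.register(require('fastify-rate-limit'), {
  max: 10,
  timeWindow: '1 minute',
  // Run after @fastify/cookie has parsed the request cookies
  hook: 'preHandler',
  keyGenerator: (req) => (req.cookies && req.cookies.sessionId) || req.ip
})
```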

move from preHandler hook to onRequest hook

Currently the rate limiting is not applied before auth or body parsing is done. This creates potential situations where some load is not shed from the server when it could have been.

CORS headers?

It seems CORS headers are not included in the error response, leading fetch to the following issue:

Access to fetch at 'http://domain.test:3000/test' from origin 'http://domain.test' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

TypeError: Failed to fetch "Failed to fetch"

My question is: can we have another event option, onError? I mean:

{
  // ....
   onError: function ( /* Fastify response */ res ) {
        // .... magic here
        res.header('X-Custom-Header', 'value');
   },
   // ....
}

An in-range update of fast-json-stringify is breaking the build 🚨

The dependency fast-json-stringify was updated from 1.16.2 to 1.16.3.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

fast-json-stringify is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes for v1.16.3

📚 PR:

  • chore(package): update semver to version 7.1.0 (#198)
  • adding dereference of refs in anyOfs (#207)
Commits

The new version differs by 3 commits.

  • 82bfe28 Bumped v1.16.3
  • 913cc2d adding dereference of refs in anyOfs (#207)
  • b4e7e06 chore(package): update semver to version 7.1.0 (#198)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

LocalStore onClose does not call done after clearInterval

When using fastify-rate-limit with LocalStore in a NestFastifyApplication instance:

await app.close() hangs forever, because LocalStore's onClose hook does not call done:

app.addHook('onClose', (done) => {
  clearInterval(this.interval)
  // missing done()
})
