nestjs / throttler

A rate limiting module for NestJS to work with Fastify, Express, GQL, Websockets, and RPC 🧭

Home Page: https://nestjs.com

License: MIT License


throttler's Introduction


A progressive Node.js framework for building efficient and scalable server-side applications.


Description

A Rate-Limiter for NestJS, regardless of the context.

For an overview of the community storage providers, see Community Storage Providers.

This package comes with a couple of goodies worth mentioning, the first being the ThrottlerModule.

Installation

$ npm i --save @nestjs/throttler

Versions

@nestjs/throttler@^1 is compatible with Nest v7, while @nestjs/throttler@^2 is compatible with both Nest v7 and Nest v8. It is suggested, however, to use it only with v8, in case of unforeseen breaking changes against v7.

For NestJS v10, please use version 4.1.0 or above.

Table of Contents

Usage

ThrottlerModule

Once the installation is complete, the ThrottlerModule can be configured as any other Nest package with forRoot or forRootAsync methods.

@@filename(app.module)
import { Module } from '@nestjs/common';
import { ThrottlerModule } from '@nestjs/throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot([{
      ttl: 60000,
      limit: 10,
    }]),
  ],
})
export class AppModule {}

The above sets the global options for ttl, the time to live in milliseconds, and limit, the maximum number of requests within the ttl, for the guarded routes of your application.
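To make the window semantics concrete, here is a minimal, self-contained sketch of a fixed-window counter with the same ttl/limit meaning. This is an illustration only, not the module's actual storage implementation:

```typescript
// Simplified sketch of the ttl/limit semantics: each tracker (e.g. an IP)
// gets a counter that resets once its ttl window has elapsed.
type Window = { hits: number; expiresAt: number };

class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(private readonly ttl: number, private readonly limit: number) {}

  // Returns true if the request is allowed, false if it should be throttled.
  allow(tracker: string, now: number = Date.now()): boolean {
    const win = this.windows.get(tracker);
    if (!win || now >= win.expiresAt) {
      // First hit, or the previous window expired: start a fresh window.
      this.windows.set(tracker, { hits: 1, expiresAt: now + this.ttl });
      return true;
    }
    win.hits += 1;
    return win.hits <= this.limit;
  }
}
```

With ttl: 60000 and limit: 10, the eleventh call from the same tracker inside one minute is rejected, and the counter starts over once the window passes.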

Once the module has been imported, you can then choose how you would like to bind the ThrottlerGuard. Any kind of binding as mentioned in the guards section is fine. If you wanted to bind the guard globally, for example, you could do so by adding this provider to any module:

// APP_GUARD is imported from @nestjs/core
{
  provide: APP_GUARD,
  useClass: ThrottlerGuard,
}

Multiple Throttler Definitions

There may come a time when you want to set up multiple throttling definitions, like no more than 3 calls in a second, 20 calls in 10 seconds, and 100 calls in a minute. To do so, set up your definitions in the array with named options that can later be referenced in the @SkipThrottle() and @Throttle() decorators to change the options again.

@@filename(app.module)
@Module({
  imports: [
    ThrottlerModule.forRoot([
      {
        name: 'short',
        ttl: 1000,
        limit: 3,
      },
      {
        name: 'medium',
        ttl: 10000,
        limit: 20
      },
      {
        name: 'long',
        ttl: 60000,
        limit: 100
      }
    ]),
  ],
})
export class AppModule {}
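Conceptually, a request must stay within every named window; exceeding any one of them throttles it. Here is a small, self-contained sketch of that check (an illustration of the assumed semantics, not the module's internals):

```typescript
// Given the timestamps of previous hits from one tracker, report which
// definitions the next request at `now` would violate. Every window is
// checked; violating any one of them is enough to throttle.
type Definition = { name: string; ttl: number; limit: number };

function violated(defs: Definition[], hits: number[], now: number): string[] {
  return defs
    .filter((d) => hits.filter((t) => now - t < d.ttl).length >= d.limit)
    .map((d) => d.name);
}
```

For example, with the definitions above, a fourth call within one second violates 'short' (limit 3) while still being well within 'medium' and 'long'.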

Customization

There may be a time when you want to bind the guard to a controller or globally but disable rate limiting for one or more of your endpoints. For that, you can use the @SkipThrottle() decorator to negate the throttler for an entire class or a single route. The @SkipThrottle() decorator can also take an object of string keys with boolean values, for cases where you want to exclude most of a controller but not every route, and you can configure it per throttler set if you have more than one. If you do not pass an object, the default is { default: true }.

@SkipThrottle()
@Controller('users')
export class UsersController {}

The @SkipThrottle() decorator can be used to skip a route or a class, or to negate the skipping of a route within a class that is skipped.

@SkipThrottle()
@Controller('users')
export class UsersController {
  // Rate limiting is applied to this route.
  @SkipThrottle({ default: false })
  dontSkip() {
    return 'List users work with Rate limiting.';
  }
  // This route will skip rate limiting.
  doSkip() {
    return 'List users work without Rate limiting.';
  }
}

There is also the @Throttle() decorator, which can be used to override the limit and ttl set in the global module, to give tighter or looser security options. This decorator can be used on a class or a method as well. From version 5 onwards, the decorator takes an object whose keys are the names of the throttler sets and whose values are objects with limit and ttl keys and integer values, similar to the options passed to the root module. If you did not set a name in your original options, use the string default. You would configure it like this:

// Override default configuration for Rate limiting and duration.
@Throttle({ default: { limit: 3, ttl: 60000 } })
@Get()
findAll() {
  return "List users works with custom rate limiting.";
}

Proxies

If your application runs behind a proxy server, check the specific HTTP adapter options (express and fastify) for the trust proxy option and enable it. Doing so will allow you to get the original IP address from the X-Forwarded-For header, and you can override the getTracker() method to pull the value from the header rather than from req.ip. The following example works with both express and fastify:

// throttler-behind-proxy.guard.ts
import { ThrottlerGuard } from '@nestjs/throttler';
import { Injectable } from '@nestjs/common';

@Injectable()
export class ThrottlerBehindProxyGuard extends ThrottlerGuard {
  protected getTracker(req: Record<string, any>): Promise<string> {
    return new Promise<string>((resolve, reject) => {
      const tracker = req.ips.length > 0 ? req.ips[0] : req.ip; // individualize IP extraction to meet your own needs
      resolve(tracker);
    });
  }
}

// app.controller.ts
import { ThrottlerBehindProxyGuard } from './throttler-behind-proxy.guard';

@UseGuards(ThrottlerBehindProxyGuard)

info Hint You can find the API of the req Request object for express here and for fastify here.

Websockets

This module can work with websockets, but it requires some class extension. You can extend the ThrottlerGuard and override the handleRequest method like so:

@Injectable()
export class WsThrottlerGuard extends ThrottlerGuard {
  async handleRequest(
    context: ExecutionContext,
    limit: number,
    ttl: number,
    throttler: ThrottlerOptions,
  ): Promise<boolean> {
    const client = context.switchToWs().getClient();
    const ip = client._socket.remoteAddress;
    const key = this.generateKey(context, ip, throttler.name);
    const { totalHits } = await this.storageService.increment(key, ttl);

    if (totalHits > limit) {
      throw new ThrottlerException();
    }

    return true;
  }
}

info Hint If you are using ws, it is necessary to replace _socket with conn.

There are a few things to keep in mind when working with WebSockets:

  • The guard cannot be registered with APP_GUARD or app.useGlobalGuards()
  • When a limit is reached, Nest will emit an exception event, so make sure there is a listener ready for it

info Hint If you are using the @nestjs/platform-ws package you can use client._socket.remoteAddress instead.

GraphQL

The ThrottlerGuard can also be used to work with GraphQL requests. Again, the guard can be extended, but this time the getRequestResponse method will be overridden

@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
  getRequestResponse(context: ExecutionContext) {
    const gqlCtx = GqlExecutionContext.create(context);
    const ctx = gqlCtx.getContext();
    return { req: ctx.req, res: ctx.res };
  }
}

However, when using Apollo Express/Fastify or Mercurius, it's important to configure the context correctly in the GraphQLModule to avoid any problems.

Apollo Server (for Express):

For Apollo Server running on Express, you can set up the context in your GraphQLModule configuration as follows:

GraphQLModule.forRoot({
  // ... other GraphQL module options
  context: ({ req, res }) => ({ req, res }),
});

Apollo Server (for Fastify) & Mercurius:

When using Apollo Server with Fastify or Mercurius, you need to configure the context differently. You should use request and reply objects. Here's an example:

GraphQLModule.forRoot({
  // ... other GraphQL module options
  context: (request, reply) => ({ request, reply }),
});

Configuration

The following options are valid for the object passed to the array of the ThrottlerModule's options:

  • name: the name for internal tracking of which throttler set is being used. Defaults to `default` if not passed
  • ttl: the number of milliseconds that each request will last in storage
  • limit: the maximum number of requests within the TTL
  • ignoreUserAgents: an array of regular expressions of user-agents to ignore when throttling requests
  • skipIf: a function that takes in the ExecutionContext and returns a boolean to short-circuit the throttler logic. Like @SkipThrottle(), but based on the request
  • getTracker: a function that takes in the Request and returns a string, overriding the default logic of the getTracker method
  • generateKey: a function that takes in the ExecutionContext, the tracker string, and the throttler name and returns a string, overriding the final key used to store the rate-limit value. This overrides the default logic of the generateKey method

If you need to set up storage instead, or want to use some of the above options in a more global sense, applying to each throttler set, you can pass them via the throttlers option key and use the table below:

  • storage: a custom storage service for keeping track of the throttling. See Storages below.
  • ignoreUserAgents: an array of regular expressions of user-agents to ignore when throttling requests
  • skipIf: a function that takes in the ExecutionContext and returns a boolean to short-circuit the throttler logic. Like @SkipThrottle(), but based on the request
  • throttlers: an array of throttler sets, defined using the table above
  • errorMessage: a string which overrides the default throttler error message
  • getTracker: a function that takes in the Request and returns a string, overriding the default logic of the getTracker method
  • generateKey: a function that takes in the ExecutionContext, the tracker string, and the throttler name and returns a string, overriding the final key used to store the rate-limit value. This overrides the default logic of the generateKey method
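As an illustration of the getTracker and generateKey shapes described above, here is a hedged sketch with simplified stand-in types. The real signatures use Express's Request and Nest's ExecutionContext; the x-api-key header and the key format are hypothetical choices for the example:

```typescript
// Simplified stand-in for the Express Request shape used by getTracker.
type Req = { ip: string; headers: Record<string, string | undefined> };

// Track authenticated callers by a (hypothetical) API key header when
// present, falling back to the client IP.
const getTracker = (req: Req): string => req.headers['x-api-key'] ?? req.ip;

// Build the storage key from the throttler name and the tracker string.
// The `name:tracker` format is an assumption for this sketch, not the
// module's default key scheme.
const generateKey = (_context: unknown, tracker: string, throttlerName: string): string =>
  `${throttlerName}:${tracker}`;
```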

Async Configuration

You may want to get your rate-limiting configuration asynchronously instead of synchronously. You can use the forRootAsync() method, which allows for dependency injection and async methods.

One approach would be to use a factory function:

@Module({
  imports: [
    ThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => [
        {
          ttl: config.get('THROTTLE_TTL'),
          limit: config.get('THROTTLE_LIMIT'),
        },
      ],
    }),
  ],
})
export class AppModule {}

You can also use the useClass syntax:

@Module({
  imports: [
    ThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      useClass: ThrottlerConfigService,
    }),
  ],
})
export class AppModule {}

This is doable, as long as ThrottlerConfigService implements the interface ThrottlerOptionsFactory.

Storages

The built-in storage is an in-memory cache that keeps track of the requests made until they have passed the TTL set by the global options. You can drop your own storage option into the storage option of the ThrottlerModule, so long as the class implements the ThrottlerStorage interface.

info Note ThrottlerStorage can be imported from @nestjs/throttler.
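As a sketch of what a custom storage might look like, here is a minimal in-memory class following the increment(key, ttl) shape used in the WebSockets example above. The record type is a simplified stand-in; consult the ThrottlerStorage interface exported from @nestjs/throttler for the exact contract:

```typescript
// Minimal in-memory storage sketch: each key accumulates hits until its
// TTL window expires, after which the counter starts over.
type StoredRecord = { totalHits: number; expiresAt: number };

class InMemoryThrottlerStorage {
  private records = new Map<string, StoredRecord>();

  async increment(
    key: string,
    ttl: number,
  ): Promise<{ totalHits: number; timeToExpire: number }> {
    const now = Date.now();
    let rec = this.records.get(key);
    if (!rec || now >= rec.expiresAt) {
      // No record yet, or the previous window expired: start fresh.
      rec = { totalHits: 0, expiresAt: now + ttl };
      this.records.set(key, rec);
    }
    rec.totalHits += 1;
    return { totalHits: rec.totalHits, timeToExpire: rec.expiresAt - now };
  }
}
```

A Redis-backed implementation would follow the same shape, swapping the Map for atomic INCR/EXPIRE operations.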

Time Helpers

There are a couple of helper methods to make the timings more readable if you prefer to use them over the direct definition. @nestjs/throttler exports five different helpers, seconds, minutes, hours, days, and weeks. To use them, simply call seconds(5) or any of the other helpers, and the correct number of milliseconds will be returned.
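The helpers are simple unit conversions; conceptually they behave like this sketch:

```typescript
// Each helper converts its unit to milliseconds, building on the previous one.
const seconds = (n: number): number => n * 1000;
const minutes = (n: number): number => seconds(n) * 60;
const hours = (n: number): number => minutes(n) * 60;
const days = (n: number): number => hours(n) * 24;
const weeks = (n: number): number => days(n) * 7;
```

So seconds(5) yields 5000, and a definition like { ttl: minutes(1), limit: 100 } reads more naturally than { ttl: 60000, limit: 100 }.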

Migration Guide

For most people, wrapping your options in an array will be enough.

If you are using a custom storage, you should wrap your ttl and limit in an array and assign it to the throttlers property of the options object.

Any @ThrottleSkip() should now take in an object with string: boolean props. The strings are the names of the throttlers. If you do not have a name, pass the string 'default', as this is what will be used under the hood otherwise.

Any @Throttle() decorators should also now take in an object with string keys, relating to the names of the throttler contexts (again, 'default' if no name) and values of objects that have limit and ttl keys.

Warning Important The ttl is now in milliseconds. If you want to keep your ttl in seconds for readability, use the seconds helper from this package; it simply multiplies the ttl by 1000 to convert it to milliseconds.
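As a sketch, a pre-v5 configuration and decorator would migrate roughly like this (shown as comments; the seconds helper keeps the readable form):

```typescript
// Before: a single options object, ttl in seconds.
// ThrottlerModule.forRoot({ ttl: 60, limit: 10 });
// @Throttle(3, 60)

// After: an array of throttler sets, ttl in milliseconds.
// ThrottlerModule.forRoot([{ ttl: seconds(60), limit: 10 }]);
// @Throttle({ default: { limit: 3, ttl: seconds(60) } })
```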

For more info, see the Changelog

Community Storage Providers

Feel free to submit a PR with your custom storage provider being added to this list.

License

Nest is MIT licensed.

🔼 Back to TOC


throttler's Issues

GQL code example incomplete

@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
  getRequestResponse(context: ExecutionContext) {
    const gqlCtx = GqlExecutionContext.create(context);
    const ctx = gql.getContext();
    return { req, ctx.req, res: ctx.res }; // ctx.request and ctx.reply for fastify
  }
}

Where is the request pulled from? Where does gqlCtx get used? What is the gql variable, as it's undefined anyway?

The code example is quite literally useless, the docs may as well point to documentation about getRequestResponse.

Lock the route for user ip and not in general

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Hello guys, I used the provided code from the example:

import { APP_GUARD } from '@nestjs/core';
import { ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot({
      ttl: 60,
      limit: 10,
    }),
  ],
  providers: [
    {
      provide: APP_GUARD,
      useClass: ThrottlerGuard,
    },
  ],
})
export class AppModule {}

Which says: The above would mean that 10 requests from the same IP can be made to a single endpoint in 1 minute.

But when I make 10 requests from one IP, I'm then not able to access the endpoint from a different IP either; it returns 429 for both IPs. Any idea if there are some additional settings that should be made?

Minimum reproduction code

The above would mean that 10 requests from the same IP can be made to a single endpoint in 1 minute.

Steps to reproduce

No response

Expected behavior

When one IP reaches the limit, the endpoint should remain open for different IPs.

Package version

^2.0.0

NestJS version

No response

Node.js version

No response

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

Custom ThrottlerGuard throwing TypeError about getAllAndOverride

Bug Report

Current behavior

I am trying to use a custom ThrottlerGuard in combination with redis (package ioredis). Whenever I use the custom guard I always get an error after sending a request to the API: TypeError: Cannot read property 'getAllAndOverride' of undefined

Input Code

https://codesandbox.io/s/nestjs-forked-s226z

// app.module
imports: [
	ThrottlerModule.forRootAsync({
		imports: [LoggerModule],
		inject: [LoggerService],
		useFactory: () => ({
			ttl: 50,
			limit: 1,
			storage: new ThrottlerStorageRedisService()
		})
	})
],

// redis.client
const redisClient = new Redis({
	host: CONSTANTS.REDIS.HOST,
	port: CONSTANTS.REDIS.PORT
});

// throttlerRedis.service
@Injectable()
export class ThrottlerStorageRedisService implements ThrottlerStorageRedis {
	redis: Redis.Redis;
	storage: Record<string, number[]>;

	constructor() {
		this.redis = redisClient;
		this.storage = {};
	}

	async getRecord(key: string): Promise<number[]> {
		const ttls = (await this.redis.scan(0, 'MATCH', `${key}:*`, 'COUNT', 10000)).pop();
		return (ttls as string[]).map(k => parseInt(k.split(':')[1])).sort();
	}

	async addRecord(key: string, ttl: number): Promise<void> {
		await this.redis.set(`${key}:${Date.now() + ttl * 1000}`, ttl, 'EX', ttl);
	}
}

// throttler.guard.ts
@Injectable()
export class ThrottlerGuard extends TGuard {
	constructor(protected reflector: Reflector, private logger: LoggerService) {
		super({}, new ThrottlerStorageRedisService(), reflector);
		this.logger.setContext('Throttler');
	}

	async handleRequest(context: ExecutionContext, limit: number, ttl: number): Promise<boolean> {
		const request = context.switchToHttp().getRequest<IRequest>();

		const ip = request.socket.remoteAddress ?? request.ip;
		const key = this.generateKey(context, ip);
		const ttls = await this.storageService.getRecord(key);

		if (ttls.length >= limit) {
			this.logger.log('Rate limit exceeded');
			throw new ThrottlerException();
		}

		await this.storageService.addRecord(key, ttl);
		return true;
	}
}

// logger.service
export class LoggerService extends Logger {
	constructor(@Inject(REQUEST) private req?: IRequest, private moduleRef?: ModuleRef) {
		super('Nest');
	}

        log(message: string) {
		console.log('LOG [' + this.context + '] ' + message);
	}
}

Expected behavior

I expect the guard to throw a ThrottlerException whenever the rate limit gets exceeded.

Possible Solution

Environment


Nest version: 7.5.6

For Tooling issues:
- Node version: v14.15.5
- Platform: Windows

Others:

Visual Studio Code, NPM, ioredis

Proposal: Add `onLimit` option

Are there considerations to add an onLimit option, e.g. like in this project GraphQL Rate Limit?

This would be useful so that one could throw a GraphQLError directly with all the needed formatting (message, extensions, ...).

Currently one can either:

  1. have extra logic for the ThrottlerException in the formatError method or
  2. override the handleRequest method to throw one's own exception.

Both are, in my opinion, inelegant workarounds, but on the other hand the feature is not a high priority. There are probably other solutions, but these are the ones I came up with.

Share same context between 'limit' and 'ttl'

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

Below is my throttler module code. I used identical code for both the "ttl" and "limit" functions, resulting in fetching the same data from the database twice. Is there a way to share the ExecutionContext between the "limit" and "ttl" functions to avoid this duplication?

    ThrottlerModule.forRootAsync({
      imports: [ConfigModule, JwtModule, GlobalModule],
      inject: [ConfigService, JwtService, SupabaseService],
      useFactory: (
        config: ConfigService,
        jwtService: JwtService,
        supabaseService: SupabaseService
      ) => ({
        ttl: async (context) => {
          const request = context.switchToHttp().getRequest();
          const extractor = ExtractJwt.fromAuthHeaderAsBearerToken();
          const accessToken = ExtractJwt.fromExtractors([extractor])(request);
          const payload = jwtService.decode(accessToken);
          const userId = payload.sub;
          const expressPass = await supabaseService.getExpressPass(userId);
          return expressPass ? 1000 : 60;
        },
        limit: async (context) => {
          const request = context.switchToHttp().getRequest();
          const extractor = ExtractJwt.fromAuthHeaderAsBearerToken();
          const accessToken = ExtractJwt.fromExtractors([extractor])(request);
          const payload = jwtService.decode(accessToken);
          const userId = payload.sub;
          const expressPass = await supabaseService.getExpressPass(userId);
          return expressPass ? 1000 : 1;
        },

        storage: config.get("REDIS_URL")
          ? new ThrottlerStorageRedisService(config.get("REDIS_URL"))
          : undefined,
      }),
    }),
Describe the solution you'd like

Inject data into the request before executing the ttl and limit functions:

const expressPass = request.expressPass

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

.

missing package-lock.json?

Hi, I wanted to ask: isn't the package-lock.json missing?

Just a curiosity / question.

I hope this doesn't take up much of your time :-)

Add whitelisting feature

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

Throttling may block requests from a frontend Node.js server.
I mean, when the server (React, Next) requests some resources with getServerSideProps, it may even block the frontend server itself for sending too many requests.
I've also asked a question in stackoverflow.com related to this issue:
https://stackoverflow.com/q/72762848/4098788

Describe the solution you'd like

A feature for whitelisting an array of IP addresses. It can be something like a decorator, or a global configuration in the module.

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

Whitelisting personal or company-wide IP addresses for developers and the team, and also whitelisting access for specific servers.

TypeError: Class constructor ThrottlerGuard cannot be invoked without 'new'

Hi,

I'm trying to use the ThrottlerGuard as a global guard. I'm using GraphQL, so I did some extra configuration.

Here's what I've got so far:

@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
    getRequestResponse(context: ExecutionContext) {
        const gqlCtx = GqlExecutionContext.create(context);
        const ctx = gqlCtx.getContext();
        return { req: ctx.req, res: ctx.res }
    }
}

@Module({
    imports: [
        // other modules
        ThrottlerModule.forRoot({
            ttl: 10,
            limit: 3,
        }),
        GraphQLModule.forRoot({
            typePaths: ['src/**/*.graphql'],
            formatError,
            introspection: true,
            playground: true,
            context: ({ req, res }) => ({ req, res }),
        }),
        SentryModule.forRoot({
            dsn: process.env.SENTRY_DSN,
            environment: process.env.SENTRY_ENVIRONMENT,
            debug: true,
            tracesSampleRate: 1.0,
            logLevel: LogLevel.Error,
        }),
    ],
    providers: [
        // other providers,
        {
            provide: APP_FILTER,
            useClass: GraphQLErrorFilter,
        },
        {
            provide: APP_GUARD,
            useClass: GqlThrottlerGuard,
        },
    ],
    controllers: [AppController],
})
export class AppModule {
    configure(consumer: MiddlewareConsumer) {
        // configuring some middleware
    }
}

This results in the error as described in the title, the full stacktrace can be found here:

[0] [Nest] 56196   - 07/14/2021, 2:01:18 PM   [ExceptionHandler] Class constructor ThrottlerGuard cannot be invoked without 'new' +8ms
[0] TypeError: Class constructor ThrottlerGuard cannot be invoked without 'new'
[0]     at new GqlThrottlerGuard (/Users/rubenrutten/Projects/NodeAPI/src/app.module.ts:79:9)
[0]     at Injector.instantiateClass (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/injector.js:286:19)
[0]     at callback (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/injector.js:42:41)
[0]     at Injector.resolveConstructorParams (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/injector.js:114:24)
[0]     at Injector.loadInstance (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/injector.js:46:9)
[0]     at Injector.loadProvider (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/injector.js:68:9)
[0]     at async Promise.all (index 9)
[0]     at InstanceLoader.createInstancesOfProviders (/Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/instance-loader.js:43:9)
[0]     at /Users/rubenrutten/Projects/NodeAPI/node_modules/@nestjs/core/injector/instance-loader.js:28:13
[0]     at async Promise.all (index 1)

If I replace the GqlThrottlerGuard with ThrottlerGuard, it starts up fine, but the rate limiting doesn't work on GraphQL endpoints.

What other information could be relevant?

Example for RPC

Hi, on top of this repository it reads "A rate limiting module for NestJS to work with Fastify, Express, GQL, Websockets, and RPC ".

Can you give me an example of how to use with RPC?

forRootAsync doesn't work in version 0.2.3

Hello everyone,

Problem
I installed the latest version of the nestjs-throttler (0.2.3).
forRootAsync did not work.

Solution
I installed version 0.2.2

Conclusion
Bug in version 0.2.3.

It didn't apply the desired config to ServeStaticModule module imported from @nestjs/serve-static

I'm submitting a...

[ ] Regression 
[ ] Bug report
[ ] Feature request
[x] Documentation issue or request

Current behavior

import { ServeStaticModule } from '@nestjs/serve-static';
import { ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';

@Module({
    imports: [
        ServeStaticModule.forRoot({
            rootPath: path.join(__dirname, '..', 'statics'),
            serveRoot: '/statics',
        }),
        ThrottlerModule.forRoot({
            ttl: 60,
            limit: 10,
        }),
    ],
    providers: [
         {
             provide: APP_GUARD,
             useClass: ThrottlerGuard,
         },
     ],
})
export class AppModule implements NestModule {...}
  • For URLs like example.com/statics/something, the desired ThrottlerModuleOptions config didn't work, and I couldn't find documentation for using the guard with app.useGlobalGuards to test that, but I tried the code below and got an error:
...
    const throttlerGuard = app.select(ThrottlerModule).get(ThrottlerGuard);
    app.useGlobalGuards(throttlerGuard);
...
Error: Nest could not select the given module (it does not exist in current context)
    at NestApplication.select (/root/app/packages/server-template/node_modules/@nestjs/core/nest-application-context.js:49:19)
    at /root/app/packages/server-template/node_modules/@nestjs/core/nest-factory.js:127:40
    at Function.run (/root/app/packages/server-template/node_modules/@nestjs/core/errors/exceptions-zone.js:9:13)
    at Proxy.<anonymous> (/root/app/packages/server-template/node_modules/@nestjs/core/nest-factory.js:126:46)
    at Proxy.<anonymous> (/root/app/packages/server-template/node_modules/@nestjs/core/nest-factory.js:168:54)
    at /root/app/packages/server-template/src/main.ts:38:32
    at Generator.next (<anonymous>)
    at fulfilled (/root/app/packages/server-template/src/main.ts:5:58)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
  • But the desired config was applied to the other NestJS controller routes correctly. This problem only happened for routes that ServeStaticModule provides for serving static files.

  • To work around this issue I had to use the express-rate-limit package and its middleware like the code below, and it works fine:

import RateLimit from 'express-rate-limit';
...
   app.use(
        RateLimit({
            windowMs: 15 * 60 * 1000, 
            max: 1000, 
        }),
    );
...

Expected behavior

It should apply the desired config to the statics root, too.

Environment

node: v14.16.0
@nestjs/throttler: 2.0.0
@nestjs/core: 7.6.18
@nestjs/platform-express: 7.6.18
@nestjs/common: 7.6.18
@nestjs/serve-static: 2.1.4
- Platform: docker node:lts

Question - Throttler Per UserContext

Let's say I have a socket server that receives a message over websocket:

ADD_TASK
data: { userId: 1 }

I would like to throttle the usage, so I'm decorating with:

  @Throttle(2, 10)
  @SubscribeMessage('ADD_TASK')
  onAddTask(client: Socket, data: { userId: number }): void {
 ... do something only if throttling okay per userId (not per socketId)
  }

I was under the impression I could pass a function to the throttle to use the context on the request.
I don't want to throttle globally but per userId, but the event is the same for all.

serious bug: remaining times is stuck when use "storage" option with nestjs-redis

I'm submitting a...


[ ] Regression 
[x] Bug report
[ ] Feature request
[ ] Documentation issue or request

Current behavior

export const Throttler: DynamicModule = ThrottlerModule.forRootAsync({
  imports: [Redis],
  inject: [RedisService],
  useFactory(redisService: RedisService): ThrottlerModuleOptions {
    const redisClient = redisService.getClient(REDIS_CLIENT_NAME_THROTTLER);

    return {
      ttl: ms('1m') / 1000,
      limit: 30,
      storage: new ThrottlerStorageRedisService(redisClient),
    };
  },
});
  • Same as kkoomen/nestjs-throttler-storage-redis#322
  • The X-RateLimit-Remaining header in the response gets stuck after sending a request about 11-12 times.
  • But if the "storage" option is not used, it works normally.
  • I created a mini repo (linked below) for this issue.

Expected behavior

The throttler should work normally.

Minimal reproduction of the problem with instructions

  1. download from https://github.com/DevAngsy/throttle-bug
  2. npm install
  3. edit redis password according to your redis config
  4. npm run start:dev
  5. send get request to http://127.0.0.1:3000/app/hello 10-20 times, and notice X-RateLimit-Remaining
  6. comment out the "storage" option, and notice X-RateLimit-Remaining again

Environment

OS Version: Ubuntu 20.04.2 LTS
CPU: Intel® Core™ i5-9600K
NodeJS Version: 14.16.0
NPM Version: 7.6.3
Ubuntu Redis Version: V6
Global Nest CLI Version: 7.5.6
Repo Nest CLI Version: 7.5.6
Repo nestjs-throttler Version: 0.3.0
Repo nestjs-throttler-storage-redis Version: 0.1.11
Repo ioredis Version: 4.24.3

No documentation on how to use GqlThrottlerGuard with Subscription

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

I am using a global throttler guard for my GraphQL API, but I get an error when I add subscriptions:
"Cannot read properties of undefined (reading 'ip')"
It is thrown by GqlThrottlerGuard.

Describe the solution you'd like

It would be great to have a section in the documentation here: https://docs.nestjs.com/security/rate-limiting#graphql explaining how to handle this. I had to dig into the GitHub code to discover the @SkipThrottle() decorator.
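The failure mode can be sketched without the framework: subscription resolvers run over a websocket, so the GraphQL context has no req, and anything that reads req.ip crashes. Below is a self-contained sketch with stub classes standing in for the real @nestjs/throttler and @nestjs/graphql types; the idea is simply to skip throttling when req is absent.

```typescript
// Stand-ins for the framework types so this sketch is self-contained;
// in a real app you would extend ThrottlerGuard from '@nestjs/throttler'.
type GqlContext = { req?: { ip: string } };

class BaseGuardStub {
  canActivate(ctx: GqlContext): boolean {
    // The real guard reads ctx.req.ip here, which throws when req is undefined.
    return ctx.req!.ip.length > 0;
  }
}

class GqlThrottlerGuard extends BaseGuardStub {
  canActivate(ctx: GqlContext): boolean {
    if (!ctx.req) return true; // subscription over a websocket: nothing to track, skip
    return super.canActivate(ctx);
  }
}
```

The real guard would make the same early-return decision inside canActivate (or getRequestResponse) after inspecting the execution context type.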

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?


Property 'storageService' is private and only accessible within class 'ThrottlerGuard'.

I have extended ThrottlerGuard as per the example in the README (https://github.com/nestjs/throttler#working-with-websockets) and in the NestJS documentation (https://docs.nestjs.com/security/rate-limiting#websockets), but I'm not able to access storageService because it is currently a private member (https://github.com/nestjs/throttler/blob/master/src/throttler.guard.ts#L23), which is why I'm facing the error below:

src/shared/throttle.guard.ts:10:33 - error TS2341: Property 'storageService' is private and only accessible within class 'ThrottlerGuard'.

const ttls = await this.storageService.getRecord(key);
                                   ~~~~~~~~~~~~~~

storageService should be declared protected.

throttler counts a single request multiple times

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

First I registered a guard globally with {ttl: 10, limit: 20}, then I wanted to override the throttler configuration on a route, so I added another guard to the route with @Throttle(1, 60).

I thought this would limit the route to only one access per minute. But when I access the endpoint, the count stored in the store is 2. This 2 already exceeds the route's throttler configuration, so I never get the requested content.

Minimum reproduction code

Steps to reproduce

No response

Expected behavior

I expect that the @Throttle configuration on the route and the global guard configuration are not applied cumulatively.

Package version

4.0.0

NestJS version

9.0.0

Node.js version

16.13.2

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.

If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "Authorization and writes" level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Adding documentation for IP behind a reverse proxy

Hi everybody! I've found this module pretty interesting. However, digging into the source code I've found you're using req.ip, which is perfectly fine.
However, to prevent issues for users trying the package, I think a tip in the README warning about usage behind a reverse proxy could be extremely useful.

Since Express requires the trust proxy option to be set to true in order to use the X-Forwarded-For client IP, and since Nginx must be configured accordingly (i.e. proxy_set_header X-Forwarded-For $remote_addr;), at least a warning could be useful.

Am I missing something? Is this something that should be addressed at least at the documentation level?
I'm leaving this note here to be constructive and help someone; let me know what you think 😄
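To make the pitfall concrete, here is a deliberately simplified model (not Express's actual resolution algorithm) of how the observed client IP changes with the trust proxy setting. Without it, every client behind the proxy shares one throttler bucket.

```typescript
// Simplified model of client-IP resolution behind a reverse proxy.
// With trustProxy off, the app only sees the socket peer (the proxy itself),
// so all clients collapse into a single rate-limit key.
function clientIp(socketAddr: string, xForwardedFor?: string, trustProxy = false): string {
  if (trustProxy && xForwardedFor) {
    // The left-most entry is the originating client in the common single-proxy setup.
    return xForwardedFor.split(',')[0].trim();
  }
  return socketAddr;
}
```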

Throttle only on Unauthorized requests

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

We want to throttle only when a certain number of unauthorized requests comes in. E.g., assume an attacker tries to find username/password combinations. We could throttle the whole endpoint, but we need to allow many requests; to prevent brute-force attacks we only want to apply the limit when too many requests fail an authentication guard.

Describe the solution you'd like

Basically, ThrottlerGuard should be split into separate functions: one to test whether limits are exceeded, and one to actually apply the throttle.
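A rough shape of that split, as a framework-free sketch (class and method names here are illustrative, not an existing API): the guard only checks the counter on each request, and the caller records a hit only when authentication fails.

```typescript
// Illustrative split: wouldExceed() is called on every request, while
// record() is only invoked when the authentication guard rejects a request.
class FailureThrottle {
  private failures = new Map<string, number>();

  constructor(private readonly limit: number) {}

  // Check only: does not consume part of the limit.
  wouldExceed(key: string): boolean {
    return (this.failures.get(key) ?? 0) >= this.limit;
  }

  // Apply: count one failed authentication attempt against the key.
  record(key: string): void {
    this.failures.set(key, (this.failures.get(key) ?? 0) + 1);
  }
}
```

A TTL per key would still be needed in practice; it is omitted here to keep the sketch focused on the check/apply separation.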

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

see above

Error: Cannot read property 'ip' of undefined

hello.

I created a new project today and, after installing the throttler package, I got the following error:

Screen Shot 2021-07-14 at 1 30 03 AM

ENV And Dependency:
Mac Os Big sur: 11.4
node : 14

"@nestjs/core": "^8.0.0",
"@nestjs/graphql": "^8.0.2",
"@nestjs/platform-express": "^8.0.0",
"@nestjs/throttler": "^2.0.0",
"apollo-server-express": "^2.25.2",

Add a way to use Throttler multiple times

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

The problem is I can't use two Throttler decorators.

Describe the solution you'd like

I want to make something like:

    @UseGuards(ThrottlerGuard)
    @Throttle(10,60)
    @Throttle(1000, 86400)
    @Post("/teste")

With this, it is possible to express a business rule:

  • Ten requests per minute
  • One thousand requests per day
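The underlying check is straightforward to sketch as a pure function (illustrative only, not the library's implementation): a request is allowed only if it fits within every configured window.

```typescript
interface ThrottleWindow {
  limit: number; // max requests allowed inside the window
  ttlMs: number; // window length in milliseconds
}

// Sliding-window check over several limits at once, e.g.
// [{ limit: 10, ttlMs: 60_000 }, { limit: 1000, ttlMs: 86_400_000 }].
function allowRequest(hitTimes: number[], windows: ThrottleWindow[], now: number): boolean {
  return windows.every(
    (w) => hitTimes.filter((t) => now - t < w.ttlMs).length < w.limit,
  );
}
```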

Teachability, documentation, adoption, migration strategy

image

It's a pretty good way to use this library.

What is the motivation / use case for changing the behavior?

The need to rate-limit on both micro time scales (seconds) and macro time scales (days).

this library does not support Nestjs 10.0.0

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

this library does not support Nestjs 10.0.0

Minimum reproduction code

this library does not support Nestjs 10.0.0

Steps to reproduce

No response

Expected behavior

this library does not support Nestjs 10.0.0

Package version

4.0.0

NestJS version

10.0.0

Node.js version

18.16.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

this library does not support Nestjs 10.0.0

No way to create custom key generator for storage

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

I just want the opportunity to generate my own key. An md5 hash is fine, but in real life there are situations where you need to connect to your Redis instance and inspect keys like rl-*.
I can't believe that such a popular library doesn't have such a simple thing.

Describe the solution you'd like

Global default customKey function

.forRoot({
  customKey (context) {}
})

Local customKey function

class ThrottlerFoo extends ThrottlerGuard {
  customKey (context) {}
}

Local function overrides global

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

In real life there are situations where you need to connect to your Redis instance to inspect or drop keys.
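As an illustration of the difference (hypothetical function names, not the library's API): the hashed default is opaque in Redis, while a custom key makes rl-* entries self-describing.

```typescript
import { createHash } from 'crypto';

// Hashed key in the style of the library's default (the exact inputs the
// library hashes may vary by version).
function hashedKey(prefix: string, suffix: string): string {
  return createHash('md5').update(`${prefix}-${suffix}`).digest('hex');
}

// The kind of readable key this issue asks for, so `rl-*` keys can be
// inspected or deleted directly in Redis.
function customKey(route: string, ip: string): string {
  return `rl-${route}-${ip}`;
}
```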

Advanced Rate Limiting Options for Throttler

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

I propose adding an advanced set of options to provide a more flexible and powerful rate limiting mechanism. This would allow different rate limits based on different keys (e.g. IP address, customer ID). It would also introduce a credit-based system that temporarily allows exceeding the base rate limit, accommodating "bursty" workloads while controlling the potential cost.

Here are the proposed features:

  1. Requests per Interval: Allow rate limits to be specified per arbitrary time intervals, not just per minute.

  2. Credits and Their Duration: Introduce a credit-based system where each key can be assigned a certain amount of credits. Each credit would allow the key to make requests at a higher rate for a certain duration.

  3. Multiple Limit Configurations: Allow multiple limit configurations, each tied to a different key. This would allow for both platform-wide and key-specific limits.

  4. Customizable Responses: Provide a way to customize the response based on the type of rate limit that was exceeded.

Here's how the configuration for these features might look:

"limitGroups": [
  {
    "name": "IP-specific limits",
    "limits": [
      { "upperLimit": 100, "rateIntervalInMs": 60000, "isUnlimited":true },
      { "upperLimit": 1800, "rateIntervalInMs": 60000, "creditsInMinutes": 360, "creditsWindowInMinutes": 1440 },
      { "upperLimit": null, "rateIntervalInMs": null, "creditsInMinutes": 0, "creditsWindowInMinutes": null }
    ],
    "getKey": "(req) => req.ip",
    "responseMessage": "You have exceeded the rate limit for your IP address."
  },
  {
    "name": "Customer-specific limits",
    "limits": [
      { "upperLimit": 5000, "rateIntervalInMs": 60000, "isUnlimited":true },
      { "upperLimit": 10000, "rateIntervalInMs": 60000, "creditsInMinutes": 720, "creditsWindowInMinutes": 1440 },
      { "upperLimit": null, "rateIntervalInMs": null, "creditsInMinutes": 0, "creditsWindowInMinutes": null }
    ],
    "getKey": "(req) => req.headers['x-customer-id']",
    "responseMessage": "You have exceeded the rate limit for your customer account."
  },
  {
    "name": "Platform-wide limits",
    "limits": [
      { "upperLimit": 100000, "rateIntervalInMs": 60000, "creditsInMinutes": 1440, "creditsWindowInMinutes": 43200 },
      { "upperLimit": null, "rateIntervalInMs": null, "creditsInMinutes": 0, "creditsWindowInMinutes": null }
    ],
    "getKey": "(req) => 'platform'",
    "responseMessage": "The platform has reached its overall rate limit. Please try again later."
  }
]

In this configuration:

  • The first limit group applies to each IP address individually. The getKey function returns the IP address of the request.
  • The second limit group applies to each customer individually. The getKey function returns the customer ID from the request headers.
  • The third limit group applies platform-wide. The getKey function always returns the string 'platform', so all requests contribute to the same count.

Each limit group has a responseMessage that will be returned in the response when a user exceeds a rate limit. This provides a clear explanation to the user of why their request was throttled.

Here is a breakdown for the first example:

"limits": [
  { "upperLimit": 100, "rateIntervalInMs": 60000, "isUnlimited":true },
  { "upperLimit": 1800, "rateIntervalInMs": 60000, "creditsInMinutes": 360, "creditsWindowInMinutes": 1440 },
  { "upperLimit": null, "rateIntervalInMs": null, "creditsInMinutes": 0, "creditsWindowInMinutes": null }
]

Each object inside the limits array is a different rate limit tier. The parameters are:

  • upperLimit: This is the maximum number of requests that can be made within the rateIntervalInMs. If null, it means there is no upper limit.
  • rateIntervalInMs: This is the time period (in milliseconds) in which the number of requests is counted.
  • creditsInMinutes: This is the amount of time (in minutes) that a requester can stay within this tier's range, up to its upper limit. When the requester exceeds the upper limit of the lower tier, they start using their credits. Once the credits are used up, they are moved to the next lower tier.
  • creditsWindowInMinutes: This is the duration (in minutes) after which the credits refresh.

Here are some examples of how this works:

  • If an IP address makes 100 requests in a minute, they remain in the first tier and don't use any credits - as this tier is set with "isUnlimited":true.
  • If the same IP address then makes 1100 requests in the next minute, they exceed the upper limit of the first tier (100 requests per minute) and move to the second tier. They start using their credits to make requests at a higher rate (up to 1800 requests per minute). For the next 360 minutes (6 hours), each minute within the credits window in which they make between 101 and 1800 requests reduces the credit amount for that tier.
  • In the third tier, they are not allowed to make any more requests (0 credits). They will stay in this tier until their credits are reset (e.g., at the start of the next credits window - in this case a month or 43200 minutes).

This mechanism allows requesters to temporarily exceed the base rate limit when needed, while also ensuring that they don't overwhelm the server with a high number of requests for an extended period of time.
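The tier walk described above can be sketched as a pure function. The shapes below mirror the proposal's names and semantics, not any existing API, and omit the credits-window bookkeeping for brevity.

```typescript
// Hypothetical tier shape from the proposal above.
interface Tier {
  upperLimit: number | null; // null = no cap (used by the final blocking tier)
  creditsLeft: number;       // minutes of credit remaining for this tier
}

// A request rate is served by the first tier it fits in; tiers beyond the
// first also require remaining credits. Falling through every capped tier
// means the requester is blocked until their credits refresh.
function isAllowed(tiers: Tier[], requestsThisMinute: number): boolean {
  for (let i = 0; i < tiers.length; i++) {
    const t = tiers[i];
    if (t.upperLimit !== null && requestsThisMinute <= t.upperLimit) {
      return i === 0 || t.creditsLeft > 0;
    }
  }
  return false;
}
```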

I believe these features would greatly enhance the flexibility and power of the @nestjs/throttler library. I would appreciate your thoughts on this proposal.

Describe the solution you'd like

Desired Solution:

Extend @nestjs/throttler to:

  1. Allow rate limits per arbitrary time intervals, not just per minute.
  2. Implement a credit-based system for temporary limit exceeding.
  3. Enable multiple limit configurations for different keys, supporting both platform-wide and key-specific limits.
  4. Provide customizable responses for rate limit exceeding based on the type of limit.

Potential Drawbacks:

  1. Added complexity in configuration and understanding of the throttler library.
  2. Increased codebase complexity, potential bugs, and need for extensive testing.
  3. Risk of server flooding if the credit-based system is misused or exploited.

Teachability, documentation, adoption, migration strategy

Clear and comprehensive documentation, including explanations of new concepts and step-by-step guides, will help users understand and implement the advanced rate limiting features.

Adoption and Migration Strategy:

The new features should be backwards-compatible, allowing existing users to opt in as needed. For example, the basic mode can be maintained, with limitGroups added as an extra option rather than the main one.

Visual aids like flowcharts or diagrams explaining the rate limiting process would also be beneficial for understanding and adoption.

What is the motivation / use case for changing the behavior?

The motivation behind these changes is to provide users with more flexible and powerful rate limiting options, catering to diverse use cases and traffic patterns.

Current rate limiting offers fixed request limits per interval, which may not be ideal for all scenarios. For instance, certain operations might need to temporarily allow higher request rates, such as batch operations or data-syncing tasks (especially with elastic cloud solutions, where auto-scaling makes throttling more about cost control than protection of the platform).

The proposed changes will allow rate limiting to be more adaptable to the dynamic nature of web traffic. The credit-based system can handle bursts of high traffic, while the flexible intervals and multiple limit configurations can cater to different types of requests or users.

Use cases include:

  1. User-based limiting: Different users or roles can have different limits.
  2. Platform-wide limiting: To protect the overall system from high traffic.
  3. Handling traffic bursts: The credit system allows temporary exceeding of limits.

Using throttler with graphql causes: `TypeError: Cannot read properties of undefined (reading 'header')`

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Following the readme in this repo, the above PR has included the code that should make it compatible with gql:

@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
  getRequestResponse(context: ExecutionContext) {
    const gqlCtx = GqlExecutionContext.create(context);
    const ctx = gqlCtx.getContext();
    return { req: ctx.req, res: ctx.res }; // ctx.request and ctx.reply for fastify
  }
}

however, it throws this error:

[Nest] 95109  - 31/03/2023, 09:21:31   ERROR [ExceptionsHandler] Cannot read properties of undefined (reading 'header')
TypeError: Cannot read properties of undefined (reading 'header')
    at GqlThrottlerGuard.handleRequest (/[...]/nest/sample/23-graphql-code-first/node_modules/@nestjs/throttler/src/throttler.guard.ts:88:9)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)

My best guess is that this is caused by a recent upgrade of @nestjs/graphql to make it compatible with a more recent version of Apollo.
An example based on the samples in the (forked) nestjs core repo can be found here: sadams/nest#1

Minimum reproduction code

sadams/nest#1

Steps to reproduce

  1. clone branch from here: sadams/nest#1
  2. run npm install
  3. run npm start in sample/23-graphql-code-first directory
  4. visit http://localhost:3000/graphql
  5. execute query {recipes{title}}
  6. see output in terminal

Expected behavior

The request should go through normally until they are throttled, at which point the normal throttler behaviour should kick in.

Package version

4.0.0

NestJS version

9.3.12

Node.js version

v16.19.0

In which operating systems have you tested?

  • macOS

Other

No response

NaN when the module is registered with defaults and the Throttle decorator is used

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

When ThrottlerModule.forRoot() is registered with no parameters and the @Throttle decorator is set on one of the methods, calling a method where the decorator is not set produces NaN for ttlMilliseconds when the storage service tries to add the key:
const ttlMilliseconds = ttl * 1000;
With the in-memory implementation we get [NaN].
With the https://github.com/kkoomen/nestjs-throttler-storage-redis implementation we get an exception.

Minimum reproduction code

later

Steps to reproduce

No response

Expected behavior

The ttl must not be NaN; either return early when ttl is not set, or add a default value. The typing also needs to change from
handleRequest( context: ExecutionContext, limit: number, ttl: number, )

to
handleRequest( context: ExecutionContext, limit?: number, ttl?: number, )
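The suggested fix can be sketched as a small option resolver (the default values here are illustrative, not the library's):

```typescript
// Fall back to defaults instead of letting `undefined * 1000` become NaN.
const DEFAULT_TTL_SECONDS = 60; // illustrative default
const DEFAULT_LIMIT = 10;       // illustrative default

function resolveThrottle(limit?: number, ttl?: number) {
  return {
    limit: limit ?? DEFAULT_LIMIT,
    ttlMilliseconds: (ttl ?? DEFAULT_TTL_SECONDS) * 1000,
  };
}
```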

Package version

8.1.3

NestJS version

8.1.3

Node.js version

16.13.1

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

Add headers to show the current usage of the throttle limits

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

The problem is that the client doesn't know whether the endpoint enforces limits and what those limits are.

Describe the solution you'd like

Optionally add helper headers so the consumer knows how much of the limit they have used so far. I've used other frameworks which expose these headers in the response:

  • X-Rate-Limit-Limit: the maximum number of requests allowed within a time period
  • X-Rate-Limit-Remaining: the number of remaining requests in the current time period
  • X-Rate-Limit-Reset: the number of seconds to wait in order to get the maximum number of allowed requests
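Deriving the three headers is simple once the limit, the hit count, and the nearest expiry are known. The header names follow the request above (the package's own headers use the X-RateLimit-* spelling), and the function shape is illustrative.

```typescript
// Build the helper headers from the current throttle state.
function rateLimitHeaders(limit: number, used: number, resetSeconds: number) {
  return {
    'X-Rate-Limit-Limit': String(limit),
    'X-Rate-Limit-Remaining': String(Math.max(0, limit - used)),
    'X-Rate-Limit-Reset': String(resetSeconds),
  };
}
```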

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

These will allow clients to know the limits and act accordingly or display something to the users.

Pass more info for customization to throwThrottlingException()

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

It would be nice to have an ability to pass retry-after values and limits in the response body. Right now, the only ways to do that in the ThrottlerGuard subclasses are:

  • to parse the headers from the response object in throwThrottlingException()
  • to reimplement throttling logic in handleRequest()

Describe the solution you'd like

Pass an additional argument to throwThrottlingException() that would contain the nearestExpiryTime (and, ideally, the current request limit, and number of requests remaining).

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

An SMS/email code endpoint for authorization needs to allow no more than one request per IP+account per minute. The client app needs to show a countdown. Parsing headers in the client app is cumbersome.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/ci.yml
  • actions/checkout v4
  • actions/setup-node v4
  • pascalgn/automerge-action v0.16.3
.github/workflows/release.yml
npm
package.json
  • @apollo/server 4.10.4
  • @changesets/cli 2.27.1
  • @commitlint/cli 19.3.0
  • @commitlint/config-angular 19.3.0
  • @nestjs/cli 10.3.2
  • @nestjs/common 10.3.8
  • @nestjs/core 10.3.8
  • @nestjs/graphql 12.1.1
  • @nestjs/platform-express 10.3.8
  • @nestjs/platform-fastify 10.3.8
  • @nestjs/platform-socket.io 10.3.8
  • @nestjs/platform-ws 10.3.8
  • @nestjs/schematics 10.1.1
  • @nestjs/testing 10.3.8
  • @nestjs/websockets 10.3.8
  • @semantic-release/git 10.0.1
  • @types/express 4.17.21
  • @types/express-serve-static-core 4.19.0
  • @types/jest 29.5.12
  • @types/node 20.12.7
  • @types/supertest 6.0.2
  • @typescript-eslint/eslint-plugin 7.7.1
  • @typescript-eslint/parser 7.7.1
  • apollo-server-fastify 3.13.0
  • conventional-changelog-cli 4.1.0
  • cz-conventional-changelog 3.3.0
  • eslint 8.57.0
  • eslint-config-prettier 9.1.0
  • eslint-plugin-import 2.29.1
  • graphql 16.8.1
  • graphql-tools 9.0.1
  • husky 9.0.11
  • jest 29.7.0
  • lint-staged 15.2.2
  • nodemon 3.1.0
  • pactum ^3.4.1
  • pinst 3.0.0
  • prettier 3.2.5
  • reflect-metadata 0.2.2
  • rimraf 5.0.5
  • rxjs 7.8.1
  • socket.io 4.7.5
  • supertest 7.0.0
  • ts-jest 29.1.2
  • ts-loader 9.5.1
  • ts-node 10.9.2
  • tsconfig-paths 4.2.0
  • typescript 5.4.5
  • ws 8.16.0
  • @nestjs/common ^7.0.0 || ^8.0.0 || ^9.0.0 || ^10.0.0
  • @nestjs/core ^7.0.0 || ^8.0.0 || ^9.0.0 || ^10.0.0
  • reflect-metadata ^0.1.13 || ^0.2.0

  • Check this box to trigger a request for Renovate to run again on this repository

Cannot find module '@nestjs/throttler' or its corresponding type declarations.

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Screenshot 2021-10-04 at 16 09 18

Minimum reproduction code

ThrottlerModule.forRoot({ ttl: 60, limit: 10, })

Steps to reproduce

npm i --save @nestjs/throttler

Declare in imports

   ThrottlerModule.forRoot({
      ttl: 60,
      limit: 10,
    }),

Expected behavior

Should work

Package version

2.0.0

NestJS version

7.0.0

Node.js version

No response

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

graphql endpoint protected by default

setup

import { APP_GUARD } from '@nestjs/core';
import { ThrottlerGuard, ThrottlerModule } from 'nestjs-throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot({
      ttl: 60,
      limit: 20,
    }),
  ],
  providers: [
    {
      provide: APP_GUARD,
      useClass: GqlThrottlerGuard,
    },
  ],
})
export class AppModule {}

import { ExecutionContext, Injectable } from '@nestjs/common';
import { GqlContextType, GqlExecutionContext } from '@nestjs/graphql';
import { ThrottlerGuard } from '@nestjs/throttler';

@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
  getRequestResponse(context: ExecutionContext) {
    switch (context.getType() as GqlContextType) {
      case 'graphql':
        const { req } = GqlExecutionContext.create(context).getContext();
        return { req, res: req.res };
      default:
        return super.getRequestResponse(context);
    }
  }
}

request graphql header

X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
X-RateLimit-Reset: 0

polling graphql requests show no X-RateLimit-* headers

expect

graphql should not be protected by default

Limit not working correctly in GQL

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

I'm using the throttler with GraphQL and when I set the parameters to: { ttl:60, limit:10 }

I expect that the query it protects can be called 10 times in a span of 60 seconds.
However, it starts blocking the query at half of the specified limit.

For example, if I set limit: 10 it starts sending a TooManyAttempts response after 5 attempts instead of 10.
If I set limit: 1 I can't call the query at all; the throttler blocks it right away.
I tried different limits and it's always half.

Minimum reproduction code

https://gist.github.com/Alechuu/9f8b350f06c0a3981f5461dbccc79f96

Steps to reproduce

No response

Expected behavior

Expected the throttler to block requests at specified limit, instead it starts blocking at half of the specified limit.

Package version

2.0.1

NestJS version

8.0.0

Node.js version

17.6.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

Does the throttler limit the number of requests per user or in total?

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Does the throttler limit the number of requests per user or in total?

For example, if I configure the throttler to allow only 10 requests per 60 seconds, is that 10 requests per user, or 10 requests in total across multiple users?

To help mitigate DDoS attacks, I want to set the limit per user, not on the total requests from multiple users.

Minimum reproduction code

https://github.com/nestjs/throttler/issues/new?assignees=&labels=needs+triage%2Cbug&template=Bug_report.yml

Steps to reproduce

No response

Expected behavior

Does the throttler limit the number of requests per user or in total?

Package version

8.0.0

NestJS version

8.0.0

Node.js version

14.0.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

Does the throttler limit the number of requests per user or in total?

Requesting flooding bypasses throttle limit

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Hi, maintainer of the redis storage provider here. I have an issue: when using e.g. @Throttle(5, 1) and a simple script that fires 1000 requests as fast as possible, requests that should receive a 429 come through anyway. Of 1000 requests, roughly 400 get through, while only 5 are allowed. This is not a problem with the built-in throttler service, but it becomes an issue with Redis and perhaps other external storage providers. It requires changes in this package, which I'll explain below.

I did explain my full problem here: kkoomen/vim-doge#1064.

Minimum reproduction code

https://github.com/kkoomen/nestjs-throttler-redis-demo

Steps to reproduce

Open 3 terminals: 1 for redis-server, 1 for nest application and 1 to trigger the requests

terminal 1: run redis-server

terminal 2:

  1. git clone https://github.com/kkoomen/nestjs-throttler-redis-demo
  2. cd nestjs-throttler-redis-demo
  3. yarn install
  4. yarn start:dev

terminal 3: inside the nestjs-throttler-redis-demo repo, run ./trigger_requests.js

The output should give you something like this:

AxiosError: Request failed with status code 429
AxiosError: Request failed with status code 429
...
default: 1
default: 2
...
default: 400
..
AxiosError: Request failed with status code 429
AxiosError: Request failed with status code 429

As you can see from default: 400, the request succeeded 400 times, while only 5 are allowed.

Expected behavior

A correct output with @Throttle(5, 1) using the trigger_requests.js script looks like this:

default: 1
default: 2
default: 3
default: 4
default: 5
AxiosError: Request failed with status code 429
AxiosError: Request failed with status code 429
AxiosError: Request failed with status code 429
...
AxiosError: Request failed with status code 429

In order to fix this, the logic of the throttler package should change (please note that in order to understand the problem, read my full comment here).

Proposed solution: getRecord() should be removed entirely and its logic merged into addRecord(). addRecord() should then do two things:

  1. add the record to the storage and set the expiration time (just as it does now)
  2. fetch and return the total number of requests done by a single user (partially based on IP) along with the expiration time, instead of the list of TTLs that getRecord() returns today. This is also how express-rate-limit does it, and as far as I can tell this works best in terms of performance and resolves these nasty issues.

This will solve the issue described in my comment, because the first call immediately increments the number of requests done, so the next request sees the +1 from the previous one (which is not the case when the server is flooded with requests).
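The merged addRecord() can be sketched in memory (names and shape are illustrative; the real change would live in the storage interface and its Redis implementation): increment and read happen in one step, so concurrent requests each observe their own hit count instead of racing on a separate read.

```typescript
// Sketch of the proposed increment-then-read addRecord(), the classic fix
// for the read-modify-write race that lets floods slip past the limit.
class MemoryStorageSketch {
  private hits = new Map<string, number>();

  // Returns the total hits for `key` *including* this one, mirroring how
  // express-rate-limit stores behave.
  addRecord(key: string): number {
    const next = (this.hits.get(key) ?? 0) + 1;
    this.hits.set(key, next);
    return next;
  }
}
```

In Redis the same idea maps to an atomic INCR (plus an expiry on first hit), which is what makes the count race-free across processes.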

This week I will make a PR (along with updated tests) with the proposed change.

This should have a slight impact on the tests (nothing to worry about as far as I can see) and requires an API change in the next version bump.

Package version

3.1.0

NestJS version

9.0.0

Node.js version

16.6.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

Passing no options causes startup to fail

failed

imports: [ThrottlerModule.forRoot()],
[Nest] 33359   - 02/07/2021, 5:15:05 PM   [ExceptionHandler] Nest cannot export a provider/module that is not a part of the currently processed module (ThrottlerModule). Please verify whether the exported THROTTLER:MODULE_OPTIONS is available in this particular context.

Possible Solutions:
- Is THROTTLER:MODULE_OPTIONS part of the relevant providers/imports within ThrottlerModule?
 +26ms
Error: Nest cannot export a provider/module that is not a part of the currently processed module (ThrottlerModule). Please verify whether the exported THROTTLER:MODULE_OPTIONS is available in this particular context.

Possible Solutions:
- Is THROTTLER:MODULE_OPTIONS part of the relevant providers/imports within ThrottlerModule?

success

imports: [ThrottlerModule.forRoot({})],

Support for async `getTracker`

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

The current implementation of ThrottlerGuard supports only a synchronous getTracker.

Describe the solution you'd like

Convert

protected getTracker(req: Record<string, any>): string;

To

protected getTracker(req: Record<string, any>): string | Promise<string>;

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

In some cases we might need to do async operations to generate a proper tracker for the request, e.g. to verify and decode an authorization token.
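The requested change amounts to letting the tracker be awaited. A minimal sketch (the request shape and the token handling are stand-ins; a real implementation would verify a JWT asynchronously):

```typescript
// Sketch: getTracker may return a Promise, and the guard awaits it.
type IncomingReq = { headers: Record<string, string | undefined>; ip: string };

async function getTracker(req: IncomingReq): Promise<string> {
  const token = req.headers['authorization'];
  if (token) {
    // A real implementation would verify/decode the token here (async I/O).
    return `user:${token.slice(-8)}`;
  }
  return req.ip; // fall back to IP-based tracking for anonymous requests
}
```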

Throttle based on graphql params instead of context

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

We would like to throttle based on the params the user sends in the request, rather than on the context. For some mutations we would like to limit, based on the input value accountId, the number of requests that can be sent per 10 seconds.

Describe the solution you'd like

Adding a decorator where we can set the field that should be unique per request within the time limit: @FieldThrottle("account", 1, 10)

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

security

Make the `storage` field in the `ThrottlerStorage` interface optional

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

Currently the storage field in the ThrottlerStorage interface is required:

export interface ThrottlerStorage {
  /**
   * The internal storage with all the request records.
   * The key is a hashed key based on the current context and IP.
   * The value of each item will be an array of epoch times which indicate all
   * the request's ttls in ascending order.
   */
  storage: Record<string, number[]>;
}

This makes sense for the in-memory cache, but I have built my own Redis storage service which implements the ThrottlerStorage interface, and I have no use for the storage field as far as I can tell.

It would be nice if this field were optional, so I don't have unused code sitting in my ThrottlerRedisStorageService which implements ThrottlerStorage.

I am happy to make a PR for this fix; I just wanted to confirm that the storage field is indeed unnecessary when using your own storage option.

Minimum reproduction code

See 'Steps to reproduce'.

Steps to reproduce

import { Injectable } from '@nestjs/common';
import { ThrottlerStorage } from '@nestjs/throttler';

@Injectable()
export class ThrottlerRedisStorageService implements ThrottlerStorage {
  // storage: Record<string, number[]>; // <-- Throws a TypeScript error when omitted.

  addRecord(key: string, ttl: number): Promise<void> {
    return Promise.resolve(undefined);
  }

  getRecord(key: string): Promise<number[]> {
    return Promise.resolve([]);
  }
}

Expected behavior

As far as I know, I do not require the storage field, so would prefer it to be optional to save an unused line of code.
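A sketch of what the proposed change could look like, with the interface redeclared locally for illustration (the real one lives in @nestjs/throttler) and a Map standing in for a Redis client:

```typescript
// Proposed shape: `storage` becomes optional, so alternative backends
// need not expose the in-memory record map.
interface ThrottlerStorage {
  storage?: Record<string, number[]>;
  addRecord(key: string, ttl: number): Promise<void>;
  getRecord(key: string): Promise<number[]>;
}

// A Redis-style implementation can then omit `storage` entirely.
class ThrottlerRedisStorageService implements ThrottlerStorage {
  private records = new Map<string, number[]>(); // stand-in for a Redis client

  async addRecord(key: string, ttl: number): Promise<void> {
    const expiresAt = Date.now() + ttl * 1000;
    const list = this.records.get(key) ?? [];
    list.push(expiresAt);
    this.records.set(key, list);
  }

  async getRecord(key: string): Promise<number[]> {
    return this.records.get(key) ?? [];
  }
}
```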

Package version

3.1.0

NestJS version

9.1.4

Node.js version

16.18.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response

Revert ttl when request failed

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

I set the throttle to 1 request per minute. I don't want to make users wait after failed requests.

Describe the solution you'd like

Revert the expirationTime when a request fails (an exception occurs).

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

Apply the throttle only to successful requests.

Throttler doesn't work in GraphQL but work in controller

Is there an existing issue for this?

  • I have searched the existing issues

Current behavior

I read the documentation and set up the global ThrottlerGuard for both a RESTful controller and a GraphQL resolver.

I use autocannon to test the Throttler.

Minimum reproduction code

https://github.com/mpx2m/nest-rate-limiting-example

Steps to reproduce

In Minimum reproduction code

To test the controller, run:

npm run test:controller-test

This returns:

statusCodeStats: { '200': { count: 10 }, '429': { count: 24836 } }

429 responses are returned, so the Throttler works as expected.

To test GraphQL, run:

npm run test:graphql-resolver-test

This returns:

statusCodeStats: { '200': { count: 9727 } }

No 429 is returned.

Expected behavior

Running

npm run test:graphql-resolver-test

should produce statusCodeStats containing 429 results.

Package version

3.0.0

NestJS version

9.0.1

Node.js version

v16.15.0

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Other

No response
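A commonly documented pattern for GraphQL support is extending the guard and overriding getRequestResponse so req/res are read from the GraphQL context (via GqlExecutionContext) rather than the HTTP arguments. A dependency-free sketch of that extraction logic — the types below are stand-ins, not Nest's real ExecutionContext:

```typescript
// Simplified stand-in for Nest's ExecutionContext: either HTTP arguments
// or a GraphQL context carries the request/response pair.
type ExecutionContextLike = {
  getType: () => string;
  httpArgs?: { req: unknown; res: unknown };
  gqlContext?: { req: unknown; res: unknown };
};

// Mirrors what a getRequestResponse override would do: in a real guard,
// the GraphQL branch would use GqlExecutionContext.create(context).getContext().
function getRequestResponse(context: ExecutionContextLike) {
  if (context.getType() === 'graphql') {
    const ctx = context.gqlContext!;
    return { req: ctx.req, res: ctx.res };
  }
  const { req, res } = context.httpArgs!;
  return { req, res };
}
```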

Block duration option for throttler

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

Currently, when a user exceeds their request limit within a defined time window (ttl), their requests are blocked for a fixed duration tied to the ttl.

The proposed feature would allow developers to tailor the duration of this block separately from the ttl. This means developers can fine-tune how long user requests are restricted after reaching their limit, providing more control and adaptability for different use cases in NestJS.

Describe the solution you'd like

I've devised a way to improve the rate limiting feature in NestJS. We can introduce a new option called blockDuration when importing the ThrottlerModule. If users choose not to specify blockDuration, the system falls back to the default behavior, which relies on the ttl.

Teachability, documentation, adoption, migration strategy

@Module({
  imports: [
    ThrottlerModule.forRoot({
      ttl: 5,
      limit: 3,
      blockDuration: 3600
    }),
  ],
})
export class AppModule {}

What is the motivation / use case for changing the behavior?

As we built our app, security was a top concern. To prevent bot attacks, we decided to allow only a certain number of requests 'n' within a time frame 't'. If users exceeded this limit, we'd block them for a day. It was our way of keeping the bots at bay and ensuring a safe user experience.
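A dependency-free sketch of the proposed semantics (option names follow the proposal, not the module's implementation): once limit hits are recorded within ttl seconds, further requests are rejected until blockDuration seconds have elapsed.

```typescript
interface ThrottleState {
  hits: number[];        // epoch seconds of accepted hits
  blockedUntil?: number; // epoch seconds; set once the limit is exceeded
}

function isAllowed(
  state: ThrottleState,
  now: number,
  ttl: number,
  limit: number,
  blockDuration: number,
): boolean {
  if (state.blockedUntil !== undefined && now < state.blockedUntil) {
    return false; // still inside the long block window
  }
  state.hits = state.hits.filter((t) => now - t < ttl); // drop expired hits
  if (state.hits.length >= limit) {
    state.blockedUntil = now + blockDuration; // start the long block
    return false;
  }
  state.hits.push(now);
  return true;
}
```

With ttl: 5, limit: 3, blockDuration: 3600, a fourth request inside the window is rejected and stays rejected long after the ttl itself would have expired.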

4.2.0 is a BREAKING CHANGE for custom guards

Did you read the migration guide?

  • I have read the whole migration guide

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Potential Commit/PR that introduced the regression

PR #1484

Versions

4.1.0 => 4.2.0

Describe the regression

When extending the ThrottlerGuard canActivate method, or injecting the ThrottlerModuleOptions into a custom guard with @Inject(getOptionsToken()), limit and ttl are now of type Resolvable<number>, whereas previously they were of type number.

Minimum reproduction code

Expected behavior

I expect that when a public interface is updated in a non-backwards-compatible way, there is a major release bump.

Other

No response

make error message configurable

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

I want to override the default error message.

For now, I have to override ThrottlerGuard like this:

class MyThrottlerGuard extends ThrottlerGuard {
  protected throwThrottlingException(): void {
    throw new ThrottlerException("My Error Message");
  }
}

Describe the solution you'd like

I would like to pass the error message to the module config function

ThrottlerModule.forRoot({
  ttl: 3600,
  limit: 100,
  throttlerMessage: "My Error Message"
}),

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

I need to localize error messages in my app

The module appears to throttle only 2xx responses

The module appears to throttle only 2xx responses; if the response is a 401, for example when credentials are invalid, the module doesn't work.

I wanted to use the module to limit the number of sign-in attempts to 5 per 60s to mitigate brute-force attacks.

Context-based `ttl` and `limit` options

Is there an existing issue that is already proposing this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe it

The current limit and ttl options are statically configured, which prevents adjusting limits for specific users or IPs.

The existing skipIf is a workaround but isn't sufficient, as it completely disables throttling.

Describe the solution you'd like

Enable configuring the limit and ttl with a callback, like this:

ThrottlerModule.forRoot({
  limit: () => 5,
  ttl: () => 60,
}),

or

ThrottlerModule.forRoot({
  limit: async (context) => // ... retrieve something,
  ttl: async (context) => 60,
}),

and, for the decorator:

@Throttle(() => 2, () => 10)

NB: for my current use case the ttl callback is not mandatory, nor is it necessary for these callbacks to return Promises, but both seem reasonable.

Teachability, documentation, adoption, migration strategy

No response

What is the motivation / use case for changing the behavior?

Per-user / per-application rate limiting: normal users have a specific rate, and automations (machine to machine integrations) have a different rate.
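A sketch of how such resolvable options can be modeled, in the spirit of the Resolvable<number> type mentioned in the 4.2.0 regression report above (`ExecutionContextLike` is a stand-in for Nest's ExecutionContext, and the plan names are hypothetical):

```typescript
type ExecutionContextLike = { user?: { plan: string } };

// An option is either a plain value or a (possibly async) function of the
// execution context.
type Resolvable<T> =
  | T
  | ((context: ExecutionContextLike) => T | Promise<T>);

async function resolveOption<T>(
  option: Resolvable<T>,
  context: ExecutionContextLike,
): Promise<T> {
  return typeof option === 'function'
    ? (option as (c: ExecutionContextLike) => T | Promise<T>)(context)
    : option;
}

// Per-plan limits: machine-to-machine clients get a higher ceiling.
const limit: Resolvable<number> = async (ctx) =>
  ctx.user?.plan === 'm2m' ? 1000 : 5;
```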
