alisaifee / limits
Rate limiting using various strategies and storage backends such as redis & memcached
Home Page: https://limits.readthedocs.org
License: MIT License
Add type hints to all code (at least the public API) and make the distribution PEP 561 compliant. This allows mypy
(and possibly other tools) to find the type hints and use them in linting.
In practice, add an empty limits/py.typed
file and include it in the package:
setup(
    package_data={'limits': ['py.typed']},
    zip_safe=False,  # not needed with wheels, AFAIK
)
First of all, I'm not sure if this is a bug or a misconfiguration on my side...
I'm trying to create a MovingWindowRateLimiter for an asynchronous Redis Sentinel storage, created using:
self.redis_storage = storage_from_string(
    f"async+redis+sentinel://:{password}@{urls}/{cluster_name}",
    db=db,
    sentinel_kwargs=dict(password=password),
    stream_timeout=stream_timeout,
)
self.moving_window = MovingWindowRateLimiter(self.redis_storage)
The storage is created, but unfortunately the MovingWindowRateLimiter is not; it fails on this assert:
https://github.com/alisaifee/limits/blob/master/limits/strategies.py#LL15C9-L15C44
From the debug console:
isinstance(storage, Storage)
False
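If that assert compares against the synchronous base class, no async storage can ever pass it; the async strategies live in limits.aio.strategies, so it may be worth checking which module the strategy was imported from (an assumption on my part, not confirmed in this thread). A plain-Python stand-in for the type mismatch:

```python
# Stand-ins for two unrelated base classes (NOT the real limits classes):
class SyncStorage:
    """Models limits.storage.base.Storage (synchronous)."""

class AsyncStorage:
    """Models limits.aio.storage.base.Storage (asynchronous)."""

class AsyncRedisSentinelStorage(AsyncStorage):
    """Models the storage created for an async+redis+sentinel:// URI."""

storage = AsyncRedisSentinelStorage()
# A strategy that asserts on the sync base rejects the async storage,
# even though both hierarchies use the name "Storage":
print(isinstance(storage, SyncStorage))  # False
```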
As for DEPENDENCIES = {"coredis.sentinel": Version("3.4.0")}: it looks like I'm fulfilling this. The redis+sentinel
storage schema works as expected...
It's running on Python 3.11 on Debian inside WSL, connecting to Redis 7.0.8.
From poetry.lock:
[[package]]
name = "limits"
version = "3.5.0"
description = "Rate limiting utilities"
optional = false
python-versions = ">=3.7"
files = [
{file = "limits-3.5.0-py3-none-any.whl", hash = "sha256:3ad525faeb7e1c63859ca1cae34c9ed22a8f22c9ea9d96e2f412869f6b36beb9"},
{file = "limits-3.5.0.tar.gz", hash = "sha256:b728c9ab3c6163997b1d11a51d252d951efd13f0d248ea2403383952498f8a22"},
]
[package.dependencies]
coredis = {version = ">=3.4.0,<5", optional = true, markers = "python_version > \"3.7\" and extra == \"async-redis\""}
deprecated = ">=1.2"
importlib-resources = ">=1.3"
packaging = ">=21,<24"
setuptools = "*"
typing-extensions = "*"
[package.extras]
all = ["aetcd", "coredis (>=3.4.0,<5)", "emcache (>=0.6.1)", "emcache (>=1)", "etcd3", "motor (>=3,<4)", "pymemcache (>3,<5.0.0)", "pymongo (>4.1,<5)", "redis (>3,!=4.5.2,!=4.5.3,<5.0.0)", "redis (>=4.2.0,!=4.5.2,!=4.5.3)"]
async-etcd = ["aetcd"]
async-memcached = ["emcache (>=0.6.1)", "emcache (>=1)"]
async-mongodb = ["motor (>=3,<4)"]
async-redis = ["coredis (>=3.4.0,<5)"]
etcd = ["etcd3"]
memcached = ["pymemcache (>3,<5.0.0)"]
mongodb = ["pymongo (>4.1,<5)"]
redis = ["redis (>3,!=4.5.2,!=4.5.3,<5.0.0)"]
rediscluster = ["redis (>=4.2.0,!=4.5.2,!=4.5.3)"]
# ...
[[package]]
name = "coredis"
version = "4.14.0"
description = "Python async client for Redis key-value store"
optional = false
python-versions = ">=3.7"
files = [
{file = "coredis-4.14.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:19604e82c6659550f4b057c9692725f21abb9cc19e5866d8fb673c4f3e6baea4"},
{file = "coredis-4.14.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c52ca9e2a0793be6e4bc73dd882397e4a30a4355f72cc5d9ef63d6f6ac3d33f3"},
{file = "coredis-4.14.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d12377892c85d60711c29ec11468d153dfc8a0f8efaca95ced3c00270636b31"},
{file = "coredis-4.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:988aad852f99a53e51ecf72416275224a8cf9152578a0b11d887499c6557b15c"},
{file = "coredis-4.14.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4983307a243c80405ae785ae9c3285bf90fd2865a7559576d944630ece06aa83"},
{file = "coredis-4.14.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:17472fc86150576d8d560e80c1004e66d6b7199ca49ec8d69b0e1e470de776c3"},
{file = "coredis-4.14.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:77fd842dde397cc4f143b681a0976462a3f4a809128241f9155c594e1b0ac434"},
{file = "coredis-4.14.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:60b9c173a6ba07200d7b58ce2b0146a0d87f14d17a02eab6a6439ed30ae8855d"},
{file = "coredis-4.14.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0cbd44bbbf6d7af6808a1fd483788439b81931426a5b529a78193f99c372bd70"},
{file = "coredis-4.14.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:286c773649c271094fa44859002032cf1c68755755ccc9425b77de02c99325e7"},
{file = "coredis-4.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7b8c59f97474f6008aef4bcd92e88192ac01f8f65504a98c4591bd8997ab04af"},
{file = "coredis-4.14.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ec9decc61d0991998b297028f9a4f07e11fef626affa8362acdccdb63b6664da"},
{file = "coredis-4.14.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:738006462914b62acd679c876c907760f85532dbc3e60225046393a673a959aa"},
{file = "coredis-4.14.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2134f7de3e4076d29a399874b6311f29a91e6b9576a4cd0d0128115b11a555d2"},
{file = "coredis-4.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1c4b2b9fb753630ef07edecabe0b41dfe958008c57d7f462e2aa4b9cfa1a82cf"},
{file = "coredis-4.14.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e2ea7a07eeb1934c149b208c9a6f28d64430653f1d99de105efc895c5db75ec9"},
{file = "coredis-4.14.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7d22abaf5bcdbfeeed813b25c5612d0ddb5645ec1c5e41aafa8bbdebd28f18db"},
{file = "coredis-4.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2259ce3c14455709fd0f0730e702ec375bd4a44b217eb60405d4849902df7af"},
{file = "coredis-4.14.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e5cb15b4ae866ee8fdca83ece53800fbae7b6a3208ab599a2f5f8927cc5aeebd"},
{file = "coredis-4.14.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:343ea8882551cb808cd9589823808b9863b3e179b82d9012716d345d79916918"},
{file = "coredis-4.14.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a333abe23ea045c8e1b11f3dd9e572dad11b8951d218e3a6d258128b44fea0ef"},
{file = "coredis-4.14.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b878946e7abf833e3c199f5253de306d19d6ca5d3d96ffe525c1ae54916f74e"},
{file = "coredis-4.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84eb18976a0bf26091859813424a5e736ba7bb9f5a3648b80951e4666f8f6ae"},
{file = "coredis-4.14.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:98634d779025992b0a90585678691de753019265b9b5f6c315ba38bedba99f4f"},
{file = "coredis-4.14.0-py3-none-any.whl", hash = "sha256:f1efdc8a53b15952505a3b0d1ff125b7b423531d4b4737c119def0ab566fd83d"},
{file = "coredis-4.14.0.tar.gz", hash = "sha256:43c45928caaa38589b7f98882863f330f4ae412a31f834279c74b80afae15ee2"},
]
[package.dependencies]
async-timeout = ">4,<5"
deprecated = ">=1.2"
packaging = ">=21,<24"
pympler = ">1,<2"
typing-extensions = ">=4.3"
wrapt = ">=1.1.0,<2"
Thanks for looking into it.
Hey!
I was wondering if you'd be willing to add support for SQL tables (I'm willing to contribute).
Hi,
I am the maintainer of the Falcon-Limiter package (see https://github.com/zoltan-fedor/falcon-limiter) and one of my users has raised an issue about Falcon-Limiter breaking when using limits version 2.3.0 with Redis Sentinel, while it works with limits version 2.0.3.
See the original issue at zoltan-fedor/falcon-limiter#1
I did manage to reproduce the user's issue in Falcon-Limiter after deploying Redis Sentinel onto my Kubernetes cluster.
Then I tested the limits library directly (without Falcon-Limiter) and unfortunately was able to reproduce the same issue there too, so it seems to me this is an issue with the limits library, not with how Falcon-Limiter makes calls to it (correct me if I am wrong).
Below is the test code I use:
from limits import storage, strategies, parse
redis_password = "xxxxxxx"
r_storage = storage.storage_from_string(f"redis+sentinel://:{redis_password}@127.0.0.1:26379/mymaster",
                                        sentinel_kwargs={"password": redis_password})
moving_window = strategies.MovingWindowRateLimiter(r_storage)
one_per_minute = parse("1/minute")
assert True == moving_window.hit(one_per_minute, "test_namespace", "foo")
When running this with limits version 2.3.0 I get the following error, while there is no error with version 2.0.3.
Traceback (most recent call last):
File "test-limits.py", line 9, in <module>
assert True == moving_window.hit(one_per_minute, "test_namespace", "foo")
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/strategies.py", line 84, in hit
return self.storage().acquire_entry( # type: ignore
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/storage/redis.py", line 178, in acquire_entry
return super()._acquire_entry(key, limit, expiry, self.storage, amount)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/storage/redis.py", line 79, in _acquire_entry
acquired = self.lua_acquire_window([key], [timestamp, limit, expiry, amount])
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/commands/core.py", line 4440, in __call__
return client.evalsha(self.sha, len(keys), *args)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/commands/core.py", line 3891, in evalsha
return self.execute_command("EVALSHA", sha, numkeys, *keys_and_args)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1176, in execute_command
return conn.retry.call_with_retry(
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/retry.py", line 44, in call_with_retry
fail(error)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1180, in <lambda>
lambda error: self._disconnect_raise(conn, error),
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1166, in _disconnect_raise
raise error
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/retry.py", line 41, in call_with_retry
return do()
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1177, in <lambda>
lambda: self._send_command_parse_response(
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1153, in _send_command_parse_response
return self.parse_response(conn, command_name, **options)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1192, in parse_response
response = connection.read_response()
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/sentinel.py", line 61, in read_response
return super().read_response(disable_decoding=disable_decoding)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/connection.py", line 800, in read_response
response = self._parser.read_response(disable_decoding=disable_decoding)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/connection.py", line 336, in read_response
raise error
redis.exceptions.AuthenticationError: Authentication required.
Does limits version 2.3.0 require the password for Sentinel to be passed differently?
In case you want to reproduce the issue on your side, unfortunately you will need to deploy a Redis Sentinel cluster. There are additional details about that in the original ticket, but just a quick summary:
# deploy Redis Sentinel onto a Kubernetes cluster (or Minikube):
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install redis-sentinel bitnami/redis --set sentinel.enabled=true
# port-forward the services
$ kubectl port-forward --namespace default svc/redis-sentinel 26379:26379
$ kubectl port-forward --namespace default svc/redis-sentinel 6379:6379
# get the Redis password
$ export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-sentinel -o jsonpath="{.data.redis-password}" | base64 --decode)
# you might also need to alias the redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local domain name to localhost in your hosts file
Based on this test (https://github.com/alisaifee/limits/blob/master/tests/storage/test_redis_sentinel.py#L33) this scenario should work, but obviously it doesn't - likely the mocking is not able to capture the issue.
redis-py has had async support for a while; it would be cleaner to use the same library.
Hi @alisaifee!
I've been looking at limits in relation to Python 3.11 upgrades, and noticed a flaky unit test while running the unit tests on a fairly old computer.
It looks like the problem occurred when the setup time for the first round of simulated requests took longer than 0.1s.
When that happens, the limit-cleared assertions can fail (for example, if we are at time start + 1.05s while the 1s rate limit window is still in effect).
As a workaround, recording the time of the most recent simulated request, and waiting until 1s after that resolved the problem. (I do recognize that that subtly changes the meaning of the test, though)
last = None
while time.time() - start < 0.5 and count < 10:
    assert await limiter.hit(per_second)
    last = time.time() - start  # keep track of the time of the most recent request
    count += 1
assert not await limiter.hit(per_second)
while time.time() - start <= last + 1:  # wait until one second after the most recent request
    await asyncio.sleep(0.1)
for _ in range(10):
    assert await limiter.hit(per_second)
Possibly not worth the effort to resolve given that it's probably fairly rare (so feel free to close this issue) - I thought it might be worth reporting though.
Is there any way to connect to Redis by a Unix socket? My shared hosting only provides a Unix socket instead of a TCP connection.
It seems there is no support for AWS ElastiCache.
I'm presuming this was intended to be GRANULARITY
:
Line 65 in 5facdd5
It's confusing when dir(rate) shows a non-existent attribute.
I'm running a pretty big stress test with redis+sentinel and this popped up:
File "/root/venv_smtpin/local/lib/python2.7/site-packages/limits/strategies.py", line 76, in hit
item.get_expiry()
File "/root/venv_smtpin/local/lib/python2.7/site-packages/limits/storage.py", line 456, in acquire_entry
key, limit, expiry, master, no_add
File "/root/venv_smtpin/local/lib/python2.7/site-packages/limits/storage.py", line 294, in acquire_entry
return True
File "/root/venv_smtpin/local/lib/python2.7/site-packages/redis/lock.py", line 88, in __exit__
self.release()
File "/root/venv_smtpin/local/lib/python2.7/site-packages/redis/lock.py", line 133, in release
raise LockError("Cannot release an unlocked lock")
LockError: Cannot release an unlocked lock
I'm firing up a lot of gevent workers and forcing a lot of hits on a small moving window.
Etcd is a distributed key-value store often used for critical data and caching (Discord uses etcd, for instance), and it is a core part of Kubernetes (so technically Salesforce, Ticketmaster, and a number of other companies use it); adding support for it would be nice.
I am currently using Redis myself, but I'm trying to move to etcd for increased scalability; seemingly the only thing lacking is limits' support for etcd.
If a wrapper for it needs to be found, you could use python-etcd3 for non-async environments and aetcd for async environments.
I have such test code:
storage = storage_from_string('memory://')
limiter = limits.strategies.FixedWindowElasticExpiryRateLimiter(storage)
rate = limits.parse('2/6second')
while 1:
    print(limiter.hit(rate, '111'))
    time.sleep(5)
and I got
True
True
False
False
False
My actual rate is 1/5second, lower than the 2/6second limit, but hits are rejected; there are never 3 hits within any continuous 6 seconds.
If this is by design, I hope it can be documented under the strategies.
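For what it's worth, this matches elastic-expiry semantics: every hit pushes the window's expiry out by the full period, so a steady hit every 5s against 2/6second never lets the window lapse once it is full. A stdlib sketch (a simplified model, not the limits implementation) reproducing the observed output:

```python
import time

class ElasticFixedWindow:
    """Simplified model: every hit resets the window expiry to now + period."""

    def __init__(self, limit, period):
        self.limit, self.period = limit, period
        self.count = 0
        self.expires_at = float("-inf")

    def hit(self, now=None):
        now = time.monotonic() if now is None else now
        if now >= self.expires_at:           # window lapsed: start a fresh one
            self.count = 0
        self.count += 1
        self.expires_at = now + self.period  # elastic: expiry extended on every hit
        return self.count <= self.limit

window = ElasticFixedWindow(limit=2, period=6)
print([window.hit(t) for t in (0, 5, 10, 15, 20)])
# [True, True, False, False, False] -- the same pattern as the report above
```

With a non-elastic fixed window the counter would reset at fixed boundaries, so a steady 1/5s stream against 2/6second would generally pass.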
Importing the limits library in any environment causes the following error:
ModuleNotFoundError: No module named 'pkg_resources'
This is identical to this issue and can be resolved as specified there (with respect to your environment, i.e. adding setuptools as a dependency to your project).
Importing the limits library should cause no issues once it is properly installed.
Declare a dependency on setuptools in setup.py.
Hello,
I'm trying to use the ratelimiter with an AWS Service which limits me to 10/requests per second: https://docs.aws.amazon.com/servicequotas/latest/userguide/reference_limits.html.
I'm using FixedWindowRateLimiter to limit this using memcached as storage.
Most of the time it works well, but for some reason some errors occur:
"An error occurred (TooManyRequestsException) when calling the ListServiceQuotas operation: Rate exceeded"
I saw this may occur because of the burst problem, and that I need to add a second rate limit to manage it: https://limits.readthedocs.io/en/stable/strategies.html#fixed-window.
But for a rate limit of 10/sec, how can I add another rate limit? Is there a way to set a rate limit with a window shorter than one second? What approach do you recommend?
Thank you.
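For reference, the moving-window strategy avoids the fixed-window burst (up to double the limit across adjacent windows) by tracking individual hit timestamps. A stdlib sketch of the idea (a simplified model, not the limits implementation):

```python
import time
from collections import deque

class MovingWindow:
    """Allow at most `limit` hits in any sliding `period`-second span."""

    def __init__(self, limit, period):
        self.limit, self.period = limit, period
        self.hits = deque()                      # timestamps of accepted hits

    def hit(self, now=None):
        now = time.monotonic() if now is None else now
        while self.hits and self.hits[0] <= now - self.period:
            self.hits.popleft()                  # drop hits outside the window
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False
```

limits ships this behavior as MovingWindowRateLimiter; as for sub-second windows, check which granularities parse() actually accepts before relying on them.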
Hi Team,
I tried testing the limits package on both architectures; all tests pass on x86, but on aarch64 I get the error below:
No matching distribution found for emcache>=0.6.1
I also looked into the emcache error and went through the PyPI page for emcache, but suitable wheels and source distributions are not present for emcache.
@Ali-Akber Saifee Could you please share your feedback on this and suggest what could be done to test this on an ARM64 server?
Thanks in advance.
I have a Sentinel cluster where even the sentinels require a password. However, using Sentinel(..., password=password) sends the password only to the actual Redis instances, not to the sentinels.
To send it to the sentinels as well, you have to instantiate Sentinel() like this:
storage = Sentinel(sentinel_list, password=redis_password, sentinel_kwargs={'password': sentinel_password})
I have a fix in place locally (a subclass of RedisSentinelStorage) that sends the same password to the sentinels this way, but it would be awesome to be able to set the sentinel password separately.
Hey y'all, I'm curious whether flask-limiter could help when requests have variable cost. For instance, can I count a single request against the limit with some multiplier based on the content of the request?
If not, do you have any thoughts on how to handle this type of metering?
I want to use the DNS Seed List Connection Format to connect to MongoDB, but it throws an exception:
Traceback (most recent call last):
File "D:\Python\Python38\lib\site-packages\flask_limiter\extension.py", line 294, in __init__
self.init_app(app)
File "D:\Python\Python38\lib\site-packages\flask_limiter\extension.py", line 332, in init_app
storage_from_string(
File "D:\Python\Python38\lib\site-packages\limits\storage\__init__.py", line 60, in storage_from_string
raise ConfigurationError("unknown storage scheme : %s" % storage_string)
limits.errors.ConfigurationError: unknown storage scheme : mongodb+srv://example.com/?retryWrites=true&w=majority
Here is the documentation about this format.
https://docs.mongodb.com/manual/reference/connection-string/#dns-seed-list-connection-format
This format is supported in pymongo 3.6 and later.
While reviewing limits/flask-limiter, we came across a scaling issue that we wanted to bring up. The current Redis implementation uses a non-Redis mechanism for locking: a threading-based lock.
https://github.com/alisaifee/limits/blob/1.0.6/limits/storage.py#L55-L56
https://github.com/alisaifee/limits/blob/1.0.6/limits/storage.py#L249
https://github.com/alisaifee/limits/blob/1.0.6/limits/storage.py#L279
This is fine when a service runs on one machine, but it falls apart once it runs on multiple machines. Instead of this approach, maybe move to dynamic Redis keys that leverage INCR, avoiding the locking issue altogether? (e.g. 127.0.0.1/endpoint/epoch_000, 127.0.0.1/endpoint/epoch_100)
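The epoch-keyed idea in the report can be sketched like this (the key format is illustrative, not what limits actually produces):

```python
import time

def window_key(identifier, period, now=None):
    """Derive the storage key from the epoch bucket the hit falls into.

    Redis INCR is atomic per key, so concurrent hits on the same bucket
    need no cross-process lock; stale buckets can simply expire via TTL.
    """
    now = time.time() if now is None else now
    bucket = int(now // period) * period
    return f"{identifier}/epoch_{bucket}"

print(window_key("127.0.0.1/endpoint", 100, now=150))  # 127.0.0.1/endpoint/epoch_100
```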
I just received this "bug" report in my application: indico/indico#5391
Would you consider making your URL format compatible with the one redis-py uses? I'd rather not convert between two URL formats if I can avoid it...
Hey hey, I've used the library through Flask-Limiter but never got deep into it.
I'm trying to find out whether configuring an exponential/growing limit is possible, i.e. limiting that gets prolonged if the caller keeps on bombarding.
Do you / would you support something like this? If yes/no, why?
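To make the question concrete, here is one possible semantics for such a growing limit as a stdlib sketch (my assumption of the desired behavior, not an existing limits feature): every attempt made while blocked extends the block.

```python
import time

class EscalatingLimiter:
    """Fixed window plus a penalty that grows while the caller keeps hitting."""

    def __init__(self, limit, period, penalty=2.0):
        self.limit, self.period, self.penalty = limit, period, penalty
        self.count, self.window_start, self.blocked_until = 0, 0.0, 0.0

    def hit(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self.blocked_until:
            self.blocked_until += self.penalty   # still hammering: extend the block
            return False
        if now - self.window_start >= self.period:
            self.count, self.window_start = 0, now
        self.count += 1
        if self.count > self.limit:
            self.blocked_until = now + self.penalty
            return False
        return True
```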
Your implementation supports Sentinel, but it doesn't support authentication. I already had a Sentinel instance initialized in my code, which I use for multiple purposes, so would you accept a PR that allows creating this?
Thanks.
Hi,
I have a situation where the memcached storage#incr sometimes returns None. This is not handled in FixedWindowRateLimiter at the moment (it raises TypeError: unorderable types: NoneType() <= int() in FixedWindowRateLimiter#hit).
Would a PR be accepted where FixedWindowRateLimiter#hit returns False if storage#incr returns None?
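The proposed guard would look something like this (a sketch of the suggestion, not the actual limits code):

```python
def hit(current_count, limit):
    """Deny the hit when the storage could not count it (incr returned None)."""
    if current_count is None:
        return False  # fail closed instead of raising TypeError
    return current_count <= limit

print(hit(None, 10))  # False
print(hit(3, 10))     # True
```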
Hi, how can I implement a rate limit based on a user id or token? I couldn't find the code where the IP is checked, so that I could replace it with the token.
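limits itself never inspects the IP; the strategy's hit() takes arbitrary identifier strings chosen by the caller (in web frameworks it is usually a key function that returns the remote address), so passing a user id or token instead is enough. A simplified model of how identifiers end up in the storage key (the real key format in limits differs):

```python
def key_for(limit_string, *identifiers):
    """Simplified model: caller-chosen identifiers are joined into the key."""
    return "/".join(("LIMITER", *identifiers, limit_string))

# Two callers hitting the same limit get independent counters:
print(key_for("1 per minute", "user:42"))    # LIMITER/user:42/1 per minute
print(key_for("1 per minute", "token:abc"))  # LIMITER/token:abc/1 per minute
```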
I am not sure where this bug should be reported. But I think it may be fixed in the limits package, so I leave the bug description here.
The following error occurs with Sanic-Limiter on the edge between limited and permitted timespans when too many requests run.
[2020-12-21 14:08:55 +0300] [96980] [ERROR] Exception occurred while handling uri: 'http://localhost:8000/'
Traceback (most recent call last):
File "/tmp/tmp.Rziuc8Wg8e/.venv/lib/python3.9/site-packages/sanic/app.py", line 908, in handle_request
response = await self._run_request_middleware(
File "/tmp/tmp.Rziuc8Wg8e/.venv/lib/python3.9/site-packages/sanic/app.py", line 1265, in _run_request_middleware
response = middleware(request)
File "/tmp/tmp.Rziuc8Wg8e/.venv/lib/python3.9/site-packages/sanic_limiter/extension.py", line 243, in __check_request_limit
six.reraise(*sys.exc_info())
File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
raise value
File "/tmp/tmp.Rziuc8Wg8e/.venv/lib/python3.9/site-packages/sanic_limiter/extension.py", line 220, in __check_request_limit
if not self.limiter.hit(lim.limit, key, limit_scope):
File "/tmp/tmp.Rziuc8Wg8e/.venv/lib/python3.9/site-packages/limits/strategies.py", line 133, in hit
self.storage().incr(item.key_for(*identifiers), item.get_expiry())
TypeError: '<=' not supported between instances of 'NoneType' and 'int'
Prerequisite:
Steps to reproduce:
pip install sanic pymemcache \
'limits @ git+https://github.com/alisaifee/limits' \
'sanic-limiter @ git+https://github.com/bohea/sanic-limiter'
from sanic import Sanic
from sanic.response import text
from sanic_limiter import Limiter, RateLimitExceeded, get_remote_address
app = Sanic("hello_example")
app.error_handler.add(RateLimitExceeded,
lambda _req, err: text("", err.status_code))
limiter = Limiter(app, get_remote_address,
storage_uri="memcached://localhost:11211")
@app.route("/")
@limiter.limit("1 per second")
async def test(request):
return text("Hello world!\n")
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8000, access_log=False)
ab -t30 -k http://localhost:8000/
I am using https://github.com/laurents/slowapi, which uses limits. I would like to know how to enforce a rate limit across instances; I found that the Redis keys are different for the same request called from different instances, which breaks the sharing.
The module pkg_resources
gets imported here, but that package is deprecated. From the docs:
Use of pkg_resources is deprecated in favor of importlib.resources, importlib.metadata and their backports
Specifically, while using the Flask-Limiter package (which uses this package), I get this stack trace:
api/app.py:5: in <module>
from flask_limiter import Limiter
../../.virtualenvs/redacted/lib/python3.11/site-packages/flask_limiter/__init__.py:4: in <module>
from .errors import RateLimitExceeded
../../.virtualenvs/redacted/lib/python3.11/site-packages/flask_limiter/errors.py:7: in <module>
from .wrappers import Limit
../../.virtualenvs/redacted/lib/python3.11/site-packages/flask_limiter/wrappers.py:10: in <module>
from limits import RateLimitItem, parse_many
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/__init__.py:5: in <module>
from . import _version, aio, storage, strategies
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/aio/__init__.py:1: in <module>
from . import storage, strategies
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/aio/storage/__init__.py:6: in <module>
from .base import MovingWindowSupport, Storage
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/aio/storage/base.py:5: in <module>
from limits.storage.registry import StorageRegistry
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/storage/__init__.py:12: in <module>
from .base import MovingWindowSupport, Storage
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/storage/base.py:6: in <module>
from limits.util import LazyDependency
../../.virtualenvs/redacted/lib/python3.11/site-packages/limits/util.py:11: in <module>
import pkg_resources
../../.virtualenvs/redacted/lib/python3.11/site-packages/pkg_resources/__init__.py:121: in <module>
warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
E DeprecationWarning: pkg_resources is deprecated as an API
Hi @alisaifee,
Sorry, it's me again. Apologies for sending yet another issue.
I might have found another bug in the async module, this time with the MemoryStorage.
I took my redis+sentinel async example and turned it into a MemoryStorage one, and I got an error. The same code runs fine if I replace the storage line with the Redis one, but it fails with the MemoryStorage:
import asyncio
from limits import parse
from limits.storage import storage_from_string, MemoryStorage
from limits.aio.strategies import MovingWindowRateLimiter
# store = storage_from_string("async+redis://:xxxx@localhost:63796") # with this line it runs fine
store = MemoryStorage() # with this line it fails
moving_window = MovingWindowRateLimiter(store)
one_per_minute = parse("1/minute")
async def hit():
return await moving_window.hit(one_per_minute, "test_namespace", "foo")
loop = asyncio.get_event_loop()
loop.run_until_complete(hit())
loop.close()
Error:
Traceback (most recent call last):
File "test-async-limits.py", line 18, in <module>
loop.run_until_complete(hit())
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "test-async-limits.py", line 15, in hit
return await moving_window.hit(one_per_minute, "test_namespace", "foo")
File "/home/user/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/aio/strategies.py", line 85, in hit
return await self.storage().acquire_entry( # type: ignore
TypeError: object bool can't be used in 'await' expression
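That TypeError is the generic symptom of awaiting the return value of a plain synchronous method. Since the example imports MemoryStorage from limits.storage (the sync module) while the strategy comes from limits.aio.strategies, the likely fix (my assumption, not confirmed in this thread) is to use the async MemoryStorage under limits.aio.storage. A minimal stand-in reproducing the message:

```python
import asyncio

class SyncMemoryStorageStandIn:
    """Models a synchronous storage: acquire_entry returns a plain bool."""
    def acquire_entry(self, *args):
        return True

async def hit(storage):
    # An async strategy awaits the storage call; awaiting a bool fails.
    return await storage.acquire_entry()

try:
    asyncio.run(hit(SyncMemoryStorageStandIn()))
except TypeError as exc:
    print(exc)  # object bool can't be used in 'await' expression
```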
coredis requires a newer Redis server version and does not work with Redis 5.
In order to make slowapi asyncio compatible, I would need to update the layers underneath to work asynchronously. I think the key part is getting limits
to support asyncio.
Would you be interested in me having a go at it? I think it would be possible to either:
- add async def methods on the strategies, something like RateLimiter.ahit() and RateLimiter.atest(), or
- add AsyncRateLimiter and subclasses.
And then add an async storage backend, starting with redis.
Otherwise, if you see no interest in having this in the same code base, I guess I could fork the library and do more or less the same, but I think it's probably more valuable to have everything together.
Hi,
I was just wondering if there are any plans in the future to add support for DynamoDB.
Thank you in advance
I noticed this commit:
24b9fc3
And a recent push to PyPI (3.1.4).
However I am still seeing the wrong restriction: packaging>=21,<23
Requirement already satisfied: limits==3.1.4 in ./lib/python3.9/site-packages (3.1.4)
Requirement already satisfied: typing-extensions in ./lib/python3.9/site-packages (from limits==3.1.4) (4.4.0)
Requirement already satisfied: packaging<23,>=21 in ./lib/python3.9/site-packages (from limits==3.1.4) (22.0)
Indeed when I extract the pushed wheel, I see:
...
Requires-Dist: packaging (<23,>=21)
...
Q: Could something have been missed in the PyPI update?
I am calling self.db.get_num_acquired(db_key, self.db.get_expiry(db_key)) after self.db.acquire_entry(limit=self.max_calls, key=db_key, expiry=self.period), and it is still returning 0. Shouldn't it be 1?
Is an error thrown when the data grows beyond the amount of RAM I have, or does limits handle the error and clear the storage?
https://github.com/alisaifee/limits/blob/master/limits/storage.py#L364 ignores the passed kwargs (options), while it could just pass them along to Redis.from_url() in https://github.com/alisaifee/limits/blob/master/limits/storage.py#L372.
At the moment, redis 2.10.5 does not parse query strings correctly (see redis/redis-py#723 for the fix), so it's not possible to configure socket_timeout, socket_connect_timeout, and such.
The fix for URL query string parsing will be included in 2.10.6, but it shouldn't hurt to just pass the options along here either way?
Hello! I've recently updated one of my projects from limits 2.0.3 to 2.1.1 and ran into an issue where mypy reports incompatible types for MovingWindowRateLimiter
and FixedWindowRateLimiter
. A minimal reproducer:
import limits
storage = limits.storage.storage_from_string("memory://")
strategy = limits.strategies.MovingWindowRateLimiter(storage)
With this file and running mypy 0.930 I get:
error: Argument 1 to "MovingWindowRateLimiter" has incompatible type "Union[limits.storage.base.Storage, limits.aio.storage.base.Storage]"; expected "limits.storage.base.Storage"
I could also reproduce that by running mypy on the tests in this repo: python3 -m mypy tests/test_strategy.py.
Any ideas what I could look at next?
Thanks!
I'm trying to implement flask-limiter with Redis 6.
If I use storage_uri without ssl (redis://:p51cd36bc3ea1.........), I'm getting this error:
redis.exceptions.ConnectionError: Error while reading from ec2-.........compute-1.amazonaws.com:29940 : (10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)
It seems TLS is mandatory in Redis 6 (unfortunately, I'm not able to downgrade).
If I use storage_uri with ssl (rediss://:p51cd36bc3ea1.........), I'm getting another error:
redis.exceptions.ConnectionError: Error 1 connecting to ec2-............compute-1.amazonaws.com:29940. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1123).
I am able to connect to Redis and update keys/values with this code:
url = urlparse(os.environ.get("REDIS_URL"))
r = redis.Redis(host=str(url.hostname), port=url.port, username=url.username, password=url.password, ssl=True, ssl_cert_reqs=None)
r.set('Key', value, ex=86400)
Is there a way to make flask-limiter use the same method for connecting?
Finally, if I change ssl_cert_reqs="required" to ssl_cert_reqs="none" in class SSLConnection(Connection) (under site-packages\redis\connection.py), everything works, but I guess this is not a good solution, as I don't want to change installed source files.
I'm implementing limits in FastAPI to have more control over the rate-limiting logic than is possible with other libraries. I have the implementation below:
limiter = FixedWindowElasticExpiryRateLimiter(storage_from_string("async+redis://localhost:6379"))
def app_rate_limit(func):
    @wraps(func)
    async def wrapper(*args, **kwargs):
        is_allowed = await limiter.hit(parse('1/second'), "random-key")
        if not is_allowed:
            raise HTTPException(status_code=429, detail="Too many requests")
        return await func(request, *args, db=db, **kwargs)
    return wrapper
And in the fastapi route I'm attaching the above decorator
@router.post(...)
@app_rate_limit
def route(...):
    pass
It's giving the error below:
File "/Users/anandtripathi/.virtualenvs/verification-backend-CPjzdNsq-py3.8/lib/python3.8/site-packages/limits/aio/strategies.py", line 196, in hit
amount = await self.storage.incr(
ReferenceError: weakly-referenced object no longer exists
I checked inside the limits code; it seems to be coming from storage initialization, but I'm not sure what I'm doing wrong.
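One possible explanation (an assumption, not confirmed against the library's source): the strategy keeps only a weak reference to the storage, so a storage created inline, with no other strong reference, is garbage-collected immediately. A pure-Python sketch of that failure mode and the keep-a-reference workaround, using a hypothetical FakeStorage:

```python
import weakref

class FakeStorage:
    """Stand-in for the limits storage object (hypothetical)."""
    def incr(self):
        return 1

# Failure mode: the instance created inline has no strong reference
# left after this line, so the proxy is already dead (in CPython).
dead_proxy = weakref.proxy(FakeStorage())

try:
    dead_proxy.incr()
    proxy_died = False
except ReferenceError:  # "weakly-referenced object no longer exists"
    proxy_died = True

# Workaround: bind the storage to a name that outlives the limiter.
storage = FakeStorage()
live_proxy = weakref.proxy(storage)
result = live_proxy.incr()
```

If this is the cause, assigning the result of storage_from_string() to a module-level variable before passing it to the limiter should avoid the error.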
The redis hit implementation seems to suffer from a race condition. If the code after the incr is not executed (e.g. Python crashes), then the expire command is never executed and the key will live indefinitely. This would result in false-positive rate limit checks / wrong stats. The Redis docs have a note about this here: http://redis.io/commands/incr#pattern-rate-limiter-2
Replacing incr + expire with a Lua script would fix this easily (though Lua scripting is not available on old Redis versions); I can come up with a PR if necessary.
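For reference, here is a pure-Python model of what such a script would do server-side in one atomic step; the function and store layout are hypothetical illustrations, not the limits implementation:

```python
# The Lua equivalent (running atomically on the Redis server) would
# be roughly:
#   local c = redis.call('INCR', KEYS[1])
#   if c == 1 then redis.call('EXPIRE', KEYS[1], ARGV[1]) end
#   return c

import time

def atomic_incr_with_expiry(store, key, expiry_seconds, now=None):
    """Pure-Python model of the atomic script: the counter and its
    expiry are set in a single step, so a crash between operations can
    never leave a counter without a TTL.
    `store` maps key -> (count, expires_at)."""
    now = time.time() if now is None else now
    count, expires_at = store.get(key, (0, now + expiry_seconds))
    if now >= expires_at:  # window elapsed: start a fresh one
        count, expires_at = 0, now + expiry_seconds
    store[key] = (count + 1, expires_at)
    return store[key][0]
```

Because the increment and expiry are a single step, the "orphaned counter with no TTL" state described above cannot occur.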
It would be awesome to have the following options in limits that can be passed to it via other packages (such as flask_limiter).
(Feature requests originated in pull #1)
Hello limits team. I think there is a problem when the hit method is called with a cost higher than the amount configured in the limit. I tested it with the following code:
from limits import storage
from limits import strategies
from limits import parse
# my_storage = storage.RedisStorage("redis://localhost:6379/1")
my_storage = storage.MemoryStorage()
window = strategies.MovingWindowRateLimiter(my_storage)
ten_per_minute = parse("10/minute")
window.clear(ten_per_minute, "test_namespace", "foo")
assert True == window.hit(ten_per_minute, "test_namespace", "foo", cost=5)
assert False == window.hit(ten_per_minute, "test_namespace", "foo", cost=100)
In the script the limit is created as 10 hits per minute. I first consume 5 with the first hit call and it returns True (as expected). Then I try to consume 100, which should not be permitted, but hit returns True again.
The error that occurs is:
Traceback (most recent call last):
File "/Users/aarmoa/Library/Application Support/JetBrains/PyCharm2022.3/scratches/test_limits.py", line 15, in <module>
assert False == window.hit(ten_per_minute, "test_namespace", "foo", cost=100)
AssertionError
I have only reproduced the issue with MovingWindowRateLimiter, but I think it might be happening for all storages (I reproduced the error both with MemoryStorage and with RedisStorage).
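The acceptance rule I would expect is the following invariant (a sketch of the expected behavior, not the limits implementation; window_allows is a hypothetical helper):

```python
def window_allows(current_count, limit_amount, cost=1):
    """Expected invariant: a hit of `cost` succeeds only when the
    entire cost fits inside the remaining window capacity (sketch)."""
    return current_count + cost <= limit_amount
```

Under this rule, the second hit in the script above (5 already consumed, cost=100, limit 10) must be rejected.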
Repro:
uri = "redis+cluster://:testpassword@host:123,host:456"
RedisClusterStorage(uri=uri)
will throw:
limits/storage/redis_cluster.py in __init__(self, uri, **options)
53 cluster_hosts = []
54 for loc in parsed.netloc.split(","):
---> 55 host, port = loc.split(":")
56 cluster_hosts.append((host, int(port)))
57
ValueError: too many values to unpack (expected 2)
I think this is because the netloc includes the password portion, so the split on ':' fails.
limits version 3.1.5, python 3.9.
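A sketch of a possible parsing fix (relying only on urlparse semantics; not the actual patch, and host1/host2 are placeholders): strip the userinfo portion before splitting the host list.

```python
from urllib.parse import urlparse

uri = "redis+cluster://:testpassword@host1:123,host2:456"  # hosts are placeholders
parsed = urlparse(uri)
# parsed.netloc == ':testpassword@host1:123,host2:456', so splitting the
# first comma-separated piece on ':' yields three parts and unpacking
# fails. Dropping the userinfo (everything up to the last '@') first
# avoids that:
hosts_part = parsed.netloc.rsplit("@", 1)[-1]
cluster_hosts = [
    (host, int(port))
    for host, port in (loc.split(":") for loc in hosts_part.split(","))
]
```

The same rsplit works when no credentials are present, since rsplit("@", 1)[-1] then returns the whole netloc unchanged.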
Is the library thread-safe? The docs don't mention it, but I would assume so, because it's used in contexts that are normally multi-threaded (e.g. Flask-Limiter).
However, when I was using the library in a multi-threaded context (with high concurrency), I randomly ran into:
...
File "/home/musttu/.pyenv/versions/his/lib/python3.5/site-packages/limits/strategies.py", line 130, in hit
self.storage().incr(item.key_for(*identifiers), item.get_expiry())
File "/home/musttu/.pyenv/versions/his/lib/python3.5/site-packages/limits/storage.py", line 160, in incr
self.__schedule_expiry()
File "/home/musttu/.pyenv/versions/his/lib/python3.5/site-packages/limits/storage.py", line 148, in __schedule_expiry
self.timer.start()
File "/home/musttu/.pyenv/versions/3.5.5/lib/python3.5/threading.py", line 840, in start
raise RuntimeError("threads can only be started once")
RuntimeError: threads can only be started once
Looking at https://github.com/alisaifee/limits/blob/master/limits/storage.py#L62 there seems to be a default lock object, but it never seems to be used. In that case, https://github.com/alisaifee/limits/blob/master/limits/storage.py#L146-L148 is not locking anything, so the above can happen.
What is the library's take on thread-safety? I didn't go deep into code, so I may have gotten something wrong.
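The fix I would expect (a sketch, not the library's code; ExpiryScheduler is a hypothetical stand-in for the memory storage's expiry machinery): take the lock around the check-and-start so two threads never race on the same Timer.

```python
import threading

class ExpiryScheduler:
    """Sketch of a thread-safe expiry scheduler: the lock makes the
    check-and-start atomic, so Timer.start() is called at most once
    per Timer object even under heavy concurrency."""

    def __init__(self, interval, callback):
        self.interval = interval
        self.callback = callback
        self.lock = threading.Lock()
        self.timer = None

    def schedule(self):
        with self.lock:
            # Without the lock, two threads can both see a dead timer
            # and both call start() on the same Timer -> RuntimeError.
            if self.timer is None or not self.timer.is_alive():
                self.timer = threading.Timer(self.interval, self.callback)
                self.timer.daemon = True
                self.timer.start()
```

With this pattern, many threads can call schedule() concurrently without ever triggering "threads can only be started once".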
Connecting to a connection pool would be very useful. I have an issue with my application reaching the maximum number of Redis clients.
(See title.)
Apparently if you set an absurdly large limit, the script cannot be evaluated; try something like 98765432198765432198765 per 3600 seconds.
It gives a pretty informative traceback, but I seem to have lost it. I think the Lua number type is responsible for this, so it would be better to raise an exception when the limit object is constructed.
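This is consistent with Lua 5.1 numbers being IEEE-754 doubles, which represent integers exactly only up to 2**53; the reported amount is far beyond that. A sketch of a constructor-time guard (validate_amount is a hypothetical helper, not the limits API):

```python
MAX_LUA_EXACT_INT = 2 ** 53  # doubles lose integer precision beyond this

def validate_amount(amount):
    """Hypothetical guard: reject limits whose amount cannot be
    represented exactly by a Lua 5.1 number (an IEEE-754 double)."""
    if amount > MAX_LUA_EXACT_INT:
        raise ValueError(
            f"limit amount {amount} exceeds 2**53 and cannot be "
            "evaluated exactly by the server-side Lua script"
        )
    return amount

# The amount from the report really is irrepresentable as a double:
reported = 98765432198765432198765
precision_lost = int(float(reported)) != reported
```

Raising at construction time turns a confusing server-side script failure into an immediate, local error.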
Hi,
I'm starting to test flask-limiter in one of our services and, while testing, found that limits' MemcachedStorage doesn't support multiple hosts by default. See https://github.com/alisaifee/limits/blob/master/limits/storage.py#L610: get_client uses module.Client(*hosts), and by default module is pymemcache.client, which only supports a single host.
Maybe the docs should be fixed to point out that only a single memcached host is supported unless a custom client_getter is provided?
P.S.: I worked around this with a custom client_getter that uses pymemcache.client.hash.HashClient, but now I'm hitting a different issue with flask-limiter.
Hi,
In my use case I need to perform increases of different values. It would be great to have that supported in the public API. Do you have any plans for that? Would you merge a PR for this feature?
Would it be useful to have a file-system backend? I currently run a Flask app on Apache, which spins up a pool of 7-30 processes to respond to requests, so the in-memory storage is clearly out. I could consider using Redis or Memcached, but I don't have those installed yet, and a file-system backend could be faster, especially if it just wrote to a /tmp RAM drive. I know those tools are solid, but they are also slower and more complicated.
I could try to implement it, perhaps leveraging some code from flask-caching, which has a FS backend.
Hi,
I was testing out the async mode so I can add it to the Falcon-limiter package, and wanted to test with a password-protected Redis instance, but failed.
I went to check and saw that coredis now supports authentication (including username/password as of v2.2.1 - thank you @alisaifee!!!), see https://coredis.readthedocs.io/en/stable/release_notes.html?highlight=password#v2-2-1
I know that async support is still experimental, but I am hoping this could still be fixed.
Thanks!
Below is the code and the error (the issue is the same if the password is provided via the RATELIMIT_STORAGE_OPTIONS):
import asyncio
from limits import parse
from limits.storage import storage_from_string
from limits.aio.strategies import MovingWindowRateLimiter
redis = storage_from_string("async+redis://:MyPassword@localhost:6379")
moving_window = MovingWindowRateLimiter(redis)
one_per_minute = parse("1/minute")
async def hit():
    return await moving_window.hit(one_per_minute, "test_namespace", "foo")
loop = asyncio.get_event_loop()
loop.run_until_complete(hit())
loop.close()
The error:
Traceback (most recent call last):
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 468, in connect
await self._connect()
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 728, in _connect
await self.on_connect()
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 497, in on_connect
if nativestr(await self.read_response()) != "OK":
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 529, in read_response
raise response
coredis.exceptions.ResponseError: wrong number of arguments for 'auth' command
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/client.py", line 221, in execute_command
await connection.send_command(*args)
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 564, in send_command
await self.connect()
File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/coredis/connection.py", line 472, in connect
raise ConnectionError()
coredis.exceptions.ConnectionError
EDIT: I just realized that maybe I should have raised this against coredis instead?