jneight / django-db-geventpool
Another DB pool using gevent
License: Apache License 2.0
Installation fails on Ubuntu 16.04 with the latest pip and Python 3.6 via Docker:
django-db-geventpool-3.0.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-6qupugcx/django-db-geventpool/setup.py", line 14, in <module>
long_description=open("README.rst").read(),
File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 873: ordinal not in range(128)
This appears to be because one or more non-ASCII characters are in the README.rst file, in particular the em dash (byte 0xe2, "—") found on line 27 (....._—see ....).
Did you drop Django 2.2 (the current stable and LTS version)?
If not, then why did you put Django 3 in the dependencies? It's removing my Django (2.2) and installing 3.1.
The loggers used in this project use the "django" name. This is not really helpful if one wants to filter the geventpool events in or out. It would be helpful to rename the logger to either django.geventpool or geventpool. The former could be propagated to and managed by a parent "django" logger.
I'm having both a django web process and a celery worker running.
My procfile looks like:
web: gunicorn project.wsgi --workers 3 -k gevent --worker-connections 100 --config gunicorn_config.py
worker: celery -A project worker --pool=gevent --concurrency=500
The web process is using gunicorn, which patches psycopg as explained in the readme. However, as celery is also using gevent, do I need to patch this as well?
Thanks!
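A hedged sketch of how the celery worker process could be patched (the module name and placement are assumptions for illustration; the readme only confirms that patching is needed wherever gevent is in use, and gunicorn's gevent worker patches only the web process):

```python
# celery.py (illustrative): if the worker runs with --pool=gevent, psycopg2
# must be patched inside the celery worker process as well -- the patching
# done for the gunicorn web process does not carry over.
from psycogreen.gevent import patch_psycopg

patch_psycopg()  # must run before any database connection is opened

from celery import Celery

app = Celery("project")
```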
Hello, I'm running some experiments with Django 3 under gevent.
I've created an endpoint that executes a pg_sleep
to fake slow queries; then I used vegeta to test my setup under high load.
My MAX_CONNS was 10.
But some of my API responses were 500:
Traceback (most recent call last):
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/danilo/Code/django_db__pool_issues_example/app/sleep_db.py", line 9, in sleep_in_db
with connection.cursor() as cursor:
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/backends/base/base.py", line 260, in cursor
return self._cursor()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/backends/base/base.py", line 236, in _cursor
self.ensure_connection()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django/db/backends/base/base.py", line 197, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/base.py", line 49, in get_new_connection
self.connection = self.pool.get()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 48, in get
conn = self.create_connection()
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 84, in create_connection
conn = self.connect(*self.args, **self.kwargs)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/home/danilo/.local/share/virtualenvs/django_db__pool_issues_example-SoJGTr3V/lib/python3.6/site-packages/psycogreen/gevent.py", line 32, in gevent_wait_callback
state = conn.poll()
django.db.utils.OperationalError: FATAL: sorry, too many clients already
I've uploaded a project to show it.
https://github.com/dnp1/django_db_gevent_pool_issues_example
My attack was:
echo "GET http://localhost:8000/sleep_in_db" | vegeta attack -duration=10s --rate 100 | tee results.bin | vegeta report
When I watched the database:
SELECT * FROM "pg_stat_activity" WHERE "client_addr" = '172.26.0.1'
there were 99 connections from my application.
I'm using Python 3.6.10 (CPython) on Linux.
My Postgres is running inside a Docker container.
Hello guys...
I'm currently working on a legacy codebase with the following configuration:
python==3.8.17
django==3.1.14
django-db-geventpool==4.0.1
psycopg2==2.9.3
psycogreen==1.0.2
And, since last week's deployment, we have been experiencing the following error, randomly, in all of our endpoints:
psycopg2.ProgrammingError: set_session cannot be used inside a transaction
When checking the bug report, I can see that it breaks in the following piece of code:
# file: django/db/backends/postgresql/base.py, in _set_autocommit, at line 277
def _set_autocommit(self, autocommit):
    with self.wrap_database_errors:
        self.connection.autocommit = autocommit
# autocommit is True
# self = <django_db_geventpool.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x7fa249656550>
Could the issue be related to this library/subdependency or Django subdependency?
EDIT 1: We rolled back to the previous version, and the error seems to persist.
Trying to figure out what this is about...
Traceback (most recent call last):
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/transaction.py", line 394, in inner
return func(*args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 401, in dispatch
response = self.handle_exception(exc)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 389, in dispatch
self.initial(request, *args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 318, in initial
self.perform_authentication(request)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 268, in perform_authentication
request.user
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/request.py", line 226, in user
self._authenticate()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/request.py", line 401, in _authenticate
user_auth_tuple = authenticator.authenticate(self)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/rest_framework/authentication.py", line 117, in authenticate
if not user or not user.is_active:
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/utils/functional.py", line 224, in inner
self._setup()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/utils/functional.py", line 357, in _setup
self._wrapped = self._setupfunc()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 23, in <lambda>
request.user = SimpleLazyObject(lambda: get_user(request))
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 11, in get_user
request._cached_user = auth.get_user(request)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 158, in get_user
user = backend.get_user(user_id)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/contrib/auth/backends.py", line 69, in get_user
return UserModel._default_manager.get(pk=user_id)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/manager.py", line 92, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/query.py", line 351, in get
num = len(clone)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/query.py", line 122, in __len__
self._fetch_all()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/query.py", line 966, in _fetch_all
self._result_cache = list(self.iterator())
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/query.py", line 265, in iterator
for row in compiler.results_iter():
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 700, in results_iter
for rows in self.execute_sql(MULTI):
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
DatabaseError: error with no message from the libpq
If gevent's worker_connections attribute is greater than MAX_CONNS, concurrent requests can result in more database connections than MAX_CONNS allows.
To prevent self.size from changing while the connection is still established, the following change fixed the problem for me:
from gevent.lock import RLock

class DatabaseConnectionPool(object):
    def __init__(self, maxsize=100, reuse=100):
        ...
        self.lock = RLock()

    def get(self):
        with self.lock:
            try:
                if self.size >= self.maxsize or self.pool.qsize():
                    ...
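To make the race concrete, here is a minimal, self-contained sketch of the locked check-then-create pattern. It uses threading.RLock so it runs anywhere; the actual patch uses gevent.lock.RLock. The attribute names (size, pool, maxsize) mirror the tracebacks, but the class itself is illustrative, not the library's implementation:

```python
import queue
import threading

class BoundedPool:
    """Sketch: guard the size check and increment with one lock so that
    concurrent get() calls cannot both pass the check and over-allocate."""

    def __init__(self, maxsize=10, connect=lambda: object()):
        self.maxsize = maxsize
        self.size = 0                  # connections created so far
        self.pool = queue.Queue()      # idle connections ready for reuse
        self.lock = threading.RLock()
        self.connect = connect         # factory standing in for psycopg2.connect

    def get(self):
        with self.lock:                # check-then-act is atomic now
            if self.pool.qsize():
                return self.pool.get()
            if self.size >= self.maxsize:
                raise RuntimeError("pool exhausted")
            self.size += 1
        return self.connect()          # the slow connect can run outside the lock

    def put(self, conn):
        self.pool.put(conn)            # return an idle connection to the pool
```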
The docs mention that connections don't close automatically in spawned greenlet workers that are not tied to a request cycle.
I'm trying to make my celery workers safe with the @close_connections decorator, but I don't know how (and I'm having trouble finding any docs mentioning worker customization).
I'm happy to submit a PR with an example, if someone can help me do it in the first place!
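A sketch of how the decorator is applied in other reports in this thread (the import path is an assumption based on django_db_geventpool.utils appearing elsewhere here; note the docs use the singular close_connection, and the task name below is hypothetical):

```python
# Illustrative only: @app.task must be the outermost decorator so celery
# registers the already-wrapped function; close_connection then returns the
# pooled connection when the greenlet finishes instead of leaking it.
from django_db_geventpool.utils import close_connection

@app.task
@close_connection
def refresh_report(report_id):  # hypothetical task name
    ...
```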
When django-db-geventpool is used without gevent (eventlet only), the RLock function returns the nullcontext function itself, which does not support the context-manager protocol, so execution fails:
File "/home/user/mainapp/api/.venv/lib/python3.9/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 58, in get
if self.size >= self.maxsize or self.pool.qsize():
File "/home/user/mainapp/api/.venv/lib/python3.9/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 53, in size
with self.lock:
AttributeError: __enter__
Example:
from django_db_geventpool.utils import nullcontext

def RLock():
    return nullcontext

with RLock()():  # succeeds
    print(1)

with RLock():  # fails with: AttributeError: __enter__
    print(1)
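One possible fix, sketched with the stdlib contextlib.nullcontext: return an instance rather than the class, so a bare with RLock(): works.

```python
from contextlib import nullcontext

def RLock():
    # returning an *instance* makes RLock() directly usable as a context manager
    return nullcontext()

with RLock():  # now succeeds
    print(1)
```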
And if not, how can I bulk-save objects (not objects.update())?
Hi and thanks for your hard work.
We still want the battle-tested synchronous Django, and it is worthless (at least for me) without gevent; now we need support for psycopg3.
This is my bad, because I introduced the version number 0.85. Apparently according to setup.py versioning and pip's behavior, version 0.85 > 0.9. Therefore if you had 0.85 installed, you will never automatically pick up the upgrade to 0.9.
See http://pythonhosted.org/setuptools/setuptools.html#specifying-your-project-s-version
Given that 0.85 was out for a limited time window, the solution might be to just delete it from PyPI. The other option is to pick a version number greater than 0.85, such as 0.90.
Learn something new every day I guess :/
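The comparison can be reproduced with a naive dotted-integer parse (a simplification of how setuptools and pip compare release segments):

```python
def parse(version: str) -> tuple:
    """Split a dotted version into integer components for comparison."""
    return tuple(int(part) for part in version.split("."))

# '0.85' becomes (0, 85), which sorts *after* (0, 9) -- so pip never
# offers 0.9 as an upgrade over an installed 0.85.
assert parse("0.85") > parse("0.9")
assert parse("0.90") > parse("0.85")  # hence the suggested 0.90
```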
As the title says, I can see the test_ database get created, but no tables are ever created in it. I can fix this by altering the get_new_connection method of DatabaseWrapperMixin16 in backends/postgresql_psycopg2/base.py to set the pool kwargs from conn_params before getting the pool.
def get_new_connection(self, conn_params):
    if self.connection is None:
        self.pool.kwargs['database'] = conn_params['database']
        self.pool.kwargs['host'] = conn_params['host']
        self.pool.kwargs['user'] = conn_params['user']
        self.pool.kwargs['password'] = conn_params['password']
        self.pool.kwargs['port'] = conn_params['port']
        self.connection = self.pool.get()
    return self.connection
That may not be the correct place to fix this and may in fact cause other issues.
The question is: if I set a CONN_MAX_AGE greater than 0, would it be respected?
Would Django close that connection after that amount of time?
I am asking because I want my system to take advantage of all the connections allowed by the pool limit, but I also want that number to go down when the connections are no longer needed.
Since #50 was merged, all JSONFields return JSON as a str instead of a dict with Django 2.2 (<3.0 actually; it also fails with Python 2.7 and Django 1.1). I updated the test to demonstrate the problem: fdemmer@73ea274
Considering that Django 2.2 is LTS until April 2022, I'd recommend another 3.2.x release rolling back the change and a new 3.3.x or 4.x release with Django 3.x compatibility and documentation of the breaking change.
Would a PR with the updated test (and at least updated readme/changelog regarding compatibility) be welcome?
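Until the releases are sorted out, a defensive shim can normalize either representation (a workaround sketch, not part of the library's API):

```python
import json

def as_dict(value):
    """Accept either the pre-#50 dict or the post-#50 str form of a JSONField."""
    return json.loads(value) if isinstance(value, str) else value

assert as_dict('{"a": 1}') == {"a": 1}   # str form (Django < 3.0 after #50)
assert as_dict({"a": 1}) == {"a": 1}     # dict form
```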
Django is getting closer to its 1.7 release, and django-db-geventpool should also be available for 1.7 users.
Upon first installing django-db-geventpool I set MAX_CONNS to the same value as the max_connections setting in PostgreSQL, because that's what the readme seemed to indicate.
After learning more about pooling and continuing to experience the issue this library was supposed to resolve (i.e., benoitc/gunicorn#996), I realized that MAX_CONNS is probably supposed to be the maximum connections per pool, so I lowered the number dramatically. Could you please confirm this understanding and edit the readme accordingly if it's correct?
Following the documentation on decorating celery tasks with @close_connection, I've had some difficulty running Django tests with pytest as the runner.
Here's the code (it's pretty simple):
test.py
@patch("saleor.payment.models.Subscription.get_detail")
def test_handle_invoice_payment_succeeded(self, retrieve_mock):
    retrieve_mock.return_value = Mock(
        status="active",
        current_period_end=datetime.now().timestamp(),
        current_period_start=datetime.now().timestamp(),
        cancel_at_period_end=False,
    )
    update_subscription(Events.invoice.payment_succeeded)
    subscription = Subscription.objects.get(
        stripe_id=Events.invoice.payment_succeeded
    )
    self.assertEqual(subscription.status, "active")
tasks.py
@app.task
@close_connection
def update_subscription(sub_id):
    """Updates a subscription to match what's in Stripe."""
    try:
        subscription = Subscription.objects.get(stripe_id=sub_id)
    except Subscription.DoesNotExist:
        logger.exception("update_subscription_failure")
        return
    try:
        subscription.update()
    except StripeError:
        logger.exception("update_subscription_failure")
    else:
        subscription.save()
When running pytest I get these errors:
self = <django_db_geventpool.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x7f9f90bb1ac8>, rollback = True

    def set_rollback(self, rollback):
        """
        Set or unset the "needs rollback" flag -- for *advanced use* only.
        """
        if not self.in_atomic_block:
            raise TransactionManagementError(
>               "The rollback flag doesn't work outside of an 'atomic' block.")
E   django.db.transaction.TransactionManagementError: The rollback flag doesn't work outside of an 'atomic' block.
The test fails, but when I paste the task's functionality directly into the test and don't actually call the task function, it passes.
Is this expected or am I doing something wrong?
Please add a changelog. I'd like to know what changed between 1.2.2 and 2. Also, 2 doesn't follow SemVer.
Hi
Using this pool, gevent, and psycogreen patching, I constantly get errors when running "manage.py test".
The problem seems to be that when django closes a db connection it actually just goes back to the pool (instead of being closed). This means that the connections are still active and then postgresql blocks the "DROP DATABASE" statement that is issued.
I do understand that this is not a production problem (you would never drop a database there), but it still hampers testing quite a bit (my issue is that I'm using gevent explicitly in celery tasks which I need to test, so simply turning off gevent when environment==DEV would not help).
Do you have any clue about a good fix for this?
Traceback (most recent call last):
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/transaction.py", line 394, in inner
return func(*args, **kwargs)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/transaction.py", line 346, in __exit__
connection.close()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/transaction_hooks/mixin.py", line 80, in close
super(TransactionHooksDatabaseWrapperMixin, self).close(*a, **kw)
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django_db_geventpool/backends/postgresql_psycopg2/base.py", line 163, in close
self.validate_thread_sharing()
File "/Users/rajiv/Code/zdiscover17/.venv/lib/python2.7/site-packages/django/db/backends/__init__.py", line 515, in validate_thread_sharing
% (self.alias, self._thread_ident, thread.get_ident()))
DatabaseError: DatabaseWrapper objects created in a thread can only be used in that same thread. The object with alias 'default' was created in thread id 140735187129104 and this is thread id 4350867088.
Javier and Rajiv,
Thank you very much for creating and maintaining this package!
I'm not sure if you're aware of this, but may I please suggest an update to tests/models.py?
django.contrib.postgres.fields.JSONField has been removed in favor of django.db.models.JSONField, which causes Django to fail if tests is in INSTALLED_APPS.
django.db.models.JSONField appears to exist from version 3.1, from quickly looking through their repository.
Thank you very much for your time and attention.
Daniel
I'm moving my app to the new psycopg2-binary package, as recommended by the maintainers, but django-db-geventpool still requires psycopg2.
I'd like to recommend either switching install_requires to psycopg2-binary or removing the requirement altogether in favor of a runtime check (handling the ImportError). I'm happy to put together a PR; I just want some input on the preferred direction.
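The runtime-check option could look something like this generic helper (a sketch; note that psycopg2 and psycopg2-binary both install the same psycopg2 module, so checking one name suffices):

```python
import importlib

def require_any(*module_names):
    """Return the first importable module, e.g. require_any('psycopg2')."""
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(
        "one of these packages is required: " + ", ".join(module_names)
    )
```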
Not really sure where this issue belongs but it's very similar in result to: benoitc/gunicorn#527
I find that when I invoke psycogreen.gevent.patch_psycopg() it results in the following errors from Django:
ImproperlyConfigured: The included urlconf 'urls' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import.
It only appears to occur for the first request on the worker. Subsequent requests appear to be ok.
If I don't do the patch_psycopg, the errors go away.
I'm using Django 1.7.1, Gunicorn 18.0, gevent 1.0.1 and psycogreen 1.0. I'm also using Tastypie for APIs.
Any thoughts on this?
I opened an issue about a library I'm using for my Django project: "Async DataBaseHandler (Sink?) Django", Delgan/loguru#879.
Searching the internet, I found a question on Stack Overflow that describes exactly what happens: "django - SynchronousOnlyOperation from celery task using gevent execution pool".
Versions:
celery==5.2.7
django==3.2.19
Commands:
celery -A scheduler.celery beat -l INFO -f $path/celery/beat.log --uid=nobody --gid=nogroup --detach
celery -A scheduler.celery worker -l INFO -f $path/celery/worker.log --uid=nobody --gid=nogroup -O fair --pool=gevent --hostname=celery-workerAnalyser --autoscale=30,10 --without-heartbeat --without-gossip # --without-mingle
The main question is: why isn't this exception raised in 100% of cases, but only in a minority of them?
I figured out that the Django ORM is not greenlet-safe under gevent.
I already have a PG pooler, pgbouncer, so I would like to use its pooling, but I would like to understand if this library could help me with the error that celery throws when trying to query.
I already tried this comment: #61 (comment), but nothing changes.
Looks like there's a while until this must be fixed, but just a heads-up. :)
/usr/local/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/base.py:13: RemovedInDjango30Warning: The django.db.backends.postgresql_psycopg2 module is deprecated in favor of django.db.backends.postgresql.
I have MAX_CONNS set to 15, but for some reason it is reaching 20+ connections on my Google Cloud App Engine instance. https://imgur.com/a/wKcPC
OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections
Traceback (most recent call last):
File "/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 126, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/env/lib/python3.6/site-packages/django/views/decorators/cache.py", line 31, in _cache_controlled
response = viewfunc(request, *args, **kw)
File "/home/vmagent/app/checker/views.py", line 34, in search
context = process_summoner(lookup_summoner(parsed_name_result, region), name)
File "/home/vmagent/app/checker/views.py", line 193, in lookup_summoner
if summoner_set.count() == 1:
File "/env/lib/python3.6/site-packages/django/db/models/query.py", line 387, in count
return self.query.get_count(using=self.db)
File "/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 491, in get_count
number = obj.get_aggregation(using, ['__count'])['__count']
File "/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 476, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/env/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1061, in execute_sql
cursor = self.connection.cursor()
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 255, in cursor
return self._cursor()
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 232, in _cursor
self.ensure_connection()
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/env/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/base.py", line 159, in get_new_connection
self.connection = self.pool.get()
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 50, in get
new_item = self.create_connection()
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 84, in create_connection
conn = self.connect(*self.args, **self.kwargs)
File "/env/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/env/lib/python3.6/site-packages/psycogreen/gevent.py", line 29, in gevent_wait_callback
state = conn.poll()
django.db.utils.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections
Traceback (most recent call last):
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/base.py", line 159, in get_new_connection
self.connection = self.pool.get()
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 50, in get
new_item = self.create_connection()
File "/env/lib/python3.6/site-packages/django_db_geventpool/backends/postgresql_psycopg2/psycopg2_pool.py", line 84, in create_connection
conn = self.connect(*self.args, **self.kwargs)
File "/env/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/env/lib/python3.6/site-packages/psycogreen/gevent.py", line 29, in gevent_wait_callback
state = conn.poll()
psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections
Django settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django_db_geventpool.backends.postgresql_psycopg2',
        'NAME': 'db1',
        'USER': 'lolnames',
        'PASSWORD': 'XXXXXXXXXXXXXXXXXX',
        'HOST': '/cloudsql/lolnames:us-central1:loldb',
        'PORT': '5432',
        'ATOMIC_REQUESTS': False,
        'CONN_MAX_AGE': 0,
        'OPTIONS': {
            'MAX_CONNS': 15,
            'connect_timeout': 5,
        }
    }
}
requirements.txt:
pip==9.0.1
mysqlclient==1.3.12
Django==2.0.1
django-db-geventpool==2
django-removewww==0.1.2
gevent==1.2.2
gunicorn==19.7.1
psycopg2==2.7.3.2
python-dateutil==2.6.1
pytz==2017.3
regex==2017.12.12
requests-toolbelt==0.8.0
requests[security]
six==1.11.0
wheel==0.30.0
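One likely explanation, sketched as arithmetic: MAX_CONNS caps each worker process's pool, not the server-wide total, so limits multiply by worker count (the worker count below is an assumption for illustration, not taken from the report):

```python
def worst_case_connections(workers: int, max_conns: int) -> int:
    """Upper bound on simultaneous PostgreSQL connections across all pools,
    since each forked worker process owns an independent pool."""
    return workers * max_conns

# e.g. two gunicorn workers with MAX_CONNS=15 can legitimately hold 30
# connections, consistent with seeing 20+ despite a per-pool limit of 15.
assert worst_case_connections(2, 15) == 30
```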
We see that celery tasks, running under gevent, sometimes stall for several minutes (or indefinitely) before resuming. We run worker --concurrency=100 --pool=gevent.
Could this mean that the pool is exhausted and so any further connections hang until something becomes available? Is there a way we can debug what's going on or check the status of the pool?
Too many idle connections stay open in the background until the database's idle-connection timeout.
settings:
'CONN_MAX_AGE': 0,
'OPTIONS': {
    'MAX_CONNS': 1,
    'REUSE_CONNS': 1,
}
gunicorn_conf.py:
...
from psycogreen.gevent import patch_psycopg
from django.core.signals import request_finished

workers = 17
threads = 16
worker_class = 'gevent'

def post_fork(server, worker):
    patch_psycopg()

def worker_exit(server, worker):
    request_finished.send("greenlst")
....
There are 20 idle connections in the background
Not working with the upcoming Django 3.0, which will be released in less than one month. https://docs.djangoproject.com/en/dev/releases/3.0/