mher / flower
Real-time monitor and web admin for Celery distributed task queue
Home Page: https://flower.readthedocs.io
License: Other
Hi, any plan to add a graph for monitoring the queue length?
Thanks
When I entered an expiry date for a periodic task, Flower tried to show the task as revoked. But when I try to view its details, it gives the error below:
Traceback (most recent call last):
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/flower/views/tasks.py", line 18, in get
self.render("task.html", task=task)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/flower/views/__init__.py", line 25, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/Users/skecer/.virtualenvs/hede/lib/python2.7/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 111, in _execute
_tmp = task.name # task.html:29 (via base.html:52)
AttributeError: 'TaskModel' object has no attribute 'name'
![Screen Shot 2013-01-22 at 12 15 32 PM](https://f.cloud.github.com/assets/361370/345408/a19d9f64-9e21-11e2-9aff-ac4c86ff52ab.png)
![Screen Shot 2013-04-05 at 11 49 57 AM 2](https://f.cloud.github.com/assets/361370/345409/ac425040-9e21-11e2-9866-58b23ee72a8b.png)
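A defensive lookup in the template would avoid this crash. A minimal sketch, where `TaskModel` below is a stand-in for Flower's model (not the real class): a revoked task may never have received a task-received event, so `name` can be missing.

```python
class TaskModel(object):
    """Stand-in for flower's TaskModel: a task revoked before any
    task-received event arrives has no 'name' attribute."""
    pass

task = TaskModel()

# getattr with a default instead of task.name, which would raise
# AttributeError on this object
name = getattr(task, 'name', None) or 'UNKNOWN'
```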
Stacktrace:
It looks like you have found a bug! You can help to improve Celery Flower by opening an issue in https://github.com/mher/flower/issues
Traceback (most recent call last):
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/flower/views/tasks.py", line 18, in get
self.render("task.html", task=task)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/flower/views/__init__.py", line 25, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/opt/stow/python-2.7.2/lib/python2.7/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 111, in _execute
_tmp = task.name # task.html:29 (via base.html:52)
AttributeError: 'TaskModel' object has no attribute 'name'
The "Running tasks" field in the workers view shows an out-of-date value if a worker is offline.
I have a "div" task which divides two numbers. I invoked it with /api/task/apply-async/div with 1 and 0 parameters (which will cause an exception). The flower server had the following error (and returned 500):
[E 121016 09:14:18 web:1085] Uncaught exception GET /api/task/result/cca8490d-db6f-4623-ab66-ec5cae53c29a (127.0.0.1)
HTTPRequest(protocol='http', host='localhost:5555', method='GET', uri='/api/task/result/cca8490d-db6f-4623-ab66-ec5cae53c29a', version='HTTP/1.1', remote_ip='127.0.0.1', body='', headers={'Host': 'localhost:5555', 'Accept-Encoding': 'identity, deflate, compress, gzip', 'Accept': '*/*', 'User-Agent': 'python-requests/0.13.3 CPython/2.7.3 Linux/3.2.0-32-generic'})
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flower/api/tasks.py", line 60, in get
self.write(response)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 493, in write
chunk = escape.json_encode(chunk)
File "/usr/local/lib/python2.7/dist-packages/tornado/escape.py", line 90, in json_encode
return _json_encode(recursive_unicode(value)).replace("</", "<\\/")
File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 201, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 178, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: ZeroDivisionError('integer division or modulo by zero',) is not JSON serializable
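One possible fix on the Flower side would be to fall back to `repr()` for results that `json` can't encode, such as the exception instance above. A sketch (the `safe_result` name is mine, not Flower's):

```python
import json

def safe_result(value):
    """Return value if it is JSON-serializable, otherwise its repr()."""
    try:
        json.dumps(value)
        return value
    except TypeError:
        return repr(value)

# The failing case from the traceback: an exception instance as the result
result = safe_result(ZeroDivisionError('integer division or modulo by zero'))
body = json.dumps({'state': 'FAILURE', 'result': result})
```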
Exception trace:
# /var/virtualenvs/vitask/bin/flower --port=8008
Traceback (most recent call last):
File "/var/virtualenvs/vitask/bin/flower", line 8, in <module>
load_entry_point('flower==0.2.0', 'console_scripts', 'flower')()
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 318, in load_entry_point
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 2221, in load_entry_point
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 1954, in load
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/flower/__main__.py", line 11, in <module>
from flower.urls import handlers
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/flower/urls.py", line 5, in <module>
from .views.worker import (
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/flower/views/worker.py", line 6, in <module>
from ..models import WorkersModel, WorkerModel
File "/var/virtualenvs/vitask/lib/python2.6/site-packages/flower/models.py", line 4, in <module>
from collections import OrderedDict
ImportError: cannot import name OrderedDict
OrderedDict isn't available as part of collections on 2.6; flower should probably import OrderedDict from celery.utils.compat instead.
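The compatibility import the reporter suggests would look like this (that `celery.utils.compat` exports `OrderedDict` is taken from the report and depends on your Celery version):

```python
try:
    from collections import OrderedDict  # Python 2.7+
except ImportError:
    # Python 2.6 fallback suggested in the report (assumes the compat
    # module exists in the installed Celery version)
    from celery.utils.compat import OrderedDict

d = OrderedDict([('b', 1), ('a', 2)])  # insertion order is preserved
```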
In FlowerCommand, static method flower_option(arg), the line:
return name in options
raises the TypeError. Replacing the line with:
return hasattr(options, name)
works fine. This is probably due to backwards-incompatible changes to Tornado's OptionsParser, but I have no experience with the recent changes to that project.
This is probably caused by my bad setup, as I am having trouble seeing any workers in /workers
Traceback (most recent call last):
File "/home/jacob/everplaces.com/everplaces.com/local/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/home/jacob/everplaces.com/everplaces.com/local/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/home/jacob/everplaces.com/everplaces.com/local/lib/python2.7/site-packages/flower/views/workers.py", line 13, in get
workers = WorkersModel.get_latest(app).workers
File "/home/jacob/everplaces.com/everplaces.com/local/lib/python2.7/site-packages/flower/models.py", line 38, in get_latest
return WorkersModel(app)
File "/home/jacob/everplaces.com/everplaces.com/local/lib/python2.7/site-packages/flower/models.py", line 25, in __init__
pool = stat.get('pool') or {}
AttributeError: 'list' object has no attribute 'get'
It seems Celery and Celerybeat are running, according to ps aux | grep celery.
Also, tasks have State: SUCCESS.
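The AttributeError above happens because the inspect reply can be a list (an error reply) where a dict of stats is expected. A defensive sketch (`worker_pool` is my name, not Flower's):

```python
def worker_pool(stat):
    """Return the 'pool' section of a worker stats reply, tolerating the
    non-dict (error) replies that caused the AttributeError above."""
    if not isinstance(stat, dict):
        return {}
    return stat.get('pool') or {}
```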
http://localhost:5555/worker/MacBook-Air-de-Luis.local#tab-limits
When setting timeouts, soft or hard, Celery's response is:
[2013-02-22 18:32:47,203: INFO/MainProcess] New time limits for tasks of type service_bus_autos.tasks.general: soft=None hard=None
celery==3.0.13
django-celery==3.0.11
flower==0.4.2
service_bus_autos.tasks.general is a simple subtask executed via chords.
When I try to run flower and connect to my Django backend, the following error occurs:
$ flower --broker=django://localhost//
[I 130301 12:04:30 command:43] Visit me at http://localhost:5555
Traceback (most recent call last):
File "/Users/nateaune/Dropbox/code/venvs/dashboard/bin/flower", line 8, in <module>
load_entry_point('flower==0.4.2', 'console_scripts', 'flower')()
File "/Users/nateaune/Dropbox/code/venvs/dashboard/lib/python2.7/site-packages/flower/__main__.py", line 8, in main
flower.execute_from_commandline()
File "/Users/nateaune/Dropbox/code/venvs/dashboard/lib/python2.7/site-packages/celery/bin/base.py", line 87, in execute_from_commandline
return self.handle_argv(prog_name, argv[1:])
File "/Users/nateaune/Dropbox/code/venvs/dashboard/lib/python2.7/site-packages/flower/command.py", line 54, in handle_argv
return self.run_from_argv(prog_name, argv)
File "/Users/nateaune/Dropbox/code/venvs/dashboard/lib/python2.7/site-packages/flower/command.py", line 44, in run_from_argv
logging.info('Broker: %s', self.app.connection().as_uri())
AttributeError: 'App' object has no attribute 'connection'
Note: this error happens even if I don't use the --broker option.
I want to run Flower behind our Nginx proxy (which provides authentication, SSL, and a few other nice features for us) on our tools machine. I cannot currently redirect "/flower" to the flower app, though, because the flower app makes all of its URLs relative to the root URL; i.e., static links point to "/static/foo.js" rather than "static/foo.js".
Does this work when using eventlet as my worker pool? No workers are being displayed.
Not sure I have anything to add, we have some tasks that fail, and on clicking them we get this:
Traceback (most recent call last):
File "/var/venvs/staging/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/var/venvs/staging/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/var/venvs/staging/lib/python2.7/site-packages/flower/views/tasks.py", line 18, in get
self.render("task.html", task=task)
File "/var/venvs/staging/lib/python2.7/site-packages/flower/views/__init__.py", line 25, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/var/venvs/staging/lib/python2.7/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/var/venvs/staging/lib/python2.7/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/var/venvs/staging/lib/python2.7/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 111, in _execute
_tmp = task.name # task.html:29 (via base.html:52)
AttributeError: 'TaskModel' object has no attribute 'name'
The README for celery states that you can use the --broker flag to specify the broker being used with celery. However, this option does not exist.
Traceback (most recent call last):
File "/Volumes/Data/python_virtual_environments/env2/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.6', 'console_scripts', 'celery')()
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/__main__.py", line 14, in main
main()
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/bin/celery.py", line 941, in main
cmd.execute_from_commandline(argv)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/bin/celery.py", line 885, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/bin/base.py", line 175, in execute_from_commandline
return self.handle_argv(prog_name, argv[1:])
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/bin/celery.py", line 880, in handle_argv
return self.execute(command, argv)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/celery/bin/celery.py", line 855, in execute
return cls(app=self.app).run_from_argv(self.prog_name, argv)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/flower/command.py", line 10, in run_from_argv
return main(argv)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/flower/__main__.py", line 24, in main
parse_command_line(argv)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/tornado/options.py", line 348, in parse_command_line
return options.parse_command_line(args)
File "/Volumes/Data/python_virtual_environments/env2/lib/python2.7/site-packages/tornado/options.py", line 132, in parse_command_line
raise Error('Unrecognized command line option: %r' % name)
tornado.options.Error: Unrecognized command line option: 'broker'
How do I go about configuring flower to point to the broker and backend celery is using?
These are the versions I'm using
celery==3.0.6
celery-with-redis==3.0
flower==0.3.0
There is currently no authentication (username/password) required to use Flower. Flower provides the ability to view queues and to stop and restart workers. By securing access, we could remotely use and view Flower through the web interface.
75be696 3.1.0rc1 (Cipater)
celery/celery@d1ca495
Click on a worker name on the Workers page.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flower-0.4.1-py2.7.egg/flower/views/workers.py", line 27, in get
self.render("worker.html", worker=worker)
File "/usr/local/lib/python2.7/dist-packages/flower-0.4.1-py2.7.egg/flower/views/__init__.py", line 25, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/usr/local/lib/python2.7/dist-packages/tornado/template.py", line 261, in generate
return execute()
File "worker_html.generated.py", line 114, in _execute
if worker.stats['autoscaler']: # worker.html:82 (via base.html:52)
KeyError: 'autoscaler'
Some output if I manually inspect a worker:
import celery.app.control
app = celery.app.App()
i = app.control.inspect()
stats = i.stats()
stats['celery@foobarbaz'].keys()
>>> [u'clock', u'pid', u'broker', u'prefetch_count', u'total', u'pool']
It looks like your template's use of stats['consumer']['broker'] would break as well.
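A defensive sketch for the template/model side: read optional sections with `dict.get()` so workers that report no autoscaler (or no consumer stats) don't crash the page. The `stats` dict below mirrors only the keys this worker reported:

```python
# Only the keys reported by the inspected worker above
stats = {'clock': '100', 'pid': 25703, 'broker': {}, 'prefetch_count': 4,
         'total': {}, 'pool': {}}

# dict.get() instead of stats['autoscaler'] avoids the KeyError
autoscaler = stats.get('autoscaler')

# same pattern for the nested consumer/broker lookup mentioned above
broker = stats.get('consumer', {}).get('broker', {})
```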
Running on an Amazon Linux AMI.
Python 2.7, installed via pip.
Flower is connecting to a remote Redis queue.
It crashes after a few moments, with one user or no users connected via HTTP.
top usage:
25703 celmon 20 0 383m 29m 4940 S 3.7 0.4 1:07.58 celery
command started like:
celery flower --port=5001 --broker=redis://myredis.mycompany.com:6379/0 --inspect_timeout=800 --auth=[email protected],[email protected],[email protected]
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tornado/web.py", line 1077, in _execute
self.path_args, **self.path_kwargs)
File "/usr/lib/python2.7/site-packages/tornado/web.py", line 1892, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/flower/views/workers.py", line 13, in get
workers = WorkersModel.get_latest(app).workers
File "/usr/lib/python2.7/site-packages/flower/models.py", line 38, in get_latest
return WorkersModel(app)
File "/usr/lib/python2.7/site-packages/flower/models.py", line 33, in __init__
workername, [])),
File "/usr/lib/python2.7/site-packages/flower/models.py", line 31, in <lambda>
queues=map(lambda x: x['name'],
TypeError: string indices must be integers
which is followed by this message in the stdout:
[E 130415 19:09:37 ioloop:796] Error in periodic callback
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tornado/ioloop.py", line 794, in _run
self.callback()
File "/usr/lib/python2.7/site-packages/flower/views/update.py", line 47, in on_update_time
workers = WorkersModel.get_latest(app)
File "/usr/lib/python2.7/site-packages/flower/models.py", line 38, in get_latest
return WorkersModel(app)
File "/usr/lib/python2.7/site-packages/flower/models.py", line 25, in __init__
pool = stat.get('pool') or {}
AttributeError: 'list' object has no attribute 'get'
Hi,
Do you plan to release a new version of Flower to PyPI? A new version would help me package, install, and test Flower with the new url_prefix option.
Thx.
Hi,
When launching Flower on Python 2.6, I get string formatting errors, because Python versions prior to 2.7 require explicit positional arguments in string formatting.
Source: http://docs.python.org/2/library/string.html#format-string-syntax
Changed in version 2.7: The positional argument specifiers can be omitted, so '{} {}' is equivalent to '{0} {1}'.
The broken lines of code in Flower are:
./flower/views/__init__.py:61: base = "{}://{}/".format(self.request.protocol, self.request.host)
./flower/command.py:35: app_settings['static_url_prefix'] = '/{}/static/'.format(prefix)
They should be:
./flower/views/__init__.py:61: base = "{0}://{1}/".format(self.request.protocol, self.request.host)
./flower/command.py:35: app_settings['static_url_prefix'] = '/{0}/static/'.format(prefix)
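For reference, the corrected form works across versions: explicit positional indices are valid on Python 2.6 through 3.x, while the bare `{}` form only works on 2.7+.

```python
# Explicit positional indices, as in the fixed lines above
base = "{0}://{1}/".format("http", "localhost:5555")
static = '/{0}/static/'.format("flower")
```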
I had clicked on a task link that had failed, and got this. My task is pretty simple, just a function with a @celery.task decorator.
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/flower/views/tasks.py", line 15, in get
self.render("task.html", task=task)
File "/usr/lib/python2.6/site-packages/flower/views/__init__.py", line 16, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/usr/lib/python2.6/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 26, in _execute
_tmp = task.name # task.html:13 (via base.html:48)
AttributeError: 'TaskModel' object has no attribute 'name'
As far as I could see, there's no option to make flower listen on a specific IP address.
So it would be nice to have a "--bind" option, just like celerymon has, for instance.
If I want to expose Flower on the web and don't want to use OAuth to log in, I think there's currently no solution.
Here is a rough patch I made to urls.py to add HTTP Basic Auth on all requests. It certainly needs more work, to connect it to the config system for instance, but I guess you could use it as a base for adding support:
35a36,37
> import functools
> import base64
37c39
< handlers = [
---
> _handlers = [
84a87,147
>
> """
> This patch adds mandatory HTTP Basic Auth to all requests, except websockets
> """
>
> # http://kelleyk.com/post/7362319243/easy-basic-http-authentication-with-tornado
> def require_basic_auth(handler_class, auth):
>
> def _request_auth(handler):
> if hasattr(handler, "ws_connection"):
> return True # TODO, basic auth not supported in websockets
>
> handler.set_header('WWW-Authenticate', 'Basic realm=Flower')
> handler.set_status(401)
> handler._transforms = []
> handler.finish()
> return False
>
> def wrap_execute(handler_execute):
> def require_basic_auth(handler):
> auth_header = handler.request.headers.get('Authorization')
> if auth_header is None or not auth_header.startswith('Basic '):
> return _request_auth(handler)
>
> auth_decoded = base64.decodestring(auth_header[6:])
>
> username, password = auth_decoded.split(':', 1)  # split once, so the password may contain ':'
>
> if (auth(username, password)):
> return True
> else:
> return _request_auth(handler)
>
> def _execute(self, transforms, *args, **kwargs):
> if not require_basic_auth(self):
> return False
> return handler_execute(self, transforms, *args, **kwargs)
> return _execute
>
> handler_class._execute = wrap_execute(handler_class._execute)
> return handler_class
>
>
> def oxauth(username, password):
> return "%s:%s" % (username, password) == config.config["FLOWER_AUTH"]
>
>
> # Force-add httpauth to each handler
> handlers = []
> for h in _handlers:
> if len(h) > 2:
> handlers.append((h[0], require_basic_auth(h[1], oxauth), h[2]))
> else:
> handlers.append((h[0], require_basic_auth(h[1], oxauth)))
This is a feature request to allow flower to bind to another host other than localhost (127.0.0.1). This could be useful for servers that don't have a display or for monitoring from a remote server.
I am trying to run flower as a Daemon on Ubuntu.
However, it won't start with celeryd, and I have knocked up an /etc/init.d script, but that does not work properly either.
I am very new to this - so I suspect I am being stupid - but any hints on how to get this to run when a machine starts up, and to be able to start/stop it as a normal service, would be appreciated.
Cheers
Dan
On the Monitor tab, the graph is only displaying the traffic for my local (same server that runs rabbitMQ and flower) celery worker. It is not displaying the traffic for my remote (running on different server) celery worker.
I am sure I have started the remote worker correctly and have it configured correctly, because the other Flower tabs do show that the remote tasks are being executed successfully. And, my app is working fine.
Not sure if it is a factor or not, but each worker is assigned its own distinct queue. Also I should mention that the local worker task uses apply_async to send a task to the remote worker, in case this is unusual.
I'm new to python, Celery, and Flower. If you can offer suggestions on what I might be doing wrong, I'll sure appreciate it.
Thanks,
Steve
Hi,
Did I miss it, or is all the real-time data Flower fetches from workers not persisted on Flower's end?
Tommaso
I get this when I click on the UUID of a failed job from the Tasks page:
Traceback (most recent call last):
File "/opt/nextdoor-ve/lib/python2.6/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/flower/views/tasks.py", line 18, in get
self.render("task.html", task=task)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/flower/views/__init__.py", line 16, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/opt/nextdoor-ve/lib/python2.6/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 26, in _execute
_tmp = getattr(task, 'name') # task.html:13 (via base.html:48)
AttributeError: 'TaskModel' object has no attribute 'name'
The indentation is wrong at https://github.com/mher/flower/blob/master/flower/utils/broker.py#L56, which causes the Redis broker to call the base function.
>>> import celery
>>> celery.VERSION
(3, 0, 19)
>>> import flower
>>> flower.__version__
'0.5.0'
>>>
Traceback (most recent call last):
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/flower/views/tasks.py", line 18, in get
self.render("task.html", task=task)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/flower/views/__init__.py", line 16, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/home/navrav/.virtualenvs/navrav_env/local/lib/python2.7/site-packages/tornado/template.py", line 261, in generate
return execute()
File "task_html.generated.py", line 26, in _execute
_tmp = getattr(task, 'name') # task.html:13 (via base.html:48)
AttributeError: 'TaskModel' object has no attribute 'name'
I am using supervisor to start celery flower.
I have a weird problem where it shows the service started but I can't access the site in a browser (no error message, browser just sits there loading...)
If I ssh into the server and restart the flower service then I can access the site fine.
The background is I'm using Chef to provision a Vagrant vm. Here's what I see:
...
[Tue, 18 Sep 2012 14:39:53 +0000] INFO: Processing execute[supervisorctl restart enc-celeryd] action run (/tmp/vagrant-chef-1/chef-solo-1/cookbooks/supervisor/providers/service.rb line 63)
[Tue, 18 Sep 2012 14:39:56 +0000] INFO: execute[supervisorctl restart enc-celeryd] ran successfully
[Tue, 18 Sep 2012 14:39:56 +0000] INFO: Processing execute[supervisorctl restart enc-flower] action run (/tmp/vagrant-chef-1/chef-solo-1/cookbooks/supervisor/providers/service.rb line 63)
[Tue, 18 Sep 2012 14:39:58 +0000] INFO: execute[supervisorctl restart enc-flower] ran successfully
[Tue, 18 Sep 2012 14:39:58 +0000] INFO: Chef Run complete in 171.643989648 seconds
[Tue, 18 Sep 2012 14:39:58 +0000] INFO: Running report handlers
[Tue, 18 Sep 2012 14:39:58 +0000] INFO: Report handlers complete
(.venv)developer3:~/github/rbx_subtree/conf (master)$ vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
* Documentation: https://help.ubuntu.com/
54 packages can be updated.
27 updates are security updates.
Last login: Tue Sep 18 14:34:21 2012 from 10.0.2.2
vagrant@vagrant:~$ sudo supervisorctl avail
enc-celeryd in use manual 999:999
enc-flower in use manual 999:999
Chef shows that the very last action taken in the provisioning is to restart the flower service (which appears to be running)... but at this point I can't access the site.
vagrant@vagrant:~$ sudo supervisorctl restart enc-flower
enc-flower: stopped
enc-flower: started
vagrant@vagrant:~$ sudo supervisorctl avail
enc-celeryd in use manual 999:999
enc-flower in use manual 999:999
...now it works after restarting it a second time.
Any ideas what could possibly be wrong?
thanks!
It would be good if Flower supported retrying failed tasks.
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/flower/views/workers.py", line 27, in get
self.render("worker.html", worker=worker)
File "/usr/lib/python2.6/site-packages/flower/views/__init__.py", line 25, in render
super(BaseHandler, self).render(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 500, in render
html = self.render_string(template_name, **kwargs)
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 607, in render_string
return t.generate(**namespace)
File "/usr/lib/python2.6/site-packages/tornado/template.py", line 261, in generate
return execute()
File "worker_html.generated.py", line 352, in _execute
_tmp = absolute_url('/task/' + task) # worker.html:267 (via base.html:52)
TypeError: cannot concatenate 'str' and 'NoneType' objects
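A guard on the template side would avoid the crash when a task id is missing; a sketch (`task_url` is my name, not Flower's):

```python
def task_url(task_id):
    """Build the /task/<id> link, tolerating a missing id (which made
    '/task/' + task raise TypeError in the template above)."""
    if not task_id:
        return None
    return '/task/' + task_id
```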
If you set some filtering on Tasks, like State=FAILURE, you naturally want to push the Reset button afterwards. But as soon as you push Reset, limit=100 is gone, and this can lead to unexpected and bad results, including a browser crash, when you have a few million tasks.
The Reset button should restore the default limit and return the form to the same state as when first entering Tasks.
In case the broker URL contains the database password!
This is related to the flower REST services and WebSocket messages.
In case there is a task which returns a dictionary as its result, e.g.:
...
return {"sleepTime": executionTime, "hostname": "workerx"}
The JSON received as a result of a REST service call is serialized differently from the one in the WebSocket message:
http://localhost:5555/api/task/result/8a4da87b-e12b-4547-b89a-e92e4d1f8efd
will return result as:
"result": {"hostname": "workerx", "sleepTime": 4}
while the WebSocket
ws://localhost:5555/api/task/events/task-succeeded/
will send the onmessage string as:
"result": "{'hostname': 'workerx', 'sleepTime': 4}"
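Ideally the WebSocket message would carry the dict itself; until then, a client-side workaround is to parse the repr()-style string back into a dict with `ast.literal_eval` (this only works for results made of Python literals):

```python
import ast

# The string received over the WebSocket, as shown above
ws_result = "{'hostname': 'workerx', 'sleepTime': 4}"

# literal_eval safely parses the Python literal without executing code
parsed = ast.literal_eval(ws_result)
```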
django-celery 3.0.11
flower 0.4.1 (Development version as of 12/6/2012)
When I tried to view Tasks (localhost:5555/tasks), I got an error:
Line 105 of flower/models.py calls events_state.tasks_by_time(), but the function tasks_by_time() does not exist.
Seems that this is because events_state.tasks_by_time is only available in Celery 3.1 and later.
Compare:
Celery 3.0.12: http://docs.celeryproject.org/en/latest/reference/celery.events.state.html
to
Celery 3.1.0b1: http://docs.celeryproject.org/en/master/reference/celery.events.state.html
(search for tasks_by_time)
For backward compatibility, it might work better to use tasks_by_timestamp(), which exists in earlier versions of Celery, but I haven't had time to test it personally.
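The backward-compatible lookup suggested in the report could be sketched like this (`OldState` is a stand-in for a Celery 3.0 events state, not a real Celery class):

```python
class OldState(object):
    """Stand-in for a Celery 3.0 events state: it has tasks_by_timestamp
    but no tasks_by_time."""
    def tasks_by_timestamp(self):
        return [('uuid-1', 'task-1'), ('uuid-2', 'task-2')]

def latest_tasks(state):
    # Prefer tasks_by_time (Celery 3.1+), fall back to the older name
    fn = getattr(state, 'tasks_by_time', None) or state.tasks_by_timestamp
    return list(fn())
```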
Thanks for taking a look!
I'm presuming that once a worker has been shut down, the Flower server doesn't have the capability to restart it, but the option can still be triggered through the front-end. It would be beneficial if an option to simply restart a worker were available, instead of only restarting the pool.
Restarting a worker after shutting it down throws an IndexError (list index out of range):
Traceback (most recent call last):
File "/root/virtualenvs/flower/local/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/root/virtualenvs/flower/local/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/root/virtualenvs/flower/local/lib/python2.7/site-packages/flower/api/control.py", line 40, in post
if 'ok' in response[0][workername]:
IndexError: list index out of range
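A guard in the control handler would turn this crash into a clean error reply; a sketch (`restart_reply` and the worker name are placeholders):

```python
def restart_reply(response, workername):
    """broadcast() returns an empty list when no worker answers, which
    made response[0] raise IndexError above; guard for that case."""
    if response and 'ok' in response[0].get(workername, {}):
        return 'pool restarted'
    return 'worker did not respond (is it shut down?)'
```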
I have a task: def add(x, y): return x + y
I'm trying to invoke it using curl -X POST -d'{"params": [1, 2]}' http://localhost:5555/api/task/apply_async/tasks.add
I get an empty reply from the server and I see the following in the flower log:
[E 121010 06:37:19 iostream:308] Uncaught exception, closing connection.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/iostream.py", line 305, in wrapper
callback(*args)
File "/usr/local/lib/python2.7/dist-packages/tornado/httpserver.py", line 281, in _on_request_body
self.request_callback(self._request)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1427, in __call__
handler._execute(transforms, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1046, in _execute
self._handle_request_exception(e)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1082, in _handle_request_exception
self.send_error(e.status_code, exc_info=sys.exc_info())
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 750, in send_error
self.finish()
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 710, in finish
content_length = sum(len(part) for part in self._write_buffer)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 710, in <genexpr>
content_length = sum(len(part) for part in self._write_buffer)
TypeError: object of type 'NoneType' has no len()
[E 121010 06:37:19 ioloop:435] Exception in callback <tornado.stack_context._StackContextWrapper object at 0x7ff540023c58>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 421, in _run_callback
callback()
File "/usr/local/lib/python2.7/dist-packages/tornado/iostream.py", line 305, in wrapper
callback(*args)
File "/usr/local/lib/python2.7/dist-packages/tornado/httpserver.py", line 281, in _on_request_body
self.request_callback(self._request)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1427, in __call__
handler._execute(transforms, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1046, in _execute
self._handle_request_exception(e)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1082, in _handle_request_exception
self.send_error(e.status_code, exc_info=sys.exc_info())
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 750, in send_error
self.finish()
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 710, in finish
content_length = sum(len(part) for part in self._write_buffer)
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 710, in <genexpr>
content_length = sum(len(part) for part in self._write_buffer)
TypeError: object of type 'NoneType' has no len()
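The TypeError above comes from a None entry in Tornado's write buffer when the error path runs. A hypothetical guard illustrating the failure mode (this is not Tornado's actual fix, just a demonstration of why summing the buffer blows up):

```python
def safe_content_length(write_buffer):
    # len(None) raises TypeError; skipping None parts makes the sum
    # succeed and exposes which handler wrote a None chunk.
    return sum(len(part) for part in write_buffer if part is not None)
```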
so I just added flower to my production server, and after around 5 minutes the server runs completely out of memory; here is a graph:
As soon as I stopped flower, all the memory came back (as seen in the image).
I'll try to give it a deeper look tomorrow, though.
My supervisor command would be this one:
[program:celery_flower]
command=/home/web/.virtualenvs/web/bin/python /home/web/manage.py celery flower --url_prefix=celery --auth=EMAILS
stdout_logfile=/var/log/celery_flower.log
autostart=false
user=web
killasgroup=true
and my celery setup:
software -> celery:3.0.13 (Chiastic Slide) kombu:2.5.4 py:2.6.5
billiard:2.7.3.19 py-amqp:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> djcelery.loaders.DjangoLoader
settings -> transport:amqp results:mongodb
I start my workers with supervisor. When I am in the dashboard I can shut them down with no problem, but I am not able to restart them afterwards.
I use a redis backend for both brokering and results. I have two sites on my server both using Celery. My confs are something along the lines of:
BROKER_BACKEND = 'redis'
BROKER_HOST = 'localhost'
BROKER_PORT = 6379
BROKER_VHOST = '4'
CELERY_RESULT_BACKEND = 'redis'
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 5
REDIS_CONNECT_RETRY = True
CELERYBEAT_SCHEDULE_FILENAME = PROJECT_ROOT + 'celerybeat-schedule'
and
BROKER_BACKEND = 'redis'
BROKER_HOST = 'localhost'
BROKER_PORT = 6379
BROKER_VHOST = '2'
CELERY_RESULT_BACKEND = 'redis'
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 3
REDIS_CONNECT_RETRY = True
CELERYBEAT_SCHEDULE_FILENAME = PROJECT_ROOT + 'celerybeat-schedule'
Note the DB numbers differ.
I run celerybeat and celeryd in each virtualenv for the sites in question:
celerybeat --loglevel DEBUG --pidfile=/tmp/www_celerybeat.pid
celerybeat --loglevel DEBUG --pidfile=/tmp/portal_celerybeat.pid
celeryd --loglevel DEBUG -c2 -E --pidfile=/tmp/portal_celeryd.pid
celeryd --loglevel DEBUG -c2 -E --pidfile=/tmp/www_celeryd.pid
When I run flower in one of the environments I get the following issues:
No workers appear in the worker view
Let's say I run flower in env1. My celeryd.error log from env2 starts spitting out errors (at least one per second) like this:
[2012-08-20 15:33:27,046: DEBUG/MainProcess] * Dump of currently registered tasks:
celery.backend_cleanup
celery.chain
celery.chord
celery.chord_unlock
celery.chunks
celery.group
celery.map
celery.starmap
rsvp.tasks.send_mail_task
[2012-08-20 15:33:27,047: ERROR/MainProcess] Control command error: InconsistencyError("Queue list empty or key does not exist: u'_kombu.binding.reply.celery.pidbox'",)
Traceback (most recent call last):
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 512, in on_control
self.pidbox_node.handle_message(body, message)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 103, in handle_message
return self.dispatch(method, arguments, reply_to)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 85, in dispatch
routing_key=reply_to['routing_key'])
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 108, in reply
channel=self.channel)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 190, in _publish_reply
producer.publish(reply, routing_key=routing_key)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/messaging.py", line 162, in publish
immediate, exchange, declare)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/messaging.py", line 170, in _publish
mandatory, immediate, exchange)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/entity.py", line 215, in publish
immediate=immediate)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 454, in basic_publish
exchange, routing_key, **kwargs)
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/exchange.py", line 61, in deliver
for queue in _lookup(exchange, routing_key):
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 535, in _lookup
R = self.typeof(exchange).lookup(self.get_table(exchange),
File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 476, in get_table
self.keyprefix_queue % exchange))
InconsistencyError: Queue list empty or key does not exist: u'_kombu.binding.reply.celery.pidbox'
[2012-08-20 15:33:27,047: DEBUG/MainProcess] Consumer: Closing broadcast channel...
Note that the tasks listed at the top of the error are accurate for env2, not env1 where flower is running. Flower also reports tasks running in env2, regardless of the error above.
Is it my config or something broken in Flower?
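One workaround sketch for the setup above, assuming Flower's --broker option accepts a standard Redis URL: run a separate Flower instance per site, each pointed at that site's broker DB, so the two vhosts never share a pidbox. A small helper for building the URLs from the BROKER_VHOST values in the settings:

```python
def redis_broker_url(db, host='localhost', port=6379):
    # e.g. db=4 for the first site and db=2 for the second, matching
    # the BROKER_VHOST settings quoted above.
    return 'redis://%s:%d/%d' % (host, port, db)

# usage (one Flower per site, each with its own broker URL):
#   flower --broker=redis://localhost:6379/4
#   flower --broker=redis://localhost:6379/2
```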
I have celery running on my localhost which connects to SQS, I get this error when I launch flower:
[I 130406 16:53:31 command:43] Visit me at http://localhost:5555
[I 130406 16:53:31 command:44] Broker: sqs://<url encoded id>@localhost//
[E 130406 16:53:31 state:42] Dashboard and worker management commands are not available for 'N/A' transport
I start it using:
celery flower --broker=sqs://<url encoded id>:<url encoded secret>@
Celery: v3.0.17
I installed flower just now, via pip install flower.
Any suggestions?
Traceback (most recent call last):
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/tornado/web.py", line 1042, in _execute
getattr(self, self.request.method.lower())(*args, **kwargs)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/tornado/web.py", line 1809, in wrapper
return method(self, *args, **kwargs)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/flower/api/control.py", line 176, in post
celery.control.revoke(taskid, terminate=terminate)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/celery/app/control.py", line 130, in revoke
'signal': signal}, **kwargs)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/celery/app/control.py", line 265, in broadcast
limit, callback, channel=channel,
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/kombu/pidbox.py", line 240, in _broadcast
timeout=timeout)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/kombu/pidbox.py", line 220, in _publish
'expires': time() + timeout if timeout else None})
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/kombu/messaging.py", line 164, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/kombu/messaging.py", line 179, in _publish
mandatory=mandatory, immediate=immediate,
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/amqp/channel.py", line 2097, in basic_publish
self._send_method((60, 40), args, msg)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/amqp/abstract_channel.py", line 58, in _send_method
self.channel_id, method_sig, args, content)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/amqp/method_framing.py", line 218, in write_method
write_frame(1, channel, payload)
File "/Users/antonkhelou/Documents/Rodan/rodan_env/lib/python2.7/site-packages/amqp/transport.py", line 149, in write_frame
frame_type, channel, size, payload, 0xce))
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 32] Broken pipe
Hi,
I am using redis as the broker, and I was thinking it would be really convenient if flower also showed some stats about queued tasks.
Right now I am using redis-cli LLEN queue_name to get an idea, but I guess it should be quite easy to show this in flower.
Any chance of seeing this in a future release? Also, do you think it is a good idea, and do you see big issues in implementing such a feature?
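The same LLEN check is easy from Python. A sketch assuming redis-py and Celery's default Redis transport, where each queue is stored as a plain list keyed by the queue name (the client is injected so the helper itself has no hard dependency):

```python
def queue_length(client, queue_name='celery'):
    # With Celery's Redis transport, waiting messages live in a Redis
    # list named after the queue, so LLEN is the queue depth.
    return client.llen(queue_name)

# usage (requires redis-py and the broker DB from your settings):
#   import redis
#   client = redis.StrictRedis(host='localhost', port=6379, db=0)
#   print(queue_length(client))
```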
Tommaso
django-celery 3.0.11
flower 4.1 (development version as of 12/6/2012)
I was trying to get OAuth working properly running:
$ python manage.py celery flower --url_prefix=flower --address=example.com [email protected]
Using the setup described in this issue: #13
When I tried to go to http://example.com/flower, I was properly redirected to a google OAuth page and signed in. After sign in (and whenever I tried to visit any page http://example.com/flower/.* thereafter), I was first redirected to:
Unfortunately, the http://example.com/login automatically redirects to http://example.com when a user is logged in to example.com (due to the settings on example.com).
As a result, any attempt to view flower redirects to http://example.com.
Not sure, but it seems that when --url_prefix=flower is defined, oauth logins should redirect through:
http://example.com/flower/login instead of http://example.com/login
NOTE: The same issue appears to happen in flower 0.4.0 without --url_prefix. With the command:
$ python manage.py celery flower --port=5555 --address=example.com [email protected]
OAuth logins seem to pass through:
http://example.com/login instead of the expected http://example.com:5555/login
This leads to a similar redirect path.
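A sketch of what the fix might look like: build the login redirect from the configured prefix and port instead of the bare host. The function name and signature here are hypothetical, not Flower's actual code:

```python
def login_redirect_uri(host, port=None, url_prefix=None, scheme='http'):
    # Include the non-default port and the --url_prefix segment so the
    # OAuth callback lands back inside Flower, not on the parent site.
    base = '%s://%s' % (scheme, host)
    if port and port not in (80, 443):
        base = '%s:%d' % (base, port)
    if url_prefix:
        base = '%s/%s' % (base, url_prefix.strip('/'))
    return base + '/login'
```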
Thanks for looking into this. Appreciate any feedback or fixes you have.
If there is a dot in a worker's name, the links from /workers do not work. If a worker's name is "index.morrow", the URL generated from http://localhost:5555/workers is http://morrow:5555/worker/index/.morrow
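The likely culprit is the route pattern: a regex that stops at the first dot truncates the worker name. A hypothetical illustration of the difference, not Flower's actual handler table:

```python
import re

# A narrow capture group splits "index.morrow" at the dot, so only
# "index" reaches the worker view; '(.+)' keeps the full name intact.
narrow = re.match(r"^/worker/([\w-]+)", "/worker/index.morrow")
wide = re.match(r"^/worker/(.+)$", "/worker/index.morrow")
```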