
huey's Introduction


a lightweight alternative.

huey supports:

  • multi-process, multi-thread or greenlet task execution models
  • schedule tasks to execute at a given time, or after a given delay
  • schedule recurring tasks, like a crontab
  • automatically retry tasks that fail
  • task prioritization
  • task result storage
  • task expiration
  • task locking
  • task pipelines and chains


At a glance

from huey import RedisHuey, crontab

huey = RedisHuey('my-app', host='redis.myapp.com')

@huey.task()
def add_numbers(a, b):
    return a + b

@huey.task(retries=2, retry_delay=60)
def flaky_task(url):
    # This task might fail, in which case it will be retried up to 2 times
    # with a delay of 60s between retries.
    return this_might_fail(url)

@huey.periodic_task(crontab(minute='0', hour='3'))
def nightly_backup():
    sync_all_data()

Calling a task-decorated function will enqueue the function call for execution by the consumer. A special result handle is returned immediately, which can be used to fetch the result once the task is finished:

>>> from demo import add_numbers
>>> res = add_numbers(1, 2)
>>> res
<Result: task 6b6f36fc-da0d-4069-b46c-c0d4ccff1df6>

>>> res()
3

Tasks can be scheduled to run in the future:

>>> res = add_numbers.schedule((2, 3), delay=10)  # Will be run in ~10s.
>>> res(blocking=True)  # Will block until task finishes, in ~10s.
5

For much more, check out the guide or take a look at the example code.

Running the consumer

Run the consumer with four worker processes:

$ huey_consumer.py my_app.huey -k process -w 4

To run the consumer with a single worker thread (default):

$ huey_consumer.py my_app.huey

If your workloads are mostly IO-bound, you can run the consumer with threads or greenlets instead. Because greenlets are so lightweight, you can run quite a few of them efficiently:

$ huey_consumer.py my_app.huey -k greenlet -w 32

Storage

Huey's design and feature-set were informed by the capabilities of the Redis database. Redis is a fantastic fit for a lightweight task queueing library like Huey: it's self-contained, versatile, and can be a multi-purpose solution for other web-application tasks like caching, event publishing, analytics, rate-limiting, and more.

Although Huey was designed with Redis in mind, the storage system implements a simple API and many other tools could be used instead of Redis if that's your preference.

Huey comes with built-in support for Redis, SQLite, and in-memory storage.
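
Switching storage is just a different Huey subclass. A minimal sketch, assuming a release that exports SqliteHuey (the filename here is illustrative):

from huey import SqliteHuey

# Same task/periodic_task API as RedisHuey, backed by a local SQLite file.
huey = SqliteHuey('my-app', filename='/path/to/huey.db')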

Documentation

See Huey documentation.

Project page

See source code and issue tracker on GitHub.

Huey is named in honor of my cat.



huey's Issues

python3 compatibility

I started a branch for python3 compatibility: https://github.com/dmr/huey/tree/python3

I ran 2to3 and adjusted a few parts in the code
dmr@97fadcc

The only real modification I had to make to get the tests to pass was to lines 70 and 71 in huey/registry.py:

After the split, data is now a string:
"b'\x80\x03cdatetime\ndatetime\nq\x00C\n\x07\xdc\x0b\x1e\x10):\x04K\xcaq\x01\x85q\x02Rq\x03.''"
and pickle fails to load that:
TypeError: 'str' does not support the buffer interface

I used eval(data) to "fix" this. Now the tests pass:
https://travis-ci.org/dmr/huey/builds/3436501

Any ideas on how to really fix that?

RabbitMQ Support

I understand one can easily roll their own RabbitMQ broker, but would this be something you'd accept a PR for?

Worker stops working after UTF-8 Exception

If I have certain characters (like '\xb2', which I thought was valid UTF-8 anyway) in a task's message, it throws the following error:

InvalidStringData: strings in documents must be valid UTF-8

Stack trace:

Exception in thread Worker 1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 47, in run
    self.loop()
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 118, in loop
    self.check_message()
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 137, in check_message
    self.handle_task(task, self.get_now())
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 154, in handle_task
    self.process_task(task, ts)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 169, in process_task
    self.requeue_task(task, self.get_now())
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 179, in requeue_task
    self.add_schedule(task)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 60, in add_schedule
    self.huey.add_schedule(task)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/api.py", line 274, in add_schedule
    self._add_schedule(msg, ex_time)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/api.py", line 153, in inner
    wrap_exception(exc_class)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/utils.py", line 18, in wrap_exception
    raise exc_class(exc.message)

After this exception the Huey worker doesn't seem to do anything anymore, and only continues after I restart Huey.

Forgive my unicode noviceness, but isn't something like '\xb2' a valid UTF-8 character? Or how can I escape or encode this properly so that I don't get the exception?

And can you see anything obvious that would stop the worker from continuing other work after such an exception?

Any help would be greatly appreciated! Thanks!
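
As background on the encoding question: a lone 0xb2 byte is valid Latin-1 (it decodes to '²') but is not valid UTF-8 on its own, where 0xb2 can only appear as a continuation byte. A quick check in a Python 3 shell:

>>> b'\xb2'.decode('utf-8')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 0: invalid start byte

>>> '\xb2'.encode('utf-8')  # the proper UTF-8 encoding of '²' is two bytes
b'\xc2\xb2'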

Doc & Example

Can you provide some docs / examples to show how to use this library?

Add Django DB backend

Are you interested in a Django DB backend implementation? I have actually already implemented one and use it in my dev environment. If you think it would be useful to bundle this kind of backend with the module, I can merge my code into the project and make a PR so you could take a look.

run_huey + Tasks Not running

I'm having trouble getting tasks to run using run_huey (Django integration).
I get the expected output when the consumer starts, which displays the tasks I have defined. Once I kick off a task, it doesn't run, or at least I can't tell that it has. The logged output remains the same.

Here are my HUEY settings

HUEY = {
    'queue': 'huey.backends.redis_backend.RedisBlockingQueue',  # required.
    'queue_name': 'dh-queue',
    'queue_connection': {'host': 'localhost', 'port': 6379},

    # Options for configuring a result store -- *recommended*
    'result_store': 'huey.backends.redis_backend.RedisDataStore',
    'result_store_name': 'dh-results',
    'result_store_connection': {'host': 'localhost', 'port': 6379},

    # Options to pass into the consumer when running ``manage.py run_huey``
    'consumer_options': {'workers': 4, 'loglevel': logging.DEBUG},
}

The Troubleshooting section mentions running the consumer with --verbose, but run_huey doesn't have that option.

Any thoughts?

task Name

Does the @task decorator take a name argument? I noticed this:

def create_task(task_class, func, retries_as_argument=False, task_name=None, **kwargs):
    ...

Remove a task by id

Celery has a revoke(taskUUID) function to remove a scheduled task from the queue. Is there a way to do this that I'm missing?
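
For reference, later huey releases do support this. A sketch assuming the huey 2.x API (task_id stands for a previously stored id; the 0.x API in use at the time differed):

res = add_numbers.schedule((1, 2), delay=3600)
res.revoke()  # cancel via the result handle, or...

huey.revoke_by_id(task_id)  # ...cancel later knowing only the stored id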

Huey won't repeat scheduled tasks, won't retry delay tasks

Using Huey with Django, and I can't get huey to do anything more than once, even if retries or schedules are specified.

from huey.djhuey import crontab, periodic_task, task, db_task, db_periodic_task
@task(retries=90, retry_delay=10)
def often():
    result = requests.get("http://myserver/")
    raise Exception('nope')

Basically using this to ping my server, but it doesn't ever retry. I see the 'GET' request in the server log, but only once, and never again, despite the retry specification.

I've also tried specifying it like this:

@periodic_task(crontab(minute='*/1'))
def often():
    result = requests.get("http://myserver/")
    return True

I'm running Huey with the following:

./manage.py run_huey -p -v 3 --traceback

And the settings look like this:

HUEY = {
    'backend': 'huey.backends.redis_backend',  # required.
    'name': 'projectname',
    'connection': {'host': '<server>', 'port': <port>},
    'always_eager': True, # or False, doesn't matter either way
    'consumer_options': {'workers': 4},
}

Am I doing something wrong, or have I found a bug?

Possible to connect to redis using authentication?

Is it possible to authenticate (password) with the Redis server when not using a local server?

For example using a redistogo instance. I can't find a mention of authentication in the documentation, but I figure there must be a way!
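
One likely approach, since RedisHuey forwards extra keyword arguments to the underlying redis client (a sketch; the host and port here are illustrative):

from huey import RedisHuey

# Connection kwargs (host, port, password, db, ...) are passed through
# to the redis connection pool.
huey = RedisHuey('my-app', host='myapp.redistogo.com', port=9402,
                 password='secret')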

Tag 0.4 missing?

I was wondering which commit is the version that is called 0.4.0 on PyPI, and I didn't find a tag on GitHub. What am I doing wrong?

how to get results of async task

This is more of a question than an issue, so I apologize for opening it here. I didn't see anywhere else to ask questions.

I am planning on using Huey with Django. The task is going to call an API that will build and destroy new machines in the "cloud". I figured out how to get the task id, and I can store that in a session or cache. How do I take that ID and get the status of the job? I am looking to see if the job is queued, failed, has not been run, or was successful. I want to create a page that lists all the tasks for the last hour and the status they are in.
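
A sketch of the fetch-by-id part with current huey (the 0.4.x API differed, and build_machine is a hypothetical task): the result handle exposes the task id, and huey.result() returns the stored value once the task has finished, or None otherwise. huey only stores results, though, so distinguishing "queued" from "failed" takes extra bookkeeping, e.g. a status flag your tasks write.

res = build_machine('instance-1')  # hypothetical task call
task_id = res.id                   # store this in the session or cache

# Later, possibly in another request or process:
value = huey.result(task_id, preserve=True)  # None until the task finishes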

Events Question

Is it possible to define the EventEmitter from the Django Settings?

Test runner spawning huey processes

In our deploy setup we have a staging server which runs the run_huey command using supervisor. When I run my test suite and a test calls a huey task, it seems to kick off a new instance of huey. As a result, after running my test suite a few times, running ps -ax on my staging server will show multiple instances of huey running at once. This is problematic because the huey instances seem to fight over tasks, resulting in inconsistent results on the server. Is it possible to either (1) force the test suite to use the existing huey process running under supervisorctl or (2) clean up the instances of huey kicked off by the test suite?

I have looked through the GitHub issues and the documentation and cannot seem to find any mention of this problem.

Cheers,
Gordon

is this to replace Celery

That was just a discussion opener. :) I am back to follow all of Charles's projects and add them to my toolkit.

Huey 0.4.0 requires Tasks to be defined in tasks.py?

What I noticed in our installations is that run_huey in a Django app requires the tasks to be defined in the tasks.py file. In the old version I could define tasks in commands.py, but now I cannot anymore.

Did the behavior change?

If this was changed, it could be included in the upgrading part of the documentation.

queue_command delay argument

Hi everyone,
what are your thoughts about a delay parameter for the queue_command decorator?

A request triggers a huey command to send mails.
If the request uses transactions, huey might try to execute the command before the request has finished, so not all model changes (connected models) will have been written to the database at that time.

A delay parameter

@queue_command(delay=2)
def send_mail(): pass

would solve the issue.
Sending a mail 2 seconds later does not matter, and the request will be finished by then.

Schedule a task in advance and cancel it

hi @coleifer,

I might have missed something in the docs, but I read it several times and cannot find a solution for this case:
If I schedule a task in advance (e.g. 1-2 days) and want to cancel it after some time, what should I store in the DB to get task.revoke() working? I tried the task_id; however, HUEY._get(task_id) returns EmptyData because there are no results yet.
The only way I found is to dump the whole registry.get_message_for_task(task) into the DB and, on the cancel call to the backend, restore it using registry.get_task_for_message(msg_from_DB).
Is that the correct approach?

Thank you

Memory leak

Hello.

Huey solved periodic and event tasks in my project. But it looks like there is a memory leak in my process: huey has allocated 512MB in just 3 hours since starting. I'm monitoring the huey process with monit.

Can you please give me some hints on whether the problem is in my tasks or elsewhere? Restarting the huey process every 3 hours is not a very elegant solution.

huey_consumer.py main.Configuration ImportError

In the documentation you refer to:

$ huey_consumer.py main.Configuration

...to monitor the output of the queue consumer. This however results in an ImportError:

Unable to import "main"

I think this is because you assume PWD is in PYTHONPATH, which isn't the case at least on my installation (2.7.3 from python.org on OS X 10.8.3, using virtualenv). Not sure if this is specific to my installation, but I thought it might be worth mentioning in case someone else runs into this. :)

Startup issue on Python 3.4 and Django 1.7

I have an app using the new-style Django AppConfig object. Getting:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 2218, in _find_and_load_unlocked
AttributeError: 'module' object has no attribute '__path__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
    utility.execute()
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute
    output = self.handle(*args, **options)
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/huey/djhuey/management/commands/run_huey.py", line 89, in handle
    self.autodiscover()
  File "/srv/django/xstore/venv/lib/python3.4/site-packages/huey/djhuey/management/commands/run_huey.py", line 59, in autodiscover
    import_module(app)
  File "/usr/lib/python3.4/importlib/__init__.py", line 109, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2221, in _find_and_load_unlocked
ImportError: No module named 'cryptoassets.django.app.CryptoassetsConfig'; 'cryptoassets.django.app' is not a package

Maybe the management commands need some love to get Django 1.7 compatibility?

Huey 0.4.3.

Problems with database connection

Hi.

I've been using huey in my Django project and found a problem. I think it's not a problem with huey, but it should be written in the troubleshooting section of the docs. The problem is that when I do something with the database asynchronously, I have to manually close the database connection; otherwise a lot of errors appear in the logs (along the lines of "can't connect to database"). I looked at the celery sources and it closes all connections automatically. Thanks.

Example:

from django.db import close_connection

@task
def some_sync_db_task():
    m = MyModel(name='some-name')
    m.save()
    close_connection()

setting workers in settings or --workers does not start more processes

Maybe this works differently from what I expect, but if I set the workers in the settings.py for HUEY, or start the (django) run_huey command with -w 4 or --workers 4, it still only starts one process. I would expect to see 4 run_huey processes in my ps output if I set workers to 4?

Still get occasional DatabaseError from huey worker

Even using the latest db_task decorator I still get the occasional DatabaseError:

server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Much less often than we used to, but it still happens now and then. Is there any way to retry the task if it fails? It would maybe not solve the underlying issue, but it would at least make it manageable...

New release to get python 3 working?

hi @coleifer,

I'm new to Redis, so I spent several hours yesterday trying to find an error in my environment.
Eventually I switched to sqlite (didn't help), enabled DEBUG, downloaded your example django app, and started going through the code.
I finally noticed an empty command list after "Huey consumer initialized with following commands".
As a true slowpoke, I only discovered by the evening that 0.4.2 will not work in my py3 env;
master works perfectly fine though.
Do you have any plans to release it?

Thanks for all your efforts

task not found in TaskRegistry

I just upgraded to the latest version in master to make use of the fix for closing the db, but it seems huey is no longer finding the tasks after the update for some reason. I was already running a pretty recent version, so I'm not sure what could have changed... I upgraded huey and restarted the workers, which should recreate the task registry in redis again, right?

QueueException: queuecmd_create_initial_notifications not found in TaskRegistry
  File "huey/bin/huey_consumer.py", line 124, in check_message
    task = self.huey.dequeue()
  File "huey/api.py", line 211, in dequeue
    return registry.get_task_for_message(message)
  File "huey/registry.py", line 70, in get_task_for_message
    klass = self.get_task_class(klass_str)
  File "huey/registry.py", line 60, in get_task_class
    raise QueueException('%s not found in TaskRegistry' % klass_str)

Tasks succeed when python interpreter is used to import from main.py but not when `python main.py` is called.

I am running into an issue where, seemingly, when I import from the main module using the interpreter, I get my results. However, when the module is executed directly, nothing is returned.

Python 2.7.3 (v2.7.3:70274d53c1dd, Apr  9 2012, 20:52:43) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from main import mytask
>>> res = mytask(10)
>>> res.get()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


[0] sepiidae-mbp sched (master) ✗ > python main.py 
Please enter number: 100
Results:  None

Interestingly, I can see that redis is being updated, but the results are just never returned. Have you seen behavior similar to this?
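
One thing worth checking (a guess, not a confirmed diagnosis): when the script runs straight through, the result may simply not be ready by the time it is read, and a non-blocking read returns None. A blocking read waits for the consumer:

res = mytask(10)
print(res.get(blocking=True, timeout=30))  # wait up to ~30s for the result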

Cannot get past an import issue

I am rapidly starting to feel like the dumbest person on the planet. This should seemingly be simple, yet no matter what I do, I get an import error.

$ ./cons.sh
HUEY CONSUMER
-------------
In another terminal, run 'python main.py'
Stop the consumer using Ctrl+C
Error importing main.huey

Tried to create a clean env. with virtualenv and still, same results:

(tasks)bash-3.2$ PYTHONPATH=.:$PYTHONPATH huey_consumer.py main.huey
Error importing main.huey

I expect this to work when I run the interpreter from the same directory where config.py and the other files are, yet:

>>> from main import huey
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "main.py", line 1, in <module>
    from config import huey
  File "config.py", line 1, in <module>
    from huey import RedisHuey
ImportError: cannot import name RedisHuey

This seems like an incredibly simple, yet annoyingly complicated issue. A bit of troubleshooting guidance would be great.

Reset WorkerThread delay after task is found

As I see it, the delay in a blocking queue after each "poll" is not reset after a task is found. I am under the impression that the back-off is meant for periods of inactivity, yet once tasks are available for execution again, shouldn't the delay be reset to its original value, since it's now more likely that more tasks will be in the queue soon?

method to return Id of a long running task

I am wondering if there should not be a method of the AsyncData object that would return the ID of an enqueued task that is expected to be long running. For example, if I wanted to use another process at some point in the future to check on the state of a task that, say, takes 30 minutes to run, right now I cannot easily do it. I can do something like the code below to get the information I need, but I am sure everyone will agree that this is pretty tacky. I am not sure if maybe my use-case is somewhat unique and does not map well to the design and intent of Huey.

    x = function_to_queue(cmd='ls -l /tmp')
    key = x.__dict__['command'].__dict__['task_id']
    print key

    ... at a later point...
    print pickle.loads(result_store.peek(key))

Perhaps, instead of returning None when there is no data ready because the task is not done, we could optionally get a string of the ID?
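
For what it's worth, current huey exposes exactly this on the result wrapper. A sketch of the 2.x API (the 0.x AsyncData shown above predates it):

x = function_to_queue(cmd='ls -l /tmp')
key = x.id  # the task id, without reaching into __dict__

# ...at a later point, from any process sharing the same storage:
print(huey.result(key))  # None until the task has finished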

Task management for multiple sites

More of a design question than an issue:

Is there a best practice for handling queued tasks for multiple sites?

Say I am running 20 instances of the same app on a given server. All sharing the same code, but different data. All 20 need to use queued tasks and all tasks could be run off the same schedule ("do thing X every hour, do thing Y every midnight, etc").

Should I just keep copying the same command "config" onto every instance or would it be better to actually create a standalone app for the task scheduling which would then oversee all the instances?

Many TIA,
-filipp

Sub 1min intervals

I may be missing something, but I am not seeing that anything under 1 minute is currently allowed with a cron-style schedule. Am I correct, or is there a way to schedule something to run more than once per minute?
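
crontab granularity is indeed one minute. One workaround is a minute-level periodic task that fans out sub-minute runs via schedule(); a sketch, where do_work is a hypothetical task:

@huey.periodic_task(crontab(minute='*/1'))
def every_minute():
    # Enqueue four runs spread across the coming minute.
    for offset in (0, 15, 30, 45):
        do_work.schedule(delay=offset)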

create a task with initial start delay

I'd like to be able to kick off a job that doesn't dequeue for a certain arbitrary amount of time. My use-case is probably about 5 minutes or so but it seems like the system should be flexible.

Does Huey support this?

something like this:

@huey.task(delay=300)
def send_reminder(phone):
    # Do a check here to see if the expected user behavior was already completed;
    # if not, send out an SMS to remind them to do it.
    pass
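
For reference, current huey supports this via schedule() rather than a decorator argument; a sketch:

# Enqueue now, but don't let the task run for ~300 seconds:
send_reminder.schedule(args=(phone,), delay=300)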

Unknown Exception

Hi, I've got a problem when adding a task to the queue; it gives me this error:

[06/Aug/2014 09:43:39] ERROR [huey.consumer:132] Unknown exception
Traceback (most recent call last):
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/bin/huey_consumer.py", line 124, in check_message
    task = self.huey.dequeue()
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/api.py", line 211, in dequeue
    return registry.get_task_for_message(message)
  File "/home/revolution/site/ve/local/lib/python2.7/site-packages/huey-0.4.2-py2.7.egg/huey/registry.py", line 67, in get_task_for_message
    raw = pickle.loads(msg)
  File "/usr/lib/python2.7/pickle.py", line 1382, in loads
    return Unpickler(file).load()
  File "/usr/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/usr/lib/python2.7/pickle.py", line 1133, in load_reduce
    value = func(*args)
TypeError: __init__() takes exactly 3 arguments (2 given)

The stack trace does not even go back to my code, so I don't know where to begin looking.

Can anyone help, please?
Thanks!

Add multiprocessing support (redux)

Per your comment, here's an issue to discuss adding multiprocessing support.

The main limitations that became obvious when trying to do this last time were around the fact that pretty much all inter-worker communication using the Python multiprocessing stack takes place via Pickle (e.g., when you push something onto a shared queue, it gets pickled, and reconstructed on the way out of the queue when a child process retrieves it). This had/has a couple of concrete ramifications:

  • Everything that gets shared has to be picklable.
  • Care has to be taken to keep object graphs from getting out of control, since you can end up inadvertently pickling a lot of stuff if your objects have attributes that have attributes that have attributes, ad infinitum, that all have to be pickled -- and the first point makes it worse, since all of that has to be picklable. I was able to work around some of these issues by hiding some stuff from pickle and papering over it via property decorators, but it ended up feeling pretty hacky. Particular example: one of the objects that needed to be shared in the implementation held a reference to the command queue, which in turn, when using the redis backend, held the redis connection, which wasn't picklable.
  • You can't rely on the master and the workers operating on the same copy of a data structure, since it gets copied during pickling/unpickling. As an example, this caused weirdness within the consumer code with tracking retries of commands, because retries were stored as a property on the command object, so child workers' decrements of this value weren't seen by the parent process.

periodic_task strange pause behavior

Hi,

I use huey with Django:

HUEY = {
    'name': 'unique name',
    'backend': 'huey.backends.dummy',  # required.
    'connection': {'host': 'localhost', 'port': 6379},
    'always_eager': False,  # Defaults to False when running via manage.py run_huey

    # Options to pass into the consumer when running ``manage.py run_huey``
    'consumer_options': {
        #'loglevel': logging.DEBUG,
        'workers': 1,
    },
}

I have a simple task:

@periodic_task(crontab(minute='*/1'))
def test_periodic_task():
    print 'test %s' % datetime.datetime.now()

I run 'run_huey' and get in the log:

Setting signal handler
Huey consumer initialized with following commands
  • test_periodic_task
1 worker threads
Starting scheduler thread
Starting worker threads
Starting periodic task scheduler thread
Scheduling <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c07bd0> for execution

and nothing happens

I press Ctrl+C in the 'run_huey' terminal and get on the console:

^Ctest 2014-02-18 04:12:24.140822
test 2014-02-18 04:12:24.148758
test 2014-02-18 04:12:24.309742

and in the log

Setting signal handler
Huey consumer initialized with following commands
  • test_periodic_task
1 worker threads
Starting scheduler thread
Starting worker threads
Starting periodic task scheduler thread
Scheduling <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c07bd0> for execution
Scheduling <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c07bd0> for execution
Error
Traceback (most recent call last):
  File "/home/max/workspace/oncloud/env/local/lib/python2.7/site-packages/huey/bin/huey_consumer.py", line 277, in run
    self._shutdown.wait(.1)
  File "/usr/lib/python2.7/threading.py", line 403, in wait
    self.__cond.wait(timeout)
  File "/usr/lib/python2.7/threading.py", line 262, in wait
    _sleep(delay)
KeyboardInterrupt
Shutdown initiated
Exiting
Executing <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c0a550>
Setting signal handler
Huey consumer initialized with following commands
  • test_periodic_task
1 worker threads
Starting scheduler thread
Starting worker threads
Executing <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c1f910>
Starting periodic task scheduler thread
Scheduling <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c07bd0> for execution
Executing <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c1f8d0>
Scheduling <src.apps.on.tasks.queuecmd_test_periodic_task object at 0x2c07bd0> for execution

How can I avoid that pause?

Thanks

huey.schedule makes redis go down

hi, I find that huey's scheduler pushes a message to redis every second,
and the key huey.redis.uniquename doesn't expire;
this makes the redis server's memory overflow.

how can I solve this?

Crontab

I've defined a couple of tasks, one to be started every hour and another to be started every two hours, like:

from huey.djhuey.decorators import queue_command, periodic_command, crontab

@periodic_command(crontab(hour='*/1'))
def periodicTask2():
    ...

and 

@periodic_command(crontab(hour='*/2'))
def periodicTask3():
    ...

When I run ./manage.py run_huey, both tasks get executed immediately and then every minute, so I suppose the crontab hour argument isn't interpreted correctly?

For a simple test I added another task to be run once a day, and crontab(day='*/1') also gets executed on startup and every minute.

The huey_consumer.py main.Configuration and tutorial.

Hi,

I'd like to try huey out, but I'm stuck on a configuration issue. According to your tutorial I should start the consumer with the huey_consumer.py main.Configuration command, but it returns Unable to import "main". I know it's mentioned in common pitfalls, but that's not really helping me. Could you please help me complete the tutorial? I'm quite new to python as well.

I have also asked on SO.

Thanks.

Huey singleton between Django instances on several servers with common backend

Hi Charles,

I use Django and I want to start the run_huey code automatically in my app init code. I have a tasks.py file with some periodic tasks (backup, for instance). If nginx starts more than one worker, I see that the task is invoked n times. Moreover, I have more than one server running my Django app.

Is it possible to avoid multiple huey task instances? Can I have only one huey in my entire system, across servers and workers, with a common backend (I'm working on a couchbase backend now)? Could you give me a clue, please: how can I create that huey singleton?

Thank you in advance!

Redis database sharing?

I am doing my first Huey installation and have some experience with other task managers. With other task managers running on Redis (Celery, etc.), the recommendation is to have a separate Redis database for tasks. As far as I understood, this had something to do with the fact that if you have a bug and start pushing out tasks wildly, Redis does not like it.

However, when reading http://huey.readthedocs.org/en/latest/django.html - is there a way to configure the Redis database, or is that what the 'name' argument is for?
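
If it helps: the Redis database number is part of the connection settings rather than the name. A sketch with the standalone API, where connection kwargs are forwarded to the redis client (with the Django settings style above, the same kwargs would presumably go in the connection dict):

huey = RedisHuey('my-app', host='localhost', db=1)  # use Redis database 1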

Huey ignores loglevel set in Django

Using this configuration in my settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'sentry': {
            'level': 'INFO',
#            'filters': ['require_debug_false'],
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler'
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        }
    },
    'loggers': {
        'huey': {
            'handlers': ['sentry', 'console'],
            'level': 'ERROR',
            'propagate': False,
        },
        ...
    }
}

The loglevel of huey.consumer is set to INFO when the Consumer class is instantiated, so my console/sentry handlers are spammed by huey logging.
Please modify the Consumer to accept external logging settings.

At the moment, a workaround is to set the loglevel like this:

HUEY = {
    # ...
    'consumer_options': {'loglevel': 'ERROR'},
}

Task Resume

Do tasks resume if the worker dies or is shut down?
