
Comments (5)

mperham commented on June 21, 2024

@stevenou Yeah, those are some of the harder issues to overcome.

  1. There's a global death handler (registration sketch below); the morgue handling is in the job_retry logic.
  2. I don't think there's any event or callback for this.
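
For (1), registration looks roughly like this (from memory, so check the wiki for the current signature):

```ruby
# Illustrative sketch: registering a global death handler in the server config.
Sidekiq.configure_server do |config|
  # Runs once for each job that exhausts its retries and is moved to the Dead set.
  config.death_handlers << ->(job, exception) do
    Sidekiq.logger.warn("#{job['class']} #{job['jid']} died: #{exception.message}")
  end
end
```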

Sidekiq doesn't have a great solution for following jobs as they change state. Actually, adding support for callbacks/event handlers for various job state changes throughout the system sounds like an interesting idea for Sidekiq 8.


mperham commented on June 21, 2024

The issue, of course, is that Sidekiq doesn't know which jobs use which limiters; total_number_jobs_in_backlog_for_rate_limiter is impossible to know. This is why the wiki suggests slowly enqueuing the set of jobs using the scheduler, N jobs per minute. You, the app developer, know a lot more about how and when to enqueue a job so that it's unlikely to hit a rate limit.

If you use a separate queue for limited jobs, that job counter becomes extremely easy to implement: Sidekiq::Queue.new("name").size.
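
For example (rough sketch; the queue name, job class, and threshold are placeholders):

```ruby
require "sidekiq/api"

MAX_BACKLOG = 500 # made-up threshold

# Only push more rate-limited work while the dedicated queue's backlog is small.
# ApiSyncJob and the "limited" queue name are placeholders.
def enqueue_if_room(record_id)
  if Sidekiq::Queue.new("limited").size < MAX_BACKLOG
    ApiSyncJob.set(queue: "limited").perform_async(record_id)
  end
end
```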


stevenou commented on June 21, 2024

Right, but what if there were an API for a job to declare itself dependent on a rate limiter, plus an estimate of how dependent it is (e.g. "this job needs to make 4 API calls")? That way, when the job is enqueued, we "allocate" 4 requests in the limiter's backlog counter, and when the job finishes running or dies, we "deallocate" the estimated 4 requests.

Or, even if the job itself doesn't get an API, the rate limiter could expose one that lets us keep a counter of the backlog (e.g. we manually increment and decrement that counter, possibly by hooking into the job lifecycle, but not necessarily). If that counter is maintained, the rate limiter's default backoff strategy can be smarter.

Of course we can implement this independently of Sidekiq... but it feels appropriate to me for it to be part of the rate limiting feature.

> If you use a separate queue for limited jobs, that job counter becomes extremely easy to implement: Sidekiq::Queue.new("name").size.

This makes sense, though in my use case I feel it would create too many queues and wouldn't be the best way to handle it. Additionally, the backlog needs to include matching jobs in the scheduled set. So I think a separate, dedicated counter might be better.

Would middleware be the right way to hook into "enqueued" and "processed/dead"?
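
Roughly what I'm picturing, as an untested sketch (the queue name, the Redis key, and the limiter_allocation field are all invented for illustration):

```ruby
# Client middleware: "allocate" the job's estimated requests when it is pushed.
# limiter_allocation is a made-up option (e.g. declared via sidekiq_options on the worker).
class LimiterBacklogClient
  def call(worker_class, job, queue, redis_pool)
    if queue == "limited"
      Sidekiq.redis { |c| c.incrby("limiter:backlog", job.fetch("limiter_allocation", 1)) }
    end
    yield
  end
end

# Server middleware: "deallocate" after the job has been processed successfully.
# (Retries and dead jobs are the open question further down this thread.)
class LimiterBacklogServer
  def call(worker, job, queue)
    yield
    if queue == "limited"
      Sidekiq.redis { |c| c.decrby("limiter:backlog", job.fetch("limiter_allocation", 1)) }
    end
  end
end

Sidekiq.configure_client do |config|
  config.client_middleware { |chain| chain.add LimiterBacklogClient }
end

Sidekiq.configure_server do |config|
  config.client_middleware { |chain| chain.add LimiterBacklogClient } # jobs pushed from inside other jobs
  config.server_middleware { |chain| chain.add LimiterBacklogServer }
end
```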


mperham commented on June 21, 2024

I see your viewpoint. Sidekiq is generic infrastructure meant to be useful to all apps. As I develop more and more specialized APIs, those APIs become less and less useful to the majority. Rate limiting isn't useful to everyone, but it's useful to many; in my experience, your follow-up suggestion would be useful to a far smaller subset. The hardest part of my job is deciding where to stop building APIs.

What I've found is that adding more complexity for a minority of users is a bad tradeoff. You can certainly build this yourself, but I can't without more social proof that it's useful to the majority of people using rate limiting. Client middleware would be the place to start; I don't recall whether client middleware runs when a job is scheduled, enqueued, or both. If you come up with a solution and I see demand for it from others, then I have cause to bake this in.

https://github.com/sidekiq/sidekiq/wiki/Middleware


stevenou commented on June 21, 2024

Totally understand.

I've got something implemented via middleware and running on our own setup now. I'll monitor to see how it does.

The only issues I couldn't figure out were:

  1. How do I know when a job dies? It seems something inside the server middleware's yield moves a job to the dead set, but I couldn't figure out how to detect when that happens.
  2. Likewise, how can I tell that a job was re-enqueued from the dead set? (I believe it will have an error_class, but how do I differentiate that from a normal retry?)

Ideally, I'd decrement the counter when a job dies and increment it again when it gets retried, because a dead job really shouldn't factor into the throttle-rate calculation. For now I don't do that, and dead jobs continue to count as backlogged. While not ideal, this is probably OK in practice because under normal operating conditions we don't expect many dead jobs.
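
If I do wire that up later, I assume the global death handler mentioned at the top of the thread is the hook for the death side; something like this (untested, reusing the made-up counter key from the sketch above):

```ruby
Sidekiq.configure_server do |config|
  # Untested: release the job's "allocation" when it is moved to the Dead set,
  # so dead jobs stop counting toward the backlog.
  config.death_handlers << ->(job, _exception) do
    if job["queue"] == "limited"
      Sidekiq.redis { |c| c.decrby("limiter:backlog", job.fetch("limiter_allocation", 1)) }
    end
  end
end
```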

Thanks!
