Mission Control — Jobs

This gem provides a Rails-based frontend to Active Job adapters. It currently supports Resque and Solid Queue, and its features depend on those offered by the adapter itself. At a minimum, it lets you inspect job queues and the jobs currently waiting in them, and inspect, retry, or discard failed jobs.

Installation

Add this line to your application's Gemfile:

gem "mission_control-jobs"

And then execute:

$ bundle install

Basic configuration

Mount the Mission Control Jobs engine where you wish to have it accessible from your app, in your routes.rb file:

Rails.application.routes.draw do
  # ...
  mount MissionControl::Jobs::Engine, at: "/jobs"
end

And that's it. With this alone, you should be able to access the Mission Control Jobs UI, where you can browse the existing queues, the jobs pending in those queues, and jobs in different statuses, and discard or retry failed jobs:

[Screenshot: Queues tab in a simple app]

[Screenshot: Failed jobs tab in a simple app]

Authentication and base controller class

By default, Mission Control's controllers will extend the host app's ApplicationController. If no authentication is enforced, /jobs will be available to everyone. You might want to implement some kind of authentication for this in your app. To make this easier, you can specify a different controller as the base class for Mission Control's controllers:

Rails.application.configure do
  MissionControl::Jobs.base_controller_class = "AdminController"
end

Or, in your environment config or application.rb:

config.mission_control.jobs.base_controller_class = "AdminController"
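
For instance, a minimal sketch of such a base controller, assuming HTTP Basic authentication and a credentials key named admin_password (both are assumptions; adapt it to your app's authentication):

# app/controllers/admin_controller.rb (hypothetical example, not part of the gem)
class AdminController < ApplicationController
  http_basic_authenticate_with name: "admin",
    password: Rails.application.credentials.admin_password.to_s
end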

Other configuration settings

Besides base_controller_class, you can also set the following for MissionControl::Jobs or config.mission_control.jobs; a combined example follows these settings:

  • logger: the logger you want Mission Control Jobs to use. Defaults to ActiveSupport::Logger.new(nil) (no logging). Notice that this is different from Active Job's logger or Active Job's backend's configured logger.
  • delay_between_bulk_operation_batches: how long to wait between batches when performing bulk operations, such as discarding or retrying all jobs. Defaults to 0.
  • adapters: a list of adapters that you want Mission Control to use and extend. By default this will be the adapter you have set for active_job.queue_adapter.
  • internal_query_count_limit: in count queries, the maximum number of records that will be counted if the adapter needs to limit these queries. True counts above this number will be returned as INFINITY. This keeps count queries fast. Defaults to 500,000.
  • scheduled_job_delay_threshold: the time duration before a scheduled job is considered delayed. Defaults to 1.minute (a job is considered delayed if it hasn't transitioned from the scheduled status 1 minute after the scheduled time).
  • show_console_help: whether to show the console help. If you don't want the console help message, set this to false—defaults to true.

This library extends Active Job with a querying interface and the following setting:

  • config.active_job.default_page_size: the internal batch size that Active Job will use when sending queries to the underlying adapter, and the batch size for the bulk operations defined above. Defaults to 1000.
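
For example, in config/application.rb or an environment file (the values are illustrative, not recommendations):

# Illustrative values only
config.mission_control.jobs.logger = ActiveSupport::Logger.new(STDOUT)
config.mission_control.jobs.delay_between_bulk_operation_batches = 2.seconds
config.mission_control.jobs.adapters = [ :resque, :solid_queue ]
config.mission_control.jobs.internal_query_count_limit = 100_000
config.mission_control.jobs.scheduled_job_delay_threshold = 5.minutes
config.mission_control.jobs.show_console_help = false
config.active_job.default_page_size = 500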

Adapter Specifics

  • Resque: Queue pausing is supported only if you have resque-pause installed in your project (see the Gemfile line below).
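
In practice that just means adding the gem to your Gemfile, for example:

gem "resque-pause"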

Advanced configuration

When we built Mission Control Jobs, we did it with the idea of managing multiple apps' job backends from a single, centralized app that we used for monitoring, alerts and other tools related to all our apps. Some of our apps run in more than one datacenter, and we run different Resque instances with different Redis configurations in each. Because of this, we added support for multiple apps and multiple adapters per app. Even when running Mission Control Jobs within the app it manages, in a single datacenter, we needed to manage both adapters from Mission Control as we migrated from Resque to Solid Queue.

Without any additional configuration beyond what's described above, Mission Control will be configured with a single app and a single server for your configured active_job.queue_adapter.

If you want to support multiple adapters, you need to add them to Mission Control's configuration via the adapters setting mentioned above. For example:

config.mission_control.jobs.adapters = [ :resque, :solid_queue ]

Then, to configure the different apps and/or different servers, you can do so in an initializer like this (taken from our dummy app for testing purposes):

require "resque"
require "resque_pause_helper"

require "solid_queue"

Resque.redis = Redis::Namespace.new "#{Rails.env}", redis: Redis.new(host: "localhost", port: 6379)

SERVERS_BY_APP = {
  BC4: %w[ resque_ashburn resque_chicago ],
  HEY: %w[ resque solid_queue ]
}

def redis_connection_for(app, server)
  redis_namespace = Redis::Namespace.new "#{app}:#{server}", redis: Resque.redis.instance_variable_get("@redis")
  Resque::DataStore.new redis_namespace
end

SERVERS_BY_APP.each do |app, servers|
  queue_adapters_by_name = servers.collect do |server|
    queue_adapter = if server.start_with?("resque")
      ActiveJob::QueueAdapters::ResqueAdapter.new(redis_connection_for(app, server))
    else
      ActiveJob::QueueAdapters::SolidQueueAdapter.new
    end

    [ server, queue_adapter ]
  end.to_h

  MissionControl::Jobs.applications.add(app, queue_adapters_by_name)
end

This is an example for two different apps, BC4 and HEY, each one with two servers. BC4 has two Resque servers with two different configurations, and HEY has one Resque server and one Solid Queue server.

Currently, only one Solid Queue configuration is supported, but support for several Solid Queue backends (with different databases) is planned.

This is how we set Resque and Solid Queue together when we migrated from one to the other:

queue_adapters_by_name = {
  resque: ActiveJob::QueueAdapters.lookup(:resque).new, # This will use Resque.redis as the redis client
  solid_queue: ActiveJob::QueueAdapters.lookup(:solid_queue).new
}

MissionControl::Jobs.applications.add("hey", queue_adapters_by_name)

When you have multiple apps and servers configured, you can choose between them with select and toggle menus:

[Screenshot: Queues tab with multiple apps and servers]

Basic UI usage

As mentioned, the features available in Mission Control depend on the adapter you're using, as each adapter supports different features. Besides inspecting the queues and the jobs in them, and discarding and retrying failed jobs, you can inspect jobs in the different statuses supported by each adapter, filter them by queue name and job class name (with the idea of adding more filters in the future), pause and un-pause queues (if the adapter allows it), inspect workers, see which jobs are being run by which worker, and check a specific job or a specific worker.

[Screenshot: Default queue tab]

[Screenshot: In-progress jobs tab]

[Screenshot: Workers tab]

[Screenshot: Single job]

[Screenshot: Single worker]

Console helpers, scripting and dealing with big sets of jobs

Besides the UI, Mission Control provides a light console helper to switch between applications and adapters. Some potentially destructive actions aren't exposed via the UI (for example, discarding jobs that aren't failed, although this might change in the future), but you can always perform these from the console if you know very well what you're doing.

It's also possible that you need to deal with very big sets of jobs that are unmanageable via the UI or that you wish to write a script to deal with an incident, some cleanup or some data migration. The console helpers and the querying API with which we've extended Active Job come in handy here.

First, when connecting to the Rails console, you'll see this new message:

 bin/rails c


Type 'jobs_help' to see how to connect to the available job servers to manage jobs

Typing jobs_help, you'll get clear instructions about how to switch between applications and adapters:

>> jobs_help
You can connect to a job server with
  connect_to "<app_id>:<server_id>"

Available job servers:
  * bc4:resque_ashburn
  * bc4:resque_chicago
  * hey:resque
  * hey:solid_queue

And then:

>> connect_to "hey:solid_queue"
Connected to hey:solid_queue

Now you're ready to query and operate over jobs for this adapter via the API. Some examples of queries:

# All jobs
ActiveJob.jobs

# All failed jobs
ActiveJob.jobs.failed

# All pending jobs in some queue
ActiveJob.jobs.pending.where(queue_name: "some_queue")

# All failed jobs of a given class
ActiveJob.jobs.failed.where(job_class_name: "SomeJob")

# All pending jobs of a given class with limit and offset
ActiveJob.jobs.pending.where(job_class_name: "SomeJob").limit(10).offset(5)

# For adapters that support these statuses:
# All scheduled/in-progress/finished jobs of a given class
ActiveJob.jobs.scheduled.where(job_class_name: "SomeJob")
ActiveJob.jobs.in_progress.where(job_class_name: "SomeJob")
ActiveJob.jobs.finished.where(job_class_name: "SomeJob")

# For adapters that support filtering by worker:
# All jobs in progress being run by a given worker
ActiveJob.jobs.in_progress.where(worker_id: 42)

Some examples of bulk operations:

# Retry all the jobs (only possible for failed jobs)
ActiveJob.jobs.failed.retry_all

# Retry all the jobs of a given class (only possible for failed jobs)
ActiveJob.jobs.failed.where(job_class_name: "SomeJob").retry_all

# Discard all failed jobs
ActiveJob.jobs.failed.discard_all

# Discard all pending jobs of a given class
ActiveJob.jobs.pending.where(job_class_name: "SomeJob").discard_all
# Or all pending jobs in a given queue:
ActiveJob.jobs.pending.where(queue_name: "some-queue").discard_all

When performing these bulk operations in the console, a delay of 2 seconds between processed batches will be introduced, set via delay_between_bulk_operation_batches. You can modify it like this:

MissionControl::Jobs.delay_between_bulk_operation_batches = 5.seconds
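
For instance, here is a small incident-cleanup sketch that sticks to the calls documented above (the server name and job class are hypothetical):

connect_to "hey:solid_queue"

failed = ActiveJob.jobs.failed.where(job_class_name: "SomeJob")
puts "About to retry #{failed.count} jobs" # counts above internal_query_count_limit are reported as INFINITY

failed.retry_all # processed in batches of default_page_size, waiting delay_between_bulk_operation_batches between them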

Contributing

Thanks for your interest in contributing! To get the app running locally, just run:

bin/setup

This will load a bunch of jobs as seeds.

We have unit, functional, and system tests. If you want to run the system tests, you need to install ChromeDriver. Then you can run the tests with:

bin/rails test test/system

License

The gem is available as open source under the terms of the MIT License.

mission_control-jobs's Issues

Dashboard exception: Solid Queue::job is missing a `status` method

When I open Queues and click on a job that did not run because of an exception or because I passed wrong args, I get the following error and the dashboard fails with a fatal exception.

When I analyse the logs, the error I see is that SolidQueue::Job is missing a status method.

I'm using Rails 7.1 with mission_control-jobs (~> 0.2.1) and everything is running on Puma 6.0.

[hiphiphouse] [2024-05-22 17:55:33] I, [2024-05-22T17:55:33.515147 #15]  INFO -- : [24d6be8a-e014-4e7a-a987-6214618d79e0] Processing by MissionControl::Jobs::QueuesController#show as HTML
[hiphiphouse] [2024-05-22 17:55:33] I, [2024-05-22T17:55:33.515211 #15]  INFO -- : [24d6be8a-e014-4e7a-a987-6214618d79e0]   Parameters: {"server_id"=>"solid_queue", "application_id"=>"realestatescraperrails", "id"=>"default"}
[hiphiphouse] [2024-05-22 17:55:33] I, [2024-05-22T17:55:33.572069 #15]  INFO -- : [24d6be8a-e014-4e7a-a987-6214618d79e0]   Rendered layout /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/mission_control-jobs-0.2.1/app/views/layouts/mission_control/jobs/application.html.erb (Duration: 45.1ms | Allocations: 7221)
[hiphiphouse] [2024-05-22 17:55:33] I, [2024-05-22T17:55:33.572507 #15]  INFO -- : [24d6be8a-e014-4e7a-a987-6214618d79e0] Completed 200 OK in 57ms (Views: 25.7ms | ActiveRecord: 25.1ms | Allocations: 8192)
[hiphiphouse] [2024-05-22 17:55:34] I, [2024-05-22T17:55:34.895968 #15]  INFO -- : [779eac64-9fb1-4161-bda7-a8c088579963] Started GET "/jobs/applications/realestatescraperrails/failed/jobs?server_id=solid_queue" for 172.68.134.38 at 2024-05-22 17:55:34 +0000
[hiphiphouse] [2024-05-22 17:55:34] I, [2024-05-22T17:55:34.897793 #15]  INFO -- : [779eac64-9fb1-4161-bda7-a8c088579963] Processing by MissionControl::Jobs::JobsController#index as HTML
[hiphiphouse] [2024-05-22 17:55:34] I, [2024-05-22T17:55:34.897892 #15]  INFO -- : [779eac64-9fb1-4161-bda7-a8c088579963]   Parameters: {"server_id"=>"solid_queue", "application_id"=>"realestatescraperrails", "status"=>"failed"}
[hiphiphouse] [2024-05-22 17:55:34] I, [2024-05-22T17:55:34.926022 #15]  INFO -- : [779eac64-9fb1-4161-bda7-a8c088579963]   Rendered layout /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/mission_control-jobs-0.2.1/app/views/layouts/mission_control/jobs/application.html.erb (Duration: 13.4ms | Allocations: 4188)
[hiphiphouse] [2024-05-22 17:55:34] I, [2024-05-22T17:55:34.926398 #15]  INFO -- : [779eac64-9fb1-4161-bda7-a8c088579963] Completed 200 OK in 28ms (Views: 8.2ms | ActiveRecord: 11.5ms | Allocations: 5820)
[hiphiphouse] [2024-05-22 17:55:38] I, [2024-05-22T17:55:38.642000 #15]  INFO -- : [6cacec07-49ce-4790-b9f8-c2168298ce10] Started GET "/jobs/applications/realestatescraperrails/jobs/476c448c-2c9a-4b50-9d38-4c346a8c6ff9?filter%5Bqueue_name%5D=default&server_id=solid_queue" for 172.64.238.152 at 2024-05-22 17:55:38 +0000
[hiphiphouse] [2024-05-22 17:55:38] I, [2024-05-22T17:55:38.643418 #15]  INFO -- : [6cacec07-49ce-4790-b9f8-c2168298ce10] Processing by MissionControl::Jobs::JobsController#show as HTML
[hiphiphouse] [2024-05-22 17:55:38] I, [2024-05-22T17:55:38.643480 #15]  INFO -- : [6cacec07-49ce-4790-b9f8-c2168298ce10]   Parameters: {"filter"=>{"queue_name"=>"default"}, "server_id"=>"solid_queue", "application_id"=>"realestatescraperrails", "id"=>"476c448c-2c9a-4b50-9d38-4c346a8c6ff9"}
[hiphiphouse] [2024-05-22 17:55:38] I, [2024-05-22T17:55:38.655808 #15]  INFO -- : [6cacec07-49ce-4790-b9f8-c2168298ce10] Completed 500 Internal Server Error in 12ms (ActiveRecord: 1.4ms | Allocations: 6172)
[hiphiphouse] [2024-05-22 17:55:38] F, [2024-05-22T17:55:38.659066 #15] FATAL -- : [6cacec07-49ce-4790-b9f8-c2168298ce10]   
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] NoMethodError (undefined method `status' for #<SolidQueue::Job id: 1, queue_name: "default", class_name: "NjuskaloScraperJob", arguments: {"job_class"=>"NjuskaloScraperJob", "job_id"=>"476c448c-2c9a-4b50-9d38-4c346a8c6ff9", "provider_job_id"=>nil, "queue_name"=>"default", "priority"=>nil, "arguments"=>["www.njuskalo.hr", nil, 2, false], "executions"=>0, "exception_executions"=>{}, "locale"=>"en", "timezone"=>"UTC", "enqueued_at"=>"2024-05-22T17:44:59.815855130Z", "scheduled_at"=>nil}, priority: 0, active_job_id: "476c448c-2c9a-4b50-9d38-4c346a8c6ff9", scheduled_at: "2024-05-22 17:44:59.815697000 +0000", finished_at: nil, concurrency_key: nil, created_at: "2024-05-22 17:44:59.952260000 +0000", updated_at: "2024-05-22 17:44:59.952260000 +0000">):
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10]   
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activemodel (7.1.3.3) lib/active_model/attribute_methods.rb:489:in `method_missing'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/active_job/queue_adapters/solid_queue_ext.rb:113:in `status_from_solid_queue_job'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/active_job/queue_adapters/solid_queue_ext.rb:96:in `deserialize_and_proxy_solid_queue_job'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/active_job/queue_adapters/solid_queue_ext.rb:78:in `find_job'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/active_job/jobs_relation.rb:172:in `find_by_id!'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) app/controllers/concerns/mission_control/jobs/job_scoped.rb:10:in `set_job'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:403:in `block in make_lambda'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:183:in `block (2 levels) in halting_and_conditional'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/abstract_controller/callbacks.rb:34:in `block (2 levels) in <module:Callbacks>'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:184:in `block in halting_and_conditional'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:598:in `block in invoke_before'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:598:in `each'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:598:in `invoke_before'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:119:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/mission_control/jobs/adapter.rb:3:in `activating'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) lib/mission_control/jobs/server.rb:18:in `activating'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] mission_control-jobs (0.2.1) app/controllers/concerns/mission_control/jobs/application_scoped.rb:28:in `activating_job_server'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] turbo-rails (2.0.5) lib/turbo-rails.rb:24:in `with_request_id'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] turbo-rails (2.0.5) app/controllers/concerns/turbo/request_id_tracking.rb:10:in `turbo_tracking_request_id'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] audited (5.6.0) lib/audited/sweeper.rb:16:in `around'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] audited (5.6.0) lib/audited/sweeper.rb:16:in `around'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actiontext (7.1.3.3) lib/action_text/rendering.rb:23:in `with_renderer'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actiontext (7.1.3.3) lib/action_text/engine.rb:69:in `block (4 levels) in <class:Engine>'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `instance_exec'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:141:in `run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/abstract_controller/callbacks.rb:258:in `process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal/rescue.rb:25:in `process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal/instrumentation.rb:74:in `block in process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/notifications.rb:206:in `block in instrument'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/notifications/instrumenter.rb:58:in `instrument'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/notifications.rb:206:in `instrument'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal/instrumentation.rb:73:in `process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal/params_wrapper.rb:261:in `process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activerecord (7.1.3.3) lib/active_record/railties/controller_runtime.rb:32:in `process_action'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/abstract_controller/base.rb:160:in `process'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionview (7.1.3.3) lib/action_view/rendering.rb:40:in `process'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal.rb:227:in `dispatch'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_controller/metal.rb:309:in `dispatch'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/route_set.rb:49:in `dispatch'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/route_set.rb:32:in `serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:51:in `block in serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:131:in `block in find_routes'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:124:in `each'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:124:in `find_routes'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:32:in `serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/route_set.rb:882:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/engine.rb:536:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/railtie.rb:226:in `public_send'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/railtie.rb:226:in `method_missing'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/mapper.rb:22:in `block in <class:Constraints>'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/mapper.rb:51:in `serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:51:in `block in serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:131:in `block in find_routes'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:124:in `each'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:124:in `find_routes'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/journey/router.rb:32:in `serve'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/routing/route_set.rb:882:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] warden (1.2.9) lib/warden/manager.rb:36:in `block in call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] warden (1.2.9) lib/warden/manager.rb:34:in `catch'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] warden (1.2.9) lib/warden/manager.rb:34:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/tempfile_reaper.rb:20:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/etag.rb:29:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/conditional_get.rb:31:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/head.rb:15:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/http/permissions_policy.rb:36:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/http/content_security_policy.rb:33:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack-session (2.0.0) lib/rack/session/abstract/id.rb:272:in `context'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack-session (2.0.0) lib/rack/session/abstract/id.rb:266:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/cookies.rb:689:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/callbacks.rb:101:in `run_callbacks'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/callbacks.rb:28:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/debug_exceptions.rb:29:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/show_exceptions.rb:31:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/rack/logger.rb:37:in `call_app'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/rack/logger.rb:24:in `block in call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/tagged_logging.rb:135:in `block in tagged'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/tagged_logging.rb:39:in `tagged'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/tagged_logging.rb:135:in `tagged'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] activesupport (7.1.3.3) lib/active_support/broadcast_logger.rb:240:in `method_missing'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/rack/logger.rb:24:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/remote_ip.rb:92:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/request_id.rb:28:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/method_override.rb:28:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/runtime.rb:24:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/executor.rb:14:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] actionpack (7.1.3.3) lib/action_dispatch/middleware/static.rb:25:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] rack (3.0.11) lib/rack/sendfile.rb:114:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] railties (7.1.3.3) lib/rails/engine.rb:536:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/configuration.rb:272:in `call'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/request.rb:100:in `block in handle_request'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/thread_pool.rb:378:in `with_force_shutdown'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/request.rb:99:in `handle_request'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/server.rb:464:in `process_client'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2168298ce10] puma (6.4.2) lib/puma/server.rb:245:in `block in run'
[hiphiphouse] [2024-05-22 17:55:38] [6cacec07-49ce-4790-b9f8-c2

Support dispatching recurring tasks immediately

It would be great to be able to dispatch a new job for a recurring task straight from the mission control recurring task page.

I would love to make a PR for it, any pointers? My naive approach would be to add a new controller and simply call perform_later on the job class.
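
For reference, that naive approach would look roughly like this (the controller name, parent class and parameter are hypothetical, and job arguments are ignored):

class RecurringTaskDispatchesController < MissionControl::Jobs::ApplicationController
  def create
    # Enqueue the recurring task's job class right away
    params[:job_class_name].constantize.perform_later
    redirect_back fallback_location: root_url
  end
end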

bin/setup error

Hey!

I'm getting the following error when running bin/setup:
[Screenshot: 2024-04-02 at 01 20 21]

I noticed there is no v2.0.1 in Ruby Gems and I get redirected to https://github.com/rails/solid_queue when navigating to https://github.com/basecamp/solid_queue, so I'm guessing the repository was transferred but I might be wrong.

By making gem "solid_queue" point to rails/solid_queue instead of basecamp/solid_queue in the Gemfile, everything worked just fine for me and I could get the app running. I can create a PR with the fix (it would be just one line), but I wanted to double-check I wasn't missing something or doing anything wrong in the setup.

Discard scheduled jobs

Would be nice to be able to discard a scheduled job.

Use case: preventing a job that was rescheduled because an error was raised from running again.

Infinite redirect loop if no servers can be found

Problem

When MissionControl::Jobs::Current.application.servers is empty, the application gets stuck in an infinite redirect loop. This might happen when config.active_job.queue_adapter is not set.

Why it happens

Visiting http://localhost:3000/jobs will redirect to http://localhost:3000/jobs/ which redirects to itself and so on.

This happens when there are zero MissionControl::Jobs::Current.application.servers, which raises a MissionControl::Jobs::Errors::ResourceNotFound, which in turn redirects to root_url, which calls queues#index, which starts the redirect loop again.

Proposed solution

If no servers are found, do not redirect to root_url but show an error message instead. Optional: if config.active_job.queue_adapter is not set, suggest that the user set it, as that's likely the cause of no servers being found.
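
For reference, making sure an adapter is configured on the application side avoids the empty-servers situation in the first place; a minimal sketch (the adapter choice is just an example):

# config/application.rb
config.active_job.queue_adapter = :solid_queue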

Discard and Retry buttons not working for jobs that use retry_on

We're experiencing an issue where jobs that have a retry_on will give the error Job with id '#####################' not found when we click the Discard or Retry button for that failed job. This does not happen when we click the button for jobs that failed on the first attempt and do not have a retry_on.

I think the root issue is this:

When a job has retry_on, if it fails, the solid_queue_jobs entry for that job gets a finished_at timestamp and the retry creates a new solid_queue_jobs entry with a new id but the same active_job_id. That will continue for the number of retry_on attempts. The final job that fails and does not queue another retry will have finished_at: nil and it will have a related solid_queue_failed_executions entry created with a job_id that matches. This is all as expected.

Example: if retry_on is set to 5 attempts, the database will end up with 5 solid_queue_jobs rows that match that same active_job_id. The first 4 will have finished_at timestamps, the last one will not, and the last one will link to a solid_queue_failed_executions database row (but none of the first 4 do). This is the behavior we are seeing and I'm assuming that's working as expected.
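
To illustrate, the state described above would look something like this (IDs and timestamps are made up):

SolidQueue::Job.where(active_job_id: "some-active-job-id")
               .order(:created_at)
               .pluck(:id, :finished_at)
# => [[101, 2024-05-22 17:45:00 UTC],  # first failed attempt, already retried
#     [102, 2024-05-22 17:50:00 UTC],
#     [103, 2024-05-22 17:55:00 UTC],
#     [104, 2024-05-22 18:00:00 UTC],
#     [105, nil]]                      # final attempt, the only one with a failed_execution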

The issue comes in when mission control tries to retrieve the failed job when you click the Discard or Retry button. It is using the active_job_id; in tracing the code, it appears to end up at line 172 of the active_job/queue_adapters/solid_queue_ext.rb file (https://github.com/basecamp/mission_control-jobs/blob/c14ab260b493195cfa99863e8a939882605e8ed6/lib/active_job/queue_adapters/solid_queue_ext.rb#L172).

There it is attempting to find the Solid Queue job by active_job_id, but in this scenario multiple rows match due to the retry_on. The job returned by that query is the first one that matches (the oldest one), which does not have a matching solid_queue_failed_executions row (it is the first failed attempt). When the check on line 173 happens (matches_relation_filters?(job)), which checks for a matching status on line 281 (the failed status), it returns false because that particular job has no failed_execution: it was the first failed attempt rather than the most recent one. So it returns the error that it cannot find the job.

It seems that in this scenario, when there is more than one row in the solid_queue_jobs table that match that active_job_id, if it is looking for a failed job it needs to find the most recent one with finished_at: nil which will then have a corresponding failed_execution. Or perhaps it should find the job by the id (the solid_queue_jobs id), rather than by the active_job_id?

17.7 MB binary speedtest file in the root of this repo?

There's currently a binary speedtest file in the root of the repo that's 17.7 MB.

Was this meant to be .gitignored? I was going to open a PR to do this but I'm not familiar with this speedtest file and wondered if you might have plans for it in the future.

Retry from dashboard does not work

We have a Job listed in "Failed jobs" with this error:

Errno::ECONNREFUSED
Failed to open TCP connection to :80 (Connection refused - connect(2) for nil port 80)

It is clearly a temporary network error.

The job has this retry policy:

retry_on StandardError, wait: 5.minutes, attempts: 3

I also see this status for the retries (in the section "Raw data"):

    "executions": 2,
    "exception_executions": {
      "[StandardError]": 2
    },

The problem is that when I click "Retry" in the dashboard, it seems that the job is not even retried, and after a few seconds it's sent to "Failed jobs" again.

The executions count is not incremented (it does not change), and from the time it takes to execute the job I am pretty sure that it is not even trying to execute it again.

Is this by design when you reach the max number of attempts? (In this case it would be better to clarify this behavior in the dashboard)

Or is it a bug?

Allow connecting to a server by default in the console

When using the Rails console to manage jobs, one has to connect first to a given server via "connect_to".
I believe most apps will have only one server, so connecting to it by default would be convenient.

Setting a controller base class that uses http_basic_authenticate_with breaks bin/rails assets:precompile

Start with an application that is working fine and a docker image that is building successfully.

The application already has a Admin::AdminController that looks like this:

class Admin::AdminController < ApplicationController
  layout 'admin'
  http_basic_authenticate_with name: 'admin', password: Rails.application.credentials.admin_password, realm: 'Admin'
end

Then add this gem and add this line to an initializer:

MissionControl::Jobs.base_controller_class = "Admin::AdminController"

Or, alternatively, add this line to config/application.rb:

config.mission_control.jobs.base_controller_class = "Admin::AdminController"

Once you add that line of configuration, you start getting this error when you try to build the Docker image:

 > [10/10] RUN SECRET_KEY_BASE_DUMMY=1 ./bin/rails assets:precompile:
3.508 bin/rails aborted!
3.509 ArgumentError: Expected password: to be a String, got NilClass (ArgumentError)
3.509 
3.509             raise ArgumentError, "Expected password: to be a String, got #{password.class}" unless password.is_a?(String)
3.509                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3.509 /app/app/controllers/admin/admin_controller.rb:3:in `<class:AdminController>'
3.509 /app/app/controllers/admin/admin_controller.rb:1:in `<main>'
3.509 /app/config/environment.rb:5:in `<main>'
3.510 Tasks: TOP => environment

Is that configuration of mission_control causing eager loading of the controller file or something?
Or is it causing the controller file to be loaded before credentials are fully loaded?

Any solution?
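
One possible workaround sketch (an assumption, not a confirmed fix): coerce the password to a string so that booting without real credentials, as happens during assets:precompile with SECRET_KEY_BASE_DUMMY=1, doesn't raise:

class Admin::AdminController < ApplicationController
  layout 'admin'
  # nil.to_s == "" keeps precompilation from raising; make sure a real
  # password exists in production credentials before relying on this
  http_basic_authenticate_with name: 'admin',
    password: Rails.application.credentials.admin_password.to_s,
    realm: 'Admin'
end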

Including a changelog, git tag and release for new versions?

Hi,

At the time of making this issue I noticed new versions are released by bumping the gem's version file in a commit as seen here 307e1e3, which you can then find on Ruby Gems at https://rubygems.org/gems/mission_control-jobs.

Is there a path forward to where new releases will include a changelog, a git tag, and a GitHub release? Having an empty GitHub release which points back to a tag is useful for quickly seeing the latest release of a project.

inspect on the arguments in the index pages makes the pages explode for some more complex types of argument objects

Because we pass in some more complex arguments, our index pages are exploding, and we're currently forced to override inspect on some of our classes, which is kind of 'bwah'...

It seems that 37s is largely passing very simple arguments, but I guess it's not so uncommon to pass in somewhat more complex objects.

I see several options:

  • considering using to_s for the arguments
  • more aggressive truncation on the inspect
  • considering a cascade of display_name, to_s on the arguments
  • ability to configure per job
  • ...

The screenshot below is a variant where we already compacted the inspect output for some key classes. Originally, one row would take up more than one page on a 4K monitor...
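
For reference, the compacted-inspect workaround mentioned above looks roughly like this (the class and format are hypothetical):

class Listing
  # Keep job arguments readable in the index pages instead of dumping every attribute
  def inspect
    "#<Listing id=#{id}>"
  end
end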

[Screenshot: created at 2018-03-15 115819 792462000 +01]

Mounting the engine with a route constraint

Hey,

Despite the mention of setting the base controller, I was trying to define a constraint while mounting the engine:
mount MissionControl::Jobs::Engine, at: "jobs", constraints: AdminConstraint.new

class AdminConstraint
  def matches?(request)
    return unless request.session[:user_id]
    User.find(request.session[:user_id])&.admin?
  end
end

which fails because <ActionDispatch::Request::Session:0x72ba0 not yet loaded>

I don't know if it's a limitation of Rails engines (it used to work with the Sidekiq dashboard, though) or if there is some loading logic to change to make it work. Is that something that can be envisioned? Thanks!

Using with Rails main

We run Rails main in our Gemfile, and since this commit 83914ae we've been unable to update because of the restricted Rails version number.

Is there any opposition to changing this to >= 7.1? Or is there another recommended way to use this with Rails main?

Doesn't work with Rails 8

Hi All,

Without a doubt this is one of the very few public issues I've written in years, so forgive me if I'm bungling it. Any feedback is welcome!

So I'm running rails new ..... --main, which is effectively Rails 8 alpha, and the gem won't work with this version of Rails. Totally understandable, but this may still be unexpected behaviour to the maintainers.

bundle install
Fetching gem metadata from https://rubygems.org/...........
Resolving dependencies...
Could not find compatible versions

Because rails >= 7.1, < 8.A could not be found in https://github.com/rails/rails.git (at main@ff0ef93)
  and every version of mission_control-jobs depends on rails ~> 7.1,
  mission_control-jobs cannot be used.
So, because Gemfile depends on mission_control-jobs >= 0,
  version solving has failed.

IRB context wrong number of arguments

I recently upgraded from Ruby 3.2.2 to 3.3.0 and from Rails 7.0 to 7.1, and now, when I try to use the console, it throws the following error. Everything works as expected except for the console.

[Screenshot: 2024-04-09 at 7 09 04 AM]

Add more filters to the different job statuses

For example, filter scheduled jobs by scheduled_at time ranges, or finished jobs by finished_at time ranges, or blocked jobs by the block (Blocked by) key. The current filters by queue name and job class are generic and apply to all statuses, but we could have some filters that apply only to certain statuses. The Adapter#supported_filters already takes a JobsRelation as argument, so it can easily return different filters depending on jobs_relation.status.
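
A rough sketch of that idea, assuming supported_filters keeps its current shape (the extra filter names are made up):

def supported_filters(jobs_relation)
  case jobs_relation.status
  when :scheduled then [ :queue_name, :job_class_name, :scheduled_at_range ]
  when :finished  then [ :queue_name, :job_class_name, :finished_at_range ]
  when :blocked   then [ :queue_name, :job_class_name, :blocked_by ]
  else                 [ :queue_name, :job_class_name ]
  end
end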

Does Mission Control's JavaScript need to be preloaded? [importmaps]

Currently, I'm seeing this warning in my browser console everywhere in my app, even though mission control is only available on one authenticated route:

The resource /assets/mission_control/jobs/application-8d538c5b.js was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it wasn't preloaded for nothing.

This is because the gem's importmap.rb has every dependency set to preload: true, as in:

pin "application-mcj", to: "mission_control/jobs/application.js", preload: true

And (I think?) because the engine appends all these paths to the application's importmap config:

      initializer "mission_control-jobs.importmap", before: "importmap" do |app|
        app.config.importmap.paths << root.join("config/importmap.rb")
        app.config.importmap.cache_sweepers << root.join("app/javascript")
      end

Having every user, whether or not they are authorized to see the Mission Control jobs UI, preload all of its JavaScript seems less than optimal. Is there any way around this?

Make the 'Job' cells of the UI a configurable partial

Your default partial for showing info about the job you are viewing is a great default. But it would likely be simple and hugely impactful if it rendered a partial for that based on the Job, and let standard Rails partial identification (or maybe something with a custom path setting) be used to provide alternative, definable partials.

Right now, with Sidekiq's UI, which is very similar, we often need to look at the arguments, find an ID number, and then go look that up in another UI. Having configurable partials would let us trivially build those nicely-formatted UIs (for example, showing a name for the object instead of its GlobalID) and hyperlink to other management pages.

Dispatching using polling interval when using retry all in failed executions

As a newcomer to this feature, I've encountered something that I'm unsure is an issue or simply a gap in my understanding. I've been using jobs to invoke an API that has a rate limit. To avoid hitting this limit, I've configured Solid Queue in the YAML file with a polling interval of 5 seconds.

When using this setup, everything functions as expected; however, occasionally, jobs fail. After fixing these failures, I attempt to retry the calls. It appears that the 'retry all' option dispatches all of them simultaneously, which triggers the rate limit once again. When I send a few of them individually, I don't encounter this issue.

I'm uncertain whether this could also impact scheduled jobs, but it took me a while to determine why I was experiencing this issue. The only explanation I can think of is that the 'retry all' function might not be taking the polling interval configuration into account.

Auto update UI

Is this going to get support for updating the page live?

The linkable Queue labels throughout the application create incorrect URLs

On the main Queue tab, click any one of the "queue name" buttons, and you are redirected to a page for that queue.

On any other page where a "Queue name" button exists (scheduled jobs, blocked jobs, failed jobs, etc.), the link errors (fails to successfully redirect). I'm not sure if my authentication system is impacting the errors, but I have confirmed the root cause is:

  • On the queue page, the "Queue name" buttons honor the case of the queue name (e.g., QoS60s is part of the URL with that casing).
  • On all the other pages, the "Queue name" buttons have the queue name in all lower case, which fails in a case-sensitive query somewhere.

I might have time to dig more if this description isn't robust enough to take you to the "oh crap, why are we downcasing this string" line of code.

Existing inflection for "UI" breaks UiHelper

My app includes an acronym inflection for "UI", used for my own ViewComponents:

inflect.acronym "UI"

When I try to load the jobs dashboard, I get an error:

uninitialized constant MissionControl::Jobs::ApplicationHelper::UiHelper
app/helpers/mission_control/jobs/application_helper.rb:6:in `<module:ApplicationHelper>' 

If I remove the inflection, the error goes away.

Strangely, the error only occurs the first time I try to load /jobs after booting the application. If I reload the page, I get a different error.

Is there anything that can be done to resolve this, other than removing my inflection and renaming my components? Is there some way to override the inflection in the engine? Thanks!

Cannot open Heroku console after adding mission_control-jobs

Hey Guys,

After adding mission_control-jobs to my app, I cannot open the console on Heroku (Prod) anymore. Everything is fine on my local machine (Dev).

I'm on Heroku stack 22.

I have traced the error back to mission_control (the error appears after adding mission_control-jobs in isolation), but really don't know where to start troubleshooting this since it doesn't happen on my local machine.

Just wanted to let you guys know. (if you need any more info, feel free to reach out)

Heroku Error: https://gist.github.com/badbusiness/f1030390308f23be85671e543946f131
Gemfile.lock: https://gist.github.com/badbusiness/8137bede068ca1eaaccd6c2610313038

Undefined method `after_fork' for Resque:Module (NoMethodError)

Nice work, mission_control-jobs looks great 👏

I found this issue, I'm looking into it further and will come back if I find anything.

17:17:16 web.1  | [Puma] PID=12 ! Unable to load application: NoMethodError: undefined method `after_fork' for Resque:Module
17:17:16 web.1  | /app/vendor/bundle/ruby/3.2.0/gems/rails_semantic_logger-4.14.0/lib/rails_semantic_logger/engine.rb:242:in `block in <class:Engine>': undefined method `after_fork' for Resque:Module (NoMethodError)
17:17:16 web.1  | 
17:17:16 web.1  |       Resque.after_fork { |_job| ::SemanticLogger.reopen } if defined?(Resque)
17:17:16 web.1  |             ^^^^^^^^^^^
17:17:16 web.1  | 	from /app/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3/lib/active_support/lazy_load_hooks.rb:94:in `block in execute_hook'

Potentially remove mission_control-web from Ruby Gems?

Hey,

When searching for this gem on Ruby Gems I noticed two gems came up in the auto-complete search:

The web gem links to a Basecamp repo that was likely deleted: https://github.com/basecamp/mission_control-web

New users might get confused. I originally thought this project was split into 2 different gems.

What do you think about yanking mission_control-web from Ruby Gems and / or creating https://github.com/basecamp/mission_control-web again but archiving it with a link back to this repo so folks know where to go?

NoMethodError in MissionControl::Jobs::Queues#index

Hi folks, I have a fresh install of Solid Queue and Mission Control (on an app recently upgraded from Rails 7.0 to 7.1).

I'm getting an error when trying to load /jobs.

NoMethodError in MissionControl::Jobs::Queues#index
mission_control-jobs (0.2.1) app/views/mission_control/jobs/queues/index.html.erb:1

My hunch is that it's something to do with the changes to load paths in Rails 7.1, but I'm using the 7.1 default settings so I would imagine it should work.

Any advice on how I can debug this? Thanks!

Encoding::UndefinedConversionError: "\xE2" from ASCII-8BIT to UTF-8

If your computer has a Socket.gethostname that has an apostrophe in it, like Basecamp’s-Computer, you get the following error when running bin/setup.

# bin/setup

Encoding::UndefinedConversionError: "\xE2" from ASCII-8BIT to UTF-8

This is caused by an encoding issue in SolidQueue when it retrieves the hostname. I have a PR open to fix the issue there: rails/solid_queue#143

When I apply that fix to SolidQueue, the MissionControl error goes away and bin/setup finishes successfully.

Running `bin/setup` leads to errors on Ruby 3.3.0

Hi.
I just wanted to start the development locally on my machine.
Running bin/setup leads to a

NoMethodError: undefined method `[]' for nil

Here's the complete trace:

Creating databases...
Dropped database 'db/development.sqlite3'
Database 'db/test.sqlite3' does not exist
Created database 'db/development.sqlite3'
Created database 'db/test.sqlite3'
Deleting existing jobs...
Generating 70 finished jobs for BC4 - resque_ashburn...
bin/rails aborted!
NoMethodError: undefined method `[]' for nil
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:41:in `dispatch_jobs'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:56:in `load_finished_jobs'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:24:in `block in load'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/resque/thread_safe_redis.rb:28:in `enable_with'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/resque/thread_safe_redis.rb:15:in `with_per_thread_redis_override'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/active_job/queue_adapters/resque_ext.rb:10:in `activating'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/mission_control/jobs/server.rb:18:in `activating'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:23:in `load'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:94:in `block (2 levels) in <top (required)>'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/mission_control/jobs/identified_elements.rb:7:in `each'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/mission_control/jobs/identified_elements.rb:7:in `each'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:93:in `block in <top (required)>'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/mission_control/jobs/identified_elements.rb:7:in `each'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/lib/mission_control/jobs/identified_elements.rb:7:in `each'
/Users/holgerfrohloff/projects/OSS/mission_control-jobs/test/dummy/db/seeds.rb:92:in `<top (required)>'

I used Ruby 3.3.0.
It worked (almost flawlessly) on Ruby 3.2.2.

Add support for `resque-scheduler`

Resque by itself doesn't support jobs enqueued in the future, it needs resque-scheduler for this. Mission Control could check if resque-scheduler is available, and if it is, support inspecting scheduled (delayed) jobs. It might be tricky due to the data structure that resque-scheduler uses for delayed jobs (a Redis sorted set IIRC), but might be doable.

Retries fail because of missing controller action

The JobScoped concern sets a before_action on the index action that does not exist in the RetriesController. For retries, it needs to run on the create action instead.

This is caused by the new Rails 7.1 default config for raising errors on missing callback actions.

AbstractController::ActionNotFound (The index action could not be found for the :set_job
callback on MissionControl::Jobs::RetriesController, but it is listed in the controller's
:except option.

Raising for missing callback actions is a new default in Rails 7.1, if you'd
like to turn this off you can delete the option from the environment configurations
or set `config.action_controller.raise_on_missing_callback_actions` to `false`.
):

The before_action might need to get extracted to the controllers, but it would be nice if it didn't have to?

Paginate workers page

/workers is not currently paginated; only the pages that list jobs are. Normally there will be many fewer workers than jobs (e.g. finished or scheduled jobs might number in the millions), but still, in a medium or big-ish app there will surely be hundreds of workers, so this page might be a bit too slow. It'd be good to have a Page implementation we can use for workers (the existing one is tailored to JobsRelation) and use it to paginate this.
