mojolicious / minion

:octopus: Perl high performance job queue

Home Page: https://metacpan.org/release/Minion

License: Artistic License 2.0

Perl 97.05% CSS 0.74% JavaScript 0.55% PLpgSQL 1.66%
perl job-queue postgresql mojolicious

minion's People

Contributors: aferreira, akron, avkhozov, briandfoy, candyangel, grinnz, jberger, kiwiroy, kraih, kwakwaversal, mergify[bot], renatocron, rgci, stuartskelton


minion's Issues

Use of missing_after minion attribute

repair executes every remove_after seconds for each worker: https://github.com/kraih/minion/blob/master/lib/Minion/Command/minion/worker.pm#L46. As a result, jobs held by abandoned workers are only marked as failed after remove_after seconds rather than missing_after seconds. Even with the default values (1 day and 10 days) this is not great: we can only restart the failed jobs after 10 days instead of 1 day, as the documentation suggests.

It seems we need two repair methods, repair_workers and repair_jobs, each with its own interval.
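A rough sketch of what the worker loop could look like with two independent repair intervals. The method names repair_workers and repair_jobs are hypothetical and do not exist in Minion:

  # Hypothetical sketch only: repair_workers/repair_jobs are invented names
  my ($last_workers, $last_jobs) = (0, 0);
  while (1) {
    my $now = time;

    # Abandoned workers should be cleaned up on the missing_after interval
    if ($now - $last_workers > $minion->missing_after) {
      $minion->backend->repair_workers;    # hypothetical method
      $last_workers = $now;
    }

    # Old finished jobs only need cleanup on the remove_after interval
    if ($now - $last_jobs > $minion->remove_after) {
      $minion->backend->repair_jobs;       # hypothetical method
      $last_jobs = $now;
    }

    # ...dequeue and perform jobs...
  }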

Prevent duplicate jobs

For example, I have some items in my shop in the "books" category. When I update a book, I need to update the search index for that category, so I create a task and enqueue a job:

$minion->add_task(
  update_index => sub {
    my ($job, $type) = @_;
    # a lot of code
    # about 1 minute to execute
  }
);

$minion->enqueue(update_index => ['books']);

But if I edit the item again, I end up with two identical (inactive) jobs, and I don't want to do the same work twice.

$VAR1 = [
          {
            'task' => 'update_index',
            'created' => '1417737737.14251',
            'retries' => '0',
            'priority' => '0',
            'delayed' => '1',
            'state' => 'inactive',
            'id' => 'ff8a853f39bab0e5cb962e801ac3b857',
            'args' => [
                        'books'
                      ]
          },
          {
            'state' => 'inactive',
            'delayed' => '1',
            'args' => [
                        'books'
                      ],
            'id' => 'e8320bd49ccba196c9330721981497c7',
            'task' => 'update_index',
            'priority' => '0',
            'retries' => '0',
            'created' => '1417737737.14208'
          }
        ];

I need some feature that helps me prevent duplicate jobs. Do you have any idea how to implement that? I could do it in my own backend, but it would be good to have a standard solution for all backends.
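An application-level workaround (not a built-in Minion feature) could check for an identical inactive job before enqueueing. The list_jobs signature and return value differ between Minion versions, so treat this strictly as a sketch:

  # Workaround sketch: refuse to enqueue when an identical inactive job
  # already exists for the same task. Backend API may differ by version.
  use Mojo::JSON qw(encode_json);

  sub enqueue_unique {
    my ($minion, $task, $args) = @_;

    my $results = $minion->backend->list_jobs(
      0, 100, {tasks => [$task], states => ['inactive']});
    for my $job (@{$results->{jobs}}) {
      return undef if encode_json($job->{args}) eq encode_json($args);
    }

    return $minion->enqueue($task => $args);
  }

  enqueue_unique($minion, update_index => ['books']);
  enqueue_unique($minion, update_index => ['books']);    # skipped, duplicate

Note this is racy without a database-level uniqueness guarantee, which is why a standard backend solution would be preferable.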

Batches

Job dependencies is a topic that keeps coming up every now and then. Personally, I still believe that we don't need a sophisticated dependency system. But it might be nice to have something to synchronize a batch of jobs. Say you have 10 images that need to be resized, and mailed somewhere together afterwards. Wouldn't it be convenient to resize those images in 10 separate jobs, and then have a followup job handle the mailing aspect once the last image has been resized?
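The batch idea could be approximated in application code today. This sketch assumes a hypothetical increment_counter helper backed by some shared atomic storage (e.g. a database counter):

  # Sketch: each resize job records its completion, and whichever job
  # finishes last enqueues the follow-up mail job.
  use constant BATCH_SIZE => 10;

  $minion->add_task(resize => sub {
    my ($job, $batch_id, $image) = @_;
    # ...resize $image...
    my $done = increment_counter($batch_id);    # hypothetical helper
    $job->minion->enqueue(mail => [$batch_id]) if $done == BATCH_SIZE;
  });

A native batch feature would make this bookkeeping unnecessary.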

Help to upgrade MongoDB Backend

I'm currently trying to upgrade the Minion::Backend::MongoDB backend (unmaintained since 2015) to support interface changes in Minion and MongoDB.

https://github.com/EmilianoBruni/Minion-Backend-MongoDB

So far I've been able to make minion_bench.pl, linkcheck and most parts of the admin interface work, but I had to change Minion.pm code because of some strange behaviour, and I need some advice.

The problem is that after a fork, the MongoDB client needs to reconnect:

https://metacpan.org/pod/MongoDB::MongoClient#THREAD-SAFETY-AND-FORK-SAFETY

So I need to intercept the worker forks. The best way I could think of was to subscribe to the worker's dequeue event in register_worker, but there I only have the id of the worker. If you could pass the worker object, I could register this event there.

For now I have overloaded the minion method of the backend like this:

sub minion {
  my ($self, $minion) = @_;

  return $self->{minion} unless $minion;

  $self->{minion} = $minion;
  weaken $self->{minion};

  $minion->on(worker => sub {
    my ($minion, $worker) = @_;
    $worker->on(dequeue => sub { pop->once(spawn => \&_spawn) });
  });
}

sub _spawn {
  my ($job, $pid) = @_;
  my ($id, $task) = ($job->id, $job->task);
  $job->minion->backend->mongodb->client->reconnect();
}

But now, the problem. This method is called in Minion.pm with this one-liner:

return $self->backend($class->new(@_)->minion($self));

but with this call I end up with an empty minion object in my overloaded method ( bless({}, 'Minion') ).

If I change your code, expanding the one-liner like this:

my $be = $class->new(@_); $be->minion($self); return $self->backend($be);

everything works fine.

Can you help me find where the problem is, and how to overload the minion method correctly so that your one-liner works?

Thanks in advance, and sorry for my bad English.

Make Minion easier to use outside of Mojolicious web apps

While installing Minion will always require Mojolicious, there is no real reason for the worker to be a web application. I believe all we need to do is move the worker lifecycle code from Minion::Command::minion::worker into a new module or Minion::Worker method. All the log messages could become events in Minion::Worker that Minion::Command::minion::worker subscribes to.

use Minion;

my $minion = Minion->new(Pg => 'postgres://...');
$minion->add_task(foo => sub {
  my ($job, @args) = @_;
  ...
});
$minion->add_task(bar => sub {
  my ($job, @args) = @_;
  ...
});

my $worker = $minion->worker;
$worker->{jobs} = 12;
$worker->run;

That would still leave the app attribute, but that's pretty easy to ignore because of its default value, and it doesn't really get in the way.

Multiple tasks enqueue

Are there plans to add support for enqueuing multiple jobs at once (with a single PostgreSQL query, for example)?
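For context, the only option right now is a loop, which issues one query per job. A bulk method (hypothetical, it does not exist in Minion) could batch these into a single multi-row INSERT on the PostgreSQL backend:

  # Current approach: one enqueue (and one query) per job
  $minion->enqueue(update_index => [$_]) for qw(books movies music);

  # A hypothetical bulk API could look like this:
  # $minion->enqueue_many(
  #   [update_index => ['books']],
  #   [update_index => ['movies']],
  #   [update_index => ['music']],
  # );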

Minion::Backend::Pg is DELETE waiting

  • Minion version: 5.08
  • Perl version: 5.22
  • Operating system: CentOS release 6.5 (Final)

Steps to reproduce the behavior

perl ./script/minion_example minion worker -j 20

(worker number is 20)

Expected behavior

Actual behavior

postgresql is stuck on DELETE waiting

postgres 16695 16649  0 11:30 ?        00:00:00 postgres: minion minion_jobs_20160613 10.162.63.17(39757) idle
postgres 16696 16649 33 11:30 ?        00:15:48 postgres: minion minion_jobs_20160613 10.162.63.17(39758) DELETE
postgres 18112 16649  1 11:41 ?        00:00:40 postgres: minion minion_jobs_20160613 127.0.0.1(42078) DELETE waiting
postgres 19751 16649 74 11:55 ?        00:15:47 postgres: minion minion_jobs_20160613 10.162.63.17(44214) DELETE
postgres 19937 16649  3 11:57 ?        00:00:40 postgres: minion minion_jobs_20160613 10.162.63.17(45155) DELETE waiting
postgres 19944 16649  3 11:57 ?        00:00:40 postgres: minion minion_jobs_20160613 10.162.63.17(45334) DELETE waiting
root     20156 20150  0 11:58 ?        00:00:00 perl /opt/app/edge_api/script/edge_api minion worker -j 20
postgres 20178 16649  0 11:58 ?        00:00:00 postgres: minion minion_jobs_20160613 127.0.0.1(44288) idle
postgres 20179 16649  3 11:58 ?        00:00:40 postgres: minion minion_jobs_20160613 127.0.0.1(44289) DELETE waiting
postgres 20406 16649  0 12:00 ?        00:00:00 postgres: minion minion_jobs_20160613 10.162.63.17(46778) idle
postgres 20407 16649  0 12:00 ?        00:00:00 postgres: minion minion_jobs_20160613 10.162.63.17(46779) idle
postgres 21138 16649  0 12:06 ?        00:00:00 postgres: minion minion_jobs_20160613 10.162.63.17(50022) idle
postgres 21139 16649  0 12:06 ?        00:00:00 postgres: minion minion_jobs_20160613 10.162.63.17(50023) DELETE waiting

Mojo::EventEmitter in other processes

Hi!

I'm building a launcher based on a Minion app and ran into some problems with events emitted in child processes. The child's event listener has the same object address as the minion event listener, but minion doesn't actually receive the child's events at all. I'll give a few examples a bit later.
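A minimal illustration of the underlying fork issue (not from the report itself): after fork, parent and child each hold their own copy of the emitter, so events emitted in one process never reach subscribers in the other, even though the variable looks like "the same object" in both.

  use Mojo::EventEmitter;
  use feature 'say';

  my $e = Mojo::EventEmitter->new;

  if (fork() == 0) {
    # Child process: this emit only fires subscribers that exist in the
    # child's copy of $e; the parent never observes it.
    $e->emit('done');
    exit 0;
  }

  # Parent process: subscribed only in the parent's copy, so the child's
  # emit above can never trigger this callback.
  $e->on(done => sub { say 'never reached' });
  wait;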

Run task with args hostname

At the moment a task cannot be targeted at a particular host.

Ex:

  1. Some app has many workers on different hosts
  2. Some of these workers get a task, create a job and run it

For example, I have two tasks:
1 - Launch server
2 - Stop server

I run the first task, and the worker on the bender.com host starts the server. Next, I run the second task, and it is picked up by a worker on the fray.com host, which cannot stop a server running on bender.com.

I suggest updating this code (https://github.com/kraih/minion/blob/master/lib/Minion.pm#L118) so that one could specify the target host for a task.

History

This is related to #25. Right now we generate statistics on demand, based on the current content of the queue, which only goes back a few days. It might be nice to have persistent statistics as well, like how many jobs the queue has processed during its lifetime, how many failed and got retried, perhaps even daily/weekly summaries.

Backward compatibility breakage

Hi!

I'm not sure when exactly this occurred, but I've updated my Perl modules and now the whole Minion API seems broken. Among the changes I have identified so far:

  • app->minion->backend->job_info isn't available anymore, have to use list_jobs instead.
  • app->minion->backend->list_jobs now returns a hash ref (was an array ref not so long ago).

I'm still investigating, but other pieces seem to have moved as well. Maybe this is all normal, but I could not find any explanation so far, so I'm a bit confused.

  • Minion version: 8.03
  • Perl version: (with perlbrew) 5.26.1
  • Operating system: Linux Mint Debian Edition LMDE 2 Betsy

BTW, thanks for all the good work, Mojo rocks. I've built Alambic on top of it, and your framework has proved great and flawless up to now.

Very slow job perform method

I've noticed that the Minion::Job->perform method is very slow. I ran some benchmarks with the Redis backend:

my $minion = Minion->new(Redis => { sock => '/run/redis/redis.sock' });
$minion->reset;
my $worker = $minion->worker->register;

$minion->add_task(empty => sub {
    my ($job, $num) = @_;
    $num++;
});

timethese (-5, {
    job_perform  => sub {
        my $id = $minion->enqueue(empty => [1]);
        my $job = $worker->dequeue(0);
        $job->perform;
    },
    job_perform2  => sub {
        my $id = $minion->enqueue(empty => [1]);
        my $job = $worker->dequeue(0);
        my $cb = $minion->tasks->{$job->{task}};
        $job->fail('error') unless eval { $job->$cb(@{$job->args}); 1 };
        $job->finish;
    },
});

And here is what I got:

Benchmark: running job_perform, job_perform2 for at least 5 CPU seconds...
job_perform: 306 wallclock secs ( 2.55 usr  3.43 sys + 252.46 cusr 41.75 csys = 300.19 CPU) @  6.26/s (n=1880)
job_perform2:  9 wallclock secs ( 3.34 usr +  1.96 sys =  5.30 CPU) @ 1397.36/s (n=7406)

It's about 220 times slower.

Allow setting dequeue wait timeout for Minion::Command::minion::worker

  • Minion version: 7.03
  • Perl version: 5.18.2
  • Operating system: Linux

Steps to reproduce the behavior

I am currently using the Minion job queue with a Mojolicious app and the SQLite backend. I have some jobs that need to start very soon after being enqueued. We have workers running, but they wait too long.

I am starting the worker like this:

script/myapp minion worker

The sub Minion::Command::minion::worker::_work calls the dequeue method with a hard-coded wait time of 5 seconds: https://metacpan.org/source/SRI/Minion-7.05/lib/Minion/Command/minion/worker.pm#L81

This leads to the situation that a job enqueued one second after dequeue was called waits about four more seconds before it gets started. In most cases this won't be a problem, but in our use case it is.

I suggest making this 5 a parameter, so the worker can be started like this:

script/myapp minion worker -w 0.5

The 5 would remain the default value if no other value is given.
I could try to provide a pull request if you are in favor.
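The change could follow the existing option handling in the command. This is only a sketch and the -w/--wait option itself is hypothetical:

  # Sketch of the proposed option, following the getopt style used by
  # Mojolicious commands (option name and default are assumptions):
  use Mojo::Util qw(getopt);

  sub run {
    my ($self, @args) = @_;
    getopt \@args,
      'j|jobs=i' => \(my $max  = 4),
      'w|wait=f' => \(my $wait = 5);    # new: dequeue wait in seconds

    # ...later, instead of the hard-coded $worker->dequeue(5):
    # my $job = $worker->dequeue($wait);
  }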

Refresh history graph every 10 minutes

For those of us that keep the Admin UI dashboard open for longer periods of time, it might be nice to have the history graph automatically refresh every 10 minutes. This should be a fairly simple task, just some fiddling with the JavaScript on the dashboard template and adding a /history route (similar to /stats) to the plugin.

Unable to pass all tests on Mac OS X 10.10.1

cpanm (App::cpanminus) 1.7014 on perl 5.018002 built for darwin-thread-multi-2level
Work directory is /Users/konstantin_c/.cpanm/work/1417766923.31741
You have make /usr/bin/make
You have LWP 6.05
You have /usr/bin/tar: bsdtar 2.8.3 - libarchive 2.8.3
You have /usr/bin/unzip
Searching Minilla on cpanmetadb ...
--> Working on Minilla
Fetching http://www.cpan.org/authors/id/T/TO/TOKUHIROM/Minilla-v2.2.1.tar.gz
-> OK
Unpacking Minilla-v2.2.1.tar.gz
Entering Minilla-v2.2.1
Checking configure dependencies from META.json
Checking if you have Module::Build 0.38 ... Yes (0.4003)
Configuring Minilla-v2.2.1
Running Build.PL
Created MYMETA.yml and MYMETA.json
Creating new 'Build' script for 'Minilla' version 'v2.2.1'
cp META.json MYMETA.json
cp META.yml MYMETA.yml
-> OK
Checking dependencies from MYMETA.json ...
Checking if you have Pod::Markdown 1.322 ... Yes (2.002)
Checking if you have TAP::Harness::Env 0 ... Yes (3.34)
Checking if you have Time::Piece 1.16 ... Yes (1.20_01)
Checking if you have CPAN::Meta::Validator 0 ... Yes (2.133380)
Checking if you have Test::More 0.98 ... Yes (1.001009)
Checking if you have parent 0 ... Yes (0.225)
Checking if you have File::Copy::Recursive 0 ... Yes (0.38)
Checking if you have App::cpanminus 1.6902 ... Yes (1.7014)
Checking if you have Moo 1.001 ... Yes (1.004002)
Checking if you have Try::Tiny 0 ... Yes (0.19)
Checking if you have ExtUtils::Manifest 1.54 ... Yes (1.65)
Checking if you have File::Which 0 ... Yes (1.09)
Checking if you have Config::Identity 0 ... Yes (0.0018)
Checking if you have TOML 0.95 ... Yes (0.95)
Checking if you have CPAN::Meta 2.132830 ... Yes (2.133380)
Checking if you have Module::CPANfile 0.9025 ... Yes (1.1000)
Checking if you have Data::Section::Simple 0.04 ... Yes (0.07)
Checking if you have Text::MicroTemplate 0.20 ... Yes (0.20)
Checking if you have version 0 ... Yes (0.9902)
Checking if you have Getopt::Long 2.36 ... Yes (2.39)
Checking if you have File::pushd 0 ... Yes (1.009)
Checking if you have Test::Output 0 ... Yes (1.03)
Checking if you have Test::Requires 0 ... Yes (0.07)
Checking if you have Archive::Tar 1.60 ... Yes (1.90)
Checking if you have JSON 0 ... Yes (2.90)
Checking if you have Term::ANSIColor 0 ... Yes (4.02)
Checking if you have File::Temp 0 ... Yes (0.23)
Checking if you have Module::Metadata 1.000012 ... Yes (1.000024)
Building and testing Minilla-v2.2.1
Building Minilla
t/00_compile.t ............................... ok
t/03_step.t .................................. skipped: Test requires module 'Version::Next' but it's not found
t/bumpversion.t .............................. ok

Software::License is not installed

t/05_metadata.t .............................. ok
t/01_load_all.t .............................. ok
t/cli/regenerate_BuildPL.t ................... skipped: Test requires module 'Version::Next' but it's not found
t/cli/release.t .............................. skipped: Test requires module 'Version::Next' but it's not found
t/cli/release_notest.t ....................... skipped: Test requires module 'Version::Next' but it's not found
t/cli/release_with_hooks.t ................... skipped: Test requires module 'Version::Next' but it's not found
t/cli/clean.t ................................ ok
Cloning into 'libfoo'...
t/filegatherer.t ............................. ok
t/cli/build.t ................................ ok
t/gitignore.t ................................ ok
t/migrate/dzil.t ............................. skipped: Test requires module 'Dist::Zilla' but it's not found
t/migrate/changes.t .......................... ok
cpanfile not found at -e line 1.
t/migrate/eumm.t ............................. ok
t/migrate/no-changes.t ....................... ok
Cannot determine author info from lib/Acme/Foo.pm
Software::License is needed when you want to use non Perl_5 license.
Cannot retrieve 'abstract' from /private/tmp/XkRau4Rij2. You need to write POD in your main module.
t/migrate/no-pod.t ........................... ok
t/migrate/tmpfiles.t ......................... ok
t/dist.t ..................................... ok
t/module_maker/PL_files.t .................... ok
t/module_maker/c_source.t .................... ok
t/module_maker/allow_pureperl.t .............. ok
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
Can't exec "tree": No such file or directory at t/module_maker/eumm.t line 53.
t/module_maker/eumm.t ........................ ok
t/module_maker/requires_external_bin.t ....... ok
fatal: bad default revision 'HEAD'
t/module_maker/tap_harness_args.t ............ ok
fatal: bad default revision 'HEAD'
Can't exec "tree": No such file or directory at t/module_maker/tiny.t line 45.
t/module_maker/tiny.t ........................ ok
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
t/module_maker/tiny/requires_external_bin.t .. ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 43.
    #          got: undef
    #     expected: '0'

fatal: bad default revision 'HEAD'
t/module_maker/xsutil.t ...................... ok
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 44.
    #          got: undef
    #     expected: '0'

fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
t/new/dist-name.t ............................ ok
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 45.
    #          got: undef
    #     expected: '0'

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

fatal: bad default revision 'HEAD'
t/profile-xs.t ............................... ok

Failed test at xt/fail.t line 4.

Looks like you failed 1 test of 1.

Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

t/profile/module-build.t ..................... ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

t/project/badge.t ............................ ok
t/project/contributors.t ..................... skipped: Test requires module 'Software::License' but it's not found

Failed test at xt/fail.t line 4.

Looks like you failed 1 test of 1.

fatal: bad default revision 'HEAD'
t/module_maker/eumm/run_tests.t .............. ok
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 45.
    #          got: undef
    #     expected: '0'

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

t/project/detect_source_path.t ............... ok
t/project/dist_name.t ........................ ok
fatal: bad default revision 'HEAD'
t/project/format_tag.t ....................... ok
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
t/project/license.t .......................... skipped: Test requires module 'Software::License' but it's not found
t/project/from.t ............................. ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

t/project/meta_no_index.t .................... ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

t/project/script_files.t ..................... ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

t/project/unstable.t ......................... ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

t/project/xsutil.t ........................... ok
t/project/meta.t ............................. ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 44.
    #          got: undef
    #     expected: '0'

t/work_dir/_rewrite_pod.t .................... skipped: Pod rewriting is temporary disabled.
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 45.
    #          got: undef
    #     expected: '0'

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

t/work_dir/copy.t ............................ ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

t/release_test/config.t ...................... ok

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

fatal: bad default revision 'HEAD'
fatal: bad default revision 'HEAD'
t/work_dir/dist.t ............................ ok
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
t/work_dir/release_test.t .................... ok
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

Failed test 'dist test'

at t/module_maker/tiny/run_tests.t line 52.

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 45.
    #          got: undef
    #     expected: '0'

#   Failed test 'run only t/*.t and pass all'
#   at t/module_maker/tiny/run_tests.t line 46.

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.
fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 49.
    #          got: '0'
    #     expected: anything else

fatal: bad default revision 'HEAD'
Module::Build::Tiny version 0.035 required--this is only version 0.034 at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
Giving up.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.

    #   Failed test at t/module_maker/tiny/run_tests.t line 50.
    #          got: '0'
    #     expected: anything else
    # Looks like you failed 2 tests of 2.

#   Failed test 'run t/*.t and xt/*.t and fail'
#   at t/module_maker/tiny/run_tests.t line 51.
# Looks like you failed 1 test of 2.

#   Failed test 'dist test'
#   at t/module_maker/tiny/run_tests.t line 52.
# Looks like you failed 1 test of 1.

t/module_maker/tiny/run_tests.t ..............
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

Test Summary Report
-------------------
t/module_maker/tiny/run_tests.t (Wstat: 256 Tests: 32 Failed: 32)
Failed tests: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
1, 1, 1, 1
Non-zero exit status: 1
Parse errors: Plan (1..1) must be at the beginning or end of the TAP output
Tests out of sequence. Found (1) but expected (2)
More than one plan found in TAP output
Tests out of sequence. Found (1) but expected (3)
More than one plan found in TAP output
Displayed the first 5 of 64 TAP syntax errors.
Re-run prove with the -p option to see them all.
Files=51, Tests=158, 46 wallclock secs ( 0.87 usr 0.23 sys + 86.24 cusr 23.85 csys = 111.19 CPU)
Result: FAIL
Errors in testing. Cannot continue.

Asynchronous backends

Hello,
Mojo::Pg is a module with excellent sync/async support.
Maybe it would be a good idea to add async support for Minion's backend operations?
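
In sketch form, a promise-based API in the style of Mojo::Pg's `*_p` methods could look roughly like this (illustration only; the `enqueue_p` name and signature are assumptions, not an existing Minion API):

```perl
# Hypothetical non-blocking enqueue, modeled on Mojo::Pg's promise API.
# Nothing here exists in Minion today; it only illustrates the idea.
$minion->enqueue_p(resize_image => [$path])->then(sub {
  my $job_id = shift;
  say "Enqueued job $job_id without blocking the event loop";
})->catch(sub {
  my $err = shift;
  warn "Enqueue failed: $err";
})->wait;
```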

Document MySQL user privileges

  • Minion version: latest
  • Perl version: 5.16
  • Operating system: centos

Hello,
I've tried to install our worker on the new machine and I face some mysql errors:

DBD::mysql::st execute failed: TRIGGER command denied to user 'worker'@'172.31.48.229' for table 'mojo_pubsub_notify' at /usr/local/share/perl5/Mojo/mysql/Database.pm line 47

If possible, please document which privileges Minion requires to run.

Thanks,
Ovidiu

Non-blocking Mojo::UserAgent inside Minion

  • Minion version: 5.09
  • Mojolicious version: 7.05
  • Perl version: 5.18.1
  • Operating system: Linux 3.2.0-4-686-pae #1 SMP Debian 3.2.81-1 i686 GNU/Linux

Steps to reproduce the behavior

The following test script defines a Mojolicious::Lite app that allows a few tests with Mojo::UserAgent from within Minion:

#!/usr/bin/env perl
use strict;
use warnings;
use Mojolicious::Lite;
use Data::Dumper;
$Data::Dumper::Indent = 1;

my $uri = 'http://mojolicious.org';

plugin Minion => {SQLite => 'sqlite:test4bug.db'};
app->minion->add_task(poke_block   => \&poke_block);
app->minion->add_task(poke_noblock => \&poke_noblock);

my $allowed_uas = [ua => [qw< app new >]];

get '/minion/:op/:ua' => $allowed_uas => sub {
   my $c         = shift;
   my $operation = $c->stash('op');
   my $uas       = $c->stash('ua');
   $c->minion->enqueue('poke_' . $operation, [$uri, $uas]);
   $c->render(text => "minion $operation/$uas will be performed soon\n");
};

app->start;

sub poke_block {
   my ($job, $uri, $uas) = @_;
   my $log     = $job->app->log;
   my $ua      = $uas eq 'app' ? $job->app->ua : Mojo::UserAgent->new();
   my $tx      = $ua->get($uri);
   my $outcome = log_outcome($log, $tx, "/minion/block/$uas");
   $tx->success ? $job->finish($outcome) : $job->fail($outcome);
} ## end sub poke_block

sub poke_noblock {
   my ($job, $uri, $uas) = @_;
   my $log = $job->app->log;
   my $ua = $uas eq 'app' ? $job->app->ua : Mojo::UserAgent->new();
   $ua->get(
      $uri => sub {
         my ($ua, $tx) = @_;
         my $outcome = log_outcome($log, $tx, "/minion/noblock/$uas");
         $tx->success ? $job->finish($outcome) : $job->fail($outcome);
      }
   );
} ## end sub poke_noblock

sub log_outcome {
   my ($log, $tx, $msg) = @_;
   $msg = defined($msg) ? $msg . ' ' : '';
   my $outcome = $tx->success ? 'SUCCESS!' : 'FAILURE!';
   $log->info($msg . $outcome);
   $log->debug(Dumper $tx->res) unless $tx->success;
   return $outcome;
} ## end sub log_outcome

I started the worker in one shell and then ran the following requests in another:

shell$ ./test4bug-minion.pl get /minion/block/app
[Sun Sep 11 08:40:46 2016] [debug] GET "/minion/block/app"
[Sun Sep 11 08:40:46 2016] [debug] Routing to a callback
[Sun Sep 11 08:40:46 2016] [debug] 200 OK (0.004269s, 234.247/s)
minion block/app will be performed soon
shell$ ./test4bug-minion.pl get /minion/block/new
[Sun Sep 11 08:40:55 2016] [debug] GET "/minion/block/new"
[Sun Sep 11 08:40:55 2016] [debug] Routing to a callback
[Sun Sep 11 08:40:55 2016] [debug] 200 OK (0.002441s, 409.668/s)
minion block/new will be performed soon
shell$ ./test4bug-minion.pl get /minion/noblock/app
[Sun Sep 11 08:41:05 2016] [debug] GET "/minion/noblock/app"
[Sun Sep 11 08:41:05 2016] [debug] Routing to a callback
[Sun Sep 11 08:41:05 2016] [debug] 200 OK (0.002478s, 403.551/s)
minion noblock/app will be performed soon
shell$ ./test4bug-minion.pl get /minion/noblock/new
[Sun Sep 11 08:41:12 2016] [debug] GET "/minion/noblock/new"
[Sun Sep 11 08:41:12 2016] [debug] Routing to a callback
[Sun Sep 11 08:41:12 2016] [debug] 200 OK (0.002429s, 411.692/s)
minion noblock/new will be performed soon

Targets are:

  • /minion/block/app blocking request using $job->app->ua
  • /minion/block/new blocking request using Mojo::UserAgent->new
  • /minion/noblock/app non-blocking request using $job->app->ua
  • /minion/noblock/new non-blocking request using Mojo::UserAgent->new

Expected behavior

This is what I expected:

  • /minion/block/app prints out a SUCCESS! log message in the Minion worker
  • /minion/block/new prints out a SUCCESS! log message in the Minion worker
  • /minion/noblock/app prints out a SUCCESS! log message in the Minion worker
  • /minion/noblock/new prints out a SUCCESS! log message in the Minion worker

i.e. all requests eventually succeed.

Actual behavior

This is what I got instead:

  • /minion/block/app behaves as expected
  • /minion/block/new behaves as expected
  • /minion/noblock/app disappears
  • /minion/noblock/new fails

Minion worker's log follows:

[Sun Sep 11 08:30:06 2016] [debug] Performing job "34" with task "poke_block" in process 9955
[Sun Sep 11 08:30:06 2016] [info] /minion/block/app SUCCESS!
[Sun Sep 11 08:30:11 2016] [debug] Performing job "35" with task "poke_block" in process 9968
[Sun Sep 11 08:30:11 2016] [info] /minion/block/new SUCCESS!
[Sun Sep 11 08:30:16 2016] [debug] Performing job "36" with task "poke_noblock" in process 9981
[Sun Sep 11 08:30:26 2016] [debug] Performing job "37" with task "poke_noblock" in process 9994
[Sun Sep 11 08:30:26 2016] [info] /minion/noblock/new FAILURE!
[Sun Sep 11 08:30:26 2016] [debug] $VAR1 = bless( {
  'finished' => 2,
  'state' => 'finished',
  'error' => {
    'message' => 'Premature connection close'
  },
  'events' => {},
  'content' => bless( {
    'read' => sub { "DUMMY" },
    'events' => {
      'read' => [
        $VAR1->{'content'}{'read'}
      ]
    },
    'headers' => bless( {
      'headers' => {}
    }, 'Mojo::Headers' )
  }, 'Mojo::Content::Single' )
}, 'Mojo::Message::Response' );
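
For reference, a pattern that usually works inside a Minion task is to run the event loop yourself until the non-blocking request has finished. This is a sketch only, assuming a fresh Mojo::UserAgent and that no other code owns the loop; the task name is illustrative:

```perl
# Sketch: non-blocking request inside a task, waiting on the loop manually.
# Keeping $ua in scope prevents it from being destroyed before the callback.
app->minion->add_task(poke_noblock_wait => sub {
  my ($job, $uri) = @_;
  my $ua = Mojo::UserAgent->new;
  my $outcome;
  $ua->get($uri => sub {
    my ($ua, $tx) = @_;
    $outcome = $tx->success ? 'SUCCESS!' : 'FAILURE!';
    Mojo::IOLoop->stop;
  });
  Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
  $outcome eq 'SUCCESS!' ? $job->finish($outcome) : $job->fail($outcome);
});
```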

Migration-less minions

Currently, the postgresql backend appears to always fire the 'migrate' method on its migrations object; since the minion users in one of my planned deployments will not have permission to run DDL, this is unacceptable.

How do I configure Minion to only check that the schema version is correct?
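
One conceivable workaround is to disable automatic migration and just compare schema versions. This is only a sketch under the assumption that the backend tolerates `auto_migrate(0)`; method names follow Mojo::Pg, and this is not a supported Minion configuration:

```perl
# Hypothetical version check without running any DDL.
my $backend    = $minion->backend;
my $migrations = $backend->pg->auto_migrate(0)->migrations;
die 'Minion schema is out of date: run migrations as a privileged user'
  unless $migrations->active == $migrations->latest;
```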

Implement a Wait-for-Job Mechanism

Hi,

After a longer talk on IRC, sri said that he was open to a new implementation of a mechanism to wait for job completion.

After some thinking he figured out:

  • That the tests would have to be changed fundamentally
  • That there would have to be a new backend function for notification & polling
  • That a proposed API could be like:

21.210933 < sri> my $result = $minion->wait($job_id); and $minion->wait($job_id => sub { my ($minion, $result) = @_; });

(Discussion happened on the 21.02.2016. The times are in GMT+1(CET). Channel was #mojo on irc.perl.org).
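
Until something like that lands, a blocking wait can be approximated by polling `job_info` (a rough sketch; the helper name and the fixed poll interval are my own choices, not part of the proposal):

```perl
# Poll until the job reaches a final state, then return its result.
# Dies on failure or if the job cannot be found.
sub wait_for_job {
  my ($minion, $job_id, $interval) = @_;
  $interval //= 0.5;
  while (1) {
    my $info = $minion->backend->job_info($job_id) or die "No job $job_id";
    return $info->{result}          if $info->{state} eq 'finished';
    die $info->{result} // 'failed' if $info->{state} eq 'failed';
    sleep $interval;
  }
}
```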

Kind Regards,
Stephan

Optimize performance relevant backend methods

While especially enqueue is already heavily optimized, there are still a few backend methods that get used a lot and could benefit from SQL query and index optimizations. I'm thinking specifically of job_info, repair and stats.

At the same time we need to be very careful not to introduce performance regressions, like we did with the gin index on the parents column (IRC discussion).

This index might actually still be a reasonable optimization for job_info and the DELETE query of repair, but we need to verify this with reproducible benchmarks. Based on my personal experience with bigger Minion queues, I think it would be reasonable to optimize for queues containing about a million jobs where the majority also has dependencies.

Slow performance on job_info method

  • Minion version: 6.0
  • Perl version: any
  • Operating system: any

$minion->job($id) for Pg is slow, even if jobs have no dependencies.

Pg explain analyze output for the query (https://github.com/kraih/minion/blob/978612cd3fc33ec9f66c4caa8bdc9c308d27d200/lib/Minion/Backend/Pg.pm#L54):

# explain analyze select id, args, attempts, array(select id from minion_jobs where j.id = any(parents)) as children, extract(epoch from created) as created, extract(epoch from delayed) as delayed, extract(epoch from finished) as finished, parents, priority, queue, result, extract(epoch from retried) as retried, retries, extract(epoch from started) as started, state, task, worker from minion_jobs as j where id = 164360;
                                                               QUERY PLAN                                                                
-----------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using minion_jobs_pkey on minion_jobs j  (cost=0.42..26552.96 rows=1 width=1039) (actual time=96.433..96.438 rows=1 loops=1)
   Index Cond: (id = 164360)
   SubPlan 1
     ->  Seq Scan on minion_jobs  (cost=0.00..26544.51 rows=8002 width=8) (actual time=96.359..96.359 rows=0 loops=1)
           Filter: (j.id = ANY (parents))
           Rows Removed by Filter: 166400
 Planning time: 0.162 ms
 Execution time: 96.493 ms
(8 rows)

You can see that the subquery for the children array uses a sequential scan on the minion_jobs table.

What about a GIN index on the parents field and the contains operator for Postgres arrays (https://www.postgresql.org/docs/current/static/functions-array.html)?

create index on minion_jobs using gin (parents);

And the new explain output for the new query:

# explain analyze select id, args, attempts, array(select id from minion_jobs where array[164360]::bigint[] <@ parents) as children, extract(epoch from created) as created, extract(epoch from delayed) as delayed, extract(epoch from finished) as finished, parents, priority, queue, result, extract(epoch from retried) as retried, retries, extract(epoch from started) as started, state, task, worker from minion_jobs as j where id = 164360;
                                                                 QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using minion_jobs_pkey on minion_jobs j  (cost=2933.08..2941.11 rows=1 width=1039) (actual time=0.055..0.056 rows=1 loops=1)
   Index Cond: (id = 164360)
   InitPlan 1 (returns $0)
     ->  Bitmap Heap Scan on minion_jobs  (cost=114.45..2932.66 rows=832 width=8) (actual time=0.013..0.013 rows=0 loops=1)
           Recheck Cond: ('{164360}'::bigint[] <@ parents)
           ->  Bitmap Index Scan on minion_jobs_parents_idx  (cost=0.00..114.24 rows=832 width=0) (actual time=0.011..0.011 rows=0 loops=1)
                 Index Cond: ('{164360}'::bigint[] <@ parents)
 Planning time: 0.258 ms
 Execution time: 0.111 ms
(9 rows)

Minion workers could not start on Windows 7 x64

  • Minion version: 6.0
  • Perl version: v5.24.0
  • Operating system: Microsoft Windows [Version 6.1.7601]

Steps to reproduce the behavior

CMD> perl script\myapp.pl minion worker

Expected behavior

Expect worker to start normally as v5.09

Actual behavior

CMD> perl script\myapp.pl minion worker
Minion workers do not support fork emulation

Detailed worker stats

Currently the command minion job -w returns a list of workers (id and host:port). It would be nice if this command also returned information about the queues being watched and the number of jobs.

I think it could be done via the register_worker method; each worker would update information about itself every heartbeat interval.
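
In sketch form, each heartbeat could carry extra status data. A register_worker call taking a status hash is an assumption about a possible API, and the fields shown are hypothetical:

```perl
# Hypothetical: report queues and job counts alongside the heartbeat.
$backend->register_worker($worker_id, {
  status => {
    queues => ['default', 'image_resize'],
    jobs   => scalar keys %{$worker->{jobs} // {}},
  },
});
```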

_limit.html.ep and _pagination.html.ep append query parameters instead of merging them

  • Minion version: 9.09
  • Perl version: 5.24.1
  • Operating system: Debian 9 (stretch)

Steps to reproduce the behavior

E.g. Simply browse the finished job menu and change using the limit and the offset buttons.

Expected behavior

The queries should be merged, not appended.
I simply changed all url_with->query calls from [] (append) to {} (merge).

Actual behavior

The queries are appended indefinitely. Every browser defines its own maximum query parameter limit, and some don't show the query in the address bar when it's bigger than 64k.
See the progression after some clicks:
[2019-02-15 12:54:13.13139] [105827] [debug] URL query: /minion/jobs?state=finished&limit=20&limit=10
[2019-02-15 12:54:16.98312] [105826] [debug] URL query: /minion/jobs?state=finished&limit=20&limit=50&limit=100&limit=50
[2019-02-15 12:54:18.40884] [105827] [debug] URL query: /minion/jobs?state=finished&limit=20&limit=50&limit=100&limit=10&limit=100
[2019-02-15 12:54:47.13084] [105826] [debug] URL query: /minion/jobs?state=finished&limit=20&limit=50&limit=100&limit=10&limit=20&limit=10&limit=100
[2019-02-15 12:55:02.72499] [105823] [debug] URL query: /minion/jobs?state=finished&limit=20&limit=10&limit=100&limit=50&limit=20&limit=10&limit=20&limit=50&limit=100&limit=50&limit=20&limit=10&limit=100

My solution works for me; I tested it extensively.
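
The change boils down to passing a hash reference instead of an array reference to url_with->query, since Mojo::URL merges hash parameters but appends array ones. An illustrative .html.ep snippet, not the exact admin UI code:

```perl
%# Append vs merge semantics of Mojo::URL's query method:
<%= url_with->query([limit => 10]) %> <%# appends another limit=10 pair %>
<%= url_with->query({limit => 10}) %> <%# merges, replacing any existing limit %>
```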

Best regards
Franz

Mistake in documentation

  • Minion version: 5.08
  • Perl version: -
  • Operating system: -

Documentation of https://metacpan.org/pod/Minion#remove_after says: "jobs that have reached the state finished and have no unresolved dependencies". But jobs that have reached the state finished cannot have unresolved dependencies.

Did you mean dependent jobs instead of dependencies?

Bug in command worker

I have a Mojo app with a Minion worker in which I run an external process.
I want to be able to handle stdout as it occurs, so I use Mojo::IOLoop::ReadWriteFork for this functionality.
When I start the external program with Mojo::IOLoop::ReadWriteFork, the "close" event never arrives.

Below are two code examples. The first (without a Mojo app) behaves correctly; the second (with a Mojo app) does not.

I don't know which module has the bug; it's not obvious, so @jhthorsen may be interested.

First example

#!/usr/bin/perl

use Mojo::IOLoop::ReadWriteFork;
use Minion;

my $minion = Minion->new(Pg => 'postgresql://logioniz@/test');

$minion->add_task(x => sub {
  my ($job, @args) = @_;

  my $fork = Mojo::IOLoop::ReadWriteFork->new;

  $fork->on(close => sub {
    my($fork, $exit_value, $signal) = @_;
    Mojo::IOLoop->stop;
  });

  $fork->start(program => sub { warn $$; `ls` });
  Mojo::IOLoop->start;

  $job->finish;
});

$minion->enqueue('x');

my $worker = $minion->worker;
my $job = $worker->register->dequeue;
my $pid = $job->start;
warn $pid;
while (1) {
  last if $job->is_finished($pid);
  sleep 1;
}

print "Ok!\n";

Second example

#!/usr/bin/perl

use Mojolicious::Lite;
use Mojo::IOLoop::ReadWriteFork;

app->plugin('Minion' => {Pg => 'postgresql://logioniz@/test'});

app->minion->add_task(x => sub {
  my ($job, @args) = @_;

  my $fork = Mojo::IOLoop::ReadWriteFork->new;

  $fork->on(close => sub {
    my($fork, $exit_value, $signal) = @_;
    Mojo::IOLoop->stop;
  });

  $fork->start(program => sub { warn $$; `ls` });
  Mojo::IOLoop->start;

  $job->finish;
});

app->start;

The second example never completes its work.

$ perl 2.pl minion job -e x
36
$ MOJO_READWRITE_FORK_DEBUG=1 perl 1.pl minion worker
[22304] Child starting (CODE(0x25f12d8) )
[22304] Starting CODE(0x25f12d8) 
[22304] >>> 22338 at 1.pl line 18, <DATA> line 660.\n
$ ps ax | grep  22338
22338 pts/10   Z+     0:00 [perl] <defunct>

You can see that the child process is a zombie.

More gentle waitpid check

Currently Minion::Job::is_finished waits for the actual $pid from waitpid https://github.com/kraih/minion/blob/v5.08/lib/Minion/Job.pm#L28, and if someone is already waiting on this pid, waitpid will return -1 instead of $pid and is_finished will never return 1.
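
A gentler check could treat "already reaped elsewhere" as finished instead of waiting forever. This is only a sketch of the idea, not the actual patch:

```perl
# Treat both "our child exited" ($pid) and "someone else already reaped
# it" (-1) as finished, using a non-blocking waitpid.
use POSIX ':sys_wait_h';

sub is_finished {
  my ($self, $pid) = @_;
  my $reaped = waitpid $pid, WNOHANG;
  return ($reaped == $pid || $reaped == -1) ? 1 : 0;
}
```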

  • Minion version: 5.08
  • Perl version: any
  • Operating system: linux

Steps to reproduce the behavior

#!/usr/bin/perl

use Mojolicious::Lite;
use Time::HiRes 'sleep';
use Test::Mojo;
use Test::More;

my $sleep = 0.1;

plugin Minion => {Pg => 'postgresql://and@/og'};
app->minion->add_task(test => sub { sleep $sleep and shift->finish('ok'); });

get '/' => sub {
  my $c = shift->render_later;
  $c->minion->enqueue('test');
  Mojo::IOLoop->timer(0.3 => sub { $c->rendered(201); });
};

my $t = Test::Mojo->new;
app->minion->reset;

my ($job, $pid);
my $worker = app->minion->worker->register;

app->minion->on(
  enqueue => sub {
    if (my $j = $worker->dequeue) {
      $job = $j;
      $pid = $job->start;
    }
  }
);

$t->get_ok('/')->status_is(201);
say 'waiting...' and sleep 0.1 until $job->is_finished($pid);

$worker->unregister;
done_testing;

Expected behavior

$ prove -v 1.pl
1.pl .. 
ok 1 - GET /
ok 2 - 201 Created
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
1..2
ok
All tests successful.
Files=1, Tests=2,  1 wallclock secs ( 0.03 usr  0.00 sys +  0.28 cusr  0.03 csys =  0.34 CPU)
Result: PASS
$ 

Actual behavior

If $sleep is less than the timeout in Mojo::IOLoop->timer, the test never finishes.

$ prove -v 1.pl
1.pl .. 
ok 1 - GET /
ok 2 - 201 Created
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
waiting...
^C
$ 

Kill running jobs

If you happen to enqueue a bad job (one that loops or simply misbehaves for a long time) on a big Minion cluster, it takes considerable effort to stop it: find the worker host, find the process in which the job is being performed, and kill it.

It would be nice if minion job had a kill option to stop and fail such jobs automatically.

When starting a Minion worker, it reports error "minion_state" already exists

  • Minion version: 8.08
  • Perl version: 5.18.2
  • Operating system: macOS 10.13.1

Steps to reproduce the behavior

  1. I upgraded Mojolicious, Minion and Mojo::Pg to the newest versions and installed Mojolicious::Plugin::Minion
  2. I dropped the tables minion_jobs, minion_workers and minion_migrations
  3. myapp.pl minion worker

Expected behavior

The worker starts correctly.

Actual behavior

DBD::Pg::st execute failed: ERROR: type "minion_state" already exists at /Library/Perl/5.18/Mojo/Pg/Migrations.pm line 66.

Changes don't mention required PostgreSQL version

Since upgrades between PostgreSQL major versions are a Big Deal for a lot of sites, the move from 9.3 to 9.4 and then 9.4 to 9.5 as required versions is, from the point of view of already deployed systems, a massive breaking/backwards incompatible change, and should be documented in both POD and Changes in accordance with your policy for backwards incompatible changes.

(I currently have a customer having to build and deploy a complete extra server because they want new minion features and didn't realise the Changes file was missing such an important change)

Allow for Minion worker to be started by the application server

This gets requested very commonly. It would be nice if Mojolicious::Plugin::Minion
had the ability to start a Minion worker together with the application server (daemon and prefork). And it would be especially nice if it worked with Morbo automatic restarting during development.

Pausing named queues

This is more of an enterprise feature, but I think it might be nice to be able to pause individual named queues. Like, if a piece of faulty code enqueues a million bad jobs, you could just pause the named queue, remove all bad jobs, fix the code, and resume normal operation afterwards.

List worker queue in worker_info

Currently worker_info provides the host, running jobs, start and notify times and process pid.

It would be very useful to also provide what queue or queues are being listened to for that worker.

As it is now, if you have two workers running on the same box, there is no way to tell which is which.

Alternatively, if the queues are too hard to get listed (more advanced setups), being able to name the worker would likely provide a viable alternative.

In my use case I have a management interface that does work well for displaying scheduled, running and past jobs. But the worker information is somewhat lacking, since I run multiple workers on a single box.

Register worker for specific task(s)

It would be nice if I could tell the worker (while registering) which specific tasks it can handle (dequeue and perform).
For example, it may be useful when order is important for some tasks and you can run only one worker. This would also allow more flexibility in increasing the number of workers.
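
Named queues can already get close to this: restrict a worker to one queue and enqueue the order-sensitive tasks into it. A sketch using the documented queues option; the queue and task names are made up:

```perl
# This worker only dequeues jobs from the "serialized" queue.
my $worker = $minion->worker->register;
while (my $job = $worker->dequeue(5, {queues => ['serialized']})) {
  $job->perform;
}

# Elsewhere, enqueue order-sensitive work into that queue.
$minion->enqueue(ordered_task => [@args] => {queue => 'serialized'});
```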

Inquiry: Recurring jobs feature.

  • Minion version: 7.05
  • Perl version: 5.22.1
  • Operating system: Ubuntu gnome 16.04.3 LTS

Steps to reproduce the behavior

This is not a bug per se, but is it possible to have recurring jobs that can be scheduled to run at:

  1. Specific days.
  2. Specific time of day.

The only way to achieve some form of recurring jobs is to have a job reschedule itself before finishing. Something like this:

$app->minion->add_task(
    sloth_add => sub {
        my ( $job, $a, $b ) = @_;
        sleep 20;
        my $sum = $a + $b;

        #schedule job again to run 10 seconds from now
        $app->minion->enqueue(
            sloth_add => [ $a, $b ] => { priority => 5, delay => 10 } );
        $job->finish( { result => $sum } );
    }
);

Expected behavior

What if, instead of a job scheduling itself, there were specific parameters (in the hash) passed when enqueuing a job, which the worker would use to schedule recurring jobs using a cron-style format?

#schedule job to run at midnight. n_runs being number of times to run so
#this will run at midnight for ten days
$app->minion->enqueue(
    sloth_add => [ $a, $b ] => { n_runs => 10, schedule => "@midnight" } );

#run once every 24 hours, indefinitely
$app->minion->enqueue( sloth_add => [ $a, $b ] => { schedule => "@daily" } );

Actual behavior

Currently not implemented.

Admin UI

Minion still needs a pretty admin UI, with a dashboard showing the current cluster state, and basic helpers for job control. Perhaps implemented as a Mojolicious plugin, so it can be embedded easily into existing applications.

Periodic jobs

It appears that there's a lot of interest in periodic jobs, and I think native support in Minion would make sense. Many existing implementations appear to use a single server, often backed by cron, to schedule their periodic jobs. It has been suggested that cron+unique jobs (as proposed in #22) would be good enough, but I'm not so sure about that. What do you think, can we come up with something better?

Please explain what should happen.

When I do not have scripts/myapp minion worker running, I can see jobs queuing up in the SQLite DB's minion_jobs table.

When I start scripts/myapp minion worker, nothing happens to these jobs.

When I enqueue a new job while scripts/myapp minion worker is running, nothing appears in the job queue.

I just started to work with mojolicious/minion, so it's still quite unclear to me what to provide in order to get help; please excuse my lack of knowledge.

I took this linkcheck example as a boilerplate for my minion jobs. The only thing I'm doing inside the task is simply calling $job->finished([]), so nothing fancy which could make the thing crash.

Bug with wait timeout in dequeue in Minion::Backend::Pg

Case:
There are two workers that call dequeue with a timeout of 10. When a task is added to the queue after 3 seconds, both workers receive a notification about the new job, but only one of them gets it. The second worker should wait the remaining 7 seconds for a new job, not return undef immediately after 3 seconds.
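
The expected behavior amounts to waiting against a deadline rather than a single notification. A sketch of that idea in Perl; the helper names are invented and this is not the real Minion::Backend::Pg code:

```perl
# Keep retrying until the original deadline expires instead of returning
# undef after the first (lost) notification.
sub dequeue_with_deadline {
  my ($backend, $worker_id, $timeout, $options) = @_;
  my $deadline = time + $timeout;
  while ((my $remaining = $deadline - time) > 0) {
    if (my $job = $backend->dequeue($worker_id, 0, $options)) { return $job }
    _wait_for_notification($remaining);    # hypothetical blocking wait
  }
  return undef;
}
```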

"Not a HASH reference" in Minion::Admin UI

  • Minion version: 8.12
  • Perl version: 5.24.3
  • Operating system: MacOS X 10.13.3

Steps to reproduce the behavior

While testing out the new (very fancy) Minion Admin UI, I was seeing nice graphs of the jobs that were processed. When I clicked on "Finished" (=> /minion/jobs?state=finished), it returned an error page stating "Not a HASH reference".

Expected behavior

I've never seen the "Finished jobs" page so far, but I would expect to see it when clicking that link ;)

Actual behavior

[Tue Mar 20 23:12:58 2018] [debug] Routing to a callback
[Tue Mar 20 23:12:58 2018] [error] Not a HASH reference at /Users/wneessen/perl5/perlbrew/perls/perl-5.24.1/lib/site_perl/5.24.3/Mojolicious/Plugin/Minion/Admin.pm line 53.

This actually happens with any of the jobs links in the navigation.

Linkcheck example and mojolicious.org

  • Minion version: v8.12
  • Perl version: 5.22.1
  • Operating system: Ubuntu 16.04

Steps to reproduce the behavior

The Linkcheck example does not work for the default example website mojolicious.org. The reason is that there is a link to https://shop.spreadshirt.com/kraih/ on the mojolicious website which results in an inactivity timeout.
wget -d https://shop.spreadshirt.com/kraih results in a:
---request begin---
GET /kraih HTTP/1.1
User-Agent: Wget/1.17.1 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: shop.spreadshirt.com
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response...

If I specify the user agent with wget, wget follows the redirects and it works.
wget -U "Mozilla/5.0" https://shop.spreadshirt.com/kraih

Expected behavior

Example should work.
While this has nothing to do with Minion, it did take me a while to realise that the issue was with a link on the mojolicious website rather than with Minion itself.

I added $ua->transactor->name('Mozilla/5.0') and $ua->request_timeout(5) to CheckLinks.pm but I still get an inactivity timeout when I run the linkcheck for the website mojolicious.org

Actual behavior

Request timeout at /home/ekenny/mojo/minion/examples/linkcheck/script/../lib/LinkCheck/Task/CheckLinks.pm line 33.
