crazy-max / swarm-cronjob

Create jobs on a time-based schedule on Docker Swarm

Home Page: https://crazymax.dev/swarm-cronjob/

License: MIT License

Dockerfile 16.86% Go 75.50% HCL 7.64%
swarm docker cronjob scheduler golang go docker-api swarm-mode

swarm-cronjob's Introduction


About

swarm-cronjob creates jobs on a time-based schedule on Docker Swarm through a dedicated service that configures itself automatically and dynamically via service labels and the Docker API.
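A minimal stack sketch of this label-driven setup, assembled from the examples in the issues below (the busybox sleep job is purely illustrative):

version: "3.6"
services:
  swarm-cronjob:
    image: crazymax/swarm-cronjob:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needs the Docker API
    deploy:
      placement:
        constraints: [node.role == manager]

  sleep:
    image: busybox
    command: sleep 30
    deploy:
      mode: replicated
      replicas: 0                     # swarm-cronjob scales this up on schedule
      restart_policy:
        condition: none
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 * * * * *"   # seconds-first form, as used in several examples below
        - "swarm.cronjob.skip-running=true"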

Note

Want to be notified of new releases? Check out the 🔔 Diun (Docker Image Update Notifier) project!

Documentation

Documentation can be found at https://crazymax.dev/swarm-cronjob/

Contributing

Want to contribute? Awesome! The most basic way to show your support is to star the project, or to raise issues. You can also support this project by becoming a sponsor on GitHub or by making a PayPal donation to ensure this journey continues indefinitely!

Thanks again for your support, it is much appreciated! 🙏

License

MIT. See LICENSE for more details.
Icon credit to Laurel.

swarm-cronjob's People

Contributors

crazy-max, decentral1se, dependabot-preview[bot], dependabot[bot], fl4shback, github-actions[bot]

swarm-cronjob's Issues

Terminal not showing the output of the python file

Behaviour

(screenshot attached)

Expected behaviour

It should show the output of the Python file that I put in that directory.

It shows nothing, and it doesn't show any error either.

  • Target Docker version (the host/cluster you manage) :
  • Platform (windows/linux) :
  • System info (type uname -a) :
  • Target Swarm version :

Docker info

Output of command docker info

Logs

swarm-cronjob service logs (set LOG_LEVEL to debug) and cron based service logs if useful

Is there a way to pass in options to start the service?

Great project, thanks for building this!

Question though: we use a third-party Docker registry, and some services are failing to start because the image they reference isn't on the host machine. Is there a way to specify start-up options on the service to force it to pull the image from the registry when starting up?

Ideally something that would emulate the following:

docker service update service_name_xyz --with-registry-auth --force

Thanks!
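A later issue in this list ("Panic when trying to extract registry auth...") shows there is a swarm.cronjob.registry-auth label; a sketch of how it might be combined with registry credentials at deploy time (the exact semantics are an assumption here, and that later issue documents a caveat when the stack was deployed without credentials):

deploy:
  labels:
    - "swarm.cronjob.enable=true"
    - "swarm.cronjob.schedule=0 0 6 * * *"
    - "swarm.cronjob.registry-auth=true"   # ask swarm-cronjob to resolve registry credentials

# deployed with:
#   docker stack deploy --with-registry-auth -c docker-compose.yml mystack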

Is it possible to use code in the schedule?

Hello, good night.

I am trying to run a cronjob on the last day of the month, and the cron expression I am using is not working.

Below is the schedule I am trying to use:

swarm.cronjob.schedule=55 23 28-31 * * [[ "$(date --date=tomorrow +\%d)" == "01" ]]

and it gives this error:

yaml: line 10: did not find expected '-' indicator

The error is reported on the label line; the rest of my YAML is correct, but there is a problem with the schedule label.
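For what it's worth (an assessment, not from the original thread): the unquoted value trips the YAML parser, and the schedule value has to be a plain cron expression in any case, so the shell test would not be evaluated by the scheduler even if it parsed. Quoting at least fixes the YAML error:

deploy:
  labels:
    # quoted so YAML accepts the special characters; the $(...) shell test
    # is NOT evaluated by the cron parser and would need to move into the
    # job's own command instead
    - 'swarm.cronjob.schedule=55 23 28-31 * *'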

renovate/renovate does not run when invoked by swarm-cronjob

(Thanks for this project!)

Behaviour

I am trying to run the renovate bot to upgrade my configs on a cron job using this project. The cronjob is registered and scheduled, but the job does not run. I see nothing in the logs for the actual scheduled container. I have tried to use healthcheck: disable, but this doesn't seem to work either.

My configs:

Refs:

Steps to reproduce this issue

  1. Setup a docker swarm setup (single node cluster works)
  2. git clone https://git.autonomic.zone/autonomic-cooperative/swarm-cronjob
  3. cd swarm-cronjob && docker stack deploy -c docker-compose.yml -c docker-compose.production.yml swarm-cronjob
  4. git clone https://git.autonomic.zone/autonomic-cooperative/renovate
  5. cd renovate && docker stack deploy -c docker-compose.yml -c docker-compose.production.yml renovate

Expected behaviour

The renovate bot container should run.

Actual behaviour

The renovate bot container does not run.

Configuration

  • Target Docker version (the host/cluster you manage) : Docker swarm single host cluster
  • Platform (windows/linux) : linux
  • System info (type uname -a) : Linux my-swarm 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64 GNU/Linux
  • Target Swarm version : Latest

Docker info

Client:
 Debug Mode: false
Server:
 Containers: 9
  Running: 7
  Paused: 0
  Stopped: 2
 Images: 17
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: barfoo
  Is Manager: true
  ClusterID: foobar
  Managers: 1
  Nodes: 1
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 116.203.211.204
  Manager Addresses:
   116.203.211.204:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-8-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.902GiB
 Name: my-swarm
 ID: 6RXL:N5JJ:O3QC:5R7U:OYDN:63HG:27Z7:2VQK:W5M5:PIKY:LMCJ:PYFI
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: bingbong
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Logs

swarm-cronjob_docker-cron-job.1.2qw0nstpe2sy@myswarm    | Thu, 07 May 2020 08:11:01 UTC INF Start job service=renovate_renovate status=updating tasks_active=0
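As an aside, disabling a healthcheck in a compose file is spelled with a nested key rather than a bare healthcheck: disable (a general compose fact, possibly relevant to the attempt described above):

healthcheck:
  disable: true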

Concatenate jobs

Hi, this is a feature request / general question

There are several scenarios where multiple commands have to be issued in one cronjob, and they belong to very different Docker images.
An example could be backing up a MySQL database and then uploading the resulting dump to a cloud provider, let's say Dropbox.

For such a scenario, a developer could build a third image containing both the MySQL binaries and dbxcli, and with that newly created image write a script that performs both actions: [backup, upload].

Another scenario could be two containers sharing a volume: the first one executes the backup procedure and the following one does the upload. Those containers might not even need a script file, just one command each (as sketched below).

This could lead to higher reuse of public Docker images. Do you think it could be achieved?
Do you see any hard limitations on this happening?
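One way to approximate the shared-volume scenario today (a sketch, under the assumption that offset schedules give the first job time to finish; image names, credentials, and paths are made up):

version: "3.6"
services:
  backup:
    image: mysql:8
    command: sh -c "mysqldump -h db -u root -p$$MYSQL_ROOT_PASSWORD mydb > /backup/dump.sql"
    volumes:
      - backup-data:/backup
    deploy:
      replicas: 0
      restart_policy:
        condition: none
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 0 2 * * *"    # 02:00

  upload:
    image: my/dbxcli:latest                       # hypothetical image with dbxcli
    command: dbxcli put /backup/dump.sql /backups/dump.sql
    volumes:
      - backup-data:/backup
    deploy:
      replicas: 0
      restart_policy:
        condition: none
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 30 2 * * *"   # 02:30, after the backup

volumes:
  backup-data: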

Couple of questions

So far i'm loving this, it's the only swarm scheduler i have found that actually works. I do have a couple of questions:

  1. When my service is running in replicated mode, let's say with 9 replicas, it seems to automatically go back to just 1. Is this the correct behaviour? I ask because I have a container that processes video files, converts the format, etc.; I don't want it to just run 1 container every minute, I want it to be able to use the swarm to process, say, 10 videos at once by replicating across my nodes. Is there a way to do this?

  2. Is there a way to make it run every 20 seconds?
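On the second question: the schedule format used by working examples elsewhere in this list has a leading seconds field, so a label like the following should fire every 20 seconds (an inference from those examples, not a documented guarantee):

deploy:
  labels:
    - "swarm.cronjob.schedule=0/20 * * * * *"   # seconds-first: every 20 s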

Skip-running not working on a 3-node cluster

Behaviour

Hi,

First of all, thank you so much for this awesome repo. Unfortunately, I noticed that the "skip-running" argument does not work on my 3-node swarm cluster.
It works correctly once, then it begins to shut down and re-run services that are still running.

Steps to reproduce this issue

Here is my docker-compose:

version: "3.6"
services:
  # Swarm cronjob
  swarm-cronjob:
    image: crazymax/swarm-cronjob:1.1.0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - LOG_LEVEL=info
      - LOG_JSON=true
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

  sleep:
    image: busybox
    command: sleep 58
    deploy:
      mode: replicated
      replicas: 0
      restart_policy:
        condition: none
      labels:
        - swarm.cronjob.enable=true
        - swarm.cronjob.schedule=0/10 * * * * * # every ten seconds
        - swarm.cronjob.skip-running=true # but only every minute with this

Expected behaviour

A single service run completing each minute.

Actual behaviour

Here is the docker service logs test-cron_swarm-cronjob -f output:

test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:51:40Z","message":"Start test-cron_sleep (exit 0 ; complete)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:51:50Z","message":"Start test-cron_sleep (exit 0 ; complete)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:52:00Z","message":"Start test-cron_sleep (exit 0 ; complete)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:52:10Z","message":"Start test-cron_sleep (exit 1 ; ready)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:52:20Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:52:30Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:52:40Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"warn","time":"2019-04-25T13:52:50Z","message":"Skip test-cron_sleep (exit 1 ; running)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:53:00Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:53:10Z","message":"Start test-cron_sleep (exit 1 ; ready)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:53:20Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:53:30Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}
test-cron_swarm-cronjob.1.qrvfjoycbnhv@devdock2    | {"level":"info","time":"2019-04-25T13:53:40Z","message":"Start test-cron_sleep (exit 1 ; shutdown)"}

Here is the docker service ps --no-trunc test-cron_sleep output :

Every 5.0s: docker service ps --no-trunc test-cron_sleep            devdock1: Thu Apr 25 15:53:19 2019

ID                          NAME                    IMAGE                                                                                    NODE       DESIRED STATE   CURRENT STATE            ERROR   PORTS
qoe1htr8tjywa3wovlziqa8wf   test-cron_sleep.1       busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd   devdock1   Running         Running 6 seconds ago
8wy308c7tx6t7hokeh03j8zzq    \_ test-cron_sleep.1   busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd   devdock1   Shutdown        Shutdown 7 seconds ago
w1cj0nz1arzn1vedzrrexdmc0    \_ test-cron_sleep.1   busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd   devdock1   Shutdown        Shutdown 5 seconds ago
j0lol5la1683ar1oquh6khggp    \_ test-cron_sleep.1   busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd   devdock1   Shutdown        Shutdown 25 seconds ago
w8wo5al2ynfxbw839lkbsfi4w    \_ test-cron_sleep.1   busybox:latest@sha256:954e1f01e80ce09d0887ff6ea10b13a812cb01932a0781d6b0cc23f743a874fd   devdock1   Shutdown        Shutdown 45 seconds ago

swarm-cronjob seems to consider the job complete and reruns it many times, even though it is actually still running at that moment.

I checked the portion of your code managing this (it seems to be worker.go) but I didn't find anything obvious. This may also come from my own swarm configuration. Can you reproduce this bug?

Configuration

Docker version: 18.09.2, build 6247962

Logs

Detailed earlier

Thanks a lot,

Axel

Docker Swarm Config Ignored

Behaviour

Hi there,

It seems that if you put a "configs" entry in your docker-compose swarm file and you run that service under the cron system, the configuration is totally ignored. I made a quick fix to avoid any problem with this behavior, but it would be great if it didn't happen.

Steps to reproduce this issue

  1. Have a docker-compose file with a config:

    deploy:
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=55 * * * * *"
        - "swarm.cronjob.skip-running=false"
      replicas: 0
      restart_policy:
        condition: none
    configs:
      - source: app_config
        target: /app/config.json

    and, at the top level of the file:

    configs:
      app_config:
        file: ./config.json

  2. Make this service run under swarm-cronjob
  3. Launch it

Expected behaviour

My service renames specific files to start different services, so it should just do its normal execution and then stop.

Actual behaviour

It simply can't find the file anymore; it was working perfectly before I changed it to run under swarm-cronjob.

Traceback (most recent call last):
  File "index.py", line 43, in <module>
    with open("./config.json", 'r') as file:
FileNotFoundError: [Errno 2] No such file or directory: './config.json'

Configuration

  • Target Docker version (the host/cluster you manage) :
    Client:
    Version: 18.09.1
    API version: 1.39
    Go version: go1.10.6
    Git commit: 4c52b90
    Built: Wed Jan 9 19:35:23 2019
    OS/Arch: linux/amd64
    Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 4c52b90
Built: Wed Jan 9 19:02:44 2019
OS/Arch: linux/amd64
Experimental: false

  • Platform (windows/linux) : Linux
  • Target Swarm version :

Docker info

Containers: 53
Running: 10
Paused: 0
Stopped: 43
Images: 90
Server Version: 18.09.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: r3n35izga6bm62qe3ukavh90n
Is Manager: true
ClusterID: mejy5e45rpv3qi5c31brgj75z
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.11.228
Manager Addresses:
192.168.11.228:2377
Runtimes: nvidia runc
Default Runtime: nvidia
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 96ec2177ae841256168fcf76954f7177af9446eb-dirty
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-131-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 48
Total Memory: 125.8GiB
Name: gpu-dev
ID: TSWA:FM46:HGP5:XN25:G4K2:T6RW:UZIO:2KRN:DNV4:FIN2:CRRA:3ALK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://registry.docker-cn.com/
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support

Logs

See earlier in the actual output

I really appreciate your work and hope it will grow into something big! Cheers

Keep the command running until it finishes even if there was an update to the service.

Description

In my use case, I have an hourly cron job ("9 * * * *") that takes 10 minutes to finish. If the image is updated during the execution time, the command is stopped and interrupted.

The request is to give the possibility to apply updates only to new runs and keep already-running instances untouched.

Example:

  • the service started at 10:09 AM
  • the service got updated at 10:11 AM

In that case the command should keep running until it finishes, with the update applied only on the next run.

Feature request: cronjob to restart existing service?

Firstly, thanks for the awesome project!

I'm using certbot to renew certificates; after it succeeds, I'd like to ask the nginx service to reload them.

I wonder if swarm-cronjob could restart a service in this case? certbot is defined as a cron job now, but I have no idea what command to write to restart the nginx service.

BTW, restarting nginx should be conditional: only when the renewal is successful should it be restarted. Is this kind of logic possible?

Feature request: cli to manually run a job?

Sometimes manually running a job is desired, like migrating a DB, etc. The job doesn't have to run repeatedly.

I wonder if it makes sense to offer a CLI that can instruct swarm-cronjob to run a job immediately?

Ideally the CLI could offer arguments similar to those of docker run, where you can override the entry point, working dir, etc.

Feature: log removed services

It would be nice if swarm-cronjob could log removed services as well. Currently, when a service is added, the following line is printed:

INF Add cronjob with schedule 0 0,30 * * * * service=some_task

But when you deploy the same stack with the --prune option and a service gets deleted, nothing is printed, which is a bit confusing.

Log when edit swarm.cronjob.schedule label

Behaviour

Log when cronjob time is updated via docker swarm label.

Steps to reproduce this issue

  1. Create a service xxx in your nnn stack with swarm-cronjob labels to activate the cronjob;
  2. The swarm-cronjob log will show the message:
    Mon, 02 Mar 2020 13:41:43 -03 INF Add cronjob with schedule 45 * * * * service=nnn_xxxx
  3. Update the swarm.cronjob.schedule label;
  4. Nothing will be logged by the swarm-cronjob container.

Expected behaviour

Maybe log something in swarm-cronjob container:
Mon, 02 Mar 2020 13:41:43 -03 INF Updated cronjob with schedule 55 * * * * service=nnn_xxxx

Actual behaviour

Nothing is logged when label swarm.cronjob.schedule is updated

Configuration

  • Target Docker version (the host/cluster you manage) : 19.03.5
  • Platform (windows/linux) : linux
  • System info (type uname -a) : Linux swarm3 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Logs

Mon, 02 Mar 2020 13:27:26 -03 INF Starting swarm-cronjob 1.7.1
Mon, 02 Mar 2020 13:41:43 -03 INF Add cronjob with schedule 45 * * * * service=nnn_xxx
Mon, 02 Mar 2020 13:45:00 -03 INF Start job service=nnn_xxx status= tasks_active=0
Mon, 02 Mar 2020 13:55:00 -03 INF Start job service=nnn_xxx status= tasks_active=0
Mon, 02 Mar 2020 14:05:00 -03 INF Start job service=nnn_xxx status= tasks_active=0

[QUESTION] Registry authentication

Hi,

thank you for your work on this project, it's been working perfectly for us.

We've come across an issue with pulling images from a private registry. While that's not specifically a swarm-cronjob issue, I was wondering if someone might know a solution. When deploying with docker stack deploy --with-registry-auth stackname, if the only service in the stack is one to be run by swarm-cronjob, it naturally starts scaled to 0 replicas. However, this means that the image is not pulled immediately. So, when swarm-cronjob attempts to run the container later on (i.e. the next day), the node doesn't have registry credentials available.

Has anyone experienced this issue? Is there a straightforward way of resolving it?

Any sort of help would be appreciated.

docker cap_add seems to work only with replica to 1 and not in scheduled mode

Hi, and thanks for your work!

We use swarm-cronjob to start jobs in our clusters, and for one of our containers we need to set the MAC address (licensing stuff). We found that adding cap_add: - NET_ADMIN allows us to override that value, and we are OK with that.

We then tried to deploy such a container in our swarm and found that:

  • if the container is started with deploy: replicas: 1, it runs well
  • if the container is started by the scheduler (swarm.cronjob.schedule=* * * * * for example), we see the error: ip: ioctl 0x8914 failed: Operation not permitted

In the Docker documentation, we can see that:

"Note when using docker stack deploy: the cap_add and cap_drop options are ignored when deploying a stack in swarm mode."

But it seems that with the Docker version we use (20.10.10) and swarm-cronjob (1.10.0, latest), cap_add is NOT ignored, at least with the replicas: 1 option.

Here is an example of our yml file:

version: '3.8'

services:
  stream:
    image: alpine
    entrypoint: [ "/bin/sh","-c" ]
    command: >
        "ip link set dev eth0 down
        && ip link set dev eth0 address fa:16:3e:87:02:d7
        && ip link set dev eth0 up"
    deploy:
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=* * * * *"
        - "swarm.cronjob.skip-running=true"
    cap_add:
      - NET_ADMIN

Update service's schedule

Is it possible to have the cron schedule updated when the service's labels change?

For example, I'm running Docker Swarm and there's a stack running; one service in the stack has the labels that let swarm-cronjob know it is a cron task.
I update the swarm.cronjob.schedule label in docker-compose.yml and run docker stack deploy; swarm-cronjob should notice the change and re-schedule this cron task.

My current workaround is docker service remove {cron_service} before deploy.

Thanks!

Feature suggestion: Scale up to more than 1 task

Currently, swarm-cronjob scales a service down to 0 and then up to 1 when a cron job is to be started.

// Only 1 replica is necessary in replicated mode
if service.Mode == model.ServiceModeReplicated {
	*serviceUp.Spec.Mode.Replicated.Replicas = 1
}

The idea is to add an additional label, e.g. swarm.cronjob.replicas=3, which would control the number of replicas started on the cron schedule, so it would be scaled down to 0 and up to e.g. 3.
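In stack-file terms, the proposed label might look like this (a sketch; the label is a suggestion in this issue, not a documented feature):

deploy:
  mode: replicated
  replicas: 0
  labels:
    - "swarm.cronjob.enable=true"
    - "swarm.cronjob.schedule=0 0 * * * *"
    - "swarm.cronjob.replicas=3"   # proposed: scale 0 -> 3 on each trigger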

cronjob for an existing service in swarm

Hi,

I have a service in my swarm that is running my Node.js application, and my goal is to create a crontab task inside of it. If I put command lines in the swarm configuration, they override my Docker image's command and my service doesn't run.

So how can I configure it? Also, how can I run multiple schedules (such as every minute, every hour, etc.)? See the sketch below for one common pattern.
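A common pattern here (a sketch, under the assumption that the periodic work can run in its own one-shot service next to the long-running app; image and script names are illustrative):

version: "3.6"
services:
  app:
    image: my/nodejs-app:latest        # long-running service, untouched

  app-cron:
    image: my/nodejs-app:latest        # same image, different command
    command: node scripts/cron-task.js
    deploy:
      replicas: 0
      restart_policy:
        condition: none
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 * * * * *"   # one such service per schedule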

Why no Windows Docker image ?

Hi,

Since it is a Go app and you already release Windows binaries, why not also release Windows Docker images?

Kind regards

cron doesn't work in swarm

Behaviour

I'm using Docker 19.03.5 and Compose version 1.25.0.
I tested this with docker-compose (no swarm) and it works. Once I deploy on swarm,
it doesn't matter what I put in swarm.cronjob.schedule: swarm starts the container with a script, and when the script ends the container stops, every 10 seconds.

I want the container to go up at every interval that I have set in "swarm.cronjob.schedule".
Here are my settings:

cron:
  image: crazymax/swarm-cronjob:latest
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - "TZ=Asia/Jerusalem"
    - "LOG_LEVEL=debug"
    - "LOG_JSON=false"
  deploy:
    placement:
      constraints:
        - node.role == manager

service1:
  image: 127.0.0.1:5000/api2:latest
  command: python session.py
  labels:
    - "swarm.cronjob.enable=true"
    - "swarm.cronjob.schedule=0/30 * * * * *"
    - "swarm.cronjob.skip-running=true"
  deploy:
    placement:
      constraints:
        - node.role == manager

I have a swarm with only one host at the moment (a swarm of one machine, which is the manager); I will expand it later.

I use Ubuntu 18.

logs from cron container:


base_cron.1.2y0btqbrsewm@ansible    | Mon, 27 Jan 2020 11:18:44 IST DBG Event triggered newstate=completed oldstate=updating service=base_service1
base_cron.1.2y0btqbrsewm@ansible    | Mon, 27 Jan 2020 11:18:44 IST DBG Cronjob disabled service=base_service1

What am I doing wrong?
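One thing that stands out (an observation on the config above, consistent with the "Cronjob disabled" debug line): working examples elsewhere in this list put the swarm.cronjob labels under deploy, since in swarm mode swarm-cronjob reads service labels, while labels at the service top level become container labels. A likely fix:

service1:
  image: 127.0.0.1:5000/api2:latest
  command: python session.py
  deploy:
    replicas: 0
    restart_policy:
      condition: none
    placement:
      constraints:
        - node.role == manager
    labels:                                      # moved under deploy
      - "swarm.cronjob.enable=true"
      - "swarm.cronjob.schedule=0/30 * * * * *"
      - "swarm.cronjob.skip-running=true"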

Feature: Adding support for automatically fetching the newest image

Hi

First of all, thanks for making this!

I don't have much experience in Go, but I was wondering if it would be possible to add a flag to make the swarm update the service image on the next start.

I'm currently using Shepherd for doing this, but it doesn't play nice with multiple services updating the service definition.

Non-cronjob containers are added to cronjob list

Behaviour

swarm-cronjob adds services without cron labels to the cron schedule.

Steps to reproduce this issue

  1. Deploy swarm-cronjob following the Quickstart guide.

  2. Deploy service with crontab labels

  3. Verify that the service is picked up by swarm-cronjob

  4. Deploy service without the swarm-cronjob specified labels

Expected behaviour

Only services with the required labels should be picked up by swarm-cronjob

Actual behaviour

Services without the required labels are picked up and added without a schedule by swarm-cronjob

Configuration

  • Target Docker version (the host/cluster you manage) : 17.06.2-ee-16
  • Platform (windows/linux) : linux
  • Target Swarm version : 17.06.2-ee-16

Docker info

Containers: 26
Running: 13
Paused: 0
Stopped: 13
Images: 843
Server Version: 17.06.2-ee-16
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: iesbyog0ek52azwt02v9y3hrc
Is Manager: true
ClusterID: 26uqxgbgutp9acw2v4ntxma8f
Managers: 3
Nodes: 22
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
External CAs:
cfssl: https://10.107.XXX.186:12381/api/v1/cfssl/sign
cfssl: https://10.107.XXX.184:12381/api/v1/cfssl/sign
cfssl: https://10.107.XXX.185:12381/api/v1/cfssl/sign
Root Rotation In Progress: false
Node Address: 10.107.XXX.186
Manager Addresses:
10.107.XXX.184:2377
10.107.XXX.185:2377
10.107.XXX.186:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 462c82662200a17ee39e74692f536067a3576a50
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-133-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 15.65GiB
Name: XXX
ID: VF5U:RKYW:353Q:5TIJ:5I2H:FTZA:FQCH:XXXX:HHM7:KLKE:WPPF:CLGI
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 152
Goroutines: 327
System Time: 2019-01-11T10:58:05.751575781+01:00
EventsListeners: 9
Http Proxy: XXX
Https Proxy: XXX
No Proxy: localhost,127.0.0.0/8,XXX
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

Logs

Service logs (set LOG_LEVEL to debug)

Thu, 10 Jan 2019 17:34:10 CET INF Add cronjob for service shared_runner_runner with schedule 
Thu, 10 Jan 2019 17:34:10 CET ERR Cannot update job for service shared_runner_runner error="Empty spec string"
Thu, 10 Jan 2019 17:34:10 CET INF Add cronjob for service shared_runner_runner with schedule 
Thu, 10 Jan 2019 17:34:10 CET ERR Cannot update job for service shared_runner_runner error="Empty spec string"
Thu, 10 Jan 2019 17:34:41 CET INF Add cronjob for service shared_runner_runner with schedule 
Thu, 10 Jan 2019 17:34:41 CET ERR Cannot update job for service shared_runner_runner error="Empty spec string"

How to pass variables to run docker commands

Hi =)

I have a problem: I have a Nextcloud instance running in swarm that needs a cronjob, and I can't figure out how to set this up. My approach was the following:

version: '3.3'
services:
  cron:
    image: crazymax/swarm-cronjob:latest
    environment:
      LOG_JSON: 'false'
      LOG_LEVEL: info
      TZ: Europe/Vienna
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
    networks:
     - default
    logging:
      driver: json-file
    deploy:
      placement:
        constraints:
         - node.role == manager
  nextcloud_cron_php:
    image: docker:latest
    command: docker exec --user www-data $(docker ps -q -f name=nextcloud_stack_nextcloud) php -f /var/www/html/cron.php
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
    networks:
     - default
    logging:
      driver: json-file
    deploy:
      mode: global
      labels:
        swarm.cronjob.enable: 'true'
        swarm.cronjob.schedule: '* * * * *'
        swarm.cronjob.skip-running: 'false'
      restart_policy:
        condition: none
networks:
  default:
    driver: overlay

The command works with docker run, but it doesn't work in compose style (the $(docker ps -q -f name=nextcloud_stack_nextcloud) part is misinterpreted). Can someone help me set this up correctly?

Besides that, thanks for all the cool containers and development, crazymax =)
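A likely fix (an assumption: compose does not run command through a shell, so the $(...) substitution is never expanded, and a literal $ must be escaped as $$ in compose files):

nextcloud_cron_php:
  image: docker:latest
  # run through a shell so the $(docker ps ...) substitution happens at runtime
  entrypoint: ["/bin/sh", "-c"]
  command: 'docker exec --user www-data $$(docker ps -q -f name=nextcloud_stack_nextcloud) php -f /var/www/html/cron.php'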

Scheduling in timezone

Have there been any requests/thoughts around the ability to schedule tasks in a timezone?

For example, if I have a swarm-cronjob instance that runs in UTC, I would like to schedule a service to run every Monday at midnight California time, without doing the math myself to adjust the scheduling.

I have users scheduling tasks in multiple time zones and this would be a great nice-to-have.

Thanks,

CJ
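For the instance-wide case, other examples in this list set the scheduler's timezone through the TZ environment variable (per-service timezones would still need dedicated support in the cron parser):

swarm-cronjob:
  image: crazymax/swarm-cronjob:latest
  environment:
    - "TZ=America/Los_Angeles"   # all schedules interpreted in this zone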

Incorrect cron parsing

A log is worth a thousand words. Hours are swapped with minutes: instead of executing at 2 AM, it is running every hour at xx:02.

{"level":"debug","service":"generator","time":"2019-05-15T14:02:00Z","message":"Update cronjob with schedule 0 2 * * *"},
{"level":"debug","service":"generator","newstate":"completed","oldstate":"updating","time":"2019-05-15T14:02:00Z","message":"Event triggered"}

Support for Global Mode

Hi,
I'd like support for global-mode services in swarm.
Our use case is to run a prune service globally.

Prune Service

version: "3.6"
services:
  prune:
    image: docker:18.06
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: ["docker", "system", "prune", "-f"]
    deploy:
      mode: global
      restart_policy:
        condition: none
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 */5 * * * *"
        - "swarm.cronjob.skip-running=false"

Swarm-Cronjob Log

Wed, 29 May 2019 14:42:38 CDT INF Starting swarm-cronjob 1.2.0
Wed, 29 May 2019 14:42:38 CDT INF Add cronjob with schedule 0 */5 * * * * service=swarm-cronjob_prune
Wed, 29 May 2019 14:45:00 CDT INF Start job last_status=failed service=swarm-cronjob_prune
2019/05/29 14:45:00 cron: panic running job: runtime error: invalid memory address or nil pointer dereference
goroutine 49 [running]:
github.com/crazy-max/cron.(*Cron).runWithRecovery.func1(0xc00035c150)
	/go/pkg/mod/github.com/crazy-max/[email protected]/cron.go:260 +0x9e
panic(0x8c89c0, 0xd58ad0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/crazy-max/swarm-cronjob/internal/worker.(*Client).Run(0xc000324340)
	/app/internal/worker/worker.go:35 +0x20d
github.com/crazy-max/cron.(*Cron).runWithRecovery(0xc00035c150, 0x9f2ae0, 0xc000324340)
	/go/pkg/mod/github.com/crazy-max/[email protected]/cron.go:264 +0x57
created by github.com/crazy-max/cron.(*Cron).run
	/go/pkg/mod/github.com/crazy-max/[email protected]/cron.go:298 +0x90a

Panic when trying to extract registry auth from a service without registry auth

Steps to reproduce this issue

  1. Deploy a service without --with-registry-auth
  2. Set swarm.cronjob.registry-auth=true for that service
  3. Wait for swarm-cronjob to trigger that job

Expected behaviour

Not really sure; I think it would make sense to log a warning and try to run that service anyway.

Actual behaviour

swarm-cronjob (or rather, the docker cli package, if I understand correctly) panics

Configuration

  • Target Docker version (the host/cluster you manage) : 20.10.7
  • Platform (windows/linux) : Linux Ubuntu 20.04.4 LTS

Logs

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x220 pc=0x937d53]
goroutine 84 [running]:
github.com/docker/cli/cli/command.ElectAuthServer({0xbe2f50, 0xc00009c000}, {0xbf31b8, 0xc000074750})
	/go/pkg/mod/github.com/docker/[email protected]+incompatible/cli/command/registry.go:31 +0x73
github.com/docker/cli/cli/command.ResolveAuthConfig({0xbe2f50, 0xc00009c000}, {0xbf31b8, 0xc000074750}, 0x0)
	/go/pkg/mod/github.com/docker/[email protected]+incompatible/cli/command/registry.go:86 +0x9a
github.com/docker/cli/cli/command.resolveAuthConfigFromImage({0xbe2f50, 0xc00009c000}, {0xbf31b8, 0xc000074750}, {0xc000082f00, 0xc00032b150})
	/go/pkg/mod/github.com/docker/[email protected]+incompatible/cli/command/registry.go:213 +0x112
github.com/docker/cli/cli/command.RetrieveAuthTokenFromImage({0xbe2f50, 0xc00009c000}, {0xbf31b8, 0xc000074750}, {0xc000082f00, 0x0})
	/go/pkg/mod/github.com/docker/[email protected]+incompatible/cli/command/registry.go:192 +0x7b
github.com/crazy-max/swarm-cronjob/internal/docker.(*dockerClient).RetrieveAuthTokenFromImage(0xc00021aa20, {0xbe2f50, 0xc00009c000}, {0xc000082f00, 0x0})
	/src/internal/docker/client.go:65 +0x45
github.com/crazy-max/swarm-cronjob/internal/worker.(*Client).Run(0xc000477f40)
	/src/internal/worker/worker.go:72 +0x63e
github.com/robfig/cron/v3.(*Cron).startJob.func1()
	/go/pkg/mod/github.com/robfig/cron/[email protected]/cron.go:312 +0x6a
created by github.com/robfig/cron/v3.(*Cron).startJob
	/go/pkg/mod/github.com/robfig/cron/[email protected]/cron.go:310 +0xb2

Global mode does not work correctly

Behaviour

Steps to reproduce this issue

  1. Create a 5-node swarm cluster with 3 manager nodes on Play with Docker
  2. Create a swarm-cronjob stack as shown in the example
  3. Create a stack as shown in the example for a global job
  4. Tail the service logs for swarm-cronjob and the job service's stack

Expected behaviour

On every schedule occasion, there should be one log entry for each node.

Actual behaviour

After the first deployment (where there is output for every node), only the output of two nodes is shown. docker stack ps shows that the service is only restarted on two of the nodes.

Most of the time, that is. Every few iterations, the service is successfully run on all nodes again.

I am able to reproduce this on PWD as well as on a real swarm cluster with 3 managers and 1 worker (with Docker 18.09.8).

Configuration

  • Target Docker version (the host/cluster you manage) : 19.03.4 (happens on 18.09.8 as well)
  • Platform (windows/linux) : Linux amd64
  • Target Swarm version : ?

Docker info

Client:
 Debug Mode: false
 Plugins:
  app: Docker Application (Docker Inc., v0.8.0)

Server:
 Containers: 6
  Running: 1
  Paused: 0
  Stopped: 5
 Images: 2
 Server Version: 19.03.4
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: 3gq13ao9gjqm4e93fbt2a0g5q
  Is Manager: true
  ClusterID: ocg8epc8ygsna7jyckvnoeqcs
  Managers: 3
  Nodes: 5
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.0.51
  Manager Addresses:
   192.168.0.49:2377
   192.168.0.50:2377
   192.168.0.51:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.4.0-161-generic
 Operating System: Alpine Linux v3.10 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.4GiB
 Name: manager2
 ID: Y4T2:VAE3:BTYF:6WVY:MPZH:PPO7:U7SL:AJI4:YTHG:YS2P:B3XP:Q45A
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 62
  Goroutines: 172
  System Time: 2019-10-26T20:40:38.154648103Z
  EventsListeners: 2
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.1
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

Logs

swarm-cronjob logs:

2019-10-26T20:42:53.338828516Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:53 CEST INF Starting swarm-cronjob 1.6.0
2019-10-26T20:42:53.338959117Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:53 CEST DBG Creating Docker API client
2019-10-26T20:42:53.379697258Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:53 CEST DBG 0 scheduled services found through labels
2019-10-26T20:42:53.379733758Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:53 CEST DBG Starting the cron scheduler
2019-10-26T20:42:53.379741258Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:53 CEST DBG Listening docker events...
2019-10-26T20:42:57.319945256Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST DBG Event triggered newstate= oldstate= service=global-job_test
2019-10-26T20:42:57.335406447Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST INF Add cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:42:57.335447247Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:42:57.335475647Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST DBG Event triggered newstate= oldstate= service=global-job_test
2019-10-26T20:42:57.362249406Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:42:57.362274906Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:42:57 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:00.019533319Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Service task node=worker2 service=global-job_test status_message=starting status_state=starting task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:43:00.019564819Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Service task node=manager2 service=global-job_test status_message=starting status_state=starting task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:43:00.019572119Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Service task node=manager3 service=global-job_test status_message=starting status_state=starting task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:43:00.019578119Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Service task node=manager1 service=global-job_test status_message=starting status_state=starting task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:43:00.019599419Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Service task node=worker1 service=global-job_test status_message=starting status_state=starting task_id=p8roxdmjrfvlkolsy9vnfogr0
2019-10-26T20:43:00.019605119Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST INF Start job service=global-job_test status= tasks_active=0
2019-10-26T20:43:00.027649867Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Event triggered newstate= oldstate= service=global-job_test
2019-10-26T20:43:00.696545622Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:00.696584223Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:00.696590923Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Event triggered newstate=updating oldstate= service=global-job_test
2019-10-26T20:43:00.703906666Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:00.704290168Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:14.607366999Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:14 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:43:14.622608289Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:14 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:14.622631989Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:14 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:30.027846938Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:43:30.027888638Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:43:30.027899038Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:43:30.027905838Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:43:30.027912538Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:43:30.028043239Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:43:30.028067639Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:43:30.028075339Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=worker1 service=global-job_test status_message=shutdown status_state=shutdown task_id=p8roxdmjrfvlkolsy9vnfogr0
2019-10-26T20:43:30.028084040Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:43:30.028090540Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:43:30.038057399Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:43:30.044022234Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:43:30.079028441Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:30.079083141Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:30.079092241Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:43:30.096803246Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:30.096932147Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:43:32.697817939Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:32 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:43:32.707604197Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:32 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:43:32.707882299Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:43:32 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:00.027384935Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:44:00.027586137Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:44:00.027606037Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:44:00.027613137Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:44:00.027619537Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:44:00.027625837Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:44:00.027632237Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:44:00.027638637Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:44:00.027647737Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:44:00.027654137Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker1 service=global-job_test status_message=shutdown status_state=shutdown task_id=p8roxdmjrfvlkolsy9vnfogr0
2019-10-26T20:44:00.027660537Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:44:00.027682737Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:44:00.027695837Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:44:00.041459819Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:44:00.063773651Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:00.063823751Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:00.063830951Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:44:00.077671733Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:00.077749134Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:03.080828819Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:03 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:44:03.114022615Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:03 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:03.114073416Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:03 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:30.030278875Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=t1vlce7ha8roh8j4zg3cwyhy9
2019-10-26T20:44:30.030308275Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=x5o0z8i9xj8p3o3hz7nt9cjcc
2019-10-26T20:44:30.030315075Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:44:30.030320775Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:44:30.030326175Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:44:30.030347175Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:44:30.030353075Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:44:30.030358175Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:44:30.030364375Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:44:30.030369575Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:44:30.030374675Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:44:30.031743484Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker1 service=global-job_test status_message=shutdown status_state=shutdown task_id=p8roxdmjrfvlkolsy9vnfogr0
2019-10-26T20:44:30.031758784Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:44:30.031765884Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:44:30.031772084Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:44:30.045133463Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:44:30.089437725Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:30.097190371Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:30.097214972Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:44:30.160094244Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:30.160147244Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:44:32.684168202Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:32 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:44:32.694651664Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:32 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:44:32.694931866Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:44:32 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:00.031574920Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=33hdn2wdg37ys24yuj75udp1o
2019-10-26T20:45:00.031603820Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=7yta3pmm2iw8x5s9t44300xl3
2019-10-26T20:45:00.031612021Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=t1vlce7ha8roh8j4zg3cwyhy9
2019-10-26T20:45:00.031617721Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=x5o0z8i9xj8p3o3hz7nt9cjcc
2019-10-26T20:45:00.031623421Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:45:00.031628921Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:45:00.031634821Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:45:00.031654321Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:45:00.031662921Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:45:00.031668921Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:45:00.031674421Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:45:00.031679921Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:45:00.031685721Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:45:00.031691121Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker1 service=global-job_test status_message=shutdown status_state=shutdown task_id=p8roxdmjrfvlkolsy9vnfogr0
2019-10-26T20:45:00.031696521Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:45:00.031761521Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:45:00.031779422Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:45:00.038967364Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:45:00.057264973Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:00.057375673Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:00.060752593Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:45:00.078486898Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:00.078798700Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:03.399136490Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:03 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:45:03.425201744Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:03 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:03.425610447Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:03 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:30.030523261Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=ih0qu55x4ksl8caqkkuk7g02d
2019-10-26T20:45:30.030553661Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=r4km6owlvsdzq2yrgycs1c5h0
2019-10-26T20:45:30.030561561Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=33hdn2wdg37ys24yuj75udp1o
2019-10-26T20:45:30.030567961Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=7yta3pmm2iw8x5s9t44300xl3
2019-10-26T20:45:30.030574161Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=t1vlce7ha8roh8j4zg3cwyhy9
2019-10-26T20:45:30.030579961Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=x5o0z8i9xj8p3o3hz7nt9cjcc
2019-10-26T20:45:30.030585961Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:45:30.030608161Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:45:30.030616761Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:45:30.030622461Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:45:30.030628061Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:45:30.030633661Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:45:30.030639261Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:45:30.030644861Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:45:30.030650561Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:45:30.030656261Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=o6xlg4g3t1k8nh5jfgl4x5usu
2019-10-26T20:45:30.030661962Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:45:30.033594879Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:45:30.039685415Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:45:30.061822946Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:30.062170848Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:30.068468986Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:45:30.080570458Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:30.080592858Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:45:33.294997630Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:33 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:45:33.337374382Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:33 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:45:33.337403182Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:45:33 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:00.026643387Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=iqmkwqs3tghm0zf2vrgq44axw
2019-10-26T20:46:00.026670187Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=p4gliavd25dfa1nxwbtdewois
2019-10-26T20:46:00.026676887Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=ih0qu55x4ksl8caqkkuk7g02d
2019-10-26T20:46:00.026682587Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=r4km6owlvsdzq2yrgycs1c5h0
2019-10-26T20:46:00.026687987Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=33hdn2wdg37ys24yuj75udp1o
2019-10-26T20:46:00.026693287Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=7yta3pmm2iw8x5s9t44300xl3
2019-10-26T20:46:00.026820488Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=t1vlce7ha8roh8j4zg3cwyhy9
2019-10-26T20:46:00.026826888Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=x5o0z8i9xj8p3o3hz7nt9cjcc
2019-10-26T20:46:00.026833888Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:46:00.026839188Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:46:00.026844188Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=rxu731gfkxywzztejrzqt8yz2
2019-10-26T20:46:00.026849388Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:46:00.026854488Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:46:00.026959589Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=y6jhnhv6ztacb6hvuhgmgpf03
2019-10-26T20:46:00.026967889Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:46:00.026973089Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:46:00.026978089Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:46:00.026991889Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:46:00.026997289Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:46:00.038284156Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:46:00.049991326Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:00.050014826Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:00.050020426Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:46:00.102332336Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:00.105970558Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:00 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:03.288794054Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:03 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:46:03.314152204Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:03 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:03.314178204Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:03 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:30.042047224Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=7u1map386f0eblw9skzoep7vo
2019-10-26T20:46:30.042071725Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=okhp1zbav1xz5oe7uea0vlqk8
2019-10-26T20:46:30.042078825Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=iqmkwqs3tghm0zf2vrgq44axw
2019-10-26T20:46:30.042084725Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=p4gliavd25dfa1nxwbtdewois
2019-10-26T20:46:30.042104225Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=ih0qu55x4ksl8caqkkuk7g02d
2019-10-26T20:46:30.042110425Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=r4km6owlvsdzq2yrgycs1c5h0
2019-10-26T20:46:30.042115925Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=33hdn2wdg37ys24yuj75udp1o
2019-10-26T20:46:30.042121525Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=7yta3pmm2iw8x5s9t44300xl3
2019-10-26T20:46:30.042128425Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=t1vlce7ha8roh8j4zg3cwyhy9
2019-10-26T20:46:30.042134225Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager3 service=global-job_test status_message=finished status_state=complete task_id=x5o0z8i9xj8p3o3hz7nt9cjcc
2019-10-26T20:46:30.042139625Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=6ctbo74wx1evgkl3891e46m8g
2019-10-26T20:46:30.042145025Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker1 service=global-job_test status_message=finished status_state=complete task_id=yw0p0uvvwkzpjbsgx2c5t3qa5
2019-10-26T20:46:30.042150225Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=mti2ir0t6hdba32cbanty7spo
2019-10-26T20:46:30.042155625Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=mx5qs3kd6wwkz2isd3xqeqwcz
2019-10-26T20:46:30.042160925Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=qxy9lnetgqfiqztj0uwjvssoi
2019-10-26T20:46:30.042171025Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager1 service=global-job_test status_message=finished status_state=complete task_id=jfm1rym73lqtu2j374ohtxj8u
2019-10-26T20:46:30.042176725Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=manager2 service=global-job_test status_message=finished status_state=complete task_id=jp0w343tiyb5joqthv67zp36u
2019-10-26T20:46:30.042182525Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Service task node=worker2 service=global-job_test status_message=finished status_state=complete task_id=nzrhwgwcj1v3fxhoymq6gb69z
2019-10-26T20:46:30.042188025Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST INF Start job service=global-job_test status=paused tasks_active=0
2019-10-26T20:46:30.046720952Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:46:30.075022120Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:30.075351622Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:30.075468123Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Event triggered newstate=updating oldstate=updating service=global-job_test
2019-10-26T20:46:30.091783520Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:30.092178822Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:30 CEST DBG Number of cronjob tasks: 1
2019-10-26T20:46:32.764649596Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:32 CEST DBG Event triggered newstate=paused oldstate=updating service=global-job_test
2019-10-26T20:46:32.776077264Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:32 CEST DBG Update cronjob with schedule 0/30 * * * * * service=global-job_test
2019-10-26T20:46:32.776272465Z swarm_cronjob_app.1.1d1bw0uwm17q@manager3    | Sat, 26 Oct 2019 22:46:32 CEST DBG Number of cronjob tasks: 1

Logs from the scheduled service:

2019-10-26T20:43:02.877993923Z global-job_test.0.nzrhwgwcj1v3@worker2    | NodeID: "v5cm1cjd4iy05zz2am73hrf0z"
2019-10-26T20:43:08.565210959Z global-job_test.0.jfm1rym73lqt@manager1    | NodeID: "ecs6mnxe31s0t08nkvei092so"
2019-10-26T20:43:08.569653485Z global-job_test.0.jp0w343tiyb5@manager2    | NodeID: "3gq13ao9gjqm4e93fbt2a0g5q"
2019-10-26T20:43:09.428232463Z global-job_test.0.qxy9lnetgqfi@worker2    | NodeID: "v5cm1cjd4iy05zz2am73hrf0z"
2019-10-26T20:43:13.127515745Z global-job_test.0.mx5qs3kd6wwk@manager1    | NodeID: "ecs6mnxe31s0t08nkvei092so"
2019-10-26T20:43:13.535353457Z global-job_test.0.mti2ir0t6hdb@manager2    | NodeID: "3gq13ao9gjqm4e93fbt2a0g5q"
2019-10-26T20:43:31.658171186Z global-job_test.0.yw0p0uvvwkzp@worker1    | NodeID: "5azgjswiu09jxa9dhhqickrtj"
2019-10-26T20:43:34.335444231Z global-job_test.0.6ctbo74wx1ev@worker2    | NodeID: "v5cm1cjd4iy05zz2am73hrf0z"
2019-10-26T20:44:02.045843889Z global-job_test.0.x5o0z8i9xj8p@manager3    | NodeID: "pmdttoyn0fco2f0t9d7vegupc"
2019-10-26T20:44:04.309006493Z global-job_test.0.t1vlce7ha8ro@worker1    | NodeID: "5azgjswiu09jxa9dhhqickrtj"
2019-10-26T20:44:31.758374216Z global-job_test.0.7yta3pmm2iw8@manager3    | NodeID: "pmdttoyn0fco2f0t9d7vegupc"
2019-10-26T20:44:33.590233772Z global-job_test.0.33hdn2wdg37y@worker1    | NodeID: "5azgjswiu09jxa9dhhqickrtj"
2019-10-26T20:45:02.283250073Z global-job_test.0.r4km6owlvsdz@manager3    | NodeID: "pmdttoyn0fco2f0t9d7vegupc"
2019-10-26T20:45:05.406355593Z global-job_test.0.ih0qu55x4ksl@worker1    | NodeID: "5azgjswiu09jxa9dhhqickrtj"
2019-10-26T20:45:31.978585819Z global-job_test.0.p4gliavd25df@manager2    | NodeID: "3gq13ao9gjqm4e93fbt2a0g5q"
2019-10-26T20:45:34.746719645Z global-job_test.0.iqmkwqs3tghm@manager3    | NodeID: "pmdttoyn0fco2f0t9d7vegupc"
2019-10-26T20:46:02.227012350Z global-job_test.0.okhp1zbav1xz@manager3    | NodeID: "pmdttoyn0fco2f0t9d7vegupc"
2019-10-26T20:46:04.194193329Z global-job_test.0.7u1map386f0e@worker1    | NodeID: "5azgjswiu09jxa9dhhqickrtj"

-- Question -- Mariabackup with swarm cronjob

Hi =), I want to use your container with mariabackup (included in the official mariadb docker image) to make backups (as this is not possible in swarm). Have you tried this, and do you maybe have an example? Do I also have to mount the mariadb config into the cron container? Does it somehow interfere with my databases? For example, if I use another container for mariabackup, could it crash my databases?
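
For what it's worth, an untested sketch of such a job service could look like the one below. Everything here is an assumption rather than a verified setup: the image tag, the credentials, the volume and network names, and the node label used to pin the job to the node holding the database data. Since mariabackup performs a physical copy, it needs the database's data directory mounted; if the server uses a custom config, mounting that config as well lets mariabackup pick up the same settings. It should not interfere with the running database beyond the usual backup locking, but test against a non-production instance first.

  mariabackup:
    image: mariadb:10.6            # assumption: the mariabackup tool ships in the official mariadb image
    command: mariabackup --backup --host=db --user=root --password=changeme --target-dir=/backups/latest
    volumes:
      - dbdata:/var/lib/mysql:ro   # mariabackup does a physical copy, so it needs the server's data directory
      - backups:/backups
    networks:
      - dbnet                      # assumption: same network as the "db" service
    deploy:
      mode: replicated
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 3 * * *"
        - "swarm.cronjob.skip-running=true"
      restart_policy:
        condition: none
      placement:
        constraints:
          - node.labels.db == 1    # hypothetical label: pin to the node where dbdata lives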

Support running multiple, isolated instances of `swarm-cronjob` within a single Docker Swarm cluster

Description

It seems that swarm-cronjob executes jobs found in any stack running within the same Docker Swarm cluster. Would it be possible to limit swarm-cronjob to reading cronjobs only from the stack in which it is deployed?

We use Docker Swarm to host multiple versions of our application within the same cluster for development. We're observing that each swarm-cronjob instance executes the cronjobs of all stacks, so if, for example, there are 5 versions of our app running, each cronjob is run 5 times. We want to keep running multiple instances of swarm-cronjob so that the dev environments stay as close as possible to our production environment. A hypothetical configuration for this is sketched below.
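
To illustrate the request only: the FILTER_LABEL variable below does not exist today, it is a hypothetical sketch of the desired scoping. Each swarm-cronjob instance would watch only services carrying a matching label, and each stack would tag its job services accordingly:

  swarm_cronjob:
    image: crazymax/swarm-cronjob
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - "LOG_LEVEL=info"
      - "FILTER_LABEL=com.example.stack=app-v1"   # hypothetical option, not implemented
    deploy:
      placement:
        constraints:
          - node.role == manager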

Skip running not working

Behaviour

When adding a cron service with skip-running=true, the next iteration of the service starts even though the prior run is still in progress.

Steps to reproduce this issue

  1. Add a service with a cronjob in the yml file:

 autoscript-cronjob:
   <<: *php
   command: /usr/local/bin/php /var/www/html/nmb_importtool/v2.00/autoScript/autoScript.php date=20200212 task=full_minus_autotext maintenance_setting=none
   deploy:
     mode: replicated
     replicas: 0
     labels:
       - "swarm.cronjob.enable=true"
       - "swarm.cronjob.schedule=* * * * *"
       - "swarm.cronjob.skip-running=true"
       - "swarm.cronjob.replicas=1"
     restart_policy:
       condition: none
     placement:
       constraints:
         - node.labels.app == 1
         - node.role == manager

  2. Deploy the stack through docker stack deploy
  3. Observe the swarm-cronjob log; it will state something like this:
Thu, 13 Feb 2020 15:25:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0

Thu, 13 Feb 2020 15:26:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0

Thu, 13 Feb 2020 15:27:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0

Thu, 13 Feb 2020 15:28:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0

Thu, 13 Feb 2020 15:29:00 EAT INF Start job service=webdev_test-cronjob status=paused tasks_active=0

Expected behaviour

The cron service should not be started again while the previous run is still in progress.

Actual behaviour

The service starts again, killing the old task that was still running and starting a new one.

Configuration

  • Target Docker version (the host/cluster you manage) : 19.03.2, build 6a30dfc

  • Platform (windows/linux) : linux redhat 7.7

  • System info (type uname -a) : 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Target Swarm version :

Docker info

Server:
Containers: 19
Running: 6
Paused: 0
Stopped: 13
Images: 25
Server Version: 19.03.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: owd30wzrmpy14f8r5a95rjwmh
Is Manager: true
ClusterID: owy8kklxey6kv9f3pmcr1ikg2
Managers: 1
Nodes: 7
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.200.221.79
Manager Addresses:
10.200.221.79:10.200.221.79
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.7 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 23.37GiB
Name: NMB-DC-CRG-V004
ID: GALW:KDQW:UREM:54O4:OBIY:3YQH:XMQW:WHCD:XWJS:WBI6:ZBDY:GQI4
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
10.200.221.79:5000
127.0.0.0/8
Live Restore Enabled: false

Logs


Thu, 13 Feb 2020 15:38:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=l5543ugudsrh1eqdt4aw9070k

Thu, 13 Feb 2020 15:38:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=kcwwj7geik91w8xy4p58idvpf

Thu, 13 Feb 2020 15:38:00 EAT INF Start job service=webdev_autoscript-cronjob status= tasks_active=0

Thu, 13 Feb 2020 15:38:00 EAT DBG Event triggered newstate=completed oldstate=updating service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:38:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:38:00 EAT DBG Number of cronjob tasks: 1

Thu, 13 Feb 2020 15:38:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:38:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:38:00 EAT DBG Number of cronjob tasks: 1

Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=starting task_id=953fdagojn0beocz402y9zr62

Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=8tk4h8mbq4dgz75u8erz1yq21

Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=l5543ugudsrh1eqdt4aw9070k

Thu, 13 Feb 2020 15:39:00 EAT DBG Service task node=NMB-DC-CRG-V004 service=webdev_autoscript-cronjob status_message=starting status_state=failed task_id=kcwwj7geik91w8xy4p58idvpf

Thu, 13 Feb 2020 15:39:00 EAT INF Start job service=webdev_autoscript-cronjob status=updating tasks_active=0

Thu, 13 Feb 2020 15:39:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:39:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:39:00 EAT DBG Number of cronjob tasks: 1

Thu, 13 Feb 2020 15:39:00 EAT DBG Event triggered newstate=updating oldstate=updating service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:39:00 EAT DBG Update cronjob with schedule * * * * * service=webdev_autoscript-cronjob

Thu, 13 Feb 2020 15:39:00 EAT DBG Number of cronjob tasks: 1

Support for swarm jobs

In line with the v23.x release, can we expect some sort of rework of the swarm-cronjob logic to utilize the swarm jobs feature?
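
For reference, swarm jobs are dedicated service modes introduced in Docker Engine 20.10; a replicated job can be created with, for example:

  docker service create --name my-job --mode replicated-job alpine date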

Swarm-cronjob to stop/start services ?

Hi,

I would like to know if it is possible to use swarm-cronjob to stop services running in containers and start other containers?

Suppose I have limited resources in the cloud, and the Flask app and the Airflow tasks should not be running at the same time.

Can I use swarm-cronjob to accomplish this (schedule at what time each container should be running, and stop one container before starting another)?
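
swarm-cronjob itself schedules one-shot tasks by scaling a job service up for each run; one conceivable (untested) workaround for stopping/starting long-running services is a scheduled job that talks to the Docker socket and rescales them with the standard CLI. All service names below are placeholders:

  scheduler:
    image: docker:cli    # official docker image variant shipping just the CLI (tag is an assumption)
    command: docker service scale mystack_flask=0 mystack_airflow=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 0
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 22 * * *"
      restart_policy:
        condition: none
      placement:
        constraints:
          - node.role == manager   # service-level commands must run against a manager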

No registration of new cron jobs in swarm-cronjob.

Support guidelines

I've found a bug and checked that ...

  • ... the documentation does not mention anything about my problem
  • ... there are no open or closed issues that are related to my problem

Description

We use swarm-cronjob to manage cron jobs in swarm. At the moment, there are 3 manager nodes in the swarm. The swarm-cronjob service runs on one of the manager nodes (1 container, no replicas); its placement depends on the presence of a swarm node label.

Registration of services in swarm-cronjob is done by adding docker labels. At the moment, we noticed that new cron jobs are not always picked up. A little over 100 cron jobs are currently in use.

Expected behaviour

New cron jobs will be picked up correctly.

Actual behaviour

After deploying services containing swarm-cronjob labels, new cron jobs are sometimes not registered in swarm-cronjob.

Steps to reproduce

  1. Create a 3-node swarm cluster with 3 manager nodes
  2. Create a swarm-cronjob stack as shown in the example
  3. Create a new stack to be scheduled as shown in the example
  4. Add new stack and deploy
  5. Check in the logs whether the new job was registered

swarm-cronjob version

1.12.0

Docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)

Server:
 Containers: 381
  Running: 53
  Paused: 0
  Stopped: 328
 Images: 239
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  Is Manager: true
  Managers: 3
  Nodes: 3
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
 runc version: v1.1.3-0-g6724737
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.15.0-46-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Live Restore Enabled: false

Docker Compose config

No response

Logs

There are no messages in the logs.

Additional info

Some questions:

  • Is there any limit on the number of added cron jobs?
  • Have there previously been issues related to the number of cron jobs?

With a smaller number of cron jobs, no problems were observed.

As a temporary solution, we added a restart of the swarm-cronjob container after deploying services containing swarm-cronjob Docker labels.
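
For the record, such a restart can be forced with the standard CLI, for example (service name taken from the usage example earlier in this page):

  docker service update --force swarm_cronjob_app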

One of many cronjobs silently fails to load

Behaviour

Steps to reproduce this issue

  1. Load cron jobs as part of docker stack deploy

Expected behaviour

  • All cron jobs should load; if one cannot load, an error should be thrown.

Actual behaviour

  • One of the declared cron jobs silently failed to load, with no error reported in the logs.

Configuration

  • Target Docker version (the host/cluster you manage) : 19.03.8

  • Platform (windows/linux) : Linux

  • System info (type uname -a) :
    Linux hostname 5.3.0-1019-aws #21~18.04.1-Ubuntu SMP Mon May 11 12:33:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

  • Target Swarm version : 1.40

Server: Docker Engine - Community
 Engine:
  Version: 19.03.8
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.17
  Git commit: afacb8b7f0
  Built: Wed Mar 11 01:24:19 2020
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.2.13
  GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version: 1.0.0-rc10
  GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683

Docker info

Client:
 Debug Mode: false

Server:
 Containers: 13
  Running: 4
  Paused: 0
  Stopped: 9
 Images: 7
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: redacted
  Is Manager: true
  ClusterID: redacted
  Managers: 3
  Nodes: 6
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 10.10.1.100
  Manager Addresses:
   10.10.1.100:2377
   10.10.1.101:2377
   10.10.1.102:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.3.0-1019-aws
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.786GiB
 Name: hostname
 ID: ----------
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Logs

swarm-cronjob service logs (set LOG_LEVEL to debug) and cron based service logs if useful

One of the cron jobs (cron_production_job_15) did not load. LOG_LEVEL was not set to debug for the output below.

 02:54:16 UTC INF Starting swarm-cronjob 1.8.0
 02:54:16 UTC INF Add cronjob with schedule 15 0 * * * service=cron_production_job_0
 02:54:16 UTC INF Add cronjob with schedule 30 0 * * * service=cron_production_job_1
 02:54:16 UTC INF Add cronjob with schedule 10 9 * * * service=cron_production_job_10
 02:54:16 UTC INF Add cronjob with schedule 15 9 * * 3 service=cron_production_job_11
 02:54:16 UTC INF Add cronjob with schedule 30 9 * * 1 service=cron_production_job_12
 02:54:16 UTC INF Add cronjob with schedule 15 20 * * * service=cron_production_job_14
 02:54:16 UTC INF Add cronjob with schedule 0 21 * * * service=cron_production_job_16
 02:54:16 UTC INF Add cronjob with schedule 30 1 * * * service=cron_production_job_2
 02:54:16 UTC INF Add cronjob with schedule 30 3 * * * service=cron_production_job_3
 02:54:16 UTC INF Add cronjob with schedule 59 3 1 7 * service=cron_production_job_4
 02:54:16 UTC INF Add cronjob with schedule 0 5 * * * service=cron_production_job_5
 02:54:16 UTC INF Add cronjob with schedule 15 5 * * * service=cron_production_job_7
 02:54:16 UTC INF Add cronjob with schedule 30 8 * * * service=cron_production_job_8
 02:54:16 UTC INF Add cronjob with schedule 30 10 1,15 * * service=cron_production_job_13
 02:54:17 UTC INF Add cronjob with schedule 5 5 * * * service=cron_production_job_6
 02:54:17 UTC INF Add cronjob with schedule 0 9 * * * service=cron_production_job_9
 03:30:00 UTC INF Start job service=cron_production_job_3 status= tasks_active=0
 05:00:00 UTC INF Start job service=cron_production_job_5 status= tasks_active=0
 05:05:00 UTC INF Start job service=cron_production_job_6 status= tasks_active=0
 05:15:00 UTC INF Start job service=cron_production_job_7 status= tasks_active=0
 08:30:00 UTC INF Start job service=cron_production_job_8 status= tasks_active=0
 09:00:00 UTC INF Start job service=cron_production_job_9 status= tasks_active=0
 09:10:00 UTC INF Start job service=cron_production_job_10 status= tasks_active=0
 20:15:00 UTC INF Start job service=cron_production_job_14 status= tasks_active=0
 21:00:00 UTC INF Start job service=cron_production_job_16 status= tasks_active=0
 00:15:00 UTC INF Start job service=cron_production_job_0 status= tasks_active=0
 00:30:00 UTC INF Start job service=cron_production_job_1 status= tasks_active=0
 01:30:00 UTC INF Start job service=cron_production_job_2 status= tasks_active=0
 03:30:00 UTC INF Start job service=cron_production_job_3 status= tasks_active=0
 05:00:00 UTC INF Start job service=cron_production_job_5 status= tasks_active=0
 05:05:00 UTC INF Start job service=cron_production_job_6 status= tasks_active=0
 05:15:00 UTC INF Start job service=cron_production_job_7 status= tasks_active=0

After restarting swarm-cronjob, the missing job (cron_production_job_15) was registered:

 07:04:38 UTC INF Starting swarm-cronjob 1.8.0
 07:04:38 UTC INF Add cronjob with schedule 15 0 * * * service=cron_production_job_0
 07:04:38 UTC INF Add cronjob with schedule 30 0 * * * service=cron_production_job_1
 07:04:38 UTC INF Add cronjob with schedule 10 9 * * * service=cron_production_job_10
 07:04:38 UTC INF Add cronjob with schedule 15 9 * * 3 service=cron_production_job_11
 07:04:38 UTC INF Add cronjob with schedule 30 9 * * 1 service=cron_production_job_12
 07:04:38 UTC INF Add cronjob with schedule 30 10 1,15 * * service=cron_production_job_13
 07:04:38 UTC INF Add cronjob with schedule 15 20 * * * service=cron_production_job_14
 07:04:38 UTC INF Add cronjob with schedule 30 20 * * * service=cron_production_job_15
 07:04:38 UTC INF Add cronjob with schedule 0 21 * * * service=cron_production_job_16
 07:04:38 UTC INF Add cronjob with schedule 30 1 * * * service=cron_production_job_2
 07:04:38 UTC INF Add cronjob with schedule 30 3 * * * service=cron_production_job_3
 07:04:38 UTC INF Add cronjob with schedule 59 3 1 7 * service=cron_production_job_4
 07:04:38 UTC INF Add cronjob with schedule 0 5 * * * service=cron_production_job_5
 07:04:38 UTC INF Add cronjob with schedule 5 5 * * * service=cron_production_job_6
 07:04:38 UTC INF Add cronjob with schedule 15 5 * * * service=cron_production_job_7
 07:04:38 UTC INF Add cronjob with schedule 30 8 * * * service=cron_production_job_8
 07:04:39 UTC INF Add cronjob with schedule 0 9 * * * service=cron_production_job_9

Ability to trigger cron job via swarm-cronjob cli

Description

Hi,

I think it would be very useful to have the ability to trigger a job via the swarm-cronjob CLI.
This would be useful for manually triggering jobs, to test that a job works properly.

I think a CLI like the following would do the trick:

swarm-cronjob run job_name 

where job_name is the name of the service implementing the job.
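
Until something like this exists, a manual one-off trigger can be approximated by scaling the job service up yourself (an untested shortcut that bypasses swarm-cronjob's own checks such as skip-running):

  docker service scale mystack_job_name=1    # hypothetical service name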

Email/slack notifications on warnings & errors

First of all, thanks for the awesome project!

I have been using the image for a while, and I always thought it would be awesome to get configurable notifications via email/slack when there's an issue running any of the jobs. In my case, some of the jobs sometimes hang due to disk usage issues, and I only find out because the service is down. I would love to get a notification as soon as possible so I can restart the service.

Thanks!

[Feature Request] Let external scripts ask swarm-cronjob to run a job

So, I have need of a service that will run jobs when asked to. Basically webhooks.

Specifically I'd like to run my database dumps when the backup utility is ready to backup the data. Right now I'm using swarm-cronjob to schedule the dumps, and then scheduling the backup utility job a few minutes after the cronjob. I have my databases in docker, and I'm not exposing them outside their stack, so the backup utility can't just run a script to connect and do the dump itself.

The task is so similar to what swarm-cronjob does, that I thought it might be an interesting feature to add. But I also see it as different enough it might not fit. Maybe it would work as a companion project?

Anyway, if my idea is interesting, awesome, otherwise we can just close this.

Thanks for creating swarm-cronjob. It is extremely useful. It made my Portainer/Traefik/Swarm implementation go so much more smoothly than it otherwise might have!

Multiple Schedules?

Really enjoying this capability.

I do have one service that I want to run on multiple schedules. Example:

0 * 17-20 5 9 *
0 * 10-20 8 9 *
0 * 16-22 9 9 *
0 * 17-20 12 9 *
0 * 10-20 15 9 *
0 * 17-20 16 9 *
0 * 17-20 19 9 *
0 * 10-20 22 9 *
0 * 17-20 23 9 *
0 * 17-20 26 9 *
0 * 10-20 29 9 *

Would I need to specify multiple services with the same command but different schedules? Or is there a way to give a single service multiple schedules?

CJ
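
For what it's worth, since each service carries a single swarm.cronjob.schedule label, multiple services with the same command appear to be the way to go; YAML anchors at least keep the duplication down. A sketch with placeholder image/command, reusing two of the schedules above:

  x-job: &job
    image: myorg/mytask    # placeholder image
    command: run-task      # placeholder command

  x-job-deploy: &job-deploy
    mode: replicated
    replicas: 0
    restart_policy:
      condition: none

  services:
    job-sep-05:
      <<: *job
      deploy:
        <<: *job-deploy
        labels:
          - "swarm.cronjob.enable=true"
          - "swarm.cronjob.schedule=0 * 17-20 5 9 *"
    job-sep-08:
      <<: *job
      deploy:
        <<: *job-deploy
        labels:
          - "swarm.cronjob.enable=true"
          - "swarm.cronjob.schedule=0 * 10-20 8 9 *"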

Report failed cron jobs in swarm-cronjob log

This is not a bug, it's a feature request.

I believe it would be very useful to have success/failure reports for tasks in the swarm-cronjob logs.

Running the example with date, I can see some information in the logs but it's not clear which job failed and which succeeded.
I believe it can be done, since swarm-cronjob has access to the docker logs and can read information about those failed jobs.

infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:38:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:39:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:40:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:41:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:42:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:43:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:44:00 UTC INF Start job service=infra_cronjob_test status= tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:45:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:46:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:47:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:48:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:49:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:50:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:51:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:52:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:53:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:54:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:55:00 UTC INF Start job service=infra_cronjob_test status= tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:56:00 UTC INF Start job service=infra_cronjob_test status= tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:57:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:58:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 08:59:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 09:00:00 UTC INF Start job service=infra_cronjob_test status= tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 09:01:00 UTC INF Start job service=infra_cronjob_test status=paused tasks_active=0
infra_swarm_cronjob.1.l217ahvflt64@dev1    | Fri, 03 Jun 2022 09:02:00 UTC INF Start job service=infra_cronjob_test status= 

It would be great to have #170 but it looks more complex to implement.
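
In the meantime, per-task outcomes can be inspected manually with the standard CLI, for example:

  docker service ps --no-trunc --format '{{.Name}}\t{{.CurrentState}}\t{{.Error}}' infra_cronjob_test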

Add retry system

Behaviour

I noticed today that my nightly job hadn't run, with the following message:

 Rejected 5 hours ago    "No such image: whatever-image:latest@sha256:[...]"

Steps to reproduce this issue

A bit complicated but might be a network issue, a Docker bug, etc.

Expected behaviour

As the job didn't run correctly, it should be retried. swarm-cronjob should probably keep track of failed runs and retry a couple of times before giving up.

Actual behaviour

The job is not retried; nothing happens until the next scheduled slot.
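
A possible stopgap, untested in combination with swarm-cronjob (whose examples set condition: none), would be to let swarm itself retry failed tasks through the service's restart policy:

  deploy:
    restart_policy:
      condition: on-failure   # retry tasks that exit non-zero or fail to start
      delay: 60s
      max_attempts: 3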

Configuration

  • Target Docker version (the host/cluster you manage) : 19.03.4
  • Platform (windows/linux) : Linux
  • System info (type uname -a) : Linux xxxxx 4.19.75-v7+ #1270 SMP Tue Sep 24 18:45:11 BST 2019 armv7l GNU/Linux
  • Target Swarm version : 1.6.0

Docker info

Output of command docker info

Logs

swarm-cronjob service logs (set LOG_LEVEL to debug) and cron based service logs if useful

DNS - docker stack service names cannot be resolved

Behaviour

The container for the cron job does not seem to be able to resolve either names from the host's /etc/hosts file or docker stack service names.

Steps to reproduce this issue

  1. Add a new alias for 0.0.0.0 to the /etc/hosts file, or add a service to depends_on in the service definition
  2. Try to resolve either name from within the container, e.g. by using curl

Expected behaviour

The expected behaviour is to resolve both hosts-file names and service names; this worked a few months ago on older versions (currently using the latest).


Actual behaviour

Neither the hosts-file aliases nor the stack service names can be resolved from within the job container.
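
One thing worth checking: swarm service names only resolve for containers attached to a common network, so the job service has to list the same network as the services it wants to reach. A minimal sketch (image, target name, and network name are assumptions):

  services:
    cronjob:
      image: alpine                   # placeholder image
      command: ping -c1 someservice   # "someservice" stands in for the name being resolved
      networks:
        - appnet                      # must be a network shared with the target service
      deploy:
        replicas: 0
        labels:
          - "swarm.cronjob.enable=true"
          - "swarm.cronjob.schedule=* * * * *"
        restart_policy:
          condition: none

  networks:
    appnet:
      external: true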

Configuration

  • Target Docker version (the host/cluster you manage) :

Docker version 20.10.5, build 55c4c88

  • Platform (windows/linux) :
    linux (ubuntu 20.x)

  • System info (type uname -a) :

Linux box 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

  • Target Swarm version :

Docker info

Client:              
 Context:    default  
 Debug Mode: false            
 Plugins:            
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:                       
 Containers: 36
  Running: 25                                                                                              
  Paused: 0
  Stopped: 11
 Images: 93
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: error
  NodeID: wh6si5o61whyotyxynd0m7qe4
  Error: manager stopped: can't initialize raft node: WAL error cannot be repaired: unexpected EOF
  Is Manager: true
  Node Address: 192.168.100.95
  Manager Addresses:
   192.168.100.95:2377
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
 Default Runtime: nvidia
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-66-generic
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 31.36GiB
 Name: box
 ID: X3QF:YWI4:NUPK:5LR3:E4AC:4JYU:BGSP:XTZZ:EFJR:B3JM:IWWH:UV43
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false


Logs

cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Event triggered newstate=completed oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Event triggered newstate=updating oldstate=updating service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Event triggered newstate=updating oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:02 UTC DBG Event triggered newstate=paused oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:02 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:02 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:20 UTC DBG Event triggered newstate=completed oldstate=updating service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:20 UTC DBG Update cronjob with schedule * * * * * service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:15:20 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_volatility_change_emitter status_message=started status_state=running task_id=x9h96cba3ted8kfoix94ox5k7
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_volatility_change_emitter status_message=shutdown status_state=shutdown task_id=ofs7cgznvd7dje6612vnatdrq
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_tf_serving_config status_message=finished status_state=complete task_id=08jcmvzlchlt9grinu2cjuvl4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_tf_serving_config status_message=finished status_state=complete task_id=yrrop2mpw17bug3xuihykv764
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_volatility_change_emitter status_message=shutdown status_state=shutdown task_id=yzhaei1m81gfjwyiyysrn53m4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_volatility_change_emitter status_message=shutdown status_state=shutdown task_id=wyhwja6n2f958crshrjeyhlto
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_volatility_change_emitter status_message=shutdown status_state=shutdown task_id=n3beb0m14zyssi0bmct6gzoar
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC INF Start job service=meascron_volatility_change_emitter status=completed tasks_active=1
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_tf_serving_config status_message=finished status_state=complete task_id=tjg5rnebhqs9rqlazvn96289r
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_tf_serving_config status_message=finished status_state=complete task_id=ibm1uykc69eqfjn6909r4p69h
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Service task node=box service=meascron_tf_serving_config status_message=finished status_state=complete task_id=xppf6eg67a78egaqxl8ymsbzz
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC INF Start job service=meascron_tf_serving_config status=paused tasks_active=0
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Event triggered newstate=completed oldstate=updating service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Event triggered newstate=completed oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Event triggered newstate=updating oldstate=updating service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Event triggered newstate=updating oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:00 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:02 UTC DBG Event triggered newstate=paused oldstate=updating service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:02 UTC DBG Update cronjob with schedule * * * * * service=meascron_tf_serving_config
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:02 UTC DBG Number of cronjob tasks: 4
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:20 UTC DBG Event triggered newstate=completed oldstate=updating service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:20 UTC DBG Update cronjob with schedule * * * * * service=meascron_volatility_change_emitter
cronjob_swarm-cronjob.1.a8t7zicfx49x@box    | Tue, 23 Mar 2021 05:16:20 UTC DBG Number of cronjob tasks: 4

swarm-cronjob service logs (set LOG_LEVEL to debug) and cron based service logs if useful

Start a job in a time range (H syntax)

Hello,

I would like to start many nearly identical jobs/services once every X minutes, but not all at exactly the same time (so as not to overload the swarm workers).

For now, I handle this manually by specifying a different schedule label for each service.
When I have new services, I have to check for potential overlaps, etc.
As you can imagine, this is pretty cumbersome to maintain.

Have you planned to implement something like the H syntax used in Jenkins?
It would greatly improve this use case.
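
For context, Jenkins' H token hashes the job name into a stable offset within the field's range, so each job gets a fixed but different minute. A purely hypothetical label — this syntax is not currently supported by swarm-cronjob — might look like:

deploy:
  labels:
    # Jenkins would expand H/10 into a per-job offset series such as
    # "7,17,27,37,47,57", spreading jobs across each 10-minute window.
    - "swarm.cronjob.schedule=H/10 * * * *"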

Replicas > 1

Is it possible to run this on multiple replicas?

My deployment has three manager nodes and three worker nodes. I'm curious if it's possible to run this with three replicas in order to accommodate scale.

Alternatively, maybe a single replica can handle as much as one can throw at it?
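
For reference, a minimal sketch of the single-replica deployment pattern (compose v3; the TZ value is illustrative) — whether several replicas can coordinate safely is exactly the open question here:

version: "3.7"

services:
  swarm-cronjob:
    image: crazymax/swarm-cronjob
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Europe/Paris   # illustrative timezone
      - LOG_LEVEL=info
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager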

Thanks for making this!

CJ

The `docker service create` reporting no-such Docker Image

Behaviour

The docker service create CLI failed to find the crazy-max/swarm-cronjob Docker image on Docker Hub.

Steps to reproduce this issue

  1. Clone the Repository

  2. Run the command below:

$ docker service create --name swarm_cronjob \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --env "LOG_LEVEL=debug" \
  --env "LOG_NOCOLOR=false" \
  --constraint "node.role == manager" \
  crazy-max/swarm-cronjob
$ docker service create --name swarm_cronjob \
>   --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
>   --env "LOG_LEVEL=debug" \
>   --env "LOG_NOCOLOR=false" \
>   --constraint "node.role == manager" \
>   crazy-max/swarm-cronjob
image crazy-max/swarm-cronjob:latest could not be accessed on a registry to record its digest. Each node will access crazy-max/swarm-cronjob:latest independently, possibly leading to different nodes running different versions of the image.

9kajseol4htx23ee8rmb6mwmy
overall progress: 0 out of 1 tasks
1/1: No such image: crazy-max/swarm-cron

Expected behaviour

The docker service create command should bring up the service successfully.

Tell me what should happen

Actual behaviour

>   crazy-max/swarm-cronjob
image crazy-max/swarm-cronjob:latest could not be accessed on a registry to reco
[manager1] (local) [email protected] ~
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                            PORTS
9kajseol4htx        swarm_cronjob       replicated          0/1                 crazy-max/swarm-cronjob:latest

Tell me what happens instead
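
For context, the image is published on Docker Hub under the crazymax namespace (no hyphen), so the same command with the corrected image reference would presumably pull fine:

$ docker service create --name swarm_cronjob \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --env "LOG_LEVEL=debug" \
  --env "LOG_NOCOLOR=false" \
  --constraint "node.role == manager" \
  crazymax/swarm-cronjob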

Configuration

  • Target Docker version (the host/cluster you manage) : 18.06
  • Platform (windows/linux) : Alpine Linux - Play with Docker Platform

Docker Info

$ docker version
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:20:43 2018
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:28:38 2018
OS/Arch: linux/amd64
Experimental: true
[manager2] (local) [email protected] ~
$


> Output of command docker info

[manager2] (local) [email protected] ~
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: 01fys52vbw5lk7v6d9vf6egc1
Is Manager: true
ClusterID: 92tcpvpq257c9va1m3v2mntm1
Managers: 3
Nodes: 5
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.0.33
Manager Addresses:
192.168.0.29:2377
192.168.0.32:2377
192.168.0.33:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-139-generic
Operating System: Alpine Linux v3.8 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.4GiB
Name: manager2
ID: CVAV:2WX7:43M5:G6JM:PS5M:3TL5:EGCG:FU7V:AC7P:QTLL:P7WU:QKM7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 43
Goroutines: 152
System Time: 2018-12-20T03:25:49.703326389Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.1
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled



Logs

> Service logs (set LOG_LEVEL to debug)

Question: how to run a script on my VPS?

What I want is to manage cron jobs without editing crontab -e on every VPS.

I see all examples are running something in a container. I want to run a script on my host.

How should I run something like /user/mnt/my_scripts/all-nodes/fct_BackupGhostDbOnly.sh?
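
A minimal sketch, assuming swarm-cronjob's documented labels and an Alpine image with a shell: since swarm-cronjob schedules Swarm services rather than host crontabs, the usual pattern is to bind-mount the host script into a throwaway container and let the scheduled task run it there.

version: "3.7"

services:
  backup:
    image: alpine                                    # assumption: any image with a shell
    command: sh /scripts/fct_BackupGhostDbOnly.sh
    volumes:
      - /user/mnt/my_scripts/all-nodes:/scripts:ro   # host path from the question
    deploy:
      mode: replicated
      replicas: 0                                    # swarm-cronjob starts the task itself
      restart_policy:
        condition: none                              # run once per trigger
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=0 3 * * *"         # assumption: daily at 03:00
        - "swarm.cronjob.skip-running=true"

Note that the script then runs inside the container, not directly on the host, so anything it needs (docker CLI, database clients, and so on) has to be available in the image.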

Cheers!
