funkyfuture / deck-chores
A job scheduler for Docker containers, configured via labels.
Home Page: https://deck-chores.readthedocs.io/
License: ISC License
Hi everyone,
I tried using deck-chores in a swarm multiple times on a node (with different LABEL_NAMESPACEs), and so far everything works. The only thing I changed is the there_is_another_deck_chores_container
method, to always return false. I guess if I change this to check against the LABEL_NAMESPACE in the containers' environment (or, if it's empty, fall back to the normal behavior), it should be possible to run as many deck-chores instances as you like in a swarm, on every node.
Or did I miss something?
For now it would take seconds as the value; when a parser for time units is implemented for the interval trigger, that can be used as well.
Hi,
by default, any output the executed tasks produce is collected by deck-chores and printed at the end of an execution. What I would like instead is for the output to be left as if the job had been executed directly in the target container (so that the application log is not split between two containers).
Is there an option for that (or am I doing something wrong)? I'm using deck-chores together with docker-compose.
Thanks for your work!
New build environments on rtd that also support Python 3.6 should soon be available (this issue should track their deployment) and then allow releasing a new version with proper docs on the web.
This is just an informative issue. Since my Raspberry Pi is broken, I cannot build images on it and therefore am not able to release new versions until I obtain a new one.
We encounter some STDOUT/STDERR problems.
The following messages are printed to STDOUT:
2020-04-08 12:59:53,609|ERROR |Misconfigured job definition: {'command': 'ls', 'cron': '20', 'name': 'UPPERCASE', 'user': ''}
2020-04-08 12:59:53,609|ERROR |Errors: {'name': ["value does not match regex '[a-z0-9.-]+'"]}
{"log":"2020-04-08 12:59:53,609|ERROR |Misconfigured job definition: {'command': 'ls', 'cron': '20', 'name': 'UPPERCASE', 'user': ''}\n","stream":"stdout","time":"2020-04-08T12:59:53.609464237Z"}
{"log":"2020-04-08 12:59:53,609|ERROR |Errors: {'name': [\"value does not match regex '[a-z0-9.-]+'\"]}\n","stream":"stdout","time":"2020-04-08T12:59:53.609658025Z"}
We would love to see these messages on STDERR (because STDERR gets more attention).
Also, why aren't uppercase characters allowed? Is there a good reason for that?
When released, the code around container.exec_run can be simplified.
It's not possible to change the timezone via the TZ environment variable due to the missing tzdata package.
Adding apk add --no-cache tzdata to your Dockerfile should fix the problem.
I think this was also the problem in #29.
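A minimal sketch of such a Dockerfile, assuming an Alpine-based image (the base tag and TZ value here are only illustrative):

```dockerfile
# Alpine images ship without the zoneinfo database, so TZ has no effect by itself
FROM funkyfuture/deck-chores:1
RUN apk add --no-cache tzdata
ENV TZ=Europe/Berlin
```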
When the final version is released, I should establish a modus operandi for constant re-builds of the image and dependency updates. An emulator-based build for all platforms (#46) would be very helpful.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock funkyfuture/deck-chores
Deck Chores 0.1.beta2 started.
2016-12-22 13:58:50,373|ERROR |Caught unhandled exception:
2016-12-22 13:58:50,373|ERROR |'NoneType' object has no attribute 'get'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/deck_chores-0.1b2-py3.6.egg/deck_chores/main.py", line 178, in main
if there_is_another_deck_chores_container(cfg.client):
File "/usr/local/lib/python3.6/site-packages/deck_chores-0.1b2-py3.6.egg/deck_chores/main.py", line 36, in there_is_another_deck_chores_container
if labels.get('org.label-schema.name', '') == 'deck-chores':
AttributeError: 'NoneType' object has no attribute 'get'
I have a lot of containers without labels. Could that be a problem?
Ubuntu 15, Docker 1.12.1.
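For what it's worth, the traceback suggests that the Docker API reports Labels as None for containers created without labels. A minimal, hypothetical guard (not the project's actual fix) would coerce that to an empty mapping before the lookup:

```python
def is_deck_chores_container(labels):
    """Return True if the label set marks a deck-chores container.

    Containers created without labels may report None instead of a
    dict, so coerce that case to an empty mapping first.
    """
    labels = labels or {}
    return labels.get('org.label-schema.name', '') == 'deck-chores'
```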
It's at 70% atm (0.2-rc1).
I defined deck-chores as a DaemonSet to have one instance of this image running per cluster node.
Tasks are properly registered, but somehow each execution fails with
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
During a quite long debugging session I found out that the container ID used for executing a task does not actually belong to the right container, but to a kubelet container with base image k8s.gcr.io/pause:3.2
Is there any chance to make deck-chores work in kubernetes? Is it even possible?
Please, can we use the latest tag again? It's a little bit of a standard. Thanks!
The now used docker client library can take a workdir argument. That shall be provisioned with deck-chores.<job_name>.workdir.
I have multiple jobs that run every minute, and a less-needed job that runs every hour.
I don't fully understand the code, but there doesn't seem to be a provision for handling missed executions other than skipping to the next one.
I had a look at the underlying APScheduler library which powers this container, and it seems there are configurations for the features I described above.
https://apscheduler.readthedocs.io/en/latest/userguide.html#missed-job-executions-and-coalescing
The first is misfire_grace_time: it specifies the number of seconds that a job is allowed to be late and still run.
The second is coalesce: in the event the backlog contains multiple identical runs of a script, coalesce will run one of them and drop the identical backlogged ones.
Hello,
Not sure I can debug this but I had this very weird bug happening suddenly out of the blue:
officer_1 | 2020-04-09 11:50:07,575|INFO |W, [2020-04-09T11:50:05.128761 #29290] WARN -- : DEPRECATION WARNING: Using a dynamic :action segment in a route is deprecated and will be removed in Rails 6.0. (called from instance_eval at /usr/src/redmine/config/routes.rb:364)
officer_1 | 2020-04-09 11:50:07,576|INFO |W, [2020-04-09T11:50:05.129211 #29290] WARN -- : DEPRECATION WARNING: Using a dynamic :action segment in a route is deprecated and will be removed in Rails 6.0. (called from instance_eval at /usr/src/redmine/config/routes.rb:364)
officer_1 | 2020-04-09 11:50:07,576|INFO |== END of captured stdout & stderr ====
officer_1 | 2020-04-09 11:55:00,004|INFO |redmine_web_1: Executing 'read-helpdesk-emails'.
officer_1 | Execution of job "exec_job (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*/5', second='0'], next run at: 2020-04-09 12:00:00 CEST)" skipped: maximum number of running instances reached (1)
officer_1 | Error notifying listener
officer_1 | Traceback (most recent call last):
officer_1 | File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
officer_1 | cb(event)
officer_1 | File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
officer_1 | definition = job.kwargs
officer_1 | AttributeError: 'dict' object has no attribute 'kwargs'
officer_1 | Execution of job "exec_job (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*/5', second='0'], next run at: 2020-04-09 12:05:00 CEST)" skipped: maximum number of running instances reached (1)
officer_1 | Error notifying listener
officer_1 | Traceback (most recent call last):
officer_1 | File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
officer_1 | cb(event)
officer_1 | File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
officer_1 | definition = job.kwargs
officer_1 | AttributeError: 'dict' object has no attribute 'kwargs'
officer_1 | Execution of job "exec_job (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*/5', second='0'], next run at: 2020-04-09 12:10:00 CEST)" skipped: maximum number of running instances reached (1)
officer_1 | Error notifying listener
officer_1 | Traceback (most recent call last):
officer_1 | File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
officer_1 | cb(event)
officer_1 | File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
officer_1 | definition = job.kwargs
officer_1 | AttributeError: 'dict' object has no attribute 'kwargs'
officer_1 | Execution of job "exec_job (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*/5', second='0'], next run at: 2020-04-09 12:15:00 CEST)" skipped: maximum number of running instances reached (1)
officer_1 | Error notifying listener
officer_1 | Traceback (most recent call last):
officer_1 | File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
officer_1 | cb(event)
officer_1 | File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
officer_1 | definition = job.kwargs
My docker-compose:
version: '3.7'
services:
  web:
    image: redmine:4.1.1-passenger
    (snip)
    labels:
      deck-chores.reminder-emails.command: rake redmine:send_reminders
      deck-chores.reminder-emails.cron: 'mon 4 0 0'
      deck-chores.read-helpdesk-emails.command: rake redmine:email:helpdesk:receive
      deck-chores.read-helpdesk-emails.cron: '*/5 0'
  officer:
    image: funkyfuture/deck-chores:1
    restart: always
    environment:
      TZ: Europe/Zurich
      TIMEZONE: Europe/Zurich
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
The error message is weird, can you explain what it means?
and figure out a convenient way to build it on make release.
mind that: http://container-solutions.com/multi-arch-docker-images/
and that: https://eyskens.me/multiarch-docker-images/#runallonintel
example: https://github.com/TGOlson/rpi-haskell/blob/master/.travis.yml
Setting the timezone Environment Variable in Rancher UI did not change the default UTC value.
thx
Tags are mutable, digests are immutable. The digest for python:3.6-alpine
should be preferred over the tag.
I'm toying around with deck-chores and quite directly ran into an issue.
OS: Ubuntu 18.04
docker: Docker version 18.09.7, build 2d0083d
Log from deck-chores:
Deck Chores 1.0.0 started.
2020-04-02 14:29:09,013|DEBUG |Config: {'assert_hostname': False, 'client_timeout': 60, 'default_flags': ('image', 'service'), 'docker_host': 'unix://var/run/docker.sock', 'debug': True, 'default_max': 1, 'job_executor_pool_size': 1, 'label_ns': 'deck-chores.', 'logformat': '{asctime}|{levelname:8}|{message}', 'service_identifiers': ('com.docker.compose.project', 'com.docker.compose.service'), 'ssl_version': <_SSLMethod.PROTOCOL_TLS: 2>, 'timezone': 'UTC', 'client': <docker.client.DockerClient object at 0x7f8b3c32b810>}
2020-04-02 14:29:09,031|INFO |Inspecting running containers.
2020-04-02 14:29:09,038|DEBUG |Parsing labels: {'org.opencontainers.image.created': '2020-03-27 13:30:03+00:00', 'org.opencontainers.image.description': 'Job scheduler for Docker containers, configured via labels.', 'org.opencontainers.image.documentation': 'https://deck-chores.readthedocs.org/', 'org.opencontainers.image.revision': '3222643d7efa010bfaa9619dc6c245937b9045ed', 'org.opencontainers.image.source': 'https://github.com/funkyfuture/deck-chores', 'org.opencontainers.image.title': 'deck-chores', 'org.opencontainers.image.version': '1.0.0'}
2020-04-02 14:29:09,038|DEBUG |Considering labels for service id: {}
2020-04-02 14:29:09,038|DEBUG |Parsed & resolved container flags: image,service
2020-04-02 14:29:09,041|DEBUG |Considering labels for job definitions: {}
2020-04-02 14:29:09,042|DEBUG |Job definitions: {}
2020-04-02 14:29:09,045|DEBUG |Parsing labels: {'deck-chores.echo.command': './script.sh', 'deck-chores.echo.interval': '5s'}
2020-04-02 14:29:09,045|DEBUG |Considering labels for service id: {}
2020-04-02 14:29:09,048|DEBUG |Considering labels for job definitions: {'deck-chores.echo.command': './script.sh', 'deck-chores.echo.interval': '5s'}
2020-04-02 14:29:09,048|DEBUG |Job definitions: {'echo': {'command': './script.sh', 'interval': '5s'}}
2020-04-02 14:29:09,049|DEBUG |Processing echo
2020-04-02 14:29:09,057|DEBUG |Normalized definition: {'command': './script.sh', 'name': 'echo', 'user': '', 'environment': {}, 'jitter': None, 'max': 1, 'timezone': 'UTC', 'trigger': (<class 'apscheduler.triggers.interval.IntervalTrigger'>, (0, 0, 0, 0, 5))}
2020-04-02 14:29:09,057|DEBUG |Adding jobs to container 8a3e0d4cbe9b30c9f5dc3f429a43bbc4471ba38c6dbccfff37ae02d4f5694a84.
2020-04-02 14:29:09,059|INFO |priceless_mccarthy: Added 'echo' (fbff3edf-7ff9-598f-8c4a-cae07c8633eb).
2020-04-02 14:29:09,063|DEBUG |Parsing labels: {}
2020-04-02 14:29:09,063|DEBUG |Considering labels for service id: {}
2020-04-02 14:29:09,066|DEBUG |Considering labels for job definitions: {}
2020-04-02 14:29:09,066|DEBUG |Job definitions: {}
2020-04-02 14:29:09,070|DEBUG |Parsing labels: {'maintainer': 'Sebastian Ramirez <[email protected]>'}
2020-04-02 14:29:09,070|DEBUG |Considering labels for service id: {}
2020-04-02 14:29:09,075|DEBUG |Considering labels for job definitions: {}
2020-04-02 14:29:09,075|DEBUG |Job definitions: {}
2020-04-02 14:29:09,075|DEBUG |Finished inspection of running containers.
2020-04-02 14:29:09,077|INFO |Added job "exec_job" to job store "default"
2020-04-02 14:29:09,077|INFO |Scheduler started
2020-04-02 14:29:09,077|DEBUG |Looking for jobs to run
2020-04-02 14:29:09,077|INFO |Listening to events.
2020-04-02 14:29:09,077|DEBUG |Next wakeup is due at 2020-04-02 14:29:14.057721+00:00 (in 4.979914 seconds)
2020-04-02 14:29:14,058|DEBUG |Looking for jobs to run
2020-04-02 14:29:14,059|INFO |priceless_mccarthy: Executing 'echo'.
2020-04-02 14:29:14,059|DEBUG |Next wakeup is due at 2020-04-02 14:29:19.057721+00:00 (in 4.998639 seconds)
2020-04-02 14:29:14,071|DEBUG |Daemon event: {'status': 'exec_start: ./script.sh ', 'id': '8a3e0d4cbe9b30c9f5dc3f429a43bbc4471ba38c6dbccfff37ae02d4f5694a84', 'from': 'app2:latest', 'Type': 'container', 'Action': 'exec_start: ./script.sh ', 'Actor': {'ID': '8a3e0d4cbe9b30c9f5dc3f429a43bbc4471ba38c6dbccfff37ae02d4f5694a84', 'Attributes': {'deck-chores.echo.command': './script.sh', 'deck-chores.echo.interval': '5s', 'execID': '65e90163e1636883aed141dfc6f0a730a912c95d076ff78a6928bdca2600370d', 'image': 'app2:latest', 'name': 'priceless_mccarthy'}}, 'scope': 'local', 'time': 1585837754, 'timeNano': 1585837754070516452}
2020-04-02 14:29:19,058|DEBUG |Looking for jobs to run
2020-04-02 14:29:19,058|WARNING |Execution of job "exec_job (trigger: interval[0:00:05], next run at: 2020-04-02 14:29:19 UTC)" skipped: maximum number of running instances reached (1)
2020-04-02 14:29:19,059|ERROR |Error notifying listener
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
cb(event)
File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
definition = job.kwargs
AttributeError: 'dict' object has no attribute 'kwargs'
2020-04-02 14:29:19,059|DEBUG |Next wakeup is due at 2020-04-02 14:29:24.057721+00:00 (in 4.998914 seconds)
2020-04-02 14:29:24,059|DEBUG |Looking for jobs to run
2020-04-02 14:29:24,059|WARNING |Execution of job "exec_job (trigger: interval[0:00:05], next run at: 2020-04-02 14:29:24 UTC)" skipped: maximum number of running instances reached (1)
2020-04-02 14:29:24,059|ERROR |Error notifying listener
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
cb(event)
File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
definition = job.kwargs
AttributeError: 'dict' object has no attribute 'kwargs'
2020-04-02 14:29:24,059|DEBUG |Next wakeup is due at 2020-04-02 14:29:29.057721+00:00 (in 4.998511 seconds)
2020-04-02 14:29:29,058|DEBUG |Looking for jobs to run
2020-04-02 14:29:29,058|WARNING |Execution of job "exec_job (trigger: interval[0:00:05], next run at: 2020-04-02 14:29:29 UTC)" skipped: maximum number of running instances reached (1)
2020-04-02 14:29:29,058|ERROR |Error notifying listener
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apscheduler/schedulers/base.py", line 831, in _dispatch_event
cb(event)
File "/usr/local/lib/python3.7/site-packages/deck_chores/jobs.py", line 37, in on_max_instances
definition = job.kwargs
AttributeError: 'dict' object has no attribute 'kwargs'
2020-04-02 14:29:29,058|DEBUG |Next wakeup is due at 2020-04-02 14:29:34.057721+00:00 (in 4.999146 seconds)
Steps to reproduce:
Dockerfile
COPY ./script.sh /
WORKDIR /
SHELL ["/bin/bash", "-c"]
LABEL deck-chores.echo.command="./script.sh" \
deck-chores.echo.interval="5s"
CMD ["ls"]
and the script it runs is simply:
while [ 1 == 1 ]; do echo "app2 says yes"; sleep 10; done
Finally, the image was built and a container started with:
docker run --name app2 app2:latest
In the cron example code, this is used to run a command every Sunday:
* * * * 7 1 0 0 # run every Sunday at 1:00
But according to the APScheduler API, day_of_week accepts these values:
day_of_week (int|str): number or name of weekday (0-6 or mon,tue,wed,thu,fri,sat,sun)
So which is correct: 6 or 7?
I have a build configuration for mongodb in my docker-compose.yml file.
I have added the following section to the labels:
deck-chores.label-name-one.command: "mongo localhost:27017/myDB /scripts/aggregate-script.js"
deck-chores.label-name-one.cron: "14,29,44,59 * * * *"
Based on the cron, I would expect my script to run every 14th, 29th, 44th and 59th minute.
I have tried changing the cron part to "*/1 * * * *" and the script runs successfully every minute.
Is there a possible issue with parsing this cron properly? As I understand it, this goes via APScheduler, and they seem to support the comma notation.
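A possible explanation, inferred from the error output in other reports here (so treat it as an assumption): deck-chores' cron expression has eight fields (year, month, day, week, day_of_week, hour, minute, second), and shorter expressions are left-padded with '*'. A hypothetical re-implementation of that padding shows where a five-field standard cron string ends up:

```python
CRON_FIELDS = ("year", "month", "day", "week", "day_of_week",
               "hour", "minute", "second")

def fill_args(value, count=len(CRON_FIELDS), filler="*"):
    # hypothetical stand-in for deck-chores' _fill_args: missing
    # leading fields are padded with the filler
    args = tuple(value.split())
    return (filler,) * (count - len(args)) + args

args = fill_args("14,29,44,59 * * * *")
# the minute list lands on the 'week' field, not on 'minute'
print(dict(zip(CRON_FIELDS, args)))
```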
This should be solved before proceeding with other code changes, including a pre-commit check and linting on the CI platform.
The now used docker client library allows passing environment variables to the exec endpoint. It seems quite simple and useful to add configuration capabilities for that, probably like deck-chores.<job_name>.env.LANG: zu.
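A sketch of how such labels could be collected into a mapping for the exec call; the helper name and the exact label layout are assumptions, not existing code:

```python
def collect_env_labels(labels, job_name, label_ns="deck-chores."):
    """Gather deck-chores.<job_name>.env.<VAR> labels into a dict
    that could be passed as exec_run(environment=...)."""
    prefix = f"{label_ns}{job_name}.env."
    return {
        key[len(prefix):]: value
        for key, value in labels.items()
        if key.startswith(prefix)
    }
```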
@dperetti proposed in #24 to add the capability of running a command not only at scheduled times but also when a new container is discovered. A job attribute onstartup
(or a better name) should be sufficient to achieve that. Since this is a simple flag, a job attribute options
could be introduced that is parsed similarly to the container options.
Subsequent feature requests for a jitter / delay option for these initial executions are to be anticipated.
Since a lot of these executions are possible on systems where a lot of containers are running and deck-chores is (re-)started, there should be an extra queue to limit the impact on resources. That queue should drop jobs that may have been invoked as scheduled in the meantime.
Due to these additional considerations I don't set a milestone for now.
Adding deck-chores on a node with several services running, one of which was an old instance of dnsmasq, resulted in the same error reported in closed issue #2.
Upon further inspection we discovered that the dnsmasq container was configured with labels:None, removing it solved the problem.
Maybe it's about how legacy stuff from the time when labels were not available is handled in Docker?
Points to https://deck-chores.readthedocs.io/en/latest/ when it should point to https://deck-chores.readthedocs.io/en/stable/
Do you plan on supporting two deck-chores containers in parallel in the near future? Right now I'm getting the "There's another container running deck-chores, maybe paused or restarting." error message
http://deck-chores.readthedocs.io/en/master/
Documentation says:
You don't have the proper permissions to view this page. Please contact the owner of this project to request permission.
Found in version 0.2, and still exists in 0.3.
Description:
Cron triggers don't work because the code parsing the trigger definition has a bug.
For example, if I specify a cron trigger as documented, or any arrangement of cron statements, it fails with an error.
Two examples below:
deck-chores | Deck Chores 0.3 started.
deck-chores | 2019-02-27 18:05:15,843|INFO |Inspecting running containers.
deck-chores | 2019-02-27 18:05:15,857|INFO |Listening to events.
deck-chores | 2019-02-27 18:05:15,936|ERROR |Misconfigured job definition: {'command': '/bin/sh -c "cd /some/path; execute_this"', 'cron': '"0 0 7 3 0"', 'user': 'root', 'name': 'refresh-data'}
deck-chores | 2019-02-27 18:05:15,936|ERROR |Errors: {'cron': ['Error while instantiating a CronTrigger with \'(\'*\', \'*\', \'*\', \'"0\', \'0\', \'7\', \'3\', \'0"\')\'.']}
2019-02-27T16:43:59.269614536Z 2019-02-27 16:43:59,269|ERROR |Misconfigured job definition: {'command': '/bin/sh -c "cd /some/path; execute_this"', 'cron': '"* * * * * 8 9 0"', 'user': 'root', 'name': 'refresh-data'}
2019-02-27T16:43:59.269992640Z 2019-02-27 16:43:59,269|ERROR |Errors: {'cron': ['Error while instantiating a CronTrigger with \'(\'"*\', \'*\', \'*\', \'*\', \'*\', \'8\', \'9\', \'0"\')\'.']}
I'm not a python dev, but I can see the error is a result of the _fill_args
function:
def _normalize_coerce_cron(self, value: str) -> Tuple[Type, Tuple[str, ...]]:
args = self._fill_args(value, CRON_TRIGGER_FIELDS_COUNT, '*')
return CronTrigger, args
Because if I replace it with this, and specify the correct number of cron fields (* * * * * 8 * *), it works fine.
def _normalize_coerce_cron(self, value: str) -> Tuple[Type, Tuple[str, ...]]:
return CronTrigger, value.split(' ')
I have these services running on my system. Based on the docs this should work:
version: '2'
services:
  db:
    image: postgres:alpine
    volumes:
      - /snapshot
      - /var/lib/postgresql/data
    labels:
      deck-chores.snapshot.command: sh -c "nice -n 19 pg_dump -Z 9 -U foo bar -f /snapshot/`date -I`.dump.gz"
      deck-chores.snapshot.interval: daily
  officer:
    image: funkyfuture/deck-chores
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
However, what I get is this error message:
officer_1 | 2017-06-22 16:10:43,427|INFO |Listening to events.
officer_1 | 2017-06-22 16:10:43,428|INFO |Inspecting running containers.
officer_1 | 2017-06-22 16:10:43,447|INFO |Locking service id: 698d4e22-6668-5ffe-84e8-cc7e8594e95d
officer_1 | 2017-06-22 16:10:43,449|INFO |Added 'snapshot' for /project_db_1
officer_1 | 2017-06-22 16:10:43,509|ERROR |Misconfigured job definition: {'interval': 'daily', 'name': 'shapshot'}
officer_1 | 2017-06-22 16:10:43,509|ERROR |Errors: {'command': ['required field']}
officer_1 | 2017-06-22 16:10:43,514|ERROR |Misconfigured job definition: {'command': 'sh -c "nice -n 19 pg_dump -Z 9 -U foo bar -f /snapshot/`date -I`.dump.gz"', 'name': 'snapshot'}
officer_1 | 2017-06-22 16:10:43,514|ERROR |Errors: {'cron': ['required field'], 'date': ['required field'], 'interval': ['required field']}
that defines a default in a container's scope which overrides the global default user. This turns out to be useful for multiple jobs run by the same user.
In my use-case I would need it even on image-level.
Hi!
First of all, I want to thank you for the tool. It perfectly suits our needs and I couldn't find anything else like it.
Currently the docs for the cron trigger are a little vague. It's not clear which position in the expression stands for what, and the provided link seems to be broken. I think it would be great to add a little explanation of how the expression is parsed and what each position means, because it is not fully interchangeable with a standard cron expression.
It looks like daily runs at 18h30, but where do I find this information?
All I could find is:
NAME_INTERVAL_MAP = {
'weekly': (1, 0, 0, 0, 0),
'daily': (0, 1, 0, 0, 0),
'hourly': (0, 0, 1, 0, 0),
'every minute': (0, 0, 0, 1, 0),
'every second': (0, 0, 0, 0, 1),
}
But I couldn't figure out how it maps to a real time. In my case, I want to run something daily but during the night, so I used the cron option instead.
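For reference, the tuples above read as (weeks, days, hours, minutes, seconds), i.e. they encode a duration rather than a clock time, which presumably explains why a daily job fires relative to when it was registered instead of at midnight. A small stdlib sketch of the mapping:

```python
from datetime import timedelta

NAME_INTERVAL_MAP = {
    'weekly': (1, 0, 0, 0, 0),
    'daily': (0, 1, 0, 0, 0),
    'hourly': (0, 0, 1, 0, 0),
    'every minute': (0, 0, 0, 1, 0),
    'every second': (0, 0, 0, 0, 1),
}

def to_timedelta(name):
    # unpack (weeks, days, hours, minutes, seconds) into a duration
    weeks, days, hours, minutes, seconds = NAME_INTERVAL_MAP[name]
    return timedelta(weeks=weeks, days=days, hours=hours,
                     minutes=minutes, seconds=seconds)
```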
Hi!
This is my first visit to this fine repo, but it seems you have been working hard to keep all dependencies updated so far.
Once you have closed this issue, I'll create separate pull requests for every update as soon as I find one.
That's it for now!
Happy merging!
That makes the table hard to read and I must investigate it.
When docker-py 2.3 is released, not only can the code be simpler, but the SDK can be mocked usefully and hence unit test coverage can be increased.
deck-chores considers the events that are emitted during an image build, as the intermediate containers carry the declared labels, and thus schedules jobs accordingly. That must not happen!
Upgrading system packages easily leads to changed state within an image, and to containers that are started from updated images. To install tini, all you really need is for the repository indexes to be updated; apk add --update --no-cache tini can be used to install the dependency instead.
Additionally, tini should be installed at a pinned version to maintain container immutability. The current latest version is 0.9.0-r1.
I see this in the log:
2017-08-28 09:38:59,967|INFO |Locking service id: xxxyyy
2017-08-28 09:38:59,970|INFO |Added 'scheduled' for demo
2017-08-28 09:39:11,648|INFO |Unlocking service id: xxxyyy
2017-08-28 09:39:11,838|INFO |Removing job 'scheduled' for demo
The demo container runs a single command and then exits, so it is in the Exited(0) state. Is it then correctly understood that I can't have deck-chores run the container once it has exited? Would it be possible to add an option to have it start the container?
When I run a cron job, it seems that the command always uses the same date in the command.
Example:
Running:
--label deck-chores.backup.command="/bin/sh -c 'umask 0077; tar cfz /secret/gitlab/backups/$(date "+etc-gitlab-%F_%H-%M-%S.tgz") -C / etc/gitlab'" \
Will output etc-gitlab-2018-10-29_13-51-58.tgz and will always reuse the same date for the output.
It seems to be the date the label was added to the container.
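A likely cause (an assumption, not confirmed for this setup): inside double quotes, the host shell substitutes $(date …) once, when the label is defined; single-quoting keeps the template literal, so a shell inside the container can expand it freshly at every execution. A small demonstration:

```shell
immediate="etc-gitlab-$(date +%F).tgz"   # expanded right now, once, by this shell
deferred='etc-gitlab-$(date +%F).tgz'    # stays literal until some shell evaluates it

echo "$deferred"          # prints the unexpanded template
sh -c "echo $deferred"    # expands $(date +%F) at execution time
```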
Hello, a few days ago the latest tag was removed from Docker Hub. Would you please add the latest stable version as the latest tag on Docker Hub? Thank you.
With Docker's announcement of limited image retention, it seems to be a good idea to host the images on a self-hosted registry.
I'd be interested in other possible hosts nonetheless.
There's no need for that as a global option, since the fallback should be the configured USER of the container.
Hi there, I really like this container. Also, the "interval" notation is nice because it's very easy to read, but...
...interval: daily
At which time is this task executed? At midnight, at the time the Docker container was started, or something else? Is it possible to specify an exact time, like:
...interval: daily at 3:00
Thanks!
We ran into an issue where the deck-chores instance was holding on to a container that had been restarted, and it was not able to recover from the failure. I am not sure whether Docker emitted an event that deck-chores didn't receive, or there was a race condition and the event was lost. Minimally, if deck-chores already determines that the container is not running, can it remove the container from the tasks and re-discover them?
cron 2ssszq9rhbx3 2020-06-07 23:00:00,002|INFO |vm119_api-gw.2.z1yfdmahe3ut5651ax3kg00pz: Executing 'submission-monitor'.
cron 2ssszq9rhbx3 Job "exec_job (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*/5', second='0'], next run at: 2020-06-07 23:05:00 UTC)" raised an exception
cron 2ssszq9rhbx3 Traceback (most recent call last):
cron 2ssszq9rhbx3 File "/usr/local/lib/python3.8/site-packages/apscheduler/executors/base.py", line 125, in run_job
cron 2ssszq9rhbx3 retval = job.func(*job.args, **job.kwargs)
cron 2ssszq9rhbx3 File "/usr/local/lib/python3.8/site-packages/deck_chores/jobs.py", line 100, in exec_job
cron 2ssszq9rhbx3 raise AssertionError('Container is not running.')
cron 2ssszq9rhbx3 AssertionError: Container is not running.
cron 2ssszq9rhbx3 2020-06-07 23:00:00,010|CRITICAL|An exception in deck-chores occurred while executing submission-monitor in container 5c33cd04b1aeae3e265c0512957a2fb7866164d9de4a489b23fb472e806dc4e2:
cron 2ssszq9rhbx3 2020-06-07 23:00:00,010|ERROR |Container is not running.
cron 2ssszq9rhbx3 2020-06-07 23:03:06,954|INFO |ID: c78a823e-a86e-59c2-b71a-c2016cb38d09 Next execution: 2020-06-07 23:05:00+00:00 Configuration:
cron 2ssszq9rhbx3 2020-06-07 23:03:06,955|INFO |{'command': '/bin/bash -c "/opt/python3/bin/python3.7 /api-gw/scripts/monitoring/submission_monitor.py"', 'name': 'submission-monitor', 'user': '', 'environment': {}, 'jitter': None, 'max': 1, 'timezone': 'UTC', 'trigger': (<class 'apscheduler.triggers.cron.CronTrigger'>, ('*', '*', '*', '*', '*', '*', '*/5', '0')), 'service_id': ('com.docker.swarm.service.name=vm119_api-gw',), 'job_name': 'submission-monitor', 'job_id': 'c78a823e-a86e-59c2-b71a-c2016cb38d09', 'container_id': '5c33cd04b1aeae3e265c0512957a2fb7866164d9de4a489b23fb472e806dc4e2'}
That'd mainly ease the installation of a development environment.
Hey.
I have a few things that run on a daily basis, but occasionally I would like to run them an extra time manually. Is this possible?
I am trying to have multiple jobs scheduled using deck-chores; is this possible?
services:
  db:
    image: mongo
    labels:
      deck-chores.dump.command: sh -c "mongodump --db <database> --gzip --archive=/data/backups/$$(date -Idate).gz"
      deck-chores.dump.cron: "* * * * * * */1 0"
  website:
    image: my-image
    labels:
      deck-chores.generate-report.command: <Some task>
      deck-chores.generate-report.cron: "* * * * * * */10 0"
But the deck-chores log only prints that it is adding the job for the website service.
I have tried running multiple deck-chores instances, but this seems expensive for having multiple tasks scheduled.
I couldn't see anything in the documentation that says this isn't possible, but then again, I couldn't see anything that says it is possible.