Supercronic

Supercronic has an announcement blog post over here!

Supercronic is a crontab-compatible job runner, designed specifically to run in containers.

Why Supercronic?

Crontabs are the lingua franca of job scheduling, but typical server cron implementations are ill-suited for container environments:

  • They purge their environment before starting jobs. This is an important security feature in multi-user systems, but it breaks a fundamental configuration mechanism for containers.
  • They capture the output from the jobs they run, and often either want to email this output or simply discard it. In a containerized environment, logging task output and errors to stdout / stderr is often easier to work with.
  • They often don't respond gracefully to SIGINT / SIGTERM / SIGQUIT, and may leave running jobs orphaned when signaled. Again, this makes sense in a server environment where init will handle the orphan jobs and Cron isn't restarted often anyway, but it's inappropriate in a container environment as it'll result in jobs being forcefully terminated (i.e. SIGKILL'ed) when the container exits.
  • They often try to send their logs to syslog. This conveniently provides centralized logging when a syslog server is running, but with containers, simply logging to stdout or stderr is preferred.

Finally, they are often quiet, making these issues difficult to understand and debug!

Supercronic's goal is to behave exactly how you would expect cron running in a container to behave:

  • Your environment variables are available in jobs
  • Job output is logged to stdout / stderr
  • SIGTERM triggers a graceful shutdown (and so does SIGINT, which you can deliver via CTRL+C when used interactively)
  • Job return codes and schedules are logged to stdout / stderr
  • SIGUSR2 triggers a graceful reload of the crontab configuration
  • SIGQUIT triggers a graceful shutdown

How does it work?

  • Install Supercronic (see below)
  • Point it at a crontab: supercronic CRONTAB
  • You're done!
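For example, with a one-line crontab (the file name and contents here are illustrative, taken from the example crontab later in this document):

```
$ echo '*/1 * * * * echo "hello"' > ./my-crontab
$ supercronic ./my-crontab
```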

Who is it for?

We (Aptible) originally created Supercronic to make it easy for customers of our No Infrastructure Platform to incorporate periodic jobs in their apps, but it's more broadly applicable to anyone running cron jobs in containers.

Installation

Download

The easiest way to install Supercronic is to download a pre-built binary.

Navigate to the releases page, and grab the build that suits your system. The releases include example Dockerfile stanzas to install Supercronic that you can easily include in your own Dockerfile or adjust as needed.

Note: If you are unsure which binary is right for you, try supercronic-linux-amd64.
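A typical Dockerfile stanza looks like the following (the version and checksum shown are from the v0.1.12 release, as used in examples elsewhere in this document; check the releases page for current values):

```dockerfile
# Install a pinned Supercronic release and verify its checksum
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
    SUPERCRONIC=supercronic-linux-amd64 \
    SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e

RUN curl -fsSLO "$SUPERCRONIC_URL" \
 && echo "${SUPERCRONIC_SHA1SUM}  ${SUPERCRONIC}" | sha1sum -c - \
 && chmod +x "$SUPERCRONIC" \
 && mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
 && ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
```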

Build

You can also build Supercronic from source.

Run the following to fetch Supercronic, install its dependencies, and install it:

go get -d github.com/aptible/supercronic
cd "${GOPATH}/src/github.com/aptible/supercronic"
go mod vendor
go install

Crontab format

Broadly speaking, Supercronic tries to process crontabs just like Vixie cron does. In most cases, it should be compatible with your existing crontab.

There are, however, a few exceptions:

  • First, Supercronic supports second-resolution schedules: Under the hood, Supercronic uses the cronexpr package, so refer to its documentation to know exactly what you can do.
  • Second, Supercronic does not support changing users when running tasks. Setting USER in your crontab will have no effect. Changing users is usually best accomplished in container environments via other means, e.g., by adding a USER directive to your Dockerfile.

Here's an example crontab:

# Run every minute
*/1 * * * * echo "hello"

# Run every 2 seconds
*/2 * * * * * * ls 2>/dev/null

# Run once every hour
@hourly echo "$SOME_HOURLY_JOB"

Environment variables

Just like regular cron, Supercronic lets you specify environment variables in your crontab using a KEY=VALUE syntax.

However, this is only here for compatibility with existing crontabs, and using this feature is generally not recommended when using Supercronic.

Indeed, Supercronic does not wipe your environment before running jobs, so if you need environment variables to be available when your jobs run, just set them before starting Supercronic itself, and your jobs will inherit them.

For example, if you're using Docker, jobs started by Supercronic will inherit the environment variables defined using ENV directives in your Dockerfile, and variables passed when you run the container (e.g. via docker run -e SOME_VARIABLE=SOME_VALUE).

Unless you've used cron before, this is exactly how you expect environment variables to work!
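The mechanism at work is ordinary process inheritance, not anything Supercronic-specific. A minimal sketch in plain shell (the variable name is illustrative):

```shell
#!/bin/sh
# Variables exported by the parent process are inherited by its children.
# This is exactly how jobs spawned by Supercronic see your environment.
export SOME_VARIABLE="some value"

# The child shell sees the variable with no extra plumbing.
sh -c 'echo "child sees: $SOME_VARIABLE"'   # → child sees: some value
```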

Timezone

Supercronic uses your current timezone from /etc/localtime to schedule jobs. You can also override the timezone by setting the environment variable TZ (e.g. TZ=Europe/Berlin) when running Supercronic. You may need to install tzdata in order for Supercronic to find the supplied timezone.

If you need your cron jobs to be scheduled in timezone A but run in timezone B, run Supercronic with /etc/localtime or TZ set to B, and add a CRON_TZ=A line to your crontab.

If you're unsure what timezone Supercronic is using, you can run it with the -debug flag to confirm.
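For instance, to schedule jobs in one timezone while the container itself runs in another, the crontab could look like this (the command name is illustrative):

```
# Schedule the jobs below in New York time, regardless of the
# container's TZ / /etc/localtime setting.
CRON_TZ=America/New_York

# Runs at 09:00 New York time
0 9 * * * generate-daily-report
```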

Logging

Supercronic provides rich logging, and will let you know exactly what command triggered a given message. Here's an example:

$ cat ./my-crontab
*/5 * * * * * * echo "hello from Supercronic"

$ ./supercronic ./my-crontab
INFO[2017-07-10T19:40:44+02:00] read crontab: ./my-crontab
INFO[2017-07-10T19:40:50+02:00] starting                                      iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] hello from Supercronic                        channel=stdout iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] job succeeded                                 iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] starting                                      iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] hello from Supercronic                        channel=stdout iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] job succeeded                                 iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"

Debugging

If your jobs aren't running, or you'd simply like to double-check your crontab syntax, pass the -debug flag for more verbose logging:

$ ./supercronic -debug ./my-crontab
INFO[2017-07-10T19:43:51+02:00] read crontab: ./my-crontab
DEBU[2017-07-10T19:43:51+02:00] try parse(7): */5 * * * * * * echo "hello from Supercronic"[0:15] = */5 * * * * * *
DEBU[2017-07-10T19:43:51+02:00] job will run next at 2017-07-10 19:44:00 +0200 CEST  job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"

Duplicate Jobs

Supercronic will wait for a given job to finish before that job is scheduled again (some cron implementations do this, others don't). If a job is falling behind schedule (i.e. it's taking too long to finish), Supercronic will warn you.

Here is an example:

$ cat ./my-crontab
# Sleep for 2 seconds every second. This will take too long.
* * * * * * * sleep 2

$ ./supercronic ./my-crontab
INFO[2017-07-11T12:24:25+02:00] read crontab: ./my-crontab
INFO[2017-07-11T12:24:27+02:00] starting                                      iteration=0 job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"
INFO[2017-07-11T12:24:29+02:00] job succeeded                                 iteration=0 job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"
WARN[2017-07-11T12:24:29+02:00] job took too long to run: it should have started 1.009438854s ago  job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"
INFO[2017-07-11T12:24:30+02:00] starting                                      iteration=1 job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"
INFO[2017-07-11T12:24:32+02:00] job succeeded                                 iteration=1 job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"
WARN[2017-07-11T12:24:32+02:00] job took too long to run: it should have started 1.014474099s ago  job.command="sleep 2" job.position=0 job.schedule="* * * * * * *"

You can optionally disable this behavior and allow overlapping instances of your jobs by passing the -overlapping flag to Supercronic. Supercronic will still warn about jobs falling behind, but will run duplicate instances of them.

Reload crontab

Send SIGUSR2 to Supercronic to reload the crontab:

# docker environment (Supercronic needs to be PID 1 in the container)
docker kill --signal=USR2 <container id>

# shell
kill -USR2 <pid>

Testing your crontab

Use the -test flag to have Supercronic verify your crontab without executing it. This is useful, e.g., as part of a build process to validate your crontab's syntax.
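For example, to fail an image build on an invalid crontab (a sketch; it assumes Supercronic is already installed in the image and the crontab file sits next to the Dockerfile):

```dockerfile
COPY crontab /app/crontab
# Aborts the build if the crontab cannot be parsed
RUN supercronic -test /app/crontab
```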

Level-based logging

By default, Supercronic routes all logs to stderr. If you wish to change this behaviour to level-based logging, pass the -split-logs flag to route debug and info level logs to stdout:

$ ./supercronic -split-logs ./my-crontab 1>./stdout.log
$ cat ./stdout.log
time="2019-01-12T19:34:57+09:00" level=info msg="read crontab: ./my-crontab"
time="2019-01-12T19:35:00+09:00" level=info msg=starting iteration=0 job.command="echo \"hello from Supercronic\"" job.position=0 job.schedule="*/5 * * * * * *"
time="2019-01-12T19:35:00+09:00" level=info msg="hello from Supercronic" channel=stdout iteration=0 job.command="echo \"hello from Supercronic\"" job.position=0 job.schedule="*/5 * * * * * *"
time="2019-01-12T19:35:00+09:00" level=info msg="job succeeded" iteration=0 job.command="echo \"hello from Supercronic\"" job.position=0 job.schedule="*/5 * * * * * *"

Integrations

Sentry

Supercronic offers integration with Sentry for real-time error tracking and reporting. This feature helps in identifying, triaging, and fixing crashes in your cron jobs.

Enabling Sentry

To enable Sentry reporting, configure the Sentry Data Source Name (DSN), e.g. by passing the -sentry-dsn argument when starting Supercronic:

$ ./supercronic -sentry-dsn DSN

You can also specify the DSN via the SENTRY_DSN environment variable. When a DSN is specified via both the environment variable and the command line parameter, the command line parameter takes priority.
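For example, supplying the DSN via the environment instead of the flag (the DSN shown is a placeholder):

```
$ SENTRY_DSN="https://publicKey@example.ingest.sentry.io/0" ./supercronic ./my-crontab
```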

Additional Sentry Configuration

You can also specify the environment and release for Sentry to provide more context to the error reports:

Environment: Use the -sentry-environment flag or the SENTRY_ENVIRONMENT environment variable to set the environment tag in Sentry.

$ ./supercronic -sentry-dsn YOUR_SENTRY_DSN -sentry-environment YOUR_ENVIRONMENT

Release: Use the -sentry-release flag or the SENTRY_RELEASE environment variable to set the release tag in Sentry.

$ ./supercronic -sentry-dsn YOUR_SENTRY_DSN -sentry-release YOUR_RELEASE

Questions and Support

Please feel free to open an issue in this repository if you have any questions about Supercronic!

Note that if you're trying to use Supercronic on Aptible App, we have a dedicated support article.

Contributing

PRs are always welcome! Before undertaking a major change, consider opening an issue for some discussion.

License

See LICENSE.md.

Copyright

Copyright (c) 2019 Aptible. All rights reserved.


supercronic's Issues

output log to stdout without prefix

Hi, is it possible to output the script's log to stdout without supercronic's prefix? This would help to add these logs to Elasticsearch more easily.

Jitter?

I don't see any reference to whether this supports jitter:

-j jitter
        Enable time jitter. Prior to executing commands, cron will sleep
        a random number of seconds in the range from 0 to jitter. This
        will not affect superuser jobs (see -J). A value for jitter must
        be between 0 and 60 inclusive. Default is 0, which effectively
        disables time jitter.

        This option can help to smooth down system load spikes during
        moments when a lot of jobs are likely to start at once, e.g., at
        the beginning of the first minute of each hour.

Initialize prometheus counters with 0

Hello,

I'm trying to create an alert based on supercronic's Prometheus metrics that will notify me if a job execution fails. If I have exactly one failed execution of a cron job, the alert doesn't fire. The reason seems to be that the supercronic_failed_executions counter isn't initialized when the process starts, so the metric isn't available in Prometheus until at least one failure. And when the first failure happens, I cannot detect a change because there's nothing to compare to; the metric has a single value. With at least 2 failures the alert fires, but I would really like to know about the first failure too. Initializing all counters with 0 would solve this. Also see the first part of this blog post: https://blog.doit-intl.com/making-peace-with-prometheus-rate-43a3ea75c4cf.

My alert rule is supercronic_failed_executions - supercronic_failed_executions offset 10m > 1. Doing something like supercronic_failed_executions > 1 would work, but isn't useful, because such an alert would keep firing until the process is restarted, even if there are successful executions afterwards.

Email on failure

Is there some way to configure supercronic to send an email if a job fails?

Health checks

This is a feature request.

Hello.

Would it be possible to add a health check to supercronic? Should be as simple as this: https://www.ianlewis.org/en/using-kubernetes-health-checks

Rationale: a lot of containers nowadays run under orchestration software, such as Kubernetes. All of these systems have health-checking primitives that allow failed services inside containers to be restarted. Usually, a health check just listens on a TCP socket and responds to HTTP GET requests with an HTTP OK.

Support for multiline entries

While migrating some of my cronjobs to supercronic, I noticed that multiple lines joined together with the \ character work in cron, but are not supported in supercronic.

For example:

*/10 * * * * echo 'a' && \
                  echo 'b' && \
                  echo 'c'

is a valid crontab entry for most classic crons, but is rejected by supercronic's parser.

Combine job's json log with supercronic's log together.

These days I find supercronic is a great cron tool to run as a non-root user in a container. Great job! 👍

Our project requires json log format with specified fields, like

{
"time":"",
"message":"",
"level":"",
"thread",""
...
}

Our job's stdout output is the JSON above, but supercronic wraps the job's message in a 'msg' field, which destroys our format when viewing it in Kibana.

I'd like an option to disable rich logging in supercronic and just print the job's log as-is, not wrapped in a 'msg' field.
I know supercronic's own log is useful for watching cron activity. Maybe it would be ideal to merge the JSON structures of supercronic's log and the job's own JSON log, but that would be difficult to implement. So we accept supercronic's log format as it is, but we want our job's logs to be output standalone, not as a 'msg' field.

Optional SIGTERM at Job Failure

I would like to THANK YOU for this brilliant piece of software!

I have a use case in which supercronic acts as the entry point of a container. That leads to the necessity of restarting the container if a job run by supercronic fails.

Process seems to hang after 2m output lines

I've been running supercronic in a docker / rancher environment for a couple of months now, and it stalls regularly.

Simplified sketch of my case:

  • It's a single docker instance running 8 cronjobs every minute. And a couple on a @hourly schedule.
  • At the end of each job a trigger is sent to an InfluxDB instance, which in turn has Grafana alerting us when not enough jobs are run.
  • The docker logs contain 240 lines every minute; this is a combination of supercronic's own logs and the output of each job. (Our docker logs are pushed to an ELK stack.)
  • On average once or twice a day a single job runs for more than a minute, so the next execution of that job is skipped.

The issue:

  • After about 6 days (this has happened twice since I started logging in Grafana and the ELK stack), supercronic just freezes.
  • The output log shows a couple of jobs with the message 'job succeeded', and a couple of jobs in the middle of their own output: a multiline message stopped at a random (but expected) newline.
  • If I enter the container and check the process list with ps aux, I see only supercronic, a [sh], and my current bash processes. The [sh] process is not present when supercronic is running normally.

Some (relevant) version information:

  • supercronic: v0.1.6 (I ran v0.1.5 before I added the ELK logging/alerting and had the same issue)
  • alpine 3.6 base container

On a separate environment running only 2 jobs every minute the same happens, only less frequently. So I expect it has something to do with the output logging; the last 6 days produced a total of 2,140,101 lines according to Kibana.

Disable supercronic logs, or store them to file

Hi,

supercronic has logs, but they're creating a lot of confusion with my application logs. I want to see my application's logs, not supercronic's. Is there any way to turn off the tool's logging?

Thanks

musl support

Hey there, any chance of getting releases with musl support as well? Otherwise, this cannot be used in images like Alpine.
Thank you!

Prometheus metrics support

Good day.

Thank you very much for the supercronic - this is really very cool software.

However, we encountered a problem: we can't find out whether a cron job ended with an error.

Do you plan to support metrics that prometheus can pick up? With information about the finished/failed tasks.

CVE-2015-5237

Could you please update the required dependencies and publish a fresh release?


How to reproduce:

$ wget -q "https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64" -O ./app
$ docker run -v "$(pwd)/app:/test/app:ro" anchore/grype:v0.28.0 dir:/test
NAME                        INSTALLED  FIXED-IN  VULNERABILITY  SEVERITY 
google.golang.org/protobuf  v1.21.0              CVE-2015-5237  High  

Include detailed instructions in documentation

From the documentation it's unclear to me how to get supercronic into a cron job in a Docker container. Say I specify a non-root system user in the Dockerfile before making the call to RUN usr/bin/crontab... how does supercronic fit into all of this?

How to run this in Dockerfile?

After reading the documentation and a few examples, it is still not clear to me how to start supercronic during container startup.

When I want to start supercronic in a container that already has a CMD, I can't add another CMD supercronic /crontab because it will overwrite the original.

I can't put RUN supercronic /crontab in Dockerfile because the container will never start.

Finally, when I RUN supercronic in the background, I can neither access the supercronic logs with docker logs, nor are they stored in any file.

What is the best practice to run supercronic in an existing container?

My case: I have an application container based on php7.1-apache container and I want to run a command periodically inside this container. I want the crontab to be activated automatically when the container starts. I want to be able to access logs of both apache and supercronic.

Fix formatting of log messages

Hi! Thanks for this package, it's been really easy to setup and get going.

I was hoping for a way of specifying the log format, but haven't found anything in the docs.

Without using the -split-logs option, my logs are coming through formatted like

time="2019-01-12T19:35:00+09:00" level=info msg="hello from Supercronic" channel=stdout iteration=0 job.command="echo \"hello from Supercronic\"" job.position=0 job.schedule="*/5 * * * * * *"

but this is a bit of a drag to visually parse.

How do I get the logs that look like

INFO[2017-07-10T19:40:50+02:00] hello from Supercronic                        channel=stdout iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"

or just show the message?

Thank you!

Wrong crontab line when minutes frequency is >= 60

Hi

I'm using Supercronic v0.1.12 in a docker container with Alpine Linux, and I just discovered that the minutes field doesn't work when you try to set a frequency equal to or higher than 60 minutes.

This works:

*/59 * * * * /home/user_a/script_a.sh

But this doesn't:

*/60 * * * * /home/user_a/script_a.sh

This is something you can easily avoid when setting fixed frequencies by hand but in my case I created an environment variable that contains the number of minutes between executions. On the other hand, frequencies equal or above 60 work perfectly fine in cron.

Regards and many thanks for this piece of software. Definitely it has some improvements over cron, specially when working with containers.

Closed: I see this behaviour is by design

Logging of messages in console not related to cronjobs

I'm running supercronic with Django, Docker and Supervisord. It works fine; however, it logs things not related to the cronjob. For example, it's logging various error messages (from dotenv, for example) and a print of "LOGIN" which I do inside my Django app:

q_cron | crontab | time="2022-01-25T13:04:00Z" level=info msg=" dotenv.read_dotenv()" channel=stderr iteration=0 job.command=/var/app/bin/staging/count_number_of_users.sh job.position=0 job.schedule="* * * * *"
q_cron | crontab | time="2022-01-25T13:04:03Z" level=info msg=LOGIN channel=stdout iteration=0 job.command=/var/app/bin/staging/count_number_of_users.sh job.position=0 job.schedule="* * * * *"
q_cron | crontab | time="2022-01-25T13:04:04Z" level=info msg="Number of users in the DB: 68" channel=stdout iteration=0 job.command=/var/app/bin/staging/count_number_of_users.sh job.position=0 job.schedule="* * * * *"
q_cron | crontab | time="2022-01-25T13:04:05Z" level=info msg="job succeeded" iteration=0 job.command=/var/app/bin/staging/count_number_of_users.sh job.position=0 job.schedule="* * * * *"

This is the part of my Dockerfile where I install Supercronic

# SUPERCRONIC (CRON-TABS)
#########################

ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
    SUPERCRONIC=supercronic-linux-amd64 \
    SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e

RUN curl -fsSLO "$SUPERCRONIC_URL" \
 && echo "${SUPERCRONIC_SHA1SUM}  ${SUPERCRONIC}" | sha1sum -c - \
 && chmod +x "$SUPERCRONIC" \
 && mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
 && ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic

This is how I use it in Supervisord:

[program:crontab]
user=www-data
command=/usr/local/bin/prefix-log /var/app/bin/staging/crontab.sh
directory=./projectile/
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0

Where I run it (crontab.sh):

supercronic /var/app/bin/staging/crontab

100% CPU and lots of logs when setting a range 6-7 in days

ex this simple job:

45 4        *  *  6-7   echo "foo"

generates hundreds logs per seconds of:

time="2020-05-07T20:55:58+02:00" level=warning msg="job took too long to run: it should have started -2562047h47m16.854775808s ago" job.command="echo \"foo\"" job.position=0 job.schedule="45 4        *  *  6-7"
time="2020-05-07T20:55:58+02:00" level=warning msg="job took too long to run: it should have started -2562047h47m16.854775808s ago" job.command="echo \"foo\"" job.position=0 job.schedule="45 4        *  *  6-7"
time="2020-05-07T20:55:58+02:00" level=warning msg="job took too long to run: it should have started -2562047h47m16.854775808s ago" job.command="echo \"foo\"" job.position=0 job.schedule="45 4        *  *  6-7"
time="2020-05-07T20:55:58+02:00" level=warning msg="job took too long to run: it should have started -2562047h47m16.854775808s ago" job.command="echo \"foo\"" job.position=0 job.schedule="45 4        *  *  6-7"
...

same job with either:

45 4        *  *  6,7   echo "foo"

or

45 4        *  *  6,0   echo "foo"

or

45 4        *  *  6   echo "foo"
45 4        *  *  7   echo "foo"

are working normally, so a work-around is pretty straightforward.

I know "7" is non-standard, but since it's supported alone and within lists, I suppose it should be supported in ranges too.
(And taking 100% of the CPU to log warnings is not good anyway.)

Thanks,
Regards,
Thomas.

Specifying timezone using the TZ environment variable does not work

Using the suggested environment variable, TZ, to specify the timezone for Supercronic to use yields no change.

Using TZ=Europe/Stockholm, for example, should yield 2020-07-17T07:00:00Z as the next event for the cron 0 7 * * *.

Example Dockerfile to replicate:

FROM alpine:3

ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
    SUPERCRONIC=supercronic-linux-amd64 \
    SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85

RUN apk add --update --no-cache curl && \
  curl -fsSLO "$SUPERCRONIC_URL" && \
  echo "$SUPERCRONIC_SHA1SUM  $SUPERCRONIC" | sha1sum -c - && \
  chmod +x "$SUPERCRONIC" && \
  mv "$SUPERCRONIC" "/usr/local/bin/$SUPERCRONIC" && \
  ln -s "/usr/local/bin/$SUPERCRONIC" /usr/local/bin/supercronic

RUN echo "0 7 * * * echo 'this won't work'" > /tmp/cron

ENV TZ=Europe/Stockholm

ENTRYPOINT ["supercronic"]
CMD ["-debug", "/tmp/cron"]

Output:

time="2020-07-16T20:35:07Z" level=info msg="read crontab: /tmp/cron"
time="2020-07-16T20:35:07Z" level=debug msg="try parse(7): 0 7 * * * echo 'this won't work'[0:20] = 0 7 * * * echo 'this"
time="2020-07-16T20:35:07Z" level=debug msg="try parse(6): 0 7 * * * echo 'this won't work'[0:14] = 0 7 * * * echo"
time="2020-07-16T20:35:07Z" level=debug msg="try parse(5): 0 7 * * * echo 'this won't work'[0:9] = 0 7 * * *"
time="2020-07-16T20:35:07Z" level=debug msg="job will run next at 2020-07-17 07:00:00 +0000 UTC" job.command="echo 'this won't work'" job.position=0 job.schedule="0 7 * * *"
^Ctime="2020-07-16T20:35:12Z" level=info msg="received interrupt, shutting down"
time="2020-07-16T20:35:12Z" level=info msg="waiting for jobs to finish"
time="2020-07-16T20:35:12Z" level=debug msg="shutting down" job.command="echo 'this won't work'" job.position=0 job.schedule="0 7 * * *"
time="2020-07-16T20:35:12Z" level=info msg=exiting

Is this incorrect behavior (a bug) or is the documentation misleading on how to specify the timezone to use?

Release a Windows build

It would be cool if you would release Windows binaries, too.
I'm not a Gopher, but I believe you can cross compile it, right? I'm not sure how hard it would be.

Schedule job isn't started because of defunct zombie process

I was trying to run Magento's scheduled tasks with supercronic, but sometimes a scheduled job isn't triggered because the previous job doesn't end. Running ps ufx shows that supercronic is waiting for a zombie process to exit.

> ps ufx
www-data   117  0.0  0.0 713884  7836 ?        Sl   13:49   0:00  \_ supercronic -debug /etc/cron.d/magento2-cron
www-data   699  0.0  0.0      0     0 ?        Z    13:52   0:00      \_ [sh] <defunct>

Here is the content of /etc/cron.d/magento2-cron

* * * * * /usr/local/bin/php -d memory_limit=-1 /var/www/html/bin/magento cron:run 2>&1

Extra logging for each log

Hi. Thanks for this beautiful package.

We are seeing an extra line of unnecessary log output whenever one of our logs is received. For example:

Feb 12 05:36:15 5c1e5dd859a6 test-cron time="2020-02-12T13:36:10Z" level=info channel=stderr iteration=56 job.command="php artisan test:test" job.position=6 job.schedule="30 */2 * * *"

Feb 12 05:36:15 5c1e5dd859a6 test-cron time="2020-02-12T13:36:10Z" level=info msg="message content here " channel=stderr iteration=56 job.command="php artisan test:test" job.position=6 job.schedule="30 */2 * * *"

Here, the second log record is legitimate, but the first one was not created by us (there is no msg field in the first log). And we are seeing a log like the first one for every log that we record.

This takes up too much logging space.

Is there any configuration we are missing and/or misusing?

By the way, we are using v0.1.5.

Thanks!

Level-based logging by default on Docker alpine image

Hello,

First and foremost, thanks a lot for the super useful piece of software.

I am running a node script hourly on an alpine-based Docker image, and logs are by default level-based, which I find too verbose and hard to read.
In addition, logs are not timestamped in my time zone, which I suspect is caused by the same issue. I tried using the TZ and CRON_TZ env variables, without success.

Here is my Dockerfile:

FROM node:16.5.0-alpine3.14

RUN apk --no-cache add dumb-init
ENV NODE_ENV production

# Adding support for cron
RUN apk --no-cache add curl
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
    SUPERCRONIC=supercronic-linux-amd64 \
    SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e
RUN curl -fsSLO "$SUPERCRONIC_URL" \
 && echo "${SUPERCRONIC_SHA1SUM}  ${SUPERCRONIC}" | sha1sum -c - \
 && chmod +x "$SUPERCRONIC" \
 && mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
 && ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic

USER node
WORKDIR /usr/src/app
COPY --chown=node:node package*.json ./
RUN npm ci --only=production

COPY --chown=node:node . /usr/src/app

ENTRYPOINT ["dumb-init"]
CMD ["supercronic", "./config/crontab"]

And my crontab:
0 */1 * * * TZ="Europe/Paris" node index.js

Here are my logs:
[screenshot "Capture d'écran 2021-07-28 à 11 36 13" not reproduced in this mirror]

Thank you for your help!

how to add job to supercronic docker?

I used crontab on my original machine, where I use crontab -e to edit the job list.
But after switching to the supercronic docker image, I don't know how to do that: I can't use "docker exec -ti supercronic /bin/bash" to get into the container and run crontab -e to edit the job list.

what should I do, I have go through the supercronic readme.md, I did not found any introduction about this. could you help me about this?
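Since supercronic takes its jobs from a plain crontab file rather than per-user crontab -e state, a common pattern is to keep that file on the host and bind-mount it into the container. A sketch, with hypothetical paths and image name:

```
docker run -d \
    -v "$PWD/my-crontab:/app/crontab:ro" \
    my-supercronic-image supercronic /app/crontab
```

You then edit my-crontab on the host and restart (or signal) the container to pick up changes.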

Thanks,
Ronald

Weird SIGQUIT exception when combined with Docker's PHP fpm-buster image

Hello,

First and most important, thank you for creating and making supercronic available to the world!

I'm trying to add supercronic to a Docker container based on the official PHP image, and noticed that when supercronic is launched, either as the entrypoint or via sh exec, the following exception is shown when running docker stop on that container:

SIGQUIT: quit
PC=0x465601 m=0 sigcode=0

goroutine 0 [idle]:
runtime.futex(0xe45ca8, 0x80, 0x0, 0x0, 0x0, 0x7ffe00000000, 0x439e03, 0xc000080148, 0x7ffeddb51438, 0x40ac0f, ...)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/sys_linux_amd64.s:567 +0x21
runtime.futexsleep(0xe45ca8, 0x7ffe00000000, 0xffffffffffffffff)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/os_linux.go:45 +0x46
runtime.notesleep(0xe45ca8)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/lock_futex.go:151 +0x9f
runtime.stoplockedm()
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/proc.go:1977 +0x88
runtime.schedule()
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/proc.go:2460 +0x4a6
runtime.park_m(0xc0002c2600)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/proc.go:2696 +0x9d
runtime.mcall(0x0)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/asm_amd64.s:318 +0x5b

goroutine 1 [chan receive]:
main.main()
        /home/travis/gopath/src/github.com/aptible/supercronic/main.go:156 +0x98c

goroutine 39 [select]:
github.com/aptible/supercronic/cron.startFunc.func1(0xc0004f3db0, 0xe44b40, 0xa98640, 0xc0004bda40, 0xc00054e1c0, 0xaa2c60, 0xc0004c9440, 0xc0004eddd0, 0xc0004edd00)
        /home/travis/gopath/src/github.com/aptible/supercronic/cron/cron.go:182 +0x46c
created by github.com/aptible/supercronic/cron.startFunc
        /home/travis/gopath/src/github.com/aptible/supercronic/cron/cron.go:158 +0xbb

goroutine 40 [syscall]:
os/signal.signal_recv(0x0)
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
        /home/travis/.gimme/versions/go1.14.4.linux.amd64/src/os/signal/signal.go:127 +0x44

rax    0xca
rbx    0xe45b60
rcx    0x465603
rdx    0x0
rdi    0xe45ca8
rsi    0x80
rbp    0x7ffeddb51400
rsp    0x7ffeddb513b8
r8     0x0
r9     0x0
r10    0x0
r11    0x286
r12    0xff
r13    0x0
r14    0xa87be6
r15    0x0
rip    0x465601
rflags 0x286
cs     0x33
fs     0x0
gs     0x0

I was able to reduce the test scenario to the following Dockerfile:

FROM php:8.0-fpm-buster

# ---
# Setup Supercronic to run cron jobs
#
# Ref: https://github.com/aptible/supercronic

ENV SUPERCRONIC_VERSION=v0.1.12 \
      SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e

RUN set -eux; \
    cd /tmp; \
    { \
        curl --fail -Lo supercronic https://github.com/aptible/supercronic/releases/download/${SUPERCRONIC_VERSION}/supercronic-linux-amd64; \
        echo "${SUPERCRONIC_SHA1SUM} *supercronic" | sha1sum -c - >/dev/null 2>&1; \
        chmod +x supercronic; \
        mv supercronic /usr/local/bin/supercronic; \
    };

# ---
# Copy crontab

COPY ./echo.crontab /etc/cron.d/echo

# ---
# Entrypoint

ENTRYPOINT ["/usr/local/bin/supercronic"]
CMD ["-passthrough-logs", "/etc/cron.d/echo"]
# echo.crontab
* * * * * echo "Hello!"
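A possible explanation, offered as an assumption worth verifying against the image's own Dockerfile: the php:*-fpm images declare STOPSIGNAL SIGQUIT, so docker stop sends SIGQUIT to PID 1 (supercronic here), and the Go runtime's default SIGQUIT handler dumps goroutines like the trace above. The cli images don't set that STOPSIGNAL, which would explain why they behave differently. If that is the cause, overriding the stop signal in your Dockerfile restores SIGTERM:

```Dockerfile
# Hypothetical fix: make `docker stop` send SIGTERM instead of the
# SIGQUIT inherited from the php-fpm base image.
STOPSIGNAL SIGTERM
```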

While the PHP container is based on debian:buster-slim, I was not able to reproduce this scenario using that image directly; the cli-buster image also works fine, which makes this trickier to investigate. I'm not proficient enough in Go to debug this more deeply and report it to Docker's official tracker.

Any hint that could help me debug this further?

Thank you in advance for your time.

❤️ ❤️ ❤️

How to suppress printing of crontab command each time supercronic finishes a job

Hi there. I have a containerized application which runs about 15 crontabs on a "each-minute" cron schedule.

I started using supercronic because I did not want to have to run crond as a super-user. That's a security risk which my organization needed me to find a workaround for. Supercronic to the rescue!

Supercronic's feature set is so much better than crond. So much so, that it's a bit too good for me :)

The jobs that supercronic is running are of the form:
"${CRON_SCHEDULE} timeout -t ${timeout_seconds} java ${jvm_options}
-Daccess.key.id=${ACCESS_KEY_ID}
-Daccess.key.secret=${ACCESS_KEY_SECRET}
-jar ${JAR}
${app_args} > /proc/1/fd/1 2>/proc/1/fd/2"

Our application containers forward logs to a centralized logging system.

The problem here is that there is sensitive information (AWS access keys and secrets) being passed as properties to the application. (Yes I know, I can do it with environment variables. Long story short, that will take a lot of work due to application logic requirements). These credentials are then logged in plaintext in our centralized logging system because supercronic will print them out as part of the command it just finished executing.

Is there an easy way I could disable supercronic from logging those commands which will print my credentials in plaintext .... but still leave the logs of the Java application unchanged?
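One way to sidestep the problem, sketched here with hypothetical paths and a placeholder in place of the real java invocation: have the crontab call a wrapper script, so the command line supercronic logs contains no secrets, and load the credentials inside the wrapper instead.

```shell
# Hypothetical sketch: the crontab line would just be "* * * * * /tmp/run_app.sh",
# so supercronic only ever logs the wrapper's path, not the credentials.
cat > /tmp/creds.env <<'EOF'
ACCESS_KEY_SECRET=s3cret
EOF
cat > /tmp/run_app.sh <<'EOF'
#!/bin/sh
# Load credentials inside the script; they never appear on the logged command line.
. /tmp/creds.env
# Placeholder for: exec java -Daccess.key.secret="$ACCESS_KEY_SECRET" -jar "$JAR"
echo "starting app (secret loaded: ${ACCESS_KEY_SECRET:+yes})"
EOF
chmod +x /tmp/run_app.sh
/tmp/run_app.sh
```

This keeps the Java application's own logs unchanged, since only supercronic's echo of the crontab command is affected.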

Question: Running supercronic on multiple Instances

Hello devs,

Thank you. I was unable to find a proper answer elsewhere, so I'm wondering whether any of the devs can help me understand how to run supercronic on only one instance. I have an application running as two Docker instances, and currently the jobs execute in all environments on all instances as the image is promoted from IT to PROD. How can I configure it to run only in certain environments, or gate execution on an environment variable?

Allow to configure the cron jobs through ENV variables

Hi,

First of all thanks for the software it is really useful sometimes!

It would be really great if we could define the cron entries via environment variables so we do not need to have static files in our container (or mount the cron file from the container host).
To work around this I am using confd, with an environment variable as the source for my cron file.

Thanks,
Balazs

Suggestion: Make images available on Docker Hub

Thank you for the work on this project, it looks like a great solution to running Cron in a secure containerised environment.

I'd like to suggest that this project provide images on Docker Hub. The main advantage I see to doing this would be that for using Supercronic in a project, I could simply use the COPY directive in my Dockerfile to copy the binary from the most appropriate Supercronic image.

For example, the PHP Composer project does this and in a Dockerfile I simply need to include the following:
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

This line ensures that I always have the most up to date build of Composer 2 in my final image, and (while not applicable to Composer) Docker would know which architecture I need, making my Dockerfile more portable across different architectures (x86, ARM, etc).

Thanks,
-Aaron

log to stdout instead of stderr

Thanks for making supercronic! I've found it useful so far.

One issue I'd like to fix in my setup is that after installing supercronic, my messages now show up on stderr instead of stdout. Any idea why this might be happening?

[Feature request] allow to prefix output

State-of-the-art Docker runs only one service per container, but in real life you sometimes need to run supercronic alongside other services.
In this case, it is difficult to differentiate supercronic's output from the other output.
Could you either add an option to define an output prefix, or prefix the output by default?
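Until such an option exists, one workaround is to pipe supercronic's output through sed to add a prefix yourself. A minimal sketch, with echo standing in for the supercronic process:

```shell
# In a real entrypoint this would be:
#   supercronic /app/crontab 2>&1 | sed 's/^/[supercronic] /'
# Demonstrated here with a stand-in command:
echo "job succeeded" 2>&1 | sed 's/^/[supercronic] /'
```

Note that piping merges stdout and stderr and can change buffering behavior, so this is a stopgap rather than a substitute for a built-in prefix option.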

Error running command: signal: killed

Hello,

Testing locally, everything runs fine; however, once supercronic is started in k8s, I get this error message every minute:

level=error msg="error running command: signal: killed" iteration=6 job.command="dotnet /app/myapp.dll > /proc/1/fd/1 2> /proc/1/fd/2" job.position=1 job.schedule="* * * * *"
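For context: "signal: killed" means the job received SIGKILL, and in Kubernetes that is often the kernel OOM killer firing when the container exceeds its memory limit (kubectl describe pod would show OOMKilled in that case). A hedged sketch of raising the limit in the pod spec, with a placeholder value:

```yaml
resources:
  limits:
    memory: "512Mi"   # placeholder; size to the job's actual footprint
```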

Color escape sequences are being quoted/escaped on Docker logs

Running Supercronic directly in a Bash shell shows colors, but when inspecting docker logs, the Supercronic output is quoted or escaped which prevents showing the actual colors.

Compare this to other cron implementations, such as Debian's default one, which doesn't quote/escape the output and shows the colors whether called directly from the shell or viewed via docker logs.

This is the Supercronic output from within Bash inside Docker:
[screenshot]

Output from docker logs when using Supercronic:
\x1b[1m\x1b[35m (0.5ms)\x1b[0m BEGIN"
[screenshot]

Output from docker logs when using Debian's cron package:
[screenshot]

Is this project active?

Seems like there are some serious issues, security, DST, and excessive logging. They all have gone unaddressed for many months.

This is open source, so no one is entitled to anything, but will there realistically be any future development on this?

Support @reboot command

Hi there,

I'm currently running supercronic in a Docker container and it's working great.

I would like to run my script on "boot" e.g. when supercronic is started, as well as at a set interval. Normally in many cron versions we can use the standard @reboot to run a script on reboot, but in my testing it looks like supercronic doesn't currently support this.

Would it be possible to add this functionality? I notice that other shortcuts such as @hourly are supported.

I do have a work-around which is to run a shell script /start-container which simply runs my script and then runs supercronic, but it seems a bit awkward!
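The workaround described above can be sketched as a small wrapper entrypoint that runs the task once at startup and then hands off to supercronic (the crontab path and task are placeholders):

```shell
# Emulate @reboot: run the job once before starting the scheduler.
boot_task() {
    echo "boot task ran"   # placeholder for the real startup script
}
boot_task
# Then hand off to the scheduler, e.g.:
#   exec supercronic /app/crontab
```

Using exec for the final hand-off matters: it makes supercronic PID 1's direct child (or PID 1 itself), so it receives container signals correctly.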

supercronic doesn't like to send SIGTERM/SIGINT to my script

I start the tool with supercronic -debug my.crontab. When I press ^C, I expect SIGTERM/SIGINT to be sent to my script (i.e., supercronic would act as a signal proxy). However, supercronic doesn't send any signal to my script; it just waits for it.

My configuration

*/5 * * * * * * ./cd.sh sys_sleep

(./cd.sh sys_sleep handles SIGTERM/SIGINT perfectly when I run it on its own; see below.)

Debug logging

INFO[2020-04-19T14:22:50+02:00] starting                                      iteration=7 job.command="./cd.sh sys_sleep" job.position=0 job.schedule="*/5 * * * * * *"                 
INFO[2020-04-19T14:22:50+02:00] :: sys_sleep: Sleeping 180 second(s). Use 'now' to wake up the sleeping script.  channel=stdout iteration=7 job.command="./cd.sh sys_sleep" job.position=0 job.schedule="*/5 * * * * * *"                                                                                                                                                                                                                 
^CINFO[2020-04-19T14:22:53+02:00] received interrupt, shutting down                                                                                                                                                  
INFO[2020-04-19T14:22:53+02:00] waiting for jobs to finish                                                                                                                                                           
WARN[2020-04-19T14:22:55+02:00] not starting: job is still running since 2020-04-19 14:22:50 +0200 CEST (5s elapsed)  iteration=7 job.command="./cd.sh sys_sleep" job.position=0 job.schedule="*/5 * * * * * *"      
WARN[2020-04-19T14:23:00+02:00] not starting: job is still running since 2020-04-19 14:22:50 +0200 CEST (10s elapsed)  iteration=7 job.command="./cd.sh sys_sleep" job.position=0 job.schedule="*/5 * * * * * *"     

My cd.sh signal trap is working

$ ./cd.sh sys_sleep
:: sys_sleep: Sleeping 180 second(s). Use 'now' to wake up the sleeping script.
^C:: sys_trap: Testing purpose: This is sys_trap
:: rclean: Cleaning up metric file /home/gfg/metrics.txt
:: sys_trap: Testing purpose: This is sys_trap
:: rclean: Cleaning up metric file /home/gfg/metrics.txt

Massive logging of missed jobs in v0.1.12

supercronic-linux-amd64 with SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e
I just had my /var/lib/docker/overlay2 filesystem in my production swarm filled to 100% because of millions of messages like:

time="2021-12-03T09:24:42+01:00" level=warning msg="job took too long to run: it should have started -2562047h47m16.854775808s ago" job.command="printf '%(%c)T' -1 > /tmp/cron.running" job.position=0 job.schedule="* 12,13 16 11 2 2021"

10 GB log space consumed in less than 1 minute.

(The schedule in the crontab was set like this in a base image so that the task would only be run during the building/testing of the base image, and not run in containers that use the base image without overwriting the crontab).

Daylight Savings Issue

I had an issue this past Sunday morning during the daylight-saving spring forward. I have a cron job that runs every minute, and when the changeover occurred supercronic started logging the following message continually for about 10 minutes, until it filled up my VM with a 40 GB log file.

{"log":"time=\"2021-03-14T01:00:00-08:00\" level=warning msg=\"not starting: job is still running since 2021-03-14 01:00:00 -0800 PST (0s elapsed)\" iteration=104 job.command=\"/bin/bash -l -c 'cd /home/deploy/app \u0026\u0026 RAILS_ENV=production bundle exec rake process --silent \u003e\u003e /proc/1/fd/1 2\u003e\u003e /proc/1/fd/2'\" job.position=2 job.schedule=\"0 * * * *\"\n","stream":"stderr","time":"2021-03-14T09:00:00.041059641Z"}

I'm setting my localtime zone in my Dockerfile:

ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

Should I stick to UTC in my container and set the timezone in my cron job? I'm running supercronic version 0.1.11.

parse error with per hour task

My crontab config is */60 8-21 * * *, which should run every 60 minutes.

version: v0.1.6
OS version: Debian GNU/Linux 9 (stretch)

debug info :

DEBU[2018-12-12T14:19:58+08:00] try parse(7): */60 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1'[0:28] = */60 8-21 * * * /bin/bash -l 
DEBU[2018-12-12T14:19:58+08:00] try parse(6): */60 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1'[0:25] = */60 8-21 * * * /bin/bash 
DEBU[2018-12-12T14:19:58+08:00] try parse(5): */60 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1'[0:15] = */60 8-21 * * * 
DEBU[2018-12-12T14:19:58+08:00] try parse(1): */60 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1'[0:4] = */60 
FATA[2018-12-12T14:19:58+08:00] bad crontab line: */60 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1' 
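For context on the parse failure: in standard cron the minute field accepts values 0–59, and supercronic's parser appears to reject a step of */60 as out of range. An hourly schedule in that window is conventionally written with minute 0 instead; a sketch, keeping the command from the report:

```
0 8-21 * * * /bin/bash -l -c 'my_cmd >> /opt/log/cron.log 2>&1'
```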

Any plan to separate stdout/stderr logs?

Hi,
Thanks for making this great tool!

Is there any plan to add separate stdout/stderr log option to supercronic?
I saw this closed issue, and I understand why supercronic logs all log to stderr.

But I use supercronic with Stackdriver logging, and info logs show up as errors, which is very annoying...
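If memory serves, later supercronic releases added a -split-logs flag that routes job output to stdout while keeping supercronic's own logging on stderr; please verify against supercronic -help on your version before relying on it. A sketch:

```
supercronic -split-logs /app/crontab
```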

[Request] Implementing CRON_TZ

Hi,
I'm trying to use the UTC time zone whenever I can in my projects, and there is one piece of functionality missing for me in supercronic: the ability to change the scheduler's time zone without changing the system time zone. Have you considered implementing this?

I'm using docker debian image with supercronic installed and I want to run commands with default system time zone (UTC) but scheduler itself should run commands at different time zone (e.g Europe/Warsaw). Setting cron scheduler time zone for all commands using CRON_TZ variable would be great. Even better if you could overwrite it in individual cron entries.

I know I can set TZ variable for container to Europe/Warsaw and then in crontab file set TZ=UTC before commands but it seems like workaround.

P.S. Thank you for your great work!

How to restart supercronic / refresh list of jobs

I start supercronic with: supercronic /app/config/crontab
In some situations I need to add jobs to /app/config/crontab after supercronic has started, but the new jobs are not recognized by supercronic.
Is there a way to restart supercronic or refresh the list of jobs?
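Per the README, sending SIGUSR2 makes supercronic gracefully restart and reload its crontab, so jobs can be added without recreating the container. A minimal demo of delivering USR2, with the current shell standing in for the supercronic process:

```shell
# In production you would target the supercronic PID, e.g.:
#   kill -USR2 "$(pidof supercronic)"
# Demo: trap USR2 in this shell and send it to ourselves.
trap 'echo "crontab reload requested"' USR2
kill -USR2 $$
```

Against a running container where supercronic is PID 1, `docker kill --signal=USR2 <container>` delivers the same signal.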
