schickling / dockerfiles

Collection of lightweight and ready-to-use docker images

Home Page: https://hub.docker.com/u/schickling/

License: MIT License

Shell 54.42% PHP 2.28% Dockerfile 33.66% Makefile 9.64%

dockerfiles's People

Contributors

adamgoose, ajctrl, baachi, claudyus, dextervip, evgenyorekhov, hoax, iloveitaly, issei-m, jdecool, kbedel, kbrownlees, kshilovskiy, ledermann, m2sh, matiasgarciaisaia, mlaitinen, msh100, neclimdul, oliviercuyp, pka, robwithhair, samhopwell, sammousa, schickling, smellman, spalladino, splattael, stigkj, teohhanhui

dockerfiles's Issues

Merge into swagger-ui repo?

Hi, we're going to start automatically publishing docker images for swagger-ui in the official repository. Would you like to merge your script there via a PR? It looks like you've done a great job with this one.

Open Source License

Could you please add an Open Source Software License for these packages? May I suggest MIT.

[Mailcatcher] 220 EventMachine SMTP Server

When using docker-compose, sending email to mailcatcher from a separate container results in "Connection refused". However, if I test this via netcat I get the following:

220 EventMachine SMTP Server

If I test the connection from my host it works; it only fails container-to-container inside the docker network.

nc -z 127.0.0.1 1025
Connection to 127.0.0.1 port 1025 [tcp/blackjack] succeeded!
nc -z 127.0.0.1 1080
Connection to 127.0.0.1 port 1080 [tcp/socks] succeeded!

The version of docker/compose is:

Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad01

An example of the docker-compose.yml below:

version: '3.2'
services:

  php:
    image: example/trusty:php70
    ports:
      - 9000
    volumes:
      - .:/var/www
    networks:
      - internal

  mailcatcher:
    image: schickling/mailcatcher:latest
    ports:
      - "${DOCKER_PORT_MAILCATCHER_SMTP}:1025"
      - "${DOCKER_PORT_MAILCATCHER_HTTP}:1080"
    networks:
      - internal

networks:
  internal:

The values for the environments are as follows:

# .env content
DOCKER_PORT_MAILCATCHER_SMTP=1025
DOCKER_PORT_MAILCATCHER_HTTP=1080
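
A quick way to narrow this down (assuming nc is available in the php image): test from inside the other container using the compose service name rather than 127.0.0.1, which inside a container refers to that container itself, not to mailcatcher.

# run netcat from inside the php container against the mailcatcher service name
docker-compose exec php nc -z mailcatcher 1025
# the SMTP host configured in the application should likewise be "mailcatcher", not 127.0.0.1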

Change the value of max_binlog_cache_size in MySQL

Thanks for providing this useful container.

I am doing research aimed at finding issues in configuration files. I have a question about one MySQL config option: max_binlog_cache_size. The official documentation says: "The maximum recommended value is 4GB; this is due to the fact that MySQL currently cannot work with binary log positions greater than 4GB."

However, the default value is 18446744073709551615, which is much larger than the recommended value.

Shall we change the value to 4GB?
Thanks!
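
If capping the value is desired, a minimal sketch is to drop a config fragment into the server's include directory (the /etc/mysql/conf.d path below is the one used by the official mysql image and is an assumption here):

# write a config fragment that caps the binlog cache at the documented 4GB ceiling
cat > binlog-cache.cnf <<'EOF'
[mysqld]
max_binlog_cache_size = 4G
EOF
# mount it into the MySQL container (path assumed from the official mysql image)
docker run -d -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/binlog-cache.cnf:/etc/mysql/conf.d/binlog-cache.cnf" mysql:5.7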

optimize size of images

Because Docker works with layers and creates one layer per instruction in your Dockerfile, you should combine related commands into a single instruction to really reduce the size.

When you run

RUN apt-get update && apt-get install -y ruby ruby-dev build-essential sqlite3 libsqlite3-dev
RUN gem install mailcatcher --no-ri --no-rdoc
RUN apt-get remove --purge -y build-essential ruby-dev libsqlite3-dev && apt-get autoclean && apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Layer #1 will still contain the packages build-essential, ruby-dev and libsqlite3-dev, so the total size of all layers (i.e. the size of the image) is not optimized.

A better way to reduce the size of your images:

RUN \
   apt-get update && apt-get install -y ruby ruby-dev build-essential sqlite3 libsqlite3-dev && \
   gem install mailcatcher --no-ri --no-rdoc && \
   apt-get remove --purge -y build-essential ruby-dev libsqlite3-dev && apt-get autoclean && apt-get clean && \
   rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Because the build packages are installed and removed within the same instruction, no layer ends up containing them.
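
The effect can be verified directly, since docker history reports the size each layer contributes:

# compare the per-layer sizes before and after combining the RUN instructions
docker history schickling/mailcatcher:latest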

Docker-compose support

I want to add a docker-compose file to several docker images, starting with postgres-backup-s3.

Images may be unversioned

In case someone else runs into this: mailcatcher, for example, only has a latest tag, which means automated builds using it may lead to different results as that tag evolves and new versions are pushed. The build history shows tens of different versions that have all been overwritten on the same tag.
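
A workaround sketch until proper version tags exist: resolve the digest of the image you tested against and pin to it, so later pushes to :latest don't silently change your builds.

docker pull schickling/mailcatcher:latest
# print the immutable digest reference for the image just pulled
docker inspect --format '{{index .RepoDigests 0}}' schickling/mailcatcher:latest
# then reference the printed schickling/mailcatcher@sha256:... in FROM / docker run instead of :latest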

API Authorization is wrong.

Putting an apiKey authorization results in the calls being sent with api_key query param instead of the specified header with the specified key.

mysql-backup-s3: access denied error when password contains ">" and --all-databases is used

When using the MySQL-to-S3 backup image with the --all-databases option and a complex password containing special characters, those characters cause escaping issues when the SHOW DATABASES command is run.

A password that contains a > in it causes the command to be terminated before it can issue the SHOW DATABASES part. The tool backs up the output and sends it to S3, but the output is really just a series of error messages and not the databases.

Hopefully I'll be able to make a pull request to resolve this at some point, but if I don't I figured I'd get the issue in place at least in case someone else is wondering why they are getting access denied errors when they shouldn't be.
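
A hedged sketch of one way to avoid the escaping problem: pass the credentials through a client options file instead of interpolating the password on the command line (the MYSQL_* variable names are illustrative, not necessarily the ones the image uses).

# write the credentials to an options file; quoting the password keeps ">" literal
cat > /tmp/client.cnf <<EOF
[client]
host=${MYSQL_HOST}
user=${MYSQL_USER}
password="${MYSQL_PASSWORD}"
EOF
mysqldump --defaults-extra-file=/tmp/client.cnf --all-databases | gzip > dump.sql.gz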

Back up all PostgreSQL databases

Can pg_dumpall be used instead of pg_dump if no database name is specified? I'm not truly familiar with dump/restore of PostgreSQL databases, but if you agree I can try to make a PR for this.
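
A minimal sketch of how backup.sh could branch (the POSTGRES_* variable names are assumptions based on typical usage, not necessarily the image's actual ones):

# dump everything when no single database is requested, otherwise keep using pg_dump
if [ -z "$POSTGRES_DATABASE" ]; then
  pg_dumpall -h "$POSTGRES_HOST" -U "$POSTGRES_USER" | gzip > dump.sql.gz
else
  pg_dump -h "$POSTGRES_HOST" -U "$POSTGRES_USER" "$POSTGRES_DATABASE" | gzip > dump.sql.gz
fi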

Beanstalkd error with docker-compose

Hi,
I need to run beanstalkd with docker-compose and I get this error:

/usr/bin/beanstalkd: unknown argument: beanstalkd

My docker-compose.yml code:

version: '3.1'
services:
  beanstalkd:
    image: schickling/beanstalkd
    volumes:
      - beanstalk-data:/data
    command: beanstalkd -p 11300 -b /data
    depends_on:
      - php
  beanstalkd-console:
    image: schickling/beanstalkd-console
    ports:
      - 2080:2080
    environment:
      BEANSTALKD_PORT_11300_TCP_ADDR: beanstalkd
    links:
      - beanstalkd

Any help would be appreciated.
Thanks in advance and good work @schickling.
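
A hedged note on the error: it suggests the image's ENTRYPOINT already invokes the beanstalkd binary, so the leading beanstalkd in command: is passed to it as an unknown argument. A quick check from the shell, passing only the flags:

# assuming the image's ENTRYPOINT is already /usr/bin/beanstalkd, pass flags only
docker run --rm -v beanstalk-data:/data schickling/beanstalkd -p 11300 -b /data
# in docker-compose this corresponds to:  command: -p 11300 -b /data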

`The SSL certificate is invalid`

Thanks for sharing this.

I've used this successfully before. However, today I am getting an "unable to fetch" error.

This occurs at RUN cargo build --release

Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by: [16/-17] The SSL certificate is invalid

Any ideas? Thanks!
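
Two common causes, offered only as guesses: a stale CA bundle in the base image, or a skewed clock in the build environment. Refreshing the certificates before cargo build is a cheap first check:

# refresh the CA bundle inside the (Debian-based) image before building
apt-get update && apt-get install -y --reinstall ca-certificates
update-ca-certificates
date   # sanity-check the container clock while you're at it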

swagger-ui: the URL to swagger.json in index.html should be relative

I use a reverse proxy, and http://domain/docs/ is the base URL for swagger-ui. The reverse proxy forwards all /docs/* requests to swagger-ui, and swagger-ui receives those requests with the /docs/ prefix stripped.
The run.sh uses /swagger.json (with a leading slash) as the URL in index.html, so swagger-ui ends up requesting http://domain/swagger.json, and I have to serve an additional swagger.json file from my reverse proxy.
If we change the URL to swagger.json, swagger-ui will request http://domain/docs/swagger.json instead; this fixes my issue and decouples my reverse proxy from swagger-ui.
Those who serve swagger-ui under the root path will not be affected.

Would you accept the pull request if I change /swagger.json to swagger.json in run.sh?
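
A sketch of what the change could look like inside run.sh; the exact string written into index.html is an assumption, so the pattern below is illustrative only.

# drop the leading slash so the browser resolves swagger.json relative to the
# path swagger-ui is served from (e.g. /docs/swagger.json behind the proxy)
sed -i 's|"/swagger.json"|"swagger.json"|g' index.html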

Got an error

mysqldump: Got error: 2002: "Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (2 "No such file or directory")" when trying to connect

How can I fix this?

Thanks
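
That error usually means mysqldump is trying the local Unix socket, which doesn't exist inside the backup container. A hedged sketch: point it at the database host over TCP (host name and credentials below are placeholders).

# connect over TCP to the database container instead of the local socket
mysqldump -h mysql -P 3306 -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases | gzip > dump.sql.gz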

swagger-ui with jwt auth

Hi, I'm trying to figure out what the "api_key" property of your Dockerfile does by looking at the source code of swagger-ui and your run.sh. However, I don't really see where the api_key is used in their code.
Reading this thread:
swagger-api/swagger-ui#818
there's a comment by Qwerios on how to make swagger-ui work with JWT. Does this seem like something I can do with your "API_KEY" property?
Thank you very much for your help and for the swagger-ui container

postgres 9.5 support

Supporting postgres 9.5 would be fantastic.

I've tried to rebuild with the latest Alpine Linux image but can't get it to install 9.5.

mysql-backup-s3: Container should not run as root

I quickly looked at the Dockerfile and the scripts it uses, and I think the image should contain a USER directive and run as a non-root user. This is a security issue, and running as root prevents the backup from running on clusters with a PodSecurityPolicy (namely OpenShift).
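
Until the image ships a USER directive, a possible stopgap (assuming the scripts don't strictly need root and file permissions allow it) is to force a non-root UID at run time:

# run the backup container as an arbitrary non-root UID, as OpenShift would
docker run --rm --user 1001:0 schickling/mysql-backup-s3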

Swagger-ui: prefix path

[root@static backend]# docker logs -f current_swagger-ui_1
Starting up http-server, serving ./
Available on:
  http://127.0.0.1:80
  http://172.20.0.5:80
Hit CTRL-C to stop the server
[Tue Feb 21 2017 15:38:28 GMT+0000 (UTC)] "GET /_doc/api/v1.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
[Tue Feb 21 2017 15:38:28 GMT+0000 (UTC)] "GET /_doc/api/v1.1" Error (404): "Not found"

How can I set a custom path prefix for the URL? Is that possible without changing the current project?

ability to set base url path

I wish to access beanstalkd-console in a path of my application:

https://myapp.com/utilities/beanstalkd-console

I have an ingress controller that routes that URL to the beanstalkd-console container.
However, the console is making many requests to paths that don't exist. For example:

https://myapp.com/assets/vendor/bootstrap/css/bootstrap.min.css
https://myapp.com/highlight/styles/magula.css
https://myapp.com/css/customer.css

The requests from the console need to be prefixed with utilities/ like this:
https://myapp.com/utilities/assets/vendor/bootstrap/css/bootstrap.min.css
https://myapp.com/utilities/highlight/styles/magula.css
https://myapp.com/utilities/css/customer.css

Is there any way to set this?

The -b parameter is not in effect when beanstalkd is started

FROM schickling/beanstalkd:latest
LABEL maintainer="Johannes Schickling [email protected]"

RUN apk add --no-cache beanstalkd

ENTRYPOINT ["/usr/bin/beanstalkd -b /usr/bin"]

Above is my Dockerfile; it always fails with this error: exec: "/usr/bin/beanstalkd -b /usr/bin": stat /usr/bin/beanstalkd -b /usr/bin: no such file or directory. (The exec-form ENTRYPOINT treats the whole quoted string as a single executable path; each argument needs to be a separate array element, e.g. ENTRYPOINT ["/usr/bin/beanstalkd", "-b", "<dir>"].)

How to run multiple mysql-backup-s3?

First of all, thank you for such great tools!!!

When trying to run more than one schickling/mysql-backup-s3 container I get:

2017/03/21 00:13:20 Running version: 8f7b187
2017/03/21 00:13:20 new cron: @daily
2017/03/21 00:13:20 ListenAndServe: listen tcp :18080: bind: address already in use

I have multiple DBs on one host... How do I run more than one instance?
Thanks.

mysql-backup-s3: compare md5sum before uploading

Not an issue, but a feature request.

Before uploading to S3, it is possible to check the md5sum of the last backup using a HEAD operation via s3api. If it doesn't match the md5sum of the new dump, upload the new backup with its md5sum as part of the metadata.

Alternatively, the same thing could be done locally, if the last one or two backups aren't deleted from the local filesystem immediately after upload.

This would save an incredible amount of storage on S3!
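
A rough sketch of the idea (the object key and metadata field names are made up for illustration; note that gzip embeds a timestamp, so gzip -n is needed for identical dumps to hash identically):

# hash the new dump and compare against the md5 stored with the previous upload
NEW_MD5=$(md5sum dump.sql.gz | cut -d' ' -f1)
OLD_MD5=$(aws s3api head-object --bucket "$S3_BUCKET" --key "$S3_PREFIX/latest.sql.gz" \
  --query 'Metadata.md5' --output text 2>/dev/null)
if [ "$NEW_MD5" != "$OLD_MD5" ]; then
  aws s3 cp dump.sql.gz "s3://$S3_BUCKET/$S3_PREFIX/latest.sql.gz" --metadata md5="$NEW_MD5"
fi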

postgres-backup-s3: pg_dump version mismatch, v10 not supported

2017/12/12 14:38:00 12 cmd: /bin/sh backup.sh
2017/12/12 14:38:00 12: Creating dump of postgres database from postgres...
2017/12/12 14:38:00 12: pg_dump:
2017/12/12 14:38:00 12: server version: 10.1; pg_dump version: 9.6.5
2017/12/12 14:38:00 12: pg_dump:
2017/12/12 14:38:00 12: aborting because of server version mismatch
2017/12/12 14:38:00 12 Exit Status: 1

Segmentation fault (core dumped)

After pulling the Dockerfile, running commands involving beanstalkd leads to a segmentation fault (core dumped). I have been searching for a solution but couldn't find one. I will provide any information necessary; any help would be appreciated.

Switch to swagger ui 3.0.3

The new version of swagger ui has been released and it fixes some bugs, so it would be great to have it.

Also, I'm quite new to GitHub, so I don't know if this is the place to ask such things.

mailcatcher image is (too?) big

I have just installed the mailcatcher image and I am surprised by its size. Is this normal?

$ docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
schickling/mailcatcher         latest              07f9b78a9b71        6 days ago          745.3 MB

mysql-backup-s3: custom S3 endpoints not working

S3_ENDPOINT env variable is not working for some reason.

When I set the S3_ENDPOINT value to UpCloud's endpoint, e.g. https://example.fi-hel2.upcloudobjects.com, I nevertheless get an error like this:

upload failed: - to s3://example-bucket/backup/2021-04-26T154437Z.dump.sql.gz 
Could not connect to the endpoint URL: "https://s3.fi-hel2.amazonaws.com/example-bucket/backup/2021-04-26T154437Z.dump.sql.gz?uploads"

where the S3 client falls back to the default Amazon endpoint.

I checked this backup.sh fragment and didn't find any issues, but for some reason the endpoint is not being applied:

copy_s3 () {
  SRC_FILE=$1
  DEST_FILE=$2

  if [ "${S3_ENDPOINT}" == "**None**" ]; then
    AWS_ARGS=""
  else
    AWS_ARGS="--endpoint-url ${S3_ENDPOINT}"
  fi

  echo "Uploading ${DEST_FILE} on S3..."

  cat $SRC_FILE | aws $AWS_ARGS s3 cp - s3://$S3_BUCKET/$S3_PREFIX/$DEST_FILE

  if [ $? != 0 ]; then
    >&2 echo "Error uploading ${DEST_FILE} on S3"
  fi

  rm $SRC_FILE
}

AWS_ARGS should be --endpoint-url https://example.fi-hel2.upcloudobjects.com, but it is not being used.
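
A quick isolation test (the container name is a placeholder): if the command below works from inside the running container, the copy_s3 logic is fine and the problem is how S3_ENDPOINT reaches the container's environment (e.g. it may still be set to the literal **None** default).

# exercise the same endpoint handling the script uses, directly from the container
docker exec -it <backup-container> sh -c 'echo "S3_ENDPOINT=$S3_ENDPOINT" && aws --endpoint-url "$S3_ENDPOINT" s3 ls "s3://$S3_BUCKET"'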

[Feature] MySQL restore - have a proof of concept version - comments please - PR later.

Hi @schickling ;

Just to say that I have created a restore container for MySQL. It's mixed in with another repo where I'm experimenting with MySQL cloud backup/restore:
https://github.com/PaddleHQ/rds-dump-restore/tree/master/s3-restore-mysql
If you want to grab the script, feel free; although some parts of that repo are under GPL licensing, I kept those files clearly under MIT. The current version only restores the latest backup, which fits my current tests, but I will likely improve that later.

If you have any suggestions for improvement or ways to better match your docker files I'll likely try to implement them. Please do comment. Once I have finished experimenting and assuming you are interested I will likely make a PR with my new script + some changes to the mysql backup script.
https://github.com/PaddleHQ/rds-dump-restore/tree/master/mysql-backup-s3

Mailcatcher image build has been failing for 2 months

I was looking at the build details and it's failing with:

Cloning into 'b9wxmtd55hihkgfssfcuett'...
KernelVersion: 3.13.0-40-generic
Os: linux
BuildTime: Mon Oct 12 05:37:18 UTC 2015
ApiVersion: 1.20
Version: 1.8.3
GitCommit: f4bf5c7
Arch: amd64
GoVersion: go1.4.2
Step 0 : FROM debian:wheezy
 ---> 5e2a9df259d2
Step 1 : MAINTAINER Johannes Schickling "[email protected]"
 ---> Running in 4962d918e17a
 ---> e49b67fb1f4b
Removing intermediate container 4962d918e17a
Step 2 : ADD install.sh install.sh
 ---> eb3ff94379a6
Removing intermediate container 6586c661a403
Step 3 : RUN chmod +x install.sh && ./install.sh && rm install.sh
 ---> Running in 904f1152b724
/bin/sh: 1: ./install.sh: Text file busy
Removing intermediate container 904f1152b724
The command '/bin/sh -c chmod +x install.sh && ./install.sh && rm install.sh' returned a non-zero code: 2
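
"Text file busy" here is the classic aufs/overlayfs race when a file is made executable and run in the same RUN step. A hedged workaround is to run the script through sh so the executable bit is never needed, i.e. change the RUN step to execute:

# invoke the installer via sh instead of chmod +x && ./install.sh
sh install.sh && rm install.sh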

add --no-quit to mailcatcher

Currently it's possible to quit mailcatcher via the web UI, which then terminates the docker container.

Fix: add option "--no-quit" to CMD
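
A sketch of the suggested CMD as a plain command line (the other flags mirror how the image presumably starts mailcatcher and are assumptions):

# keep mailcatcher in the foreground, listening on all interfaces, with the Quit button disabled
mailcatcher --foreground --ip 0.0.0.0 --no-quit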

mysql-backup-s3 rotate back-ups

It would be great if it were possible to rotate the back-ups. There are 3 scenarios and only 2 are covered so far.

  • Write back-up to S3 as new file
  • Write back-up to S3 and overwrite previous back-up

Now the last scenario I'd find useful would be:

  • Back up every x days/hours/... and, when n back-ups are reached, throw away the oldest back-up and add the newest.

This scenario would let us keep, for example, 3 versions. It doesn't flood the bucket with really old data, but it also prevents a back-up you still need from being overwritten.

For example:
Day 1: back-up created
Day 2: back-up overwritten
Day 3: back-up overwritten
Day 4: something goes terribly wrong with the data > back-up overwritten > the corruption is only noticed after the back-up.

If using the overwrite mode there is no way to go back to the data of day 3.
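
A rough sketch of the third scenario with plain aws-cli calls (S3 lifecycle rules or object versioning would be the more robust route; head -n -N assumes GNU coreutils):

# keep only the KEEP most recent backups under the prefix, delete the rest
KEEP=3
aws s3 ls "s3://$S3_BUCKET/$S3_PREFIX/" | sort | head -n -"$KEEP" | awk '{print $4}' |
while read -r key; do
  aws s3 rm "s3://$S3_BUCKET/$S3_PREFIX/$key"
done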

Undefined $S3_PREFIX results in nameless directory in s3

As per the title: by not setting S3_PREFIX I assumed all backups would go into the root of my dedicated backup bucket; however, the backup files end up under an unnamed directory when viewed in the AWS S3 console.

cat dump.sql.gz | aws s3 cp - s3://$S3_BUCKET/$S3_PREFIX/$(date +"%Y-%m-%dT%H:%M:%SZ").sql.gz || exit 2

I assume the extraneous / needs to be removed if $S3_PREFIX isn't provided.
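
A minimal sketch of the fix: only add the prefix segment (and its slash) when S3_PREFIX is actually set.

# build the destination key without a leading empty path segment
if [ -n "$S3_PREFIX" ]; then
  DEST="s3://$S3_BUCKET/$S3_PREFIX/$(date +"%Y-%m-%dT%H:%M:%SZ").sql.gz"
else
  DEST="s3://$S3_BUCKET/$(date +"%Y-%m-%dT%H:%M:%SZ").sql.gz"
fi
cat dump.sql.gz | aws s3 cp - "$DEST" || exit 2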

mysql-backup-s3: set '-o pipefail' to catch errors when database not found

Hi,

You have 'set -e' at the top of the backup.sh script.

Right now, the mysqldump ...|gzip... command won't fail if mysqldump returns an error.

If you add

set -eo pipefail

It'll catch that. You could also use set -euo pipefail to catch unset variables too.

I assume this affects other images you have too.

swagger-ui: SWAGGER_JSON-value not used?

Heyho,

Just found your helpful swagger-ui docker image. It could really help me out.

But one question: why is the value of SWAGGER_JSON just checked and not used? I set up a path like /app/subfolder_with_json/swagger.json and pointed SWAGGER_JSON at it, but it has no effect :(

Any ideas?

All the best,
Florian

postgres-backup-s3: V11 not supported

Backing up a PostgreSQL v11.2 database fails. From the logs:

2019/03/31 00:00:00 17 cmd: /bin/sh backup.sh
2019/03/31 00:00:00 17: Creating dump of example database from db...
2019/03/31 00:00:00 17: pg_dump: 
2019/03/31 00:00:00 17: server version: 11.2; pg_dump version: 10.4
2019/03/31 00:00:00 17: pg_dump: 
2019/03/31 00:00:00 17: aborting because of server version mismatch
2019/03/31 00:00:00 17 Exit Status: 1

It seems that the Docker image is a bit outdated and should be rebuilt.
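
A quick way to confirm the mismatch before and after a rebuild (overriding the entrypoint so only the client version is printed; connection details are placeholders):

# client version bundled in the backup image
docker run --rm --entrypoint pg_dump schickling/postgres-backup-s3 --version
# server version, for comparison
psql -h db -U postgres -c 'SHOW server_version;'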

gdb in the rust image is too old to use as rust-gdb

Hi!

First of all, thank you for the up-to-date rust images :)

I tried to use rust-gdb in your image, but it doesn't recognize the -iex command line option, which is used in /usr/local/bin/rust-gdb.

Upgrading to jessie's gdb solves this problem, so maybe changing the base image to jessie wouldn't be a bad idea. What do you think?

swagger-ui: Ability to set custom validator URL

I want to be able to override the validatorUrl property to point to a custom validator. In my case, this is a container created from the swaggerapi/swagger-validator image.

I can submit a PR for this.
