
a script to run docker-compose.yml using podman

License: GNU General Public License v2.0


podman-compose's Introduction

Podman Compose


An implementation of the Compose Spec with a Podman backend. This project focuses on:

  • rootless operation
  • a daemon-less process model: it executes podman directly, with no running daemon

This project only depends on:

  • podman
  • podman dnsname plugin: usually found in the podman-plugins or podman-dnsname distro packages. These packages are not pulled in by default, so you need to install them. The plugin lets containers on the same CNI network resolve each other by name. It is not needed when podman uses netavark as its network backend.
  • Python3
  • PyYAML
  • python-dotenv

The tool itself is a single Python script that you can drop into your PATH and run.


Alternatives

As described in this article, you can set up a podman.socket and use an unmodified docker-compose that talks to that socket, but in that case you lose the process model (e.g. docker-compose build will send a possibly large context tarball to the daemon).

For a production-like single-machine containerized environment, consider:

For the real thing (multi-node clusters), check any production OpenShift/Kubernetes distribution, like OKD.

Versions

If you have a legacy version of podman (before 3.1.0), you might need to stick with the legacy podman-compose 0.1.x branch. The legacy 0.1.x branch uses mappings and workarounds to compensate for rootless limitations.

Modern podman versions (>=3.4) do not have those limitations, so you can use the latest stable 1.x branch.

If you are upgrading from podman-compose 0.1.x, note that the global option -t for setting a mapping type like hostnet has been removed. If you want that behavior, request it the standard way, e.g. network_mode: host in the YAML.
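For example, instead of the old -t hostnet global option, a service can declare host networking itself (a minimal, hypothetical compose fragment):

```yaml
services:
  web:
    image: docker.io/library/nginx:alpine
    network_mode: host
```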

Installation

Pip

Install the latest stable version from PyPI:

pip3 install podman-compose

Pass --user to install into your regular user's home directory without being root.

Or install the latest development version from GitHub:

pip3 install https://github.com/containers/podman-compose/archive/main.tar.gz

Homebrew

brew install podman-compose

Manual

curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
chmod +x /usr/local/bin/podman-compose

or inside your home directory:

curl -o ~/.local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
chmod +x ~/.local/bin/podman-compose

or install it from the Fedora repositories (starting from Fedora 31):

sudo dnf install podman-compose

Basic Usage

We have included fully functional sample stacks inside the examples/ directory. You can find more examples in awesome-compose.

A quick example would be

cd examples/busybox
podman-compose --help
podman-compose up --help
podman-compose up

A richer example can be found in examples/awx3, which includes:

  • a PostgreSQL database
  • a RabbitMQ server
  • a Memcached server
  • a Django web server
  • a Django task runner

When testing the AWX3 example, if you get errors, just wait for the database migrations to finish. There is also an AWX 17.1.0 example.

Tests

The tests/ directory contains many deliberately trivial docker-compose stacks, meant to exercise as many cases as possible to make sure we stay compatible.

Unit tests with unittest

Run the unit tests with the following command:

python -m unittest pytests/*.py

Contributing guide

If you are a user or a developer and want to contribute, please check the CONTRIBUTING section.

podman-compose's People

Contributors

aripollak, aviduda, barseghyanartur, baszoetekouw, breca, bugfest, charliemirabile, dependabot[bot], dixonwhitmire, dwt, evedel, falmarri, hedayat, hernandor, howlowck, husio, jbaptperez, jotelha, mariushoch, mohd-akram, mokibit, muayyad-alsadi, muz, ohxodi, p12tic, tayeh, tjikkun, vansari, white-gecko, yarikoptic


podman-compose's Issues

podman-compose run seems not to boot dependencies

Steps to reproduce:

  1. Clone https://github.com/Tecnativa/doodba-scaffolding
  2. ln -s devel.yaml docker-compose.yml
  3. podman-compose build
  4. podman-compose run --rm odoo psql -l
Logs
➤ podman-compose run --rm odoo psql -l
podman pod create --name=doodbadevel12 --share net -p 0.0.0.0:12069:8069 -p 127.0.0.1:6899:6899 -p 127.0.0.1:8025:8025 -p 127.0.0.1:1984:1984
Error: unable to create pod: error adding pod to state: name doodbadevel12 is in use: pod already exists
125
Namespace(T=False, cnt_command=['psql', '-l'], command='run', detach=False, dry_run=False, e=None, entrypoint=None, file=['docker-compose.yml'], label=None, name=None, no_ansi=False, no_cleanup=False, no_deps=False, podman_path='podman', project_name=None, publish=None, rm=True, service='odoo', service_ports=False, transform_policy='1podfw', user=None, volume=None, workdir=None)
podman volume inspect doodbadevel12_filestore || podman volume create doodbadevel12_filestore
podman run --rm -i --name=doodbadevel12_odoo_tmp16894 --pod=doodbadevel12 --label traefik.docker.network=inverseproxy_shared --label traefik.enable=true --label traefik.frontend.passHostHeader=true --label traefik.longpolling.port=8072 --label traefik.port=8069 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=odoo -e EMAIL_FROM -e PGDATABASE=devel -e PGUSER=odoo -e DB_FILTER=.* -e PROXY_MODE=true -e DOODBA_ENVIRONMENT=devel -e LIST_DB=true -e PTVSD_ENABLE=0 -e PYTHONOPTIMIZE -e PYTHONPATH=/opt/odoo/custom/src/odoo -e SMTP_PORT=1025 -e WITHOUT_DEMO=false --mount type=bind,source=/home/yajo/.local/share/containers/storage/volumes/doodbadevel12_filestore/_data,destination=/var/lib/odoo,bind-propagation=z --mount type=bind,source=/home/yajo/Documentos/devel/tecnativa/doodbadevel12/./odoo/custom,destination=/opt/odoo/custom,bind-propagation=z,ro --mount type=bind,source=/home/yajo/Documentos/devel/tecnativa/doodbadevel12/./odoo/auto/addons,destination=/opt/odoo/auto/addons,bind-propagation=z --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 --hostname example.com --tty 12.0 psql -l
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126

Using podman-compose up -d doesn't help either; it might be the same issue:

podman-compose up -d
podman pod create --name=doodbadevel12 --share net -p 127.0.0.1:1984:1984 -p 127.0.0.1:8025:8025 -p 0.0.0.0:12069:8069 -p 127.0.0.1:6899:6899
Error: unable to create pod: error adding pod to state: name doodbadevel12 is in use: pod already exists
125
podman volume inspect doodbadevel12_db || podman volume create doodbadevel12_db
Error: no volume with name "doodbadevel12_db" found: no such volume
podman run --name=doodbadevel12_db_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e POSTGRES_DB=devel -e POSTGRES_USER=odoo -e CONF_EXTRA=work_mem = 32MB -e POSTGRES_PASSWORD=odoopassword --mount type=bind,source=/home/yajo/.local/share/containers/storage/volumes/doodbadevel12_db/_data,destination=/var/lib/postgresql/data,bind-propagation=z --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 --shm_size 512mb tecnativa/postgres-autoconf:10-alpine
Error: unknown flag: --shm_size
125
podman run --name=doodbadevel12_smtp_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=smtp --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 mailhog/mailhog
Trying to pull docker.io/mailhog/mailhog...
Getting image source signatures
Copying blob d6a5679aa3cf done
Copying blob b96c5d9bff5f done
Copying blob a1300bbb94d5 done
Copying blob 0f03c49950cb done
Copying config e00a21e210 done
Writing manifest to image destination
Storing signatures
ERRO[0007] error starting some container dependencies   
ERRO[0007] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_wdb_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=wdb --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 yajo/wdb-server
Trying to pull docker.io/yajo/wdb-server...
Getting image source signatures
Copying blob c74d77b2e916 done
Copying blob 2f8f143a8987 done
Copying blob ff3a5c916c92 done
Copying blob 486bba6fdbf5 done
Copying blob 02b100ec4a6d done
Copying blob 44014a6ad6bc done
Copying config b999d7aa12 done
Writing manifest to image destination
Storing signatures
ERRO[0017] error starting some container dependencies   
ERRO[0017] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_cdnjs_cloudflare_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=cdnjs_cloudflare_proxy -e TARGET=cdnjs.cloudflare.com -e PRE_RESOLVE=1 --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
Trying to pull docker.io/tecnativa/whitelist...
Getting image source signatures
Copying blob 921b31ab772b done
Copying blob ec0818a7bbe4 done
Copying blob 1a0c422ed526 done
Copying blob b53197ee35ff done
Copying blob 7d401e323f1c done
Copying blob 8b25717b4dbf done
Copying blob f569732042c7 done
Copying config 38225c953b done
Writing manifest to image destination
Storing signatures
ERRO[0021] error starting some container dependencies   
ERRO[0021] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_fonts_googleapis_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=fonts_googleapis_proxy -e TARGET=fonts.googleapis.com -e PRE_RESOLVE=1 --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_fonts_gstatic_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=fonts_gstatic_proxy -e TARGET=fonts.gstatic.com -e PRE_RESOLVE=1 --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_google_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=google_proxy -e TARGET=www.google.com -e PRE_RESOLVE=1 --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
ERRO[0001] error starting some container dependencies   
ERRO[0001] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_gravatar_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=gravatar_proxy -e TARGET=www.gravatar.com -e PRE_RESOLVE=1 --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman volume inspect doodbadevel12_filestore || podman volume create doodbadevel12_filestore
podman run --name=doodbadevel12_odoo_1 -d --pod=doodbadevel12 --label traefik.docker.network=inverseproxy_shared --label traefik.enable=true --label traefik.frontend.passHostHeader=true --label traefik.longpolling.port=8072 --label traefik.port=8069 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=odoo -e EMAIL_FROM -e PGDATABASE=devel -e PGUSER=odoo -e DB_FILTER=.* -e PROXY_MODE=true -e DOODBA_ENVIRONMENT=devel -e LIST_DB=true -e PTVSD_ENABLE=0 -e PYTHONOPTIMIZE -e PYTHONPATH=/opt/odoo/custom/src/odoo -e SMTP_PORT=1025 -e WITHOUT_DEMO=false --mount type=bind,source=/home/yajo/.local/share/containers/storage/volumes/doodbadevel12_filestore/_data,destination=/var/lib/odoo,bind-propagation=z --mount type=bind,source=/home/yajo/Documentos/devel/tecnativa/doodbadevel12/./odoo/custom,destination=/opt/odoo/custom,bind-propagation=z,ro --mount type=bind,source=/home/yajo/Documentos/devel/tecnativa/doodbadevel12/./odoo/auto/addons,destination=/opt/odoo/auto/addons,bind-propagation=z --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 --hostname example.com --tty 12.0 odoo --limit-memory-soft=0 --limit-time-real-cron=9999999 --limit-time-real=9999999 --workers=0 --dev=reload,qweb,werkzeug,xml
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126
podman run --name=doodbadevel12_odoo_proxy_1 -d --pod=doodbadevel12 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=doodbadevel12 --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=odoo_proxy -e PORT=6899 8069 -e TARGET=odoo --add-host odoo_proxy:127.0.0.1 --add-host doodbadevel12_odoo_proxy_1:127.0.0.1 --add-host odoo:127.0.0.1 --add-host doodbadevel12_odoo_1:127.0.0.1 --add-host db:127.0.0.1 --add-host doodbadevel12_db_1:127.0.0.1 --add-host smtp:127.0.0.1 --add-host doodbadevel12_smtp_1:127.0.0.1 --add-host wdb:127.0.0.1 --add-host doodbadevel12_wdb_1:127.0.0.1 --add-host cdnjs_cloudflare_proxy:127.0.0.1 --add-host doodbadevel12_cdnjs_cloudflare_proxy_1:127.0.0.1 --add-host fonts_googleapis_proxy:127.0.0.1 --add-host doodbadevel12_fonts_googleapis_proxy_1:127.0.0.1 --add-host fonts_gstatic_proxy:127.0.0.1 --add-host doodbadevel12_fonts_gstatic_proxy_1:127.0.0.1 --add-host google_proxy:127.0.0.1 --add-host doodbadevel12_google_proxy_1:127.0.0.1 --add-host gravatar_proxy:127.0.0.1 --add-host doodbadevel12_gravatar_proxy_1:127.0.0.1 tecnativa/whitelist
ERRO[0000] error starting some container dependencies   
ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]" 
Error: error starting some containers: internal libpod error
126

build: OSError: Dockerfile not found in .

See the error:

# Cloned commit: f75d36ac141ede5dded989fcc158f9c02d143362
[root@cc701f6ac6f1 ~]# git clone https://github.com/Tecnativa/doodba-scaffolding
Cloning into 'doodba-scaffolding'...
remote: Enumerating objects: 21, done.
remote: Counting objects: 100% (21/21), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 1000 (delta 11), reused 11 (delta 8), pack-reused 979
Receiving objects: 100% (1000/1000), 158.13 KiB | 5.65 MiB/s, done.
Resolving deltas: 100% (648/648), done.

[root@cc701f6ac6f1 ~]# cd doodba-scaffolding/

[root@cc701f6ac6f1 doodba-scaffolding]# pip3 install podman-compose
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting podman-compose
  Downloading https://files.pythonhosted.org/packages/d8/1a/4eed53406776275302a9325555a3c389c7ad8fa35ab287e6d93c041b7de7/podman_compose-0.1.5-py2.py3-none-any.whl
Collecting pyyaml (from podman-compose)
  Downloading https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz (265kB)
     |████████████████████████████████| 266kB 22.9MB/s 
Installing collected packages: pyyaml, podman-compose
  Running setup.py install for pyyaml ... done
Successfully installed podman-compose-0.1.5 pyyaml-5.1.2

[root@cc701f6ac6f1 doodba-scaffolding]# podman-compose -f devel.yaml build
Traceback (most recent call last):
  File "/usr/local/bin/podman-compose", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 1093, in main
    podman_compose.run()
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 625, in run
    cmd(self, args)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 782, in wrapped
    return func(*args, **kw)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 850, in compose_build
    build_one(compose, args, cnt)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 834, in build_one
    raise OSError("Dockerfile not found in "+ctx)
OSError: Dockerfile not found in .

envsubst not working "Syntax error: Unterminated quoted string"

With nginx containers it is useful to be able to pass environment variables directly into the container's config file. A workaround to achieve that is to use envsubst in combination with something like printf or awk and double-dollar quoting. Some of the solutions are discussed in docker-library/docs#496.
With podman-compose I get the following error:

\"`env: 1: \"`env: Syntax error: Unterminated quoted string

if I try to use something like:

command: sh -c "envsubst \"`env | awk -F = '{printf \" $$%s\", $$1}'`\" < /etc/nginx/conf.d/web.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

as the command.
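For context, the Compose spec treats `$$` as an escape for a literal `$`, so the double-dollar variables in a command like this are meant to survive interpolation and reach the container's shell untouched. A minimal sketch of that escaping rule (the `interpolate` helper and its regex are illustrative, not podman-compose's real implementation):

```python
import re

def interpolate(value, env):
    # Compose-style interpolation: "$$" escapes to a literal "$";
    # $VAR and ${VAR} are substituted from the given environment.
    def repl(m):
        if m.group(0) == "$$":
            return "$"
        name = m.group(1) or m.group(2)
        return env.get(name, "")
    return re.sub(r"\$\$|\$\{(\w+)\}|\$(\w+)", repl, value)

print(interpolate('envsubst "$$PORT" < web.template', {"PORT": "8080"}))
# -> envsubst "$PORT" < web.template
```

If the tool mishandles this rule, the `$$` escapes collapse incorrectly and the shell sees a broken quoted string like the error above.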

TypeError: sequence item 6: expected str instance, int found

Trying to run https://github.com/puppetlabs/pupperware with podman-compose, and I can't even get it to start.

[renich@introdesk pupperware]$ podman-compose up
Traceback (most recent call last):
  File "/home/renich/.local/bin/podman-compose", line 10, in <module>
    sys.exit(main())
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 1093, in main
    podman_compose.run()
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 625, in run
    cmd(self, args)
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 782, in wrapped
    return func(*args, **kw)
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 895, in compose_up
    create_pods(compose, args)
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 862, in create_pods
    compose.podman.run(podman_args)
  File "/home/renich/.local/lib/python3.7/site-packages/podman_compose.py", line 585, in run
    print("podman " + " ".join(podman_args))
TypeError: sequence item 6: expected str instance, int found
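The join fails because values read from the YAML can be native ints (an unquoted port number, for example), while `str.join` only accepts strings. A hedged sketch of the obvious fix (`join_args` is a hypothetical helper, not the project's actual function):

```python
def join_args(podman_args):
    # YAML scalars like an unquoted port arrive as int, so coerce
    # every argument to str before joining for display.
    return "podman " + " ".join(str(a) for a in podman_args)

print(join_args(["pod", "create", "-p", 8140]))  # -> podman pod create -p 8140
```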

Compatibility spreadsheet

I've noticed https://github.com/muayyad-alsadi/podman-compose/blob/master/CONTRIBUTING.md#missing-commands-help-needed is incomplete.

So I parsed docker-compose commands and added to this spreadsheet https://file.io/1Gx0gD (alternative link: https://filebin.net/258jxepc4lefjmst/podman-compose.ods?t=mlsgj9yu)

Markdown tables are messy, so it's an .ods file. It should be easy to convert, either with some online service or by exporting to CSV and doing a quick replacement.

PS: you don't need to credit me or anything. I'm just trying to help, but I'm not much of a programmer.

Remove and start a new container if the compose file changes

docker-compose up -d usually removes the old container and starts a new one whenever docker-compose.yml has changed since the last up. Would it be possible to add this feature as well?
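One way compose tools implement this is by storing a hash of each service's resolved configuration as a container label (podman-compose already writes an io.podman.compose.config-hash label) and recreating the container when the stored hash differs from the current one. A sketch of that comparison, assuming JSON-serializable service configs (the `config_hash` helper is illustrative):

```python
import hashlib
import json

def config_hash(service_desc):
    # Serialize deterministically so identical configs always
    # produce identical hashes, then digest the result.
    blob = json.dumps(service_desc, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

old = config_hash({"image": "nginx", "ports": ["8080:80"]})
new = config_hash({"image": "nginx", "ports": ["8081:80"]})
print(old != new)  # True: config changed, container should be recreated
```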

unknown mount option delegated

After updating to Fedora 31 my docker containers stopped working, so I moved to podman.

I then added an if check to the docker-compose wrapper script, which detects whether docker-compose or podman-compose exists and uses the matching commands.
https://gitlab.com/foodsharing-dev/foodsharing/blob/podman/scripts/docker-compose

https://gitlab.com/foodsharing-dev/foodsharing/tree/podman/scripts
https://gitlab.com/foodsharing-dev/foodsharing/tree/podman/docker

After that I installed the podman-compose development version.

Unfortunately, I get an error message and the other containers do not start.

[christian@thinkpad-local foodsharing]$ ./scripts/start
podman pod create --name=foodsharing_dev --share net -p 18080:18080 -p 18084:80 -p 18090:8080 -p 11337:1337 -p 18086:8086 -p 8083:8083 -p 4000:3000 -p 11338:1338 -p 18089:8089/udp -p 16379:6379 -p 13306:3306 -p 18081:80
94e4b500ff56c871949c3dbc0c084f18cc50ddc06bb1309d162a6c2cea9a0b80
0
Namespace(T=False, cnt_command=['sh', '-c', 'chown -R 1000 /app/client/node_modules'], command='run', detach=False, dry_run=False, e=None, entrypoint=None, file=['/home/christian/git/foodsharing/scripts/../docker/docker-compose.dev.yml'], label=None, name=None, no_ansi=False, no_cleanup=False, no_deps=True, podman_path='podman', project_name='foodsharing_dev', publish=None, rm=True, service='client', service_ports=False, transform_policy='1podfw', user='root', volume=None, workdir=None)
Traceback (most recent call last):
  File "/usr/local/bin/podman-compose", line 11, in <module>
    load_entry_point('podman-compose==0.1.6.dev0', 'console_scripts', 'podman-compose')()
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 1264, in main
    podman_compose.run()
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 753, in run
    cmd(self, args)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 936, in wrapped
    return func(*args, **kw)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 1122, in compose_run
    podman_args = container_to_args(compose, cnt, args.detach)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 482, in container_to_args
    mount_args = mount_desc_to_args(compose, volume, cnt['_service'], cnt['name'])
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 406, in mount_desc_to_args
    if is_str(mount_desc): mount_desc=parse_short_mount(mount_desc, basedir)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 106, in parse_short_mount
    raise ValueError("unknown mount option "+opt)
ValueError: unknown mount option delegated

[christian@thinkpad-local foodsharing]$ podman pod ps
POD ID        NAME             STATUS   CREATED         # OF CONTAINERS  INFRA ID
94e4b500ff56  foodsharing_dev  Created  12 seconds ago  1                b4e5813a6a30

Podman cannot create (inspect) volume

I am trying to replace docker-compose with podman-compose. When running podman-compose up --build, the images are built successfully and the pod starts. Afterwards, when podman-compose tries to create a volume, the following error appears.

.........
podman pod create --name=vmaas --share net -p 5432:5432 -p 8082:8082 -p 8730:8730 -p 9090:9090 -p 3000:3000 -p 8080:8080 -p 8081:8081 -p 8083:8083
7929ba1407c8341d05650487a71743c3db0ad92ffe8834d1d78eb7dedfe731d8
0
podman volume inspect vmaas_vmaas-db-data || podman volume create vmaas_vmaas-db-data
Error: no volume with name "vmaas_vmaas-db-data" found: no such volume
Error: error creating volume directory "/home/mjurek/.local/share/containers/storage/volumes/vmaas_vmaas-db-data/_data": mkdir /home/mjurek/.local/share/containers/storage/volumes/vmaas_vmaas-db-data/_data: file exists
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 362, in mount_dict_vol_to_bind
    try: out = compose.podman.output(["volume", "inspect", vol_name])
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 582, in output
    return subprocess.check_output(cmd)
  File "/usr/lib64/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['podman', 'volume', 'inspect', 'vmaas_vmaas-db-data']' returned non-zero exit status 125.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/podman-compose", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 1093, in main
    podman_compose.run()
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 625, in run
    cmd(self, args)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 782, in wrapped
    return func(*args, **kw)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 898, in compose_up
    detached=args.detach, podman_command=podman_command)
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 457, in container_to_args
    mount_args = mount_desc_to_args(compose, volume, cnt['_service'], cnt['name'])
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 386, in mount_desc_to_args
    mount_desc = mount_dict_vol_to_bind(compose, fix_mount_dict(mount_desc, proj_name, srv_name))
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 364, in mount_dict_vol_to_bind
    compose.podman.output(["volume", "create", "-l", "io.podman.compose.project={}".format(proj_name), vol_name])
  File "/usr/local/lib/python3.7/site-packages/podman_compose.py", line 582, in output
    return subprocess.check_output(cmd)
  File "/usr/lib64/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['podman', 'volume', 'create', '-l', 'io.podman.compose.project=vmaas', 'vmaas_vmaas-db-data']' returned non-zero exit status 125.

podman version:

podman==1.6.0

podman-compose version:

podman-compose==0.1.6.dev0

docker-compose.yml:

version: '3'

services:
  vmaas_database:
    container_name: vmaas-database
    build:
        context: .
        dockerfile: ./database/Dockerfile
    image: vmaas/database:latest
    restart: unless-stopped
    env_file:
      - ./conf/database-connection-admin.env
    ports:
      - 5432:5432
    volumes:
      - vmaas-db-data:/var/lib/pgsql/data

  vmaas_websocket:
    container_name: vmaas-websocket
    build:
        context: .
        dockerfile: ./websocket/Dockerfile
    image: vmaas/websocket:latest
    restart: unless-stopped
    ports:
      - 8082:8082

  vmaas_reposcan:
    container_name: vmaas-reposcan
    build:
        context: .
        dockerfile: ./reposcan/Dockerfile
    image: vmaas/reposcan:latest
    restart: unless-stopped
    env_file:
      - ./conf/database-connection-writer.env
      - ./conf/reposcan.env
    ports:
      - 8081:8081
      - 8730:8730
    volumes:
      - vmaas-reposcan-tmp:/tmp
      - vmaas-dump-data:/data:z
    depends_on:
      - vmaas_websocket
      - vmaas_database

  vmaas_webapp:
    container_name: vmaas-webapp
    build:
        context: .
        dockerfile: ./webapp/Dockerfile
    image: vmaas/webapp:latest
    restart: unless-stopped
    env_file:
      - ./conf/webapp.env
    ports:
      - 8080:8080
    depends_on:
      - vmaas_websocket
      - vmaas_reposcan

    
  vmaas_webapp_utils:
    container_name: vmaas-webapp-utils
    build:
        context: .
        dockerfile: ./webapp_utils/Dockerfile
    image: vmaas/webapp_utils:latest
    restart: unless-stopped
    env_file:
      - ./conf/webapp_utils.env
      - ./conf/database-connection-reader.env
    ports:
      - 8083:8083
    depends_on:
      - vmaas_webapp


  vmaas_prometheus:
    container_name: vmaas-prometheus
    image: prom/prometheus:v2.1.0
    volumes:
      - prometheus-data:/prometheus
      - ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    security_opt:
      - label=disable
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    depends_on:
      - vmaas_reposcan
      - vmaas_webapp
    restart: always

  vmaas_grafana:
    container_name: vmaas-grafana
    image: grafana/grafana:6.2.5
    volumes:
      - grafana-data:/var/lib/grafana
      - ./monitoring/grafana/provisioning/:/etc/grafana/provisioning/
    depends_on:
      - vmaas_prometheus
    ports:
      - 3000:3000
    env_file:
      - ./monitoring/grafana/grafana.conf
    user: "104"
    restart: always

volumes:
  vmaas-db-data:
  vmaas-dump-data:
  vmaas-reposcan-tmp:
  prometheus-data:
  grafana-data:
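The volume failure earlier in this report comes from the `podman volume inspect X || podman volume create X` pattern, which assumes create will succeed whenever inspect fails; here a stale `_data` directory makes both calls fail. A minimal sketch of a more defensive helper (hypothetical names, not podman-compose's actual code):

```python
import json
import subprocess

def ensure_volume(vol_name, proj_name, run=subprocess.run):
    """Inspect the volume; create it only if inspect fails; raise a
    readable error when create fails too (e.g. a leftover _data dir)."""
    inspect = run(["podman", "volume", "inspect", vol_name],
                  capture_output=True, text=True)
    if inspect.returncode != 0:
        create = run(["podman", "volume", "create", "-l",
                      "io.podman.compose.project=" + proj_name, vol_name],
                     capture_output=True, text=True)
        if create.returncode != 0:
            raise RuntimeError("cannot create volume {}: {}".format(
                vol_name, create.stderr.strip()))
        inspect = run(["podman", "volume", "inspect", vol_name],
                      capture_output=True, text=True)
    return json.loads(inspect.stdout)[0]
```

Surfacing the create error directly, instead of letting check_output raise, would at least make the root cause (the leftover volume directory) visible to the user.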

TypeError: the JSON object must be str, not 'bytes'

Traceback (most recent call last):
  File "/usr/local/bin/podman-compose", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 1093, in main
    podman_compose.run()
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 625, in run
    cmd(self, args)
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 782, in wrapped
    return func(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 898, in compose_up
    detached=args.detach, podman_command=podman_command)
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 457, in container_to_args
    mount_args = mount_desc_to_args(compose, volume, cnt['_service'], cnt['name'])
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 386, in mount_desc_to_args
    mount_desc = mount_dict_vol_to_bind(compose, fix_mount_dict(mount_desc, proj_name, srv_name))
  File "/usr/local/lib/python3.5/dist-packages/podman_compose.py", line 366, in mount_dict_vol_to_bind
    src = json.loads(out)[0]["mountPoint"]
  File "/usr/lib/python3.5/json/__init__.py", line 312, in loads
    s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
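This traceback is specific to Python 3.5, where json.loads() only accepts str while subprocess.check_output() returns bytes (from 3.6 on, json.loads also accepts bytes, which is why newer installs don't hit it). A minimal sketch of the fix, decoding before parsing (hypothetical wrapper name):

```python
import json
import subprocess

def podman_output_json(cmd):
    """Run a command and parse its JSON output. Decoding the bytes
    first keeps this working on Python 3.5, where json.loads()
    rejects bytes input."""
    out = subprocess.check_output(cmd)
    return json.loads(out.decode("utf-8"))
```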

RuntimeError: Set changed size during iteration

Hello, I seem to be having an issue, and I'm not sure what to try next. (I understand this is a WIP, but I would really like a way to move from docker to podman for my next big server install.)
I have tried running as my user, with sudo, and as root. I get the same error every time.

The command I'm using is:
./podman-compose.py -t 1podfw -f docker-compose.yml up
Both podman-compose and the docker-compose file are currently in the same location (my user home directory).
The compose file is for installing the mist.io platform. (https://github.com/mistio/mist-ce/releases/tag/v4.1.0)

The following is the error I get:

Traceback (most recent call last):
  File "./podman-compose.py", line 747, in <module>
    main()
  File "./podman-compose.py", line 742, in main
    podman_path=args.podman_path
  File "./podman-compose.py", line 672, in run_compose
    flat_deps(container_names_by_service, container_by_name)
  File "./podman-compose.py", line 506, in flat_deps
    rec_deps(services, container_by_name, cnt, cnt.get('_service'))
  File "./podman-compose.py", line 484, in rec_deps
    for dep in deps:
RuntimeError: Set changed size during iteration

Any ideas on how to proceed?
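For what it's worth, this error means a set is being mutated while it is iterated: rec_deps adds to the same deps set it loops over. A minimal reproduction and the usual fix, iterating over a snapshot (hypothetical names):

```python
def expand_deps(deps, dep_map):
    """Merge each dependency's own dependencies into `deps`.
    Looping over the set directly while calling deps.update() raises
    RuntimeError('Set changed size during iteration'); looping over a
    snapshot (list(deps)) avoids that."""
    for dep in list(deps):
        deps.update(dep_map.get(dep, ()))
    return deps
```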

RecursionError: maximum recursion depth exceeded while calling a Python object

git clone git@github.com:Tecnativa/doodba-scaffolding.git
cd doodba-scaffolding
podman-compose -f devel.yaml up -d
Traceback (most recent call last):
  File "/usr/bin/podman-compose", line 11, in <module>
    load_entry_point('podman-compose==0.1.6.dev0', 'console_scripts', 'podman-compose')()
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 1263, in main
    podman_compose.run()
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 738, in run
    self._parse_compose_file()
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 818, in _parse_compose_file
    flat_deps(services, with_extends=True)
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 613, in flat_deps
    rec_deps(services, name)
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 592, in rec_deps
    new_deps = rec_deps(services, dep_name, start_point)
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 592, in rec_deps
    new_deps = rec_deps(services, dep_name, start_point)
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 592, in rec_deps
    new_deps = rec_deps(services, dep_name, start_point)
  [Previous line repeated 991 more times]
  File "/usr/lib/python3.7/site-packages/podman_compose.py", line 585, in rec_deps
    for dep_name in deps.copy():
RecursionError: maximum recursion depth exceeded while calling a Python object

This is on Fedora, podman-compose-0.1.5-1.git20191030.fc31.noarch

Other docker-compose files seem to work, including from this project.
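The 991 repeated frames suggest devel.yaml contains a depends_on (or links) cycle that rec_deps follows forever. A sketch of cycle-safe resolution using a visited set (assumed service structure, not the project's actual code):

```python
def flat_deps(services):
    """Resolve transitive depends_on with a `seen` set, so a dependency
    cycle terminates instead of exhausting the call stack."""
    def rec(name, seen):
        deps = set(services.get(name, {}).get("depends_on", []))
        for dep in list(deps):
            if dep in seen:        # cycle or already expanded: stop here
                continue
            deps |= rec(dep, seen | {name})
        return deps
    return {name: rec(name, set()) for name in services}
```

On a cycle this simply stops expanding at the first repeated service (so a service on a cycle can end up listed among its own transitive dependencies).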

Build: Use intermediate images

When running podman-compose build, it seems that it doesn't use the intermediate images created during a build but rather rebuilds everything every time. When trying a manual build (using podman build) I can see that it reuses the cached intermediate images, so these images are produced and kept around at least, but podman-compose build seems to ignore them.
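A plausible fix, assuming the difference comes down to a missing flag: podman build reuses cached layers when invoked with --layers (also controllable via the BUILDAH_LAYERS environment variable), so the compose wrapper could pass it explicitly. A sketch of the argument construction (hypothetical helper, not podman-compose's actual code):

```python
def build_args(service, tag):
    """Construct a `podman build` invocation that passes --layers so
    cached intermediate images are reused, matching a manual build."""
    args = ["podman", "build", "--layers", "-t", tag]
    build = service.get("build", ".")
    if isinstance(build, str):          # short form: build: <context>
        build = {"context": build}
    if build.get("dockerfile"):
        args += ["-f", build["dockerfile"]]
    args.append(build.get("context", "."))
    return args
```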

Can't `up` or `build` anymore

$ podman-compose build
Traceback (most recent call last):
  File "/home/thomas/bin/podman-compose", line 1094, in <module>
    main()
  File "/home/thomas/bin/podman-compose", line 1091, in main
    podman_compose.run()
  File "/home/thomas/bin/podman-compose", line 623, in run
    cmd(self, args)
  File "/home/thomas/bin/podman-compose", line 780, in wrapped
    return func(*args, **kw)
  File "/home/thomas/bin/podman-compose", line 848, in compose_build
    build_one(compose, args, cnt)
  File "/home/thomas/bin/podman-compose", line 820, in build_one
    if getattr(args, 'if_not_exists'):
AttributeError: 'Namespace' object has no attribute 'if_not_exists'

on current master (2246204)
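The crash is a bare getattr() without a default: the build subparser's Namespace simply has no if_not_exists attribute. A sketch of the one-line fix (hypothetical helper name):

```python
def build_policy(args):
    """Use getattr() with a default so argparse Namespaces that lack
    `if_not_exists` (e.g. those built by `build`) don't crash."""
    if getattr(args, "if_not_exists", False):
        return "skip-if-image-exists"
    return "build"
```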

Inconsistent volume naming convention with docker-compose

Docker-compose names volumes so that they start with the project name. For instance, a NextCloud docker-compose YAML

version: '3'

volumes:
  db:

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql

should create a volume named nextcloud_db instead of db (the current result), which would be consistent with the container names. The current result causes some confusion.

Note: this problem only affects named volumes, not bind-mounts.
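The requested convention can be sketched in one helper: named volumes get the same <project>_ prefix that containers already get:

```python
def default_volume_name(proj_name, vol_name):
    """docker-compose's convention: prefix named volumes with the
    project name, mirroring <project>_<service>_1 container names."""
    return "{}_{}".format(proj_name, vol_name)
```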

support passing -f multiple times

"menderproduction" is the value being passed with the -p switch; however, it is prefixing the image URI, which makes pulling images fail. If I use podman on its own I can pull the image manually.

Trying to pull docker.io/library/menderproduction_minio...
denied: requested access to the resource is denied
Trying to pull registry.fedoraproject.org/menderproduction_minio...
manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/menderproduction_minio...
name unknown: Repo not found
Error: unable to pull menderproduction_minio: 3 errors occurred:
* Error initializing source docker://menderproduction_minio:latest: Error reading manifest latest in docker.io/library/menderproduction_minio: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
* Error initializing source docker://registry.fedoraproject.org/menderproduction_minio:latest: Error reading manifest latest in registry.fedoraproject.org/menderproduction_minio: manifest unknown: manifest unknown
* Error initializing source docker://registry.access.redhat.com/menderproduction_minio:latest: Error reading manifest latest in registry.access.redhat.com/menderproduction_minio: name unknown: Repo not found
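The underlying rule in docker-compose is that the project prefix applies only to images it builds itself; a service with an explicit image: key must be pulled under exactly that name. A sketch of that resolution (hypothetical helper, not podman-compose's actual code):

```python
def resolve_image(proj_name, service_name, service):
    """Only locally built images get the <project>_<service> name; an
    explicit `image:` key is used as-is and never project-prefixed."""
    if "image" in service:
        return service["image"]
    # no image: key, so this must be a built service
    return "{}_{}".format(proj_name, service_name)
```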

[question] best way to run multiple instances of x?

Hi,

I'm running multiple instances of mysql/mariadb on the default port in docker-compose; what is the best way to run them in podman-compose? Do I need to change the ports, because podman is not able to give each container a separate IP?

build with string values

As seen here, build might have a string value:

version: '3'
services:
  web:
    build: .

which is to be interpreted as

version: '3'
services:
  web:
    build:
      context: .
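Normalizing this short form up front keeps the rest of the code dealing with only one shape. A sketch:

```python
def normalize_service(service):
    """Expand the short `build: .` form into the dict form
    `build: {context: .}` before any other processing."""
    build = service.get("build")
    if isinstance(build, str):
        service = dict(service, build={"context": build})
    return service
```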

Run command works from podman cmd but not as docker-compose file

Hi,
another problem from my side.
I have a docker-compose file which looks like this:

version: "2"
services:
  node:
    image: "node:latest"
    user: "node"
    working_dir: /home/node/app
    volumes:
      - ./:/home/node/app:Z
    ports:
      - '3000:3000'
    command: bash -c "npm install && node index.js"

The command that you see at the end works when invoking podman like this: podman run -it --rm -p 3000:3000 --name NodeTest -v "$PWD":/home/node/app:Z -w /home/node/app node:latest bash -c "npm install && node index.js" which should be equivalent to what my docker-compose does.
When I run the command everything works as expected (apart from not being able to kill it...)
But when I run podman-compose up, this happens:

podman start -a NodeTest_node_1
install: -c: line 0: unexpected EOF while looking for matching `"'
install: -c: line 1: syntax error: unexpected end of file
1

Do you have any idea what could be causing this?

With best regards
Mario

error adding pod to state: name root is in use: pod already exists

error adding pod to state: name root is in use: pod already exists

version: "2"

services:
  s1:
    hostname: s1
    .....
  s2:
    hostname: s2
    .....

# podman-compose -f /root/1234.yml up -d
podman pod create --name=root --share net -p 1234:4321/udp -p 1234:4321/tcp
podman run --name=root_s1_1 -d --pod=root --label io.podman.compose.config-hash=123 --label io.podman.compose.project=root --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=s1 ..... --hostname s1 

# podman-compose -f /root/2345.yml up -d
podman pod create --name=root --share net -p 2345:4321/udp -p 2345:4321/tcp
error adding pod to state: name root is in use: pod already exists
125
podman run --name=root_s2_1 -d --pod=root --label io.podman.compose.config-hash=123 --label io.podman.compose.project=root --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=s2 ..... --hostname s2
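What happens here: with no -p/--project-name, the project (and thus the pod) name is derived from the directory holding the compose file, and both files live in /root, so both runs try to create a pod named root. A sketch of that defaulting (assumed logic):

```python
import os

def default_project_name(compose_file, explicit=None):
    """The project/pod name falls back to the directory containing the
    compose file, so /root/1234.yml and /root/2345.yml both resolve to
    'root' unless -p/--project-name is given."""
    if explicit:
        return explicit
    return os.path.basename(os.path.dirname(os.path.abspath(compose_file)))
```

The workaround follows directly: run each file with its own explicit name, e.g. `podman-compose -p svc1 -f /root/1234.yml up -d` and `-p svc2` for the second file.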

podman run --systemd=true|false

As far as I am aware there is currently no way to specify the --systemd=true|false parameter in a compose file.

I’m not sure if this is in scope, because I’m pretty sure this param doesn’t exist in docker, but I thought I’d report it.

add "run" command

To quote from docker-compose CLI documentation:

Runs a one-time command against a service. For example, the following command starts the web service and runs bash as its command.

docker-compose run web bash

We use several different docker-compose YAML files in our projects, with various one-shot "helper commands" for different tasks a developer might need to perform, e.g. database setup.

Those tasks are performed by standard non-project specific images provided by our internal repository. I.e. the developer doesn't have to install anything or remember a complicated command line to perform those tasks. He simply types e.g.

docker-compose -f docker-compose.dev.yml run mysql-init

on any project to initialize the MySQL database to such a state that he can start working on the project.

The YAML file provides an easy way to supply the parameters, e.g.

mysql-init:
  env_file:
    - .dev.env
  image: <repo name>/<path>/mysql-init:5.7
  network_mode: host

Furthermore the file is under project version control.

Without the run command I have to translate the existing YAML file contents to the following command line:

podman run --rm -it --env-file .dev.env --network host <repo name>:5004/<path>/mysql-init:5.7
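For reference, a run implementation mostly has to translate the service keys into podman run flags; a minimal sketch for the mysql-init example above (hypothetical helper, covering only env_file and network_mode):

```python
def run_args(service, image, command):
    """Build a `podman run` invocation for a one-shot service:
    env_file -> --env-file, network_mode -> --network."""
    args = ["podman", "run", "--rm", "-i", "-t"]
    for env_file in service.get("env_file", []):
        args += ["--env-file", env_file]
    if service.get("network_mode"):
        args += ["--network", service["network_mode"]]
    args.append(image)
    args += command
    return args
```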

allow customization of container name via container_name

podman-compose does not respect container_name; for example:

version: '3'
services:
  web:
    container_name: myweb
    image: busybox
    command: ["httpd", "-f", "-p", "80"] 
    restart: always
    ports:
      - "8000:80"

Originally posted by @remyd1 in #6 (comment)
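The fix amounts to consulting container_name before falling back to the generated name. A sketch:

```python
def container_name(proj_name, service_name, service, num=1):
    """Honor an explicit container_name; otherwise fall back to the
    usual <project>_<service>_<n> scheme."""
    explicit = service.get("container_name")
    if explicit:
        return explicit
    return "{}_{}_{}".format(proj_name, service_name, num)
```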

Thanks for the clarifications.

However, I am sorry, but I am still unable to connect to the DB (using relative paths).

I am using python 2.7.15.

Here is what I get:

git clone https://gitlab.mbb.univ-montp2.fr/jlopez/wicopa.git
cd wicopa

✔ ~/wicopa [master|✔] 
13:41 $ wget http://web.mbb.univ-montp2.fr/download/wicopa.sql.gz && gunzip -d wicopa.sql.gz
--2019-05-14 13:41:29--  http://web.mbb.univ-montp2.fr/download/wicopa.sql.gz
Resolving web.mbb.univ-montp2.fr (web.mbb.univ-montp2.fr)… 162.38.181.47
Connecting to web.mbb.univ-montp2.fr (web.mbb.univ-montp2.fr)|162.38.181.47|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 37436098 (36M) [application/x-gzip]
Saving to: 'wicopa.sql.gz'

wicopa.sql.gz                                         100%[======================================================================================================================>]  35.70M  80.1MB/s    in 0.4s

2019-05-14 13:41:29 (80.1 MB/s) - 'wicopa.sql.gz' saved [37436098/37436098]

✔ ~/wicopa [master|✔] 
13:41 $ podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
✔ ~/wicopa [master|✔] 
13:41 $ python ../podman-compose/podman-compose.py build
podman build -t wicopa_web -f .docker/web/Dockerfile .docker/web
STEP 1: FROM alpine:3.9
STEP 2: LABEL Author remyd1 - https://github.com/remyd1
--> Using cache e125c1dc1d780509b47bf73f4d678faa8ed686de3e1055b6eb56886e3ad554a4
STEP 3: FROM e125c1dc1d780509b47bf73f4d678faa8ed686de3e1055b6eb56886e3ad554a4
STEP 4: RUN apk --update add php-apache2 php7-session php7-mysqli && rm -f /var/cache/apk/*
--> Using cache 80733cb0b5837b09a2f0ee99c1658ab6c353e38d3790568d2eded022ce8b633e
STEP 5: FROM 80733cb0b5837b09a2f0ee99c1658ab6c353e38d3790568d2eded022ce8b633e
STEP 6: RUN mkdir /app && cd /app &&   wget https://gitlab.mbb.univ-montp2.fr/jlopez/wicopa/-/archive/v0.4/wicopa-v0.4.tar.gz &&   tar -xf wicopa-v0.4.tar.gz && ln -s wicopa-v0.4 wicopa &&   cp wicopa/conf/Conf.php.sample wicopa/conf/Conf.php &&   chown -R apache:apache /app &&   sed -i "s#DB_NAME       = ''#DB_NAME       = 'wicopa'#" wicopa/conf/Conf.php &&   sed -i "s#DB_HOSTNAME   = ''#DB_HOSTNAME   = 'wicopadb'#" wicopa/conf/Conf.php &&   sed -i "s#DB_USERNAME   = ''#DB_USERNAME   = 'wicopauser'#" wicopa/conf/Conf.php &&   sed -i "s#DB_PP         = ''#DB_PP         = 'w1c0Pa5s'#" wicopa/conf/Conf.php &&   sed -i "s#'to_replace_with_your_admin_pass'#'450cb0c92db35549cb926efc391df2ceae4b48d1'#" wicopa/conf/Conf.php
--> Using cache 9bf171fc43ef9bb3d698d09be8157e568fdb7702f9841952c52137fc89b7c5b5
STEP 7: FROM 9bf171fc43ef9bb3d698d09be8157e568fdb7702f9841952c52137fc89b7c5b5
STEP 8: RUN sed -i 's/^#ServerName .*/ServerName localhost:80/g' /etc/apache2/httpd.conf &&   sed -i 's#/var/www/localhost/htdocs#/app/wicopa#g' /etc/apache2/httpd.conf &&   sed -i 's/^LoadModule php7_module.*/LoadModule php7_module modules\/libphp7\.so/g' /etc/apache2/httpd.conf &&   sed -i 's/DirectoryIndex index\.html/DirectoryIndex index\.php/g' /etc/apache2/httpd.conf &&   sed -ri 's#^DocumentRoot .*#DocumentRoot "/app/wicopa"#g' /etc/apache2/httpd.conf &&   sed -i 's#AllowOverride None#AllowOverride All#g' /etc/apache2/httpd.conf &&   echo "AddType application/x-httpd-php .php" >> /etc/apache2/httpd.conf
--> Using cache 6dd81e691505f5c40fc28aa6e3a84d86b15adc9574fe05bf727160f59e1de28f
STEP 9: FROM 6dd81e691505f5c40fc28aa6e3a84d86b15adc9574fe05bf727160f59e1de28f
STEP 10: RUN echo "Success"
--> Using cache 63f45c93dc519b6d1104699b53127278b527213545224133077f03dbd49c6cd2
STEP 11: FROM 63f45c93dc519b6d1104699b53127278b527213545224133077f03dbd49c6cd2
STEP 12: EXPOSE 80
--> Using cache a13778223a62f95344e7a4dff8d56126d81892c1c2e96455a69a1ca1685452ee
STEP 13: FROM a13778223a62f95344e7a4dff8d56126d81892c1c2e96455a69a1ca1685452ee
STEP 14: ENTRYPOINT httpd -D FOREGROUND && /bin/bash
--> Using cache 5eae286585bf3f40b5308be53930914024ff23acf23cd879a5ff058b546670e9
STEP 15: COMMIT wicopa_web
--> 5eae286585bf3f40b5308be53930914024ff23acf23cd879a5ff058b546670e9
0
✔ ~/wicopa [master|✔] 
13:41 $ python ../podman-compose/podman-compose.py up
podman stop -t=1 wicopa_web_1
Error: no container with name or ID wicopa_web_1 found: no such container
125
podman stop -t=1 wicopa_db_1
Error: no container with name or ID wicopa_db_1 found: no such container
125
podman rm wicopa_web_1
1
podman rm wicopa_db_1
1
podman pod rm wicopa
Error: unable to lookup pod wicopa: no pod with name or ID wicopa found: no such pod
125
podman pod create --name=wicopa --share net -p 8000:80
7d3605a8a936054e016fa45366805445e71671c018ed2d6e5e3ae00eeaf8ba2c
0
podman run --name=wicopa_web_1 -d --pod=wicopa -l io.podman.compose.config-hash=123 -l io.podman.compose.project=wicopa -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=web --add-host web:127.0.0.1 --add-host wicopa_web_1:127.0.0.1 --add-host db:127.0.0.1 --add-host wicopa_db_1:127.0.0.1 wicopa_web
e6ce0c4bd391a72e1208bab2cf5f1eae571576932039715d93b57bfa22ff74ea
0
podman run --name=wicopa_db_1 -d --pod=wicopa -l io.podman.compose.config-hash=123 -l io.podman.compose.project=wicopa -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=db -e MYSQL_ROOT_PASSWORD=w1c0Pa5s -e MYSQL_PASSWORD=w1c0Pa5s -e MYSQL_USER=wicopauser -e MYSQL_DATABASE=wicopa -v /home/userwi/wicopa/.docker/db/wicopa:/var/lib/mysql:z -v /home/userwi/wicopa/wicopa.sql:/docker-entrypoint-initdb.d/wicopa.sql:z --add-host web:127.0.0.1 --add-host wicopa_web_1:127.0.0.1 --add-host db:127.0.0.1 --add-host wicopa_db_1:127.0.0.1 --expose 3306 mariadb:10.3
5935e5ea71e63b009765abbd2f2180af2aa738c4449c3e218eb40d0b9ab00bd6
0
✔ ~/wicopa [master|✔] 
13:41 $ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS                 NAMES
5935e5ea71e6  docker.io/library/mariadb:10.3  docker-entrypoint...  4 seconds ago  Up 4 seconds ago  0.0.0.0:8000->80/tcp  wicopa_db_1
e6ce0c4bd391  localhost/wicopa_web:latest     /bin/sh -c httpd ...  6 seconds ago  Up 5 seconds ago  0.0.0.0:8000->80/tcp  wicopa_web_1
✔ ~/wicopa [master|✔] 
13:42 $ sudo netstat -naptu |grep 3306
✘-1 ~/wicopa [master|✔] 

As you can see, exposing port 3306 for wicopa_db does not seem to work.

Would you like me to create a new issue? This seems unrelated to this one.

Best regards,

Ulimits not used

Hello,
it's me with another problem.
I want to change the ulimits in my container.

My docker-compose file looks like this:

version: '3'
services:
  tor-relay:
    image: tor-relay-stretch
    restart: always
    ports:
      - '9001:9001'
    volumes:
      - ./tor-data/:/root/.tor/:Z
      - ./torrc:/etc/tor/torrc:Z
    ulimits:
      nofile: 
        soft: 10000
        hard: 15000

When I start the container the ulimit -n still says 1024. The command line podman-compose prints looks like this podman run --name=TorMiddleRelay_tor-relay_1 -d --pod=TorMiddleRelay -l io.podman.compose.config-hash=123 -l io.podman.compose.project=TorMiddleRelay -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=tor-relay --mount type=bind,source=/home/mario/podman/TorMiddleRelay/./tor-data/,destination=/root/.tor/,bind-propagation=Z --mount type=bind,source=/home/mario/podman/TorMiddleRelay/./torrc,destination=/etc/tor/torrc,bind-propagation=Z --add-host tor-relay:127.0.0.1 --add-host TorMiddleRelay_tor-relay_1:127.0.0.1 tor-relay-stretch

This looks like it is not parsing the ulimit correctly, as it is not passing it to the podman command line.
When I copy the command compose executes and add the ulimit param manually, it works. So it seems to be a problem with podman-compose.
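The compose ulimits section maps naturally onto podman's --ulimit name=soft:hard flag; a sketch of the missing translation (hypothetical helper):

```python
def ulimit_args(service):
    """Translate the compose `ulimits` section into podman's
    --ulimit name=soft:hard flags."""
    args = []
    for name, val in sorted(service.get("ulimits", {}).items()):
        if isinstance(val, dict):
            args += ["--ulimit", "{}={}:{}".format(name, val["soft"], val["hard"])]
        else:  # a plain integer sets both soft and hard limits
            args += ["--ulimit", "{}={}:{}".format(name, val, val)]
    return args
```

For the file above this yields --ulimit nofile=10000:15000.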

[feature] view commands in debug/dry-run mode

Could you add a --debug switch to list all the commands instead of directly running podman? This would make it possible to debug everything, or to rewrite it into shell scripts to remove or add functionality.

error: argument command: invalid choice: 'exec'

docker-compose supports the exec command. podman-compose does not seem to support it.

I get this message:

usage: podman-compose [-h] [-f file] [-p PROJECT_NAME] [--podman-path PODMAN_PATH] [--no-ansi]
                      [--no-cleanup] [--dry-run] [-t {1pod,1podfw,hostnet,cntnet,publishall,identity}]
                      {help,version,pull,push,build,up,down,ps,run,start,stop,restart} ...
podman-compose: error: argument command: invalid choice: 'exec' (choose from 'help', 'version', 'pull', 'push', 'build', 'up', 'down', 'ps', 'run', 'start', 'stop', 'restart')

Devices not attaching for a service

I am trying to pass in a device from the host to a container.

services:
  flightaware-dump1090:
    container_name: flightaware-dump1090
    image: boxel/flightaware-dump1090:latest
    hostname: slim2-flightaware-dump1090
    deploy:
      mode: global
    ports:
      - "30002:30002"
      - "30003:30003"
      - "30005:30005"
    args:
      - LATITUDE=38.672
      - LONGITUDE=-121.091
    volumes:
      - run-dump1090-fa:/run/dump1090-fa
    devices:
      - /dev/bus/usb/001/005:/dev/bus/usb/001/005
    networks:
      primary:
        ipv4_address: 172.168.238.10

But when Podman-compose creates the container it does not seem to pass --device to create.

podman create --name=flightaware-dump1090 --pod=root -l io.podman.compose.config-hash=123 -l io.podman.compose.project=root -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=flightaware-dump1090 --mount type=bind,source=/var/lib/containers/storage/volumes/root_run-dump1090-fa/_data,destination=/run/dump1090-fa,bind-propagation=Z --add-host flightaware-dump1090:127.0.0.1 --add-host flightaware-dump1090:127.0.0.1 --hostname slim2-flightaware-dump1090 boxel/flightaware-dump1090:latest

Any thoughts on what might be happening?
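Each devices entry corresponds one-to-one to a podman --device flag, so the missing piece is a small translation step. A sketch (hypothetical helper):

```python
def device_args(service):
    """Map each compose `devices` entry to a podman
    --device host_path:container_path flag."""
    args = []
    for dev in service.get("devices", []):
        args += ["--device", dev]
    return args
```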

[bug] Mistakes in processing commands

When there are several arguments under the command option, which is valid in docker-compose's world, podman-compose treats the later arguments as part of the first one's value.

Here is an example:

➜  cat docker-compose.yml 
version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW

➜ podman-compose.py -t 1podfw -f docker-compose.yml up
➜ podman logs nextcloud-test_db_1
ERROR: mysqld failed while attempting to check config
command was: "--transaction-isolation=READ-COMMITTED --binlog-format=ROW --verbose --help --log-bin-index=/tmp/tmp.FKTmfwEDrA"

However,

➜  cat docker-compose.yml 
version: '3'

services:
  db:
    image: mariadb
    command: ["--transaction-isolation=READ-COMMITTED", "--binlog-format=ROW"]

gives the right output.
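The root cause is that the string form must be split with shell quoting rules rather than passed through as a single argument; Python's shlex.split does exactly that, and it also keeps the `bash -c "npm install && node index.js"` case from an earlier report intact as one quoted argument. A sketch:

```python
import shlex

def normalize_command(command):
    """A string command is split with shell-like quoting rules, so
    `--transaction-isolation=READ-COMMITTED --binlog-format=ROW`
    becomes two arguments; a list command is used as-is."""
    if isinstance(command, str):
        return shlex.split(command)
    return list(command)
```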

all containers get the same port mappings

When running podman-compose with a config with more than 1 container, all containers get the same port forwarding.

version: '3.1'

services:

  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: defaultpw
    ports:
      - "3306:3306"

  adminer:
    image: adminer
    restart: always
    ports:
      - "8229:8080"

To start my config I run:

 podman-compose -p mdb -f podman-compose-mariadb.yaml up

When I run podman ps, it shows that both containers have both port forwards going to them.

➜  ~ podman ps
CONTAINER ID  IMAGE                             COMMAND               CREATED        STATUS            PORTS                                           NAMES
b8a362e02472  docker.io/library/adminer:latest  php -S [::]:8080 ...  6 seconds ago  Up 3 seconds ago  0.0.0.0:8229->8080/tcp, 0.0.0.0:3306->3306/tcp  mdb_adminer_1
576f606d2410  docker.io/library/mariadb:latest  mysqld                7 seconds ago  Up 3 seconds ago  0.0.0.0:8229->8080/tcp, 0.0.0.0:3306->3306/tcp  mdb_db_1

I just pulled the latest version and it has the same problem:

➜  ~ md5sum /usr/local/bin/podman-compose
2547131631078811b7438e7d369d7c5f  /usr/local/bin/podman-compose

Add a --version flag

While trying to report another issue I had to resort to pip3 list to find the version of podman-compose I had installed. :)
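argparse has a built-in action for this; a sketch (the version string below is a placeholder):

```python
import argparse

def make_parser(version="0.1.6.dev0"):
    """The 'version' action prints the program version and exits,
    so `pip3 list` is no longer needed to find it."""
    parser = argparse.ArgumentParser(prog="podman-compose")
    parser.add_argument("--version", action="version",
                        version="%(prog)s " + version)
    return parser
```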

podman-compose up error

When I copy examples/awx3 and run

podman-compose up

It doesn't work

podman pod create --name=docker-yaml --share net -p 8080:8052
rootless networking does not allow port binding to the host
125
podman create --name=docker-yaml_postgres_1 --pod=docker-yaml -l io.podman.compose.config-hash=123 -l io.podman.compose.project=docker-yaml -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=postgres -e POSTGRES_USER=awx -e POSTGRES_PASSWORD=awxpass -e POSTGRES_DB=awx --add-host postgres:127.0.0.1 --add-host docker-yaml_postgres_1:127.0.0.1 --add-host rabbitmq:127.0.0.1 --add-host docker-yaml_rabbitmq_1:127.0.0.1 --add-host memcached:127.0.0.1 --add-host docker-yaml_memcached_1:127.0.0.1 --add-host awx_web:127.0.0.1 --add-host docker-yaml_awx_web_1:127.0.0.1 --add-host awx_task:127.0.0.1 --add-host docker-yaml_awx_task_1:127.0.0.1 postgres:9.6
flag provided but not defined: -l
See 'podman create --help'.
125
podman create --name=docker-yaml_rabbitmq_1 --pod=docker-yaml -l io.podman.compose.config-hash=123 -l io.podman.compose.project=docker-yaml -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=rabbitmq -e RABBITMQ_DEFAULT_VHOST=awx --add-host postgres:127.0.0.1 --add-host docker-yaml_postgres_1:127.0.0.1 --add-host rabbitmq:127.0.0.1 --add-host docker-yaml_rabbitmq_1:127.0.0.1 --add-host memcached:127.0.0.1 --add-host docker-yaml_memcached_1:127.0.0.1 --add-host awx_web:127.0.0.1 --add-host docker-yaml_awx_web_1:127.0.0.1 --add-host awx_task:127.0.0.1 --add-host docker-yaml_awx_task_1:127.0.0.1 rabbitmq:3
flag provided but not defined: -l
See 'podman create --help'.
125
podman create --name=docker-yaml_memcached_1 --pod=docker-yaml -l io.podman.compose.config-hash=123 -l io.podman.compose.project=docker-yaml -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=memcached --add-host postgres:127.0.0.1 --add-host docker-yaml_postgres_1:127.0.0.1 --add-host rabbitmq:127.0.0.1 --add-host docker-yaml_rabbitmq_1:127.0.0.1 --add-host memcached:127.0.0.1 --add-host docker-yaml_memcached_1:127.0.0.1 --add-host awx_web:127.0.0.1 --add-host docker-yaml_awx_web_1:127.0.0.1 --add-host awx_task:127.0.0.1 --add-host docker-yaml_awx_task_1:127.0.0.1 memcached:alpine
flag provided but not defined: -l
See 'podman create --help'.
125
podman create --name=docker-yaml_awx_web_1 --pod=docker-yaml -l io.podman.compose.config-hash=123 -l io.podman.compose.project=docker-yaml -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=awx_web -e SECRET_KEY=aabbcc -e DATABASE_USER=awx -e DATABASE_PASSWORD=awxpass -e DATABASE_NAME=awx -e DATABASE_PORT=5432 -e DATABASE_HOST=postgres -e RABBITMQ_USER=guest -e RABBITMQ_PASSWORD=guest -e RABBITMQ_HOST=rabbitmq -e RABBITMQ_PORT=5672 -e RABBITMQ_VHOST=awx -e MEMCACHED_HOST=memcached -e MEMCACHED_PORT=11211 --add-host postgres:127.0.0.1 --add-host docker-yaml_postgres_1:127.0.0.1 --add-host rabbitmq:127.0.0.1 --add-host docker-yaml_rabbitmq_1:127.0.0.1 --add-host memcached:127.0.0.1 --add-host docker-yaml_memcached_1:127.0.0.1 --add-host awx_web:127.0.0.1 --add-host docker-yaml_awx_web_1:127.0.0.1 --add-host awx_task:127.0.0.1 --add-host docker-yaml_awx_task_1:127.0.0.1 -u root --hostname awxweb ansible/awx_web:3.0.1
flag provided but not defined: -l
See 'podman create --help'.
125
podman create --name=docker-yaml_awx_task_1 --pod=docker-yaml -l io.podman.compose.config-hash=123 -l io.podman.compose.project=docker-yaml -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=awx_task -e SECRET_KEY=aabbcc -e DATABASE_USER=awx -e DATABASE_PASSWORD=awxpass -e DATABASE_NAME=awx -e DATABASE_PORT=5432 -e DATABASE_HOST=postgres -e RABBITMQ_USER=guest -e RABBITMQ_PASSWORD=guest -e RABBITMQ_HOST=rabbitmq -e RABBITMQ_PORT=5672 -e RABBITMQ_VHOST=awx -e MEMCACHED_HOST=memcached -e MEMCACHED_PORT=11211 --add-host postgres:127.0.0.1 --add-host docker-yaml_postgres_1:127.0.0.1 --add-host rabbitmq:127.0.0.1 --add-host docker-yaml_rabbitmq_1:127.0.0.1 --add-host memcached:127.0.0.1 --add-host docker-yaml_memcached_1:127.0.0.1 --add-host awx_web:127.0.0.1 --add-host docker-yaml_awx_web_1:127.0.0.1 --add-host awx_task:127.0.0.1 --add-host docker-yaml_awx_task_1:127.0.0.1 --add-host awxweb:127.0.0.1 -u root --hostname awx ansible/awx_task:3.0.1
flag provided but not defined: -l
See 'podman create --help'.
125
podman start -a docker-yaml_postgres_1
unable to find container docker-yaml_postgres_1: no container with name or ID docker-yaml_postgres_1 found: no such container
125
podman start -a docker-yaml_rabbitmq_1
unable to find container docker-yaml_rabbitmq_1: no container with name or ID docker-yaml_rabbitmq_1 found: no such container
125
podman start -a docker-yaml_memcached_1
unable to find container docker-yaml_memcached_1: no container with name or ID docker-yaml_memcached_1 found: no such container
125
podman start -a docker-yaml_awx_web_1
unable to find container docker-yaml_awx_web_1: no container with name or ID docker-yaml_awx_web_1 found: no such container
125
podman start -a docker-yaml_awx_task_1
unable to find container docker-yaml_awx_task_1: no container with name or ID docker-yaml_awx_task_1 found: no such container
125

my podman version

podman version 1.0.2-dev
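The `flag provided but not defined: -l` failures above come from this old podman release rejecting the short label flag. A minimal sketch of a workaround, assuming labels are held in a dict (`label_args` is a hypothetical helper, not actual podman-compose code): emit the long `--label` form, which older podman versions accept.

```python
def label_args(labels, use_long_flag=True):
    """Build podman label arguments.

    Older podman releases (such as 1.0.2-dev here) reject the short
    `-l` flag, so emitting the long `--label` form is safer.
    """
    flag = "--label" if use_long_flag else "-l"
    args = []
    for key, value in labels.items():
        args.extend([flag, "{}={}".format(key, value)])
    return args

print(label_args({"io.podman.compose.project": "docker-yaml"}))
# → ['--label', 'io.podman.compose.project=docker-yaml']
```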

Killing container with ctrl+c

Hi,
I just switched from docker and docker-compose to podman and podman-compose :)
With docker-compose, if you start a container with docker-compose up and then press Ctrl+C, the container is stopped; the same happens with podman-compose. But if I then try to restart the container via podman-compose up, an error occurs saying the container already exists, and I have to run `podman-compose down` manually.
Do you think it would be possible to mimic the docker-compose behaviour?

With best regards and thanks for podman-compose
Mario
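The manual step Mario describes could be automated with a small wrapper: catch the Ctrl+C (KeyboardInterrupt) and run the equivalent of `podman-compose down` before exiting. This is only a sketch of the idea, not how podman-compose itself is implemented; `down_cmd` and `up_with_cleanup` are illustrative names.

```python
import subprocess

def down_cmd(project_name, podman_compose="podman-compose"):
    # The same command one currently has to run by hand after Ctrl+C.
    return [podman_compose, "-p", project_name, "down"]

def up_with_cleanup(project_name):
    """Run `up` in the foreground; on Ctrl+C, tear the project down
    so the next `up` does not fail because the containers still exist."""
    try:
        subprocess.run(["podman-compose", "-p", project_name, "up"])
    except KeyboardInterrupt:
        subprocess.run(down_cmd(project_name))
```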

[recommendation] don't use shebang #!/usr/bin/env python

Please don't use the shebang #!/usr/bin/env python. The reason is that python can refer to either python2 or python3 on different systems. On my current Debian 9, it seems to prefer python3...

So it is better to use #!/usr/bin/env python2, which resolves to the latest Python 2.7 release.
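Independently of which shebang is chosen, an explicit interpreter check at the top of the script makes an ambiguous `python` resolution fail loudly instead of half-running under the wrong major version. A minimal sketch (the constant name is illustrative):

```python
import sys

# Fail fast if the shebang resolved to the wrong interpreter.
REQUIRED_MAJOR = 3  # set to 2 if the script targets Python 2

if sys.version_info[0] != REQUIRED_MAJOR:
    sys.exit("this script requires Python {}".format(REQUIRED_MAJOR))
```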

Supporting podman-compose build SERVICE

Hello, everyone in the podman-compose project.

I am trying to add podman-compose to my project here: rpm-py-installer/rpm-py-installer#218.
Is there a plan to add podman-compose build SERVICE syntax like docker-compose?

Currently it seems that podman-compose does not support it.

$ sudo pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz

$ podman-compose build --help
usage: podman-compose build [-h] [--pull] [--pull-always]

optional arguments:
  -h, --help     show this help message and exit
  --pull         attempt to pull a newer version of the image
  --pull-always  attempt to pull a newer version of the image, Raise an error
                 even if the image is present locally.

As a reference, here is the result of docker-compose on my local environment.

$ docker-compose version
docker-compose version 1.22.0, build f46880f
docker-py version: 3.7.0
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.1c FIPS  28 May 2019

$ docker-compose build --help
Build or rebuild services.

Services are built once and then tagged as `project_service`,
e.g. `composetest_db`. If you change a service's `Dockerfile` or the
contents of its build directory, you can run `docker-compose build` to rebuild it.

Usage: build [options] [--build-arg key=val...] [SERVICE...]

Options:
    --compress              Compress the build context using gzip.
    --force-rm              Always remove intermediate containers.
    --no-cache              Do not use cache when building the image.
    --pull                  Always attempt to pull a newer version of the image.
    -m, --memory MEM        Sets memory limit for the build container.
    --build-arg key=val     Set build-time variables for services.

Thank you.
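The requested `podman-compose build SERVICE` syntax amounts to accepting zero or more positional service names on the `build` subcommand, as docker-compose does. A hedged argparse sketch of that parser shape (this mirrors the help output shown above; it is not the actual podman-compose parser code):

```python
import argparse

# Sketch: a `build` parser that also accepts `[SERVICE...]`,
# mirroring `docker-compose build [options] [SERVICE...]`.
parser = argparse.ArgumentParser(prog="podman-compose build")
parser.add_argument("--pull", action="store_true",
                    help="attempt to pull a newer version of the image")
parser.add_argument("services", metavar="SERVICE", nargs="*",
                    help="limit the build to these services (default: all)")

args = parser.parse_args(["--pull", "web", "db"])
print(args.services)  # → ['web', 'db']
```

With `nargs="*"` the argument is optional, so plain `podman-compose build` would keep its current build-everything behaviour.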

Unable to create volume

I am trying the docker-compose.yml from codelibs/docker-fess. My environment is CentOS 8 with the latest git commit of podman-compose.
I run it with:

podman_compose.py -p fess up

but I receive an error when it creates the volume.

# podman_compose.py -p fess up
podman pod create --name=fess --share net -p 9200:9200 -p 8080:8080 -p 9201:9200 -p 5601:5601
e0171b13537adf8fbfce95cf90574c5c916e96612939074721d97bba24b421d9
0
podman volume inspect fess_esdata01 || podman volume create fess_esdata01
ERRO[0000] "unable to find volume fess_esdata01: volume with name fess_esdata01 not found: no such volume" 
Traceback (most recent call last):
  File "/usr/bin/podman_compose.py", line 1264, in <module>
    main()
  File "/usr/bin/podman_compose.py", line 1261, in main
    podman_compose.run()
  File "/usr/bin/podman_compose.py", line 750, in run
    cmd(self, args)
  File "/usr/bin/podman_compose.py", line 933, in wrapped
    return func(*args, **kw)
  File "/usr/bin/podman_compose.py", line 1054, in compose_up
    detached=args.detach, podman_command=podman_command)
  File "/usr/bin/podman_compose.py", line 482, in container_to_args
    mount_args = mount_desc_to_args(compose, volume, cnt['_service'], cnt['name'])
  File "/usr/bin/podman_compose.py", line 409, in mount_desc_to_args
    mount_desc = mount_dict_vol_to_bind(compose, fix_mount_dict(mount_desc, proj_name, srv_name))
  File "/usr/bin/podman_compose.py", line 385, in mount_dict_vol_to_bind
    src = json.loads(out)[0]["mountPoint"]
TypeError: 'NoneType' object is not subscriptable

this is the file docker-compose.yml

version: "3"
services:
  fess01:
    image: codelibs/fess:13.4.0
    container_name: fess01
    ports:
      - "8080:8080"
    depends_on:
      - es01
      - es02
    environment:
      - RUN_ELASTICSEARCH=false
      - "ES_HTTP_URL=http://es01:9200"
      - "FESS_DICTIONARY_PATH=/usr/share/elasticsearch/config/dictionary/"
    networks:
      - esnet

  es01:
    image: codelibs/fess-elasticsearch:7.4.0
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=fess-es
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "FESS_DICTIONARY_PATH=/usr/share/elasticsearch/config/dictionary"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
      - esdictionary01:/usr/share/elasticsearch/config/dictionary
    ports:
      - 9200:9200
    networks:
      - esnet

  es02:
    image: codelibs/fess-elasticsearch:7.4.0
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=fess-es
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - "FESS_DICTIONARY_PATH=/usr/share/elasticsearch/config/dictionary"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
      - esdictionary02:/usr/share/elasticsearch/config/dictionary
    ports:
      - 9201:9200
    networks:
      - esnet

  kibana:
    image: docker.elastic.co/kibana/kibana:7.4.0
    container_name: kibana
    depends_on:
      - es01
      - es02
    environment:
      - "ELASTICSEARCH_HOSTS=http://es01:9200"
    ports:
      - 5601:5601
    networks:
      - esnet

volumes:
  esdata01:
    driver: local
  esdictionary01:
    driver: local
  esdata02:
    driver: local
  esdictionary02:
    driver: local

networks:
  esnet:
    driver: bridge

My workaround is to create the volume beforehand with the same command used by podman_compose.
Is this expected behaviour? Is it an issue on my side? Could you look into it?
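The traceback ends in `'NoneType' object is not subscriptable` because the code subscripts the result of `json.loads(out)` even when `podman volume inspect` produced no usable output. A defensive parsing sketch of that step (function name is illustrative; the `mountPoint` key is taken from the traceback above):

```python
import json

def mount_point_from_inspect(output):
    """Parse `podman volume inspect` output defensively: return the
    volume's mountPoint, or None when inspect produced nothing usable."""
    if not output or not output.strip():
        return None
    data = json.loads(output)
    if not isinstance(data, list) or not data:
        return None
    return data[0].get("mountPoint")
```

The caller could then create the volume and re-inspect when `None` comes back, instead of crashing.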

permission denied on Mounting

I am trying to mount a file, but it always gives "permission denied",
and `privileged: true`
seems not to be supported!


version: '3.3'
services:

  DB:
    image: mysql:latest
    container_name: sql_server
#    privileged: true	  
    environment:
      MYSQL_DATABASE: 'db'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    command: --init-file /data/application/initialize.sql
    volumes:
      - ./initialize.sql:/data/application/initialize.sql

Support static network configurations

Static networks don't appear to be supported yet. An example test case, for your convenience:

version: "2"
services:
    mailhog:
        image: inventis/mailhog:latest
        command:
            - -jim-accept=0
        networks:
            static_net:
                ipv4_address: 172.13.0.6

networks:
    static_net:
        driver: bridge
        ipam:
            config:
                - subnet: 172.13.0.0/24

In Docker, after running, you can access a web interface on 172.13.0.6 (port 80).

I'm not sure, but it looks like podman-compose currently ignores static networks.

Thanks for this initiative!
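Podman itself does accept `--network <name>` and `--ip <address>` on create/run, so the compose `ipv4_address` setting could in principle be translated into those flags. A minimal sketch of that mapping, assuming the service's `networks:` entry has been parsed into a dict (`static_ip_args` is an illustrative helper, not existing podman-compose code):

```python
def static_ip_args(service_networks):
    """Translate a service's compose `networks:` mapping into podman
    `--network`/`--ip` arguments."""
    args = []
    for net_name, net in (service_networks or {}).items():
        args.extend(["--network", net_name])
        ip = (net or {}).get("ipv4_address")
        if ip:
            args.extend(["--ip", ip])
    return args

print(static_ip_args({"static_net": {"ipv4_address": "172.13.0.6"}}))
# → ['--network', 'static_net', '--ip', '172.13.0.6']
```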

Shared volume getting Z propagation and permission denied in container

Strangely, when defining a shared volume (by defining it in the root level volumes and then referencing it in each service):

version: '3'
services:
  flightaware-dump1090:
    container_name: flightaware-dump1090
    image: boxel/flightaware-dump1090:latest
    hostname: slim2-flightaware-dump1090
    ports:
      - "30002:30002"
      - "30003:30003"
      - "30005:30005"
    args:
      - LATITUDE=38.672
      - LONGITUDE=-121.091
    volumes:
      - run-dump1090-fa:/run/dump1090-fa
    devices:
      - /dev/bus/usb/001/005:/dev/bus/usb/001/005
    networks:
      primary:
        ipv4_address: 172.168.238.10
    restart: unless-stopped

  flightaware-skyview1090:
    container_name: flightaware-skyview1090
    image: boxel/flightaware-skyview1090:latest
    hostname: slim2-flightaware-skyview1090
    depends_on:
      - flightaware-dump1090
    ports:
      - "8080:80"
    args:
      - LATITUDE=38.672
      - LONGITUDE=-121.091
    volumes:
      - run-dump1090-fa:/run/dump1090-fa
    networks:
      primary:
        ipv4_address: 182.168.238.11
    restart: unless-stopped
volumes:
  run-dump1090-fa:
networks:
  primary:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"

... the services are binding the shared volume with propagation Z, which makes it private and unshared. The second service to mount it wins, and the first no longer has access.

Looking at the source at

ret["bind"]["propagation"]="Z"

reveals that as long as the volume exists in shared_vols (which is defined from the root volumes element in the docker-compose.yml), then it should get z instead of Z.

However, you can see from the podman create runs that it's passing Z:

podman volume inspect root_run-dump1090-fa || podman volume create root_run-dump1090-fa
podman run --name=flightaware-dump1090 -d --pod=root -l io.podman.compose.config-hash=123 -l io.podman.compose.project=root -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=flightaware-dump1090 --device /dev/bus/usb/001/005:/dev/bus/usb/001/005 --mount type=bind,source=/var/lib/containers/storage/volumes/root_run-dump1090-fa/_data,destination=/run/dump1090-fa,bind-propagation=Z --add-host flightaware-dump1090:127.0.0.1 --add-host flightaware-dump1090:127.0.0.1 --add-host flightaware-skyview1090:127.0.0.1 --add-host flightaware-skyview1090:127.0.0.1 --hostname slim2-flightaware-dump1090 boxel/flightaware-dump1090:latest
247e57c92ea32d8138277a11f509668802cd123a51157831b5b20c47df026f82
0
podman volume inspect root_run-dump1090-fa || podman volume create root_run-dump1090-fa
podman run --name=flightaware-skyview1090 -d --pod=root -l io.podman.compose.config-hash=123 -l io.podman.compose.project=root -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=flightaware-skyview1090 --mount type=bind,source=/var/lib/containers/storage/volumes/root_run-dump1090-fa/_data,destination=/run/dump1090-fa,bind-propagation=Z --add-host flightaware-dump1090:127.0.0.1 --add-host flightaware-dump1090:127.0.0.1 --add-host flightaware-skyview1090:127.0.0.1 --add-host flightaware-skyview1090:127.0.0.1 --hostname slim2-flightaware-skyview1090 boxel/flightaware-skyview1090:latest
0c2bb6eade87ef844d39801afed31ee5ca361968ea94fcbc37d2a705099059a8

Also, upon inspecting the containers with podman inspect, only the second container actually has a Mounts element listed, and its propagation is listed as rprivate, which restricts the mount to that container.

        "Mounts": [
            {
                "Type": "bind",
                "Name": "",
                "Source": "/var/lib/containers/storage/volumes/root_run-dump1090-fa/_data",
                "Destination": "/run/dump1090-fa",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            }
        ],

The first container has no Mounts at all.
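The distinction the issue hinges on: `Z` relabels the content for exclusive use by one container (last mount wins), while `z` applies a shared SELinux label, so volumes declared at the top level and used by several services should get `z`. A sketch of the selection logic described above (`relabel_option` is an illustrative name, not the actual code path):

```python
def relabel_option(volume_name, shared_vols):
    """Pick the SELinux relabel option for a bind mount: `z` (shared
    label) for volumes listed in the top-level `volumes:` section,
    `Z` (private label) otherwise."""
    return "z" if volume_name in shared_vols else "Z"

shared = {"run-dump1090-fa"}
print(relabel_option("run-dump1090-fa", shared))  # → z
```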
