
concourse-docker's Introduction

Concourse: the continuous thing-doer.


Concourse is an automation system written in Go. It is most commonly used for CI/CD, and is built to scale to any kind of automation pipeline, from simple to complex.

(Screenshot: the booklit pipeline)

Concourse is very opinionated about a few things: idempotency, immutability, declarative config, stateless workers, and reproducible builds.

The road to Concourse v10

Concourse v10 is the code name for a set of features which, when used in combination, will have a massive impact on Concourse's capabilities as a generic continuous thing-doer. These features, and how they interact, are described in detail in the Core roadmap: towards v10 and Re-inventing resource types blog posts. (These posts are slightly out of date, but they get the idea across.)

Notably, v10 will make Concourse not suck for multi-branch and/or pull-request driven workflows - examples of spatial change, where the set of things to automate grows and shrinks over time.

Because v10 is really an alias for a ton of separate features, there's a lot to keep track of - here's an overview:

| Feature | RFC | Status |
| --- | --- | --- |
| set_pipeline step | #31 | ✔ v5.8.0 (experimental) |
| Var sources for creds | #39 | ✔ v5.8.0 (experimental), TODO: #5813 |
| Archiving pipelines | #33 | ✔ v6.5.0 |
| Instanced pipelines | #34 | ✔ v7.0.0 (experimental) |
| Static across step 🚧 | #29 | ✔ v6.5.0 (experimental) |
| Dynamic across step 🚧 | #29 | ✔ v7.4.0 (experimental, not released yet) |
| Projects 🚧 | #32 | 🙏 RFC needs feedback! |
| load_var step | #27 | ✔ v6.0.0 (experimental) |
| get_var step | #27 | 🚧 #5815 in progress! |
| Prototypes | #37 | ⚠ Pending first use of protocol (any of the below) |
| run step 🚧 | #37 | ⚠ Pending its own RFC, but feel free to experiment |
| Resource prototypes | #38 | 🙏 #5870 looking for volunteers! |
| Var source prototypes 🚧 | | #6275 planned, may lead to RFC |
| Notifier prototypes 🚧 | #28 | ⚠ RFC not ready |

The Concourse team at VMware will be working on these features; however, in the interest of growing a healthy community of contributors, we would really appreciate any volunteers. This roadmap is very easy to parallelize, as it is composed of many orthogonal features, so the faster we can power through it, the faster we can all benefit. We want these for our own pipelines too! 😆

If you'd like to get involved, hop in Discord or leave a comment on any of the issues linked above so we can coordinate. We're more than happy to help figure things out or pick up any work that you don't feel comfortable doing (e.g. UI, unfamiliar parts, etc.).

Thanks to everyone who has contributed so far, whether in code or in the community, and thanks to everyone for their patience while we figure out how to support such common functionality the "Concoursey way!" 🙏

Installation

Concourse is distributed as a single concourse binary, available on the Releases page.

If you want to just kick the tires, jump ahead to the Quick Start.

In addition to the concourse binary, there are a few other supported formats; consult their GitHub repos for more information.

Quick Start

$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up
Creating docs_concourse-db_1 ... done
Creating docs_concourse_1    ... done

Concourse will be running at 127.0.0.1:8080. You can log in with the username test and password test.

⚠️ If you are using an M1 Mac: M1 Macs are incompatible with the containerd runtime. After downloading the docker-compose file, change CONCOURSE_WORKER_RUNTIME: "containerd" to CONCOURSE_WORKER_RUNTIME: "houdini". This feature is experimental.
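As a minimal sketch of that edit (assuming the quickstart docker-compose.yml names the service concourse, matching the container names shown in the Quick Start output, and uses map-style environment entries as quoted above):

services:
  concourse:
    environment:
      # switched from "containerd", which does not work on M1 Macs
      CONCOURSE_WORKER_RUNTIME: "houdini"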

Next, install fly by downloading it from the web UI and target your local Concourse as the test user:

$ fly -t ci login -c http://127.0.0.1:8080 -u test -p test
logging in to team 'main'

target saved

Configuring a Pipeline

There is no GUI for configuring Concourse. Instead, pipelines are configured as declarative YAML files:

resources:
- name: booklit
  type: git
  source: {uri: "https://github.com/vito/booklit"}

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - task: test
    file: booklit/ci/test.yml

Most operations are done via the accompanying fly CLI. If you've got Concourse installed, try saving the above example as booklit.yml, target your Concourse instance, and then run:

fly -t ci set-pipeline -p booklit -c booklit.yml

These pipeline files are self-contained, maximizing portability from one Concourse instance to the next.
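Note that newly set pipelines start out paused. A sketch of the usual follow-up commands (the ci target name matches the login example above):

fly -t ci unpause-pipeline -p booklit
fly -t ci trigger-job -j booklit/unit --watch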

Learn More

Contributing

Our user base is basically everyone that develops software (and wants it to work).

It's a lot of work, and we need your help! If you're interested, check out our contributing docs.


concourse-docker's Issues

Docker swarm incompatibility

It might be worth writing in the documentation that this won't work on Docker Swarm, due to the requirement of privileged mode.
The database and web containers will work just fine; however, the worker node will fail with some very cryptic error messages like:
{"timestamp":"2019-09-30T14:31:24.520408669Z","level":"error","source":"guardian","message":"guardian.starting-guardian-backend","data":{"error":"bulk starter: mounting subsystem 'cpuset' in '/sys/fs/cgroup/cpuset': operation not permitted"}}
and
{"timestamp":"2019-09-30T14:31:24.528488853Z","level":"error","source":"worker","message":"worker.garden-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngdn exited with error: exit status 1\ndns-proxy exited with nil\n","session":"8"}}
which disappear rather quickly because the following error gets spammed repeatedly:
{"timestamp":"2019-09-30T14:31:28.144058311Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.5"}}

The web node also registers the worker node, leading to further confusion.
Hopefully this saves someone else a couple of painful hours.

Worker fails to start on newer version of docker

On a fresh clean repository I do the following:

  1. ./keys/generate
  2. docker-compose up -d --build
  3. I get the below error from the worker and it crashes

My environment:

  • Arch linux
  • Docker version 20.10.8, build 3967b7d28e
  • docker-compose version 1.29.2
{"timestamp":"2021-09-18T12:00:59.488503476Z","level":"info","source":"baggageclaim","message":"baggageclaim.using-driver","data":{"driver":"overlay"}}
{"timestamp":"2021-09-18T12:00:59.489308705Z","level":"info","source":"baggageclaim","message":"baggageclaim.listening","data":{"addr":"127.0.0.1:7788"}}
{"timestamp":"2021-09-18T12:00:59.489768866Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp 172.26.0.4:2222: connect: connection refused","session":"4.1"}}
{"timestamp":"2021-09-18T12:00:59.489798847Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"4.1.1"}}
{"timestamp":"2021-09-18T12:00:59.489811980Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"4.1"}}
{"timestamp":"2021-09-18T12:00:59.489832501Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.exited-with-error","data":{"error":"all worker SSH gateways unreachable","session":"4.1"}}
{"timestamp":"2021-09-18T12:00:59.489854216Z","level":"error","source":"worker","message":"worker.beacon-runner.failed","data":{"error":"all worker SSH gateways unreachable","session":"4"}}
{"timestamp":"2021-09-18T12:01:00.463191900Z","level":"info","source":"guardian","message":"guardian.no-port-pool-state-to-recover-starting-clean","data":{}}
{"timestamp":"2021-09-18T12:01:00.463716950Z","level":"info","source":"guardian","message":"guardian.metrics-notifier.starting","data":{"interval":"1m0s","session":"5"}}
{"timestamp":"2021-09-18T12:01:00.463739568Z","level":"info","source":"guardian","message":"guardian.start.starting","data":{"session":"6"}}
{"timestamp":"2021-09-18T12:01:00.463781384Z","level":"info","source":"guardian","message":"guardian.metrics-notifier.started","data":{"interval":"1m0s","session":"5","time":"2021-09-18T12:01:00.46377979Z"}}
{"timestamp":"2021-09-18T12:01:00.464889552Z","level":"info","source":"guardian","message":"guardian.cgroups-tmpfs-already-mounted","data":{"path":"/sys/fs/cgroup"}}
{"timestamp":"2021-09-18T12:01:00.464948854Z","level":"info","source":"guardian","message":"guardian.mount-cgroup.started","data":{"path":"/sys/fs/cgroup/cpuset","session":"7","subsystem":"cpuset"}}
{"timestamp":"2021-09-18T12:01:00.465058050Z","level":"info","source":"guardian","message":"guardian.start.completed","data":{"session":"6"}}
{"timestamp":"2021-09-18T12:01:00.465072541Z","level":"error","source":"guardian","message":"guardian.starting-guardian-backend","data":{"error":"bulk starter: mounting subsystem 'cpuset' in '/sys/fs/cgroup/cpuset': operation not permitted"}}
bulk starter: mounting subsystem 'cpuset' in '/sys/fs/cgroup/cpuset': operation not permitted
bulk starter: mounting subsystem 'cpuset' in '/sys/fs/cgroup/cpuset': operation not permitted
{"timestamp":"2021-09-18T12:01:00.469955305Z","level":"error","source":"worker","message":"worker.garden.gdn-runner.logging-runner-exited","data":{"error":"exit status 1","session":"1.2"}}
{"timestamp":"2021-09-18T12:01:00.470019538Z","level":"error","source":"worker","message":"worker.garden-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngdn exited with error: exit status 1\n","session":"8"}}
{"timestamp":"2021-09-18T12:01:00.470077670Z","level":"info","source":"worker","message":"worker.container-sweeper.sweep-cancelled-by-signal","data":{"session":"6","signal":2}}
{"timestamp":"2021-09-18T12:01:00.470115898Z","level":"info","source":"worker","message":"worker.baggageclaim-runner.logging-runner-exited","data":{"session":"9"}}
{"timestamp":"2021-09-18T12:01:00.470078780Z","level":"info","source":"worker","message":"worker.volume-sweeper.sweep-cancelled-by-signal","data":{"session":"7","signal":2}}
{"timestamp":"2021-09-18T12:01:00.470204966Z","level":"info","source":"worker","message":"worker.volume-sweeper.logging-runner-exited","data":{"session":"14"}}
{"timestamp":"2021-09-18T12:01:00.470091895Z","level":"info","source":"worker","message":"worker.debug-runner.logging-runner-exited","data":{"session":"10"}}
{"timestamp":"2021-09-18T12:01:00.470129680Z","level":"info","source":"worker","message":"worker.container-sweeper.logging-runner-exited","data":{"session":"13"}}
{"timestamp":"2021-09-18T12:01:00.470134031Z","level":"info","source":"worker","message":"worker.healthcheck-runner.logging-runner-exited","data":{"session":"11"}}
{"timestamp":"2021-09-18T12:01:04.490229991Z","level":"info","source":"worker","message":"worker.beacon-runner.restarting","data":{"session":"4"}}
{"timestamp":"2021-09-18T12:01:04.490835796Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp 172.26.0.4:2222: connect: connection refused","session":"4.1"}}
{"timestamp":"2021-09-18T12:01:04.490865694Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"4.1.2"}}
{"timestamp":"2021-09-18T12:01:04.490880940Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"4.1"}}
{"timestamp":"2021-09-18T12:01:04.490906496Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.3"}}
{"timestamp":"2021-09-18T12:01:04.490930614Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
error: Exit trace for group:
garden exited with error: Exit trace for group:
gdn exited with error: exit status 1

baggageclaim exited with nil
volume-sweeper exited with nil
debug exited with nil
container-sweeper exited with nil
healthcheck exited with nil
beacon exited with nil

Document CONCOURSE_GARDEN_ENABLE_DNS_PROXY flag

This flag is configurable for Linux worker nodes, and is generally only used when running Concourse with Docker due to how DNS resolution works inside Docker containers when using embedded user-defined networks. Since this wouldn't be used by other deployment methods like BOSH or systemd, it doesn't make sense for this to live in the main docs but it could be useful to explain this flag in the Docker-specific docs.
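For illustration only (the variable name comes from the issue title; where it goes and the value shown are my assumptions, not documented behaviour), enabling the proxy in a compose file would look roughly like:

services:
  worker:
    environment:
      # forward container DNS lookups through Garden's DNS proxy
      CONCOURSE_GARDEN_ENABLE_DNS_PROXY: "true"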

Permission denied when creating workers. Rootfs.

Hopefully someone here can help me with this. I'm running Concourse CI v4.2.1, via docker-compose.
The version of Docker on the host is 17.09.1-ce. I can successfully set up Concourse. However, I get the following error in the tasks of the pipeline I have pushed:

runc run: exit status 1: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/worker-state/4.2.1/assets/bin/init\\\" to rootfs \\\"/worker-state/volumes/live/3e85b13b-522b-428a-6d14-5d0d605e45bb/volume\\\" at \\\"/worker-state/volumes/live/3e85b13b-522b-428a-6d14-5d0d605e45bb/volume/tmp/garden-init\\\" caused \\\"open /worker-state/volumes/live/3e85b13b-522b-428a-6d14-5d0d605e45bb/volume/tmp/garden-init: permission denied\\\"\""

TRIED:

  • Checking that the worker container is running in privileged mode. It is.
  • Adding
cap_add:
  - SYS_ADMIN
security_opt:
  - apparmor=unconfined
  - seccomp=unconfined

to the worker container. It still fails.

  • Searched the Concourse CI Discord channels for people that have already had this issue. Found some; however, I wasn't able to apply any of the suggestions with success.
  • Did a search on various search engines as well as on GitHub. Nothing that finally solved it for me.
    -- Found: opencontainers/runc#1658
  • Also tried v3.10 of Concourse. The issue is the same.

I should add that the host running Docker is on a rather old Linux kernel, v4.2.8. However, that is still higher than the minimum requirement mentioned at https://concourse-ci.org/install.html.

Any help will be highly appreciated. Thank you.

web: failed to load authorized keys - compose up not working

Getting some errors on the web.

Using Docker for Mac Version 17.03.1-ce-mac12 (17661)

First run:

$ git clone ...
$ cd concourse-docker
$ export CONCOURSE_LOGIN=admin
$ export CONCOURSE_PASSWORD=password
$ export CONCOURSE_EXTERNAL_URL=http://`ipconfig getifaddr en0`:8080
$ docker-compose up

The first time, web doesn't start, failing with this error:

failed to migrate database: dial tcp 172.18.0.2:5432: getsockopt: connection refused

^C and run docker-compose up again (this time the database is ready in time), but you get this error:

concourse-web_1     | {"timestamp":"1496326114.223787785","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"4"}}
concourse-web_1     | failed to load authorized keys: open : no such file or directory

Can anybody else confirm/reproduce?

Better instructions on how to fix concourse after restart

The current instructions under:
https://github.com/concourse/concourse-docker#caveats

are severely lacking:
what does

At the moment, workers running via Docker will not automatically leave the cluster gracefully on shutdown. This means you'll have to run fly prune-worker to reap them.

even mean?

how does issue concourse/concourse#1457 even relate to the restart problem or provide any guidance for how to fix it?

If 'concourse-docker' is the recommended 'getting started' method for using Concourse, but it totally craps out when you restart it and there are no clear instructions for how to fix it, that makes for a really crappy onboarding experience.
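For what it's worth, the caveat quoted above boils down to something like the following (the ci target name and the worker name are placeholders; list your own workers first):

fly -t ci workers                      # note the name of the stalled worker
fly -t ci prune-worker -w WORKER_NAME  # remove it from the cluster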

Worker fails to create containers

When running the docker image of the worker in privileged mode I get

iptables: create-instance-chains: iptables: No chain/target/match by that name.

My docker-compose file is

version: '3'

services:
  worker:
     image: private-concourse-worker-with-keys
     command: worker
     ports:
     - "7777:7777"
     - "7788:7788"
     - "7799:7799"
     #restart: on-failure
     privileged: true
     environment:
     - CONCOURSE_TSA_HOST=concourse-web-1.dev
     - CONCOURSE_GARDEN_NETWORK

My Dockerfile

FROM concourse/concourse

COPY keys/tsa_host_key.pub /concourse-keys/tsa_host_key.pub
COPY keys/worker_key /concourse-keys/worker_key

Some more errors

    worker_1  | {"timestamp":"1526507528.298546791","source":"guardian","message":"guardian.create.containerizer-create.finished","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2"}}
    worker_1  | {"timestamp":"1526507528.298666477","source":"guardian","message":"guardian.create.containerizer-create.watch.watching","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2.4"}}
    worker_1  | {"timestamp":"1526507528.303164721","source":"guardian","message":"guardian.create.network.started","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
    worker_1  | {"timestamp":"1526507528.303202152","source":"guardian","message":"guardian.create.network.config-create","log_level":1,"data":{"config":{"ContainerHandle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","HostIntf":"wbpuf2nmpege-0","ContainerIntf":"wbpuf2nmpege-1","IPTablePrefix":"w--","IPTableInstance":"bpuf2nmpege","BridgeName":"wbrdg-0afe0000","BridgeIP":"x.x.0.1","ContainerIP":"x.x.0.2","ExternalIP":"x.x.0.2","Subnet":{"IP":"x.x.0.0","Mask":"/////A=="},"Mtu":1500,"PluginNameservers":null,"OperatorNameservers":[],"AdditionalNameservers":["x.x.0.2"]},"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
    worker_1  | {"timestamp":"1526507528.324085236","source":"guardian","message":"guardian.iptables-runner.command.failed","log_level":2,"data":{"argv":["/worker-state/3.6.0/assets/iptables/sbin/iptables","--wait","-A","w--instance-bpuf2nmpege-log","-m","conntrack","--ctstate","NEW,UNTRACKED,INVALID","--protocol","all","--jump","LOG","--log-prefix","426762cc-b9a8-47b0-711a-8f5c ","-m","comment","--comment","426762cc-b9a8-47b0-711a-8f5ce18ff46c"],"error":"exit status 1","exit-status":1,"session":"1.26","stderr":"iptables: No chain/target/match by that name.\n","stdout":"","took":"1.281243ms"}}

Quickstart with docker-compose not working

Seeing this issue when trying out Concourse via docker-compose from the Quick Start and then proceeding with the first task from the Hello World tutorial.

I run on High Sierra and use docker-machine for my Docker environment.

$ fly -t tutorial execute -c task_hello_world.yml
executing build 1 at http://192.168.99.101:8080/builds/1
initializing
iptables: create-instance-chains: iptables: No chain/target/match by that name.

Environment

MacOS High Sierra with docker-machine

docker-machine version 0.16.0, build 702c267 (should be most recent)

boot2docker VM by docker-machine

iptables v1.4.21
Linux default 4.14.79-boot2docker #1 SMP Thu Nov 8 01:56:42 UTC 2018 x86_64 GNU/Linux

Docker:

Client: Docker Engine - Community
Version:           18.09.0
API version:       1.39
Go version:        go1.11.2
Git commit:        4d60db4
Built:             Wed Dec 12 13:12:25 2018
OS/Arch:           darwin/amd64
Experimental:      false

Server: Docker Engine - Community
Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:52:55 2018
  OS/Arch:          linux/amd64
  Experimental:     false

docker-compose

docker-compose version 1.23.2, build 1110ad0
docker-py version: 3.6.0
CPython version: 2.7.15
OpenSSL version: OpenSSL 1.0.2q  20 Nov 2018

Excerpt from docker-compose log concourse

{
    "data": {
        "argv": [
            "/worker-state/4.2.2/assets/iptables/sbin/iptables",
            "--wait",
            "--table",
            "nat",
            "-A",
            "w--prerouting",
            "--jump",
            "w--instance-t8vmr0jgube",
            "-m",
            "comment",
            "--comment",
            "1a6fadf7-9954-43e1-4bb1-96b29f1e4976"
        ],
        "error": "exit status 1",
        "exit-status": 1,
        "session": "1.2",
        "stderr": "iptables: No chain/target/match by that name.\n",
        "stdout": "",
        "took": "3.139223ms"
    },
    "log_level": 2,
    "message": "guardian.iptables-runner.command.failed",
    "source": "guardian",
    "timestamp": "1546176981.191809177"
}

docker-compose.yml

version: '3'
services:
  concourse-db:
    image: postgres
    environment:
    - POSTGRES_DB=concourse
    - POSTGRES_PASSWORD=concourse_pass
    - POSTGRES_USER=concourse_user
    - PGDATA=/database

  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
    - CONCOURSE_POSTGRES_HOST=concourse-db
    - CONCOURSE_POSTGRES_USER=concourse_user
    - CONCOURSE_POSTGRES_PASSWORD=concourse_pass
    - CONCOURSE_POSTGRES_DATABASE=concourse
    - CONCOURSE_EXTERNAL_URL=http://${MY_DOCKER_HOST_IP}:8080
    - CONCOURSE_ADD_LOCAL_USER=test:REDACTED
    - CONCOURSE_MAIN_TEAM_ALLOW_ALL_USERS=true
    - CONCOURSE_WORKER_GARDEN_NETWORK

If I had to guess, this is an issue with the iptables version used in boot2docker.

How to connect the worker to the web instance?

Running the given docker-compose.yml file after generating the keys on my local machine results in
worker_1 | {"timestamp":"2022-01-07T09:06:38.289690700Z","level":"info","source":"baggageclaim","message":"baggageclaim.using-driver","data":{"driver":"overlay"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.294664400Z","level":"info","source":"baggageclaim","message":"baggageclaim.listening","data":{"addr":"127.0.0.1:7788"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.299029000Z","level":"info","source":"worker","message":"worker.garden.dns-proxy.started","data":{"session":"1.2"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.299643400Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp 172.21.0.3:2222: connect: connection refused","session":"4.1"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.299787200Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"4.1.1"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.299860800Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"4.1"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.299938900Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.exited-with-error","data":{"error":"all worker SSH gateways unreachable","session":"4.1"}}
worker_1 | {"timestamp":"2022-01-07T09:06:38.300243600Z","level":"error","source":"worker","message":"worker.beacon-runner.failed","data":{"error":"all worker SSH gateways unreachable","session":"4"}}

How can I get the connection to work?

--worker-cert= not recognized by quickstart command

Hello,

We are trying to use a custom certificate with a child worker (sonarqube runner mvn), and we try to use --worker-cert= but this does not work:

concourse:
  restart: always
  image: concourse/concourse:4.2.1
  command: quickstart --worker-cert=/etc/ssl
  privileged: true
  depends_on: [concourse-db]
  ports: ["8080:8080"]
  volumes:
  - /etc/ssl/certs:/etc/ssl/certs
  - /usr/share/ca-certificates/extra:/usr/share/ca-certificates/extra
  - /home/beheer/hbr-hm-devtools/certificates:/home/hbr-hm-devtools/certificates
  environment:
  - CONCOURSE_POSTGRES_HOST=concourse-db
  - CONCOURSE_POSTGRES_USER=concourse_user
  - CONCOURSE_POSTGRES_PASSWORD=concourse_pass
  - CONCOURSE_POSTGRES_DATABASE=concourse
  - CONCOURSE_EXTERNAL_URL=######
  - CONCOURSE_ADD_LOCAL_USER=test:$$2a$$10$$0W9/ilCpYXY/yCPpaOD.6eCrGda/fnH3D4lhsw1Mze0WTID5BuiTW
  - CONCOURSE_MAIN_TEAM_ALLOW_ALL_USERS=true
  - CONCOURSE_WORKER_GARDEN_NETWORK
  - CONCOURSE_CERTS_DIR=/etc/ssl/certs
  - CONCOURSE_WORKER_CERT=/etc/ssl/certs
....

any idea why?

Docker Quickstart results in endless worker connection refused messages

$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up -d
Creating docs_concourse-db_1 ...
Creating docs_concourse-db_1 ... done
Creating docs_concourse_1 ...
Creating docs_concourse_1 ... done

results in

{"timestamp":"2021-05-29T21:21:07.530094663Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:08.530528880Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:08.530886477Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:09.531543319Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:09.532092566Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:10.532301463Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:10.532757400Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:11.532892560Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:11.533284631Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:12.533781118Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:12.534243804Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:13.534368807Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:13.534741415Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:13.540038008Z","level":"info","source":"atc","message":"atc.scanner.tick.start","data":{"session":"20.3"}}


{"timestamp":"2021-05-29T21:21:13.556258259Z","level":"info","source":"atc","message":"atc.scanner.tick.end","data":{"session":"20.3"}}


{"timestamp":"2021-05-29T21:21:14.535360158Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:14.535799862Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:15.535955153Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.retrying","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}


{"timestamp":"2021-05-29T21:21:15.536154988Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}

Support service discovery via the magic `127.x.x.x` DNS server

Many many people have run into this. Docker Compose and various other magic-Docker-sauce runtimes set up a local DNS server to talk to other components (e.g. a Docker registry or other dependent service), but the address it wires in to the container (127.0.0.11) is not reachable from the container's network namespace. Nowadays Garden strips it out, which is an improvement as it won't resolve to the right address anyway, but it'd be nice if this Just Worked.

Is there something we can do, e.g. with https://wiki.archlinux.org/index.php/dnsmasq to forward DNS requests from the container to Docker's DNS server automatically?

Some context for this is in cloudfoundry/guardian#42
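One very rough, untested sketch of that dnsmasq idea (LISTEN_ADDR is a placeholder for whatever address Garden containers can actually reach on the worker; nothing here is an official recommendation):

# run inside the worker container / network namespace;
# --no-resolv ignores /etc/resolv.conf, and --server forwards every
# lookup to Docker's embedded DNS at 127.0.0.11
dnsmasq --no-daemon --listen-address=LISTEN_ADDR --no-resolv --server=127.0.0.11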

Can't login with fly

I'm using this docker-compose file:

version: '3'

services:
  concourse-db:
    image: postgres
    environment:
    - POSTGRES_DB=concourse
    - POSTGRES_PASSWORD=concourse_pass
    - POSTGRES_USER=concourse_user
    - PGDATA=/database

  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
    - CONCOURSE_POSTGRES_HOST=concourse-db
    - CONCOURSE_POSTGRES_USER=concourse_user
    - CONCOURSE_POSTGRES_PASSWORD=concourse_pass
    - CONCOURSE_POSTGRES_DATABASE=concourse
    - CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
    - CONCOURSE_ADD_LOCAL_USER=test:$$2a$$10$$0W9/ilCpYXY/yCPpaOD.6eCrGda/fnH3D4lhsw1Mze0WTID5BuiTW
    - CONCOURSE_MAIN_TEAM_ALLOW_ALL_USERS=true
    - CONCOURSE_WORKER_GARDEN_NETWORK

I can log in via the web UI. But when I try to log in via the console I get this:

Aleksandrs-Mini:concourse aleksandr$ fly -t local login -c http://192.168.99.100:8080/
logging in to team 'main'

navigate to the following URL in your browser:

  http://192.168.99.100:8080/sky/login?redirect_uri=http://127.0.0.1:64552/auth/callback

or enter token manually: 

I navigate the browser to that URL and get this:

panic: assignment to entry in nil map

goroutine 1 [running]:
github.com/concourse/fly/rc.SaveTarget(0x7ffeefbffbf7, 0x5, 0x7ffeefbffc06, 0x1a, 0xc420073900, 0x15b6074, 0x4, 0xc420195fe0, 0x0, 0x0, ...)
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/concourse/fly/rc/targets.go:86 +0x15a
github.com/concourse/fly/commands.(*LoginCommand).saveTarget(0x18a43c0, 0x7ffeefbffc06, 0x1a, 0xc420073b50, 0x0, 0x0, 0x2a4, 0x0)
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/concourse/fly/commands/login.go:266 +0xf3
github.com/concourse/fly/commands.(*LoginCommand).Execute(0x18a43c0, 0xc4200a8fa0, 0x0, 0x5, 0x18a43c0, 0x1)
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/concourse/fly/commands/login.go:130 +0x7a0
github.com/jessevdk/go-flags.(*Parser).ParseArgs(0xc42001ede0, 0xc42001e0d0, 0x5, 0x5, 0x14e8720, 0xc4201cd530, 0xc420073f28, 0x1485f1f, 0xc4201286c0)
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/jessevdk/go-flags/parser.go:316 +0x80b
github.com/jessevdk/go-flags.(*Parser).Parse(0xc42001ede0, 0x15b74fb, 0x8, 0xc420124780, 0x0, 0xc4200656e0)
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/jessevdk/go-flags/parser.go:186 +0x71
main.main()
        /Users/pilot/Worker/workdir/volumes/live/6eb60b0b-4163-4e0e-7da5-a3fa39959fa3/volume/concourse/src/github.com/concourse/fly/main.go:24 +0x10b

I just want to play with Concourse, but it is not so easy in the latest version :(

[7.2.0] Error starting worker - btrfs command

Hi there
I'm trying to bring up Concourse with the docker-compose file, but it's failing to bring up the worker:

{
   "timestamp":"2021-04-28T13:24:16.201601866Z",
   "level":"error",
   "source":"baggageclaim",
   "message":"baggageclaim.fs.run-command.failed",
   "data":{
      "args":[
         "bash",
         "-e",
         "-x",
         "-c",
         "\n\t\tif [ ! -e $IMAGE_PATH ] || [ \"$(stat --printf=\"%s\" $IMAGE_PATH)\" != \"$SIZE_IN_BYTES\" ]; then\n\t\t\ttouch $IMAGE_PATH\n\t\t\ttruncate -s ${SIZE_IN_BYTES} $IMAGE_PATH\n\t\tfi\n\n\t\tlo=\"$(losetup -j $IMAGE_PATH | cut -d':' -f1)\"\n\t\tif [ -z \"$lo\" ]; then\n\t\t\tlo=\"$(losetup -f --show $IMAGE_PATH)\"\n\t\tfi\n\n\t\tif ! file $IMAGE_PATH | grep BTRFS; then\n\t\t\tmkfs.btrfs --nodiscard $IMAGE_PATH\n\t\tfi\n\n\t\tmkdir -p $MOUNT_PATH\n\n\t\tif ! mountpoint -q $MOUNT_PATH; then\n\t\t\tmount -t btrfs -o discard $lo $MOUNT_PATH\n\t\tfi\n\t"
      ],
      "command":"/bin/bash",
      "env":[
         "PATH=/usr/local/concourse/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
         "MOUNT_PATH=/worker-state/volumes",
         "IMAGE_PATH=/worker-state/volumes.img",
         "SIZE_IN_BYTES=4817158144"
      ],
      "error":"exit status 1",
      "session":"3.1",
      "stderr":"+ '[' '!' -e /worker-state/volumes.img ']'\n+ touch /worker-state/volumes.img\n+ truncate -s 4817158144 /worker-state/volumes.img\n++ losetup -j /worker-state/volumes.img\n++ cut -d: -f1\n+ lo=\n+ '[' -z '' ']'\n++ losetup -f --show /worker-state/volumes.img\nlosetup: /worker-state/volumes.img: failed to set up loop device: No such file or directory\n+ lo=\n",
      "stdout":""
   }
}

To get around it, I changed the driver to detect (I commented it out to get the error again).

(screenshot omitted)
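In compose terms, the change described above would look roughly like this (assuming the standard CONCOURSE_BAGGAGECLAIM_DRIVER variable; the service name is a placeholder):

services:
  worker:
    environment:
      # let baggageclaim pick a working volume driver instead of forcing btrfs
      CONCOURSE_BAGGAGECLAIM_DRIVER: detect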

Happy to PR if you think that should be the default for the docker-compose file

I'm running CentOS 7 on 5.7.10-1.

Thanks

Gary

Set various env vars for keys only for the appropriate command (`web` or `worker`)

Related: concourse/concourse#5491

Currently we configure all the env vars unconditionally:

# 'web' keys
ENV CONCOURSE_SESSION_SIGNING_KEY /concourse-keys/session_signing_key
ENV CONCOURSE_TSA_AUTHORIZED_KEYS /concourse-keys/authorized_worker_keys
ENV CONCOURSE_TSA_HOST_KEY /concourse-keys/tsa_host_key
# 'worker' keys
ENV CONCOURSE_TSA_PUBLIC_KEY /concourse-keys/tsa_host_key.pub
ENV CONCOURSE_TSA_WORKER_PRIVATE_KEY /concourse-keys/worker_key

This is a handy shortcut but no longer works now that we've modified go-flags to actually perform validation against env vars.

For backwards compatibility, it would be nice to preserve these default values (ha, I guess we're using them that way after all). Maybe we can do so by having an entrypoint that sets the appropriate vars based on the command being run?
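A minimal sketch of that entrypoint idea, assuming the container's command is passed straight through as "$@" (the dispatch logic and helper names are hypothetical; only the variable names and key paths come from the ENV lines above):

#!/usr/bin/env bash
# set key-related defaults only for the subcommand actually being run
set -e

set_web_keys() {
  export CONCOURSE_SESSION_SIGNING_KEY="${CONCOURSE_SESSION_SIGNING_KEY:-/concourse-keys/session_signing_key}"
  export CONCOURSE_TSA_AUTHORIZED_KEYS="${CONCOURSE_TSA_AUTHORIZED_KEYS:-/concourse-keys/authorized_worker_keys}"
  export CONCOURSE_TSA_HOST_KEY="${CONCOURSE_TSA_HOST_KEY:-/concourse-keys/tsa_host_key}"
}

set_worker_keys() {
  export CONCOURSE_TSA_PUBLIC_KEY="${CONCOURSE_TSA_PUBLIC_KEY:-/concourse-keys/tsa_host_key.pub}"
  export CONCOURSE_TSA_WORKER_PRIVATE_KEY="${CONCOURSE_TSA_WORKER_PRIVATE_KEY:-/concourse-keys/worker_key}"
}

case "$1" in
  web)        set_web_keys ;;
  worker)     set_worker_keys ;;
  quickstart) set_web_keys; set_worker_keys ;;  # quickstart runs both roles
esac

exec concourse "$@"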

concourse 4.2.1 - web login is redirecting to 127.0.0.1

I am running the latest Concourse version (from docker-compose.yml) on an Ubuntu 18.04 server. I could access the Concourse web UI after bringing up docker-compose; however, on login it redirects the request to 127.0.0.1 instead of the actual Ubuntu server IP. Is there any way to fix this so that it keeps the server IP instead of redirecting to 127.0.0.1?

Concourse 6.1.0 Workers Fail with net.ipv4.tcp_keepalive_time

When running a Concourse worker on Docker Compose, guardian fails with: failed to retrieve kernel parameter "net.ipv4.tcp_keepalive_time": open /proc/sys/net/ipv4/tcp_keepalive_time: no such file or directory.
So in essence, I am not able to start any job on my Docker worker. Reverting the worker back to 6.0 just works fine.

external URL 127.0.0.1 doesn't work with task execute - workaround using local IP

SETUP: set the external URL as recommended like so:

CONCOURSE_EXTERNAL_URL=http://127.0.0.1:8080

Then, when you attempt to use fly execute, you get a strange tar/gzip error when it attempts to upload the local inputs:

executing build 33
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
gzip: invalid magic
tar: Child returned status 1
tar: Error is not recoverable: exiting now
exit status 2

even though you can connect to 127.0.0.1 port 8080:

% nc -v 127.0.0.1 8080                                                                                 (master|✚1…)
found 0 associations
found 1 connections:
     1: flags=82<CONNECTED,PREFERRED>
    outif lo0
    src 127.0.0.1 port 56044
    dst 127.0.0.1 port 8080
    rank info not available
    TCP aux info available

Connection to 127.0.0.1 port 8080 [tcp/http-alt] succeeded!

The workaround for now is to use a real IP instead of 127.0.0.1 or localhost

Source of this workaround
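For example, on macOS something like the following sets the external URL to a reachable address before starting the stack (this reuses the ipconfig getifaddr en0 trick from an earlier issue above; the interface name is machine-specific, and it assumes your docker-compose.yml interpolates CONCOURSE_EXTERNAL_URL from the shell environment):

export CONCOURSE_EXTERNAL_URL=http://$(ipconfig getifaddr en0):8080
docker-compose up -d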

Running a task fails in /opt/resource/check with 'failed to ping registry'

I am trying to follow the tutorial using the docker-compose.yml to run concourse.

When I try to execute the hello_world task, it fails with:

resource script '/opt/resource/check []' failed: exit status 1

stderr:
failed to ping registry: 2 error(s) occurred:

I can curl the docker registry from within the running worker (using a docker-compose exec concourse-worker bash).

My host is running Debian stretch with

  • kernel 4.18.6-1~bpo9+1
  • docker 18.09.1
  • docker-compose 1.23.2 (pip installed in a python 3.5 venv).
  • concourse image 474635a78a6c , concourse 4.2.2

Any idea?

Thanks,
David

Unable to log in to the Concourse web page

After a docker-compose up -d, the containers start; however, when trying to log in (test/test) I get an HTTP 400 error.

From concourse-docker_web_1:

{"timestamp":"2019-05-01T09:31:31.300276529Z","level":"error","source":"atc","message":"atc.sky.callback.failed-to-fetch-cookie-state","data":{"error":"http: named cookie not present","session":"5.5"}

OS: macOS
Docker engine: 18.09.2

Generate keys script fails on MINGW64 env

The generate keys script fails when run on windows using MINGW64 (Git bash).

 keys/generate
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/userx/concourse-docker/keys/web;C"
doesn't exist and is not known to Docker.

Concourse web does not respond

I used your docker-compose.yml as the base for my setup. So far I have basically removed the worker completely and modified the path of the keys. When starting, I get the following output.

Basically the output looks good, but I cannot connect to the server; I get a timeout in the browser. Also, my request does not create any further log output.

Thank you in advance for your help.

concourse-db_1   | The files belonging to this database system will be owned by user "postgres".
concourse-db_1   | This user must also own the server process.
concourse-db_1   | 
concourse-db_1   | The database cluster will be initialized with locale "en_US.utf8".
concourse-db_1   | The default database encoding has accordingly been set to "UTF8".
concourse-db_1   | The default text search configuration will be set to "english".
concourse-db_1   | 
concourse-db_1   | Data page checksums are disabled.
concourse-db_1   | 
concourse-db_1   | fixing permissions on existing directory /database ... ok
concourse-db_1   | creating subdirectories ... ok
concourse-db_1   | selecting default max_connections ... 100
concourse-db_1   | selecting default shared_buffers ... 128MB
concourse-db_1   | selecting dynamic shared memory implementation ... posix
concourse-db_1   | creating configuration files ... ok
concourse-db_1   | running bootstrap script ... ok
concourse-db_1   | performing post-bootstrap initialization ... ok
concourse-db_1   | syncing data to disk ... ok
concourse-db_1   | 
concourse-db_1   | WARNING: enabling "trust" authentication for local connections
concourse-db_1   | You can change this by editing pg_hba.conf or using the option -A, or
concourse-db_1   | --auth-local and --auth-host, the next time you run initdb.
concourse-db_1   | 
concourse-db_1   | Success. You can now start the database server using:
concourse-db_1   | 
concourse-db_1   |     pg_ctl -D /database -l logfile start
concourse-db_1   | 
concourse-db_1   | waiting for server to start....2018-09-20 14:39:37.727 UTC [40] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
concourse-db_1   | 2018-09-20 14:39:37.760 UTC [41] LOG:  database system was shut down at 2018-09-20 14:39:37 UTC
concourse-db_1   | 2018-09-20 14:39:37.774 UTC [40] LOG:  database system is ready to accept connections
concourse-db_1   |  done
concourse-db_1   | server started
concourse-db_1   | CREATE DATABASE
concourse-db_1   | 
concourse-db_1   | 
concourse-db_1   | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
concourse-db_1   | 
concourse-db_1   | 2018-09-20 14:39:38.173 UTC [40] LOG:  received fast shutdown request
concourse-db_1   | waiting for server to shut down....2018-09-20 14:39:38.177 UTC [40] LOG:  aborting any active transactions
concourse-db_1   | 2018-09-20 14:39:38.181 UTC [40] LOG:  worker process: logical replication launcher (PID 47) exited with exit code 1
concourse-db_1   | 2018-09-20 14:39:38.183 UTC [42] LOG:  shutting down
concourse-db_1   | 2018-09-20 14:39:38.199 UTC [40] LOG:  database system is shut down
concourse-db_1   |  done
concourse-db_1   | server stopped
concourse-db_1   | 
concourse-db_1   | PostgreSQL init process complete; ready for start up.
concourse-db_1   | 
concourse-db_1   | 2018-09-20 14:39:38.297 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
concourse-db_1   | 2018-09-20 14:39:38.298 UTC [1] LOG:  listening on IPv6 address "::", port 5432
concourse-db_1   | 2018-09-20 14:39:38.301 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
concourse-db_1   | 2018-09-20 14:39:38.321 UTC [58] LOG:  database system was shut down at 2018-09-20 14:39:38 UTC
concourse-db_1   | 2018-09-20 14:39:38.328 UTC [1] LOG:  database system is ready to accept connections
concourse-db_1   | 2018-09-20 14:40:38.349 UTC [1] LOG:  received smart shutdown request
concourse-db_1   | 2018-09-20 14:40:38.353 UTC [1] LOG:  worker process: logical replication launcher (PID 64) exited with exit code 1
concourse-db_1   | 2018-09-20 14:40:38.355 UTC [59] LOG:  shutting down
concourse-db_1   | 2018-09-20 14:40:38.400 UTC [1] LOG:  database system is shut down
concourse-db_1   | 2018-09-20 14:43:14.582 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
concourse-db_1   | 2018-09-20 14:43:14.582 UTC [1] LOG:  listening on IPv6 address "::", port 5432
concourse-db_1   | 2018-09-20 14:43:14.586 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
concourse-db_1   | 2018-09-20 14:43:14.609 UTC [21] LOG:  database system was shut down at 2018-09-20 14:40:38 UTC
concourse-db_1   | 2018-09-20 14:43:14.618 UTC [1] LOG:  database system is ready to accept connections
concourse-db_1   | 2018-09-20 14:43:40.768 UTC [1] LOG:  received smart shutdown request
concourse-db_1   | 2018-09-20 14:43:40.773 UTC [1] LOG:  worker process: logical replication launcher (PID 27) exited with exit code 1
concourse-db_1   | 2018-09-20 14:43:40.775 UTC [22] LOG:  shutting down
concourse-db_1   | 2018-09-20 14:43:40.794 UTC [1] LOG:  database system is shut down
concourse-db_1   | 2018-09-20 14:47:38.972 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
concourse-db_1   | 2018-09-20 14:47:38.972 UTC [1] LOG:  listening on IPv6 address "::", port 5432
concourse-db_1   | 2018-09-20 14:47:38.976 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
concourse-db_1   | 2018-09-20 14:47:38.998 UTC [20] LOG:  database system was shut down at 2018-09-20 14:43:40 UTC
concourse-db_1   | 2018-09-20 14:47:39.005 UTC [1] LOG:  database system is ready to accept connections
concourse-db_1   | 2018-09-20 14:47:56.566 UTC [33] LOG:  could not receive data from client: Connection reset by peer
concourse-db_1   | 2018-09-20 14:47:56.796 UTC [1] LOG:  received smart shutdown request
concourse-db_1   | 2018-09-20 14:47:56.801 UTC [1] LOG:  worker process: logical replication launcher (PID 26) exited with exit code 1
concourse-db_1   | 2018-09-20 14:47:56.808 UTC [21] LOG:  shutting down
concourse-db_1   | 2018-09-20 14:47:56.829 UTC [1] LOG:  database system is shut down
concourse-db_1   | 2018-09-20 14:48:48.941 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
concourse-db_1   | 2018-09-20 14:48:48.941 UTC [1] LOG:  listening on IPv6 address "::", port 5432
concourse-db_1   | 2018-09-20 14:48:48.944 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
concourse-db_1   | 2018-09-20 14:48:48.966 UTC [21] LOG:  database system was shut down at 2018-09-20 14:47:56 UTC
concourse-db_1   | 2018-09-20 14:48:48.970 UTC [1] LOG:  database system is ready to accept connections
concourse-db_1   | 2018-09-20 14:51:56.941 UTC [34] LOG:  could not receive data from client: Connection reset by peer
concourse-web_1  | {"timestamp":"1537455626.109407663","source":"tsa","message":"tsa.listening","log_level":1,"data":{}}
concourse-web_1  | {"timestamp":"1537455626.112788439","source":"atc","message":"atc.listening","log_level":1,"data":{"debug":"127.0.0.1:8079","http":"0.0.0.0:8080"}}

quick start: web interface does not respond

We're evaluating transitioning from buildbot to concourse but we could not get the Quick Start to work.

Here's the steps I've done:

sudo apt install docker docker-compose
mkdir concourse
cd concourse
wget https://concourse-ci.org/docker-compose.yml
docker-compose up
Open 127.0.0.1:8080 in chrome (tried firefox too)

Neither browser fails immediately; there seems to be something at the endpoint:

< lsof -i tcp:8080
COMMAND  PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
chrome  3182 churlin   48u  IPv6 3014378      0t0  TCP ip6-localhost:43190->ip6-localhost:http-alt (ESTABLISHED)
chrome  3182 churlin   49u  IPv4 3006192      0t0  TCP localhost:53028->localhost:http-alt (ESTABLISHED)

but ultimately nothing shows up. I've attached the log of docker-compose up. I'm using Ubuntu 18.04; the Docker version is 18.09.5, build e8ff056, and the docker-compose version is 1.17.1, build unknown.

docker-compose-up-log.txt

panic when running docker-compose up

concourse-web_1 | panic: runtime error: invalid memory address or nil pointer dereference
concourse-web_1 | [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xfe3485]

Any idea?

Multi-arch ARM docker build

With ARM on the rise and most major cloud providers offering ARM instances, would it be possible to get ARM docker builds?

fatal: repository '/tmp/git-resource-repo-cache' does not exist

I've been setting up a small lab environment based on Concourse using the docker-compose YAML in this repo. I have a very simple test job that tries to fetch a GitHub repo, but I'm getting the following error:

fatal: repository '/tmp/git-resource-repo-cache' does not exist

I see that error mentioned in a few places on the Internet. Most of them are rather old and unresolved, and many of them involve usage of the docker-compose manifest.

I've bumped the Concourse logging up to info. I thought perhaps the issue was that /var/lib/docker was not on btrfs, or that the Docker daemon was not using the btrfs storage driver, but I've since modified my lab environment for those things and I'm still getting the error.

The following links are to the Vagrant-based lab environment and the simple test job.

I'm sort of at a loss. Thoughts?

Running concourse-docker on CF/PCF

Is there any documentation for running concourse-docker on CF or PCF? I need to sort out a few errors after a cf push including assigning the correct port and:
[ERR] Please specify one command of: generate-key, land-worker, migrate, quickstart, retire-worker, web or worker Thanks.

Large base image

Is using the large ubuntu base image required here? One image layer is 1.11 GB.

Help setting up AWS Secrets manager

I have been trying to figure out how to configure Concourse to use AWS Secrets Manager. I added the following environment variables to the docker-compose file, but from the logs it doesn't look like it ever reaches out to AWS to fetch the creds. Am I missing something, or should this happen automatically when these environment variables are added under environment in the docker-compose file?

      CONCOURSE_AWS_SECRETSMANAGER_REGION: XXXXX
      CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY: XXXXXXXX
      CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY: XXXXXXXX
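For what it's worth, credential managers are configured on the web (or quickstart) node, so the variables above need to live in that service's environment rather than the worker's. A minimal sketch, where the service name and map-style environment are assumptions based on the quickstart compose file and the values are placeholders:

services:
  concourse:
    environment:
      CONCOURSE_AWS_SECRETSMANAGER_REGION: XXXXX
      CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY: XXXXXXXX
      CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY: XXXXXXXX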

containerized concourse 7.4.1 with cgroup v2 + containerd results in "max containers reached" errors

We use Concourse 7.4.1 in a Docker container on a system with cgroups v2 enabled. The worker is configured with CONCOURSE_RUNTIME: containerd. We get "max containers reached" errors shortly after startup. When we list the worker's containers we get around 120 entries, not 250; the expected value is around 60.
The error seems to be related to our host's cgroup v2 configuration:

time="2021-11-02T14:50:46.996827586Z" level=info msg="starting signal loop" namespace=concourse path=/run/containerd/io.containerd.runtime.v2.task/concourse/1f728166-46a8-4aea-57f3-1e6c4c6cb67d pid=80
time="2021-11-02T14:50:47.148009483Z" level=error msg="failed to enable controllers ([cpuset cpu io memory pids rdma])" error="failed to write subtree controllers [cpuset cpu io memory pids rdma] to \"/sys/fs/cgroup/cgroup.subtree_control\": write /sys/fs/cgroup/cgroup.subtree_control: operation not supported"
...
#lots of:
{"timestamp":"2021-11-02T15:26:08.599015769Z","level":"error","source":"worker","message":"worker.garden.garden-server.destroy.failed","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/61ed35fc-d484-4a41-7bab-c2888aac853a/cgroup.procs: operation not supported\n: unknown","handle":"61ed35fc-d484-4a41-7bab-c2888aac853a","session":"1.4.10525"}}
{"timestamp":"2021-11-02T15:26:08.599018255Z","level":"error","source":"worker","message":"worker.garden.garden-server.destroy.failed","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/garden/24029699-6902-40c3-7d04-603388b95014/cgroup.procs: operation not supported\n: unknown","handle":"24029699-6902-40c3-7d04-603388b95014","session":"1.4.10526"}}
{"timestamp":"2021-11-02T15:26:08.599253941Z","level":"error","source":"worker","message":"worker.container-sweeper.tick.failed-to-destroy-container","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/61ed35fc-d484-4a41-7bab-c2888aac853a/cgroup.procs: operation not supported\n: unknown","handle":"61ed35fc-d484-4a41-7bab-c2888aac853a","session":"6.72"}}
{"timestamp":"2021-11-02T15:26:08.599354579Z","level":"error","source":"worker","message":"worker.container-sweeper.tick.failed-to-destroy-container","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/garden/24029699-6902-40c3-7d04-603388b95014/cgroup.procs: operation not supported\n: unknown","handle":"24029699-6902-40c3-7d04-603388b95014","session":"6.72"}}
{"timestamp":"2021-11-02T15:26:08.600572572Z","level":"error","source":"worker","message":"worker.garden.garden-server.destroy.failed","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/bb048491-1275-4383-421b-0fe76fc3ed16/cgroup.procs: operation not supported\n: unknown","handle":"bb048491-1275-4383-421b-0fe76fc3ed16","session":"1.4.10527"}}
{"timestamp":"2021-11-02T15:26:08.600745099Z","level":"error","source":"worker","message":"worker.container-sweeper.tick.failed-to-destroy-container","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/bb048491-1275-4383-421b-0fe76fc3ed16/cgroup.procs: operation not supported\n: unknown","handle":"bb048491-1275-4383-421b-0fe76fc3ed16","session":"6.72"}}
{"timestamp":"2021-11-02T15:26:08.601499295Z","level":"error","source":"worker","message":"worker.garden.garden-server.destroy.failed","data":{"error":"gracefully killing task: graceful kill: kill task execed processes: task execed processes: pid listing: runc did not terminate successfully: exit status 1: container_linux.go:187: getting all container pids from cgroups caused: read /sys/fs/cgroup/garden/3eae43f4-1fa6-4361-52f0-f21b4492793e/cgroup.procs: operation not supported\n: unknown","handle":"3eae43f4-1fa6-4361-52f0-f21b4492793e","session":"1.4.10529"}}

...
{"timestamp":"2021-11-02T15:28:30.143224228Z","level":"error","source":"worker","message":"worker.garden.garden-server.create.failed","data":{"error":"new container: checking container capacity: max containers reached","request":{"Handle":"3272600c-218d-4bd6-619f-8248b197d285","GraceTime":0,"RootFSPath":"raw:///worker-state/volumes/live/721c8836-2e07-4d2c-546f-56fac591d58c/volume","BindMounts":[{"src_path":"/worker-state/volumes/live/3bc7d120-c34a-498f-46b4-c44d88d345db/volume","dst_path":"/scratch","mode":1}],"Network":"","Privileged":true,"Limits":{"bandwidth_limits":{},"cpu_limits":{},"disk_limits":{},"memory_limits":{},"pid_limits":{}}},"session":"1.4.11671"}}

A fix is to move process 1 into its own cgroup in the entrypoint; then enabling the subtree controllers succeeds.
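A minimal sketch of that entrypoint change, assuming cgroup v2 is mounted at /sys/fs/cgroup inside the worker container (the child cgroup name, controller list, and exec hand-off are assumptions, not the repo's actual entrypoint):

#!/usr/bin/env bash
set -o errexit

# Only relevant on cgroup v2 (unified hierarchy).
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    # cgroup v2 forbids enabling controllers for children while processes sit
    # directly in the parent ("no internal processes" rule), so move everything
    # currently attached here -- including PID 1 -- into a child cgroup first.
    mkdir -p /sys/fs/cgroup/init
    while read -r pid; do
        echo "$pid" > /sys/fs/cgroup/init/cgroup.procs || true
    done < /sys/fs/cgroup/cgroup.procs

    # With the parent cgroup empty, delegating controllers to the subtree succeeds.
    echo "+cpu +memory +io +pids" > /sys/fs/cgroup/cgroup.subtree_control
fi

exec "$@"   # hand off to the original worker command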

Have CONCOURSE_TSA_HOST and CONCOURSE_TSA_PORT expectations broken?

I just upgraded from 3.10.0 to 3.14.1 and my workers would not connect. Previously I could set CONCOURSE_TSA_HOST: "<url>" and it automatically assumed it would use port 2222.

Now when I do that it throws the error:
{"timestamp":"1529341199.565750837","source":"worker","message":"worker.beacon.beacon.beacon-client.failed-to-connect-to-tsa","log_level":2,"data":{"error":"dial tcp: address concourse.<place>.com: missing port in address","session":"4.1.1"}}

After looking through some online docs and examples, I found mention that you could put the port in a separate variable, CONCOURSE_TSA_PORT: 2222.

However that still threw the error.

I had to update it to include the port in the host value itself, and I couldn't find any documentation as to why this changed.
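Concretely, what ended up working was putting the port directly into the host value, for example (the hostname is a placeholder):

# worker environment -- host and port combined in a single value:
export CONCOURSE_TSA_HOST="concourse.example.com:2222"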

Worker fails: failed to retrieve kernel parameter "net.ipv4.tcp_retries1"

Hello folks,

The worker with image tag 7.5.0, and also with 7.5.0-ubuntu-20211012, is failing to retrieve a kernel parameter.
Message in the Concourse GUI:

run check: find or create container on worker concourse-worker-0: failed to retrieve kernel parameter "net.ipv4.tcp_retries1": open /proc/sys/net/ipv4/tcp_retries1: no such file or directory

Put ./keys/generate into the docker image

Hi,

Please put the file into the Docker image so that one can do one-stop-shop key generation. I don't want to pull down a git repo just to generate keys that are needed inside a Docker image.

Cu

A script like this, off the top of my head:

#!/usr/bin/env bash
set -o nounset
declare __BASEDIRECTORY="/keys"
declare -a __SUBDIRECTORIES=("web" "worker")
declare -a __RSA_KEYS=( "/keys/web/session_signing_key" )
declare -a __SSH_KEYS=( "/keys/web/tsa_host_key" "/keys/worker/worker_key" )

for __SUBDIRECTORY in "${__SUBDIRECTORIES[@]}"; do

    if [[ ! -d "${__BASEDIRECTORY}/${__SUBDIRECTORY}" ]]; then
        mkdir -p "${__BASEDIRECTORY}/${__SUBDIRECTORY}"
    fi

done

for __KEY in "${__RSA_KEYS[@]}"; do

    if [[ ! -f "${__KEY}" ]]; then
        generate-key -t rsa -f "${__KEY}"
    fi

done

for __KEY in "${__SSH_KEYS[@]}"; do
    if [[ ! -f "${__KEY}" ]]; then
        generate-key -t ssh -f "${__KEY}"
    fi
done

This is what I put together to auto-generate the keys outside the container...

#!/usr/bin/env bash
set -o nounset
declare __BASEDIRECTORY="/srv/containers/tools/concourse/config/keys"
declare -a __SUBDIRECTORIES=("web" "worker")
declare -a __RSA_KEYS=( "/web/session_signing_key" )
declare -a __SSH_KEYS=( "/web/tsa_host_key" "/worker/worker_key" )

for __SUBDIRECTORY in "${__SUBDIRECTORIES[@]}"; do

    if [[ ! -d "${__BASEDIRECTORY}/${__SUBDIRECTORY}" ]]; then
        mkdir -p "${__BASEDIRECTORY}/${__SUBDIRECTORY}"
    fi

done

for __KEY in "${__RSA_KEYS[@]}"; do

    if [[ ! -f "${__BASEDIRECTORY}/${__KEY}" ]]; then
        docker run --rm -v "${__BASEDIRECTORY}:/keys" concourse/concourse generate-key -t rsa -f "/keys/${__KEY}"
    fi

done

for __KEY in "${__SSH_KEYS[@]}"; do
    if [[ ! -f "${__BASEDIRECTORY}/${__KEY}" ]]; then
        docker run --rm -v "${__BASEDIRECTORY}:/keys" concourse/concourse generate-key -t ssh -f "/keys/${__KEY}"
    fi
done

cp "${__BASEDIRECTORY}/worker/worker_key.pub" "${__BASEDIRECTORY}/web/authorized_worker_keys"
cp "${__BASEDIRECTORY}/web/tsa_host_key.pub" "${__BASEDIRECTORY}/worker/tsa_host_key.pub"

set-team & set-pipeline fail with error: forbidden

  • Concourse Version: 4.2.1
  • Did this used to work? N/A

When I try to run my setup.sh script (https://github.com/GONEproject/engine/tree/TASK/ISS%239/.ci), it already fails on the set-team command with error: forbidden.

Note: when I skip the set-team command, the script fails on the set-pipeline command with the same error =\

Image of setup.sh output

Here's my docker-compose.yml on the server side:

version: '3'

services:
  concourse-db:
    image: postgres
    environment:
    - POSTGRES_DB=concourse
    - POSTGRES_PASSWORD=1234
    - POSTGRES_USER=concourse_user
    - PGDATA=/database

  concourse-web:
    image: concourse/concourse:4.2.1
    command: quickstart
    links: [concourse-db]
    depends_on: [concourse-db]
    privileged: true
    ports: ["3000:3000"]
    environment:
    - CONCOURSE_POSTGRES_HOST=concourse-db
    - CONCOURSE_POSTGRES_USER=concourse_user
    - CONCOURSE_POSTGRES_PASSWORD=1234
    - CONCOURSE_POSTGRES_DATABASE=concourse
    - CONCOURSE_BIND_PORT=3000
    - CONCOURSE_EXTERNAL_URL=https://concourse.api-spot.com
    - CONCOURSE_ADD_LOCAL_USER=concourse:12345
    - CONCOURSE_MAIN_TEAM_ALLOW_ALL_USERS=true
    - CONCOURSE_WORKER_GARDEN_NETWORK

  concourse-worker:
    image: concourse/concourse:4.2.1
    command: worker
    privileged: true
    links: [concourse-web]
    depends_on: [concourse-web]
    environment:
    - CONCOURSE_TSA_HOST=concourse-web:2222
    - CONCOURSE_GARDEN_NETWORK

Thanks,
Robin Rpr.
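A quick way to check what the logged-in user can actually do before running set-team (a hedged sketch; the target name is arbitrary and the credentials are the ones from the compose file above):

fly -t spot login -c https://concourse.api-spot.com -u concourse -p 12345
fly -t spot status   # confirms the token is valid
fly -t spot teams    # set-team requires admin access, i.e. membership in the main team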

resource script '/opt/resource/in [/tmp/build/get]' failed: exit status 1 in resource

Hello.

I faced the following error from the github-pr-resource resource on RancherOS with Portainer. There's no stderr output.

resource script '/opt/resource/in [/tmp/build/get]' failed: exit status 1

Previously, I ran concourse-docker with docker-compose on BargeOS with VirtualBox. Everything worked fine in that environment.

I also found the similar issue below and tried running concourse-docker with the plain docker-compose command (without Portainer) on a clean RancherOS, but the result was the same.
concourse/concourse#1489 (comment)

I'm using concourse/concourse:5.3.3

Can you offer any help with this issue?

Dockerfile references files that are not in this repo

Clone the repo, run docker build ., and you get:

debconf: falling back to frontend: Teletype
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for ca-certificates (20170717~16.04.2) ...
Updating certificates in /etc/ssl/certs...
148 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container 746725d242b5
 ---> 9c492d584d63
Step 3/16 : ADD bin/dumb-init /usr/local/bin
ADD failed: stat /var/lib/docker/tmp/docker-builder398262277/bin/dumb-init: no such file or directory
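A workaround until the repo documents or ships those files, assuming the Dockerfile only needs the dumb-init release binary at bin/dumb-init (the version and download URL here are my guesses, and later ADD steps may expect further release artifacts):

# fetch the binary the Dockerfile expects, then retry the build
mkdir -p bin
curl -fsSL -o bin/dumb-init \
    https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
chmod +x bin/dumb-init
docker build .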

Concourse 6.1.0 worker fails with private key not provided

I just updated to 6.1.0 from 6.0.0 and now my worker container complains about a
private key not provided.

{"timestamp":"2020-05-22T15:42:32.973358021Z","level":"error","source":"worker","message":"worker.volume-sweeper.tick.failed-to-dial","data":{"error":"private key not provided","session":"7.4"}}


{"timestamp":"2020-05-22T15:42:32.973409963Z","level":"error","source":"worker","message":"worker.volume-sweeper.tick.failed-to-get-volumes-to-destroy","data":{"error":"private key not provided","session":"7.4"}}


{"timestamp":"2020-05-22T15:42:32.973369746Z","level":"error","source":"worker","message":"worker.container-sweeper.tick.failed-to-dial","data":{"error":"private key not provided","session":"6.4"}}


{"timestamp":"2020-05-22T15:42:32.973481887Z","level":"error","source":"worker","message":"worker.container-sweeper.tick.failed-to-get-containers-to-destroy","data":{"error":"private key not provided","session":"6.4"}}

When I try to run a pipeline, I then get another error about a base resource type not being found.

base resource type not found: git
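Both errors look like the worker can no longer find its keys after the upgrade; "base resource type not found: git" then follows because the worker never registers. A hedged sketch of the worker settings this repo's docker-compose.yml passes (variable names from memory, so double-check them against the current file; the paths assume the generated keys are mounted at /concourse-keys):

# worker container environment
export CONCOURSE_TSA_HOST="web:2222"
export CONCOURSE_TSA_PUBLIC_KEY="/concourse-keys/tsa_host_key.pub"
export CONCOURSE_TSA_WORKER_PRIVATE_KEY="/concourse-keys/worker_key"
# the ./keys/worker directory must be mounted into the container at /concourse-keys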

atc.sky.callback.failed-to-fetch-cookie-state

Steps followed:

git clone http://github.com/concourse/concourse-docker
cd concourse-docker/
./generate-keys.sh
docker-compose up

When I tried logging in with fly -t local login --concourse-url 127.0.0.1:8080, it gave me a link to follow to log in. When doing so with test:test as the username/password, the two lines below are logged in the docker-compose session. I came across posts describing a similar issue in old Concourse versions but haven't found anything for this one.

Also, I have a Concourse BOSH deployment (v5.0.0) that fly works with, while this concourse-docker deployment is v5.0.1. Not sure if that has anything to do with this, but it may be worth noting.

The same error also occurs if I navigate to 127.0.0.1:8080 and log in again with test:test as the username/password.

If I have missed some obvious step I apologize, but I didn't see anything.

concourse-web_1 | {"timestamp":"2019-04-03T21:13:41.475327676Z","level":"info","source":"atc","message":"atc.dex.event","data":{"fields":{},"message":"login successful: connector "local", username="test", email="test", groups=[]","session":"6"}}
concourse-web_1 | {"timestamp":"2019-04-03T21:13:41.495535187Z","level":"error","source":"atc","message":"atc.sky.callback.failed-to-fetch-cookie-state","data":{"error":"http: named cookie not present","session":"5.7"}}

"failed to create volume", Concourse running in docker-compose on Linux

I've got Concourse running on a NixOS 18.03 VPS inside docker-compose, and this is working fine. I'm now trying to deploy exactly the same Concourse configuration to another NixOS 18.03 machine, but I'm not having any luck. I'm using the same docker-compose file and the same pipelines.

The new machine gives errors about being unable to create volumes:

Apr 12 21:55:49 nyarlathotep docker-compose[26088]: concourse_1  | {"timestamp":"2019-04-12T20:55:49.753780802Z","level":"error","source":"atc","message":"atc.pipelines.radar.scan-resource.interval-runner.tick.find-or-create-cow-volume-for-container.failed-to-create-volume-in-baggageclaim","data":{"container":"af97f489-2d27-4007-57b4-e5cb9c43e659","error":"failed to create volume","pipeline":"ci","resource":"concoursefiles-git","session":"18.1.4.1.1.3","team":"main","volume":"e843e1a7-4122-494b-5397-d0a94294e418"}}
Apr 12 21:55:49 nyarlathotep docker-compose[26088]: concourse_1  | {"timestamp":"2019-04-12T20:55:49.793734883Z","level":"error","source":"atc","message":"atc.pipelines.radar.scan-resource.interval-runner.tick.failed-to-fetch-image-for-container","data":{"container":"af97f489-2d27-4007-57b4-e5cb9c43e659","error":"failed to create volume","pipeline":"ci","resource":"concoursefiles-git","session":"18.1.4.1.1","team":"main"}}
Apr 12 21:55:49 nyarlathotep docker-compose[26088]: concourse_1  | {"timestamp":"2019-04-12T20:55:49.794088237Z","level":"error","source":"atc","message":"atc.pipelines.radar.scan-resource.interval-runner.tick.failed-to-initialize-new-container","data":{"error":"failed to create volume","pipeline":"ci","resource":"concoursefiles-git","session":"18.1.4.1.1","team":"main"}}

The concoursefiles-git resource that it's failing to create a volume for is a normal git resource. The other resources in the pipeline fail with the same error.

The pipeline is here: https://github.com/barrucadu/concoursefiles/blob/master/pipelines/ci.yml

This is the docker-compose file:

version: '3'

services:
  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [postgres, registry]
    ports: ["3003:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: postgres
      CONCOURSE_POSTGRES_USER: concourse
      CONCOURSE_POSTGRES_PASSWORD: concourse
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: "https://ci.nyarlathotep.barrucadu.co.uk"
      CONCOURSE_MAIN_TEAM_GITHUB_USER: "barrucadu"
      CONCOURSE_GITHUB_CLIENT_ID: "<omitted>"
      CONCOURSE_GITHUB_CLIENT_SECRET: "<omitted>"
      CONCOURSE_LOG_LEVEL: error
      CONCOURSE_GARDEN_LOG_LEVEL: error
    networks:
      - ci

  postgres:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse
      POSTGRES_USER: concourse
      PGDATA: /database
    networks:
      - ci
    volumes:
      - pgdata:/database

  registry:
    image: registry
    networks:
      ci:
        ipv4_address: "172.21.0.254"
        aliases: [ci-registry]
    volumes:
      - regdata:/var/lib/registry

networks:
  ci:
    ipam:
      driver: default
      config:
        - subnet: 172.21.0.0/16

volumes:
  pgdata:
  regdata:

I'm using the latest concourse/concourse image, as I set this up today. The version of docker is 18.09.2 (build 62479626f213818ba5b4565105a05277308587d5). What can I look at to help debug this?
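To answer the closing question with what I would look at (a hedged sketch; /worker-state is, if I recall correctly, the work dir baked into the image, and the service name matches the compose file above):

# raise CONCOURSE_LOG_LEVEL from error to debug in the compose file first, then:
docker-compose logs --tail=200 concourse | grep -i -e baggageclaim -e volume
# check that the work dir has free space and sits on a filesystem baggageclaim can use
docker-compose exec concourse df -hT /worker-state
docker-compose exec concourse ls /worker-state/volumes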

Concourse docker image is missing 'file' binary used by btrfs driver.

Baggageclaim's btrfs driver uses 'file' to identify whether volumes.img is a btrfs image [1]. Since the Concourse Docker image is missing this binary, the check fails [2], triggering an attempt to create a btrfs filesystem, which then also fails. In my case (running in Kubernetes), this prevented the worker from starting correctly, crash-looping in the process.

[1] https://github.com/concourse/baggageclaim/blob/master/fs/btrfs.go#L45
[2]

+ '[' '!' -e /concourse-work-dir/volumes.img ']'
++ stat --printf=%s /concourse-work-dir/volumes.img
+ '[' 199567056896 '!=' 199567056896 ']'
++ losetup -j /concourse-work-dir/volumes.img
++ cut -d: -f1
+ lo=/dev/loop0
+ '[' -z /dev/loop0 ']'
+ file /concourse-work-dir/volumes.img
bash: line 11: file: command not found
+ grep BTRFS
+ mkfs.btrfs --nodiscard /concourse-work-dir/volumes.img
/concourse-work-dir/volumes.img appears to contain an existing filesystem (btrfs).
ERROR: use the -f option to force overwrite of /concourse-work-dir/volumes.img
"    
   stdout:  "btrfs-progs v4.15.1
See http://btrfs.wiki.kernel.org for more information.

"    

Could not resolve host: github.com

I am using concourse-docker on a Mac and I get the error below as soon as I start a pipeline. Any suggestions on how to solve it?

run check step: run check step: check: resource script '/opt/resource/check []' failed: exit status 128

stderr:
Cloning into '/tmp/git-resource-repo-cache'...
fatal: unable to access 'https://github.com/concourse/docs/': Could not resolve host: github.com

Thanks!
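Not a fix, but one way to confirm whether DNS is actually broken inside the check container rather than on the Mac itself (a hedged sketch; the pipeline and resource names are placeholders):

# intercept the failing check container and test name resolution from inside it
fly -t local intercept --check my-pipeline/my-git-resource -- cat /etc/resolv.conf
fly -t local intercept --check my-pipeline/my-git-resource -- nslookup github.com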

Docker container dies at latest version, and at 5.0.1

Following the instructions here:

https://concourse-ci.org/install.html

Leads to the Docker container running Concourse simply dying. docker-compose logs -f gives no error message (only the postgres server logs anything); the container simply cannot run, even if I manually try docker run -d -p 8080:8080 <image_name>.

I've downgraded to v5.0.1, and the same issue happens.

Downgrading to v5.0.0 fixes the issue. This is the working docker-compose.yml file (the only change from the default is that I explicitly pinned the version number):

version: '3'

services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database

  concourse:
    image: concourse/concourse:5.0.0
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test

I'm running on an AWS t2.large instance with 19G free disk space.

Cannot start web UI, Concourse 5.0.1

concourse-web_1 | {"timestamp":"2019-04-04T08:15:50.787838900Z","level":"info","source":"atc","message":"atc.cmd.finish","data":{"duration":234400,"session":"1"}}
concourse-web_1 | {"timestamp":"2019-04-04T08:15:50.787976000Z","level":"info","source":"tsa","message":"tsa.starting-tsa-without-authorized-keys","data":{}}
concourse-web_1 | panic: runtime error: invalid memory address or nil pointer dereference
concourse-web_1 | [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x571e05]
concourse-web_1 |
concourse-web_1 | goroutine 1 [running]:
concourse-web_1 | crypto/rsa.(*PrivateKey).Public(0x0, 0x0, 0x0)
concourse-web_1 | /usr/local/go/src/crypto/rsa/rsa.go:100 +0x5
concourse-web_1 | golang.org/x/crypto/ssh.NewSignerFromSigner(0x7f204d2a9580, 0xc0000bcd48, 0xc0000bcd48, 0x7f204d2a9580, 0xc0000bcd48, 0x1)
concourse-web_1 | /tmp/build/70f2e240/gopath/pkg/mod/golang.org/x/[email protected]/ssh/keys.go:720 +0x35
concourse-web_1 | golang.org/x/crypto/ssh.NewSignerFromKey(0x1ae9dc0, 0xc0000bcd48, 0x0, 0xc000f0b270, 0x415918, 0x30)
concourse-web_1 | /tmp/build/70f2e240/gopath/pkg/mod/golang.org/x/[email protected]/ssh/keys.go:695 +0x1c2
concourse-web_1 | github.com/concourse/concourse/tsa/tsacmd.(*TSACommand).configureSSHServer(0xc000298780, 0xc000cbd490, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0009effc0, 0x0, ...)
concourse-web_1 | /tmp/build/70f2e240/concourse/tsa/tsacmd/command.go:179 +0x16d
concourse-web_1 | github.com/concourse/concourse/tsa/tsacmd.(*TSACommand).Runner(0xc000298780, 0xc000a5be60, 0x0, 0x1, 0x25f15e0, 0xc000cad620, 0x0, 0x0)
concourse-web_1 | /tmp/build/70f2e240/concourse/tsa/tsacmd/command.go:96 +0x30b
concourse-web_1 | main.(*WebCommand).Runner(0xc0000dcd88, 0xc000a5be60, 0x0, 0x1, 0x7, 0x6, 0xc000a5bcc0, 0xc00045e940)
concourse-web_1 | /tmp/build/70f2e240/concourse/cmd/concourse/web.go:57 +0xf2
concourse-web_1 | main.(*WebCommand).Execute(0xc0000dcd88, 0xc000a5be60, 0x0, 0x1, 0x1a0a6c0, 0x1bfde40)
concourse-web_1 | /tmp/build/70f2e240/concourse/cmd/concourse/web.go:37 +0x64
concourse-web_1 | github.com/vito/twentythousandtonnesofcrudeoil.installEnv.func2(0x7f204d2fb7d8, 0xc0000dcd88, 0xc000a5be60, 0x0, 0x1, 0x1, 0x0)
concourse-web_1 | /tmp/build/70f2e240/gopath/pkg/mod/github.com/vito/twentythousandtonnesofcrudeoil@v0.0.0-20180305154709-3b21ad808fcb/environment.go:40 +0x8a
concourse-web_1 | github.com/jessevdk/go-flags.(*Parser).ParseArgs(0xc00009cd80, 0xc0000c2010, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
concourse-web_1 | /tmp/build/70f2e240/gopath/pkg/mod/github.com/jessevdk/[email protected]/parser.go:314 +0x86c
concourse-web_1 | github.com/jessevdk/go-flags.(*Parser).Parse(...)
concourse-web_1 | /tmp/build/70f2e240/gopath/pkg/mod/github.com/jessevdk/[email protected]/parser.go:186
concourse-web_1 | main.main()
concourse-web_1 | /tmp/build/70f2e240/concourse/cmd/concourse/main.go:31 +0x21b
concourse-db_1 | 2019-04-04 08:15:50.797 UTC [71] LOG: could not receive data from client: Connection reset by peer
ivanyip_concourse-web_1 exited with code 2
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.051667600Z","level":"info","source":"worker","message":"worker.beacon-runner.restarting","data":{"session":"9"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.055394800Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp: lookup concourse-web on 127.0.0.11:53: no such host","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.055499000Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"9.1.3"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.055546300Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.055623800Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.exited-with-error","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:15:55.055680100Z","level":"error","source":"worker","message":"worker.beacon-runner.failed","data":{"error":"all worker SSH gateways unreachable","session":"9"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.056415900Z","level":"info","source":"worker","message":"worker.beacon-runner.restarting","data":{"session":"9"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.063588100Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp: lookup concourse-web on 127.0.0.11:53: no such host","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.063808800Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"9.1.4"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.063870600Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.063961700Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.exited-with-error","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:00.064125500Z","level":"error","source":"worker","message":"worker.beacon-runner.failed","data":{"error":"all worker SSH gateways unreachable","session":"9"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:05.064876200Z","level":"info","source":"worker","message":"worker.beacon-runner.restarting","data":{"session":"9"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:05.074391800Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-connect-to-tsa","data":{"error":"dial tcp: lookup concourse-web on 127.0.0.11:53: no such host","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:05.074547600Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.dial.failed-to-connect-to-any-tsa","data":{"error":"all worker SSH gateways unreachable","session":"9.1.5"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:05.074681600Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.failed-to-dial","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestamp":"2019-04-04T08:16:05.074920000Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.exited-with-error","data":{"error":"all worker SSH gateways unreachable","session":"9.1"}}
concourse-worker_1 | {"timestam

Web not connecting to DB on Fedora 32

Hi

I have just taken the latest cut of master and I am noticing in the logs for docker-compose up that the web is having trouble connecting to the database:

{"timestamp":"2020-06-01T15:37:52.206102235Z","level":"error","source":"atc","message":"atc.db.failed-to-open-db-retrying","data":{"error":"dial tcp 172.23.0.2:5432: connect: no route to host","session":"3"}}

I am using Docker Community Edition on Fedora 32 with the following details:

$ uname -r
5.6.14-300.fc32.x86_64
$ cat /etc/os-release
NAME=Fedora
VERSION="32 (Workstation Edition)"
ID=fedora
VERSION_ID=32
VERSION_CODENAME=""
PLATFORM_ID="platform:f32"
PRETTY_NAME="Fedora 32 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:32"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=32
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=32
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
$ docker -v
Docker version 19.03.10, build 9424aeaee9
$ docker-compose -v
docker-compose version 1.25.4, build unknown
$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
f139c503fb4b        concourse/concourse   "dumb-init /usr/loca…"   10 minutes ago      Up 7 minutes                                 concourse-docker_worker_1
c1892dcd79da        concourse/concourse   "dumb-init /usr/loca…"   10 minutes ago      Up 7 minutes        0.0.0.0:8080->8080/tcp   concourse-docker_web_1
c17ec6e66ba3        postgres              "docker-entrypoint.s…"   10 minutes ago      Up 7 minutes        5432/tcp                 concourse-docker_db_1

I also tried a few network mode configurations (bridge & host) without any luck. Is there anything else you could recommend that I could try?
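One thing worth ruling out on Fedora 32 is the firewalld/nftables layer sitting between the compose containers (a hedged sketch; the network name is a guess derived from the compose project directory):

# see which zones are active and which subnet the compose bridge uses
sudo firewall-cmd --get-active-zones
docker network inspect concourse-docker_default --format '{{json .IPAM.Config}}'
# temporarily stopping firewalld narrows down whether it is the culprit
sudo systemctl stop firewalld && docker-compose restart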

Thanks

Web not connecting to DB

I have followed the instructions about running the SSH key generation script, then ran docker-compose.
Here is what I see when doing that.

Attaching to concourse_concourse-db_1, concourse_concourse-web_1, concourse_concourse-worker_1
concourse-web_1     | {"timestamp":"2019-04-06T21:01:50.941371647Z","level":"info","source":"atc","message":"atc.cmd.start","data":{"session":"1"}}
concourse-web_1     | {"timestamp":"2019-04-06T21:01:50.945134121Z","level":"error","source":"atc","message":"atc.db.failed-to-open-db-retrying","data":{"error":"dial tcp 172.20.0.2:5432: connect: connection refused","session":"3"}}
concourse-db_1      | The files belonging to this database system will be owned by user "postgres".
concourse-db_1      | This user must also own the server process.

Here is my compose file; I'm just trying to run some tests:

version: '3'

services:
  concourse-db:
    image: postgres
    environment:
    - POSTGRES_DB=concourse
    - POSTGRES_PASSWORD=password123
    - POSTGRES_USER=concourse
    - PGDATA=/database

  concourse-web:
    image: concourse/concourse
    command: web
    links: [concourse-db]
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]
    environment:
    - CONCOURSE_POSTGRES_HOST=concourse-db
    - CONCOURSE_POSTGRES_USER=concourse
    - CONCOURSE_POSTGRES_PASSWORD=password123
    - CONCOURSE_POSTGRES_DATABASE=concourse
    - CONCOURSE_EXTERNAL_URL=http://server-ipaddress:8080
    - CONCOURSE_ADD_LOCAL_USER=test:test
    - CONCOURSE_MAIN_TEAM_LOCAL_USER=test

  concourse-worker:
    image: concourse/concourse
    command: worker
    privileged: true
    links: [concourse-web]
    depends_on: [concourse-web]
    volumes: ["./keys/worker:/concourse-keys"]
    environment:
    - CONCOURSE_TSA_HOST=concourse-web:2222
    - CONCOURSE_GARDEN_NETWORK

Support for ca-certificates

When attempting to use GitHub Enterprise OAuth, the web binary throws the following error:

concourse-web_1 | {"timestamp":"1469720047.538653135","source":"atc","message":"atc.oauth-callback.callback.failed-to-exchange-token","log_level":2,"data":{"error":"Post https://github.<private domain>.com/login/oauth/access_token: x509: failed to load system roots and no roots provided","session":"3.1"}}

Exec-ing into the image and installing ca-certificates fixes this problem. The older @gregarcara version of this repo installed ca-certificates (https://github.com/gregarcara/concourse-docker/blob/master/Dockerfile#L3), where this functionality also worked correctly, and I believe it needs to be re-added here:

https://github.com/concourse/concourse-docker/blob/master/Dockerfile#L3

A PR for this is incoming, for your consideration.
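For reference, the workaround described above amounts to something like this (hedged; the container name is whatever docker-compose assigned, and apt-get assumes the Ubuntu-based image):

docker exec -u root concourse-web_1 apt-get update
docker exec -u root concourse-web_1 apt-get install -y ca-certificates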
