
build-contract's People

Contributors

atamon, solsson

Forkers

solsson

build-contract's Issues

Networks (and probably other things) are never cleaned up and docker has limits

I ran into the following while running build-contract locally:

Creating network "yoleanlive3yoleanliveclientunittests_default" with the default driver
ERROR: failed to parse pool request for address space "LocalDefault" pool "" subpool "": could not find an available predefined network

It seems related to moby/moby#23971 where they conclude that docker-compose down removes any created networks from the docker-compose run. Might we want to do this instead of docker-compose kill at https://github.com/Yolean/build-contract/blob/master/build-contract#L66?
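
A rough sketch of that change (the variable names here are assumptions for illustration, not the script's actual ones): replacing the kill step with a down step would make docker-compose remove the networks it created as well.

  # Assumed variable names; the real script may differ.
  # Before: containers are killed but the project's network is left behind.
  #   docker-compose -f "$compose_file" -p "$project" kill
  # After: down stops the containers and also removes the networks created
  # for the compose project.
  docker-compose -f "$compose_file" -p "$project" down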

build-contract container ends up as Completed in kubernetes on docker-compose timeouts

I had a build job that failed due to:

...
An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

But build-contract exits with 0, so any pod in Kubernetes that runs it shows up as Completed instead of Error. These timeout errors are often transient, so re-running build-contract once would usually result in a proper build.
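
A minimal sketch of the behaviour we'd want instead, assuming the script shells out to docker-compose along these lines (not its actual contents): check the compose exit status and propagate it, so the pod is marked Error rather than Completed.

  # Sketch only; the real invocation in build-contract may differ.
  if ! docker-compose -p "$project" up -d; then
    echo "docker-compose up failed (possibly a transient COMPOSE_HTTP_TIMEOUT); aborting" >&2
    exit 1
  fi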

docker-compose build --pull breaks dependent builds within same compose file

After 1602ed4 and 078eeea we can no longer run compose files with the following convention:

  productionimage:
    build:
      context: ../
    image: registry/yolean/something
  testservice:
    depends_on:
      - httpd-latest
    build:
      # This dockerfile has "FROM registry/yolean/something"
      context: ../modify-for-testing

An example such build is https://github.com/Reposoft/docker-svn/blob/master/build-contracts/docker-compose.yml where production image builds have local dependencies.
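
One possible workaround, sketched here as an assumption rather than a confirmed fix: only force a pull for services whose FROM images live in a registry, and build services that depend on locally produced images without --pull.

  # Refresh remote base images only for the service built from a registry FROM.
  docker-compose build --pull productionimage
  # Build the test service without --pull, so its FROM registry/yolean/something
  # resolves to the image built in the previous step instead of being pulled.
  docker-compose build testservice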

Build can fail due to services that have been removed

To reproduce:

  • Create a build-contract service that exits with a non-zero code, and run the build.
  • Delete the failing service from docker-compose.yml.
  • The next build on the same machine will fail.

The reason is that we pick up the exit status based on label and project name.
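
A sketch of the kind of selection involved and one possible mitigation (both assumptions about the script, not its actual contents):

  # Containers are picked up by label and project name, so a failed container
  # from a service that no longer exists in docker-compose.yml still matches.
  docker ps -a \
    --filter "label=com.yolean.build-contract" \
    --filter "name=${COMPOSE_PROJECT_NAME}" \
    --format '{{.Names}} {{.Status}}'
  # Possible mitigation: clear leftovers for the project before building.
  docker-compose down --remove-orphans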

Related to #15

Use output from other builds in target Dockerfile

This is a discussion on how build-contract relates to some core Docker design decisions.

Example: We have a FROM nginx build that needs to include a js file built with webpack, i.e. a quite lengthy Node.js/NPM execution. A typical build server like Jenkins would have Node.js installed locally and produce the js in a step prior to running docker build on the target image. A FROM node build could easily do the build, but we wouldn't get the file out of it, at least not at build time.

The target build could install the dependency + run the build + clean up, but that's quite messy and needs to make assumptions about the nginx image.

Relates to moby/moby#13026, where "Multi-stage building" looks like a similar concept.
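
For completeness, a rough sketch of one way to get the file out without installing Node.js on the build server; image names, the Dockerfile name and the paths are hypothetical: build the FROM node image, copy the artifact out of a created container, then build the target image with the file in its context.

  # Build the Node.js/webpack image (hypothetical Dockerfile and paths).
  docker build -t example-js-build -f Dockerfile.webpack .
  # Create (but don't run) a container so the built file can be copied out.
  id=$(docker create example-js-build)
  docker cp "$id":/usr/src/app/dist/bundle.js ./nginx-context/
  docker rm "$id"
  # Build the FROM nginx target image with the artifact now in its context.
  docker build -t example-site ./nginx-context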

Build sometimes terminated after one of many test containers exited 0

I don't have a clear repro, but I've seen this before, and now I added a third test container to a docker-compose.yml that already had two. Once the third test exits, build-contract goes:

dockercompose_hooks-test_1 exited with code 0
Killing configstoredockercompose_site-admin-test_1 ...

where both hooks-test and site-admin-test have:

    labels:
      - com.yolean.build-contract
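
A minimal sketch of the behaviour I'd expect instead (an assumption about what the script should do, not what it currently does): wait until every labelled test container has exited before tearing anything down.

  # Keep waiting while any container with the build-contract label is still running.
  while docker ps -q --filter "label=com.yolean.build-contract" | grep -q .; do
    sleep 1
  done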

Discourage use of untagged images

I'm investigating local build performance now. We tend to use :latest in test containers, but that slows builds down. In production containers we tend to use :[whatever version], but that gives a false sense of stability, as we learned from httpd:2.4.23, which had a config change within that tag that broke a downstream build.

This is just an idea but I felt build-contract would be a good place for a reminder about the value of :version@sha in FROM.
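
As an illustration of the :version@sha idea (a sketch; the output shape below is indicative, not a real digest): the digest of a pulled tag can be looked up and then pinned in FROM.

  docker pull httpd:2.4
  # Prints something like httpd@sha256:<digest>; that digest can then be
  # pinned in a Dockerfile as FROM httpd:2.4@sha256:<digest>.
  docker inspect --format '{{index .RepoDigests 0}}' httpd:2.4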

List failed containers at aborted build

We currently echo that we exit 1, but not why. Current code has no list of failing containers, just a count.

We could use our current docker ps filter with an added exited=0 to get all non-failing containers; the failing ones would then be the test_containers that are not in that list.

Or, to get the failing ones directly, we could use inspect with some template magic like docker inspect -f '{{if ne 0 .State.ExitCode }}{{.Name}} {{.State.ExitCode}}{{ end }}' $(docker ps -aq)
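
A sketch of the filter-based variant (label and shell details assumed): take all labelled containers for the project, subtract the ones that exited 0, and inspect the remainder.

  all=$(docker ps -aq --filter "label=com.yolean.build-contract")
  ok=$(docker ps -aq --filter "label=com.yolean.build-contract" --filter "exited=0")
  # Failing containers are those in $all but not in $ok.
  failing=$(comm -23 <(echo "$all" | sort) <(echo "$ok" | sort))
  [ -n "$failing" ] && docker inspect -f '{{.Name}} exited {{.State.ExitCode}}' $failing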
