License: Apache License 2.0

Makisu


This project is deprecated and will be archived by May 4th, 2021

The makisu project is no longer actively maintained and will soon be archived. Please read the details in this issue.

Makisu is a fast and flexible Docker image build tool designed for unprivileged containerized environments such as Mesos or Kubernetes.

Some highlights of Makisu:

  • Requires no elevated privileges or containerd/Docker daemon, making the build process portable.
  • Uses a distributed layer cache to improve performance across a build cluster.
  • Provides control over generated layers with a new optional keyword #!COMMIT, reducing the number of layers in images.
  • Is Docker compatible. Note that the Dockerfile parser in Makisu is opinionated in some scenarios; more details can be found here.

Makisu has been in use at Uber since early 2018, building thousands of images every day across 4 different languages. The motivation and mechanism behind it are explained in https://eng.uber.com/makisu/.

Building Makisu

Building Makisu image

To build a Docker image that can perform builds inside a container:

make images

Building the Makisu binary and building simple images

To get the makisu binary locally:

go get github.com/uber/makisu/bin/makisu

For a Dockerfile that doesn't contain RUN directives, makisu can build it without a Docker daemon, containerd, or runc:

makisu build -t ${TAG} --dest ${TAR_PATH} ${CONTEXT}
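
As an illustration, a minimal RUN-free Dockerfile that such a build can handle (the file names here are hypothetical):

```dockerfile
# Hypothetical RUN-free Dockerfile: only a base image, file copies, and
# metadata, so no command execution (and hence no containerizer) is needed.
FROM alpine:3.8

COPY app /usr/bin/app
ENV APP_ENV=production
ENTRYPOINT ["/usr/bin/app"]
```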

Running Makisu

For a full list of flags, run makisu build --help or refer to the README here.

Makisu anywhere

To build Dockerfiles that contain RUN, Makisu needs to run in a container. To try it locally, place the following snippet inside your ~/.bashrc or ~/.zshrc:

function makisu_build() {
    makisu_version=${MAKISU_VERSION:-latest}
    # The build context is the last argument; cd into it first.
    cd "${@: -1}" || return
    docker run -i --rm --net host \
        -v /var/run/docker.sock:/docker.sock \
        -e DOCKER_HOST=unix:///docker.sock \
        -v "$(pwd)":/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        gcr.io/uber-container-tools/makisu:"$makisu_version" build \
            --commit=explicit \
            --modifyfs=true \
            --load \
            "${@:1:${#@}-1}" /makisu-context
    cd - || return
}

Now you can use makisu_build like you would use docker build:

$ makisu_build -t myimage .

Note:

  • The Docker socket mount is optional. It is used together with --load to load images back into the Docker daemon, for convenience during local development. The same goes for the mount at /makisu-storage, which is used for the local cache. If the image will be pushed to a registry directly, remove --load for better performance.
  • The --modifyfs=true option lets Makisu assume ownership of the filesystem inside the container. Files in the container that don't belong to the base image will be overwritten at the beginning of the build.
  • The --commit=explicit option makes Makisu commit a layer only when it sees #!COMMIT and at the end of the Dockerfile. See "Explicit Commit and Cache" for more details.

Makisu on Kubernetes

Makisu makes it easy to build images from a GitHub repository inside Kubernetes. A single pod (or job) is created with an init container, which fetches the build context through git or other means and places that context in a designated volume. Once it completes, the Makisu container is created and executes the build, using that volume as its build context.
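
A minimal sketch of that pattern, with an init container and a shared emptyDir volume (the job name, repository URL, and image tag are placeholders, not the official template):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: makisu-build            # hypothetical job name
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: fetch-context     # clones the repo into the shared volume
        image: alpine/git
        args: ["clone", "https://github.com/example/app.git", "/makisu-context"]
        volumeMounts:
        - name: context
          mountPath: /makisu-context
      containers:
      - name: makisu            # builds using that volume as context
        image: gcr.io/uber-container-tools/makisu:latest
        args: ["build", "--modifyfs=true", "-t", "example/app", "/makisu-context"]
        volumeMounts:
        - name: context
          mountPath: /makisu-context
      volumes:
      - name: context
        emptyDir: {}
```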

Creating registry configuration

Makisu needs a registry configuration mounted in to push to a secure registry. The config format is described in the documentation. After creating the configuration file on the local filesystem, run the following command to create the k8s secret:
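
As an illustration, such a registry.yaml might look like the following (the registry host, repository pattern, and credentials are hypothetical; the real format is in the linked documentation):

```yaml
# registry.yaml -- hypothetical credentials for a private registry
registry.example.com:
  my-team/*:
    security:
      basic:
        username: myuser
        password: mypassword
```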

$ kubectl create secret generic docker-registry-config --from-file=./registry.yaml
secret/docker-registry-config created

Creating Kubernetes job spec

To set up a Kubernetes job that builds a GitHub repository and pushes to a secure registry, refer to our Kubernetes job spec template (and an out-of-the-box example).

With such a job spec, a simple kubectl create -f job.yaml will start the build. The job status will reflect whether the build succeeded or failed.

Using cache

Configuring distributed cache

Makisu supports a distributed cache, which can significantly reduce build times, by up to 90% for some of Uber's code repos. Makisu caches Docker image layers both locally and in the Docker registry (if the --push parameter is provided), and uses a separate key-value store to map lines of a Dockerfile to the names of the layers.

For example, Redis can be set up as a distributed cache key-value store with this Kubernetes job spec. Then connect Makisu to the Redis cache by passing the --redis-cache-addr=redis:6379 argument. If the Redis server is password-protected, use the --redis-cache-password=password argument. The cache has a 14-day TTL by default, which can be configured with the --local-cache-ttl=14d argument.
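
Putting those flags together, a cache-enabled build might look like this (the registry address, image name, and context path are placeholders):

```shell
# Build with a Redis cacheID store and push layers to a registry cache.
makisu build \
    -t registry.example.com/team/app:latest \
    --push=registry.example.com \
    --redis-cache-addr=redis:6379 \
    --redis-cache-password=password \
    --local-cache-ttl=7d \
    /path/to/context
```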

For more options on cache, please see Cache.

Explicit commit and cache

By default, Makisu will cache each directive in a Dockerfile. To avoid committing and caching everything, the layer cache can be further optimized via explicit caching with the --commit=explicit flag. Dockerfile directives may then be manually cached using the #!COMMIT annotation:

FROM node:8.1.3

ADD package.json package.json
ADD pre-build.sh pre-build.sh

# A bunch of pre-install steps here.
...
...
...

# A step to be cached. A single layer will be committed and cached here on top of base image.
RUN npm install #!COMMIT

...
...
...

# The last step of the last stage is always committed by default, generating and caching another layer.
ENTRYPOINT ["/bin/bash"]

In this example, only two additional layers on top of the base image will be generated and cached.

Configuring Docker Registry

For convenience when working with any public Docker Hub repositories, including library/.*, a default config is provided:

index.docker.io:
  .*:
    security:
      tls:
        client:
          disabled: false
      # Docker Hub requires basic auth with empty username and password for all public repositories.
      basic:
        username: ""
        password: ""

Registry configs can be passed in through the --registry-config flag, either as a file path or as a raw JSON blob (converted to JSON using yq):

--registry-config='{"gcr.io": {"uber-container-tools/*": {"push_chunk": -1, "security": {"basic": {"username": "_json_key", "password": "<escaped key here>"}}}}}'
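
For example, a YAML config file can be converted on the fly. This sketch assumes the Go implementation of yq (v4); the Python yq wrapper's syntax differs:

```shell
# Convert registry.yaml to a JSON blob and pass it inline (Go yq v4 syntax).
makisu build -t ${TAG} \
    --registry-config="$(yq -o=json '.' registry.yaml)" \
    ${CONTEXT}
```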

For more details on configuring Makisu to work with your registry client, see the documentation.

Comparison With Similar Tools

Bazel

We were inspired by the Bazel project in early 2017. It was one of the first tools that could build Docker-compatible images without using Docker or any form of containerizer. It works very well with a subset of Docker build scenarios given a Bazel build file. However, it does not support RUN, making it hard to replace most docker build workflows.

Kaniko

Kaniko provides good compatibility with Docker and executes build commands in userspace without the need for a Docker daemon, although it must still run inside a container. Kaniko offers smooth integration with Kubernetes, making it a competent tool for Kubernetes users. On the other hand, Makisu has some performance tweaks for large images with multi-phase builds, avoiding unnecessary disk scans, and offers more control over cache generation and layer size through #!COMMIT, making it optimal for complex workflows.

BuildKit / img

BuildKit and img depend on runc/containerd and support parallel stage execution, whereas Makisu and most other tools execute Dockerfiles in order. However, BuildKit and img still need seccomp and AppArmor to be disabled to launch nested containers, which is not ideal and may not be doable in some production environments.

Contributing

Please check out our guide.

Contact

To contact us, please join our Slack channel.

makisu's People

Contributors

ahmagdy, akihirosuda, apourchet, bpaquet, captn3m0, codygibb, disenchant, evelynl94, josecv, kukaryambik, lasse-damgaard, mazuninky, nnordrum, orisano, poweroftrue, preslavmihaylov, rowern, sema, serg1i, talaniz, timdaman, ubermunck, wutongjie23hao, yiranwang52


makisu's Issues

master fails to build images with `failed to load manifest storage dir:....`

Current master (e47f057) of makisu fails to run makisu build with:

{"level":"fatal","ts":1545782991.790016,"msg":"Failed to load manifest storage dir /makisu-storage/manifest/cache: file does not exist"}

Probably #134 broke it.

Reproduction:

➜  makisu-use docker run -it --rm  \
        -v /var/run/docker.sock:/docker.sock \
        -v $HOME/.config/gcloud:/root/.config/gcloud -e DOCKER_HOST=unix:///docker.sock \
        -v $(pwd):/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        makisu-gcr:latest build --modifyfs=true --load --registry-config=/makisu-context/registry.yaml --redis-cache-addr=10.0.0.11:6379 -t=gcr.io/my-project/makisu -f /makisu-context/test.Dockerfile ./
{"level":"info","ts":1545782991.7671587,"msg":"Starting Makisu build (version=v0.1.6-11-ge47f057)"}
{"level":"fatal","ts":1545782991.790016,"msg":"Failed to load manifest storage dir /makisu-storage/manifest/cache: file does not exist"}

"makisu version" broken on "go get"

If you run go get github.com/uber/makisu/bin/makisu like the instructions say, you get build-hash-2018-3-28. Works as expected in the container though.

Pushing to Docker Hub with docker config creds

I tried various combinations in the --registry-config to pass in creds for pushing to Docker hub without any success. All attempts failed.

Could you provide instructions on how I could use Docker hub with makisu?

I'm sure this isn't really a bug but I wasn't able to join the Slack workspace linked from the README as it only allows users with uber.com emails.

cannot pull from gcr.io/makisu-project/makisu:0.1.0

➜  cd-tools git:(master) ✗ docker pull gcr.io/makisu-project/makisu:0.1.0
Error response from daemon: pull access denied for gcr.io/makisu-project/makisu, repository does not exist or may require 'docker login'

you probably have to make this a public registry (should be possible with GCR) or publish it on dockerhub instead.

Can't use cached layers without pushing

When omitting --push=<registry>, cached layers are not used (apparently just the noop cache is being used). I would like to be able to build an image using cached layers but not push the image immediately.

kaniko info in readme is incorrect

Hey, cool to see more tools emerging in this space. I'm the original author of the Bazel stuff, and Uber TL'd kaniko.

Kaniko is tightly integrated with Kubernetes, and manages secrets with Google Cloud Credential...

This isn't really accurate. kaniko heavily uses https://github.com/google/go-containerregistry, which is a generic container registry library (also used in skaffold, buildpacks v3, Knative, ko), and for auth uses a "keychain" that mimics the Docker keychain.

Because setting up auth (esp. in a Container) can be a royal pain, the tool provides options for making things smoother via GCP credentials, but also Azure and AWS credentials.

In fact, the only "tight" integration with Kubernetes I'm aware of is that it falls back on Kubelet-style authentication using Node identity (instead of anonymous) if the standard Docker keychain resolution fails to find a credential (think: universal credential helper).

However, Makisu's more flexible caching features make it optimal for higher build volume across many repos and developers.

I'm curious if you have followed the "kanikache" work, where kaniko leverages the final Docker registry as a distributed cache? I'd be surprised if a redis-based cache out-performed this because while redis is fast, the registry yields no-copy caching. kaniko won't even download the image if the only remaining directives are metadata manipulation. This is mostly done, but there are a few places left that the team's working on optimizing.


For lack of a better forum to ask, I figured I'd reach out and see if you would be interested in coming to the Knative Build working group to talk about makisu? While the general focus is on Knative Build and Pipelines, this group is deeply interested in safe on-cluster Build, and typically has representatives from related groups (buildah, kaniko, buildpacks). We've had presentations on all three in the past, so I'd love to hear more about makisu.

cc @imjasonh who typically runs these meetings. Also feel free to reach out over email (my github handle at google.com), or find me on Knative slack (same handle) if you want to chat or exchange tips/tricks for building container images.

Again, very cool to see this :)

Unable to build multi-stage Dockerfile with `FROM <alias>`

Attempting to build a multi-stage Dockerfile (followed this example) https://github.com/uber/makisu/blob/master/lib/parser/dockerfile/test-files/example-multistage-dockerfile

Here is my exact Dockerfile

FROM node:8.11.1-slim AS build

COPY package*.json src/

ENV DIR=/src
WORKDIR /$DIR

RUN npm install --verbose

COPY . /$DIR

RUN npm run transpile

FROM build AS test

ARG TIMESTAMP=
ARG PROVIDER_VERSION=
ARG PROVIDER_VERIFY=false

However upon build I get

{"level":"info","ts":1547055046.8055413,"msg":"* Step 1/5 (commit,modifyfs) : FROM build AS test  (d6b022e3)"}
{"level":"info","ts":1547055046.8056219,"msg":"* Started pulling image index.docker.io/library/build:latest"}
2019/01/09 17:30:48 Command failure: failed to execute build plan: execute stage: build stage test: build node: do execute: execute step: get manifest: pull image index.docker.io/library/build:latest: pull manifest: http send error: GET https://index.docker.io/v2/library/build/manifests/latest 401: {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"library/build","Action":"pull"}]}]}
~/code/misc/node-helloworld

It wants to pull the previous stage as a public image. The example provided doesn't use previous stage as the start of the next stage. Is this scenario supported by makisu?

Improve ENV/ARG error message

The following Dockerfile caused an issue:

....
ARG VAR=x
ENV VAR $VAR

The error was the following:

failed to create build plan: failed to get dockerfile: failed to parse dockerfile: failed to create new directive (line 5): failed to parse ENV directive with args 'VAR': Missing space in single value ENV

failure to push cache

Makisu fails to push the cache layers with this error

{"level":"info","ts":1544131874.1233864,"msg":"Stored cacheID mapping to KVStore: 63ee1ad1 => MAKISU_CACHE_EMPTY"}
{"level":"error","ts":1544131874.2301095,"msg":"Failed to push cache: push layer sha256:11c7c4087b02b2f738627b3a4edd566b0189c8cea77a8b2dc3aac1fb1b240194: check layer exists: gcr.io/makisu/cache (sha256:11c7c4087b02b2f738627b3a4edd566b0189c8cea77a8b2dc3aac1fb1b240194): check manifest exists: HEAD https://gcr.io/v2/makisu/cache/blobs/sha256:11c7c4087b02b2f738627b3a4edd566b0189c8cea77a8b2dc3aac1fb1b240194 401; push layer sha256:1dbcab28ce46b65c0174e5e82658492107396fead31e9144c343e6bc96e471c7: check layer exists: gcr.io/makisu/cache (sha256:1dbcab28ce46b65c0174e5e82658492107396fead31e9144c343e6bc96e471c7): check manifest exists: HEAD https://gcr.io/v2/makisu/cache/blobs/sha256:1dbcab28ce46b65c0174e5e82658492107396fead31e9144c343e6bc96e471c7 401"}

The command I executed was

docker run -i --rm --network host \
        -v /var/run/docker.sock:/docker.sock \
        -e DOCKER_HOST=unix:///docker.sock \
        -v $(pwd):/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        gcr.io/makisu-project/makisu:v0.1.3 build --modifyfs=true --load -registry-config=/makisu-context/registry.yaml -t="gcr.io/my-project/christian-playground/makisu-test" --push=gcr.io /makisu-context

My registry config looks like this:

"gcr.io":
  "my-project/*":
    push_chunk: -1
    security:
      basic:
        username: oauth2accesstoken
        password: somethingsecret

This error also occurs when attempting to use a redis based remote cache key value store.
Btw makisu successfully pushes the image layers, but not the cache layers.
Also it creates a bunch of cache entries with the value MAKISU_CACHE_EMPTY (both locally and in redis).

From the error logs it appears that makisu attempts to push the cache images to the wrong repository (gcr.io/makisu/cache instead of gcr.io/my-project/makisu/cache).

Makisu deleted all my files! (in my container)

I ran makisu in a container using the alpine container, as documented - which seemed to produce my image correctly.

However when I tried to ls it afterwards, I realised my whole filesystem had been deleted.

I realise this is probably related to my use of the --modifyfs feature, but I hadn't realised how intrusive the changes were going to be! Could this be noted in the documentation?

In particular, I'm experimenting with running this on CircleCI - many jobs will want to run some steps after producing the image, so it would be nice if there were a way to enable this.

Please let me know if I'm barking up the wrong tree here!

Docker pull fails on public images from non docker-library

I'm using your makisu_build snippet.

Dockerfile:

FROM wodby/php

This fails with every public image from Docker Hub that is not from the Docker library; e.g. php works but wodby/php does not:

$ makisu_build -t myimage .                                                                                                                              
Starting makisu (version=40cdf56)
{"level":"info","ts":1542886246.9101796,"msg":"Using build context: /makisu-context"}
{"level":"info","ts":1542886246.9177778,"msg":"No registry or cache option provided, not using distributed cache"}
{"level":"info","ts":1542886246.9179394,"msg":"* Stage 1/1 : (alias=,latestfetched=-1)"}
{"level":"info","ts":1542886246.917976,"msg":"* Step 1/1 (commit) : FROM wodby/php  (9d594030)"}
{"level":"info","ts":1542886246.918113,"msg":"* Started pulling image index.docker.io/wodby/php:latest"}
Command failure: failed to execute build plan: build stage: build node: do execute: execute step: get manifest: pull image index.docker.io/wodby/php:latest: pull manifest: http send error: network error: Get https://index.docker.io/v2/wodby/php/manifests/latest: x509: certificate signed by unknown authority

basic authentication with json registry config doesn't work

In our company we use Artifactory with anonymous downloads allowed. To be able to fetch a base image, I use the following makisu.yaml and it works just fine:

artifactory.mycompany.com:
  docker-registry:
    security:
      tls:
        client:
          disabled: true
      basic:
        username: ""
        password: ""

But when converted to JSON, it just won't work:

{ "artifactory.mycompany.com": { "docker-registry": { "security": { "tls": { "client": { "disabled": true } }, "basic": { "username": "", "password": "" } } } } }

I tried using {}s for empty values, tried removing the basic block, etc. It never works.

It seems that the JSON mapping is broken.

Let me know, if any other details are needed to reproduce this.

May be related to #157

Fix flaky unit tests

--- FAIL: TestCreateLayerByScan (0.00s)
    --- FAIL: TestCreateLayerByScan/Simple (0.00s)
        require.go:1159: 
            	Error Trace:	testutils_test.go:55
            	            				mem_fs_test.go:602
            	Error:      	Should be true
            	Test:       	TestCreateLayerByScan/Simple
            	Messages:   	/test1/test2 headers not similar:
            	            	expected=&{Typeflag:53 Name:test1/test2/ Linkname: Size:0 Mode:493 Uid:0 Gid:0 Uname: Gname: ModTime:2019-01-09 04:56:00.997935188 +0000 UTC AccessTime:2019-01-09 04:56:00.997935188 +0000 UTC ChangeTime:2019-01-09 04:56:00.997935188 +0000 UTC Devmajor:0 Devminor:0 Xattrs:map[] PAXRecords:map[] Format:<unknown>}
            	            	actual=  &{Typeflag:53 Name:test1/test2/ Linkname: Size:0 Mode:493 Uid:0 Gid:0 Uname: Gname: ModTime:2019-01-09 04:56:01.001935272 +0000 UTC AccessTime:2019-01-09 04:56:00.997935188 +0000 UTC ChangeTime:2019-01-09 04:56:01.001935272 +0000 UTC Devmajor:0 Devminor:0 Xattrs:map[] PAXRecords:map[] Format:<unknown>}

Global command-line options are ignored

Currently, the values of the global command-line options --log-fmt, --log-level, --log-output, and --cpu-profile are ignored. They are processed before the command line gets parsed by cobra, so the default values are used regardless of what is provided on the command line.

The logger initialization and CPU profiling parts of cmd.Execute() should likely be moved to a separate function that is called from each subcommand as appropriate.

RUN directive parsing buggy

I'm noticing a bunch of issues with makisu not being able to parse certain RUN directives that work just fine with docker build (18.06.1-ce).

If that happens I get an error message similar to this:

2018/11/19 00:25:14 Starting makisu (version=v0.1.0-13-gac3b5bb)
{"level":"info","ts":1542587114.7364843,"msg":"Using build context: /makisu-context"}
2018/11/19 00:25:14 Command failure: failed to get dockerfile: failed to parse dockerfile: failed to create new directive (line 16): failed to parse ECHO directive with args 'foo': Unsupported directive type

Here are examples which work with docker build, but not with makisu:

  1. Example:
RUN \ 
  echo foo

NOTICE: There is an empty space after the \ which makes makisu choke.

  2. Example:
RUN \
  # some comment that makisu doesn't like
  echo foo

This example doesn't contain an empty space after \ but it contains a # comment that makes makisu choke as well.

push tag and latest

It would be great if you could push a unique image:build_id and image:latest with one command.

Support for MAINTAINER directive

Any Dockerfile with MAINTAINER currently errors out with something like this:

2018/11/14 22:24:55 Command failure: failed to create build plan: failed to convert parsed stage: convert parsed stage to build stage: convert directive to build step: convert directive: unsupported directive type: &dockerfile.MaintainerDirective{baseDirective:(*dockerfile.baseDirective)(0xc0003b8630), author:"Uber Technologies Inc"}

Fix flaky test TestBuildPlanContextDirs

https://travis-ci.org/uber/makisu/builds/457311397

--- FAIL: TestBuildPlanContextDirs (0.00s)
	Error Trace:	build_plan_test.go:95
	Error:      	Not equal: 
	            	expected: map[string][]string{"stage1":[]string{"/hello", "/hello2"}}
	            	actual  : map[string][]string{"stage1":[]string{"/hello2", "/hello"}}
	            	
	            	Diff:
	            	--- Expected
	            	+++ Actual
	            	@@ -2,4 +2,4 @@
	            	  (string) (len=6) "stage1": ([]string) (len=2) {
	            	-  (string) (len=6) "/hello",
	            	-  (string) (len=7) "/hello2"
	            	+  (string) (len=7) "/hello2",
	            	+  (string) (len=6) "/hello"
	            	  }
	Test:       	TestBuildPlanContextDirs

Final layer (image config) always gets different SHA1, despite all layers being used from cache

It seems that the final layer always ends up having a different SHA1 even if nothing has changed and all layers were previously cached. That means makisu always pushes the last layer, despite no changes.

Example:

FROM alpine

RUN sleep 30 && echo first > first && echo "first layer" 
RUN sleep 30 && echo second > second && echo "second layer"
➜  makisu-use docker run -i --rm --network host \
        -v /var/run/docker.sock:/docker.sock \
        -v $HOME/.config/gcloud:/root/.config/gcloud -e DOCKER_HOST=unix:///docker.sock \
        -v $(pwd):/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        makisu-gcr:latest build --modifyfs=true --load -registry-config=/makisu-context/registry.yaml -redis-cache-addr=10.0.0.11:6379 -t=gcr.io/my-project/christian-playground/makisu-test1 --push=gcr.io -f /makisu-context/test.Dockerfile ./makisu-context

here's the log output:

➜  makisu-use docker run -i --rm --network host \
        -v /var/run/docker.sock:/docker.sock \
        -v $HOME/.config/gcloud:/root/.config/gcloud -e DOCKER_HOST=unix:///docker.sock \
        -v $(pwd):/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        makisu-gcr:latest build --modifyfs=true --load -registry-config=/makisu-context/registry.yaml -redis-cache-addr=10.0.0.11:6379 -t=gcr.io/my-project/christian-playground/makisu-test1 --push=gcr.io -f /makisu-context/test.Dockerfile ./makisu-context
{"level":"info","ts":1545243470.2675872,"msg":"Starting Makisu build (version=v0.1.6-8-g2312b39)"}
{"level":"info","ts":1545243470.270377,"msg":"Using build context: /makisu-context"}
{"level":"info","ts":1545243470.2755976,"msg":"Using redis at 10.0.0.11:6379 for cacheID storage"}
{"level":"info","ts":1545243470.783478,"msg":"Found mapping in cacheID KVStore: d54b73c => c7acfc2853d1dc7ab082765d57373e8c4585b893d3f590a32c3ca850871daa1b,3dbe2ffb5491fe01193af9e6872d42507866b8e722303d7a75eb9ed5234c05af"}
{"level":"info","ts":1545243470.7867148,"msg":"* Skipped pulling existing layer my-project/christian-playground/makisu-test1:sha256:3dbe2ffb5491fe01193af9e6872d42507866b8e722303d7a75eb9ed5234c05af"}
{"level":"info","ts":1545243471.1164923,"msg":"Found mapping in cacheID KVStore: 62e3baef => f649ccb7cf136191972cf1571af0fd7ab9d5c525d3d8fecc4086a3dfbdc8e763,15407b25fb3010ceaea41ee2e7b54f85545d07c829284a52b30626fbdced3675"}
{"level":"info","ts":1545243471.1198263,"msg":"* Skipped pulling existing layer my-project/christian-playground/makisu-test1:sha256:15407b25fb3010ceaea41ee2e7b54f85545d07c829284a52b30626fbdced3675"}
{"level":"info","ts":1545243471.1198847,"msg":"* Stage 1/1 : (alias=0,latestfetched=-1)"}
{"level":"info","ts":1545243471.12015,"msg":"* Step 1/3 (commit,modifyfs) : FROM alpine  (76c4d71f)"}
{"level":"info","ts":1545243471.120436,"msg":"* Started pulling image index.docker.io/library/alpine:latest"}
{"level":"info","ts":1545243473.6239452,"msg":"* Skipped pulling existing layer library/alpine:sha256:196d12cf6ab19273823e700516e98eb1910b03b17840f9d5509f03858484d321"}
{"level":"info","ts":1545243473.6239972,"msg":"* Skipped pulling existing layer library/alpine:sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"}
{"level":"info","ts":1545243473.6265252,"msg":"* Finished pulling image index.docker.io/library/alpine:latest in 2.5060273s"}
{"level":"info","ts":1545243473.6310925,"msg":"* Processing FROM layer 4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"}
{"level":"info","ts":1545243473.7080991,"msg":"* Untarred 469 files to / in 77ms"}
{"level":"info","ts":1545243473.7085698,"msg":"* Merged 469 headers from tar to memfs"}
{"level":"info","ts":1545243473.7086647,"msg":"* Execute FROM alpine  (76c4d71f) took 2.5884505s"}
{"level":"info","ts":1545243473.7118616,"msg":"* Committed gzipped layer sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde (2206931 bytes)"}
{"level":"info","ts":1545243473.7119517,"msg":"* Pushing with cache ID 76c4d71f"}
{"level":"info","ts":1545243473.7134151,"msg":"* Step 2/3 (commit,modifyfs) : RUN sleep 30 && echo first > first && echo \"first layer\"  (d54b73c)"}
{"level":"info","ts":1545243473.7161434,"msg":"* Applying cache layer 3dbe2ffb5491fe01193af9e6872d42507866b8e722303d7a75eb9ed5234c05af (unpack=true)"}
{"level":"info","ts":1545243473.7165127,"msg":"* Untarred 3 files to / in 0s"}
{"level":"info","ts":1545243473.7165604,"msg":"* Merged 3 headers from tar to memfs"}
{"level":"info","ts":1545243473.7166305,"msg":"* Skipping execution; cache was applied *"}
{"level":"info","ts":1545243473.7169766,"msg":"* Step 3/3 (commit,modifyfs) : RUN sleep 30 && echo second > second && echo \"second layer\"  (62e3baef)"}
{"level":"info","ts":1545243473.7190568,"msg":"* Applying cache layer 15407b25fb3010ceaea41ee2e7b54f85545d07c829284a52b30626fbdced3675 (unpack=true)"}
{"level":"info","ts":1545243473.7195861,"msg":"* Untarred 1 files to / in 0s"}
{"level":"info","ts":1545243473.7196596,"msg":"* Merged 1 headers from tar to memfs"}
{"level":"info","ts":1545243473.7196949,"msg":"* Skipping execution; cache was applied *"}
{"level":"info","ts":1545243473.7199235,"msg":"* Moving directories [] to /makisu-storage/sandbox/sandbox400623144/stages/MA=="}
{"level":"info","ts":1545243476.1945984,"msg":"* Skipped pushing existing layer my-project/christian-playground/makisu-test1:sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"}
{"level":"info","ts":1545243476.5169935,"msg":"Stored cacheID mapping to KVStore: 76c4d71f => df64d3292fd6194b7865d7326af5255db6d81e9df29f48adde61a918fbd8c332,4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"}
{"level":"info","ts":1545243476.5350976,"msg":"Computed total image size 2207244","total_image_size":2207244}
{"level":"info","ts":1545243476.5354578,"msg":"Successfully built image my-project/christian-playground/makisu-test1:latest"}
{"level":"info","ts":1545243476.536066,"msg":"* Started pushing image gcr.io/my-project/christian-playground/makisu-test1:latest"}
{"level":"info","ts":1545243478.9334683,"msg":"* Image gcr.io/my-project/christian-playground/makisu-test1:latest already exists, overwriting"}
{"level":"info","ts":1545243481.136437,"msg":"* Skipped pushing existing layer my-project/christian-playground/makisu-test1:sha256:3dbe2ffb5491fe01193af9e6872d42507866b8e722303d7a75eb9ed5234c05af"}
{"level":"info","ts":1545243481.1929977,"msg":"* Skipped pushing existing layer my-project/christian-playground/makisu-test1:sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"}
{"level":"info","ts":1545243481.4203205,"msg":"* Skipped pushing existing layer my-project/christian-playground/makisu-test1:sha256:15407b25fb3010ceaea41ee2e7b54f85545d07c829284a52b30626fbdced3675"}
{"level":"info","ts":1545243485.0824435,"msg":"* Started pushing layer sha256:5d36b483aa47765548782ab14a482ede21da32dd8472b52c2bddb95310ce4eea"}
{"level":"info","ts":1545243491.0503204,"msg":"* Finished pushing layer sha256:5d36b483aa47765548782ab14a482ede21da32dd8472b52c2bddb95310ce4eea"}
{"level":"info","ts":1545243494.5341387,"msg":"* Finished pushing image gcr.io/my-project/christian-playground/makisu-test1:latest in 17.9975221s"}
{"level":"info","ts":1545243494.5342157,"msg":"Successfully pushed gcr.io/my-project/christian-playground/makisu-test1:latest to gcr.io"}
{"level":"info","ts":1545243494.5342376,"msg":"Loading image my-project/christian-playground/makisu-test1:latest"}
{"level":"info","ts":1545243494.5438924,"msg":"Image tarrer dir: /makisu-storage/sandbox/sandbox400623144/my-project/christian-playground/makisu-test1/latest"}
{"level":"info","ts":1545243494.7470047,"msg":"Successfully loaded image gcr.io/my-project/christian-playground/makisu-test1:latest"}
{"level":"info","ts":1545243494.7471056,"msg":"Finished building my-project/christian-playground/makisu-test1:latest"}

Support HTTP Cache

It would be nice to support not only Redis but also a simple PUT/GET HTTP cache, similar to what Buck, Bazel, Pants, and other build systems use.

Avoid pushing same layer twice

When makisu reaches the end of the build, it will try to push all layers, even though some layers may still be in the middle of a cache upload, resulting in duplicated work.
