autopilotpattern/jenkins
Extension of the official Jenkins Docker image that supports Joyent's Triton for elastic slave provisioning
Jenkins requires SSL to be operated securely, so you should only run Jenkins behind a reverse proxy that supports SSL (e.g., Nginx).
Once we have a blueprint for Let's Encrypt support in Nginx (ref autopilotpattern/nginx#25), we should give this Jenkins blueprint an Nginx front-end.
I have been playing with this recently, and am curious how Triton decided, for a mem_limit of 4G, to offer me a g4-highcpu-4G instance instead of a g4-general-4G?
Is there a way to specify the metadata (aka the Triton package) through docker-compose.yml, or would I need to rely on the triton CLI to resize the container after the fact?
I was doing an experiment on having applications automatically scale threads based on available bursting capacity, which caused me to circle back to the proclimit script in this repo. The number of threads we're assuming here seems at odds with the number of vCPUs actually available to a given container. I took the typical values for a Joyent CN and ran them against this calculation (numbers in GB for clarity):
#!/bin/bash
TOTAL_MEM=256.0  # typical CN size in GB
CORES=48         # typical number of cores

calc_mem() {
    local zone_mem=$1
    local expected=$2
    # dc: 8 decimal places; vCPU share = zone_mem / TOTAL_MEM * CORES
    local got=$(echo "8k $zone_mem $TOTAL_MEM / $CORES * pq" | dc)
    echo "For $zone_mem GB zone, expected $expected vCPU, got $got"
}
# Docker container sizes on public cloud
calc_mem 0.128 0.0625
calc_mem 0.256 0.125
calc_mem 0.512 0.25
calc_mem 1 0.5
calc_mem 2 1
calc_mem 8 4
calc_mem 16 8
calc_mem 32 16
calc_mem 64 32
./proclimit.sh
For 0.128 GB zone, expected 0.0625 vCPU, got .02400000
For 0.256 GB zone, expected 0.125 vCPU, got .04800000
For 0.512 GB zone, expected 0.25 vCPU, got .09600000
For 1 GB zone, expected 0.5 vCPU, got .18750000
For 2 GB zone, expected 1 vCPU, got .37500000
For 8 GB zone, expected 4 vCPU, got 1.50000000
For 16 GB zone, expected 8 vCPU, got 3.00000000
For 32 GB zone, expected 16 vCPU, got 6.00000000
For 64 GB zone, expected 32 vCPU, got 12.00000000
It looks to me like this calculation is overly conservative, and the gap is worse at larger instance sizes because the floor of 1 only compensates at the small end. Any thoughts on this @dekobon or @misterbisson?
I am using Jenkins on Triton via this repo, but I am really new to sdc-docker and my project isn't building within the context of a Jenkins job. More details below:
I am using my private registry with a docker-compose.yml like so:
version: "3.0"
services:
  lib:
    build: "./"
    image: "gitlab.fathm.io:4657/lib/schema-survey-data"
    depends_on:
      - rethink
  rethink:
    image: "gitlab.fathm.io:4657/dockerfiles/rethinkdb:2.3.5"
When the Jenkins job runs the build, it fails partway through:
Step 1 : FROM gitlab.fathm.io:4657/dockerfiles/node:6.9.3
---> fcad3edd2866
Step 2 : MAINTAINER [email protected]
---> Using cache
---> 8bb9f7f7fd65
Step 3 : COPY .npmrc /src/
---> Using cache
---> 4f15faff6e77
Step 4 : COPY package.json /src/
---> Using cache
---> 55b9303693eb
Step 5 : RUN cd /src && echo "# REPLACE ME" > README.md && npm install && npm cache clean
---> Using cache
---> ac9b3ba3ffe0
Step 6 : COPY . /src
Service 'lib' failed to build: Error: image ac9b3ba3ffe0ca43e197dd3439125dce5b70fb2633c14b2fe420a4b8154687bb:latest not found (62b5c066-f633-4a6f-9547-ae00beb44cff)
The message above doesn't make much sense to me at all. When I log in and use sdc-docker from the CLI, the same build succeeds:
... stuff omitted ...
---> ac9b3ba3ffe0
Step 6 : COPY . /src
---> cf1dbcaa4619
Step 7 : CMD /usr/bin/npm test
---> b94d29af8629
Importing image 8bb9f7f7fd65 into IMGAPI
Importing image 4f15faff6e77 into IMGAPI
Importing image 55b9303693eb into IMGAPI
Importing image ac9b3ba3ffe0 into IMGAPI
Importing image cf1dbcaa4619 into IMGAPI
Importing image b94d29af8629 into IMGAPI
Successfully built b94d29af8629
I am curious why the build behaves differently under Jenkins than from the CLI.
Currently we inject secrets into the Jenkins container via environment variables in the setup script:
# munge the private key so that we can pass it into an env var sanely
# and then unmunge it in our startup script
echo PRIVATE_KEY=$(cat ${DOCKER_CERT_PATH}/key.pem | tr '\n' '#') >> _env
echo 'Edit the _env file to include a JENKINS_PASSWD and GITHUB_* config'
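On the consumption side, the startup script then has to reverse the munging. A minimal sketch, assuming a helper name and key path of my own choosing (not necessarily what this repo's startup script does):

```shell
# Sketch: reverse the munging from setup -- turn '#' back into newlines.
# The function name and target path below are illustrative assumptions.
unmunge_key() {
    echo "$1" | tr '#' '\n'
}

# e.g. in the startup script:
# unmunge_key "${PRIVATE_KEY}" > /var/jenkins_home/.ssh/id_rsa
# chmod 600 /var/jenkins_home/.ssh/id_rsa
```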
This blueprint can be a first use case for implementing secrets management via Vault. Although supporting secure injection for launching production containers requires the help of a scheduler, we can get away without that in the case of a one-off container like a Jenkins master. This will let us build an example workflow for secrets management that we can then enhance when Mariposa is completed.