docker-boshrelease's People

Contributors

alex-slynko, altonf4, bkrannich, cagiti, christianang, christopherclark, dependabot[bot], dmlemos, drnic, frodenas, iainsproat, karampok, kinjelom, lnguyen, mordebites, nehagjain15, neil-hickey, pgoodwin, poblin-orange, romnikit, rumshenoy, semanticallynull, srm09, starkandwayne-bot, svennela, swalner-pivotal, tenczar, tvs, vlerenc, yue-wen

docker-boshrelease's Issues

Some parts of releases stored in root directory

All of the root directories of my deployments are typically about the same size, but I started having problems loading very large (~5GB) docker images through this release: it reported that the hard drive was full (even when I gave the VM a huge ephemeral disk). I then noticed that the root directory is "more full" on any deployment that uses docker-boshrelease, by about the same amount as the docker images themselves.

I'm not sure where exactly, as I'm not a bosh expert, but I think the docker images aren't being loaded/stored on the ephemeral disk as they should be. This means large docker images can't be loaded at all, since stemcells typically have a 3GB root partition.

I'm probably just going to make a custom stemcell for now with a large root partition, but that's an ugly workaround. Any ideas?
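A quick way to confirm where the image data is actually landing (a debugging sketch; it assumes the docker CLI is on the VM's PATH and that the release follows the usual BOSH convention of ephemeral data under /var/vcap/data):

    # Compare fill level of the root partition vs. the ephemeral disk
    df -h / /var/vcap/data

    # Ask the daemon where it keeps its graph/layer data; a path on the
    # root partition rather than under /var/vcap/data would explain the
    # symptoms described above
    docker info | grep -i 'root dir'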

upload release to microbosh getting timeouts

bosh upload release releases/docker-9.yml
...
It appears I am having trouble downloading ruby packages from the blobstore. I have successfully uploaded Diego 0.825 and cf-release 197, so I am not sure why this docker release is having issues.

Copying packages

ruby (1) FOUND REMOTE
Downloading package ruby (1) from blobstore (id=4877a96d-bcaa-4227-8da3-17fceba6e7fa)...
Blobstore error: Failed to fetch object, underlying error: #<Timeout::Error: execution expired> /usr/local/rvm/rubies/ruby-1.9.3-p547/lib/ruby/1.9.1/net/http.rb:763:in `initialize'

docker container, --net configuration flag

Trying to use a redsocks-based docker image to solve my (recurrent) transparent internet access issue.

The docker image I use requires host networking (docker run --net=host). Is this configurable in the containers section of the manifest?
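For reference, this is what host networking looks like on the plain docker CLI (the image name is just an example); if the containers section exposes no equivalent knob, a manual run over bosh ssh at least verifies the image itself:

    # Share the host's network stack instead of the docker0 bridge;
    # the container's ports are then bound directly on the VM
    docker run -d --net=host example/redsocks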

AppArmor enabled on system but the docker-default profile could not be loaded

Using release v24. Docker 1.10.2 is unable to start because it cannot find the docker-default AppArmor profile:

time="2016-03-10T17:59:30.456330433Z" level=warning msg="/!\\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\"
time="2016-03-10T17:59:30.549290724Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
time="2016-03-10T17:59:30.860662498Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2016-03-10T17:59:30.987538458Z" level=fatal msg="Error starting daemon: AppArmor enabled on system but the docker-default profile could not be loaded."

Tested this on a 3302 ubuntu stemcell.
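A couple of hedged checks: on Docker 1.10 the daemon generates and loads the docker-default profile itself at startup, and it needs the AppArmor userspace tools to do so, which the stemcell may simply not ship:

    # Is the userspace loader present at all?
    which apparmor_parser || echo "apparmor_parser is missing on this stemcell"

    # Which profiles does the kernel currently know about?
    sudo cat /sys/kernel/security/apparmor/profiles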

support docker exec as errand

As an operator, I'd like to be able to reuse docker images for one-off execution of a docker command within a bosh errand.

Use cases include cheap packaging of operations tools as a bosh release (for cases where there are not yet bosh packages available for reuse, such as npm or python, needed for elasticdump).

The docker daemon on which the command will be executed may either be:

  1. started on the fly by the errand

  2. located within the docker template, and securely accessed through two-way TLS authentication, leveraging the existing docker.tls_cacert property.

Extending the docker bosh release to support a new errand job (e.g. container-exec) would be helpful; a sketch follows below.

This would complement the list of existing errands supported by docker-boshrelease, including fetch-containers-images.
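A minimal sketch of what such an errand's run script might look like; everything here (the DOCKER_HOST value, the image, the job layout) is hypothetical and would come from job properties in a real implementation:

    #!/bin/bash
    # Hypothetical container-exec errand: run a one-off command in a
    # container and propagate its exit code so bosh reports errand status.
    set -e

    DOCKER_HOST="tcp://10.0.0.10:2376"  # would come from a job property
    IMAGE="example/elasticdump"         # would come from a job property

    # --rm keeps the node clean; TLS client flags would mirror the
    # existing docker.tls_cacert property
    docker -H "$DOCKER_HOST" run --rm "$IMAGE" "$@"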

Number of files limit hits docker

Hi @frodenas, some months ago we ran into an issue that, back then, I only had time to patch on the VM: we ran out of the limit on the number of open files on the Docker node, which is apparently 1024 (soft limit) on the stemcell. You can check the general limits in /etc/security/limits.conf and also increase them there, but that doesn't affect the current shell or already-running processes. To avoid a reboot/restart, I patched the running process (see http://lzone.de/cheat-sheet/ulimit).

Now, while cleaning up (getting rid of the forks and hacks; as you may have noticed ;-)), I asked myself how to do it right. Setting the limits in the control script is not working (I believe):

    # Increase ulimits for Docker user
    # soft_nofile_ulimit="${DOCKER_USER}   soft   nofile   1000000"
    # [[ ! $(cat "/etc/security/limits.conf" | grep "${soft_nofile_ulimit}") ]] && echo "${soft_nofile_ulimit}" >> "/etc/security/limits.conf"
    # hard_nofile_ulimit="${DOCKER_USER}   hard   nofile   1000000"
    # [[ ! $(cat "/etc/security/limits.conf" | grep "${hard_nofile_ulimit}") ]] && echo "${hard_nofile_ulimit}" >> "/etc/security/limits.conf"

Proposing a stemcell change would be wrong (this is a specific Docker/containerization issue only), and introducing a C program to set it dynamically as in the article mentioned above seems overkill. So I wondered whether you have a good idea of how to resolve this issue once you see more and more containers on a Docker node? One option is sketched below.
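One approach that should work for monit-managed daemons, since /etc/security/limits.conf is only applied by PAM at login: raise the limit in the control script itself, in the shell that starts the daemon, so the daemon inherits it. A sketch (the exact placement inside docker_ctl is illustrative):

    case $1 in
      start)
        # Applies to this shell and everything it spawns, so the docker
        # daemon inherits the higher fd limit; no reboot and no
        # limits.conf edit needed
        ulimit -n 1000000
        # ... existing daemon start logic ...
        ;;
    esac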

flannel option broke execution in v28

$ cat monit_debugger.docker_ctl.log
MONIT-DEBUG date
Sun Aug  7 11:57:34 UTC 2016
MONIT-DEBUG env
MONIT_DATE=Sun, 07 Aug 2016 11:57:34 +0000
MONIT_LOG_DIR=/var/vcap/sys/log/monit
MONIT_HOST=localhost
PATH=/bin:/usr/bin:/sbin:/usr/sbin
PWD=/etc/sv/monit
MONIT_PROCESS_PID=0
MONIT_EVENT=Started
MONIT_PROCESS_MEMORY=0
SHLVL=1
MONIT_PROCESS_CPU_PERCENT=0
MONIT_SERVICE=docker
MONIT_PROCESS_CHILDREN=0
MONIT_DESCRIPTION=Started
_=/usr/bin/env
MONIT-DEBUG docker_ctl /var/vcap/jobs/docker/bin/docker_ctl start
/var/vcap/jobs/docker/bin/job_properties.sh: line 109: /run/flannel/subnet.env: No such file or directory
MONIT-DEBUG exit code 1
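Line 109 of job_properties.sh apparently consumes /run/flannel/subnet.env unconditionally, so deployments where flannel is disabled (or hasn't written the file yet) fail to start. A hedged fix sketch, assuming the line sources the file:

    # Only read the flannel subnet file when it actually exists
    if [ -f /run/flannel/subnet.env ]; then
      source /run/flannel/subnet.env
    fi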

Release v24: blobstore sha1 mismatch

Trying to use the newest v24:

➜  Development git clone git@github.com:cloudfoundry-community/docker-boshrelease.git
Cloning into 'docker-boshrelease'...
remote: Counting objects: 1629, done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 1629 (delta 4), reused 0 (delta 0), pack-reused 1608
Receiving objects: 100% (1629/1629), 474.99 KiB | 0 bytes/s, done.
Resolving deltas: 100% (835/835), done.
Checking connectivity... done.
➜  Development cd docker-boshrelease
➜  docker-boshrelease git:(master) bosh upload release releases/docker/docker-24.yml
Acting as user 'admin' on 'micro-google'
Downloading from blobstore (id=3f4eccc8-936f-468a-a8b9-1b42cb7af81d)...
Blobstore error: sha1 mismatch expected=0e0e0f6f2e735bf4003d007022a4e730b77ffba2 actual=4aef8d186d671cee9024a755d344cd4edd0d9ee2
➜  docker-boshrelease git:(master) bosh create release releases/docker/docker-24.yml --with-tarball
Recreating release from the manifest
Downloading from blobstore (id=dddaad4f-e888-4959-85aa-22466c194fd8)...
Blobstore error: sha1 mismatch expected=70336f7a629ea484182189692b6c6f9d3187cc6a actual=7b7223aa31946232eb7f5123795ec6d943de63fa

Only the blobs have been recreated. Packages & jobs also need to be recreated:
https://lists.cloudfoundry.org/archives/list/cf-dev@lists.cloudfoundry.org/thread/H365YFWROOHGOGBGLPBNWT7LLBGBIQEB/
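Until the final release artifacts are repaired, building a dev release from source is a workaround (the same commands used elsewhere in this tracker):

    git clone https://github.com/cloudfoundry-community/docker-boshrelease.git
    cd docker-boshrelease
    # Build from HEAD instead of pulling the broken final blobs
    bosh create release --force --with-tarball
    bosh upload release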

erb template not working with bosh director 1.3262.23.0

Hi,

The latest docker-boshrelease seems to contain erb templating that bosh director 1.3262.23.0 (PCF 1.8) can't handle. I tried to fork it and revert the commit below, but it is failing in bosh director 260.0.0 (PCF 1.9).

ed6fb07

Error 100: Unable to render instance groups for deployment. Errors are:
   - Unable to render jobs for instance group 'docker-bosh-tg_test_app4'. Errors are:
     - Error filling in template '(unknown)' (line (unknown): containers/monit:1: syntax error, unexpected ';'
    ...rs', []).each do |container| -; _erbout.concat "\ncheck proc...
    ...                               ^
    containers/monit:7: syntax error, unexpected ';'
    ;  end -; _erbout.concat "\n"
             ^)
     - Unable to render templates for job 'docker'. Errors are:
       - Error filling in template '(unknown)' (line (unknown): docker/bin/job_properties.sh.erb:22: syntax error, unexpected ';'

bosh package conflict - bosh helper

Hello,
I'm trying to colocate two bosh releases in my deployment.

The aim is to give transparent internet access to the docker VM, by adding a default gateway to our internal internet gateway.

I am hitting a bosh package conflict (see below).
Are the two packages identical? How could I work around this issue?
Thanks for your help,
Pierre

Started preparing deployment > Binding templates. Failed: Package name collision detected in job `docker': template `docker/docker' depends on package `docker/bosh-helpers', template `networking/routes' depends on package `networking/bosh-helpers'. BOSH cannot currently collocate two packages with identical names from separate releases. (00:00:00)

Error 80011: Package name collision detected in job `docker': template `docker/docker' depends on package `docker/bosh-helpers', template `networking/routes' depends on package `networking/bosh-helpers'. BOSH cannot currently collocate two packages with identical names from separate releases.

Reusing same disk as previous service instances/not recreating containers

@frodenas I think that I'm observing cf-containers/docker reusing the same disk/containers between service instances.

In the screenshot below it should be showing brand new logs in a brand new logstash docker container, but instead it is showing older logs from earlier in the day:

[screenshot: Kibana showing the older logs]

I have deleted the service instance (cf ds my-logstash-service) and recreated it, then attached the kibana app (the one in the screenshot) and still see old logs.

Ideas for debugging?
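A couple of hedged checks (the placeholders in angle brackets need filling in; the store path assumes container volumes live under /var/vcap/store/containers, as the bind-volumes issue further down in this tracker suggests):

    # An old Created timestamp would confirm the container survived the
    # delete/recreate of the service instance
    docker -H tcp://<docker-vm-ip>:2375 ps
    docker -H tcp://<docker-vm-ip>:2375 inspect --format '{{ .Created }}' <container-id>

    # Does the persistent disk still hold the old container's data?
    ls -l /var/vcap/store/containers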

Failed to create release

Failed to create a release locally.

The commands are

git clone https://github.com/cloudfoundry-community/docker-boshrelease.git
cd docker-boshrelease
./update
bosh create release --force --name docker

And the errors are:

  >   * cf-uaa-lib-3.2.5.gem
  >   * concurrent-ruby-1.0.1.gem
  >   * daemons-1.2.3.gem
  >   * excon-0.48.0.gem
  >   * docker-api-1.26.2.gem
  >   * eventmachine-1.0.7.gem
  >   * hashie-3.4.3.gem
  >   * json_pure-1.8.3.gem
  >   * kgio-2.10.0.gem
  >   * thor-0.19.1.gem
  >   * railties-4.2.6.gem
  >   * lograge-0.3.6.gem
  >   * thin-1.6.4.gem
  >   * nats-0.5.1.gem
  >   * omniauth-1.3.1.gem
  > /tmp/d20161201-27761-d4nfvr/d20161201-27761-ez3qbb/cf-containers-broker/vendor/cache/omniauth-uaa-oauth2-f892fd3415f9/lib/omniauth/uaa_oauth2/version.rb:16: warning: already initialized constant OmniAuth::Cloudfoundry::VERSION
  > /home/ubuntu/.rvm/gems/ruby-2.2.1@test/bundler/gems/omniauth-uaa-oauth2-f892fd3415f9/lib/omniauth/uaa_oauth2/version.rb:16: warning: previous definition of VERSION was here
  >   * sprockets-3.5.2.gem
  >   * sprockets-rails-3.0.4.gem
  >   * rails-4.2.6.gem
  >   * rails-api-0.4.0.gem
  >   * raindrops-0.16.0.gem
  >   * sass-3.4.21.gem
  >   * tilt-2.0.2.gem
  >   * sass-rails-5.0.4.gem
  >   * settingslogic-2.0.9.gem
  >   * unicorn-5.0.1.gem
  > + RAILS_ENV=assets
  > + bundle exec rake assets:precompile
  > /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/spec_set.rb:92:in `block in materialize': Could not find addressable-2.4.0 in any of the sources (Bundler::GemNotFound)
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/spec_set.rb:85:in `map!'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/spec_set.rb:85:in `materialize'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/definition.rb:132:in `specs'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/definition.rb:177:in `specs_for'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/definition.rb:166:in `requested_specs'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/environment.rb:18:in `requested_specs'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/runtime.rb:13:in `setup'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler.rb:122:in `setup'
  > 	from /home/ubuntu/.rvm/gems/ruby-2.2.1@global/gems/bundler-1.8.4/lib/bundler/setup.rb:18:in `<top (required)>'
  > 	from /home/ubuntu/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  > 	from /home/ubuntu/.rvm/rubies/ruby-2.2.1/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
'cf-containers-broker' pre-packaging failed

bump to docker 1.12

Docker 1.12 was released recently and made some great progress.

Any chance of getting this release bumped to docker 1.12 anytime soon?

Use SemVer for future of this release?

Currently every version bump has been major.

I propose moving to SemVer.
Major - Breaking changes
Minor - New feature
Patch - Bugfix etc.

Any objections?

Fetch containers errand not working

@frodenas Not sure whether this is an error or I missed how to do it right, but when I set broker.fetch_images to false and try to run the fetch-containers errand, it fails with:

[stdout]
I, [2015-09-23T07:09:10.188753 #4125] INFO -- : Looking for container images at the Services Catalog
I, [2015-09-23T07:09:10.283757 #4125] INFO -- : Fetching Docker image frodenas/mongodb:2.6'... E, [2015-09-23T07:09:10.285708 #4125] ERROR -- : +-> Cannot fetch Docker imagefrodenas/mongodb:2.6': #<Docker::Error::ArgumentError: Must have id, got: {"id"=>nil, :headers=>{}}>

[stderr]
/var/vcap/data/packages/cf-containers-broker/235ec5f70b97c6eebd41e39fdd497c61b17406f7.1-deb10451ba593ff97427b5eb32c492b415db6fa7/app/models/docker_manager.rb:98:in `rescue in fetch_image': Cannot fetch Docker image `frodenas/mongodb:2.6' (Exceptions::BackendError)
        from /var/vcap/data/packages/cf-containers-broker/235ec5f70b97c6eebd41e39fdd497c61b17406f7.1-deb10451ba593ff97427b5eb32c492b415db6fa7/app/models/docker_manager.rb:94:in `fetch_image'
        from /var/vcap/data/packages/cf-containers-broker/235ec5f70b97c6eebd41e39fdd497c61b17406f7.1-deb10451ba593ff97427b5eb32c492b415db6fa7/lib/container_images.rb:11:in `block in fetch'
        from /var/vcap/data/packages/cf-containers-broker/235ec5f70b97c6eebd41e39fdd497c61b17406f7.1-deb10451ba593ff97427b5eb32c492b415db6fa7/lib/container_images.rb:10:in `each'
        from /var/vcap/data/packages/cf-containers-broker/235ec5f70b97c6eebd41e39fdd497c61b17406f7.1-deb10451ba593ff97427b5eb32c492b415db6fa7/lib/container_images.rb:10:in `fetch'
        from /var/vcap/packages/cf-containers-broker/bin/fetch_container_images:7:in `<main>'

Downloading log bundle (7e1f2b7f-423a-4841-4cf9-1842723ea4de)...

The log bundle is empty. The property fetch_containers.docker_url is set to tcp://10.3.0.11:2375 (in my case), which generally works. I didn't see any more (fetch-containers-specific) properties. The broker.services were set (same deployment), and the errand wanted to install one of them, as you can see above.

Anything else I should do to make it work? I haven't yet looked into the source.
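To separate broker problems from daemon problems, the Docker Remote API behind fetch_containers.docker_url can be queried directly (both endpoints are standard Remote API routes):

    # Is the daemon reachable and healthy?
    curl -s http://10.3.0.11:2375/info

    # Which images does it already have?
    curl -s http://10.3.0.11:2375/images/json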

Minor comment about the name of the errand: it looks as if somewhere along the way the name got truncated to fetch-containers, while it is in fact a fetch-images or fetch-container-images errand.

Failed to configure memory quota

Also see moby/moby#14325. Will the release upgrade to docker 1.7.0 soon?

I need a suggestion on how the memory quota should be configured in the manifest. I tried the following solutions without success.

  1. Configured the container memory quota according to https://github.com/cf-platform-eng/docker-boshrelease/blob/master/CONTAINERS.md. The memory quota is not configured, according to docker inspect:
jobs:
- instances: 1
  name: docker
  networks:
  - name: default
  persistent_disk: 204800
  resource_pool: default
  templates:
  - name: docker
    release: docker
  - name: cf-containers-broker
    release: docker
  - name: monitor-server
    release: monitor-server
  properties:
    container:
      memory: '25m'

Excerpt from docker inspect output -

[{
"AppArmorProfile": "",
"Args": [],
"Config": {
...
"Memory": 0,
"MemorySwap": 0,
...
},
...
"HostConfig": {
...
"Memory": 0,
"MemorySwap": 0,

  2. Memory quota seems to be configured under Config, but not for HostConfig.
    Excerpt from manifest -
properties:
+  monitor_server:
+    ephemeral_disk:
+      alert_percent: 80
+    persistent_disk:
+      alert_percent: 80
+  cfcontainersbroker:
+    auth_password: password
+    auth_username: username
+    cc_api_uri:  
+    component_name: cf-containers-broker
+    cookie_secret:  
+    external_host:  
+    max_containers: 1200
+    services:
+    - bindable: true
+      dashboard_client:
+        id: p-logstash14-client
+        redirect_uri:  
+        secret: p-logstash14-secret
+      description: Logstash 1.4 service for application development and testing
+      id: b324e710-1f6b-11e5-867f-0800200c9a66
+      metadata:
+        displayName: Logstash 1.4
+        documentationUrl: http://docs.run.pivotal.io
+        longDescription: A Logstash 1.4 service for development and testing running
+          inside a Docker container
+        providerDisplayName: Pivotal Software
+        supportUrl: http://support.run.pivotal.io/home
+      name: logstash14-stress
+      plans:
+      - container:
+          backend: docker
+          image: lnguyen/logstash
+          tag: '1.4'
+          memory: '25m'
+        description: Free Trial

Excerpt from docker inspect output -

[{
    "AppArmorProfile": "",
    "Args": [],
    "Config": {
...
        "Memory": 26214400,
        "MemorySwap": 0,
...
    },
...
    "HostConfig": {
...
       "Memory": 0,
       "MemorySwap": 0,

Final release for v25 and v26 missing

I found the new support for docker 1.10 and 1.11, which is great, but I was not able to locate final releases like we have for v24. It would be very helpful if we could download the final releases directly.

Docker node shutdown takes very long with hundreds of containers

@frodenas some time ago we found an alternative solution for the unmount issue, if you remember (686b875).

Although it works, I believe we introduced another problem: we have effectively serialized the shutdown. I tried adding a & to the end of the line at https://github.com/cf-platform-eng/docker-boshrelease/blame/master/jobs/docker/templates/bin/docker_ctl#L97 to run the stops in parallel, and it works/shuts down fast (see the sketch below).

The question is, will the mount issue now pop up again?

And what was the background for it again? Can we meanwhile (with Docker 1.8.2) rely on Docker to perform a cleaner shutdown, compared to back then with Docker 1.6.0, I believe?
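A sketch of the parallel stop with an explicit barrier, so the script still waits for every container before the unmount step runs; this keeps the ordering guarantee the serial loop provided while removing the serialization:

    # Stop all running containers concurrently...
    for container in $(docker ps -q); do
      docker stop "$container" &
    done
    # ...but block until every stop has finished before unmounting
    wait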

Can't upload docker-4.yml

Trying to follow the README (using the GCE bosh):

$ bosh -v
BOSH 1.2479.0
$ bosh upload release releases/docker-4.yml

Copying packages
----------------
docker (2)                    FOUND REMOTE
Downloading 6d881152-1350-4c38-9948-af6ce68e56ea...
Blobstore error: Failed to fetch object, underlying error: #<AWS::Core::Http::NetHttpHandler::TruncatedBodyError: content-length does not match> /usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/net_http_handler.rb:83:in `ensure in block (2 levels) in handle'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/net_http_handler.rb:83:in `block (2 levels) in handle'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:1323:in `block (2 levels) in transport_request'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:2672:in `reading_body'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:1322:in `block in transport_request'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:1317:in `catch'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:1317:in `transport_request'
/usr/local/rvm/rubies/ruby-1.9.3-p545/lib/ruby/1.9.1/net/http.rb:1294:in `request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/connection_pool.rb:330:in `request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/net_http_handler.rb:61:in `block in handle'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/connection_pool.rb:129:in `session_for'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/http/net_http_handler.rb:55:in `handle'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:246:in `block in make_sync_request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:275:in `retry_server_errors'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:242:in `make_sync_request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:504:in `block (2 levels) in client_request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:384:in `log_client_request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:470:in `block in client_request'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:366:in `return_or_raise'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/core/client.rb:469:in `client_request'
(eval):3:in `get_object'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/s3/s3_object.rb:1363:in `get_object'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/aws-sdk-1.32.0/lib/aws/s3/s3_object.rb:1083:in `read'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/blobstore_client-1.2479.0/lib/blobstore_client/s3_blobstore_client.rb:94:in `get_file'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/blobstore_client-1.2479.0/lib/blobstore_client/base.rb:50:in `get'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/blobstore_client-1.2479.0/lib/blobstore_client/sha1_verifiable_blobstore_client.rb:19:in `get'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/blobstore_client-1.2479.0/lib/blobstore_client/retryable_blobstore_client.rb:19:in `block in get'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_common-1.2479.0/lib/common/retryable.rb:21:in `block in retryer'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_common-1.2479.0/lib/common/retryable.rb:19:in `loop'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_common-1.2479.0/lib/common/retryable.rb:19:in `retryer'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/blobstore_client-1.2479.0/lib/blobstore_client/retryable_blobstore_client.rb:18:in `get'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/release_compiler.rb:150:in `find_in_indices'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/release_compiler.rb:109:in `find_package'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/release_compiler.rb:59:in `block in compile'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/release_compiler.rb:53:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/release_compiler.rb:53:in `compile'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/commands/release.rb:209:in `upload_manifest'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/commands/release.rb:116:in `upload'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/command_handler.rb:57:in `run'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/runner.rb:56:in `run'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/lib/cli/runner.rb:16:in `run'
/usr/local/rvm/gems/ruby-1.9.3-p545/gems/bosh_cli-1.2479.0/bin/bosh:7:in `<top (required)>'
/usr/local/rvm/gems/ruby-1.9.3-p545/bin/bosh:23:in `load'
/usr/local/rvm/gems/ruby-1.9.3-p545/bin/bosh:23:in `<main>'
/usr/local/rvm/gems/ruby-1.9.3-p545/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-1.9.3-p545/bin/ruby_executable_hooks:15:in `<main>'

Docker processes not stopping during `stop`

Was trying to upgrade, and the unmount step failed because various container processes (rabbit, redis, postgres) were still running, even though monit said docker & cf-containers were stopped.

Any ideas on why this might have occurred?

Private Docker Registry

Hello,

We are looking to have a docker bosh release that will start up a private docker registry. We were thinking that we could accomplish this in a couple of ways, but wanted to get input.

  1. We could create an errand to run the registry
  2. We can add a flag to the CTL to start the registry -- we have a prototype working in bosh-lite at the moment.

example: https://docs.docker.com/registry/deploying/
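For reference, the linked guide boils down to running a single container, so option 2 stays small; a sketch (the host path for layer storage is an assumption, chosen to land on the BOSH persistent disk):

    # Run a v2 registry on port 5000 with a restart policy, persisting
    # layer data on the persistent disk (host path is an assumption)
    docker run -d -p 5000:5000 --restart=always --name registry \
      -v /var/vcap/store/registry:/var/lib/registry \
      registry:2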

Thanks,
@grayisgreat && @amitshah3

create-service-broker failing

Hello,

I've been able to successfully deploy this docker-bosh release alongside my existing CF release (v197) and my Diego release (v0.825), but ran into a problem with the creation of the service broker piece.

I'm running this command and have encountered this error:

ubuntu@boshcli2:~/bosh-workspace/releases/docker-boshrelease$ cf create-service-broker cf-containers-broker containers containers http://cf-containers-broker.cf-ice-dev.fmr.com
Creating service broker cf-containers-broker as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker API returned an error from http://cf-containers-broker.cf-ice-dev.fmr.com/v2/catalog: 404 Not Found

When reviewing my gorouter logfiles, I can see that it's not recognizing the cf-containers-broker domain as registered and returns a 404 error. Below are the gorouter and cloud controller log snippets:

gorouter log:
{"timestamp":1423165231.038710356,"process_id":1815,"source":"router.proxy.request-handler","log_level":"warn","message":"404 Not Found: Requested route ('cf-containers-broker.cf-ice-dev.fmr.com') does not exist.","data":{"Host":"cf-containers-broker.cf-ice-dev.fmr.com","Path":"/v2/catalog","RemoteAddr":"192.168.5.21:54568","X-Forwarded-For":["10.207.139.11"],"X-Forwarded-Proto":["http"]}}
{"timestamp":1423167703.983408451,"process_id":1815,"source":"router.proxy.request-handler","log_level":"warn","message":"proxy.endpoint.not-found","data":{"Host":"cf-containers.cf-ice-dev.fmr.com","Path":"/v2/catalog","RemoteAddr":"192.168.5.21:59967","X-Forwarded-For":["10.207.139.11"],"X-Forwarded-Proto":["http"]}}
{"timestamp":1423167703.984487772,"process_id":1815,"source":"router.proxy.request-handler","log_level":"warn","message":"404 Not Found: Requested route ('cf-containers.cf-ice-dev.fmr.com') does not exist.","data":{"Host":"cf-containers.cf-ice-dev.fmr.com","Path":"/v2/catalog","RemoteAddr":"192.168.5.21:59967","X-Forwarded-For":["10.207.139.11"],"X-Forwarded-Proto":["http"]}}

cloudcontroller_log:
{"timestamp":1423165226.1368384,"message":"(0.001015s) SELECT count() AS "count" FROM "service_brokers" WHERE ("name" = 'cf-containers-broker') LIMIT 1","log_level":"debug","source":"cc.db","data":{"request_guid":"ecfdf483-5d61-4824-56d9-d27d72e1f72b::e60cf16c-6661-42e1-a896-c2bb1cf05957"},"thread_id":69879481574120,"fiber_id":69879493101160,"process_id":3348,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.17.0/lib/sequel/database/logging.rb","lineno":70,"method":"block in log_each"}
{"timestamp":1423165226.1386719,"message":"(0.000859s) SELECT count(
) AS "count" FROM "service_brokers" WHERE ("broker_url" = 'http://cf-containers-broker.cf-ice-dev.fmr.com') LIMIT 1","log_level":"debug","source":"cc.db","data":{"request_guid":"ecfdf483-5d61-4824-56d9-d27d72e1f72b::e60cf16c-6661-42e1-a896-c2bb1cf05957"},"thread_id":69879481574120,"fiber_id":69879493101160,"process_id":3348,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.17.0/lib/sequel/database/logging.rb","lineno":70,"method":"block in log_each"}
{"timestamp":1423165226.1426187,"message":"Sending get to http://cf-containers-broker.cf-ice-dev.fmr.com/v2/catalog, BODY: nil, HEADERS: ","log_level":"debug","source":"cc.service_broker.v2.http_client","data":{"request_guid":"ecfdf483-5d61-4824-56d9-d27d72e1f72b::e60cf16c-6661-42e1-a896-c2bb1cf05957"},"thread_id":69879481574120,"fiber_id":69879493101160,"process_id":3348,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/services/service_brokers/v2/http_client.rb","lineno":78,"method":"make_request"}
{"timestamp":1423165227.2595744,"message":"Sending registration: {:host=>"192.168.5.55", :port=>9022, :uris=>["api.cf-ice-dev.fmr.com"], :tags=>{:component=>"CloudController"}, :index=>0, :private_instance_id=>nil}","log_level":"debug","source":"cf.registrar","data":{},"thread_id":69879470576380,"fiber_id":69879498660520,"process_id":3348,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/cache/cf-registrar-e586e0c16bbb/lib/cf/registrar.rb","lineno":100,"method":"send_registration_message"}

If I try and use the IP address of the actual broker, that doesn't work either.

Any thoughts?
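One way to narrow it down is to hit the broker's catalog endpoint directly on the broker VM, bypassing gorouter entirely (placeholders in angle brackets; the credentials are whatever the broker's auth username/password are set to in the manifest):

    # A 200 with a JSON catalog here means the broker itself is healthy
    # and only the route registration is broken
    curl -v -u '<broker-user>:<broker-password>' \
      -H "X-Broker-API-Version: 2.4" \
      http://<broker-vm-ip>:<broker-port>/v2/catalog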

upgrade from release v15 to 23 failing

Upgrading from v15 to v23 is failing; Docker processes are not coming back up:

Error response from daemon: client is newer than server (client API version: 1.21, server API version: 1.19)

multiple bind_volumes for a container: conflicting mount points

Hello,
I am trying to deploy a gitlab-ce docker image in a bosh deployment.
The docker image is designed with 3 volumes, for distinct externalization of logs / config / data (in our case, I expected 3 subdirectory mount points under /var/vcap/store/containers/gitlab).

It seems the job_properties.sh launch script uses a single directory for all 3 volumes:

# Create a bind mount to a directory
export gitlab_bind_volumes="--volume=${CONTAINERS_STORE_DIR}/gitlab:/etc/gitlab --volume=${CONTAINERS_STORE_DIR}/gitlab:/var/log/gitlab --volume=${CONTAINERS_STORE_DIR}/gitlab:/var/opt/gitlab"
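What I would expect instead is one host subdirectory per container volume, something like the sketch below (the sanitized subdirectory names are made up for illustration):

    # One distinct host directory per volume, so the three mounts no
    # longer collide on a single directory
    export gitlab_bind_volumes="--volume=${CONTAINERS_STORE_DIR}/gitlab/etc_gitlab:/etc/gitlab --volume=${CONTAINERS_STORE_DIR}/gitlab/var_log_gitlab:/var/log/gitlab --volume=${CONTAINERS_STORE_DIR}/gitlab/var_opt_gitlab:/var/opt/gitlab"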

Here is the bosh manifest snippet for the gitlab docker image:

      containers:
        - name: gitlab
          image: "gitlab/gitlab-ce:rc"
          hostname: "gitlab.example.com"
          #command: "--dir /var/lib/redis/ --appendonly yes"
          bind_ports:
            - "80:80"
            - "443:443"
            - "2222:22"
          bind_volumes:
            - "/etc/gitlab"
            - "/var/log/gitlab"
            - "/var/opt/gitlab"
          memory: "2024m"

Can't find template `swarm_manager'

I edited docker-swarm-aws.yml for my environment and then ran:

bosh upload release releases/docker-9.yml
bosh deployment examples/docker-swarm-aws.yml
bosh -n deploy

Started preparing deployment > Binding templates. Failed: Can't find template `swarm_manager' (00:00:00)

I am pretty new to bosh so I am not sure exactly what is wrong, but I noticed that docker-9.yml does not have a reference to swarm_manager.

bosh create release failing

...
Building cf-containers-broker...
  Final version:   NOT FOUND
  Dev version:     NOT FOUND
  Generating...
  Pre-packaging...
  > + set -e
  > + set -u
  > + cd /var/folders/pf/c8pdkkyj75nb3hclrsr6hkdh0000gn/T/d20140827-12646-lwi2lh/d20140827-12646-13qhx8s/cf-containers-broker
  > + bundle package --all
  > Using rake (10.3.2)
  > Using i18n (0.6.11)
  > Using json (1.8.1)
  > Using minitest (5.4.0)
  > Using thread_safe (0.3.4)
  > Using tzinfo (1.2.2)
  > Using activesupport (4.1.5)
  > Using builder (3.2.2)
  > Using erubis (2.7.0)
  > Using actionview (4.1.5)
  > Using rack (1.5.2)
  > Using rack-test (0.6.2)
  > Using actionpack (4.1.5)
  > Using mime-types (1.25.1)
  > Using polyglot (0.3.5)
  > Using treetop (1.4.15)
  > Using mail (2.5.4)
  > Using actionmailer (4.1.5)
  > Using activemodel (4.1.5)
  > Using arel (5.0.1.20140414130214)
  > Using activerecord (4.1.5)
  > Using addressable (2.3.6)
  > Using archive-tar-minitar (0.5.2)
  > Using timers (1.1.0)
  > Using celluloid (0.15.2)
  > Using eventmachine (1.0.3)
  > Using daemons (1.1.9)
  > Using json_pure (1.8.1)
  > Using thin (1.6.2)
  > Using nats (0.5.0.beta.14)
  > Using vcap-concurrency (0.1.0)
  > Using yajl-ruby (1.2.1)
  > Using cf-message-bus (0.2.0)
  > Using msgpack (0.5.8)
  > Using fluent-logger (0.4.9)
  > Using steno (1.2.4)
  > Using cf-registrar (1.0.1) from git://github.com/cloudfoundry/cf-registrar.git (at master)
  > Using multi_json (1.10.1)
  > Using cf-uaa-lib (1.3.10)
  > Using coderay (1.1.0)
  > Using safe_yaml (1.0.3)
  > Using crack (0.4.2)
  > Using diff-lcs (1.2.5)
  > Using excon (0.39.5)
  > Using docker-api (1.13.2)
  > Using ffi (1.9.3)
  > Using formatador (0.2.5)
  > Using rb-fsevent (0.9.4)
  > Using rb-inotify (0.9.5)
  > Using listen (2.7.9)
  > Using lumberjack (1.0.9)
  > Using method_source (0.8.2)
  > Using slop (3.6.0)
  > Using pry (0.10.1)
  > Using thor (0.19.1)
  > Using guard (2.6.1)
  > Using guard-rails (0.5.3)
  > Using hashie (3.2.0)
  > Using hike (1.2.3)
  > Using kgio (2.9.2)
  > Using omniauth (1.2.2)
  > 
  > omniauth-uaa-oauth2 at /Users/drnic/.rvm/gems/ruby-2.1.0/bundler/gems/omniauth-uaa-oauth2-2bbb78dc3c13 did not have a valid gemspec.
  > This prevents bundler from installing bins or native extensions, but that may not affect its functionality.
  > The validation message from Rubygems was:
  >   duplicate dependency on cf-uaa-lib (< 2.0, >= 1.3.1), (< 2.0, >= 1.3.1) use:
  >     add_runtime_dependency 'cf-uaa-lib', '< 2.0, >= 1.3.1', '< 2.0, >= 1.3.1'
  > Using omniauth-uaa-oauth2 (0.0.3) from git://github.com/cloudfoundry/omniauth-uaa-oauth2.git (at master)
  > Using bundler (1.5.2)
  > Using railties (4.1.5)
  > Using tilt (1.4.1)
  > Using sprockets (2.11.0)
  > Using sprockets-rails (2.1.3)
  > Using rails (4.1.5)
  > Using rails-api (0.2.1)
  > Using raindrops (0.13.0)
  > Using rspec-support (3.0.4)
  > Using rspec-core (3.0.4)
  > Using rspec-expectations (3.0.4)
  > Using rspec-mocks (3.0.4)
  > Using rspec-rails (3.0.2)
  > Using sass (3.2.19)
  > Using sass-rails (4.0.3)
  > Using settingslogic (2.0.9)
  > Using unicorn (4.8.3)
  > Using webmock (1.18.0)
  > Your bundle is complete!
  > Use `bundle show [gemname]` to see where a bundled gem is installed.
  > Updating files in vendor/cache
  >   * rake-10.3.2.gem
  >   * i18n-0.6.11.gem
  > Could not find json-1.8.1.gem for installation
`cf-containers-broker' pre-packaging failed

There is no Gemfile for the bosh release. Is there a setup step I'm missing?

Failed: 'broker/0' is not running after update

I am trying to set up docker containers for services in my CF installation. However the deployment fails with this error:

Started updating job broker > broker/0 (231e3451-82a6-40e8-910f-c9512ae6bd2f). Failed: 'broker/0 (231e3451-82a6-40e8-910f-c9512ae6bd2f)' is not running after update. Review logs for failed jobs: docker, cf-containers-broker, cf-containers-broker-route-registrar (00:01:15)

Error 400007: 'broker/0 (231e3451-82a6-40e8-910f-c9512ae6bd2f)' is not running after update. Review logs for failed jobs: docker, cf-containers-broker, cf-containers-broker-route-registrar

Task 133 error

Here is my manifest file docker-broker-openstack.txt

As you can see, I am also behind a proxy.

This is the error message present in fetch_container_images.stderr in the cf_containers_broker directory on broker/0:

/var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/docker_manager.rb:268:in `rescue in validate_docker_remote_api': Unable to connect to the Docker Remote API `unix:///var/vcap/sys/run/docker/docker.sock': No such file or directory - connect(2) for /var/vcap/sys/run/docker/docker.sock (Errno::ENOENT) (Exceptions::BackendError)
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/docker_manager.rb:258:in `validate_docker_remote_api'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/docker_manager.rb:20:in `initialize'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/plan.rb:67:in `new'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/plan.rb:67:in `build_container_manager'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/plan.rb:26:in `initialize'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/plan.rb:11:in `new'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/plan.rb:11:in `build'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/service.rb:11:in `block in build'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/service.rb:11:in `map'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/service.rb:11:in `build'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/catalog.rb:13:in `block in services'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/catalog.rb:13:in `map'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/catalog.rb:13:in `services'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/app/models/catalog.rb:21:in `plans'
        from /var/vcap/data/packages/cf-containers-broker/0876194f52cc409e4d55b4163f86910d8a2ade5d.1-8b5bc27d90fa4a43338c3a56231b18b1cd828bab/lib/container_images.rb:10:in `fetch'
        from /var/vcap/packages/cf-containers-broker/bin/fetch_container_images:7:in `<main>'

Any help is appreciated

Thanks
Sreenath

Upgrading job fails when there is a docker api version mismatch

After the docker binary has changed and monit stops the docker job, the docker client call in the stop path of docker_ctl will fail with the error: Error response from daemon: client is newer than server (client API version: 1.23, server API version: 1.22)

Due to the set -e at the top of the script, the script will exit and the old process will still be running.

A way to resolve the issue currently is to bosh ssh into the job and run kill $(cat /var/vcap/sys/run/docker/docker.pid) && monit restart all, but it would be great if the docker_ctl script could handle restarting even when facing an API version upgrade.
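A hedged sketch of a more tolerant stop path for docker_ctl: try the client first, and fall back to the pidfile when the client/daemon API versions disagree, so set -e never leaves the old daemon running:

    # Clean stop via the client; tolerate failure (e.g. API version
    # mismatch right after a binary upgrade)
    docker stop $(docker ps -q) || true

    # Fallback: signal the daemon directly via its pidfile
    if [ -f /var/vcap/sys/run/docker/docker.pid ]; then
      kill "$(cat /var/vcap/sys/run/docker/docker.pid)" || true
    fi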

"cannot extract license tarball" error when uploading release

I'm attempting to upload the latest release, 28.0.0 (I've tried others also). Each time, the upload gets to the "Copying License" stage and errors out with "Cannot extract license tarball". I am unable to complete the uploads. I am using BOSH 1.3262.4.0 and uploading to PCF on vSphere.

I'm a bit at a loss as to how to correct this error.

Is there an equivalent to /etc/init/docker.conf?

Hello,

I'm trying to enable the remote api for docker as described here: http://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api/

According to them, in order to get this to work, we would need to add:

description     "Docker daemon"

start on filesystem and started lxc-net
stop on runlevel [!2345]

respawn

script
    /usr/bin/docker -H tcp://127.0.0.1:4243 -d
end script

I've tried this without a bosh release and it works like a charm. However, when I tried the same thing on the bosh-deployed VM (creating the docker.conf file and putting it in /etc/init/docker.conf), it didn't work (I didn't think it would anyway).

My question is, how do I get this to work with this bosh release? Is there a configuration file similar to this somewhere that I haven't noticed, or something?

Thanks in advance

EDIT: The reason why I need this, is because I have a jenkins master (located in a different vm) that uses containers as slaves.
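For what it's worth, upstart files in /etc/init are never read on a BOSH VM, because monit (not upstart) supervises the daemon; the flags have to end up on the daemon command line that the docker job's templates assemble. The equivalent invocation for the Docker versions of that era would be something like this sketch (whether the release already exposes a property for it, I don't know; the socket path matches the one seen elsewhere in this tracker):

    # Expose the Remote API on localhost:4243 while keeping the unix
    # socket for local clients
    docker -d -H tcp://127.0.0.1:4243 -H unix:///var/vcap/sys/run/docker/docker.sock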

Error filling in template `settings.yml.erb'

Hi,

Trying to upgrade to v16, but getting the following on bosh -n deploy:

Director task 463
  Started preparing deployment
  Started preparing deployment > Binding deployment. Done (00:00:00)
  Started preparing deployment > Binding releases. Done (00:00:01)
  Started preparing deployment > Binding existing deployment. Done (00:00:00)
  Started preparing deployment > Binding resource pools. Done (00:00:00)
  Started preparing deployment > Binding stemcells. Done (00:00:00)
  Started preparing deployment > Binding templates. Done (00:00:00)
  Started preparing deployment > Binding properties. Done (00:00:00)
  Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
  Started preparing deployment > Binding instance networks. Done (00:00:00)
     Done preparing deployment (00:00:01)

  Started preparing package compilation > Finding packages to compile. Done (00:00:00)

  Started preparing dns > Binding DNS. Done (00:00:00)

  Started preparing configuration > Binding configuration. Failed: Error filling in template `settings.yml.erb' for `docker/0' (line 3: Can't find property `["broker.username"]') (00:00:01)

Error 100: Error filling in template `settings.yml.erb' for `docker/0' (line 3: Can't find property `["broker.username"]')

Container lost when recreating docker node when using swarm

I deployed the docker boshrelease using docker-services-boshworkspace (deployment docker-swarm-warden.yml) on bosh-lite.

I then created a service instance:

curl -v 'http://containers:<password>@<broker-host>/v2/service_instances/ee539f6a-f0ee-4b1b-ab67-83d8a7d399d0' -d '{
  "service_id":        "2fd814ac-d1f7-4d4a-a4f7-d386cd8fd8e3",
  "plan_id":           "1a0efffc-eb45-4bf8-8ee3-a3c1a9a53151",
  "organization_guid": "ee539f6a-f0ee-4b1b-ab67-83d8a7d399d0l",
  "space_guid":        "ee539f6a-f0ee-4b1b-ab67-83d8a7d399d0"

}' -X PUT -H "X-Broker-API-Version: 2.4" -H "Content-Type: application/json"

And verified it worked by doing:

unset DOCKER_TLS_VERIFY
docker -H=tcp://10.244.8.2:2375 ps
docker -H=tcp://10.244.8.2:2375 logs <container_id>

After running: bosh -n recreate docker 0 && bosh -n recreate docker 1 my container does not show up when running: docker -H=tcp://10.244.8.2:2375 ps.

While debugging I found that stopping the swarm manager (monit stop swarm_manager) while recreating the docker nodes would preserve the containers.

Issue while downloading an image from a private registry

Hi,
While downloading an image from a private registry, I need to have the credentials supplied within the ".dockercfg" file.
The current workaround is to have this copied to the VM manually, and that is creating an issue in the deployment.

We should have some clean way of running the docker login command as part of the deployment to authenticate against the private registry.
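A hedged sketch of what "clean" could look like: render the credentials into the docker user's .dockercfg from job properties at job start, instead of copying the file by hand. The property names, paths, and values below are invented for illustration:

    # Written from a job template at deploy time; equivalent to `docker login`.
    # The auth field is base64("user:password") for the registry in question.
    cat > /root/.dockercfg <<'EOF'
    {
      "https://registry.example.com/v1/": {
        "auth": "dXNlcjpwYXNzd29yZA==",
        "email": "user@example.com"
      }
    }
    EOF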

`bosh sync blobs` fails due to sha1 mismatch

bosh sync blobs
Syncing blobs...
docker/aufs-tools_20120411-3_amd64.deb downloading 89.6K (536%)
/Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/blobstore_client-1.3202.0/lib/blobstore_client/sha1_verifiable_blobstore_client.rb:38:in `check_sha1': sha1 mismatch expected=2dfc1fe386cd3f05ac7e0b4ebcf3ebc8a7f3b04d actual=26cf59ecfdfd980c044ed8d95ebc407d47ebba6b (Bosh::Blobstore::BlobstoreError)
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/blobstore_client-1.3202.0/lib/blobstore_client/sha1_verifiable_blobstore_client.rb:24:in `get'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/blobstore_client-1.3202.0/lib/blobstore_client/retryable_blobstore_client.rb:19:in `block in get'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/retryable.rb:28:in `call'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/retryable.rb:28:in `block in retryer'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/retryable.rb:26:in `loop'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/retryable.rb:26:in `retryer'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/blobstore_client-1.3202.0/lib/blobstore_client/retryable_blobstore_client.rb:18:in `get'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_cli-1.3202.0/lib/cli/blob_manager.rb:316:in `download_blob'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_cli-1.3202.0/lib/cli/blob_manager.rb:242:in `block (3 levels) in process_index'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/thread_pool.rb:77:in `call'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/thread_pool.rb:63:in `loop'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/bosh_common-1.3202.0/lib/common/thread_pool.rb:63:in `block in create_thread'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
        from /Users/jcarter/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'

Use of env var DOCKER_TLS_VERIFY has undesired side effect

Hello Ferran,

While trying to extend the swarm approach in your bosh release, @holgerkoser and I noticed that you have used (most likely by accident) an env variable that docker itself uses as well. You define quite a few in [1] and hand them over when you call docker in [2]. Now, the one that clashes is DOCKER_TLS_VERIFY. Usually it will be set to --tlsverify=false. However, when docker sees this env var carrying any value (whatever it is), it will assume it's true, as described in [3]:

Setting the DOCKER_TLS_VERIFY environment variable to any value other than the empty string is equivalent to setting the --tlsverify flag.

This makes it hard (unset DOCKER_TLS_VERIFY helps, but is a workaround; see the sketch below) to call the docker client in [2]. Calling the docker client is handy for doing some clean-up, in particular to stop the containers cleanly before stopping or even killing the daemon via monit. If they are not stopped cleanly, unmounting fails (also in a plain broker+docker deployment).

Best regards,
Vedran

P.S.: Maybe you are interested in swarm issue [4]? You are using docker-api to get to the ApiVersion. This is the correct spelling (as used/returned by the docker REST API), but Swarm reports it as APIVersion (API all uppercase), which is strictly speaking incompatible with docker and not a documented exception. This again results in an error when the broker tries to talk to the swarm manager instead of the docker daemon.

[1] https://github.com/cf-platform-eng/docker-boshrelease/blob/master/jobs/docker/templates/bin/job_properties.sh.erb
[2] https://github.com/cf-platform-eng/docker-boshrelease/blob/master/jobs/docker/templates/bin/docker_ctl
[3] https://docs.docker.com/reference/commandline/cli
[4] docker-archive/classicswarm#687
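Until the variable is renamed, a per-invocation workaround that avoids mutating the calling shell is to strip it from the environment just for the client call (env -u is standard GNU coreutils):

    # Run the docker client without inheriting the clashing variable,
    # leaving DOCKER_TLS_VERIFY untouched for the daemon's own scripts
    env -u DOCKER_TLS_VERIFY docker ps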

Blobstore error: sha1 mismatch when downloading release

Hi,

I'm using BOSH 1.3184.1.0 to download the newest release of docker-boshrelease. However, when doing so I'm getting a sha1 checksum error:

Fetching release 'docker' to satisfy template references
Version '23' has been checked out into:
- /home/ubuntu/workspace/deployments/docker-services-boshworkspace/.releases/docker

Uploading 'docker/23'
Recreating release from the manifest
Downloading from blobstore (id=dddaad4f-e888-4959-85aa-22466c194fd8)...
Blobstore error: sha1 mismatch expected=70336f7a629ea484182189692b6c6f9d3187cc6a actual=7b7223aa31946232eb7f5123795ec6d943de63fa

This looks like a problem with the release. Could you verify it?

service broker creation failed

I was able to deploy the service broker using the bosh deploy command. While creating the service broker (cf create-service-broker containers containers http:://cf-containers-broker.XX.XX.XX.XX.xip.io), it's failing with an unknown error.

Error trace:

REQUEST: POST http://api.52.72.233.245.xip.io/v2/service_brokers
REQUEST_HEADERS: Accept: application/json Authorization: [PRIVATE DATA HIDDEN] Content-Length: 153 Content-Type: application/json
REQUEST_BODY: {"name":"cf-containers-broker","broker_url":"http://cf-containers-broker.52.72.233.245.xip.io","auth_username":"containers","auth_password":"containers"}
RESPONSE: [500]
RESPONSE_HEADERS: content-length: 99 content-type: application/json;charset=utf-8 date: Fri, 19 Feb 2016 09:43:27 GMT server: nginx x-cf-requestid: 40d370ee-688d-4b9f-4138-7f412ed84a5f x-content-type-options: nosniff x-vcap-request-id: 044e8c41-a802-4031-4103-cae15d0527b8::38a71413-31f9-4f9c-a61b-6c3e9ebe156d
RESPONSE_BODY: { "error_code": "UnknownError", "description": "An unknown error occurred.", "code": 10001 }
