
Slides and code samples for training, tutorials, and workshops about Docker, containers, and Kubernetes.

Home Page: http://container.training/

License: Other


container.training's Introduction

Container Training

This repository (formerly known as orchestration-workshop) contains materials (slides, scripts, demo app, and other code samples) used for various workshops, tutorials, and training sessions around the themes of Docker, containers, and orchestration.

For the moment, it includes:

  • Introduction to Docker and Containers,
  • Container Orchestration with Docker Swarm,
  • Container Orchestration with Kubernetes.

These materials have been designed around the following principles:

  • they assume very little prior knowledge of Docker, containers, or a particular programming language;
  • they can be used in a classroom setup (with an instructor), or self-paced at home;
  • they are hands-on, meaning that they contain lots of examples and exercises that you can easily reproduce;
  • they progressively introduce concepts in chapters that build on top of each other.

If you're looking for the materials, you can stop reading right now and hop over to http://container.training/, which hosts all the slide decks available.

The rest of this document explains how this repository is structured, and how to use it to deliver (or create) your own tutorials.

Why a single repository?

All these materials have been gathered in a single repository because they have a few things in common.

What are the different courses available?

Introduction to Docker is derived from the first "Docker Fundamentals" training materials. For more information, see jpetazzo/intro-to-docker. The version in this repository has been adapted to the Markdown publishing pipeline. It is still maintained, but only receives minor updates once in a while.

Container Orchestration with Docker Swarm (formerly known as "Orchestration Workshop") is a workshop created by Jérôme Petazzoni in June 2015. Since then, it has been continuously updated and improved, and has received contributions from many other authors. It is actively maintained.

Container Orchestration with Kubernetes was created by Jérôme Petazzoni in October 2017, with help and feedback from a few other contributors. It is actively maintained.

Repository structure

  • bin
    • A few helper scripts that you can safely ignore for now.
  • dockercoins
    • The demo app used throughout the orchestration workshops.
  • efk, elk, prom, snap:
    • Logging and metrics stacks used in the later parts of the orchestration workshops.
  • prepare-local, prepare-machine:
    • Contributed scripts to automate the creation of local environments. These could use some help to test/check that they work.
  • prepare-vms:
    • Scripts to automate the creation of AWS instances for students. These are routinely used and actively maintained.
  • slides:
    • All the slides! They are assembled from Markdown files with a custom Python script, and then rendered using gnab/remark. Check this directory for more details.
  • stacks:
    • A handful of Compose files (version 3) that make it easy to deploy complex application stacks.

Course structure

(This applies only for the orchestration workshops.)

The workshop introduces a demo app, "DockerCoins," built around a micro-services architecture. First, we run it on a single node, using Docker Compose. Then, we pretend that we need to scale it, and we use an orchestrator (SwarmKit or Kubernetes) to deploy and scale the app on a cluster.

We explain the concepts of the orchestrator. For SwarmKit, we set up the cluster with docker swarm init and docker swarm join. For Kubernetes, we use pre-configured clusters.

Then, we cover more advanced concepts: scaling, load balancing, updates, global services or daemon sets.
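
For reference, here is a minimal sketch of that flow on the SwarmKit side (the manager address, token, and service image below are placeholders; use the values printed by your own cluster):

# On the first node: create the cluster
docker swarm init --advertise-addr <manager-ip>

# On each other node: join using the token printed by "docker swarm init"
docker swarm join --token <worker-token> <manager-ip>:2377

# Deploy, scale, and update a service across the cluster
docker service create --name web --publish 8000:80 nginx
docker service scale web=10
docker service update --image nginx:alpine web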

There are a number of advanced optional chapters about logging, metrics, secrets, network encryption, etc.

The content is very modular: it is broken down into a large number of Markdown files, which are put together according to a YAML manifest. This makes it very easy to re-use content between different workshops.

DockerCoins

The sample app is in the dockercoins directory. It's used throughout the chapters to explain different orchestration concepts.

To see it in action:

  • cd dockercoins && docker-compose up -d
  • this will build and start all the services
  • the web UI will be available on port 8000
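
A few follow-up commands can help check that everything came up (standard Compose commands; the worker service name comes from the app's Compose file):

docker-compose ps                      # list the services and their state
docker-compose logs --tail=10 worker   # check that the worker is doing work
curl -s localhost:8000 | head          # quick sanity check of the web UI
docker-compose down                    # tear everything down when you're done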

Running the Workshop

If you want to deliver one of these workshops yourself, this section is for you!

*This section has been mostly contributed by Bret Fisher, who was one of the first people brave enough to deliver this workshop without me. Thanks Bret! 🍻

Jérôme.*

General timeline of planning a workshop

  • Fork the repo and run through the slides, doing the hands-on exercises to be sure you understand the different dockercoins repos and the steps we go through to get to a full Swarm Mode cluster running many containers. At a minimum, you'll update the first few slides and the last slide with your info.
  • Your docs directory can use GitHub Pages.
  • This workshop expects 5 servers per student. You can get away with as few as 2 servers per student, but you'll need to change the slide deck to accommodate that. More servers = more fun.
  • If you have more than ~20 students, try to get an assistant (TA) to help people with issues, so you don't have to stop the workshop to help someone with SSH, etc.
  • AWS is our most tested process for generating student machines. In prepare-vms you'll find scripts to create EC2 instances, install Docker, pre-pull images, and even print "cards" to place at each student's seat with IPs and a username/password.
  • Test AWS scripts: be sure to test creating all your needed servers a week before the workshop (just for a few minutes). You'll likely hit AWS limits in the region closest to your class, and it sometimes takes days to get AWS to raise those limits with a support ticket.
  • Create a https://gitter.im chat room for your workshop and update the slides with its URL. It's also useful for a TA to monitor during the workshop. You can use it before/after to answer questions, and it generally works better than "email me that question".
  • If you can send an email to students ahead of time, mention how they should get SSH, and ask them to test that SSH works (see the check right after this list). If they can ssh github.com and get "Permission denied (publickey)", then they know it worked: SSH is properly installed and nothing is blocking it. SSH and a browser are all they need for class.
  • Typically you create the servers the day before or the morning of the workshop, and leave them up for the rest of the day afterwards. If creating hundreds of servers, you'll likely want to run all these workshopctl commands from a dedicated instance in the same region as the instances you want to create; it's much faster this way if you're on poor internet. Also, create 2 sets of servers for yourself: use one during the workshop and keep the second as a backup.
  • Remember you'll need to print the "cards" for students, so you'll need to create instances while you have a way to print them.
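
Here is the SSH check mentioned in the list above, in a form you can paste into the pre-workshop email (the "Permission denied" reply is the expected, successful outcome when no GitHub key is configured):

ssh -T git@github.com
# Expected reply: "git@github.com: Permission denied (publickey)."
# Any reply from github.com means SSH is installed and nothing on the network is blocking it.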

Things That Could Go Wrong

  • Creating AWS instances ahead of time, hitting the limits in your region, and not having planned enough time to wait on support to increase them. :(
  • Students have technical issues during workshop. Can't get ssh working, locked-down computer, host firewall, etc.
  • Horrible wifi, or SSH port TCP/22 not open on the network! If the wifi is bad, you can try Mosh (https://mosh.org), which runs the remote session over UDP (SSH is only used for the initial connection). tmux (https://tmux.github.io) can also prevent you from losing your place if you get disconnected from the servers.
  • Forget to print the "cards" and cut them up for handing out IPs.
  • Forget to have fun and focus on your students!

Creating the VMs

prepare-vms/workshopctl is the script that gets you most of what you need for setting up instances. See prepare-vms/README.md for all the info on tools and scripts.

Content for Different Workshop Durations

With all the slides, this workshop is a full day long. If you need to deliver it on a shorter timeline, here are some recommendations on what to cut. You can replace --- with ???, which will hide slides. Or leave them in and add something like "(EXTRA CREDIT)" to the title, so students can still view the content, but you know to skip it during the presentation.

3 Hour Version

  • Limit time on debug tools, and maybe skip a few ("Chapter 1: Identifying bottlenecks")
  • Limit time on Compose; try to have them building the Swarm Mode cluster by 30 minutes in
  • Skip most of Chapter 3, Centralized Logging and ELK
  • Skip most of Chapter 4, but keep stateful services and DABs if possible
  • Mention what DABs are, but make this part optional in case you run out of time

2 Hour Version

  • Skip all the above, and:
  • Skip the story arc of debugging dockercoins altogether, along with the troubleshooting tools. Just focus on getting them from single-host to multi-host and multi-container.
  • The goal is to spend the first 30 minutes on the intro, Docker Compose, what dockercoins is, and getting it up on one node with docker-compose.
  • The next 60-75 minutes are for getting dockercoins running as Swarm Mode services across servers. Big win.
  • The last 15-30 minutes are for stateful services, DAB files, and questions.

Pre-built images

There are pre-built images for the 4 components of the DockerCoins demo app: dockercoins/hasher:v0.1, dockercoins/rng:v0.1, dockercoins/webui:v0.1, and dockercoins/worker:v0.1. They correspond to the code in this repository.

There are also three variants, for demo purposes:

  • dockercoins/rng:v0.2 is broken (the server won't even start),
  • dockercoins/webui:v0.2 has bigger font on the Y axis and a green graph (instead of blue),
  • dockercoins/worker:v0.2 is 11x slower than v0.1.
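
One way to demo the variants during the orchestration part is to roll one out as a service update, then roll back. This is a sketch that assumes the stack was deployed under the name "dockercoins", so the service is named dockercoins_webui; adjust to your setup:

# Roll the green-graph variant out to the running webui service...
docker service update --image dockercoins/webui:v0.2 dockercoins_webui
# ...then roll back to the previous image.
docker service update --rollback dockercoins_webui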

Past events

Since its inception, this workshop has been delivered dozens of times, to thousands of people, and has continuously evolved. This is a short history of the first times it was delivered. Look also in the "tags" of this repository: they all correspond to successive iterations of this workshop. If you attended a past version of the workshop, you can use these tags to see what has changed since then.

  • QCON, New York City (2015, June)
  • KCDC, Kansas City (2015, June)
  • JDEV, Bordeaux (2015, July)
  • OSCON, Portland (2015, July)
  • StrangeLoop, Saint Louis (2015, September)
  • LISA, Washington D.C. (2015, November)
  • SCALE, Pasadena (2016, January)
  • Zenika, Paris (2016, February)
  • Container Solutions, Amsterdam (2016, February)
  • ... and many more!

Problems? Bugs? Questions?

If there is a bug and you can fix it: submit a PR. Make sure that I know who you are so that I can thank you (because you're the real MVP!)

If there is a bug and you can't fix it, but you can reproduce it: submit an issue explaining how to reproduce.

If there is a bug and you can't even reproduce it: sorry. It is probably a Heisenbug. We can't act on it until it's reproducible, alas.

“Please teach us!”

If you have attended one of these workshops, and want your team or organization to attend a similar one, you can look at the list of upcoming events on http://container.training/.

You are also welcome to reuse these materials to run your own workshop, for your team or even at a meetup or conference. In that case, you might enjoy watching Bridget Kromhout's talk at KubeCon 2018 Europe, explaining precisely how to run such a workshop yourself.

Finally, you can also contact the following persons, who are experienced speakers, are familiar with the material, and are available to deliver these workshops at your conference or for your company:

  • jerome dot petazzoni at gmail dot com
  • bret at bretfisher dot com

(If you are willing and able to deliver such workshops, feel free to submit a PR to add your name to that list!)

Thank you!

container.training's People

Contributors

antweiss, arcln, arthurzenika, atsaloli, bretfisher, bridgetkromhout, christianbumann, crd, ctas582, dependabot[bot], djalal, fc92, guilhem, gurayops, jgarrouste, jpetazzo, jsubirat, juliogomez, maximede, mkrupczak3, raulkite, rdegez, schrodervictor, soulshake, stefanlasiewski, svx, tianon, tiffanyfay, tullo, zempashi


container.training's Issues

LTS update

Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.15.0-1023-azure x86_64)
.
.
.
New release '18.04.1 LTS' available.

We could test with the newest LTS, then change to using that.
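
Alternatively, an existing instance can be upgraded in place with the standard Ubuntu tooling (sketch only; worth testing on a throwaway VM before touching student machines):

sudo apt update && sudo apt upgrade -y
sudo do-release-upgrade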

Move to GitHub docs for slides

I went to check out the slides earlier today at http://view.dckr.info:8080/ and it seemed to be down. That got me thinking it might be handy to try "Publish Your Project Documentation with GitHub Pages" for the slides.

I tried this out at https://everett-toews.github.io/orchestration-workshop/ and it seems to work pretty well. It's running out of this folder in my fork. I did have to include remark-0.13.min.js in that dir to avoid mixed content and cert errors.

I'm not sure if there are other reasons I'm unaware of for running the slides at http://view.dckr.info:8080. If not and if this is a change you'd like, I could submit it as a PR.

errors from Prometheus?

Looking at http://70.37.55.196:31277/#!/pod/default/winsome-wasp-prometheus-alertmanager-784d9bddf6-4pcqw?namespace=default I see this:

[two screenshots of the errors reported for the alertmanager pod]

I ran through that section starting at https://oscon2018.container.training/#325 pretty quickly so maybe I missed something, but I'm left with two questions:

  1. Is this expected: the persistent volume claim error and the related unavailable percentage?

[screenshot of the persistent volume claim error]

  2. What am I missing in the Prometheus section? Shouldn't I have a URL I can send people to, so they can look at it?

EFK issues

Hi, @jpetazzo - I'm curious about the memory/cpu specs of a cluster where you have this working: https://github.com/jpetazzo/container.training/blame/master/slides/kube/logs-centralized.md#L43

When I try it on either Standard_D1_v2 (1 core, 3.5 GiB memory) or Standard_D2_v2 (2 cores, 7 GiB memory) Azure instances that have been used for all the workshop exercises up to that point, it doesn't go well. Within seconds of me applying the yaml to get EFK started, one or both worker nodes become unresponsive. It's happened several times in a row so I don't think it's a one-off.

$ kubectl get no
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    38m       v1.10.0
node2     NotReady   <none>    38m       v1.10.0
node3     Ready      <none>    38m       v1.10.0
[52.186.28.163] (local) docker@node1 ~/container.training/stacks
$ kubectl describe node node2
Name:               node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node2
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 12 Apr 2018 01:41:13 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                    Message
  ----             ------    -----------------                 ------------------                ------                    -------
  OutOfDisk        Unknown   Thu, 12 Apr 2018 02:15:48 +0000   Thu, 12 Apr 2018 02:16:29 +0000   NodeStatusUnknown         Kubelet stopped posting node status.
  MemoryPressure   Unknown   Thu, 12 Apr 2018 02:15:48 +0000   Thu, 12 Apr 2018 02:16:29 +0000   NodeStatusUnknown         Kubelet stopped posting node status.
  DiskPressure     Unknown   Thu, 12 Apr 2018 02:15:48 +0000   Thu, 12 Apr 2018 02:16:29 +0000   NodeStatusUnknown         Kubelet stopped posting node status.
  PIDPressure      False     Thu, 12 Apr 2018 02:15:48 +0000   Thu, 12 Apr 2018 01:41:13 +0000   KubeletHasSufficientPID   kubelet has sufficient PID available
  Ready            Unknown   Thu, 12 Apr 2018 02:15:48 +0000   Thu, 12 Apr 2018 02:16:29 +0000   NodeStatusUnknown         Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.0.0.4
  Hostname:    node2
Capacity:
 cpu:                1
 ephemeral-storage:  30428648Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3501580Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  28043041951
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3399180Ki
 pods:               110
System Info:
 Machine ID:                 edcb2ab1249e461ca2ee3e33d9fb18c3
 System UUID:                E89B9EC3-013A-C741-AD45-9024596F270D
 Boot ID:                    6b913929-387c-4de8-8c3e-c3a4520e6aa3
 Kernel Version:             4.13.0-1012-azure
 OS Image:                   Ubuntu 16.04.4 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.0
 Kubelet Version:            v1.10.0
 Kube-Proxy Version:         v1.10.0
ExternalID:                  node2
Non-terminated Pods:         (21 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  default                    elastic-664569cb68-5jqbn          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    elastic-664569cb68-7hx8h          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    elastic-664569cb68-s2s9s          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    elastic-664569cb68-spzq5          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    elasticsearch-555dd49fc9-nflz2    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    fluentd-sngfb                     100m (10%)    0 (0%)      200Mi (6%)       200Mi (6%)
  default                    pingpong-74d57674fc-5hpj8         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    pingpong-74d57674fc-lp4rj         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    pingpong-74d57674fc-n9b9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    pingpong-74d57674fc-swgtm         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    pingpong-74d57674fc-vr8vd         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    redis-b48685f8b-2hd8s             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    rng-4xwr9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    worker-675df947b7-4xmql           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    worker-675df947b7-84vgq           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    worker-675df947b7-899nm           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    worker-675df947b7-gfk59           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    worker-675df947b7-pc55n           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-ds9tt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                socat-b8966767c-zzvmj             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-h4bxd                   20m (2%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  120m (12%)    0 (0%)      200Mi (6%)       200Mi (6%)
Events:
  Type    Reason                   Age   From               Message
  ----    ------                   ----  ----               -------
  Normal  Starting                 39m   kubelet, node2     Starting kubelet.
  Normal  NodeHasSufficientDisk    39m   kubelet, node2     Node node2 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  39m   kubelet, node2     Node node2 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m   kubelet, node2     Node node2 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     39m   kubelet, node2     Node node2 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  39m   kubelet, node2     Updated Node Allocatable limit across pods
  Normal  Starting                 38m   kube-proxy, node2  Starting kube-proxy.
  Normal  NodeReady                38m   kubelet, node2     Node node2 status is now: NodeReady
[52.186.28.163] (local) docker@node1 ~/container.training/stacks
$ 
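
One mitigation worth trying on nodes this small is to cap the Elasticsearch resources so a runaway pod can't take the whole node down. A sketch, assuming the deployment is named elasticsearch as the pod names above suggest (the values are guesses to adjust for the instance size):

kubectl set resources deployment elasticsearch \
  --requests=cpu=250m,memory=1Gi --limits=cpu=500m,memory=1536Mi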

Update opensg to only needed ports

./trainer opensg is cool: it adds rules to the current default Security Group. But it opens up all ports.

Update it to only open 22/8000, which I think are the only ports needed from the Internet, and support using a custom SG via an environment variable.
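
For reference, a sketch of what the narrower rules could look like with the AWS CLI (the security group ID is a placeholder, and the CIDR could be tightened further):

aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 8000 --cidr 0.0.0.0/0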

Fluentd daemonset crashing

When I deploy the EFK stack using:

kubectl apply -f https://goo.gl/MUZhE4

the fluentd pods keep crashing with the following error message:

2018-08-13 18:48:12 +0000 [error]: unexpected error error_class=Errno::EACCES error=#<Errno::EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos>

A recent update to the fluentd-kubernetes-daemonset README says that fluentd needs to be run as the root user, which can be done by setting the FLUENT_UID envvar to "0". Deploying that additional envvar does fix it.

I'm assuming this YAML used to work, but I've been unable to trace what changed to break it. Have other people seen this?
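
For anyone hitting this, the variable can also be added to an already-deployed DaemonSet without editing the YAML (assuming the DaemonSet is named fluentd, as in the stack above):

kubectl set env daemonset/fluentd FLUENT_UID=0
# The DaemonSet controller then recreates the fluentd pods with the new variable.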

Broken `docker service update rng --mode global` command (slide 93)

With this version of Docker:

$ docker version
Client:
 Version:      1.12.0-rc4
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   e4a0dbc
 Built:        Wed Jul 13 04:05:31 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.0-rc4
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   e4a0dbc
 Built:        Wed Jul 13 04:05:31 2016
 OS/Arch:      linux/amd64

The command docker service update rng --mode global fails:

$ docker service update rng --mode global
unknown flag: --mode
See 'docker service update --help'.
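
As far as I know, a service's mode cannot be changed after creation, which is why the flag doesn't exist on docker service update. A possible workaround (the network, registry, and tag below are placeholders for whatever the workshop uses) is to remove the service and recreate it in global mode:

docker service rm rng
docker service create --name rng --network <network> --mode global <registry>/rng:<tag>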

swarmctl won't compile

Based on the instructions in the "Building swarmctl" section, I tried to download, compile, and install SwarmKit with this one-liner:

docker run -v /usr/local/bin:/go/bin golang \
     go get -v github.com/docker/swarmkit/...

The full output is in this gist.

I'm assuming that swarmctl won't compile because of errors like this:

# github.com/docker/swarmkit/api
src/github.com/docker/swarmkit/api/dispatcher.pb.go:1574: p.cluster undefined (type *raftProxyDispatcherServer has no field or method cluster)
src/github.com/docker/swarmkit/api/dispatcher.pb.go:1581: p.connSelector.Conn undefined (type raftselector.ConnProvider has no field or method Conn)
src/github.com/docker/swarmkit/api/dispatcher.pb.go:1593: p.connSelector.Reset undefined (type raftselector.ConnProvider has no field or method Reset)

This isn't an orchestration-workshop bug but obviously it impacts the workshop as you aren't able to do the next few slides without it. Ideally you could install a particular version of SwarmKit/swarmctl using go get but there doesn't seem to be an easy/obvious way of doing that with go get.

Considering we're installing SwarmKit/swarmctl from master, this may or may not actually occur during a workshop. I suppose if it does happen, it's really not that big a deal, as you only wind up skipping a few slides and it's nothing critical to the workshop.

Is my analysis on this issue correct or am I missing something?
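
One way around building from master is to pin SwarmKit to a known-good revision before building. This is a sketch using the same golang container approach shown above (it assumes a GOPATH-era Go toolchain, and the tag is a placeholder):

docker run -v /usr/local/bin:/go/bin golang sh -c '
  go get -d -v github.com/docker/swarmkit/... ;
  cd /go/src/github.com/docker/swarmkit &&
  git checkout <known-good-tag> &&
  go install ./cmd/swarmctl'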

Is it possible to run the `SwarmKit` part using `docker-machine`?

I don't have the required five virtual machine nodes. I tried to reproduce your environment using five docker-machine VirtualBox machines (i.e. created with docker-machine create -D --driver virtualbox node1, etc.)

I can ssh into node1 without problems and initialize the swarm with docker swarm init without problems:

$ docker-machine ssh node1                                                                                                                                                                                         
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|

  WARNING: this is a build from test.docker.com, not a stable release.

Boot2Docker version 1.12.0-rc4, build HEAD : cbe6927 - Wed Jul 13 14:19:29 UTC 2016
Docker version 1.12.0-rc4, build e4a0dbc
docker@default:~$ docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
4o9p1jvj6pi3xiswp9kisoel6 *  node1   Accepted    Ready   Active        Leader
docker@node1:~$ %                                                                  

But when I try to use docker swarm join, I get the following error:

docker@node2:~$ docker swarm join node1:2377
Error response from daemon: Timeout was reached before node was joined. Attempt to join the cluster will continue in the background. Use "docker info" command to see the current Swarm status of your node.

My versions of docker, docker-machine and docker-compose are the ones recommended in the tutorial.

Could you give me some pointers for solving this?
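
A common cause with VirtualBox machines is that each VM has both a NAT interface and a host-only interface, and the manager may advertise the wrong one. A sketch of what to try (using the token-based join of current Docker releases rather than the 1.12 RC shown above):

docker-machine ssh node1 "docker swarm init --advertise-addr $(docker-machine ip node1)"
TOKEN=$(docker-machine ssh node1 "docker swarm join-token -q worker")
docker-machine ssh node2 "docker swarm join --token $TOKEN $(docker-machine ip node1):2377"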

can't push on registry

Hello,

I'm trying to replay the workshop on a private instance of PWD, but I've run into something that works on PWD but not on my local instance:

docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox

I get these errors in /docker.log:

time="2017-01-27T20:11:21.752969266Z" level=error msg="Attempting next endpoint for push after error: Get https://localhost:5000/v2/: net/http: TLS handshake timeout"
time="2017-01-27T20:11:36.753552187Z" level=error msg="Not continuing with push after error: Get http://localhost:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
time="2017-01-27T20:11:45.016851232Z" level=error msg="Not continuing with push after error: Get https://localhost:5000/v2/: net/http: TLS handshake timeout"

We can see this in the PWD logs:

pwd    | [negroni] 2017-01-27T20:12:01Z | 200 |          10m59.189412377s | ip10_0_0_3-2375.myserver.com:80 | POST /v1.25/images/localhost:5000/busybox/push

After 10 minutes, I hit Ctrl-C on the docker push, and it logged this.

Not sure what I'm doing wrong. Any help would be much appreciated!

Thanks!
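
A couple of quick checks that can narrow this down (assuming the registry container is supposed to be listening on port 5000 of the same node):

docker ps --filter name=registry            # is the registry container actually running?
curl -v http://localhost:5000/v2/_catalog   # does the registry API answer over plain HTTP?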

Consider renaming the docker-compose files

Just working through the dockercoins example at https://github.com/docker-training/orchestration-workshop/tree/master/dockercoins

The current Compose files are named:

  • docker-compose.yml
  • docker-compose.yml-images
  • docker-compose.yml-logging

The naming is not very useful, as the extension yml-<foobar> is unknown to editors, and also to GitHub, where these files are shown during the training.

I would propose to rename them as follows:

  • docker-compose.yml
  • docker-compose.images.yml
  • docker-compose.logging.yml

docker-machine exercises don't work with play-with-docker.com

While at the Global Mentor Week in SF on Wednesday, a couple of us on the Ops side of the room discovered that http://play-with-docker.com/ doesn't include the docker-machine command, and therefore we couldn't run through the examples shown in "Docker Machine basic usage" and later, which are mentioned at https://jpetazzo.github.io/orchestration-workshop/#54 & later.

/ # docker-machine ls
/bin/sh: docker-machine: not found
/ #

Would it be possible to clarify the instructions so that we can follow the lessons with the Docker playground?

I'm not sure what's to be done about this, but I thought I would point it out since a couple of us ran into it.

What's the cheapest AWS instance type this full workshop will work on?

Right now we recommend m3.medium for local SSD speed, but it's not the cheapest node type. What is the smallest, tested node type we could recommend if someone is on a budget and is willing to sacrifice a little performance?

Basically I'm asking for someone to walk through this full workshop on t2.small/micro/nano and see if it works, and report back on something like "works, but very slow" and "works for all chapters except ELK and Metrics" etc.

Bootstrap Token

Hey, I just read part of your blog post "Letter to Santa Kube" and saw your complaint about the kubeadm bootstrap token.

You can easily list all kubeadm tokens by running kubeadm token list.

Or even better, you can generate the token ahead of time with kubeadm token generate, then pass it to init via kubeadm init --token ${token_here}.
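
For reference, the commands mentioned above:

# List existing bootstrap tokens on the first node
kubeadm token list

# Or generate a token up front and pass it to init
TOKEN=$(kubeadm token generate)
kubeadm init --token "$TOKEN"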

change 'deploy' configuration?

During this section:

Building and pushing our images
We are going to use a convenient feature of Docker Compose
Go to the stacks directory:

cd ~/container.training/stacks
Build and push the images:

export REGISTRY
export TAG=v0.1
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push

I noticed this:

WARNING: Some services (rng, worker) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.

Do we want this to be reworded for Compose, since we aren't using Swarm here? I wonder if the warning will confuse people.

[Side note: I'm a bit unsure how to tag this. It's not a kube problem, but this specific wording is only in the kube section of slides in https://github.com/jpetazzo/container.training/blob/master/slides/kube/ourapponkube.md#building-and-pushing-our-images, so I'm not positive whether it's the same in the other versions of the workshop.]

Support custom VPC and Security Group

Compose has AWS_VPC_ID but it does nothing.

This feature would allow specifying a pre-created custom VPC and SG, so that defaults are not used. We'll assume their routing and SG inbound rules are correct.

A later feature could auto update the SG with needed rules.

A later feature could consider creating VPC and SG from scratch for you.

docker build failing on namer app

Hello! I'm following your container.training and the Docker build of the namer app fails:

root@706ed8f6-fe4a-6eb2-93c7-fdc45af335cd:~/git/namer# docker build -t namer .
Sending build context to Docker daemon    108kB
Step 1/7 : FROM ruby
latest: Pulling from library/ruby
3e731ddb7fc9: Pull complete 
47cafa6a79d0: Pull complete 
79fcf5a213c7: Pull complete 
68e99216b7ad: Pull complete 
4822563608bb: Pull complete 
9d614f26bec1: Pull complete 
1c758cfc0888: Pull complete 
8a4fbc3666ca: Pull complete 
Digest: sha256:ed5fc221d5d03d89e1f8c1f7780b98bc708e68b4d8dba73594d017e999156619
Status: Downloaded newer image for ruby:latest
 ---> bae0455cb2b9
Step 2/7 : MAINTAINER Education Team at Docker <[email protected]>
 ---> Running in 632c8a620250
Removing intermediate container 632c8a620250
 ---> 3bd944a9bb03
Step 3/7 : COPY . /src
 ---> bc6755b4b4f3
Step 4/7 : WORKDIR /src
Removing intermediate container 868984853af0
 ---> 5d18f08cf08d
Step 5/7 : RUN bundler install
 ---> Running in a2cac03d0c00
/usr/local/lib/ruby/site_ruby/2.5.0/rubygems.rb:289:in `find_spec_for_exe': can't find gem bundler (>= 0.a) with executable bundler (Gem::GemNotFoundException)
        from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems.rb:308:in `activate_bin_path'
        from /usr/local/bin/bundler:23:in `<main>'
The command '/bin/sh -c bundler install' returned a non-zero code: 1
root@706ed8f6-fe4a-6eb2-93c7-fdc45af335cd:~/git/namer# 

Did the "ruby" image change since you wrote the slide?

add optional event hashtag to "tweet this"

I was thinking it could be fun if it were possible to customize the "tweet this" for a given event to use a (specified) event hashtag. Will investigate making it configurable.

WebUI does not display chart or mining speed

@jpetazzo If we go with the default jquery file in index.html, the webui fails to display the d3 chart and the mining speed.
The following errors are generated in the browser:

jquery.js:1 Uncaught SyntaxError: Unexpected number jquery.js:1
index.html:93 Uncaught ReferenceError: $ is not defined
    at http://localhost:8000/index.html:93:1

The web page starts working if I replace the jquery js file with its neighbouring file: jquery-1.11.3.min.js in index.html

Update: this behaviour is not present in the play-with-docker environment, so it must be specific to my machine, but not to the browser, since I tested across 3 of them with the same result.

Docker-machine not configured

Hello,

Thanks so much for this tutorial :).

There is a little problem with docker-machine: it isn't configured on node1.

I tried to configure it manually but it doesn't work (at least for me!)

vagrant@node1:~$ docker-machine create --driver generic --generic-ip-address=10.10.10.30 --generic-ssh-key private-key --generic-ssh-user=vagrant node3
Running pre-create checks...
Creating machine...
(node3) Importing SSH key...
(node3) Couldn't copy SSH public key : unable to copy ssh key: open private-key.pub: no such file or directory
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(upstart)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "10.10.10.30:2376": read tcp 10.10.10.10:43804->10.10.10.30:2376: read: connection reset by peer

You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.

Issue PR labels for easier mgmt?

As we grow, maybe some new labels to help segment changes into the course they affect?

swarm
kubernetes
docker (or fundamentals)?
provisioning
container.training (the site)
automation (shell scripts, testing)

Build failure

I tried to do a build with no code changes, but I'm getting "Invalid Chapter" errors.

OS: Windows 10
Env: Git on Bash w/ Python 2.7

[screenshot of the "Invalid Chapter" build error]

Required installation of docker-machine for local vms

Hi,

First of all, thank you for this nice walk-through of Docker / Docker Swarm. I am a bit confused: for Linux servers running locally in VirtualBox, do I also need to install docker-machine? Up until now, docker-machine was required for Windows/macOS users in order to start playing with Docker. For Docker Swarm, should someone also install docker-machine?

[doc] issues attempting prepare-machine for AWS

Docker for Mac
Version 1.12.3 (13776)
Channel: Stable
583d1b8ffe

docker-machine version 0.8.2, build e18a919

I ran into several issues following prepare-machine/README.md (for AWS)

  1. This is probably a special case for only a subset of people: because I don't have a default VPC (older AWS accounts/users), docker-machine create fails (it needs env vars for the VPC and subnet), and the security group ingress command fails (it needs the security group ID).
  2. When running docker-machine create, some machines exit with code 6 and are not set up properly (you need to run docker-machine provision afterwards), and with the documented instructions, usermod fails without proper permissions (it needs sudo).
  3. Re: the workshop, docker-machine and docker-compose are not available on the created nodes; docker-compose can be installed with apt-get, but that's an older version that doesn't support 'worker' (you need to curl/chmod the latest scripts).

I'm able to get a cluster up and running in the end. Thanks for putting this together!

Try to sniff traffic across overlay networks

I was at the DevOpsCon 2017 orchestration workshop in Berlin and I'm going through the workshop on my own. For both the secure (docker run --rm --net secure nicolaka/netshoot curl web) and insecure (docker run --rm --net insecure nicolaka/netshoot curl web) commands on slide 155, under the heading "Try to sniff traffic across overlay networks", the output only shows #.

The previous command (curl google.com) in a new terminal worked as expected, i.e. it showed the HTTP request in clear text.

Missing prometheus config

The ~/orchestration-workshop/prom directory referenced in line 3938 of the docs is missing. I tried manually creating the Dockerfile and the prometheus.yml file from the description in the docs, but it didn't work (starts and then exits immediately). I suspect I am missing a setting or instruction.

dockercoins.yml is missing networks and ports and doesn't work

Hi,
When starting the tutorial at part 2 and using the catch-up instructions, the environment won't start.
The reason is that you need to change several things:

  1. Add the following line inside /etc/docker/daemon.json on each node, and restart the Docker service:
    { "insecure-registries":["node1:5000", "node2:5000", "node3:5000", "node4:5000", "node5:5000" ] }

  2. Change dockercoins.yml to the below:

version: "3"

services:
  rng:
    build: dockercoins/rng
    image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
    deploy:
      mode: global
    networks:
      - dockercoins
    ports:
      - "8001:80"

  hasher:
    build: dockercoins/hasher
    image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
    networks:
      - dockercoins
    ports:
      - "8002:80"

  webui:
    build: dockercoins/webui
    image: ${REGISTRY-127.0.0.1:5000}/webui:${TAG-latest}
    ports:
      - "8000:80"
    networks:
      - dockercoins

  redis:
    image: redis
    networks:
      - dockercoins

  worker:
    build: dockercoins/worker
    image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
    deploy:
      replicas: 10
    networks:
      - dockercoins

networks:
  dockercoins:

  3. Check where the registry has been created, and then export the following environment variables:

export REGISTRY=node:5000

export TAG=v1

AWS Reinvent Slides

Hi,

I just saw your presentation on "From Local Docker Development to Production Deployments" and I'm itching to try some stuff out. Mind publishing your slides?

eth0 as the default interface

I attended the workshop last Monday.
A thought occurred to me, one that comes up regularly.

In some scripts, eth0 is often taken as the default interface in order to determine an IP address.
It's often hard-coded, and:

  • I don't know who is behind this change, but the new naming conventions mean interfaces are now called things like wlp2s0 instead of wlan0, and enp-something for Ethernet.
  • I wonder whether what we're actually looking for, as a default value, isn't rather the "source IP of the default route",
    which I personally grab like this:
    ip route | grep default | sed 's/.*src \([^ ]*\) .*/\1/g'

workshopctl docker-compose.yml doesn't work on latest macOS with timezone volume

Latest Macs (High Sierra 10.13) combined with Docker for Mac 17.12.

prepare-vms/docker-compose.yml has - /etc/localtime:/etc/localtime:ro, but when you try to share /etc/ in the latest Docker for Mac settings, you get denied.

This issue in Docker for Mac repo shows others with the same issue and various workarounds.

I chose to change docker-compose.yml to use $PWD and just make a localtime file with my timezone. Not sure yet how we fix this for everyone. I found that sudo systemsetup -gettimezone can return the local TZ, but it always requires sudo :/

PyCon 2016: Can't run script "build-tag-push.py"

Hey Jérome,

Thank you so much for all your dedication. I was really comfortable with docker-compose and I wanted to move on to Swarm. I feel I'll be there really soon :))

I'm carefully following your workshop "Deploying and scaling applications with Docker, Swarm, and a tiny bit of Python magic - PyCon 2016". At this point I have my 5-node Swarm cluster, Consul, and the registry running.

At 2h20m33s (https://youtu.be/GpHMTR7P2Ms?t=2h20m33s), you bring in a Python script called build-tag-push.py. For some reason I can't run it :-/

dockercoins|master⚡ ⇒ eval $(docker-machine env node1)
dockercoins|master⚡ ⇒ export DOCKER_REGISTRY=localhost:5000
dockercoins|master⚡ ⇒ ../bin/build-tag-push.py
Traceback (most recent call last):
  File "../bin/build-tag-push.py", line 3, in <module>
    from common import ComposeFile
  File "/Users/pascalandy/github/jpetazzo/orchestration-workshop/bin/common.py", line 5, in <module>
    import yaml
ImportError: No module named yaml

So I tried this to fix it, but got the same result:

bin|master⚡ ⇒ chmod +x build-tag-push.py
bin|master⚡ ⇒ build-tag-push.py
zsh: command not found: build-tag-push.py
bin|master⚡ ⇒ python
bin|master⚡ ⇒ python build-tag-push.py
Traceback (most recent call last):
  File "build-tag-push.py", line 3, in <module>
    from common import ComposeFile
  File "/Users/pascalandy/github/jpetazzo/orchestration-workshop/bin/common.py", line 5, in <module>
    import yaml
ImportError: No module named yaml

Any idea of what could go wrong here?

Cheers!
Pascal
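
The traceback shows that the yaml module is missing from the Python environment running the script; assuming pip is available for that interpreter, installing PyYAML should get past it:

pip install PyYAML
../bin/build-tag-push.py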
