openebs / openebs

Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.

Home Page: https://www.openebs.io

License: Apache License 2.0

storage storage-container persistent-storage docker pod devops k8s kubernetes ebs ebs-volumes

Introduction

Welcome to OpenEBS


OpenEBS is an ultra-modern block-mode storage platform: a hyper-converged software storage system and an enterprise-grade virtual NVMe-oF SAN (vSAN) fabric, natively and tightly integrated into Kubernetes.

Important

OpenEBS provides...

  • Stateful, dynamically provisioned persistent storage volumes for Kubernetes
  • High-performance NVMe-oF storage access optimized for all-flash solid-state storage media
  • A 100% cloud-native storage platform
  • A Kubernetes cluster-wide vSAN fabric that gives containers/Pods resilient access to storage across the entire cluster
  • Enterprise-grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, and RAID (a hedged snapshot example follows this list)
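As a hedged illustration of the snapshot capability, the standard Kubernetes CSI snapshot API can be used against an OpenEBS-backed claim (the class and claim names below are hypothetical; check the engine documentation for the real snapshot class names):

cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot                                  # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-mayastor-snapshotclass  # hypothetical class name
  source:
    persistentVolumeClaimName: demo-pvc                # an existing hypothetical claim
EOF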

Multiple Storage Engines

OpenEBS is a Kubernetes-native persistent storage platform with 5 core data engines.
Each storage engine provides different capabilities, flexibility, resilience, data protection, and performance features.

ID  Data-Engine               Description

    Replicated PV             Replicated storage and data volumes
1   Replicated PV Mayastor    Distributed vSAN fabric attached volumes that are replicated

    Local PV                  Non-replicated node-local storage and volumes
2   Local PV HostPath         Dynamically provisioned node-local volumes with HostPath-resident backend data
3   Local PV ZFS              Dynamically provisioned node-local volumes with an integrated ZFS storage backend
4   Local PV LVM              Dynamically provisioned node-local volumes with an integrated LVM storage backend
5   Local PV Raw-device-File  Dynamically provisioned node-local volumes via soft-LUN raw device files on a HostPath-resident backend filesystem
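As a minimal, hedged sketch of selecting an engine, a PersistentVolumeClaim can reference the StorageClass for the engine you want; openebs-hostpath is a commonly installed default for Local PV HostPath, but verify the class names in your cluster with kubectl get sc:

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc           # hypothetical claim name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF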


OpenEBS is very popular: live OpenEBS systems actively report product metrics every day to our global analytics engine (unless disabled by the user). Here are our key project popularity metrics as of 01 Feb 2024:

🚀   OpenEBS is the #1 deployed storage platform for Kubernetes
⭐   We are the #1 GitHub-star-ranked K8s data storage platform
💾   We have 49+ million volumes deployed globally
📺   We have 8+ million global installations
⚡   1 million OpenEBS K8s containers are spawned per week
😎   1.1 million global users


We have a very large, active community, and many storage users contribute to the product with discussions, ideas, issues, feature requests, and even code contributions.

There are many ways to get in touch with our team.

Reach out via GitHub to the OpenEBS core leadership team:
🚀   Ed Robinson | @edrob999
⭐   David Brace | @orville-wright
⚡   Vishnu Attur | @avishnu
😎   Tiago Castro | @tiagolobocastro

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS Slack channel; it is open for anyone to ask a question.


Current status

Releases Slack channel #openebs Twitter PRs Welcome FOSSA Status CII Best Practices

Activity dashboard


https://openebs.io/

Read this in other languages: 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷.

OpenEBS is the most widely deployed and easy-to-use open-source storage solution for Kubernetes.

OpenEBS is the leading open-source example of a category of cloud-native storage solutions sometimes called Container Attached Storage, and is listed as an open-source example under hyperconverged storage solutions in the CNCF Storage White Paper.

Some key aspects that make OpenEBS different compared to other traditional storage solutions:

  • Built using the micro-services architecture like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes. Uses Kubernetes itself to orchestrate and manage OpenEBS components.
  • Built completely in userspace making it highly portable to run across any OS/platform.
  • Completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.
  • OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use the LocalPV engine for lowest latency writes. Monolithic applications like MySQL and PostgreSQL can use the ZFS engine (cStor) for resilience. Streaming applications like Kafka can use the NVMe engine Mayastor for best performance in edge environments. Across engine types, OpenEBS provides a consistent framework for high availability, snapshots, clones and manageability.

Deployment

OpenEBS itself is deployed as just another container on your host and enables storage services that can be designated at a per-pod, per-application, per-cluster, or per-container level, including:

  • Automate the management of storage attached to the Kubernetes worker nodes and allow that storage to be used for dynamically provisioning OpenEBS Replicated or Local PVs.
  • Data persistence across nodes, dramatically reducing time spent rebuilding Cassandra rings for example.
  • Synchronous replication of volume data across availability zones improving availability and decreasing attach/detach times for example.
  • A common layer so whether you are running on AKS, or your bare metal, or GKE, or AWS - your wiring and developer experience for storage services is as similar as possible.
  • Backup and restore of volume data to and from S3 and other targets (a hedged example follows this list).
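As a hedged sketch of that last point: the OpenEBS ecosystem commonly pairs with Velero for backups to S3-compatible targets. Assuming Velero is already installed and configured with an S3 backend, a namespace-level backup and restore might look like:

# Back up everything in the 'demo' namespace (hypothetical name)
# to the S3 bucket Velero was configured with
velero backup create demo-backup --include-namespaces demo
# Later, restore from that backup
velero restore create --from-backup demo-backup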

An added advantage of being a completely Kubernetes native solution is that administrators and developers can interact and manage OpenEBS using all the wonderful tooling that is available for Kubernetes like kubectl, Helm, Prometheus, Grafana, Weave Scope, etc.

Our vision is simple: let storage and storage services for persistent workloads be fully integrated into the environment so that each team and workload benefits from the granularity of control and Kubernetes native behaviour.

Roadmap (as of Jan 2024)

OpenEBS is 100% open source software. The project source code is spread across multiple repos and covers multiple projects:

Our main roadmap is focused exclusively on the modern (STANDARD Edition) data engine, Mayastor. It does not define any net-new features or capabilities for LEGACY projects or for projects tagged as DEPRECATED or ARCHIVED. Currently those projects are the following (see the references above for details on the DEPRECATED and ARCHIVAL strategy):

  • Jiva
  • cStor
  • NFS-Provisioner

MayaStor 2024 Roadmap

Scalability

OpenEBS can scale to include an arbitrarily large number of containerized storage controllers. Kubernetes itself provides the fundamental pieces, such as etcd for inventory. OpenEBS scales to the extent your Kubernetes scales.

Installation and Getting Started

OpenEBS can be set up in a few easy steps. You can get going on your choice of Kubernetes cluster by having open-iscsi installed on the Kubernetes nodes and running the openebs-operator using kubectl.

Start the OpenEBS Services using operator

# apply this yaml
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

Start the OpenEBS Services using helm

# Helm 2 syntax, as originally documented; refresh the chart repo, then install
helm repo update
helm install --namespace openebs --name openebs stable/openebs
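Whichever method you use, a quick sanity check before provisioning volumes (namespace and class names may differ depending on the chart values used):

# Confirm the OpenEBS control-plane pods are running
kubectl get pods -n openebs
# Confirm the StorageClasses OpenEBS registered (e.g. openebs-hostpath)
kubectl get storageclass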

You could also follow our QuickStart Guide.

OpenEBS can be deployed on any Kubernetes cluster, whether in the cloud, on-premises, or on a developer laptop (minikube). No changes to the underlying kernel are required, as OpenEBS operates in userspace. Please follow our OpenEBS Setup documentation.

Status

OpenEBS is one of the most widely used and tested Kubernetes storage infrastructures in the industry. A CNCF Sandbox project since May 2019, OpenEBS is the first and only storage system to provide a consistent set of software-defined storage capabilities on multiple backends (local, nfs, zfs, nvme) across both on-premises and cloud systems. It was also the first to open-source its own chaos engineering framework for stateful workloads, the Litmus Project, which the community relies on to automatically assess the readiness of each monthly OpenEBS release. Enterprise customers have been using OpenEBS in production since 2018.

The status of the various storage engines that power OpenEBS Persistent Volumes is provided below. The key differences between the statuses are summarized below:

  • alpha: The API may change in incompatible ways in a later software release without notice, recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
  • beta: Support for the overall features will not be dropped, though details may change. Support for upgrading or migrating between versions will be provided, either through automation or manual steps.
  • stable: Features will appear in released software for many subsequent versions and support for upgrading between versions will be provided with software automation in the vast majority of scenarios.
Storage Engine   Status   Details
Jiva             stable   Best suited for running replicated block storage on nodes that make use of ephemeral storage on the Kubernetes worker nodes
cStor            stable   A preferred option for running on nodes that have block devices; recommended if snapshots and clones are required
Local Volumes    stable   Best suited for distributed applications that need low-latency, direct-attached storage from the Kubernetes nodes
Mayastor         stable   Persistent storage solution for Kubernetes, with near-native NVMe performance and advanced data services

For more details, please refer to OpenEBS Documentation.

Contributing

OpenEBS welcomes your feedback and contributions in any form possible.

Show me the Code

This is a meta-repository for OpenEBS. Please start with the pinned repositories or with OpenEBS Architecture document.

License

OpenEBS is developed under the Apache 2.0 License at the project level. Some components of the project are derived from other open-source projects and are distributed under their respective licenses.

OpenEBS is part of the CNCF Projects. OpenEBS is a CNCF project, and DataCore, Inc. is a CNCF Silver member. DataCore supports the CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape, and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

Container Storage Interface group Storage Technical Advisory Group Cloud Native Community Groups


Thanks for dropping by.

Commercial Offerings

This is a list of third-party companies and individuals who provide products or services related to OpenEBS. OpenEBS is a CNCF project which does not endorse any company. The list is provided in alphabetical order.

People

Contributors: akhilerm, ashishranjan738, chandansagar, dargasudarshan, dinukadesilva, epowell101, gkganesh126, gprasath, harshshekhar15, hrishike, kmova, mahebbar, muratkars, niladrih, nsathyaseelan, orville-wright, pawanpraka1, payes, prateekpandey14, ranjithwingrider, satyamz, shashank855, shubham14bajpai, sonasingh46, swarnalatha-k, umamukkara, utkarshmani1997, vharsh, vibhor995, yudaykiran


Issues

Technical writeups, blogs, and docs lead to better community understanding & collaboration

Let this be a placeholder where anyone can write down topics that are not well understood.
These should in turn be explained in the form of docs, tutorials, blogs, etc. by core members.

Topics:

  • Building blocks of distributed systems
  • distributed file systems vs. distributed block systems
  • zero-to-low-maintenance turn-key storage solution
  • storage on a fault tolerant filesystem
  • Need for a posix filesystem
  • Linux sparse files

You should be able to understand the stuff written below after reading the above 🥇

weave-net-w8fdk and kube-proxy-2mc6t failed to run/restart in kubemaster-01

[This is the status of the pods displayed after bringing up kubemaster-01.]
Some of the kube-proxy and weave-net pods fail to start and get connected.
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-world 1/1 Unknown 0 9h
kube-system dummy-2088944543-qphfc 1/1 Running 4 13h
kube-system etcd-kubemaster-01 1/1 Running 4 13h
kube-system kube-apiserver-kubemaster-01 1/1 Running 4 13h
kube-system kube-controller-manager-kubemaster-01 1/1 Running 4 13h
kube-system kube-discovery-1769846148-1zxv9 0/1 MatchNodeSelector 0 10h
kube-system kube-discovery-1769846148-7zpvc 1/1 Running 1 9h
kube-system kube-discovery-1769846148-w1pfb 0/1 MatchNodeSelector 0 13h
kube-system kube-dns-2924299975-mbzbn 4/4 Running 16 13h
kube-system kube-proxy-2mc6t 1/1 NodeLost 2 13h
kube-system kube-proxy-xrr45 1/1 Running 4 13h
kube-system kube-scheduler-kubemaster-01 1/1 Running 4 13h
kube-system weave-net-cd2k4 2/2 Running 8 13h
kube-system weave-net-w8fdk 2/2 NodeLost 4 13h

How to upgrade to VirtualBox 5.1 from an older version on Ubuntu 16.04

Sometimes the upgrade can fail due to older version references. If you have the option to clean out the earlier version, follow these steps:

(a) Remove an existing copy of VirtualBox

# Remove the package and purge its configuration
sudo apt-get remove --purge virtualbox
# Destructive: deletes your VMs and per-user VirtualBox configuration
sudo rm ~/"VirtualBox VMs" -Rf
sudo rm ~/.config/VirtualBox/ -Rf

(b) Update /etc/apt/sources.list to add the following line at the end:

deb http://download.virtualbox.org/virtualbox/debian xenial contrib

(c) Upgrade:

# Import Oracle's package-signing key, refresh the package index, and install
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
sudo apt update
sudo apt install virtualbox-5.1
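If the install succeeds, you can confirm the new version is active using the VBoxManage CLI that ships with VirtualBox:

# Should report a 5.1.x version string
VBoxManage --version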

some of the Kubernetes system services are not running

kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-kubemaster-01 1/1 Running 0 2d
kube-system kube-apiserver-kubemaster-01 1/1 Running 9 2d
kube-system kube-controller-manager-kubemaster-01 1/1 Running 0 2d
kube-system kube-scheduler-kubemaster-01 1/1 Running 1 2d

Refactor the k8s demo Vagrantfile for reuse in stand-alone VM provisioning.

The Vagrantfile contains the logic to install and configure different types of nodes -- k8s master, k8s minion, openebs master, and openebs storage host. While this is useful for quickly bringing up a setup on a laptop or demo host, this model will not scale well for machines with lower RAM/CPU or for building a setup with a higher number of nodes.

Push the logic of install and configuration of each type of node to its own script. The same scripts can be re-used for multi-node or scaled setups.

Fix the docker pulls badge

Is there an issue with the docker pulls badge? It keeps showing the text "docker pulls" instead of the actual number.

OpenEBS UI console Requirements Tracker

This is a placeholder issue to gather requirements/thoughts on what should be included as part of the UI console.

Functional Requirements:

  • Manage infrastructure i.e. OpenEBS Nodes
  • Upgrade infrastructure
  • Manage OpenEBS storage pods
  • Upgrade OpenEBS storage pods
  • Manage application pods (in hyperconverged mode)
  • Upgrade application pods (in hyperconverged mode)
  • Provide delegated access control

Non-Functional Requirements:

When the network fails, vagrant up suddenly quits without any warning or report of failure

This may leave the cluster partially configured; for example, getting pods from kubemaster-01 loads only some of them:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-kubemaster-01 1/1 Running 0 2d
kube-system kube-apiserver-kubemaster-01 1/1 Running 9 2d
kube-system kube-controller-manager-kubemaster-01 1/1 Running 0 2d
kube-system kube-scheduler-kubemaster-01 1/1 Running 1 2d
[These are the only pods running.]
The others couldn't be found.

A README inside k8s-demo folder

This should contain info about the various VMs that get created by the Vagrantfile.
There should be an explanation of each VM.
Info about typical operations carried out from each VM.

Automate the Kubernetes Cluster Setup using Vagrant

Currently the Kubernetes cluster setup is a manual process. This should be optimized by automating it, using Vagrant to create the nodes and form a cluster out of the created nodes.

Controller getting recreated after every 30 seconds in k8s-demo

Nomad has entered a bad state where the controller is getting recreated every 30 seconds. The logs below are being printed regularly:

Mar  2 13:20:00 ubuntu-xenial nomad[5712]:     2017/03/02 13:20:00.231100 [INFO] client: task "fe" for alloc "e6903863-f46d-96f5-a63d-d6249beb6d9d" completed successfully
Mar  2 13:20:00 ubuntu-xenial nomad[5712]: client: task "fe" for alloc "e6903863-f46d-96f5-a63d-d6249beb6d9d" completed successfully
Mar  2 13:20:00 ubuntu-xenial nomad[5712]:     2017/03/02 13:20:00.232923 [INFO] client: Restarting task "fe" for alloc "e6903863-f46d-96f5-a63d-d6249beb6d9d" in 31.166015219s
Mar  2 13:20:00 ubuntu-xenial nomad[5712]: client: Restarting task "fe" for alloc "e6903863-f46d-96f5-a63d-d6249beb6d9d" in 31.166015219s

Sometimes these logs are also being seen:

Mar  2 13:20:14 ubuntu-xenial dhclient[1733]: DHCPREQUEST of 172.28.128.8 on enp0s8 to 172.28.128.2 port 67 (xid=0x6a36c51a)
Mar  2 13:20:14 ubuntu-xenial dhclient[1733]: DHCPACK of 172.28.128.8 from 172.28.128.2
Mar  2 13:20:14 ubuntu-xenial dhclient[1733]: bound to 172.28.128.8 -- renewal in 562 seconds.

Unable to deploy a pod on Kubernetes cluster

Gives the following error on kubectl describe pods:

Warning: FailedSync Error syncing pod, skipping: failed to "SetupNetwork" <pod_name> with SetupNetworkError: "Failed to setup network for pod using network plugins "cni\

Doing omm-status after maya setup throws the following error: Error querying servers: Get http://127.0.0.1:4646/v1/agent/members: dial tcp 127.0.0.1:4646: getsockopt: connection refused

Steps to reproduce:

  1. Download binaries from release (version 0.0.5).
  2. Unzip and move the binaries to /usr/bin/.
  3. Run maya setup-omm -self-ip=machine-ip
  4. After the setup, run:
    maya omm-status
    Output:
    Error querying servers: Get http://127.0.0.1:4646/v1/agent/members: dial tcp 127.0.0.1:4646: getsockopt: connection refused

Issue reproducible on multiple installation retries.

Documentation should include clear instruction on Prerequisites for various install guides.

I feel that the documentation still needs to be crisper.
It lacks requirement specifications, such as which operating system is required (Linux, and specifically which Ubuntu bundle), the disk space consumed by the packages, etc. Similarly, for end users, the product needs to be explained in detail; a promotional video, or a video explaining complete functioning with audio narration in a live demo, would be further helpful for users.

Not able to exec inside the container for docker version 1.13.1

Command issued: sudo docker exec -it ls
Error Message: rpc error: code = 2 desc = containerd: container not found
This is seen in k8s-demo inside osh-01 VM.

This seems to be an issue with Docker release 1.13.1.
Can we pin the Docker version to a stable one?

k8s/demo - make the deployment mode configurable

The k8s/demo/Vagrantfile creates a dedicated openebs demo setup. This should be controlled via a configurable parameter (OPENEBS_DEPLOY_MODE); see the sketch after the mode list.

0 - The hosts are created with Ubuntu
1 - Hosts are configured with kubernetes cluster and dedicated openebs cluster (default)
2 - Hosts are configured with kubernetes and hyper-converged openebs
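A minimal sketch of how the parameter could be consumed (the parameter name is from this issue; the exact wiring inside the Vagrantfile is hypothetical):

# Pick the deployment mode, defaulting to 1 (dedicated k8s + openebs clusters),
# then bring the VMs up; the Vagrantfile can read ENV['OPENEBS_DEPLOY_MODE']
# to decide which provisioning scripts to run.
export OPENEBS_DEPLOY_MODE=${OPENEBS_DEPLOY_MODE:-1}
vagrant up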

Evaluate the K8s volume drivers for connecting to OpenEBS volumes.

Related Issue - #14

This issue is to track the pros/cons of using the different implementation options for connecting the OpenEBS storage to K8s.

  • Custom/New Driver
  • Flex Volume Driver
  • EBS Driver

The preference is to re-use the existing driver (optionally, by adding an adapter on the maya api server).

Intermittently the minion fails to configure and gets stuck with "Connection to 127.0.0.1 closed"

During "vagrant up", the minion fails to configure, with the following error on the screen:
Connection to 127.0.0.1 closed

The following errors are seen in the /var/log/syslog:

Feb 24 11:10:35 ubuntu-xenial kubelet[6485]: I0224 11:10:35.265645 6485 feature_gate.go:181] feature gates: map[]
Feb 24 11:10:35 ubuntu-xenial kubelet[6485]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
Feb 24 11:10:35 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 11:10:35 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 11:10:35 ubuntu-xenial systemd[1]: kubelet.service: Failed with result 'exit-code'.

syslogs in kubemaster-01 for k8s-demo getting filled by these messages

At this point in time there was no Kubernetes pod running.
Is this due to some stale entry?

Mar  2 13:51:00 ubuntu-xenial kubelet[6146]: I0302 13:51:00.268784    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9e16582f-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "9e16582f-ff0f-11e6-9c2c-029ae74d4174" (UID: "9e16582f-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.455675315Z" level=error msg="Handler for GET /containers/97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e/json returned error: No such container: 97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.456940396Z" level=error msg="Handler for GET /containers/674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718/json returned error: No such container: 674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.458103104Z" level=error msg="Handler for GET /containers/e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f/json returned error: No such container: e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.459158111Z" level=error msg="Handler for GET /containers/660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150/json returned error: No such container: 660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.460193928Z" level=error msg="Handler for GET /containers/666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc/json returned error: No such container: 666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.461197388Z" level=error msg="Handler for GET /containers/6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32/json returned error: No such container: 6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.462247476Z" level=error msg="Handler for GET /containers/37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd/json returned error: No such container: 37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.463306835Z" level=error msg="Handler for GET /containers/cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1/json returned error: No such container: cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.464284019Z" level=error msg="Handler for GET /containers/407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8/json returned error: No such container: 407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.465413466Z" level=error msg="Handler for GET /containers/3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b/json returned error: No such container: 3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.466411448Z" level=error msg="Handler for GET /containers/174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1/json returned error: No such container: 174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.467378891Z" level=error msg="Handler for GET /containers/a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a/json returned error: No such container: a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.468399706Z" level=error msg="Handler for GET /containers/35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72/json returned error: No such container: 35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.469370923Z" level=error msg="Handler for GET /containers/aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a/json returned error: No such container: aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.470322965Z" level=error msg="Handler for GET /containers/896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef/json returned error: No such container: 896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.471308606Z" level=error msg="Handler for GET /containers/b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee/json returned error: No such container: b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.472291166Z" level=error msg="Handler for GET /containers/51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814/json returned error: No such container: 51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.473301329Z" level=error msg="Handler for GET /containers/1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c/json returned error: No such container: 1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.474273573Z" level=error msg="Handler for GET /containers/35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f/json returned error: No such container: 35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.475286841Z" level=error msg="Handler for GET /containers/8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b/json returned error: No such container: 8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.476186842Z" level=error msg="Handler for GET /containers/e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19/json returned error: No such container: e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19"
Mar  2 13:51:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:51:07.477275915Z" level=error msg="Handler for GET /containers/df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1/json returned error: No such container: df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1"
Mar  2 13:51:33 ubuntu-xenial kubelet[6146]: I0302 13:51:33.236252    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9f1e578d-ff0f-11e6-9c2c-029ae74d4174-clusterinfo" (spec.Name: "clusterinfo") pod "9f1e578d-ff0f-11e6-9c2c-029ae74d4174" (UID: "9f1e578d-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:51:33 ubuntu-xenial kubelet[6146]: I0302 13:51:33.237847    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9f1e578d-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "9f1e578d-ff0f-11e6-9c2c-029ae74d4174" (UID: "9f1e578d-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:51:43 ubuntu-xenial kubelet[6146]: I0302 13:51:43.226430    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/ad724dc3-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "ad724dc3-ff0f-11e6-9c2c-029ae74d4174" (UID: "ad724dc3-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:51:51 ubuntu-xenial kubelet[6146]: I0302 13:51:51.209392    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/be73f029-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "be73f029-ff0f-11e6-9c2c-029ae74d4174" (UID: "be73f029-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.473518589Z" level=error msg="Handler for GET /containers/37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd/json returned error: No such container: 37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.475385797Z" level=error msg="Handler for GET /containers/b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee/json returned error: No such container: b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.476426916Z" level=error msg="Handler for GET /containers/8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b/json returned error: No such container: 8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.477026428Z" level=error msg="Handler for GET /containers/a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a/json returned error: No such container: a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.478196439Z" level=error msg="Handler for GET /containers/cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1/json returned error: No such container: cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.479335634Z" level=error msg="Handler for GET /containers/35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72/json returned error: No such container: 35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.480587510Z" level=error msg="Handler for GET /containers/896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef/json returned error: No such container: 896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.481681485Z" level=error msg="Handler for GET /containers/e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19/json returned error: No such container: e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.482671532Z" level=error msg="Handler for GET /containers/174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1/json returned error: No such container: 174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.483754144Z" level=error msg="Handler for GET /containers/df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1/json returned error: No such container: df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.484952306Z" level=error msg="Handler for GET /containers/6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32/json returned error: No such container: 6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.486048317Z" level=error msg="Handler for GET /containers/97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e/json returned error: No such container: 97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.487004733Z" level=error msg="Handler for GET /containers/407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8/json returned error: No such container: 407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.488113898Z" level=error msg="Handler for GET /containers/35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f/json returned error: No such container: 35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.489170251Z" level=error msg="Handler for GET /containers/666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc/json returned error: No such container: 666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.490253905Z" level=error msg="Handler for GET /containers/aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a/json returned error: No such container: aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.491310297Z" level=error msg="Handler for GET /containers/e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f/json returned error: No such container: e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.492393046Z" level=error msg="Handler for GET /containers/660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150/json returned error: No such container: 660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.493409477Z" level=error msg="Handler for GET /containers/674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718/json returned error: No such container: 674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.494511070Z" level=error msg="Handler for GET /containers/1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c/json returned error: No such container: 1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.495639508Z" level=error msg="Handler for GET /containers/3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b/json returned error: No such container: 3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b"
Mar  2 13:52:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:52:07.496642790Z" level=error msg="Handler for GET /containers/51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814/json returned error: No such container: 51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814"
Mar  2 13:52:14 ubuntu-xenial kubelet[6146]: I0302 13:52:14.211410    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/b1231205-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "b1231205-ff0f-11e6-9c2c-029ae74d4174" (UID: "b1231205-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:52:17 ubuntu-xenial kubelet[6146]: I0302 13:52:17.240967    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9e16582f-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "9e16582f-ff0f-11e6-9c2c-029ae74d4174" (UID: "9e16582f-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:52:49 ubuntu-xenial kubelet[6146]: I0302 13:52:49.250012    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9f1e578d-ff0f-11e6-9c2c-029ae74d4174-clusterinfo" (spec.Name: "clusterinfo") pod "9f1e578d-ff0f-11e6-9c2c-029ae74d4174" (UID: "9f1e578d-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:52:49 ubuntu-xenial kubelet[6146]: I0302 13:52:49.250721    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9f1e578d-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "9f1e578d-ff0f-11e6-9c2c-029ae74d4174" (UID: "9f1e578d-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:52:58 ubuntu-xenial kubelet[6146]: I0302 13:52:58.199239    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/be73f029-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "be73f029-ff0f-11e6-9c2c-029ae74d4174" (UID: "be73f029-ff0f-11e6-9c2c-029ae74d4174").
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.459389773Z" level=error msg="Handler for GET /containers/8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b/json returned error: No such container: 8a798de19a734db692d99d1aa4fafddd4a2c0ce06cf4af952758d0118493aa0b"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.460023157Z" level=error msg="Handler for GET /containers/660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150/json returned error: No such container: 660030d2a7bf5d42853d3c2cd07acd63114c2a7698bdacab344b9445d8e1b150"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.461155629Z" level=error msg="Handler for GET /containers/896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef/json returned error: No such container: 896f83173ee8807f670633d67aa55d27c39d07f78e3981e62972239ada41f7ef"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.462389040Z" level=error msg="Handler for GET /containers/1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c/json returned error: No such container: 1073572a60665714c8bd4be849bfe2cd05bc9efad17c5a57959708b451b4f67c"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.463458863Z" level=error msg="Handler for GET /containers/a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a/json returned error: No such container: a4d32a50e2760d8e1b59ed1947f9cf9bc16b447af71f16875dab2297145ec00a"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.464636190Z" level=error msg="Handler for GET /containers/174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1/json returned error: No such container: 174a2d628ea5edcba3b7094f4fd6bc7e2c003907d59030dfbad55afcf5c370f1"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.465674551Z" level=error msg="Handler for GET /containers/6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32/json returned error: No such container: 6200ad4f80918103658c61e2db581bd9559a4b46d32bc50b88114ff37aa05b32"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.466693993Z" level=error msg="Handler for GET /containers/666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc/json returned error: No such container: 666a5ea57b669867a61aaf6941f4e7273edbaa1c51a2dadb63796990cc75cecc"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.467917131Z" level=error msg="Handler for GET /containers/b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee/json returned error: No such container: b55ff8f9ee522d249cd628a6ff0132fbd08d93c9654a7fa94b8233177cae32ee"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.468794753Z" level=error msg="Handler for GET /containers/35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72/json returned error: No such container: 35c29d8fd5707b362366b33c4f71e0b3244837865f2a14084f2a89ba63e57e72"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.469768503Z" level=error msg="Handler for GET /containers/e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f/json returned error: No such container: e2bcece6e45968b2d8b2a62d51b643d81ef4fbe0111b9a9669b49219da6d215f"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.470799677Z" level=error msg="Handler for GET /containers/674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718/json returned error: No such container: 674ec64f9fcd926d81c136f5879819d926b2c58676bc8d24105ebf205cafd718"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.471774471Z" level=error msg="Handler for GET /containers/97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e/json returned error: No such container: 97c4dbccfe0ff19d551a28b2f95e930f6f038ed84e3a60675869df8a7ce27c4e"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.472754849Z" level=error msg="Handler for GET /containers/aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a/json returned error: No such container: aa61ce3b5abe9953e9a4d9e43c666623c50e2c0cf8f536c8765838f00b09d37a"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.473674857Z" level=error msg="Handler for GET /containers/cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1/json returned error: No such container: cd2d59dfa68ceafc1c1712d02980dbd56e8dd7606634afdcd6edcc36242574d1"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.474575057Z" level=error msg="Handler for GET /containers/35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f/json returned error: No such container: 35216a82828d39b05002d05549e53be8cbe9a7982292aa0f276dea02ab51110f"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.475541451Z" level=error msg="Handler for GET /containers/df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1/json returned error: No such container: df48c249a689b32f2e9e9562865df715f71875a033b268e0630f201c261d5ea1"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.476432668Z" level=error msg="Handler for GET /containers/37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd/json returned error: No such container: 37cc6f3f9f6e25cfb425aec1635ade92465e92dcbf07609f5e902eb9ed846afd"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.477211394Z" level=error msg="Handler for GET /containers/3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b/json returned error: No such container: 3f9510fddc1e5122509b8c7dec2ae4c4dc46035ff292f9efa01427a06d5e714b"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.478019885Z" level=error msg="Handler for GET /containers/51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814/json returned error: No such container: 51c5c4e3f3d72068572b743fea746a17fca029d5f11646587dff632887272814"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.478869816Z" level=error msg="Handler for GET /containers/e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19/json returned error: No such container: e07089f495d87b883a498069153f43ca8bbcce45c34aa9077fd57214be283d19"
Mar  2 13:53:07 ubuntu-xenial dockerd[4072]: time="2017-03-02T13:53:07.479659878Z" level=error msg="Handler for GET /containers/407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8/json returned error: No such container: 407e1f15e0b66b55dd8a6e082c27cbdb1c1a48aebef74ad1197a2b822d1317a8"
Mar  2 13:53:08 ubuntu-xenial kubelet[6146]: I0302 13:53:08.278543    6146 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/ad724dc3-ff0f-11e6-9c2c-029ae74d4174-default-token-lbz3m" (spec.Name: "default-token-lbz3m") pod "ad724dc3-ff0f-11e6-9c2c-029ae74d4174" (UID: "ad724dc3-ff0f-11e6-9c2c-029ae74d4174").

k8s demo setup failed due to errors contacting GitHub

The output of final stage of Vagrant Up:

==> osh-01: Setting up the node using IPAddress: 172.28.128.6 The following SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed!

maya setup-osh -self-ip=172.28.128.6 -omm-ips=172.28.128.5

Stdout from the command:

Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease Hit:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease Hit:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease Reading package lists... Reading package lists... Building dependency tree... Reading state information... unzip is already the newest version (6.0-20ubuntu1). 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. Cleaning old maya boostrapping if any ... Fetching utility scripts ...

Stderr from the command:

mesg: ttyname failed: Inappropriate ioctl for device W: Can't drop privileges for downloading as file '/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_xenial_InRelease' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied) curl: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated. Error executing cmd: exit status 35 Install failed: Error while bootstraping OpenEBS Host setup failed

Verification of the setup failed at:
maya osh-status
No output was shown for the above command.

Vagrant version: Vagrant 1.9.1
Virtual box version: 5.1.14r112924

Correction in video embedded in README

At timestamp 3:15, "iscsiadm -m session -P 3" needs to be run instead of "iscsiadm -m session -P 1" to get the dev path of the newly attached target.
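For reference, the corrected command; print level 3 includes the attached SCSI devices, which is where the device path (e.g. /dev/sdX) appears:

sudo iscsiadm -m session -P 3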

Dynamically provision OpenEBS volumes from K8s

This issue will track the enhancements that will enable the provisioning of OpenEBS volumes from K8s.

The high level use case is as follows:

  • OpenEBS will be configured to use the same multi-host networking as K8s (for example, Flannel)
  • Developer creates a K8s pod YAML file, specifying the volume to be provisioned on OpenEBS.
  • K8s volume driver will instantiate a new OpenEBS VSM that is accessible as an iSCSI volume, to be mounted and formatted by the K8s minion.
  • When the K8s pod is destroyed, the corresponding volume is also deleted from OpenEBS.

Some of the items for further exploration are:

  • Can K8s EBS volume driver be used to mount OpenEBS volumes? Or will there be a need for writing another volume driver?
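For comparison, a hedged sketch of mounting an iSCSI target directly with the built-in K8s iscsi volume plugin, which is one baseline for evaluating the options above (the portal, IQN, and names below are hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-demo                                 # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: openebs-vol
      mountPath: /data
  volumes:
  - name: openebs-vol
    iscsi:
      targetPortal: 172.28.128.101:3260            # hypothetical OpenEBS VSM portal
      iqn: iqn.2016-09.com.openebs.jiva:demo-vol   # hypothetical IQN
      lun: 0
      fsType: ext4
EOF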

Kubernetes MySQL with iSCSI stops running

When using iSCSI volumes, MySQL fails to start in the following cases:

  • the mount directory ( /var/lib/kubelet/plugins/kubernetes.io/iscsi// ) contains lost+found folder
  • due to network connectivity, the volume gets remounted as read-only

OpenEBS components deployment strategy / tracker

This will act as a common placeholder to discuss and debate the deployment, maintenance, upgrade, and rollback strategies of various OpenEBS components.

We should have clarity to questions like:

  • Should OpenEBS components rely on systemd ?
  • Should OpenEBS components run as containers & managed by some tool similar to kubelet ?
  • Should OpenEBS components run as containers or standalone binaries & be managed by Nomad ?
    • Here Nomad is being referred to as a tool for deployment & management of OpenEBS components
  • What happens if container runtime require an upgrade ?
  • Can multiple versions of container runtime (e.g. dockerd, rkt engine) be used across hosts ?
  • What happens if orchestrator engine requires an upgrade ?
  • Can multiple versions of orchestrator engine be used across hosts ?

Alternatively:

- Can we just use tried & tested tools like Ansible:
  - https://github.com/openshift/openshift-ansible
  - https://www.ansible.com/tower
  - https://github.com/ansible-semaphore/semaphore
  - https://github.com/purpleidea/mgmt
  - https://github.com/dmsimard/ara

Alternative:

- http://operable.io/

References:
