
cluster-operator creates, configures and helps manage a StorageOS cluster on Kubernetes

License: MIT License

Go 89.70% Dockerfile 1.10% Shell 7.46% Makefile 1.74%
kubernetes operator storage olm crd kubernetes-operator

cluster-operator's Introduction

StorageOS cluster-operator

Go Report Card Build Status CircleCI CodeQL Active PR's Welcome License

The StorageOS Cluster Operator deploys and configures a StorageOS cluster on Kubernetes.

For quick installation of the cluster operator, refer to the install section of the releases page.

Pre-requisites

  • Kubernetes 1.9+
  • Kubernetes must be configured to allow (configured by default in 1.10+):
    • Privileged mode containers (enabled by default)
    • Feature gate: MountPropagation=true. This can be done by appending --feature-gates MountPropagation=true to the kube-apiserver and kubelet services.

Refer to the StorageOS prerequisites docs for more information.

Setup/Development

  1. Build operator container image with make dev-image. Publish or copy this container image to an existing k8s cluster to make it available for use within the cluster.
  2. Generate install manifest file with make install-manifest. This will generate storageos-operator.yaml.
  3. Install the operator: kubectl create -f storageos-operator.yaml
  4. Install a StorageOSCluster by creating a custom resource: kubectl create -f deploy/crds/*_storageoscluster_cr.yaml (a minimal example is sketched below).
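
A minimal custom resource along these lines (a sketch, assuming the storageos-api secret from the CSI section below already exists in the default namespace) would be:

apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
  name: example-storageoscluster
  namespace: default
spec:
  # Reference to the secret that holds the StorageOS API / CSI credentials.
  secretRefName: storageos-api
  secretRefNamespace: default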

NOTE: Installing StorageOS on Minikube is not currently supported due to missing kernel prerequisites. Custom-built Kubernetes in Docker (KinD) node images compatible with StorageOS are available at https://hub.docker.com/r/storageos/kind-node.

For development, run the operator outside of the k8s cluster by running:

make local-run

Build operator container image:

make operator-image OPERATOR_IMAGE=storageos/cluster-operator:test

This builds all the components and copies the binaries into the same container.

For any changes related to Operator Lifecycle Manager (OLM), update deploy/storageos-operators.configmap.yaml and run make metadata-update to automatically update all the CRD, CSV and package files.

After creating a StorageOSCluster resource, query the resource:

$ kubectl get storageoscluster
NAME                       READY     STATUS    AGE
example-storageoscluster   3/3       Running   4m

Inspect a StorageOSCluster Resource

Get all the details about the cluster:

$ kubectl describe storageoscluster/example-storageoscluster
Name:         example-storageoscluster
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storageos.com/v1","kind":"StorageOSCluster","metadata":{"annotations":{},"name":"example-storageoscluster","namespace":"default"},"spec":{"...
API Version:  storageos.com/v1
Kind:         StorageOSCluster
Metadata:
  Creation Timestamp:  2018-07-21T12:57:11Z
  Generation:          1
  Resource Version:    10939030
  Self Link:           /apis/storageos.com/v1/namespaces/default/storageosclusters/example-storageoscluster
  UID:                 955b24a4-8ce5-11e8-956a-1866da35eee2
Spec:
  Join:  test07
Status:
  Node Health Status:
  ...
  ...
  Nodes:
    test09
    test08
    test07
  Phase:  Running
  Ready:  3/3
Events:   <none>

StorageOSCluster Resource Configuration

Once the StorageOS operator is running, a StorageOS cluster can be deployed by creating a Cluster Configuration. The parameters specified in the configuration define how StorageOS is deployed; the rest of the installation details are handled by the operator.

The following table lists the configurable spec parameters of the StorageOSCluster custom resource and their default values.

Parameter Description Default
secretRefName Reference name of storageos secret
secretRefNamespace Namespace of storageos secret
namespace Namespace where storageos cluster resources are created storageos
images.nodeContainer StorageOS node container image storageos/node:v2.4.4
images.initContainer StorageOS init container image storageos/init:v2.1.0
images.apiManagerContainer StorageOS API Manager container image storageos/api-manager:v1.1.3
images.csiNodeDriverRegistrarContainer CSI Node Driver Registrar Container image Varies depending on Kubernetes version
images.csiClusterDriverRegistrarContainer CSI Cluster Driver Registrar Container image Varies depending on Kubernetes version
images.csiExternalProvisionerContainer CSI External Provisioner Container image Varies depending on Kubernetes version
images.csiExternalAttacherContainer CSI External Attacher Container image Varies depending on Kubernetes version
images.csiExternalResizerContainer CSI External Resizer Container image Varies depending on Kubernetes version
images.csiLivenessProbeContainer CSI Liveness Probe Container image Varies depending on Kubernetes version
service.name Name of the Service used by the cluster storageos
service.type Type of the Service used by the cluster ClusterIP
service.externalPort External port of the Service used by the cluster 5705
service.internalPort Internal port of the Service used by the cluster 5705
service.annotations Annotations of the Service used by the cluster
ingress.enable Enable ingress for the cluster false
ingress.hostname Hostname to be used in cluster ingress storageos.local
ingress.tls Enable TLS for the ingress false
ingress.annotations Annotations of the ingress used by the cluster
sharedDir Path to be shared with kubelet container when deployed as a pod /var/lib/kubelet/plugins/kubernetes.io~storageos
kvBackend.address Comma-separated list of addresses of external key-value store. (1.2.3.4:2379,2.3.4.5:2379)
kvBackend.backend Name of the key-value store to use. Set to etcd for external key-value store. embedded
tlsEtcdSecretRefName Name of the secret object that contains the etcd TLS certs.
tlsEtcdSecretRefNamespace Namespace of the secret object that contains the etcd TLS certs.
pause Pause the operator for cluster maintenance false
debug Enable debug mode for all the cluster nodes false
disableFencing Disable Pod fencing false
disableTelemetry Disable telemetry reports false
disableTCMU Disable TCMU to allow co-existence with other storage systems but degrades performance false
forceTCMU Forces TCMU to be enabled or causes StorageOS to abort startup false
disableScheduler Disable StorageOS scheduler for data locality false
nodeSelectorTerms Set node selector for storageos pod placement, including NFS pods
tolerations Set pod tolerations for storageos pod placement
resources Set resource requirements for the containers
k8sDistro The name of the Kubernetes distribution in use, e.g. rancher or eks
storageClassName The name of the default StorageClass created for StorageOS volumes fast
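
As an illustration of how a few of these parameters fit together, here is a sketch of a StorageOSCluster spec; the etcd addresses, StorageClass name and resource request are placeholders to adjust for your environment:

apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
  name: example-storageoscluster
  namespace: default
spec:
  secretRefName: storageos-api
  secretRefNamespace: default
  namespace: storageos
  kvBackend:
    backend: etcd
    # Comma-separated addresses of an external etcd cluster (placeholder values).
    address: "1.2.3.4:2379,2.3.4.5:2379"
  storageClassName: fast
  disableTelemetry: true
  resources:
    requests:
      memory: "512Mi"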

Upgrading a StorageOS Cluster

An existing StorageOS cluster can be upgraded to a new version of StorageOS by creating an Upgrade Configuration. The cluster-operator takes care of downloading the new container image and updating all the nodes with the new version of StorageOS. An example StorageOSUpgrade resource is storageos_v1_storageosupgrade_cr.yaml.

Only offline upgrades are currently supported by the cluster-operator. During an upgrade, StorageOS maintenance mode is enabled, the applications that use StorageOS volumes are scaled down, and the whole StorageOS cluster is restarted with the new version. Once the StorageOS cluster becomes usable, the applications are scaled back up to their previous configuration. When the upgrade is complete, make sure to delete the upgrade resource to return the StorageOS cluster to normal mode; this disables maintenance mode.
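
A sketch of such an upgrade resource, along the lines of storageos_v1_storageosupgrade_cr.yaml (the image tag is illustrative):

apiVersion: storageos.com/v1
kind: StorageOSUpgrade
metadata:
  name: example-storageosupgrade
  namespace: default
spec:
  # StorageOS node container image to upgrade the cluster to.
  newImage: storageos/node:1.0.0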

Once an upgrade resource is created, events related to the upgrade can be viewed in the upgrade object description. All the status and errors, if any, encountered during the upgrade are posted as events.

$ kubectl describe storageosupgrades example-storageosupgrade
Name:         example-storageosupgrade
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storageos.com/v1","kind":"StorageOSUpgrade","metadata":{"annotations":{},"name":"example-storageosupgrade","namespace":"default"},...
API Version:  storageos.com/v1
Kind:         StorageOSUpgrade
...
Spec:
  New Image:  storageos/node:1.0.0
Events:
  Type    Reason           Age   From                Message
  ----    ------           ----  ----                -------
  Normal  PullImage         4m    storageos-upgrader  Pulling the new container image
  Normal  PauseClusterCtrl  2m    storageos-upgrader  Pausing the cluster controller and enabling cluster maintenance mode
  Normal  UpgradeInit       2m    storageos-upgrader  StorageOS upgrade of cluster example-storageos started
  Normal  UpgradeComplete   0s    storageos-upgrader  StorageOS upgraded to storageos/node:1.0.0. Delete upgrade object to disable cluster maintenance mode

StorageOSUpgrade Resource Configuration

The following table lists the configurable spec parameters of the StorageOSUpgrade custom resource and their default values.

Parameter Description Default
newImage StorageOS node container image to upgrade to

Cleanup Old Configurations

StorageOS creates and saves its files under /var/lib/storageos on the hosts. This directory also contains some of the cluster's configuration. To do a fresh install of StorageOS, these files need to be deleted.

WARNING: This will delete any existing data, and the data won't be recoverable.

NOTE: When using an external etcd, the data related to storageos should also be removed.

ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints http://storageos-etcd-server:2379 del --prefix storageos

The cluster-operator provides a Job resource that can execute certain tasks on all nodes or on selected nodes. This can be used to easily perform cleanup tasks. An example would be to create a Job resource:

apiVersion: storageos.com/v1
kind: Job
metadata:
  name: cleanup-job
spec:
  image: darkowlzz/cleanup:v0.0.2
  args: ["/var/lib/storageos"]
  mountPath: "/var/lib"
  hostPath: "/var/lib"
  completionWord: "done"
  nodeSelectorTerms:
    - matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: In
        values:
        - "true"

When applied, this job runs the darkowlzz/cleanup container on the nodes that have the label node-role.kubernetes.io/worker with value "true", mounting /var/lib and passing the argument /var/lib/storageos. This runs rm -rf /var/lib/storageos on the selected nodes and cleans up all the storageos files. To run it on all the nodes, remove the nodeSelectorTerms attribute. On completion, the resource description shows that the task has completed and the resource can be deleted.

$ kubectl describe jobs.storageos.com cleanup-job
Name:         cleanup-job
Namespace:    default
...
...
Spec:
  Completion Word:
  Args:
    /var/lib/storageos
  Host Path:            /var/lib
  Image:                darkowlzz/cleanup:v0.0.2
  ...
Status:
  Completed:  true
Events:
  Type    Reason        Age   From                       Message
  ----    ------        ----  ----                       -------
  Normal  JobCompleted  39s   storageoscluster-operator  Job Completed. Safe to delete.

Deleting the resource will terminate all the pods that were created to run the task.

Internally, this Job is backed by a controller that creates pods using a DaemonSet. Job containers have to be built in a specific way to achieve this behavior.

In the above example, the cleanup container runs a shell script (script.sh):

#!/bin/ash

set -euo pipefail

# Gracefully handle the TERM signal sent when deleting the daemonset
trap 'exit' TERM

# This is the main command that's run by this script on
# all the nodes.
rm -rf $1

# Let the monitoring script know we're done.
echo "done"

# this is a workaround to prevent the container from exiting
# and k8s restarting the daemonset pod
while true; do sleep 1; done

And the container image is made with Dockerfile:

FROM alpine:3.6
COPY script.sh .
RUN chmod u+x script.sh
ENTRYPOINT ["./script.sh"]

After running the main command, the script enters a sleep loop instead of exiting, so that the container doesn't exit and get restarted over and over. Once the command has completed, it echoes "done"; the Job controller reads this from the pod logs to determine when the task is completed. Once all the pods have completed the task, the Job status is set to completed and the resource can be deleted.

This can be extended to do other similar cluster management operations. This is also used internally in the cluster upgrade process.

Job (jobs.storageos.com) Resource Configuration

The following table lists the configurable spec parameters of the Job custom resource and their default values.

Parameter Description Default
image Container image that the job runs
args Any arguments to be passed when the container is run
hostPath Path on the host that is mounted on the job container
mountPath Path on the job container where the hostPath is mounted
completionWord The word that job controller looks for in the pod logs to determine if the task is completed
labelSelector Labels that are added to the job pods and are used to select them.
nodeSelectorTerms This can be used to select the nodes where the job runs.

TLS Support

To enable TLS, ensure that an ingress controller is installed in the cluster. Set ingress.enable and ingress.tls to true. Store the TLS cert and key as part of the storageos secret as:

apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
...
...
data:
  # echo -n '<secret>' | base64
  ...
  ...
  # Add base64 encoded TLS cert and key.
  tls.crt:
  tls.key:
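
A sketch of how the base64 values can be produced on the command line (server.crt and server.key are assumed file names for the PEM-encoded certificate and key):

# Print base64 without line breaks; paste the output as the value of tls.crt / tls.key.
cat server.crt | base64 | tr -d '\n'
cat server.key | base64 | tr -d '\n'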

CSI

StorageOS supports the Container Storage Interface (CSI) to communicate with Kubernetes.

CSI credentials are required for deploying StorageOS. Specify the CSI credentials as part of the storageos secret object as:

apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
...
...
data:
  # echo -n '<secret>' | base64
  ...
  ...
  csiProvisionUsername:
  csiProvisionPassword:
  csiControllerPublishUsername:
  csiControllerPublishPassword:
  csiNodePublishUsername:
  csiNodePublishPassword:
  csiControllerExpandUsername:
  csiControllerExpandPassword:
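
As a sketch, a complete secret with placeholder credentials might look like the following. The values are plain base64-encoded strings (echo -n 'admin' | base64 gives YWRtaW4=); the namespace should match the secretRefNamespace set in the StorageOSCluster spec, and the expand credentials can be added in the same way if volume expansion is used:

apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "default"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  # echo -n 'admin' | base64
  apiUsername: YWRtaW4=
  # echo -n 'password' | base64 (placeholder only)
  apiPassword: cGFzc3dvcmQ=
  csiProvisionUsername: YWRtaW4=
  csiProvisionPassword: cGFzc3dvcmQ=
  csiControllerPublishUsername: YWRtaW4=
  csiControllerPublishPassword: cGFzc3dvcmQ=
  csiNodePublishUsername: YWRtaW4=
  csiNodePublishPassword: cGFzc3dvcmQ=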

cluster-operator's People

Contributors

andrebriggs, arau, croomes, darkowlzz, domodwyer, mhmxs, tobiaskohlbau


cluster-operator's Issues

StorageOS 2 and OKD 4.4 volumes created after installation not visible in dashboard and not spread across nodes

StorageOS 2 and OKD 4.4 volumes created after installation not visible in dashboard and not spread across nodes.
So when I create a 2-replica volume, I can see on the compute node that all replicas have in fact been created on a single compute node.
This, however, works correctly when the volume is created from the dashboard.
Tested on bare metal OKD 4.4 Beta3 installation with 3 masters and 3 compute nodes.

[FEATURE] UI show volumes for all namespaces

Running storageos 2.0.0, now also migrated to an SSD-only server.
Startup of the same statefulsets feels around 20x faster.

One thing I noticed is that there is no option to show volumes for all namespaces on one page.

I think it would make sense to have this option for the nodes view
(if a namespace workload is on that node) and the volumes view

Feature Request: use a storage class for data volume instead of host dir

Optionally use a statefulset with PVCs for the StorageOS cluster. You already mount a local volume to each pod in the StorageOS cluster. Replacing that with another storage class would allow the cluster to use our cloud provider's storage class to back StorageOS.

This has the following benefits:

  • Simpler setup since nodes do not have to be provisioned with extra storage ahead of time.
  • abstracts the hosts completely from the storage cluster.
  • makes adding more storage to the cluster as simple as scaling up the stateful set.

Expand PersistentVolumes in k8s

On this page https://storageos.com/why-storageos/ I read something about volume resize.

However I wasn't able to find further documentation about it.

I am testing StorageOS on Kubernetes, both latest version as of writing.
According to the Kubernetes documentation StorageOS doesn't support expanding (yet) https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims

Can you give more insights into this topic please? Is it already possible on k8s? If not is it coming soon?

Thanks a lot!

Error on creation of PVC

2019-12-04T18:39:28.264Z	INFO	storageos.node	Failed to sync node labels	{"Request.Namespace": "", "Request.Name": "chichi-master-server", "error": "API error (Server failed to process your request. Was the data correct?): node not found"

Add daemonset node tolerations support

We have the desire to deploy dedicated storage nodes. In order to achieve this we set a node taint to avoid scheduling any workload pods onto them. In order to schedule the storageos pods onto them, the operator needs to inject a node toleration into the daemonset. I would like to see this supported, in a similar way to how #19 added node selector support.

What are your thoughts about this?
If you're willing to support this, I could find some time to work out a draft PR based on the code of #19.

Add CRD spec and status descriptors in CSV

All the fields in spec and status should have human-readable descriptions.
Examples of a few spec and status field descriptions are here.
All the fields in spec and status should have their descriptions under the StorageOSCluster definition.

Likewise for upgrade and job CRDs.

These descriptions count towards the operator scorecard points.

PVC attach/mount failed - csi.storageos.com not found

In a microk8s environment, I installed the storageos cluster-operator with the helm3 chart, CSI enabled.

Everything runs OK, including the GUI and PVC creation.

Pods, however, fail to attach - timeout with error: cannot find csi.storageos.com

My install command is as follows (cluster.kvBackend.address is the VIP address of an external etcd cluster, TLS disabled):

helm3 install grid-storageos storageos/storageos-operator \
  --namespace kube-system \
  --set cluster.kvBackend.address=192.168.0.10:2379 \
  --set cluster.admin.username=storageos \
  --set cluster.admin.password=******** \
  --set csi.enable=true \
  --set csi.deploymentStrategy=deployment \
  --set csi.enableControllerPublishCreds=true \
  --set csi.enableNodePublishCreds=true \
  --set csi.enableProvisionCreds=true

Problem with AKS storage

Hi all,

Am I missing something, or is there no capability to install storageos (even the 2.x version) on dedicated PVCs attached to the nodes in cloud Kubernetes distributions?
For example, rook has this capability - it can be provisioned to use PVC mounts from OSD pods.
The relevant part of rook:

        spec:
          resources:
            requests:
              storage: 128Gi
          storageClassName: managed-premium
          volumeMode: Block
          accessModes:
            - ReadWriteOnce

The scenario is that in clouds like Azure, the cloud creates AKS nodes without the possibility to manipulate local storage.
On the other hand, I could do workarounds like init containers backing the /var/lib/storageos space with an azure-disk, but I'm expecting storageos to do that.
I'm expecting the possibility to point /var/lib/storageos at a PVC volume.

Regards,
Michal

failed to dial all known cluster members, (http://storageos.kube-system.svc.cluster.local)

I'm trying to connect an in-cluster client to the StorageOS API, but cluster members cannot be found.

This is my Go code:

func InitClient() error {
	cli, err := storageos.NewClient(env.Get(env.STORAGEOS_HOST))
	if err != nil {
		return err
	}
	cli.SetAuth(env.Get(env.STORAGEOS_USERNAME), env.Get(env.STORAGEOS_PASSWORD))
	client = cli
	return nil
}

and then I run a test ping

if err = client.Ping(); err != nil {
	log.Error(err, "Failed ping test StorageOS API")
	os.Exit(1)
}

and there is the error

"error":"failed to dial all known cluster members, (http://storageos.kube-system.svc.cluster.local)

The pod is running in another namespace.

This is the storageos service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: storageos
  name: storageos
  namespace: kube-system
spec:
  clusterIP: 10.43.68.167
  ports:
  - name: storageos
    port: 5705
    protocol: TCP
    targetPort: 5705
  selector:
    app: storageos
    kind: daemonset
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

error retrieving resource lock on k8s 1.17.0

Fresh install using 1.5.1 operator on Kubernetes 1.17.0 (shipped with Ubuntu 18.04) results in the following error in storageos-scheduler pod:

E1215 17:48:23.737788       1 leaderelection.go:331] error retrieving resource lock storageos/storageos-scheduler: leases.coordination.k8s.io "storageos-scheduler" is forbidden: User "system:serviceaccount:storageos:storageos-scheduler-sa" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "storageos"
I1215 17:48:23.737814       1 leaderelection.go:247] failed to acquire lease storageos/storageos-scheduler
kubectl  get serviceaccounts -n storageos
NAME                      SECRETS   AGE
default                   1         43m
storageos-csi-helper-sa   1         43m
storageos-daemonset-sa    1         43m
storageos-scheduler-sa    1         43m

kubectl get rolebindings -n storageos
NAME                       AGE
storageos:key-management   43m

I'm able to work around it by adding leases to clusterrole.rbac.authorization.k8s.io/storageos:scheduler-extender

- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch

(Note that this is probably more verbs than necessary.)

Node containers can't connect to ETCD

Hi there,
I am trying to deploy storageos for testing, but I cannot make the node containers (storageos-daemonset-xyz) connect to etcd. I tried two different options:

  1. Use an external etcd cluster I have.
  2. Create an etcd cluster inside k8s cluster, as suggested in your documentation https://docs.storageos.com/v2.1/docs/prerequisites/etcd/

This is what log spits out:

{"level":"debug","msg":"tracing is disabled","time":"2020-12-08T21:44:36.779463135Z"}
{"level":"debug","msg":"tracer created successfully","time":"2020-12-08T21:44:36.779626243Z"}
{"config_file_path":"/var/lib/storageos/config.json","level":"debug","msg":"local file configuration persistence ready","time":"2020-12-08T21:44:36.779691175Z"}
{"file_config_joined_cluster":"24a3c885-ebac-4b69-871b-07b7605bbbb9","file_config_local_node_id":"a2274eee-f6d0-4055-a8e3-f2189a44c0c9","file_config_node_created_at":"2020-12-08T15:40:17Z","level":"debug","msg":"read stored file configuration","time":"2020-12-08T21:44:36.779818426Z"}
{"level":"info","msg":"starting StorageOS","time":"2020-12-08T21:44:36.779878308Z"}
{"env_advertise_address":"x.y.z.k","env_api_bind_address":"","env_api_tls_ca":"","env_bind_address":"","env_bootstrap_namespace":"","env_bootstrap_username":"admin","env_csi_endpoint":"unix:///var/lib/kubelet/plugins_registry/storageos/csi.sock","env_csi_version":"v1","env_dataplane_daemon_dir":"","env_dataplane_dir":"","env_device_dir":"","env_dial_timeout":"","env_disable_crash_reporting":"false","env_disable_telemetry":"false","env_disable_version_check":"false","env_etcd_endpoints":"storageos-etcd-client.storageos-etcd.svc.xyz:2379","env_etcd_namespace":"","env_etcd_tls_client_ca":"","env_etcd_tls_client_cert":"","env_etcd_tls_client_key":"","env_etcd_username":"","env_gossip_advertise_address":"","env_gossip_bind_address":"","env_health_grace_period":"","env_health_probe_interval":"","env_health_probe_timeout":"","env_hostname":"xyz","env_internal_api_advertise_address":"","env_internal_api_bind_address":"","env_internal_tls_ca_cert":"","env_internal_tls_node_cert":"","env_internal_tls_node_key":"","env_io_advertise_address":"","env_io_bind_address":"","env_jaeger_endpoint":"","env_jaeger_service_name":"","env_k8s_config_path":"","env_k8s_distribution":"upstream","env_k8s_enable_scheduler_extender":"true","env_k8s_namespace":"kube-system","env_log_file":"","env_log_format":"json","env_log_level":"xdebug","env_log_size_limit":"","env_node_capacity_interval":"","env_root_dir":"","env_socket_dir":"","env_supervisor_advertise_address":"","env_supervisor_bind_address":"","level":"info","msg":"environment variables at startup","time":"2020-12-08T21:44:36.779983858Z"}
{"bin_build_date":"2020-06-23T13:13:48.474931189+00:00","bin_build_ref":"","bin_git_branch":"release/v2.1.0","bin_git_commit_hash":"404e6917a7859c9419897be06299b5704fbbc4b1","bin_version":"2.1.0","level":"debug","msg":"version information","time":"2020-12-08T21:44:36.780135244Z"}
{"level":"debug","msg":"got persisted configuration from cache","time":"2020-12-08T21:44:36.780218711Z"}
{"level":"debug","msg":"read node ID from persisted config","time":"2020-12-08T21:44:36.780299874Z"}
{"level":"info","msg":"local node StorageOS id: a2274eee-f6d0-4055-a8e3-f2189a44c0c9","time":"2020-12-08T21:44:36.780370056Z"}
{"level":"info","msg":"local node Hostname is: xyz","time":"2020-12-08T21:44:36.780427004Z"}
{"error":"no TLS ca certificate provided","level":"warning","msg":"no TLS certificate provided for etcd, communications will not be secured","time":"2020-12-08T21:44:36.780488098Z"}
{"level":"debug","msg":"connecting to ETCD at: [storageos-etcd-client.storageos-etcd.svc.xyz:2379]","time":"2020-12-08T21:44:36.780537663Z"}
{"endpoints":["storageos-etcd-client.storageos-etcd.svc.xyz:2379"],"error":"context deadline exceeded","level":"error","msg":"unable to connect to etcd","store":"etcd","time":"2020-12-08T21:44:41.780858551Z"}
{"error":"failed to instantiate ETCD: context deadline exceeded","level":"error","msg":"failed to initialise store client","time":"2020-12-08T21:44:41.780958599Z"}
{"level":"info","msg":"shutting down","time":"2020-12-08T21:44:41.780982726Z"}
{"level":"debug","msg":"shutting down subsystem","subsystem_name":"tracer","time":"2020-12-08T21:44:41.780997343Z"}

Some important comments:

  1. I always get the same message, for internal and external etcd.
  2. I ran a test pod with a simple netcat against the service/endpoint in the same namespace where the node container runs, and it resolves OK.
  3. Also tried an etcdctl --endpoints=http://storageos-etcd-client.storageos-etcd.svc.xyz:2379 member list and it works OK.
  4. The etcd clusters are insecure, so they do not require TLS info.
  5. The k8s cluster has more than 100 pods and services defined and communicating with each other, so I doubt the problem is related to Kubernetes itself.

So at this point I wonder if you could help me to determine the cause, maybe it is only a misconfiguration.

Thanks!

operator-sdk build storageos/cluster-operator:test fails

from readme :
$GOPATH/bin/operator-sdk build storageos/cluster-operator:test
INFO[0000] Building Docker image storageos/cluster-operator:test
Sending build context to Docker daemon 281.3 MB
Step 1/19 : ARG BUILD_IMAGE=golang:1.11.5
Please provide a source image with from prior to commit
Error: failed to output build image storageos/cluster-operator:test: (failed to exec []string{"docker", "build", ".", "-f", "build/Dockerfile", "-t", "storageos/cluster-operator:test"}: exit status 1)

trying :
:~/go/src/github.com/storageos/cluster-operator# docker build . -f build/Dockerfile -t storageos/cluster-operator:test
Sending build context to Docker daemon 281.3 MB
Step 1/19 : ARG BUILD_IMAGE=golang:1.11.5
Please provide a source image with from prior to commit

storageos-csi-helper pod on CrashLoopBackOff

Version: 2.4.4

hello,
Having an issue with the csi-helper pod. It's continuously restarting with the following error.

storageos-csi-helper is in CrashLoopBackOff state with the following error.

error: a container name must be specified for pod storageos-csi-helper-779fc4c6d8-cmw6f, choose one of: [csi-external-provisioner csi-external-attacher csi-external-resizer]

Missing properties from CR

The CRs are missing the properties with default values and the properties that have inferred values.

In the StorageOSCluster resource, the spec contains only the values set in the CR:

Spec:
  Csi:
    Enable:      true
  Namespace:             storageos
  Secret Ref Name:       storageos-api
  Secret Ref Namespace:  default
Status:
  Members:
...

The cluster default configurations should be set in the CR.

For the same reason, the NFSServer resource has no storageclass set, which is an inferred property.

$ kubectl get nfsserver
NAME                STATUS    CAPACITY   TARGET         ACCESS MODES   STORAGECLASS   AGE
example-nfsserver   Running   1Gi        10.98.237.25   RW                            3m1s

storageos pod is not evicted when kube node is NotReady

Hello,

I see that the storageos-daemonset pod is not being removed from the NotReady node.

dm103   Ready      master   37d   v1.13.0   192.168.3.249   <none>        Ubuntu 16.04.5 LTS   4.4.0-87-generic   docker://17.3.2
dm104   NotReady   <none>   37d   v1.13.0   192.168.3.251   <none>        Ubuntu 16.04.3 LTS   4.4.0-87-generic   docker://17.3.2
dm201   Ready      <none>   37d   v1.13.0   192.168.3.231   <none>        Ubuntu 16.04.3 LTS   4.4.0-87-generic   docker://17.3.2
dm202   Ready      <none>   37d   v1.13.0   192.168.3.229   <none>        Ubuntu 16.04.3 LTS   4.4.0-87-generic   docker://17.3.2
dm203   Ready      <none>   37d   v1.13.0   192.168.3.225   <none>        Ubuntu 16.04.3 LTS   4.4.0-87-generic   docker://17.3.2
dm204   Ready      <none>   37d   v1.13.0   192.168.3.226   <none>        Ubuntu 16.04.3 LTS   4.4.0-87-generic   docker://17.3.2


storageos            storageos-daemonset-h2kzs                     3/3     Running                      0          87m    192.168.3.231   dm201   <none>           <none>
storageos            storageos-daemonset-hhlz4                     3/3     Running                      0          18m    192.168.3.251   dm104   <none>           <none>
storageos            storageos-daemonset-jrlcn                     3/3     Running                      0          87m    192.168.3.225   dm203   <none>           <none>
storageos            storageos-daemonset-n2tqs                     3/3     Running                      0          87m    192.168.3.226   dm204   <none>           <none>
storageos            storageos-daemonset-s6v88                     3/3     Running                      0          87m    192.168.3.229   dm202   <none>           <none>

I waited for minutes.

Node metrics in v2

First of all: thank you for this great product.

I've been using StorageOS in a small private cluster for more than 2 years, I believe, and love it. Currently running v2.3.0, installed via helm chart.

I see that on the node pod an exporter starts on port 9721; it's secured using the default user/password and is not documented at all. It contains a lot of interesting metrics, but not the ones from the old version.
The node pod on port 5705 no longer has a /metrics endpoint with v2, if I'm correct.

Question:
-> I am wondering though (maybe I'm overlooking something): are there metrics available for the node in the same way as for v1.2.0+, showing IOPS etc.?

[Kubernetes][KubeSpray] Error: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"storageos\": executable file not found in $PATH": unknown

Some of a StorageOS cluster's pods fail to start the container with the following error: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"storageos\": executable file not found in $PATH": unknown

The describe of the pod:

  Normal   Scheduled  5m6s                   default-scheduler  Successfully assigned storageos/storageos-daemonset-s7hbl to node3
  Normal   Pulled     5m5s                   kubelet, node3     Container image "storageos/init:0.1" already present on machine
  Normal   Created    5m5s                   kubelet, node3     Created container enable-lio
  Normal   Started    5m5s                   kubelet, node3     Started container enable-lio
  Warning  BackOff    3m42s (x7 over 4m56s)  kubelet, node3     Back-off restarting failed container
  Normal   Pulled     3m28s (x5 over 5m4s)   kubelet, node3     Container image "storageos/node:1.2.1" already present on machine
  Normal   Created    3m28s (x5 over 5m4s)   kubelet, node3     Created container storageos
  Warning  Failed     3m27s (x5 over 5m4s)   kubelet, node3     Error: failed to start container "storageos": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"storageos\": executable file not found in $PATH": unknown

Any advise?

Thanks

storageos operator missing pods/log permission

$ kubectl logs -f storageos-cluster-operator-598bbc4d79-k66qj -n storageos-operator
2019-09-07T02:46:03.279Z INFO storageos.job Creating a new DaemonSet {"Request.Namespace": "default", "Request.Name": "storageos-job"}
2019-09-07T02:46:03.342Z DEBUG kubebuilder.controller Successfully Reconciled {"controller": "job-controller", "request": "default/storageos-job"}
2019-09-07T02:46:03.478Z INFO storageos.job No DaemonSets found
2019-09-07T02:46:03.889Z INFO storageos.job Failed to get logs from pod {"pod": "storageos-job-daemonset-job-7876t", "error": "pods "storageos-job-daemonset-job-7876t" is forbidden: User "system:serviceaccount:storageos-operator:storageoscluster-operator-sa" cannot get resource "pods/log" in API group "" in the namespace "default""}
2019-09-07T02:46:03.891Z INFO storageos.job Failed to get logs from pod {"pod": "storageos-job-daemonset-job-lswzv", "error": "pods "storageos-job-daemonset-job-lswzv" is forbidden: User "system:serviceaccount:storageos-operator:storageoscluster-operator-sa" cannot get resource "pods/log" in API group "" in the namespace "default""}
2019-09-07T02:46:03.892Z INFO storageos.job Failed to get logs from pod {"pod": "storageos-job-daemonset-job-p4wvl", "error": "pods "storageos-job-daemonset-job-p4wvl" is forbidden: User "system:serviceaccount:storageos-operator:storageoscluster-operator-sa" cannot get resource "pods/log" in API group "" in the namespace "default""}

Volume still in mounted=true state after pod has volume unmounted

The StorageOS volume's mount state is not switching to false after the volume is unmounted.

Steps to reproduce:

  • create a volume storageos-volume-test in storageos
  • apply the following yaml after updating the secret to access your storageos cluster:
apiVersion: v1
data:
  apiAddress: XXXXXXXXXX
  apiPassword: XXXXXXXXXX
  apiUsername: XXXXXXXXXX
kind: Secret
metadata:
  name: storageos-secret-test
type: "kubernetes.io/storageos"
---
apiVersion: v1
kind: Pod
metadata:
  name: test-issue
spec:
  terminationGracePeriodSeconds: 0
  volumes:
    - name: vol
      storageos:
        volumeName: storageos-volume-test
        secretRef:
          name: storageos-secret-test
  containers:
    - image: busybox
      name: busybox
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: /data/test
          name: vol

The pod successfully mounts the fresh volume.
Now when deleting the pod with kubectl delete pod test-issue and looking up the volume (with the client or GUI), the volume's mounted state is still true; it should be false because the deleted pod is no longer using the volume.

Also, when running the client with storageos volume rm YOUR_NS/storageos-volume-test, the output is as expected: API error (Precondition Failed): cannot delete mounted volume 'YOUR_NS/storageos-volume-test'.

I also have a quick question: How can I create a snapshot/copy of a volume?

Additional info:

storageos version
Client:
 Version:      1.2.1
 API version:  1
 Go version:   go1.12.4
 Git commit:   8e7f857
 Built:        2019-05-15T150300Z
 OS/Arch:      linux/amd64

Server:
 Version:      1.2.1
 API version:  1
 Go version:   go1.11.5
 Git commit:   68ca908
 Built:        2019-05-15T164954Z
 OS/Arch:      linux/amd64
 Experimental: false
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Deployed using helm storageos-operator version 0.2.6

Pod with storageos disk takes about 5 minutes to recreate.

2020-05-18T08:54:40.323Z ERROR controller-runtime.controller Reconciler error {"controller": "node-controller", "request": "/k8snode3", "error": "API error (Server failed to process your request. Was the data correct?): node not found"} github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error /go/src/github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1 /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 2020-05-18T08:54:42.634Z ERROR controller-runtime.controller Reconciler error {"controller": "node-controller", "request": "/k8snode2", "error": "API error (Server failed to process your request. 
Was the data correct?): node not found"} github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error /go/src/github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1 /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 2020-05-18T08:54:47.165Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "node-controller", "request": "/k8snode1"} 2020-05-18T08:54:47.594Z ERROR controller-runtime.controller Reconciler error {"controller": "node-controller", "request": "/k8smaster1", "error": "API error (Server failed to process your request. 
Was the data correct?): node not found"} github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error /go/src/github.com/storageos/cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker /go/src/github.com/storageos/cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1 /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until /go/src/github.com/storageos/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 2020-05-18T08:54:49.477Z ERROR controller-runtime.controller Reconciler error {"controller": "node-controller", "request": "/k8snode3", "error": "API error (Server failed to process your request. Was the data correct?): node not found"}
Kubernetes 1.17.5
StorageOS installed following https://docs.storageos.com/docs/install/kubernetes/ for the 1.17 version.

Cluster operator UI broken on OpenShift 4.2 - "Invariant Violation"

We are testing a brand new, vanilla StorageOS deployment on OpenShift 4.2 - when browsing the cluster operator UI status URL:

https://cluster-console.app.local/k8s/ns/storageos-namespace/clusterserviceversions/storageosoperator.v1.5.1/storageos.com~v1~StorageOSCluster/example-storageos

We are getting the following error:

Oh no! Something went wrong.

Invariant Violation
Description:
Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=object%20with%20keys%20%7BdirectfsInitiator%2C%20director%2C%20kv%2C%20kvWrite%2C%20nats%2C%20presentation%2C%20rdb%7D&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.

Stack Trace:
Invariant Violation: Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=object%20with%20keys%20%7BdirectfsInitiator%2C%20director%2C%20kv%2C%20kvWrite%2C%20nats%2C%20presentation%2C%20rdb%7D&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings. 
    at https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:425
    at a (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:528)
    at po (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:47246)
    at p (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:48881)
    at m (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:49983)
    at https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:51959
    at wi (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:58902)
    at Mi (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:67287)
    at Va (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:90685)
    at qa (https://cluster-console.app.local/static/vendors~main-chunk-67899ee09ff4a8f7b1bd.min.js:104:91069)

The cluster seems to work fine though and we can provision volumes - so it looks like a UI glitch (although it makes it rather hard to manage / monitor state...)

Operator projects using the removed APIs in k8s 1.22 requires changes.

Problem Description

Kubernetes has been deprecating APIs, which are removed and no longer available in 1.22. Operator projects using these API versions will not work on Kubernetes 1.22 or on any cluster vendor using this Kubernetes version (1.22), such as OpenShift 4.9+. The following are the APIs your project is most likely to be affected by:

  • apiextensions.k8s.io/v1beta1: (Used for CRDs and available since v1.16)
  • rbac.authorization.k8s.io/v1beta1: (Used for RBAC/rules and available since v1.8)
  • admissionregistration.k8s.io/v1beta1 (Used for Webhooks and available since v1.16)

Therefore, it looks like this project distributes solutions via Red Hat Connect with the package name storageos2 and does not contain any version compatible with k8s 1.22/OCP 4.9. Following are some findings from checking the published distributions:

NOTE: The above findings are only about the manifests shipped inside of the distribution. It is not checking the codebase.

How to solve

It would be very nice to see new distributions of this project that no longer use these APIs, so that they can work on Kubernetes 1.22 and newer, published in the Red Hat Connect collection. OpenShift 4.9, for example, will no longer ship operators that still use v1beta1 extension APIs.

Due to the number of options available to build Operators, it is hard to provide direct guidance on updating your operator to support Kubernetes 1.22. Recent versions of the OperatorSDK greater than 1.0.0 and Kubebuilder greater than 3.0.0 scaffold your project with the latest versions of these APIs (all that is generated by tools only). See the guides to upgrade your projects with OperatorSDK Golang, Ansible, Helm or the Kubebuilder one. For APIs other than the ones mentioned above, you will have to check your code for usage of removed API versions and upgrade to newer APIs. The details of this depend on your codebase.

If this project only needs to migrate the API for CRDs and it was built with OperatorSDK versions lower than 1.0.0, then you may be able to solve it with an OperatorSDK version >= v0.18.x and < 1.0.0:

$ operator-sdk generate crds --crd-version=v1
INFO[0000] Running CRD generator.
INFO[0000] CRD generation complete.

Alternatively, you can try to upgrade your manifests with controller-gen (version >= v0.4.1):

If this project does not use Webhooks:

$ controller-gen crd:trivialVersions=true,preserveUnknownFields=false rbac:roleName=manager-role paths="./..."

If this project is using Webhooks:

  1. Add the markers sideEffects and admissionReviewVersions to your webhook (Example with sideEffects=None and admissionReviewVersions={v1,v1beta1}: memcached-operator/api/v1alpha1/memcached_webhook.go):

  2. Run the command:

$ controller-gen crd:trivialVersions=true,preserveUnknownFields=false rbac:roleName=manager-role webhook paths="./..."

For further info and tips see the blog.

Thank you for your attention.

[Kubernetes][KubeSpray] MountVolume.SetUp failed for volume "pvc-371a14b0-8426-4ce9-86f5-abe83e59743b" : exit status 5

In a Kubernetes cluster deployed using KubeSpray, the StorageOS Operator has been deployed, a storageos cluster has been deployed using the operator, and all its pods are running. However, when I try to deploy an app that uses a PVC, the pod stays in the creating state, and this error appears in the describe of the pod: MountVolume.SetUp failed for volume "pvc-371a14b0-8426-4ce9-86f5-abe83e59743b" : exit status 5

The pvc itself:

kubectl get pvc -n test
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-371a14b0-8426-4ce9-86f5-abe83e59743b   2Gi        RWO            fast           3m28s

Any advise?

Thanks

Which fsTypes are supported

I have a zpool mounted on /var/lib/storageos.

Using ext4 as fsType crashes when the pvc is used and the final mount -t ext4 .. is executed.
I cannot find any other supported fsTypes. I have tried setting zfs as the fsType, without luck.

Is this possible in general, or do I have to use ext4?

"node has no NodeID annotation" when attaching a volume

I've built a cluster of 3 managers & 4 workers from scratch just to debug this issue. Below is most/all of the relevant debugging info I could find in that cluster.

Since building clusters only takes a couple of minutes, I've bothered to try StorageOS on kubernetes 1.15, 1.16 and 1.17. All deployments/versions hit the exact same problem.

This is on StorageOS 1.5.2 (latest stable).

What am I missing / doing wrong?

Resources

  1. storageos-operator.yaml 1.5.2 release yaml from github.com/storageos/cluster-operator releases page

  2. storageos-secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "storageos-operator"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  apiUsername: <someBase64>
  apiPassword: <someBase64>
  3. storageos-cluster.yaml
---
apiVersion: "storageos.com/v1"
kind: StorageOSCluster
metadata:
  name: example-storageos
  namespace: storageos-operator
spec:
  secretRefName: storageos-api
  secretRefNamespace: storageos-operator
  k8sDistro: kubernetes
  storageClassName: system
  namespace: storageos
  images:
    nodeContainer: "storageos/node:1.5.2"
  csi:
    enable: true
    deploymentStrategy: deployment
  resources:
    requests:
    memory: "512Mi"
  disableTelemetry: true
  4. nginx-pvc-test.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: system
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: nginx
  name: nginx-pvc-test
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-www
  volumes:
    - name: nginx-www
      persistentVolumeClaim:
        claimName: testpvc

Logs / Debugging info

Summary

This is in chronological order:

storageos-scheduler: failed to filter nodes: id or name not specified

storageos-scheduler: Failed filter with extender at URL http://storageos.storageos.svc.cluster.local:5705/v1/scheduler/filter, code 500

storageos-scheduler: Successfully assigned default/test-storageos-nginx-sc-pvc to k8s-worker3.domain.tld

attachdetach-controller: AttachVolume.Attach failed for volume "pvc-082193b7-f2f4-4f65-acdf-26bcef9f58cd" : node "k8s-worker3.domain.tld" has no NodeID annotation

kubelet k8s-worker3.domain.tld: Unable to attach or mount volumes: unmounted volumes=[nginx-www], unattached volumes=[nginx-www default-token-ghb9h]: timed out waiting for the condition

storageos node ls

$ storageos node ls
NAME          ADDRESS     HEALTH             SCHEDULER  VOLUMES     TOTAL   USED    VERSION
k8s-worker0.domain.tld  172.16.0.10  Healthy 5 minutes  false      M: 0, R: 0  112GiB  14.91%  1.5.2
k8s-worker1.domain.tld  172.16.0.11  Healthy 5 minutes  false      M: 0, R: 0  112GiB  14.40%  1.5.2
k8s-worker2.domain.tld  172.16.0.12  Healthy 5 minutes  true       M: 0, R: 0  112GiB  14.61%  1.5.2
k8s-worker3.domain.tld  172.16.0.13  Healthy 5 minutes  false      M: 1, R: 0  112GiB  13.68%  1.5.2

storageos cluster health

$ storageos cluster health
NODE          CP_STATUS  DP_STATUS
k8s-worker0.domain.tld  Healthy    Healthy
k8s-worker1.domain.tld  Healthy    Healthy
k8s-worker2.domain.tld  Healthy    Healthy
k8s-worker3.domain.tld  Healthy    Healthy

storageos volume ls

$ storageos volume ls
NAMESPACE/NAME                                    SIZE  MOUNT  SELECTOR  STATUS  REPLICAS  LOCATION
default/pvc-082193b7-f2f4-4f65-acdf-26bcef9f58cd  1GiB                   active  0/0       k8s-worker3.domain.tld (healthy)

kubectl get -A storageosclusters.storageos.com

$ kubectl get -A storageosclusters.storageos.com
NAMESPACE            NAME                READY   STATUS    AGE
storageos-operator   example-storageos   4/4     Running   13m

kubectl describe -A storageosclusters.storageos.com

$ kubectl describe -A storageosclusters.storageos.com
Name:         example-storageos
Namespace:    storageos-operator
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"storageos.com/v1","kind":"StorageOSCluster","metadata":{"annotations":{},"name":"example-storageos","namespace":"storageos-...
API Version:  storageos.com/v1
Kind:         StorageOSCluster
Metadata:
  Creation Timestamp:  2020-01-08T18:35:41Z
  Finalizers:
    finalizer.storageoscluster.storageos.com
  Generation:        73
  Resource Version:  11250
  Self Link:         /apis/storageos.com/v1/namespaces/storageos-operator/storageosclusters/example-storageos
  UID:               f4489ca9-723b-4f12-a871-83c6a4a55620
Spec:
  Csi:
    Deployment Strategy:  deployment
    Enable:               true
  Disable Telemetry:      true
  Images:
    Csi Cluster Driver Registrar Container:  quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1
    Csi External Attacher Container:         quay.io/k8scsi/csi-attacher:v2.0.0
    Csi External Provisioner Container:      storageos/csi-provisioner:v1.4.0
    Csi Liveness Probe Container:            quay.io/k8scsi/livenessprobe:v1.1.0
    Csi Node Driver Registrar Container:     quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
    Hyperkube Container:                     gcr.io/google_containers/hyperkube:v1.17.0
    Init Container:                          storageos/init:1.0.0
    Node Container:                          storageos/node:1.5.2
  Ingress:
  Join:       172.16.0.11,172.16.0.13,172.16.0.10,172.16.0.12
  k8sDistro:  kubernetes
  Kv Backend:
  Namespace:  storageos
  Resources:
  Secret Ref Name:       storageos-api
  Secret Ref Namespace:  storageos-operator
  Service:
    External Port:     5705
    Internal Port:     5705
    Name:              storageos
    Type:              ClusterIP
  Storage Class Name:  system
Status:
  Members:
    Ready:
      172.16.0.11
      172.16.0.13
      172.16.0.10
      172.16.0.12
  Node Health Status:
    172.16.0.10:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
    172.16.0.11:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
    172.16.0.12:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
    172.16.0.13:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
  Nodes:
    172.16.0.11
    172.16.0.13
    172.16.0.10
    172.16.0.12
  Phase:  Running
  Ready:  4/4
Events:
  Type     Reason         Age   From                       Message
  ----     ------         ----  ----                       -------
  Warning  ChangedStatus  14m   storageoscluster-operator  0/4 StorageOS nodes are functional
  Warning  ChangedStatus  14m   storageoscluster-operator  1/4 StorageOS nodes are functional
  Warning  ChangedStatus  14m   storageoscluster-operator  2/4 StorageOS nodes are functional
  Normal   ChangedStatus  14m   storageoscluster-operator  4/4 StorageOS nodes are functional. Cluster healthy
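
The Spec above is driven by a StorageOSCluster custom resource. A minimal sketch of an equivalent manifest, applied via a heredoc; field names mirror the Spec block above, the image overrides and other defaulted fields are omitted, and the secret reference values are taken from this example:

$ kubectl apply -f - <<EOF
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
  name: example-storageos
  namespace: storageos-operator
spec:
  secretRefName: storageos-api
  secretRefNamespace: storageos-operator
  namespace: storageos
  storageClassName: system
  k8sDistro: kubernetes
  disableTelemetry: true
  csi:
    enable: true
    deploymentStrategy: deployment
EOF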

storageos logs -f (before/during failure)

$ storageos logs -l debug
$ storageos logs -f
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker0.domain.tld module=logger category=streamer msg="starting remote" category="<nil>" host="<nil>" module="<nil>" name=k8s-worker3.domain.tld url="map[ForceQuery:false Fragment: Host:172.16.0.13:5705 Opaque: Path:/v1/logs/9622279b-931c-669e-0fde-6a4b0a6893a0 RawPath: RawQuery: Scheme:ws User:<nil>]"
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker0.domain.tld module=logger category=streamer msg="connecting to remote" category="<nil>" host="<nil>" module="<nil>" name=k8s-worker3.domain.tld url="map[ForceQuery:false Fragment: Host:172.16.0.13:5705 Opaque: Path:/v1/logs/9622279b-931c-669e-0fde-6a4b0a6893a0 RawPath: RawQuery: Scheme:ws User:<nil>]"
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker0.domain.tld module=logger category=streamer msg="starting remote" category="<nil>" host="<nil>" module="<nil>" name=k8s-worker2.domain.tld url="map[ForceQuery:false Fragment: Host:172.16.0.12:5705 Opaque: Path:/v1/logs/5c8836d3-8673-0359-2724-bb191b238fbf RawPath: RawQuery: Scheme:ws User:<nil>]"
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker0.domain.tld module=logger category=streamer msg="connecting to remote" category="<nil>" host="<nil>" module="<nil>" name=k8s-worker2.domain.tld url="map[ForceQuery:false Fragment: Host:172.16.0.12:5705 Opaque: Path:/v1/logs/5c8836d3-8673-0359-2724-bb191b238fbf RawPath: RawQuery: Scheme:ws User:<nil>]"
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker3.domain.tld module=logger category=streamer msg="finished starting hoses" category="<nil>" host="<nil>" module="<nil>"
time="2020-01-08T19:11:14Z" level=debug host=k8s-worker2.domain.tld module=logger category=streamer msg="finished starting hoses" category="<nil>" host="<nil>" module="<nil>"
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker2.domain.tld module=scheduler category=leader msg="syncing node state from store" action=establish category="<nil>" host="<nil>" module="<nil>" term=1
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:11:40Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker2.domain.tld module=scheduler category=leader msg="syncing node state from store" action=establish category="<nil>" host="<nil>" module="<nil>" term=1
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:10Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:15Z" level=info host=k8s-worker1.domain.tld module=csi msg="ensuring volume" host="<nil>" labels="map[]" module="<nil>" name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 namespace=default size=1
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=csi msg="checking licence" host="<nil>" licensed_capacity=50 module="<nil>" namespace=default provisioned_capacity=0 requested_capacity=1 status="this cluster has the default (basic) licence" volume=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964
time="2020-01-08T19:12:15Z" level=info host=k8s-worker1.domain.tld module=csi msg="creating volume" host="<nil>" labels="map[]" module="<nil>" name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 namespace=default size=1
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=2 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3482 watcher=volume
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=2 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3482 watcher=volume
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=2 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3482 watcher=volume
time="2020-01-08T19:12:15Z" level=info host=k8s-worker1.domain.tld module=csi msg="waiting for volume" host="<nil>" module="<nil>" namespace=default volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=taskrunner category=volume_ingress msg=nodes action=propose available=4 category="<nil>" host="<nil>" module="<nil>" namespace=default volume=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=store category=client msg="got lock" action=lock category="<nil>" host="<nil>" iterations=0 key=locks/mount/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>"
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3488 watcher=volume
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3488 watcher=volume
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 volume_name=pvc-c44b645d-d6e9-4a5f-860f-ee92db93b964 volume_revision=3488 watcher=volume
time="2020-01-08T19:12:15Z" level=error host=k8s-worker1.domain.tld module=cp msg="failed to filter nodes: id or name not specified" host="<nil>" module="<nil>"
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=store category=client msg="received stop signal" action=refreshlock category="<nil>" host="<nil>" key=locks/mount/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>"
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=client msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=director-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=director-presentation msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=fs-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=1 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker1.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=client msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=director-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=director-presentation msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=fs-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=1 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker2.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=client msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=director-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=director-presentation msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=fs-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=1 host="<nil>" module="<nil>" resource=fsVolume revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=fs-presentation msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=1 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker0.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=rdb msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=director-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="[]" revision=3488
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=director-presentation msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:15Z" level=debug host=k8s-worker3.domain.tld module=fs-volume msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:16Z" level=debug host=k8s-worker3.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=1 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:16Z" level=debug host=k8s-worker3.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:16Z" level=debug host=k8s-worker3.domain.tld module=stats msg="creating config" action=create host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:16Z" level=info host=k8s-worker1.domain.tld module=csi msg="volume ready" host="<nil>" module="<nil>" namespace=default volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:16Z" level=info host=k8s-worker2.domain.tld module=pod-scheduler msg="Scheduled pod" host="<nil>" module="<nil>" score="[map[Host:k8s-worker3.domain.tld Score:15] map[Host:k8s-worker0.domain.tld Score:5] map[Host:k8s-worker2.domain.tld Score:5] map[Host:k8s-worker1.domain.tld Score:5]]"
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker1.domain.tld module=store category=client msg="got lock" action=lock category="<nil>" host="<nil>" iterations=0 key=locks/mount/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>"
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=8 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 watcher=volume
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=8 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 watcher=volume
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=8 host="<nil>" key=volumes/default/d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 module="<nil>" tree=volumes volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 watcher=volume
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=fs-presentation msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker0.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=rdb msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=fs-presentation msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:33Z" level=debug host=k8s-worker3.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker1.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker3.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/9622279b-931c-669e-0fde-6a4b0a6893a0 module="<nil>" node_id=9622279b-931c-669e-0fde-6a4b0a6893a0 node_name=k8s-worker3.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker1.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker1.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker1.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker2.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker2.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsPresentation revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker2.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker2.domain.tld module=stats msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-113084 inode=113084 module="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker0.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsVolume revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker3.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:34Z" level=debug host=k8s-worker3.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsVolume revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker1.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsVolume revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker2.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker2.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker2.domain.tld module=statesync msg="publishing volume lifecycle event" event_type=4 host="<nil>" module="<nil>" resource=fsVolume revision=3488 volume_id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker0.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker0.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker0.domain.tld module=director-presentation msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker3.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:35Z" level=debug host=k8s-worker3.domain.tld module=director-presentation msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker1.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker1.domain.tld module=director-volume msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker1.domain.tld module=director-presentation msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5 inode=113084 module="<nil>" revision=3488 target=95326
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker0.domain.tld module=client msg="skipping due to missing dependencies" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker0.domain.tld module=director-volume msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:36Z" level=debug host=k8s-worker3.domain.tld module=director-volume msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" replicas="<nil>" revision=3488
time="2020-01-08T19:12:37Z" level=debug host=k8s-worker0.domain.tld module=client msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:38Z" level=debug host=k8s-worker1.domain.tld module=client msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:38Z" level=debug host=k8s-worker2.domain.tld module=client msg="deleting config" action=delete host="<nil>" id=d9e9c01f-cc01-a655-eacc-3e2a7611a3f5-95326 inode=95326 module="<nil>" revision=3488
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=scheduler category=leader msg="syncing node state from store" action=establish category="<nil>" host="<nil>" module="<nil>" term=1
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5d8e5874-13bb-1730-2ce8-f8a32234d1d4 module="<nil>" node_id=5d8e5874-13bb-1730-2ce8-f8a32234d1d4 node_name=k8s-worker1.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/5c8836d3-8673-0359-2724-bb191b238fbf module="<nil>" node_id=5c8836d3-8673-0359-2724-bb191b238fbf node_name=k8s-worker2.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker0.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
time="2020-01-08T19:12:40Z" level=debug host=k8s-worker2.domain.tld module=watcher msg="update cache" action=1 host="<nil>" key=nodes/4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 module="<nil>" node_id=4f20f80b-8a4b-cb88-aa45-8e5b3fe17726 node_name=k8s-worker0.domain.tld tree=nodes watcher=node
