
kubernetes / cloud-provider-openstack


License: Apache License 2.0

Makefile 0.62% Shell 5.42% Go 91.58% Dockerfile 0.74% Mustache 0.69% Smarty 0.17% Jinja 0.43% Python 0.35%
openstack kubernetes cloud-controller-manager csi-plugin k8s-sig-cloud-provider k8s-sig-storage

cloud-provider-openstack's Introduction

Cloud Provider OpenStack

Thank you for visiting the Cloud Provider OpenStack repository!

This repository hosts various plugins relevant to the integration of OpenStack and Kubernetes.

NOTE:

  • The Cinder Standalone Provisioner, Manila Provisioner and Cinder FlexVolume Driver were removed in release v1.18.0.
  • Version 1.17 was the last release of the Manila Provisioner, which is no longer maintained. Due to dependency issues, we removed the code from master, but it is still accessible in the release-1.17 branch. Please consider migrating to the Manila CSI Plugin.
  • Starting from release v1.26.0, neutron lbaasv1 support is removed and only Octavia is supported.

Developing

Refer to the Getting Started Guide for setting up a development environment and contributing.

Contact

Please join us on the provider-openstack channel on Kubernetes Slack.

Project Co-Leads:

  • @dulek - Michał Dulko
  • @jichenjc - Chen Ji
  • @kayrus
  • @zetaab - Jesse Haka

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

cloud-provider-openstack's People

Contributors

adisky, aglitke, anguslees, chrigl, dagnello, dims, dixudx, dulek, edisonxiang, fedosin, feiskyer, fengyunpan2, gman0, ixdy, jianglingxia, jichenjc, johscheuer, jsafrane, k8s-ci-robot, kayrus, kiwik, lingxiankong, mdbooth, mikedanese, mikejoh, ramineni, saad-ali, scruplelesswizard, thockin, zetaab


cloud-provider-openstack's Issues

make vet fmt is broken

$ make vet fmt
cd /Users/dims/go/src/git.openstack.org/openstack/openstack-cloud-controller-manager && go vet ./...
# git.openstack.org/openstack/openstack-cloud-controller-manager/pkg/volume/cinder/provisioner
pkg/volume/cinder/provisioner/iscsi_test.go:61:4: err declared but not used
pkg/volume/cinder/provisioner/iscsi_test.go:105:4: err declared but not used
pkg/volume/cinder/provisioner/iscsi_test.go:153:4: err declared but not used
pkg/volume/cinder/provisioner/rbd_test.go:78:4: err declared but not used
vet: typecheck failures
make: *** [vet] Error 2
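
For reference, the failures come from test code that declares err without using it. A minimal, self-contained sketch of the pattern and the usual fix (fetchStatus is a hypothetical stand-in, not the actual test helper):

package example

import "testing"

// fetchStatus is a hypothetical stand-in for whatever call the real tests make;
// it only exists to illustrate the vet failure and its fix.
func fetchStatus() (string, error) { return "available", nil }

// Broken pattern reported by `go vet`: err is declared but never used.
//
//	status, err := fetchStatus()
//	_ = status
//
// Fixed pattern: check (or explicitly discard) the error.
func TestFetchStatus(t *testing.T) {
	status, err := fetchStatus()
	if err != nil {
		t.Fatalf("fetchStatus failed: %v", err)
	}
	if status != "available" {
		t.Fatalf("unexpected status %q", status)
	}
}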

Allow for the CSI cinder driver to interact with nodes not managed by OpenStack

/kind feature

The CSI driver requires the Kubernetes cluster to be running in an OpenStack cloud. This expectation leaves out scenarios where Kubernetes is not running as part of a tenant in an OpenStack-managed VM.

It should be possible to refactor the plugin in a way that we can provide support for both instance and bare-metal (BM) volume attachments, either as part of an OpenStack tenant or not.

The goal is to support the following scenarios:

Kubernetes running side-by-side with OpenStack:

  • 2 separate clouds (hardware) that can talk to each other.
  • Kubernetes has access to the OpenStack cloud and wants to use Cinder for volume management

Kubernetes running inside an OpenStack VM:

  • This is the scenario that is currently supported

Kubernetes running on a BM node as part of an OpenStack tenant:

  • This may work with the current implementation but I haven't fully tested it.

It's possible that we may need some changes in the current CSI implementation to be able to gather more information about the target node in advance. See this issue for more info: container-storage-interface/spec#216

That said, I believe this could be implemented, perhaps not in the most ideal way, with what we have today in gophercloud and the CPO implementation.

kubectl fails with openstack auth-provider

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
kubectl fails with openstack auth-provider.

$ kubectl get node                                                                                                                                                     
panic: assignment to entry in nil map                                                                                                                                  
                                                                                                                                                                       
goroutine 1 [running]:                                                                                                                                                 
k8s.io/kubernetes/vendor/k8s.io/client-go/plugin/pkg/client/auth/openstack.newOpenstackAuthProvider(0xc420984720, 0x1b, 0x0, 0x2a41500, 0xc4202d74e0, 0x1e08701, 0x6, 0xc4207a74c8, 0xae9d24)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/plugin/pkg/client/auth/openstack/openstack.go:137 +0x25a
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.GetAuthProvider(0xc420984720, 0x1b, 0xc420255a60, 0x2a41500, 0xc4202d74e0, 0x0, 0x0, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/plugin.go:72 +0x114       
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Config).TransportConfig(0xc42015ba40, 0xc42036f5de, 0x1, 0x0)                       
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/transport.go:63 +0x66c    
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.TransportFor(0xc42015ba40, 0xc420404900, 0xc42036f5de, 0x1, 0x0)                      
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/transport.go:40 +0x2f     
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.UnversionedRESTClientFor(0xc42015ba40, 0x0, 0x0, 0x4116ad)                            
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/config.go:235 +0x9b       
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.NewDiscoveryClientForConfig(0xc42015af00, 0xc4207829a0, 0x0, 0x0)                
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:378 +0x9b
k8s.io/kubernetes/pkg/kubectl/cmd/util.(*discoveryFactory).DiscoveryClient(0xc420429c40, 0x41e886, 0xc4209981e0, 0x7f6bd03c37e0, 0x429b40)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/factory_client_access.go:113 +0xa1
k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ring0Factory).DiscoveryClient(0xc420046300, 0xc420297560, 0x7f6bd03d453d, 0x8, 0xab)        
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/factory_client_access.go:186 +0x34
k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ring1Factory).CategoryExpander(0xc420046330, 0xc420046330, 0xc4209981e0)                    
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/factory_object_mapping.go:113 +0x66
k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ring2Factory).NewBuilder(0xc420429c60, 0x0)                                                 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/factory_builder.go:156 +0x16b     
k8s.io/kubernetes/pkg/kubectl/cmd/util.(*factory).NewBuilder(0xc420046360, 0xd411031)                                                
        <autogenerated>:1 +0x3d  
k8s.io/kubernetes/pkg/kubectl/resource.(*Builder).Flatten(...)    
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/resource/get.go:235                    
k8s.io/kubernetes/pkg/kubectl/cmd/resource.(*GetOptions).Run(0xc4200de840, 0x2a6f640, 0xc420046360, 0xc42017dd40, 0xc4201602c0, 0x1, 0x1, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/resource/get.go:247 +0xfd              
k8s.io/kubernetes/pkg/kubectl/cmd/resource.NewCmdGet.func1(0xc42017dd40, 0xc4201602c0, 0x1, 0x1)                                     
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/resource/get.go:149 +0x115             
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc42017dd40, 0xc420160260, 0x1, 0x1, 0xc42017dd40, 0xc420160260) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234    
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420174000, 0x5000107, 0x0, 0xffffffffffffffff)                
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe    
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420174000, 0xc420046360, 0x2a42440)                            
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b     
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)                   
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:41 +0xd5                    
main.main()                      
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:27 +0x26                        
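
The panic points at an assignment into a map that was never initialized. A minimal, self-contained Go illustration of that failure mode and the usual fix (the variable names are hypothetical, not the actual client-go code):

package main

import "fmt"

func main() {
	// This mirrors the failure mode: a nil map can be read from,
	// but writing to it panics with "assignment to entry in nil map".
	var config map[string]string // nil until initialized

	// config["ttl"] = "10m" // would panic: assignment to entry in nil map

	// The usual fix is to initialize the map before writing to it.
	if config == nil {
		config = make(map[string]string)
	}
	config["ttl"] = "10m"
	fmt.Println(config)
}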

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Run the commands below according to the keystone auth document:

kubectl config set-credentials openstackuser --auth-provider=openstack
kubectl config set-context --cluster=kubernetes --user=openstackuser openstackuser@kubernetes
kubectl config use-context openstackuser@kubernetes

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
    Ubuntu 16.04.3 LTS
  • Kernel (e.g. uname -a):
    4.4.0-112-generic
  • Install tools:
  • Others:
    kubectl v1.9.2

TestVolumes failed caused by volume status did not change to expected status after 30s

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
Running TestVolumes failed because the volume status did not change to the expected status after 30s. However, according to the logs [1][2], it doesn't wait at all before checking the expected volume status and executes the post commands immediately rather than waiting up to 30s. We wonder if this is a test case bug.

[1] http://80.158.23.49/logs/5/5/4c8c41c009fb5543116c1130f37de49f7eed2556/cloud-provider-openstack-all/cloud-provider-openstack-unittest/7d1c865/job-output.txt.gz
[2] http://80.158.23.49/logs/5/5/133543ee3dd987b533f8f62fd98f99d8a995377e/check-generic-cloud/cloud-provider-openstack-unittest/27e5859/job-output.txt.gz

What you expected to happen:
Wait 30s to check the expected volume status.

How to reproduce it (as minimally and precisely as possible):
Run TestVolumes unit test
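
For reference, a status wait typically polls until the volume reaches the expected state or the timeout elapses, roughly as in the sketch below (getVolumeStatus is a hypothetical stand-in for the real Cinder lookup, not the code under test):

package main

import (
	"errors"
	"fmt"
	"time"
)

// getVolumeStatus is a hypothetical stand-in for the real Cinder status lookup.
func getVolumeStatus(volumeID string) (string, error) { return "available", nil }

// waitForVolumeStatus polls every interval until the volume reaches the
// expected status or the timeout elapses.
func waitForVolumeStatus(volumeID, expected string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		status, err := getVolumeStatus(volumeID)
		if err != nil {
			return err
		}
		if status == expected {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("volume status did not change to expected status in time")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForVolumeStatus("vol-id", "available", 1*time.Second, 30*time.Second)
	fmt.Println(err)
}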

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

standalone-cinder-provisioner deployment creates a pod in CrashLoopBackOff status, not running

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

Uncomment only one, leave it on its own line:
/kind bug

What happened:
I deployed a cluster in a bare-metal environment, then created the deployment named standalone-cinder-provisioner, but the created pod is not running and its status is CrashLoopBackOff. Is this an image problem?
The image I use is quay.io/external_storage/standalone-cinder-provisioner:latest (pulled with docker pull).
The deployment YAML:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: standalone-cinder-provisioner
  labels:
    app: standalone-cinder-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: standalone-cinder-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: standalone-cinder-provisioner
    spec:
      containers:
      - name: standalone-cinder-provisioner
        image: "quay.io/external_storage/standalone-cinder-provisioner:latest"
        imagePullPolicy: IfNotPresent
        env:
        - name: OS_CINDER_ENDPOINT
          value: http://10.1.101.1:8776/v2
        - name: OS_USERNAME
          value: k8s
        - name: OS_TENANT_NAME
          value: kubernetes

[root@localhost:/home/ubuntu/yaml-baremetal]$ kubectl get po
NAME READY STATUS RESTARTS AGE
standalone-cinder-provisioner-6797c5db6b-rtjqc 0/1 CrashLoopBackOff 19 1h

the kubelet log:

I0403 02:33:00.122888 20622 kuberuntime_manager.go:739] checking backoff for container "standalone-cinder-provisioner" in pod "standalone-cinder-provisioner-6797c5db6b-rtjqc_default(87835eee-3699-11e8-bfec-4c09b4b0c25b)"
I0403 02:33:00.122967 20622 kuberuntime_manager.go:749] Back-off 5m0s restarting failed container=standalone-cinder-provisioner pod=standalone-cinder-provisioner-6797c5db6b-rtjqc_default(87835eee-3699-11e8-bfec-4c09b4b0c25b)
E0403 02:33:00.122995 20622 pod_workers.go:182] Error syncing pod 87835eee-3699-11e8-bfec-4c09b4b0c25b ("standalone-cinder-provisioner-6797c5db6b-rtjqc_default(87835eee-3699-11e8-bfec-4c09b4b0c25b)"), skipping: failed to "StartContainer" for "standalone-cinder-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=standalone-cinder-provisioner pod=standalone-cinder-provisioner-6797c5db6b-rtjqc_default(87835eee-3699-11e8-bfec-4c09b4b0c25b)"

What you expected to happen:
The pod created by the deployment should be running.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment: k8s v1.8.5

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Create a periodic job for e2e conformance test and upload results to test grid

Now that we have the e2e conformance test running against PR(s), we need a CI job that runs the same thing periodically (every 6 hours? 12 hours?). Here are some notes:

  • We need a periodic job for Kubernetes/Kubernetes master against cloud-provider-openstack master (so we know if we break something)
  • We need a periodic job for Kubernetes/Kubernetes release-1.10 branch against cloud-provider-openstack master (so we make sure we don't break anyone who wants to use our code against the last release)
  • Need to create a GCS bucket and upload files for each run, so we can display the results in kubernetes test-grid. See details in [1]

Nits:

  • test_results.html is empty because the JUnit XML file is under the kubernetes directory (it's junit_01.xml)

Links:

CSI Plugin needs to check for existing volumes with same name on create

kind bug

What happened:
The CSI spec states that the CreateVolume call is idempotent and should not create duplicate volumes if/when called multiple times with the same Volume Name. The current implementation does not check for existing Volumes with the specified Name and thus allows creation of duplicate named Volumes.

What you expected to happen:
The CSI plugin should respond with the existing Volume and NOT create a new Volume.

How to reproduce it (as minimally and precisely as possible):
Run the CSI Plugin and issue a create call multiple times with the same name, then check out the Cinder List and you'll see multiple volumes with the same name.
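
A sketch of the kind of check CreateVolume would need: list Cinder volumes filtered by the requested name and return the existing one if found, creating only otherwise. This is an illustrative sketch against the gophercloud v3 block-storage API, not the actual plugin code:

package example

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
)

// getOrCreateVolume returns an existing Cinder volume with the given name,
// or creates one if none exists, so repeated CreateVolume calls stay idempotent.
func getOrCreateVolume(client *gophercloud.ServiceClient, name string, sizeGB int) (*volumes.Volume, error) {
	// Look for an existing volume with the requested name first.
	pages, err := volumes.List(client, volumes.ListOpts{Name: name}).AllPages()
	if err != nil {
		return nil, err
	}
	existing, err := volumes.ExtractVolumes(pages)
	if err != nil {
		return nil, err
	}
	if len(existing) > 0 {
		return &existing[0], nil
	}

	// No volume with that name yet: create it.
	return volumes.Create(client, volumes.CreateOpts{Name: name, Size: sizeGB}).Extract()
}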

Anything else we need to know?:
PR is in progress by jgriffith

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Migrate from glide to dep

We currently use Glide for dependency management, but that project is not actively developed and its authors recommend migrating to dep before it's too late.

Unable to mount Openstack Cinder Volume in a pod

From @walteraa on March 2, 2018 20:56

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
Using Openstack cloud provider, I am not able to mount my Cinder volumes in pods. I am getting this error in my pod events:

Events:
  Type     Reason                 Age   From                               Message
  ----     ------                 ----  ----                               -------
  Normal   Scheduled              16s   default-scheduler                  Successfully assigned mongo-controller-5sktj to walter-atmosphere-minion
  Normal   SuccessfulMountVolume  16s   kubelet, walter-atmosphere-minion  MountVolume.SetUp succeeded for volume "default-token-7cx2x"
  Warning  FailedMount            16s   kubelet, walter-atmosphere-minion  MountVolume.SetUp failed for volume "walter-test" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7df75a03-1e58-11e8-93a7-fa163ec86641/volumes/kubernetes.io~cinder/walter-test --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/ea7e96fe-24cb-40f3-9fb3-420ac7ac1752 /var/lib/kubelet/pods/7df75a03-1e58-11e8-93a7-fa163ec86641/volumes/kubernetes.io~cinder/walter-test
Output: Running scope as unit run-r488c59ffc9324542af0c41f646f6ff99.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/ea7e96fe-24cb-40f3-9fb3-420ac7ac1752 does not exist
  Warning  FailedMount  15s  kubelet, walter-atmosphere-minion  MountVolume.SetUp failed for volume "walter-test" : mount failed: exit status 32
Mounting command: systemd-run

My openstack-cloud-provider is showing the following error:

ERROR: logging before flag.Parse: I0302 20:36:33.026783       1 openstack_instances.go:46] Claiming to support Instances
ERROR: logging before flag.Parse: I0302 20:36:38.029334       1 openstack_instances.go:46] Claiming to support Instances
ERROR: logging before flag.Parse: I0302 20:36:43.035928       1 openstack_instances.go:46] Claiming to support Instances
(...)

It is important to know that, before this error, I was getting another error:

ERROR: logging before flag.Parse: E0302 18:34:19.759493       1 reflector.go:205] git.openstack.org/openstack/openstack-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/cloud/pvlcontroller.go:109: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:pvl-controller" cannot list persistentvolumes at the cluster scope

Then I worked around it by running the following commands:

kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin-1 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:pvl-controller kube-system-cluster-admin-2 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:cloud-node-controller kube-system-cluster-admin-3 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:cloud-controller-manager kube-system-cluster-admin-4 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:shared-informers kube-system-cluster-admin-5 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:kube-controller-manager  kube-system-cluster-admin-6 --clusterrole cluster-admin

What you expected to happen:
I expect that my Cinder Openstack volume could be mounted in my pod.

How to reproduce it (as minimally and precisely as possible):

  • Deploy the openstack-cloud-provider in your cluster by running the command kubectl create -f https://raw.githubusercontent.com/dims/openstack-cloud-controller-manager/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
    • I made sure that it works by creating an internal service LoadBalancer and it works fine for me.
    • I had to apply the workaround (creating the permissive bindings) mentioned before, because my controller wasn't able to access the persistent volume API.
  • Create a volume in OpenStack
    • I created it by running the command openstack volume create walter-test --size 10, which gave me a volume:
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | cinderAZ_1                                                       |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2018-03-02T20:17:31.408441                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | ea7e96fe-24cb-40f3-9fb3-420ac7ac1752                             |
| multiattach         | False                                                            |
| name                | walter-test                                                      |
| properties          |                                                                  |
| replication_status  | disabled                                                         |
| size                | 10                                                               |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | None                                                             |
| updated_at          | None                                                             |
| user_id             | f0cc6d2bcea9d6fe9c2b68264e7d9343c537323c0243d068a0eb119c05fc3c45 |
+---------------------+------------------------------------------------------------------+
  • I've created the following resources:
---

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "walter-test"
spec:
  storageClassName: cinder
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  cinder:
    fsType: ext4
    volumeID: ea7e96fe-24cb-40f3-9fb3-420ac7ac1752

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: atmosphere-pv-claim
spec:
  storageClassName: cinder
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

---

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      volumes:
        - name: atmosphere-storage
          persistentVolumeClaim:
           claimName: atmosphere-pv-claim
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
            - name: atmosphere-storage
              mountPath: /data/db
  • Now you can see the pod stuck in "ContainerCreating" status
ubuntu@walter-atmosphere:~$ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
mongo-controller-5sktj   0/1       ContainerCreating   0          21m

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version: dims/openstack-cloud-controller-manager:0.1.0
  • OS (e.g. from /etc/os-release): Ubuntu
  • Kernel (e.g. uname -a): Linux walter-atmosphere 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others:

Copied from original issue: dims/openstack-cloud-controller-manager#81

The cinder-endpoint way of deploying the provisioner is not supported by the standalone-cinder-provisioner image

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
I use the cinder-endpoint way to deploy the provisioner, but found that the image quay.io/external_storage/standalone-cinder-provisioner:latest is not updated and not supported. I then planned to build an image from the master branch code, so I cloned the cloud-provider-openstack code and ran make, which requires glide. After installing glide and running make again, I got the log below. Can you help me, or am I doing something wrong somewhere? Thanks.

[INFO] --> Fetching k8s.io/apiextensions-apiserver
[INFO] --> Fetching k8s.io/kube-openapi
[INFO] --> Fetching k8s.io/kubernetes
[WARN] Unable to checkout golang.org/x/net
[ERROR] Update failed for golang.org/x/net: Cannot detect VCS
[WARN] Unable to checkout golang.org/x/crypto
[ERROR] Update failed for golang.org/x/crypto: Cannot detect VCS
[WARN] Unable to checkout golang.org/x/sys
[ERROR] Update failed for golang.org/x/sys: Cannot detect VCS
[WARN] Unable to checkout golang.org/x/text
[ERROR] Update failed for golang.org/x/text: Cannot detect VCS
[WARN] Unable to checkout google.golang.org/genproto
[ERROR] Update failed for google.golang.org/genproto: Cannot detect VCS
[WARN] Unable to checkout golang.org/x/time
[ERROR] Update failed for golang.org/x/time: Cannot detect VCS
[WARN] Unable to checkout google.golang.org/grpc
[ERROR] Update failed for google.golang.org/grpc: Cannot detect VCS

What you expected to happen:
Can you provide a new image?
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Filter main ip address(es)

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:

Goal: have a way to configure Openstack cloud provider to use a specific address for each node.

More precisely, following kubernetes/kubernetes#62163 and #89, it is now possible to disable IPv6 addresses. But what if a node has two (or more) IPv4 and/or IPv6 addresses? For instance, let's have the following configuration:

  status:
    addresses:
    - address: 10.1.0.45
      type: InternalIP
    - address: 2001:41d0:302:1100::a:1720
      type: InternalIP
    - address: 54.38.91.250
      type: InternalIP
    - address: k8s-worker-1
      type: Hostname

The order of addresses is based on comparison between the address string values. So the first one (10.1.0.45) is ideally the main address used by kubernetes.

But it is more problematic for the following ones:

  status:
    addresses:
    - address: 15.13.37.45
      type: InternalIP
    - address: 192.168.27.34
      type: InternalIP
    - address: k8s-worker-1
      type: Hostname

Even if there is no IPv6 address, the public address is used as the Kubernetes main address, which is not really secure, especially if there is a private network.

What you expected to happen:

The OpenStack cloud provider must provide a way to force a specific address to be the main address.
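
One possible way to implement such a filter is to accept an operator-configured CIDR and keep only node addresses that fall inside it; a minimal, self-contained Go sketch (the helper below is hypothetical, not an existing cloud provider option):

package main

import (
	"fmt"
	"net"
)

// filterAddressesByCIDR keeps only the addresses that belong to the given
// subnet, e.g. an operator-configured "internal" CIDR.
func filterAddressesByCIDR(cidr string, addrs []string) ([]string, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	var kept []string
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && subnet.Contains(ip) {
			kept = append(kept, a)
		}
	}
	return kept, nil
}

func main() {
	addrs := []string{"15.13.37.45", "192.168.27.34"}
	kept, _ := filterAddressesByCIDR("192.168.0.0/16", addrs)
	fmt.Println(kept) // [192.168.27.34]
}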

Running Kubernetes IPv6 CI tests on OpenStack cloud

Based on one from @leblancd

Let's start with the conformance test suite and see what it would take to enable IPv6 based job

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: 1.10.0
api:
  advertiseAddress: fd00::100
networking:
  serviceSubnet: fd00:1234::/110
unifiedControlPlaneImage: diverdane/hyperkube-amd64:v1.9.0-beta.0.ipv6.2
tokenTTL: 0s
nodeName: kube-master
  • After that they use the following snippet to run tests:
export KUBECONFIG=/home/openstack/.kube/config
export KUBE_MASTER=local
export KUBE_MASTER_IP="[fd00:1234::1]:443"
export KUBERNETES_CONFORMANCE_TEST=n
cd $GOPATH/src/k8s.io/kubernetes
go run hack/e2e.go -- --provider=local --v --test --test_args="--host=https://[fd00:1234::1]:443 --ginkgo.focus=Networking|Services --ginkgo.skip=IPv4|Networking-Performance|Federation|preserve\ssource\spod|session\saffinity:\sudp|functioning\sNodePort --num-nodes=2"

Why is the os-initialize_connection initiator not the same VM, causing the volume to stay in attaching status?

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
The standalone cinder provisioner created the PV and the volume, but the volume stays in attaching status. I found in the cinder api.log that the os-initialize_connection initiator is not the same as the VM. Which part did I not configure correctly? Thanks!

[root@localhost:/etc/iscsi]$ kubectl get po
NAME READY STATUS RESTARTS AGE
standalone-cinder-provisioner-5d85c99899-w6vx7 1/1 Running 0 1h

[root@localhost:/etc/iscsi]$ kubectl get sc
NAME PROVISIONER
standard (default) openstack.org/standalone-cinder

[root@localhost:/etc/iscsi]$ kubectl get secret
NAME TYPE DATA AGE
default-token-dl44v kubernetes.io/service-account-token 3 10d
standard-cephx-secret kubernetes.io/rbd 1 1d

[root@localhost:/etc/iscsi]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
keystone-sc-pvc Bound pvc-b4b87e3b-3e72-11e8-a4fc-4c09b4b0c25b 1Gi RWO standard 37m

The cinder volume status, the cinder api.log, and the cinder volume.log were attached as screenshots (images not reproduced here).

What you expected to happen:
The volume should be attached to the VM successfully.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment: k8s v1.8.5

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Document what languages plugins can be written in

This issue is to discuss (and then document) our willingness to host plugins that are written in other programming languages.

There are cases when using a different programming language would make the implementation cleaner, simpler, and more stable. An example of this can be found in the following discussion related to the CSI plugin: #95 (comment)

  • Is it ok for us to host/test code that is not Go?
  • Would the current members and reviewers feel comfortable with maintaining such code?
  • Can we provide testing pipelines for such code?

Starting this issue to have a broader discussion on this topic. I'll make sure to document the consensus.

OpenStack Octavia based ingress controller

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
It would be a great add-on to have an ingress controller option implemented based on the Octavia service for the OpenStack cloud provider, especially for those who have already deployed Octavia, just like GKE does here. FYI, as the official Octavia doc says: Octavia will fully replace Neutron LBaaS as the load balancing solution for OpenStack.

What you expected to happen:
Deploy an ingress-controller with OpenStack Octavia as backend.

How to reproduce it (as minimally and precisely as possible):
None

Anything else we need to know?:
None

Environment:
None

Add pull policy for driver-registrar image to support local image built from latest source code

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
Currently, during the OpenLab CI job the driver-registrar image is always pulled from the remote registry (Docker Hub), which may be obsolete compared to the source code it is built from, so this issue proposes adding a pull policy to support a local image built from the latest source code.

What you expected to happen:
We can keep using the latest image in OpenLab CI job.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Authorization failed because of int type value returned instead of bool

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
When using k8s-keystone-auth container for authorization, kubectl failed with the error: Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/default/pods?limit=500\": v1beta1.SubjectAccessReview: Status: v1beta1.SubjectAccessReviewStatus: Allowed: ReadBool: expect t or f, but found 1, error found in #10 byte of ...|llowed\": 1\n }\n}|..., bigger context ...|mestamp\": null\n },\n \"status\": {\n \"allowed\": 1\n }\n}|...") has prevented the request from succeeding (get pods)

What you expected to happen:
After sourcing the OpenStack credentials, kubectl should work.
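
For reference, the authorization webhook's SubjectAccessReview response must encode status.allowed as a JSON boolean, not an integer. A minimal Go sketch of a well-formed response body (the structs are trimmed down for illustration and are not the actual k8s-keystone-auth code):

package main

import (
	"encoding/json"
	"fmt"
)

// reviewStatus mirrors the relevant part of v1beta1.SubjectAccessReviewStatus:
// Allowed must be a bool so it serializes as true/false, not 1/0.
type reviewStatus struct {
	Allowed bool `json:"allowed"`
}

type reviewResponse struct {
	APIVersion string       `json:"apiVersion"`
	Kind       string       `json:"kind"`
	Status     reviewStatus `json:"status"`
}

func main() {
	resp := reviewResponse{
		APIVersion: "authorization.k8s.io/v1beta1",
		Kind:       "SubjectAccessReview",
		Status:     reviewStatus{Allowed: true},
	}
	out, _ := json.Marshal(resp)
	fmt.Println(string(out)) // ...,"status":{"allowed":true}}
}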

How to reproduce it (as minimally and precisely as possible):

  • k8s-keystone-auth pod is running
  • config --authentication-token-webhook-config-file=/etc/kubernetes/pki/webhookconfig.yaml, --authorization-webhook-config-file=/etc/kubernetes/pki/webhookconfig.yaml and --authorization-mode=Node,Webhook,RBAC for kube-apiserver.yaml
  • source openstack credential
  • kubectl get pods

Anything else we need to know?:
no

Environment:

  • openstack-cloud-controller-manager version: 0ac2e6a
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux lingxian-k8s-master 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others: NA

external network identified as internal

/kind feature

What happened:
public ipv4 node address identified as Internal

What you expected to happen:
public ipv4 node address identified as External

How to reproduce it (as minimally and precisely as possible):
Don't name your openstack public network "public"
Create an instance with an IP on this network

Use best practices to run external CCM

From @dims on March 21, 2018 18:57

This list is from @andrewsykim

  • use hostnetwork: true so you don't depend on CNI to initialize nodes
  • use dnsPolicy: Default since kube-dns likely doesn't tolerate the node.cloudprovider.kubernetes.io/uninitialized taint
  • tolerate node-role.kubernetes.io/master since you probably want it running on master nodes
  • use --leader-elect if you want HA

Copied from original issue: dims/openstack-cloud-controller-manager#92

Add full example for kubernetes.io/cinder provisioner

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
Add a full kubernetes.io/cinder provisioner example to make it easy for beginners to understand the difference from the openstack.org/standalone-cinder provisioner.

What you expected to happen:
Understand kubernetes.io/cinder and openstack.org/standalone-cinder scenario.

How to reproduce it (as minimally and precisely as possible):
NA

Anything else we need to know?:
NA

Environment:
NA

Create cinder volume failed with default availability zone 'nova'

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
Creating a cinder volume failed against a specific cloud provider due to invalid parameters caused by an incorrect availability zone: the default value of 'nova' [1] in this repository doesn't match the real value in the cloud provider.

[1] https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/volume/cinder/volumeservice/actions.go#L75

What you expected to happen:
Creating a cinder volume should succeed by changing the default availability zone to empty.

How to reproduce it (as minimally and precisely as possible):

  1. install openstack cinder standalone service
  2. create a k8s cluster via local-up-cluster.sh
  3. run the cinder-provisioner binary built from this repository
  4. run test from this repository, kubectl apply -f examples/persistent-volume-provisioning/cinder/cinder-full.yaml
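
A sketch of the proposed behaviour in terms of a gophercloud create call: leave AvailabilityZone empty so the Cinder scheduler falls back to its own default instead of a hard-coded 'nova' (illustrative only, not the repository's actual code):

package example

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
)

// createVolume leaves AvailabilityZone unset unless the caller provides one,
// so the Cinder scheduler picks its configured default zone.
func createVolume(client *gophercloud.ServiceClient, name, zone string, sizeGB int) (*volumes.Volume, error) {
	opts := volumes.CreateOpts{
		Name: name,
		Size: sizeGB,
		// An empty string means "let Cinder decide" rather than forcing "nova".
		AvailabilityZone: zone,
	}
	return volumes.Create(client, opts).Extract()
}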

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version: latest
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.4 LTS
  • Kernel (e.g. uname -a): Linux 4.4.0-116-generic
  • Install tools: git, make
  • Others:

Support flexible authorization policy configuration

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
Currently, the authorization policy configuration is not scalable, and it is painful for the admin to set the correct rules. For example, it is impossible to configure a rule that allows users with a specific role in a specific project.

What you expected to happen:
As k8s admin, I need flexible authorization policy configuration like the following:

[
  {
    "resource": {
      "verb": ["get", "list", "watch"],
      "resource": "*",
      "version": "*",
      "namespace": "*"
    },
    "match": [
      {
        "type": "role",
        "value": ["k8s-admin", "k8s-viewer", "k8s-editor"]
      },
      {
        "type": "project",
        "value": ["c1f7910086964990847dc6c8b128f63c"]
      }
    ]
  }
]

How to reproduce it (as minimally and precisely as possible):
NONE

Anything else we need to know?:
NONE

Environment:
NONE


add support to proxy protocol v1 in octavia lbaas

Is this a BUG REPORT or FEATURE REQUEST?: FEATURE

/kind feature

What happened: Currently when we create LBaaS resources, in 100% of use cases (at least mine) I am using protocol TCP when my server is listening for HTTPS. This is a real problem if we want to pass the end-user IP address through to the pod.

What you expected to happen: I expect that the LBaaS could somehow forward the end-user IP address to the pod. This can be done using the PROXY protocol in Octavia LBaaS (the pod should then have PROXY protocol support as well, so it can parse the IP address from the headers).

Support launching a specified binary in local-up-cluster.sh, like cinder-provisioner and so on

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
We have several binaries in repo:

  • openstack-cloud-controller-manager
  • cinder-provisioner
  • cinder-csi-plugin
  • cinder-flex-volume-driver
  • k8s-keystone-auth

Currently local-up-cluster.sh only supports launching and configuring openstack-cloud-controller-manager. Is there any plan to support launching all of the above binaries in local-up-cluster.sh? That would help OpenLab reduce the complexity of its automation scripts, like: https://github.com/theopenlab/openlab-zuul-jobs/blob/master/playbooks/cloud-provider-openstack-acceptance-test-keystone-authentication-authorization/run.yaml#L86-L88 and https://github.com/theopenlab/openlab-zuul-jobs/blob/master/playbooks/cloud-provider-openstack-acceptance-test-keystone-authentication-authorization/run.yaml#L92-L98

What you expected to happen:
We should be able to control which binaries are launched via environment variables in local-up-cluster.sh, as EXTERNAL_CLOUD_PROVIDER_BINARY does now.

How to reproduce it (as minimally and precisely as possible):
NA

Anything else we need to know?:
NA

Environment:

  • openstack-cloud-controller-manager version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Getting a creating VM as TargetNode causes the TestRoutes case to fail

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
In the unit test case TestRoutes, if the first VM in the VM list is in creating status, the VM does not have an IP address allocated yet, which will cause ErrNoAddressFound in the following logic.

https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/cloudprovider/providers/openstack/openstack_routes_test.go#L50

What you expected to happen:
Use a running VM as TargetNode, for example the VM that the unit test is running on.

How to reproduce it (as minimally and precisely as possible):
It happens in OpenLab integration tests; below are the test result and log.
http://logs.openlabtesting.org/logs/77/77/f26a6e3b931cabf01e68eea02c93b64fd3d11908/cloud-provider-openstack-all/cloud-provider-openstack-unittest/3f9e57e/
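
A sketch of how the test could pick a usable TargetNode: list servers filtered to ACTIVE status and take the first one, instead of whatever happens to come first in the unfiltered list (illustrative gophercloud usage, not the actual test code):

package example

import (
	"errors"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
)

// firstActiveServer returns the first server in ACTIVE status, which has
// addresses allocated, unlike a server that is still being created.
func firstActiveServer(client *gophercloud.ServiceClient) (*servers.Server, error) {
	pages, err := servers.List(client, servers.ListOpts{Status: "ACTIVE"}).AllPages()
	if err != nil {
		return nil, err
	}
	all, err := servers.ExtractServers(pages)
	if err != nil {
		return nil, err
	}
	if len(all) == 0 {
		return nil, errors.New("no ACTIVE servers found")
	}
	return &all[0], nil
}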

Anything else we need to know?:
NA

Environment:

  • openstack-cloud-controller-manager version: master

RBAC rules needed for running as pod/daemonset

From @dims on January 15, 2018 20:0

hack we can use for now is ... we need a better way

# Hack for RBAC for all for the new cloud-controller process, we need to do better than this
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin-1 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:pvl-controller kube-system-cluster-admin-2 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:cloud-node-controller kube-system-cluster-admin-3 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:cloud-controller-manager kube-system-cluster-admin-4 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:shared-informers kube-system-cluster-admin-5 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:kube-controller-manager  kube-system-cluster-admin-6 --clusterrole cluster-admin
cluster/kubectl.sh create clusterrolebinding --user system:serviceaccount:kube-system:attachdetach-controller kube-system-cluster-admin-7 --clusterrole cluster-admin
cluster/kubectl.sh set subject clusterrolebinding system:node --group=system:nodes

Copied from original issue: dims/openstack-cloud-controller-manager#12

Filter instances ipv4/6 addresses

/kind feature

What happened:
My nodes are identified either by their IPv4 address or their IPv6 address, according
to the numerical order of the addresses.
Once added to the nodeAddresses list, the type of the address is lost.

What you expected to happen:
Something more consistent: at least either IPv4 addresses or IPv6 addresses.

How to reproduce it (as minimally and precisely as possible):
Set up an OpenStack network with IPv4 & IPv6 support such that your instances
get 2 addresses.

workaround for "x509: failed to load system roots and no roots provided" issue on CoreOS

/kind bug

What happened:
When deploying Kubernetes on CoreOS using kubeadm and a config file specifying cloud-provider: openstack the controller-manager will consistently die with the following error (from docker logs):

error building controller context: cloud provider could not be initialized: could not init cloud provider "openstack": Post https://URL/v2.0/tokens: x509: failed to load system roots and no roots provided

On CoreOS the /etc/ssl/certs files are all symlinks to /usr/share/ca-certificates

When the Kubernetes controller container runs, it seems unable to actually read the files in that directory, and throws an error.

What you expected to happen:
The controller manager does not throw an error and die.

How to reproduce it (as minimally and precisely as possible):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration

# pass in the openstack configuration to Kubernetes.
cloudProvider: openstack
apiServerExtraArgs:
    cloud-provider: openstack
    cloud-config: /etc/cloud/bootstrap/cloud.conf
apiServerExtraVolumes:
- name: oscloudcfg
  hostPath: /etc/cloud/bootstrap/
  mountPath: /etc/cloud/bootstrap/

controllerManagerExtraArgs:
    cloud-provider: openstack
    cloud-config: /etc/cloud/bootstrap/cloud.conf
controllerManagerExtraVolumes:
- name: oscloudcfg
  hostPath: /etc/cloud/bootstrap/
  mountPath: /etc/cloud/bootstrap/

Anything else we need to know?:
The workaround is to add the following sections to your kubeadm.conf file:

apiServerExtraVolumes:
- name: ca-certs
  hostPath: /usr/share/ca-certificates/
  mountPath: /etc/ssl/certs/
controllerManagerExtraVolumes:
- name: ca-certs
  hostPath: /usr/share/ca-certificates/
  mountPath: /etc/ssl/certs/

Environment:

  • openstack-cloud-controller-manager version: N/A
  • OS (e.g. from /etc/os-release):
ID=coreos
VERSION=1632.3.0
VERSION_ID=1632.3.0
BUILD_ID=2018-02-14-0338
  • Kernel (e.g. uname -a):

Linux kubestack-controller0 4.14.19-coreos #1 SMP Wed Feb 14 03:18:05 UTC 2018 x86_64 Intel Core Processor (Haswell, no TSX) GenuineIntel GNU/Linux

  • Install tools: kubeadm
  • Others:
