gardener / gardener-extension-provider-alicloud

Gardener extension controller for the Alibaba cloud provider (https://alibabacloud.com).

Home Page: https://gardener.cloud

License: Apache License 2.0

Shell 2.59% Dockerfile 0.12% Makefile 0.76% Go 95.19% Smarty 0.20% HCL 0.72% Python 0.43%

gardener-extension-provider-alicloud's Introduction


Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor-specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed to support external controllers that contain their own vendor specifics. This way, we can keep the Gardener core clean and independent.

This controller implements Gardener's extension contract for the Alicloud provider.

An example for a ControllerRegistration resource that can be used to register this controller to Gardener can be found here.

Please find more information regarding the extensibility concepts and a detailed proposal here.

Supported Kubernetes versions

This extension controller supports the following Kubernetes versions:

Version | Support | Conformance test results
Kubernetes 1.30 | 1.30.0+ | Gardener v1.30 Conformance Tests
Kubernetes 1.29 | 1.29.0+ | Gardener v1.29 Conformance Tests
Kubernetes 1.28 | 1.28.0+ | Gardener v1.28 Conformance Tests
Kubernetes 1.27 | 1.27.0+ | Gardener v1.27 Conformance Tests
Kubernetes 1.26 | 1.26.0+ | Gardener v1.26 Conformance Tests
Kubernetes 1.25 | 1.25.0+ | Gardener v1.25 Conformance Tests

Please take a look here to see which versions are supported by Gardener in general.


How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start.

Static code checks and tests can be executed by running make verify. We are using Go modules for Golang package dependency management and Ginkgo/Gomega for testing.

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

gardener-extension-provider-alicloud's People

Contributors

acumino, ary1992, ccwienk, dependabot[bot], dguendisch, dimitar-kostadinov, dimityrmirchev, docktofuture, emoinlanyu, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, himanshu-kun, ialidzhikov, jia-jerry, kevin-lacoo, kostov6, martinweindel, n-boshnakov, oliver-goetz, rfranzke, shafeeqes, shaoyongfeng, stoyanr, tedteng, timebertt, timuthy, vlerenc, vlvasilev, vpnachev


gardener-extension-provider-alicloud's Issues

Add EIP rotation part in end-user usage

How to categorize this issue?

/area documentation
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
Just like https://github.com/gardener/gardener-extension-provider-aws/blob/master/docs/usage-as-end-user.md#infrastructureconfig, we should add an EIP configuration section for the shoot configuration in https://github.com/gardener/gardener-extension-provider-alicloud/blob/master/docs/usage-as-end-user.md#infrastructureconfig
Why is this needed:

Generated machine class name exceeds 63 character limitation

How to categorize this issue?

/area control-plane
/kind bug
/priority normal
/platform alicloud

What happened:
Our user created a Shoot with multiple worker groups, and some of the worker groups failed to be created.
After investigation, we found that it is caused by exceeding the length limitation of metadata.name for the corresponding MachineClass resources.
Our current naming scheme is shoot--(project_name)--(shoot_name)--(worker_group_name)--(region_id)--(worker_name), and Gardener limits:

  • Project name: 10 chars
  • Shoot name: 15 chars
  • Worker group name: 15 chars

The worker name is auto-generated and is usually 5 chars.
If we sum up the above numbers (including the word shoot and the multiple -- separators), at minimum, there are only 3 chars left for the region_id.
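For illustration, with maximum-length names the fixed parts add up to 5 ("shoot") + 5 x 2 ("--" separators) + 10 (project) + 15 (shoot) + 15 (worker group) + 5 (worker) = 60 characters, leaving only 63 - 60 = 3 characters for the region_id.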

What you expected to happen:
However users name their projects, shoots, worker groups, and workers, this should not block the workers from being created successfully.

How to reproduce it (as minimally and precisely as possible):
Create a worker in a worker group whose name is maximum char length, in a shoot whose name is maximum char length, in a project whose name is maximum char length.

Anything else we need to know?:
Currently we see this issue on AliCloud. But I suppose it is the same across all cloud providers.

Environment:
AliCloud Canary

  • Gardener version (if relevant):
  • Extension version: <= 1.19.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Provider-specific webhooks in Garden cluster

From gardener-attic/gardener-extensions#407

With the new core.gardener.cloud/v1alpha1.Shoot API, Gardener no longer understands the provider specifics, e.g., the infrastructure config, control plane config, worker config, etc.
This allows end-users to harm themselves by creating invalid Shoot resources in the Garden cluster. Errors will only become apparent during reconciliation, after creation of the resource.

Also, it's not possible to default any of the provider specific sections. Hence, we could also think about mutating webhooks in the future.

As we are using the controller-runtime library maintained by the Kubernetes SIGs, it should be relatively easy to implement these webhooks, as the library already abstracts most of the required machinery.

We should have a separate, dedicated binary incorporating the webhooks for each provider, and a separate Helm chart for the deployment in the Garden cluster.

Similarly, the networking and OS extensions could have such webhooks as well to check on the providerConfig for the networking and operating system config.

Part of gardener/gardener#308
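A minimal sketch of what such a validating webhook could look like with controller-runtime (assuming a recent controller-runtime version; the webhook path, decoded fields, and validation logic are illustrative only, and in practice the handler would receive the Shoot and extract its providerConfig sections):

package main

import (
	"context"
	"encoding/json"

	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/manager/signals"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// infraConfigValidator rejects obviously invalid provider-specific configuration
// before it is persisted in the Garden cluster.
type infraConfigValidator struct{}

func (v *infraConfigValidator) Handle(_ context.Context, req admission.Request) admission.Response {
	// Decode only the fields we care about from the raw admission object.
	var cfg struct {
		Networks struct {
			VPC struct {
				ID   string `json:"id"`
				CIDR string `json:"cidr"`
			} `json:"vpc"`
		} `json:"networks"`
	}
	if err := json.Unmarshal(req.Object.Raw, &cfg); err != nil {
		return admission.Denied("providerConfig is not valid JSON: " + err.Error())
	}
	if cfg.Networks.VPC.ID == "" && cfg.Networks.VPC.CIDR == "" {
		return admission.Denied("networks.vpc must set either 'id' or 'cidr'")
	}
	return admission.Allowed("")
}

func main() {
	mgr, err := manager.New(config.GetConfigOrDie(), manager.Options{})
	if err != nil {
		panic(err)
	}
	// A ValidatingWebhookConfiguration in the Garden cluster would route
	// CREATE/UPDATE requests for the relevant resources to this path.
	mgr.GetWebhookServer().Register("/webhooks/validate-infrastructureconfig",
		&webhook.Admission{Handler: &infraConfigValidator{}})
	if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
		panic(err)
	}
}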

Kube-apiserver cannot be accessed

What happened:
After setting externalTrafficPolicy of the kube-apiserver service to Local, some shoot clusters can't be accessed. The node port exposed on the node where the apiserver pod is running cannot be connected to. Investigation with tcpdump shows that the TCP SYN packets from the Cloud SLB health-check probers are not answered. It could be an iptables rules issue, because some shoots are normal; this needs more time to track down. But as it is a very critical issue for Gardener users, let's set externalTrafficPolicy back to Cluster for now.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Not always reproducible. All malfunctioning shoots are in the Hangzhou region, while the seed is in Shanghai.
Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Change externalTrafficPolicy to local for all services running in Alicloud

LoadBalancer-type services in Alicloud should have externalTrafficPolicy set to Local.
In the kube-apiserver logs, you may find entries like

I0113 09:09:57.020050       1 log.go:172] http: TLS handshake error from 10.243.140.1:3956: read tcp 10.243.143.153:443->10.243.140.1:3956: read: connection reset by peer
I0113 09:09:57.080184       1 log.go:172] http: TLS handshake error from 10.243.147.1:38221: read tcp 10.243.143.153:443->10.243.147.1:38221: read: connection reset by peer
I0113 09:09:57.088999       1 log.go:172] http: TLS handshake error from 10.243.140.1:31667: read tcp 10.243.143.153:443->10.243.140.1:31667: read: connection reset by peer
I0113 09:09:57.096259       1 log.go:172] http: TLS handshake error from 10.243.140.1:54397: read tcp 10.243.143.153:443->10.243.140.1:54397: read: connection reset by peer
I0113 09:09:57.102187       1 log.go:172] http: TLS handshake error from 10.243.139.1:63852: read tcp 10.243.143.153:443->10.243.139.1:63852: read: connection reset by peer
I0113 09:09:57.107350       1 log.go:172] http: TLS handshake error from 10.243.145.1:15497: read tcp 10.243.143.153:443->10.243.145.1:15497: read: connection reset by peer
I0113 09:09:57.143102       1 log.go:172] http: TLS handshake error from 10.243.140.1:56287: read tcp 10.243.143.153:443->10.243.140.1:56287: read: connection reset by peer
I0113 09:09:57.195062       1 log.go:172] http: TLS handshake error from 10.243.142.1:61408: read tcp 10.243.143.153:443->10.243.142.1:61408: read: connection reset by peer
I0113 09:09:57.199729       1 log.go:172] http: TLS handshake error from 10.243.138.1:29431: read tcp 10.243.143.153:443->10.243.138.1:29431: read: connection reset by peer
I0113 09:09:57.201230       1 log.go:172] http: TLS handshake error from 10.243.141.1:20891: read tcp 10.243.143.153:443->10.243.141.1:20891: read: connection reset by peer
I0113 09:09:57.282068       1 log.go:172] http: TLS handshake error from 10.242.16.26:53323: read tcp 10.243.143.153:443->10.242.16.26:53323: read: connection reset by peer
I0113 09:09:57.291427       1 log.go:172] http: TLS handshake error from 10.243.139.1:15582: read tcp 10.243.143.153:443->10.243.139.1:15582: read: connection reset by peer
I0113 09:09:57.308555       1 log.go:172] http: TLS handshake error from 10.243.146.1:31295: read tcp 10.243.143.153:443->10.243.146.1:31295: read: connection reset by peer
I0113 09:09:57.309843       1 log.go:172] http: TLS handshake error from 10.243.140.1:52618: read tcp 10.243.143.153:443->10.243.140.1:52618: read: connection reset by peer
I0113 09:09:57.368583       1 log.go:172] http: TLS handshake error from 10.243.146.1:14009: read tcp 10.243.143.153:443->10.243.146.1:14009: read: connection reset by peer
I0113 09:09:57.371320       1 log.go:172] http: TLS handshake error from 10.243.144.1:27785: read tcp 10.243.143.153:443->10.243.144.1:27785: read: connection reset by peer
I0113 09:09:57.382198       1 log.go:172] http: TLS handshake error from 10.243.145.1:23833: read tcp 10.243.143.153:443->10.243.145.1:23833: read: connection reset by peer
I0113 09:09:57.390851       1 log.go:172] http: TLS handshake error from 10.242.16.26:39598: read tcp 10.243.143.153:443->10.242.16.26:39598: read: connection reset by peer
I0113 09:09:57.395719       1 log.go:172] http: TLS handshake error from 10.243.145.1:44811: read tcp 10.243.143.153:443->10.243.145.1:44811: read: connection reset by peer
I0113 09:09:57.412029       1 log.go:172] http: TLS handshake error from 10.243.146.1:50746: read tcp 10.243.143.153:443->10.243.146.1:50746: read: connection reset by peer
I0113 09:09:57.422297       1 log.go:172] http: TLS handshake error from 10.243.138.1:7800: read tcp 10.243.143.153:443->10.243.138.1:7800: read: connection reset by peer
I0113 09:09:57.445050       1 log.go:172] http: TLS handshake error from 10.243.138.1:32230: read tcp 10.243.143.153:443->10.243.138.1:32230: read: connection reset by peer
I0113 09:09:57.446378       1 log.go:172] http: TLS handshake error from 10.242.16.26:7240: read tcp 10.243.143.153:443->10.242.16.26:7240: read: connection reset by peer
I0113 09:09:57.450862       1 log.go:172] http: TLS handshake error from 10.243.144.1:2126: read tcp 10.243.143.153:443->10.243.144.1:2126: read: connection reset by peer
I0113 09:09:57.480035       1 log.go:172] http: TLS handshake error from 10.243.140.1:59843: read tcp 10.243.143.153:443->10.243.140.1:59843: read: connection reset by peer
I0113 09:09:57.533060       1 log.go:172] http: TLS handshake error from 10.243.139.1:16113: read tcp 10.243.143.153:443->10.243.139.1:16113: read: connection reset by peer
I0113 09:09:57.543240       1 log.go:172] http: TLS handshake error from 10.243.139.1:59077: read tcp 10.243.143.153:443->10.243.139.1:59077: read: connection reset by peer
I0113 09:09:57.580431       1 log.go:172] http: TLS handshake error from 10.242.16.26:40541: read tcp 10.243.143.153:443->10.242.16.26:40541: read: connection reset by peer
I0113 09:09:57.591473       1 log.go:172] http: TLS handshake error from 10.243.146.1:36067: read tcp 10.243.143.153:443->10.243.146.1:36067: read: connection reset by peer
I0113 09:09:57.622527       1 log.go:172] http: TLS handshake error from 10.243.141.1:58834: read tcp 10.243.143.153:443->10.243.141.1:58834: read: connection reset by peer
I0113 09:09:57.632065       1 log.go:172] http: TLS handshake error from 10.242.16.26:64805: read tcp 10.243.143.153:443->10.242.16.26:64805: read: connection reset by peer
I0113 09:09:57.667315       1 log.go:172] http: TLS handshake error from 10.243.144.1:34742: read tcp 10.243.143.153:443->10.243.144.1:34742: read: connection reset by peer
I0113 09:09:57.691272       1 log.go:172] http: TLS handshake error from 10.243.146.1:56098: read tcp 10.243.143.153:443->10.243.146.1:56098: read: connection reset by peer
I0113 09:09:57.712362       1 log.go:172] http: TLS handshake error from 10.243.142.1:7233: read tcp 10.243.143.153:443->10.243.142.1:7233: read: connection reset by peer
I0113 09:09:57.730917       1 log.go:172] http: TLS handshake error from 10.243.144.1:39113: read tcp 10.243.143.153:443->10.243.144.1:39113: read: connection reset by peer
I0113 09:09:57.741747       1 log.go:172] http: TLS handshake error from 10.243.147.1:11158: read tcp 10.243.143.153:443->10.243.147.1:11158: read: connection reset by peer
I0113 09:09:57.744090       1 log.go:172] http: TLS handshake error from 10.243.145.1:35634: read tcp 10.243.143.153:443->10.243.145.1:35634: read: connection reset by peer
I0113 09:09:57.751385       1 log.go:172] http: TLS handshake error from 10.243.145.1:65202: read tcp 10.243.143.153:443->10.243.145.1:65202: read: connection reset by peer
I0113 09:09:57.784026       1 log.go:172] http: TLS handshake error from 10.243.144.1:64747: read tcp 10.243.143.153:443->10.243.144.1:64747: read: connection reset by peer
I0113 09:09:57.852507       1 log.go:172] http: TLS handshake error from 10.243.141.1:8611: read tcp 10.243.143.153:443->10.243.141.1:8611: read: connection reset by peer
I0113 09:09:57.943325       1 log.go:172] http: TLS handshake error from 10.243.146.1:32559: read tcp 10.243.143.153:443->10.243.146.1:32559: read: connection reset by peer

This is because the SLB solution in Alicloud is quite different from that of other hyperscalers: health checks are performed by distributed TCP listeners. After externalTrafficPolicy is set to Local, the number of these log entries per second drops significantly.

I0113 09:10:41.483515       1 log.go:172] http: TLS handshake error from 100.117.45.1:1059: read tcp 10.243.143.153:443->100.117.45.1:1059: read: connection reset by peer
I0113 09:10:41.516120       1 log.go:172] http: TLS handshake error from 100.117.45.131:25811: read tcp 10.243.143.153:443->100.117.45.131:25811: read: connection reset by peer
I0113 09:10:41.879089       1 log.go:172] http: TLS handshake error from 100.117.44.130:20375: read tcp 10.243.143.153:443->100.117.44.130:20375: read: connection reset by peer
I0113 09:10:42.823252       1 log.go:172] http: TLS handshake error from 100.117.44.1:25737: read tcp 10.243.143.153:443->100.117.44.1:25737: read: connection reset by peer
I0113 09:10:43.338902       1 log.go:172] http: TLS handshake error from 100.97.209.131:16299: read tcp 10.243.143.153:443->100.97.209.131:16299: read: connection reset by peer
I0113 09:10:43.452208       1 log.go:172] http: TLS handshake error from 100.117.45.131:24959: read tcp 10.243.143.153:443->100.117.45.131:24959: read: connection reset by peer
I0113 09:10:43.544250       1 log.go:172] http: TLS handshake error from 100.117.45.1:54893: read tcp 10.243.143.153:443->100.117.45.1:54893: read: connection reset by peer
I0113 09:10:43.801620       1 log.go:172] http: TLS handshake error from 100.117.44.130:29934: read tcp 10.243.143.153:443->100.117.44.130:29934: read: connection reset by peer
I0113 09:10:44.870779       1 log.go:172] http: TLS handshake error from 100.117.44.1:23121: read tcp 10.243.143.153:443->100.117.44.1:23121: read: connection reset by peer
I0113 09:10:45.342568       1 log.go:172] http: TLS handshake error from 100.97.209.131:62502: read tcp 10.243.143.153:443->100.97.209.131:62502: read: connection reset by peer
I0113 09:10:45.508952       1 log.go:172] http: TLS handshake error from 100.117.45.131:16026: read tcp 10.243.143.153:443->100.117.45.131:16026: read: connection reset by peer
I0113 09:10:45.599944       1 log.go:172] http: TLS handshake error from 100.117.45.1:39509: read tcp 10.243.143.153:443->100.117.45.1:39509: read: connection reset by peer
I0113 09:10:45.721739       1 log.go:172] http: TLS handshake error from 100.117.44.130:17338: read tcp 10.243.143.153:443->100.117.44.130:17338: read: connection reset by peer

Also, the client source IP is preserved.
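For reference, the change itself is a single field on the LoadBalancer Service; a sketch with illustrative namespace and selector labels:

apiVersion: v1
kind: Service
metadata:
  name: kube-apiserver
  namespace: shoot--foo--bar        # example seed namespace of the shoot
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local      # default is Cluster
  selector:
    app: kubernetes                 # illustrative selector labels
    role: apiserver
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP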

Metrics server reports connection error on Alicloud

What happened:
On alicloud, metrics server reports errors as below:

I0309 03:33:33.356973       1 manager.go:120] Querying source: kubelet_summary:izuf6intcn2jwb3jz1tc4qz
I0309 03:33:33.957099       1 manager.go:148] ScrapeMetrics: time: 899.993654ms, nodes: 14, pods: 557
E0309 03:33:33.957121       1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:izuf6aq39j2yullicucdhyz: unable to get CPU for container "curator" in pod shoot--g-chen--test/hourly-curator-1583724780-4jxh4 on node "10.242.16.33", discarding data: missing cpu usage metric
E0309 03:33:34.189392       1 reststorage.go:160] unable to fetch pod metrics for pod shoot--g-chen--test/hourly-curator-1583724780-4jxh4: no metrics known for pod
I0309 03:33:34.857955       1 trace.go:81] Trace[1575515895]: "List /apis/metrics.k8s.io/v1beta1/pods" (started: 2020-03-09 03:33:34.188183668 +0000 UTC m=+4049179.418065821) (total time: 669.697412ms):

The default kubelet-preferred-address-types setting of metrics-server is Hostname, which cannot be resolved internally on Alicloud. This is not an issue for other hyperscalers like AWS and GCP.
What you expected to happen:
Suggestion: set kubelet-preferred-address-types to InternalIP.
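A sketch of how that flag could be set on the metrics-server container (Deployment excerpt; image tag and layout are illustrative):

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.6   # example tag
  command:
  - /metrics-server
  - --kubelet-preferred-address-types=InternalIP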
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Implement `ControlPlane` controller for Alicloud provider

Similar to how we have implemented the ControlPlane extension resource controller for the AWS provider, let's please now do it for Alicloud.

Based on the current implementation the ControlPlaneConfig should look like this:

apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
cloudControllerManager:
  featureGates:
    CustomResourceValidation: true

No ControlPlaneStatus needs to be implemented right now (not needed yet).

external-snapshotter@v1 is deployed even for Kubernetes >= 1.17

How to categorize this issue?

/area storage
/kind bug
/platform alicloud

What happened:
Currently in the image vector we have

- name: csi-snapshotter
  sourceRepository: https://github.com/kubernetes-csi/external-snapshotter
  repository: quay.io/k8scsi/csi-snapshotter
  tag: v1.2.2
  targetVersion: ">= 1.14"
- name: csi-snapshotter
  sourceRepository: https://github.com/kubernetes-csi/external-snapshotter
  repository: quay.io/k8scsi/csi-snapshotter
  tag: v2.1.1
  targetVersion: ">= 1.17"

Actually, when you create for example a v1.18 cluster, csi-snapshotter@v1.2.2 is being deployed, as it also matches the condition >= 1.14. As csi-snapshotter@v1 is not compatible with csi-snapshotter@v2, this makes the volume snapshot feature unusable on Kubernetes >= v1.17 currently.
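One possible fix (a sketch; it assumes the image vector accepts a compound semver constraint, as used elsewhere in Gardener) is to bound the v1 entry so that only one entry matches each Kubernetes version:

- name: csi-snapshotter
  sourceRepository: https://github.com/kubernetes-csi/external-snapshotter
  repository: quay.io/k8scsi/csi-snapshotter
  tag: v1.2.2
  targetVersion: ">= 1.14, < 1.17"
- name: csi-snapshotter
  sourceRepository: https://github.com/kubernetes-csi/external-snapshotter
  repository: quay.io/k8scsi/csi-snapshotter
  tag: v2.1.1
  targetVersion: ">= 1.17"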

What you expected to happen:
csi-snapshotter@v2 to be deployed for Kubernetes >= 1.17

How to reproduce it (as minimally and precisely as possible):

  1. Create a v1.18 Shoot
  2. Make sure that csi-snapshotter@v1 is deployed
$ k -n shoot--foo--bar get po csi-plugin-controller-7b6d85fc6b-9c8c7 -o yaml | grep image

    image: registry.eu-central-1.aliyuncs.com/gardener-de/csi-plugin-alicloud:v1.14.8-41
    imagePullPolicy: IfNotPresent
    image: quay.io/k8scsi/csi-attacher:v2.2.0
    imagePullPolicy: IfNotPresent
    image: quay.io/k8scsi/csi-provisioner:v1.6.0
    imagePullPolicy: IfNotPresent
    image: quay.io/k8scsi/csi-snapshotter:v1.2.2
    imagePullPolicy: IfNotPresent
    image: quay.io/k8scsi/csi-resizer:v0.5.0
    imagePullPolicy: IfNotPresent
    image: quay.io/k8scsi/csi-attacher:v2.2.0

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.19.1
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Support user-provided EIPs for NATGateway

What would you like to be added:
When a user creates a shoot, we should allow them to provide EIPs for the NATGateway source entries. EIPs should be configured per zone, because an EIP can only be attached at the VSwitch level.

apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc: # specify either 'id' or 'cidr'
  # id: my-vpc
    cidr: 10.250.0.0/16
  zones:
  - name: eu-central-1a
    workers: 10.250.1.0/24
    eips: ["1.2.3.4"]

Why is this needed:

Change csi tag to be semver-compliant

How to categorize this issue?

/area control-plane
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
The image tag of registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.14.8.38-fe611ad1-aliyun is not semver-compliant. As a result, we're currently having a problem with our Protecode scan.
Why is this needed:

Adapt to terraform v0.12 language changes

How to categorize this issue?

/area open-source
/kind cleanup
/priority normal
/platform alicloud

What would you like to be added:
provider-alicloud needs an adaptation of the terraform configuration to v0.12. For provider-aws this is done with this PR - gardener/gardener-extension-provider-aws#111.

Why is this needed:
Currently the terraformer run only emits warnings, but in a future version of Terraform these warnings will be turned into errors.

Cannot delete infrastructure when credentials data keys are missing in secret

From gardener-attic/gardener-extensions#577

If the account secret does not contain a service account JSON, the cluster can for sure not be created.
But when trying to delete such a cluster, this fails for the same reason:

Waiting until shoot infrastructure has been destroyed
Last Error
task "Waiting until shoot infrastructure has been destroyed" failed: Failed to delete infrastructure: Error deleting infrastructure: secret shoot--berlin--rg-kyma/cloudprovider doesn't have a service account json

It is the same for the other providers as well; this is not something specific to GCP.

Prohibit users from creating shoots within the 100.64.0.0/10 CIDR

How to categorize this issue?

/area networking
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
Check that the service/pod/node network CIDRs of a shoot do not fall into the 100.64.0.0/10 range.
Why is this needed:
Alicloud uses 100.64.0.0/10 to provide VPC-internal services like DNS, the metadata service, and so on.
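A sketch of what such a check could look like (a hypothetical helper, not the extension's actual validation code):

package main

import (
	"fmt"
	"net"
)

// alicloudReservedCIDR is the range Alibaba Cloud uses for VPC-internal services
// (DNS, metadata service, etc.); shoot networks must not overlap with it.
var alicloudReservedCIDR = mustParseCIDR("100.64.0.0/10")

func mustParseCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// validateShootNetworks returns an error if any of the given node/pod/service
// CIDRs overlaps the reserved 100.64.0.0/10 range. Two CIDRs overlap exactly
// when one contains the other's network address.
func validateShootNetworks(cidrs ...string) error {
	for _, c := range cidrs {
		_, network, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %w", c, err)
		}
		if alicloudReservedCIDR.Contains(network.IP) || network.Contains(alicloudReservedCIDR.IP) {
			return fmt.Errorf("network %q overlaps the reserved range %s", c, alicloudReservedCIDR)
		}
	}
	return nil
}

func main() {
	// Example: the second CIDR falls into 100.64.0.0/10 and is rejected.
	if err := validateShootNetworks("10.250.0.0/16", "100.96.0.0/11", "172.16.0.0/20"); err != nil {
		fmt.Println("validation failed:", err)
	}
}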

It is not allowed to change Zone of worker

How to categorize this issue?

/area robustness
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
Add a webhook check. Forbid users from changing the existing zone of a worker group.
Why is this needed:
Otherwise, errors occur during reconciliation of the infrastructure:

Last Errors
Infrastructure operation failed as unmanaged resources exist in your cloud provider account. Please delete all manually created resources related to this Shoot.
task "Waiting until shoot infrastructure has been reconciled" failed: Error while waiting for Infrastructure shoot--devx--master/master to become ready: extension encountered error during reconciliation: Error reconciling infrastructure: failed to apply the terraform config: Terraform execution for command 'apply' could not be completed. The following issues have been found in the logs:

-> Pod 'master.infra.tf-apply-99rzk' reported:
* [ERROR] terraform-provider-alicloud/alicloud/resource_alicloud_vswitch.go:161: Resource vsw-uf6sovk6s64kvq2pd9d6e DeleteVSwitch Failed!!! [SDK alibaba-cloud-sdk-go ERROR]:

Add infrastructure permission documentation

How to categorize this issue?

/area documentation
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:

We need more detailed docs regarding required infrastructure permissions for both the operator and end-user similar to this documentation.

Why is this needed:
Better ops/user experience.

Machine-controller-manager provider for Alicloud

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added: As discussed with @minchaow, opening the issue here to support alicloud provider for machine-controller-manager.

Please find the steps to add support for a new provider here: https://github.com/gardener/machine-controller-manager/blob/master/docs/development/cp_support_new.md

Already available provider for reference:

Why is this needed: This is needed as part of the extensibility of MCM.

runtime error: invalid memory address or nil pointer dereference

How to categorize this issue?

/kind bug
/platform alicloud

What happened:

E0318 08:29:42.987561    6898 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 824 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x321e8e0, 0x55259f0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x321e8e0, 0x55259f0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers.func1(0x10786bc, 0x552b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:66 +0xea
k8s.io/client-go/util/retry.OnError.func1(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:51 +0x3c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0009e3790, 0x31fa000, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0xc0009e3790, 0x100c9c8, 0x344b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:399 +0x55
k8s.io/client-go/util/retry.OnError(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0x380b580, 0xc0009e3830, 0x32cfd00, 0x344b228)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:50 +0xa6
k8s.io/client-go/util/retry.RetryOnConflict(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:104
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers(0x39eca80, 0xc000da2c00, 0x0, 0x0, 0x2eb30ed8, 0xc0008273b0, 0x3a2ad40, 0xc00000d0e0, 0x380c800, 0x372ec10, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:60 +0x168
github.com/gardener/gardener/pkg/controllerutils.EnsureFinalizer(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:49
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0x39f8560, 0xc000e10390, 0xc00000d0e0, 0xc00093b9e0, 0x36f92d7, 0x6, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:127 +0x11f
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).Reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0x3a2ad40, 0xc00000cd20, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:121 +0x6c8
github.com/gardener/gardener/extensions/pkg/controller.(*operationAnnotationWrapper).Reconcile(0xc0009a6240, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0xc000da2c00, 0x100c29f, 0xc00003c000, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/reconciler.go:75 +0x259
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x32cfde0, 0xc000d99cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263 +0x317
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x39ec9c0, 0xc000a3c440)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bb6f50)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e3f50, 0x398e960, 0xc000744060, 0xc000a3c401, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bb6f50, 0x3b9aca00, 0x0, 0x1, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00, 0x0, 0x1)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:195 +0x4e7
E0318 08:29:42.988966    6898 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 824 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x321e8e0, 0x55259f0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x321e8e0, 0x55259f0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x321e8e0, 0x55259f0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers.func1(0x10786bc, 0x552b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:66 +0xea
k8s.io/client-go/util/retry.OnError.func1(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:51 +0x3c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0009e3790, 0x31fa000, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0xc0009e3790, 0x100c9c8, 0x344b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:399 +0x55
k8s.io/client-go/util/retry.OnError(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0x380b580, 0xc0009e3830, 0x32cfd00, 0x344b228)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:50 +0xa6
k8s.io/client-go/util/retry.RetryOnConflict(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:104
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers(0x39eca80, 0xc000da2c00, 0x0, 0x0, 0x2eb30ed8, 0xc0008273b0, 0x3a2ad40, 0xc00000d0e0, 0x380c800, 0x372ec10, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:60 +0x168
github.com/gardener/gardener/pkg/controllerutils.EnsureFinalizer(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:49
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0x39f8560, 0xc000e10390, 0xc00000d0e0, 0xc00093b9e0, 0x36f92d7, 0x6, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:127 +0x11f
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).Reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0x3a2ad40, 0xc00000cd20, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:121 +0x6c8
github.com/gardener/gardener/extensions/pkg/controller.(*operationAnnotationWrapper).Reconcile(0xc0009a6240, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0xc000da2c00, 0x100c29f, 0xc00003c000, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/reconciler.go:75 +0x259
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x32cfde0, 0xc000d99cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263 +0x317
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x39ec9c0, 0xc000a3c440)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bb6f50)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e3f50, 0x398e960, 0xc000744060, 0xc000a3c401, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bb6f50, 0x3b9aca00, 0x0, 0x1, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00, 0x0, 0x1)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:195 +0x4e7
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x2e08bea]

goroutine 824 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x321e8e0, 0x55259f0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x321e8e0, 0x55259f0)
	/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers.func1(0x10786bc, 0x552b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:66 +0xea
k8s.io/client-go/util/retry.OnError.func1(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:51 +0x3c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0009e3790, 0x31fa000, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0xc0009e3790, 0x100c9c8, 0x344b1e0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:399 +0x55
k8s.io/client-go/util/retry.OnError(0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0, 0x380b580, 0xc0009e3830, 0x32cfd00, 0x344b228)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:50 +0xa6
k8s.io/client-go/util/retry.RetryOnConflict(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/client-go/util/retry/util.go:104
github.com/gardener/gardener/pkg/controllerutils.tryPatchFinalizers(0x39eca80, 0xc000da2c00, 0x0, 0x0, 0x2eb30ed8, 0xc0008273b0, 0x3a2ad40, 0xc00000d0e0, 0x380c800, 0x372ec10, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:60 +0x168
github.com/gardener/gardener/pkg/controllerutils.EnsureFinalizer(...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/pkg/controllerutils/finalizers.go:49
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0x39f8560, 0xc000e10390, 0xc00000d0e0, 0xc00093b9e0, 0x36f92d7, 0x6, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:127 +0x11f
github.com/gardener/gardener/extensions/pkg/controller/infrastructure.(*reconciler).Reconcile(0xc000928910, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0x3a2ad40, 0xc00000cd20, 0x0, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/infrastructure/reconciler.go:121 +0x6c8
github.com/gardener/gardener/extensions/pkg/controller.(*operationAnnotationWrapper).Reconcile(0xc0009a6240, 0x39eca80, 0xc000da2c00, 0xc000397b20, 0x1a, 0xc000f7d920, 0x9, 0xc000da2c00, 0x100c29f, 0xc00003c000, ...)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/github.com/gardener/gardener/extensions/pkg/controller/reconciler.go:75 +0x259
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x32cfde0, 0xc000d99cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263 +0x317
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0009977c0, 0x39ec9c0, 0xc000a3c440, 0x0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1(0x39ec9c0, 0xc000a3c440)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198 +0x4a
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bb6f50)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e3f50, 0x398e960, 0xc000744060, 0xc000a3c401, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bb6f50, 0x3b9aca00, 0x0, 0x1, 0xc000eb4cc0)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00, 0x0, 0x1)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(0x39ec9c0, 0xc000a3c440, 0xc000a94bd0, 0x3b9aca00)
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99 +0x57
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/gardener/gardener-extension-provider-alicloud/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:195 +0x4e7
exit status 2
make: *** [start] Error 1

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

  1. Create Infrastructure

  2. Observe the above panic

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Remove code which supports the old version of NAT Gateway

How to categorize this issue?

/area networking
/kind enhancement
/priority 3
/platform alicloud

What would you like to be added:
Currently, we only create/maintain the enhanced NATGateway. We need to remove the legacy code related to the old version of the NAT Gateway.
Why is this needed:

Problems during make check

What happened:
Running make check returns the following errors:

Executing check-generate
SA1019: Package github.com/gardener/gardener/test/integration/framework is deprecated: this is the deprecated gardener testframework. Use gardener/test/framework instead
Executing golangci-lint
Checking for format issues with gofmt
Unformatted files detected:
pkg/imagevector/packrd/packed-packr.go

What you expected to happen:
make check to run without errors

How to reproduce it (as minimally and precisely as possible):

  1. Checkout the master branch.
  2. Execute make check

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Volume is not formatted on AliCloud

What happened:
Occasionally, when creating a PVC, the PV is created successfully. However, an error occurs while attaching the disk:

Warning  FailedMount             8s (x5 over 52s)   kubelet, izgw83dd97ell9qnfjjrl9z  MountVolume.MountDevice failed for volume "pvc-ec189fa7-b1c4-11e9-a77c-8e5ce80e9300" : rpc error: code = Internal desc = mounting failed: exit status 32 cmd: 'mount -t ext4 -o shared /dev/vdf /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ec189fa7-b1c4-11e9-a77c-8e5ce80e9300/globalmount' output: "mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ec189fa7-b1c4-11e9-a77c-8e5ce80e9300/globalmount: wrong fs type, bad option, bad superblock on /dev/vdf, missing codepage or helper program, or other error.\n"

What you expected to happen:
The disk can be attached successfully.
How to reproduce it (as minimally and precisely as possible):
The ratio is about 1/20. Create as many pods that need PVs as you can.
Anything else we need to know?:

Environment:

  • Gardener version: 1.33
  • Kubernetes version (use kubectl version): 1.14.4
  • Cloud provider or hardware configuration: AliCloud
  • Others:

Checksum missing in checksum/secret-cloud-provider-config of cloud-controller-manager

How to categorize this issue?

/area control-plane
/kind bug
/priority normal
/platform alicloud

What happened:
The checksum/secret-cloud-provider-config checksum annotation is missing in the CCM deployment.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Cloud Controller Manager can't access alicloud provider API server

What happened:
In the Frankfurt region, the Cloud Controller Manager can't access the cloud API server https://slb.eu-central-1.aliyuncs.com. Recently, slb.eu-central-1.aliyuncs.com has been resolving to 100.100.0.18, which is a non-public IP address.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Configure mcm-settings from worker to machine-deployment.

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added: Machine-controller-manager now allows configuring certain controller settings per MachineDeployment. Currently, the following fields can be set:

Also, with the PR gardener/gardener#2563, these settings can be configured via the Shoot resource as well.

We need to enhance the worker extension to read these settings from the Worker object and set them accordingly on the MachineDeployment (see the sketch at the end of this issue).

Similar PR on AWS worker-extension: gardener/gardener-extension-provider-aws#148
Dependencies:

  • Vendor the MCM 0.33.0
  • gardener/gardener#2563 should be merged.
  • g/g with the #2563 change should be vendored.

Why is this needed:
To allow fine-grained configuration of MCM via the Worker object.
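For reference, a sketch of how these settings surface on a Shoot worker pool once gardener/gardener#2563 is vendored (field names and values are illustrative and should be checked against the vendored Shoot API):

spec:
  provider:
    workers:
    - name: worker-pool-1
      machineControllerManagerSettings:
        machineDrainTimeout: 2h
        machineHealthTimeout: 10m
        machineCreationTimeout: 10m
        maxEvictRetries: 10
        nodeConditions:
        - ReadonlyFilesystem
        - DiskPressure

The worker extension would then copy these values onto the spec of the generated MachineDeployment.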

Update NAT gateway

How to categorize this issue?

/area networking
/kind enhancement
/priority critical
/platform alicloud

What would you like to be added:
Switch normal NAT Gateway to enhanced NAT Gateway.
Why is this needed:
AliCloud provides an enhanced NAT Gateway solution, which greatly improves on the current one in both performance and price. More importantly, the current one will be deprecated in the future.

LoadBalancer cannot be created

How to categorize this issue?

/area control-plane
/kind bug
/priority normal
/platform alicloud

What happened:
For LoadBalancer type service, the backend LoadBalancer cannot be created successfully.
What you expected to happen:
The backend LoadBalancer can be created successfully.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
Error messages:

Warning CreatingLoadBalancerFailed 3m37s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: FB167837-646D-4D84-9C0A-B01FC9A9CEE7 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Warning CreatingLoadBalancerFailed 3m31s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: D7FEB440-ECCD-4C97-A983-CC782DC0CC90 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Warning CreatingLoadBalancerFailed 3m21s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: 25AD93F5-DC89-41A9-91C3-49D16C4C66C4 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Normal EnsuringLoadBalancer 3m1s (x4 over 3m37s) service-controller Ensuring load balancer
Warning CreatingLoadBalancerFailed 3m service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: 6E19C658-3957-4552-91A5-56CD1C290081 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Warning CreatingLoadBalancerFailed 82s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: 42F0B26B-1B78-4FD8-94A9-2D76CF222974 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Warning CreatingLoadBalancerFailed 16s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: BA0F9D30-CCF4-43DA-96DC-8713FB6E7610 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Warning CreatingLoadBalancerFailed 5s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: D2C23350-CC0F-4011-9851-501A96478861 Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.
Normal EnsuringLoadBalancer (x4 over 82s) service-controller Ensuring load balancer
Warning CreatingLoadBalancerFailed service-controller Error creating load balancer (will retry): failed to ensure load balancer for service shoot--hc-dev--i302530-haas/kube-apiserver: Aliyun API Error: RequestId: 9F2E9504-6943-4165-93EC-B990C94D4B3C Status Code: 400

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Cloud provider configuration will be deprecated for kubelet parameters

How to categorize this issue?

/area control-plane
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
We need to move provider-id, cloud-provider, and enable-controller-attach-detach from command-line parameters to the kubelet configuration file (a config-file sketch follows the log excerpt below).
Why is this needed:

Dec 13 21:32:32 iZgw87uepwv66jorfgr1cmZ kubelet[2841]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 21:32:32 iZgw87uepwv66jorfgr1cmZ kubelet[2841]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Dec 13 21:32:32 iZgw87uepwv66jorfgr1cmZ kubelet[2841]: Flag --enable-controller-attach-detach has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
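A sketch of the config-file equivalents (fields from the kubelet.config.k8s.io/v1beta1 KubeletConfiguration; the provider ID value is only an example, and --cloud-provider has no config-file counterpart since the cloud provider code is being removed from the kubelet entirely):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --provider-id
providerID: "cn-shanghai.i-uf6example0123456789"   # example value
# replaces --enable-controller-attach-detach
enableControllerAttachDetach: true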

SeedNetworkPoliciesTest always fails

From gardener-attic/gardener-extensions#293

What happened:
The test defined in SeedNetworkPoliciesTest.yaml always fails.
Most of the time, the following 3 specs fail:

2019-07-29 11:32:33	Test Suite Failed
2019-07-29 11:32:33	Ginkgo ran 1 suite in 3m20.280138435s
2x		2019-07-29 11:32:33	
2019-07-29 11:32:32	FAIL! -- 375 Passed | 3 Failed | 0 Pending | 126 Skipped
2019-07-29 11:32:32	Ran 378 of 504 Specs in 85.218 seconds
2019-07-29 11:32:32	
2019-07-29 11:32:32	> /go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods elasticsearch-logging [AfterEach] should block connection to "Garden Prometheus" prometheus-web.garden:80
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1062
2019-07-29 11:32:32	[Fail] Network Policy Testing components are selected by correct policies [AfterEach] gardener-resource-manager
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods gardener-resource-manager [AfterEach] should block connection to "External host" 8.8.8.8:53

@mvladev can you please check?

Environment:
TestMachinery on all landscapes (dev, ..., live)

Implement `Worker` controller for Alicloud provider

Similar to how we have implemented the Worker extension resource controller for the AWS provider let's please now do it for Alicloud.

There is no special provider config required to be implemented; however, we should have a component configuration for the controller that looks as follows:

---
apiVersion: alicloud.provider.extensions.config.gardener.cloud/v1alpha1
kind: ControllerConfiguration
machineImages:
- name: coreos
  version: 2023.5.0
  id: coreos_2023_4_0_64_30G_alibase_20190319.vhd

Support modifying infrastructureConfig.networks.zones

How to categorize this issue?
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
Please change validation so that it is possible to change infrastructureConfig.networks.zones for existing shoots.

Why is this needed:
This is required to add additional zones to a shoot that are currently not already used and for which no network configuration exists.

Update credentials during Worker deletion

From gardener-attic/gardener-extensions#523

Steps to reproduce:

  1. Create a Shoot with valid cloud provider credentials my-secret.
  2. Ensure that the Shoot is successfully created.
  3. Invalidate the my-secret credentials.
  4. Delete the Shoot.
  5. Update my-secret credentials with valid ones.
  6. Ensure that the Shoot deletion fails while waiting for the Worker to be deleted.

Currently we do not sync the cloudprovider credentials into the <Provider>MachineClass during Worker deletion. Hence, machine-controller-manager fails to delete the machines because it still uses the invalid credentials.
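
A minimal sketch of the idea, assuming the machine classes reference a credentials secret whose data should be refreshed from the cloudprovider secret before machines are deleted (names and structure are illustrative, not the actual implementation):

package worker

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// syncMachineClassCredentials copies the current cloudprovider credentials into
// the secret referenced by the machine classes so that machine-controller-manager
// can still authenticate against the cloud provider during deletion.
func syncMachineClassCredentials(ctx context.Context, c client.Client, namespace, machineClassSecretName string) error {
	var cloudprovider corev1.Secret
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: "cloudprovider"}, &cloudprovider); err != nil {
		return err
	}

	var machineClassSecret corev1.Secret
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: machineClassSecretName}, &machineClassSecret); err != nil {
		return err
	}

	// Overwrite the credential keys with the latest values before machine deletion is triggered.
	if machineClassSecret.Data == nil {
		machineClassSecret.Data = map[string][]byte{}
	}
	for key, value := range cloudprovider.Data {
		machineClassSecret.Data[key] = value
	}
	return c.Update(ctx, &machineClassSecret)
}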

NatGateway integration

Static Public IP addresses for Gardener-based Shoot Cluster Egress (Outbound) Internet Connectivity.

Allow users to bring their own public static IP addresses and/or public static IP address ranges / prefixes which should be attached to the NatGateway.
The specific IP addresses can be re-assigned in case the cluster crashes, is misconfigured, is deleted, etc.
Or, even (optional / low priority) move specific IP addresses between different clusters - the classic use case is moving IP addresses from the main cluster to the backup cluster during a disaster recovery (DR) procedure.

The feature is needed for IP address whitelisting by customers (the end users who work with the shoot cluster):

  • In development / test / validation systems, both within the enterprise network and outside, e.g. in a public regulated-market cloud.
  • Also in production for specific products, such as HaaS and HANA Cloud, which have a dedicated source IP address whitelisting feature.

In these cases the customer can whitelist the Shoot Cluster Egress IP addresses.
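
A purely hypothetical sketch of how user-provided EIPs could be bound to the NAT gateway via the InfrastructureConfig; the natGateway and eipAllocationID field names are made up for illustration and are not defined by this issue:

apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    cidr: 10.250.0.0/16
  zones:
  - name: zone-1a
    workers: 10.250.0.0/19
    # hypothetical: bind a pre-allocated, user-owned EIP to the NAT gateway for this zone
    natGateway:
      eipAllocationID: eip-xxxxxxxxxxxxxxxx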

Minimal Permissions for user credentials

From gardener-attic/gardener-extensions#133

We have narrowed down the access permissions for AWS shoot clusters (potential remainder tracked in #178), but not yet for Azure, GCP and OpenStack, which this ticket is now about. We expect less success on these infrastructures, as AWS's permission/policy options are very detailed. This may break the "shared account" idea on these infrastructures (Azure and GCP - OpenStack can be mitigated by programmatically creating tenants on the fly).

Forbid replacing secret with new account for existing Shoots

What would you like to be added:
Currently we don't have a validation that would prevent a user from replacing their cloudprovider secret with credentials for another account. Basically, we only have a warning in the dashboard - ref gardener/dashboard#422.

Steps to reproduce:

  1. Get an existing Shoot.
  2. Update its secret with credentials for another account.
  3. Ensure that on new reconciliation, new infra resources will be created in the new account. The old infra resources and machines in the old account will leak.
    For me the reconciliation failed at
    lastOperation:
      description: Waiting until the Kubernetes API server can connect to the Shoot
        workers
      lastUpdateTime: "2020-02-20T14:56:43Z"
      progress: 89
      state: Processing
      type: Reconcile

with the reason

$ k describe svc -n kube-system vpn-shoot
Events:
  Type     Reason                   Age                  From                Message
  ----     ------                   ----                 ----                -------
  Normal   EnsuringLoadBalancer     7m38s (x6 over 10m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed   7m37s (x6 over 10m)  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

Why is this needed:
Prevent users from harming themselves.
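
One illustrative approach, shown only as a sketch and not as the project's actual validation: reject a cloudprovider secret update when the account behind the new credentials differs from the account the shoot's infrastructure was created in. How the account IDs are obtained (for example via an STS-style "who am I" call) is out of scope here:

package validation

import "fmt"

// validateAccountUnchanged returns an error when a cloudprovider secret update
// would switch an existing shoot to a different cloud account.
func validateAccountUnchanged(oldAccountID, newAccountID string) error {
	if oldAccountID != "" && newAccountID != "" && oldAccountID != newAccountID {
		return fmt.Errorf("replacing the cloudprovider secret with credentials of a different account (%s -> %s) is not allowed for existing shoots", oldAccountID, newAccountID)
	}
	return nil
}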

Support zone retirement

How to categorize this issue?

/area scalability
/kind enhancement
/priority normal
/platform alicloud

What would you like to be added:
It should be possible to remove an existing zone from a worker group. If a zone is no longer used, we could also remove this zone from the InfrastructureConfig.

Why is this needed:
Recently, some machine types (like G6) are out of stock in some zones from time to time. It is then suggested to use other zone(s) where there is enough stock. For an existing shoot, we can append a zone; however, we are not allowed to remove an existing zone. We need to be able to remove zones where stock is lacking. Assuming the customer can bear a rolling update of the nodes, we could start with zone changes in an existing worker group.

Add validation to prevent worker.min to be set to 0.

What would you like to be added: We need to add validation to prevent worker.min from being set to 0 while worker.max is non-zero. This avoids the cluster-autoscaler scaling the worker pool down to zero and not being able to scale it up again later.

Why is this needed:
Please refer to gardener/gardener#2045 for more info.
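
A minimal sketch of such a check, assuming a worker pool with minimum/maximum fields (package layout and field paths are illustrative):

package validation

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

// validateWorkerMinMax rejects worker pools where the minimum is 0 but the maximum
// is not, so that the cluster-autoscaler cannot scale a pool down to zero and then
// be unable to scale it up again.
func validateWorkerMinMax(minimum, maximum int32, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	if minimum == 0 && maximum > 0 {
		allErrs = append(allErrs, field.Invalid(fldPath.Child("minimum"), minimum,
			fmt.Sprintf("minimum must be >= 1 if maximum is > 0 (maximum is %d)", maximum)))
	}
	return allErrs
}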

Validate cloudprovider credentials

(recreating issue from the g/g repo: gardener/gardener#2293)

What would you like to be added:
Add validation for cloudprovider secret

Why is this needed:
Currently, when uploading secrets via the UI, all secret fields are required and validated. However, when creating those credentials via the cloudprovider secret, there is no validation. This results in errors such as the following (specific to Azure, but a similar error would be generated for AliCloud):

Flow "Shoot cluster reconciliation" encountered task errors: [task "Waiting until shoot infrastructure has been reconciled" failed: failed to create infrastructure: retry failed with context deadline exceeded, last error: extension encountered error during reconciliation: Error reconciling infrastructure: secret shoot--xxxx--xxxx/cloudprovider doesn't have a subscription ID] Operation will be retried.

[Screenshot of the error message, 2020-05-07]
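
A minimal sketch of how such a validation could look for this provider, assuming the cloudprovider secret carries accessKeyID and accessKeySecret data keys (the key names are an assumption, not confirmed by this issue):

package validation

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// requiredKeys are the data keys the cloudprovider secret is assumed to need.
var requiredKeys = []string{"accessKeyID", "accessKeySecret"}

// ValidateCloudProviderSecret checks that all required credential fields are present
// and non-empty, so that misconfigured secrets are rejected early instead of failing
// later during infrastructure reconciliation.
func ValidateCloudProviderSecret(secret *corev1.Secret) error {
	for _, key := range requiredKeys {
		if value, ok := secret.Data[key]; !ok || len(value) == 0 {
			return fmt.Errorf("secret %s/%s is missing a non-empty %q field", secret.Namespace, secret.Name, key)
		}
	}
	return nil
}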

Remove zone field in shoot controlPlaneConfig section

What would you like to be added:
Here is the example config in Shoot for Alicloud:

  provider:
    type: alicloud
    controlPlaneConfig:
      apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
      kind: ControlPlaneConfig
      zone: cn-shanghai-f

This zone is used for the cloud-controller-manager config. The zone info ends up in a configmap (cloud-provider-config) in the shoot control plane namespace.

  1. Use any zone defined in the worker groups; the first one is OK.
  2. Change the configmap to a secret.

Why is this needed:

Unable to create volumes on alicloud with specific CIDR

When creating a shoot with specific networking CIDRs, Kubernetes is unable to create the PVC, with the error message:

Normal   ExternalProvisioning  13s (x2 over 19s)  persistentvolume-controller                                                                                  waiting for a volume to be created, either by external provisioner "diskplugin.csi.alibabacloud.com" or manually created by system administrator
Warning  ProvisioningFailed    9s                 diskplugin.csi.alibabacloud.com_csi-plugin-controller-78d89b9fd5-5tpgg_ca47ea55-ec22-11e9-83d0-4a4f7dd934ba  failed to provision volume with StorageClass "default": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Normal   Provisioning          8s (x2 over 19s)   diskplugin.csi.alibabacloud.com_csi-plugin-controller-78d89b9fd5-5tpgg_ca47ea55-ec22-11e9-83d0-4a4f7dd934ba  External provisioner is provisioning volume for claim "default/redis-data-tetst"

How to reproduce:

  1. Create an Alicloud shoot with the following network configuration (e.g. https://github.com/gardener/gardener/blob/2253ecb2b16bacbc72dc944c82b8bca911a23f2c/example/90-shoot.yaml#L212):
networking:
    type: calico
    pods: 100.96.0.0/11
    nodes: 10.250.0.0/16
    services: 100.64.0.0/13
  2. Create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: diskplugin.csi.alibabacloud.com
  name: redis-data-tetst
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: default
  volumeMode: Filesystem

[AliCloud] Support Customized image

CoreOS is not well supported by AliCloud. There might be security issues with an outdated CoreOS version. We need to ensure Gardener clusters run on the latest CoreOS. A customized image will be a solution.

After adaptation:

  • Customized image IDs are different in different regions (see the sketch after this list).
  • By default, a customized image is private. That means some extra configuration is needed for other sub-accounts to be able to consume the customized image.
  • Since we can do everything in a customized image, we can support cloud-config when a VM is bootstrapped. That means we could use the standard coreos extension to boot a node.
  • We continue to support the CoreOS image provided by AliCloud. As it is not an up-to-date version, it is not suggested for use in production.
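
To illustrate the first point, a sketch of how per-region image IDs might be expressed by extending the ControllerConfiguration example shown earlier in this document; the regions field and the image IDs are illustrative assumptions:

apiVersion: alicloud.provider.extensions.config.gardener.cloud/v1alpha1
kind: ControllerConfiguration
machineImages:
- name: coreos
  version: 2023.5.0
  # hypothetical per-region mapping for a customized (private) image
  regions:
  - name: cn-shanghai
    id: m-uf6xxxxxxxxxxxxxxxxx
  - name: eu-central-1
    id: m-gw8xxxxxxxxxxxxxxxxx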

PV is not deleted after shoot is deleted sometimes

What happened:
Sometimes a PV is not deleted after the shoot is deleted.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Make node name of shoot more pretty

What would you like to be added:
Currently, the node names of the shoot do not look pretty; they look like izuf6dav0ojlhh1wxqo7ufz. Shall we make them prettier, just like on GCP or even AWS?
Why is this needed:

istio-ingressgateway issues on alicloud

How to categorize this issue?

/area testing
/kind bug
/priority critical
/platform alicloud

What happened:
The following conformance tests are failing/flaky on alicloud:

time="2020-11-17T16:24:56Z" level=info msg="test suite summary: {ExecutedTestcases:305 SuccessfulTestcases:301 FailedTestcases:4 FlakedTestcases:24 Flaked:true TestsuiteDuration:2079 TestsuiteSuccessful:false DescriptionFile:working.json StartTime:2020-11-17 15:50:17.359754303 +0000 UTC m=+711.749361038 FinishedTime:2020-11-17 16:24:56.359754303 +0000 UTC m=+2790.749361038 ExecutionGroup:conformance FailedTestcaseNames:[[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]]}\n"

In a more human readable format:

[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]

We have seen the failures since 16.11.2020, and there were 8 failures in a row, which means that the corresponding conformance tests are not flaky but are simply failing for some reason.

I tried to run the failing tests on their own and they passed.

What you expected to happen:
No conformance test failures.

How to reproduce it (as minimally and precisely as possible):
I believe the following reproduces it:

export GO111MODULE=on; export E2E_EXPORT_PATH=/tmp/export; export KUBECONFIG=/mye2e/shoot.config; export GINKGO_PARALLEL=false
go run -mod=vendor ./integration-tests/e2e --k8sVersion=1.19.3 --cloudprovider=alicloud --testcasegroup="conformance"

See https://github.com/gardener/test-infra/tree/0.177.0/integration-tests/e2e

Environment:

  • Gardener version (if relevant): v1.12.8
  • Extension version: v1.19.0
  • Kubernetes version (use kubectl version):v1.19.3
  • Cloud provider or hardware configuration:
  • Others:

Implement `Infrastructure` controller for Alicloud provider

Similar to how we have implemented the Infrastructure extension resource controller for the AWS provider, let's please now do it for Alicloud.

Based on the current implementation the InfrastructureConfig should look like this:

apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    cidr: 10.250.0.0/16
#   id: some-id
  zones:
  - name: zone-1a
    workers: 10.250.0.0/19

Based on the current implementation the InfrastructureStatus should look like this:

---
apiVersion: alicloud.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureStatus
keyName: ssh-key-name
networks:
  vpc:
    id: vpc-id
    vswitches:
    - purpose: nodes
      id: vswitch-id
      zone: zone-1a
    securityGroups:
    - purpose: nodes
      id: sec-group-id

The current infrastructure creation/deletion implementation can be found here. Please try to change as little as possible (with every change the risk that we break something increases!) and just move the code over into the extensions infrastructure actuator.
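
For orientation, a heavily simplified sketch of an infrastructure actuator skeleton; the method signatures are a stand-in (the real extensions-library interface differs in details), and the bodies are placeholders for the moved-over code:

package infrastructure

import (
	"context"

	extensionsv1alpha1 "github.com/gardener/gardener/pkg/apis/extensions/v1alpha1"
)

// actuator wires the existing Alicloud infrastructure creation/deletion code
// into the extension's Infrastructure controller.
type actuator struct {
	// clients, decoders, etc. would live here
}

// Reconcile creates or updates the VPC, vswitches, security groups, etc. and
// writes the resulting IDs into the InfrastructureStatus.
func (a *actuator) Reconcile(ctx context.Context, infra *extensionsv1alpha1.Infrastructure) error {
	// TODO: move the existing creation code here with as few changes as possible.
	return nil
}

// Delete tears down everything that was created during Reconcile.
func (a *actuator) Delete(ctx context.Context, infra *extensionsv1alpha1.Infrastructure) error {
	// TODO: move the existing deletion code here with as few changes as possible.
	return nil
}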
