kubernetes-sigs / cloud-provider-huaweicloud

HUAWEI CLOUD Controller Manager is an external cloud controller manager for running Kubernetes in a HUAWEI CLOUD cluster.

License: Apache License 2.0
Currently only LoadBalancer is supported. What are the plans to support additional services?
Volume service (similar to OpenStack Cinder)
DNS service (similar to External-DNS Designate)
...
A roadmap would be nice, as well as a note on which Huawei Cloud (API) versions are supported.
Many thanks
1. When a service is created without backend pods, the CCM creates an LB without a listener.
2. When the service is then deleted, the CCM does not delete the LB.
This is caused by the `if len(listeners) == 0` check in alb.go => GetLoadBalancer:
```go
func (alb *ALBCloud) GetLoadBalancer(ctx context.Context, clusterName string, service *v1.Service) (status *v1.LoadBalancerStatus, exists bool, err error) {
	status = &v1.LoadBalancerStatus{}
	albProvider, err := alb.getALBClient(service.Namespace)
	if err != nil {
		if apierrors.IsNotFound(err) {
			return nil, false, nil
		}
		return nil, false, err
	}
	listeners, err := albProvider.findListenerOfService(service)
	if err != nil {
		return nil, false, err
	}
	// An LB with no listeners is reported as non-existent here,
	// so it is never cleaned up when the service is deleted.
	if len(listeners) == 0 {
		return nil, false, nil
	}
	status.Ingress = append(status.Ingress, v1.LoadBalancerIngress{IP: service.Spec.LoadBalancerIP})
	return status, true, nil
}
```
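The core of the bug can be isolated into the existence decision itself. The sketch below (a self-contained illustration, not the provider's actual types: `lbState` and its fields are hypothetical stand-ins) shows why keying existence on the load balancer rather than on its listeners would let the deletion path find and remove the orphaned LB:

```go
package main

import "fmt"

// lbState models what the cloud API reports for a service's load balancer.
// The field names are illustrative, not the provider's real types.
type lbState struct {
	lbFound       bool
	listenerCount int
}

// existsBuggy mirrors the current behaviour: an LB with zero listeners is
// reported as absent, so EnsureLoadBalancerDeleted never cleans it up.
func existsBuggy(s lbState) bool {
	return s.lbFound && s.listenerCount > 0
}

// exists keys existence on the LB itself, so a listener-less LB is still
// found and can be deleted when the service goes away.
func exists(s lbState) bool {
	return s.lbFound
}

func main() {
	orphan := lbState{lbFound: true, listenerCount: 0}
	fmt.Println(existsBuggy(orphan)) // false: deletion is skipped, the LB leaks
	fmt.Println(exists(orphan))      // true: deletion can proceed
}
```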
What happened:
I tried to set up the Huawei Cloud Controller Manager on a self-managed Kubernetes cluster to provision Services of type LoadBalancer, but it reported the following error:
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://172.16.0.4:6443/api/v1/secrets?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
What you expected to happen:
For the CCM to execute flawlessly and provision a LoadBalancer for me
How to reproduce it (as minimally and precisely as possible):
Compile the code and run it with the following cloud config:
```json
{
  "LoadBalancer": {
    "apiserver": "https://172.16.0.4:6443",
    "signerType": "ec2",
    "elbAlgorithm": "roundrobin",
    "region": "ap-southeast-3",
    "vpcId": "742639a7-083e-47e2-b4d1-ef20000743bb",
    "subnetId": "5d17a274-fde3-4d97-9e4b-a1c251030220",
    "ecsEndpoint": "https://ecs.ap-southeast-3.myhuaweicloud.com",
    "elbEndpoint": "https://elb.ap-southeast-3.myhuaweicloud.com",
    "albEndpoint": "https://elb.ap-southeast-3.myhuaweicloud.com",
    "vpcEndpoint": "https://vpc.ap-southeast-3.myhuaweicloud.com",
    "natEndpoint": "https://nat.ap-southeast-3.myhuaweicloud.com",
    "enterpriseEnable": "false"
  },
  "Auth": {
    "AccessKey": "<AK>",
    "SecretKey": "<SK>",
    "IAMEndpoint": "https://iam.myhuaweicloud.com",
    "ECSEndpoint": "https://ecs.ap-southeast-3.myhuaweicloud.com",
    "DomainID": "d9f69be543bd440987bd9a8a21006e73",
    "ProjectID": "0de63ee1f900f42d2f98c01594559aa6",
    "Region": "ap-southeast-1",
    "Cloud": "myhwclouds.com"
  }
}
```
Anything else we need to know?:
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
cat /etc/os-release:
uname -a:
What would you like to be added:
Introduce a GitHub Actions workflow to publish latest images automatically, just like the Karmada workflow.
I've configured the four secrets:
Why is this needed:
We should support `make image` to create an image automatically.
What happened:
When we disable the insecure port, calls to the API server fail with x509 errors.
I0923 08:49:36.423555 1 round_trippers.go:424] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: huawei-cloud-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://192.168.0.199:6443/api/v1/secrets?limit=500&resourceVersion=0'
I0923 08:49:36.430818 1 round_trippers.go:444] GET https://192.168.0.199:6443/api/v1/secrets?limit=500&resourceVersion=0 in 7 milliseconds
I0923 08:49:36.430833 1 round_trippers.go:450] Response Headers:
E0923 08:49:36.430912 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.0.199:6443/api/v1/secrets?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
What you expected to happen:
The call should succeed even when the API server's certificate is signed by an authority unknown to the CCM, or there should be an option to skip verification.
How to reproduce it (as minimally and precisely as possible):
The error occurs when cloud-provider-huaweicloud is started against a cluster whose API server is served over HTTPS with such a certificate.
Anything else we need to know?:
Environment:
kubectl version
): v1.19.16
What happened:
The thread pool of the Endpoint listener is used incorrectly, so it cannot effectively bound the number of concurrent workers.
What would you like to be added:
After modifying huawei-cloud-provider/loadbalancer-config, the configuration should take effect immediately instead of requiring a CCM restart.
Why is this needed:
To be more efficient and reduce errors.
I'm having problems trying to deploy huawei-cloud-controller-manager; its log has the following messages:
W0706 05:18:15.792435 1 throttle.go:265] Throttle config file is not exist.
I0706 05:18:16.091719 1 serving.go:312] Generated self-signed cert in-memory
W0706 05:18:16.838581 1 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0706 05:18:16.840644 1 controllermanager.go:120] Version: v0.0.0-master+$Format:%h$
I0706 05:18:16.840797 1 config.go:86] Log conf, Auth.IAMEndpoint: iam.cn-south-1.myhuaweicloud.com
I0706 05:18:16.840810 1 config.go:87] Log conf, LoadBalancer.SecretName: cloud-controller-manager-secret
W0706 05:18:16.840933 1 shared_informer.go:386] The specified resyncPeriod 30s is invalid because this shared informer doesn't support resyncing
E0706 05:18:16.841644 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Secret: the server rejected our request for an unknown reason (get secrets)
E0706 05:18:17.842579 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Secret: the server rejected our request for an unknown reason (get secrets)
(the same reflector error repeats every second)
What would you like to be added:
When creating a Service of type LoadBalancer backed by an LB with a public IP, a security group has to be created in order to expose it externally. AFAIK, today the responsibility for doing this is on the user; I propose adding an option to have the CCM take care of it, similar to what the OpenStack CCM does.
Why is this needed:
To simplify and automate service exposure.
The --version flag is added by cloud-controller-manager:
https://github.com/kubernetes/kubernetes/blob/f437ff75d455176eb5d5f85df258ae07a4ec35e7/cmd/cloud-controller-manager/app/controllermanager.go#L88
And the action is here:
https://github.com/kubernetes/kubernetes/blob/f437ff75d455176eb5d5f85df258ae07a4ec35e7/cmd/cloud-controller-manager/app/controllermanager.go#L69
Since we build outside the Kubernetes tree, we don't inject version information, so the ugly default version is shown:
Kubernetes v0.0.0-master+$Format:%h$
Possible solutions:
Modify the command after we obtain the default command:
https://github.com/huawei-cloudnative/cloud-provider-huaweicloud/blob/16de52d427670a2e6bb8ceefd5a54e39ce4c4957/cmd/cloud-controller-manager/cloud-controller-manager.go#L36
Mock PrintAndExitIfRequested() to change its behaviour:
https://github.com/kubernetes/kubernetes/blob/f437ff75d455176eb5d5f85df258ae07a4ec35e7/cmd/cloud-controller-manager/app/controllermanager.go#L69
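The usual pattern for injecting a real version string is a package-level variable overridden with `-ldflags` at build time. In Kubernetes-based binaries the real target is the version variables in k8s.io/component-base; the variable below is a simplified stand-in to show the mechanism:

```go
package main

import "fmt"

// Version is overridden at build time, e.g.:
//   go build -ldflags "-X main.Version=v1.2.3" ./...
// (A real CCM would instead override the gitVersion variables in
// k8s.io/component-base/version; this stand-in only shows the idea.)
var Version = "v0.0.0-unset"

func main() {
	fmt.Println("huawei-cloud-controller-manager", Version)
}
```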
Hi,
I am facing an issue running this. I have created the cloud-config secret as per the guide.
root@k8s-master-0:# kubectl get pod -n kube-system | grep huawei-cloud-controller-manager
huawei-cloud-controller-manager-869b854df9-xwm9j 0/1 Error 1 (7s ago) 8s
root@k8s-master-0:# kubectl logs -n kube-system huawei-cloud-controller-manager-869b854df9-xwm9j
I0313 07:36:57.636259 1 flags.go:64] FLAG: --allocate-node-cidrs="false"
I0313 07:36:57.636301 1 flags.go:64] FLAG: --allow-untagged-cloud="false"
I0313 07:36:57.636304 1 flags.go:64] FLAG: --authentication-kubeconfig=""
I0313 07:36:57.636308 1 flags.go:64] FLAG: --authentication-skip-lookup="false"
I0313 07:36:57.636310 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0313 07:36:57.636313 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
I0313 07:36:57.636315 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0313 07:36:57.636319 1 flags.go:64] FLAG: --authorization-kubeconfig=""
I0313 07:36:57.636321 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0313 07:36:57.636324 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0313 07:36:57.636326 1 flags.go:64] FLAG: --bind-address="0.0.0.0"
I0313 07:36:57.636329 1 flags.go:64] FLAG: --cert-dir=""
I0313 07:36:57.636332 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator"
I0313 07:36:57.636334 1 flags.go:64] FLAG: --client-ca-file=""
I0313 07:36:57.636337 1 flags.go:64] FLAG: --cloud-config="/etc/config/cloud-config"
I0313 07:36:57.636339 1 flags.go:64] FLAG: --cloud-provider="huaweicloud"
I0313 07:36:57.636342 1 flags.go:64] FLAG: --cluster-cidr=""
I0313 07:36:57.636344 1 flags.go:64] FLAG: --cluster-name="kubernetes"
I0313 07:36:57.636346 1 flags.go:64] FLAG: --concurrent-service-syncs="1"
I0313 07:36:57.636352 1 flags.go:64] FLAG: --configure-cloud-routes="true"
I0313 07:36:57.636355 1 flags.go:64] FLAG: --contention-profiling="false"
I0313 07:36:57.636357 1 flags.go:64] FLAG: --controller-start-interval="0s"
I0313 07:36:57.636360 1 flags.go:64] FLAG: --controllers="[*]"
I0313 07:36:57.636363 1 flags.go:64] FLAG: --enable-leader-migration="false"
I0313 07:36:57.636366 1 flags.go:64] FLAG: --external-cloud-volume-plugin=""
I0313 07:36:57.636368 1 flags.go:64] FLAG: --feature-gates=""
I0313 07:36:57.636372 1 flags.go:64] FLAG: --help="false"
I0313 07:36:57.636374 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
I0313 07:36:57.636377 1 flags.go:64] FLAG: --kube-api-burst="30"
I0313 07:36:57.636379 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0313 07:36:57.636382 1 flags.go:64] FLAG: --kube-api-qps="20"
I0313 07:36:57.636385 1 flags.go:64] FLAG: --kubeconfig=""
I0313 07:36:57.636387 1 flags.go:64] FLAG: --leader-elect="true"
I0313 07:36:57.636390 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
I0313 07:36:57.636392 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s"
I0313 07:36:57.636394 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
I0313 07:36:57.636396 1 flags.go:64] FLAG: --leader-elect-resource-name="cloud-controller-manager"
I0313 07:36:57.636399 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
I0313 07:36:57.636401 1 flags.go:64] FLAG: --leader-elect-retry-period="2s"
I0313 07:36:57.636404 1 flags.go:64] FLAG: --leader-migration-config=""
I0313 07:36:57.636406 1 flags.go:64] FLAG: --log-flush-frequency="5s"
I0313 07:36:57.636409 1 flags.go:64] FLAG: --master=""
I0313 07:36:57.636412 1 flags.go:64] FLAG: --min-resync-period="12h0m0s"
I0313 07:36:57.636414 1 flags.go:64] FLAG: --node-monitor-period="5s"
I0313 07:36:57.636416 1 flags.go:64] FLAG: --node-status-update-frequency="5m0s"
I0313 07:36:57.636418 1 flags.go:64] FLAG: --node-sync-period="0s"
I0313 07:36:57.636421 1 flags.go:64] FLAG: --permit-address-sharing="false"
I0313 07:36:57.636423 1 flags.go:64] FLAG: --permit-port-sharing="false"
I0313 07:36:57.636425 1 flags.go:64] FLAG: --profiling="true"
I0313 07:36:57.636427 1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
I0313 07:36:57.636429 1 flags.go:64] FLAG: --requestheader-client-ca-file=""
I0313 07:36:57.636432 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0313 07:36:57.636435 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
I0313 07:36:57.636438 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
I0313 07:36:57.636442 1 flags.go:64] FLAG: --route-reconciliation-period="10s"
I0313 07:36:57.636444 1 flags.go:64] FLAG: --secure-port="10258"
I0313 07:36:57.636447 1 flags.go:64] FLAG: --tls-cert-file=""
I0313 07:36:57.636449 1 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0313 07:36:57.636452 1 flags.go:64] FLAG: --tls-min-version=""
I0313 07:36:57.636454 1 flags.go:64] FLAG: --tls-private-key-file=""
I0313 07:36:57.636456 1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
I0313 07:36:57.636460 1 flags.go:64] FLAG: --use-service-account-credentials="true"
I0313 07:36:57.636462 1 flags.go:64] FLAG: --v="5"
I0313 07:36:57.636465 1 flags.go:64] FLAG: --version="false"
I0313 07:36:57.636477 1 flags.go:64] FLAG: --vmodule=""
I0313 07:36:57.942334 1 serving.go:348] Generated self-signed cert in-memory
W0313 07:36:57.942359 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0313 07:36:58.087733 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
I0313 07:36:58.088125 1 cloud-controller-manager.go:102] cloudConfig: {"Name":"huaweicloud","CloudConfigFile":"/etc/config/cloud-config"}
F0313 07:36:58.088142 1 plugins.go:154] Couldn't open cloud provider configuration /etc/config/cloud-config: &fs.PathError{Op:"open", Path:"/etc/config/cloud-config", Err:0x2}
If I am right, the manifest file has some issues; can you identify and resolve them?
What would you like to be added:
For example, this line adds a goroutine that is not controlled by its parent routine's context.
Why is this needed:
There are many orphan goroutines in our code that are not managed by a context, so they cannot be cancelled or timed out under global control. I think this needs to change.
What would you like to be added:
When a workload is scaled up onto a new node, no ensure event fires to update the backend members in the ELB. The same problem occurs on scale-down.
Why is this needed:
Since the design is to send traffic to the correct nodes, an ensure should be fired when workloads come up, go down, or scale.
[root@ecs-d8b6 cloud-provider-huaweicloud]# go build cmd/cloud-controller-manager/cloud-controller-manager.go
go: finding k8s.io/component-base latest
go: k8s.io/[email protected] requires
k8s.io/[email protected]: reading k8s.io/api/go.mod at revision v0.0.0: unknown revision v0.0.0
Or
[root@ecs-d8b6 cloud-provider-huaweicloud]# go get -v all
go: finding k8s.io/component-base latest
get "k8s.io/cli-runtime": found meta tag get.metaImport{Prefix:"k8s.io/cli-runtime", VCS:"git", RepoRoot:"https://github.com/kubernetes/cli-runtime"} at //k8s.io/cli-runtime?go-get=1
get "k8s.io/cluster-bootstrap": found meta tag get.metaImport{Prefix:"k8s.io/cluster-bootstrap", VCS:"git", RepoRoot:"https://github.com/kubernetes/cluster-bootstrap"} at //k8s.io/cluster-bootstrap?go-get=1
get "k8s.io/sample-apiserver": found meta tag get.metaImport{Prefix:"k8s.io/sample-apiserver", VCS:"git", RepoRoot:"https://github.com/kubernetes/sample-apiserver"} at //k8s.io/sample-apiserver?go-get=1
get "k8s.io/apiserver": found meta tag get.metaImport{Prefix:"k8s.io/apiserver", VCS:"git", RepoRoot:"https://github.com/kubernetes/apiserver"} at //k8s.io/apiserver?go-get=1
get "k8s.io/kubelet": found meta tag get.metaImport{Prefix:"k8s.io/kubelet", VCS:"git", RepoRoot:"https://github.com/kubernetes/kubelet"} at //k8s.io/kubelet?go-get=1
get "k8s.io/metrics": found meta tag get.metaImport{Prefix:"k8s.io/metrics", VCS:"git", RepoRoot:"https://github.com/kubernetes/metrics"} at //k8s.io/metrics?go-get=1
get "k8s.io/kube-controller-manager": found meta tag get.metaImport{Prefix:"k8s.io/kube-controller-manager", VCS:"git", RepoRoot:"https://github.com/kubernetes/kube-controller-manager"} at //k8s.io/kube-controller-manager?go-get=1
get "k8s.io/kube-proxy": found meta tag get.metaImport{Prefix:"k8s.io/kube-proxy", VCS:"git", RepoRoot:"https://github.com/kubernetes/kube-proxy"} at //k8s.io/kube-proxy?go-get=1
get "k8s.io/cloud-provider": found meta tag get.metaImport{Prefix:"k8s.io/cloud-provider", VCS:"git", RepoRoot:"https://github.com/kubernetes/cloud-provider"} at //k8s.io/cloud-provider?go-get=1
get "k8s.io/code-generator": found meta tag get.metaImport{Prefix:"k8s.io/code-generator", VCS:"git", RepoRoot:"https://github.com/kubernetes/code-generator"} at //k8s.io/code-generator?go-get=1
get "k8s.io/component-base": found meta
Hi,
I tried to use this CCM in a test cluster and observed that the nodes did not get initialised, i.e. the taint node.cloudprovider.kubernetes.io/uninitialized was never removed. This is the root issue:
E0629 09:47:05.321556 1 node_controller.go:364] failed to set node provider id: failed to get instance ID from cloud provider: unimplemented
In fact, checking the code, I noticed that the InstanceID method is not implemented (among others). After manually patching the node to add the ProviderID, it was initialised successfully.
I think implementing this is necessary to be able to use this CCM, as manually patching the nodes or explicitly setting the kubelet flag --provider-id don't seem to be viable options.
Additional info:
Kubernetes version: v1.17.5
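For context, InstanceID is expected to return the cloud-assigned instance ID, which the node controller then turns into spec.providerID, conventionally formatted as `<providerName>://<instance-id>`. A self-contained sketch of that mapping follows; `instanceIDByName` is a hypothetical stand-in for the ECS API lookup, and the fake ID is illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

const providerName = "huaweicloud"

// instanceIDByName is a hypothetical stand-in for an ECS API lookup
// that resolves a node name to the server's UUID.
func instanceIDByName(name string) (string, error) {
	fake := map[string]string{"node-1": "742639a7-083e-47e2-b4d1-ef20000743bb"}
	id, ok := fake[name]
	if !ok {
		return "", errors.New("instance not found")
	}
	return id, nil
}

// providerIDFor builds the value stored in the node's spec.providerID,
// conventionally "<providerName>://<instance-id>".
func providerIDFor(nodeName string) (string, error) {
	id, err := instanceIDByName(nodeName)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s://%s", providerName, id), nil
}

// instanceIDFromProviderID is the inverse, as needed by methods such as
// InstanceExistsByProviderID.
func instanceIDFromProviderID(providerID string) (string, error) {
	prefix := providerName + "://"
	if !strings.HasPrefix(providerID, prefix) {
		return "", errors.New("unexpected providerID format: " + providerID)
	}
	return strings.TrimPrefix(providerID, prefix), nil
}

func main() {
	pid, _ := providerIDFor("node-1")
	fmt.Println(pid)
}
```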
What happened:
I followed the guide to create a ConfigMap with the default configuration for a dedicated ELB, then tried to create the LoadBalancer service without any annotations. The service creation fails because the CCM is unable to configure the listener for the ELB. The default configuration in the ConfigMap does not work for dedicated ELB.
What you expected to happen:
A LoadBalancer service should be created with a dedicated ELB and listeners configured properly.
How to reproduce it (as minimally and precisely as possible):
Create a ConfigMap as per the sample below, replacing the AZ and flavor ID. Then try to create an ELB service without annotations:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: huawei-cloud-provider
  name: loadbalancer-config
data:
  loadBalancerOption: |-
    {
      "availability-zones": "az",
      "l4-flavor-id": "flavor id",
      "lb-algorithm": "ROUND_ROBIN",
      "eip-auto-create-option": {
        "ip_type": "5_bgp", "bandwidth_size": 5, "share_type": "PER"
      },
      "keep-eip": false,
      "session-affinity-flag": "on",
      "session-affinity-option": {
        "type": "SOURCE_IP",
        "persistence_timeout": 15
      },
      "health-check-flag": "on",
      "health-check-option": {
        "delay": 5,
        "timeout": 15,
        "max_retries": 5
      }
    }
```
Anything else we need to know?:
Environment:
kubectl version: 1.25.9
cat /etc/os-release: Ubuntu 20.04
uname -a:
What would you like to be added:
Add the parameters availability-zones and eip-auto-create-option to the loadbalancer-config configuration.
Why is this needed:
What happened:
Trying to create an ELB on a partner cloud using the Huawei Cloud K8s CCM, I chose a dedicated ELB with auto-created EIP.
What you expected to happen:
The ELB should be created with an EIP attached.
How to reproduce it (as minimally and precisely as possible):
Create an ELB using the CCM and provide the following:
```yaml
metadata:
  annotations:
    kubernetes.io/elb.class: dedicated
    kubernetes.io/elb.availability-zones: ae-ad-1a
    kubernetes.io/elb.l4-flavor-id: a53df111-cc06-4c49-a417-2aac536f8eb3
    kubernetes.io/elb.lb-algorithm: ROUND_ROBIN
    kubernetes.io/elb.keep-eip: "false"
    kubernetes.io/elb.eip-auto-create-option: >-
      {"ip_type": "5_bgp", "bandwidth_size": 5, "share_type": "PER"}
```
Anything else we need to know?:
If I don't provide this option, the ELB gets created without an EIP attached. I have also tried modifying the option to
{"bandwidth_name":"some_name","ip_type": "5_bgp", "bandwidth_size": 5, "share_type": "PER"}
and
{"name":"some_name","ip_type": "5_bgp", "bandwidth_size": 5, "share_type": "PER"}
Environment:
kubectl version: 1.24.11
cat /etc/os-release: ubuntu20.04 LTS
uname -a: 5.4.0-99-generic
Logs from CCM:
E0313 12:37:42.110722 1 dedicated_elb.go:47] Error in wrapper handler(), args: []interface {}{"Loadbalancer", (**model.LoadBalancer)(0xc000514c60)}, error: {"status_code":400,"request_id":"1402655ec2c66d91dd28901d69ec3d20","error_code":"ELB.8902","error_message":"Unable to find 'name' required attribute in bandwidth when id is null."}
I0313 12:37:42.110748 1 controller.go:808] Finished syncing service "take11/loadbalancer-service-demo" (4.515507678s)
E0313 12:37:42.110762 1 controller.go:289] error processing service take11/loadbalancer-service-demo (will retry): failed to ensure load balancer: {"status_code":400,"request_id":"1402655ec2c66d91dd28901d69ec3d20","error_code":"ELB.8902","error_message":"Unable to find 'name' required attribute in bandwidth when id is null."}
I0313 12:37:42.110803 1 event.go:294] "Event occurred" object="take11/loadbalancer-service-demo" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message=<
Error syncing load balancer: failed to ensure load balancer: {"status_code":400,"request_id":"1402655ec2c66d91dd28901d69ec3d20","error_code":"ELB.8902","error_message":"Unable to find 'name' required attribute in bandwidth when id is null."}
What happened:
In the LoadELBConfig function of https://github.com/kubernetes-sigs/cloud-provider-huaweicloud/blob/master/pkg/config/loadbalancerconfig.go, an object is referenced with the wrong name:
```go
metadataOption := []byte(data["metadataOption"])
if err := json.Unmarshal(metadataOption, &cfg.MetadataOpts); err != nil {
	klog.Errorf("error parsing metadataOption config: %s", err)
}
```
It should be:
```go
metadataOptions := []byte(data["metadataOptions"])
if err := json.Unmarshal(metadataOptions, &cfg.MetadataOpts); err != nil {
	klog.Errorf("error parsing metadataOptions config: %s", err)
}
```
Because of this typo, it doesn't matter what you put as metadataOption; it always takes the default values and reports an error that the metadataOption JSON input is invalid:
E0420 07:05:49.335832 1 loadbalancerconfig.go:128] error parsing metadataOption config: unexpected end of JSON input
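Independent of which key name is correct, the misleading "unexpected end of JSON input" error comes from unconditionally unmarshalling a possibly absent key. A defensive sketch (hypothetical names; `metadataOpts` is an illustrative struct, not the CCM's real one) would skip absent keys silently:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// metadataOpts is an illustrative stand-in for the CCM's real options struct.
type metadataOpts struct {
	FullSync bool `json:"full-sync"`
}

// parseOption unmarshals a ConfigMap value only when the key is present
// and non-empty, so an absent key keeps the defaults without logging
// "unexpected end of JSON input".
func parseOption(data map[string]string, key string, out any) error {
	raw, ok := data[key]
	if !ok || raw == "" {
		return nil // key absent: keep defaults
	}
	return json.Unmarshal([]byte(raw), out)
}

func main() {
	var opts metadataOpts
	err := parseOption(map[string]string{}, "metadataOption", &opts)
	fmt.Println(err)
}
```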
What you expected to happen:
There should be no issue with loading metadata options.
How to reproduce it (as minimally and precisely as possible):
Simply start the controller manager with any value.
Anything else we need to know?:
Environment:
kubectl version: v1.24.2
cat /etc/os-release: CentOS 7.9
uname -a: 3.10.0-1160.83.1.el7.x86_64
Hi,
When I create a Service of type LoadBalancer backed by an ELB using the example manifests, I get the following error:
E0701 22:23:20.864914 27379 controller.go:243] error processing service default/classic-service (will retry): failed to ensure load balancer: Failed to GetListenersList : request failed: {"message":"Authorization information is wrong","request_id":"7e70435102c3e74aa64c7110e78aad62"}
I don't exclude that it's just a configuration issue, but I double-checked it several times and did not find anything evidently wrong:
```json
{
  "LoadBalancer": {
    "apiserver": "127.0.0.1:8080",
    "secretName": "example.secret",
    "signerType": "ec2",
    "elbAlgorithm": "roundrobin",
    "tenantId": "<MY-PROJECT-ID>",
    "region": "eu-de",
    "vpcId": "<VPC-ID>",
    "subnetId": "<SUBNET-ID>",
    "ecsEndpoint": "https://ecs.eu-de.otc.t-systems.com",
    "elbEndpoint": "https://elb.eu-de.otc.t-systems.com",
    "albEndpoint": "https://elb.eu-de.otc.t-systems.com",
    "vpcEndpoint": "https://vpc.eu-de.otc.t-systems.com",
    "natEndpoint": "https://nat.eu-de.otc.t-systems.com",
    "enterpriseEnable": "false"
  },
  "Auth": {
    "SecretName": "",
    "AccessKey": "<MY-ACCESS_KEY>",
    "SecretKey": "<MY-SECRET-KEY>",
    "IAMEndpoint": "https://iam.eu-de.otc.t-systems.com",
    "DomainID": "<MY-DOMAIN-ID>",
    "ProjectID": "<MY-PROJECT-ID>",
    "Region": "eu-de",
    "Cloud": "otc.t-systems.com"
  }
}
```
And here is the secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example.secret
type: Opaque
data:
  access: <MY-ACCESS-KEY_BASE64>
  secret: <MY-SECRET-KEY_BASE64>
```
I instrumented the code to get some more info; these are the request and response:
I0702 07:44:40.935961 6830 http.go:129] request sent: GET /v1.0/<MY-PROJECT-ID>/elbaas/listeners HTTP/1.1
Host: elb.eu-de.otc.t-systems.com
Connection: close
Authorization: SDK-HMAC-SHA256 Access=<MY-ACCESS-KEY>, SignedHeaders=x-project-id;x-sdk-date, Signature=<SIGNATURE>
X-Project-Id: <MY-PROJECT-ID>
X-Sdk-Date: 20200702T074440Z

I0702 07:44:40.977038 6830 http.go:155] request received to : HTTP/1.1 401 Unauthorized
Connection: close
Content-Length: 97
Content-Type: application/json
Date: Thu, 02 Jul 2020 07:44:40 GMT
Server: Web Server

{"message":"Authorization information is wrong","request_id":"a37b23c372334707975b54d82df8c3b2"}
Note that calls to the server endpoint which rely on the credentials in the Auth section, which are the same as in the secret, are successful. Any idea?
BTW, the example secret seems to be misaligned with the code, which expects the keys access and secret instead of access.key and secret.key.
Hi,
What would you like to be added:
I'm using RKE (from Rancher) to deploy a Kubernetes cluster on a Huawei cloud cluster (Flexible Engine, Orange Group).
I can deploy a new cluster using the OpenStack cloud provider built into RKE, but some specific functionalities are missing, like load balancer settings.
The documentation isn't clear enough about how to deploy and use the Huawei Cloud Provider; could you please provide a detailed (step-by-step?) guide to deploying it?
Why is this needed:
Can't use specific Huawei Cloud settings for Kubernetes Cluster.
Thanks,
Mikaël Morvan
What would you like to be added:
Currently the configuration contains an authentication section:
But when dealing with LBs, this section is ignored and the credentials come from a secret that must be located in the same namespace as the Service of type LoadBalancer, whose name is configured at:
I'm not sure I understand the benefit of specifying credentials per namespace in which Services of type LoadBalancer are created, but wouldn't it be handier to at least fall back to the global credentials when namespaced ones are not present?
Why is this needed:
Simplify operations.
What would you like to be added:
We currently use a forked huaweicloud-sdk-go as a dependency:
cloud-provider-huaweicloud/go.mod
Line 6 in daa23b8
The reason is that huaweicloud-sdk-go does not support Go modules, so as a temporary solution we forked the repo and defined a module in it.
Why is this needed:
Now a new SDK named huaweicloud-sdk-go-v3 has been released, and it supports Go modules, so it's time to update the dependency to the new SDK.
Iteration Tasks:
Tasks can be split per interface that depends on the old huaweicloud-sdk-go:
I found that the cloud provider currently uses an old version of the APIGW SDK to communicate with APIGW; can we update the SDK to 2.0.2?
What would you like to be added:
Add some script files for E2E testing so that the E2E tests can be run quickly.
Why is this needed:
We have written the E2E test code, but to run the tests today we need to build an image, push it, and then run them with ginkgo. Some scripts could automate exactly these steps.
What would you like to be added:
Put the authentication credential secret in a specified namespace and read it from there when needed. This is easier to use and easier to maintain.
Why is this needed:
I see that the HUAWEI CLOUD authentication credential secret must be created in the namespace of the ELB service, which is not very convenient. If ELB services are created in multiple namespaces, we need to create the same secret in each namespace.
Update all docs and replace huawei-cloudnative/cloud-provider-huaweicloud with kubernetes-sigs/cloud-provider-huaweicloud.
After #31, we don't need signerType and region anymore:
LBConfig struct
NewELBClient
What happened:
At the beginning, load balancing worked normally. However, after a period of time I found in the Huawei Cloud console that the backend servers of the ELB were missing, while my pods and Services in k8s were still working normally. This caused the ELB to stop working.
What you expected to happen:
The ELB should keep working.
How to reproduce it (as minimally and precisely as possible):
I don't know how to reproduce it, but it has happened many times. I need a way to trace it (for example, logs).
Anything else we need to know?:
Environment:
kubectl version:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-1693240433-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx-1693240433
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.1
    helm.sh/chart: ingress-nginx-4.7.1
  annotations:
    kubernetes.io/elb.class: shared
    kubernetes.io/elb.health-check-flag: 'on'
    kubernetes.io/elb.health-check-option: '{"delay": 3, "timeout": 15, "max_retries": 3}'
    kubernetes.io/elb.id: XXXXXXXX
    kubernetes.io/elb.lb-algorithm: LEAST_CONNECTIONS
    meta.helm.sh/release-name: ingress-nginx-1693240433
    meta.helm.sh/release-namespace: ingress-nginx
status:
  loadBalancer:
    ingress:
      - ip: 192.168.0.247
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 80
      targetPort: http
      nodePort: 31825
    - name: https
      protocol: TCP
      appProtocol: https
      port: 443
      targetPort: https
      nodePort: 32684
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx-1693240433
    app.kubernetes.io/name: ingress-nginx
  clusterIP: 10.233.8.69
  clusterIPs:
    - 10.233.8.69
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
```
OS (e.g. cat /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
Kernel (e.g. uname -a):
Linux node1 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Network plugin and version (if this is a network-related bug):
calico v3.25.1
Others:
The kubernetes.io/elb.subnet-id annotation is described in docs/resources/service/annotations.md and used in examples/loadbalancers/service_enhanced.yaml, but I can't find any code that handles it.
So the documentation should be updated.
This is my configuration:
{
  "LoadBalancer": {
    "apiserver": "172.17.0.1:8080",
    "secretName": "huawei-ccm-secret",
    "signerType": "ec2",
    "elbAlgorithm": "roundrobin",
    "tenantId": "{tenant id}",
    "region": "eu-de",
    "vpcId": "{vpc id}",
    "subnetId": "{subnet id}",
    "ecsEndpoint": "https://ecs.eu-de.otc.t-systems.com",
    "elbEndpoint": "https://elb.eu-de.otc.t-systems.com",
    "albEndpoint": "https://elb.eu-de.otc.t-systems.com",
    "vpcEndpoint": "https://vpc.eu-de.otc.t-systems.com",
    "natEndpoint": "https://nat.eu-de.otc.t-systems.com",
    "enterpriseEnable": "false"
  },
  "Auth": {
    "SecretName": "",
    "AccessKey": "{AccessKey}",
    "SecretKey": "{SecretKey}",
    "IAMEndpoint": "https://iam.eu-de.otc.t-systems.com",
    "ECSEndpoint": "https://ecs.eu-de.otc.t-systems.com",
    "DomainID": "a01aafcf63744d988ebef2b1e04c5c34",
    "ProjectID": "bf74229f30c0421fae270386a43315ee",
    "Region": "eu-de",
    "Cloud": "otc.t-systems.com"
  }
}
Environment:
- kubectl version: v1.20.0-alpha.0.1400+bd39d3933be27d-dirty
- cat /etc/os-release: CentOS Linux 7
- uname -a: 3.10.0-1127.19.1.el7.x86_64

Possible tasks:
- test
- endpoint into Makefile

We should add HuaweiCloud to the Vendor list at Vendor Implementations.
Hi,
Yes the huaweicloud provider is not supported by RKE but RKE seems to support custom cloud providers.
They provide this example.
The question is: how can we build and install the Huawei cloud provider so that it can be used with RKE?
For the configuration, I hope something like openstack cloud provider should work.
[Global]
auth-url = xxxx
username = xxxx
password = xxxx
tenant-id = xxxx
domain-name = xxxx
region = xxxx

[LoadBalancer]
subnet-id = xxxx
Thank you for your help.
Hello @Edge94,
the above configuration looks like the Rancher in-tree cloud provider, doesn't it? What about the alternative of running cloud-provider-huaweicloud as a Rancher-specific external cloud provider, instead of cloud-provider-openstack, which does not support the non-Octavia-API-compatible Huawei Cloud Load Balancer? Here is an example of such a setup.
By the way @RainbowMango, does cloud-provider-huaweicloud plan to support Dedicated Load Balancers? Thanks for your support.
Originally posted by @rcarre in #94 (comment)
I have deployed cloud-provider-huaweicloud, but there is still no external IP on my nodes. All of my nodes have an EIP bound. node1, which runs huawei-cloud-controller-manager, shows two internal IPs: one is the local IP and the other is the EIP. The remaining nodes show no EIP at all, neither as InternalIP nor as ExternalIP.
What you expected to happen:
I need a way to discover the EIP for my nodes.
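For context, external cloud providers report node IPs through the cloud-provider NodeAddresses interface, and a bound EIP would normally surface there as an ExternalIP address. A minimal self-contained sketch of the expected result shape (the type and function are illustrative stand-ins, not the project's actual code, and the ECS API lookup is elided):

```go
package main

import "fmt"

// NodeAddress mirrors the shape of k8s.io/api/core/v1.NodeAddress,
// copied locally so this sketch compiles on its own.
type NodeAddress struct {
	Type    string // "InternalIP" or "ExternalIP"
	Address string
}

// buildNodeAddresses shows what a NodeAddresses implementation is
// expected to return: the private IP as InternalIP and, when an EIP
// is bound to the ECS instance, the EIP as ExternalIP.
// privateIP and eip are assumed inputs from the ECS API.
func buildNodeAddresses(privateIP, eip string) []NodeAddress {
	addrs := []NodeAddress{{Type: "InternalIP", Address: privateIP}}
	if eip != "" {
		addrs = append(addrs, NodeAddress{Type: "ExternalIP", Address: eip})
	}
	return addrs
}

func main() {
	// Placeholder IPs for illustration only.
	fmt.Println(buildNodeAddresses("192.168.0.10", "80.158.0.20"))
}
```

If the controller populated addresses this way for every node (not only the one it runs on), the EIPs would appear in `kubectl get nodes -o wide` as external IPs.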
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- kubectl version:
- cat /etc/os-release:
- uname -a:

What happened:
What you expected to happen:
The controller should not run two ensure operations for the same Service simultaneously.
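One common way to guarantee that two ensure operations on the same Service never overlap is a per-key mutex. A minimal self-contained sketch (the names are assumptions, not the project's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// serviceLocks hands out one mutex per Service key, so that two
// reconciliations of the same Service are serialized while different
// Services can still be processed in parallel.
type serviceLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newServiceLocks() *serviceLocks {
	return &serviceLocks{locks: map[string]*sync.Mutex{}}
}

// lockFor returns the mutex for a key, creating it on first use.
func (s *serviceLocks) lockFor(key string) *sync.Mutex {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.locks[key]; !ok {
		s.locks[key] = &sync.Mutex{}
	}
	return s.locks[key]
}

func main() {
	locks := newServiceLocks()
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m := locks.lockFor("default/my-svc")
			m.Lock()
			defer m.Unlock()
			counter++ // critical section: only one "ensure" runs at a time
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 2
}
```

A production controller would more likely rely on a rate-limited workqueue, which already guarantees that a given key is processed by only one worker at a time.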
How to reproduce it (as minimally and precisely as possible):
create a new Kubernetes Service with the "kubernetes.io/elb.autocreate" annotation
Anything else we need to know?:
Environment:
kubectl version: 1.21

The method should be deleted.
What would you like to be added:
e2e tests need to be added for the NAT-type LB
Why is this needed:
to ensure that the NAT-type LB works well
What would you like to be added:
Add documentation for the internal load balancer
Why is this needed:
The current code connects to the classic load balancer with this endpoint:
url := "/v1.0/" + e.elbClient.TenantId + "/elbaas/loadbalancers/" + loadbalancerId
What would you like to be added:
Support shared load balancers via the /v2.0/lbaas/loadbalancers endpoint.
Why is this needed:
Requests against a shared load balancer currently fail with:
APIGW.0101 The API does not exist or has not been published in the environment
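The two API families differ only in their path prefix. A small sketch of the URL shapes (the function names are mine; the classic path is the one quoted from the code above, and the shared-LB path is the /v2.0/lbaas prefix quoted in the request):

```go
package main

import "fmt"

// classicLBURL builds the classic (v1.0) ELB resource path, which is
// scoped by tenant ID, as in the current code.
func classicLBURL(tenantID, lbID string) string {
	return "/v1.0/" + tenantID + "/elbaas/loadbalancers/" + lbID
}

// sharedLBURL builds the shared (v2.0, OpenStack-style lbaas) resource
// path, which carries no tenant segment.
func sharedLBURL(lbID string) string {
	return "/v2.0/lbaas/loadbalancers/" + lbID
}

func main() {
	fmt.Println(classicLBURL("tid", "lb1")) // /v1.0/tid/elbaas/loadbalancers/lb1
	fmt.Println(sharedLBURL("lb1"))         // /v2.0/lbaas/loadbalancers/lb1
}
```

Supporting shared LBs would mean selecting the prefix based on the load balancer type, since calling the v1.0 path against a shared LB is what produces the APIGW.0101 error.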
What happened:
If no EIP is used, an EIP deletion error occurs when the Service is deleted.
What you expected to happen:
If no EIP is used, there should be no error when calling the API.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- kubectl version:
- cat /etc/os-release:
- uname -a:

The module name in go.mod should be updated after the repo migration:
k8s.io/cloud-provider-huaweicloud --> sigs.k8s.io/cloud-provider-huaweicloud
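In go.mod this amounts to a one-line change, with all internal import paths rewritten to match (shown as an illustrative diff, not the actual commit):

```
-module k8s.io/cloud-provider-huaweicloud
+module sigs.k8s.io/cloud-provider-huaweicloud
```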