kubernetes / legacy-cloud-providers

This repository hosts the legacy in-tree cloud providers. Out-of-tree cloud providers can consume packages in this repo to support legacy implementations of their Kubernetes cloud provider.

License: Apache License 2.0
Language: Go (100%)
Topics: k8s-sig-cloud-provider, k8s-staging

legacy-cloud-providers' Introduction

legacy-cloud-providers

This repository hosts the legacy cloud providers that were previously hosted under k8s.io/kubernetes/pkg/cloudprovider/providers. Out-of-tree cloud providers can consume packages in this repo to support legacy implementations of their Kubernetes cloud provider.

Note: go-get or vendor this package as k8s.io/legacy-cloud-providers.
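For example, an out-of-tree provider binary can pull a legacy implementation in with a blank import; each provider package registers itself with the k8s.io/cloud-provider plugin registry in its init function. A minimal sketch, assuming the consumer wants the legacy AWS provider:

import (
    // Imported for its side effect: init() registers the "aws" provider
    // with the k8s.io/cloud-provider plugin registry.
    _ "k8s.io/legacy-cloud-providers/aws"
)

// The registered provider can then be instantiated by name, e.g.
// (using the standard registry helper from k8s.io/cloud-provider):
//   cloud, err := cloudprovider.InitCloudProvider("aws", cloudConfigPath)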

Purpose

To be consumed by out-of-tree cloud providers that wish to support legacy behavior from their in-tree equivalents.

Compatibility

The legacy providers here follow the same compatibility rules as cloud providers that were previously in k8s.io/kubernetes/pkg/cloudprovider/providers.

Where does it come from?

legacy-cloud-providers is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/legacy-cloud-providers. Code changes are made in that location, merged into k8s.io/kubernetes and later synced here.

Things you should NOT do

  1. Add a new cloud provider here.
  2. Directly modify anything under this repo. Changes are driven from k8s.io/kubernetes/staging/src/k8s.io/legacy-cloud-providers.
  3. Add new features/integrations to a cloud provider in this repo. Changes synced here should only be incremental bug fixes.

legacy-cloud-providers' People

Contributors

andrewsykim, andyzhangx, aramase, bentheelder, cheftako, cici37, dims, feiskyer, gaurav1086, gnufied, gongguan, humblec, jefftree, jpbetz, jsafrane, justaugustus, k8s-publishing-bot, kishorj, liggitt, m00nf1sh, madhavjivrajani, nilo19, pacoxu, pohly, prameshj, rainbowmango, thockin, v-xuxin, weijiehu, yangl900


legacy-cloud-providers' Issues

The retry is not effective when a conflict happens

k8s.io/legacy-cloud-providers/vsphere/vsphere.go

func tryUpdateNode(ctx context.Context, client clientset.Interface, updatedNode *v1.Node) error {
    for i := 0; i < updateNodeRetryCount; i++ {
        _, err := client.CoreV1().Nodes().Update(ctx, updatedNode, metav1.UpdateOptions{})
        if err != nil {
            if !apierrors.IsConflict(err) {
                return fmt.Errorf("vSphere cloud provider can not update node with zones info: %v", err)
            }
            // The reported problem: on a conflict the loop retries with the
            // same stale updatedNode, whose resourceVersion never changes,
            // so every subsequent attempt conflicts again.
        } else {
            return nil
        }
    }
    return fmt.Errorf("update node exceeds retry count")
}
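One possible fix, sketched below: re-read the node on every attempt so the retried update carries a fresh resourceVersion. client-go ships this pattern as retry.RetryOnConflict; the applyZones callback here is a hypothetical stand-in for the zone-labeling mutation the vSphere provider performs.

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    clientset "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

func tryUpdateNode(ctx context.Context, client clientset.Interface, nodeName string, applyZones func(*v1.Node)) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        // Fetch the latest copy so the update never reuses a stale resourceVersion.
        node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        applyZones(node)
        _, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    })
}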

How do I modify the health check interval of an AWS NLB TargetGroup?

I created a Service (type=LoadBalancer, with the service.beta.kubernetes.io/aws-load-balancer-type: nlb annotation), but I found that I can't change HealthCheckIntervalSeconds.

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # by default the type is elb (classic load balancer).
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  # this setting is to make sure the source IP address is preserved.
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

Looking at aws_loadbalancer.go, I found that it is set to 30s, but I didn't find any way to change it.

Is changing HealthCheckIntervalSeconds simply not supported when creating an NLB through a Kubernetes Service?
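Based on the hard-coded value observed above, it appears not to be configurable for NLB target groups in this provider. For comparison, here is a hypothetical sketch of what an annotation-driven interval could look like; the annotation name is illustrative only (an assumption, not a documented key for the NLB path), and the default mirrors the 30s value seen in aws_loadbalancer.go.

import (
    "strconv"

    v1 "k8s.io/api/core/v1"
)

// healthCheckIntervalSeconds reads a (hypothetical) interval annotation,
// falling back to the provider's default when absent or unparsable.
func healthCheckIntervalSeconds(svc *v1.Service, defaultSeconds int64) int64 {
    const annotation = "service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval" // assumed name
    if raw, ok := svc.Annotations[annotation]; ok {
        if n, err := strconv.ParseInt(raw, 10, 64); err == nil {
            return n
        }
    }
    return defaultSeconds
}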

Support IP Specification for AWS NLB via Annotation

We are provisioning AWS NLBs via the AWS cloud provider and Kubernetes Services of type LoadBalancer. We need the NLB IPs to be static in case the NLB is deleted and recreated. The AWS API supports specifying the IPs to be used by the NLB; can you please support the same via service annotations?
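For reference, the legacy provider does consult a ServiceAnnotationLoadBalancerEIPAllocations annotation (see the code quoted under "Does the NLB require at least 8 free IP addresses?" below), which pins pre-allocated Elastic IPs to the NLB, one allocation per subnet. A hedged sketch; the allocation IDs are placeholders:

// Illustrative only: the allocation IDs below are placeholders.
// Supply exactly one EIP allocation per NLB subnet, comma-separated.
svc.Annotations["service.beta.kubernetes.io/aws-load-balancer-eip-allocations"] =
    "eipalloc-11111111,eipalloc-22222222"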

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS file.

The template for the file can be found in the kubernetes-template repository[2]. A description for the file is in the steering-committee docs[3]; you might need to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it; otherwise I will see when you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Ensure GCP list calls are made using paginated list calls.

The list calls currently not using the paginated API are:

  1. gce/gce_clusters.go:97: list, err := g.containerService.Projects.Locations.Clusters.List(location).Do()
  2. gce/gce_tpu.go:129: response, err := g.tpuService.projects.Locations.Nodes.List(parent).Do()
  3. gce/gce_tpu.go:140: response, err := g.tpuService.projects.Locations.List(parent).Do()
  4. gce/gce.go:873: res, err := listCall.Do()

For large result sets these calls will do the wrong thing: a bare Do() returns only the first page of results.
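The generated Google API Go clients expose a Pages helper on list calls that follows nextPageToken until the result set is exhausted. A minimal sketch of the paginated pattern, using an illustrative compute call rather than the exact call sites listed above:

import (
    "context"

    compute "google.golang.org/api/compute/v1"
)

// listAllInstances accumulates every page; a bare Do() would return only
// the first page of results.
func listAllInstances(ctx context.Context, svc *compute.Service, project, zone string) ([]*compute.Instance, error) {
    var all []*compute.Instance
    err := svc.Instances.List(project, zone).Pages(ctx, func(page *compute.InstanceList) error {
        all = append(all, page.Items...)
        return nil
    })
    return all, err
}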

Does the NLB require at least 8 free IP addresses?

if eipList, present := annotations[ServiceAnnotationLoadBalancerEIPAllocations]; present {
    allocationIDs = strings.Split(eipList, ",")
    if len(allocationIDs) != len(subnetIDs) {
        return nil, fmt.Errorf("error creating load balancer: Must have same number of EIP AllocationIDs (%d) and SubnetIDs (%d)", len(allocationIDs), len(subnetIDs))
    }
}

It looks as if the logic for creating an NLB is the same as for an ELB, even though the NLB gives you the option to specify one Elastic IP address per subnet.

No option for 'target-type' available

It is recommended to start using network load balancers instead of application load balancers, but a network load balancer configured as internal supports neither hairpinning nor loopback: if the worker node selected as the destination is the same as the source, the connection times out (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-troubleshooting.html). One of the solutions AWS suggests is to register targets by IP address instead of by instance ID, so it would be nice to have that option available when using Istio.

Error syncing load balancer: failed to ensure load balancer: EnsureBackendPoolDeleted: failed to parse the VMAS ID : getAvailabilitySetNameByID: failed to parse the VMAS ID

Env: Azure OpenShift cluster (ARO)
Cloud: Azure
OpenShift Server Version: 4.7.21
Kubernetes Version: v1.20.0+558d959

Issue: Unable to create an Azure cloud-native load-balancer Service in the OpenShift cluster. Creation of the load-balancer Service gets stuck in a pending state and no public IP is assigned.

Error message on the service:
-> oc describe service

Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Normal   EnsuringLoadBalancer    83s (x6 over 4m10s)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  78s (x6 over 4m5s)   service-controller  Error syncing load balancer: failed to ensure load balancer: EnsureBackendPoolDeleted: failed to parse the VMAS ID : getAvailabilitySetNameByID: failed to parse the VMAS ID

Diagnosis from our end:

  • OpenShift uses MachineSets (CRD) and Machines (CRD). Machines translate to standalone VMs, not VM sets.
  • OpenShift doesn't use VM scale sets or availability sets.
  • When OpenShift fails to procure new Machines (the Machine custom resource is in a Failed state, and the Azure VM is in a failed state on the platform side), we are unable to create any new LB Service for applications in the cluster.

err = az.VMSet.EnsureBackendPoolDeleted(service, lbBackendPoolID, vmSetName, backendpoolToBeDeleted)

  • Load-balancer reconciliation fails (in the EnsureBackendPoolDeleted function) because of the failed node in the cluster, and never gets to the next steps of creating and assigning public IPs.
  • EnsureBackendPoolDeleted deals with deleting/decoupling VM sets from the backend pool. I don't see any code handling the standalone-VM scenario, so the function treats the failed standalone VM like a VM set, tries to parse its VMAS ID (in order to decouple it), fails to parse it, and returns the error "failed to parse the VMAS ID".

return fmt.Errorf("EnsureBackendPoolDeleted: failed to parse the VMAS ID %s: %v", vmasID, err)
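An illustrative sketch (not the provider's actual code) of the failure mode: the availability-set name is extracted from the tail of a resource ID of the form .../providers/Microsoft.Compute/availabilitySets/<name>, and a standalone VM carries an empty VMAS ID, so the extraction finds no match and the error above surfaces.

import (
    "fmt"
    "regexp"
)

var vmasIDRE = regexp.MustCompile(`/availabilitySets/([^/]+)$`)

// availabilitySetNameFromID returns the trailing name segment of a VMAS
// resource ID. An empty or malformed ID (e.g. from a standalone VM that
// belongs to no availability set) yields no match and returns an error.
func availabilitySetNameFromID(vmasID string) (string, error) {
    m := vmasIDRE.FindStringSubmatch(vmasID)
    if m == nil {
        return "", fmt.Errorf("failed to parse the VMAS ID %q", vmasID)
    }
    return m[1], nil
}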

Expected behavior:
 - Load-balancer reconciliation in the EnsureBackendPoolDeleted step (which is responsible for decoupling nodes from the backend pool) should not block creation of a new load-balancer Service when VMs that failed to create are part of the backend pool.

Work-arounds:

  1. Delete the failed Machine from OpenShift with the "oc delete machine <machine-name>" command. The MachineSet recreates the deleted Machine successfully and the problem resolves automatically. (or)
  2. Scale the MachineSet down to zero replicas, which deletes the failed Machine; the problem then resolves automatically.
