
gardener / gardener-extension-shoot-dns-service


Gardener extension controller for DNS services for shoot clusters.

Home Page: https://gardener.cloud

License: Apache License 2.0

Shell 13.00% Dockerfile 0.52% Makefile 2.62% Go 78.36% Smarty 3.00% Mustache 0.20% Python 2.31%

gardener-extension-shoot-dns-service's Introduction


Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor-specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed to support external controllers that contain their very own vendor specifics. This way, we can keep the Gardener core clean and independent.

Extension-Resources

Example extension resource:

apiVersion: extensions.gardener.cloud/v1alpha1
kind: Extension
metadata:
  name: "extension-dns-service"
  namespace: shoot--project--abc
spec:
  type: shoot-dns-service

How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start. Please make sure to have the kubeconfig for the cluster you want to connect to ready in the ./dev/kubeconfig file. Static code checks and tests can be executed by running make verify. We are using Go modules for dependency management and Ginkgo/Gomega for testing.

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

gardener-extension-shoot-dns-service's People

Contributors

acumino, aleksandarsavchev, andreasburger, andrerun, ccwienk, danielfoehrkn, dependabot[bot], dimitar-kostadinov, dimityrmirchev, etiennnr, g-pavlov, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, hendrikkahl, ialidzhikov, kostov6, mandelsoft, martinweindel, n-boshnakov, oliver-goetz, rfranzke, scheererj, shafeeqes, stoyanr, timebertt, timuthy, voelzmo, vpnachev, zkdev


gardener-extension-shoot-dns-service's Issues

Remove referenced resource in shoot spec if additional DNS provider is deleted

How to categorize this issue?

/area control-plane
/kind bug

What happened:

If an additional DNS provider is added in the shoot spec like

spec:
  dns:
    domain: webhook-test.lukas.shoot.dev.k8s-hana.ondemand.com
    providers:
      - primary: false
        secretName: other-secret
        type: aws-route53

the mutating webhook of the shoot-dns-service extension adds a referenced resource:

resources:
    - name: shoot-dns-service-other-secret
      resourceRef:
        kind: Secret
        name: other-secret
        apiVersion: v1

If the DNS provider is later removed from the shoot spec, the referenced resource entry is not removed along with it, and reconciliation fails once the secret is deleted.

What you expected to happen:

The mutating webhook should remove the referenced resource.
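
A hypothetical sketch of the desired webhook behavior (not the actual extension code): prune referenced resources that were added for DNS provider secrets which no longer appear in .spec.dns.providers. The "shoot-dns-service-" prefix follows the naming seen above.

package webhook

import (
	"strings"

	gardencorev1beta1 "github.com/gardener/gardener/pkg/apis/core/v1beta1"
)

func pruneStaleDNSResourceRefs(shoot *gardencorev1beta1.Shoot) {
	// Collect the resource names that should still exist.
	wanted := map[string]bool{}
	if shoot.Spec.DNS != nil {
		for _, p := range shoot.Spec.DNS.Providers {
			if p.SecretName != nil {
				wanted["shoot-dns-service-"+*p.SecretName] = true
			}
		}
	}
	// Keep everything not managed by this webhook, plus refs whose
	// provider still exists in the shoot spec.
	kept := shoot.Spec.Resources[:0]
	for _, r := range shoot.Spec.Resources {
		if !strings.HasPrefix(r.Name, "shoot-dns-service-") || wanted[r.Name] {
			kept = append(kept, r)
		}
	}
	shoot.Spec.Resources = kept
}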

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Add Gardener ErrorCode on DNS deletion errors

How to categorize this issue?

/area ops-productivity
/kind enhancement

What would you like to be added:

details := a.collectProviderDetailsOnDeletingDNSEntries(ctx, list)
return &reconcilerutils.RequeueAfterError{
	Cause:        fmt.Errorf("waiting until shoot DNS entries have been deleted: %s", details),
	RequeueAfter: 15 * time.Second,
}

returns errors like

task "Waiting until extension resources hibernated before kube-apiserver hibernation are ready" failed: Error while waiting for Extension shoot--some-shoot/shoot-dns-service to become ready: error during reconciliation: Error reconciling Extension: waiting until shoot DNS entries have been deleted: provider <provider-name> has status: cannot get hosted zones: InvalidClientTokenId: The security token included in the request is invalid.
	status code: 403, request id: 5ce248d4-bb7d-5e32-8629-1815b6badd0f

However, no Gardener error code is attached to these errors.
We should also call util.DetermineError here, similar to

err = retry.RetriableError(util.DetermineError(err, helper.KnownCodes))
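
A minimal sketch of the suggested change, reusing the snippet above (the exact call site and the helper.KnownCodes mapping are assumptions based on the existing code):

cause := fmt.Errorf("waiting until shoot DNS entries have been deleted: %s", details)
// Attach Gardener error codes (e.g. ERR_INFRA_UNAUTHORIZED for the 403 above)
// before requeueing, so that the codes surface in the Extension status.
return &reconcilerutils.RequeueAfterError{
	Cause:        util.DetermineError(cause, helper.KnownCodes),
	RequeueAfter: 15 * time.Second,
}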

Why is this needed:
Better ops-productivity.

dns-source-controller support for custom domains in shoot cluster

What would you like to be added:
I would like to manage DNS records for workloads on top of shoot clusters with Gardener's external-dns-management controller.

Whenever I create an ingress or a service with an annotation/label, I would like Gardener to manage the DNS entries in the cloud provider's DNS service. A separate DNSEntry would be fine; however, integration into ingress/service objects would be preferred (see the sketch below).

I don't want to manage external-dns or cert-manager deployments/credentials myself.
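
For illustration, a minimal sketch of the desired user experience in Go, assuming the dns.gardener.cloud/dnsnames annotation from external-dns-management; the service name and hostname are hypothetical:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newAnnotatedService builds a LoadBalancer Service whose DNS record would be
// managed by Gardener once a source controller observes the annotation.
func newAnnotatedService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "my-app",
			Annotations: map[string]string{
				// hypothetical hostname under the shoot's domain
				"dns.gardener.cloud/dnsnames": "app.my-shoot.example.com",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:  corev1.ServiceTypeLoadBalancer,
			Ports: []corev1.ServicePort{{Port: 80}},
		},
	}
}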

Why is this needed:
A common requirement for workloads is to be addressable (DNS) and secured (TLS). For both requirements external components exist (external-dns, cert-manager); however, they need to be managed, including their credentials. Since Gardener needs DNS and certificate management internally, it would be great if it could be made available to shoot clusters as well.

DNS Controller throws panic on multiple entries deletion

How to categorize this issue?

/area robustness
/kind bug
/priority normal

What happened:
The DNS controller enters a CrashLoopBackOff during deletion of multiple entries.
This impacts the whole seed cluster, and users cannot provision DNS using the controller.

panic: runtime error: slice bounds out of range [3:1] [recovered]
	panic: runtime error: slice bounds out of range [3:1]

goroutine 680 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x1a31020, 0xc01750d900)
	/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common.(*StateHandler).EnsureEntries(0xc004b990e0, 0x2cfdb00, 0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common/state.go:153 +0x440
github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common.(*StateHandler).Refresh(0xc004b990e0, 0xc005fde240, 0x1df3f60, 0xc0001d0390, 0xc001f885c0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common/state.go:139 +0x8d
github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common.NewStateHandler(0x1ddee20, 0xc000372000, 0xc000c6ad00, 0xc00340a340, 0xc003938501, 0x0, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/common/state.go:78 +0x37f
github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/lifecycle.(*actuator).deleteSeedResources(0xc000994420, 0x1ddee20, 0xc000372000, 0xc00340a340, 0x0, 0x0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/lifecycle/actuator.go:291 +0x406
github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/lifecycle.(*actuator).Delete(0xc000994420, 0x1ddee20, 0xc000372000, 0xc00340a340, 0x1b3f93e, 0x6)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/pkg/controller/lifecycle/actuator.go:192 +0x4d
github.com/gardener/gardener/extensions/pkg/controller/extension.(*reconciler).delete(0xc000a86180, 0x1ddee20, 0xc000372000, 0xc00340a340, 0x2cff020, 0x1a, 0xc000a29180, 0x0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/github.com/gardener/gardener/extensions/pkg/controller/extension/reconciler.go:239 +0x1da
github.com/gardener/gardener/extensions/pkg/controller/extension.(*reconciler).Reconcile(0xc000a86180, 0xc000b10ea0, 0x1a, 0xc000b10e80, 0x11, 0x0, 0x11, 0x1da4a20, 0xc00340a000)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/github.com/gardener/gardener/extensions/pkg/controller/extension/reconciler.go:193 +0x82a
github.com/gardener/gardener/extensions/pkg/controller.(*operationAnnotationWrapper).Reconcile(0xc0008d0fc0, 0xc000b10ea0, 0x1a, 0xc000b10e80, 0x11, 0x0, 0xbfd97860efdf10d2, 0xc0007f2123, 0xc00022efc8)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/github.com/gardener/gardener/extensions/pkg/controller/reconciler.go:90 +0x283
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00015c000, 0x198a4e0, 0xc0021ca900, 0xc0001f9e00)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:245 +0x161
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00015c000, 0xc0001f9f00)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:221 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00015c000)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:200 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000427cb0)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5f
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000427cb0, 0x3b9aca00, 0x0, 0x1, 0xc00073e360)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000427cb0, 0x3b9aca00, 0xc00073e360)
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/gardener/gardener-extension-shoot-dns-service/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:182 +0x526
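
For context, a minimal, self-contained illustration of this panic class (not the actual state.go logic): a Go slice expression s[low:high] panics at runtime when low > high, so EnsureEntries presumably computes inverted bounds. A defensive guard sketch:

package main

import "fmt"

func main() {
	entries := []string{"a", "b"}
	low, high := 3, 1 // inverted bounds, like the [3:1] in the crash

	// Without this check, entries[low:high] panics with
	// "slice bounds out of range [3:1]".
	if low >= 0 && low <= high && high <= len(entries) {
		fmt.Println(entries[low:high])
	} else {
		fmt.Println("skipping invalid slice bounds:", low, high)
	}
}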

What you expected to happen:
The controller to handle the deletion successfully instead of crashing.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:
aws

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Stop using github.com/pkg/errors

How to categorize this issue?

/area TODO
/kind enhancement
/priority 3

What would you like to be added:
Similar to gardener/gardener#4280 we should be using Go's native error wrapping (available since Go 1.13).

$ grep -r '"github.com/pkg/errors"' | grep -v vendor/ | cut -f 1 -d ':' | cut -d '/' -f 1-3 | sort | uniq -c | sort
      1 pkg/apis/helper
      1 pkg/controller/lifecycle
      1 test/system/shootdns_test.go
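
For illustration, a minimal sketch of the migration: replace errors.Wrap from github.com/pkg/errors with stdlib wrapping via fmt.Errorf and the %w verb (the function and sentinel error below are hypothetical):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("DNS entry not found")

func lookup() error {
	// before: return pkgerrors.Wrap(errNotFound, "looking up DNS entry")
	return fmt.Errorf("looking up DNS entry: %w", errNotFound)
}

func main() {
	err := lookup()
	// %w preserves the chain, so errors.Is/errors.As keep working:
	fmt.Println(errors.Is(err, errNotFound)) // true
}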

Why is this needed:
Getting rid of vendored dependencies in favor of the stdlib is always nice. Others seem to do this as well - kubernetes/kubernetes#103043 and containerd/console#54.

Add appropriate error code for error `no domain matching hosting zones. Need to be a (sub)domain of [...]`

How to categorize this issue?

/area ops-productivity
/kind enhancement

What would you like to be added:
We see Extensions of type shoot-dns-service that fail to be created with:

status:
  lastError:
    description: "Error reconciling Extension: 1 error occurred:\n\t* Error while
      waiting for DNSProvider shoot--foo--bar/aws-route53-shoot-dns-service-baz-foo
      to become ready: state Error: no domain matching hosting zones. Need to be a
      (sub)domain of [<omitted>]\n\n"
    lastUpdateTime: "2024-02-28T09:53:53Z"
  lastOperation:
    description: "Error reconciling Extension: 1 error occurred:\n\t* Error while
      waiting for DNSProvider shoot--foo--bar/aws-route53-shoot-dns-service-baz-foo
      to become ready: state Error: no domain matching hosting zones. Need to be a
      (sub)domain of [<omitted>]\n\n"
    lastUpdateTime: "2024-02-28T09:53:53Z"
    progress: 50
    state: Error
    type: Create

The error reveals that the Shoot apparently wants to include a DNS domain that is not supported by the backing account.
If possible, such errors should be reported with error code ERR_CONFIGURATION_PROBLEM in the Extension status.

The error itself is raised by external-dns-management: https://github.com/gardener/external-dns-management/blob/256e812ea1d5b4ec9432195365ff3c810d7592f0/pkg/dns/provider/selection/selection.go#L134-L135
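
A hypothetical sketch of such a classification via the extension's KnownCodes map, which util.DetermineError consults to attach error codes (the matcher strings are taken from the errors quoted in this document; the exact wiring is an assumption):

import (
	"strings"

	gardencorev1beta1 "github.com/gardener/gardener/pkg/apis/core/v1beta1"
)

// KnownCodes maps Gardener error codes to matchers; util.DetermineError
// attaches a code when a matcher returns true for the error message.
var KnownCodes = map[gardencorev1beta1.ErrorCode]func(string) bool{
	gardencorev1beta1.ErrorConfigurationProblem: func(msg string) bool {
		return strings.Contains(msg, "no domain matching hosting zones") ||
			strings.Contains(msg, "duplicate zones")
	},
}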

Why is this needed:
To prevent ops people from looking into Shoots whose failures are pure end-user configuration problems.

/cc @adenitiu @nickytd

Discontinue support for `SyncProvidersFromShootSpecDNS`

How to categorize this issue?

/area open-source
/kind cleanup

What would you like to be added:
With gardener/gardener#8199, Gardener deprecates the support for external-dns-management specific features. Although we could still consider syncing the DNS providers with their "basic" setup, users should rather use the specific provider config that contains all supported specifications, instead of relying on a common denominator.

Thus, syncing providers should be deprecated and eventually be removed.

Make leader election resource lock configurable and default to leases

How to categorize this issue?

/area control-plane
/kind enhancement
/priority 3

What would you like to be added:
Currently, neither the shoot-dns-service extension nor the shoot-cert-service extension makes the leader election resource lock configurable; both still use configmapsleases.
The provider extensions and OS extensions already have this functionality implemented:

It would be nice if we have similar PRs also for:

Why is this needed:
kubernetes/kubernetes#80289 outlines several good reasons why to choose leases for leader election.
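
A minimal sketch of what this could look like with controller-runtime manager options (the flag wiring, election ID, and function are illustrative; resourcelock.LeasesResourceLock is the "leases" lock from client-go):

package app

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// newManager makes the leader election resource lock configurable and
// defaults it to leases.
func newManager(restConfig *rest.Config, lock string) (manager.Manager, error) {
	if lock == "" {
		lock = resourcelock.LeasesResourceLock // default to "leases"
	}
	return manager.New(restConfig, manager.Options{
		LeaderElection:             true,
		LeaderElectionID:           "shoot-dns-service-leader-election", // illustrative
		LeaderElectionResourceLock: lock,
	})
}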

Activating DNSAnnotation controller

How to categorize this issue?

/area networking
/kind enhancement
/priority 3

What would you like to be added:

Activation of DNSAnnotation Resource (https://github.com/gardener/external-dns-management/blob/master/examples/70-dnsannotation.yaml)

Why is this needed:

The dns.gardener.cloud/dnsnames annotation currently cannot be used together with the Istio ingress gateway that comes with Kyma, because those annotations are lost during upgrades. Therefore, an independent resource is required for letting Gardener set DNS entries. DNSEntry does not support referencing Kubernetes services directly, and since IP addresses of cloud provider load balancers can change, the DNSAnnotation resource is needed in a Kyma-based usage scenario.

Reusing DNS names not working

How to categorize this issue?
/area cleanup
/kind bug

What happened:
After deleting an NGINX ingress controller from our Kyma cluster, the DNS names previously referenced in its "dns.gardener.cloud/dnsnames" annotation cannot be used by other ingresses. It seems that the cluster gets into a messy state: when I try to reach the hostname, I get a 404 response.

What you expected to happen:
I expect the DNS names to be cleaned up so that I can reuse the host names in other ingresses.

How to reproduce it (as minimally and precisely as possible):
Deploy the NGINX ingress controller (https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/cloud/deploy.yaml) and add the service annotation "dns.gardener.cloud/dnsnames: $HOST_NAME". After that, delete the NGINX ingress controller again and try to re-use $HOST_NAME.

Environment:
Kyma

  • Kubernetes version (use kubectl version): v1.26.7

Define default resource requests for component gardener-extension-admission-shoot-dns-service

How to categorize this issue?
/area control-plane
/kind enhancement

What would you like to be added:
The gardener-extension-admission-shoot-dns-service component, running in the garden namespace in the garden runtime cluster, currently does not specify resource requests, and initially runs as a "best effort" pod until it is evicted and VPA sets a value for requests. In theory, we could improve initial pod placement and increase the mean time to first eviction by providing appropriate default requests.

Why is this needed:
That would be a minor improvement and this issue is filed for general tracking purposes, not because of any urgent problem. The suggested improvement should result in a minor reduction in the frequency of eviction-driven webhook latency spikes.

Shoot DNS Service is not labelled as controlplane and hence isn't managed by DWD

How to categorize this issue?

/area control-plane robustness high-availability
/kind bug
/priority normal

What happened:
If the kube-apiserver is down for long enough, the dependent control-plane components go into CrashLoopBackOff, and when the kube-apiserver recovers, the dependency-watchdog proactively restarts (deletes) the rest of the control-plane pods that are still in CrashLoopBackOff.

https://github.com/gardener/gardener/blob/b6877b359f19933b14270ce080d4b2efb3087400/charts/seed-bootstrap/charts/dependency-watchdog/templates/endpoint-configmap.yaml#L17-L20

However, the shoot DNS service pod doesn't match the selector that selects control-plane pods. This prevents it from recovering as fast as possible after kube-apiserver recovers.

What you expected to happen:

The shoot DNS service pod should also be made to recover from CrashLoopBackOff by the dependency-watchdog, like other control-plane pods, by labelling it as control-plane.

How to reproduce it (as minimally and precisely as possible):
Manually scale down the kube-apiserver to 0 for long enough to get the shoot DNS service pod (and other control-plane pods) into CrashLoopBackOff, then scale it back up again.

Anything else we need to know?:

Environment:

  • Gardener version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Add appropriate error code for error `duplicate zones [...]`

How to categorize this issue?

/area ops-productivity
/kind enhancement

What would you like to be added:
We see Extensions of type shoot-dns-service that fail to be created with:

$ k get extensions shoot-dns-service -o yaml
...
status:
  conditions:
  - lastTransitionTime: "2024-05-14T08:26:26Z"
    lastUpdateTime: "2024-05-14T08:26:26Z"
    message: All health checks successful
    reason: HealthCheckSuccessful
    status: "True"
    type: ControlPlaneHealthy
  lastError:
    description: "Error reconciling Extension: 1 error occurred:\n\t* Error while
      waiting for DNSProvider ...
      to become ready: state Error: duplicate zones ...\n\n"
    lastUpdateTime: "2024-05-14T09:11:22Z"
  lastOperation:
    description: "Error reconciling Extension: 1 error occurred:\n\t* Error while
      waiting for DNSProvider ...
      to become ready: state Error: duplicate zones ...
      and...\n\n"
    lastUpdateTime: "2024-05-14T09:11:22Z"
    progress: 50
    state: Error
    type: Create

We can see two hosted zones with the same hosted zone name. If possible, such errors should be reported with error code ERR_CONFIGURATION_PROBLEM in the Extension status.

It seems it comes from here:

if state := dnspr.Status.State; state != dnsv1alpha1.STATE_READY {
	var err error
	if msg := dnspr.Status.Message; msg != nil {
		err = fmt.Errorf("state %s: %s", state, *msg)
	} else {
		err = fmt.Errorf("state %s", state)
	}
	// TODO(timebertt): this should be the other way round: ErrorWithCodes should wrap the errorWithDNSState.
	// DetermineError first needs to be improved to properly wrap the given error, afterwards we can clean up this
	// code here
	if state == dnsv1alpha1.STATE_ERROR || state == dnsv1alpha1.STATE_INVALID {
		// return a retriable error for an Error or Invalid state (independent of the error code detection), which makes
		// WaitUntilObjectReadyWithHealthFunction not treat the error as severe immediately but still surface errors
		// faster, without retrying until the entire timeout is elapsed.
		// This is the same behavior as in other extension components which leverage health.CheckExtensionObject, where
		// ErrorWithCodes is returned if status.lastError is set (no matter if status.lastError.codes contains error codes).
		err = retry.RetriableError(util.DetermineError(err, helper.KnownCodes))
	}
	return &errorWithDNSState{underlying: err, state: state}
}
return nil
}

Why is this needed:
To prevent ops people from looking into Shoots whose failures are pure end-user configuration problems.

Shoot DNS test: unsupported kubernetes version "v1.22.2"

How to categorize this issue?

/area testing
/kind bug
/priority 3

What happened:
With shoot-dns-service v1.15.0 the Shoot DNS test fails against a K8s v1.22 Shoot with reason:

  Unexpected error:
      <*fmt.wrapError | 0xc00035ce40>: {
          msg: "error discovering kubernetes version: unsupported kubernetes version \"v1.22.2\"",
          err: <*errors.errorString | 0xc0007de660>{
              s: "unsupported kubernetes version \"v1.22.2\"",
          },
      }
      error discovering kubernetes version: unsupported kubernetes version "v1.22.2"
  occurred

It seems that this issue is fixed on the master branch thanks to #85, so it would be nice to have a new release of shoot-dns-service where the test does not complain about the Shoot K8s version.
In the long term, if possible, the test could be adapted to not rely on the list of supported K8s versions in gardener. A similar hidden dependency was present for the networking extensions and was resolved, for example, with gardener/gardener-extension-networking-calico#111.

What you expected to happen:
No similar error when running the Shoot DNS test.

How to reproduce it (as minimally and precisely as possible):
See above.

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.15.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Do not wait for a DNSProvider to be Ready when a Shoot is hibernated

/area quality
/kind bug

What happened:
With shoot-dns-service v1.18.1 deployed on a landscape I see a lot of hibernated Shoots that fail to be reconciled with:

  lastErrors:
    - description: "task \"Waiting until extension resources are ready\" failed: Error while waiting for Extension shoot--foo--bar/shoot-dns-service to become ready: error during reconciliation: Error reconciling extension: 1 error occurred:\n\t* Error while waiting for DNSProvider shoot--foo--bar/external to become ready: observed generation outdated (0/1)\n\n"
      taskID: Waiting until extension resources are ready
      lastUpdateTime: '2022-03-11T15:38:27Z'

The corresponding Extension resource:

  lastError:
    description: "Error reconciling extension: 1 error occurred:\n\t* Error while
      waiting for DNSProvider shoot--foo--bar/external to become ready:
      observed generation outdated (0/1)\n\n"
    lastUpdateTime: "2022-03-11T15:41:59Z"

The corresponding dnsprovider does not seem to be reconciling because the Shoot is hibernated.

$ k -n shoot--foo--bar get dnsprovider
NAME       TYPE   STATUS   AGE   INCLUDED_DOMAINS
external                   97m

I assume the dns-controller-manager deployed as part of the control plane has to reconcile the dnsprovider, but this cannot happen as the whole control plane is scaled down.
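
A hypothetical sketch of the desired behavior in the extension's wait logic (IsHibernated is the helper from gardener's extensions library; waitForDNSProviderReady stands in for the actual wait):

// Skip waiting for the DNSProvider when the shoot is hibernated:
// dns-controller-manager is scaled down and will never bump the
// observed generation.
if extensionscontroller.IsHibernated(cluster) {
	return nil
}
return waitForDNSProviderReady(ctx, seedClient, namespace)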

What you expected to happen:
Reconciliation of a hibernated Shoot to succeed, with the component not waiting for a condition that cannot be fulfilled.

How to reproduce it (as minimally and precisely as possible):

  1. Create a Shoot

  2. Hibernate it.

  3. Make sure reconciliation of the hibernated Shoot fails with the above error.

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.18.1
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

DNS service is installed for shoots without DNS domain

What happened:
Shoots that don't specify a DNS domain (.spec.dns = nil) or that are scheduled to a seed that is tainted with "DNS disabled" still get the DNS service.

What you expected to happen:
The DNS service should only be installed for shoots with a DNS domain.
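
A minimal sketch of the expected gating (the helper and the seed flag are hypothetical; the conditions follow the description above):

// dnsServiceWanted reports whether the DNS service should be deployed
// for the given shoot at all.
func dnsServiceWanted(shoot *gardencorev1beta1.Shoot, seedDNSDisabled bool) bool {
	if seedDNSDisabled { // seed tainted with "DNS disabled"
		return false
	}
	return shoot.Spec.DNS != nil && shoot.Spec.DNS.Domain != nil
}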

How to reproduce it (as minimally and precisely as possible):
Create a shoot with .spec.dns=nil and register the DNS extension (globally enabled). See it getting the DNS service pod.

  • Gardener version (if relevant): v1.0.4
  • Extension version: v1.3.0
  • Kubernetes version (use kubectl version): doesn't matter
  • Cloud provider or hardware configuration: doesn't matter

Improve error message for DNS entry failed deletion

How to categorize this issue?

/kind enhancement

What would you like to be added:
Currently, when the credentials for a dnsprovider are wrong, the deletion of a dnsentry gets stuck with the error message

Error deleting the extension: waiting until shoot DNS entries have been deleted

Why is this needed:
This error message needs to be improved so that customers know it is something they have to deal with; it could then be marked as a user error and not pop up on the dashboard.
