crossplane-runtime's Introduction

Crossplane

Crossplane is a framework for building cloud native control planes without needing to write code. It has a highly extensible backend that enables you to build a control plane that can orchestrate applications and infrastructure no matter where they run, and a highly configurable frontend that puts you in control of the schema of the declarative API it offers.

Crossplane is a Cloud Native Computing Foundation project.

Get Started

Crossplane's Get Started Docs cover install and cloud provider quickstarts.

Releases

Currently maintained releases, as well as the next few upcoming releases, are listed below. For more information, take a look at the Crossplane release cycle documentation.

Release   Release Date    EOL
v1.14     Nov 1, 2023     Aug 2024
v1.15     Feb 15, 2024    Nov 2024
v1.16     May 15, 2024    Feb 2025
v1.17     Early Aug '24   May 2025
v1.18     Early Nov '24   Aug 2025
v1.19     Early Feb '25   Nov 2025

You can subscribe to the community calendar to track all release dates, and find the most recent releases on the releases page.

Roadmap

The public roadmap for Crossplane is published as a GitHub project board. Issues added to the roadmap have been triaged and identified as valuable to the community, and therefore a priority for the project that we expect to invest in.

Milestones assigned to any issues in the roadmap are intended to give a sense of overall priority and the expected order of delivery. They should be considered approximate estimations and are not a strict commitment to a specific delivery timeline.

Crossplane Roadmap

Get Involved

Crossplane is a community-driven project; we welcome your contributions. To file a bug, suggest an improvement, or request a new feature, please open an issue against Crossplane or the relevant provider. Refer to our contributing guide for more information on how you can help.

The Crossplane community meeting takes place every 4 weeks on Thursday at 10:00am Pacific Time. You can find the up to date meeting schedule on the Community Calendar.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or raise general questions with the broader community is encouraged to join.

Special Interest Groups (SIG)

Each SIG collaborates in Slack, and some groups hold regular meetings; you can find them in the Community Calendar.

Adopters

A list of publicly known users of the Crossplane project can be found in ADOPTERS.md. We encourage all users of Crossplane to add themselves to this list - we want to see the community's growing success!

License

Crossplane is under the Apache 2.0 license.

crossplane-runtime's People

Contributors

bassam, bobh66, crossplane-renovate[bot], dependabot[bot], displague, epk, ezgidemirel, hasheddan, ichekrygin, jbw976, lsviben, lukeweber, mistermx, muvaf, negz, pedjak, phisco, prasek, rbwsam, renovate[bot], saschagrunert, sergenyalcin, smcavallo, sttts, suskin, thephred, toastwaffle, turkenh, ulucinar, vaspahomov

crossplane-runtime's Issues

Make AttributeReferencer a no-op when its target field is already set?

What problem are you facing?

Crossplane supports cross-resource-referencing, such that:

spec:
  # This reference field
  fooRef:
    name: someOtherResource
  # Can be used to set this target field
  foo: someString

Initially references were resolved only once. Successfully resolving references set the "references resolved" status condition to true, and references would only be resolved while that condition was not true. #66 removed this gate, instead resolving references on every reconcile. We quickly noticed that this caused #74, and raised #75 to address it. At the time of writing references are resolved on every create and update attempt, but not on deletes.

This means:

  • Reference fields take precedence over their target field; if both are set the reference field will overwrite the target field.
  • We resolve references more often than we need to. This typically involves reading each referenced resource twice per reconcile, though the reads should come from cache.
  • References are never resolved at delete time, even if they're stale.

I wouldn't call these problems as such, but I want to explore a different user experience.

How could Crossplane help solve your problem?

If we instead:

  • Invoked ResolveReferences on every reconcile (even deletes).
  • Maintained the existing behaviour in which a "references access error" blocked the managed resource reconciler from proceeding to observe, create, update, or delete the external resource.
  • Did not try to resolve fooRef at all when foo was set.

This would give the user finer control over if and when reference resolution happens, e.g.:

  • If you only want reference resolution to happen once, set the reference field and don't set the target field. The target field will be set for you once, and not touched again unless you unset it.
  • If you want to force a refresh of the reference resolution for a specific field, edit the managed resource and unset its target value.

Going down this path may force us to use a "monolithic" alternative to the current AttributeReferencer, which would push more of the contract onto the implementer, e.g.:

func (r *MonolithicReferencer) Resolve(ctx context.Context, c client.Reader, obj runtime.Object) ([]resource.ReferenceStatus, error) {
  // Referencer is a managed resource that references a Referenced managed resource.
  referencer, ok := obj.(*Referencer)
  if !ok {
    return nil, errNotReferencer
  }

  if referencer.Spec.TargetField != "" {
    return nil, nil
  }

  // Referenced is a managed resource referenced by a Referencer
  referenced := &Referenced{}
  if err := c.Get(ctx, types.NamespacedName{Name: r.Name}, referenced); err != nil {
    if kerrors.IsNotFound(err) {
      return []resource.ReferenceStatus{{Name: r.Name, Status: resource.ReferenceNotFound}}, nil
    }
    return nil, err
  }

  if !resource.IsConditionTrue(referenced.GetCondition(runtimev1alpha1.TypeReady)) {
    return []resource.ReferenceStatus{{Name: r.Name, Status: resource.ReferenceNotReady}}, nil
  }
  
  referencer.Spec.TargetField = referenced.Spec.ReferencedField
  return []resource.ReferenceStatus{{Name: r.Name, Status: resource.ReferenceReady}}, nil
}

Automatically install required operators in target cluster if not present

What problem are you facing?

With Kubernetes-native in-cluster infrastructure providers like Rook, the target cluster for resources must have the necessary operators installed. Otherwise, Crossplane will not be able to successfully provision the resources.

Additional context: crossplane/crossplane#841

How could Crossplane help solve your problem?

While this will likely initially be handled manually in the managed reconciler connecter methods, it may be possible to make this part of the shared managed reconciler's Reconcile() method when the client provided is a k8s client.

Managed reconciler should update before deletion

What problem are you facing?

There are cases where deletion fails because of a config field, but since the managed reconciler doesn't update before deletion the deletion always fails, resulting in a deadlock. One example is the deletionProtectionEnabled config of AWS RDS. If it's true, all deletion requests fail until it's made false; however, even if you update the resource, deletion is called before update and fails. So there is no way for a Crossplane customer to trigger a valid deletion if the option was true when they tried to delete the instance.

How could Crossplane help solve your problem?

We can call Update before Delete. In fact, I think Delete should be attempted whenever it's requested; an Update call could fail, but we'd still want Delete to be called if the user asked for it.
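
For illustration, a minimal sketch of that ordering; the update and del parameters here are hypothetical stand-ins for an ExternalClient's Update and Delete calls:

import "context"

// updateThenDelete sketches the proposed ordering. The update and del
// parameters stand in for an ExternalClient's Update and Delete calls.
func updateThenDelete(ctx context.Context, update, del func(context.Context) error) error {
	// Attempt the update first so that settings like deletionProtectionEnabled
	// can take effect, but ignore its error: the user asked for deletion, so
	// the delete is attempted regardless and its error (if any) is reported.
	_ = update(ctx)
	return del(ctx)
}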

Webhook implementation of resource.ExternalClient

What problem are you facing?

Currently folks wishing to add support for a new managed resource to Crossplane must write Go. Specifically, they must author a type that satisfies managed.ExternalClient in order to CRUD external resources, and instantiate a managed.Reconciler that uses said managed.ExternalClient.

How could Crossplane help solve your problem?

It should be possible to model the parameters and results of the managed.ExternalClient methods as a JSON REST webhook. If Crossplane supported an ExternalClient-to-webhook adaptor folks would be able to implement managed resource CRUD logic in their language of choice behind the webhook.
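
As a rough sketch of what such an adaptor's Observe call might look like, assuming a hypothetical webhook that accepts the managed resource as JSON on an /observe route and returns an observation (the Observation and WebhookClient types are illustrative stand-ins, not crossplane-runtime API):

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// Observation mirrors the information managed.ExternalObservation conveys;
// it is a hypothetical stand-in for this sketch.
type Observation struct {
	ResourceExists    bool              `json:"resourceExists"`
	ConnectionDetails map[string][]byte `json:"connectionDetails,omitempty"`
}

// WebhookClient forwards ExternalClient-style calls to a JSON REST webhook.
type WebhookClient struct {
	Endpoint string // e.g. https://hooks.example.org/mykind (illustrative)
	HTTP     *http.Client
}

// Observe POSTs the managed resource (as JSON) to the webhook's observe route
// and decodes the observation the webhook returns.
func (c *WebhookClient) Observe(ctx context.Context, mg interface{}) (Observation, error) {
	body, err := json.Marshal(mg)
	if err != nil {
		return Observation{}, err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.Endpoint+"/observe", bytes.NewReader(body))
	if err != nil {
		return Observation{}, err
	}
	req.Header.Set("Content-Type", "application/json")
	rsp, err := c.HTTP.Do(req)
	if err != nil {
		return Observation{}, err
	}
	defer rsp.Body.Close()
	if rsp.StatusCode != http.StatusOK {
		return Observation{}, fmt.Errorf("webhook returned %s", rsp.Status)
	}
	o := Observation{}
	return o, json.NewDecoder(rsp.Body).Decode(&o)
}

Create, Update, and Delete could follow the same request/response pattern on their own routes.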

Make use of finalizer mechanism mandatory

What problem are you facing?

Currently, the finalizer add/remove mechanism is enabled by default in all generic managed reconciler instances. However, with the introduction of #45 there will be some cases in which a developer doesn't want to use the ManagedNameAsExternalName initializer, for example when developing resources that get their generated name from the provider. This could potentially lead to forgetting to supply APIManagedFinalizerAdder in the initialization chain.

How could Crossplane help solve your problem?

I think there is no case in which we don't want to use APIManagedFinalizerAdder or APIManagedFinalizerRemover, so we can just make them mandatory by calling them directly or injecting them into the supplied initialization chain.

Remove forked CreateOrUpdate once running controller-runtime >=0.2.0

This issue has been created to track the work originally expressed in crossplane/crossplane#426. The core crossplane repo was updated to controller-runtime v0.2.0 as part of crossplane/crossplane#675, which closed crossplane/crossplane#502

Original Issue Body

crossplane/crossplane#420 (comment)

Per the above comment, crossplane/crossplane#420 introduces a fork of the CreateOrUpdate function from an unreleased version of https://github.com/kubernetes-sigs/controller-runtime. We should remove our fork and return to mainline once we bump controller-runtime to v0.2.0 or above.

Related

Note in pkg/util/crud.go:

// https://github.com/kubernetes-sigs/controller-runtime/blob/6100e07/pkg/controller/controllerutil/controllerutil.go#L117
//
// This file contains a fork of the above function. At the time of writing the
// latest release of controller-runtime is v0.1.10, which contains a buggy
// CreateOrUpdate implementation.
// TODO(negz): Revert to mainline CreateOrUpdate once we're running v0.2.0 or
// higher per crossplane/crossplane#426.

Could be accomplished as part of #1

Handling provider-added fields to maps and slices

What problem are you facing?

Currently, if there is a configurable slice field on a managed resource, we check that the external resource and the managed resource have the same elements in that slice. One example of this is labels: we want to add all labels that a user provides, and remove any that get set outside the context of Crossplane. However, this becomes an issue if the provider itself adds some elements to the slice. For example, when gVisor is enabled on a GKE NodePool, GCP adds the following to labels and taints:

	DiskType:  "pd-standard",
	ImageType: "COS_CONTAINERD",
	Labels: map[string]string{
+		"sandbox.gke.io/runtime": "gvisor",
		"some-label":             "true",
	},
	LocalSsdCount: 0,
	MachineType:   "n1-standard-2",
	... // 6 identical fields
	ShieldedInstanceConfig: &container.ShieldedInstanceConfig{EnableIntegrityMonitoring: true, EnableSecureBoot: true},
	Tags:                   nil,
	Taints: []*container.NodeTaint{
		&{Effect: "NO_SCHEDULE", Key: "some-pods", Value: "false"},
+		&{Effect: "NO_SCHEDULE", Key: "sandbox.gke.io/runtime", Value: "gvisor"},
	},
	WorkloadMetadataConfig: nil,
	ForceSendFields:        nil,
	NullFields:             nil,

When we compare the external resource and the managed resource, we see that there is a diff and attempt to remove the additional fields. This can cause a continuous update loop as we remove the fields and GCP keeps adding them back.

How could Crossplane help solve your problem?

The solution here will be more of a design decision than a bug fix. One could argue that the controller is acting correctly here and a user should provide these labels and taints in their spec if they want Crossplane to not try and remove them. However, that results in a rather poor user experience.

The most straightforward solution is to late initialize any slice values that we deem "mergeable" with values the cloud provider adds. This would be in accordance with the Kubernetes late initialization patterns. However, this would necessitate some sort of change in our managed reconciler, because we currently late initialize before updating an external resource. This leads to the situation where a user:

  1. Deletes an element from a slice.
  2. The controller reconciles the object, notices the element is present on the cloud provider, and so late initializes it.
  3. On the next reconcile, the controller sees the element present again (because of late initialization), does not detect a diff, and so will not try to delete it.

Another option is to manually implement logic that strategically ignores some field updates (such as the gVisor ones above), but that does not scale well across all API types.
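
As a rough illustration of the "mergeable" late initialization idea, a helper along these lines could fold provider-added map entries (e.g. labels) into the spec without clobbering keys the user set; this is a sketch, not existing crossplane-runtime code:

// LateInitializeMergeableMap adopts entries the provider added to a map field
// such as labels, while entries the user specified always keep their desired
// values. When nothing was observed the spec map is returned unchanged.
func LateInitializeMergeableMap(spec, observed map[string]string) map[string]string {
	if len(observed) == 0 {
		return spec
	}
	out := make(map[string]string, len(spec)+len(observed))
	for k, v := range observed {
		out[k] = v // provider-added entries (e.g. sandbox.gke.io/runtime) are adopted...
	}
	for k, v := range spec {
		out[k] = v // ...but user-specified keys win.
	}
	return out
}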

ProviderReference for a managed resource should be optional

What problem are you facing?

The ProviderReference is currently a required field in both managed resource and class template definitions. However, this has two issues:

  • Since this field is never consumed in the runtime library, it's not clear whether it should be a required field.
  • If a resource is using another mechanism for connecting to the provider (Vault token, local Kubernetes resource, GCP auth, etc...), it will need to put a dummy value for this field, even though it never gets used.

How could Crossplane help solve your problem?

Make this field optional.

Remove unpublish connection details step?

What problem are you facing?

Crossplane managed reconcilers currently orchestrate the following interface to publish connection details.

// A ManagedConnectionPublisher manages the supplied ConnectionDetails for the
// supplied Managed resource. ManagedPublishers must handle the case in which
// the supplied ConnectionDetails are empty.
type ManagedConnectionPublisher interface {
	// PublishConnection details for the supplied Managed resource. Publishing
	// must be additive; i.e. if details (a, b, c) are published, subsequently
	// publishing details (b, c, d) should update (b, c) but not remove a.
	PublishConnection(ctx context.Context, mg Managed, c ConnectionDetails) error

	// UnpublishConnection details for the supplied Managed resource.
	UnpublishConnection(ctx context.Context, mg Managed, c ConnectionDetails) error
}

In practice the only implementation of this interface publishes connection details as Kubernetes secrets. In this implementation UnpublishConnection is a no-op - we rely on Kubernetes garbage collection to remove the secrets when their owner (the managed resource) is deleted.

So, we have no real implementation of this functionality today but the concept is compelling. You might imagine a ManagedConnectionPublisher that posted to a webhook, or sent connection details to Hashicorp Vault, etc.

One catch with the current implementation relates to the signature - the caller is expected to supply the set of ConnectionDetails (a type of map[string][]byte) to unpublish. In practice the managed resource reconciler calls Unpublish with the ConnectionDetails returned during observation of the managed resource, which is frequently not the full set of connection details. Credentials, for example, are frequently only returned at creation time.

How could Crossplane help solve your problem?

Given we can't explicitly pass all of the credentials we want to unpublish, I think we should either update the signature of UnpublishConnection to omit the ConnectionDetails and assume that, when called, all previously recorded connection details will be deleted, or just remove the functionality until we have a need for it.
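
For illustration, the first option could look something like the following sketch of the revised interface (mirroring the one quoted above):

// A ManagedConnectionPublisher manages the supplied ConnectionDetails for the
// supplied Managed resource.
type ManagedConnectionPublisher interface {
	// PublishConnection details for the supplied Managed resource. Publishing
	// must be additive.
	PublishConnection(ctx context.Context, mg Managed, c ConnectionDetails) error

	// UnpublishConnection removes all connection details that were previously
	// published for the supplied Managed resource, whatever they were.
	UnpublishConnection(ctx context.Context, mg Managed) error
}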

Claim reconciler does not honor managed resource claimRef

What happened?

Currently the resource claim reconciler supports two 'provisioning' options:

  • Dynamic managed resource provisioning. If a resource claim sets its .spec.classRef and omits its .spec.resourceRef the reconciler will attempt to use the referenced class to create and bind to a new managed resource.
  • Static managed resource provisioning. If a resource claim sets its .spec.resourceRef the reconciler will ignore any .spec.classRef and attempt to bind to the referenced managed resource.

All Crossplane managed resources expose a .spec.claimRef field, which is an object reference to the resource claim to which the managed resource either is bound, or (I assume) intends to be bound. During dynamic provisioning the claim reconciler sets this field on the managed resource it creates. During static provisioning it ignores it completely; if claim A references unbound resource B the claim reconciler will bind to it, even if B references claim C.

It's not immediately obvious to me whether a managed resource author explicitly specifying the resource claim to which it should bind is a use case we want to support. If it's not we should move .spec.claimRef to .status.claimRef. If it is we should update the resource claim reconcile logic to avoid attempting to bind to managed resources that reference another claim. Either way, we should ensure the claimRef is set during the static provisioning scenario.

How can we reproduce it?

  1. Create managed resource A with a claimRef to resource claim B
  2. Create resource claim C with a resourceRef to resource A

You should see claim C successfully bind to resource A, despite it referencing claim B.

What environment did it happen in?

Crossplane version:

Introduce an "unclaimable managed resource" reconciler variant?

What problem are you facing?

In crossplane/crossplane#615, crossplane/crossplane#616, and crossplane/crossplane#617 we've added a series of network connectivity managed resources for each cloud provider, for example networks, subnets, and security groups. At the time of writing these managed resources do not have corresponding resource claims or classes; they can only be created manually.

These resources use the generic managed resource reconciler, which requires all managed resources it reconciles to satisfy the resource.Managed interface. Said interface requires getters and setters for resource class reference, resource claim reference, (claim) binding status, and connection secret reference. All managed resources, including these new ones, embed a ResourceSpec in their spec, and a ResourceStatus in their status as part of implementing these getters and setters. This is misleading, because it results in the managed resources exposing fields like .spec.classRef, .spec.writeConnectionSecretToRef, and .status.bindingPhase, despite not actually supporting these functionalities.

How could Crossplane help solve your problem?

At the time of writing it's unclear whether Crossplane will add resource classes and claims for these new managed resource kinds. If Crossplane continues to support "unclaimable" managed resources, we should consider introducing a trimmed down variant of the managed resource reconciler that does not require managed resources to expose fields they do not support.

Statically provisioned resources should be unbound when their claim is deleted

What happened?

The resource claim reconciler supports either dynamically or statically provisioning managed resources. The former involves using a resource class to create a new managed resource, while the latter involves binding to an existing resource.

A resource claim is the Kubernetes owner reference of any managed resource it dynamically provisions, but not the owner reference of any statically provisioned managed resource it later binds to. When a resource claim is deleted its underlying managed resource will only be deleted if it was dynamically provisioned and the user did not disable cascading deletion (e.g. by passing --cascade=false to kubectl delete).

While it is possible to delete a resource claim without deleting its managed resource, the resource claim reconciler doesn't currently update the managed resource in this scenario. It remains in binding state "bound", with a claim reference to the (now deleted) claim that it was bound to. This is probably not the behaviour we want.

How can we reproduce it?

  1. Statically provision a Crossplane managed resource
  2. Create a claim that specifies the managed resource from step 1 in its .spec.resourceRef
  3. Wait for the claim and resource to bind.
  4. Delete the claim created in step 2
  5. Observe that the managed resource created in step 1 continues to be "bound" to the now deleted resource claim.

What environment did it happen in?

Crossplane version:

Add EquateParameters cmp helper

What problem are you facing?

https://github.com/crossplaneio/stack-gcp/pull/151/files?file-filters%5B%5D=.go#diff-888e0ec46a21ebd2bbdc8d86d4553d9bR92

	return cmp.Equal(in, currentParams, cmpopts.IgnoreInterfaces(struct{ resource.AttributeReferencer }{}))

We frequently use the above cmp option when determining whether managed resources are equivalent in the IsUpToDate method of each resource's controller. This option reads a little opaquely; it's not obvious how it relates to Crossplane.

How could Crossplane help solve your problem?

Perhaps we should add a function like our existing EquateErrors that equates managed resource parameters? This could just be an abstraction on IgnoreInterfaces to begin with.
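
A minimal sketch of what such a helper could look like, assuming it simply names the existing cmpopts option (import paths reflect the crossplaneio module path in use at the time):

import (
	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"

	"github.com/crossplaneio/crossplane-runtime/pkg/resource"
)

// EquateParameters returns a cmp.Option that ignores AttributeReferencer
// fields, which are not part of a managed resource's external parameters.
func EquateParameters() cmp.Option {
	return cmpopts.IgnoreInterfaces(struct{ resource.AttributeReferencer }{})
}

IsUpToDate implementations could then read cmp.Equal(in, currentParams, EquateParameters()).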

Managed Reconciler: Observe should be skipped if the resource is set to be deleted

What happened?

Currently, when reconciling a managed resource, its Observe method is executed even if the resource is set to be deleted. The issue with this order is that if Observe fails for any reason, the reconcile keeps throwing an error at this stage without ever reaching the Delete method. Since the resource is intended to be deleted, such observation errors should be ignored entirely.

What environment did it happen in?

Crossplane-runtime version: v0.3.0

Generate crossplane-runtime interface method sets

What problem are you facing?

Any Crossplane API type that wishes to leverage crossplane-runtime's built-in resource.ManagedReconciler or resource.ClaimReconciler must satisfy one of various interfaces, e.g. resource.Managed, resource.Claim, etc. The interfaces are typically boilerplate getters and setters of fields that are embedded into each API type.

The Services Developer Guide details which common types each Crossplane API type should embed (e.g. any resource claim must embed ResourceSpec in its Spec struct). It also mentions that various getters and setters must be added to satisfy the relevant interface, but glosses over exactly how to write those getters and setters in the interest of brevity.

In summary, the set of methods an API type must implement to use the crossplane-runtime interfaces are undocumented, and are repetitive boilerplate.

How could Crossplane help solve your problem?

Crossplane could provide a utility to generate these method sets automatically. This would allow us to avoid including lengthy examples of the required methods in the developer guide. It would also have the added bonus of making it easier to refactor the codebase if and when we wanted to alter the method set required to satisfy a core interface.

Determine whether resource update is necessary during observation.

What problem are you facing?

Controllers using the Generic Managed Resource Reconciler (GMRR) must make two cloud provider API calls during the most common reconcile path, when they only require one.

The GMRR requires controller authors to satisfy the resource.ExternalClient interface, which looks like:

type ExternalClient interface {
	Observe(ctx context.Context, mg Managed) (ExternalObservation, error)
	Create(ctx context.Context, mg Managed) (ExternalCreation, error)
	Update(ctx context.Context, mg Managed) (ExternalUpdate, error)
	Delete(ctx context.Context, mg Managed) error
}

The GMRR first calls Observe to report whether the external resource exists and, if so, to sync the managed resource's status. It then either:

  • Calls Delete if the external resource exists but the managed resource was deleted.
  • Calls Create if the external resource does not exist.
  • Calls Update if neither of the above cases apply.

Currently Update is called regardless of whether the external resource is out of sync with the managed resource. The Update implementation must either perform a no-op update of the external resource (typically at the expense of an API call), or get a fresh view of the external resource in order to determine whether it requires updating.

The vast majority of reconcile invocations over a managed resource's lifecycle will hit the Update branch, and not require an update. This will typically involve one of the two following cloud provider API call patterns.

If the Update implementation chooses to perform no-op updates:

  1. Get the external resource from the cloud provider API during Observe.
  2. Update the external resource in the cloud provider API during Update.

If the Update implementation chooses to determine whether an update is required:

  1. Get the external resource from the cloud provider API during Observe.
  2. Get the external resource from the cloud provider API during Update.

In either case this is one more API call than is necessary; the Observe method has all the information necessary (i.e. a copy of the managed resource and a copy of the external resource) to determine whether the external resource must be updated.

How could Crossplane help solve your problem?

The GMRR could make Observe rather than Update responsible for determining whether an update was necessary by adding a ResourceInSync field to resource.ExternalObservation:

type ExternalObservation struct {
	ResourceExists    bool
	ResourceInSync    bool
	ConnectionDetails ConnectionDetails
}

This would allow the GMRR logic to return early if no create, update, or delete was necessary, having made only one cloud provider API call during Observe.
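
For illustration, the branch selection could then collapse into something like the following sketch, where ExternalObservation stands in for the proposed type and the returned string names the branch the reconciler would take:

// ExternalObservation is a stand-in for the proposed type above.
type ExternalObservation struct {
	ResourceExists    bool
	ResourceInSync    bool
	ConnectionDetails map[string][]byte
}

// nextStep sketches how the reconciler could choose a branch using only the
// single Observe call, returning early when nothing needs to change.
func nextStep(deleted bool, o ExternalObservation) string {
	switch {
	case deleted && o.ResourceExists:
		return "delete"
	case deleted && !o.ResourceExists:
		return "finalize" // nothing external left to remove
	case !o.ResourceExists:
		return "create"
	case !o.ResourceInSync:
		return "update"
	default:
		return "none" // exists and in sync; requeue after the usual sync period
	}
}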

Context deadline exceed error does not appear on status

What happened?

I have a resource whose queries apparently exceed the context deadline. However, I was only able to find that out through the logs of the controller; there is no information in the custom resource's status. The pod log:

{"level":"error","ts":1577272335.9786623,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"mysqlservervirtualnetworkrule.database.azure.crossplane.io","request":"/sample-vnet-rule","error":"cannot update managed resource status: context deadline exceeded","errorVerbose":"context deadline exceeded\ncannot update managed resource status\ngithub.com/crossplaneio/crossplane-runtime/pkg/resource.(*ManagedReconciler).Reconcile\n\t/home/upbound/go/pkg/mod/github.com/crossplaneio/[email protected]/pkg/resource/managed_reconciler.go:568\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/upbound/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/upbound/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/upbound/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

I think cr.Status().Update() calls should use context.Background() instead of the current context, because the reconciliation context is scoped to specific operations while status updates report to the user. We might not want to keep going with the operations after a deadline is met, but reporting to the user is not something we want to skip in any case. There is still a risk of timeout for cr.Status().Update() calls, but it's much lower than for the total of all operations conducted in one reconciliation pass.
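
A minimal sketch of that idea, assuming a controller-runtime client and an arbitrary illustrative timeout, so the status update can still be attempted after the reconcile context has expired:

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateStatus reports status with a short, independent deadline rather than
// the (possibly already expired) reconcile context.
func updateStatus(c client.Client, obj runtime.Object) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	return c.Status().Update(ctx, obj)
}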

How can we reproduce it?

In my case it happened with the MySQLServerVirtualNetworkRule resource, but I can't say it will happen every time. One way to reproduce it could be to somehow throttle your network connection.

What environment did it happen in?

Crossplane version: 0.6.0

References to deleting managed resources block deletion.

What happened?

In #66 we updated the managed resource reconciler such that managed resources resolve their references on every reconcile. We resolve our references before we observe, create, update, or delete the external resource that corresponds to our managed resource. This is because:

  1. We must resolve references before we observe the external resource, e.g. for resources like connection.servicenetworking.gcp.crossplane.io, which may need their .spec.networkRef set in order to observe the external resource.
  2. We must observe the external resource in order to know whether we need to create, update, or delete it.

Cross-resource reference resolution returns an error and does not proceed when the managed resource under reconciliation depends on a resource that does not exist, or that does not have condition Ready=True. This means that when, for example:

  • A Subnetwork references a Network.
  • A Network cannot be deleted until all of its Subnetworks have been deleted.
  • A Subnetwork cannot be deleted when its Network is deleting.

We end up stuck. The Subnetwork can't be deleted because the Network (managed resource) is deleting, and the Network can't be deleted because the Subnetwork (external resource) hasn't been deleted.

How can we reproduce it?

  1. Create GCP Network managed resource.
  2. Create a GCP Subnetwork managed resource that references said Network.
  3. Delete the Network resource.
  4. Delete the Subnetwork resource.

Observe that neither resource ever successfully deletes.

What environment did it happen in?

Crossplane version:

Claim defaulting and scheduling reconcilers should requeue

What problem are you facing?

The claim scheduling and defaulting reconcilers currently return reconcile.Result{Requeue: false} if they find no suitable resource classes. This means that if they don't find a suitable resource class for a newly submitted claim they'll never reprocess it, even if a suitable resource class appears later.

How could Crossplane help solve your problem?

The defaulting and scheduling reconcilers could return reconcile.Result{RequeueAfter: aShortWait} if they found no resource classes - this means they'd try again to default or schedule the resource claim after a short wait, in case a suitable class was created. If another controller found and set a suitable class the queued reconcile would be dropped by our controller predicates, which filter out any resource claims with a classRef set.
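
A minimal sketch of the proposed return value; aShortWait is an illustrative constant:

import (
	"time"

	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// aShortWait is how long to wait before checking for suitable classes again.
const aShortWait = 30 * time.Second

// noClassesFound asks controller-runtime to requeue the claim after a short
// wait instead of dropping it, in case a suitable class appears later.
func noClassesFound() (reconcile.Result, error) {
	return reconcile.Result{RequeueAfter: aShortWait}, nil
}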

Late initialization overrides desired value when it is zero-value

What happened?

We have implemented the late-initialization pattern across our resources. However, it relies upon the field's value being nil or the zero value. Example late-init functions from stack-gcp:

// LateInitializeInt64 implements late initialization for int64 type.
func LateInitializeInt64(i *int64, from int64) *int64 {
	if i != nil || from == 0 {
		return i
	}
	return &from
}

// LateInitializeBool implements late initialization for bool type.
func LateInitializeBool(b *bool, from bool) *bool {
	if b != nil || !from {
		return b
	}
	return &from
}

This implementation is buggy because it overrides the field when the user actually wants the zero value. For example, say the value of field foo *bool is true in both the spec and the cloud provider, and the user wants to make it false. In that case LateInitializeBool will override that value with true, because it assumes a zero value means the user has no opinion about the field and the value from the cloud provider should be used. This seems easy to fix for pointer types by converting the function into:

// LateInitializeBool implements late initialization for bool type.
func LateInitializeBool(b *bool, from bool) *bool {
	if b != nil {
		return b
	}
	return &from
}

But there is a catch. We follow the Kubernetes pattern for optional fields in our API design, which means that optional fields have the following JSON tag:

// +optional
foo *string `json:"foo,omitempty"`

The omitempty tag causes the value to arrive as nil in the Go code if the assigned value is the zero value, in this case false. So it comes in as nil even if the user wanted it to be false. This works for Kubernetes late-inited fields like pod.spec.nodeName because those fields will eventually be filled, but that's not the case for us. The Go zero value for a field could actually be the final value that the user wants for that field.

I'd like us to discuss not using the omitempty tag at all, so that zero values come to the controller as-is instead of being converted to nil. In terms of API server validation nothing changes, as the // +optional tag is what marks fields as required or not in the CRD, though not using omitempty is a divergence from the optional/required design pattern that Kubernetes suggests.

Remove pkg/util

This issue has been created to track the work originally expressed in crossplane/crossplane#448

It's generally considered an anti-pattern in the Go world to create package names like util or common. The preferred alternatives are typically one or more of:

  • Breaking out functions into targeted packages with self documenting names like strings, metadata, etc.
  • Tolerating the existence of duplicate instances of simple utility functions like util.ToLowerRemoveSpaces defined closer to where they're used.
  • Avoiding such extremely simple utility functions altogether.

Sources:

Have ExternalConnector accept a smaller interface than resource.Managed

What problem are you facing?

The Generic Managed Resource Reconciler (GMRR) deals primarily with the resource.Managed interface, which is fairly large. Ideally it would use smaller interfaces, where possible.

type ExternalConnecter interface {
	Connect(ctx context.Context, mg Managed) (ExternalClient, error)
}

The Connect method above is one place we could easily use a smaller interface. None of the current Connect implementations actually use any of the methods of resource.Managed; they simply assert the resource.Managed to the concrete managed resource type they expect, get the provider it references and its secret, and return a client.

How could Crossplane help solve your problem?

func (f *Foo) GetProviderReference() *corev1.ObjectReference { return f.Spec.ProviderReference }
func (f *Foo) SetProviderReference(r *corev1.ObjectReference) { f.Spec.ProviderReference = r }

type ProviderReferencer interface {
    GetProviderReference() *corev1.ObjectReference
    SetProviderReference(r *corev1.ObjectReference)
}

type ExternalConnecter interface {
	Connect(ctx context.Context, pr ProviderReferencer) (ExternalClient, error)
}

Adding the above getters and setters to all managed resources would allow us to update the Connect methods to accept the above, much smaller interface.

Make it possible to reclaim a managed resource

As of #87 this is not possible anymore. For the full context it might help to look at the discussions on that PR.

What problem are you facing?

Currently, we don't support rescheduling a Released managed resource to a new claim. This is in line with the Kubernetes PVC/PV model; however, not all managed resources are as stateful as volumes, so users might want to re-use the same managed resource.

Quoting from #87 (comment)

Persistent volumes support a deprecated Recycle policy that attempts to clean up a previously claimed PV and let others claim it. I'm guessing this was deprecated because it's difficult to securely erase volumes with arbitrary underlying storage technologies, and because there's just not that much point recycling a volume when you could instead treat it as cattle; delete it and dynamically provision an identical new one. I suspect these are both as or more true for our managed resources.

I can see the point in not supporting Recycle for volumes, because a volume is a completely stateful entity and it's probably useless without a cleanup. But this doesn't apply to all managed resources we might support. There are some stateless resources like networks, logical resource groups, or high level pipeline services. To me, volumes look like one end of the spectrum and logical groupings the other. A database server is somewhere in between, since separate apps could use the same database server with different schemas. Another example could be that a user might want to provision a giant k8s cluster on top of reserved instances (less costly) and let it be reused by different teams. An example from outside of the k8s world could be Amazon EMR clusters, where you reserve instances for cost reasons but different people submit different jobs that are completely independent. My point is that our model should be able to cover stateless cloud resources as well as stateful ones. As long as teams and/or apps are aware that the resource is recycled, it looks OK to me.

How could Crossplane help solve your problem?

Introduce a new reclaim policy option Recycle, but do not go into the business of cleaning up the managed resource. Document that this just makes the managed resource available to reclaim, and let the user decide whether they want it or not.

Resource claim reconciler should assume managed resources have status subresources

What problem are you facing?

Currently the majority of managed resource implementations do not use the status subresource, so the resource claim reconciler uses a ManagedBinder and ManagedFinalizer that use client.Update() instead of client.Status().Update(). This means resource claim reconcilers for more modern managed resources must remember to supply the following options:

resource.WithManagedBinder(resource.NewAPIManagedStatusBinder(mgr.GetClient()))
resource.WithManagedFinalizer(resource.NewAPIManagedStatusUnbinder(mgr.GetClient()))

How could Crossplane help solve your problem?

Crossplane should make our desired behaviour (using the status subresource) the default assumption of our reconcilers. We should be able to do this by:

  1. Updating all resource claim reconcilers that deal in managed resources that do not have status subresources to explicitly supply the following options:

resource.WithManagedBinder(resource.NewAPIManagedBinder(mgr.GetClient()))
resource.WithManagedFinalizer(resource.NewAPIManagedUnbinder(mgr.GetClient()))

  2. Updating the defaults in crossplane-runtime.

Improved logging and eventing

What problem are you facing?

It's currently difficult for Crossplane users and operators to debug issues. Most errors are reflected only as a conditioned status, meaning only the latest error is exposed. Additionally, we have little to no logging and timeseries data. crossplane/crossplane#858 began thinking about how we might improve our logging and events.

How could Crossplane help solve your problem?

It would be good to start prototyping how we could enable this in our existing reconciler scaffolds.

Requeue with exponential backoff

What problem are you facing?

Both the resource claim and managed resource reconcilers requeue a reconcile after exactly 30 seconds when they encounter an error (unless the error is known to be unrecoverable). Ideally we'd retry a little sooner with a reasonably capped exponential backoff.

Note that controller-runtime applies exponential backoff when a reconciler returns {Requeue: true}, and/or an error. Unfortunately the backoff starts at 5ms and doubles until it reaches a little over 16 minutes, and this is not configurable. This leads to situations like crossplane/crossplane#241, in which a resource could take much longer to reconcile than expected, for reasons that are not obvious to users.

How could Crossplane help solve your problem?

The various reconcilers we ship in crossplane-runtime could support exponential backoff. I believe it's important that this backoff be:

  • Bounded appropriately for the controller in question.
  • Discoverable by users who do not have access to Crossplane and/or Stack pod logs

We could implement this ourselves by manipulating RequeueAfter, but ideally we'd avoid reinventing controller-runtime's wheels. I've raised kubernetes-sigs/controller-runtime#631 requesting controller-runtime allow us to plug in our own backoff logic.
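
If we did implement it ourselves, a capped backoff driven through RequeueAfter might look roughly like the sketch below, assuming the reconciler tracked consecutive failures per resource itself (crossplane-runtime does not do this today); the base and cap values are illustrative:

import "time"

// requeueAfter returns a capped exponential backoff for the given number of
// consecutive failures: 1s, 2s, 4s, ... up to the cap.
func requeueAfter(consecutiveFailures int) time.Duration {
	const (
		base    = 1 * time.Second
		maxWait = 2 * time.Minute
	)
	d := base << uint(consecutiveFailures)
	if d <= 0 || d > maxWait {
		// Guard against shift overflow and cap the wait.
		return maxWait
	}
	return d
}

The reconciler would then return reconcile.Result{RequeueAfter: requeueAfter(n)} and record the reason in a condition or event so the backoff is discoverable by users.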

Static provisioning requires nonsensical managed resources

What happened?

Changes were made to our resource claim controller watch predicates recently in #18 and #24. The latter refactored the watch predicates used by resource claim controllers with the intention of supporting both statically and dynamically provisioned managed resources. It does this by accepting resources that satisfy any of three watch predicates, applied in the following order:

  1. resource.HasManagedResourceReferenceKind accepts resource claims with a .spec.resourceRef to the kind of managed resource the reconciler is concerned with.
  2. resource.HasDirectClassReferenceKind accepts managed resources with a .spec.classRef to the kind of non-portable class the reconciler is concerned with.
  3. resource.HasIndirectClassReferenceKind accepts resource claims with a .spec.classRef to a portable class with a .classRef to the kind of non-portable class the reconciler is concerned with.

In retrospect resource.HasDirectClassReferenceKind only makes sense in a dynamic provisioning context, because statically provisioned managed resources don't use a resource class, and thus typically don't set their .spec.classRef. Based on my read of the current predicate logic (I have not confirmed this) I expect our resource claim reconcilers would never notice changes to statically provisioned managed resources.

This is not too big an issue given the resource claim reconciler only watches managed resources to wait for them to become bindable, which they typically do before the resource claim is created in a static provisioning scenario. I'm guessing that's why I didn't encounter this case when I tested static provisioning per #24 (comment).

How can we reproduce it?

  1. Create a resource claim that references a managed resource that does not yet exist.
  2. Create the referenced managed resource.

My guess is that the claim will never bind.

There are two workarounds for this:

  1. Create the managed resource and wait for its .status.bindingPhase to become Unbound before creating a resource claim that references it.
  2. Set the managed resource's .spec.classRef.kind and .spec.classRef.apiVersion to its corresponding non-portable resource class kind.

What environment did it happen in?

crossplane-runtime version: 26a458d

Make portable classes subtypes of runtimev1alpha1.PortableClass

What problem are you facing?

Currently every portable class is a distinct type that embeds runtimev1alpha1.PortableClass, with the exact same shape. For example:

type PortableClass struct {
	NonPortableClassReference *corev1.ObjectReference `json:"classRef,omitempty"`
}

type RedisClusterClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	runtimev1alpha1.PortableClass `json:",inline"`
}

How could Crossplane help solve your problem?

We could probably write a little less code by making them a type (in the Go sense) of PortableClass, e.g.:

type PortableClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	NonPortableClassReference *corev1.ObjectReference `json:"classRef,omitempty"`
}

type RedisClusterClass PortableClass

Managed Reconciler: ExternalObservation type should pass more information to the Create method

What problem are you facing?

When using the managed reconciler pattern, it's sometimes the case that more than one operation needs to be performed in the external resource's Create method, conditioned on the observation made. For instance, an external resource might need 3 distinct sub-components to be provisioned, in order, for it to be created. Currently, every time the Create method is called it uses the CR's fields to determine which operation it should run. This is not efficient, and essentially the observation is done again in the Create method (which is triggered merely because ExternalObservation{ResourceExists: false} was returned).

How could Crossplane help solve your problem?

More information (for instance a map of key-values) could be passed to the Create method, making it more context-aware.

writeConnectionSecretToRef should be optional

What happened?

Crossplane managed resources support writing their connection details to a Kubernetes Secret by populating their writeConnectionSecretToRef field. This functionality should be optional; i.e. managed resources that omit this reference should simply not write their connection details to a Secret.

https://github.com/crossplaneio/crossplane-runtime/blob/e4d61ee/pkg/resource/managed_reconciler.go#L390
https://github.com/crossplaneio/crossplane-runtime/blob/e4d61ee/pkg/resource/publisher.go#L70
https://github.com/crossplaneio/crossplane-runtime/blob/e4d61ee/pkg/resource/resource.go#L39

We've heard reports that resources without a populated writeConnectionSecretToRef instead encounter an error. Looking at the code, I imagine they experience a nil pointer inside ConnectionSecretFor because o.GetWriteConnectionSecretToReference() is nil, or issues looking up a Secret with a zero value name.
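
Whatever the exact failure mode, the fix is probably a guard along these lines; the types below are hypothetical stand-ins for the managed resource getter and the Secret-writing logic linked above:

import "context"

// SecretReference identifies the Secret a managed resource wants its
// connection details written to (a stand-in for the real reference type).
type SecretReference struct {
	Namespace string
	Name      string
}

// connectionSecretReferencer is the one getter this guard needs.
type connectionSecretReferencer interface {
	GetWriteConnectionSecretToReference() *SecretReference
}

// publishConnection writes connection details only when the managed resource
// actually asked for a connection Secret; omitting writeConnectionSecretToRef
// becomes a no-op rather than an error.
func publishConnection(ctx context.Context, mg connectionSecretReferencer, details map[string][]byte,
	write func(context.Context, SecretReference, map[string][]byte) error) error {
	ref := mg.GetWriteConnectionSecretToReference()
	if ref == nil {
		return nil
	}
	return write(ctx, *ref, details)
}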

How can we reproduce it?

  1. Create a managed resource that uses the generic managed resource reconciler (e.g. ReplicationGroup), omitting the writeConnectionSecretToRef
  2. Inspect the conditioned status of said resource.

What environment did it happen in?

Crossplane version: e4d61ee

Managed Reconciler: A resource with unresolved references never gets deleted

What happened?

Currently if a managed resource has unresolved references, it never gets deleted, because it exits the reconciliation loop early.

How can we reproduce it?

  • create a resource which has references that don't exist (for example a Subnetwork with a network reference NetworkRef that doesn't exist)
  • delete the Subnetwork resource
  • observe that Subnetwork never gets deleted

What environment did it happen in?

Crossplane version: crossplaneio/crossplane-runtime,master@6916b94

CRR: `ReferencesResolved` condition is still `False` even though the `Ready` condition is `True`

What happened?

CRR: ReferencesResolved condition is still False even though the Ready condition is True.

How can we reproduce it?

  1. Install [email protected]
  2. Install [email protected]
  3. Setup gcp account provider
  4. Run
kubectl apply -k github.com/crossplaneio/crossplane//cluster/examples/workloads/kubernetes/wordpress/gcp/network-config?ref=release-0.4

Observation:
Run kubectl get globaladdress.compute.gcp.crossplane.io/sample-globaladdress -o yaml to observe the behavior:

apiVersion: compute.gcp.crossplane.io/v1alpha3
kind: GlobalAddress
metadata:
  annotations:
    crossplane.io/external-name: sample-globaladdress
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"compute.gcp.crossplane.io/v1alpha3","kind":"GlobalAddress","metadata":{"annotations":{},"name":"sample-globaladdress"},"spec":{"addressType":"INTERNAL","name":"my-cool-globaladdress","networkRef":{"name":"sample-network"},"prefixLength":16,"providerRef":{"name":"gcp-provider"},"purpose":"VPC_PEERING","reclaimPolicy":"Delete"}}
  creationTimestamp: "2019-11-14T06:06:03Z"
  finalizers:
  - finalizer.managedresource.crossplane.io
  generation: 3
  name: sample-globaladdress
  resourceVersion: "9686"
  selfLink: /apis/compute.gcp.crossplane.io/v1alpha3/globaladdresses/sample-globaladdress
  uid: 3274f21b-96e6-477e-9ded-85c4178f6a02
spec:
  address: 10.29.0.0
  addressType: INTERNAL
  name: my-cool-globaladdress
  network: projects/crossplane-playground/global/networks/my-cool-network
  networkRef:
    name: sample-network
  prefixLength: 16
  providerRef:
    name: gcp-provider
  purpose: VPC_PEERING
  reclaimPolicy: Delete
status:
  conditions:
  - lastTransitionTime: "2019-11-14T06:06:03Z"
    message: '[{reference:sample-network status:NotFound}]'
    reason: One or more referenced resources do not exist, or are not yet Ready
    status: "False"
    type: ReferencesResolved
  - lastTransitionTime: "2019-11-14T06:06:34Z"
    reason: Managed resource is available for use
    status: "True"
    type: Ready
  - lastTransitionTime: "2019-11-14T06:06:34Z"
    reason: Successfully reconciled managed resource
    status: "True"
    type: Synced
  creationTimestamp: "2019-11-13T22:06:34.318-08:00"
  id: 8409971861024231429
  selfLink: https://www.googleapis.com/compute/v1/projects/crossplane-playground/global/addresses/my-cool-globaladdress
  status: RESERVED

What environment did it happen in?

crossplane-runtime version: v0.2.1

Crossplane should refuse to operate on existing external resources

What problem are you facing?

As more and more Crossplane managed resources support updating (in addition to creating and deleting) their external resources, we're likely to run into cases where Crossplane will "implicitly adopt" and start managing existing external resources. This will become more prevalent as more Crossplane managed resources allow configuration of their underlying external resource names per crossplane/crossplane#624.

For example, assume a critical 1TB CloudSQL database named prod exists in a particular GCP account. Further assume Crossplane is configured to use this GCP account, and a resource claim for a 10GB CloudSQL database with external name prod was created. Today there is no logic that would prevent Crossplane considering the existing database to be under Crossplane management, and trying to resize it down to 10GB.

How could Crossplane help solve your problem?

Crossplane will need to be able to distinguish external resources it created from existing external resources. This may be difficult, given that not all cloud providers and external resources support storing metadata.

Relates to #22.

Define a way to let controllers "late initialize" spec fields

What problem are you facing?

While the general consensus is that a controller should not manipulate the spec, i.e. the desired state, in some cases the developer might want to update the spec to adopt the late initialization pattern that exists in Kubernetes: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#late-initialization

One could argue that defaulting via admission webhook could solve this, however, for external resources some fields are only available after the creation of the resource.

How could Crossplane help solve your problem?

There should be a way that allows spec updates, but I don't think all the update calls we make should be converted from status updates. We need to define a pattern that makes clear that only in some cases should a developer opt for a whole-resource update, and otherwise keep updating only the status.

Managed reconciler should denote which external client function failed

What problem are you facing?

Right now, the only way for a user to know in which step an error happened is through the error messages inside the ExternalClient functions. However, that can be quite ambiguous. Think of the following case: the Update call fails, but it fails because of a Get call to the provider API inside the Update function. Usually the developer puts in an error message saying Get failed, but the only way for them to say that Get failed in the context of Update is to wrap it with a prefix error message. The managed reconciler already knows which call failed, so we can let the developer worry only about the error of the specific call and still be sure that enough context is provided for debugging/tracing.

How could Crossplane help solve your problem?

Instead of using managed.SetConditions(v1alpha1.ReconcileError(err)) in all calls of the ExternalClient, we can use something like managed.SetConditions(v1alpha1.ReconcileError(errors.Wrap(err, "<Create/Observe/Update/Delete> failed")))

Managed reconciler should set the finalizer only after resource is created

What problem are you facing?

Currently, for some reason, if the steps before Create call fails and the reason also causes Delete to fail, the resource is stuck and we can't delete it. Examples:

  • Insufficient permissions of the provider's credential causes even the first Observe call to fail, but since finalizer is already set, user cannot delete the CR.
  • If references cannot be solved forever, there is no way to delete the resource since the finalizer is set already and Delete is never executed. #62

This started to happen after the merge of https://github.com/crossplaneio/crossplane-runtime/pull/45/files#diff-c7df5a255a05fb18975e474f9af689a9L400, since that PR sets the finalizer unconditionally.

How could Crossplane help solve your problem?

We can set the finalizer only after the Observe call reports that the resource exists, and remove it after the deletion signal is received and the resource no longer exists. But I think these calls should be a native part of the managed reconciler's reconcile loop rather than yet another set of Initializer or Finalizer objects. There has been some discussion around this in #45 (comment), but I think we should fix it for Crossplane 0.5.
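A minimal sketch of the proposed ordering, with simplified stand-in types: the finalizer is only added once Observe reports that the external resource exists, and it is removed once deletion has been requested and the resource is gone.

package main

import "fmt"

// observation is a simplified stand-in for the result of an Observe call.
type observation struct{ ResourceExists bool }

// managedResource is a simplified stand-in for a managed resource CR.
type managedResource struct {
	deletionRequested bool
	finalizers        map[string]bool
}

const finalizer = "finalizer.managedresource.crossplane.io"

func reconcile(mg *managedResource, obs observation) {
	switch {
	case mg.deletionRequested && !obs.ResourceExists:
		// Nothing left in the cloud provider; safe to let the CR go.
		delete(mg.finalizers, finalizer)
	case !mg.deletionRequested && obs.ResourceExists:
		// Only start blocking deletion once there is actually an
		// external resource to clean up.
		mg.finalizers[finalizer] = true
	}
}

func main() {
	mg := &managedResource{finalizers: map[string]bool{}}

	reconcile(mg, observation{ResourceExists: false})
	fmt.Println("finalizer set before create?", mg.finalizers[finalizer]) // false

	reconcile(mg, observation{ResourceExists: true})
	fmt.Println("finalizer set after create?", mg.finalizers[finalizer]) // true
}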

Claim cannot be deleted if managed resource is already gone

What happened?

If the claim reconciler cannot find the managed resource, it just requeues, even though the user has requested that the claim be deleted.

How can we reproduce it?

What environment did it happen in?

Crossplane-runtime version: 0.2.2

Managed resource .spec.classRef appears to be purely informational

What happened?

When documenting our ResourceSpec it occurred to me that we don't really support a Crossplane user setting the classRef field; we'd just ignore it. It mostly serves as an artifact of the resource class that was used to dynamically provision a managed resource (if it was, in fact, dynamically provisioned), so it seems misleading to expose it in the resource's spec rather than its status.

How can we reproduce it?

Author a managed resource, set the class reference, and observe that it does nothing!

What environment did it happen in?

Crossplane version:

Avoid corev1.ObjectReference and LocalObjectReference

What problem are you facing?

Most inter-resource references in Crossplane are either an ObjectReference or a LocalObjectReference. ObjectReference contains many optional fields (e.g. UID, FieldPath, etc.) that Crossplane never uses. This could be misleading to users, since fields that are effectively required (i.e. Name) are marked optional, and it's not obvious from the API documentation or CRD specs that the other fields are frequently not required. The same is true for LocalObjectReference, which exposes only one field - name - that is (for some reason that is not obvious to me) marked optional.

How could Crossplane help solve your problem?

Crossplane could use purpose-specific references that represent a subset of ObjectReference. This is compliant with the API conventions, which state:

Object references should either be called fooName if referring to an object of kind Foo by just the name (within the current namespace, if a namespaced resource), or should be called fooRef, and should contain a subset of the fields of the ObjectReference type.
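For illustration only (these are not types crossplane-runtime actually defines), purpose-specific references in the fooRef style could carry just the fields Crossplane consumes:

package main

import (
	"encoding/json"
	"fmt"
)

// ProviderReference refers to a Provider by name only; nothing else is needed
// to locate a cluster-scoped object of a known kind.
type ProviderReference struct {
	Name string `json:"name"`
}

// ClaimReference refers to a namespaced resource claim.
type ClaimReference struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

func main() {
	// The generated CRD schema and API docs then show exactly the fields a
	// user is expected to set.
	b, _ := json.Marshal(ProviderReference{Name: "gcp-provider"})
	fmt.Println(string(b)) // {"name":"gcp-provider"}
}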

Allow existing resources to be "imported" into Crossplane to be managed

Overview

One of Crossplane's key pieces of functionality is support for provisioning and lifecycle management of public cloud managed resources. Crossplane offers two modes of resource creation:

  • Static: The resource is created using a dedicated resource type (CR)
  • Dynamic: The resource is created using a ResourceClaim type (CR)

In either case, Crossplane takes "full lifecycle" ownership of the created resource and, based on the specified Reclaim Policy, can delete the managed resource during clean-up.

There are, however, use cases where the user would like to represent existing resources (i.e. resources provisioned outside of Crossplane) as Crossplane resource instances. To achieve that, Crossplane must support resource import functionality.

Further Consideration

In order for Crossplane to provide resource import functionality, the following questions need to be addressed:

Resource Lifecycle

Will Crossplane become a "full" owner of the managed resource's lifecycle?

Deletion

If Crossplane imports a resource, can that resource be specified with ReclaimPolicy: Delete? Initial thinking: no, it cannot, i.e. all imported resources automatically get assigned (and hopefully at some point enforced) ReclaimPolicy: Retain ("you cannot delete something you didn't create").

Update

Can Crossplane perform managed resource updates, i.e. change resource state/properties with a potential impact on any or all existing downstream resource consumers?

Cloud Provider compatibility

Special consideration should be given to the fact that the imported resource could already be "fully" managed via the existing cloud provider configuration (service account scopes/permissions, etc.).

Develop Versioning / Release Strategy for Crossplane-Runtime

What problem are you facing?

As more repositories begin to take a dependency on crossplane-runtime, it is likely that different stacks will want to depend on different versions of crossplane-runtime. Currently, this is done by pinning to a specific commit.

How could Crossplane help solve your problem?

A more robust semantic versioning strategy should be used to allow for clearly defined changes in the library. Useful references for library versioning include:

ProviderRef should not be of type ObjectReference

What happened?

The ProviderReference field that exists in ResourceSpec is of type ObjectReference, which has a lot of identifiers such as name, namespace, uid, fieldPath, etc. In some controllers we call the NamespacedNameOf function, which returns an ObjectKey that includes the namespace. This is error-prone, since users might set a namespace and the controller wouldn't be able to locate the object.

Provider objects are cluster-scoped, and the type casting happens in the managed resource reconciler, meaning the type is hard-coded in the codebase. So we only need the name of the Provider object. We could have a type in crossplane-runtime like:

type ClusterScopedReference struct {
	APIVersion string `json:"apiVersion,omitempty"`
	Kind       string `json:"kind,omitempty"`
	Name       string `json:"name"`
}

How can we reproduce it?

Create a managed resource with providerRef that has namespace field populated.

What environment did it happen in?

Crossplane version: 0.5.1

Resource claims should handle deletion of their managed resources

What problem are you facing?

Currently, if you delete a managed resource that is bound to a resource claim, the claim will transition its Ready condition to False, stating that it's "waiting for managed resource to become bindable", but it will stay in phase Bound with a resource reference to the deleted managed resource. This is a fairly confusing state.

How could Crossplane help solve your problem?

Crossplane should handle this case. It could either:

  1. Automatically replace the deleted managed resource.
  2. Transition into some kind of failed condition.

I suspect option two would be the better UX. It would be very surprising if, as a Crossplane user who made a PostgreSQLInstance claim, your claim suddenly pointed at a new database instance because the old one was deleted. I feel that we should alert the user clearly about what happened, and let them resolve it by deleting and recreating the claim.

Do not re-queue non-recoverable requests

As a general rule, we requeue every failed request.
However, there are cases when failures are non-recoverable and require an update to the resource definition itself (which in turn will re-queue it).

For example: If the resource contains an invalid property value

"InvalidParameterValue: Invalid database identifier"

There is no point in re-queueing this request, since the property value will not fix itself.
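A minimal sketch of one way to do this, with hypothetical helpers: classify certain provider error codes as terminal, record them on the resource, and skip the requeue, since only an edit to the spec (which triggers a fresh reconcile anyway) can fix them.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// terminalCodes lists hypothetical provider error fragments that no amount of
// retrying will resolve; only an edit to the resource definition can.
var terminalCodes = []string{"InvalidParameterValue"}

// isTerminal reports whether an error should be treated as non-recoverable.
func isTerminal(err error) bool {
	for _, code := range terminalCodes {
		if strings.Contains(err.Error(), code) {
			return true
		}
	}
	return false
}

func main() {
	err := errors.New("InvalidParameterValue: Invalid database identifier")
	if isTerminal(err) {
		// A reconciler would record the failure on the resource's status
		// and return without requeueing; the next spec edit re-queues it.
		fmt.Println("terminal error, not requeueing:", err)
	}
}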

ReferenceResolver: Remove the condition on `ReferenceResolutionSuccess` to resolve references

What problem are you facing?

origin: #46 (comment)

One downside of gating reference resolution on a condition being set is that we'll only ever process the resolution once. In the current implementation if someone edits a managed resource and changes a reference that corresponds to a mutable field, we'll ignore it.
Perhaps we should just run reference resolution every time we reconcile the managed resource? Definitely happy to defer this to a future PR in the interest of getting this iteration merged.

How could Crossplane help solve your problem?

Per @negz, we could avoid this by updating the AttributeReferencer implementations to be no-ops if the fields they would assign were already set. This would avoid the need to check the TypeReferencesResolved condition, but would have the same issue in that it wouldn't support updating references after creation time.
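A sketch of that suggestion, with a simplified stand-in for an AttributeReferencer: resolution leaves an already-populated field alone, so it is safe and cheap to run on every reconcile, though (as noted) it still wouldn't pick up a changed reference after the field is first set.

package main

import "fmt"

// vpcSpec is a hypothetical spec with a field that may be set directly or
// resolved from a reference.
type vpcSpec struct {
	VPCID    string
	VPCIDRef string
}

// resolve assigns VPCID from the referenced object only when it is empty,
// making repeated resolution a no-op for already-set fields.
func resolve(spec *vpcSpec, lookup func(ref string) string) {
	if spec.VPCID != "" || spec.VPCIDRef == "" {
		return
	}
	spec.VPCID = lookup(spec.VPCIDRef)
}

func main() {
	spec := &vpcSpec{VPCIDRef: "my-vpc"}
	lookup := func(ref string) string { return "vpc-12345" } // stand-in for a live lookup
	resolve(spec, lookup)
	resolve(spec, lookup) // second run is a no-op
	fmt.Println(spec.VPCID)
}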

Spec drift is not reported to user

What problem are you facing?

Currently, if I change a field in the spec and, for some reason, it doesn't get reflected in the observed state, I have no way of knowing about it if the cloud API doesn't return an error, which happens in some cases where Update calls are selective about which fields to send.

How could Crossplane help solve your problem?

The Observe method can compare the received object with the spec and, if there is drift, include that information in the ExternalObservation (maybe with a bool); the generic managed reconciler would then set a condition to tell the user there is drift. It could be a new condition or just a new type in the existing condition.
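A sketch of that proposal using simplified stand-in types, not necessarily the API crossplane-runtime ships: Observe reports whether the external state still matches the spec, and the generic reconciler can surface that to the user.

package main

import "fmt"

// ExternalObservation is a simplified stand-in carrying the proposed flag.
type ExternalObservation struct {
	ResourceExists   bool
	ResourceUpToDate bool // false signals drift between spec and external state
}

type spec struct{ StorageGB int }
type external struct{ StorageGB int }

// observe compares the desired spec with the observed external resource.
func observe(s spec, e external) ExternalObservation {
	return ExternalObservation{
		ResourceExists:   true,
		ResourceUpToDate: s.StorageGB == e.StorageGB,
	}
}

func main() {
	o := observe(spec{StorageGB: 10}, external{StorageGB: 100})
	if !o.ResourceUpToDate {
		// The managed reconciler could set a condition here so the user can
		// see the drift without digging through provider logs.
		fmt.Println("spec drift detected")
	}
}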

Mentioned as future consideration in crossplane/crossplane#840

Resource claim reconciler only propagates connection secrets at binding time

What happened?

https://github.com/crossplaneio/crossplane-runtime/blob/d805043/pkg/resource/claim_reconciler.go#L401

The resource claim reconciler currently propagates connection secrets from a managed resource to the resource claim it is bound to only once, at binding time. This is insufficient for some managed resource kinds, including at least EKS clusters, that constantly refresh and update their connection secrets. Crossplane must notice changes to the managed resource's connection secret (even if the managed resource itself does not change) and propagate those changes to the resource claim.

This is a must-fix bug for the next stack-aws release. Currently we expect that workloads cannot be deployed to EKS clusters more than 60 minutes after their creation time.
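A minimal sketch of propagating connection details on every reconcile rather than only at bind time; corev1.Secret is the real Kubernetes type, everything else here is simplified.

package main

import (
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
)

// propagate copies the managed resource's connection data to the claim's
// secret and reports whether an update is needed, so the reconciler can write
// it back on every pass instead of only once at binding time.
func propagate(managed, claim *corev1.Secret) bool {
	if reflect.DeepEqual(managed.Data, claim.Data) {
		return false
	}
	claim.Data = managed.Data
	return true
}

func main() {
	managed := &corev1.Secret{Data: map[string][]byte{"token": []byte("refreshed")}}
	claim := &corev1.Secret{Data: map[string][]byte{"token": []byte("stale")}}
	if propagate(managed, claim) {
		// In a real reconciler this is where the claim secret would be
		// updated via the Kubernetes API.
		fmt.Println("claim connection secret updated")
	}
}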

How can we reproduce it?

  1. Dynamically provision an EKS cluster via a KubernetesCluster claim.
  2. Wait for the token granted at creation time to expire - this should take 60 minutes.
  3. Try to deploy a KubernetesApplication to the aforementioned KubernetesCluster.

You should find that the Kubernetes bearer token stored in the EKS cluster connection secret has been refreshed, but the KubernetesCluster claim's connection secret's bearer token has not.

What environment did it happen in?

Crossplane version:
