
template-operator's Introduction


Template Operator

This documentation and template serve as a reference for implementing a module operator for integration with Lifecycle Manager. It utilizes the kubebuilder framework with some modifications to implement Kubernetes APIs for Custom Resource Definitions (CRDs). Additionally, it hides Kubernetes boilerplate code to develop fast and efficient control loops in Go.


Understanding Module Development in Kyma

Before going in-depth, make sure you are familiar with:

This guide serves as a comprehensive step-by-step tutorial on properly creating a module from scratch, using an operator that installs Kubernetes YAML resources.

NOTE: While other approaches are encouraged, there are no dedicated guides available yet. These will follow with sufficient requests and the adoption of Kyma modularization.

Basic Principles

Every Kyma module using the operator follows five basic principles:

  • Is declared as available for use in a release channel through the ModuleTemplate custom resource (CR) in Control Plane
  • Is declared as the desired state within the Kyma CR in the runtime or Control Plane
  • Is installed or managed in the runtime by Lifecycle Manager through the Manifest CR in Control Plane
  • Owns at least one CRD that defines the contract towards a runtime administrator and configures its behavior
  • Operates on at most one runtime at any given time

Release Channels

Release channels let the customers try new modules and features early and decide when to apply the updates. For more information, see the release channels documentation in the modularization overview.

The following rules apply to the channel naming (see the sketch below the list):

  1. Lowercase letters from a to z.
  2. The total length is between 3 and 32 characters.
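
For illustration, the two rules collapse into a single pattern. A minimal validation sketch in Go, derived only from the rules above:

```go
package main

import (
	"fmt"
	"regexp"
)

// channelPattern encodes the naming rules: only lowercase letters from a to z,
// with a total length between 3 and 32 characters.
var channelPattern = regexp.MustCompile(`^[a-z]{3,32}$`)

func main() {
	for _, ch := range []string{"regular", "fast", "Dev", "xy"} {
		fmt.Printf("%q is a valid channel name: %v\n", ch, channelPattern.MatchString(ch))
	}
}
```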

If you are planning to migrate a pre-existing module within Kyma, read the transition plan for existing modules.

Comparison to Other Established Frameworks

Operator Lifecycle Manager (OLM)

Compared to OLM, modular Kyma differs in a few aspects. While OLM is built heavily around a static dependency expression, the Kyma modules are expected to resolve dependencies dynamically. This means that in OLM, a module must declare CRDs and APIs it depends on. In Kyma, all modules can depend on each other without declaring it in advance. This makes it harder to understand compared to a strict dependency graph, but it comes with a few key advantages:

  • Concurrent optimization on the controller level: every controller in Kyma is installed simultaneously and is not blocked from installation until other operators are available. This makes it easy to create or configure resources that do not need to wait for the dependency. For example, ConfigMap can be created even before a Deployment that must wait for an API to be present. While this forces controllers to include a case where the dependency is absent, we encourage eventual consistency and do not enforce a strict lifecycle model on the modules.
  • Discoverability is handled not through a registry or server but through a declarative configuration. Every module is installed through ModuleTemplate, which is semantically the same as registering an operator in an OLM registry or CatalogSource. ModuleTemplate, however, is a normal CR and can be installed on Control Plane. This allows multiple Control Planes to offer differing modules simply at configuration time. Also, we do not use file-based catalogs to maintain our catalog but maintain every ModuleTemplate through Open Component Model, an open standard to describe software artifact delivery.

Regarding release channels for operators, Lifecycle Manager operates at the same level as OLM. However, with Kyma, we ensure the bundling of a ModuleTemplate to a specific release channel. We are heavily inspired by the way OLM handles release channels, but we do not use an intermediary Subscription that assigns the catalog to the channel. Instead, every module is already delivered in a channel-specific ModuleTemplate.

There is a distinct difference in the parts of ModuleTemplate. ModuleTemplate contains not only a specification of the operator to be installed through a dedicated layer, but also a set of default values for a given channel, applied when the module is installed for the first time. When you install an operator from scratch using Kyma, the module is already initialized with a default set of values. However, when upgrading, Lifecycle Manager is not expected to update the values to new defaults. Instead, it is a way for module developers to prefill their operators with instructions based on a given environment (the channel). Note that these default values are static once installed, and are not updated unless a new module installation occurs, even when the content of ModuleTemplate changes. This is because a customer is expected to be able to change the settings of the module CR at any time without the Kyma ecosystem overriding them. Thus, the module CR can also be treated as a customer- or runtime-facing API that allows us to offer typed configuration for multiple parts of Kyma.
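
To make this concrete, here is a heavily trimmed, hypothetical ModuleTemplate. The field names follow the operator.kyma-project.io API, but all values are placeholders; spec.data carries the default module CR applied on first installation:

```yaml
apiVersion: operator.kyma-project.io/v1beta2
kind: ModuleTemplate
metadata:
  name: sample-regular
  labels:
    operator.kyma-project.io/module-name: sample
spec:
  channel: regular
  # Default module CR, applied only when the module is installed for the first time.
  data:
    apiVersion: operator.kyma-project.io/v1alpha1
    kind: Sample
    metadata:
      name: sample-yaml
    spec:
      resourceFilePath: ./module-data/yaml
  # Open Component Model descriptor, generated by the Kyma CLI during module creation.
  descriptor: {}
```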

Crossplane

With Crossplane, you fundamentally allow providers to interact with your Control Plane. The most similar aspect of the Crossplane lifecycle is that we also use opinionated OCI images to bundle our modules. We use ModuleTemplate to reference our layers containing the necessary metadata to deploy our controllers, just like Crossplane. However, we do not speculate on the permissions of controllers, and we enforce stricter versioning guarantees, allowing only semantic versioning for modules and SHA digests for individual module layers.

Fundamentally different is also the way that Providers and Composite Resources work compared to the Kyma ecosystem. While Kyma allows any module to bring an individual CR into the cluster for configuration, a Provider in Crossplane is located in Control Plane and only directs installation targets. We handle this kind of data centrally through acquisition strategies for credentials and other centrally managed data in the Kyma CR. Thus, it is most fitting to consider the Kyma ecosystem as a heavily opinionated Composite Resource from Crossplane, with the Managed Resource being tracked with the Lifecycle Manager manifest.

Compared to Crossplane, we also encourage the creation of our CRDs in place of the concept of the Managed Resource, and in the case of configuration, we can synchronize not only a desired state for all modules from Control Plane but also from the runtime. Similarly, we make the runtime module catalog discoverable from inside the runtime with a dedicated synchronization mechanism.

Lastly, compared to Crossplane, we have fewer choices regarding revision management and dependency resolution. While in Crossplane, it is possible to define custom package, revision, and dependency policies, in Kyma, managed use cases usually require unified revision handling, and we do not target a generic solution for revision management of different module ecosystems.

Implementation

Prerequisites

WARNING: For all use cases in the guide, you need a cluster for end-to-end testing outside your envtest integration test suite. It's HIGHLY RECOMMENDED that you follow this guide for a smooth development process. This is a good alternative if you do not want to use the entire Control Plane infrastructure and still want to test your operators properly.

Use one of the following options to install kubebuilder:

Homebrew

```bash
brew install kubebuilder
```

Fetch Sources Directly

```bash
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
```

Generate the kubebuilder Operator

  1. Initialize the kubebuilder project. Make sure the domain is set to kyma-project.io. Execute the following command in the test-operator folder.

    kubebuilder init --domain kyma-project.io --repo github.com/kyma-project/test-operator --project-name=test-operator --plugins=go/v4-alpha
  2. Create the API group version and kind for the intended CR(s). Make sure group is set to operator.

    kubebuilder create api --group operator --version v1alpha1 --kind Sample --resource --controller --make
  3. Run make manifests to generate respective CRDs.

  4. Set up a basic kubebuilder operator with appropriate scaffolding.

Optional: Adjust the Default Config Resources

If the module operator is deployed in the same namespace as other operators, differentiate your resources by adding common labels.

  1. Add commonLabels to the default kustomization.yaml (see the sketch after this list). See reference implementation.

  2. Include all resources (for example, manager.yaml) that contain label selectors by using commonLabels.
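
A minimal sketch of such an adjustment; the label key and value below are placeholders to replace with your module's identity:

```yaml
# config/default/kustomization.yaml (excerpt)
commonLabels:
  app.kubernetes.io/component: sample-operator.kyma-project.io
```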

Further reading: Kustomize Built-In commonLabels

API Definition Steps

  1. Refer to State requirements and similarly include them in your Status sub-resource.

This Status sub-resource must contain all valid State (.status.state) values to be compliant with the Kyma ecosystem.

```go
package v1alpha1

// Status defines the observed state of Module CR.
type Status struct {
    // State signifies current state of Module CR.
    // Value can be one of ("Ready", "Processing", "Error", "Deleting").
    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Enum=Processing;Deleting;Ready;Error
    State State `json:"state"`
}
```

Include the State values in your Status sub-resource, either through an inline reference or direct inclusion. These values carry semantic meaning for the Kyma ecosystem, so use them properly.
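
The Status struct above references a State type. A minimal sketch of what such a type can look like, assuming the usual kubebuilder conventions; check the reference implementation for the authoritative definition:

```go
package v1alpha1

// State signifies the current state of a module CR.
type State string

// These constants must match the kubebuilder validation enum declared on the State field.
const (
	StateReady      State = "Ready"
	StateProcessing State = "Processing"
	StateError      State = "Error"
	StateDeleting   State = "Deleting"
)
```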

  2. Optionally, you can add additional fields to your Status sub-resource. For instance, Conditions are added to the Sample CR in the API definition. This also includes the required State values, using an inline reference. See the following Sample CR reference implementation.

    package v1alpha1

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Sample is the Schema for the samples API
    type Sample struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   SampleSpec   `json:"spec,omitempty"`
        Status SampleStatus `json:"status,omitempty"`
    }

    type SampleStatus struct {
        Status `json:",inline"`

        // Conditions contain a set of conditionals to determine the State of Status.
        // If all Conditions are met, State is expected to be in StateReady.
        Conditions []metav1.Condition `json:"conditions,omitempty"`

        // add other fields to status subresource here
    }
  3. Run make generate manifests to generate boilerplate code and manifests.

Controller Implementation Steps

WARNING: This sample implementation is only for reference. You can copy parts of it but do not add this repository as a dependency to your project.

  1. Implement State handling to represent the corresponding state of the reconciled resource by following the kubebuilder guidelines on how to implement controllers.

  2. Refer to the Sample CR controller implementation for setting the appropriate State and Conditions values to your Status sub-resource.

The Sample CR is reconciled to install or uninstall a list of rendered resources from a YAML file on the file system.

```go
r.setStatusForObjectInstance(ctx, objectInstance, status.
    WithState(v1alpha1.StateReady).
    WithInstallConditionStatus(metav1.ConditionTrue, objectInstance.GetGeneration()))
```
  3. The reference controller implementations listed above use Server-Side Apply instead of conventional methods to process resources on the target cluster. You can leverage parts of this logic to implement your own controller logic; see the sketch below. Check out functions inside these controllers for state management and other implementation details.
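
A minimal sketch of a Server-Side Apply call with controller-runtime; the field owner string is illustrative, and the reference controllers wrap this logic in additional state handling:

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ssaApply patches the object with Server-Side Apply so that the API server
// merges the desired state and tracks field ownership per manager.
func ssaApply(ctx context.Context, c client.Client, obj *unstructured.Unstructured) error {
	return c.Patch(ctx, obj, client.Apply,
		client.ForceOwnership, client.FieldOwner("sample-operator"))
}
```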

Local Testing

  • Connect to your cluster and ensure kubectl is pointing to the desired cluster.
  • Install CRDs with make install. WARNING: This installs a CRD on your cluster, so create your cluster before running the install command. See Prerequisites for details on the cluster setup.
  • Local setup: install your module CR on a cluster and execute make run to start your operator locally.

WARNING: Note that while make run fully runs your controller against the cluster, it is not feasible to compare it to a productive operator. This is mainly because it runs with a client configured with privileges derived from your KUBECONFIG environment variable. For in-cluster configuration, see Guide on RBAC Management.

Bundling and Installation

Grafana Dashboard for Simplified Controller Observability

You can extend the operator further by using automated dashboard generation for Grafana.

Use the following command to generate two Grafana dashboard files with the controller-related metrics in the /grafana folder:

kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha

To import a Grafana dashboard, read the official Grafana guide. This feature is supported by the kubebuilder Grafana plugin.

Role-Based Access Control (RBAC)

Ensure you have appropriate authorizations assigned to your controller binary before running it inside a cluster (not locally with make run). The Sample CR controller implementation includes RBAC generation (via kubebuilder) for all resources across all API groups. Adjust it according to the chart manifest resources and reconciliation types.

In the early stages of your operator development, you can define RBACs that accommodate all resource types and adjust them later according to your requirements.

```go
package controllers

// TODO: dynamically create RBACs! Remove line below.
//+kubebuilder:rbac:groups="*",resources="*",verbs="*"
```

REMEMBER: Run make manifests after this adjustment for it to take effect.
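
Once the managed resources are known, you can narrow the wildcard down. A hypothetical example; the groups, resources, and verbs are placeholders to adapt to your manifest:

```go
//+kubebuilder:rbac:groups="",resources=configmaps;services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
```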

Prepare and Build Module Operator Image

WARNING: This step requires the working OCI registry. See Prerequisites.

  1. Include the static module data in your Dockerfile:
    # Final stage of the standard kubebuilder multi-stage build; the builder stage compiles /workspace/manager.
    FROM gcr.io/distroless/static:nonroot
    WORKDIR /
    COPY module-data/ module-data/
    COPY --from=builder /workspace/manager .
    USER 65532:65532

    ENTRYPOINT ["/manager"]

The sample module data in this repository includes a YAML manifest in the module-data/yaml directories. Reference the YAML manifest directory with the spec.resourceFilePath attribute of the Sample CR. The example CRs in the config/samples directory already reference the mentioned directories. Feel free to organize the static data differently. The included module-data directory serves just as an example. You may also decide not to include any static data at all. In that case, you must provide the controller with the YAML data at runtime using other techniques, such as Kubernetes volume mounting.
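
For illustration, a Sample CR referencing the bundled manifest directory can look as follows; the path value is an assumption matching the module-data layout described above, and the config/samples directory contains the authoritative examples:

```yaml
apiVersion: operator.kyma-project.io/v1alpha1
kind: Sample
metadata:
  name: sample-yaml
  namespace: kyma-system
spec:
  resourceFilePath: ./module-data/yaml
```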

  2. If necessary, build and push your module operator binary by adjusting IMG and running the inbuilt kubebuilder commands. Assuming your operator image has the following base settings:
  • is hosted at op-kcp-registry.localhost:8888/unsigned/operator-images
  • controller image name is sample-operator
  • controller image has version 0.0.1

you can run the following command:

make docker-build docker-push IMG="op-kcp-registry.localhost:8888/unsigned/operator-images/sample-operator:0.0.1"

This builds the controller image and then pushes it as the image defined in IMG based on the kubebuilder targets.

Build and Push Your Module to the Registry

WARNING: This step requires the working OCI Registry, cluster, and Kyma CLI. See Prerequisites.

  1. Generate the CRDs and resources for the module from the default kustomization into a manifest file using the following command:
    make build-manifests

You can use this file as a manifest for the module configuration in the next step.

Furthermore, make sure that the settings from Prepare and Build Module Operator Image for single-cluster mode and the following module settings are applied:

  • is hosted at op-kcp-registry.localhost:8888/unsigned
  • for a k3d registry, the insecure flag (http instead of https for registry communication) is enabled
  • Kyma CLI in $PATH under kyma is used
  • the default sample under config/samples/operator.kyma-project.io_v1alpha1_sample.yaml has been adjusted to be a valid CR

WARNING: The settings above reflect your default configuration for a module. To change them, adjust them manually to a different configuration. You can also define multiple files in config/samples, but you must specify the correct file during the bundling.

  • .gitignore has been adjusted and the following ignores have been added:

    # generated dummy charts
    charts
    # template generated by kyma create module
    template.yaml
  2. To configure the module, adjust the file module-config.yaml, located at the root of the repository.

    The following fields are available for the configuration of the module:

    • name: (Required) The name of the module.
    • version: (Required) The version of the module.
    • channel: (Required) The channel that must be used in ModuleTemplate. Must be a valid Kyma channel.
    • manifest: (Required) The relative path to the manifest file (generated in the first step).
    • defaultCR: (Optional) The relative path to a YAML file containing the default CR for the module.
    • resourceName: (Optional) The name for ModuleTemplate that is created.
    • namespace: (Optional) The namespace where ModuleTemplate is deployed.
    • security: (Optional) The name of the security scanners configuration file.
    • internal: (Optional) Determines whether ModuleTemplate must have the internal flag or not. The type is bool.
    • beta: (Optional) Determines whether ModuleTemplate must have the beta flag or not. The type is bool.
    • labels: (Optional) Additional labels for ModuleTemplate.
    • annotations: (Optional) Additional annotations for ModuleTemplate.
    • customStateCheck: (Optional) Specifies custom state checks for the module.

    An example configuration:

    name: kyma-project.io/module/template-operator
    version: v1.0.0
    channel: regular
    manifest: template-operator.yaml
  3. Run the following command to create the module configured in module-config.yaml and push your module operator image to the specified registry:

    kyma alpha create module --insecure --registry op-kcp-registry.localhost:8888/unsigned --module-config-file module-config.yaml 

WARNING: For external registries (for example, Google Container/Artifact Registry) never use insecure. Instead, specify credentials. You can find more details in the CLI help documentation.

  4. Verify that the module creation succeeded and observe the generated template.yaml file. It contains the ModuleTemplate CR and the descriptor of the component under spec.descriptor.component.

    component:
      componentReferences: []
      labels:
      - name: security.kyma-project.io/scan
        value: enabled
        version: v1
      name: kyma-project.io/module/template-operator
      provider: internal
      repositoryContexts:
      - baseUrl: http://op-kcp-registry.localhost:8888/unsigned
        componentNameMapping: urlPath
        type: ociRegistry
      resources:
      - access:
          digest: sha256:d008309948bd08312016731a9c528438e904a71c05a110743f5a151f0c3c4a9e
          type: localOciBlob
        name: raw-manifest
        relation: local
        type: yaml
        version: v1.0.0
      sources:
      - access:
          commit: 4f2ae6474ea7ababf9be246abe74b40f1baf1121
          repoUrl: https://github.com/LeelaChacha/kyma-cli.git
          type: gitHub
        labels:
        - name: git.kyma-project.io/ref
          value: refs/heads/feature/#90-update-build-instructions
          version: v1
        - name: scan.security.kyma-project.io/language
          value: golang-mod
          version: v1
        - name: scan.security.kyma-project.io/subprojects
          value: "false"
          version: v1
        - name: scan.security.kyma-project.io/exclude
          value: '**/test/**,**/*_test.go'
          version: v1
        name: module-sources
        type: git
        version: v1.0.0
      version: v1.0.0

The CLI created various layers that are referenced in the blobs directory. For more information on the layer structure, check the module creation help with kyma alpha create module --help.

Using Your Module in the Lifecycle Manager Ecosystem

Deploying Kyma Infrastructure Operators with kyma alpha deploy

WARNING: This step requires the working OCI registry and cluster. See Prerequisites.

Now that everything is prepared in a cluster of your choice, you can reference the module within any Kyma CR in your Control Plane cluster.

Deploy the Lifecycle Manager to the Control Plane cluster with:

kyma alpha deploy

Deploying ModuleTemplate into the Control Plane

Run the following command to create ModuleTemplate in your cluster. After this, the module becomes available for consumption based on the module name configured with the operator.kyma-project.io/module-name label in ModuleTemplate.

WARNING: Depending on whether your setup runs against a k3d cluster or registry, you must run the script /scripts/patch_local_template.sh before pushing ModuleTemplate to have the proper registry setup. This is necessary for k3d clusters due to port-mapping issues in the cluster that the operators cannot reuse. Take a look at the relevant issue for more details.

kubectl apply -f template.yaml

You can use the following command to enable the module you created:

kyma alpha enable module <module-identifier> -c <channel>

This adds your module to .spec.modules with a name originally based on the "operator.kyma-project.io/module-name": "sample" label generated in template.yaml:

spec:
  modules:
  - name: sample

If required, you can adjust this Kyma CR based on your testing scenario. For example, if you are running a dual-cluster setup, you might want to enable the synchronization of the Kyma CR into the runtime cluster for a full E2E setup. When creating this Kyma CR in your Control Plane cluster, installation of the specified modules should start immediately.

Debugging the Operator Ecosystem

The operator ecosystem around Kyma is complex, and it might become troublesome to debug issues in case your module is not installed correctly. For this reason, here are some best practices on how to debug modules developed using this guide.

  1. Verify the Kyma installation state is Ready by verifying all conditions.
     JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.reason}:{@.status};{end}{end}' \
     && kubectl get kyma -o jsonpath="$JSONPATH" -n kcp-system
  2. Verify the Manifest installation state is ready by verifying all conditions.
     JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
     && kubectl get manifest -o jsonpath="$JSONPATH" -n kcp-system
  3. Depending on your issue, observe the deployment logs from either Lifecycle Manager or Module Manager. Make sure that no errors have occurred.

Usually, the issue is related to either RBAC configuration (for troubleshooting minimum privileges for the controllers, see our dedicated RBAC section), a misconfigured image, the module registry, or ModuleTemplate. As a last resort, make sure that you are running a single-cluster or dual-cluster setup, watch out for any steps marked with a WARNING, and retry with a freshly provisioned cluster. For cluster provisioning, follow the recommendations for clusters mentioned in the Prerequisites of this guide.

Lastly, if you are still unsure, open an issue with a description and steps to reproduce. We will be happy to help you with a solution.

Registering your Module Within the Control Plane

For global usage of your module, the generated template.yaml from Build and Push Your Module to the Registry must be registered in our Control Plane. This relates to Phase 2 of the module transition plan. Please be patient until we provide you with a stable guide on how to properly integrate your template.yaml with an automated test flow into the central Control Plane offering.


template-operator's Issues

Provide a Helm-based example for an operator

Description

While applying raw resources is recommended, some teams cannot shift towards Helm-less deployments. For that reason, we need to provide an exemplary operator that would use Helm to render manifests and apply them to the cluster.

We can either create a separate repository just for that or provide a new example of how to create such an operator in template-operator.

Reasons

Enable the module teams by providing a Helm-based example.

Acceptance Criteria

  • Provide a Helm-based exemplary operator
  • Helm-based operator clearly marked as the possibility, but not the preferred way to go
  • Clear separation and basic description of the differences between various approaches (raw vs Helm)

Introduce new flag to set Final-Deleting-State

Description
To support more complex test scenarios, we need a new flag to set the State when the template-operator module is being disabled.
This topic first came up during the investigation of the following issue: kyma-project/lifecycle-manager#1080

Acceptance Criteria

  • Add a new flag to the Template-Operator that allows setting the State when the module is in Deleting (DeletionTimestamp unequal zero); default Deleting (normal behaviour, but without being stuck in Deleting) -> This can be used to simulate more complex scenarios like the one described in kyma-project/lifecycle-manager#1080 (comment)
  • Example flag name: Final-Deleting-State
  • By default, the template-operator should behave as before
  • Cover the new flag in tests
  • After the implementation is done, update and unblock kyma-project/lifecycle-manager#1080

Review the README

As @kyma-project/technical-writers were not added as CODEOWNERS, the README.md was merged without a language review. Let's review the README and make the necessary structure changes if needed.

update outdated template-operator document

The Bundling and Installation section in the template-operator document is outdated, which affects the Kyma module hands-on experience. An issue was reported here.

AC:

Revisiting the Template Operator example and the documentation

Description

With the current state of the documentation in the Template Operator repository, there are a lot of ambiguities that can lead to misunderstandings and misconceptions about what exactly is expected from the Module Operators. The docu should clearly state what the available options are, which of them suit which scenarios, and what the default, recommended way of implementing a custom Module Operator is.

The declarative approach was meant to be used only with the simplest manifests that require no interaction after they are installed. Right now, the doc is largely focused on this point, and this requires a change. There should be a clear separation of which types of reconcile loops/operators suit which scenarios, manifests, and tools. While it is impossible to describe all the viable solutions, there are three that we should at least indicate in the docu:

  • Simple reconcile loop that just applies resources to the cluster (default, go-to approach)
  • Helm-based custom implementation (with the declarative library as an example)
  • Helm Operator

The most flexible solution for every team is to write their own reconcile loop, which should be the main point of the Template Operator documentation. It allows for full customization and does not require any kind of common library that would lead to a serious implementation and maintenance effort. For this case, there should also be an example that shows a basic loop with a simple YAML file applied as an effect.

What is still under discussion is the requirement for implementing the State interface; this will be handled as a separate point in the future.

While both Helm-based solutions are an option, we should noticeably mark them as non-preferred (as they bring multiple shortcomings present in Helm). We've also decided to drop the Kustomization mentions from this doc completely.

Another point to mark is the resource usage of Module Operators; we should at least include recommended values for CPU and memory consumption.

Overall, the documentation in this repository should be kept as simple as possible to reduce any potential misconceptions or misunderstandings when approaching the topic.

Reasons

We want the Module Teams to own their Operators and implement custom reconcile loops that suit their needs instead of basing them on the simple, Helm-coupled library. This allows for further customization and more flexibility towards the different deployment approaches required by different components.

Acceptance Criteria:

  • Exemplary Template Operator uses only plain K8S resource apply; it includes neither Helm nor Kustomize
  • Custom reconcile loop is visibly marked as the "go-to" solution
  • Helm-related options are listed as the possibilities for the simplest flows
  • Technical steps in the docs adjusted to the new, default approach
  • Adjust the guidance included in the CLI accordingly

Related Issues:
#11

Revisit TODOs for the Template Operator repository

Description:
There are multiple TODO comments in the repository. While most are related to minor improvements, some may impact the overall performance or the quality of the whole component.

As required by the DoD and quality standards, and to keep everything logged as backlog items, those TODOs should be analyzed and properly addressed.

Acceptance Criteria:

  • All the TODOs were addressed, either by:
    • Introducing improvements/fixes to the code
    • Logging an issue for the improvement/fix
  • Clean up Helm controller

Outdated installation with kyma in k3d

Description

Following the README.md bundling and installation steps triggers deprecation warnings, and the guide for k3d installation is not well documented. After several tries and errors, I couldn't make the template operator work using kyma alpha deploy.

I did manage to install the template-operator following https://github.com/kyma-project/lifecycle-manager/blob/main/docs/developer-tutorials/local-test-setup.md#skr-cluster-setup (uses Istio and 2 k3d clusters), but not with kyma deploy + kyma alpha deploy.

Installation with kyma deploy:

❯ kyma provision k
  Checking if port flags are valid
  Checking if k3d registry of previous kyma installation exists
  Checking if k3d cluster of previous kyma installation exists
  Deleted k3d registry of previous kyma installation
- k3d status verified (507ms)
- Created k3d registry 'k3d-kyma-registry:5001' (399ms)
- Created k3d cluster 'kyma' (20.394s)
❯ kyma deploy -s=2.18.1
- Detecting Environment (0s)
...
Kyma is installed in version:   2.18.1
❯ cd ~/go/src/github.com/kyma-project/template-operator
❯ git status
On branch main
Your branch is up to date with 'origin/main'.
nothing to commit, working tree clean
❯ git pull
Already up to date
❯ make docker-build docker-push IMG="k3d-kyma-registry:5001/unsigned/operator-images/sample-operator:0.0.1"
docker build -t k3d-kyma-registry:5001/unsigned/operator-images/sample-operator:0.0.1 --build-arg TARGETARCH=amd64 .
...
docker push k3d-kyma-registry:5001/unsigned/operator-images/sample-operator:0.0.1
The push refers to repository [k3d-kyma-registry:5001/unsigned/operator-images/sample-operator]
...
0.0.1: digest: sha256:1c7ca131ad5f2f7825953bf055a59b75d3f23000ff4f4948ae92ce763d944523 size: 2610
# I believe image name might be wrong, so I pushed also with `template-operator` image name
❯ kyma alpha create module --version 0.0.1 --insecure --registry k3d-kyma-registry:5000/unsigned
WARNING: This command is experimental and might change in its final version. Use at your own risk.
WARNING: "--module-config-file" flag is required. If you want to build a module from a Kubebuilder project instead, use the "--kubebuilder-project" flag.
Error: "--module-config-file" flag is required
❯ make build-manifests
/Users/I507794/go/src/github.com/kyma-project/template-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/I507794/go/src/github.com/kyma-project/template-operator/bin/kustomize build config/default > template-operator.yaml
# did both for 5000 and 5001
❯ kyma alpha create module --version 0.0.1 --insecure --registry k3d-kyma-registry:5000/unsigned --module-config-file=module-config.yaml
...
- Module successfully pushed to "k3d-kyma-registry:5000/unsigned" (54ms)
- Template successfully generated at template.yaml (3ms)
# did first with 5000, then used 5001 since this is where it is exposed on my local machine
❯ kyma alpha create module --version 0.0.1 --insecure --registry k3d-kyma-registry:5001/unsigned --module-config-file=module-config.yaml
# assigning the field .spec.target to value control-plane 

❯ nano template.yaml
❯ kyma alpha deploy
- Deployed cert-manager.io in version v1.11.0 (23.793s)
- Used Lifecycle-Manager: europe-docker.pkg.dev/kyma-project/prod/lifecycle-manager:latest (0s)
...
Kyma is installed in version:   alpha deployment with lifecycle-manager
...
❯ ./hack/local-template.sh
http:5000//k3d-kyma-registry:5000/unsigned
# wrong address -> should be http://k3d-kyma-registry:5000/unsigned -> changed manually
# revert and change manually to http://k3d-kyma-registry:5000/unsigned
❯ kubectl apply -f template.yaml
moduletemplate.operator.kyma-project.io/template-operator-fast created
# changed to `template-operator` from `sample` -> outdated module name
❯ kubectl patch kyma default-kyma -n kyma-system --type='json' -p='[{"op": "add", "path": "/spec/modules", "value": [{"name": "template-operator" }] }]'
kyma.operator.kyma-project.io/default-kyma patched
❯ k describe kyma -n kyma-system
...
Warning  ModuleReconciliationError  76s (x25 over 2m4s)  lifecycle-manager  no templates were found: in channel regular for module template-operator
# patch to fast channel -> maybe add this to documentation?
❯ k describe kyma -n kyma-system
Status:
  Active Channel:  fast
  Conditions:
    Last Transition Time:  2023-09-28T09:48:20Z
    Message:               not all modules are in ready state
    Observed Generation:   5
    Reason:                Ready
    Status:                False
    Type:                  Modules
  Last Operation:
    Last Update Time:  2023-09-28T09:48:20Z
    Operation:         waiting for all modules to become ready
  Modules:
    Channel:  fast
    Fqdn:     kyma-project.io/module/template-operator
    Manifest:
      API Version:  operator.kyma-project.io/v1beta2
      Kind:         Manifest
      Metadata:
        Generation:  1
        Name:        default-kyma-template-operator-2235966007
        Namespace:   kyma-system
    Name:            template-operator
    Resource:
      API Version:  operator.kyma-project.io/v1alpha1
      Kind:         Sample
      Metadata:
        Name:       sample-yaml
        Namespace:  kyma-system
    State:          Error
    Template:
      API Version:  operator.kyma-project.io/v1beta2
      Kind:         ModuleTemplate
      Metadata:
        Generation:  1
        Name:        template-operator-fast
        Namespace:   kcp-system
    Version:         v1.0.0
  State:             Error
❯ k describe kyma -n kyma-system
...
Warning  ModuleReconciliationError  112s (x150 over 6m53s)  lifecycle-manager  no templates were found: in channel fast for module template-operator

After this and trying a bit more to make it work, I stopped trying and went for the other guide.

Could you please update and enhance the documentation explaining how to make it work with a single k3d cluster and via kyma commands?

Thanks!

Area

  • Template operator
  • Modules
  • Lifecycle manager

Reasons

Trying to develop a new module locally out of the template doesn't work or is at least not straightforward (at least from the steps I tried). The documentation is outdated and requires reading docs from another repository.

Assignees

@kyma-project/technical-writers

Attachments

See logs

Decoupling of the referenced helm chart

Description

The Helm chart for the component being installed and managed is tightly coupled (hardcoded). It would be helpful if it could be referenced via a URL.

Reasons

This way, the need for double maintenance of the Helm chart is eliminated.

Update the build module section

Description

Right now, the Template Operator uses the old build command based on the kubebuilder project. It should be aligned with the new way of building modules.

Reasons

We want to have an up-to-date guide on the module implementation.

Acceptance Criteria

  • Describe how to generate a manifest YAML file containing all the Kubernetes resources for the Template Operator
  • Build command updated (in this section)
  • Metadata YAML file (module-config.yaml) used for the build command described in the documentation section

Release the Module Template and Manifest

Description

It would be great to have the Module Template and manifest (K8s YAMLs) released for the Template Operator. It will make it easier to present/use the example in various tutorials, documents, or guides.

Keda Manager, BTP Manager, and eventing-manager already have their pipelines built; these can be a starting point for this implementation.

Acceptance Criteria

  • GH actions pipeline built
  • ModuleTemplate available in GH release (0.1.2)
  • Default CR available in GH release (0.1.2)
  • K8S manifests available in GH release (0.1.2)
  • (stretch) include module-config file for the CLI build command

Reasons

It will be much easier to just download the Module Template and Manifest instead of building them locally.

Create a new release of template-operator

Description

We need to release a new version of the template-operator and update the sec-scanners-config. It may need to be a 1.x.x release to comply with SRE expectations for the new pipelines.

Sec scanners config:

- europe-docker.pkg.dev/kyma-project/prod/template-operator:0.1.2

Reasons

  • the current 0.1.2 release uses Go 1.21.7, which has a known security vulnerability reported by BDBA
    • it is fixed from Go 1.22.0 onwards, which is already used in main
  • SRE needs releases for the new deployment pipelines in order to comply with security/software-lifecycle requirements


Extend sample module data with YAML manifest

Description

Extend the sample data included in the docker image with an example YAML manifest.

Reasons

After #14 is merged, the controller is capable of reconciling two different custom resources: Sample and SampleHelm.
Both of these CRs refer to some directory in a filesystem the controller can access at runtime.
The Sample CR reconciliation reads a YAML manifest (a single file) from that directory and applies it to the cluster. The SampleHelm CR reads a Helm chart from the directory and applies it to the cluster.
To illustrate how the controller works, a module-chart directory containing an example chart is embedded in the controller docker image.
But we don't have a similar example YAML file in the docker image, so there's no simple way to test a Sample CR out of the box.

AC:

  • [x] Extend the sample data embedded in the docker image so that both CRs can be tested out of the box.

make template operator api as go module

The template operator is heavily used as an e2e test source for the CLI and Lifecycle Manager components. For better testing, it would be preferable to use the Sample CR directly instead of creating it as Unstructured. However, directly introducing the template operator as a dependency of the CLI or LM would create high coupling and pull in many unnecessary dependencies, for example, controller-runtime.

If we refactor the template operator API into a dedicated Go module that introduces only the necessary dependencies, other projects can use this API directly with low coupling.

AC:

  • Refactor the template operator API as a dedicated Go module with minimum dependencies
  • Introduce this API as a local dependency for the template operator (using a replace directive)
  • Verify that the make targets build, manifest, generate, lint, etc. still work
  • Write a follow-up issue to introduce the API as a dependency to the CLI and LM
