flanksource / template-operator

The Template Operator is a Kubernetes operator for runtime, reconciliation-based templating, providing a simple way to create composite CRDs and to update or copy resources between namespaces and clusters.

License: Apache License 2.0

Topics: kubernetes, operator, kubernetes-operator, composition, templating, gomplate, crd-controller, crd

template-operator's Introduction

Template Operator

Simple, reconciliation-based runtime templating

The Template Operator is for platform engineers who need an easy and reliable way to create, copy and update Kubernetes resources.

Design principles

  • 100% YAML – Templates are valid YAML, so IDE validation and autocomplete of k8s resources work as normal.
  • Simple – Easy to use and quick to get started.
  • Reconciliation based – Changes are applied quickly and resiliently (unlike webhooks) at runtime.

Further reading

This README replicates much of the content from Simple, reconciliation-based runtime templating.

For further examples, see part 2 in the series: Powering up with Custom Resource Definitions (CRDs).

Alternatives

There are alternative templating systems in use by the k8s community. Each has valid use cases, and noting their downsides for runtime templating is not intended as an indictment – all are excellent choices under the right conditions.

Alternative   Downside for runtime templating
crossplane    Complex, due to its design for infrastructure composition
kyverno       Webhook based; designed as a policy engine
helm          Not 100% YAML; not reconciliation based (build time)

Installation

API documentation available here.

Prerequisites

This guide assumes you have either a kind cluster or minikube cluster running, or have some other way of interacting with a cluster via kubectl.

Install

export VERSION=0.4.0
# For the latest release version, see: https://github.com/flanksource/template-operator/releases

# Apply the operator
kubectl apply -f https://github.com/flanksource/template-operator/releases/download/v${VERSION}/operator.yml

Run kubectl get pods -A and you should see something similar to the following in your terminal output:

NAMESPACE            NAME                                                    READY
template-operator    template-operator-controller-manager-6bd8c5ff58-sz8q6   2/2

Following the logs

To follow the manager logs, open a new terminal and run (adjusting the namespace and deployment name as needed):

kubectl logs -f --since 10m -n template-operator deploy/template-operator-controller-manager -c manager

These logs are where reconciliation successes and errors show up – and the best place to look when debugging.
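
For example, to surface only recent errors from the same stream:

kubectl logs --since 10m -n template-operator deploy/template-operator-controller-manager -c manager | grep -i error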

Use case: Creating resources per namespace

As a platform engineer, I need to quickly provision Namespaces for application teams so that they are able to spin up environments quickly.

As organisations grow, platform teams are often tasked with creating Namespaces for continuous integration or for development.

To configure a Namespace, platform teams may need to commit or apply many boilerplate objects.

For this example, suppose you need a set of Roles and RoleBindings to be deployed automatically for each Namespace.

Step 1: Adding a namespace and a template

Add a Namespace. You might add this after applying the Template, but it's helpful to see that the Template Operator doesn't care when objects are applied – a feature of the reconciliation-based approach. Note the label – this tags the Namespace as one that should produce RoleBindings.

cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: store-5678
  labels:
    # This will be used to select on later
    type: application 
EOF

With the Namespace configured, you can apply the Template (see inline notes).

cat <<EOF | kubectl apply -f -
apiVersion: templating.flanksource.com/v1
kind: Template
metadata:
  name: namespace-rolebinder-developer
  namespace: template-operator
spec:
  # The "source" field selects for the objects to monitor.
  # API docs here: https://pkg.go.dev/github.com/flanksource/template-operator/api/v1#ResourceSelector
  source: 
    # Selects for the apiVersion
    apiVersion: v1
    # Selects for the kind
    kind: Namespace
    # Selects for the label
    labelSelector:
      matchLabels:
        type: application
  # For every matched object, Template Operator will generate the listed resources.
  resources: 
  - kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developer
      # {{.metadata.name}} comes from the source object (".").
      # Syntax is based on go text templates with gomplate functions (https://docs.gomplate.ca).
      namespace: "{{.metadata.name}}" 
    rules:
    - apiGroups: [""]
      resources: ["secrets", "pods", "pods/log", "configmaps"]
      verbs: ["get", "watch", "list"]
  - kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developer
      namespace: "{{.metadata.name}}"
    subjects:
      - kind: Group
        name: developer
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role 
      name: developer
EOF

Because of reconciliation, even though the Namespace store-5678 was applied before the Template namespace-rolebinder-developer, the operator still produces and updates the required objects.

Step 2: See results

Once the Template Operator has reconciled (you can see this if you're tailing the logs), run kubectl get roles.rbac.authorization.k8s.io -A to see the newly created Role:

NAMESPACE         NAME          CREATED AT
store-5678        developer     2021-07-16T06:30:27Z

Run kubectl get rolebindings.rbac.authorization.k8s.io -A, for the RoleBinding:

NAMESPACE     NAME          ROLE            AGE
store-5678    developer     Role/developer  10s
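
To spot-check the new RBAC, you can impersonate a member of the developer group (the user name is arbitrary – impersonation checks don't require the user to exist):

kubectl auth can-i list pods -n store-5678 --as=some-user --as-group=developer
# expected output: yes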

Step 3: Adding a second namespace

Now you can apply a second Namespace:

cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: store-7674
  labels:
    type: application
EOF

Step 4: See results

The Template Operator will create/update the resources in its next cycle. Once the Template Operator reconciles, run kubectl get rolebindings.rbac.authorization.k8s.io -A and you should see something like:

NAMESPACE     NAME          ROLE            AGE
store-7674    developer     Role/developer  2m
store-5678    developer     Role/developer  8s

And for kubectl get roles.rbac.authorization.k8s.io -A:

NAMESPACE         NAME          CREATED AT
store-5678        developer     2021-07-16T06:30:27Z
store-7674        developer     2021-07-16T06:33:14Z

And you're done! In the next example, you'll learn how to add a Template to copy Secrets across Namespaces.

Use case: Copying secrets between namespaces

As a platform engineer, I need to automatically copy appropriate Secrets to newly created Namespaces so that application teams have access to the Secrets they need by default.

Suppose you have a Namespace containing Secrets you want to copy to every development Namespace.

Step 1: Add secrets and namespace

Apply the following manifests to set up the Namespace with the Secrets.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: development-secrets
  labels:
    environment: development
---
apiVersion: v1
kind: Secret
metadata:
  name: development-secrets-username
  namespace: development-secrets
  labels:
    secrets.flanksource.com/label: development
stringData:
  username: rvvq6c8p272!
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: development-secrets-api
  namespace: development-secrets
  labels:
    secrets.flanksource.com/label: development
stringData:
  apikey: 7jmpsscrd272jlh 
type: Opaque
EOF

Step 2: Apply the template

Then add a Template with the copyToNamespaces field.

cat <<EOF | kubectl apply -f -    
kind: Template
apiVersion: templating.flanksource.com/v1
metadata:
  name: copy-development-secrets
spec:
  source:
    apiVersion: v1
    kind: Secret 
    # selects on the Namespace label
    namespaceSelector:
      matchLabels:
        environment: development
    # selects on the Secret label
    labelSelector:
      matchLabels:
        secrets.flanksource.com/label: development 
  copyToNamespaces:
    # selects on the Namespace label 
    namespaceSelector:
      matchLabels:
        type: application 
EOF

Step 3: See results

Once the Template Operator has reconciled, run kubectl get secrets -A to see the copied secrets:

NAMESPACE             NAME                             TYPE     DATA   AGE
store-5678            development-secrets-api          Opaque   1      3s
store-5678            development-secrets-username     Opaque   1      3s
store-7674            development-secrets-api          Opaque   1      5s
store-7674            development-secrets-username     Opaque   1      5s
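
To confirm the data came across intact, decode one of the copies (assuming the store-5678 Namespace from the first use case):

kubectl get secret development-secrets-username -n store-5678 -o jsonpath='{.data.username}' | base64 -d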

template-operator's People

Contributors

brendangalloway, elgohr, flanksaul, kaitou786, moshloop, teodor-pripoae, tobernguyen, yashmehrotra, zoidyzoidzoid


template-operator's Issues

"items.when" is not working?

I tried to use when in my template (https://raw.githubusercontent.com/tobernguyen/template-operator-library/85cad345222ff09436f8ed3e579bf02a6bded486/config/templates/postgresql-db.yaml). In the template you can see I have both a "when" and a "when not" case, but the template-operator didn't deploy either of them.

And here is the response from k get postgresqldb.v1.db.flanksource.com -n postgres-operator -o yaml:

apiVersion: v1
items:
- apiVersion: db.flanksource.com/v1
  kind: PostgresqlDB
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"db.flanksource.com/v1","kind":"PostgresqlDB","metadata":{"annotations":{},"name":"test-db-v1","namespace":"postgres-operator"},"spec":{"backup":{"bucket":"test-pg-backup","schedule":"*/5 * * * *"},"parameters":{"archive_mode":"on","archive_timeout":"60s","log_destination":"stderr","max_connections":"600","shared_buffers":"2048MB"},"replicas":1,"storage":{"size":"1Gi"}}}
    creationTimestamp: "2021-04-21T05:35:19Z"
    generation: 1
    managedFields:
    - apiVersion: db.flanksource.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:backup:
            .: {}
            f:bucket: {}
            f:restic: {}
            f:schedule: {}
          f:parameters:
            .: {}
            f:archive_mode: {}
            f:archive_timeout: {}
            f:log_destination: {}
            f:max_connections: {}
            f:shared_buffers: {}
          f:replicas: {}
          f:storage:
            .: {}
            f:size: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2021-04-21T05:35:19Z"
    name: test-db-v1
    namespace: postgres-operator
    resourceVersion: "1244"
    selfLink: /apis/db.flanksource.com/v1/namespaces/postgres-operator/postgresqldbs/test-db-v1
    uid: 57d058c6-1565-4227-94a6-ba932de2dad9
  spec:
    backup:
      bucket: test-pg-backup
      restic: false
      schedule: '*/5 * * * *'
    parameters:
      archive_mode: "on"
      archive_timeout: 60s
      log_destination: stderr
      max_connections: "600"
      shared_buffers: 2048MB
    replicas: 1
    storage:
      size: 1Gi
- apiVersion: db.flanksource.com/v1
  kind: PostgresqlDB
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"db.flanksource.com/v2","kind":"PostgresqlDB","metadata":{"annotations":{},"name":"test-db-v2","namespace":"postgres-operator"},"spec":{"backup":{"bucket":"test-pg-backup","retention":{"keepDaily":30,"keepLast":20},"schedule":"*/5 * * * *"},"parameters":{"archive_mode":"on","archive_timeout":"60s","log_destination":"stderr","max_connections":"600","shared_buffers":"2048MB"},"replicas":1,"storage":{"size":"1Gi"}}}
    creationTimestamp: "2021-04-21T05:35:24Z"
    generation: 1
    managedFields:
    - apiVersion: db.flanksource.com/v2
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:backup:
            .: {}
            f:bucket: {}
            f:restic: {}
            f:retention:
              .: {}
              f:keepDaily: {}
              f:keepLast: {}
            f:schedule: {}
          f:parameters:
            .: {}
            f:archive_mode: {}
            f:archive_timeout: {}
            f:log_destination: {}
            f:max_connections: {}
            f:shared_buffers: {}
          f:replicas: {}
          f:storage:
            .: {}
            f:size: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2021-04-21T05:35:24Z"
    name: test-db-v2
    namespace: postgres-operator
    resourceVersion: "1250"
    selfLink: /apis/db.flanksource.com/v1/namespaces/postgres-operator/postgresqldbs/test-db-v2
    uid: 443de291-d39f-453c-908f-031eb4c681f1
  spec:
    backup:
      bucket: test-pg-backup
      restic: true
      schedule: '*/5 * * * *'
    parameters:
      archive_mode: "on"
      archive_timeout: 60s
      log_destination: stderr
      max_connections: "600"
      shared_buffers: 2048MB
    replicas: 1
    storage:
      size: 1Gi
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Any ideas?

Templates failing to convert values missing from spec to appropriate nil type

When passing values from DependencyMatrix to Service, missing values do not convert to their appropriate nil type:

apiVersion: templating.flanksource.com/v1
kind: Template
metadata:
  name: dependencymatrix-template
spec:
  source:
    apiVersion: services.domain.io/v1
    kind: DependencyMatrix
  resources:
    - forEach: "{{.spec.services}}"
      apiVersion: services.domain.io/v1
      kind: Service
      metadata:
        name: "{{.each.key}}"
        namespace: "{{.metadata.namespace}}"
      spec:
        image: "{{.each.key}}:{{.each.value.version}}"
        replicas: "{{.each.value.replicas }}"
        resources:
          heap: 
            minMB: "{{.each.value.resources.heap.minMB}}"
            maxMB: "{{.each.value.resources.heap.maxMB}}"

The following error is issued in the template-operator logs:

ts=2021-06-01T13:59:48.154713698Z level=error logger=controllers.Template msg="failed to ducktype object" template=/dependencymatrix-template error="failed to duck type object: failed to transform string to type &{[integer] }: strconv.Atoi: parsing \"<no value>\": invalid syntax" errorVerbose="strconv.Atoi: parsing \"<no value>\": invalid syntax\nfailed to transform string to type &{[integer] }

The Resources type is shared by both the DependencyMatrix and Service CRDs:

type Heap struct {
	MinMB int `json:"minMB,omitempty"`
	MaxMB int `json:"maxMB,omitempty"`
}

type Resources struct {
	Heap Heap                        `json:"heap,omitempty"`
	CPU  corev1.ResourceRequirements `json:"cpu,omitempty"`
}

I'm expecting to be able to pass the following Service custom resource manifest – the cpu fields have similar issues:

apiVersion: services.domain.io/v1
kind: Service
metadata:
  name: quote
  namespace: <somenamespace>
spec:
  image: <someimagename>
  replicas: 10
  resources: 
    heap:
        minMB: 100
        maxMB: 200
    cpu: 
        limits: 1000m
        requests: 100m
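
A possible template-side workaround – a sketch, not the operator's prescribed fix – is to guard optional fields with a plain Go-template conditional, so a concrete value is always emitted (here assuming a fallback of 1 replica is acceptable):

spec:
  image: "{{.each.key}}:{{.each.value.version}}"
  # Emit an explicit fallback when the optional field is absent, so the
  # duck-typer never sees the literal string "<no value>".
  replicas: "{{ if .each.value.replicas }}{{ .each.value.replicas }}{{ else }}1{{ end }}"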

Using ConfigMaps or Secrets as templating inputs?

Hi there,

Love this project, it is something that is very much needed in the Kubernetes space.

I am looking for an in-cluster templating operator that can also leverage values from existing ConfigMaps and Secrets. Based on my reading of the docs I am not sure this is possible here, but I am wondering if it is. If so, how would I do it?

Cheers!

Feature request: adding an object as a value

This is really a great project!
However, I'm missing a feature: it would be really nice if I could embed whole objects as values.

I created a generic CRD that uses a json.RawMessage in its Spec (see the spec below) to take all kinds of values, no matter what the struct is. This way I can template whatever I want.

/*
Copyright 2022.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.

// MyCRDOverriderSpec defines the desired state of MyCRDOverrider
type MyCRDOverriderSpec struct {
	// Foo is an example field of MyCRDOverrider. Edit MyCRDoverrider_types.go to remove/update
	// +kubebuilder:validation:Schemaless
	// +kubebuilder:pruning:PreserveUnknownFields
	// +kubebuilder:validation:Type=object
	Values json.RawMessage `json:"values"`
}

// MyCRDOverriderStatus defines the observed state of MyCRDOverrider
type MyCRDOverriderStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// MyCRDOverrider is the Schema for the MyCRDoverriders API
type MyCRDOverrider struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyCRDOverriderSpec   `json:"spec,omitempty"`
	Status MyCRDOverriderStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MyCRDOverriderList contains a list of MyCRDOverrider
type MyCRDOverriderList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []MyCRDOverrider `json:"items"`
}

func init() {
	SchemeBuilder.Register(&MyCRDOverrider{}, &MyCRDOverriderList{})
}

This works great for simple values, e.g.

kind: MyCRDOverrider
apiVersion: mygroup.mydomain/v1alpha1
metadata:
  name: unicorn
  namespace: template
spec:
  values:
    name: unicorn

With this template

apiVersion: templating.flanksource.com/v1
kind: Template
metadata:
  name: bash
  namespace: template
spec:
  source:
    apiVersion: mygroup.mydomain/v1alpha1
    kind: MyCRDOverrider
  resources:
    - kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: bash
        namespace: template
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: bash
        template:
          metadata:
            labels:
              app: bash
          spec:
            containers:
              - image: bash
                imagePullPolicy: IfNotPresent
                command: ["bash", "-c", "sleep 99999999"]
                name: "{{.spec.values.name}}"

However, what doesn't work is putting a whole object into values, like adding the resources object.

Values:

kind: EdgeApplicationOverrider
apiVersion: edgeapplication.kubeedge/v1alpha1
metadata:
  name: unicorn
  namespace: template
spec:
  values:
    name: unicorn
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

and the template as above, extended with a resources field:

...
          spec:
            containers:
              - image: bash
                imagePullPolicy: IfNotPresent
                command: ["bash", "-c", "sleep 99999999"]
                name: "{{.spec.values.name}}"
                resources: "{{.spec.values.resources}}"

The error log from template-operator is

ts=2022-11-16T10:58:58.880200737Z level=info logger=controllers.Template msg=Reconciling template=/bash template=bash
ts=2022-11-16T10:58:58.883745874Z level=info logger=controllers.Template msg="Found resources for template" template=/bash template=bash count=1
ts=2022-11-16T10:58:58.884844986Z level=error logger=controllers.Template msg="failed to ducktype object" template=/bash error="failed to duck type object: field: spec.template.spec.containers.0.resources failed to transform string to type &{[object] }: failed to transform string value to object: invalid character 'm' looking for beginning of value: invalid character 'm' looking for beginning of value" errorVerbose="invalid character 'm' looking for beginning of value\nfailed to transform string value to object: invalid character 'm' looking for beginning of value\ngithub.com/flanksource/template-operator/k8s.transformStringToType\n\t/workspace/k8s/schema_manager.go:466\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:170\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:141\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).DuckType\n\t/workspace/k8s/schema_manager.go:111\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResultObject\n\t/workspace/k8s/template_manager.go:396\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResult\n\t/workspace/k8s/template_manager.go:380\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Template\n\t/workspace/k8s/template_manager.go:372\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjects\n\t/workspace/k8s/template_manager.go:440\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjectsFromResources\n\t/workspace/k8s/template_manager.go:813\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).HandleSource\n\t/workspace/k8s/template_manager.go:249\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Run\n\t/workspace/k8s/template_manager.go:209\ngithub.com/flanksource/template-operator/controllers.(*TemplateReconciler).Reconcile\n\t/workspace/controllers/template_controller.go:92\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email 
protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371\nfield: spec.template.spec.containers.0.resources failed to transform string to type &{[object] }\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:172\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:141\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).duckType\n\t/workspace/k8s/schema_manager.go:156\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).DuckType\n\t/workspace/k8s/schema_manager.go:111\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResultObject\n\t/workspace/k8s/template_manager.go:396\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResult\n\t/workspace/k8s/template_manager.go:380\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Template\n\t/workspace/k8s/template_manager.go:372\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjects\n\t/workspace/k8s/template_manager.go:440\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjectsFromResources\n\t/workspace/k8s/template_manager.go:813\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).HandleSource\n\t/workspace/k8s/template_manager.go:249\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Run\n\t/workspace/k8s/template_manager.go:209\ngithub.com/flanksource/template-operator/controllers.(*TemplateReconciler).Reconcile\n\t/workspace/controllers/template_controller.go:92\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email 
protected]/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371\nfailed to duck type object\ngithub.com/flanksource/template-operator/k8s.(*SchemaManager).DuckType\n\t/workspace/k8s/schema_manager.go:113\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResultObject\n\t/workspace/k8s/template_manager.go:396\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResult\n\t/workspace/k8s/template_manager.go:380\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Template\n\t/workspace/k8s/template_manager.go:372\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjects\n\t/workspace/k8s/template_manager.go:440\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjectsFromResources\n\t/workspace/k8s/template_manager.go:813\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).HandleSource\n\t/workspace/k8s/template_manager.go:249\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Run\n\t/workspace/k8s/template_manager.go:209\ngithub.com/flanksource/template-operator/controllers.(*TemplateReconciler).Reconcile\n\t/workspace/controllers/template_controller.go:92\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371" stacktrace="github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email 
protected]/zapr.go:132\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResultObject\n\t/workspace/k8s/template_manager.go:397\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).duckTypeTemplateResult\n\t/workspace/k8s/template_manager.go:380\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Template\n\t/workspace/k8s/template_manager.go:372\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjects\n\t/workspace/k8s/template_manager.go:440\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).getObjectsFromResources\n\t/workspace/k8s/template_manager.go:813\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).HandleSource\n\t/workspace/k8s/template_manager.go:249\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Run\n\t/workspace/k8s/template_manager.go:209\ngithub.com/flanksource/template-operator/controllers.(*TemplateReconciler).Reconcile\n\t/workspace/controllers/template_controller.go:92\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"
ts=2022-11-16T10:58:58.886327813Z level=info logger=controllers.Template msg=Applying template=/bash kind=Deployment namespace=template name=bash
ts=2022-11-16T10:58:58.886450736Z level=error logger=controller-runtime.manager.controller.template msg="Reconciler error" reconcilergroup=templating.flanksource.com reconcilerkind=Template name=bash namespace= error="cannot restore struct from: string" stacktrace="github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:302\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"
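
Until whole-object substitution is supported, a workaround (a sketch against the manifest above) is to reference each scalar leaf individually, so every templated value is a string the duck-typer can convert back to its schema type:

...
          spec:
            containers:
              - image: bash
                imagePullPolicy: IfNotPresent
                command: ["bash", "-c", "sleep 99999999"]
                name: "{{.spec.values.name}}"
                resources:
                  requests:
                    memory: "{{.spec.values.resources.requests.memory}}"
                    cpu: "{{.spec.values.resources.requests.cpu}}"
                  limits:
                    memory: "{{.spec.values.resources.limits.memory}}"
                    cpu: "{{.spec.values.resources.limits.cpu}}"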

Template events should include full error

e.g.
instead of: Warning Failed 22s (x16 over 5m6s) template-operator Failed to apply new resource kind=Deployment name=sandbox-awx

it should be Warning Failed 22s (x16 over 5m6s) template-operator Failed to apply new resource kind=Deployment name=sandbox-awx: spec.template.spec.containers[2].resources.requests: Invalid value: \"128Mi\": must be less than or equal to memory limit

kget should support "current" namespace

instead of

kget (print "secret/postgres-operator/postgres.postgres-" .metadata.name  ".credentials")

Allow

kget (print "secret/postgres.postgres-" .metadata.name  ".credentials")

where the namespace comes from the source object

Source: GitRepository

source:
  gitRepository:
    namespace: flux
    name: ref-to-git-repo
    glob: "/grafana/dashboards/*.json"
resources:
  - apiVersion: integreatly.org/v1alpha1
    kind: GrafanaDashboard
    metadata:
      name: "{{.filename}}"
      labels:
        app: grafana
    spec:
      json: "{{ .content }}"

Failed to parse new ingress object: error finding kind networking.k8s.io/v1

ts=2021-06-01T10:00:06.452386435Z level=error logger=controllers.Template msg="failed to ducktype object" template=/omservice error="error finding kind networking.k8s.io/v1, Kind=Ingress: schema for group=networking.k8s.io version=v1 kind=Ingress not found" errorVerbose="schema for group=networking.k8s.io version=v1 kind=Ingress not found\ngithub.com/flanksource/template-operator/k8s.

Expand namespace allow filebeat specification

To allow forwarding of logs to Elasticsearch instances outside of the cluster, namespaces will need to be annotated so that they are correctly picked up by a filebeat instance linked to the Elasticsearch server.

kget gomplate function

i.e. {{ kget "cm/quack/quack-config" "data.domain" }} should be the equivalent of kubectl get cm -n quack quack-config -o jsonpath=data.domain
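
For reference, the fully quoted kubectl form of that lookup (jsonpath expressions take braces and a leading dot):

kubectl get cm quack-config -n quack -o jsonpath='{.data.domain}'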

dependson attribute

e.g.

# Template provides the backing implementation for the CRD
kind: Template
metadata:
  name: CloudFront
spec:
  source:
    # this template works on CloudFront kinds
    kind: CloudFront
  # resources is a list of 1 or more resources to create for each CloudFront CRD created
  resources:
    # the RFC kind submits an AMS RFC and then waits for the results
    - id: s3
      apiVersion: ams/v1
      kind: RFC
      metadata:
        # construct an rfc title from the name passed through in the CloudFront CRD
        name: cloudfront-s3-{{.metadata.name}}
      spec:
        changeType: ct-abc
        params:
    # depends is a directive that will wait for the specified resource (RFC) to be created and ready
    - depends: [s3]
      kind: RFC
      metadata:
        name: IngestCloudFormation

Ignore knative annotations

validation.webhook.serving.knative.dev" denied the request: validation failed: annotation value is immutable: metadata.annotations.serving.knative.dev/creator\ninvalid value: : metadata.annotations.serving.knative.dev/lastModifier"

Duck Typing for CRDs

When creating new resources, look up the OpenAPI spec and convert any fields that need to be numbers/bools into the correct types.

e.g. the PostgresqlDB type defines replicas, which needs to be set as a number when creating the underlying Zalando object.
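
To illustrate: every substituted value in a template starts life as a string, so a field like the one below is only valid once the operator consults the target's OpenAPI schema and coerces it (a sketch against a hypothetical PostgresqlDB spec):

spec:
  # Renders as the string "3"; duck-typing must coerce it to the integer
  # declared in the target schema.
  replicas: "{{.spec.replicas}}"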

Template Operator - Failing to parse matchExpressions selector

The operator fails to apply matchExpressions with the following syntax:

spec:
  source:
    apiVersion: v1
    kind: ConfigMap
    labelSelector:
      matchExpressions:
        - key: environment-class
          operator: In
          values: 
            - global
            - sat
            - sit

Error log:

ts=2021-05-25T07:17:25.735807231Z level=error logger=controller msg="Reconciler error" reconcilerGroup=templating.flanksource.com reconcilerKind=Template controller=template name=alertmanager-secrets namespace= error="failed to transform LabelSelector to map: operator \"In\" without a single value cannot be converted into the old label selector format" errorVerbose="operator \"In\" without a single value cannot be converted into the old label selector format\nfailed to transform LabelSelector to map\ngithub.com/flanksource/template-operator/k8s.labelSelectorToString\n\t/workspace/k8s/template_manager.go:397\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).selectResources\n\t/workspace/k8s/template_manager.go:129\ngithub.com/flanksource/template-operator/k8s.(*TemplateManager).Run\n\t/workspace/k8s/template_manager.go:150\ngithub.com/flanksource/template-operator/controllers.(*TemplateReconciler).Reconcile\n\t/workspace/controllers/template_controller.go:62\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357" stacktrace="github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:246\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"
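
Until multi-value matchExpressions are supported, a workaround – assuming selecting one class per Template is acceptable – is to fall back to matchLabels, with one Template per value:

spec:
  source:
    apiVersion: v1
    kind: ConfigMap
    labelSelector:
      matchLabels:
        environment-class: global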

template-operator run

template-operator run template.yaml object.yaml

Render a template to stdout for local development and troubleshooting

for_each field

Allow creating multiple resources using a for_each field that points to an array or map.
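
The DependencyMatrix issue earlier on this page shows the shape this takes in practice: a forEach field whose entries are exposed as .each.key and .each.value. A minimal sketch:

resources:
  - forEach: "{{.spec.services}}"
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "{{.each.key}}"
      namespace: "{{.metadata.namespace}}"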

Garbage collection

  • Add a new field on Template to enable/disable garbage collection
  • When an object is removed from a template, delete existing instances using mark and sweep on the OwnerRef.

Secret generator

e.g. password: "{{ randomPassword 20 }}"

But don't overwrite existing values during subsequent reconciles

Advanced templating

In some cases I need to render complex templates with values. Any objections to adding a 2nd CRD that allows values to be defined and the resource(s) to be a Helm template? I'm happy to do the work of adding it, I just want to discuss the feature and get a 👍.

resource output lookup

e.g. {{ s3.status.outputs.stackId }} would look up the "s3" resource and retrieve the stackId from its outputs:

# Template provides the backing implementation for the CRD
kind: Template
metadata:
  name: CloudFront
spec:
  source:
    # this template works on CloudFront kinds
    kind: CloudFront
  # resources is a list of 1 or more resources to create for each CloudFront CRD created
  resources:
    # the RFC kind submits an AMS RFC and then waits for the results
    - id: s3
      apiVersion: ams/v1
      kind: RFC
      metadata:
        # construct an rfc title from the name passed through in the CloudFront CRD
        name: cloudfront-s3-{{.metadata.name}}
      spec:
        changeType: ct-abc
        params:
    # depends is a directive that will wait for the specified resource (RFC) to be created and ready
    - depends: [s3]
      kind: RFC
      metadata:
        name: IngestCloudFormation
      spec:
        changeType: ct-abc
        params:
          # read common variables for the namespace (account)
          account: '{{ kget "cm/account" "accountId" }}'
          vpc: '{{ kget "cm/account" "vpcId" }}'
          region: eu-west-1
          # retrieve the output of the s3 RFC to use in the followup
          s3Bucket: '{{ s3.status.outputs.stackId }}'
        document:
          valueFrom:
            configMapRef:
              name: cf-templates
              key: cloudfront.cf
