janus-idp / operator

Deprecated - Operator for Backstage, based on the Operator SDK framework - see https://github.com/redhat-developer/rhdh-operator

Home Page: https://github.com/redhat-developer/rhdh-operator

License: Apache License 2.0

Languages: Go 83.64%, Shell 7.05%, Makefile 6.94%, Dockerfile 2.37%
Topics: backstage, janus-idp, kubernetes, kubernetes-operator, kubernetes-operator-sdk, kubernetes-operators, openshift, operator, operator-sdk, backstage-operator

operator's Issues

Allow passing `own-runtime` flag to `make run`

#37 introduced a new own-runtime flag. But there is currently no way to pass it when running make run.


Per your instructions, I've tried to test this by running make run own-runtime=true, but it looks like this flag is always false. I double-checked by printing the value in L99 (after it is parsed and before it is used to initialize the reconciler). How can I test this? Do we need to update the run target in the Makefile as well?

For now, I am running go run ./main.go -own-runtime=true to test this new flag.

Originally posted by @rm3l in #37 (comment)

Make sure the Operator is working with external database

Currently, Backstage is deployed by default along with a PostgreSQL database (as another Pod), but this may not be the best option in some cases, for example if the user wants to use an external/cloud database.
There is a config option which disables installing this (local) database.

The goal of this issue is to test deployment with external database and document some recommendations for this configuration.

TODO

  • Deploy and test Backstage w/o the local DB
  • Document the process

Estimation: 3d

NOTE:

  • Do we need to create the DB schema for Backstage? Or is it done automatically by Backstage? (seems to be handled automatically by Backstage)
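
For illustration, a minimal sketch of what such a configuration could look like, assuming the enableLocalDb and extraEnvs fields from the CRD draft further down this page (the Secret name and keys are placeholders, not a final design):

apiVersion: backstage.io/v1alpha1
kind: Backstage
metadata:
  name: bs-external-db
spec:
  enableLocalDb: false          # do not deploy the local PostgreSQL StatefulSet
  application:
    extraEnvs:
      secrets:
        - name: external-postgres-secrets
---
apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-secrets
stringData:
  POSTGRES_HOST: my-external-db.example.com   # external/cloud database endpoint
  POSTGRES_PORT: "5432"
  POSTGRES_USER: backstage
  POSTGRES_PASSWORD: change-me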

Update `PROJECT` and generated Kustomize files

This is a follow-up issue to #22


Looking at the old backstage operator (https://github.com/janus-idp/backstage-operator/blob/main/PROJECT), I think we could use the following values:

  • domain could be: janus-idp.io (keeping backstage.io for now - can be changed later)
  • projectName could be: backstage-operator
  • repo could be: github.com/janus-idp/operator (not sure if this needs to be changed at this point - can be changed later if needed)

Originally posted by @rm3l in #22 (comment)


Can this be prefixed with backstage- as before? I noticed that the operator was installed into an operator-system namespace and the Deployment was named operator-controller-manager, which IMO might be confusing to people administering the cluster.

Originally posted by @rm3l in #22 (comment)

Consider predictable (generated) names of Backstage runtime objects

Currently, the names of the Backstage runtime objects (Deployment, Service, DB-related objects, ...) are configurable.
This:

  • is not necessary, because we mostly do not care about the names themselves; they just have to be unique and preferably associated with the root Backstage CR
  • may cause consistency problems if we update the configuration with a different name (which would probably create another object)
  • is not optimal for reconciliation if we do not want to fetch and check the default config for created objects as well (this can be made configurable if we want)

So the proposal is to make object names predictable and (optionally) save them in the status for better integrity.

NOTES:
- From @rm3l: as commented out in #51 (comment), the Backstage container expects the POSTGRES_HOST env var to be set. Currently, the DB Service has a hardcoded name, but once it has a unique and predictable name, we would need to inject that information into the Backstage container.
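
One possible predictable naming scheme, derived from the placeholders already used in the default config quoted further down this page (a sketch, not a decision):

# For a Backstage CR named <cr-name>:
deployment:            backstage-<cr-name>
db-statefulset:        backstage-psql-<cr-name>
db-service (headless): backstage-psql-<cr-name>-hl
db-secret:             backstage-psql-secrets-<cr-name>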

Operator (Cluster)-scope default configuration

Currently, the default configuration is hardcoded.
Make it possible to configure it at the Operator level.

The idea is to deploy ConfigMaps/[Secrets] with the same structure alongside the Operator (in the operator controller's namespace) and unmarshal them at start time, instead of using hardcoded strings.
This way, a Cluster Admin would be able to configure the cluster's defaults.
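
A rough sketch of what such an operator-level default configuration could look like (the ConfigMap name is an assumption; the data keys mirror the ones already used by the hardcoded default config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-default-config
  namespace: backstage-system        # the operator controller's namespace
data:
  deployment.yaml: |
    apiVersion: apps/v1
    kind: Deployment
    # ... default Backstage Deployment manifest ...
  db-statefulset.yaml: |
    apiVersion: apps/v1
    kind: StatefulSet
    # ... default local PostgreSQL StatefulSet manifest ...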

CI: End-to-end testing against real cluster

User Story

As a QE engineer, I want to leverage upstream E2E tests against downstream builds, So that I can test the Operator for RHDH.
Currently, we have unit + integration tests (with EnvTest), but the idea is to also have some end-to-end tests done against a real cluster.

Acceptance Criteria

  • It should add new test cases with E2E scenario in mind
  • Testing only on K8s cluster is fine, to begin with: GitHub Action can spin up a local Kind or Minikube cluster inside the runner
  • E2E tests should be run against any Kubernetes context in current Kubeconfig
  • E2E tests should be isolated in their own namespaces, which should be cleaned up after
  • It should be able to run tests in parallel against the same K8s cluster
  • ...

Links

  • Related to RHIDP-897
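
A rough sketch of such a GitHub Actions job (the action versions and the make target are assumptions):

name: e2e-tests
on:
  pull_request:
jobs:
  e2e-kind:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Spin up a local Kind cluster inside the runner
      - uses: helm/kind-action@v1
      # Run the E2E suite against the current kubeconfig context
      - name: Run E2E tests
        run: make test-e2e   # hypothetical target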

Enable Operator for restricted network (air-gapped or proxied) environments

As an administrator, I want to run the Operator in a restricted network, or disconnected, environment, So that users can have a fully functional Backstage instance running under such restricted environments.

See https://docs.openshift.com/container-platform/4.14/operators/operator_sdk/osdk-generating-csvs.html#olm-enabling-operator-for-restricted-network_osdk-generating-csvs

See also https://issues.redhat.com/browse/RHIDP-488
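
Following the OLM documentation linked above, this mostly means pinning images by digest and declaring them in the CSV so they can be mirrored, then exposing them to the manager via the RELATED_IMAGE_* environment variables that the default config placeholders already refer to. A sketch (image references and digests are placeholders):

# ClusterServiceVersion excerpt
spec:
  relatedImages:
    - name: backstage
      image: quay.io/janus-idp/backstage-showcase@sha256:<digest>
    - name: postgresql
      image: <postgresql-image>@sha256:<digest>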

Support multiple Configuration Profiles

Problem to solve: in order to offer the Operator as an installation option for different Backstage containers/configurations, we need to support multiple Default Configuration Profiles.
A Default Configuration Profile is a directory with manifests for the K8s objects created/updated by the Operator, including:

  • Deployment and Service for Backstage application
  • StatefulSet, Services and Secret for local Database (if configured)
  • Route (if configured)
  • Backstage Configuration objects (ConfigMaps and Secrets)

Initially, we have a single config/manager/default-config directory which contains the basic Janus-IDP configuration.
At deployment time this directory is transformed into a ConfigMap, placed in the same namespace as the Operator Manager, and used as the default configuration for each Backstage instance created by the Operator.

The goal is to be able to create and deploy other configurations (the first candidate is a bare Backstage container with an empty config) by passing a dedicated variable to the make deploy/make run/make test commands.
E.g.:
PROFILE=my-config make deploy

Add labels and annotations recommended for OpenShift/Kubernetes applications to all resources created by the Operator

This is a recommendation to increase interoperability with other tools working with K8s objects.
For example, it can help group a Backstage-managed instance in the OpenShift Topology Viewer, and make it easier to visualize a Backstage-managed application.
See https://github.com/redhat-developer/app-labels/blob/master/labels-annotation-for-openshift.adoc and https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels for more details.


This is a follow-up to a review comment on #29, which added some of those labels.

Would it make sense to have all these common labels applied on every sub-resource, like Pods as well? Not only on the parent Deployment?
This seems to be recommended in https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels.
I can only see the backstage.io/app label on the Backstage and DB Pods.

[...]
spec:
  [...]
  selector:
    matchLabels:
      backstage.io/app: backstage-bs1
  template:
    metadata:
      creationTimestamp: null
      labels:
        backstage.io/app: backstage-bs1

Originally posted by @rm3l in #29 (comment)
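
For illustration, a sketch of the recommended common labels applied to both the Deployment metadata and its Pod template (the values are examples only):

metadata:
  labels:
    app.kubernetes.io/name: backstage
    app.kubernetes.io/instance: bs1
    app.kubernetes.io/component: backend
    app.kubernetes.io/part-of: backstage
    app.kubernetes.io/managed-by: backstage-operator
spec:
  template:
    metadata:
      labels:
        # repeated on the Pod template, as recommended by the common-labels doc
        app.kubernetes.io/name: backstage
        app.kubernetes.io/instance: bs1
        app.kubernetes.io/part-of: backstage
        app.kubernetes.io/managed-by: backstage-operator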

Use `janus-idp.io` instead of `backstage.io`

As discussed in the previous weekly call, we should use janus-idp.io instead of backstage.io everywhere.

NOTES:

  • CR:
    • apiVersion should be janus-idp.io/v1alpha1
    • kind can remain Backstage
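
A CR would then look like this (a sketch):

apiVersion: janus-idp.io/v1alpha1
kind: Backstage
metadata:
  name: bs1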

Investigate the reason why PostgreSQL PV is not deleted with Backstage CR

Steps to reproduce:

  • Run the operator (make run) with the local PostgreSQL and the "own-runtime=true" parameter
  • Create a (default) Backstage CR (bs1, for example)
  • Make sure everything is deployed correctly and check the PV object; it must contain something like:
ownerReferences:
  - apiVersion: backstage.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Backstage
    name: bs1
  • Delete the CR
    Expected result: all associated runtime objects are deleted
    Observed: the PV is not deleted, even though it contains the correct ownerReferences.

Add enough useful information in CR status

User Story

As a user submitting a Custom Resource (CR), I want the operator to report back any relevant information to the CR status, So that I can have a better understanding of the status of my application by looking at the CR.
This can be useful in inspecting potential reconciliation issues or knowing if everything is fine from the operator's perspective.

Acceptance Criteria

  • When there are issues, it should return meaningful messages about the reason (like the status of the Backstage/PostgreSQL pods, e.g., ImagePullBackOff, CrashLoopBackOff, ...)
  • It should report whether the application is Ready or not
  • If Route is enabled in the CR, it should report the application route /URL once available
  • ...
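
Purely as an illustration, the reported status could look something like this (none of these field names are defined yet; they are assumptions):

status:
  conditions:
    - type: Deployed
      status: "False"
      reason: DeployFailed
      message: "Backstage Pod is in CrashLoopBackOff"
  # reported only when a Route is enabled in the CR and admitted by the router
  routeURL: https://backstage-bs1.apps.example.com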

Notes:

Layered operator design

The goal of this issue is to redesign the Operator in a layered fashion, in order to:

  • separate the internal configuration logic (the biggest part) from the K8s API calls
  • improve testability, implementing as much of the testing logic as possible as plain unit tests and decreasing the number of big and slow K8s envtests
  • improve documentation and, therefore, supportability, by making it possible to express how the operator is designed.

[Epic] Operator based on the Operator SDK

Goal

This epic aims to provide an operator, based on the Operator SDK, in a releasable and usable state, offering a stable experience comparable to installing via the Helm chart.

Acceptance criteria

  • Operator is able to deploy Backstage
  • Operator can reconcile changes
  • Operator can enforce Janus IDP image
  • Operator can enforce proper routing

Requirements

  • Test plan
  • Documentation

Roadmap

https://github.com/orgs/janus-idp/projects/5/views/1

Notes

Additional context
Add any other context or screenshots about the epic here.

Controller failed to apply StatefulSet, already exist

When creating a Backstage CR in an EMPTY namespace (i.e., with no StatefulSet), the Backstage instance is created, but the following entry appears in the Controller's log:

ERROR Reconciler error {"controller": "backstage", "controllerGroup": "backstage.io", "controllerKind": "Backstage", "Backstage": {"name":"my-backstage-app-with-app-config","namespace":"backstage"}, "namespace": "backstage", "name": "my-backstage-app-with-app-config", "reconcileID": "5ab3e72c-55b9-4539-ba75-78486622473f", "error": "failed to apply Database Deployment: failed to create deplyment, reason: statefulsets.apps "backstage-psql-my-backstage-app-with-app-config" already exists"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler

Add labels for better application grouping and icons in OpenShift Topology View

User Story

As a user of the Operator on OpenShift, I want, in the Topology View, to be able to distinguish the resources created and managed by the Operator from the rest of my other resources, So that I can have a clear separation between different apps in the same namespace.

While doing a quick test of the RHDH image on OpenShift, I noticed that resources from the Helm Chart are currently grouped and have proper icons displayed for PostgreSQL and Backstage. But resources managed by the Operator are using a generic icon and are floating, making it look like they are completely unrelated.
See the screenshot below - Helm resources on the left, and resources created by the operator on the right:

[Screenshot: Helm-managed resources grouped with proper icons on the left; operator-managed resources floating with generic icons on the right]
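
A sketch of the labels that drive grouping and icons in the Topology View, following the app-labels document referenced in the labels/annotations issue above (the values are examples):

metadata:
  labels:
    app.kubernetes.io/part-of: backstage-bs1   # groups the resources into one application node
    app.openshift.io/runtime: postgresql       # selects the icon shown for the workload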

Acceptance Criteria

Notes

Update README and documentation

/kind task

Acceptance Criteria

  • How to develop on the Operator
  • How to deploy the Operator on both Kubernetes and OpenShift
  • Mention that the backend auth secret (introduced in #27) takes precedence over any backend.auth.keys value from config files: #27 (comment) (no longer true)
  • Link to Custom Resource Definition fields and their descriptions
  • How to create Custom Resource with simple examples
  • How to use with external Database (#8)
  • How to expose the Backstage Service on non-OpenShift clusters (#13 (comment))
  • Working under air-gapped/proxied environments (part of Admin doc) (#80 (comment))
  • ...

[Epic] M1 functional scope

The context: Backstage Operator notes

Functional requirements of M1:

  • Use operator-sdk to generate the CRD and scaffold the code.
  • Implement the reconciler logic
  • Operator performs initial installation/configuration of Backstage instance (update is out of scope).
  • Default configuration: k8s Runtime Objects (Deployment, Service...) and AppConfigs
  • Custom configuration: Runtime Objects (as k8s resources) and AppConfigs
  • Tested with Janus-IDP showcase image
  • Tested on OpenShift with rhdh image
  • Tested with external database (requires #8)

Formatting issues with default config ConfigMap, making it hard to read and edit

The current ConfigMap storing the default config of the operator contains raw strings that are not easy to read and edit.
I think this is due to the placeholders introduced recently (first with {POSTGRESQL_SECRET}, and now with {RELATED_IMAGE_*}).

If we want administrators to be able to edit this ConfigMap, I think it would make sense to make it more readable.

db-statefulset.yaml: "apiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name:
backstage-psql-cr1 # placeholder for 'backstage-psql-<cr-name>'\nspec:\n podManagementPolicy:
OrderedReady\n replicas: 1\n selector:\n matchLabels:\n janus-idp.io/app:
backstage-psql-cr1 # placeholder for 'backstage-psql-<cr-name>'\n serviceName:
backstage-psql-cr1-hl # placeholder for 'backstage-psql-<cr-name>-hl'\n template:\n
\ metadata:\n labels:\n janus-idp.io/app: backstage-psql-cr1 # placeholder
for 'backstage-psql-<cr-name>'\n name: backstage-db-cr1 # placeholder for
'backstage-psql-<cr-name>'\n spec:\n persistentVolumeClaimRetentionPolicy:\n
\ whenDeleted: Retain\n whenScaled: Retain\n containers:\n -
env:\n - name: POSTGRESQL_PORT_NUMBER\n value: \"5432\"\n
\ - name: POSTGRESQL_VOLUME_DIR\n value: /var/lib/pgsql/data\n
\ - name: PGDATA\n value: /var/lib/pgsql/data/userdata\n
\ envFrom:\n - secretRef:\n name: \"{POSTGRESQL_SECRET}\"
\ # will be replaced with 'backstage-psql-secrets-<cr-name>' \n image:
\"{RELATED_IMAGE_postgresql}\" # will be replaced with the actual image\n imagePullPolicy:
IfNotPresent\n securityContext:\n runAsNonRoot: true\n allowPrivilegeEscalation:
false\n seccompProfile:\n type: RuntimeDefault\n capabilities:\n
\ drop:\n - ALL\n livenessProbe:\n exec:\n
\ command:\n - /bin/sh\n - -c\n -
exec pg_isready -U ${POSTGRES_USER} -h 127.0.0.1 -p 5432\n failureThreshold:
6\n initialDelaySeconds: 30\n periodSeconds: 10\n successThreshold:
1\n timeoutSeconds: 5\n name: postgresql\n ports:\n
\ - containerPort: 5432\n name: tcp-postgresql\n protocol:
TCP\n readinessProbe:\n exec:\n command:\n -
/bin/sh\n - -c\n - -e\n - |\n exec
pg_isready -U ${POSTGRES_USER} -h 127.0.0.1 -p 5432\n failureThreshold:
6\n initialDelaySeconds: 5\n periodSeconds: 10\n successThreshold:
1\n timeoutSeconds: 5\n resources:\n requests:\n
\ cpu: 250m\n memory: 256Mi\n limits:\n memory:
1024Mi\n volumeMounts:\n - mountPath: /dev/shm\n name:
dshm\n - mountPath: /var/lib/pgsql/data\n name: data\n
\ restartPolicy: Always\n securityContext: {}\n serviceAccount:
default\n serviceAccountName: default\n volumes:\n - emptyDir:\n
\ medium: Memory\n name: dshm\n updateStrategy:\n rollingUpdate:\n
\ partition: 0\n type: RollingUpdate\n volumeClaimTemplates:\n - apiVersion:
v1\n kind: PersistentVolumeClaim\n metadata:\n name: data\n spec:\n
\ accessModes:\n - ReadWriteOnce\n resources:\n requests:\n
\ storage: 1Gi\n"
deployment.yaml: "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: # placeholder
for 'backstage-<cr-name>'\nspec:\n replicas: 1\n selector:\n matchLabels:\n
\ janus-idp.io/app: # placeholder for 'backstage-<cr-name>'\n template:\n
\ metadata:\n labels:\n janus-idp.io/app: # placeholder for 'backstage-<cr-name>'\n
\ spec:\n # serviceAccountName: default\n volumes:\n -
ephemeral:\n volumeClaimTemplate:\n spec:\n accessModes:\n
\ - ReadWriteOnce\n resources:\n requests:\n
\ storage: 1Gi\n name: dynamic-plugins-root\n -
name: dynamic-plugins-npmrc\n secret:\n defaultMode: 420\n
\ optional: true\n secretName: dynamic-plugins-npmrc\n\n
\ initContainers:\n - command:\n - ./install-dynamic-plugins.sh\n
\ - /dynamic-plugins-root\n env:\n - name: NPM_CONFIG_USERCONFIG\n
\ value: /opt/app-root/src/.npmrc.dynamic-plugins\n image:
\"{RELATED_IMAGE_backstage}\" # will be replaced with the actual image quay.io/janus-idp/backstage-showcase:next\n
\ imagePullPolicy: IfNotPresent\n name: install-dynamic-plugins\n
\ volumeMounts:\n - mountPath: /dynamic-plugins-root\n name:
dynamic-plugins-root\n - mountPath: /opt/app-root/src/.npmrc.dynamic-plugins\n
\ name: dynamic-plugins-npmrc\n readOnly: true\n subPath:
.npmrc\n workingDir: /opt/app-root/src\n\n containers:\n -
name: backstage-backend\n image: \"{RELATED_IMAGE_backstage}\" # will
be replaced with the actual image quay.io/janus-idp/backstage-showcase:next\n
\ imagePullPolicy: IfNotPresent\n args:\n - \"--config\"\n
\ - \"dynamic-plugins-root/app-config.dynamic-plugins.yaml\"\n readinessProbe:\n
\ failureThreshold: 3\n httpGet:\n path: /healthcheck\n
\ port: 7007\n scheme: HTTP\n initialDelaySeconds:
30\n periodSeconds: 10\n successThreshold: 2\n timeoutSeconds:
2\n livenessProbe:\n failureThreshold: 3\n httpGet:\n
\ path: /healthcheck\n port: 7007\n scheme:
HTTP\n initialDelaySeconds: 60\n periodSeconds: 10\n successThreshold:
1\n timeoutSeconds: 2\n ports:\n - name: backend\n
\ containerPort: 7007\n env:\n - name: APP_CONFIG_backend_listen_port\n
\ value: \"7007\"\n envFrom:\n - secretRef:\n
\ name: \"{POSTGRESQL_SECRET}\" # will be replaced with 'backstage-psql-secrets-<cr-name>'
\ \n # - secretRef:\n # name: backstage-secrets\n
\ volumeMounts:\n - mountPath: /opt/app-root/src/dynamic-plugins-root\n
\ name: dynamic-plugins-root"
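
For comparison, a sketch of how the same data could be stored using YAML block scalars, which would keep it readable and editable:

data:
  db-statefulset.yaml: |
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: backstage-psql-cr1  # placeholder for 'backstage-psql-<cr-name>'
    spec:
      replicas: 1
      # ...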

Related to:

Make it possible to inject user-provided environment variables into the Backstage Deployment

In #27, we added support for user-defined app-configs and dynamic plugins config, backed by either ConfigMaps or Secrets.

As discussed in #27 (comment), the common and recommended approach for most app-config files would be files referencing environment variables backed by Secrets, like so:

auth:
  environment: prod
  providers:
    github:
      prod:
        clientId: ${AUTH_GITHUB_CLIENT_ID}
        clientSecret: ${AUTH_GITHUB_CLIENT_SECRET}
        ## uncomment if using GitHub Enterprise
        # enterpriseInstanceUrl: ${AUTH_GITHUB_ENTERPRISE_INSTANCE_URL}

One possible approach, depicted in the upstream Backstage deployment guide on Kubernetes, could be to reference a backstage-secrets secret, just like what we are doing with the DB credentials. Here, this secret would need to be optional.

envFrom:
  - secretRef:
      name: postgres-secrets
  - secretRef:
      name: backstage-secrets
      optional: true

This would not require adding specific fields to the CRD.

Missing link between the Backstage instance and the DB Service

Reproduction Steps

Using b2b23ea:

$ make install run
$ kubectl apply -f examples/postgres-secret.yaml
$ kubectl apply -f examples/bs1.yaml

Actual behavior

The DB StatefulSet is properly started, but the Backstage instance ends up in CrashLoopBackOff status:

$ kubectl logs backstage-85f5d94969-p5mbn

Defaulted container "backstage-backend" out of: backstage-backend, install-dynamic-plugins (init)                                                                                              
Loaded config from app-config.yaml, app-config.example.yaml, app-config.example.production.yaml, app-config.dynamic-plugins.yaml, env
[...]
2023-11-29T20:01:56.575Z auth info Enabled Provider Factories : {} type=plugin
2023-11-29T20:01:56.575Z auth info Configuring "database" as KeyStore provider type=plugin
Backend failed to start up Error: Failed to connect to the database to make sure that 'backstage_plugin_auth' exists, Error: connect ECONNREFUSED ::1:5432
    at /opt/app-root/src/node_modules/@backstage/backend-common/dist/index.cjs.js:1047:17
    at async KeyStores.fromConfig (/opt/app-root/src/node_modules/@backstage/plugin-auth-backend/dist/index.cjs.js:2981:35)
    at async Object.createRouter (/opt/app-root/src/node_modules/@backstage/plugin-auth-backend/dist/index.cjs.js:3257:20)
    at async createPlugin$6 (/opt/app-root/src/packages/backend/dist/index.cjs.js:466:10)
    at async addPlugin (/opt/app-root/src/packages/backend/dist/index.cjs.js:692:31)
    at async main (/opt/app-root/src/packages/backend/dist/index.cjs.js:776:3)

Expected behavior

From the logs, it looks like the DB hostname is not known to Backstage, which is why it is trying to connect to ::1:5432. It looks like the POSTGRES_HOST environment variable is no longer available in the container.

The sample postgres-secret Secret used to have a hardcoded POSTGRES_HOST key corresponding to the (static) DB Service name, but this was changed in #46.
Now that the name of the DB Service is dynamic (based on the CR name), we need to inject the Database Service Name somehow into the Backstage Deployment.
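
One way to do this, as a sketch (the operator would fill in the value from the generated Service name at reconcile time; the name shown here follows the placeholder pattern used elsewhere on this page):

containers:
  - name: backstage-backend
    env:
      - name: POSTGRES_HOST
        value: backstage-psql-bs1   # the dynamically generated DB Service name
      - name: POSTGRES_PORT
        value: "5432"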

Do not make creating a secret for PostgreSQL a prerequisite

User Story

As an Operator user, I want to have a Backstage instance running without having to explicitly create a secret for PostgreSQL, so that I can get started as quickly as possible.
Currently, I need to create a secret named postgres-secrets first, which is then injected as environment variables into both the PostgreSQL StatefulSet and the Backstage Deployment. Otherwise, the pods won't be able to start.
The operator could easily generate such a secret for me.

Acceptance Criteria

  • It should not require a postgres-secrets Secret to be created as a prerequisite
  • If enableLocalDb is true or unset (i.e., a local Database should be created), the operator should dynamically create such secret (unless it exists already) with the necessary values to use in the DB and Backstage resources
  • It should create random values for password fields in the Secret
  • It should set the name of this secret into the CR status
  • It should inject the right information (env vars) into the DB and Backstage containers
  • It should not create a Postgres Secret if enableLocalDb is false (the user probably wants to use their own external DB)
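
A sketch of what an operator-generated Secret could contain (the name follows the 'backstage-psql-secrets-<cr-name>' placeholder used in the default config; the exact keys are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: backstage-psql-secrets-bs1
type: Opaque
stringData:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: <randomly-generated>
  POSTGRES_DB: backstage
  POSTGRES_HOST: backstage-psql-bs1
  POSTGRES_PORT: "5432"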

Define the data structure for the CRD

Goal

Define what the Custom Resource(s) should look like, from the perspective of the people consuming them to install a Backstage application.

In the current POC, we have some examples of what is possible (https://github.com/janus-idp/operator/tree/main/examples), but the goal of this issue is to clearly define which fields should be exposed and would make sense for a Backstage application.

Notes

  • appconfig
  • postgresql

Acceptance Criteria

  • It should be able to handle multiple app-config files, probably in the form of a ConfigMap mounted as files into the backstage container
  • It should be able to handle both internal and external database use cases
  • It should leave room for further customization, e.g., the ability to configure caching for Backstage
  • ...

Test and make sure the default CR configuration works properly on OpenShift

I just tried to use the operator locally against an OpenShift cluster (provided by ClusterBot), and noticed a few issues:

  • PostgreSQL database in CrashLoopBackOff state, with the following logs (maybe the image we are using can't be run with a non-root user; I guess we should probably use a different image):
chmod: /var/lib/postgresql/data: Permission denied
chmod: /var/run/postgresql: Operation not permitted
The files belonging to this database system will be owned by user "1000690000".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: error: could not access directory "/var/lib/postgresql/data": Permission denied
  • The Backstage Pod is failing as well. Not sure from the logs if this is due to the lack of connection to the DB:
node:internal/modules/cjs/loader:1031
throw err;
^
Error: Cannot find module '/app/packages/backend'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1028:15)
at Function.Module._load (node:internal/modules/cjs/loader:873:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:22:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}

Repro Steps

$ make install run
$ oc apply -f examples/postgres-secret.yaml
$ oc apply -f examples/bs1.yaml

[Screenshot: the Backstage and PostgreSQL Pods in error state on OpenShift]

TODO:

  • Perhaps use the same PostgreSQL image as the one used by the Helm Chart. Check if this image is certified or not.

Not possible to 'make deploy' operator due to RBAC

Steps to reproduce:

  • Perform 'make deploy'

$ kubectl logs -n backstage-system backstage-controller-manager-564fcf64f5-mdksp

2023-11-13T17:05:22Z INFO Starting Controller {"controller": "backstage", "controllerGroup": "backstage.io", "controllerKind": "Backstage"}
W1113 17:05:22.464312 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
E1113 17:05:22.464351 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.Backstage: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
W1113 17:05:23.753288 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
E1113 17:05:23.753378 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.Backstage: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
W1113 17:05:26.668711 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
E1113 17:05:26.668739 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.Backstage: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
W1113 17:05:32.684640 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
E1113 17:05:32.684704 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.Backstage: failed to list *v1alpha1.Backstage: backstages.backstage.io is forbidden: User "system:serviceaccount:backstage-system:backstage-controller-manager" cannot list resource "backstages" in API group "backstage.io" at the cluster scope
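
From the logs, the manager's ServiceAccount lacks list/watch permissions on the backstages resource: either the ClusterRole rule is missing or the ClusterRoleBinding is not applied by make deploy. A sketch of the rule that would be needed (normally generated from kubebuilder RBAC markers rather than written by hand):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-manager-role
rules:
  - apiGroups: ["backstage.io"]
    resources: ["backstages"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["backstage.io"]
    resources: ["backstages/status"]
    verbs: ["get", "update", "patch"]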


Allow to disable automatic creation of Route on OpenShift

User Story

As an OpenShift user, I want to disable the default behavior of the operator creating a Route by default for me, So that I can control how I want my Backstage instance to be exposed.
Since #67, the operator can detect whether the current cluster is OpenShift or not. On OpenShift, it will automatically create a Route when reconciling a given CR. However some users may not want that behavior by default. So we should provide a way for them to skip the creation of this Route.

Acceptance Criteria

  • It should not change the current default behavior, i.e., it should create a Route if spec.application.route is unset.
  • It should provide a way in the CR to completely disable the creation of a Route
    • For example, we can have a new spec.application.route.enabled boolean field (defaulting to true) in the CRD
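
With the field suggested above, disabling the Route would look like this (a sketch):

apiVersion: backstage.io/v1alpha1
kind: Backstage
metadata:
  name: bs1
spec:
  application:
    route:
      enabled: false   # skip creating the OpenShift Route for this instance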

Update CRD with initial set of meaningful parameters

/kind task

Follow-up to #21 (comment), where we reviewed and agreed upon a set of minimal viable parameters that could be included in the CRD.

This issue is about exposing and handling those parameters:

spec:
  enableLocalDb: false
  application:
    appConfig:
      # All ConfigMaps will be mounted in the same place
      mountPath: "/opt/app-root/src"
      configMaps:
          - name: "my-app-config-cm12"
            # key is optional. All files will be mounted if not set. Otherwise, only the specified key will be mounted.
            key: "my-file1"
    extraFiles:
      # All ConfigMaps and Secrets will be mounted in the same place
      mountPath: /opt/app-root/src/path/to/my-extra-files
      configMaps:
          - name: "my-extra-config-secret12"
            # key is optional. All files will be mounted if not set.  Otherwise, only the specified key will be mounted.
            key: "my-file1"
      secrets:
          - name: "my-extra-config-secret12"
            # key is optional. All files will be mounted if not set.  Otherwise, only the specified key will be mounted.
            key: "my-file1"    
    # CM must have a key named 'dynamic-plugins.yaml'
    dynamicPluginsConfigMapName: "my-dynamic-plugins-config-cm"

    backendAuthSecret:
        name: "my-backend-auth-secret"
        # optional key in secret. Default value is "backend-secret"
        key: "my-auth-key"

    extraEnvs:
      configMaps:
        - name: "cm-name"
          # key is optional. Env vars from all keys if key is not set
          key: "key"
      secrets:
        - name: "secret-name"
          # key is optional. Env vars from all keys if key is not set
          key: "key"
      envs:
        - name: MY_ENV_VAR_1
          value: my-value-1
    replicas: 2
    image: "quay.io/rhdh/rhdh-hub-rhel9:1.0-194"
    imagePullSecrets:
      - rhdh-pull-secret

Previous version
spec:
  backstage:

    appConfig:
      # optional 
      mountPath:  /opt/app-root/src
      - configMapRef:
          # all keys in the my-configmap configmap will be mounted to /opt/app-root/src/<key>
          name: my-appconfig
      - secretRef:
          name: my-secret-config

    extraConfig:
      # optional
      mountPath: /opt/app-root/src
      - configMapRef:
          name: my-extra-configmap
      - secretRef:
          name: my-extra-secret

    env:
      - name: MY_ENV_VAR
        value: my-value
    envFrom:
      - configMapRef:
          name: my-configmap
      - secretRef:
          name: my-secret

    replicas: 1

    # image and imagePullSecret should apply to all containers (main and init)
    image: example.com/my-image:latest
    imagePullSecret: my-image-pull-secret

# Postponed
#    annotations:
#      # do not override default annotations
#      my-annotation: my-value
#    labels:
#      # do not override default labels
#      my-label: my-value

# Postponed
#    command: ["node", "packages/backend"]
#    args: ["--config /config/config.yaml"]
 

# Ingress postponed for now
#    ingress:
#      enabled: true
#      className: nginx
#      annotations:
#        kubernetes.io/ingress.class: nginx
#      host: my-app.example.com
#      path: /
#      tls:
#        enabled: true
#        secretName: my-app-tls

# Priority - only allow modifying host and tls config
    route:
      enabled: true
      annotations:
        haproxy.router.openshift.io/timeout: 5m
      path: /
      tls:
        enabled: true
        caCertificate: 
        certificate:
        destinationCACertificate:
        key:
        insecureEdgeTerminationPolicy: redirect
        termination: edge

# Postponed
#    service:
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-type: external
#      loadBalancerIP: 192.0.2.127
#      clusterIP: 10.0.171.239

# Not needed, but we should be able to disable it
  postgresql:
    # if set to false will not create a postgresql database and user is expected to provide one
    # default is true
    enabled: true
    # TODO
