
common's Issues

preview-all is not rendering the clusterGroup chart

It is currently rendering only the listed applications. To get a quick validation of what is being worked on without deploying it, it would be useful to also render the clusterGroup chart, which is the entry point for a pattern.

We should also allow overriding some of the local lookups we do against the cluster here:

platform=$(oc get Infrastructure.config.openshift.io/cluster  -o jsonpath='{.spec.platformSpec.type}')
ocpversion=$(oc get clusterversion/version -o jsonpath='{.status.desired.version}' | awk -F. '{print $1"."$2}')
domain=$(oc get Ingress.config.openshift.io/cluster -o jsonpath='{.spec.domain}' | sed 's/^apps.//')

This would give us a simple way to see how things change when a user switches the platform, OCP version, or domain.
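
For example, the lookups could be exposed as overridable values. A minimal sketch, assuming the key names mirror the global.cluster* settings the framework already derives from these oc queries (treating them as user-supplied overrides is the new part):

global:
  clusterPlatform: AWS
  clusterVersion: "4.12"
  clusterDomain: example.com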

Extend edit access to ArgoCD instances to cluster-admin role

After deploying a validated pattern, e.g. Edge Anomaly Detection, I can view the ArgoCD Applications in the cluster and project ArgoCD instances, but I'm unable to manually trigger Sync. When attempting to manually trigger Sync, I receive the following error message:

Unable to deploy revision: permission denied: applications, sync, default/edge-anomaly-detection-hub, sub: CiRjMWFiNGZiNi1kMjkxLTQzNDgtODljNy1mYmI2Y2ViYjUxNWMSCW9wZW5zaGlmdA, iat: 2023-11-08T16:36:55Z

I'm logged in as a user with the cluster-admin role, but it seems the default RBAC configuration of ArgoCD grants full access only to kubeadmin. Deploying the pattern as kubeadmin is not always feasible for regular pattern users, so I propose extending the ArgoCD RBAC rules to grant edit permissions to any user with the cluster-admin role.
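
A minimal sketch of the proposed change on the ArgoCD custom resource, assuming the OpenShift GitOps rbac stanza (the second group binding is the addition; the exact group name to match is an assumption):

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-gitops
spec:
  rbac:
    # default binding, plus a proposed binding for cluster-admin users
    policy: |
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin
    scopes: "[groups]"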

multisource and spoke clusters

We should make sure that when multisource is enabled, the very same manifest with a sources: stanza that is deployed on the hub is also deployed on the spokes. Currently this is not the case.
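
For illustration, a multi-source Application uses a sources: list instead of a single source:; the sketch below (repo URLs and names are placeholders) shows the shape that should be identical on hub and spokes:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  sources:                      # multi-source form; single-source uses 'source:'
  - repoURL: https://charts.example.com
    chart: example-chart
    targetRevision: "1.0.*"
  - repoURL: https://github.com/example/pattern.git
    targetRevision: main
    ref: patternref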

RFE: Add labels and annotation support to kind: namespace

This is coming from the TelCo team I am working with on creating the npss-tnc community pattern.

We currently support the creation of namespaces. There are instances in which users need to add additional annotations and labels to a namespace manifest. For example:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"

The implementation would require us to update the way we define namespaces in our values file. The proposal would be something like this:

namespaces:
- name: namespaceName
  labels:
  - name: labelName
    value: labelValue
  annotations:
  - name: "annotationName"
    value: "annotationValue"

Questions for the team:

  • Do we want to support this in future versions of common?
  • Or do we just have users create a Namespace manifest with the appropriate annotations and labels, to be applied by openshift-gitops?

New secrets loading module: Empty `files` key causes surprising error

Under the current module, when the files: key is defined in values_secret.yaml but has no entries, it throws a "NoneType is not iterable" error when validating the file paths.

This is potentially surprising to the user - maybe there should be a validation that files: points to a dict type?
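
A minimal reproduction, assuming the v1-style values-secret layout: files: is present but empty, so YAML parses it as None and iterating over the file paths fails.

secrets:
  example:
    secret1: value123
# 'files:' with no entries parses as None rather than a dict,
# which triggers the "NoneType is not iterable" error
files: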

OperatorGroup without targetNamespaces

Some operators require creating an OperatorGroup without targetNamespaces, for example the MetalLB Operator, as described in the installation guide (https://docs.openshift.com/container-platform/4.12/networking/metallb/metallb-operator-install.html).

This is not possible with the current operatorgroup.yaml template.

I'm not sure about the implications, but this patch fixes it by creating an OperatorGroup without targetNamespaces by default. Not a big deal, but definitely a change in behavior.
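
For reference, the OperatorGroup from the MetalLB installation guide linked above has no spec.targetNamespaces at all, which is the shape the template needs to be able to emit:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system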

With Gitops 1.5.0 the secondary argo web interface is erroring out

Since yesterday's release of GitOps 1.5.0, clicking on the secondary Argo instance errors out with the following:

Failed to query provider "https://hub-gitops-server-multicloud-gitops-hub.apps.bandini-dc.blueprints.rhecoeng.com/api/dex": Get "http://hub-gitops-dex-server.multicloud-gitops-hub.svc.cluster.local:5556/api/dex/.well-known/openid-configuration": dial tcp 172.30.229.125:5556: connect: connection refused

The logs of the dex container give us the following:

W0421 07:32:56.239809       1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "secrets" in API group "" in the namespace "multicloud-gitops-hub"
E0421 07:32:56.239838       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "secrets" in API group "" in the namespace "multicloud-gitops-hub"
W0421 08:14:28.521492       1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "configmaps" in API group "" in the namespace "multicloud-gitops-hub"
E0421 08:14:28.521517       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "configmaps" in API group "" in the namespace "multicloud-gitops-hub"

It seems we now need to add some permissions to the hub-gitops-argocd-dex-server service account.
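
A sketch of the missing permissions, derived directly from the log messages above (whether the fix belongs in common or in the GitOps operator is an open question):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hub-gitops-argocd-dex-server
subjects:
- kind: ServiceAccount
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub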

Additional namespace object definition for ACM Policy

Problem Statement

In some environments we have noticed that the imperative namespace sometimes does not get created on spoke clusters. The policy in common/acm/templates/policies/acm-hub-ca-policy creates a Secret in the imperative namespace; if the namespace does not exist, the ACM policy will fail.

Proposed Fix

Add the following objectDefinition to the ACM Policy:

...
            - complianceType: musthave
              objectDefinition:
                kind: Namespace # must have namespace 'imperative'
                apiVersion: v1
                metadata:
                  name: imperative
...

This will create the namespace imperative if it does not exist on the Spoke cluster.

Framework support for private repo additions to ArgoCD instances

In a disconnected environment, customers will have git repos that house their source code.
To deploy our validated patterns, the private repositories will have to be configured in ArgoCD with a customer-provided certificate for the git repository.

This will have to be configured by the patterns operator when it creates the ArgoCD instances for the pattern.

If application manifests are located in a private repository, then repository credentials have to be configured.

Proposal:

  • Extend the framework to accept private git repository configuration
  • If private git repository configuration is provided, the framework should add it to the ArgoCD instances (see the sketch below)
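
For instance, Argo CD's declarative setup registers a private repository through a labeled Secret; something like the sketch below (names and credentials are placeholders) is what the framework would need to create per ArgoCD instance, alongside the customer's CA certificate:

apiVersion: v1
kind: Secret
metadata:
  name: private-pattern-repo
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git.example.com/org/pattern.git
  username: gituser
  password: gitpassword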

Extend ACM Chart support to provision clusters using ClusterDeployment

Currently the framework supports provisioning clusters using ManagedClusterSet, ClusterPool, and ClusterClaim.

The functionality is supported via the managedClusterGroups section described in the values-hub.yaml files. Below is an example of how to describe a ClusterPool associated with a label.

# This section is used by ACM
  managedClusterGroups:
  - name: resilient
    helmOverrides:
    - name: clusterGroup.isHubCluster
      value: "false"
    clusterSelector:
      matchLabels:
        clusterGroup: resilient
      matchExpressions:
      - key: vendor
        operator: In
        values:
          - OpenShift
    clusterPools:
      # example of pool for primary spokes
      aws-ca-central-1:
        name: aws-region-ca
        openshiftVersion: 4.12.49
        baseDomain: blueprints.rhecoeng.com
        platform:
          aws:
            region: us-west-2
        clusters:
          - customspoke1
        controlPlane:
          platform:
            aws:
              type: m5.2xlarge
              zones:
                - us-west-2a
        workers:
          replicas: 3
          platform:
            aws:
              type: m5.xlarge
              zones:
                - us-west-2a
      # example of pool for secondary spokes

We would like to extend the provisioning functionality in the VP framework to support provisioning clusters using ManagedClusterSet, ClusterDeployment, and the Submariner add-on. Similar to how we describe ClusterPools, we would like to describe how to provision ClusterDeployments. Below is an initial example of what it could look like.

# This section is used by ACM
  managedClusterGroups:
  - name: resilient
    helmOverrides:
    - name: clusterGroup.isHubCluster
      value: "false"
    clusterSelector:
      matchLabels:
        clusterGroup: resilient
      matchExpressions:
      - key: vendor
        operator: In
        values:
          - OpenShift
    clusterDeployments:
      # example of pool for primary spokes
      ocp-primary-1:
        name: ocp-primary
        version: 4.14.15
        install_config:
          apiVersion: v1
          metadata:
            name: ocp-primary
          baseDomain: blueprints.rhecoeng.com
          controlPlane:
            name: master
            replicas: 3
            platform:
              aws:
                type: m5.2xlarge
                zones:
                - us-east-1a
          compute:
          - name: worker
            replicas: 5
            platform:
              aws:
                type: m5.2xlarge
                zones:
                - us-east-1a
          networking:
            clusterNetwork:
            - cidr: 10.128.0.0/14
              hostPrefix: 23
            machineNetwork:
            - cidr: 10.0.0.0/16
            networkType: OpenShiftSDN
            serviceNetwork:
            - 172.30.0.0/16
          platform:
            aws:
              region: us-east-1
              userTags:
                project: ValidatedPatterns

          publish: External
          sshKey: ""
          pullSecret: ""
      # example of pool for secondary spokes

With ClusterDeployment the user can control the names of the clusters, which is a requirement in certain environments. ClusterPool cluster names are auto-generated by ACM, so there is no control over the cluster name.

override/empty target namespace

Some operators are not compatible with an OperatorGroup that lists their own namespace in targetNamespaces, which is what we generate automatically when we create a new namespace.

For example, performance-addon-operator can't monitor its own namespace:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.coreos.com/v1","kind":"OperatorGroup","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"npss-tnc-hub"},"name":"openshift-performance-addon-operator-operator-group","namespace":"openshift-performance-addon-operator"},"spec":{"targetNamespaces":["openshift-performance-addon-operator"]}}

Support for merging of namespaces, projects, subscriptions and application in overrides/values-common.yaml

Scenario:

Our company has a base set of namespaces/projects/subscriptions and applications that we want deployed in all clusters (hub and spokes).

We attempted to implement this with a values-common.yaml in the overrides folder.

When we include this common file in values-hub.yaml or values-spoke.yaml as an entry in sharedValueFiles:, it does not work: Helm simply overwrites the hub/spoke namespaces/projects/subscriptions/applications with the contents of values-common.yaml.
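
This matches Helm's documented merge semantics: maps are merged key by key, but lists are replaced wholesale, so a list in the later values file silently discards the one from values-common.yaml:

# overrides/values-common.yaml
namespaces:
- common-ns
---
# values-hub.yaml: the later file wins, so the merged result contains
# only hub-ns, not common-ns plus hub-ns
namespaces:
- hub-ns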

Feat: Followup to definition of extraParameters under the main section of a values file.

Problem Statement

The current clustergroup schema does not allow the definition of extraParameters under the main section of a values file.

Caveat

The user-defined variables in the extraParameters section are only applied if the user deploys the pattern via the command line, using ./pattern.sh make install or ./pattern.sh make operator-deploy, and not via the OpenShift Validated Patterns Operator UI.

Proposed Fix

Add the defined extraParameters to the hub and spoke cluster ArgoCD Applications.

For more information please refer to #510

trailing zero is removed from channel in subscription

Hi,

When we have a channel like 4.10, it is translated into Argo as 4.1, because YAML parses the unquoted value as the float 4.1.

As a workaround, quoting the channel as "4.10" worked:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-performance-addon-operator-subscription
  namespace: openshift-performance-addon-operator
spec:
  channel: "4.10"
  name: performance-addon-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

External Secrets Operator for general cluster use

We have been adding the External Secrets Operator (Community Maintained) to our values file as a required subscription.

In reviewing the Validated Patterns common folder today, I found a folder named golang-external-secrets that, upon closer inspection, contains the same operator we need (but in a localized format).

Because our company does not allow direct access to public repos like ghcr.io where that operator is hosted I would like to make cluster-wide use of your localized instance.

My questions:

  • Is cluster-wide usage of this operator VP-sanctioned or is this operator intended/configured only to support the VP infrastructure itself?
  • Is there a reason VP has prefixed this folder naming with golang? Without closer inspection - many folks might assume this is a different workload than the one most are familiar with.

Thanks,
Wade

Add extraParameters to values.schema.json

Problem Statement

The current clustergroup schema does not allow the definition of extraParameters under the main section of a values file.

Caveat

The user-defined variables in the extraParameters section are only applied if the user deploys the pattern via the command line, using ./pattern.sh make install or ./pattern.sh make operator-deploy, and not via the OpenShift Validated Patterns Operator UI.

Proposed Fix Description

Add the extraParameters to the definition of Main.properties in the values.schema.json:

...
  "Main": {
      "type": "object",
      "additionalProperties": false,
      "required": [
        "clusterGroupName"
      ],
      "title": "Main",
      "description": "This section contains the 'main' variables which are used by the install chart only and are passed to helm via the Makefile",
      "properties": {
        "clusterGroupName": {
          "type": "string"
        },
        "extraParameters": {
          "type": "array",
          "description": "Pass in extra Helm parameters to all ArgoCD Applications and the framework."
        },
...

This will allow users to define extra parameters that will be added by the framework to the ArgoCD applications it creates.

More Detailed Description

  • We currently have the following code in common/operator-install/templates/pattern.yaml:
{{- if .Values.main.extraParameters }}
  extraParameters:
{{- range .Values.main.extraParameters }}
  - name: {{ .name | quote }}
    value: {{ .value | quote }}
{{- end }} {{/* range .Values.main.extraParameters */}}
{{- end }} {{/* if .Values.main.extraParameters */}}
  • This fix allows a user to define the extraParameters in a values file under the main section.
  • A user would do the following to define their extraParameters variables:
main:
  clusterGroupName: datacenter
  multiSourceConfig:
    enabled: false
  experimentalCapabilities: initcontainers
  extraParameters:
    - name: clusterEnvironment
      value: prod
  • If you define extraParameters under the main section of the values files in a pattern, these parameters are added by the Validated Patterns operator to the Pattern CR. You can list the Pattern CRs with the oc get pattern -n openshift-operators command.
apiVersion: v1
items:
- apiVersion: gitops.hybrid-cloud-patterns.io/v1alpha1
  kind: Pattern
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"gitops.hybrid-cloud-patterns.io/v1alpha1","kind":"Pattern","metadata":{"annotations":{},"name":"industrial-edge","namespace":"openshift-operators"},"spec":{"clusterGroupName":"datacenter","experimentalCapabilities":"initcontainers","extraParameters":[{"name":"environment","value":"dev"}],"gitSpec":{"targetRepo":"https://github.com/claudiol/industrial-edge.git","targetRevision":"deploy-industrial"},"multiSourceConfig":{"enabled":false}}}
    creationTimestamp: "2024-05-15T16:23:18Z"
    finalizers:
    - foregroundDeletePattern
    generation: 2
    name: industrial-edge
    namespace: openshift-operators
    resourceVersion: "565790"
    uid: 233d6200-7de2-4221-94e3-9c36351db8cc
  spec:
    clusterGroupName: datacenter
    experimentalCapabilities: initcontainers
    extraParameters:
    - name: clusterEnvironment
      value: dev
   ...
  • In turn, the operator adds these extraParameters to the extraParametersNested section as key/value pairs in the cluster-wide ArgoCD Application created by the Validated Patterns operator.
  • This allows the user to define extra parameters that would be useful to user-provided Helm charts for their particular pattern.
  • The clustergroup Helm chart will add these parameters to each ArgoCD Application created by the framework.
  • The Application template found in common/clustergroup/templates/plumbing/applications.yaml will add the defined parameters as key/value pairs under the Parameters of each ArgoCD Application created by the framework.
  • Below is the Helm template code that includes these parameters (a sketch of the resulting Application fragment follows it):
          {{- range $k, $v := $.Values.extraParametersNested }}
          - name: {{ $k }}
            value: {{ printf "%s" $v | quote }}
          {{- end }}
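
The net effect (a sketch of the rendered fragment, not captured verbatim from a cluster) is that each framework-created Application carries the extra key/value pairs among its Helm parameters:

spec:
  source:
    helm:
      parameters:
      - name: clusterEnvironment
        value: "prod"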

Support letsencrypt with non-route53 DNS01 challenges

As a user of validated patterns, I would like to be able to use challenges that are not based on Route 53.

The letsencrypt helm chart provides value in dealing with the guts of integration with OCP and ensuring route certificates are updated. However, if I am not using Route 53 but rather one of the other supported DNS01 challenge providers for cert-manager (https://cert-manager.io/docs/configuration/acme/dns01/), I will effectively have to fork the chart.

Therefore I would suggest adding a "BYO issuer" option where the end user can configure their own DNS01 challenge provider, for example:
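
A sketch of what a "BYO issuer" could look like in the chart's values (the byoSolvers key is hypothetical; the dns01 stanza follows cert-manager's solver API, using the cloudDNS provider as an example):

letsencrypt:
  issuer:
    byoSolvers:          # hypothetical knob, not in the current chart
    - dns01:
        cloudDNS:
          project: my-project
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json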

What is the proper way to inject a Corporate CA into a Pattern?

What is the proper way to inject a Corporate CA into a Validated Pattern such that:

  1. The Cluster ArgoCD instance picks up the CA
  2. The Pattern ArgoCD instance picks up the CA
  3. For patterns like MultiCloud Gitops that support "pushing" patterns to a "spoke cluster" - those pushes also properly provision the CA trust? (see the sketch below)
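
For the first two points, one common OpenShift building block (a sketch, not necessarily the VP-sanctioned answer this issue asks for) is a ConfigMap labeled for trusted CA bundle injection in the GitOps namespaces:

apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca-bundle
  namespace: openshift-gitops
  labels:
    # OpenShift injects the cluster-wide trust bundle (including a
    # proxy/trustedCA corporate CA) into ConfigMaps with this label
    config.openshift.io/inject-trusted-cabundle: "true"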

ResourceCustomizations is deprecated

Warning  DeprecationNotice  27m  ResourceCustomizations is deprecated, please use the new formats `ResourceHealthChecks`, `ResourceIgnoreDifferences`, and `ResourceActions` instead.

Running oc describe on the cluster-wide Argo instance gives the above deprecation notice. We need to fix this (in our common argocd.yaml).
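
The replacement fields live on the same ArgoCD CR; a minimal sketch of the new format (the Subscription health check shown is illustrative, not our current customization):

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-gitops
spec:
  resourceHealthChecks:
  - group: operators.coreos.com
    kind: Subscription
    check: |
      hs = {}
      hs.status = "Healthy"
      hs.message = "Subscription considered healthy"
      return hs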

Allow overrides of Makefile values and include an optional values folder path

Use case:
Our team has its own utility container for working within our multicloud environment. We have rebased our container to hybridcloudpatterns/utility-container:latest so we can pick up all required VP tooling.

Our container makes use of a git clone to pull in the validated pattern repo as a subfolder. We include our own Makefile in our DIR_HOME to hand off to the VP Makefile appropriately.

Containerfile Extract

FROM mycompany.com/hybridcloudpatterns/utility-container:latest
ARG CONTAINER_DIR_HOME=/home/root
# note: ARG defaults use '=' ('?=' is Makefile syntax)
ARG MY_REPO_CLI_BRANCH='dev'
ARG MY_REPO_CLI_URL=https://github.mycompany.com/my-utility-container.git
ARG VP_DIR_MULTICLOUD=vp-multicloud-gitops
ARG VP_DIR_VALUES=vp-values
ARG VP_PATTERN_NAME=my-pattern
ARG VP_REPO_MULTICLOUD_BRANCH=main
ARG VP_REPO_MULTICLOUD_URL=https://github.com/validatedpatterns/multicloud-gitops.git

RUN git clone --depth 1 ${VP_REPO_MULTICLOUD_URL} ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}; \
    rm -R ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/.github; \
    rm ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/.gitignore

COPY ${VP_DIR_VALUES}/ ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/${VP_DIR_VALUES}
COPY ./Makefile        ${CONTAINER_DIR_HOME}

ENV KUBECONFIG=${CONTAINER_DIR_HOME}/.kube/config \
    VP_DIR_MULTICLOUD=${VP_DIR_MULTICLOUD} \
    VP_DIR_VALUES=${VP_DIR_VALUES} \
    VP_REPO_MULTICLOUD_BRANCH=${VP_REPO_MULTICLOUD_BRANCH} \
    VP_REPO_MULTICLOUD_URL=${VP_REPO_MULTICLOUD_URL} \
    VP_PATTERN_NAME=${VP_PATTERN_NAME}

WORKDIR ${CONTAINER_DIR_HOME}
ENTRYPOINT ["sh", "run.sh"]
CMD ["help"]

Makefile

export MY_REPO_CLI_BRANCH ?= 'main'
export MY_REPO_CLI_ORIGIN ?= 'origin'
export MY_REPO_CLI_URL ?= 'https://github.mycompany.com/my-utility-container.git'

export VP_DIR_VALUES ?= 'my-default-vp-values-path'
export VP_DIR_MULTICLOUD ?= 'my-default-vp-mc-path'
export VP_PATTERN_NAME ?= 'my-default-pattern'
export VP_REPO_MULTICLOUD_BRANCH ?= 'main'
export VP_REPO_MULTICLOUD_URL ?= 'https://github.com/validatedpatterns/multicloud-gitops.git'

export NAME ?= ${VP_PATTERN_NAME}
export TARGET_ORIGIN ?= ${MY_REPO_CLI_ORIGIN}
export TARGET_REPO ?= ${MY_REPO_CLI_URL}
export TARGET_BRANCH ?= ${MY_REPO_CLI_BRANCH}

%:
	@make $* -C vp-multicloud-gitops

Requested Changes

To facilitate the use case above I am requesting modifications be made to common/Makefile as follows:

  • Add a new variable at line 2 that will serve as a folder-path prefix for our company's values files.

VP_DIR_VALUES ?= '.'

  • Modify line 19

TARGET_REPO ?= $[….]

  • Modify line 21

TARGET_BRANCH ?= $[….]

The last set of changes I will leave to you, but effectively any VP references to values files should be prefixed with $(VP_DIR_VALUES).

Example changes I have found so far include common/Makefile lines 39 and 43:

HELM_OPTS=-f $(VP_DIR_VALUES)/values-global.yaml

And line 65:

$(eval CLUSTERGROUP ?= $(shell yq ".main.clusterGroupName" $(VP_DIR_VALUES)/values-global.yaml))

I believe other files are involved in this update as well, such as common/scripts/preview.sh.

IBM Cloud ROKS integrated use of Let's encrypt for routes creates issues with ESO Deployment

I've deployed a validated-patterns-based cluster, derived from multicloud-gitops, based on this repo:

When deploying I was using 'Red Hat Openshift on IBM Cloud' where:

  1. It was a VPC cluster (e.g. not classic)
  2. The public service endpoints were enabled (e.g. routes are on the internet by default).

Here I was using the default route hostname with an "IBM-provided domain", as described at https://cloud.ibm.com/docs/openshift?topic=openshift-openshift_routes.

By default the routes, including the console, appear to be encrypted with Let's Encrypt certs, so any re-encrypt/Redirect or edge/Redirect routes have usable public certs.

The adverse impact of this is that, by default, the External Secrets Operator enforces a particular certificate chain.

The following commit enabled ESO to function; however, it does decrease the security posture.
butler54/validated-patterns-demos@e427f99
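
For context, the trust pin involved lives on the ESO SecretStore/ClusterSecretStore vault provider (a sketch; the server URL and paths are placeholders). Pinning a chain via caBundle/caProvider is what breaks when the routes already serve publicly trusted Let's Encrypt certificates, and relaxing that pin is effectively what the commit above does:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: https://vault-vault.apps.example.com
      path: secret
      version: v2
      # caBundle: <pinned chain> -- omitting it falls back to the pod's
      # system trust store, which accepts the Let's Encrypt chain but
      # weakens the pinning (the security trade-off noted above)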

References

https://cloud.ibm.com/docs/openshift?topic=openshift-openshift_routes
