flux2-multi-tenancy

This repository serves as a starting point for managing multi-tenant clusters with Git and Flux v2.

Roles

Platform Admin

  • Has cluster admin access to the fleet of clusters
  • Has maintainer access to the fleet Git repository
  • Manages cluster wide resources (CRDs, controllers, cluster roles, etc)
  • Onboards the tenant’s main GitRepository and Kustomization
  • Manages tenants by assigning namespaces, service accounts and role binding to the tenant's apps

Tenant

  • Has admin access to the namespaces assigned to them by the platform admin
  • Has maintainer access to the tenant Git repository and apps repositories
  • Manages app deployments with GitRepositories and Kustomizations
  • Manages app releases with HelmRepositories and HelmReleases

Repository structure

The platform admin repository contains the following top directories:

  • clusters dir contains the Flux configuration per cluster
  • infrastructure dir contains common infra tools such as admission controllers, CRDs and cluster-wide policies
  • tenants dir contains namespaces, service accounts, role bindings and Flux custom resources for registering tenant repositories
├── clusters
│   ├── production
│   └── staging
├── infrastructure
│   ├── kyverno
│   └── kyverno-policies
└── tenants
    ├── base
    ├── production
    └── staging

A tenant repository contains the following top directories:

  • base dir contains HelmRepository and HelmRelease manifests
  • staging dir contains HelmRelease Kustomize patches for deploying pre-releases on the staging cluster
  • production dir contains HelmRelease Kustomize patches for deploying stable releases on the production cluster
├── base
│   ├── kustomization.yaml
│   ├── podinfo-release.yaml
│   └── podinfo-repository.yaml
├── production
│   ├── kustomization.yaml
│   └── podinfo-values.yaml
└── staging
    ├── kustomization.yaml
    └── podinfo-values.yaml
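As an illustration, the base HelmRelease and the staging values patch might look roughly like this (a minimal sketch with made-up versions and values, not the repository's exact files):

# base/podinfo-release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: "5.0.x" # stable releases
      sourceRef:
        kind: HelmRepository
        name: podinfo
---
# staging/podinfo-values.yaml (Kustomize patch applied by the staging overlay)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  chart:
    spec:
      version: ">=5.0.0-alpha" # allow pre-releases on staging
  values:
    logLevel: debug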

Bootstrap the staging cluster

Install the Flux CLI, fork this repository to your personal GitHub account, and export your GitHub username and repository name:

export GITHUB_USER=<your-username>
export GITHUB_REPO=<repository-name>

Verify that your staging cluster satisfies the prerequisites with:

flux check --pre

Set the --context argument to the kubectl context of your staging cluster and bootstrap Flux:

flux bootstrap github \
    --context=your-staging-context \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/staging

At this point, the flux CLI will prompt you for your GITHUB_TOKEN (a.k.a. a GitHub personal access token).

NOTE: The GITHUB_TOKEN is used exclusively by the flux CLI during the bootstrapping process, and does not leave your machine. The credential is used for configuring the GitHub repository and registering the deploy key.

The bootstrap command commits the manifests for the Flux components in the clusters/staging/flux-system dir and creates a deploy key with read-only access on GitHub, so that Flux can pull changes from inside the cluster.

Wait for the staging cluster reconciliation to finish:

$ flux get kustomizations --watch
NAME            	READY  	MESSAGE                                                        	
flux-system     	True   	Applied revision: main/616001c38e7bc81b00ef2c65ac8cfd58140155b8	
kyverno         	Unknown	Reconciliation in progress
kyverno-policies	False  	Dependency 'flux-system/kyverno' is not ready
tenants         	False  	Dependency 'flux-system/kyverno-policies' is not ready

Verify that the tenant Git repository has been cloned:

$ flux -n apps get sources git
NAME    	READY	MESSAGE 
dev-team	True 	Fetched revision: dev-team/ca8ec25405cc03f2f374d2f35f9299d84ced01e4

Verify that the tenant Helm repository index has been downloaded:

$ flux -n apps get sources helm
NAME   	READY	MESSAGE
podinfo	True 	Fetched revision: 2022-05-23T10:09:58.648748663Z

Wait for the demo app to be installed:

$ watch flux -n apps get helmreleases
NAME   	READY	MESSAGE                         	REVISION	SUSPENDED 
podinfo	True 	Release reconciliation succeeded	5.0.3   	False 

To expand on this example, see the enforce tenant isolation section below for security-related considerations.

Onboard new tenants

The Flux CLI offers commands to generate the Kubernetes manifests needed to define tenants.

Assume a platform admin wants to create a tenant named dev-team with access to the apps namespace.

Create the tenant base directory:

mkdir -p ./tenants/base/dev-team

Generate the namespace, service account and role binding for the dev-team:

flux create tenant dev-team --with-namespace=apps \
    --export > ./tenants/base/dev-team/rbac.yaml
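The exported rbac.yaml should contain roughly the following (a simplified sketch; the actual output also includes tenant labels and may include an additional impersonation subject depending on the Flux CLI version):

apiVersion: v1
kind: Namespace
metadata:
  name: apps
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-team
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-reconciler
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dev-team
    namespace: apps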

Create the sync manifests for the tenant Git repository:

flux create source git dev-team \
    --namespace=apps \
    --url=https://github.com/<org>/<dev-team> \
    --branch=main \
    --export > ./tenants/base/dev-team/sync.yaml

flux create kustomization dev-team \
    --namespace=apps \
    --service-account=dev-team \
    --source=GitRepository/dev-team \
    --path="./" \
    --export >> ./tenants/base/dev-team/sync.yaml

Create the base kustomization.yaml file:

cd ./tenants/base/dev-team/ && kustomize create --autodetect --namespace apps 
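The generated kustomization.yaml should look roughly like this (a sketch):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
resources:
  - rbac.yaml
  - sync.yaml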

Create the staging overlay and set the path to the staging dir inside the tenant repository:

cat << EOF | tee ./tenants/staging/dev-team-patch.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dev-team
  namespace: apps
spec:
  path: ./staging
EOF

cat << EOF | tee ./tenants/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
resources:
  - ../base/dev-team
patches:
  - path: dev-team-patch.yaml
EOF

With the above configuration, the Flux instance running on the staging cluster will clone the dev-team's repository, and it will reconcile the ./staging directory from the tenant's repo using the dev-team service account. Since that service account is restricted to the apps namespace, the dev-team repository must contain Kubernetes objects scoped to the apps namespace only.
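In other words, after the staging overlay is applied, the object reconciled on the staging cluster is roughly the following (interval and prune values here are illustrative):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dev-team
  namespace: apps
spec:
  interval: 5m
  serviceAccountName: dev-team
  sourceRef:
    kind: GitRepository
    name: dev-team
  path: ./staging
  prune: true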

Tenant onboarding via Kyverno

As an alternative to the flux create tenant approach, Kyverno's resource generation feature can be leveraged to the same effect.
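A rough sketch of what such a generate rule could look like is shown below; the policy name, tenant label and generated RoleBinding are assumptions for illustration, not the upstream Kyverno example:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-tenant-rbac # hypothetical policy name
spec:
  rules:
    - name: generate-tenant-role-binding
      match:
        resources:
          kinds:
            - Namespace
          selector:
            matchLabels:
              toolkit.fluxcd.io/tenant: dev-team # assumed tenant label
      generate:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        name: dev-team-reconciler
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: cluster-admin
          subjects:
            - kind: ServiceAccount
              name: dev-team
              namespace: "{{request.object.metadata.name}}"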

Enforce tenant isolation

To enforce tenant isolation, cluster admins must configure Flux to reconcile the Kustomization and HelmRelease kinds by impersonating a service account from the namespace where these objects are created.

Flux has built-in multi-tenancy lockdown features which enable tenant isolation at the control-plane level without the need for external admission controllers (e.g. Kyverno). The recommended patch:

  • Enforces cross-namespace reference blocking in the controllers, meaning that a tenant can’t use another tenant’s sources or subscribe to their events.
  • Denies access to Kustomize remote bases, thus ensuring all resources refer to local files, meaning that only approved Flux Sources can affect the cluster state.
  • Sets a default service account via --default-service-account on kustomize-controller and helm-controller, meaning that, if a tenant does not specify a service account in a Flux Kustomization or HelmRelease, it automatically defaults to said account.

NOTE: It is recommended that the default service account has no privileges and that each named service account observes the least-privilege model.

This repository applies this patch automatically via kustomization.yaml in both clusters.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-cross-namespace-refs=true
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller|notification-controller|image-reflector-controller|image-automation-controller)"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-remote-bases=true
    target:
      kind: Deployment
      name: "kustomize-controller"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --default-service-account=default
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
  - patch: |
      - op: add
        path: /spec/serviceAccountName
        value: kustomize-controller
    target:
      kind: Kustomization
      name: "flux-system"

Side Effects

When Flux is bootstrapped with the patch, both kustomize-controller and helm-controller will impersonate the default service account in the tenant namespace when applying changes to the cluster. The default service account exists in all namespaces and should always be kept without any privileges.

To enable a tenant to operate, a service account must be created with the required permissions and its name set in the spec.serviceAccountName field of all Kustomization and HelmRelease resources the tenant owns.
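For example, a tenant-owned HelmRelease that should be reconciled with the dev-team account would set the field like this (a minimal sketch):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: apps
spec:
  serviceAccountName: dev-team
  interval: 5m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo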

Tenancy policies

Depending on the aimed security posture, the Platform Admin may impose additional policies to enforce specific behaviours. Below are a few consideration points, some of which are already implemented in this repository.

Image provenance

Assuring the provenance of container images across a cluster can be achieved in several ways.

The verify-flux-images policy ensures that all Flux images used are the ones built and signed by the Flux team:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-flux-images
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: verify-cosign-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/fluxcd/source-controller:*"
            - "ghcr.io/fluxcd/kustomize-controller:*"
            - "ghcr.io/fluxcd/helm-controller:*"
            - "ghcr.io/fluxcd/notification-controller:*"
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/fluxcd/*"
                    issuer: "https://token.actions.githubusercontent.com"
                    rekor:
                      url: https://rekor.sigstore.dev

Other policies to explore:

  • Restrict what repositories can be accessed in each cluster. Some deployments may need this to be environment-specific.
  • Align image policies with pods that require a highly privileged securityContext.

Flux Sources

Flux uses sources to define the origin of the manifests it applies. Some deployments may require that all of them come from a specific GitHub organisation, as the verify-git-repositories policy shows:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-git-repositories
spec:
  validationFailureAction: audit # Change to 'enforce' once the specific org url is set.
  rules:
    - name: github-repositories-only
      exclude:
        resources:
          namespaces:
            - flux-system
      match:
        resources:
          kinds:
            - GitRepository
      validate:
        message: ".spec.url must be from a repository within the organisation X"
        anyPattern:
        - spec:
            url: "https://github.com/fluxcd/?*" # repositories in fluxcd via https
        - spec:
            url: "ssh://[email protected]:fluxcd/?*" # repositories in fluxcd via ssh

Other policies to explore:

  • Expand the policies to HelmRepository and Bucket.
  • For HelmRepository and GitRepository consider which protocols should be allowed.
  • For Bucket, consider restrictions on providers and regions.

Make serviceAccountName mandatory

The lockdown patch sets a default service account that is applied to any Kustomization and HelmRelease instances that have no spec.serviceAccountName set.

If the recommended best practices above are followed, such instances won't be able to apply changes to a cluster as the default service account has no permissions to do so.

An additional step could be taken to make the spec.serviceAccountName field mandatory via a validation webhook, for example Kyverno or OPA Gatekeeper, resulting in Kustomization and HelmRelease instances not being admitted when spec.serviceAccountName is not set.
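A Kyverno policy along the following lines could implement this; it is a sketch and not part of this repository:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-service-account-name # hypothetical policy name
spec:
  validationFailureAction: enforce
  rules:
    - name: require-service-account-name
      exclude:
        resources:
          namespaces:
            - flux-system
      match:
        resources:
          kinds:
            - Kustomization
            - HelmRelease
      validate:
        message: "spec.serviceAccountName is required"
        pattern:
          spec:
            serviceAccountName: "?*"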

Reconciliation hierarchy

On cluster bootstrap, you need to configure Flux to deploy the validation webhook and its policies before reconciling the tenant repositories.

Inside the clusters dir we define the order in which the infrastructure items and the tenant workloads are reconciled on the staging and production clusters:

./clusters/
├── production
│   ├── infrastructure.yaml
│   └── tenants.yaml
└── staging
    ├── infrastructure.yaml
    └── tenants.yaml

First we set up the reconciliation of custom resource definitions and their controllers. For this example we'll use Kyverno:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: kyverno
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/kyverno
  prune: true
  wait: true
  timeout: 5m

Then we set up cluster policies (Kyverno custom resources) to enforce a specific GitHub organisation:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: kyverno-policies
  namespace: flux-system
spec:
  dependsOn:
    - name: kyverno
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/kyverno-policies
  prune: true

With dependsOn we tell Flux to install Kyverno before deploying the cluster policies.

And finally we set up the reconciliation of the tenant workloads with:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenants
  namespace: flux-system
spec:
  dependsOn:
    - name: kyverno-policies
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./tenants/staging
  prune: true

With the above configuration, we ensure that the Kyverno validation webhook will reject GitRepository resources that don't originate from a specific GitHub organisation, in our case fluxcd.

Onboard tenants with private repositories

You can configure Flux to connect to a tenant repository using SSH or token-based authentication. The tenant credentials will be stored in the platform admin repository as a Kubernetes secret.

Encrypt Kubernetes secrets in Git

In order to store credentials safely in a Git repository, you can use Mozilla's SOPS CLI to encrypt Kubernetes secrets with OpenPGP, Age or KMS.

Install gnupg and sops:

brew install gnupg sops

Generate a GPG key for Flux without specifying a passphrase and retrieve the GPG key ID:

$ gpg --full-generate-key
Email address: <your-email>

$ gpg --list-secret-keys <your-email>
sec   rsa3072 2020-09-06 [SC]
      1F3D1CED2F865F5E59CA564553241F147E7C5FA4

Create a Kubernetes secret in the flux-system namespace with the GPG private key:

gpg --export-secret-keys \
--armor 1F3D1CED2F865F5E59CA564553241F147E7C5FA4 |
kubectl create secret generic sops-gpg \
--namespace=flux-system \
--from-file=sops.asc=/dev/stdin

You should store the GPG private key in a safe place for disaster recovery, in case you need to rebuild the cluster from scratch. The GPG public key can be shared with the platform team, so anyone with write access to the platform repository can encrypt secrets.

Git over SSH

Generate a Kubernetes secret with the SSH and known host keys:

flux -n apps create secret git dev-team-auth \
    --url=ssh://git@github.com/<org>/<dev-team> \
    --export > ./tenants/base/dev-team/auth.yaml

Print the SSH public key and add it as a read-only deploy key to the dev-team repository:

yq eval 'data."identity.pub"' ./tenants/base/dev-team/auth.yaml | base64 --decode

Git over HTTP/S

Generate a Kubernetes secret with basic auth credentials:

flux -n apps create secret git dev-team-auth \
    --url=https://github.com/<org>/<dev-team> \
    --username=$GITHUB_USERNAME \
    --password=$GITHUB_TOKEN \
    --export > ./tenants/base/dev-team/auth.yaml

The GitHub token must have read-only access to the dev-team repository.

Configure Git authentication

Encrypt the dev-team-auth secret's data field with sops:

sops --encrypt \
    --pgp=1F3D1CED2F865F5E59CA564553241F147E7C5FA4 \
    --encrypted-regex '^(data|stringData)$' \
    --in-place ./tenants/base/dev-team/auth.yaml

Create the sync manifests for the tenant Git repository referencing the dev-team-auth secret:

flux create source git dev-team \
    --namespace=apps \
    --url=https://github.com/<org>/<dev-team> \
    --branch=main \
    --secret-ref=dev-team-auth \
    --export > ./tenants/base/dev-team/sync.yaml

flux create kustomization dev-team \
    --namespace=apps \
    --service-account=dev-team \
    --source=GitRepository/dev-team \
    --path="./" \
    --export >> ./tenants/base/dev-team/sync.yaml

Create the base kustomization.yaml file:

cd ./tenants/base/dev-team/ && kustomize create --autodetect

Configure Flux to decrypt secrets using the sops-gpg key:

flux create kustomization tenants \
  --depends-on=kyverno-policies \
  --source=flux-system \
  --path="./tenants/staging" \
  --prune=true \
  --interval=5m \
  --validation=client \
  --decryption-provider=sops \
  --decryption-secret=sops-gpg \
  --export > ./clusters/staging/tenants.yaml

With the above configuration, the Flux instance running on the staging cluster will:

  • create the tenant namespace, service account and role binding
  • decrypt the tenant Git credentials using the GPG private key
  • create the tenant Git credentials Kubernetes secret in the tenant namespace
  • clone the tenant repository using the supplied credentials
  • apply the ./staging directory from the tenant's repo using the tenant's service account

Testing

Any change to the Kubernetes manifests or to the repository structure should be validated in CI before a pull request is merged into the main branch and synced on the cluster.

This repository contains the following GitHub CI workflows:

  • the test workflow validates the Kubernetes manifests and Kustomize overlays with kubeconform (a local equivalent is sketched after this list)
  • the e2e workflow starts a Kubernetes cluster in CI and tests the staging setup by running Flux in Kubernetes Kind
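To run a similar validation locally, something along these lines works (a sketch; flags and stdin behaviour assume a recent kubeconform release):

# Build an overlay and validate the rendered manifests against the Kubernetes
# schemas; Flux CRDs are not in the default schema catalog, so unknown kinds
# are skipped instead of failing the run.
kustomize build ./tenants/staging | kubeconform -strict -ignore-missing-schemas -verbose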

flux2-multi-tenancy's Issues

Question: Where does the gotk-components.yaml file go?

I bootstrapped a fleet-infra repo using the guide and then I went on to examine this example for multi-tenancy. I guess the fleet-infra in the bootstrapping guide is the same as the "platform admin repository" referenced there? The bootstrapping created files to install the CRD and controllers for flux in flux-system/gotk-components.yaml and flux-system/gotk-sync.yaml. However this example repo seems to not have these files at all. So I'm a bit confused, should these files still be in the "platform admin repository"?

Github Actions with Organizations

It's possible we are missing something obvious, but we cloned this repo to transition to flux2 and we noticed that flux create source git flux-system was failing in the e2e github workflow due to a permissions issue.

Initially, we thought the issue was that the action needed GITHUB_TOKEN in the environment params, but we were not able to get it to work with the native access token (created on all action runs), or a PAT.

What is the appropriate setup for getting this to work?

Run flux create source git flux-system \
✚ generating GitRepository source
► applying GitRepository source
✔ GitRepository source created
◎ waiting for GitRepository source reconciliation
✗ unable to clone 'https://github.com/organizationname/flux2': authentication required
Error: Process completed with exit code 1.
{"level":"debug","ts":"2022-01-01T03:52:14.953Z","logger":"events","msg":"Normal","object":{"kind":"GitRepository","namespace":"flux-system","name":"flux-system","uid":"4ac377a0-8d3b-405d-9925-29ef109d22c5","apiVersion":"source.toolkit.fluxcd.io/v1beta1","resourceVersion":"787"},"reason":"error","message":"unable to clone 'https://github.com/organizationname/flux2': authentication required"}

how to set up multiple tenants in one cluster

Hi, Team,

In flux2-multi-tenancy repo, there's only one tenant called "dev-team", what if I need to add "tenant2" even "tenant3"... in the future?
https://github.com/fluxcd/flux2-multi-tenancy

Say for staging cluster specifically, currently you're assuming only one tenant in staging cluster below, right?
Under directory "./clusters/staging"

path: ./tenants/staging

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenants
  namespace: flux-system
spec:
  path: ./tenants/staging

Could I add tenant2 following these steps? or do you have recommended pattern for multi-tenancy in one cluster? Thanks!

  1. modify dev-team Kustomization to:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant-dev-team
  namespace: flux-system
spec:
  path: ./tenants/staging/dev-team
  2. add tenant2 Kustomization:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant2
  namespace: flux-system
spec:
  path: ./tenants/staging/tenant2

so directory "./clusters" would contain the following directories and files:

├───production
└───staging
    ├───flux-system
    ├───infrastructure.yaml
    ├───tenant-dev-team.yaml
    └───tenant2.yaml
  3. Under directory "./tenants/staging/" make the changes like these:
├───base
│   ├───dev-team
│   │   │───sync.yaml
│   │   │───rbac.yaml
│   │   └───kustomization.yaml
│   └───tenant2
│       │───sync.yaml
│       │───rbac.yaml
│       └───kustomization.yaml
├───production
└───staging
    ├───dev-team
    │   │───dev-team-patch.yaml
    │   └───kustomization.yaml
    └───tenant2
        │───tenant2-patch.yaml
        └───kustomization.yaml

So that eventually tenant dev-team and tenant2 could point to respective repository.

Add kube-prometheus-stack example

I am currently trying to familiarize myself with flux, and face some issues getting the prometheus stack running inside this multi tenancy repo.

I tried to add the Monitoring stack as described in https://fluxcd.io/flux/guides/monitoring/ by creating the corresponding flux files.
And I feel quite stuck, especially with setting correct RBAC and namespaces. I played around with a lot of different settings, but was not able to get the monitoring stack up and running.

Here is the diff of the files:
main...Pro:flux2-multi-tenancy:monitoring

The reconciliation of kube-prometheus-stack fails with:

error Kustomization/kube-prometheus-stack.flux-system - Reconciliation failed after 68.004611ms, next try in 1h0m0s Namespace/monitoring dry-run failed, reason: Forbidden: namespaces "monitoring" is forbidden: User "system:serviceaccount:flux-system:monitoring-sa" cannot patch resource "namespaces" in API group "" in the namespace "monitoring"

Could you please point me to the right direction on how to add the monitoring stack?

Tenant RBAC enhancement

Is there any example of extending tenant role bindings to apply at the cluster level?

We have a use case where my ingress controller has some cluster-level roles and bindings which are not possible to apply with the existing tenant role bindings.

Can you suggest how we can go about this issue?

Align multi-tenancy lockdown with documentation

The README.md of this project, as well as the kustomization.yaml of production and staging use one version of the multi-tenancy lockdown patch.

The flux documentation (source) uses a different version however.

The differences are:

  1. gotk-components.yaml and gotk-sync.yaml are correctly indented in the docs, whereas indentation seems wrong here.
  2. In the docs the paths are /spec/template/spec/containers/0/args/-, here they are /spec/template/spec/containers/0/args/0.
  3. The docs include an additional section that sets --no-remote-bases=true.

All in all, the version in the docs seems more up to date. Please correct me, if this assumption is wrong. I can also create an issue in the website repo as necessary.

It would be really nice, if there would be one version used everywhere. Having different versions without explanation is very confusing.

To avoid this in the future it might be better to avoid the code duplication.

Tenant RBAC allows privilege escalation

Example in this repo will create service accounts in the tenant namespaces and then grant cluster-admin role to them (namespace scoped).

This allows any user with enough privileges to read secrets or launch pods to escalate privileges up to the cluster-admin inside the namespace. Escalation workflow is:

  1. Launch pod with spec.serviceAccountName: dev-team
  2. Dump the ServiceAccount token: kubectl exec -ti podname -- cat /run/secrets/kubernetes.io/serviceaccount/token
  3. Users who have read-only access to secrets can read the token directly from the secret: kubectl get secret dev-team-token-xxxxxxx
  4. Use the token to perform cluster-admin-level actions within the namespace:
kubectl config set-credentials dev-team --token=$TOKEN
kubectl config set-context --current --user=dev-team
kubectl delete $restricted_resource

Unable to build ./clusters/staging directory

This is not a question related to multi-tenancy. I was wondering how the infrastructure.yaml and tenants.yaml inside ./clusters/staging are reconciled. I do not see a kustomization file inside the ./clusters/staging directory. I am unable to build the directory using kustomize build ./clusters/staging/ which is the path used in the main flux-system kustomization.
I get the error Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/mnt/c/github/fleet-infra-multitenant/clusters/staging

Can platform admin and tenants share a single branch in a mono repo?

Besides finer management of Github permissions, what are the benefits of separating platform admin and each tenant as individual repos?

If Github permission is not a big concern for some users/orgs, it seems the current multi-repo pattern can be consolidated into a mono repo with the following structure.

├── admin
    ├── clusters
    │   ├── production
    │   └── staging
    ├── infrastructure
    │   ├── kyverno
    │   └── kyverno-policies
    └── tenants
        ├── base
        ├── production
        └── staging
├── tenants
    ├── dev-team
        ├── base
        │   ├── kustomization.yaml
        │   ├── podinfo-release.yaml
        │   └── podinfo-repository.yaml
        ├── production
        │   ├── kustomization.yaml
        │   └── podinfo-values.yaml
        └── staging
            ├── kustomization.yaml
            └── podinfo-values.yaml

Instead of adding every tenant repo as a GitSource, now we only need to add the mono-repo as a GitSource for all tenants. The difference for each tenant is the path. In details,

flux create source git mono \
    --namespace=apps \
    --url=https://github.com/<org>/flux2-mono \
    --branch=main \
    --export > ./admin/tenants/base/sync.yaml

flux create kustomization dev-team \
    --namespace=apps \
    --service-account=dev-team \
    --source=GitRepository/mono \
    --export > ./admin/tenants/base/dev-team/sync.yaml

cd ./admin/tenants/base/dev-team/ && kustomize create --autodetect

cat << EOF | tee ./admin/tenants/staging/dev-team-patch.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: dev-team
  namespace: apps
spec:
  path: ./tenants/dev-team/staging
EOF

cat << EOF | tee ./admin/tenants/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/dev-team
  - ../base/sync.yaml
patchesStrategicMerge:
  - dev-team-patch.yaml
EOF

The rest of instructions on multi-repo pattern should mostly still hold.

What do you think of this alternative repo pattern?

What are plain Kubernetes manifests in context of autogeneration of kustomization.yaml?

Hello everyone,

I wanted to understand what "plain Kubernetes manifests" means in

Note that the source should contain the kustomization.yaml and all the Kubernetes manifests and configuration files referenced in the kustomization.yaml. If your Git repository or S3 bucket contains only plain manifests, then a kustomization.yaml will be automatically generated.

Would a HelmRelease be considered a plain manifest in this case? If so, then non-plain manifests would be Helm charts and kustomization patches?

Thanks!

HelmRelease with partial match semver expression based on prerelease string

I use semantic versioning for my Helm charts in the form of x.y.z-<prerelease>+<build>, for example 1.0.0-feature-foo+11.

With this, at any one time there may be several features under development in separate feature branches. The development stream is identified in the semver prerelease string.
Each of these development streams may have multiple builds published, with each build incrementing a build number as denoted by the +N semver build metadata.

The following listing shows an example in which there are two feature branches: feature-foo and feature-bar, each with several builds.

$ helm search repo project-foobar --devel -l
NAME                            CHART VERSION           APP VERSION     DESCRIPTION
servicekit/project-foobar       1.0.0-feature-foo+12    1.0.0           Helm chart for project foobar
servicekit/project-foobar       1.0.0-feature-foo+11    1.0.0           Helm chart for project foobar
servicekit/project-foobar       1.0.0-feature-foo+10    1.0.0           Helm chart for project foobar
servicekit/project-foobar       1.0.0-feature-bar+22    1.0.0           Helm chart for project foobar
servicekit/project-foobar       1.0.0-feature-bar+21    1.0.0           Helm chart for project foobar
servicekit/project-foobar       1.0.0-feature-bar+20    1.0.0           Helm chart for project foobar

I would like to be able to deploy a HelmRelease with the version field set to match the latest build for a particular prerelease string.
For example, I deployed the following HelmRelease with version ^1.0.0-feature-bar+20 and I expect it to deploy the latest build matching the feature-bar prerelease string, in this case 1.0.0-feature-bar+22.

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: project-foobar
  namespace: foobar
spec:
  chart:
    spec:
      chart: project-foobar
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: foobar
      version: "^1.0.0-feature-bar+20"
  interval: 1m0s
  values: {}

However, when I deploy the above HelmRelease, rather than deploying a 1.0.0-feature-bar+X release, Flux deploys the 1.0.0-feature-foo+12 release. Note the 'Last Applied Version' in the output below.

$ kubectl -n foobar describe helmreleases.helm.toolkit.fluxcd.io project-foobar | grep "1.0.0"
      Version:  ^1.0.0-feature-bar.20
  Last Applied Revision:           1.0.0-feature-foo+12
  Last Attempted Revision:         1.0.0-feature-foo+12

I have also tried excluding the build number (semver build meta) from the semver expression i.e., ^1.0.0-feature-bar but Flux still deploys 1.0.0-feature-foo+12.
Flux deploys the correct version if I update the HelmRelease version to be an exact match string, such as 1.0.0-feature-bar+22; however I would like to be able to use the ^ expression such that Flux deploys the latest build (+N).

Am I misunderstanding how the semver expressions should behave or is it a bug that Flux is deploying feature-foo when I have defined feature-bar?

Any input would be greatly appreciated.

kyverno: missing digest for ghcr.io/fluxcd/helm-controller:v0.34.1

I was following the docs on multi-tenancy-setup using kind.
(Am not that experienced yet with Flux, be patient ;))

Running flux with the exact same version as this repo, I get the following error:

2023-06-09T12:10:16.906Z error Kustomization/flux-system.flux-system - Reconciliation failed after 11.419728509s, next try in 10m0s Deployment/flux-system/helm-controller dry-run failed: admission webhook "validate.kyverno.svc-fail" denied the request: 

resource Deployment/flux-system/helm-controller was blocked due to the following policies 

verify-flux-images:
  autogen-verify-cosign-signature: missing digest for ghcr.io/fluxcd/helm-controller:v0.34.1

Is this something I can fix, or is there some key missing on the sigstore server?

Flux v2: Notifications controller

We understand that at this time Flux supports only slack, msteams, discord, rocket, googlechat, webex, sentry or generic. Any idea when mattermost will be added to this list?

How to best use substituteFrom for information from another namespace?

Hi,
I am trying to use the substituteFrom, however I need to use it from another namespace but do not want to allow cross namespace access. I am trying to ensure that the ingresses in each cluster have the correct service FQDN. The problem that I am running into is that I have multiple gitrepos going to different namespaces and I cannot seem to use the substituteFrom from another namespace. It has been suggested to simply copy the configmap with the cluster variables to each namespace that needs it, however this does not seem to be a good solution because if anything changes, you then have a synchronization issue on the configmaps. Any advice would be appreciated.

Question about Scaling

Hi @stefanprodan

Thanks for the repo!

How does this multi-tenancy example scale in terms of tenants and namespaces?
Imagine 3 Clusters with 10 Tenants and each of these Tenants have about 5 Namespaces (still a relatively small setup).
That's a lot of YAML in my opinion.
For each namespace you need your own GitRepository & Kustomization manifest.
Of course you can generate most of the YAML with the flux CLI, but maintaining all this and having a clear overview is still hard.

Or am I missing something?

Flux tenants crd

In continuation of #27,

if we had defined clusters/tenants.yaml as a HelmRelease with the chart source set to ./tenants/base/dev-team, we could have saved generating 5 files (provided the helm controller started generating Chart.yaml, like the kustomize controller does).

Further, tenants are not really a child entity of clusters; clusters and tenants could have a many-to-many mapping.
Could the whole design possibly be approached from the perspective of tenants instead of clusters? i.e. instead of tenants having refs to clusters, we could easily have clusters having refs to tenants.

This would also reduce the number of hops one needs to follow to understand the setup.

A simple example is

# cluster/production/tenants.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  namespace: flux-system
  name: tenants
spec:
  chart:
    spec:
      chart: ./tenants
      sourceRef:
        name: flux-system
        kind: GitRepository
  interval: 1m
  values:
    dev-team: true
    qa-team: false

and

# tenants/templates/dev-team.yaml
{{- if index .Values "dev-team" }}
content of rbac.yaml
content of sync.yaml
{{- end }}

If we can modify flux to support the tenants more natively, may be a crd etc.
So that the above example tenants rbac and sync also reduces drastically

e.g

# cluster/production/tenants.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: TenantsAllocation
metadata:
  name: cluster-name
  namespace: flux-system
spec:
  interval: 1m
  tenants:
    - name: dev-team
      serviceAccountName: dev-team
      namespaces:
        - dev-a
        - dev-b
      sourceRef:
        kind: GitRepository
        name: flux-system
      path: ./tenants/services/ap-south-1
    - name: qa-team
      serviceAccountName: dev-team
      namespaces:
        - dev-a
        - qa-a
      sourceRef:
        kind: GitRepository
        name: flux-system
      path: ./tenants/services/ap-south-1

This setup should drastically reduce the number of vanilla files.

Question: Self-service multitenancy?

Is it possible to use fluxcd in the following scenario:

  1. cluster admin deploys fluxcd operator with appropriate settings
  2. a user can create a GitRepository object referencing any Git repository on GitLab/GitHub to set up CI/CD for a namespace they control, without asking the admin to pre-set-up the namespace; i.e., for many users it can work on a self-service basis.

From the docs, I think it is not possible, as there seems to be a need for the admin to set up the namespace in advance. Is that so?

Question: Common Infrastructure Pattern?

I'm working off the great example here for multi-tenant. Wondered if there's a way to source a common infrastructure.yaml for all clusters, and then each individual cluster folder would include that common file plus any specific differences.

Hopefully my diagram here explains it a bit better.

/clusters
  /foo
    infra-foo.yaml
    infrastructure.yaml  //refers to the file in /clusters/common
  /bar
    infra-bar.yaml
    infrastructure.yaml  //refers to the file in /clusters/common
  /common
    infrastructure.yaml //common set of infrastructure references.

I'm looking for a way to avoid repeating infrastructure.yaml in a dozen plus different cluster folders. Hopefully just having to modify one infrastructure.yaml file and not multiple each time I add/update/delete an infrastructure component.

kyverno flux images: autogen-verify-cosign-signature: missing digest for ghcr.io/fluxcd/helm-controller:v0.32.2

Hi,

I installed a fresh version of the kyverno helm chart 3.0.0-beta.1 and added the policy from here: https://github.com/fluxcd/flux2-multi-tenancy/blob/main/infrastructure/kyverno-policies/verify-flux-images.yaml

After a minute or so I receive the following message via a flux alert:

kustomization/flux-system.flux-system
Deployment/flux-system/helm-controller dry-run failed, error: admission webhook "validate.kyverno.svc-fail" denied the request:
policy Deployment/flux-system/helm-controller for resource violation:
verify-flux-images:
 autogen-verify-cosign-signature: missing digest for [ghcr.io/fluxcd/helm-controller:v0.32.2](http://ghcr.io/fluxcd/helm-controller:v0.32.2)
revision
main@sha1:f6efb5c770c027109fe553419f5ddc97f9b9d034

I found that if I add verifyDigest: false to the images like this:

verifyImages:
      - image: "ghcr.io/fluxcd/helm-controller:*"
        repository: "ghcr.io/fluxcd/helm-controller"
        verifyDigest: false
        roots: |
          -----BEGIN CERTIFICATE-----
          MIIB9zCCAXygAwIBAgIUALZNAPFdxHPwjeDloDwyYChAO/4wCgYIKoZIzj0EAwMw
          KjEVMBMGA1UEChMMc2lnc3RvcmUuZGV2MREwDwYDVQQDEwhzaWdzdG9yZTAeFw0y
          MTEwMDcxMzU2NTlaFw0zMTEwMDUxMzU2NThaMCoxFTATBgNVBAoTDHNpZ3N0b3Jl
          LmRldjERMA8GA1UEAxMIc2lnc3RvcmUwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAT7
          XeFT4rb3PQGwS4IajtLk3/OlnpgangaBclYpsYBr5i+4ynB07ceb3LP0OIOZdxex
          X69c5iVuyJRQ+Hz05yi+UF3uBWAlHpiS5sh0+H2GHE7SXrk1EC5m1Tr19L9gg92j
          YzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRY
          wB5fkUWlZql6zJChkyLQKsXF+jAfBgNVHSMEGDAWgBRYwB5fkUWlZql6zJChkyLQ
          KsXF+jAKBggqhkjOPQQDAwNpADBmAjEAj1nHeXZp+13NWBNa+EDsDP8G1WWg1tCM
          WP/WHPqpaVo0jhsweNFZgSs0eE7wYI4qAjEA2WB9ot98sIkoF3vZYdd3/VtWB5b9
          TNMea7Ix/stJ5TfcLLeABLE4BNJOsQ4vnBHJ
          -----END CERTIFICATE-----

I get another error:

kustomization/flux-system.flux-system
Deployment/flux-system/helm-controller dry-run failed, error: admission webhook "validate.kyverno.svc-fail" denied the request:
policy Deployment/flux-system/helm-controller for resource violation:
verify-flux-images:
 autogen-verify-cosign-signature: image is not verified
revision
main@sha1:e2d2ff8a9a43bfb23d26ae6c968ae0be217c8ac8

Any hints how I could further debug this? Is this maybe a kyverno 1.10 problem?

Complex repo setup example

TL;DR - are there any other examples of complex flux config repo with multiple clusters/apps/envs/tenants?

I really like the FluxCD documentation - guides, APIs - everything you need to get going and improve along the way. But one extremely frustrating missing piece is a complex flux repo example. Both flux2-kustomize-helm-example and flux2-multi-tenancy are just POCs to be honest - nice to try locally on a kind cluster but not really suitable to run on a live cluster imo. All examples seem to focus on either single-cluster usage or multiple but fully/almost identical clusters. I've tried to find something more complex, but not much luck so far. Well, there is bootstrap-repo but it seems to be an enhanced version of the current repo.

Don't take me wrong - it all works perfectly, but as soon as you start to scale - add clusters, apps, tenants, etc - it became quite cumbersome.

Like this repo's readme:

├── clusters
│   ├── production
│   └── staging
├── infrastructure
│   ├── kyverno
│   └── kyverno-policies
└── tenants
    ├── base
    ├── production
    └── staging

it assumes both clusters will use the exact same version of the infrastructure - for example the same values for helm releases. Of course we can add base/overlays to infrastructure but when it scales out it becomes unwieldy. We need a monitoring stack - prometheus, grafana, loki, promtail. Private/public ingress, can't live without cert-manager and external-dns, then EKS is no good without aws-load-balancer-controller and karpenter/autoscaler, add some kubernetes-dashboard, weave-gitops, etc - and here you are with 10 to 20 helm releases just to get a cluster ready for an actual app deployment. (then deploy a single-pod app and proudly watch all that machinery running a static website with an enormous 50Mb/100m resource consumption :))

Having infrastructure/base with 20 helm releases plus helm repo files isn't that bad, but in my case with multiple EKS clusters in different accounts I need different values for most helm releases (irsa roles, increased replicas for prod, etc). So it results in the infrastructure/dev folder having 20 values/secrets files and a long list of kustomize configMapGenerators.

I guess a possible solution is to use variable substitution and put all values inside the helm releases and just keep one or a few configmaps with per-cluster variables, but I've found that keeping a plain values/secrets file per helm release is really useful when maintainers release a breaking-change patch and you need to quickly run helm template -f values.yaml to see what's changed. (I really like Flux's helm chart auto-update feature but sometimes reading the alerts channel in the morning after nightly updates is no fun at all)

Second issue/inconvenience is dependencies or rather lack of dependency between helm release/kustomization at this moment (hope it will be possible to implement this soon/ever, subscribed to existing issue already). For example, in order to deploy cert-manager from helm release and cluster-issuer from kustomization I need to wrap hr into kustomization and then set dependency between two ks resources. So deploying from a single large folder full of helm releases and kustomzations and plain manifests is just not possible, unless you want to spend some time during cluster's bootstrap manually reconciling and suspending/resuming.

And "single folder" approach does not work well with plain/kustmized manifests either - recently tried approach with multiple apps in base folder and then single dev/kustomization.yaml deploying them all - quickly realized that one failing deployment blocks all others since wrapper Kustomization is unhealthy. Plus you'd need to suspend everything even if single deployment needs maintenance/etc.

So I ended up with "multiple proxy kustomizations, multiple components" setup which worked quite well until I got to explain it to somebody else :)

Flux-fleet repo
.
├── clusters
│   └── dev
│       ├── flux-system
│       │   ├── gotk-components.yaml
│       │   ├── gotk-sync.yaml
│       │   └── kustomization.yaml
│       ├── infrastructure
│       │   ├── cert-issuer.yaml
│       │   ├── ingress.yaml
│       │   ├── kustomization.yaml
│       │   ├── monitoring.yaml
│       │   └── system.yaml
│       └── kustomization.yaml
└── infrastructure
    ├── cert-issuer
    │   ├── base
    │   │   ├── cluster-issuer.yaml
    │   │   └── kustomization.yaml
    │   └── dev
    │       └── kustomization.yaml
    ├── ingress
    │   ├── base
    │   │   ├── hr-ingress-nginx-private.yaml
    │   │   ├── hr-ingress-nginx-public.yaml
    │   │   ├── kustomization.yaml
    │   │   ├── namespace.yaml
    │   │   └── source-ingress-nginx.yaml
    │   └── dev
    │       ├── ingress-nginx-private-values.yaml
    │       ├── ingress-nginx-public-values.yaml
    │       ├── kustomization.yaml
    │       └── kustomizeconfig.yaml
    ├── monitoring
    │   ├── base
    │   │   ├── hr-grafana.yaml
    │   │   ├── hr-kube-prometheus-stack.yaml
    │   │   ├── hr-loki.yaml
    │   │   ├── hr-promtail.yaml
    │   │   ├── kustomization.yaml
    │   │   ├── namespace.yaml
    │   │   ├── source-grafana.yaml
    │   │   └── source-prometheus-community.yaml
    │   └── dev
    │       ├── grafana-secrets.yaml
    │       ├── grafana-values.yaml
    │       ├── kube-prometheus-stack-secrets.yaml
    │       ├── kube-prometheus-stack-values.yaml
    │       ├── kustomization.yaml
    │       ├── kustomizeconfig.yaml
    │       ├── loki-values.yaml
    │       └── promtail-values.yaml
    └─── system
        ├── base
        │   ├── hr-aws-load-balancer-controller.yaml
        │   ├── hr-cert-manager.yaml
        │   ├── hr-external-dns.yaml
        │   ├── hr-metrics-server.yaml
        │   ├── kustomization.yaml
        │   ├── namespace.yaml
        │   ├── source-eks.yaml
        │   ├── source-external-dns.yaml
        │   ├── source-jetstack.yaml
        │   └── source-metrics-server.yaml
        └── dev
            ├── aws-load-balancer-controller-values.yaml
            ├── cert-manager-values.yaml
            ├── external-dns-values.yaml
            ├── kustomization.yaml
            └── kustomizeconfig.yaml

Where clusters/dev/infrastructure is a bunch of flux Kustomizations referencing the dev folders in infrastructure/, grouped roughly by purpose (monitoring together, ingress together)

Got very similar setup for tenant's apps repos as well:

Flux-tenant repo
├── apps
│   └── webpage
│       ├── base
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── kustomization.yaml
│       │   ├── service-account.yaml
│       │   └── service.yaml
│       └── dev
│           ├── ingress.yaml
│           └── kustomization.yaml
└── clusters
    └── dev
        └── apps
            ├── kustomization.yaml
            └── webpage.yaml

I guess "if it works don't touch it" but those long chains of KS => kustomization => KS => kustomization => HR are becoming hard to keep track of. Plus all of official examples seems to be completely opposite to this. I have a strong feeling I've overengineered this but can't seem to find a way to do a simpler but still flexible setup.

So I'm looking for some different example of "prod-like" complex setup tested on live clusters, anybody is willing to share? :)
Any suggestions are much appreciated.

One team with multiple namespaces access

The examples show one team with one namespace apps.
What would be the best practice to allow team dev-team permissions in multiple namespaces?
Would it be a good approach to have one ServiceAccount for the team and give the team access to different namespaces by creating a RoleBinding in those namespaces to that SA?

Question: How to get SSH public key used for source

Hi, I'm trying to follow the example using internal git repositories.

I'm currently stuck with the source created by

flux create source git dev-team \
    --namespace=apps \
    --url=https://github.com/<org>/<dev-team> \
    --branch=main \
    --export > ./tenants/base/dev-team/sync.yaml

Is there a way to query the SSH pubkey the source-controller will use to access the source, something like fluxctl identity in v1? Or do I have to provide my own key when using an internal git server?

Thanks for all the work, really like the project.

Multi-tenancy command for other repository like Bitbucket

Hi, with gitlab and github, we have bootstrap command like this:
flux bootstrap github \
    --context=staging \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/staging
But I'm using Bitbucket, so bootstrap command doesn't work. I follow this guide: https://toolkit.fluxcd.io/guides/installation/ and using command:
flux create source git flux-system \
    --url=ssh://git@<host>/<org>/<repository> \
    --ssh-key-algorithm=ecdsa \
    --ssh-ecdsa-curve=p521 \
    --branch=master \
    --interval=1m
But this command don't have --path params like bootstrap command. How I can use multi-tenancy in this case.
If you need information, just ask me.
Thanks

P.S.: My bad, I just didn't read carefully - --path belongs to the Kustomization kind, not the Git source kind.

Instructions in README does not seem to work

I've tried to follow the steps in the README in order to create a flux-managed k8s cluster.

Sadly, something goes south after a few steps. Here's what I've tried:

(Note that I'm using Gitlab rather than Github)

Creating a cluster in Kind and cloning the tutorial repo:

kind create cluster --name stagingcluster
git clone https://github.com/fluxcd/flux2-multi-tenancy
cd ~/flux2-multi-tenancy

Env vars

export GITLAB_TOKEN=<omitted>
export GITLAB_GROUP=<omitted>
export GITLAB_REPO=multitenantflux

Bootstrapping Flux

flux bootstrap gitlab \
    --context=kind-stagingcluster \
    --owner=${GITLAB_GROUP} \
    --repository=${GITLAB_REPO} \
    --branch=main \
    --path=./clusters/staging

Verifying that Flux has created repo in Gitlab

cd ~
git clone git@gitlab.com:${GITLAB_GROUP}/${GITLAB_REPO}.git
cd ~/multitenantflux

Check that files are in place:

> tree

.
└── clusters
    └── staging
        └── flux-system
            ├── gotk-components.yaml
            ├── gotk-sync.yaml
            └── kustomization.yaml

This is where things seems to look out-of-place:

> flux get kustomizations --watch
NAME            REVISION        SUSPENDED       READY   MESSAGE
flux-system     main/a91673b    False           True    Applied revision: main/a91673b

(here i expected to see kyverno, kyverno-policies and tenants but no luck)

> flux -n apps get sources git
✗ no GitRepository objects found in apps namespace

(here i expected a tenant dev-team to be listed)
anyone knows? 🙂 🙏

Question : gotk-components.yaml repetition in multiple clusters

In the repo, why is the gotk-components.yaml file created twice, for the production and staging clusters? Is it because we are creating separate pods for the respective controllers in both the production and staging environments, to ensure that sharing common controllers (source, kustomize, helm, notification) won't be a problem?

Production
---
flux2-multi-tenancy/clusters/production/flux-system/gotk-components.yaml
flux2-multi-tenancy/clusters/production/flux-system/gotk-sync.yaml
flux2-multi-tenancy/clusters/production/flux-system/kustomization.yaml

Staging
---
flux2-multi-tenancy/clusters/staging/flux-system/gotk-components.yaml
flux2-multi-tenancy/clusters/staging/flux-system/gotk-sync.yaml
flux2-multi-tenancy/clusters/staging/flux-system/kustomization.yaml

From what I understand, that file will be identical and only the gotk-sync.yaml and kustomization.yaml files will be different, pointing to the respective folders to deploy the required resources?

Would something like below make sense

flux2-multi-tenancy/clusters/gotk-components.yaml


Production
---
flux2-multi-tenancy/clusters/production/flux-system/gotk-sync.yaml
flux2-multi-tenancy/clusters/production/flux-system/kustomization.yaml

Staging
---
flux2-multi-tenancy/clusters/staging/flux-system/gotk-sync.yaml
flux2-multi-tenancy/clusters/staging/flux-system/kustomization.yaml

Thanks

Default service account forbidden to patch namespaces

Follow the readme with my local Kind cluster, it fails with error messages:

tenants False False Namespace/apps dry-run failed, reason: Forbidden, error: namespaces "apps" is forbidden: User "system:serviceaccount:flux-system:default" cannot patch resource "namespaces" in API group "" in the namespace "apps"

A workaround is running this to create a clusterrolebinding upon seeing the error:

    kubectl create clusterrolebinding --serviceaccount="flux-system:default" -n flux-system --clusterrole="cluster-admin" my-flux-crb

Question: Multiple instances of same app in same cluster

To me it looks like the layout in this repo assumes that each instance of the app is in the same namespace but in separate clusters. We have some cases where we have multiple instances of the same app in the same cluster. Usually it is the development cluster that has some extra instances that are more stable than the CI/CD build. In some cases we have the staging version in the same cluster as dev.

My question is how this is best accomplished using flux2 multi-tenancy. For example could I somehow create multiple namespaces for the same tenant repository (one ns per app instance)? Or is there some other way to handle this that is better suited for flux2 (not using separate ns)?

I know there are HNS which seems suited for this but it also seems a bit experimental and adds another extension to manage.

Single base repository for multiple tenants

Is it possible to have a single base repository for multiple tenants?
I have multiple tenants that want to refer to the same base manifest files from one repository, i.e. having a central base repository that is referenced in all tenant repositories.
