gatekeeper / gatekeeper-operator
Archived: Use github.com/stolostron/gatekeeper-operator
Home Page: https://github.com/stolostron/gatekeeper-operator
License: Apache License 2.0
I built the bundle and index images and tested the operator on OCP. Currently, the install experience is:
Is this the experience we want to provide to customers for installing Gatekeeper?
Is it possible to create the gatekeeper CR by default?
See https://github.com/font/gatekeeper-operator/pull/9/files#r509022571 for an example
This tracks the work to add support for validating the Gatekeeper Operator CRD. Additionally, validation is required to enforce any immutability properties that are necessary. The Gatekeeper Operator should implement validation using:
A validating admission controller webhook should be considered if either:
Is this expected?
#121 introduced a way to skip CI jobs for docs-only changes using the GitHub Actions paths-ignore workflow syntax. However, this does not work for jobs that are required to pass before merging (via a branch protection rule), because the PR will block forever and never become mergeable. Once GitHub adds that feature (see https://github.community/t/feature-request-conditional-required-checks/16761 for the feature request), enable it. Otherwise, consider something like https://github.com/fkirc/skip-duplicate-actions as an alternative solution.
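For reference, the paths-ignore approach from #121 looks like the following workflow fragment (the ignored path patterns here are examples, not necessarily the repo's exact list):

```yaml
# Skip CI for documentation-only changes.
on:
  pull_request:
    paths-ignore:
      - 'docs/**'
      - '**.md'
```

The limitation described above is exactly that a required check configured this way never reports a status on a docs-only PR, so branch protection keeps the PR unmergeable.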
Currently in Kubernetes we support installing Gatekeeper into any namespace as long as that namespace is also where the operator is running.
This issue tracks the work necessary to have the operator install Gatekeeper into any namespace separate from where the operator is running.
On vanilla Kubernetes, the operator may be running in one namespace, but the user may want Gatekeeper installed to the canonical gatekeeper-system namespace, or another namespace separate from the operator's.
On OpenShift, the operator may be installed into the openshift-operators namespace, but the operator should install Gatekeeper into a canonical namespace, e.g. openshift-gatekeeper-system.
We should consider adding a namespace field to the Gatekeeper operator custom resource so the user can select the desired namespace, with the operator falling back to a sane default if it is left empty. This would require the operator to have permission to create namespaces. Is this desirable?
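A sketch of what the proposed field could look like on the Gatekeeper CR (purely illustrative; neither the field nor the defaulting behavior exists yet):

```yaml
apiVersion: operator.gatekeeper.sh/v1alpha1
kind: Gatekeeper
metadata:
  name: gatekeeper
spec:
  # Proposed field: target namespace for the Gatekeeper deployment.
  # When omitted, the operator would pick a sane default such as
  # gatekeeper-system (or openshift-gatekeeper-system on OCP).
  namespace: gatekeeper-system
```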
When the Gatekeeper CR named gatekeeper is deleted, all of the Gatekeeper-related resources should be subsequently deleted. This is essentially the reverse of the installation path.
The main differences when installing Gatekeeper on OCP are around the Security Context Constraints and the seccomp profile annotations. See https://github.com/open-policy-agent/gatekeeper#running-on-openshift-4x for reference, as well as https://github.com/redhat-cop/rego-policies/blob/b14ce1f9ec08e5eede257f0ecc5525a58cbb3a48/_test/deploy-gatekeeper.sh#L24-L58 for an older example.
Additionally, we would want a framework for installing Gatekeeper on any Kubernetes distribution, OCP being one of them.
Update the operator API to add two new options introduced in the Gatekeeper v3.3.0 Helm chart:
validatingWebhookTimeoutSeconds: 3
enableDeleteOperations: false
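One possible mapping of those chart values into the Gatekeeper CR; the field names below are hypothetical, since deciding the actual API shape is the point of this issue:

```yaml
apiVersion: operator.gatekeeper.sh/v1alpha1
kind: Gatekeeper
metadata:
  name: gatekeeper
spec:
  webhook:
    # Hypothetical fields mirroring the v3.3.0 chart values:
    # validatingWebhookTimeoutSeconds and enableDeleteOperations.
    timeoutSeconds: 3
    enableDeleteOperations: false
```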
In addition to the permissions the operator needs to run, the operator is currently also granted all the permissions needed by Gatekeeper, because the operator creates the RBAC resources for Gatekeeper itself: https://github.com/font/gatekeeper-operator/blob/3c4a16d9356ec9bf8a4d738c86b1a28104c73a86/controllers/gatekeeper_controller.go#L96-L117
Without this, Kubernetes issues a privilege escalation error.
Instead of granting all the same individual permissions that Gatekeeper needs, we could explicitly allow specifying any permission in a Role or ClusterRole by giving the operator permission to perform the escalate verb on roles or clusterroles resources.
This matches the Gatekeeper v3.3.0 helm chart.
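The escalate-based approach described above could be expressed like this in the operator's ClusterRole (a sketch, not the repo's actual manifest; escalate lets the holder create roles containing permissions it does not itself hold):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gatekeeper-operator-escalate
rules:
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - roles
      - clusterroles
    verbs:
      - escalate
```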
Today the gatekeeper CRD is defined using apiVersion: apiextensions.k8s.io/v1. However, this is not going to work with OCP 4.5, meaning it will not show up in the OperatorHub catalog on OCP 4.5 due to the way bundles are built for OCP 4.5.
From the CI/CD team:
Once we moved the ACM bundle to start including v1 instead of just v1beta1, we started getting some errors from the container verification pipeline (i.e. sanity checks after the builds complete). After talking with the CVP team, it looks like the only option is to remove all instances of v1 until we no longer need to backport our bundle (i.e. until we no longer support OCP 4.5).
This tracks the work involved in installing the Gatekeeper resources as well as being able to update them upon reconciling the Gatekeeper Operator CR.
At the time of this PR it is v3.2.1.
This PR involves importing the v3.2.1 Gatekeeper manifests, updating the static assets, and testing to make sure things work as expected.
I get the following error when the gatekeeper operator tries to install Gatekeeper on OpenShift with OLM:
2020-11-20T14:07:25.370Z ERROR controller Reconciler error {"reconcilerGroup": "operator.gatekeeper.sh", "reconcilerKind": "Gatekeeper", "controller": "gatekeeper", "name": "gatekeeper", "namespace": "gatekeeper-system", "error": "Unable to deploy Gatekeeper resources: Error attempting to create resource gatekeeper-system/gatekeeper-manager-role: roles.rbac.authorization.k8s.io \"gatekeeper-manager-role\" is forbidden: user \"system:serviceaccount:gatekeeper-system:default\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:gatekeeper-system\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"security.openshift.io\"], Resources:[\"securitycontextconstraints\"], ResourceNames:[\"anyuid\"], Verbs:[\"use\"]}", "errorVerbose": "roles.rbac.authorization.k8s.io \"gatekeeper-manager-role\" is forbidden: user \"system:serviceaccount:gatekeeper-system:default\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:gatekeeper-system\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"security.openshift.io\"], Resources:[\"securitycontextconstraints\"], ResourceNames:[\"anyuid\"], Verbs:[\"use\"]}\nError attempting to create resource 
gatekeeper-system/gatekeeper-manager-role\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).updateOrCreateResource\n\t/workspace/controllers/gatekeeper_controller.go:280\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).deployGatekeeperResources\n\t/workspace/controllers/gatekeeper_controller.go:218\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).Reconcile\n\t/workspace/controllers/gatekeeper_controller.go:179\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374\nUnable to deploy Gatekeeper 
resources\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).Reconcile\n\t/workspace/controllers/gatekeeper_controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:197
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
The CSV is stuck in the Pending state due to Policy rule not satisfied for service account. It works on OCP 4.5 and 4.6.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
alm-examples: |-
[
{
"apiVersion": "operator.gatekeeper.sh/v1alpha1",
"kind": "Gatekeeper",
"metadata": {
"name": "gatekeeper"
},
"spec": {
"audit": {
"logLevel": "INFO",
"replicas": 1
},
"image": {
"image": "docker.io/openpolicyagent/gatekeeper:v3.2.2"
},
"validatingWebhook": "Enabled",
"webhook": {
"logLevel": "INFO",
"replicas": 2
}
}
}
]
capabilities: Basic Install
olm.operatorGroup: gatekeeper-operator
olm.operatorNamespace: openshift-gatekeeper-operator
olm.targetNamespaces: ''
operators.operatorframework.io/builder: operator-sdk-v1.2.0
operators.operatorframework.io/project_layout: go.kubebuilder.io/v2
selfLink: >-
/apis/operators.coreos.com/v1alpha1/namespaces/openshift-gatekeeper-operator/clusterserviceversions/gatekeeper-operator.v0.0.1
resourceVersion: '126038'
name: gatekeeper-operator.v0.0.1
uid: 74ae41cc-dc64-480f-b2a4-20fb89efd2fe
creationTimestamp: '2021-01-06T00:40:14Z'
generation: 1
namespace: openshift-gatekeeper-operator
labels:
olm.api.f3883f973f52868e: provided
spec:
customresourcedefinitions:
owned:
- description: Gatekeeper is the Schema for the gatekeepers API
displayName: Gatekeeper
kind: Gatekeeper
name: gatekeepers.operator.gatekeeper.sh
version: v1alpha1
apiservicedefinitions: {}
keywords:
- Gatekeeper
displayName: Gatekeeper Operator
provider:
name: Red Hat
maturity: alpha
installModes:
- supported: false
type: OwnNamespace
- supported: false
type: SingleNamespace
- supported: false
type: MultiNamespace
- supported: true
type: AllNamespaces
version: 0.0.1
icon:
- base64data: ''
mediatype: ''
links:
- name: Gatekeeper Operator
url: 'https://github.com/gatekeeper/gatekeeper-operator'
install:
spec:
clusterPermissions:
- rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- get
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- config.gatekeeper.sh
resources:
- configs
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- config.gatekeeper.sh
resources:
- configs/status
verbs:
- get
- patch
- update
- apiGroups:
- constraints.gatekeeper.sh
resources:
- '*'
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- operator.gatekeeper.sh
resources:
- gatekeepers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- operator.gatekeeper.sh
resources:
- gatekeepers/finalizers
verbs:
- delete
- get
- patch
- update
- apiGroups:
- operator.gatekeeper.sh
resources:
- gatekeepers/status
verbs:
- get
- patch
- update
- apiGroups:
- policy
resources:
- podsecuritypolicies
verbs:
- create
- delete
- update
- use
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- status.gatekeeper.sh
resources:
- '*'
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- templates.gatekeeper.sh
resources:
- constrainttemplates
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- templates.gatekeeper.sh
resources:
- constrainttemplates/finalizers
verbs:
- delete
- get
- patch
- update
- apiGroups:
- templates.gatekeeper.sh
resources:
- constrainttemplates/status
verbs:
- get
- patch
- update
- apiGroups:
- ''
resources:
- namespaces
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- security.openshift.io
resourceNames:
- anyuid
resources:
- securitycontextconstraints
verbs:
- use
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
serviceAccountName: default
deployments:
- name: gatekeeper-operator-controller-manager
spec:
replicas: 1
selector:
matchLabels:
control-plane: controller-manager
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
control-plane: controller-manager
spec:
containers:
- args:
- '--secure-listen-address=0.0.0.0:8443'
- '--upstream=http://127.0.0.1:8080/'
- '--logtostderr=true'
- '--v=10'
image: 'gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0'
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
resources: {}
- args:
- '--metrics-addr=127.0.0.1:8080'
- '--enable-leader-election'
command:
- /manager
image: 'quay.io/gatekeeper/gatekeeper-operator:latest'
imagePullPolicy: Always
name: manager
resources:
limits:
cpu: 100m
memory: 30Mi
requests:
cpu: 100m
memory: 20Mi
terminationGracePeriodSeconds: 10
permissions:
- rules:
- apiGroups:
- ''
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ''
resources:
- configmaps/status
verbs:
- get
- update
- patch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ''
resources:
- secrets
- serviceaccounts
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
- roles
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
serviceAccountName: default
strategy: deployment
maintainers:
- email: [email protected]
name: Ivan Font
description: Operator for OPA Gatekeeper
status:
conditions:
- lastTransitionTime: '2021-01-06T00:40:14Z'
lastUpdateTime: '2021-01-06T00:40:14Z'
message: requirements not yet checked
phase: Pending
reason: RequirementsUnknown
- lastTransitionTime: '2021-01-06T00:40:14Z'
lastUpdateTime: '2021-01-06T00:40:14Z'
message: one or more requirements couldn't be found
phase: Pending
reason: RequirementsNotMet
lastTransitionTime: '2021-01-06T00:40:14Z'
lastUpdateTime: '2021-01-06T00:40:14Z'
message: one or more requirements couldn't be found
phase: Pending
reason: RequirementsNotMet
requirementStatus:
- group: apiextensions.k8s.io
kind: CustomResourceDefinition
message: CRD is not present
name: gatekeepers.operator.gatekeeper.sh
status: NotPresent
version: v1beta1
- dependents:
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["get","update","patch"],"apiGroups":[""],"resources":["configmaps/status"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["apps"],"resources":["deployments"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["secrets","serviceaccounts","services"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
namespaced
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["rbac.authorization.k8s.io"],"resources":["rolebindings","roles"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["get","list","watch"],"apiGroups":["*"],"resources":["*"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["admissionregistration.k8s.io"],"resources":["validatingwebhookconfigurations"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["apiextensions.k8s.io"],"resources":["customresourcedefinitions"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["config.gatekeeper.sh"],"resources":["configs"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["get","patch","update"],"apiGroups":["config.gatekeeper.sh"],"resources":["configs/status"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["constraints.gatekeeper.sh"],"resources":["*"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operator.gatekeeper.sh"],"resources":["gatekeepers"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["delete","get","patch","update"],"apiGroups":["operator.gatekeeper.sh"],"resources":["gatekeepers/finalizers"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["get","patch","update"],"apiGroups":["operator.gatekeeper.sh"],"resources":["gatekeepers/status"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","update","use"],"apiGroups":["policy"],"resources":["podsecuritypolicies"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["rbac.authorization.k8s.io"],"resources":["clusterrolebindings","clusterroles"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["status.gatekeeper.sh"],"resources":["*"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["templates.gatekeeper.sh"],"resources":["constrainttemplates"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["delete","get","patch","update"],"apiGroups":["templates.gatekeeper.sh"],"resources":["constrainttemplates/finalizers"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["get","patch","update"],"apiGroups":["templates.gatekeeper.sh"],"resources":["constrainttemplates/status"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["namespaces"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["use"],"apiGroups":["security.openshift.io"],"resources":["securitycontextconstraints"],"resourceNames":["anyuid"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
status: NotSatisfied
version: v1beta1
- group: rbac.authorization.k8s.io
kind: PolicyRule
message: >-
cluster
rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
status: NotSatisfied
version: v1beta1
group: ''
kind: ServiceAccount
message: Policy rule not satisfied for service account
name: default
status: PresentNotSatisfied
version: v1
Currently Gatekeeper only supports installing to the gatekeeper-system namespace. That's the default installation namespace specified in the Gatekeeper manifests imported from the upstream Gatekeeper repo. This issue is to try overriding the namespace in each of the Gatekeeper resources with the namespace being watched by the operator, e.g. configured using the WATCH_NAMESPACE environment variable passed via the downward API. This is because the operator supports running in any namespace and will watch for Gatekeeper CRs in that same namespace. Ideally it would be as easy as this to support installing Gatekeeper to any namespace. Otherwise, we may need to make changes to Gatekeeper itself to support this successfully.
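The downward-API wiring mentioned above is the standard operator-sdk pattern in the operator's Deployment: WATCH_NAMESPACE resolves to whatever namespace the operator pod is running in.

```yaml
env:
  - name: WATCH_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```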
This should also update the CSV as part of the bundle generation so that the ALM examples annotation includes a complete example for users to reference:
Whenever I try to change the webhook configurations, I get this error:
"level":"error","ts":1614165913.2016525,"logger":"cert-rotation","msg":"secret is not well-formed, cannot update webhook configurations","error":"Cert secret is not well-formed, missing ca.crt"
There's an occasional Ginkgo E2E test flake. The most recent is https://github.com/gatekeeper/gatekeeper-operator/runs/1646158400?check_suite_focus=true#step:13:135.
In the process of working on PR #37, I'm seeing multiple connection refused errors such as:
# output: Error from server (InternalError): error when creating "test/bats/tests/sync.yaml": Internal error occurred: failed calling webhook "validation.gatekeeper.sh": Post "https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admit?timeout=5s": dial tcp 10.96.85.246:443: connect: connection refused
...
...
# output: Error from server (InternalError): error when creating "test/bats/tests/good/no_dupe_ns.yaml": Internal error occurred: failed calling webhook "validation.gatekeeper.sh": Post "https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admit?timeout=5s": dial tcp 10.96.85.246:443: connect: connection refused
This could be a problem with the GitHub Actions runner environment, as local testing does not exhibit these networking errors.
See https://github.com/font/gatekeeper-operator/runs/1415031065?check_suite_focus=true for a failure caused by a connection refused when attempting to kubectl port-forward, which eventually leads to a timeout. See https://github.com/font/gatekeeper-operator/pull/37/checks?check_run_id=1415031065#step:7:170 for the beginning of the errors.
After skipping the port-forward test, additional failures are seen whenever we attempt to contact the Gatekeeper admission controller webhook for policy validation. See https://github.com/font/gatekeeper-operator/pull/37/checks?check_run_id=1416807087#step:7:100 and https://github.com/font/gatekeeper-operator/pull/37/checks?check_run_id=1416807087#step:7:108 for some examples.
The following may be of relevance https://dev.to/richicoder1/how-we-connect-to-kubernetes-pods-from-github-actions-1mg.
This issue captures the work needed to implement some level of CI testing using, for example, GitHub Actions. The CI tests would ideally run:
The CI testing would execute upon the creation or updating of PRs, as well as on any merges.
Originally posted by @ycao56 in #35 (comment)
Gatekeeper manifests should be synced and statically stored as generated Go code within this code base, and Operator CR fields would override the default values stored in the generated code. For this automated code generation, we should use a tool like https://github.com/go-bindata/go-bindata to turn the Gatekeeper installation manifests into static assets in Go code.
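A self-contained sketch of the Asset/AssetNames lookup shape that go-bindata generates from input files; the manifest names and contents here are placeholders, not the repo's real assets:

```go
package main

import (
	"fmt"
	"sort"
)

// _bindata mimics the lookup table go-bindata generates from the
// imported Gatekeeper manifests (names and contents are hypothetical).
var _bindata = map[string][]byte{
	"gatekeeper/audit-deployment.yaml":   []byte("apiVersion: apps/v1\nkind: Deployment\n"),
	"gatekeeper/webhook-deployment.yaml": []byte("apiVersion: apps/v1\nkind: Deployment\n"),
}

// Asset mirrors the go-bindata API: it returns the static manifest
// bytes for a given name, or an error if the asset is unknown.
func Asset(name string) ([]byte, error) {
	if b, ok := _bindata[name]; ok {
		return b, nil
	}
	return nil, fmt.Errorf("asset %s not found", name)
}

// AssetNames lists the embedded manifest names, sorted for stable output.
func AssetNames() []string {
	names := make([]string, 0, len(_bindata))
	for n := range _bindata {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	// The operator would iterate the assets, decode each manifest, apply
	// any overrides from the Gatekeeper CR, and create/update the object.
	for _, n := range AssetNames() {
		b, _ := Asset(n)
		fmt.Printf("%s: %d bytes\n", n, len(b))
	}
}
```

The generated code would be refreshed by the import-manifests tooling whenever the upstream Gatekeeper version is bumped.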
This issue tracks the work to verify that having the operator work without OLM does not have any impact when running with OLM. For example, do RBAC permissions need to be configured differently e.g. no wildcards, etc.?
Currently the Gatekeeper webhook can ignore namespaces by adding an annotation to each namespace. However, this may not be feasible for some users. This issue is to add a feature to the operator API that allows a user to request the namespaces for which Gatekeeper should be enabled or disabled, by having the operator automatically update the ValidatingWebhookConfiguration's namespaceSelector field.
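A sketch of what the operator could write into the webhook configuration; assuming an opt-out label such as Gatekeeper's exempt-namespace label, the selector would skip any namespace carrying it:

```yaml
webhooks:
  - name: validation.gatekeeper.sh
    namespaceSelector:
      matchExpressions:
        - key: admission.gatekeeper.sh/ignore
          operator: DoesNotExist
```

The operator API addition would let users supply the label keys or namespace names, and the operator would translate them into matchExpressions like the above.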
This issue captures the work needed to set up a place to host operator image(s) on quay.io with a robot account that would handle updating the following images within CI (see #2):
An image tagged master or with the commit SHA that triggered it.
Release images tagged with the version (e.g. v0.0.1) and latest.
installModes:
- supported: false
type: OwnNamespace
- supported: false
type: SingleNamespace
- supported: false
type: MultiNamespace
- supported: true
type: AllNamespaces
Currently the gatekeeper operator uses the AllNamespaces install mode, but it is not working as expected because the gatekeeper-operator needs to set ownerReferences on the Gatekeeper objects. If the Gatekeeper CR is created in a namespace different from the gatekeeper-operator's namespace, you will see the following errors:
2020-11-17T21:56:37.061Z ERROR controller Reconciler error {"reconcilerGroup": "operator.gatekeeper.sh", "reconcilerKind": "Gatekeeper", "controller": "gatekeeper", "name": "gatekeeper", "namespace": "openshift-openstack-infra", "error": "Unable to deploy Gatekeeper resources: Unable to set controller reference for gatekeeper-system/gatekeeper-webhook-server-cert: cross-namespace owner references are disallowed, owner's namespace openshift-openstack-infra, obj's namespace gatekeeper-system", "errorVerbose": "cross-namespace owner references are disallowed, owner's namespace openshift-openstack-infra, obj's namespace gatekeeper-system\nUnable to set controller reference for gatekeeper-system/gatekeeper-webhook-server-cert\ngithub.com/font/gatekeeper-operator/controllers.
So in my opinion, the OwnNamespace install mode is more appropriate.
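If that suggestion is adopted, the CSV's installModes stanza would flip accordingly (a sketch of the proposed change, not the current CSV):

```yaml
installModes:
  - supported: true
    type: OwnNamespace
  - supported: false
    type: SingleNamespace
  - supported: false
    type: MultiNamespace
  - supported: false
    type: AllNamespaces
```

With OwnNamespace, OLM scopes the operator to its own namespace, so the Gatekeeper CR and the operator always share a namespace and the cross-namespace ownerReference problem does not arise.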
Add a CONTRIBUTORS.md guide to the docs.
2020-12-17T14:04:04.046Z ERROR controller Reconciler error {"reconcilerGroup": "operator.gatekeeper.sh", "reconcilerKind": "Gatekeeper", "controller": "gatekeeper", "name": "gatekeeper", "namespace": "", "error": "Unable to deploy Gatekeeper resources: Error attempting to get resource /gatekeeper-webhook-server-cert: an empty namespace may not be set when a resource name is provided", "errorVerbose": "an empty namespace may not be set when a resource name is provided\nError attempting to get resource /gatekeeper-webhook-server-cert\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).updateOrCreateResource\n\t/workspace/controllers/gatekeeper_controller.go:258\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).deployGatekeeperResources\n\t/workspace/controllers/gatekeeper_controller.go:195\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).Reconcile\n\t/workspace/controllers/gatekeeper_controller.go:156\nsigs.k8s.io/controlle...
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:197
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
Looks like the operator is broken after #83 was merged. I am seeing the following errors in the gatekeeper operator pod:
W1217 02:38:46.628966 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
2020-12-17T02:38:46.629Z ERROR controller Reconciler error {"reconcilerGroup": "operator.gatekeeper.sh", "reconcilerKind": "Gatekeeper", "controller": "gatekeeper", "name": "gatekeeper", "namespace": "", "error": "Unable to deploy Gatekeeper resources: Error attempting to create resource /configs.config.gatekeeper.sh: customresourcedefinitions.apiextensions.k8s.io \"configs.config.gatekeeper.sh\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>", "errorVerbose": "customresourcedefinitions.apiextensions.k8s.io \"configs.config.gatekeeper.sh\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>\nError attempting to create resource /configs.config.gatekeeper.sh\ngithub.com/gatekeeper/gatekeeper-operator/controllers.(*GatekeeperReconciler).updateOrCreateResource\n\t/workspace/controllers/gatekeeper_controller.go:253\ngithub.com/gatekeeper/gatekeeper-operator/controlle...
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:197
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
When using make import-manifests, the manifest in config/gatekeeper/openshift added in #44 should also be auto-updated.
The Gatekeeper Operator will support some level of defaulting for various omitted fields that are not required in the Gatekeeper CRD. The list of optional fields is captured in the CRD API. The defaulting behavior will be done by either:
It's preferred to use option 1 wherever possible.
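Without presuming which numbered option is which, one common defaulting mechanism for operators is CRD-level OpenAPI defaults via kubebuilder markers, applied server-side by the API server. The field names and default values below are illustrative only and do not claim to match the operator's real API:

```go
package main

import "fmt"

// AuditConfig sketches CRD schema defaulting with kubebuilder markers.
// controller-gen consumes the markers at CRD-generation time; at runtime
// the API server fills in omitted fields with the declared defaults.
type AuditConfig struct {
	// Replicas defaults server-side when omitted from the CR.
	// +kubebuilder:default=1
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`

	// LogLevel likewise picks up its default from the CRD schema.
	// +kubebuilder:default=INFO
	// +optional
	LogLevel string `json:"logLevel,omitempty"`
}

func main() {
	// In a live cluster the zero values below would already have been
	// replaced by the API server before the operator sees the object.
	var c AuditConfig
	fmt.Println(c.Replicas == nil, c.LogLevel == "")
}
```

The alternative is defaulting in the reconciler or a mutating webhook, which works on any cluster but makes the effective defaults invisible in the stored object.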
Add the ability to undeploy the operator by adding a phony make target undeploy, so that make undeploy deletes the operator manifests. That is, the recipe should perform the opposite of make deploy.
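A minimal sketch of such a target, assuming the conventional operator-sdk Makefile layout (the KUSTOMIZE variable and config/default path are assumptions about this repo's Makefile):

```make
.PHONY: undeploy
undeploy: ## Delete the operator manifests; the reverse of make deploy.
	$(KUSTOMIZE) build config/default | kubectl delete -f -
```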