diranged / oz
Oz RBAC Controller
License: Apache License 2.0
Copying labels is still a bad idea - but we should copy the annotations from the spec.template.metadata.annotations key of the source controller for the access templates.
If no spec.controllerTargetMutationConfig.defaultContainerName is supplied, we should next try the kubectl.kubernetes.io/default-container annotation (https://kubernetes.io/docs/reference/labels-annotations-taints/#kubectl-kubernetes-io-default-container) to determine the default container, before falling back to the container at index 0.
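The lookup order above can be sketched as follows. This is a minimal, stdlib-only illustration: podInfo is a stand-in for the real corev1.Pod object, and the function name is hypothetical.

```go
package main

import "fmt"

// podInfo is a simplified stand-in for the corev1.Pod fields this logic
// needs; in the controller it would operate on the real Pod object.
type podInfo struct {
	Annotations    map[string]string
	ContainerNames []string
}

const defaultContainerAnnotation = "kubectl.kubernetes.io/default-container"

// resolveDefaultContainer sketches the proposed lookup order:
//  1. an explicitly configured defaultContainerName,
//  2. the kubectl.kubernetes.io/default-container annotation,
//  3. the container at index 0.
func resolveDefaultContainer(pod podInfo, configured string) string {
	if configured != "" {
		return configured
	}
	if name := pod.Annotations[defaultContainerAnnotation]; name != "" {
		return name
	}
	return pod.ContainerNames[0]
}

func main() {
	pod := podInfo{
		Annotations:    map[string]string{defaultContainerAnnotation: "app"},
		ContainerNames: []string{"istio-proxy", "app"},
	}
	fmt.Println(resolveDefaultContainer(pod, ""))      // "app" via the annotation
	fmt.Println(resolveDefaultContainer(pod, "debug")) // explicit config wins
}
```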
We should be able to block kubectl exec <pod> -- <command ...> calls where <command> either matches or does not match a particular list. This would allow operators to allow-list or deny-list certain commands, providing more protection within the container for what is allowed.
E.g.:
apiVersion: crds.wizardofoz.co
kind: PodAccessTemplate
metadata:
  name: ...
spec:
  ...
  execAccessConfig:
    allowedCommands:
      - /opt/app/manage.py
      - /opt/app/debug.py
We receive the command data through the PodExecOptions{} resource on the webhook...
{
  "uid": "7960530b-e935-4e61-aa05-205eaa5aebe5",
  "kind": {
    "group": "",
    "version": "v1",
    "kind": "PodExecOptions"
  },
  "resource": {
    "group": "",
    "version": "v1",
    "resource": "pods"
  },
  "subResource": "exec",
  "requestKind": {
    "group": "",
    "version": "v1",
    "kind": "PodExecOptions"
  },
  "requestResource": {
    "group": "",
    "version": "v1",
    "resource": "pods"
  },
  "requestSubResource": "exec",
  "name": "example-7c7767bc7d-864tt",
  "namespace": "oz-system",
  "operation": "CONNECT",
  "userInfo": {
    "username": "kubernetes-admin",
    "groups": [
      "system:masters",
      "system:authenticated"
    ]
  },
  "object": {
    "kind": "PodExecOptions",
    "apiVersion": "v1",
    "stdin": true,
    "stdout": true,
    "tty": true,
    "container": "nginx",
    "command": [
      "uptime"
    ]
  },
  "oldObject": null,
  "dryRun": false,
  "options": null
}
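A sketch of the allow-list check against the "command" field of that payload. Matching only the first argument exactly is an assumption here; the real matching rules (globs, full argv, deny-lists) are still to be designed.

```go
package main

import "fmt"

// isCommandAllowed sketches the proposed allow-list check against the
// "command" field of the PodExecOptions payload. Exact-match on the
// first argument is an illustrative choice, not a settled design.
func isCommandAllowed(command, allowedCommands []string) bool {
	if len(command) == 0 {
		return false // nothing to evaluate; deny by default
	}
	for _, allowed := range allowedCommands {
		if command[0] == allowed {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"/opt/app/manage.py", "/opt/app/debug.py"}
	fmt.Println(isCommandAllowed([]string{"/opt/app/manage.py", "migrate"}, allowed)) // true
	fmt.Println(isCommandAllowed([]string{"uptime"}, allowed))                        // false
}
```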
We should be able to optionally block kubectl exec -ti ... sessions (which are opaque to Oz as to what is happening on the pod) for certain types of workloads. The PodExecOptions payload that we receive for a CONNECT call looks like this:
{
  "uid": "7960530b-e935-4e61-aa05-205eaa5aebe5",
  "kind": {
    "group": "",
    "version": "v1",
    "kind": "PodExecOptions"
  },
  "resource": {
    "group": "",
    "version": "v1",
    "resource": "pods"
  },
  "subResource": "exec",
  "requestKind": {
    "group": "",
    "version": "v1",
    "kind": "PodExecOptions"
  },
  "requestResource": {
    "group": "",
    "version": "v1",
    "resource": "pods"
  },
  "requestSubResource": "exec",
  "name": "example-7c7767bc7d-864tt",
  "namespace": "oz-system",
  "operation": "CONNECT",
  "userInfo": {
    "username": "kubernetes-admin",
    "groups": [
      "system:masters",
      "system:authenticated"
    ]
  },
  "object": {
    "kind": "PodExecOptions",
    "apiVersion": "v1",
    "stdin": true,
    "stdout": true,
    "tty": true,
    "container": "nginx",
    "command": [
      "uptime"
    ]
  },
  "oldObject": null,
  "dryRun": false,
  "options": null
}
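Since the payload carries a "tty" flag, the interactive-session check itself is trivial; the open design question is the template setting that opts a workload into blocking. A minimal sketch, with execRequest standing in for PodExecOptions and blockInteractive as an assumed template field:

```go
package main

import "fmt"

// execRequest is a simplified stand-in for the PodExecOptions fields we
// care about; the field names mirror the JSON payload above.
type execRequest struct {
	TTY     bool
	Command []string
}

// shouldDenyInteractive sketches the proposed policy: reject interactive
// (tty) sessions for workloads whose template opts into blocking them.
// blockInteractive is an assumed per-template setting.
func shouldDenyInteractive(req execRequest, blockInteractive bool) bool {
	return blockInteractive && req.TTY
}

func main() {
	req := execRequest{TTY: true, Command: []string{"uptime"}}
	fmt.Println(shouldDenyInteractive(req, true))  // true: deny
	fmt.Println(shouldDenyInteractive(req, false)) // false: allow
}
```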
On Templates
On Controllers
On Requests
On Pods
I'm not sure how I checked ad6f282#diff-ce11b77c8163a16c0238857e6c1528845b5246c94cd38ef65ec0ec600a1311b8R56 in... but obviously that was test code and meant to be deleted. I'll remove it, and implement the tests that would have caught that.
https://istio.slack.com/archives/C37A4KAAD/p1669246550291849
Hey... We're seeing an odd behavior when we use a custom in-house controller to spin up a Pod in a Namespace that has Istio Injection turned on. Fundamentally, our controller is taking a Deployment that works, copying out the spec.template.spec from it, and launching a fresh Pod with that PodSpec. We aren't setting any labels or annotations on the fresh pod (right now). This works totally fine for plain pods... but when we try this on pods in istio-injection=enabled namespaces, we see the istio-validation container fail to work. The errors we get imply there is something wrong with the node, but we know that isn't the case because we have plenty of other workloads on those nodes working fine:
2022-11-23T23:30:24.217367Z	info	Starting iptables validation. This check verifies that iptables rules are properly established for the network.
2022-11-23T23:30:24.217468Z	info	Listening on 127.0.0.1:15001
2022-11-23T23:30:24.217662Z	info	Listening on 127.0.0.1:15006
2022-11-23T23:30:24.217819Z	error	Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
2022-11-23T23:30:25.218219Z	error	Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
2022-11-23T23:30:26.219418Z	error	Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
2022-11-23T23:30:27.219751Z	error	Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
2022-11-23T23:30:28.219994Z	error	Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused
2022-11-23T23:30:29.217968Z	error	iptables validation failed; workload is not ready for Istio.
When using Istio CNI, this can occur if a pod is scheduled before the node is ready.
If installed with 'cni.repair.deletePods=true', this pod should automatically be deleted and retry.
Otherwise, this pod will need to be manually removed so that it is scheduled on a node with istio-cni running, allowing iptables rules to be established.
The istio-cni pod logs look strange too... they claim we don't have the annotation in place, but we have the annotation on the namespace itself:
2022-11-23T23:30:23.837295Z info cni istio-cni cmdAdd with k8s args: {CommonArgs:{IgnoreUnknown:true} IP:<nil> K8S_POD_NAME:diranged-v5njn-9474d53d K8S_POD_NAMESPACE:myns K8S_POD_INFRA_CONTAINER_ID:d30d3de86542b6dfc2a9ff4b32477c9412079c235779b47290685811bafc3f71}
2022-11-23T23:30:23.837349Z info cni Pod myns/diranged-v5njn-9474d53d excluded due to not containing sidecar annotation
Update the CLI to verify that the CRDs are installed, or handle that failure more gracefully.
It isn’t possible today to dynamically use the Kubernetes Audit Logs to monitor for exec events - managed platforms like AKS/EKS configure this for you, and typically send the logging to their own backends.
However, it should be possible to create ValidatingWebhookConfigurations that respond to exec and debug events. Initially we can take the webhook calls and emit Events onto the pods that they target, providing an auditing service.
In the future, we should be able to use this to implement a second layer of security beyond the allowedGroups setting.
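As a sketch, a ValidatingWebhookConfiguration that intercepts exec calls would match CONNECT operations on the pods/exec subresource (debug events would need an additional rule, since ephemeral containers are added by updating the pod's ephemeralcontainers subresource). The metadata names, service name, and path below are illustrative, not an existing Oz manifest:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: oz-exec-auditor            # illustrative name
webhooks:
  - name: exec.wizardofoz.co       # illustrative name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore          # audit-only: don't break exec if the webhook is down
    clientConfig:
      service:
        name: oz-webhook           # illustrative
        namespace: oz-system
        path: /validate-pod-exec   # illustrative
    rules:
      - operations: ["CONNECT"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods/exec"]
```

Using failurePolicy: Ignore keeps the webhook purely observational at first; flipping it to Fail would be part of turning this into the second enforcement layer described above.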
2023-08-16T12:31:38Z INFO admission Handling CONNECT Operation on pods/diranged-nxwf5-dd2747d0 by [email protected] {"object": {"name":"diranged-nxwf5-dd2747d0","namespace":"xxx"}, "namespace": "xxx", "name": "diranged-nxwf5-dd2747d0", "resource": {"group":"","version":"v1","resource":"pods"}, "user": "[email protected]", "requestID": "0619762f-8a30-4586-b07b-cbcd8a1c0a7a", "request": "{\"uid\":\"0619762f-8a30-4586-b07b-cbcd8a1c0a7a\",\"kind\":{\"group\":\"\",\"version\":\"v1\",\"kind\":\"PodExecOptions\"},\"resource\":{\"group\":\"\",\"version\":\"v1\",\"resource\":\"pods\"},\"subResource\":\"exec\",\"requestKind\":{\"group\":\"\",\"version\":\"v1\",\"kind\":\"PodExecOptions\"},\"requestResource\":{\"group\":\"\",\"version\":\"v1\",\"resource\":\"pods\"},\"requestSubResource\":\"exec\",\"name\":\"diranged-nxwf5-dd2747d0\",\"namespace\":\"xxx\",\"operation\":\"CONNECT\",\"userInfo\":{\"username\":\"[email protected]\",\"groups\":[...]},\"object\":{\"kind\":\"PodExecOptions\",\"apiVersion\":\"v1\",\"stdin\":true,\"stdout\":true,\"tty\":true,\"container\":\"zzz\",\"command\":[\"/entrypoint.sh\"]},\"oldObject\":null,\"dryRun\":false,\"options\":null}"}
2023/08/16 12:31:38 http: panic serving 100.64.125.225:35478: runtime error: invalid memory address or nil pointer dereference
goroutine 2105 [running]:
net/http.(*conn).serve.func1()
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:1850 +0xb8
panic({0x11322c0, 0x2112370})
/opt/hostedtoolcache/go/1.19.12/x64/src/runtime/panic.go:890 +0x260
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Decoder).DecodeRaw(0x0, {{0x4004fc8280, 0x9a, 0xa0}, {0x0, 0x0}}, {0x15826a8, 0x4004f93590})
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/decode.go:76 +0xec
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Decoder).Decode(_, {{{0x4003b82450, 0x24}, {{0x0, 0x0}, {0x4003efed48, 0x2}, {0x4003efed70, 0xe}}, {{0x0, ...}, ...}, ...}}, ...)
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/decode.go:49 +0x74
github.com/diranged/oz/internal/controllers/podwatcher.(*PodWatcher).HandleExec(_, {_, _}, {{{0x4003b82450, 0x24}, {{0x0, 0x0}, {0x4003efed48, 0x2}, {0x4003efed70, ...}}, ...}})
/home/runner/work/oz/oz/internal/controllers/podwatcher/handle_exec.go:20 +0xb4
github.com/diranged/oz/internal/controllers/podwatcher.(*PodWatcher).Handle(_, {_, _}, {{{0x4003b82450, 0x24}, {{0x0, 0x0}, {0x4003efed48, 0x2}, {0x4003efed70, ...}}, ...}})
/home/runner/work/oz/oz/internal/controllers/podwatcher/handle.go:46 +0x364
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).Handle(_, {_, _}, {{{0x4003b82450, 0x24}, {{0x0, 0x0}, {0x4003efed48, 0x2}, {0x4003efed70, ...}}, ...}})
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/webhook.go:169 +0x188
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP(0x400059eaf0, {0xffff6ce27660?, 0x4004f934a0}, 0x40021dec00)
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/http.go:98 +0x960
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1({0xffff6ce27660, 0x4004f934a0}, 0x4004254f00?)
/home/runner/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:60 +0xb0
net/http.HandlerFunc.ServeHTTP(0x1591e60?, {0xffff6ce27660?, 0x4004f934a0?}, 0x3616e8?)
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:2109 +0x38
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1({0x1591e60?, 0x4000e821c0?}, 0x40021dec00)
/home/runner/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:147 +0xa0
net/http.HandlerFunc.ServeHTTP(0x4003883a58?, {0x1591e60?, 0x4000e821c0?}, 0x400385c840?)
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:2109 +0x38
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerDuration.func2({0x1591e60, 0x4000e821c0}, 0x40021dec00)
/home/runner/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:109 +0x94
net/http.HandlerFunc.ServeHTTP(0x4000e821c0?, {0x1591e60?, 0x4000e821c0?}, 0x131bf21?)
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:2109 +0x38
net/http.(*ServeMux).ServeHTTP(0x4003b82403?, {0x1591e60, 0x4000e821c0}, 0x40021dec00)
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:2487 +0x140
net/http.serverHandler.ServeHTTP({0x1584868?}, {0x1591e60, 0x4000e821c0}, 0x40021dec00)
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:2947 +0x2cc
net/http.(*conn).serve(0x4004f67860, {0x1592d58, 0x4000407020})
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:1991 +0x544
created by net/http.(*Server).Serve
/opt/hostedtoolcache/go/1.19.12/x64/src/net/http/server.go:3102 +0x43c
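The first receiver argument to (*Decoder).DecodeRaw in the trace is 0x0, i.e. the handler's decoder was nil when the request arrived. A stdlib-only sketch of the guard the handler needs (the decoder type and function names here are stand-ins for the controller-runtime equivalents):

```go
package main

import (
	"errors"
	"fmt"
)

// decoder is a stdlib-only stand-in for controller-runtime's
// *admission.Decoder, which the stack trace shows was never injected
// into the webhook handler (nil receiver at decode.go:76).
type decoder struct{}

func (d *decoder) decodeRaw(raw []byte) error {
	if d == nil {
		return errors.New("decode called on nil decoder")
	}
	return nil // real decoding elided
}

// handleExec sketches the fix: fail the admission request cleanly
// instead of panicking when the decoder is missing.
func handleExec(d *decoder, raw []byte) error {
	if d == nil {
		return errors.New("webhook decoder not initialized; denying request")
	}
	return d.decodeRaw(raw)
}

func main() {
	var d *decoder // simulates the un-injected decoder from the panic
	fmt.Println(handleExec(d, []byte(`{}`))) // returns an error instead of panicking
}
```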
We should be able to use a ValidatingWebhookConfiguration to populate more metadata into an access request about specifically who made the request (the Kubernetes "user"), whether or not that user is in the "groups" that the AccessTemplate lists, the source IP of the request, and more.
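One way that metadata could be projected onto the access request is as annotations derived from the admission request's userInfo. The annotation keys and function name below are illustrative, not an existing Oz API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// requestorAnnotations sketches mapping webhook userInfo onto an access
// request's annotations. The wizardofoz.co/* keys are hypothetical.
func requestorAnnotations(username string, groups, templateGroups []string) map[string]string {
	inAllowedGroup := false
	for _, g := range groups {
		for _, tg := range templateGroups {
			if g == tg {
				inAllowedGroup = true
			}
		}
	}
	return map[string]string{
		"wizardofoz.co/requestor":        username,
		"wizardofoz.co/requestor-groups": strings.Join(groups, ","),
		"wizardofoz.co/in-allowed-group": strconv.FormatBool(inAllowedGroup),
	}
}

func main() {
	ann := requestorAnnotations(
		"kubernetes-admin",
		[]string{"system:masters", "system:authenticated"},
		[]string{"system:masters"},
	)
	fmt.Println(ann["wizardofoz.co/in-allowed-group"]) // true
}
```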
Allow setting/overriding the annotations with the podSpecMutationConfig