k8tz / k8tz

Kubernetes admission controller and a CLI tool to inject timezones into Pods and CronJobs

Home Page: http://k8tz.io

License: Apache License 2.0

Dockerfile 1.28% Makefile 3.06% Smarty 3.43% Go 92.24%
kubernetes kubernetes-controller golang go helm-charts helm-chart-repository helm-chart timezone timezones tzdata

k8tz's People

Contributors

chenhuazhong, dependabot[bot], klzsysy, kmdrn7, sisheogorath, skuffe, yonatankahana, zie619


k8tz's Issues

create single yaml file instead of helm

Hi, amazing software, it works a treat with my k3s!
I am currently testing Talos and automating deployments. Talos currently only allows direct-URL or inline YAML manifests to be applied automatically during setup/install, but k8tz can currently only be set up via Helm.
Could you provide a rendered YAML manifest that is generated on each new release?
(As I am typing this I realise a static manifest won't allow picking a timezone, but you could set the default to UTC and let the end user change it after deployment.)

EDIT: I also realise I can use helm template k8tz k8tz/k8tz --set timezone=Europe/London to generate one locally, but that means I have to regenerate the YAML every time there is a new release, whereas a manifest hosted in the repo would be really helpful!
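Until a published manifest exists, the local rendering described in the EDIT can be made reproducible by pinning a chart version. A sketch, assuming the chart repository URL from the project README (the version number below is only an example, not necessarily current):

```shell
# One-time: add the k8tz chart repository
helm repo add k8tz https://k8tz.github.io/k8tz/
helm repo update

# Render a static manifest locally; --version pins the chart so the
# output only changes when you deliberately bump the version
helm template k8tz k8tz/k8tz \
  --namespace k8tz \
  --set timezone=Europe/London \
  --version 0.16.2 > k8tz.yaml
```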

Not able to inject

Hi Team,
Great work with k8tz. We were able to install it, and it works perfectly with any newly created Pods/Deployments.
However, we already have a CronJob whose timing we want to change.
We tried the inject option you mention, but it throws the error below. Can you help me with this?
bansari [ ~ ]$ k8tz inject --strategy=hostPath test-pod.yaml > injected-test-pod.yaml
bash: k8tz: command not found
bansari [ ~ ]$ helm test k8tz
NAME: k8tz
LAST DEPLOYED: Thu Jun 8 07:20:17 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: k8tz-health-test
Last Started: Thu Jun 8 08:52:49 2023
Last Completed: Thu Jun 8 08:52:52 2023
Phase: Succeeded
bansari [ ~ ]$
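For reference, the "command not found" error is unrelated to the Helm release: `k8tz` in the first command is the standalone CLI, a separate binary from the in-cluster controller, and it has to be downloaded (or built from source) before `k8tz inject` can run. A sketch, assuming a Linux amd64 asset on the project's GitHub releases page (the asset name is an assumption; check the releases page for the exact file):

```shell
# Download the CLI from the GitHub releases page
curl -LO https://github.com/k8tz/k8tz/releases/latest/download/k8tz_linux_amd64
chmod +x k8tz_linux_amd64 && sudo mv k8tz_linux_amd64 /usr/local/bin/k8tz

# Now the inject command from the report should work, entirely offline:
k8tz inject --strategy=hostPath test-pod.yaml > injected-test-pod.yaml
```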

k8tz initContainer does not pick up the compute resources from values.yaml

When the namespace is guarded by a ResourceQuota, every container must specify its resource requirements.
I found that the initContainer injected by the k8tz webhook does not pick up the compute resources from values.yaml.
It seems the compute resources only get applied to the webhook controller Pod itself.

This leads to k8s throwing quota errors:
Error creating: pods "xxxx" is forbidden: failed quota: compute-resources: must specify requests.cpu for: k8tz; requests.memory for: k8tz
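For anyone hitting the same quota error: the injected init container would need explicit requests and limits along these lines (the numbers and image tag are illustrative only; the exact values.yaml key that should propagate them is the subject of this issue):

```yaml
initContainers:
- name: k8tz
  image: quay.io/k8tz/k8tz:0.16.2   # example tag
  resources:
    requests:
      cpu: 10m
      memory: 32Mi
    limits:
      cpu: 100m
      memory: 64Mi
```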

Installing k8tz in airgap environment

Hi

Being not very well versed in Kubernetes or in injecting timezones manually, I tested k8tz in a test environment with internet access, and it seems to work great.
However, is there any way to install k8tz in an air-gapped environment?
If yes, how do I accomplish that?
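A common approach for air-gapped clusters, sketched under the assumption that the chart exposes an image repository override (check the chart's values.yaml for the exact key; `registry.internal` and the tag are placeholders):

```shell
# On a machine with internet access: mirror the image into the internal registry
docker pull quay.io/k8tz/k8tz:0.16.2
docker tag quay.io/k8tz/k8tz:0.16.2 registry.internal/k8tz/k8tz:0.16.2
docker push registry.internal/k8tz/k8tz:0.16.2

# Fetch the chart archive once and transfer it inside the air gap
helm pull k8tz/k8tz

# Inside the air gap: install from the local archive, pointing at the mirror
helm install k8tz ./k8tz-*.tgz --set image.repository=registry.internal/k8tz/k8tz
```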

Not Working on Image busybox:1.28

I created a Pod on my Kubernetes cluster and found that k8tz does not work with the image busybox:1.28. If I change the version to busybox:1.34, it works.

Below is the YAML file I tested.

apiVersion: v1
kind: Pod
metadata:
  name: tz-test
spec:
  containers:
  - name: tz-container
    image: busybox:1.34
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh" ]
    tty: true

Should skip kube-system namespace early in MutatingWebhookConfiguration

We came across an error case that when kube-proxy and k8tz pod are scheduled to the same K8s node, creating kube-proxy pod fails as the k8s apiserver is not able to connect to k8tz webhook server. It's because the k8tz webhook Service depends on kube-proxy to write the routing rule via iptables.

This error can happen because the latest k8tz code skips injection for the kube-system namespace only in the webhook's handleFunc(), not in the MutatingWebhookConfiguration via a namespaceSelector.

cc @klzsysy

Cannot mount volume in Pod; Pod fails to start

Events:
  Type     Reason          Age               From               Message
  ----     ------          ----              ----               -------
  Normal   Scheduled       48s               default-scheduler  Successfully assigned default/nginx-deployment-66b6c48dd5-65tzk to test-multi-worker01
  Normal   AddedInterface  47s               multus             Add eth0 [10.233.92.220/32] from k8s-pod-network
  Normal   Pulled          47s               kubelet            Container image "quay.io/k8tz/k8tz:0.10.0" already present on machine
  Normal   Created         47s               kubelet            Created container k8tz
  Normal   Started         47s               kubelet            Started container k8tz
  Normal   Pulling         46s               kubelet            Pulling image "nginx:1.14.2"
  Normal   Pulled          16s               kubelet            Successfully pulled image "nginx:1.14.2" in 30.763624758s
  Normal   Created         4s (x3 over 16s)  kubelet            Created container nginx
  Warning  Failed          4s (x3 over 15s)  kubelet            Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/54ec2c69-c29f-481a-ae28-4d67a495a60f/volume-subpaths/k8tz/nginx/1" to rootfs at "/etc/localtime" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
  Normal   Pulled          4s (x2 over 15s)  kubelet            Container image "nginx:1.14.2" already present on machine
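The runtime error at the end says it tried to mount something that is not a regular file onto /etc/localtime. For comparison, the mount k8tz is expected to produce uses subPath so that a single zoneinfo file, not a whole directory, lands on /etc/localtime, roughly (timezone shown is just an example):

```yaml
volumeMounts:
- name: k8tz
  mountPath: /etc/localtime
  subPath: Asia/Jerusalem   # must resolve to a regular file inside the volume
  readOnly: true
```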

Unable to deploy helm chart to existing namespace

So I am using Terraform to deploy the k8tz Helm chart to a cluster. However, I have a bit of a chicken-and-egg problem here: since I deploy both your chart and the Docker image from a private registry, an imagePullSecret needs to exist in the same namespace so that the Pod can start up properly. Without it the Helm deployment fails, because Helm waits for the controller Deployment to become healthy.
But since you have a namespace defined in the Helm chart, it is impossible to deploy it into an existing namespace, and therefore impossible to have the secret in place at deployment time. So essentially it is impossible to deploy this successfully in a single run with a private registry.

My suggestion would be to make namespace creation optional (as proposed in PR #86, though I would go with the backward-compatible approach and not introduce a breaking change). I do understand why you have the namespace in the chart: it lets you apply the controller-namespace annotation so that k8tz does not try to inject itself. But anyone creating the namespace externally could simply make sure it is annotated, or the namespace name could be added to the admission webhook's namespaceSelector just like .Values.webhook.ignoredNamespaces. Another option would be to allow opting out of the webhook rules at the Pod level instead of only the namespace level (currently the webhook is still called for Pods with opt-out annotations and the controller merely ignores them, which would not be enough for the k8tz Pod itself) and annotate the k8tz Pod accordingly. Using an objectSelector could probably work for that as well.

[Doc Enhancement] - k8tz is tested in OpenShift 4.x

I've successfully installed k8tz on OpenShift 4.6 and OpenShift 4.9 using the helm command.
I think it is OK to remove OCP installation from the TODO items.

On OpenShift 4.9

 ❯ oc run -it ubuntu --image ubuntu
If you don't see a command prompt, try pressing enter.
root@ubuntu:/# date
Wed Nov 17 05:53:04 UTC 2021
root@ubuntu:/#
root@ubuntu:/#
root@ubuntu:/# exit
exit
Session ended, resume using 'oc attach ubuntu -c ubuntu -i -t' command when the pod is running
❯
❯
❯ helm install k8tz k8tz/k8tz --set timezone=Asia/Taipei
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/jace/work/aws_ipi/auth/kubeconfig
NAME: k8tz
LAST DEPLOYED: Wed Nov 17 13:53:21 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
❯ oc get all -n k8tz
NAME                        READY   STATUS    RESTARTS   AGE
pod/k8tz-6547949b59-5cz8x   1/1     Running   0          33s

NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/k8tz   ClusterIP   10.217.4.133   <none>        443/TCP   33s

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/k8tz   1/1     1            1           33s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/k8tz-6547949b59   1         1         1       33s
❯ oc run -it ubuntu2 --image ubuntu
If you don't see a command prompt, try pressing enter.
root@ubuntu2:/# date
Wed Nov 17 13:54:37 CST 2021
root@ubuntu2:/# exit
exit
❯ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0     True        False         33d     Cluster version is 4.9.0

On OpenShift 4.6

❯  oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.41    True        False         38h     Cluster version is 4.6.41
❯
❯
❯ oc run -it ubuntu --image ubuntu
If you don't see a command prompt, try pressing enter.
root@ubuntu:/# date
Wed Nov 17 06:18:45 UTC 2021
root@ubuntu:/#
root@ubuntu:/# exit
exit
Session ended, resume using 'oc attach ubuntu -c ubuntu -i -t' command when the pod is running
❯
❯ helm install k8tz k8tz/k8tz --set timezone=Asia/Taipei
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/jace/work/aws_ipi/auth/kubeconfig
NAME: k8tz
LAST DEPLOYED: Wed Nov 17 14:18:59 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
❯ oc get all -n k8tz
NAME                        READY   STATUS    RESTARTS   AGE
pod/k8tz-79bc68c5fc-schqf   1/1     Running   0          14s

NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/k8tz   ClusterIP   172.30.46.5   <none>        443/TCP   14s

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/k8tz   1/1     1            1           14s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/k8tz-79bc68c5fc   1         1         1       14s
❯
❯ oc run -it ubuntu2 --image ubuntu
If you don't see a command prompt, try pressing enter.
root@ubuntu2:/# date
Wed Nov 17 14:19:37 CST 2021

admission.go:62: --kubeconfig not specified.

Hello, when I run helm install k8tz k8tz/k8tz -f values.yaml, the container logs tell me:

INFO: 2022/04/19 14:56:16 server.go:67: k8tz v0.4.0 go1.17 linux/amd64
WARNING: 2022/04/19 14:56:16 admission.go:62: --kubeconfig not specified. Using the inClusterConfig. This might not work.
INFO: 2022/04/19 14:56:16 server.go:78: Listening on :8443

Could you tell me how I can solve this problem?

CronJob admission causes a runtime error

2023/07/31 09:07:08 http: panic serving 172.17.0.1:52534: runtime error: invalid memory address or nil pointer dereference
goroutine 313321 [running]:
net/http.(*conn).serve.func1()
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:1825 +0xbf
panic({0x16da100, 0x271d250})
/opt/hostedtoolcache/go/1.18.10/x64/src/runtime/panic.go:844 +0x258
github.com/k8tz/k8tz/pkg/admission.(*RequestsHandler).handleCronJobAdmissionRequest(0x1b?, 0xc0007311e0)
/home/runner/work/k8tz/k8tz/pkg/admission/admission.go:320 +0x1ce
github.com/k8tz/k8tz/pkg/admission.(*RequestsHandler).handleAdmissionReview(0xc0003c2370?, 0xc000b8db00)
/home/runner/work/k8tz/k8tz/pkg/admission/admission.go:154 +0x235
github.com/k8tz/k8tz/pkg/admission.(*RequestsHandler).handleFunc(0x0?, {0x1b60ce0, 0xc0005427e0}, 0x4edd29?)
/home/runner/work/k8tz/k8tz/pkg/admission/admission.go:110 +0x29e
net/http.HandlerFunc.ServeHTTP(0x7f74c23db3a8?, {0x1b60ce0?, 0xc0005427e0?}, 0xc000901610?)
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:2084 +0x2f
net/http.(*ServeMux).ServeHTTP(0xc0003e6407?, {0x1b60ce0, 0xc0005427e0}, 0xc0002c2800)
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:2462 +0x149
net/http.serverHandler.ServeHTTP({0x1b52d68?}, {0x1b60ce0, 0xc0005427e0}, 0xc0002c2800)
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:2916 +0x43b
net/http.(*conn).serve(0xc00015a5a0, {0x1b61578, 0xc0003e1110})
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:1966 +0x5d7
created by net/http.(*Server).Serve
/opt/hostedtoolcache/go/1.18.10/x64/src/net/http/server.go:3071 +0x4db

Getting two volume mounts inside Pod with the same name "k8tz"

volumeMounts:
- mountPath: /etc/localtime
  name: k8tz
  readOnly: true
  subPath: Asia/Kolkata
- mountPath: /usr/share/zoneinfo
  name: k8tz
  readOnly: true

volumes:
- name: k8tz
  hostPath:
    path: /usr/share/zoneinfo
    type: ""
I am getting two volume mounts inside the Pod with the same name "k8tz". This is causing a problem; the second one is not needed.
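For context, mounting the same volume twice appears to be by design rather than an accident: the first mount pins the container's default timezone as a single file selected via subPath, while the second exposes the full tzdata tree for programs that read it directly (for example when TZ is set). Annotated:

```yaml
volumeMounts:
- mountPath: /etc/localtime      # default zone: one file, selected via subPath
  name: k8tz
  readOnly: true
  subPath: Asia/Kolkata
- mountPath: /usr/share/zoneinfo # full tz database, for TZ-aware programs
  name: k8tz
  readOnly: true
```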


pod securityContext: runAsNonRoot

Events:
  Type     Reason     Age          From               Message
  ----     ------     ---          ----               -------
  Normal   Scheduled  26m          default-scheduler  Successfully assigned default/test-k8tz to node2
  Warning  Failed     (x12 over)   kubelet            Error: container has runAsNonRoot and image has non-numeric user (k8tz), cannot verify user is non-root (pod: "test-k8tz_default(4308d622-b565-4067-a243-15bb7897a89f)", container: k8tz)
  Normal   Pulled     (x94 over)   kubelet            Container image "quay.io/k8tz/k8tz:0.5.0test" already present on machine

pod status

[root@master ~]# kubectl get pod
NAME        READY   STATUS                            RESTARTS   AGE
centos      1/1     Terminating                       0          4h33m
test-k8tz   0/1     Init:CreateContainerConfigError   0          26m

pod yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-k8tz
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: nginx
    image: nginx
    env:
    - name: TZ
      value: Asia/Shanghai
    volumeMounts:
    - mountPath: /etc/localtime
      name: data
      readOnly: true
      subPath: Asia/Shanghai
    - mountPath: /usr/share/zoneinfo
      name: data
      readOnly: true
  volumes:
  - name: data
    hostPath:
      path: /usr/share/zoneinfo
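The "non-numeric user" error comes from the kubelet being unable to prove a named user is non-root. Until the image declares a numeric user, one possible workaround (an untested sketch) is to pin a numeric UID at the Pod level so the check can pass:

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000   # any non-zero UID lets the kubelet verify non-root numerically
```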

The sidecar container should be started as non-root to avoid conflicting with the Pod's securityContext.

Failed calling webhook after certificate regenerated during upgrade, but Pod not restarted

Description

During helm upgrade, the Secret holding the admission controller's certificate is regenerated, and the MutatingWebhookConfiguration's caBundle is updated accordingly.

But the admission controller Pod is still running with the old certificate, so the caBundle no longer matches it and Pod creations fail with the error:

Error from server (InternalError): Internal error occurred: failed calling webhook "admission-controller.k8tz.io": Post "https://k8tz.k8tz.svc:443/?timeout=10s": x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "k8tz.k8tz.svc")

How To Reproduce

This case can easily be reproduced with these steps:

helm install k8tz k8tz/k8tz
helm upgrade k8tz k8tz/k8tz
kubectl run test --image busybox

The last command will fail.

Disable copy command logs

This project is awesome! A big kudos to the maintainers.

Anyway, I'm trying to decrease the initContainer's verbosity, but I can only find a variable to increase it.
How do I set the log level to error only?

failed to lookup pod's namespace (default/ntp2): Unauthorized

root@iZj6ce0h880med3ug4nr1mZ:~# kubectl run ntp2 --image=docker.io/geoffh1977/chrony --env ALLOW_CIDR=0.0.0.0/0 -n default
Error from server: admission webhook "admission-controller.k8tz.io" denied the request: failed to lookup generator, error: failed to lookup pod's namespace (default/ntp2): Unauthorized

inject into running resources

Hi, brilliant software, works a treat! I even got it working automatically with Talos!
(I had to change failurePolicy to Ignore, otherwise the cluster wouldn't start up at all.)

But anyway... is it possible to inject the timezone into running Pods, or only when a Pod is created?
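Mutating admission webhooks only see objects at creation time, so already-running Pods are not modified in place; the usual way to get them injected is to recreate them so they pass through the webhook again. A sketch (resource names are placeholders):

```shell
# Recreating the Pods sends them back through the mutating webhook
kubectl rollout restart deployment my-app   # for Deployment-managed pods
kubectl delete pod my-standalone-pod        # bare pods must be recreated manually
```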

Daemonset chart option

It would be nice if you added a values option to use a DaemonSet instead of a Deployment.

Thank you in advance

k8tz sidecar container: pulling from private registry fails

In our project we pull images from a secured internal container registry, so we provide an imagePullSecret.

Pulling the image for the k8tz controller deployment works as expected.

Problem: Pulling the image for sidecar container usage fails.

Seems like the imagePullSecret isn't considered in that case.

ignoredNamespaces is not getting values other than "kube-system"

I'm not sure if other people are having the same issue, but I tried to exclude several namespaces from k8tz injection, so I upgraded the Helm release with the new values below:

webhook:
  failurePolicy: Fail

  tlsMinVersion: ""
  tlsCipherSuites: ""

  certManager:
    enabled: false
    secretTemplate: {}
    duration: 2160h
    renewBefore: 720h
    issuerRef:
      name: selfsigned
      kind: ClusterIssuer

  crtPEM: |

  keyPEM: |

  caBundle: |

  ignoredNamespaces:
    - kube-system
    - namespace1
    - namespace2
    - namespace3
    - namespace4

After the upgrade, helm show values shows only 'kube-system' in ignoredNamespaces; the other namespaces are not there.
Output of helm show values k8tz/k8tz:

webhook:
  failurePolicy: Fail

  tlsMinVersion: ""
  tlsCipherSuites: ""

  certManager:
    enabled: false
    secretTemplate: {}
    duration: 2160h
    renewBefore: 720h
    issuerRef:
      name: selfsigned
      kind: ClusterIssuer

  crtPEM: |

  keyPEM: |

  caBundle: |

  ignoredNamespaces:
    - kube-system

Do you have any suggestion on this? Thanks.
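One thing worth checking first: `helm show values` prints the chart's packaged defaults, not the values of an installed release, so it will show only kube-system regardless of what was passed at upgrade time. The release's effective values are visible with:

```shell
helm get values k8tz          # user-supplied overrides for the installed release
helm get values k8tz --all    # overrides merged with the chart defaults
```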

Update tzdata to 2023c

https://mm.icann.org/pipermail/tz-announce/2023-March/000079.html

This release's code and data are identical to 2023a.  In other words, 
this release reverts all changes made in 2023b other than commentary, as 
that appears to be the best of a bad set of short-notice choices for 
modeling this week's daylight saving chaos in Lebanon. (Thanks to Rany 
Hany for the heads-up about the government's announcement this week.)

Error: INSTALLATION FAILED: namespaces "k8tz" already exists

When you try to install it with --create-namespace while setting the same namespace on the command line, it fails because the Helm chart itself also tries to create the namespace.

Simon@SiMacBookPro calico % helm install k8tz k8tz/k8tz -n k8tz --create-namespace --set timezone=Europe/London 
Error: INSTALLATION FAILED: namespaces "k8tz" already exists

You should allow the user to specify which namespace it runs in, rather than creating the namespace in the chart.

EDIT: the idea with Helm is that it installs the chart into the namespace you are currently using (default by default). But when you look at the resources, the Helm release is in default while the software is running in k8tz?

SecurityContext for initContainer

Is it somehow possible to define a custom securityContext for the injected initContainer?
We want to enforce a restricted admission policy, but at the moment we can't because of k8tz's initContainers.

Timezone injection based on node annotations

First, thanks for this very useful tool!

It would be great if the k8tz.io/timezone and related annotations could also be read from the node. In our case we are running DaemonSets on IoT nodes. Each of these nodes may be located in a different timezone, so we need the Pods to somehow pick up the k8tz annotations from the node rather than from the Pod or namespace.

Annotations not working for Ingress-NGinx on MicroK8s

Hello, I ran this command:

sudo microk8s kubectl annotate pods --all --overwrite -n ingress k8tz.io/strategy=hostPath k8tz.io/timezone=America/Caracas
pod/nginx-ingress-microk8s-controller-qznfw annotated

I restarted MicroK8s because nothing happened, not even after updating the ingress ConfigMap to force a reload.

I also tried k8tz.io/strategy=initContainer, with the same result.

I am using the latest --classic version with its built-in ingress.
MicroK8s v1.26.1 revision 4595

NGINX Ingress controller
Release: v1.5.1
Build: d003aae913cc25f375deb74f898c7f3c65c06f05
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
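Note that annotating an already-running Pod has no effect by itself: the mutating webhook only runs when a Pod is created, so the annotated Pod must be recreated for injection to happen. Assuming the controller is managed by the DaemonSet implied by the Pod name above, something like this should recreate it:

```shell
# Recreate the controller pods so they pass through the admission webhook
sudo microk8s kubectl rollout restart daemonset/nginx-ingress-microk8s-controller -n ingress
```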
