wave-k8s / wave
Kubernetes configuration tracking controller
License: Apache License 2.0
Charts are now available via
$ helm repo add wave-k8s https://wave-k8s.github.io/wave/
$ helm install wave-k8s/wave
Proper documentation to follow.
Originally posted by @gargath in #81 (comment)
Hi,
Running "kustomize build" on example configuration don't work since it uses "../rbac" and "../manager" while kustomize (2.0.1) assumes that it is under the folder which contains the kustomization.yaml, the error is:
Error: rawResources failed to read Resources: Load from path ../manager/manager.yaml failed: security; file '../manager/manager.yaml' is not in or below '/home/wave-master/config/default'
I moved both folder under default and modify the path accordingly.
Thanks
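For reference, the resulting layout can be expressed roughly like this (a sketch; the file names follow the repo's config directory but are illustrative):

# config/default/kustomization.yaml, after moving the folders below default/
resources:
  - rbac/rbac_role.yaml
  - rbac/rbac_role_binding.yaml
  - manager/manager.yaml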
This operator is exactly what we're looking for, but our use of secrets involves explicitly pulling individual secrets into the deployment as separate envs. Wave only considers whole ConfigMap/Secret entries in the containers.envFrom and volumes.volumeSource sections, though.
For example, for the deployment snippet below, changes to EXPLICIT_A in ConfigMap config-explicit and SECRET_A in Secret secrets are not noticed.
...
env:
  # Explicit config
  - name: EXPLICIT_A
    valueFrom:
      configMapKeyRef:
        name: config-explicit
        key: EXPLICIT_A
  # Explicit secrets
  - name: SECRET_A
    valueFrom:
      secretKeyRef:
        name: secrets
        key: SECRET_A
# Global config
envFrom:
  - configMapRef:
      name: config-global
...
I've been prototyping some changes in a fork that add discovery of env entries and include only the data from the specified *KeyRef in the hash calculation. This has basically involved adding a new ConfigObject type and putting the Object inside (so Wave can still pass it to k8s), along with metadata on whether an individual field is specified and what that field is (which the hashing code can use). I have something that seems to work in a minikube test environment; a rough sketch of the idea follows below. Would you be open to me submitting a PR?
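To illustrate the approach, here is a minimal, self-contained sketch of per-key hashing (the configObject layout and names are hypothetical, not Wave's actual types):

package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// configObject pairs a ConfigMap/Secret payload with the keys the
// PodTemplate actually references. Hypothetical type for illustration;
// not Wave's real implementation.
type configObject struct {
	data    map[string]string // e.g. ConfigMap.Data
	allKeys bool              // true when referenced via envFrom or a volume
	keys    map[string]bool   // keys named by configMapKeyRef/secretKeyRef
}

// hashConfig digests only the referenced fields, so changes to
// unreferenced keys do not trigger a rollout.
func hashConfig(objs []configObject) string {
	h := sha256.New()
	for _, o := range objs {
		var ks []string
		for k := range o.data {
			if o.allKeys || o.keys[k] {
				ks = append(ks, k)
			}
		}
		sort.Strings(ks) // map order is random; sort for a stable digest
		for _, k := range ks {
			fmt.Fprintf(h, "%s=%s;", k, o.data[k])
		}
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	cm := configObject{
		data: map[string]string{"EXPLICIT_A": "1", "UNRELATED": "2"},
		keys: map[string]bool{"EXPLICIT_A": true},
	}
	// The digest changes only when EXPLICIT_A changes.
	fmt.Println(hashConfig([]configObject{cm}))
}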
I am trying to use wave to restart my StatefulSet pods on a Kubernetes secret change. One thing to highlight here is that I am using the OnDelete update strategy. I see that the secret is being updated; however, I don't see the rollout/restart happening. Here is the StatefulSet YAML for reference.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  serviceName: "nginx"
  replicas: 2
  updateStrategy:
    type: "OnDelete"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: foo
          secret:
            secretName: test
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
            - name: foo
              mountPath: "/tmp/foo"
              readOnly: true
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
(The purpose of this report is to alert pusher/wave to possible problems when pusher/wave tries to upgrade the following dependencies.)
-Latest Version: v1.7.1 (Latest commit fe7bd95 5 days ago)
-Where did you use it:
https://github.com/pusher/wave/search?q=prometheus%2Fclient_golang%2Fprometheus&unscoped_q=prometheus%2Fclient_golang%2Fprometheus
-Detail:
module github.com/prometheus/client_golang

require (
	github.com/beorn7/perks v1.0.1
	github.com/cespare/xxhash/v2 v2.1.1
	…
)

go 1.11

package prometheus

import (
	"github.com/cespare/xxhash/v2"
	…
)
This problem was introduced in prometheus/client_golang v1.2.0 (commit 9a2ab94, 16 Oct 2019). You currently use version v0.9.2. If you try to upgrade prometheus/client_golang to v1.2.0 or above, you will get an error: no package exists at "github.com/cespare/xxhash/v2".
These dependencies all added Go modules in recent versions.
They all comply with the specification "Releasing Modules for v2 or higher" in the Modules documentation. Quoting the specification:
A package that has migrated to Go Modules must include the major version in the import path to reference any v2+ modules. For example, if repo github.com/my/module migrated to Modules at version v3.x.y, then the repo should declare its module path with the MAJOR version suffix "/v3" (e.g., module github.com/my/module/v3), and its downstream projects should use "github.com/my/module/v3/mypkg" to import this repo's package.
This "github.com/my/module/v3/mypkg" is not the physical path, so earlier versions of Go plus all third-party tooling (like dep, glide, govendor, etc.) lack minimal module awareness and therefore don't handle such import paths correctly; see golang/dep#1962 and golang/dep#2139.
Note: creating a new branch is not required. If you have previously been releasing on master and would prefer to tag v3.0.0 on master, that is a viable option. (However, be aware that introducing an incompatible API change on master can cause issues for non-module users who issue a go get -u, since the go tool is not aware of semver prior to Go 1.11 or when module mode is not enabled in Go 1.11+.)
Pre-existing dependency management solutions such as dep currently can have problems consuming a v2+ module created in this way. See for example dep#1962.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher
Solution 1: Migrate to Go Modules.
Go Modules is the general trend of the ecosystem; if you want a better package-upgrade experience, migrating to Go Modules is a good choice. Migrating to modules is accompanied by the introduction of virtual paths (as discussed above): "github.com/my/module/v3/mypkg" is not the physical path, so Go versions older than 1.9.7 and 1.10.3, plus all third-party dependency management tools (like dep, glide, govendor, etc.), don't have minimal module awareness as of now and therefore don't handle such import paths correctly. Downstream projects might then be negatively affected in their builds if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or projects using third-party dependency management tools such as dep, glide, govendor…).
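For wave, the migration would amount to a go.mod along these lines (a sketch; the version pins are illustrative):

// go.mod — module path from this repo, versions illustrative
module github.com/pusher/wave

go 1.13

require github.com/prometheus/client_golang v1.2.0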
Solution 2: Keep using a dependency management tool.
If pusher/wave wants to keep using a dependency management tool (like dep, glide, govendor, etc.) and still wants to upgrade the dependencies, it can choose this fix strategy: manually download the dependencies into the vendor directory and handle compatibility there (materialize the virtual path, or delete the virtual part of the path), avoiding fetching the dependencies by virtual import paths. This may add some maintenance overhead compared to using modules.
As import paths have different meanings between projects that adopt module repos and non-module repos, materializing the virtual path is a better way to solve the issue while ensuring compatibility with downstream module users. A textbook example, provided by repo github.com/moby/moby, is here:
https://github.com/moby/moby/blob/master/VENDORING.md
https://github.com/moby/moby/blob/master/vendor.conf
In its vendor directory, github.com/moby/moby adds the /vN subdirectory to the corresponding dependencies. This will help more downstream module users work well with your package.
prometheus/client_golang has 1039 module-unaware users on GitHub, such as AndreaGreco/mqtt_sensor_exporter, seekplum/plum_exporter, arl/monitoring…
https://github.com/search?q=prometheus%2Fclient_golang+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json
You can make a choice when you meet these dependency management issues by balancing your own development schedule/mode against the effects on downstream projects.
For this issue, Solution 1 maximizes your benefits with minimal impact on your downstream projects and the ecosystem.
Do you plan to upgrade the libraries in the near future?
Hope this issue report can help you ^_^
Thank you very much for your attention.
Best regards,
Kate
startupProbes went GA in Kubernetes 1.20, but even with the latest version of Wave, when I try to create a deployment with one added, it is removed by Wave, presumably because Wave is compiled against an older version of the Kubernetes API.
e.g.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-chart
  labels:
    helm.sh/chart: my-chart-0.1.0
    app.kubernetes.io/name: my-chart
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    wave.pusher.com/update-on-config-change: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-chart
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-chart
    spec:
      securityContext: {}
      containers:
        - name: my-chart
          securityContext: {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          startupProbe:
            httpGet:
              path: /
              port: http
          resources: {}
When applied to the cluster, the resulting Deployment doesn't have the startupProbe.
I've tried wave for the first time, and it looks like there is a possible conflict with the default Kubernetes rollout when you modify a configmap/secret reference: k8s starts its rollout, and then wave makes a change and starts another rollout.
I don't think it is critical, because I can't come up with a case where it is, but maybe this can cause some unforeseen race conditions. And as far as I understand wave's logic, there is no way around such situations because it needs to update the annotation in any case?
Also, I want to add that in my particular case I need to monitor only one secret, because configmap updates are managed by another process. The ideal solution would be to configure wave to watch only a specified resource (the secret), but unfortunately there is no such feature...
I am using your controller to trigger our app to perform a rolling update when cert-manager renews a certificate. It is my preference to restrict the trigger to only occur on the particular certificate secret and not on changes to other secrets.
It looks like https://github.com/pusher/wave/blob/8bddd32d9aa5afcfb1b733df6e9ba520e0689ed4/pkg/core/hash.go#L71 only accounts for Data and not BinaryData. When I changed my configmap's BinaryData, the hash was always the same and thus the updates were not rolled out. The logs showed that the hash was updated, but it happened to be the same value. If there is a reason not to use binaryData, then changing https://github.com/pusher/wave/blob/217c9b8e0797131edbed580aa489db994e4cce45/pkg/core/handler.go#L109 to make that more obvious would help.
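To illustrate the fix, here is a minimal sketch of a digest that covers both payloads (a hypothetical helper, not Wave's actual hashing code in pkg/core/hash.go):

package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// hashPayload digests a ConfigMap's Data and BinaryData together, so
// edits to BinaryData also change the hash. Illustrative only.
func hashPayload(data map[string]string, binaryData map[string][]byte) string {
	h := sha256.New()
	var keys []string
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable order for a deterministic digest
	for _, k := range keys {
		fmt.Fprintf(h, "s:%s=%s;", k, data[k])
	}
	keys = keys[:0]
	for k := range binaryData {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(h, "b:%s=%x;", k, binaryData[k])
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := hashPayload(map[string]string{"x": "1"}, nil)
	b := hashPayload(map[string]string{"x": "1"}, map[string][]byte{"blob": {0x01}})
	fmt.Println(a != b) // true: BinaryData now affects the hash
}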
If this sounds reasonable I can work on adding a fix.
Thanks!
Add support for updating DaemonSets as well as Deployments
Saw your lightning talk, wondering what this does compared to kustomize's generator functionality? Thanks for speaking!
I tried to install wave with the following helm (v3) command:
helm upgrade \
wave \
wave \
--repo='https://wave-k8s.github.io/wave/' \
--version='2.0.0' \
--namespace='wave' \
--create-namespace \
--install \
--wait \
--wait-for-jobs \
--debug
Issue
The Helm chart installation fails because there are no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1" and no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1".
Solution
Use version rbac.authorization.k8s.io/v1 instead of rbac.authorization.k8s.io/v1beta1 for the ClusterRole and ClusterRoleBinding resources.
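In the chart templates the change would look roughly like this (a sketch; rules and subjects are elided, and the wave-wave name is taken from the error log below):

apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: wave-wave
rules: []
---
apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: wave-wave
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: wave-wave
subjects: []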
Full Error Log
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "wave-wave" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first, resource mapping not found for name: "wave-wave" namespace: "" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first]
helm.go:84: [debug] [resource mapping not found for name: "wave-wave" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first, resource mapping not found for name: "wave-wave" namespace: "" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first]
unable to build kubernetes objects from release manifest
Versions:
- Wave chart: v2.0.0
- Helm: v3.9.0
- Kubernetes (k3s): v1.23.6+k3s1
- kubectl: v1.24.1
Hi!
First of all, thanks for this project, it looks very interesting. I was starting to build something like this myself but then I found this article today https://thenewstack.io/solving-kubernetes-configuration-woes-with-a-custom-controller/ and I learned about Wave and Faros. From the article and the two projects it's unclear though if Wave will replace Faros completely or if the two will co-exist.
I'm currently looking to solve two problems:
Wave seems to cover the second approach but focuses on deployments only, while Faros covers the general case, so I started looking at the latter... but I'd rather put effort into a project that is maintained instead of jumping on something that has no future, hence my question 😄
Also, my two cents: I doubt it is possible to cover all the use cases with one single project, but Faros/Wave could either be extensible enough or become a framework/library for easily building git-first tools for Kubernetes. WDYT?
Hey all!
This is not an issue, but me reaching out to the rest of the project (such as it is).
@wonderhoss was nice enough to add me to the project after I voiced my wish to contribute.
I am not here to step on anyone's toes, but I am motivated to push this project forward.
This is my preliminary plan:
- Migrate from dep to go mod (https://www.cockroachlabs.com/blog/dep-go-modules/) ✔️
- Label issues (FeatureRequest, Bug, …) ✔️
I will start doing these things in the upcoming weeks if nobody complains first 😃 (see stepping on toes).
Cheers,
Philipp
Add support for updating CronJobs as well as Deployments
Currently it is not easy to deploy because the helm chart is not published anywhere. A simple way to do it is via github pages. You can see how k8s external secrets do it (https://github.com/godaddy/kubernetes-external-secrets#install-with-helm).
rbac.authorization.k8s.io/v1beta1 ClusterRole and ClusterRoleBinding are deprecated in v1.17+ and unavailable in v1.22+, which breaks installations on newer clusters
How can I use the wave operator so that my workloads pick up the letsencrypt certs renewed by cert-manager?
I currently deploy cert-manager using helm; if I add the annotation wave.pusher.com/update-on-config-change: "true" to the deployments, will that be sufficient for wave to restart the deployments' pods?
volumes:
  - configMap:
      defaultMode: 420
      name: kafka-jmx-configmap
      optional: false
    name: jmx-config
  - name: tls-conf
    projected:
      defaultMode: 420
      sources:
        - secret:
            name: kafka-secret
            optional: false
        - secret:
            items:
              - key: keystore.jks
                path: kafka-0-keystore.jks
              - key: truststore.jks
                path: truststore.jks
            name: kafka-0-tls-secret
            optional: false
        - secret:
            items:
              - key: keystore.jks
                path: kafka-1-keystore.jks
            name: kafka-1-tls-secret
            optional: false
        - secret:
            items:
              - key: keystore.jks
                path: kafka-2-keystore.jks
            name: kafka-2-tls-secret
            optional: false
If I trigger a cert-manager renewal of a certificate, e.g. the kafka-0-tls-secret gets updated properly, but this is not detected by wave?
Hi,
I have just tested wave with nginx-ingress and it worked for me. However, on the following page I found a way to manage this without an extra pod:
https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
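For context, the trick on that page inlines a checksum of the rendered ConfigMap into the pod template annotations at install time, roughly like this (adapted from the linked Helm docs):

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}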
How does wave differ from using checksum/config and sha256sum?
Regards
If you add a startupProbe to any deployment/pod with a wave annotation, it will be removed by wave. This is probably caused by the old version of the Kubernetes API used by wave.
Is there a reason that this project no longer cuts releases/tags? It looks like the last one was cut back in March 2021, but the images published to quay.io are based off main branch commits.
At Under Armour, we run Vault to store our secrets (like many other companies do). Do you envision an integration with a tool like Vault, where a secret update in Vault triggers a rolling update of a deployment?
I’d be happy to contribute to the project if you think Wave can watch secrets in Vault. Let me know what your thoughts on this are and how you think this can be implemented to add value to the Wave project.
I have a pod that depends on a non-shareable volume, so the new pod Wave starts never comes up.
I'd like an annotation that marks the deployment for a non-rolling update.
Possible? New feature?
https://github.com/wave-k8s/wave/blob/c3bdcfe923debc572bec313c84e457d2e592af16/config/rbac/manager_role.yaml lists configmaps and secrets multiple times. The configmaps entries have identical verbs, but the secrets entries have a different set of verbs in one of the occurrences. Compare wave/config/rbac/manager_role.yaml lines 27 to 36 in c3bdcfe with lines 134 to 145 in c3bdcfe.
Which of these is the appropriate set of verbs? Does wave need update, patch, or delete access to the secrets?
Would it be possible to get a new release including the helm chart? Installing a helm chart from master feels wrong.
Add support for updating StatefulSets as well as Deployments
Hi,
The project currently assumes that you want to apply it at the cluster-wide level. While that is fine for many cases, it isn't possible in an enterprise setting where we aren't the owners of the cluster. I suggest allowing the controller to be limited to a single namespace.
Thanks,
this is an awesome project, and I'll give it a try in my cluster. But I also have a question about how the deployments' hash annotation is initialized.
When a new deployment first arrives, it creates some pods without the hash annotation; then wave takes over the deployment's config management and adds a hash annotation to the pod template. Does this mean a newly created deployment will always do a rolling update immediately after it is created?
thanks
Hello,
the wave helm chart (latest published version is 2.0.0) uses the deprecated API rbac.authorization.k8s.io/v1beta1 for the ClusterRole and ClusterRoleBinding objects. This API is no longer served on Kubernetes 1.22 and above, so it's not possible to deploy wave with the helm chart on those clusters.
Since there are no notable changes between rbac.authorization.k8s.io/v1beta1 and rbac.authorization.k8s.io/v1, the only thing that needs to change is the apiVersion.
It would be great if we could get an updated chart published some time soon.
Thanks!
We have two sets of RBAC rules in config:
$ find . -iname "*role*.yaml"
./config/default/rbac/rbac_role.yaml
./config/default/rbac/rbac_role_binding.yaml
./config/rbac/manager_role.yaml
./config/rbac/manager_role_binding.yaml
While rbac/manager_role.yaml also contains permissions for StatefulSets and DaemonSets, the rules referenced by Kustomize in default/rbac do not. It looks like they were never synced when support for those resources was added.
There is a PR trying to tackle this in #56, but this has not been worked on for more than two months now.
I'm testing wave and it works fine when there are no issues with a container. However, when a container fails to start, I see multiple containers being created. Is this expected? Why does wave say the config changed when it did not? Is it because the pod died?
Here are the events:
0s Normal AddWatch configmap/app-init Adding watch for ConfigMap app-init
0s Normal ScalingReplicaSet deployment/mypod Scaled up replica set mypod-55bcb6f4c8 to 1
0s Normal ConfigChanged deployment/mypod Configuration hash updated to 0a0dcef966ea2af06283c6a7fd21c89e5c0469dd9c42894cad2cc1bcae1cca64
0s Normal ScalingReplicaSet deployment/mypod Scaled up replica set mypod-699c7876f5 to 1
0s Normal SuccessfulCreate replicaset/mypod-55bcb6f4c8 Created pod: mypod-55bcb6f4c8-pvcp8
0s Normal Scheduled pod/mypod-55bcb6f4c8-pvcp8 Successfully assigned myns/mypod-55bcb6f4c8-pvcp8 to ip-xx-xx-xx-xx.compute.internal
0s Normal SuccessfulCreate replicaset/mypod-699c7876f5 Created pod: mypod-699c7876f5-c5288
0s Normal Scheduled pod/mypod-699c7876f5-c5288 Successfully assigned myns/mypod-699c7876f5-c5288 to ip-xx-xx-xx-xx.compute.internal
0s Normal Pulled pod/mypod-699c7876f5-c5288 Container image "repo/app-init:0.1" already present on machine
0s Normal Created pod/mypod-699c7876f5-c5288 Created container
0s Normal Pulled pod/mypod-55bcb6f4c8-pvcp8 Container image "repo/app-init:0.1" already present on machine
0s Normal Created pod/mypod-55bcb6f4c8-pvcp8 Created container
0s Normal Started pod/mypod-699c7876f5-c5288 Started container
0s Normal Started pod/mypod-55bcb6f4c8-pvcp8 Started container
0s Normal Pulling pod/mypod-55bcb6f4c8-pvcp8 pulling image "repo/app:0.1"
0s Normal Pulling pod/mypod-699c7876f5-c5288 pulling image "repo/app:0.1"
0s Normal Pulled pod/mypod-699c7876f5-c5288 Successfully pulled image "repo/app:0.1"
0s Normal Pulled pod/mypod-55bcb6f4c8-pvcp8 Successfully pulled image "repo/app:0.1"
0s Normal Created pod/mypod-699c7876f5-c5288 Created container
0s Normal Created pod/mypod-55bcb6f4c8-pvcp8 Created container
0s Warning Failed pod/mypod-55bcb6f4c8-pvcp8 Error: failed to start container "app": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \".....\": no such file or directory": unknown
0s Warning Failed pod/mypod-699c7876f5-c5288 Error: failed to start container "app": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \".....\": no such file or directory": unknown
0s Normal Pulling pod/mypod-55bcb6f4c8-pvcp8 pulling image "repo/app:0.1"
0s Normal Pulling pod/mypod-699c7876f5-c5288 pulling image "repo/app:0.1"
0s Normal Pulled pod/mypod-699c7876f5-c5288 Successfully pulled image "repo/app:0.1"
0s Normal Pulled pod/mypod-55bcb6f4c8-pvcp8 Successfully pulled image "repo/app:0.1"
0s Normal Created pod/mypod-699c7876f5-c5288 Created container
0s Normal Created pod/mypod-55bcb6f4c8-pvcp8 Created container
0s Warning Failed pod/mypod-699c7876f5-c5288 Error: failed to start container "app": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \".....\": no such file or directory": unknown
0s Warning Failed pod/mypod-55bcb6f4c8-pvcp8 Error: failed to start container "app": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \".....\": no such file or directory": unknown
Hi all
Firstly, thank you for a great tool; we are looking to implement this within our organisation shortly.
We do have one main feature request, however: would it be possible to only restart a pod within a given time frame, for example between 03:00 and 04:00? This would allow us to schedule restarts outside our core service hours and minimize downtime for our customers.
Thanks
Callum
Is it possible to scope wave to a specific configmap or secret referenced in the deployment?
Hello,
I'm trying to install Wave using the Helm chart, but I'm getting an error when replicas is set to any value, for example:
$ helm template wave-k8s/wave --set replicas="3.0" --debug
Error: template: wave/templates/deployment.yaml:23:17: executing "wave/templates/deployment.yaml" at <gt .Values.replicas 1.0>: error calling gt: incompatible types for comparison
helm.go:88: [debug] template: wave/templates/deployment.yaml:23:17: executing "wave/templates/deployment.yaml" at <gt .Values.replicas 1.0>: error calling gt: incompatible types for comparison
Is there any specific reason why https://github.com/wave-k8s/wave/blob/master/charts/wave/templates/deployment.yaml#L23 compares against 1.0 and not 1?
The weird thing is that I used it before and it worked, but now I'm getting this error.
Can anyone help me?
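One possible workaround (a sketch; it assumes you can patch the chart template, and uses Sprig's int conversion function) is to coerce the value before comparing, so Go's template comparison doesn't see mixed numeric types:

# hypothetical change to charts/wave/templates/deployment.yaml line 23
{{- if gt (int .Values.replicas) 1 }}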
I recently tried to update to Wave 0.3.0.
Since I run all pods with a restricted PodSecurityPolicy with a read-only root filesystem, Wave fails with the message:
cannot create log: open /tmp/wave.wave-1.nobody.log.INFO.*****: read-only file system
With Wave 0.2.0 this did not happen.
Two resources still use a deprecated apiVersion. rbac.authorization.k8s.io/v1beta1 needs to be changed to rbac.authorization.k8s.io/v1.
There seems to be a race condition where Wave will occasionally cause secrets to be unexpectedly deleted along with a parent deployment when only the deployment is removed via kubectl apply --prune ....
I've compiled a minimal reproduction case here, with some extra detail: https://github.com/timothyb89/wave-bug-test
I couldn't reproduce the bug with kubectl delete deployment ..., and after checking the exact API calls (via kubectl -v 10), the only difference in the actual DELETE call between kubectl delete and kubectl apply --prune is that apply sets propagationPolicy=foreground while kubectl delete uses the default, which is evidently background.
Given this, I think the issue is related to Wave's finalizer and Kubernetes' deletion propagation policies. Per the docs, with background propagation, Kubernetes should wait for the parent resource to finish deleting before removing children, so it makes sense that kubectl delete deployment ...
would never delete a secret given Wave's finalizer should have already run to completion.
On the other hand, with foreground propagation, I don't think Kubernetes ensures that parent finalizers (like Wave's) will finish before it starts removing child resources (or even explicitly does the inverse, removing children and then running the parent finalizers). It's surprising to me that secrets aren't always deleted in this case, but I guess Wave's finalizer can sometimes remove the ownerReference just in time to prevent the secret from being removed.
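For anyone reproducing this, the two propagation policies can be compared directly from kubectl (the deployment name is illustrative; kubectl 1.20+ accepts these --cascade values):

$ kubectl delete deployment my-app --cascade=background   # the default kubectl delete behaviour
$ kubectl delete deployment my-app --cascade=foreground   # mirrors the propagationPolicy sent by apply --prune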
If I have a statefulset that references a configmap or secret with existing owner references, Wave adds its own owner reference as expected. However, when the statefulset is deleted, wave removes all of the owner references.
Current contribution guideline:
(...)
Dependencies are **not** checked in so please download those separately.
Download the dependencies using [`dep`](https://github.com/golang/dep).
dep is deprecated; the dep usage in the codebase should be replaced with Go modules.

We have secrets in the cluster, and some of them are automatically updated by other apps in the cluster. We are looking to run a k8s job when a secret gets updated. Is this something wave supports?
Wave is causing an extra, short-lived replica to be added to every deployment to which the wave.pusher.com/update-on-config-change: "true" annotation is added.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    wave.pusher.com/update-on-config-change: "true"
  labels:
    app: bbox
  name: bbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bbox
  template:
    metadata:
      labels:
        app: bbox
    spec:
      containers:
        - command:
            - /bin/sh
            - -c
            - while true; do sleep 3600; done
          image: busybox
          name: busybox
Steps to replicate this:
First, install wave:
helm install wave wave-k8s/wave
Output of helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
wave wave-test 1 2021-09-16 09:46:55.862965 +0100 BST deployed wave-2.0.0 v0.5.0
Apply the yaml file above:
kubectl apply -f deployment.yaml
I now see two pods start up, one of which almost immediately enters a terminating state:
NAME READY STATUS RESTARTS AGE LABELS
bbox-57689566cf-8tc7p 1/1 Running 0 8s app=bbox,pod-template-hash=57689566cf
bbox-68d685577f-p8s89 1/1 Terminating 0 8s app=bbox,pod-template-hash=68d685577f
The pod in the Terminating state does not have the wave annotation; the Running one does.
Wave logs:
I0916 10:00:24.191390 1 handler.go:108] wave "level"=0 "msg"="Updating instance hash" "hash"="100444e91862dd77d7ebe29f050c1e9a7f357c771e1a7b7650aae27e6a3a031d" "name"="bbox" "namespace"="wave-test"
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.16", GitCommit:"7a98bb2b7c9112935387825f2fce1b7d40b76236", GitTreeState:"clean", BuildDate:"2021-02-17T11:52:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
If I increase the replica count of the deployment to 2, I get four pods starting, two of which immediately enter a terminating state; as before, the two 'extras' don't have the wave annotation.
Removing the wave.pusher.com/update-on-config-change: "true" annotation from the deployment results in a deployment with no short-lived replicas, as expected.
The finalizer on the wave-system namespace prevents wave from being cleanly and completely removed from a cluster.
Attempts to delete the wave-system namespace simply hang forever.
The Kustomize configuration for Wave executes the /bin/manager command as the entrypoint for the manager container. It looks like some recent Dockerfile changes altered the name of this binary, so attempting to deploy master or v0.3.0 using Kustomize fails.
Suggestions:
Rely on the container's default entrypoint, or reference a specific image tag to make this reproducible and avoidable in the future.
This doesn't work:
$ helm repo add wave-k8s https://wave-k8s.github.io/wave/
"wave-k8s" has been added to your repositories
$ helm install wave-k8s/wave
Error: INSTALLATION FAILED: must either provide a name or specify --generate-name
$ helm version
version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}
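With Helm 3, the install needs a release name (or --generate-name), so the command that works is, for example:

$ helm install wave wave-k8s/wave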
Hi guys,
since there has been no new release since March 2021, I am just wondering whether we have to consider security-related topics. We have updated our Kubernetes to 1.27 and wave keeps working, so we would like to keep it if we don't have to worry about security vulnerabilities.
Thank you for your work so far.
I have an initContainer on one of my deployments which reads stuff directly from a Kubernetes Secret. This doesn't currently get picked up by Wave because that Secret is not in any volumes or env/envFrom.
It would be handy if there was an optional annotation to specify extra configmaps / secrets to watch. Something like:
annotations:
  wave.pusher.com/update-on-config-change: "true"
  wave.pusher.com/extra-configmaps: "some-namespace/my-configmap"
  wave.pusher.com/extra-secrets: "some-namespace/my-secret,some-other-namespace/foo"
For now, I have worked around this by defining a Volume but not actually mounting it into any containers in the pod.
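For reference, that workaround looks roughly like this in the pod template (the names are illustrative):

volumes:
  - name: watch-only        # never mounted by any container; exists only so Wave watches the secret
    secret:
      secretName: my-secret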
While working on PR #44 I came across some problems with the Makefile and configure script.
The golangci-lint step does not properly install the linter. It follows the steps here, but the Makefile doesn't execute them properly: it attempts to install to /bin due to improper syntax here.
It can be fixed by changing the golangci-lint target to:
golangci-lint:
	@ if [ ! $$(which golangci-lint) ]; then \
		curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $$(go env GOPATH)/bin v1.15.0; \
	fi
The install-kubebuilder-tools step fails: it tries to install to /usr/local/, which requires elevated privileges. The setup instructions here use sudo, and also add /usr/local/kubebuilder/bin to $PATH.
Working from a Mac, I needed a few extra steps:
- The configure script depends on Bash 4, while Macs ship Bash 3.2 by default (https://github.com/pusher/wave/blob/master/configure#L3). I was able to upgrade painlessly with brew update && brew install bash.
- I manually installed golangci-lint with curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.15.0 (or apply the fix above).
- I manually installed kubebuilder following the steps here.