jodevsa / wireguard-operator
Painless deployment of wireguard on kubernetes
License: MIT License
Hi @jodevsa / @Matthew-Beckett ,
Thanks for the amazing work here, it really makes the deployment of wireguard painless.
I have a use case in the Internet of Things world: I would like to run a WireGuard sidecar container alongside my Home Assistant (home automation hub) deployment. In my setup I have a ZigBee (an IoT communication protocol) hub that runs ESPHome (an IoT framework) with the WireGuard component; it acts as my WireGuard peer, while Home Assistant should act as the server.
My peer is on a different network than my server and exposes ZigBee-to-serial data on a custom TCP port. My server is responsible for scraping the data by connecting to that port through the VPN tunnel (much like Prometheus scrapes metrics endpoints).
The Kubernetes version of Home Assistant lacks WireGuard server support, and this feature would solve that problem. Maintaining a Home Assistant docker image with WireGuard baked in is just a headache. If you sort out this draft PR, I am sure you will bring joy to many smart home enthusiasts, and I'll be one of them. I wish I could help with this, but anything beyond YAML and Terraform is beyond my coding skills; ok, maybe a bit of Python too. Hope to hear from you soon, enjoy the rest of your weekend!
Describe the bug
I'm trying to get the operator working on EKS with an Ubuntu image (https://cloud-images.ubuntu.com/docs/aws/eks/).
After a successful installation using
kubectl apply -f https://raw.githubusercontent.com/jodevsa/wireguard-operator/0.0.3/release.yaml
namespace/wireguard-system created
customresourcedefinition.apiextensions.k8s.io/wireguardpeers.vpn.example.com created
customresourcedefinition.apiextensions.k8s.io/wireguards.vpn.example.com created
serviceaccount/wireguard-controller-manager created
role.rbac.authorization.k8s.io/wireguard-leader-election-role created
clusterrole.rbac.authorization.k8s.io/wireguard-manager-role created
clusterrole.rbac.authorization.k8s.io/wireguard-metrics-reader created
clusterrole.rbac.authorization.k8s.io/wireguard-proxy-role created
rolebinding.rbac.authorization.k8s.io/wireguard-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/wireguard-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/wireguard-proxy-rolebinding created
configmap/wireguard-manager-config created
service/wireguard-controller-manager-metrics-service created
deployment.apps/wireguard-controller-manager created
I then kubectl apply the following manifest:
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: "my-cool-vpn"
spec:
  mtu: "1380"
and got the following error in the manager logs:
2022-07-29T08:54:25.326Z ERROR controller.wireguard Reconciler error {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default", "error": "Operation cannot be fulfilled on wireguards.vpn.example.com \"my-cool-vpn\": the object has been modified; please apply your changes to the latest version and try again"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
2022-07-29T08:54:25.327Z INFO controller.wireguard loaded the following wireguard image:ghcr.io/jodevsa/wireguard-operator/wireguard:sha-64c91a661f4ae6dce41e311386cfe5b8309e816c {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-07-29T08:54:25.327Z INFO controller.wireguard my-cool-vpn {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-07-29T08:54:25.327Z INFO controller.wireguard processing my-cool-vpn {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-07-29T08:54:25.327Z INFO controller.wireguard Found ingress {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default", "ingress": null}
Am I missing something?
Is your feature request related to a problem? Please describe.
I'm currently facing a limitation with the hardcoded IP address range "10.8.0.0/24" in the VPN project. In my environment, this range conflicts with existing network configurations, which causes connectivity issues and prevents me from integrating the VPN into my network seamlessly.
Describe the solution you'd like
I would like the VPN project to support configurable IP address ranges for client assignments. This would allow users to specify a custom IP range that fits their network environment, avoiding any potential conflicts with existing setups.
Describe alternatives you've considered
I'm not sure there is an alternative solution. The hard-coded IP range seems to be a fundamental limitation that cannot be easily circumvented without the proposed functionality.
Additional context
Allowing for a configurable IP range would greatly enhance the flexibility and adaptability of the VPN project for various use cases.
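As a sketch of what a configurable range could look like internally, here is a minimal, stdlib-only Go snippet (function names are illustrative, not the operator's actual code) that enumerates peer addresses from an arbitrary CIDR instead of a hardcoded 10.8.0.0/24. The project's existing dependency github.com/korylprince/ipnetgen could serve the same purpose.

```go
package main

import (
	"fmt"
	"net"
)

// nextIP returns a copy of ip incremented by one, e.g. 10.6.0.1 -> 10.6.0.2.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// peerIPs returns up to n usable addresses from cidr, skipping the
// network address itself. cidr comes from user configuration, so a
// parse error is reported instead of panicking.
func peerIPs(cidr string, n int) ([]string, error) {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	var out []string
	for ip = nextIP(ip); ipnet.Contains(ip) && len(out) < n; ip = nextIP(ip) {
		out = append(out, ip.String())
	}
	return out, nil
}

func main() {
	// Hypothetical user-supplied range replacing the hardcoded default.
	ips, _ := peerIPs("10.6.0.0/24", 3)
	fmt.Println(ips) // [10.6.0.1 10.6.0.2 10.6.0.3]
}
```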
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
(k8s.io/api, k8s.io/apimachinery, k8s.io/client-go)
(actions/download-artifact, actions/upload-artifact)
images/agent/Dockerfile
golang 1.22
rust 1-buster
debian bookworm
images/manager/Dockerfile
golang 1.22
.github/workflows/build-images.yaml
actions/checkout v4
docker/setup-qemu-action v3
docker/setup-buildx-action v3
docker/login-action v3
docker/metadata-action v5
actions/setup-go v5
docker/build-push-action v6
docker/build-push-action v6
actions/upload-artifact v3
.github/workflows/main-branch-push-workflow.yaml
actions/checkout v4
actions/setup-node v4
actions/checkout v4
actions/setup-node v4
.github/workflows/manual-dev-release-workflow.yaml
actions/setup-go v5
actions/checkout v4
actions/upload-artifact v3
.github/workflows/pull-request-workflow.yaml
docker/setup-buildx-action v3
actions/download-artifact v3
actions/checkout v4
actions/setup-go v5
azure/setup-kubectl v4
go.mod
go 1.21
go 1.22.2
github.com/fsnotify/fsnotify v1.7.0
github.com/go-logr/logr v1.4.1
github.com/go-logr/stdr v1.2.2
github.com/korylprince/ipnetgen v1.0.1
github.com/onsi/ginkgo v1.16.5
github.com/onsi/ginkgo/v2 v2.17.2
github.com/onsi/gomega v1.33.0
github.com/vishvananda/netlink v1.1.0
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20230429144221-925a1e7659e6@925a1e7659e6
k8s.io/api v0.29.4
k8s.io/apimachinery v0.29.4
k8s.io/client-go v0.29.4
sigs.k8s.io/controller-runtime v0.15.1
sigs.k8s.io/kind v0.22.0
config/manager/kustomization.yaml
Describe the bug
I noticed the config map named wireguard-manager-config, but I cannot find it actually used anywhere.
To Reproduce
N/A
Expected behavior
N/A
Screenshots
N/A
Additional context
N/A
WIP
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
Has anyone tried using this with Cilium CNI?
My guess is that it's probably not compatible, since this project uses iptables and Cilium does most things with eBPF.
I've installed the operator on a fresh cluster with Cilium 1.13, created the Wireguard and WireguardPeer objects, got the peer config, and used the config in a separate VM, but I can't access any resources from the VM on K8s.
The Wireguard agent logs show that it failed to sync Wireguard:
2023/10/31 09:02:49 agent: "caller"={"file":"main.go","line":81} "level"=0 "msg"="Received a new state"
2023/10/31 09:02:49 agent/wireguard: "caller"={"file":"wireguard.go","line":199} "level"=0 "msg"="syncing Wireguard"
2023/10/31 09:02:49 agent: "caller"={"file":"main.go","line":84} "msg"="Error while sycncing wireguard" "error"="exit status 1"
2023/10/31 09:02:49 agent/iptables: "caller"={"file":"iptables.go","line":42} "level"=0 "msg"="syncing network policies"
Describe the bug
When creating a new WireGuard resource, the operator creates services and secrets, but doesn't create the deployment; the status is stuck at "Waiting for service to be created".
To Reproduce
Steps to reproduce the behavior:
Create Wireguard and WireguardPeer resources.
Get the Wireguard resource as YAML and check the status.message field.
Expected behavior
It should create the deployment.
Additional context
K8s version: 1.27.1
OS: Talos Linux v1.4.1
wireguard-operator version: 1.0.1
Describe the bug
I want to create a network such that peers can contact each other as if they were on the same physical network segment.
To Reproduce
Steps to reproduce the behavior:
apiVersion: v1
kind: Namespace
metadata:
  name: wireguard
---
apiVersion: vpn.wireguard-operator.io/v1alpha1
kind: Wireguard
metadata:
  name: "ponyville"
  namespace: wireguard
spec:
  mtu: "1380"
  serviceType: "NodePort"
  enableIpForwardOnPodInit: true
---
apiVersion: vpn.wireguard-operator.io/v1alpha1
kind: WireguardPeer
metadata:
  name: rainbow-dash
  namespace: wireguard
spec:
  wireguardRef: "ponyville"
---
apiVersion: vpn.wireguard-operator.io/v1alpha1
kind: WireguardPeer
metadata:
  name: rarity
  namespace: wireguard
spec:
  wireguardRef: "ponyville"
Expected behavior
Node rainbow-dash to be able to ping node rarity and connect over TCP/UDP/IP.
Additional context
Add any other context about the problem here.
Hello! I was looking for a way to use this VPN to restrict access to some Ingress resources in the same k8s cluster, for example using nginx.ingress.kubernetes.io/whitelist-source-range annotations. Is that possible?
Describe the bug
Whilst looking at the SyncLink function, I noticed there may be some duplicated code:
wireguard-operator/pkg/wireguard/wireguard.go
Lines 136 to 142 in 1d1bef4
It looks like we already get the link and set it to up later in the function.
wireguard-operator/pkg/wireguard/wireguard.go
Lines 145 to 150 in 1d1bef4
wireguard-operator/pkg/wireguard/wireguard.go
Lines 167 to 169 in 1d1bef4
I didn't remove it at the time because I don't know whether it is deliberate or can simply be removed. I also wonder whether checking a second time that the link exists makes sense: how could it possibly not exist if we just created it? Perhaps this function should be reworked to call itself again after the link is created, rather than getting the link multiple times.
To Reproduce
N/A
Expected behavior
N/A
Screenshots
N/A
Additional context
N/A
Describe the bug
I'm seeing some strange behavior where I cannot access cluster IPs or load balancer IPs through the WireGuard tunnel, even though I can see Cilium forwarding fine.
The same IPs work fine if I attach a debug container to the pod.
❯ k debug po/media-dep-5d6846c8dd-znk9x -it --image=debian
Oddly, Cilium reports this differently.
To Reproduce
Expected behavior
Should be able to access cluster IPs and LB IPs.
Screenshots
N/A
Additional context
N/A
First off, great work!
Could it be possible to manually set the DNS and endpoint address, for those cases where you would like the endpoint set to your public IP, which is not always available when no LoadBalancer is installed? That would make it easier to configure a VPN solution from outside your home.
I've found that the DNS provided by the cluster works well, but my GLI.NET travel router does not accept the DNS name wireguard-system.svc.cluster.local for resolution.
Proposed changes:
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: "wireguard-example-server"
spec:
  mtu: "1380"
  serviceType: "NodePort"
  endpoint: "142.250.74.78"
  dns: "1.1.1.1, 8.8.8.8"
This is my current solution:
# 10.0.0.2 is my Node IP address
kubectl get wireguardpeer travel-router --template={{.status.config}} -n wireguard-system | bash | sed -e "s/10.0.0.2/$(curl -s icanhazip.com)/g" | sed 's/, wireguard-system.svc.cluster.local//g' | qrencode -t ansiutf8
Describe the bug
Trying to set allowed IPs for peers of a particular VPN server.
To Reproduce
Try adding AllowedIPs: to a peer's spec:
spec:
  AllowedIPs: '10.18.3.0/24,10.17.3.0/24'
Expected behavior
The user's config is modified to set the allowed IPs.
Additional context
It fails to validate; no matter what I try, it simply won't happen.
Thanks
Controller manager returns after peer creation:
2024-08-13T12:18:07Z INFO Updating secret with new config {"controller": "wireguard", "controllerGroup": "vpn.wireguard-operator.io", "controllerKind": "Wireguard", "Wireguard": {"name":"wg","namespace":"wireguard-system"}, "namespace": "wireguard-system", "name": "wg", "reconcileID": "bb21cd81-d365-4dd2-808f-4f71c8e5b31a"}
...
2024-08-13T12:18:07Z ERROR Reconciler error {"controller": "wireguardpeer", "controllerGroup": "vpn.wireguard-operator.io", "controllerKind": "WireguardPeer", "WireguardPeer": {"name":"test","namespace":"wireguard-system"}, "namespace": "wireguard-system", "name": "test", "reconcileID": "b97c02d3-2209-4c49-b105-ee6c29cd3848", "error": "Operation cannot be fulfilled on wireguardpeers.vpn.wireguard-operator.io "test": the object has been modified; please apply your changes to the latest version and try again"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:324
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:265
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:226
...
2024-08-13T12:18:07Z INFO updated pod {"controller": "wireguard", "controllerGroup": "vpn.wireguard-operator.io", "controllerKind": "Wireguard", "Wireguard": {"name":"wg","namespace":"wireguard-system"}, "namespace": "wireguard-system", "name": "wg", "reconcileID": "bb21cd81-d365-4dd2-808f-4f71c8e5b31a"}
To Reproduce
Steps to reproduce the behavior:
apiVersion: vpn.wireguard-operator.io/v1alpha1
kind: Wireguard
metadata:
  name: wg
  namespace: wireguard-system
spec:
  serviceType: LoadBalancer
  mtu: "1280"
  address: 1.2.3.4
  dns: 10.31.0.10
---
apiVersion: vpn.wireguard-operator.io/v1alpha1
kind: WireguardPeer
metadata:
  name: test
  namespace: wireguard-system
spec:
  wireguardRef: "wg"
Currently the operator exposes the VPN using a LoadBalancer service. This might not work for everyone, as WireGuard uses UDP and not all cloud or on-premises load balancer setups support UDP.
We should allow users to customize the setup from the CRD.
Something like:
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: "my-cool-vpn"
spec:
  serviceType: NodePort
  mtu: "1380"
We are currently relying on bash in the agent to update the iptables rules. This isn't really needed and can be done without bash.
https://github.com/jodevsa/wireguard-operator/blob/main/internal/iptables/iptables.go#L27
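A minimal sketch of that direction (function names and the example rule are illustrative, not the agent's actual code): instead of rendering a rule into a bash -c string, the agent can exec the iptables binary directly with stdlib os/exec, which preserves argument boundaries and avoids shell quoting entirely. A library such as github.com/coreos/go-iptables could go further, but even plain exec removes the bash dependency.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ruleArgs builds the argv for one iptables invocation; action is
// e.g. "-C" (check) or "-A" (append).
func ruleArgs(action, table, chain string, rulespec ...string) []string {
	return append([]string{"-t", table, action, chain}, rulespec...)
}

// applyRule invokes iptables directly instead of interpolating the
// rule into a "bash -c" string, making the rule idempotent:
// it is only appended if the check (-C) says it is missing.
func applyRule(table, chain string, rulespec ...string) error {
	// -C exits 0 if the rule already exists.
	if exec.Command("iptables", ruleArgs("-C", table, chain, rulespec...)...).Run() == nil {
		return nil
	}
	add := ruleArgs("-A", table, chain, rulespec...)
	if out, err := exec.Command("iptables", add...).CombinedOutput(); err != nil {
		return fmt.Errorf("iptables %s: %v: %s", strings.Join(add, " "), err, out)
	}
	return nil
}

func main() {
	// Print the argv for a hypothetical NAT rule; applyRule would run it
	// (running it for real requires root and the iptables binary).
	fmt.Println(ruleArgs("-A", "nat", "POSTROUTING",
		"-s", "10.8.0.0/24", "-o", "eth0", "-j", "MASQUERADE"))
}
```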
Describe the bug
I noticed some bugs whilst working on the project, such as:
It would be good to have golangci-lint run to catch these automatically.
To Reproduce
N/A
Expected behavior
Fewer bugs.
Screenshots
See above.
Additional context
N/A
Is there an option to disable metrics or disable the NET_ADMIN permission?
GKE does not allow NET_ADMIN
2023-02-11T03:55:15.326Z ERROR controller.wireguard Failed to create new dep {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "staging", "namespace": "staging", "dep.Namespace": "staging", "dep.Name": "staging-dep", "error": "admission webhook \"gkepolicy.common-webhooks.networking.gke.io\" denied the request: GKE Warden rejected the request because it violates one or more constraints.\nViolations details: {\"[denied by autogke-default-linux-capabilities]\":[\"linux capability 'NET_ADMIN' on container 'metrics' not allowed; Autopilot only allows the capabilities: 'AUDIT_WRITE,CHOWN,DAC_OVERRIDE,FOWNER,FSETID,KILL,MKNOD,NET_BIND_SERVICE,NET_RAW,SETFCAP,SETGID,SETPCAP,SETUID,SYS_CHROOT,SYS_PTRACE'.\",\"linux capability 'NET_ADMIN' on container 'wireguard' not allowed; Autopilot only allows the capabilities: 'AUDIT_WRITE,CHOWN,DAC_OVERRIDE,FOWNER,FSETID,KILL,MKNOD,NET_BIND_SERVICE,NET_RAW,SETFCAP,SETGID,SETPCAP,SETUID,SYS_CHROOT,SYS_PTRACE'.\"]}\nRequested by user: 'system:serviceaccount:wireguard-system:wireguard-controller-manager', groups: 'system:serviceaccounts,system:serviceaccounts:wireguard-system,system:authenticated'."}
Is your feature request related to a problem? Please describe.
At the moment no readiness or liveness probe is defined for the wg-agent container generated by the Wireguard CRD. This makes it impossible for Kubernetes to 1. restart the deployment (which should happen when the liveness probe fails) or 2. stop sending traffic to an unready wg-agent (which should happen when the readiness probe fails).
Describe the solution you'd like
Define both liveness and readiness probes for the CRD. Probes support checking via HTTP, TCP, and gRPC; setting up an HTTP check is probably the simplest.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes
We could use tcpSocket for the livenessProbe and httpGet for the readinessProbe. Assuming port 8080 is used to expose the HTTP endpoint:
livenessProbe:
  tcpSocket:
    port: 8080
readinessProbe:
  httpGet:
    path: /health
    port: 8080
And for the /health route, we need to make sure wg-agent is ready to accept traffic on 51820/UDP.
Describe alternatives you've considered
This is the most K8S native way
Additional context
Love the work you are doing here @jodevsa. I managed to get it up and running in about 4 minutes. Seamless.
I'm not so deep into networking and WireGuard, but I imagine the reverse direction is also interesting.
Right now I can connect to the service IPs of my Kubernetes pods. I'm wondering what configuration is required to get from a pod to the client (the other way around). A use case would be a hybrid environment with APIs in the cloud but a legacy database, storage, or something else on premises. What would the routing look like?
Ideally you would be able to ping the local IP from within the cluster. Just an idea ;) Great work here!
Describe the bug
❯ k describe rs
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-vxz94" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-xz8fh" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-85956" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-bh8p7" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-ln28h" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-wjsrs" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-psmgq" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 109s replicaset-controller Error creating: pods "media-dep-878876c8d-ctlb4" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 108s replicaset-controller Error creating: pods "media-dep-878876c8d-qwstr" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
Warning FailedCreate 27s (x6 over 107s) replicaset-controller (combined from similar events): Error creating: pods "media-dep-878876c8d-fvh5h" is forbidden: violates PodSecurity "baseline:latest": non-default capabilities (containers "metrics", "agent" must not include "NET_ADMIN" in securityContext.capabilities.add)
To Reproduce
Run a Kubernetes cluster with the baseline pod security standard (e.g. Talos).
https://kubernetes.io/docs/concepts/security/pod-security-admission/
Expected behavior
Optionally use the userspace wireguard implementation.
Screenshots
N/A
Additional context
Describe the bug
The resource names are simply the name of the WireGuard resource, sometimes with a suffix like -dep or -config, which can easily lead to conflicts.
❯ k get secret -lapp=wireguard
NAME TYPE DATA AGE
media Opaque 3 6m34s
media-client Opaque 2 6m34s
❯ k get deploy -lapp=wireguard
NAME READY UP-TO-DATE AVAILABLE AGE
media-dep 0/1 0 0 6m39s
❯ k get svc -lapp=wireguard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
media-metrics-svc ClusterIP 10.104.128.120 <none> 9586/TCP 6m43s
media-svc LoadBalancer 10.106.235.249 192.168.135.12 51820:32595/UDP 6m43s
❯ k get cm -lapp=wireguard
NAME DATA AGE
media-config 0 6m47s
To Reproduce
N/A
Expected behavior
Use unique names, like other controllers do.
❯ k -n tailscale get sts
NAME READY AGE
ts-bazarr-gp98g 1/1 186d
❯ k get rs
NAME DESIRED CURRENT READY AGE
media-dep-878876c8d 1 0 0 8m20s
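As a sketch of one common convention (names here are illustrative, not the operator's actual code), child resources could get a generated random suffix the way Deployments name their ReplicaSets. The character set below mirrors the one Kubernetes' apimachinery uses for generated names:

```go
package main

import (
	"fmt"
	"math/rand"
)

// alphanums is the suffix character set used by Kubernetes name
// generators (lowercase letters and digits, minus ambiguous ones).
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

// suffixedName appends a random 5-character suffix to base, so two
// Wireguard resources (or two generations of one) can never collide
// on generated child resource names.
func suffixedName(base string) string {
	b := make([]byte, 5)
	for i := range b {
		b[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return fmt.Sprintf("%s-%s", base, b)
}

func main() {
	// Hypothetical child name for a Wireguard resource called "media".
	fmt.Println(suffixedName("media-agent"))
}
```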
Screenshots
N/A
Additional context
N/A
We should audit and ensure that contributors cannot push to main and cannot approve pull requests they have authored.
Describe the bug
apiVersion: vpn.example.com/v1alpha1
Expected behavior
apiVersion: vpn.wireguard-operator.io/v1alpha1
Additional context
I think this project is amazing, but some details like those can blur it :)
Describe the bug
New clients fail to install and work. Example client yaml:
$ cat client-test.yaml
apiVersion: vpn.example.com/v1alpha1
kind: WireguardPeer
metadata:
  name: test
  namespace: radnimax-vpn
spec:
  wireguardRef: "radnimax-vpn"
To Reproduce
Steps to reproduce the behavior:
bash: line 1: syntax error near unexpected token `newline'
bash: line 1: `<no value>'
Note: without piping to bash, the return is simply "<no value>"
Expected behavior
Should receive proper config info, which still happens with the existing clients already set up.
Additional context
Existing client configs are able to be retrieved. It is only any new ones that are added now, that are not able to be retrieved.
Is your feature request related to a problem? Please describe.
From the source, I notice that the Deployment generated from the Wireguard CRD has spec.replicas hardcoded to 1.
@jodevsa Is there a reason for it to be limited to 1? Can it scale to 2? It would be great to avoid a single point of failure by running a few more instances.
Describe the solution you'd like
Allow passing a replica count through the Wireguard CRD, via spec.replicas.
Describe alternatives you've considered
There doesn't seem to be a more intuitive approach than that.
Additional context
It would be nice to support multiple architectures. When the container image does not support the current architecture, the following error is produced and the pod will not run:
exec /manager: exec format error
Is your feature request related to a problem? Please describe.
I have implemented a WG server in a test environment, with two peers (A and B) and two test namespaces (let's say AAA and BBB). I need to apply network policies so that peer A can connect only to namespace AAA, while peer B should be able to reach only namespace BBB.
But when I look at the cluster traffic with Cilium I see that there is no difference between peer's A and B connection sources.
Both connections have the same "wireguard-system" namespace, same IP address, same labels of WG deployment.
Also, if I understood right, I am not allowed to create WireguardPeer instances in any namespace except the WG server's namespace.
Describe the solution you'd like
In case there is no existing workaround for the issue, maybe it would be useful to have some kind of label to differentiate the peers...
P.S. Thanks for the solution! It is extremely useful in my project.
Hi,
I tried looking into whether we can extend this project instead of doing our own thing over at https://github.com/kraudcloud/wga, but I might need some help getting started with development.
According to https://book.kubebuilder.io/quick-start.html#create-a-project you just run make install.
However, in minikube v1.25.2:
wireguard-operator make install
/work/kraud/wireguard-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xa0c6be]
goroutine 96 [running]:
go/types.(*Checker).handleBailout(0xc0005bca00, 0xc001af7d40)
/usr/lib/go/src/go/types/check.go:367 +0x88
panic({0xbb8e00?, 0x1297d30?})
/usr/lib/go/src/runtime/panic.go:770 +0x132
go/types.(*StdSizes).Sizeof(0x0, {0xdadaf8, 0x12a0740})
/usr/lib/go/src/go/types/sizes.go:228 +0x31e
go/types.(*Config).sizeof(...)
/usr/lib/go/src/go/types/sizes.go:333
go/types.representableConst.func1({0xdadaf8?, 0x12a0740?})
/usr/lib/go/src/go/types/const.go:76 +0x9e
go/types.representableConst({0xdb3fc0, 0x126c5e0}, 0xc0005bca00, 0x12a0740, 0xc001af74b0)
/usr/lib/go/src/go/types/const.go:92 +0x192
go/types.(*Checker).representation(0xc0005bca00, 0xc001a51500, 0x12a0740)
/usr/lib/go/src/go/types/const.go:256 +0x65
go/types.(*Checker).implicitTypeAndValue(0xc0005bca00, 0xc001a51500, {0xdadb20, 0xc00029c690})
/usr/lib/go/src/go/types/expr.go:375 +0x30d
go/types.(*Checker).assignment(0xc0005bca00, 0xc001a51500, {0xdadb20, 0xc00029c690}, {0xc85fa6, 0x14})
/usr/lib/go/src/go/types/assignments.go:52 +0x2e5
go/types.(*Checker).initConst(0xc0005bca00, 0xc0016607e0, 0xc001a51500)
/usr/lib/go/src/go/types/assignments.go:126 +0x336
go/types.(*Checker).constDecl(0xc0005bca00, 0xc0016607e0, {0xdb07b8, 0xc0018f0fa0}, {0xdb07b8, 0xc0018f0fc0}, 0x0)
/usr/lib/go/src/go/types/decl.go:490 +0x348
go/types.(*Checker).objDecl(0xc0005bca00, {0xdb9558, 0xc0016607e0}, 0x0)
/usr/lib/go/src/go/types/decl.go:191 +0xa49
go/types.(*Checker).packageObjects(0xc0005bca00)
/usr/lib/go/src/go/types/resolver.go:693 +0x4dd
go/types.(*Checker).checkFiles(0xc0005bca00, {0xc00196c000, 0x5, 0x5})
/usr/lib/go/src/go/types/check.go:408 +0x1a5
go/types.(*Checker).Files(...)
/usr/lib/go/src/go/types/check.go:372
sigs.k8s.io/controller-tools/pkg/loader.(*loader).typeCheck(0xc0002c6fc0, 0xc000fc8aa0)
/home/aep/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/loader/loader.go:283 +0x36a
sigs.k8s.io/controller-tools/pkg/loader.(*Package).NeedTypesInfo(0xc000fc8aa0)
/home/aep/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/loader/loader.go:96 +0x39
sigs.k8s.io/controller-tools/pkg/loader.(*TypeChecker).check(0xc00141c600, 0xc000fc8aa0)
/home/aep/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/loader/refs.go:263 +0x2b7
sigs.k8s.io/controller-tools/pkg/loader.(*TypeChecker).check.func1(0x53?)
/home/aep/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/loader/refs.go:257 +0x53
created by sigs.k8s.io/controller-tools/pkg/loader.(*TypeChecker).check in goroutine 26
/home/aep/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/loader/refs.go:255 +0x1c5
make: *** [Makefile:100: manifests] Error 2
Describe the bug
Per #77, it looks like the operator is creating an unused secret and it should be removed.
To Reproduce
N/A
Expected behavior
The secret should not exist.
Screenshots
N/A
Additional context
N/A
Peer1 is not connecting to wireguard
I ran the kubectl command to deploy the wireguard resources, then created the server and peer1, but for peer1 I get the following:
Status:
  Message: Waiting for my-cool-vpn to be ready
  Status: error
Events: <none>
Also when I run: kubectl get wireguardpeer peer1 --template={{.status.config}}
I get: no value
In addition, here are some logs:
2022-08-06T20:55:21.287Z INFO controller.wireguard loaded the following wireguard image:ghcr.io/jodevsa/wireguard-operator/wireguard:sha-64c91a661f4ae6dce41e311386cfe5b8309e816c {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-08-06T20:55:21.287Z INFO controller.wireguard my-cool-vpn {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-08-06T20:55:21.287Z INFO controller.wireguard processing my-cool-vpn {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default"}
2022-08-06T20:55:21.287Z INFO controller.wireguard Found ingress {"reconciler group": "vpn.example.com", "reconciler kind": "Wireguard", "name": "my-cool-vpn", "namespace": "default", "ingress": null}
2022-08-06T20:55:46.634Z INFO controller.wireguardpeer Creating a new secret {"reconciler group": "vpn.example.com", "reconciler kind": "WireguardPeer", "name": "peer1", "namespace": "default", "secret.Namespace": "default", "secret.Name": "peer1-peer"}
2022-08-06T20:55:46.640Z INFO controller.wireguardpeer Waiting for wireguard to be ready {"reconciler group": "vpn.example.com", "reconciler kind": "WireguardPeer", "name": "peer1", "namespace": "default"}
2022-08-06T20:55:46.642Z INFO controller.wireguardpeer Waiting for wireguard to be ready {"reconciler group": "vpn.example.com", "reconciler kind": "WireguardPeer", "name": "peer1", "namespace": "default"}
2022-08-06T20:55:46.650Z INFO controller.wireguardpeer Waiting for wireguard to be ready {"reconciler group": "vpn.example.com", "reconciler kind": "WireguardPeer", "name": "peer1", "namespace": "default"}
ps: I am running this setup on minikube.
Hello,
Is there a way to not send all traffic through the VPN?
Currently all traffic is being sent out through the VPN.
Regards
For example, say I have a pod running a container image of a Node.js app that uses fetch() to make HTTPS API requests. I'd like to avoid the pod needing special configuration like a SOCKS proxy or anything. Is this currently possible? If so, how?
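As general WireGuard background (not specific to this operator): which traffic enters the tunnel is decided client-side by the peer's AllowedIPs. Editing the generated client config so AllowedIPs lists only the networks you want to reach, rather than 0.0.0.0/0, gives a split tunnel; everything else keeps using the default route. The CIDRs and key below are placeholders:

```ini
[Peer]
PublicKey = <server public key>
Endpoint = <server>:51820
# Route only these example cluster CIDRs through the tunnel instead
# of 0.0.0.0/0; all other traffic stays on the default route.
AllowedIPs = 10.96.0.0/12, 10.244.0.0/16
```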
Is your feature request related to a problem? Please describe.
I'd like to schedule wireguard agent on specific nodes.
The use case is that sometimes I need to choose from which physical node my traffic goes out to the internet when using the VPN. This is especially useful for on-premises or hybrid Kubernetes cluster setups where the infrastructure is located in different datacenters but on the same cluster.
Describe the solution you'd like
I'd like to have nodeName and nodeSelector passed through to the resulting k8s agent deployment for the VPN server.
Describe alternatives you've considered
Implementing an admission controller to modify the agent deployment on the fly.
Additional context
N/A
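For illustration, the requested fields could look like this on the Wireguard resource (field names `nodeSelector` and `nodeName` are assumptions here, mirroring the pod spec fields of the same name; this is a sketch, not existing API):

```yaml
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: my-cool-vpn
spec:
  # hypothetical fields, passed through verbatim to the agent
  # Deployment's pod template (same semantics as in a pod spec)
  nodeSelector:
    topology.kubernetes.io/zone: dc-1
  nodeName: worker-03
```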
Is your feature request related to a problem? Please describe.
At the moment the Wireguard CRD does not allow passing a resources configuration.
Describe the solution you'd like
I would like to see a resources section allowed in the Wireguard CRD.
Describe alternatives you've considered
N/A, the aforementioned solution seems to be the most direct and effective
Additional context
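One possible shape, assuming the field simply mirrors the standard Kubernetes container resources schema (the field name and its placement are assumptions, not existing API):

```yaml
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: my-cool-vpn
spec:
  # hypothetical field, forwarded to the agent container's
  # resources (corev1.ResourceRequirements)
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 128Mi
```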
Like it says on the tin, I'd like to be able to set the endpoint that the wireguard agent uses to connect to a peer. For my use case, I want the agent to be able to reach out and open a tunnel proactively instead of waiting for the client to connect in to it first.
I looked quite a bit through the code, but as far as I can tell, the peers are configured in createPeersConfiguration, and there does not seem to be a way to specify an endpoint.
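As a sketch of what this could look like (the `endpoint` field and its placement in the WireguardPeer spec are assumptions, not existing API):

```yaml
apiVersion: vpn.example.com/v1alpha1
kind: WireguardPeer
metadata:
  name: peer1
spec:
  # hypothetical field: the remote host:port the agent should dial
  # proactively, rendered as Endpoint= in the peer's [Peer] section
  endpoint: 203.0.113.10:51820
```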
Is your feature request related to a problem? Please describe.
Gateway API is going to replace Ingress in the long run. Users should have an option to generate a UDPRoute for an existing gateway.
Describe the solution you'd like
Provide options to create a UDPRoute through the Wireguard CRD. When the user chooses UDPRoute, we should not generate a LoadBalancer or NodePort service, but instead a ClusterIP service.
The exact property names are TBC.
Describe alternatives you've considered
This seems to be the most straightforward approach.
Additional context
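For illustration, the UDPRoute the operator might generate against an existing Gateway (the gateway name, listener name, and backing ClusterIP service name below are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: my-cool-vpn
spec:
  parentRefs:
    - name: my-gateway          # existing Gateway (assumed name)
      sectionName: wireguard    # UDP listener on that Gateway
  rules:
    - backendRefs:
        - name: my-cool-vpn-svc # generated ClusterIP service (assumed name)
          port: 51820
```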
Is your feature request related to a problem? Please describe.
I want to access services on my NAS and my home network (192.168.0.0/24) remotely using a wireguard tunnel from my NAS to my VPS running k3s.
I have installed the operator on my VPS, successfully set up the wireguard server and the client/peer on my NAS, and established the tunnel. I can access my NAS remotely using the wireguard IP (10.8.0.XXX).
However, I can't access my NAS using the IP from my home network (192.168.0.XXX). According to the guide I used, the server configuration also needs AllowedIPs for the NAS peer so that the wireguard server knows to route packets for 192.168.0.0/24 to this peer.
As far as I can tell, there currently is no way to set this part of the configuration.
Describe the solution you'd like
I'd like to set AllowedIPs directly in the spec of the WireguardPeer.
Describe alternatives you've considered
Not sure if any other way would make sense.
Additional context
Explanation of how AllowedIPs also sets up routing: https://techoverflow.net/2021/07/09/what-does-wireguard-allowedips-actually-do/
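A sketch of what this could look like in the WireguardPeer spec (the `allowedIPs` field is an assumption; presumably the peer's own tunnel /32 would still be added automatically):

```yaml
apiVersion: vpn.example.com/v1alpha1
kind: WireguardPeer
metadata:
  name: nas
spec:
  # hypothetical field: extra networks behind this peer, rendered into
  # the server-side [Peer] AllowedIPs so 192.168.0.0/24 is routed here
  allowedIPs:
    - 192.168.0.0/24
```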
Is your feature request related to a problem? Please describe.
Currently the Wireguard CRD does not provide a way to set spec.loadBalancerIP on the Service that it generates. The spec.address field is not passed to the generated Service either. If I have a static public IP allocated outside of Kubernetes, I currently cannot assign that IP to the Service generated from the Wireguard CRD.
Describe the solution you'd like
When spec.serviceType = LoadBalancer, pass spec.address to the generated Service's spec.loadBalancerIP. No-op if spec.serviceType = NodePort.
Describe alternatives you've considered
An alternative would be a field called spec.serviceLoadBalancerIP that accepts an IP. If we do this, we will have to duplicate input.
Additional context
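Under the proposal above, the existing spec.address field would be reused, e.g. (a sketch of the proposed behavior, not what the operator currently does):

```yaml
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: my-cool-vpn
spec:
  serviceType: LoadBalancer
  # proposed: when serviceType is LoadBalancer, this address is also
  # copied to the generated Service's spec.loadBalancerIP
  address: 203.0.113.20
```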
Is your feature request related to a problem? Please describe.
It would be great if the wireguard-operator would support IPv6 - WireGuard supports this, so I guess it's just a question of whether the cluster supports it too.
Describe the solution you'd like
If there is support for IPv6 in the cluster, then the operator should be able to:
Describe alternatives you've considered
?
Additional context
Describe the bug
The metrics container currently fails on arm64 so the pod cannot start.
The container log shows the following:
thread 'main' panicked at library/alloc/src/raw_vec.rs:534:5:
capacity overflow
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
And after passing the required env variable to the pod:
thread 'main' panicked at library/alloc/src/raw_vec.rs:534:5:
capacity overflow
stack backtrace:
0: rust_begin_unwind
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/panicking.rs:597:5
1: core::panicking::panic_fmt
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/core/src/panicking.rs:72:14
2: alloc::raw_vec::capacity_overflow
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/raw_vec.rs:534:5
3: alloc::raw_vec::RawVec<T,A>::allocate_in
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/raw_vec.rs:177:27
4: alloc::raw_vec::RawVec<T,A>::with_capacity_in
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/raw_vec.rs:130:9
5: alloc::vec::Vec<T,A>::with_capacity_in
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/mod.rs:670:20
6: alloc::vec::Vec<T>::with_capacity
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/alloc/src/vec/mod.rs:479:9
7: std::sys::unix::args::imp::clone
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/sys/unix/args.rs:146:28
8: std::sys::unix::args::imp::args
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/sys/unix/args.rs:131:22
9: std::sys::unix::args::args
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/sys/unix/args.rs:19:5
10: std::env::args_os
at ./rustc/79e9716c980570bfd1f666e3b16ac583f0168962/library/std/src/env.rs:794:21
11: <core::pin::Pin<P> as core::future::future::Future>::poll
12: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
13: prometheus_wireguard_exporter::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
I also tried to compile the binary in debug mode and failed on this crate:
To Reproduce
Steps to reproduce the behavior:
kubectl apply -f https://github.com/jodevsa/wireguard-operator/releases/download/v2.0.17/release.yaml
Expected behavior
The metrics container should start.
Additional context
I also ran the container in Docker with the same result:
docker run --rm --entrypoint /usr/local/bin/prometheus_wireguard_exporter ghcr.io/jodevsa/wireguard-operator/agent:v2.0.17
If I copy the prometheus_wireguard_exporter binary from the container to the host (Ubuntu 22.04 LTS) and run it, it executes without errors, so the problem is with the Alpine image. I also switched the final image from alpine:3.18 to debian:bookworm and it ran without errors.
Possible fixes:
Is your feature request related to a problem? Please describe.
Currently the annotations added to the load balancer are hard-coded; soon you'll be able to provide a set of annotations in the spec.
However, I believe it should be possible to enable a set of default annotations for cloud providers such as AWS, GCP etc.
Describe the solution you'd like
I propose something like the following:
apiVersion: vpn.example.com/v1alpha1
kind: Wireguard
metadata:
  name: vpn
spec:
  serviceAnnotations:
    foo.bar/bar.foo: true
  providers:
    aws:
      enableDefaultAnnotations: true
This would append and merge the default annotations required by AWS for example and also any additional annotations required for other controllers such as cert-manager, external-dns etc.