travisghansen / kubernetes-pfsense-controller

Integrate Kubernetes and pfSense

License: Apache License 2.0

Languages: PHP 99.5%, Dockerfile 0.5%
Topics: kubernetes, api, client, k8s, php, controller, pfsense, firewall, cluster, metallb

kubernetes-pfsense-controller's Introduction


Intro

kubernetes-pfsense-controller (kpc) works hard to keep pfSense and Kubernetes in sync and harmony. The primary focus is to facilitate a first-class Kubernetes cluster by integrating and/or implementing features that generally do not come with bare-metal installation(s).

This is generally achieved using the standard Kubernetes API along with the xmlrpc API for pfSense. Speaking broadly, the Kubernetes API is watched and appropriate updates are sent to pfSense (config.xml) via xmlrpc calls, along with appropriate reload/restart/update/sync actions to apply the changes.

Please note, this controller is not designed to run multiple instances simultaneously (ie: do NOT crank up the replicas).

Disclaimer: this is new software bound to have bugs. Please make a backup before using it as it may eat your configuration. Having said that, all known code paths appear to be solid and working without issue. If you find a bug, please report it!

Updated disclaimer: this software is no longer very new, but is still bound to have bugs. Continue to make backups as appropriate :) Having said that, it's been used for multiple years now on several systems and has yet to do anything evil.

Installation

Various files are available in the deploy directory of the project; alter them to your needs and kubectl apply.
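
For example, a minimal sequence might look like the following (a sketch only; file names under deploy/ may differ, so review and edit each manifest before applying):

git clone https://github.com/travisghansen/kubernetes-pfsense-controller.git
cd kubernetes-pfsense-controller

# review/edit the manifests first (namespace, pfSense credentials, plugin config)
kubectl apply -f deploy/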

Alternatively, a helm repository is provided for convenience:

helm repo add kubernetes-pfsense-controller https://travisghansen.github.io/kubernetes-pfsense-controller-chart/
helm repo update

# create your own values.yaml file and edit as appropriate
# https://github.com/travisghansen/kubernetes-pfsense-controller-chart/blob/master/stable/kubernetes-pfsense-controller/values.yaml
helm upgrade \
--install \
--create-namespace \
--namespace kpc \
--values values.yaml \
kpc-primary \
kubernetes-pfsense-controller/kubernetes-pfsense-controller

Support Matrix

Generally speaking, kpc tracks the most recent versions of both Kubernetes and pfSense. Having said that, reasonable attempts will be made to support older versions of both.

kpc currently works with any 2.4+ version of pfSense (known working up to 2.5.2) and likely with very old Kubernetes versions as well (known working up to 1.22).

Plugins

The controller is composed of several plugins that are enabled/disabled/configured via a Kubernetes ConfigMap. Details about each plugin follow below.

metallb

MetalLB implements LoadBalancer-type Services in Kubernetes. This is done via any combination of Layer2 or BGP configurations. Layer2 requires no integration with pfSense; however, if you want to leverage the BGP implementation you need a BGP server along with neighbor configuration. kpc dynamically updates BGP neighbors for you in pfSense by continually monitoring cluster Nodes.

While this plugin is named metallb it does not require MetalLB to be installed or in use. It can be used with kube-vip or any other service that requires BGP peers/neighbors.

The plugin assumes you've already installed and configured openbgp or frr, and created a peer group to use with MetalLB.

      metallb:
        enabled: true
        nodeLabelSelector:
        nodeFieldSelector:
        # pick 1 implementation
        # bgp-implementation: openbgp
        bgp-implementation: frr
        options:
          frr:
            template:
              peergroup: metallb

          openbgp:
            template:
              md5sigkey:
              md5sigpass:
              groupname: metallb
              row:
                - parameters: announce all
                  parmvalue:

haproxy-declarative

The haproxy-declarative plugin allows you to declaratively create HAProxy frontend/backend definitions as ConfigMap resources in the cluster. When declaring backends, however, the pool of servers can be dynamically created and updated based on cluster nodes. See declarative-example.yaml for an example; an illustrative sketch also follows the configuration snippet below.

      haproxy-declarative:
        enabled: true
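
As a rough illustration, a declarative ConfigMap might look like the sketch below (field values are placeholders; the pfsense.org/type: declarative label and the resource structure are taken from the examples that appear later in this document, so verify against declarative-example.yaml before relying on them):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-declarative-example
  namespace: kube-system
  labels:
    # assumption: the plugin discovers declarative ConfigMaps via this label
    pfsense.org/type: declarative
data:
  data: |
    resources:
      - type: backend
        definition:
          name: traefik
        ha_servers:
          # dynamic server pool built from the cluster nodes backing this service
          - type: node-service
            serviceNamespace: kube-system
            serviceName: traefik
            servicePort: 443
            definition:
              name: traefik
              status: active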

haproxy-ingress-proxy

The haproxy-ingress-proxy plugin allows you to mirror cluster Ingress rules handled by an ingress controller to HAProxy running on pfSense. If you run pfSense on the network edge with non-cluster services already running, you can now dynamically inject new rules to route traffic into your cluster while simultaneously running non-cluster services.

To achieve this, new 'shared' HAProxy frontends are created and attached to an existing HAProxy frontend. Each created frontend should also reference an existing backend. Note that the existing frontend(s)/backend(s) can be created manually or using the haproxy-declarative plugin.

When creating the parent frontend(s), note that the selected type should be http / https(offloading) to fully support the feature. If type ssl / https(TCP mode) is selected (with or without SSL Offloading selected in the External address table), SNI is used for the routing logic; it CANNOT support path-based rules, which implies a 1:1 mapping between host entries and backing services. Type tcp will not work, and any Ingress resources that would be bound to a frontend of this type are ignored.

Combined with haproxy-declarative, you can create a dynamic backend service (ie: your ingress controller) and subsequently dynamic frontend services based on cluster Ingresses. This is generally helpful when, for whatever reason, you cannot or do not create wildcard frontend(s) to handle incoming traffic in HAProxy on pfSense.

Optionally, you can set the annotations haproxy-ingress-proxy.pfsense.org/frontend and haproxy-ingress-proxy.pfsense.org/backend on Ingress resources to override the default frontend and backend respectively (an example follows the configuration snippet below).

In advanced scenarios it is possible to provide a template definition of the shared frontend using the haproxy-ingress-proxy.pfsense.org/frontendDefinitionTemplate annotation (see #19 (comment)).

      haproxy-ingress-proxy:
        enabled: true
        ingressLabelSelector:
        ingressFieldSelector:
        # works in conjunction with the ingress annotation 'haproxy-ingress-proxy.pfsense.org/enabled'
        # if defaultEnabled is empty or true, you can disable specific ingresses by setting the annotation to false
        # if defaultEnabled is false, you can enable specific ingresses by setting the annotation to true
        defaultEnabled: true
        # can optionally be comma-separated list if you want the same ingress to be served by multiple frontends
        defaultFrontend: http-80
        defaultBackend: traefik
        #allowedHostRegex: "/.*/"
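
For example, an Ingress that overrides the defaults might look like the following sketch (the host, service, frontend, and backend names are placeholders; the frontend/backend must already exist in HAProxy on pfSense):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # route this Ingress through a specific existing frontend/backend instead of the defaults
    haproxy-ingress-proxy.pfsense.org/frontend: http-80
    haproxy-ingress-proxy.pfsense.org/backend: traefik
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80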

DNS Helpers

kpc provides various options to manage DNS entries in pfSense based on cluster state. Note that these options can be used in place of or in conjunction with external-dns to support powerful setups/combinations.

pfsense-dns-services

pfsense-dns-services watches Services of type LoadBalancer that have the annotation dns.pfsense.org/hostname with the value of the desired hostname (optionally you may specify a comma-separated list of hostnames). kpc will create the DNS entry in unbound/dnsmasq. Note that to actually get an IP on these services you'll likely need MetalLB deployed in the cluster (regardless of whether the metallb plugin is running). An example Service follows the configuration snippet below.

      pfsense-dns-services:
        enabled: true
        serviceLabelSelector:
        serviceFieldSelector:
        #allowedHostRegex: "/.*/"
        dnsBackends:
          dnsmasq:
            enabled: true
          unbound:
            enabled: true
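
For example, a LoadBalancer Service annotated for DNS management might look like this sketch (hostnames and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # one hostname, or a comma-separated list of hostnames
    dns.pfsense.org/hostname: app.example.com,www.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080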

pfsense-dns-ingresses

pfsense-dns-ingresses watches Ingresses and automatically creates DNS entries in unbound/dnsmasq. This requires the ingress controller to properly set IPs on the Ingress resources. An example of opting an individual Ingress out follows the configuration snippet below.

      pfsense-dns-ingresses:
        enabled: true
        ingressLabelSelector:
        ingressFieldSelector:
        # works in conjunction with the ingress annotation 'dns.pfsense.org/enabled'
        # if defaultEnabled is empty or true, you can disable specific ingresses by setting the annotation to false
        # if defaultEnabled is false, you can enable specific ingresses by setting the annotation to true
        defaultEnabled: true
        #allowedHostRegex: "/.*/"
        dnsBackends:
          dnsmasq:
            enabled: true
          unbound:
            enabled: true
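
For example, with defaultEnabled: true an individual Ingress can opt out of DNS management with a sketch like the following (names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-only
  annotations:
    # skip DNS entry creation for this Ingress even though defaultEnabled is true
    dns.pfsense.org/enabled: "false"
spec:
  rules:
    - host: internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-only
                port:
                  number: 80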

pfsense-dns-haproxy-ingress-proxy

pfsense-dns-haproxy-ingress-proxy monitors the HAProxy rules created by the haproxy-ingress-proxy plugin and creates host aliases for each entry. To do so, you create an arbitrary host in unbound/dnsmasq (something like <frontend name>.k8s) and bind that host to the frontend through the config option frontends.<frontend name>. Any proxy rules created for that frontend will then automatically be added as aliases to the configured hostname. Make sure the static hostname created in your DNS service of choice points to the/an IP bound to the corresponding frontend.

      pfsense-dns-haproxy-ingress-proxy:
        enabled: true
        # NOTE: this regex is in *addition* to the regex applied to the haproxy-ingress-proxy plugin
        #allowedHostRegex: "/.*/"
        dnsBackends:
          dnsmasq:
            enabled: true
          unbound:
            enabled: true
        frontends:
          http-80:
            hostname: http-80.k8s
          primary_frontend_name2:
            hostname: primary_frontend_name2.k8s

Notes

Regex parameters are passed through PHP's preg_match() function, so you can test your syntax with that. Also note that if you want to anchor the end of a regex ($), you must escape it in the YAML as 2 $ characters (ie: #allowedHostRegex: "/.example.com$$/").
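
For instance, a quick local check of an allowedHostRegex value might look like this sketch (hostnames are placeholders; the regex shown is the single-$ form, i.e. without the doubled-$ YAML escaping described above):

<?php
// test a candidate allowedHostRegex value against a few hostnames
$regex = '/\.example\.com$/';
foreach (['app.example.com', 'app.other.org'] as $host) {
    // preg_match() returns 1 on a match, 0 on no match, false on error
    echo $host . ' => ' . var_export(preg_match($regex, $host), true) . PHP_EOL;
}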

kpc stores its stateful data in the cluster as a ConfigMap (kube-system.kubernetes-pfsense-controller-store by default). You can review the data there to gain insight into what the controller is managing.

You may need/want to bump up the webConfigurator setting for Max Processes to ensure enough simultaneous connections can be established. Each kpc instance will only require 1 process (ie: access to the API is serialized by kpc).

Links

TODO

  1. base64 advanced fields (haproxy)
  2. taint haproxy config so it shows 'apply' button in interface?
  3. _index and id management
  4. ssl certs name/serial
  5. build docker images
  6. create manifests
  7. ensure pfsync items are pushed as appropriate
  8. perform config rollbacks when appropriate?
  9. validate configuration(s) to ensure proper schema

Development

check store values

kubectl -n kube-system get configmaps kubernetes-pfsense-controller-store -o json | jq -crM '.data."haproxy-declarative"' | jq .
kubectl -n kube-system get configmaps kubernetes-pfsense-controller-store -o json | jq -crM '.data."metallb"' | jq .
...

HAProxy

XML config structure (note that ha_backends actually holds frontends; it is badly named):

haproxy
   ha_backends
     item
     item
     ...
   ha_pools
     item
       ha_servers
         item
         item
         ...
     item
     ...


kubernetes-pfsense-controller's People

Contributors: cptaffe, slamdev, toxuin, travisghansen, valtzu

kubernetes-pfsense-controller's Issues

Cannot change namespace

Hi @travisghansen, thanks for this controller, it's very useful!

When I try to change the namespace (basically s/kube-system/mynamespace/ on each k8s manifest file), I'm able to deploy the controller, but I get the following messages in pod logs:

2020-03-25T18:02:34+00:00 waiting for ConfigMap kube-system/kubernetes-pfsense-controller-config to be present and valid

It seems it only waits for kubernetes-pfsense-controller-config in the kube-system namespace.

PS: I'm relatively new to Kubernetes, so I apologize if this is the desired behaviour.

Thanks in advance!

SSL to PFSense?

Is it possible to supply the cert to the deployment to have SSL to PFSense?

Unable to enable crypto on TCP connection 192.168.2.1: make sure the "sslcafile" or "sslcapath" option are properly set for the environment.

ingress watch causes repeated restarts on pfsense services?

Hi,

I'm using ingress watching with unbound. It works well, except that both ingress controllers I use (nginx and traefik) seem to cause restarts over and over.

Traefik will update the status endpoint every minute. Even though nothing changed, this triggers a restart (kind of expected). With nginx, this update seems to occur less often, roughly every 10 minutes or so.

Is there a way to make the ingress watch do a diff before triggering a restart? If not, I can look into trying to code something in, but not sure if this is a known issue.

I'm using 0.1.7 and also tried 0.1.5.

Thanks again for a great controller! Sorry to bug with an issue. :(

dns update error

Installed latest git master version.

When it tries to update pfSense, I get: Incorrect parameters passed to method: Signature permits 2 parameters but the request had 1 (3)

I am using only the DNS services plugin, with unbound only.

Excellent project!

XMLRPC errors with pfSense 2.4.4

metallb fails silently, haproxy-declarative spams about serialization errors:

plugin (haproxy-declarative): failed reload HAProxy service: Unhandled XML_RPC2_InvalidTypeEncodeException exception:Impossible to encode value '' from type 'NULL'. No analogous type in XML_RPC.#0 /usr/local/share/pear/XML/RPC2/Backend/Php/Value/Struct.php(107): XML_RPC2_Backend_Php_Value::createFromNative(NULL)
#1 /usr/local/share/pear/XML/RPC2/Backend/Php/Response.php(86): XML_RPC2_Backend_Php_Value_Struct->encode()
#2 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(135): XML_RPC2_Backend_Php_Response::encode(Object(XML_RPC2_Backend_Php_Value_Struct), 'utf-8')
#3 /usr/local/share/pear/XML/RPC2/Backend/Php/Server.php(99): XML_RPC2_Backend_Php_Server->getResponse()
#4 /usr/local/www/xmlrpc.php(768): XML_RPC2_Backend_Php_Server->handleCall()
#5 {main} (1)

I haven't tested the other plugins yet; this config was working on an older version, but I can't recall which.
I also get notifications in the pfSense web admin about restoring config from backups.

pfSense Version:
2.4.4-RELEASE-p3 (amd64)
built on Wed May 15 18:53:44 EDT 2019
FreeBSD 11.2-RELEASE-p10

pfsense getting constant updates

HAProxy on pfSense keeps getting reloaded, leaving HAProxy unable to hold a connection.

As you can see from the logs below, every second or so it goes and updates pfSense. What could I be doing wrong?

2022-02-23T22:40:28+00:00 plugin (haproxy-declarative): successfully reloaded HAProxy service
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071070
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071071
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071096
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/test/Ingress/mysite-ingress MODIFIED - 8070722
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070723
2022-02-23T22:40:28+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/test/Ingress/mysite-ingress MODIFIED - 8070726
2022-02-23T22:40:31+00:00 plugin (haproxy-declarative): successfully reloaded HAProxy service
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071097
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071129
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071130
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070728
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070731
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070755
2022-02-23T22:40:31+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/test/Ingress/mysite-ingress MODIFIED - 8070756
2022-02-23T22:40:33+00:00 plugin (haproxy-declarative): successfully reloaded HAProxy service
2022-02-23T22:40:33+00:00 plugin (haproxy-ingress-proxy): successfully reloaded HAProxy service
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071153
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071155
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-services): /v1/namespaces/kube-system/Service/traefik MODIFIED - 8071192
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070757
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/test/Ingress/mysite-ingress MODIFIED - 8070758
2022-02-23T22:40:33+00:00 plugin (pfsense-dns-haproxy-ingress-proxy): /networking.k8s.io/v1/namespaces/cattle-system/Ingress/rancher MODIFIED - 8070760

I have a "simple" setup currently with a rancher service and one test service.
I'm running k3s v1.22.3+k3s1 with 3 servers and 3 agents in an HA config. For HA I'm using kube-vip, with metallb for service load balancing and traefik for ingress. Currently HAProxy on pfSense is handling certificate management and SSL offloading. This issue seems to be caused by k3s thinking there is a change and then triggering this project to go update pfSense.

Let me know if you need more details about my setup.

Errors and warnings in logs, no domains added to DNS Resolver

Using version 0.1.8, just updated from 0.1.5, and all my cluster domain names are gone from the DNS Resolver settings.
I am seeing warnings and errors in logs of the controller:

PHP Warning: Illegal string offset 'host' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/Utils.php on line 136
...
11/16/2019 10:43:03 PM 2019-11-17T05:43:03+00:00 plugin (pfsense-dns-ingresses): failed saving unbound config: Read timed out after 10 seconds (1000)
11/16/2019 10:43:03 PM 2019-11-17T05:43:03+00:00 plugin (pfsense-dns-ingresses): failed update/reload: Read timed out after 10 seconds (1000)
11/16/2019 10:43:17 PM PHP Warning: Invalid argument supplied for foreach() in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 31
...
2019-11-17T05:43:17+00:00 plugin (pfsense-dns-ingresses): deleting hostname entry for host: domain1.example.com
11/16/2019 10:43:17 PM PHP Warning: Illegal string offset 'host' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/Utils.php on line 136

ConfigMap kubernetes-pfsense-controller-store has {"managed_hosts":[]} for both pfsense-dns-ingresses as well as pfsense-dns-services.

Running kubernetes version 1.15.5

Question re external-dns

Firstly, apologies if I am missing something really simple here.

We are testing Istio, and what we want is to update pfSense DNS entries using Istio Gateway resource(s), as opposed to using standard k8s Ingress resources. This is supported by external-dns, but looking through the list of providers it looks like only RFC2136 is suitable for use with pfSense. The team that manages the pfSense appliances is reluctant to set up RFC2136, so we need to find another solution; this is what has brought me here :)

Reading the intro to this project, there is mention of using kpc in conjunction with external-dns, but I can't find any documentation of how this would be achieved. Does it exist? Could you point me at it?

If not, would it be possible to provide some bullet points to give me a head start?

Many thanks!

Plugin "pfsense-dns-haproxy-ingress-proxy" sets last host alias only

Hi Travis,
when I add more than one ingress to my cluster (created via different Helm charts), I only find one host alias in unbound: the one created through the last ingress. I guess the plugin sets (rather than adds) the host aliases...?
My ingress controller is nginx-ingress.

TIA!

Where do I actually put the plugins?

This looks like exactly what I'm looking for, and I'm familiar with deploying via Helm, but forgive me: where do I actually put the plugin configs? I can't see the settings in the values.yml.

I'm sure I'm missing something simple.

haproxy-ingress-proxy select backend by HostSNI

Hi there,

I'm running cert-manager in my Kubernetes cluster, which manages TLS certs for cluster services. Non-cluster services are also running, with pfSense managing the certs for those. Therefore, I would like pfSense HAProxy to use SSL Passthrough mode for the frontend that cluster services attach to, and to select the K8s cluster ingress controller backend via HostSNI instead of the HTTP Host header, which seems to be the default selector. Is there a way this could be done?

Unable to set bool parameters in haproxy-declarative

I am trying to enable ssl and checkssl for a backend node created by haproxy-declarative, but am unable to get it to actually set the XML values correctly.
I have tried setting the values as:

  • yes
  • true
  • True
  • 1

None of these work; the only one that doesn't result in an empty value in the XML is 1, but that is still not read as a yes.

haproxy-declarative config:

resources:
  - type: backend
    definition:
      name: metallb-nginx-ingress-https
      monitor_uri: /healthz
    ha_servers:
      # declare dynamic nodes by using the backing service
      - type: node-service
        # serviceNamespace: optional, uses namespace of the ConfigMap by default
        # service must be type NodePort or LoadBalancer
        serviceNamespace: ingress-nginx
        serviceName: metallb-nginx-ingress
        servicePort: 443
        definition:
          name: metallb-nginx-ingress-https
          status: active
          ssl: true
          checkssl: true

Resulting xml:

<item>
	<name>metallb-nginx-ingress-https</name>
	<monitor_uri>/healthz</monitor_uri>
	<ha_servers>
		<item>
			<name>metallb-nginx-ingress</name>
			<status>active</status>
			<ssl></ssl>
			<checkssl></checkssl>
			<address>10.1.2.0</address>
			<port>443</port>
		</item>
	</ha_servers>
</item>

Expected xml (confirmed by selecting the checkboxes in the GUI and saving):

<item>
	<name>metallb-nginx-ingress-https</name>
	<monitor_uri>/healthz</monitor_uri>
	<ha_servers>
		<item>
			<name>metallb-nginx-ingress</name>
			<status>active</status>
			<ssl>yes</ssl>
			<checkssl>yes</checkssl>
			<address>10.1.2.0</address>
			<port>443</port>
		</item>
	</ha_servers>
</item>


require_once(): Failed opening required '/usr/local/pkg/openbgpd.inc' on pfsense

Hi there, I have had kpc set up and working brilliantly for many months, but since yesterday I've started seeing a lot of errors like the one below from pfSense.

 PHP ERROR: Type: 64, File: /usr/local/www/xmlrpc.php(147) : eval()'d code, Line: 1, Message: require_once(): Failed opening required '/usr/local/pkg/openbgpd.inc' (include_path='.:/etc/inc:/usr/local/pfSense/include:/usr/local/pfSense/include/www:/usr/local/www:/usr/local/captiveportal:/usr/local/pkg:/usr/local/www/classes:/usr/local/www/classes/Form:/usr/local/share/pear:/usr/local/share/openssl_x509_crl/') @ 2023-06-16 17:19:59

After a bit of digging I saw that openbgpd was removed, but that happened a while ago, so I'm not sure why this would only just start happening.

I'm using pfSense version 2.6.0 and MetalLB, with the plugin setup:

plugins:
  metallb:
    enabled: true
    nodeLabelSelector:
    nodeFieldSelector:
    bgp-implementation: openbgp
    options:
      openbgp:
        # pass through to config.xml
        template:
          md5sigkey:
          md5sigpass:
          groupname: metallb
          row:
            - parameters: announce all
              parmvalue:
  pfsense-dns-services:
    enabled: true
    serviceLabelSelector:
    serviceFieldSelector:
    #
    #allowedHostRegex: "/.*/"
    #
    dnsBackends:
      dnsmasq:
        enabled: false
      unbound:
        enabled: true

Do you have any pointers?
Thanks!

PHP Warning: Undefined array key "data"

I seem to be seeing this quite a lot in the logs for the pod when this runs...

2022-10-25T14:56:17+00:00 controller config loaded/updated
2022-10-25T14:56:17+00:00 loading plugin haproxy-declarative
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  foreach() argument must be of type array|object, null given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  foreach() argument must be of type array|object, null given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 100
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 100
PHP Warning:  foreach() argument must be of type array|object, null given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 100
2022-10-25T14:56:18+00:00 plugin (haproxy-declarative): successfully reloaded HAProxy service
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  foreach() argument must be of type array|object, null given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310
PHP Warning:  foreach() argument must be of type array|object, null given in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/HAProxyDeclarative.php on line 310

I'm not sure why it's happening, but I've been trying to debug all afternoon and can't seem to see any patterns...

I'm using only the declarative plugin, with a custom ConfigMap (just to test):

apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "2022-10-25T14:54:45Z"
  labels:
    app: pfsense
    kustomize.toolkit.fluxcd.io/name: apps
    kustomize.toolkit.fluxcd.io/namespace: flux-system
    pfsense.org/type: declarative
  name: haproxy-declaratives-75hbc5ccgd
  namespace: networking
  resourceVersion: "2847484"
  uid: 4af2b995-dd56-40fb-a507-a31c0429a87f
data:
  data: |
    resources:
      - type: frontend
        definition:
          name: some-frontend-name
          type: http
          forwardfor: yes
          status: active
          backend_serverpool:
          a_extaddr:
            item:
              - extaddr: wan_ipv4
                extaddr_port: 443
                extaddr_ssl: yes

Undefined array key "object"

Hi there! I'm super grateful for this project. I'm using it with pfsense at home.

I'm sure I have something configured wrong, but not being a super pro with pfsense and networking I made some inferences from the docs and they seem to have led to these errors. They flood the logs many times per second. Do you have any tips on my configuration?

PHP Warning: Undefined array key "object" in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-client-php/src/KubernetesClient/Watch.php on line 479
PHP Warning: Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-client-php/src/KubernetesClient/Watch.php on line 479
PHP Warning: Undefined array key "type" in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-client-php/src/KubernetesClient/Watch.php on line 611
apiVersion: v1
data:
  config: |
    controller-id: mypfsense
    enabled: true
    plugins:
      haproxy-declarative:
        enabled: true
      pfsense-dns-ingresses:
        defaultEnabled: true
        dnsBackends:
          dnsmasq:
            enabled: false
          unbound:
            enabled: true
        enabled: true
        ingressFieldSelector: cluster.app/hostname
        ingressLabelSelector: null
      pfsense-dns-services:
        dnsBackends:
          dnsmasq:
            enabled: false
          unbound:
            enabled: true
        enabled: true
        serviceFieldSelector: cluster.app/hostname
        serviceLabelSelector: null
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: pfsense
    meta.helm.sh/release-namespace: kpc
  labels:
    app.kubernetes.io/instance: pfsense
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-pfsense-controller
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: kubernetes-pfsense-controller-0.1.2
  name: pfsense-kubernetes-pfsense-controller-config
  namespace: kpc


CrashLoopBackOff: invalid username or password

First of all I can't express my gratitude enough for the Kubernetes pfSense controller!

System details

k3s -version

k3s version v1.21.3+k3s1 (1d1f220f)
go version go1.16.6

Ubuntu version

Linux 5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

pfSense version

2.5.2-RELEASE (amd64)
built on Fri Jul 02 15:33:00 EDT 2021
FreeBSD 12.2-STABLE

Installation details

kubectl method

i.e. kubectl apply -f secret.yaml, in which secret.yaml was edited and adjusted according to the pfSense admin password.
The password was generated via the following command within the OS system where k3s is running.

To generate a base64 encoded password:
echo 'blabla' | base64

YmxhYmxhCg==

And to decode the base64 code into a human readable form
echo 'YmxhYmxhCg==' | base64 -d

blabla

The password is then defined in secret.yaml as follows:

apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-pfsense-controller
  namespace: kube-system
type: Opaque
data:
  pfsense-password: YmxhYmxhCg==

Log details

kubectl logs kubernetes-pfsense-controller-668c59c454-rlxbs -n kube-system

PHP Deprecated:  Required parameter $callback follows optional parameter $params in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-client-php/src/KubernetesClient/Client.php on line 170
PHP Warning:  Undefined array key "data" in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Store.php on line 128
PHP Deprecated:  Required parameter $callback follows optional parameter $params in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-client-php/src/KubernetesClient/Watch.php on line 137
2021-08-01T11:47:05+00:00 store successfully initialized
2021-08-01T11:47:05+00:00 controller config loaded/updated
2021-08-01T11:47:05+00:00 loading plugin metallb
PHP Warning:  Undefined array key "configMap" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/MetalLB.php on line 30
2021-08-01T11:47:05+00:00 loading plugin haproxy-declarative
2021-08-01T11:47:05+00:00 loading plugin haproxy-ingress-proxy
2021-08-01T11:47:05+00:00 loading plugin pfsense-dns-services
PHP Warning:  Undefined array key "serviceLabelSelector" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSIngresses.php on line 32
PHP Warning:  Undefined array key "serviceFieldSelector" in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSIngresses.php on line 33
2021-08-01T11:47:05+00:00 loading plugin pfsense-dns-ingresses
2021-08-01T11:47:05+00:00 loading plugin pfsense-dns-haproxy-ingress-proxy
PHP Warning:  Trying to access array offset on value of type null in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Store.php on line 218
2021-08-01T11:47:05+00:00 plugin (metallb): /v1/namespaces/metallb-system/ConfigMap/config ADDED - 135950
PHP Fatal error:  Uncaught Laminas\XmlRpc\Client\Exception\FaultException: Authentication failed: Invalid username or password in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/laminas/laminas-xmlrpc/src/Client.php:324
Stack trace:
#0 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/XmlRpc/Client.php(59): Laminas\XmlRpc\Client->call('pfsense.backup_...', Array)
#1 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/PfSenseConfigBlock.php(96): KubernetesPfSenseController\XmlRpc\Client->call('pfsense.backup_...', Array)
#2 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/MetalLB.php(141): KubernetesPfSenseController\Plugin\PfSenseConfigBlock::getInstalledPackagesConfigBlock(Object(KubernetesPfSenseController\XmlRpc\Client), 'frrbgpneighbors')
#3 phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/MetalLB.php(117): KubernetesPfSenseController\Plugin\MetalLB->doActionGeneric()
#4 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Plugin/AbstractPlugin.php(108): KubernetesPfSenseController\Plugin\MetalLB->doAction()
#5 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Controller.php(532): KubernetesController\Plugin\AbstractPlugin->invokeAction()
#6 phar:///usr/local/bin/kubernetes-pfsense-controller/controller.php(87): KubernetesController\Controller->main()
#7 /usr/local/bin/kubernetes-pfsense-controller(2): include('phar:///usr/loc...')
#8 {main}
  thrown in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/laminas/laminas-xmlrpc/src/Client.php on line 324

In other words

PHP Fatal error: Uncaught Laminas\XmlRpc\Client\Exception\FaultException: Authentication failed: Invalid username or password in phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/laminas/laminas-xmlrpc/src/Client.php:324

Questions

  1. How do you define, generate and/or structure the base64 password of pfSense within the secret.yaml? Or even perhaps extract the password from config.xml?
  2. Could you give an exact example of the secret.yaml file?
  3. Could you also provide an extra section (or an overview on the main GitHub landing page) listing which versions the controller has been tested against? As of now it is unclear which versions of Kubernetes and pfSense the kubernetes-pfsense-controller is successfully tested against or working with. Changelog.md is somewhat hidden.

PHP Fatal error: Uncaught Error: Only variables can be passed by reference in ... DNSResourceTrait.php:75

Hello Travis,

I'm trying to set this up but unfortunately, I'm getting the following error:

2021-02-09T17:56:40+00:00 store successfully initialized
2021-02-09T17:56:41+00:00 controller config loaded/updated
2021-02-09T17:56:41+00:00 loading plugin pfsense-dns-ingresses
2021-02-09T17:56:41+00:00 loading plugin pfsense-dns-services
PHP Warning: Illegal string offset 'hosts' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 71
PHP Warning: Illegal string offset 'hosts' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 72
2021-02-09T17:56:58+00:00 plugin (pfsense-dns-services): successfully reloaded dnsmasq service
2021-02-09T17:57:07+00:00 plugin (pfsense-dns-services): successfully reloaded unbound service
2021-02-09T17:57:09+00:00 plugin (pfsense-dns-services): successfully reloaded DHCP service
2021-02-09T17:57:10+00:00 plugin (pfsense-dns-ingresses): setting hostname entry: Host - vsphere-cluster.xxxxxx.dc, IP - 172.16.1.251
PHP Warning: Illegal string offset 'hosts' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 71
PHP Warning: Illegal string offset 'hosts' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 72
PHP Warning: Illegal string offset 'hosts' in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 75
PHP Fatal error: Uncaught Error: Only variables can be passed by reference in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php:75
Stack trace:
#0 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Plugin/AbstractPlugin.php(108): KubernetesPfSenseController\Plugin\DNSIngresses->doAction()
#1 phar:///usr/local/bin/kubernetes-pfsense-controller/vendor/travisghansen/kubernetes-controller-php/src/KubernetesController/Controller.php(525): KubernetesController\Plugin\AbstractPlugin->invokeAction()
#2 phar:///usr/local/bin/kubernetes-pfsense-controller/controller.php(87): KubernetesController\Controller->main()
#3 /usr/local/bin/kubernetes-pfsense-controller(2): include('phar:///usr/loc...')
#4 {main}
thrown in phar:///usr/local/bin/kubernetes-pfsense-controller/src/KubernetesPfSenseController/Plugin/DNSResourceTrait.php on line 75
stream closed

This is happening with the following:

  • kubernetes-pfsense-controller: v0.3.3
  • pfsense: 2.4.5-RELEASE-p1

Also, the config is the following (I'm using the helm chart and just omitting the "pfsense" part here):

config:
  controller-id: "vsphere-cluster"
  enabled: true
  plugins:
    pfsense-dns-services:
      enabled: true
      serviceLabelSelector:
      serviceFieldSelector:
      #allowedHostRegex: "/.*/"
      dnsBackends:
        dnsmasq:
          enabled: true
        unbound:
          enabled: true
    pfsense-dns-ingresses:
      enabled: true
      ingressLabelSelector:
      ingressFieldSelector:
      # works in conjunction with the ingress annotation 'dns.pfsense.org/enabled'
      # if defaultEnabled is empty or true, you can disable specific ingresses by setting the annotation to false
      # if defaultEnabled is false, you can enable specific ingresses by setting the annotation to true
      defaultEnabled: true
      #allowedHostRegex: "/.*/"
      dnsBackends:
        dnsmasq:
          enabled: true
        unbound:
          enabled: true

Any ideas of what could be the problem?

Thanks,
