
Falco Helm Charts


This GitHub project is the source for the Falco Helm chart repository that you can use to deploy Falco in your Kubernetes infrastructure.

The purpose of this repository is to provide a place for maintaining and contributing Charts related to the Falco project, with CI processes in place for managing the releasing of Charts into our Helm Chart Repository.

For more information about installing and using Helm, see the Helm Docs.

Repository Structure

This GitHub repository contains the source for the packaged and versioned charts released to https://falcosecurity.github.io/charts (our Helm Chart Repository). We also publish the charts as OCI images hosted in GitHub Packages.

The Charts in this repository are organized into folders: each directory that contains a Chart.yaml is a chart.

The Charts in the master branch (with a corresponding GitHub release) match the latest packaged Charts in our Helm Chart Repository, though there may be previous versions of a Chart available in that Chart Repository.

Charts

Charts currently available are listed below.

Usage

Adding falcosecurity repository

Before installing any chart provided by this repository, add the falcosecurity Charts Repository:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

Installing a chart

Please refer to the instructions provided by the Chart you want to install. For installing Falco via Helm, the documentation is here.

Contributing

We are glad to receive your contributions. To help you in the process, we have prepared a CONTRIBUTING.md, which includes detailed information on contributing to falcosecurity projects. Furthermore, we implemented a mechanism to automatically release and publish our charts whenever a PR is merged (if you are curious how this process works, you can find more details in our release.md).

So, we ask you to follow these simple steps when making your PR:

  • The DCO is required to contribute to a falcosecurity project. So ensure that all your commits have been signed off. We will not be able to merge the PR if a commit is not signed off.
  • Bump the version number of the chart by modifying the version value in the chart's Chart.yaml file. This is particularly important, as it allows our CI to release a new chart version. If the version has not been increased, we will not be able to merge the PR.
  • Add a new section in the chart's CHANGELOG.md file with the new version number of the chart.
  • If your changes affect any chart variables, please update the chart's README.gotmpl file accordingly and run make docs in the main folder.
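As an illustration of the version-bump step, the sketch below creates a stand-in Chart.yaml and bumps its version field (the path and version numbers are made up; in a real PR you would edit the chart's actual Chart.yaml):

```shell
mkdir -p falco
# Stand-in Chart.yaml, purely for illustration
printf 'apiVersion: v2\nname: falco\nversion: 1.5.0\n' > falco/Chart.yaml
# Bump the chart version so CI can release a new chart version
sed -i 's/^version: .*/version: 1.5.1/' falco/Chart.yaml
grep '^version:' falco/Chart.yaml   # → version: 1.5.1
```

Remember to also add a matching section to the chart's CHANGELOG.md and, if any variables changed, run make docs.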

Finally, when opening your PR, please fill in the provided PR template, including the final checklist of items to indicate that all the steps above have been performed.

If you have any questions, please feel free to contact us via GitHub issues.


Issues

Dummy chart misuse

Describe the bug

Upon updating the falco helm chart from 1.0.10, falco no longer started. After a discussion with @leogr, he noticed we were using the dummy chart in the helm stable repo; in that case an nginx should have been running in the cluster, not falco. Instead, we got a falco deployment without the necessary mount points, so it failed to start up.

How to reproduce it

We are using helmfile to orchestrate workloads in our cluster. We also have template rendering in place (gotmpl files); that is not directly relevant, but it makes the file structure easier to understand.

https://github.com/helm/charts/blob/master/stable/falco/templates/deployment.yaml#L26

containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

Based on this: https://github.com/helm/charts/blob/master/stable/falco/values.yaml#L7 we should have an nginx deployment.

But, we have the following helmfile.yaml:

#...
releases:
- name: falcoservice
  chart: stable/falco
  version: 1.1.8
  namespace: security
  installed: true
  values:
  - values.yaml.gotmpl
#...

And in the values.yaml.gotmpl we have:

image:
  registry: docker.io
  repository: falcosecurity/falco
  tag: "{{ .Environment.Values.image_tag }}"
  pullPolicy: IfNotPresent

So the timeline is the following:
We had the 1.0.10 chart cached, and every deploy worked. We then tried to upgrade to a 1.8.x version (the exact one does not matter), and helm pulled the new chart, which caused your dummy template to be used together with our 1.0.10 overrides. We still deployed falco (due to the image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" line), but in a corrupted state: it was not able to start up due to missing host mounts.

Expected behaviour

The dummy chart should deploy an nginx regardless of any image/version pinning in the user's setup.

Screenshots

N/A

Environment

  • Falco version:
    N/A
  • System info:
    N/A
  • Cloud provider or hardware configuration:
  • OS:
    N/A
  • Kernel:
    N/A
  • Installation method:
    Helm chart

Additional context

We use CI/CD to deploy all of our workloads, so we could not see any warnings about the changes, if there were any. I read the README in the stable repo, but I assumed the older versions were still available from that repo and that I only had to change repos if I wanted to use the latest chart with the latest falco.
(Because of this we did not upgrade to the latest version; at first it seemed hard to incorporate the new repo into the CI/CD.)

Secure Helm Chart

Motivation

Following this discussion, we have identified a number of security holes in the current helm chart.

This issue aims to define the constraints of building a secure-by-default Helm chart for Falco.

Feature

As a Kubernetes user I would like to be able to type

helm install falco <args>

such that a complete Falco installation is deployed to my cluster and is running as an unprivileged daemonset.

This chart should be to the default chart as hardened is to the Linux kernel.

Constraints:

  • The daemonset pods have securityContext.privileged=false
  • No access to the host network
  • No access to the host PID namespace
  • No access to any of the host namespaces while we are at it. Get rid of them all.
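The constraints above could be sketched in the daemonset pod spec like this (standard Kubernetes fields; the values shown are illustrative, not taken from the chart):

```yaml
spec:
  hostNetwork: false   # no access to the host network
  hostPID: false       # no access to the host PID namespace
  hostIPC: false       # no access to the host IPC namespace
  containers:
    - name: falco
      securityContext:
        privileged: false
```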

The daemonset pods should run a lightweight program (probably written in Go) that reads events from the Falco Unix Socket here.

The host

There should be two options for installing the Falco components on the host. A privileged and less secure option that runs the installation in an init container, or an opt-out option that simply assumes this is already managed at the host level.

Kubernetes should NOT be watching/scheduling Falco. Falco should be scheduled with Systemd so that it will continue to run even if Kubernetes is compromised.

The only components running inside of Kubernetes will be lightweight pods that consume the falco events and can potentially forward these events around the cluster.

Alternatives

Additional context

Add support to chart to work with 'helm template'

Motivation
It would be very helpful to be able to use the falco chart in a GitOps pipeline (e.g. Flux) or via Spinnaker's Bake Manifest stage. For the Flux use case, this involves using helm template to render the chart's contents into fully populated manifests and then committing the manifests to a git repository that is monitored by a GitOps operator running in the cluster. For Spinnaker, it involves using helm template to render the chart's contents that are then applied directly to the Kubernetes cluster.

The current chart does not work as-is for this use case because the namespace-scoped objects are missing a namespace field, which results in the objects ending up being applied to the default namespace.

Feature

I would like to see the various namespace-scoped objects be defined with namespace: {{ .Release.Namespace }} so that they will work well with helm template.
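For illustration, a namespace-scoped object in the chart's templates would then look something like this (the ConfigMap and its name are hypothetical; only the namespace line is the requested change):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-rules
  namespace: {{ .Release.Namespace }}
data:
  falco_rules.local.yaml: |
    # custom rules here
```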

Alternatives

An alternative approach is to fork this chart in order to add the namespace fields.

Additional context

The requested change is the same as other charts have done:

No support for latest EKS 1.15 and 1.16 Amazon Linux 2 Kernel version (4.14.177-139.253.amzn2.x86_64)

Describe the bug

No falco support for latest EKS 1.15 and 1.16 kernel versions:

curl -s https://s3.amazonaws.com/download.draios.com/stable/sysdig-probe-binaries/index.html | grep 'falco-probe.*177-139.*amzn2' |wc -l
       0

How to reproduce it

$ kubectl set image daemonset.apps/falco -n kube-system falco=docker.io/falcosecurity/falco:0.23.0
 kubectl logs -f $(kubectl get pod -l app=falco -o name | head -1)
* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
* Running dkms build failed, couldn't find /var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build/make.log
* Trying to load a system falco driver, if present
* Trying to find locally a prebuilt falco module for kernel 4.14.177-139.253.amzn2.x86_64, if present
Detected an unsupported target system, please get in touch with the Falco community
Wed Jun  3 12:25:50 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Wed Jun  3 12:25:50 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Wed Jun  3 12:25:52 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed Jun  3 12:25:53 2020: Loading rules from file /etc/falco/rules.d/rules-overrides.yaml:
Wed Jun  3 12:25:55 2020: Unable to load the driver. Exiting.
Wed Jun  3 12:25:55 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

Expected behaviour

No CrashLoopBackOff for falco pods

Screenshots
Environment

  • falco version: 0.23.0
  • System info: AWS EKS Amazon Linux 2: amazon-eks-node-1.15-v20200507
  • Cloud provider or hardware configuration: AWS EKS
  • OS: Amazon Linux 2
  • Kernel: 4.14.177-139.253.amzn2.x86_64
  • Installation method: Helm, and manually replicated Dockerfile apt-get steps in a separate container

Additional context

Error: failed to download "falcosecurity/falco

Hi,

Helm is not able to download the chart.

helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories
helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "googleapis" chart repository

$ helm search repo falco
NAME CHART VERSION APP VERSION DESCRIPTION
falcosecurity/falco 1.5.0 0.26.1 Falco
falcosecurity/falco-exporter 0.3.7 0.3.0 Prometheus Metrics Exporter for Falco output ev...
falcosecurity/falcosidekick 0.1.26 2.14.0 A simple daemon to help you with falco's outputs

helm install falco falcosecurity/falco
Error: failed to download "falcosecurity/falco" (hint: running helm repo update may help)

Regards,

Enabling k8s audit event support not working on k8s version 1.18.5

Describe the bug
Unable to see the k8s audit events in the Falco logs after enabling the k8s audit event support

I have a local k8s setup with one master and one worker node. I am trying to enable k8s audit event support by following https://github.com/falcosecurity/charts/tree/falco-1.5.1/falco. I am able to execute the instructions successfully, but I am not able to see the k8s audit events in the Falco logs.

Apiserver flags:
Option1:
- --audit-log-path=/var/lib/k8s_audit/k8s_audit_events.log
- --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml
- --audit-log-maxbackup=1
- --audit-log-maxsize=10
- --audit-dynamic-configuration
- --feature-gates=DynamicAuditing=true
- --runtime-config=auditregistration.k8s.io/v1alpha1=true

Option2:
- --audit-dynamic-configuration
- --feature-gates=DynamicAuditing=true
- --runtime-config=auditregistration.k8s.io/v1alpha1=true

Falco deployment command:

helm install falco --set auditLog.enabled=true --set auditLog.dynamicBackend.enabled=true falcosecurity/falco

Auditsink.yml:

# Source: falco/templates/auditsink.yaml

apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: falco
spec:
  policy:
    level: RequestResponse
    stages:
    - ResponseComplete
    - ResponseStarted
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      service:
        namespace: default
        name: falco
        port: 8765
        path: /k8s-audit

How to reproduce it

Follow instructions on https://github.com/falcosecurity/charts/tree/falco-1.5.1/falco
Expected behaviour

K8s events should be visible in the Falco logs

Screenshots

Environment
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 105d v1.18.5
k8s-node1 Ready 105d v1.18.5

  • Falco version:

Falco: 0.26.1
Chart: 1.5.0

  • Cloud provider or hardware configuration:
  • OS:
  • Kernel:
  • Installation method:
    Helm chart

Additional context

Helm install failing on GKE

Describe the bug

Upon installing the helm chart on GKE, the pods error out and the logs suggest a failure to install the falco module.

How to reproduce it

Install on GKE cluster with helm.

helm install falco falcosecurity/falco -n sysdig-falco

Expected behaviour

Should load the pods correctly and compile/download the falco module, resulting in a working install.

Screenshots

kubectl log output:

* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
* Running dkms build failed, couldn't find /var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build/make.log
* Trying to load a system falco driver, if present
* Trying to find locally a prebuilt falco module for kernel 4.14.138+, if present
* Trying to download prebuilt module from https://dl.bintray.com/falcosecurity/driver/96bd9bc560f67742738eb7255aeb4d03046b8045/falco_cos_4.14.138%2B_1.ko
curl: (22) The requested URL returned error: 404 Not Found
Download failed, consider compiling your own falco module and loading it or getting in touch with the Falco community
Fri May 29 09:50:49 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Fri May 29 09:50:49 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Fri May 29 09:50:50 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Fri May 29 09:50:51 2020: Unable to load the driver. Exiting.
Fri May 29 09:50:51 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

Environment

  • Falco version: Chart Version 1.1.8, App Version 0.23.0
  • System info: Nodes running COS
  • Cloud provider or hardware configuration:
  • OS: COS 11647.293.0
  • Kernel:
  • Installation method: Kubernetes with helm3

Does this chart replace the helm/stable chart?

Hi Folks:

I am a PM at Sumo Logic and we package Falco with our chart for collecting data from K8s. I wanted to confirm that this is the chart that will replace the chart in helm/stable. We plan to migrate to this chart to get the fix for #10 which is not in helm/stable. We also want to confirm if there have been any significant changes between 1.1.7 of the helm/stable chart and this chart (e.g. changes to the behavior of the chart, etc).

Unable to deploy this helm chart with falcosecurity/falco-no-driver:0.23.0

Describe the bug
Falco pods fail to start as the Falco binary is unable to locate a pre-installed BPF probe /root/.falco/falco-bpf.o.

Wed May 27 00:25:04 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Wed May 27 00:25:04 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Wed May 27 00:25:05 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed May 27 00:25:06 2020: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Wed May 27 00:25:07 2020: Unable to load the driver. Exiting.
Wed May 27 00:25:07 2020: Runtime error: can't open BPF probe '/root/.falco/falco-bpf.o': Errno 2. Exiting.

How to reproduce it

Create a chart override file test.yaml:

image:
  registry: docker.io
  repository: falcosecurity/falco-no-driver
  tag: 0.23.0
  pullPolicy: IfNotPresent

ebpf:
  enabled: true

auditLog:
  enabled: true
  dynamicBackend:
    enabled: true

Run
helm install falco -f falco/values.yaml -f test.yaml -n falco --generate-name

Note that I pre-installed the Falco BPF probe by using falco-driver-loader, which by default installs the BPF probe in /root/.falco

docker run --rm --privileged --name falco-probe-installer -v /root/.falco:/root/.falco -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro -v /etc:/host/etc:ro --env FALCO_BPF_PROBE="" falcosecurity/falco-driver-loader:0.23.0

Expected behaviour

The Falco chart should support deploying falcosecurity/falco-no-driver:0.23.0

Screenshots

Environment

  • Falco version:
    falcosecurity/falco-no-driver:0.23.0
  • OS:
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel:
    Linux ip-10-97-143-63 5.3.0-42-generic #34~18.04.1-Ubuntu SMP Fri Feb 28 13:42:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method:
    falco-driver-loader

Additional context

  • I was able to bypass this reported error by mounting /root to Falco pod.
  • Should this chart support pre-installing Falco BPF probe/module by making use of falco-driver-loader as initContainer?
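A sketch of what such an initContainer could look like, mirroring the docker run command above (the volume names are hypothetical, and the matching volumes entries are omitted for brevity):

```yaml
initContainers:
  - name: driver-loader
    image: docker.io/falcosecurity/falco-driver-loader:0.23.0
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true   # needed to install the kernel module / BPF probe
    env:
      - name: FALCO_BPF_PROBE
        value: ""
    volumeMounts:
      - name: root-falco-fs
        mountPath: /root/.falco
      - name: proc-fs
        mountPath: /host/proc
        readOnly: true
      - name: boot-fs
        mountPath: /host/boot
        readOnly: true
      - name: lib-modules
        mountPath: /host/lib/modules
        readOnly: true
      - name: usr-fs
        mountPath: /host/usr
        readOnly: true
      - name: etc-fs
        mountPath: /host/etc
        readOnly: true
```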

Setting up liveness and readiness probes on falco

Motivation
As part of Kubernetes' best practices, I'd like to set Readiness and Liveness probes on all the containers deployed on my infrastructure. As of now, the chart lacks this capability.

Feature

At the moment Falco doesn't define any probe. We'd like a way to check whether the container is running correctly; this is a bit challenging since it's not clear what we could check to ensure falco is up.

Alternatives

Possible ways to ensure Falco is running would be:

  • Checking on running processes inside the container
  • Checking on logs/files used by falco

Any option would be welcome.
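As one possible sketch, an exec probe that checks for a running falco process (this assumes pgrep is available in the image, which would need to be verified; the timings are illustrative):

```yaml
livenessProbe:
  exec:
    command: ["sh", "-c", "pgrep falco"]
  initialDelaySeconds: 60
  periodSeconds: 15
readinessProbe:
  exec:
    command: ["sh", "-c", "pgrep falco"]
  initialDelaySeconds: 30
  periodSeconds: 15
```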

How are the rules for Falco configured, or what should be added to falco in a production environment

At present, I have deployed a group of falco containers running in the k8s cluster through helm. The current rules are not clear to me: I can only see changes on the current node's containers when I touch a file. Are there any best practices for rules in a production environment? Which rules should be added, and how do I modify or add rules? I am not familiar with this area, and I hope I can get your help.

[root@m1 deployment]# kubectl get no -o wide
NAME   STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
m1     Ready    master   20d   v1.14.3   192.168.2.10   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://18.9.9
m3     Ready    worker   20d   v1.14.3   192.168.2.12   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://18.9.9

[root@m1 deployment]# kubectl get po -n kube-system |grep falco
falco-fnhqc                                    1/1     Running            0          3d
falco-hnkt9                                    1/1     Running            0          3d
[root@m1 deployment]# helm list -n kube-system
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
falco   kube-system     1               2020-10-12 11:25:50.074378454 +0800 CST deployed        falco-1.5.0     0.26.1   

[falco] Allow extra init container

Motivation

Reporting here the original request asked in helm/charts#22268

Feature

Allow extra init container(s) to be added, as Grafana does helm/charts#12343

Alternatives

Additional context

This feature can be used to run falcosecurity/driverloader as init container, then falcosecurity/falco-no-driver.
It would help to implement #17

Deprecate Falco's chart integrations in favor of falcosidekick

Motivation

Falco's chart comes with various third-party integrations that seem to be unmaintained.

Moreover, the current implementation of those integrations relies on several docker images that live outside the falcosecurity org:

  • image: sysdig/falco-nats:latest
  • image: sysdig/falco-sns:latest
  • image: sysdiglabs/falco-pubsub:latest

That makes Falco's chart hard to maintain too (here is an example).

Finally, falcosidekick (which is actively maintained, lives inside the falcosecurity org, and whose chart is already in this repository) already provides some of those integrations (the missing ones are coming soon).

Feature

Remove the following integrations from the Falco's chart:

  • gcscc (i.e., Google Cloud Security Command Center)
  • natsOutput
  • snsOutput
  • pubsubOutput (i.e., Google Cloud Pub/Sub)

Finally, document how Falco's chart can integrate with those services by using falcosidekick's.

Alternatives

Do nothing. But sooner or later the current integrations will not work anymore.

Additional context

Service supported by falcosidekick:

  • NATS
  • SNS (recently added)
  • PubSub (work already in progress)
  • GCloud Security Command Center

Slack channel discussion:
https://kubernetes.slack.com/archives/CMWH3EH32/p1602690485207200

cc @Issif @nibalizer

falco-exporter: grafanaDashboard.enabled does not handle prometheus data source correctly

Describe the bug

When falco-exporter is configured with the following values, the dashboard is created inside Grafana via ConfigMap (grafana-falco) but does not work since the data source variable ${DS_PROMETHEUS} doesn't seem to be replaced.

grafanaDashboard:
  enabled: true
  namespace: monitoring

JSON model of the imported "Falco Dashboard":

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": 29,
  "links": [],
  "panels": [
    {
      "aliasColors": {},
      "bars": false,
      "dashLength": 10,
      "dashes": false,
      "datasource": "${DS_PROMETHEUS}",
      "description": "",
      "fieldConfig": {
        "defaults": {
          "custom": {}
        },
        "overrides": []
      },
      "fill": 1,
      "fillGradient": 0,
      "gridPos": {
        "h": 11,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "hiddenSeries": false,
      "id": 2,
      "legend": {
        "alignAsTable": true,
        "avg": false,
        "current": false,
        "max": false,
        "min": false,
        "rightSide": true,
        "show": true,
        "total": false,
        "values": false
      },
      "lines": true,
      "linewidth": 1,
      "nullPointMode": "null",
      "options": {
        "dataLinks": []
      },
      "percentage": false,
      "pointradius": 2,
      "points": false,
      "renderer": "flot",
      "seriesOverrides": [],
      "spaceLength": 10,
      "stack": true,
      "steppedLine": false,
      "targets": [
        {
          "expr": "rate(falco_events[5m])",
          "interval": "",
          "intervalFactor": 1,
          "legendFormat": "{{rule}} (node=\"{{kubernetes_node}}\",ns=\"{{k8s_ns_name}}\",pod=\"{{k8s_pod_name}}\")",
          "refId": "A"
        }
      ],
      "thresholds": [],
      "timeFrom": null,
      "timeRegions": [],
      "timeShift": null,
      "title": "Events rate",
      "tooltip": {
        "shared": true,
        "sort": 0,
        "value_type": "individual"
      },
      "type": "graph",
      "xaxis": {
        "buckets": null,
        "mode": "time",
        "name": null,
        "show": true,
        "values": []
      },
      "yaxes": [
        {
          "format": "short",
          "label": null,
          "logBase": 1,
          "max": null,
          "min": null,
          "show": true
        },
        {
          "format": "short",
          "label": null,
          "logBase": 1,
          "max": null,
          "min": null,
          "show": true
        }
      ],
      "yaxis": {
        "align": false,
        "alignLevel": null
      }
    },
    {
      "columns": [],
      "datasource": "${DS_PROMETHEUS}",
      "fieldConfig": {
        "defaults": {
          "custom": {}
        },
        "overrides": []
      },
      "fontSize": "100%",
      "gridPos": {
        "h": 10,
        "w": 24,
        "x": 0,
        "y": 11
      },
      "id": 4,
      "links": [],
      "pageSize": null,
      "showHeader": true,
      "sort": {
        "col": null,
        "desc": false
      },
      "styles": [
        {
          "alias": "Time",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "Time",
          "type": "date"
        },
        {
          "alias": "",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 2,
          "link": false,
          "mappingType": 1,
          "pattern": "/__name__|instance|job|kubernetes_name|(__name|helm_|app_).*/",
          "sanitize": false,
          "thresholds": [],
          "type": "hidden",
          "unit": "short"
        },
        {
          "alias": "Count",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Value",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "",
          "align": "left",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "priority",
          "thresholds": [
            ""
          ],
          "type": "number",
          "unit": "none",
          "valueMaps": [
            {
              "text": "5",
              "value": "5"
            }
          ]
        },
        {
          "alias": "",
          "align": "left",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "decimals": 2,
          "pattern": "/.*/",
          "thresholds": [],
          "type": "string",
          "unit": "short"
        }
      ],
      "targets": [
        {
          "expr": "falco_events",
          "format": "table",
          "instant": true,
          "refId": "A"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Totals",
      "transform": "table",
      "transparent": true,
      "type": "table-old"
    }
  ],
  "schemaVersion": 25,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-6h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ]
  },
  "timezone": "",
  "title": "Falco Dashboard",
  "uid": "FvUFlfuZz",
  "version": 1
}

TBH I'm not sure how exactly ${DS_PROMETHEUS} is handled and whether it even should be replaced with the actual prometheus data source when the dashboard is imported via the API, but when I change the datasource from ${DS_PROMETHEUS} to Prometheus it works just fine.
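For context, ${DS_PROMETHEUS} is a dashboard template variable; dashboards that use it normally declare it in the templating list so Grafana can bind it to a real data source. A sketch of such an entry (this follows the standard Grafana convention for datasource variables and is not taken from the chart):

```json
"templating": {
  "list": [
    {
      "name": "DS_PROMETHEUS",
      "label": "Prometheus data source",
      "type": "datasource",
      "query": "prometheus",
      "hide": 0,
      "current": {}
    }
  ]
}
```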

How to reproduce it

  1. Let the falco-exporter Helm chart handle the Grafana monitoring using these values:

grafanaDashboard:
  enabled: true
  namespace: monitoring

  2. Access the Grafana Falco dashboard and see that no values are shown, even after minutes or hours.

Expected behaviour

The "Falco Dashboard" should be configured with the proper prometheus data source and show the falco events.

Screenshots

Environment

  • Falco version: falcosecurity/falco-exporter:0.3.0
  • Grafana version: grafana/grafana:7.0.3
  • Installation method: Helm Chart version 0.3.3

Thanks!

Regards,
Philip

Missing falco_debian_4.9.0-11-amd64_1.o

Describe the bug
The URLs to download the eBPF driver for kernel versions 4.9.0-11-amd64_1, 4.9.0-12-amd64_1, and 4.9.0-13-amd64_1 are missing the .o file, but the .ko files are there. All other kernels seem to have the .o file.

See this url
https://dl.bintray.com/falcosecurity/driver/85c88952b018fdbce2464222c3303229f5bfcfad/falco_debian_4.9.0-11-amd64_1.o returns 404.

Resulting in this error when compiling:

Jul 27 10:46:23 falco-m27nj falco mv: cannot stat '/usr/src/falco-85c88952b018fdbce2464222c3303229f5bfcfad/bpf/probe.o': No such file or directory
Jul 27 10:46:23 falco-m27nj falco * Trying to download a prebuilt eBPF probe from https://dl.bintray.com/falcosecurity/driver/85c88952b018fdbce2464222c3303229f5bfcfad/falco_debian_4.9.0-11-amd64_1.o
Jul 27 10:46:27 falco-m27nj falco error curl: (22) The requested URL returned error: 404 Not Found
Jul 27 10:46:27 falco-m27nj falco Download failed

NOTE: I'm using the helm chart with ebpf.enabled: true with a kubernetes install using kops 1.17.1

How to reproduce it

Have a Kubernetes cluster created with kops 1.17.1 (or nodes using Debian kernel version 4.9.0-11) and use the Helm chart provided by https://github.com/falcosecurity/charts with the value ebpf.enabled: true.

Expected behaviour

The URL shouldn't return 404.

Environment

  • Falco version: 0.24
  • System info:
  • Cloud provider or hardware configuration: AWS - KOPS
  • OS: Debian
  • Kernel: 4.9.0-11
  • Installation method: helm charts

falco-exporter: Add PSP, Role and RoleBinding Helm templates to the chart

Hi there,

Motivation

I'm currently working in a quite restrictive K8s environment and unfortunately haven't been able to deploy falco-exporter so far, since the falco-exporter Helm chart only comes with a "falco-exporter" ServiceAccount but without any configuration for the required PSP privileges.

Feature

Since the falco-exporter DaemonSet uses a hostPath volume (see https://github.com/falcosecurity/charts/blob/master/falco-exporter/templates/daemonset.yaml#L64-L67) it's sometimes required to explicitly allow this behavior via separate PSP, ClusterRole & ClusterRoleBinding.

A possible PSP could look like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: falco-exporter
  name: falco-exporter
spec:
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - 'hostPath'
  allowedHostPaths:
  - pathPrefix: "/var/run/falco"
    readOnly: true

But what about the ClusterRole? I mean, which permissions need to be configured inside it? Since falco-exporter is able to add a ServiceMonitor and/or a Grafana dashboard ConfigMap, I don't think something like the following will be sufficient:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: falco-exporter
  labels:
    app: falco-exporter
rules:
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    resourceNames:
      - falco-exporter
    verbs:
      - use
... what else here?
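For what it's worth, when a ClusterRole only grants `use` on a PSP, the remaining piece is usually just a binding that attaches it to the chart's ServiceAccount. A sketch; the namespace and ServiceAccount name are assumptions based on the chart defaults:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: falco-exporter
  labels:
    app: falco-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: falco-exporter
subjects:
  - kind: ServiceAccount
    name: falco-exporter
    namespace: falco  # assumed install namespace
```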

If you can tell me the exact ClusterRole permissions required, I could implement this change via a PR (if you want).

Thanks!

Regards,
Philip

Helm Chart; Auditing not working. "the server could not find the requested resource"

Describe the bug

When installing Falco through the Helm chart, the issue falcosecurity/falco#1026, which relates to a wrong setting in the AuditSink, still persists. After setting this to the correct format as described in that issue, my Kubernetes API server log fills up with Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource errors. I tried generating some Kubernetes audit events that should trigger a Falco alert, as described on the Falco documentation site, but no alert is given. This probably indicates that Falco is not receiving any audit logs.

How to reproduce it

Add the following to the Kubernetes API server:

  • --audit-dynamic-configuration
  • --feature-gates=DynamicAuditing=true
  • --runtime-config=auditregistration.k8s.io/v1alpha1=true

Set "auditLog" and "dynamicBackend" to true in the values.yaml provided by the Falco Helm chart.

Install Falco with the Helm chart using the command: helm install falco -f values.yaml stable/falco. I used Helm 3.2.1, so the original commands on the Falco chart GitHub site no longer work.

Expected behaviour

Audit logs from the Kubernetes API server getting received and inspected by Falco.

Screenshots

2020-05-11T11:05:24.005466424Z AUDIT: id="0b967b34-9750-4dcd-905c-cacf392c16c7" stage="ResponseComplete" ip="xx.xx.xx.xx" method="get" user="system:kube-controller-manager" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s" response="200"
E0511 11:16:46.538374       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:08:12.927495363Z AUDIT: id="d031c5b4-101e-4ba9-964f-fb8cc0b9b402" stage="ResponseComplete" ip="xx.xx.xx.xx" method="update" user="system:kube-controller-manager" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s" response="200"
E0511 11:16:46.581997       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:02:22.342627937Z AUDIT: id="ec410dc7-8173-4528-aef6-d66a753524df" stage="ResponseStarted" ip="xx.xx.xx.xx" method="watch" user="system:node:workernode1" groups="\"system:nodes\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dfalco-token-6z6x6&resourceVersion=33914&timeout=8m14s&timeoutSeconds=494&watch=true" response="200"
E0511 11:16:46.677243       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:15:01.198249466Z AUDIT: id="0046feb5-dea3-4361-b9e2-3472e01537e9" stage="ResponseComplete" ip="xx.xx.xx.xx" method="get" user="system:kube-controller-manager" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s" response="200"
E0511 11:16:46.880651       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:01:40.478150167Z AUDIT: id="12f28d4c-4e36-40a5-b3e0-faabb666b17d" stage="ResponseComplete" ip="xx.xx.xx.xx" method="get" user="system:serviceaccount:kube-system:generic-garbage-collector" groups="\"system:serviceaccounts\",\"system:serviceaccounts:kube-system\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="<none>" uri="/apis/apiextensions.k8s.io/v1beta1?timeout=32s" response="200"
E0511 11:16:46.903637       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:12:18.187986702Z AUDIT: id="e53e4dad-9a1d-49d3-95de-f2e26de39259" stage="ResponseComplete" ip="xx.xx.xx.xx" method="update" user="system:kube-scheduler" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s" response="200"
E0511 11:16:46.908830       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:02:00.904053739Z AUDIT: id="e0b80922-c48c-4cbf-86ac-a5e545a5e2dc" stage="ResponseComplete" ip="xx.xx.xx.xx" method="get" user="system:kube-controller-manager" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s" response="200"
E0511 11:16:46.959913       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:05:43.536143494Z AUDIT: id="855dcc7e-0fad-4d3f-bb8e-e7035adf48e4" stage="ResponseStarted" ip="xx.xx.xx.xx" method="watch" user="system:kube-controller-manager" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="<none>" uri="/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=33914&timeout=9m1s&timeoutSeconds=541&watch=true" response="200"
E0511 11:16:47.060695       1 metrics.go:109] Error in audit plugin 'dynamic_webhook' affecting 1 audit events: the server could not find the requested resource
Impacted events:
2020-05-11T11:08:30.947512433Z AUDIT: id="4c2b9bd7-c162-496b-af08-50e3744a0c5c" stage="ResponseComplete" ip="xx.xx.xx.xx" method="get" user="system:kube-scheduler" groups="\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="kube-system" uri="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" response="200"

Environment

  • Falco version: 0.22.1
  • System info:
    {
    "machine": "x86_64",
    "nodename": "falco-b8kjr",
    "release": "4.18.0-147.8.1.el8_1.x86_64",
    "sysname": "Linux",
    "version": "#1 SMP Thu Apr 9 13:49:54 UTC 2020"
    }
  • Cloud provider or hardware configuration: Kubernetes 1.18.2
  • OS:
    NAME="CentOS Linux"
    VERSION="8 (Core)"
  • Kernel: 4.18.0-147.8.1.el8_1.x86_64
  • Installation method: Helm Chart

Additional context
I tried installing Falco as a host-based installation via the script on the Falco documentation site. With this method, after configuring the Kubernetes API server, Falco works as expected and no issues appear in the Kubernetes API log.

I added this issue a couple of days ago. When I looked at it again, it had been moved to the contrib section. Why? This is not a contrib report.

Update OWNERS

Motivation

Following up on falcosecurity/contrib#12

We need to review and update the OWNERS file.

More importantly - we need to identify new owners for these charts.

Feature

Calling all maintainers. If you are interested in maintaining or contributing to the chart please follow up below.

I would also like to volunteer as a maintainer to help ensure features and support are not missed.

Alternatives

Additional context

Declare support for helm charts

Motivation

Following up on https://falco.org/blog/falco-scope/

Can we please document the support path for these charts? Where do users go for help?

Feature

Can we create a clear document somewhere that describes the responsibility for each chart, and where to go for help and support?

Alternatives

Additional context

falco-exporter: Using imagePullSecrets conflicts with PSP

Describe the bug

Using imagePullSecrets currently does not work with the falco-exporter default PSP, which only allows volumes of type hostPath.

Example:

A Falco-Exporter values file with the following config ...

...
imagePullSecrets:
- name: my-awesome-image-pull-secret
...

... is not able to run ...

$ kubectl describe ds -n falco falco-exporter
...
  Warning  FailedCreate      77s (x10 over 18m)  daemonset-controller  Error creating: pods "falco-exporter-" is forbidden: unable to validate against any pod security policy: [spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "secret": secret volumes are not allowed to be used]

Additional context

This bug was introduced with falco-exporter chart version 0.3.5: #114 made the PSP too restrictive. We need to allow secret volumes too:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  ...
  name: falco-exporter
spec:
  ...
  volumes:
  - hostPath
  - secret

@leogr: I'll open a PR in a few hours which will fix this issue.

Regards,
Philip

Chart lint check on PRs

Motivation

Helm provides the command helm lint that runs a series of tests to verify that the chart is well-formed.
It would be great to add this as a status check for PRs.

Feature

Add a CI step for PR branches that runs helm lint.
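A minimal sketch of such a check, written as a CircleCI job since this repo already uses CircleCI; the image and chart paths are assumptions:

```yaml
version: 2.1
jobs:
  lint:
    docker:
      - image: alpine/helm:latest  # any image with the helm binary (and git for checkout) works
    steps:
      - checkout
      - run:
          name: Lint charts
          command: helm lint ./falco  # add other chart directories as they appear
```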

Alternatives

Do nothing.

Additional context

This issue follows up the issue #22

/assign

[helm install] Kernel headers not found for 4.18.0-147.8.1.el8_1.x86_64 For OpenShift 4.4

Describe the bug
I'm trying to deploy Falco on OpenShift 4.4 with Falco version 0.18.0, and I'm facing the following error:

* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco
Error! echo
Your kernel headers for kernel 4.18.0-147.8.1.el8_1.x86_64 cannot be found at
/lib/modules/4.18.0-147.8.1.el8_1.x86_64/build or /lib/modules/4.18.0-147.8.1.el8_1.x86_64/source.
* Running dkms build failed, couldn't find /var/lib/dkms/falco/0.18.0/build/make.log
* Trying to load a system falco-probe, if present
* Trying to find precompiled falco-probe for 4.18.0-147.8.1.el8_1.x86_64
Found kernel config at /lib/modules/4.18.0-147.8.1.el8_1.x86_64/config
* Trying to download precompiled module from https://s3.amazonaws.com/download.draios.com/stable/sysdig-probe-binaries/falco-probe-0.18.0-x86_64-4.18.0-147.8.1.el8_1.x86_64-ea6bf7ba7bc281b199cd7bd0fb7866f3.ko
curl: (22) The requested URL returned error: 404 Not Found
Download failed, consider compiling your own falco-probe and loading it or getting in touch with the sysdig community
Thu May 14 12:39:20 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Thu May 14 12:39:20 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Thu May 14 12:39:20 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Thu May 14 12:39:21 2020: Loading rules from file /etc/falco/rules.available/application_rules.yaml:
Thu May 14 12:39:22 2020: Unable to load the driver. Exiting.
Thu May 14 12:39:22 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco-probe module is loaded.. Exiting.

How to reproduce it

Falco Chart 1.17
Image: 0.18.0

Expected behaviour
Normal execution of the Falco modules.

Screenshots

Environment

  • Falco version: 0.18.0, 0.19.0, 0.20.0,0.21.0, 0.22.0, master
  • System info:
  • Cloud provider or hardware configuration:
  • OS: RHEL 8
  • Kernel: 4.18.0-147.8.1.el8_1.x86_64
  • Installation method: stable/falco chart

Additional context

Without eBPF filter:

* Setting up /usr/src links from host
* Unloading falco module, if present
* Running dkms build failed, couldn't find /var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build/make.log
* Trying to load a system falco driver, if present
* Trying to find a prebuilt falco module for kernel 4.18.0-147.8.1.el8_1.x86_64
Detected an unsupported target system, please get in touch with the Falco community
Thu May 14 13:19:38 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Thu May 14 13:19:38 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Thu May 14 13:19:38 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Thu May 14 13:19:39 2020: Loading rules from file /etc/falco/rules.available/application_rules.yaml:
Thu May 14 13:19:40 2020: Unable to load the driver. Exiting.
Thu May 14 13:19:40 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

Related:
falcosecurity/falco#1188
falcosecurity/falco#1078

Update the Falco's Helm chart once Falco 0.24.0 has been released

Motivation

We just need to keep the chart in sync.

Feature

  • Merge #29 (Unix socket support)
  • Update the ruleset (grab files from the Falco's main repository once the release has a git tag)
  • Update falco/Chart.yaml with appVersion: 0.24.0
  • Release a new Falco chart version (bump to 1.2.0 since a new feature has been added)

Alternatives

No alternatives.

Additional context

We just have to keep this on hold until Falco 0.24.0 has been released (scheduled for tomorrow).
/assign

Integrate Falco exporter into Falco pod

Motivation
Right now, Falco and falco-exporter are two different Helm charts resulting in two different pods, which are pretty much dependent on each other. I have two issues with that:

  1. The easiest way to set this up is using the gRPC socket, which means both pods need an extra host mount to communicate. This is not a best practice; it would be better to set up communication over the network. In that case, it makes much more sense from multiple perspectives to have Falco and the exporter in the same networking space (both containers in the same pod).

  2. We are running AKS and EKS with the native CNI, which means each node has a maximum number of IPs to distribute to the pods. When I have to spin up an extra DaemonSet, it costs our developers 1 * #nodes pods.

Feature
I'm willing to make a PR to integrate falco-exporter into the Falco Helm chart, but I get the feeling that you want to keep the Helm chart as slim as possible, which also makes sense (I opened a PR for falco-ekscloudwatch before). Hence this ticket.

Alternatives

Another idea would perhaps be to create two Falco charts:

Falco-slim: the bare minimum included to deploy Falco.
Falco-full: all extensions included

Additional context

N/A

[falco] Enabling SSL for the embedded webserver is not possible / auditLog.dynamicBackend cannot work

Describe the bug

Since the chart does not provide an option to enable SSL for the web server, and since the scheme is always set to HTTPS when pointing to a service reference in the AuditSink clientConfig, the AuditSink configuration won't work.

How to reproduce it

Install Falco with auditLog.dynamicBackend enabled:

helm install falco falcosecurity/falco \
    --set auditLog.enabled=true --set auditLog.dynamicBackend.enabled=true

Then errors like the following are continuously emitted to the K8s api-server log:

Error in audit plugin 'dynamic_webhook' affecting 1 audit events: Post https://falco.default.svc:8765/k8s-audit?timeout=30s: http: server gave HTTP response to HTTPS client

Expected behaviour

Ability to enable SSL on the web server, with no errors.

Screenshots

Environment

  • Falco version: 0.23.0
  • System info:
  • Cloud provider or hardware configuration:
  • OS:
  • Kernel:
  • Installation method: Helm

Additional context
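For context, upstream falco.yaml does expose TLS settings for the embedded web server that the chart could surface as values. Key names are taken from upstream falco.yaml (verify against your Falco version); the certificate path is an assumption:

```yaml
webserver:
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s-audit
  ssl_enabled: true
  ssl_certificate: /etc/falco/certs/server.pem  # PEM with cert + key, mounted into the pod
```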

Duplicate mount point

Describe the bug

Since v1.2.x, Falco pods are crash-looping due to a duplicate mount point error related to /var/run/falco:

kubectl -n falco -l app=falco describe pods

Events:
  Type     Reason     Age                      From               Message
  ----     ------     ----                     ----               -------
  Normal   Scheduled  <unknown>                default-scheduler  Successfully assigned falco/falco-zwbxh to master02
  Normal   Pulling    71s                      kubelet, master02  Pulling image "busybox"
  Normal   Pulled     69s                      kubelet, master02  Successfully pulled image "busybox"
  Normal   Created    69s                      kubelet, master02  Created container init-pipe
  Normal   Started    69s                      kubelet, master02  Started container init-pipe
  Normal   Pulling    68s                      kubelet, master02  Pulling image "sysdig/falco-nats:latest"
  Normal   Pulled     67s                      kubelet, master02  Successfully pulled image "sysdig/falco-nats:latest"
  Normal   Created    67s                      kubelet, master02  Created container falco-nats
  Normal   Started    66s                      kubelet, master02  Started container falco-nats
  Warning  Failed     2s (x8 over 68s)         kubelet, master02  Error: Error response from daemon: Duplicate mount point: /var/run/falco
  Normal   Pulled     <invalid> (x9 over 68s)  kubelet, master02  Container image "docker.io/falcosecurity/falco:0.24.0" already present on machine

How to reproduce it

Run the following command to install Falco with grpc enabled:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install --namespace falco falco --set falco.grpc.enabled=true falcosecurity/falco

Expected behaviour

v1.1.10 is working fine.
shared-pipe and grpc-socket-dir are both pointing to the /var/run/falco directory.

spec:
  volumes:
    - name: shared-pipe
      emptyDir: {}
    - name: grpc-socket-dir
      hostPath:
        path: /var/run/falco
        type: ''
...
  containers:
      volumeMounts:
        - name: shared-pipe
          mountPath: /var/run/falco/
        - name: grpc-socket-dir
          mountPath: /var/run/falco
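A possible fix, sketched: collapse the two overlapping mounts into a single volume so the container runtime no longer sees a duplicate mount point. This is untested, and the chart may need to keep the emptyDir vs hostPath choice configurable; the volume name is hypothetical:

```yaml
spec:
  volumes:
    - name: falco-socket-dir   # single volume replacing shared-pipe and grpc-socket-dir
      hostPath:
        path: /var/run/falco
  containers:
    - volumeMounts:
        - name: falco-socket-dir
          mountPath: /var/run/falco
```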

Environment

Kubernetes v1.18.4

  • Falco version:
    v0.24.0
  • System info:
  • Cloud provider or hardware configuration: Hetzner Cloud
  • OS: Ubuntu 20.04 Focal Fossa
  • Kernel: Linux edge01 5.4.0-28-generic #32-Ubuntu SMP Wed Apr 22 17:40:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method: Kubernetes

[UMBRELLA] improvements to include other charts

Motivation

This issue tries to summarize all improvements we still need to implement in this repo to include other charts.

References:

The current state of the art

This GitHub project is the source for our Helm chart repository, and its scope is explained in the README.

Our Helm chart repository is already working, and users can use it, for example:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

At the time of writing, the repo contains only one chart (the falco chart), although we are ready to support other charts. To add a new chart, we just need to create another directory.

Furthermore, we already have some automation in place:

  • The automated release process (explained in the release.md)
  • The helm lint check is implemented for PRs (see #46)

Currently, we also have other charts that live in their GitHub repos and that are not published in any Helm Chart repository. Below is a non-exhaustive list:

Finally, it's important to mention that we already decided not to have the Falco Helm chart source in the principal Falco repository. However, not all projects are the same, and other projects (like the two mentioned above) may have different needs.

Proposal

Since it is convenient for users to find all the charts of the Falco project under the same Helm Chart repository, it would be great if the abovementioned charts were also listed in our Helm Chart repository: https://falcosecurity.github.io/charts
That being said, we know the importance of having chart sources in the same repository as the application.

The following solution tries to satisfy both needs.

  • Chart maintainers can decide where the chart source will live: in this repo or in their own repo (both options will be supported)
  • If maintainers opt for this repo, they will just need to make a PR to move the source files here (everything already works)
  • Otherwise, if maintainers choose a different repo, they have to implement some automation in that repo to trigger an update of the Helm Chart repository index here.

Below, we explain how the automation for the second option should work (the first one is already implemented).

A chart can easily be packaged and uploaded to a GitHub release using the chart-releaser tool provided by Helm. It works similarly to the way goreleaser does. Here's an example:

helm package "falco-exporter" --destination .cr-release-packages --dependency-update
cr upload -o "falcosecurity" -r "falco-exporter"

The above example will create a git tag (in the format <chartname>-<version>) and upload the .tgz to the GitHub release (here's an example).

The above process can happen in the CI. The maintainers can decide how to trigger the CI to start the above step. I would prefer a simple script that checks whether the version has changed (I've already implemented it here).
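A sketch of what such a version-change check could look like (hypothetical; the actual script linked above may differ):

```shell
# Extract the version field from a chart's Chart.yaml; CI can compare this
# value between HEAD and the previous commit to decide whether to release.
chart_version() {
  sed -n 's/^version:[[:space:]]*//p' "$1/Chart.yaml"
}

# Demo with a fake chart directory standing in for e.g. falco-exporter/:
mkdir -p demo-chart
printf 'apiVersion: v1\nname: demo\nversion: 1.2.0\n' > demo-chart/Chart.yaml
chart_version demo-chart   # prints 1.2.0
```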

The second step needed to include the latest chart version in our centralized Helm Chart repository is to update the index.

Luckily, the chart-releaser tool already provides a way to produce a partial index, for example:

cr index -o "falcosecurity" -r "falco-exporter" -c "https://github.com/falcosecurity/falco-exporter"

Then it can easily be merged with the pre-existing index (the index is append-only, so we just need to push a new entry).

Finally, we just need to commit and push the index to the gh-pages branch of this repo (here). By doing so, we will automatically update our Helm Chart repository.

That's it, and no other actions are needed. 🎉

Furthermore, since the whole process can happen inside the CI (which is shared across the whole falcosecurity org), there should be no problem triggering the process in one repo and using our beloved @poiana to commit to this repo (we are already using @poiana for that here).

PS
This solution can contain some hidden complexity, but I'm confident it can be implemented. I'd like to implement a PoC as soon as I can. Moreover, once implemented, it can easily be replicated in every repo where we need it.

falco helm chart doesn't deploy operator

Motivation

not a problem, but an improvement

Feature

Custom rules seem to be a Helm values entry, but the Falco operator allows this to be a CRD.

Alternatives

Unless there's a reason why it doesn't, the Helm chart should deploy the Falco operator (it can of course include the option to deploy an instance within it; a good example is the Jaeger Helm chart).

Additional context

Just wanted to share my thoughts; I'm sure there could be a good reason, but I thought it would be nice to be able to use CRDs for the Falco rules.

helm compatible with lower k8s version

Motivation

My Kubernetes version is v1.14.6, and auditsink.spec.webhook.clientConfig.service does not contain a port field.

Feature

Could the falcosecurity/falco chart be made compatible with lower Kubernetes versions?

Additional context

[root@tpaas-rke17 ~]# helm install falco -f ./falco_custom.yaml falcosecurity/falco
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(AuditSink.spec.webhook.clientConfig.service): unknown field "port" in io.k8s.api.auditregistration.v1alpha1.ServiceReference

[root@tpaas-rke17 ~]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
tpaas-rke17   Ready    master   44h   v1.14.6
[root@tpaas-rke17 ~]# kubectl explain auditsink.spec.webhook.clientConfig.service
KIND:     AuditSink
VERSION:  auditregistration.k8s.io/v1alpha1

RESOURCE: service <Object>

DESCRIPTION:
     `service` is a reference to the service for this webhook. Either `service`
     or `url` must be specified. If the webhook is running within the cluster,
     then you should use `service`. Port 443 will be used if it is open,
     otherwise it is an error.

     ServiceReference holds a reference to Service.legacy.k8s.io

FIELDS:
   name	<string> -required-
     `name` is the name of the service. Required

   namespace	<string> -required-
     `namespace` is the namespace of the service. Required

   path	<string>
     `path` is an optional URL path which will be sent in any request to this
     service.

Linting errors

Describe the bug

A linting problem with the Falco chart occurred in the CI:

==> Linting falco
[ERROR] templates/clusterrole.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRole" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRole"
[ERROR] templates/clusterrolebinding.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRoleBinding"

How to reproduce it

Run helm lint (with the latest Helm version).

Expected behaviour

No errors.

Screenshots

https://app.circleci.com/pipelines/github/falcosecurity/charts/100/workflows/2dfed5fc-0ee7-48a6-af03-b9b599741f61/jobs/91

Environment

  • Falco version:
  • System info:
  • Cloud provider or hardware configuration:
  • OS:
  • Kernel:
  • Installation method:

Additional context

Helm install for falco not working on EKS 1.16 -- Unable to download precompiled falco-probe module for 4.14.173-137.229.amzn2.x86_64

Describe the bug
We recently updated from EKS 1.15 to 1.16. We were using Falco 0.19.0 before this without issue.
We tried using the latest falcosecurity/falco:master image, but that did not resolve the issue.

Error! echo
Your kernel headers for kernel 4.14.173-137.229.amzn2.x86_64 cannot be found at
/lib/modules/4.14.173-137.229.amzn2.x86_64/build or /lib/modules/4.14.173-137.229.amzn2.x86_64/source.
* Running dkms build failed, couldn't find /var/lib/dkms/falco/a259b4bf49c3330d9ad6c3eed9eb1a31954259a6/build/make.log
* Trying to load a system falco-probe, if present
* Trying to find precompiled falco-probe for 4.14.173-137.229.amzn2.x86_64
Found kernel config at /host/boot/config-4.14.173-137.229.amzn2.x86_64
* Trying to download precompiled module from https://s3.amazonaws.com/download.draios.com/stable/sysdig-probe-binaries/falco-probe-a259b4bf49c3330d9ad6c3eed9eb1a31954259a6-x86_64-4.14.173-137.229.amzn2.x86_64-f0c8ced41ae4d0e71aa715068964ce9f.ko
curl: (22) The requested URL returned error: 404 Not Found
Download failed, consider compiling your own falco-probe and loading it or getting in touch with the Falco community
Tue May  5 15:09:32 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Tue May  5 15:09:32 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Tue May  5 15:09:32 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Tue May  5 15:09:33 2020: Unable to load the driver. Exiting.
Tue May  5 15:09:33 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco-probe module is loaded.. Exiting.

How to reproduce it

Run helm install falcohelm stable/falco in the EKS 1.16 environment.

Expected behaviour

No errors when Falco pods spin up.

Environment

  • Falco version:

0.18.0 / 0.19.0 / 0.22.0 / master

  • Cloud provider or hardware configuration: EKS cluster (1.16)
  • OS: Amazon Linux 2
  • Installation method: Helm Install

Fix for additional labels for falco-exporter servicemonitor

Describe the bug

When deploying the falco-exporter Helm chart with serviceMonitor.additionalLabels set in values.yaml, the chart fails unless the value is a string, in which case the label is not created.

How to reproduce it

Edit values.yaml and add some serviceMonitor additionalLabels as map value(s):

serviceMonitor:
  # Enable the deployment of a Service Monitor for the Prometheus Operator.
  enabled: true
  additionalLabels:
    release: prom
$ helm upgrade --install falco-exporter . -f values.yaml -n falco
Error: UPGRADE FAILED: YAML parse error on falco-exporter/templates/servicemonitor.yaml: error converting YAML to JSON: yaml: line 10: mapping values are not allowed in this context

Expected behaviour

The additionalLabels should be created in the ServiceMonitor, in this case "release: prom".

$ kctl -n falco get servicemonitors falco-exporter -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
...
  labels:
  ...
    release: prom
  ...

Screenshots

N/A

Environment

  • Falco version:
    Falco version: 0.24.0
    Driver version: 85c88952b018fdbce2464222c3303229f5bfcfad

  • System info:

  • Cloud provider or hardware configuration:
    GCP GKE Kubernetes v1.16.12

  • OS:
    "machine": "x86_64",
    "nodename": "gke-yolo-infra-dev-default-node-pool-4754cdf8-1s69",
    "release": "4.19.112+",
    "sysname": "Linux",
    "version": "#1 SMP Thu May 21 12:32:38 PDT 2020"

  • Installation method:
    Helm chart

Additional context

Here is the fix:

File: falco-exporter/templates/servicemonitor.yaml
see lines 8-10 under metadata.labels.

{{ if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "falco-exporter.fullname" . }}
  labels:
    {{- include "falco-exporter.labels" . | nindent 4 }}
    {{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  namespace: {{ .Release.Namespace }}
spec:
  endpoints:
  - port: metrics
    {{- if .Values.serviceMonitor.interval }}
    interval: {{ .Values.serviceMonitor.interval }}
    {{- end }}
    {{- if .Values.serviceMonitor.scrapeTimeout }}
    scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
    {{- end }}
  selector:
    matchLabels:
      {{- include "falco-exporter.selectorLabels" . | nindent 6 }}
{{- end }}

Chart testing tool

Motivation

As originally asked within #6

Feature

Add a step in the CI pipeline to perform automated testing, for example using this tool

Alternatives

Additional context

Charts releasing automation

Motivation

We just need to automate everything, as usual 😺
As also discussed during the last community call.

Feature

Automate the following tasks (e.g. using CircleCI and/or bots):

  • Each chart should have a way to trigger its release process that
    • validates the chart (i.e. checking the version, helm lint, etc.)
    • then runs helm package <chart-name> to produce the artifact
  • Artifacts should be available here as the single source of truth (e.g. in the gh-pages branch)
  • The helm repo index must be automatically updated after a new chart release comes in
  • Finally, everything must be published (currently using the gh-pages mechanism)
  • Add this helm repo to hub.helm.sh #40
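The tasks above could be wired together with a CI workflow along these lines. This is an illustrative sketch only, not the repository's actual pipeline: the choice of GitHub Actions, the chart-releaser action, and the branch name are assumptions.

```yaml
# Hypothetical GitHub Actions workflow sketching the release steps above.
# Tool choices (helm/chart-releaser-action) and paths are assumptions.
name: release-charts
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0          # chart-releaser needs the full history
      - name: Lint charts
        run: helm lint ./*/       # every top-level directory with a Chart.yaml
      - name: Package charts and publish to gh-pages
        uses: helm/chart-releaser-action@v1
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```

chart-releaser packages each changed chart, uploads the artifact, and regenerates the repo index on the gh-pages branch, which covers the validate/package/index/publish steps in one pass.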

Alternatives

Currently, we have a simple script that automates part of this process. However, that script builds the artifact on the developer's machine, so the release process is not fully transparent.
I don't believe that is a real alternative; we can stay with it until we find a better and more elegant solution.

Additional context

Some constraints:

  • both master and gh-pages branches are protected
  • We can have multiple charts inside this repo
  • We cannot use the GitHub release/tag feature (for the reason above); solution found 👉 #31
  • I would like to keep the chart sources of some project in their own repositories (eg falco-exporter)
    • The main reason is to keep app sources and chart sources in the same place, which is useful for doing things like this

falcosidekick does not provide PodSecurityPolicy

Describe the bug

The Helm chart has a switch to create a PodSecurityPolicy (podSecurityPolicy.create), but this switch only adds a ClusterRole and a RoleBinding.

How to reproduce it

Install the Helm chart with podSecurityPolicy.create=true

Expected behaviour

Creates a matching PodSecurityPolicy for falcosidekick.
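A matching policy could look roughly like the following. This is a minimal sketch, not the chart's actual fix: falcosidekick needs no special privileges, so a restrictive policy should suffice, but the name and exact fields here are assumptions.

```yaml
# Hypothetical restrictive PodSecurityPolicy for falcosidekick (sketch).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: falcosidekick
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir"]
```

The existing ClusterRole created by the switch would then reference this policy via the "use" verb on the podsecuritypolicies resource.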

Environment

  • Falco version:
    falcosidekick version: 2.14.0
    falcosidekick helm chart version: 0.1.25
  • Installation method:
    helm

Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

Describe the bug
I'm trying to deploy Falco on OpenShift 4.5.0-0.okd-2020-07-29-070316 (Falco version 0.25.0) using Helm, and I'm facing the following error:

* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
* Running dkms build failed, couldn't find /var/lib/dkms/falco/ae104eb20ff0198a5dcb0c91cc36c86e7c3f25c7/build/make.log
* Trying to load a system falco driver, if present
* Trying to find locally a prebuilt falco module for kernel 5.6.19-300.fc32.x86_64, if present
* Trying to download prebuilt module from https://dl.bintray.com/falcosecurity/driver/ae104eb20ff0198a5dcb0c91cc36c86e7c3f25c7/falco_fedora_5.6.19-300.fc32.x86_64_1.ko
curl: (22) The requested URL returned error: 404 Not Found
Download failed, consider compiling your own falco module and loading it or getting in touch with the Falco community
Sat Sep 19 08:05:25 2020: Falco version 0.25.0 (driver version ae104eb20ff0198a5dcb0c91cc36c86e7c3f25c7)
Sat Sep 19 08:05:25 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Sat Sep 19 08:05:25 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Sat Sep 19 08:05:26 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Sat Sep 19 08:05:26 2020: Unable to load the driver.
Sat Sep 19 08:05:26 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

How to reproduce it

falco chart:1.4.0
falco image:0.25.0

Expected behaviour
Falco pods deploy successfully on OpenShift (OKD).

Screenshots

Environment

  • Falco version: 0.25.0,0.24.0,master
  • System info:
  • Cloud provider or hardware configuration:
  • OS: Fedora CoreOS 32
  • Kernel: 5.6.19-300.fc32.x86_64
  • Installation method: falcosecurity/falco chart

Additional context

falco-exporter: Documentation about the falco gRPC socket communication

Hi there,

Unfortunately, it's unclear to me how falco-exporter is supposed to access the Falco output events. falco-exporter/README.md lists the falco.grpcUnixSocketPath configuration option, but does not say what needs to be configured on the other side, in the Falco Helm chart (falco/README.md), under the falco.grpc.* values. falco.grpc.enabled defaults to false, so my guess is that falco-exporter isn't even able to communicate with the Falco socket by default...
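For reference, enabling the gRPC output on the Falco side looks roughly like this. This is a sketch based on the chart's documented values; the exact keys and the default socket path may differ between chart versions.

```yaml
# Hypothetical values for the falco chart (keys may vary by chart version).
falco:
  grpc:
    enabled: true                                      # expose the gRPC API
    unixSocketPath: "unix:///var/run/falco/falco.sock" # assumed default path
  grpcOutput:
    enabled: true                                      # stream alerts over gRPC
```

falco-exporter's falco.grpcUnixSocketPath would then need to point at the same socket path, typically via a shared hostPath volume on each node.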

Thanks!

Regards,
Philip

falco fake event generator does not start in PSP enabled k8s cluster

Describe the bug

When enabling the fake event generator in a PSP-enabled cluster, the Pod cannot be created:

Error creating: pods "falco-event-generator-6f48d99f6f-" is forbidden: unable to validate against any pod security policy: []

How to reproduce it

Deploy the Falco Helm chart with --set fakeEventGenerator.enabled=true in a PSP-enabled k8s cluster.

Expected behaviour

The Pod gets scheduled.
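One possible workaround is to bind a permissive-enough PodSecurityPolicy to the service account the event generator runs under. This is an illustrative sketch, not the chart's actual fix: the PSP name, namespace, and service account are assumptions.

```yaml
# Hypothetical RBAC granting the event generator's service account
# "use" of an existing PodSecurityPolicy. All names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: falco-event-generator-psp
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["falco-event-generator"]  # assumed PSP name
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: falco-event-generator-psp
  namespace: falco                 # assumed release namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: falco-event-generator-psp
subjects:
  - kind: ServiceAccount
    name: default                  # assumed; the SA the generator pod uses
    namespace: falco
```

Without such a binding, the admission controller cannot validate the generator pod against any policy, which matches the error above.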

Screenshots

Environment

k8s 1.18.8 with PSPs enabled
helm v3.3.4
falco helm chart master branch

  • Falco version: 0.26.1
  • Cloud provider or hardware configuration: AWS
  • OS: Ubuntu 20.04.1
  • Kernel: 5.4.0-1024-aws
  • Installation method: helm

Additional context

[Falco 0.23.0] Falco pod is failing in kubernetes cluster due to dkms build failed

Describe the bug
I am trying to install Falco on my RHEL 7.7 k8s cluster, but the Falco pod goes into an error state due to a failed dkms build.

The prebuilt Falco module not being downloaded is expected, as there is no internet connectivity in the environment. However, on RHEL the loader looks for a module with the rhel prefix, whereas the module exists in the repository with the centos prefix.

How to reproduce it
Run the Helm chart with default values from https://github.com/falcosecurity/charts/tree/master/falco

Expected behaviour
The Falco kernel module should be built by the Falco pod, and the pod should be in a running state with Falco working.

Screenshots

kubectl logs -f falco-hgv5b -n security

* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
* Running dkms build failed, couldn't find /var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build/make.log
* Trying to load a system falco driver, if present
* Trying to find locally a prebuilt falco module for kernel 3.10.0-1062.12.1.el7.x86_64, if present
* Trying to download prebuilt module from https://dl.bintray.com/falcosecurity/driver/96bd9bc560f67742738eb7255aeb4d03046b8045/falco_rhel_3.10.0-1062.12.1.el7.x86_64_1.ko
curl: (7) Failed to connect to dl.bintray.com port 443: Connection timed out
Download failed, consider compiling your own falco module and loading it or getting in touch with the Falco community
Wed Aug 26 08:31:58 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Wed Aug 26 08:31:58 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Wed Aug 26 08:31:59 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed Aug 26 08:31:59 2020: Unable to load the driver. Exiting.
Wed Aug 26 08:31:59 2020: Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.

Environment

  • Falco version: 0.23.0
  • System info:
    falco --support | jq .system_info
    Wed Aug 26 11:44:01 2020: Falco version 0.25.0 (driver version ae104eb20ff0198a5dcb0c91cc36c86e7c3f25c7)
    Wed Aug 26 11:44:01 2020: Falco initialized with configuration file /etc/falco/falco.yaml
    Wed Aug 26 11:44:01 2020: Loading rules from file /etc/falco/falco_rules.yaml:
    Wed Aug 26 11:44:02 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
    {
    "machine": "x86_64",
    "nodename": "falco-mwgmw",
    "release": "3.10.0-1062.12.1.el7.x86_64",
    "sysname": "Linux",
    "version": "#1 SMP Thu Dec 12 06:44:49 EST 2019"
    }
  • Cloud provider or hardware configuration:
  • OS:
    cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 7.7 (Maipo)
  • Kernel:
    uname -a
    Linux m1-kms0001.mgmt.oiaas 3.10.0-1062.12.1.el7.x86_64 #1 SMP Thu Dec 12 06:44:49 EST 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method:
    Kubernetes

Additional context

Create helm repository

There is a procedure for creating an index and helm repository for our charts. We should do it.

  • Create Repository and Assets
  • PR remove from helm/charts
  • PR in repository to helm/hub
  • Setup chart testing tool
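For context, the index that helm repo index generates for the published charts looks roughly like this. The entries below are illustrative placeholders, not the real index; versions, digest, and timestamp are made up.

```yaml
# Illustrative excerpt of a generated index.yaml (all values are made up).
apiVersion: v1
entries:
  falco:
    - apiVersion: v1
      name: falco
      version: 1.0.0          # chart version, bumped on each release
      appVersion: 0.24.0      # the Falco version the chart deploys
      urls:
        - https://falcosecurity.github.io/charts/falco-1.0.0.tgz
      digest: 0000000000000000000000000000000000000000000000000000000000000000
generated: "2020-01-01T00:00:00Z"
```

Serving this file from the gh-pages branch is what lets helm repo add resolve chart versions and download the packaged .tgz artifacts.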
