
Plugins


Note: The plugin system is a new feature introduced in Falco 0.31.0. You can find more details in the original proposal document.

This repository contains the Plugin Registry and the plugins officially maintained by the Falcosecurity organization. Plugins can be used to extend Falco and other applications built on the Falcosecurity libs. Please refer to the official documentation to better understand the plugin system's concepts and architecture.

Plugin Registry

The Registry contains metadata and information about every plugin known and recognized by the Falcosecurity organization. It lists plugins hosted either in this repository or in other repositories. These plugins are developed for Falco and made available to the community. Check out the sections below to learn how to register your plugins and to see the plugins currently contained in the registry.

Registering a new Plugin

Registering your plugin in the registry helps ensure that some technical constraints are respected, such as a given ID being used by exactly one plugin with event sourcing capability, and it allows plugin authors to coordinate around event source formats. Moreover, it is a great way to share your plugin project with the community and engage with it, thus gaining new users and increasing its visibility. We encourage you to register your plugin in this registry before publishing it. You can add your plugin to the registry regardless of where its source code is hosted (there's a url field for this specifically).

The registration process involves adding an entry about your plugin inside the registry.yaml file by creating a Pull Request in this repository. Please be mindful of a few constraints that are automatically checked and required for your plugin to be accepted:

  • The name field is mandatory and must be unique across all the plugins in the registry
  • (Sourcing Capability Only) The id field is mandatory and must be unique in the registry across all the plugins with event source capability
  • The plugin name must match this regular expression: ^[a-z]+[a-z0-9-_\-]*$ (however, it's not recommended to use _ in the name, unless you are trying to match the name of a source or have other particular reasons)
  • The source field (Sourcing Capability Only) and each entry of sources (Extraction Capability Only) must match this regular expression: ^[a-z]+[a-z0-9_]*$
  • The url field should point to the plugin source code
  • The rules_url field should point to the default ruleset, if any

For reference, here's an example of an entry for a plugin with both event sourcing and field extraction capabilities:

- name: k8saudit
  description: ...
  authors: ...
  contact: ...
  maintainers:
    - name: The Falco Authors
      email: [email protected]
  keywords:
    - audit
    - audit-log
    - audit-events
    - kubernetes
  url: https://github.com/falcosecurity/plugins/tree/main/plugins/k8saudit
  rules_url: https://github.com/falcosecurity/plugins/tree/main/plugins/k8saudit/rules
  license: ...
  capabilities:
    sourcing:
      supported: true
      id: 1
      source: k8s_audit
    extraction:
      supported: true

You can find the full registry specification here: (coming soon...)

Registered Plugins

The table below lists all the plugins currently registered. It is automatically generated from the registry.yaml file.

| Name | Capabilities | Description | Authors | License |
|------|--------------|-------------|---------|---------|
| plugin-id-zero-value | Event Sourcing (ID: 0) | This ID is reserved for particular purposes and cannot be registered. A plugin author should not use this ID unless specified by the documentation. | N/A | N/A |
| k8saudit | Event Sourcing (ID: 1, source: k8s_audit); Field Extraction (k8s_audit) | Read Kubernetes Audit Events and monitor Kubernetes Clusters | The Falco Authors | Apache-2.0 |
| cloudtrail | Event Sourcing (ID: 2, source: aws_cloudtrail); Field Extraction (aws_cloudtrail) | Reads Cloudtrail JSON logs from files/S3 and injects as events | The Falco Authors | Apache-2.0 |
| json | Field Extraction (all sources) | Extract values from any JSON payload | The Falco Authors | Apache-2.0 |
| dummy | Event Sourcing (ID: 3, source: dummy); Field Extraction (dummy) | Reference plugin used to document interface | The Falco Authors | Apache-2.0 |
| dummy_c | Event Sourcing (ID: 4, source: dummy_c); Field Extraction (dummy_c) | Like dummy, but written in C++ | The Falco Authors | Apache-2.0 |
| docker | Event Sourcing (ID: 5, source: docker); Field Extraction (docker) | Docker Events | Thomas Labarussias | Apache-2.0 |
| seccompagent | Event Sourcing (ID: 6, source: seccompagent); Field Extraction (seccompagent) | Seccomp Agent Events | Alban Crequy | Apache-2.0 |
| okta | Event Sourcing (ID: 7, source: okta); Field Extraction (okta) | Okta Log Events | The Falco Authors | Apache-2.0 |
| github | Event Sourcing (ID: 8, source: github); Field Extraction (github) | GitHub Webhook Events | The Falco Authors | Apache-2.0 |
| k8saudit-eks | Event Sourcing (ID: 9, source: k8s_audit); Field Extraction (k8s_audit) | Read Kubernetes Audit Events from AWS EKS Clusters | The Falco Authors | Apache-2.0 |
| nomad | Event Sourcing (ID: 10, source: nomad); Field Extraction (nomad) | Read Hashicorp Nomad Events Stream | Alberto Llamas | Apache-2.0 |
| dnscollector | Event Sourcing (ID: 11, source: dnscollector); Field Extraction (dnscollector) | DNS Collector Events | Daniel Moloney | Apache-2.0 |
| gcpaudit | Event Sourcing (ID: 12, source: gcp_auditlog); Field Extraction (gcp_auditlog) | Read GCP Audit Logs | The Falco Authors | Apache-2.0 |
| syslogsrv | Event Sourcing (ID: 13, source: syslogsrv); Field Extraction (syslogsrv) | Syslog Server Events | Maksim Nabokikh | Apache-2.0 |
| salesforce | Event Sourcing (ID: 14, source: salesforce); Field Extraction (salesforce) | Falco plugin providing basic runtime threat detection and audit logging for Salesforce | Andy | Apache-2.0 |
| box | Event Sourcing (ID: 15, source: box); Field Extraction (box) | Falco plugin providing basic runtime threat detection and audit logging for Box | Andy | Apache-2.0 |
| test | Event Sourcing (ID: 999, source: test) | This ID is reserved for source plugin development. Any plugin author can use this ID, but should expect to encounter events from other developers using the same ID. After development is complete, the author should request an actual ID. | N/A | N/A |
| k8smeta | Field Extraction (syscall) | Enrich Falco syscall flow with Kubernetes Metadata | The Falco Authors | Apache-2.0 |
| k8saudit-gke | Event Sourcing (ID: 16, source: k8s_audit); Field Extraction (k8s_audit) | Read Kubernetes Audit Events from GKE Clusters | The Falco Authors | Apache-2.0 |
| journald | Event Sourcing (ID: 17, source: journal); Field Extraction (journal) | Read Journald events into Falco | Grzegorz Nosek | Apache-2.0 |
| kafka | Event Sourcing (ID: 18, source: kafka) | Read events from Kafka topics into Falco | Hunter Madison | Apache-2.0 |
| gitlab | Event Sourcing (ID: 19, source: gitlab); Field Extraction (gitlab) | Falco plugin providing basic runtime threat detection and audit logging for GitLab | Andy | Apache-2.0 |

Hosted Plugins

Another purpose of this repository is to host and maintain the plugins owned by the Falcosecurity organization. Each plugin is a standalone project and has its own directory, and they are all inside the plugins folder.

The main branch contains the most up-to-date state of development, and each plugin is regularly released. Please check our Release Process to know how plugins are released and how artifacts are distributed. Dev builds are published each time a Pull Request gets merged into main, whereas stable builds are released and published only when a new release gets tagged. You can find the published artifacts at https://download.falco.org/?prefix=plugins.

If you wish to contribute your plugin to the Falcosecurity organization, you just need to open a Pull Request to add it inside the plugins folder and to add it inside the registry. In order to be hosted in this repository, plugins must be licensed under the Apache 2.0 License.

Contributing

If you want to help and wish to contribute, please review our contribution guidelines. Code contributions are always encouraged and welcome!

License

This project is licensed to you under the Apache 2.0 Open Source License.


plugins' Issues

Issue adding source IP and arn from EKS Cloudwatch logs

Hi,

I would like to suggest automatically including the ARN and source IP address in the logs when Falco sends security events to CloudWatch. Additionally, if possible, change existing rules, for example to detect any attempt to attach/exec to a pod. The rule should meet the following conditions:

- rule: Attach/Exec Pod
  desc: >
    Detect any attempt to attach/exec to a pod
  condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (exec,attach) and not user_known_exec_pod_activities
  output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command])
  priority: NOTICE
  source: k8s_audit
  tags: [k8s]

This is what I had tried. Adding arn=%ka.user.extra[0] to the output, I got the error:

LOAD_ERR_COMPILE_OUTPUT (Error compiling output): invalid formatting token ka.target.extra.arn[0])

For the source IP, I tried adding src_ip=%ka.target.sourceIPs and got the error:

LOAD_ERR_COMPILE_OUTPUT (Error compiling output): invalid formatting token ka.target.sourceIPs)
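Since the json plugin listed in the registry above extracts from all sources, one possible workaround (assuming the json plugin is loaded alongside the k8saudit/k8saudit-eks plugin, and that the raw audit payload carries the sourceIPs and user.extra keys of the Kubernetes audit schema) is to address the payload with JSON pointers instead of ka.* fields, e.g.:

output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name src_ip=%json.value[/sourceIPs/0] arn=%json.value[/user/extra/arn/0])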

Additional context

Currently, Falco sends logs to CloudWatch without including the ARN and source IP address. By automatically including this information in the logs, Falco can improve the efficiency and effectiveness of incident response. Additionally, the new rule or existing rules for detecting attempts to attach/exec to a pod will provide additional security visibility and help prevent potential security threats.

Typo in the README.md

Describe the bug

The third point of the Registering a new Plugin section in README.md has a minor typo in the word reccomended. It should be changed to recommended (reccomended --> recommended).
Can I make a quick pull request to fix this issue?

How to reproduce it

Expected behaviour

Screenshots

Environment

  • Falco version:
  • System info:
  • Cloud provider or hardware configuration:
  • OS:
  • Kernel:
  • Installation method:

Additional context

Add a plugin for Azure AKS k8_audit

Motivation

Just like in AWS I want to be able to monitor my k8s audit logs in Azure.

Feature

An implementation of reading k8s_audit logs in AKS through Log Analytics Workspace.

Alternatives

AKS also supports sending logs directly to a storage account and an event hub.

But to keep the initial offering as similar to AWS as possible, I think starting with Log Analytics Workspace is a good idea.

Additional context

How to manage the Azure Diagnostic resources through terraform:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/monitor_diagnostic_setting

deprecated health_endpoints in k8_audit_rules in latest (19ab9e5) commit.

Describe the bug

According to https://kubernetes.io/docs/reference/using-api/health-checks/
/healthz is deprecated and /livez and /readyz should be used.

Currently: https://github.com/falcosecurity/falco/blob/master/rules/k8s_audit_rules.yaml lists

- macro: health_endpoint
  condition: ka.uri=/healthz

How to reproduce it

Expected behaviour

/livez and /readyz are included in the list, or are used instead of current.
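For clarity, a minimal sketch of the requested change, reusing the macro style of the existing rule file (the live_endpoint and ready_endpoint names mirror the macros quoted in the next issue below):

- macro: health_endpoint
  condition: ka.uri=/healthz
- macro: live_endpoint
  condition: ka.uri=/livez
- macro: ready_endpoint
  condition: ka.uri=/readyz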

Screenshots

Environment

.

k8saudit: Adjust healthcheck macros to include query parameters

Motivation

The included rule "Anonymous Request Allowed" excludes requests made to healthcheck endpoints. But the exception does not cover requests made to healthcheck endpoints with query parameters like ?exclude=kms-provider-0. This leads to a lot of findings.

Feature

Add exception for calls that start with /healthz? and so on.
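A minimal sketch of what that could look like, assuming Falco's startswith operator is used on ka.uri:

- macro: health_endpoint
  condition: ka.uri=/healthz or ka.uri startswith "/healthz?"
- macro: live_endpoint
  condition: ka.uri=/livez or ka.uri startswith "/livez?"
- macro: ready_endpoint
  condition: ka.uri=/readyz or ka.uri startswith "/readyz?"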

Alternatives

Do it locally only.

Additional context

Here are the macro definitions:

- macro: health_endpoint
  condition: ka.uri=/healthz
- macro: live_endpoint
  condition: ka.uri=/livez
- macro: ready_endpoint
  condition: ka.uri=/readyz

Here is the only usage of the macros:

- rule: Anonymous Request Allowed
  desc: >
    Detect any request made by the anonymous user that was allowed
  condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
  output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

Here is documentation of endpoints: https://kubernetes.io/docs/reference/using-api/health-checks/

k8s plugins should detect portforwards as well as execs

Motivation

Currently the various example k8s alerting rules don't include detections for port-forwards.
Similar to the exec rule(s) there should be an example rule for port-forwards because that actor could be sidestepping various network segmentation/auth steps.

Feature
Add port-forward detection, which should be almost identical to the exec detections
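A sketch of such a rule, modeled on the Attach/Exec Pod rule quoted in the first issue on this page (the user_known_port_forward_activities exception macro is hypothetical):

- rule: Port-Forward Pod
  desc: >
    Detect any attempt to port-forward to a pod
  condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (portforward) and not user_known_port_forward_activities
  output: Port-forward to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource)
  priority: NOTICE
  source: k8s_audit
  tags: [k8s]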

Alternatives

Additional context

k8s-audit: Add custom fields to audit events

Motivation

Instead of running a falco + falcosidekick instance for each of our self-hosted k8s clusters to add extra metadata like the k8s cluster name to the events, I would prefer to run a central falco cluster capable of ingesting k8s audit events from all of our clusters. Maintaining a per cluster falco instance becomes cumbersome.

Feature

Some way to add additional/extra/custom fields to each k8s audit event, just like falcosidekick can do, based on metadata sent with the request to the k8saudit plugin. As we are limited by what the k8s webhook audit backend can do, I am thinking of using HTTP headers to supply additional metadata. This setup allows running a 'centralised' Falco (cluster) ingesting k8s audit events from multiple k8s clusters without losing, for example, the origin k8s cluster name.

Hopefully the custom fields added by the k8saudit plugin can be picked up by the solution for falco #2127

Alternatives

Currently running a falco instance for each k8s cluster, with a falcosidekick instance next to each falco instance. Falco ingests the k8s audit events, and pushes them to the falcosidekick instance. This falcosidekick instance adds the additional cluster specific labels before forwarding events to another falcosidekick instance used for falcosidekick ui and forwarding to another system. This setup becomes quite complex to maintain.

Additional context

As described in falco #2289:

My goal is to run a central falco (cluster) ingesting k8saudit events (or other cloud based events supported by falco plugins). A k8saudit event though misses important metadata to be able to trace the event back to the cluster (falcosecurity/falco#1704), like a cluster name/id.

The only way I see to do this, is by using the client authentication to provide additional metadata. A (reverse)proxy/ingress controller is responsible for handling authentication and pass additional metadata based on the authentication to falco by setting additional http headers. This is usually a pretty standard feature of a proxy (e.g. X-Forward-For headers). For example we can use mtls with k8s cluster specific client certs to authenticate. The proxy can forward the certificate subject field to the k8saudit plugin, and we can encode the cluster name in the subject for example.

I am looking into extending the k8saudit plugin with support to grab additional fields from http headers. Passing these values along the plugin event processing pipeline gets a bit ugly though. Only option seems to be altering the event from the http body with the header metadata. This unfortunately seems to mean the field names are fixed, as required by the extractor part.
Last challenge is to actually get the extra fields in the output, apart from editing all rules, the -p options seems atm the only way (related falcosecurity/falco#1704 (comment)). See falcosecurity/falco#2127 as well for additional output fields.
Downside of this approach is that each plugin requires a dedicated falco cluster, as the -p flag (with %field markers) is specific to each plugin. Falco validates the -p argument very early on, even before excluding rules with the -T/-t flags. E.g. adding a '%ka.auditid' with -p fails even if you only enable k8s rules (-t k8s).

Another downside is that the extra metadata is added to the output field. A better alternative is to only add these fields to the 'output_fields', not the 'output'. Just like falcosidekick allows to set 'custom fields', which are only added to the 'output_fields'.
Furthermore it would be ideal to be able to add 'custom' named fields to the output_fields of the alert. The additional fields a user wants to set, based on authentication/headers, is completely up to the user. Especially if you are forwarding the alerts with falcosidekick to other tools, the extra output_fields basically become 'labels' standardised across the organisation. E.g. alerts are forwarded as log lines to a log aggregation system. Each log line has a number of fixed labels so its origin can easily be found. The kubernetes cluster of origin is just one possibility, but one could add labels like division, project or team.

Support for EC2 Instance Profiles and/or IAM Roles when running Falco on an EC2 instance to read Cloudtrail logs from S3.

Motivation
Access Keys are not the most secure way of granting an entity access to AWS resources, because they are pretty much like a username and password that can be leaked. Also, to keep an AWS account secure, we have to rotate these access keys to prevent them from getting stored in old systems and passed forward to someone.

Feature
When running Falco on an EC2 instance, for example, I'd like to have an Instance Profile based on a secure Role to interact with my S3 bucket when reading Cloudtrail logs from it.
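For reference, the AWS SDK for Go resolves credentials through its default chain (environment variables, shared credentials file, then the EC2 instance profile via the metadata service), so a session created without static keys would pick up an instance profile automatically. A minimal sketch, assuming aws-sdk-go v1 and a placeholder bucket name:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// No static credentials: the SDK falls back to the default chain,
	// which includes the EC2 instance profile (IAM role).
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
	if err != nil {
		panic(err)
	}
	svc := s3.New(sess)
	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket: aws.String("my-cloudtrail-bucket"), // placeholder
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("objects found:", len(out.Contents))
}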

Alternatives
N/A

Additional context
N/A

Alerts of a rule A depend on the condition of a rule other than A

Describe the bug

The condition of the rule List Buckets determines whether alerts from other rules (for example, Create AWS user, Create Group, etc.) are displayed.

How to reproduce it

  1. Configure Falco as explained in #56.
  2. You can use the aws_cloudtrail logs file here.
  3. Include enable: true in all the rules.
  4. Modify the rule condition of the rule List Buckets
    to ct.name="ListBuckets" instead of ct.name="ListBuckets" and not ct.error exists.

Expected behaviour

The alert of a rule A should depend only on the condition of rule A, and not on the conditions of rules other than A.

Screenshots
The next screenshot displays two consecutive executions of falco. The only difference between them is that the condition of the rule List Buckets is just ct.name="ListBuckets" in the second one. In the first one I am using the original one: ct.name="ListBuckets" and not ct.error exists.


Environment

  • Falco version:
Falco version: 0.30.0-155+2f82a9b
Driver version: 319368f1ad778691164d33d59945e00c5752cd27
  • System info:
Wed Jan 26 09:17:48 2022: Falco version 0.30.0-155+2f82a9b (driver version 319368f1ad778691164d33d59945e00c5752cd27)
Wed Jan 26 09:17:48 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Wed Jan 26 09:17:48 2022: Loading plugin (cloudtrail) from file /usr/share/falco/plugins/libcloudtrail.so
Wed Jan 26 09:17:48 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Wed Jan 26 09:17:48 2022: Loading rules from file /etc/falco/aws_cloudtrail_rules.yaml:
{
  "machine": "x86_64",
  "nodename": "host",
  "release": "5.11.0-1021-gcp",
  "sysname": "Linux",
  "version": "#23~20.04.1-Ubuntu SMP Fri Oct 1 19:04:32 UTC 2021"
}
  • Cloud provider or hardware configuration:
  • OS:
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel:
Linux host 5.11.0-1021-gcp #23~20.04.1-Ubuntu SMP Fri Oct 1 19:04:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method:
dpkg DEB

Additional context

I can provide a VM with the same environment I am using to test this.

GCP-Plugin Donation

Motivation

The GCP Audit Logs Plugin's primary purpose is to detect security threats, vulnerabilities, and compliance risks by analyzing the ingested GCP audit logs. The default security detection rules were built with the MITRE ATT&CK framework in mind, which provides a comprehensive and industry-standard way to identify and classify different types of security threats.

The GCP Audit Logs Plugin can help security teams identify and respond to security incidents quickly, improve compliance posture, and reduce overall risk to the organization. It provides a comprehensive and centralized view of security events across multiple GCP services and can help detect and prevent unauthorized access, data exfiltration, and other types of malicious activity.

Feature

By leveraging GCP audit logs, the GCP Audit Logs Plugin provides deep insights into the activities of different users, services, and resources in your GCP environment. The GCP Audit Logs Plugin's advanced ebpf capabilities enable it to identify anomalous activities and raise alerts when it detects suspicious or malicious behavior.

The GCP Audit Logs Plugin also offers customizable detection rules that enable you to fine-tune the detection capabilities to suit your organization's specific needs. You can customize the rules to detect specific types of security threats, monitor specific users or services, and track specific resources or data types.

Additional context

The GCP Audit Logs Plugin comes with pre-built security detection rules designed to detect security threats based on the MITRE ATT&CK framework. These rules are constantly updated to ensure that the security agent is always detecting the latest threats and vulnerabilities.

The default security detection rules cover the following areas:

  • Identity and Access Management (IAM)
  • Network Security
  • Data Security
  • Compliance
  • Infrastructure Security
  • Cloud Service Providers

The GCP Audit Logs Plugin's detection rules can identify threats such as:

  • Privilege escalation
  • Unauthorized access
  • Data exfiltration
  • Denial of Service (DoS) attacks
  • Insider threats
  • Suspicious network activity

Missing comma in rules list leading to concatenation

Describe the bug

The rules file k8s_audit_rules.yaml for the plugin k8saudit contains the following entry:

- list: falco_hostnetwork_images
  items: [
    gcr.io/google-containers/prometheus-to-sd,
    gcr.io/projectcalico-org/typha,
    gcr.io/projectcalico-org/node,
    gke.gcr.io/gke-metadata-server,
    gke.gcr.io/kube-proxy,
    gke.gcr.io/netd-amd64,
    k8s.gcr.io/ip-masq-agent-amd64
    k8s.gcr.io/prometheus-to-sd,
    ]


Note the missing comma. This leads to the following parsing:

items:
  - gcr.io/google-containers/prometheus-to-sd
  - gcr.io/projectcalico-org/typha
  - gcr.io/projectcalico-org/node
  - gke.gcr.io/gke-metadata-server
  - gke.gcr.io/kube-proxy
  - gke.gcr.io/netd-amd64
  - k8s.gcr.io/ip-masq-agent-amd64 k8s.gcr.io/prometheus-to-sd
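The fix is just the missing comma after the ip-masq-agent-amd64 entry:

    k8s.gcr.io/ip-masq-agent-amd64,
    k8s.gcr.io/prometheus-to-sd,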

How to reproduce it

Try it out in any generic YAML.

Feature Request: Let K8saudit plugin watch/tail file and parse new lines.

Motivation

As of now the plugin reads the given filepath (a file, or the files in a directory), parses it to create alerts, and stops there.
To align this more with the functionality of Falco and make the filepath option of the plugin more viable, I would like to request that the k8saudit plugin watch the given file(s) and parse newly written k8saudit lines to generate alerts.
This would allow us to use both Falco's syscall parsing and k8saudit parsing, without the need to change the API-server configuration or add another webhook in the cluster.

Feature

  • Let k8saudit plugin tail/watch a given file for new entries and parse those new lines when k8saudit logs events.
  • Let plugin check if current watched/tailed file is rotated (example: close file and reopen file with the name given in filepath if no new event has been generated for X-amount of time)

Additional context

This feature should probably be built into the source.go file of the k8saudit plugin.
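For illustration, a rough sketch of the polling-style tail this would need, including the rotation check suggested above (not the plugin's actual code; the path and poll interval are placeholders):

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"time"
)

// tailFile emits complete lines from path, polling for new data and
// reopening the file if it shrinks (a crude rotation/truncation check).
func tailFile(path string, lines chan<- string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	r := bufio.NewReader(f)
	var offset int64   // bytes consumed so far
	var pending string // partial line carried across EOFs
	for {
		chunk, err := r.ReadString('\n')
		offset += int64(len(chunk))
		pending += chunk
		if err == io.EOF {
			time.Sleep(time.Second)
			if st, serr := os.Stat(path); serr == nil && st.Size() < offset {
				// File was rotated or truncated: reopen and start over.
				f.Close()
				if f, err = os.Open(path); err != nil {
					return err
				}
				r = bufio.NewReader(f)
				offset, pending = 0, ""
			}
			continue
		}
		if err != nil {
			return err
		}
		lines <- pending // a full line, newline included
		pending = ""
	}
}

func main() {
	ch := make(chan string)
	go func() {
		if err := tailFile("/var/log/k8saudit/audit.log", ch); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}()
	for l := range ch {
		fmt.Print(l)
	}
}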


k8saudit plugin - Log File Backend, recursively check the directory

Describe the bug

The k8s audit plugin doesn't recursively search the directory for log files.

How to reproduce it

For retention purposes, configure k8s to save audit logs as files. Modify the values.yaml to mount the volume from the host to /var/log/k8saudit on the Falco containers. Configure the plugin's open_params to point to the container audit directory.

Expected behaviour

According to the Falco book (O'Reilly, 2022, page 153), for log files the config should use ">":

open_params: >
  /file/path

The greater-than (>) sign adds a trailing newline (\n), which is an invalid symbol for the Golang URL parser. It worked when I just typed the URL on the same line as the key.
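For reference, > is a YAML folded block scalar and keeps a trailing newline; writing the value inline, or using the strip variant >-, avoids the problem:

open_params: /var/log/k8saudit

or, keeping the block scalar:

open_params: >-
  /var/log/k8saudit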

Falco opens the directory, recursively searches the contents and parses each audit file it finds.

Evidence

pod/falco-fusion-falco-66b76889c4-47lzk
Tue Oct 18 08:08:00 2022: Falco version 0.32.2
Tue Oct 18 08:08:00 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Tue Oct 18 08:08:00 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so
Tue Oct 18 08:08:00 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Tue Oct 18 08:08:00 2022: Configured rules filenames:
Tue Oct 18 08:08:00 2022:    /etc/falco/falco_rules.yaml
Tue Oct 18 08:08:00 2022:    /etc/falco/falco_rules.local.yaml
Tue Oct 18 08:08:00 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Tue Oct 18 08:08:00 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Tue Oct 18 08:08:00 2022: Watching /etc/falco/falco.yaml
Tue Oct 18 08:08:00 2022: Watching /etc/falco/falco_rules.yaml.
Tue Oct 18 08:08:00 2022: Watching /etc/falco/falco_rules.local.yaml.
Tue Oct 18 08:08:00 2022: Starting internal webserver, listening on port 8765
Events detected: 0
Rule counts by severity:
Triggered rules by rule name:
Syscall event drop monitoring:
   - event drop detected: 0 occurrences
   - num times actions taken: 0
Error: read /var/log/k8saudit: is a directory

Environment

  • Falco version:
    0.32.2
  • System info:
    23~20.04.2-Ubuntu SMP
  • Cloud provider or hardware configuration:
  • OS:
    NAME="Ubuntu"
    VERSION="20.04.5 LTS (Focal Fossa)"
  • Kernel:
    Linux testlab-0 5.15.0-1017-gcp 23~20.04.2-Ubuntu SMP x86_64
  • Installation method:
    Helm chart
    Additional context
    None

improvements to the plugin registry

Motivation

I see big potential in the plugin registry, but I think we might want to elaborate more on what it really is and how we should implement it. Currently, what we do is fill a markdown document with the publicly known plugins, which also helps to track the list of assigned unique source IDs. I think we might want to further elaborate this concept, clarify some points, and reach a more sophisticated level.

Feature

This is not a real feature request, but instead I'm going to list some points and concerns that I would like to have a debate on:

  1. Clarify which plugins stay in the registry, and which do not. I think we need to formalize what a "known plugin" is. Are those the official ones, supported/maintained by the falcosecurity organization? Are these the ones that are distributed with the Falco binaries (if not, which are bundled in the distributions and which are not)? Are these only open source, or do we also list proprietary plugins? Does this include only plugins in this repo, or also external ones?
  2. Is this the official source ID knowledge base? Source plugins should not have overlapping IDs, and we need a way for developers to reserve those IDs. Is this the official place where those are collected? If so, how does the assignment process happen (where and how do we formally document it)? Does this need to happen for proprietary plugins too, and how? Do we have similar needs for the other plugin types (e.g. extractor)?
  3. Clarify which information is collected for each plugin. So far, for each plugin we collect Name, Description, Contact, Event ID, Event Source, Extract Event Sources. These are basically a portion of the info that plugins return to the plugin framework. Do we want more than this? What are the formal differences between the source and extractor plugin info?
  4. Switch to a machine-readable format. I like the idea of tracking the plugins in a single location, but I don't think an MD document is the way to go. I would prefer having some machine-processable format (even a simple JSON would suffice) from which we could generate richer documents automatically, and that can be used in tools and CI workflows.
  5. Publish this on the Falco website. I think this should have a dedicated page on the official Falco website, so that users can quickly consult which plugins are available.

Alternatives

Additional context

Using different values of GODEBUG=cgocheck for dev and release builds

Motivation

As per cgo documentation, complete checking of pointer handling can have some cost at run time. For this reason, it would be nice to set the value of cgocheck on a per-build basis.

Feature

Improve the building system in order to use:

  • GODEBUG=cgocheck=2 for development builds (a.k.a. debug release, usually published when a PR gets merged into master)
  • GODEBUG=cgocheck=0 for production builds (official distribution artifacts, published when tagging a release)

Alternatives

The do-nothing alternative? 😸

Additional context

Consider making a falco-plugin-scaffolding repository

Motivation

The idea is inspired by the terraform-provider-scaffolding project, which makes it easier to create custom providers from scratch by using a template project. In this project, we have the plugins folder to demonstrate how we can create custom plugins from scratch.

We (w/ @developer-guy @erkanzileli) are proposing to create a brand-new, batteries-included public template repository for developers who do not want to do chore things.

Feature

Since plugins are mostly written in Go, I'll go with Go here:

To provide a batteries-included project:

  • .gitignore
  • .goreleaser.yml
  • .golangci-lint.yml
  • docs/
  • .github/
    • dependabot
    • ISSUE + PR Templates
    • workflows: test + build
  • CONTRIBUTING.md
  • OWNERS
  • LICENSE
  • just or make file
    • build: go build (ldflags for version)
    • test: unit + integrations
    • fmt: format checking
    • lint: linters

Alternatives

If we don't want to complicate simple things, the plugins folder is just fine to get started. This is just an idea.

Additional context

-

Plugins not compiling with minimum defined go version in go.mod

Describe the bug

The minimum required go version in many plugin go.mod files is 1.15. If you actually try to compile the source with go 1.15 you get a compile error:

Error: /home/runner/go/pkg/mod/github.com/alecthomas/[email protected]/comment_extractor.go:5:2: package io/fs is not in GOROOT 

This io/fs package requires at least go 1.16. The CI workflows build with go 1.18 atm.

How to reproduce it

Try to build with the minimum required version 1.15 defined in go.mod, e.g. the github plugin.

Expected behaviour

Successful build
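The fix is a one-liner per affected module, e.g.:

go mod edit -go=1.16

(run inside the plugin directory, or bump the go directive in go.mod by hand).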

Screenshots

Environment

  • Falco version:
  • System info:
  • Cloud provider or hardware configuration:
  • OS:
  • Kernel:
  • Installation method:

Additional context

Add signatures to the generated index

Motivation

When we release plugins we can now add signatures and check them (see example: falcosecurity/falcoctl#305)
Now we need to make sure that the index file we generate contains the signature section accordingly.

Note that for plugins, the index file could in the future also contain plugins that are not generated from this repository, so we may want to rely on those trusted third-party plugin authors to supply that signature and add it to this repo.

Use 'syscall' source in extractor plugin

Motivation
I want to make a custom extractor plugin to extend some fields in the syscall events, such as file stats.
So can I use the 'syscall' source in my custom extractor plugin?
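For what it's worth, the k8smeta plugin in the registry above already extracts from the syscall source, so this is possible. With the Go SDK, the sources an extractor consumes are declared in the plugin's Info; a minimal sketch (treat the struct fields as illustrative of plugin-sdk-go at the time of writing):

package myextractor

import (
	"github.com/falcosecurity/plugin-sdk-go/pkg/sdk"
	"github.com/falcosecurity/plugin-sdk-go/pkg/sdk/plugins"
)

type MyPlugin struct {
	plugins.BasePlugin
}

func (p *MyPlugin) Info() *plugins.Info {
	return &plugins.Info{
		Name:        "myextractor",
		Description: "extract extra fields (e.g. file stats) from syscall events",
		Version:     "0.1.0",
		// Declaring "syscall" here asks the framework to route
		// syscall events to this extractor.
		ExtractEventSources: []string{"syscall"},
	}
}

func (p *MyPlugin) Fields() []sdk.FieldEntry {
	return []sdk.FieldEntry{
		{Type: "string", Name: "myextractor.filestat", Desc: "example field"},
	}
}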

Feature

Alternatives

Additional context

[UMBRELLA] Requested Plugins

In January 2022, Falco introduced its first version of a Plugin framework to extend its available inputs. The framework has been enhanced in the following months to have something production ready for adopters.

Existing Plugins

We, the maintainers of Falco, created a bunch of Plugins to replace deprecated features (k8saudit) or to follow high-profile security events (the Okta breach).

Right now, we have registered (excluding dummy plugins)

SDK

To make the development of plugins easier, two SDKs are provided: Go and C++.
We can notice that all plugins so far have been written in Go, which can be explained by several factors:

  • Go is easier than C++
  • It's a common language in web development, and so in adopters' infras
  • Falco's ecosystem already embeds different Go codebases (Falcosidekick, Falcosidekick-UI, Falcoctl, Driverkit, Falco-exporter, Event-generator)

Libs

Writing a plugin from scratch can be complicated for contributors, which is why we could also provide libraries to keep them focused on the extraction logic and not the asides (auth, polling, creating a web server, etc.). The main goal of these libs is to avoid duplicated code across plugins, keeping uniformity.

This approach has been started with 2 libs for AWS:

  • AWS Session: makes it easy to create a session for the AWS API
  • AWS Cloudwatch: allows setting filters and starting the polling of log entries from Cloudwatch

To โ€œopenโ€ Falco to more sources, we could create shared libs for generic usages:

  • Web Server: to collect JSON webhook payloads (see the sketch after this list)
  • File reader: to follow new entries in a file
  • Kafka: to be a consumer of a topic
  • SQS: to poll a queue
  • RabbitMQ: to be a consumer of an exchange
  • MQTT: for IoT
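As an illustration of how small the web-server helper could be, a sketch (not an existing library; the path and port are placeholders):

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// serveWebhook decodes each JSON POST body and hands the payload to a
// callback, which a plugin could turn into events.
func serveWebhook(addr string, handle func(payload map[string]any)) error {
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		var payload map[string]any
		if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		handle(payload)
		w.WriteHeader(http.StatusAccepted)
	})
	return http.ListenAndServe(addr, nil)
}

func main() {
	log.Fatal(serveWebhook(":9765", func(p map[string]any) {
		log.Printf("received event: %v", p)
	}))
}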

We also need to address the most common Cloud Providers and their specific log aggregator systems with the basic functions which are:

  • Authentication to the API
  • Creation of a client for the log service
  • Gathering of logs
  • Looping over the results

By providing these libs, it will be easier for developers to create new plugins for specific usages with these Cloud Providers.

Plugins

The purpose of this issue is to list the requested plugins by the community, the volunteers to develop them and their statuses.

The following table will be kept updated to save people from searching through N issues.

| Plugin | Description | Issue # | Developer | Repo URL | Status |
|--------|-------------|---------|-----------|----------|--------|
| k8saudit-gke | Collect K8S Audit Logs from GKE | | @sboschman | | Completed |
| k8saudit-aks | Collect K8S Audit Logs from AKS | #123 | @NissesSenap | | In Progress |
| k8saudit-openshift | Collect K8S Audit Logs from OpenShift | | | | Requested |
| redshift | Collect Audit Logs from Redshift | #117 | | | Requested |
| slack | Collect Audit Logs from Slack | | | | Requested |
| k8saudit-admission | Collect Audit Logs from K8S Control Plane through an admission controller | | @RichardoC | https://github.com/RichardoC/k8sadmission | In Progress |


empty string with double quotes in aws_cloudtrail event `""`

Describe the bug

In an aws_cloudtrail event, the value "" is not interpreted as an empty string, but literally as two double quotes.

How to reproduce it

Run one of the latest builds of Falco 0.30 (for example this one), set config file to:

plugins:
  - name: cloudtrail
    library_path: libcloudtrail.so
    init_config: ""
    open_params: "/root/cloudtrail.json"
  - name: json
    library_path: libjson.so
    init_config: ""
load_plugins: [cloudtrail,json]

and create a log file /root/cloudtrail.json with the content of this gist.

The last event in the file should trigger the rule Delete Bucket Public Access Block, but it does not. If I modify the rule condition to make it more general, so that it is just triggered by the eventName being PutBucketPublicAccessBlock, and include the following fields in the output:

    field_1=%json.value[/requestParameters/publicAccessBlock]
    field_2=%json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]
    field_3=%json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]
    field_4=%json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]
    field_5=%json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]

The alert I get contains:

23:54:15.000000000: Critical A pulic access block for a bucket has been deleted (requesting user=developer, requesting IP=24.7.51.241, AWS region=eu-central-1, bucket=fak3-name-066997976161), field_1="" field_2=true field_3=true field_4=false field_5=true

Expected behaviour

Expected behaviour when I use the original rule: an alert is displayed when I use the logs file above with the rule Delete Bucket Public Access Block. I don't think it is a problem of the rule syntax (including @seishinsha here in case it is), but of how an empty value returned by AWS is interpreted.

Screenshots

Not required, please check the code blocks above.

Environment

  • Falco version:
Falco version: 0.30.0-155+2f82a9b
Driver version: 319368f1ad778691164d33d59945e00c5752cd27
  • System info:
Wed Jan 26 09:17:48 2022: Falco version 0.30.0-155+2f82a9b (driver version 319368f1ad778691164d33d59945e00c5752cd27)
Wed Jan 26 09:17:48 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Wed Jan 26 09:17:48 2022: Loading plugin (cloudtrail) from file /usr/share/falco/plugins/libcloudtrail.so
Wed Jan 26 09:17:48 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Wed Jan 26 09:17:48 2022: Loading rules from file /etc/falco/aws_cloudtrail_rules.yaml:
{
  "machine": "x86_64",
  "nodename": "host",
  "release": "5.11.0-1021-gcp",
  "sysname": "Linux",
  "version": "#23~20.04.1-Ubuntu SMP Fri Oct 1 19:04:32 UTC 2021"
}
  • Cloud provider or hardware configuration:
  • OS:
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel:
Linux host 5.11.0-1021-gcp #23~20.04.1-Ubuntu SMP Fri Oct 1 19:04:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method:
dpkg DEB

Additional context

I can provide a VM with the same environment I am using to test this.

k8saudit extract additional fields

Motivation

When using EKS, the audit logs contain important information about the user under the user.extra field, which is not extracted by the plugin.

The fields that are currently extracted by the plugin for the user are:

user.name --> user.username
user.groups --> user.groups

Feature

We would like to add fields like user.extra to the extracted fields. Even better if we could make it dynamic or provide an easy way of extracting additional fields.

Alternatives

For now there are no alternatives to get that information on the event.

Additional context

Example of a full audit log from EKS:

{
   "kind":"Event",
   "apiVersion":"audit.k8s.io/v1",
   "level":"RequestResponse",
   "auditID":"09dddf1c-01e4-4e0f-867d-eb430659575d",
   "stage":"ResponseComplete",
   "requestURI":"/api/v1/namespaces/default/pods/example-76dc5d5df9-s6b57/portforward",
   "verb":"create",
   "user":{
      "username":"admin",
      "uid":"aws-iam-authenticator:12345678910:ASLGADSLKASDFLKJHSAFLK",
      "groups":[
         "system:masters",
         "system:authenticated"
      ],
      "extra":{
         "accessKeyId":[
            "ASLGADSLKASDFLKJHSAFLK"
         ],
         "arn":[
            "arn:aws:iam::12345678910:role/admin/ronaldo.ur"
         ],
         "canonicalArn":[
            "arn:aws:iam::12345678910:role/admin"
         ],
         "sessionName":[
            "ronaldo.ur"
         ]
      }
   },
   "sourceIPs":[
      "1.2.3.4"
   ],
   "userAgent":"kubectl/v1.23.6 (linux/amd64) kubernetes/ad33385",
   "objectRef":{
      "resource":"pods",
      "namespace":"default",
      "name":"example-76dc5d5df9-s6b57",
      "apiVersion":"v1",
      "subresource":"portforward"
   },
   "responseStatus":{
      "metadata":{
         
      },
      "code":101
   },
   "requestReceivedTimestamp":"2022-12-16T23:18:20.112683Z",
   "stageTimestamp":"2022-12-16T23:18:21.859605Z",
   "annotations":{
      "authorization.k8s.io/decision":"allow",
      "authorization.k8s.io/reason":""
   }
}
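In the meantime, since the json plugin extracts from any source, a partial workaround (assuming it is loaded alongside k8saudit) is to address the raw payload with JSON pointers in rule outputs, e.g. for the log above:

arn=%json.value[/user/extra/arn/0] session=%json.value[/user/extra/sessionName/0]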

We can create a PR to include those fields, but maybe we can discuss the possibility of making it dynamic?

Error: Plugin 'k8saudit' version '0.5.0' is not compatible with required plugin version '0.5.2'

Describe the bug

Falco version 0.34.1 is in a CrashLoopBackOff state and the logs complain about the following:

Error: Plugin 'k8saudit' version '0.5.0' is not compatible with required plugin version '0.5.2'

This has affected all of our Falco deployments across several clusters.

How to reproduce it

Deploy Falco version 0.34.1 with the JSON & k8saudit plugins loaded, and the pods will enter a CrashLoopBackOff state with the error posted above.

Expected behaviour

Running Falco pods with working k8saudit plugin 0.5.0 OR 0.5.2 loaded.
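A likely trigger, given the values.yaml below, is the falcoctl containers following k8saudit-rules:0.5 and pulling a rules release that requires a newer k8saudit plugin than the one bundled in the image. One possible mitigation (version pins are illustrative) is to pin the refs to a known-compatible pair, or let falcoctl manage the plugin artifact too:

falcoctl:
  config:
    artifact:
      allowedTypes:
        - rulesfile
        - plugin
      install:
        refs: [falco-rules:0, k8saudit-rules:0.5.0, k8saudit:0.5.2]
      follow:
        refs: [falco-rules:0, k8saudit-rules:0.5.0]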

Screenshots

Environment

  • Falco version:
  • System info: 0.34.1 (x86_64)
  • Cloud provider or hardware configuration: K8s running on Openstack
  • OS: Ubuntu 22.04
  • Kernel: 5.15.0-67-generic
  • Installation method: Kubernetes & Helm

Additional context

This is our values.yaml file:

# Default values for Falco.

###############################
# General deployment settings #
###############################

image:
  # -- The image pull policy.
  pullPolicy: IfNotPresent
  # -- The image registry to pull from.
  registry: docker.io
  # -- The image repository to pull from
  repository: falcosecurity/falco-no-driver
  # -- The image tag to pull. Overrides the image tag whose default is the chart appVersion.
  tag: ""

# -- Secrets containing credentials when pulling from private/secure registries.
imagePullSecrets: []
# -- Put here the new name if you want to override the release name used for Falco components.
nameOverride: ""
# -- Same as nameOverride but for the fullname.
fullnameOverride: ""
# -- Override the deployment namespace
namespaceOverride: ""

rbac:
  # Create and use rbac resources when set to true. Needed to fetch k8s metadata from the api-server.
  create: true

serviceAccount:
  # -- Specifies whether a service account should be created.
  create: true
  # -- Annotations to add to the service account.
  annotations: {}
  # -- The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# -- Add additional pod annotations
podAnnotations: {}

# -- Add additional pod labels
podLabels: {}

# -- Set pod priorityClassName
podPriorityClassName:

# -- Set securityContext for the pods
# These security settings are overridden by the ones specified for the specific
# containers when there is overlap.
podSecurityContext: {}

# Note that `containerSecurityContext`:
#  - will not apply to init containers, if any;
#  - takes precedence over other automatic configurations (see below).
#
# Based on the `driver` configuration the auto generated settings are:
# 1) driver.enabled = false:
#    securityContext: {}
#
# 2) driver.enabled = true and (driver.kind = module || driver.kind = modern-bpf):
#    securityContext:
#     privileged: true
#
# 3) driver.enabled = true and driver.kind = ebpf:
#    securityContext:
#     privileged: true
#
# 4) driver.enabled = true and driver.kind = ebpf and driver.ebpf.leastPrivileged = true
#    securityContext:
#     capabilities:
#      add:
#      - BPF
#      - SYS_RESOURCE
#      - PERFMON
#      - SYS_PTRACE
#
# -- Set securityContext for the Falco container.For more info see the "falco.securityContext" helper in "pod-template.tpl"
containerSecurityContext: {}

scc:
  # -- Create OpenShift's Security Context Constraint.
  create: false

resources:
  # -- Although the resources needed are subjective to the actual workload, we provide
  # sane defaults. If you have more questions or concerns, please refer
  # to #falco slack channel for more info about it.
  requests:
    cpu: 100m
    memory: 512Mi
  # -- Maximum amount of resources that Falco container could get.
  # If you are enabling more than one source in falco, than consider to increase
  # the cpu limits.
  limits:
    cpu: 1000m
    memory: 1024Mi
# -- Selectors used to deploy Falco on a given node/nodes.
nodeSelector: {}

# -- Affinity constraint for pods' scheduling.
affinity: {}

# -- Tolerations to allow Falco to run on Kubernetes masters.
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

# -- Parameters used
healthChecks:
  livenessProbe:
    # -- Tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 60
    # -- Number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- Specifies that the kubelet should perform the check every x seconds.
    periodSeconds: 15
  readinessProbe:
    # -- Tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 30
    # -- Number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- Specifies that the kubelet should perform the check every x seconds.
    periodSeconds: 15

# -- Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted.
# Set it to "true" when you need the Falco logs to be immediately displayed.
tty: true

#########################
# Scenario requirements #
#########################

# Sensors dislocation configuration (scenario requirement)
controller:
  # Available options: deployment, daemonset.
  kind: daemonset
  # Annotations to add to the daemonset or deployment
  annotations: {}
  daemonset:
    updateStrategy:
      # You can also customize maxUnavailable or minReadySeconds if you
      # need it
      # -- Perform rolling updates by default in the DaemonSet agent
      # ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
      type: RollingUpdate
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

# -- Network services configuration (scenario requirement)
# Add here your services to be deployed together with Falco.
services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP
  # Example configuration for the "k8sauditlog" plugin
  # - name: k8saudit-webhook
  #   type: NodePort
  #   ports:
  #     - port: 9765 # See plugin open_params
  #       nodePort: 30007
  #       protocol: TCP

# File access configuration (scenario requirement)
mounts:
  # -- A list of volumes you want to add to the Falco pods.
  volumes: []
  # -- A list of volumes you want to add to the Falco pods.
  volumeMounts: []
  # -- By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows it to override this behaviour for edge cases where `/proc` is needed but syscall data source is not enabled at the same time (e.g. for specific plugins).
  enforceProcMount: false

# Driver settings (scenario requirement)
driver:
  # -- Set it to false if you want to deploy Falco without the drivers.
  # Always set it to false when using Falco with plugins.
  enabled: true
  # -- Tell Falco which driver to use. Available options: module (kernel driver), ebpf (eBPF probe), modern-bpf (modern eBPF probe).
  kind: modern-bpf
  # -- Configuration section for ebpf driver.
  ebpf:
    # -- Path where the eBPF probe is located. It comes handy when the probe have been installed in the nodes using tools other than the init
    # container deployed with the chart.
    path:
    # -- Needed to enable eBPF JIT at runtime for performance reasons.
    # Can be skipped if eBPF JIT is enabled from outside the container
    hostNetwork: false
    # -- Constrain Falco with capabilities instead of running a privileged container.
    # This option is only supported with the eBPF driver and a kernel >= 5.8.
    # Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`).
    leastPrivileged: false
  # -- Configuration for the Falco init container.
  loader:
    # -- Enable/disable the init container.
    enabled: true
    initContainer:
      image:
        # -- The image pull policy.
        pullPolicy: IfNotPresent
        # -- The image registry to pull from.
        registry: docker.io
        # -- The image repository to pull from.
        repository: falcosecurity/falco-driver-loader
        #  -- Overrides the image tag whose default is the chart appVersion.
        tag: ""
      # -- Extra environment variables that will be pass onto Falco driver loader init container.
      env: []
      # -- Arguments to pass to the Falco driver loader init container.
      args: []
      # -- Resources requests and limits for the Falco driver loader init container.
      resources: {}
      # -- Security context for the Falco driver loader init container. Overrides the default security context. If driver.kind == "module" you must at least set `privileged: true`.
      securityContext: {}

# -- Gvisor configuration. Based on your system you need to set the appropriate values.
# Please, remember to add pod tolerations and affinities in order to schedule the Falco pods in the gVisor enabled nodes.
gvisor:
  # -- Set it to true if you want to deploy Falco with gVisor support.
  enabled: false
  # -- Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods.
  runsc:
    # -- Absolute path of the `runsc` binary in the k8s nodes.
    path: /home/containerd/usr/local/sbin
    # -- Absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it;
    root: /run/containerd/runsc
    # -- Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make aware `gVisor` of its presence.
    config: /run/containerd/runsc/config.toml

# Collectors for data enrichment (scenario requirement)
collectors:
  # -- Enable/disable all the metadata collectors.
  enabled: true

  docker:
    # -- Enable Docker support.
    enabled: false
    # -- The path of the Docker daemon socket.
    socket: /var/run/docker.sock

  containerd:
    # -- Enable ContainerD support.
    enabled: true
    # -- The path of the ContainerD socket.
    socket: /run/k3s/containerd/containerd.sock

  crio:
    # -- Enable CRI-O support.
    enabled: false
    # -- The path of the CRI-O socket.
    socket: /run/crio/crio.sock

  kubernetes:
    # -- Enable Kubernetes meta data collection via a connection to the Kubernetes API server.
    # When this option is disabled, Falco falls back to the container annotations to grab the metadata.
    # In such a case, only the ID, name, namespace, labels of the pod will be available.
    enabled: false
    # -- The apiAuth value is to provide the authentication method Falco should use to connect to the Kubernetes API.
    # The argument's documentation from Falco is provided here for reference:
    #
    #  <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>], --k8s-api-cert <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>]
    #     Use the provided files names to authenticate user and (optionally) verify the K8S API server identity.
    #     Each entry must specify full (absolute, or relative to the current directory) path to the respective file.
    #     Private key password is optional (needed only if key is password protected).
    #     CA certificate is optional. For all files, only PEM file format is supported.
    #     Specifying CA certificate only is obsoleted - when single entry is provided
    #     for this option, it will be interpreted as the name of a file containing bearer token.
    #     Note that the format of this command-line option prohibits use of files whose names contain
    #     ':' or '#' characters in the file name.
    # -- Provide the authentication method Falco should use to connect to the Kubernetes API.
    apiAuth: /var/run/secrets/kubernetes.io/serviceaccount/token
    ## -- Provide the URL Falco should use to connect to the Kubernetes API.
    apiUrl: "https://$(KUBERNETES_SERVICE_HOST)"
    # -- If true, only the current node (on which Falco is running) will be considered when requesting metadata of pods
    # to the API server. Disabling this option may have a performance penalty on large clusters.
    enableNodeFilter: true

###########################
# Extras and customization #
############################

extra:
  # -- Extra environment variables that will be pass onto Falco containers.
  env: []
  # -- Extra command-line arguments.
  args: []
  # -- Additional initContainers for Falco pods.
  initContainers: []

# -- certificates used by webserver and grpc server.
# paste certificate content or use helm with --set-file
# or use existing secret containing key, crt, ca as well as pem bundle
certs:
  # -- Existing secret containing the following key, crt and ca as well as the bundle pem.
  existingSecret: ""
  server:
    # -- Key used by gRPC and webserver.
    key: ""
    # -- Certificate used by gRPC and webserver.
    crt: ""
  ca:
    # -- CA certificate used by gRPC, webserver and AuditSink validation.
    crt: ""
# -- Third party rules enabled for Falco. More info on the dedicated section in README.md file.
customRules:
  {}
  # Although Falco comes with a nice default rule set for detecting weird
  # behavior in containers, our users are going to customize the run-time
  # security rule sets or policies for the specific container images and
  # applications they run. This feature can be handled in this section.
  #
  # Example:
  #
  # rules-traefik.yaml: |-
  #   [ rule body ]

########################
# Falco integrations   #
########################

# -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml
falcosidekick:
  # -- Enable falcosidekick deployment.
  enabled: false
  # -- Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used).
  fullfqdn: false
  # -- Listen port. Default value: 2801
  listenPort: ""

####################
# falcoctl config  #
####################
falcoctl:
  image:
    # -- The image pull policy.
    pullPolicy: IfNotPresent
    # -- The image registry to pull from.
    registry: docker.io
    # -- The image repository to pull from.
    repository: falcosecurity/falcoctl
    # -- Overrides the image tag whose default is the chart appVersion.
    tag: "0.4.0"
  artifact:
    # -- Runs "falcoctl artifact install" command as an init container. It is used to install artfacts before
    # Falco starts. It provides them to Falco by using an emptyDir volume.
    install:
      enabled: true
      # -- Extra environment variables that will be passed to the falcoctl-artifact-install init container.
      env: {}
      # -- Arguments to pass to the falcoctl-artifact-install init container.
      args: ["--verbose"]
      # -- Resources requests and limits for the falcoctl-artifact-install init container.
      resources: {}
      # -- Security context for the falcoctl init container.
      securityContext: {}
    # -- Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for
    # updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir)
    # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the
    # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible
    # with the running version of Falco before installing it.
    follow:
      enabled: true
      # -- Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container.
      env: {}
      # -- Arguments to pass to the falcoctl-artifact-follow sidecar container.
      args: ["--verbose"]
      # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container.
      resources: {}
      # -- Security context for the falcoctl-artifact-follow sidecar container.
      securityContext: {}
  # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted in the falcoctl containers.
  config:
    # -- List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see:
    # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview
    indexes:
    - name: falcosecurity
      url: https://falcosecurity.github.io/falcoctl/index.yaml
    # -- Configuration used by the artifact commands.
    artifact:
      # -- List of artifact types that falcoctl will handle. If a configured ref resolves to an artifact whose type
      # is not contained in the list, falcoctl will refuse to download and install that artifact.
      allowedTypes:
        - rulesfile
      install:
        # -- Whether falcoctl should resolve the dependencies of artifacts. By default this is true, but for our use case we disable it.
        resolveDeps: false
        # -- List of artifacts to be installed by the falcoctl init container.
        refs: [falco-rules:0, k8saudit-rules:0.5]
        # -- Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir
        # mounted also by the Falco pod.
        rulesfilesDir: /rulesfiles
        # -- Same as the one above but for the plugin artifacts.
        pluginsDir: /plugins
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [falco-rules:0, k8saudit-rules:0.5]
        # -- How often the tool checks for new versions of the followed artifacts.
        every: 6h
        # -- HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible
        # with the running Falco instance.
        falcoversions: http://localhost:8765/versions
        # -- See the fields of the artifact.install section.
        rulesfilesDir: /rulesfiles
        # -- See the fields of the artifact.install section.
        pluginsDir: /plugins
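  # Example (illustrative): to have falcoctl also manage the k8saudit plugin binary,
  # allow the "plugin" artifact type and add a plugin reference (the version shown is
  # assumed; pick one compatible with your Falco version):
  #
  # config:
  #   artifact:
  #     allowedTypes:
  #       - rulesfile
  #       - plugin
  #     install:
  #       refs: [falco-rules:0, k8saudit-rules:0.5, k8saudit:0.5]
  #     follow:
  #       refs: [falco-rules:0, k8saudit-rules:0.5, k8saudit:0.5]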

######################
# falco.yaml config  #
######################
falco:
  # File(s) or Directories containing Falco rules, loaded at startup.
  # The name "rules_file" is only for backwards compatibility.
  # If the entry is a file, it will be read directly. If the entry is a directory,
  # every file in that directory will be read, in alphabetical order.
  #
  # falco_rules.yaml ships with the falco package and is overridden with
  # every new software version. falco_rules.local.yaml is only created
  # if it doesn't exist. If you want to customize the set of rules, add
  # your customizations to falco_rules.local.yaml.
  #
  # The files will be read in the order presented here, so make sure if
  # you have overrides they appear in later files.
  # -- The location of the rules files that will be consumed by Falco.
  rules_file:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d

  #
  # Plugins that are available for use. These plugins are not loaded by
  # default, as they require explicit configuration (for example, the
  # cloudtrail plugin must point to CloudTrail log files).
  #

  # To learn more about the supported formats for
  # init_config/open_params for the cloudtrail plugin, see the README at
  # https://github.com/falcosecurity/plugins/blob/master/plugins/cloudtrail/README.md.
  # -- Plugins configuration. Add here all plugins and their configuration. Please
  # consult the plugins documentation for more info. Remember to add the plugin names
  # to "load_plugins: []" in order to load them in Falco.
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config:
        useAsync: false
      #   maxEventSize: 262144
      #   webhookMaxBatchSize: 12582912
      #   sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""

  # Setting this list to empty ensures that the above plugins are *not*
  # loaded and enabled by default. If you want to use the above plugins,
  # set a meaningful init_config/open_params for each plugin and then
  # list them here, as done below for k8saudit and json.
  # -- Add here the names of the plugins that you want to be loaded by Falco. Please make sure that
  # plugins have been configured under the "plugins" section before adding them here.
  # Please make sure to configure the falcoctl tool to download and install the very same plugins
  # you are loading here. You should add the references in the falcoctl.config.artifact.install.refs array
  # for each plugin you are loading.
  load_plugins: [k8saudit, json]

  # -- Watch config file and rules files for modification.
  # When a file is modified, Falco will propagate new config,
  # by reloading itself.
  watch_config_files: true

  # -- If true, the times displayed in log messages and output messages
  # will be in ISO 8601. By default, times are displayed in the local
  # time zone, as governed by /etc/localtime.
  time_format_iso_8601: false

  # -- If "true", print falco alert messages and rules file
  # loading/validation results as json, which allows for easier
  # consumption by downstream programs. Default is "false".
  json_output: true

  # -- When using json output, whether or not to include the "output" property
  # itself (e.g. "File below a known binary directory opened for writing
  # (user=root ....") in the json output.
  json_include_output_property: true

  # -- When using json output, whether or not to include the "tags" property
  # itself in the json output. If set to true, outputs caused by rules
  # with no tags will have a "tags" field set to an empty array. If set to
  # false, the "tags" field will not be included in the json output at all.
  json_include_tags_property: true

  # -- Send information logs to stderr. Note these are *not* security
  # notification logs! These are just Falco lifecycle (and possibly error) logs.
  log_stderr: true
  # -- Send information logs to syslog. Note these are *not* security
  # notification logs! These are just Falco lifecycle (and possibly error) logs.
  log_syslog: true

  # -- Minimum log level to include in logs. Note: these levels are
  # separate from the priority field of rules. This refers only to the
  # log level of falco's internal logging. Can be one of "emergency",
  # "alert", "critical", "error", "warning", "notice", "info", "debug".
  log_level: debug

  # Falco is capable of managing the logs coming from libs. If enabled,
  # the libs logger sends its log records to the same outputs supported by
  # Falco (stderr and syslog). Disabled by default.
  libs_logger:
    # -- Enable the libs logger.
    enabled: false
    # -- Minimum log severity to include in the libs logs. Note: this value is
    # separate from the log level of the Falco logger and does not affect it.
    # Can be one of "fatal", "critical", "error", "warning", "notice",
    # "info", "debug", "trace".
    severity: debug

  # -- Minimum rule priority level to load and run. All rules having a
  # priority more severe than this level will be loaded/run.  Can be one
  # of "emergency", "alert", "critical", "error", "warning", "notice",
  # "informational", "debug".
  priority: debug

  # -- Whether or not output to any of the output channels below is
  # buffered. Defaults to false
  buffered_outputs: false

  # Falco uses a shared buffer between the kernel and userspace to pass
  # system call information. When Falco detects that this buffer is
  # full and system calls have been dropped, it can take one or more of
  # the following actions:
  #   - ignore: do nothing (default when list of actions is empty)
  #   - log: log a DEBUG message noting that the buffer was full
  #   - alert: emit a Falco alert noting that the buffer was full
  #   - exit: exit Falco with a non-zero rc
  #
  # Notice it is not possible to ignore and log/alert messages at the same time.
  #
  # The rate at which log/alert messages are emitted is governed by a
  # token bucket. The rate corresponds to one message every 30 seconds
  # with a burst of one message (by default).
  #
  # The messages are emitted when the percentage of dropped system calls
  # with respect to the number of events in the last second
  # is greater than the given threshold (a double in the range [0, 1]).
  #
  # For debugging/testing it is possible to simulate the drops using
  # the `simulate_drops: true`. In this case the threshold does not apply.

  syscall_event_drops:
    # -- The messages are emitted when the percentage of dropped system calls
    # with respect to the number of events in the last second
    # is greater than the given threshold (a double in the range [0, 1]).
    threshold: .1
    # -- Actions to be taken when system calls were dropped from the circular buffer.
    actions:
      - log
      - alert
    # -- Rate at which log/alert messages are emitted.
    rate: .03333
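    # For example, the default .03333 is roughly 1/30, i.e. at most one message
    # every 30 seconds once the initial burst is consumed.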
    # -- Max burst of messages emitted.
    max_burst: 1
    # -- Flag to enable drops for debug purposes.
    simulate_drops: false

  # Falco uses a shared buffer between the kernel and userspace to receive
  # the events (eg., system call information) in userspace.
  #
  # However, the underlying libraries can also timeout for various reasons.
  # For example, there could have been issues while reading an event.
  # Or the particular event needs to be skipped.
  # Normally, it's very unlikely that Falco does not receive events consecutively.
  #
  # Falco is able to detect such an uncommon situation.
  #
  # Here you can configure the maximum number of consecutive timeouts without an event
  # after which you want Falco to alert.
  # By default this value is set to 1000 consecutive timeouts without an event at all.
  # How this value maps to a time interval depends on the CPU frequency.

  syscall_event_timeouts:
    # -- Maximum number of consecutive timeouts without an event
    # after which you want Falco to alert.
    max_consecutives: 1000

  # --- [Description]
  #
  # This is an index that controls the dimension of the syscall buffers.
  # The syscall buffer is the shared space between Falco and its drivers where all the syscall events
  # are stored.
  # Falco uses a syscall buffer for every online CPU, and all these buffers share the same dimension.
  # So this parameter allows you to control the size of all the buffers!
  #
  # --- [Usage]
  #
  # You can choose between different indexes: from `1` to `10` (`0` is reserved for future uses).
  # Every index corresponds to a dimension in bytes:
  #
  # [(*), 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, 512 MB]
  #   ^    ^     ^     ^     ^     ^      ^      ^       ^       ^       ^
  #   |    |     |     |     |     |      |      |       |       |       |
  #   0    1     2     3     4     5      6      7       8       9       10
  #
  # As you can see the `0` index is reserved, while the index `1` corresponds to
  # `1 MB` and so on.
  #
  # These dimensions in bytes derive from the fact that the buffer size must be:
  # (1) a power of 2.
  # (2) a multiple of your system_page_dimension.
  # (3) greater than `2 * (system_page_dimension)`.
  #
  # According to these constraints, it is possible that sometimes you cannot use all the indexes. Let's consider an
  # example to better understand it:
  # If you have a `page_size` of 1 MB, the first available buffer size is 4 MB, because 2 MB is exactly
  # `2 * (system_page_size)` -> `2 * 1 MB`, and that is not enough: we need more than `2 * (system_page_size)`!
  # So from this example it is clear that with a page size of 1 MB the first index that you can use is `3`.
  #
  # Please note: this is a very extreme case just to illustrate the mechanism; usually the page size is
  # something like 4 KB, so you have no problems at all and can use all the indexes (from `1` to `10`).
  #
  # To check your system page size use the Falco `--page-size` command line option. The output on a system with a page
  # size of 4096 Bytes (4 KB) should be the following:
  #
  # "Your system page size is: 4096 bytes."
  #
  # --- [Suggestions]
  #
  # Before the introduction of this param, the buffer size was fixed to 8 MB (index `4`).
  # This chart defaults to index `5` (16 MB, see the value below): a larger buffer can reduce
  # syscall drops in production-heavy systems without noticeable impact, although very large
  # buffers could slow down the entire machine.
  # On the other side, you can try to reduce the buffer size to speed up the system, but this could
  # increase the number of syscall drops!
  # As a final remark consider that the buffer size is mapped twice in the process' virtual memory so a buffer of 8 MB
  # will result in a 16 MB area in the process virtual memory.
  # Please pay attention when you use this parameter and change it only if the default size doesn't fit your use case.
  # -- This is an index that controls the dimension of the syscall buffers.
  syscall_buf_size_preset: 5
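  # For example (illustrative arithmetic): the default preset 5 selects 16 MB buffers,
  # so a node with 8 online CPUs allocates 8 * 16 MB = 128 MB, which shows up as
  # roughly 256 MB in the process' virtual memory since every buffer is mapped twice.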

  ############## [EXPERIMENTAL] Modern BPF probe specific ##############
  # Please note: these configs regard only the modern BPF probe. They
  # are experimental so they could change over releases.
  #
  # `cpus_for_each_syscall_buffer`
  #
  # --- [Description]
  #
  # This is an index that controls how many CPUs you want to assign to a single
  # syscall buffer (ring buffer). By default, every syscall buffer is associated to
  # 2 CPUs, so the mapping is 1:2. The modern BPF probe allows you to choose different
  # mappings, for example, 1:1 would mean a syscall buffer for each CPU.
  #
  # --- [Usage]
  #
  # You can choose between different indexes: from `0` to `MAX_NUMBER_ONLINE_CPUs`.
  # `0` is a special value and it means a single syscall buffer shared between all
  # your online CPUs. `0` has the same effect as `MAX_NUMBER_ONLINE_CPUs`, the rationale
  # is that `0` allows you to create a single buffer without knowing the number of online
  # CPUs on your system.
  # Let's consider an example to better understand it:
  #
  # Consider a system with 7 online CPUs:
  #
  #          CPUs     0  X  2  3  X  X  6  7  8  9   (X means offline CPU)
  #
  # - `1` means a syscall buffer for each CPU so 7 buffers
  #
  #          CPUs     0  X  2  3  X  X  6  7  8  9   (X means offline CPU)
  #                   |     |  |        |  |  |  |
  #       BUFFERs     0     1  2        3  4  5  6
  #
  # - `2` (Default value) means a syscall buffer for each CPU pair, so 4 buffers
  #
  #          CPUs     0  X  2  3  X  X  6  7  8  9   (X means offline CPU)
  #                   |     |  |        |  |  |  |
  #       BUFFERs     0     0  1        1  2  2  3
  #
  # Please note that we need 4 buffers: 3 buffers are associated with CPU pairs, and the last
  # one is mapped to just 1 CPU since we have an odd number of online CPUs.
  #
  # - `0` or `MAX_NUMBER_ONLINE_CPUs` mean a syscall buffer shared between all CPUs, so 1 buffer
  #
  #          CPUs     0  X  2  3  X  X  6  7  8  9   (X means offline CPU)
  #                   |     |  |        |  |  |  |
  #       BUFFERs     0     0  0        0  0  0  0
  #
  # Moreover you can combine this param with `syscall_buf_size_preset`
  # index, for example, you could create a huge single syscall buffer
  # shared between all your online CPUs of 512 MB (so `syscall_buf_size_preset=10`).
  #
  # --- [Suggestions]
  #
  # We chose index `2` (so one syscall buffer for each CPU pair) as default because the modern bpf probe
  # follows a different memory allocation strategy with respect to the other 2 drivers (bpf and kernel module).
  # That said, you are free to find the preferred configuration for your system.
  # Considering a fixed `syscall_buf_size_preset` and so a fixed buffer dimension:
  # - a lower number of buffers can speed up your system (lower memory footprint)
  # - too low a number of buffers could increase contention in the kernel, causing an
  #   overall slowdown of the system.
  # If you don't have a huge event throughput and are not experiencing tons of drops,
  # you can try to reduce the number of buffers for a lower memory footprint.

  modern_bpf:
    # -- [MODERN PROBE ONLY] This is an index that controls how many CPUs you want to assign to a single syscall buffer.
    cpus_for_each_syscall_buffer: 2
  ############## [EXPERIMENTAL] Modern BPF probe specific ##############

  # Falco continuously monitors outputs performance. When an output channel does not
  # allow an alert to be delivered within a given deadline, an error is reported indicating
  # which output is blocking notifications.
  # The timeout error will be reported to the log according to the above log_* settings.
  # Note that the notification will not be discarded from the output queue; thus,
  # output channels may indefinitely remain blocked.
  # An output timeout error indeed indicates a misconfiguration issue or I/O problems
  # that cannot be recovered by Falco and should be fixed by the user.
  #
  # The "output_timeout" value specifies the duration in milliseconds to wait before
  # considering the deadline exceeded.
  #
  # With a 2000ms default, the notification consumer can block the Falco output
  # for up to 2 seconds without reaching the timeout.
  # -- Duration in milliseconds to wait before considering the output timeout deadline exceeded.
  output_timeout: 2000

  # A throttling mechanism implemented as a token bucket limits the
  # rate of Falco notifications. One rate limiter is assigned to each event
  # source, so that alerts coming from one can't influence the throttling
  # mechanism of the others. This is controlled by the following options:
  #  - rate: the number of tokens (i.e. the right to send a notification)
  #    gained per second. When 0, the throttling mechanism is disabled.
  #    Defaults to 0.
  #  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.
  #
  # With these defaults, the throttling mechanism is disabled.
  # For example, by setting rate to 1 Falco could send up to 1000 notifications
  # after an initial quiet period, and then up to 1 notification per second
  # afterward. It would gain the full burst back after 1000 seconds of
  # no activity.

  outputs:
    # -- Number of tokens gained per second.
    rate: 1
    # -- Maximum number of tokens outstanding.
    max_burst: 1000

  # Where security notifications should go.
  # Multiple outputs can be enabled.

  syslog_output:
    # -- Enable syslog output for security notifications.
    enabled: true

  # If keep_alive is set to true, the file will be opened once and
  # continuously written to, with each output message on its own
  # line. If keep_alive is set to false, the file will be re-opened
  # for each output message.
  #
  # Also, the file will be closed and reopened if falco is signaled with
  # SIGUSR1.

  file_output:
    # -- Enable file output for security notifications.
    enabled: false
    # -- Open file once or every time a new notification arrives.
    keep_alive: false
    # -- The filename for logging notifications.
    filename: ./events.txt

  stdout_output:
    # -- Enable stdout output for security notifications.
    enabled: true

  # Falco contains an embedded webserver that exposes a health endpoint that can be used to check if Falco is up and running.
  # By default the endpoint is /healthz.
  #
  # The ssl_certificate is a combination SSL Certificate and corresponding
  # key contained in a single file. You can generate a key/cert as follows:
  #
  # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
  # $ cat certificate.pem key.pem > falco.pem
  # $ sudo cp falco.pem /etc/falco/falco.pem
  webserver:
    # -- Enable Falco embedded webserver.
    enabled: true
    # -- Number of threads for the webserver; with 0, Falco automatically sets it based on the number of online cores.
    threadiness: 0
    # -- Port where the Falco embedded webserver listens for connections.
    listen_port: 8765
    # -- Endpoint where Falco exposes the health status.
    k8s_healthz_endpoint: /healthz
    # -- Enable SSL on Falco embedded webserver.
    ssl_enabled: false
    # -- Certificate bundle path for the Falco embedded webserver.
    ssl_certificate: /etc/falco/falco.pem

  # Possible additional things you might want to do with program output:
  #   - send to a slack webhook:
  #         program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
  #   - logging (alternate method than syslog):
  #         program: logger -t falco-test
  #   - send over a network connection:
  #         program: nc host.example.com 80

  # If keep_alive is set to true, the program will be started once and
  # continuously written to, with each output message on its own
  # line. If keep_alive is set to false, the program will be re-spawned
  # for each output message.
  #
  # Also, the program will be closed and reopened if falco is signaled with
  # SIGUSR1.
  program_output:
    # -- Enable program output for security notifications.
    enabled: false
    # -- Start the program once or re-spawn when a notification arrives.
    keep_alive: false
    # -- Command to execute for program output.
    program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"

  http_output:
    # -- Enable http output for security notifications.
    enabled: false
    # -- When set, this will override an auto-generated URL which matches the falcosidekick Service.
    # When including Falco inside a parent helm chart, you must set this since the auto-generated URL won't match (#280).
    url: ""
    user_agent: "falcosecurity/falco"

  # Falco supports running a gRPC server with two main binding types
  # 1. Over the network with mandatory mutual TLS authentication (mTLS)
  # 2. Over a local unix socket with no authentication
  # By default, the gRPC server is disabled, with no enabled services (see grpc_output)
  # please comment/uncomment and change the options below accordingly to configure it.
  # Important note: if Falco has any trouble creating the gRPC server,
  # this information will be logged; however, the main Falco daemon will not be stopped.
  # gRPC server over network with (mandatory) mutual TLS configuration.
  # This gRPC server is secure by default so you need to generate certificates and update their paths here.
  # By default the gRPC server is off.
  # You can configure the address to bind and expose it.
  # By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.
  # grpc:
  #   enabled: true
  #   bind_address: "0.0.0.0:5060"
  #   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores
  #   threadiness: 0
  #   private_key: "/etc/falco/certs/server.key"
  #   cert_chain: "/etc/falco/certs/server.crt"
  #   root_certs: "/etc/falco/certs/ca.crt"

  # -- gRPC server using a unix socket
  grpc:
    # -- Enable the Falco gRPC server.
    enabled: false
    # -- Bind address for the grpc server.
    bind_address: "unix:///run/falco/falco.sock"
    # -- Number of threads (and context) the gRPC server will use, 0 by default, which means "auto".
    threadiness: 0

  # gRPC output service.
  # By default it is off.
  # By enabling this all the output events will be kept in memory until you read them with a gRPC client.
  # Make sure to have a consumer for them or leave this disabled.
  grpc_output:
    # -- Enable the gRPC output; events will be kept in memory until you read them with a gRPC client.
    enabled: false

  # Container orchestrator metadata fetching params
  metadata_download:
    # -- Max allowed response size (in MB) when fetching metadata from Kubernetes.
    max_mb: 100
    # -- Sleep time (in µs) for each download chunk when fetching metadata from Kubernetes.
    chunk_wait_us: 1000
    # -- Watch frequency (in seconds) when fetching metadata from Kubernetes.
    watch_freq_sec: 1

[github] Add repo name as field

Motivation

Currently the github.repo field exposes the full repo URL (the html_url JSON field). To make processing the output with e.g. falcosidekick easier (#falcosidekick 537), it would be great to have access to just the repo name as well.

Feature

Split the current github.repo field into github.repo.name and github.repo.url fields, so that github.repo.name provides the short repo name (<repo>) and github.repo.url provides the current value of html_url (https://github.com/<owner>/<repo>).
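For illustration, a rule output could then reference both fields. This is only a sketch: the split fields are the ones proposed above, and the github.type field is assumed from the plugin's existing field set:

- rule: Example Repo Push Notice
  desc: Illustrative rule using the proposed split fields
  condition: github.type = push
  output: Push received (repo=%github.repo.name url=%github.repo.url)
  priority: NOTICE
  source: github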

Alternatives

Additional context

ka.sourceips is always N/A

Describe the bug

The ka.sourceips value is always N/A even when the k8s audit event contains a valid IP.

How to reproduce it

Taking this k8s audit event:

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "f6680d88-fa8e-4632-be24-4454439581a5",
  "stage": "ResponseComplete",
  "requestURI": "/apis/kyverno.io/v1alpha2/namespaces/ids/backgroundscanreports/a056be7f-bf34-4530-93a8-105c541585c4",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:core:kyverno",
    "uid": "055bd069-dbca-40d5-9dbd-26718003be43",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:core",
      "system:authenticated"
    ],
    "extra": {
      "authentication.kubernetes.io/pod-name": [
        "kyverno-76c7c75d75-wf949"
      ],
      "authentication.kubernetes.io/pod-uid": [
        "2c18b7cd-3053-4ba5-9f43-7936de39cb05"
      ]
    }
  },
  "sourceIPs": [
    "192.168.121.136"
  ],
  "userAgent": "kyverno/v0.0.0 (linux/amd64) kubernetes/$Format",
  "objectRef": {
    "resource": "backgroundscanreports",
    "namespace": "ids",
    "name": "a056be7f-bf34-4530-93a8-105c541585c4",
    "apiGroup": "kyverno.io",
    "apiVersion": "v1alpha2"
  },
  "responseStatus": {
    "metadata": {},
    "status": "Failure",
    "message": "backgroundscanreports.kyverno.io \"a056be7f-bf34-4530-93a8-105c541585c4\" not found",
    "reason": "NotFound",
    "details": {
      "name": "a056be7f-bf34-4530-93a8-105c541585c4",
      "group": "kyverno.io",
      "kind": "backgroundscanreports"
    },
    "code": 404
  },
  "requestReceivedTimestamp": "2023-02-24T07:00:13.055259Z",
  "stageTimestamp": "2023-02-24T07:00:13.056314Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"kyverno\" of ClusterRole \"kyverno\" to ServiceAccount \"kyverno/keos-core\""
  }
}

And having this rule in my custom rules file:

    - rule: Check unknown source IP
      desc: Detect an unknown ip address doing requests to Kube API Server
      condition: kevt
      output: Unknown ip request to Kube API Server (sourceips=%ka.sourceips useragent=%ka.useragent)
      priority: NOTICE
      source: k8s_audit
      tags: [k8s]

The falco event shows like this:

07:00:31.960363000: Notice Unknown ip request to Kube API Server (sourceips=<NA> useragent=kyverno/v0.0.0 (linux/amd64) kubernetes/$Format)

Expected behaviour

The Falco event should contain the sourceIPs value from the k8s audit event log.

Environment

  • Falco version:
    0.34.1

  • System info:
    "machine": "x86_64",
    "nodename": "falco-7twq5",
    "release": "4.18.0-373.el8.x86_64",
    "sysname": "Linux",
    "version": "1 SMP Tue Mar 22 15:11:47 UTC 2022"

  • Cloud provider or hardware configuration:
    Vagrant local k8s cluster (1.24)

  • OS:
    CentOS Stream release 8

  • Kernel:
    Linux m1 4.18.0-373.el8.x86_64

  • Installation method:
    Chart

  • Additional comments:

I've also tried to debug the k8saudit plugin with extract_test.go, but it seems the JSON is not decoded because the `if evtNum != e.jdataEvtnum` condition (line 82 of extract.go) is never met. I don't know if I can test it locally somehow.

All the other fields seem to be parsed correctly, since I get the right values in the falco events from the rest of the rules.

k8saudit plugin can not output the change of configmap object

Describe the bug
The ka.req.configmap.obj field does not output any content.

How to reproduce it

The rule is as below:

    - rule: Not allowed configmap being changed
      desc: >
        Somebody is changing a configmap that is not allowed
      condition: >
        kevt
        and configmap
        and not (ka.verb in (watch, list))
        and ka.req.configmap.name = "falco-k8s-audit-rules"
      output: >
        "Not allowed configmap changing happenned (user=%ka.user.name node=%ka.target.name verb=%ka.verb resource=%ka.target.resource configmap=%ka.req.configmap.name m_obj=%ka.req.configmap.obj)"
      priority: ERROR
      source: k8s_audit
      tags: [k8s]

Expected behaviour

The ka.req.configmap.obj field should output the content after or before the edit.

Screenshots

Environment

  • Falco version:

Falco version: 0.34.1 (x86_64)

  • System info:
{
  "machine": "x86_64",
  "nodename": "k8s-master001",
  "release": "5.14.0-162.12.1.el9_1.x86_64",
  "sysname": "Linux",
  "version": "#1 SMP PREEMPT_DYNAMIC Mon Jan 23 14:51:52 EST 2023"
}
  • Cloud provider or hardware configuration:
    hardware configuration: VirtualBox 7.0.6
  • OS:
NAME="AlmaLinux"
VERSION="9.1 (Lime Lynx)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.1"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.1 (Lime Lynx)"
ANSI_COLOR="0;34"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:almalinux:almalinux:9::baseos"
HOME_URL="https://almalinux.org/"
DOCUMENTATION_URL="https://wiki.almalinux.org/"
BUG_REPORT_URL="https://bugs.almalinux.org/"

ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"
ALMALINUX_MANTISBT_PROJECT_VERSION="9.1"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.1"
  • Kernel:
Linux k8s-master001 5.14.0-162.12.1.el9_1.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 23 14:51:52 EST 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Installation method:

Kubernetes

Additional context

k8s_audit_rules missing rolebinding

Describe the bug
The default k8s_audit_rules.yaml is missing a macro for rolebinding.

How to reproduce it
You can find the code at lines 579 & 587 in https://github.com/falcosecurity/plugins/blob/master/plugins/k8saudit/rules/k8s_audit_rules.yaml
The rule is about rolebinding and clusterrolebinding, but the desc and condition only detect clusterrolebinding;
it does not comply with the rule name.
The better way is to add a rolebinding macro and use it in the condition:

- macro: rolebinding
  condition: ka.target.resource=rolebindings

- rule: K8s Role/Clusterrolebinding Created
  desc: Detect any attempt to create clusterrolebinding and rolebinding
  condition: (kactivity and kcreate and (clusterrolebinding or rolebinding) and response_successful)
  output: K8s Cluster Role Binding Created (user=%ka.user.name binding=%ka.target.name resource=%ka.target.resource subjects=%ka.req.binding.subjects role=%ka.req.binding.role resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Role/Clusterrolebinding Deleted
  desc: Detect any attempt to delete clusterrolebinding and rolebinding
  condition: (kactivity and kdelete and (clusterrolebinding or rolebinding) and response_successful)
  output: K8s Cluster Role Binding Deleted (user=%ka.user.name binding=%ka.target.name resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

Expected behaviour

Screenshots

  • Falco version: cloudtrail-0.8.0 Latest

Additional context
I think detecting rolebinding as well as clusterrolebinding makes the default rules more secure.

Provide signatures for plugins when distributed as OCI artifacts

Feature

For the last few months we've been distributing plugins as OCI artifacts: https://github.com/orgs/falcosecurity/packages?repo_name=plugins .

The next steps to continue with this work involve properly signing all artifacts. Keyless signing with cosign works well for our purpose, although for now we won't be able to use the new OCI v1.1 mechanisms to store signatures, mainly because the official spec isn't released yet at the time of writing ( https://opencontainers.org/release-notices/overview/ ) and GHCR does not support it yet.

I think it would be better to have OCI v1.1 support, but on the other hand Falco is security software, and when giving users the ability to download plugins for easier consumption, the most up-to-date signature support is pretty much mandatory :)

k8s-audit: panic on SIGINT

Describe the bug
If the falco pod running with the k8s-audit plugin receives a SIGINT signal, the k8s-audit plugin does not shut down gracefully but panics.

Fri Oct 21 08:47:09 2022: Falco version: 0.33.0 (x86_64)
Fri Oct 21 08:47:09 2022: Falco initialized with configuration file: /etc/falco/falco.yaml
Fri Oct 21 08:47:09 2022: Loading plugin 'k8saudit' from file /usr/share/falco/plugins/libk8saudit.so
Fri Oct 21 08:47:09 2022: Loading plugin 'json' from file /usr/share/falco/plugins/libjson.so
Fri Oct 21 08:47:09 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml
Fri Oct 21 08:47:09 2022: Loading rules from file /etc/falco/falco_rules.yaml
Fri Oct 21 08:47:09 2022: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Fri Oct 21 08:47:09 2022: Starting health webserver with threadiness 2, listening on port 8765
Fri Oct 21 08:47:09 2022: Enabled event sources: k8s_audit, syscall
Fri Oct 21 08:47:09 2022: Opening capture with plugin 'k8saudit'
Fri Oct 21 08:47:09 2022: Opening capture with Kernel module
Fri Oct 21 08:50:23 2022: SIGINT received, exiting...
Syscall event drop monitoring:
   - event drop detected: 0 occurrences
   - num times actions taken: 0
2022/10/21 08:50:23 http: panic serving 172.17.0.1:2674: send on closed channel
goroutine 942 [running]:
net/http.(*conn).serve.func1()
	/usr/local/go/src/net/http/server.go:1825 +0xbf
panic({0x7f3678354320, 0x7f3678395568})
	/usr/local/go/src/runtime/panic.go:844 +0x258
github.com/falcosecurity/plugins/plugins/k8saudit/pkg/k8saudit.(*Plugin).OpenWebServer.func1({0x7f36783977f0, 0xc000b3d420}, 0xc000c26300)
	/home/prow/go/src/github.com/falcosecurity/plugins/plugins/k8saudit/pkg/k8saudit/source.go:126 +0x371
net/http.HandlerFunc.ServeHTTP(0x200?, {0x7f36783977f0?, 0xc000b3d420?}, 0xc000c83aa8?)
	/usr/local/go/src/net/http/server.go:2084 +0x2f
net/http.(*ServeMux).ServeHTTP(0xc0005986a0?, {0x7f36783977f0, 0xc000b3d420}, 0xc000c26300)
	/usr/local/go/src/net/http/server.go:2462 +0x149
net/http.serverHandler.ServeHTTP({0x7f3678396cb0?}, {0x7f36783977f0, 0xc000b3d420}, 0xc000c26300)
	/usr/local/go/src/net/http/server.go:2916 +0x43b
net/http.(*conn).serve(0xc000b7e3c0, {0x7f3678397a78, 0xc000097980})
	/usr/local/go/src/net/http/server.go:1966 +0x5d7
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:3071 +0x4db

How to reproduce it
I have encountered this behavior each time the kubelet kills the falco pod when the liveness probe is not correctly configured.

Expected behaviour

The plugin should properly handle the SIGINT signal.

cloudtrail plugin: Support S3-based SNS ingestion

Motivation

A common practice is to use org-wide cloudtrails. These trails write events to S3 nested under the account number. As it stands today, there is no easy way to selectively ingest only specific accounts.

Feature

We would like to selectively ingest cloudtrail data from AWS accounts in an org-trail S3 bucket by configuring S3 bucket notifications with prefixes and pointing them to SNS. Currently the plugin expects SNS notifications directly from CloudTrail.

Alternatives

Additional context

Looking at the snsMessage struct, it appears that the plugin will only ingest directly from CloudTrail. The proposal is to introduce a flag/param that can be used to direct the plugin to read SNS notifications originating from S3.
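For illustration, the configuration might then look something like this (the snsEventsFromS3 flag is a hypothetical name for the proposed param; the sqs:// open_params form follows the plugin's documented usage):

plugins:
  - name: cloudtrail
    library_path: libcloudtrail.so
    init_config:
      # hypothetical flag from this proposal: interpret incoming SNS messages
      # as S3 bucket notifications instead of CloudTrail notifications
      snsEventsFromS3: true
    open_params: "sqs://falco-cloudtrail-notifications"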

New plugin socket / webhook

Motivation

Our organization wants to use Falco as the one and only threat-detection engine. The obstacle is that many sources of events in modern Linux are not covered by Falco, e.g., auth logs and other logs. There are many log collectors that can efficiently read a file and send the data on to a socket or over HTTP/S.

Feature

Add a new plugin to accept the data via sockets (TCP, UDP, Unix, Datagram, you name it).
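For illustration only, such a plugin might be configured like this (the plugin name and every parameter are hypothetical):

plugins:
  - name: socketlog
    library_path: libsocketlog.so
    init_config: ""
    open_params: "udp://0.0.0.0:5514"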

Alternatives

As an alternative, we are considering using two systems for alerts: falco stack and grafana + loki + alertmanager, but it will increase the maintenance costs of the whole threat-detection pipeline.

Additional context

If the idea is good enough, we are ready to contribute. Maybe it is better to develop a plugin in a separate repo.

In addition, if someone knows how to solve our problem another way we would love to hear it.

[json plugin error] json.value[...] doesn't accept json pointer

Describe the bug

When loading rules, falco reports the rules file as invalid with the following error:

LOAD_ERR_COMPILE_OUTPUT (Error compiling output):
 ....
 has an invalid index argument not composed only by digits: /user/extraauthentication.kubernetes.io/pod-name

How to reproduce it

Install the helm chart of falco with this config file:

falcoctl:
  artifact:
    install:
      enabled: true
    follow:
      enabled: true
  config:
    artifact:
      install:
        resolveDeps: false
        refs: [falco-rules:0, k8saudit-rules:0.5]
      follow:
        refs: [falco-rules:0, k8saudit-rules:0.5]

falco:
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config:
        ""
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
      open_params: ""
  load_plugins: [k8saudit, json]

And with this rule in k8s_audit_rules.local.yaml:

- list: getVerbs
  items: [list, get]

- rule: getFromPod
  desc: A pod tried to access resources in the cluster
  condition: ka.verb in (getVerbs) and ka.user.name != admin
  output: "%json.value[/user/extra/authentication.kubernetes.io/pod-name]"
  priority: info
  source: k8s_audit

and feeding falco with this json:

{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "level": "Metadata",
    "auditID": "4d80af72-c845-42c3-9159-97a97925fcac",
    "stage": "ResponseComplete",
    "requestURI": "/api/v1/namespaces/default/pods?limit=500",
    "verb": "list",
    "user": {
        "username": "system:serviceaccount:default:default",
        "uid": "ff7eb48d-d26c-4b11-9f18-e2b5e9be50ee",
        "groups": [
            "system:serviceaccounts",
            "system:serviceaccounts:default",
            "system:authenticated"
        ],
        "extra": {
            "authentication.kubernetes.io/pod-name": [
                "ubuntu-ubuntu"
            ],
            "authentication.kubernetes.io/pod-uid": [
                "7d49c124-be01-4f95-827e-de1125f05dc9"
            ]
        }
    },
    "sourceIPs": [
        "10.1.134.102"
    ],
    "userAgent": "kubectl/v1.27.4 (linux/amd64) kubernetes/fa3d799",
    "objectRef": {
        "resource": "pods",
        "namespace": "default",
        "apiVersion": "v1"
    },
    "responseStatus": {
        "metadata": {},
        "code": 200
    },
    "requestReceivedTimestamp": "2023-08-02T08:12:31.920374Z",
    "stageTimestamp": "2023-08-02T08:12:31.925808Z",
    "annotations": {
        "authorization.k8s.io/decision": "allow",
        "authorization.k8s.io/reason": ""
    }
}

Expected behaviour

Return in output: ubuntu-ubuntu
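As a side note (a sketch, not a verified fix): RFC 6901 escapes a literal "/" inside a key as "~1", so a spec-compliant pointer for this key would be

output: "%json.value[/user/extra/authentication.kubernetes.io~1pod-name]"

though the error above suggests Falco's output compiler rejects the expression before the JSON pointer is ever evaluated.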

Environment

  • Falco version:
    0.35.1
  • System info:
Wed Aug  2 12:53:30 2023: Falco version: 0.35.1 (x86_64)
Wed Aug  2 12:53:30 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
Wed Aug  2 12:53:30 2023: Loading plugin 'k8saudit' from file /usr/share/falco/plugins/libk8saudit.so
Wed Aug  2 12:53:30 2023: Loading plugin 'json' from file /usr/share/falco/plugins/libjson.so
Wed Aug  2 12:53:30 2023: Loading rules from file /etc/falco/falco_rules.yaml
Wed Aug  2 12:53:31 2023: Loading rules from file /etc/falco/local/falco_rules.local.yaml
Wed Aug  2 12:53:31 2023: Loading rules from file /etc/falco/k8s_audit_rules.yaml
{
  "machine": "x86_64",
  "nodename": "falco-vqvkp",
  "release": "5.15.0-76-generic",
  "sysname": "Linux",
  "version": "#83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023"
}

This output is missing k8s_audit_rules.local.yaml because I had to remove it to access the machine.

  • OS:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
  • Kernel:
    Linux falco-vqvkp 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 GNU/Linux
  • Installation method:
    helm chart

falco-k8saudit: Pod getting OOM - Memory leak ?

Describe the bug

A few days back I deployed Falco with the k8saudit plugin via the helm chart, and once the kube-apiserver succeeded in sending its logs to the pod, the pod started to consume more and more memory until it hit the limit and got OOM killed.

How to reproduce it

Deploy Falco with k8s plugin and plug it as the audit webhook of the Kubernetes cluster

Expected behaviour

No leaks

Screenshots


Environment

  • Falco version: 0.33.1
  • Installation method: Official Helm Chart
  • Plugins: k8saudit, json

Additional context

Values used:

falco:
  selectLabels:
    - app.kubernetes.io/name: falco-audit
    - app.kubernetes.io/instance: falco-audit
  rules_file:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config:
        ""
      open_params: "http://0.0.0.0:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  webserver:
    enabled: false
  http_output:
    enabled: true
    url: URL
  load_plugins: [ k8saudit, json ]
  libs_logger:
      enabled: true
  stdout_output:
      enabled: false
  json_output: true
  priority: error

k8s audit plug-ins for AKS and GCP

Motivation

As most projects today are multi-cloud, we would need a k8saudit plug-in for at least AKS and GCP.

Feature

k8s audit plug-ins for AKS and GKE

Alternatives

Today's alternative is creating a messaging stack on each CSP to send logs to the Falco web service.

Misleading alerts for rule K8s Serviceaccount Created

Describe the bug

I get numerous alerts of type K8s Serviceaccount Created, but no SA is created.
This is an example of such an alert:

{"output":"12:34:16.469623040: Notice K8s Serviceaccount Created (user=system:node:ip-1-2-3-4.eu-central-1.compute.internal user=prometheus-cloudwatch-exporter ns=monitoring resp=201 decision=allow reason=)","priority":"Notice","rule":"K8s Serviceaccount Created","source":"k8s_audit","tags":["k8s"],"time":"2021-11-11T12:34:16.469623040Z", "output_fields": {"jevt.time":"12:34:16.469623040","ka.auth.decision":"allow","ka.auth.reason":"","ka.response.code":"201","ka.target.name":"prometheus-cloudwatch-exporter","ka.target.namespace":"monitoring","ka.user.name":"system:node:ip-1-2-3-4.eu-central-1.compute.internal"}}

This is the matching k8s audit log entry (shortened for readability):

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"eadc032d-514c-43dc-b0f6-e3e4b26d4039","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/monitoring/serviceaccounts/prometheus-cloudwatch-exporter/token","verb":"create","user":{"username":"system:node:ip-1-2-3-4.eu-cent
ral-1.compute.internal","groups":["system:nodes","system:authenticated"]},"sourceIPs":["1.2.3.4"],"userAgent":"kubelet/v1.21.5 (linux/amd64) kubernetes/aea7bba","objectRef":{"resource":"serviceaccounts","namespace":"monitoring","name":"prometheus-cloudwatch-exporter","apiVersion":"v1","subresource":"token"},"res
ponseStatus":{"metadata":{},"code":201},"responseObject":{"kind":"TokenRequest","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null,"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"authentication.k8s.io/v1","time":"2021-11-11T12:34:16Z","fieldsType":"FieldsV1","fieldsV1
":{"f:spec":{"f:boundObjectRef":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:uid":{}},"f:expirationSeconds":{}}}}]},"spec":{"audiences":["kubernetes.svc.default"],"expirationSeconds":3607,"boundObjectRef":{"kind":"Pod","apiVersion":"v1","name":"prometheus-cloudwatch-exporter-7577684574-rjrmr","uid":"0f59c732-8822-475
6-b315-1ad719c8337c"}},"status":{"token":"ey...EQ","expirationTimestamp":"2021-11-11T13:34:23Z"}},"requestReceivedTimestamp":"2021-11-11T12:34:16.464330Z","stageTimestamp":"2021-11-11T12
:34:16.469623Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}

According to the log, it was a TokenRequest and not a SA creation.
This is what a SA creation looks like in the log:

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"0154a4c3-4755-474c-874b-47070ec04400","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/falco/serviceaccounts/cz-test"
,"verb":"update","user":{"username":"system:kube-controller-manager","groups":["system:authenticated"]},"sourceIPs":["127.0.0.1"],"userAgent":"kube-controller-manager/v1.21.5 (linux/amd64) kubernetes/aea7bba/toke
ns-controller","objectRef":{"resource":"serviceaccounts","namespace":"falco","name":"cz-test","uid":"ca2efd84-f3a5-48db-bf31-97c20f930bb0","apiVersion":"v1","resourceVersion":"255871357"},"responseStatus":{"metad
ata":{},"code":200},"requestObject":{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"cz-test","namespace":"falco","uid":"ca2efd84-f3a5-48db-bf31-97c20f930bb0","resourceVersion":"255871357","creation
Timestamp":"2021-11-12T05:04:25Z"},"secrets":[{"name":"cz-test-token-tb42d"}]},"responseObject":{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"cz-test","namespace":"falco","uid":"ca2efd84-f3a5-48d
b-bf31-97c20f930bb0","resourceVersion":"255871359","creationTimestamp":"2021-11-12T05:04:25Z"},"secrets":[{"name":"cz-test-token-tb42d"}]},"requestReceivedTimestamp":"2021-11-12T05:04:25.716247Z","stageTimestamp"
:"2021-11-12T05:04:25.723696Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:kube-controller-manager\" of ClusterRole \"system:
kube-controller-manager\" to User \"system:kube-controller-manager\""}}

How to reproduce it

Actually do nothing, and have a look at the falco logs of a running k8s cluster with some SAs.
The tokens get regenerated from time to time. These docs mention one hour.

These docs mention a different timeframe. In our setup, it happens multiple times during one hour.

Expected behaviour

The alert should only trigger when a SA is created, not for each token.

Screenshots

Environment

  • Falco version:
    0.30.0
  • System info:
{
   "machine": "x86_64",
   "nodename": "ip-1-2-3-4",
   "release": "5.8.0-1041-aws",
   "sysname": "Linux",
   "version": "#43~20.04.1-Ubuntu SMP Thu Jul 15 11:07:29 UTC 2021"
}
  • Cloud provider or hardware configuration: AWS
  • OS:
    Ubuntu 20.04
  • Kernel:
    5.8.0-1041-aws
  • Installation method:
    Helm chart

Additional context

Falco not triggering rules after disabling a default rule using customRules

Hi

I disabled a default rule, following this example issue, using customRules in the Falco configuration:

customRules:
  override-k8saudit.yaml: |-
    - rule: Anonymous Request Allowed
      enabled: false

By the way, this rule triggered more than 300 times in 15 minutes.
Do you know why?

The Falco logs show that the configuration file has been loaded successfully, and the rules have been loaded, but no events are being logged when manually triggered. The issue only occurs after disabling a default rule using the customRules property.
Log from the pod (add 2h to GMT):

Sun Mar 26 14:30:59 2023: Falco version: 0.34.1 (x86_64)
Sun Mar 26 14:30:59 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
Sun Mar 26 14:30:59 2023: Loading plugin 'k8saudit-eks' from file /usr/share/falco/plugins/libk8saudit-eks.so
Sun Mar 26 14:30:59 2023: Loading plugin 'json' from file /usr/share/falco/plugins/libjson.so
Sun Mar 26 14:30:59 2023: Loading rules from file /etc/falco/k8s_audit_rules.yaml
Sun Mar 26 14:30:59 2023: Loading rules from file /etc/falco/rules.d/override-k8saudit.yaml
Sun Mar 26 14:30:59 2023: Starting health webserver with threadiness 2, listening on port 8765
Sun Mar 26 14:30:59 2023: Enabled event sources: k8s_audit
Sun Mar 26 14:30:59 2023: Opening capture with plugin 'k8saudit-eks'

As you can see, no rules were triggered (e.g., exec into a pod).
But when I deleted the Falco namespace, all the events for the triggered rules arrived.

That was after I deleted the Falco namespace:
Unix time:
1,679,841,117,794,301,952 - Sun Mar 26 2023 14:31:57 GMT
1,679,841,197,925,266,944 - Sun Mar 26 2023 14:33:17 GMT

Do you have any idea how I can resolve this?

[k8saudit-eks] cannot parse JSON

Describe the bug

Just set up k8saudit-eks and the audit rules in Falco.
I can see the following in the logs, along with valid k8saudit logs:

2023/06/23 15:51:36 [k8saudit-eks] cannot parse JSON: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse array: cannot parse array value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object key: missing closing '"'; unparsed tail: ""
2023/06/23 15:51:36 [k8saudit-eks] unexpected tail: "d\n key value map stored with a resource ...on.gatekeeper.sh\\\",\\\"mutated\\\":false}\"}}"
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse array: cannot parse array value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse object: cannot parse object value: cannot parse string: missing closing '"'; unparsed tail: ""
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse number: unexpected char: "d"; unparsed tail: "dD"
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse number: unexpected char: "u"; unparsed tail: "uringSchedulingIgnoredDuringExecution\":{...he ones selected by namesp[Truncated...]"
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse number: unexpected char: "s"; unparsed tail: "st\n and null namespaceSelector means \\\"t...nity) or not co-located (a[Truncated...]"
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse number: unexpected char: "b"; unparsed tail: "be"
2023/06/23 15:50:21 [k8saudit-eks] cannot parse JSON: cannot parse number: unexpected char: "l"; unparsed tail: "lSelector in the specified namespaces, w...on.gatekeeper.sh\\\",\\\"mutated\\\":false}\"}}"

How to reproduce it

Configure audit logs in EKS
Configure falco 0.35 with the k8saudit-eks plug-in, default configuration, falco provided audit rules

Expected behaviour

The plug-in should process the logs correctly

Environment

  • Falco version:
    0.35.0
    k8saudit-eks:0.2
    json:0.7

  • System info:
    Fri Jun 23 16:03:16 2023: Falco version: 0.35.0 (x86_64)
    Fri Jun 23 16:03:16 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
    Fri Jun 23 16:03:16 2023: Loading plugin 'k8saudit-eks' from file /usr/share/falco/plugins/libk8saudit-eks.so
    Fri Jun 23 16:03:16 2023: Loading plugin 'json' from file /usr/share/falco/plugins/libjson.so
    Fri Jun 23 16:03:16 2023: Loading rules from file /etc/falco/k8s_audit_rules.yaml
    Fri Jun 23 16:03:16 2023: Loading rules from file /etc/falco/falco_rules.yaml
    Fri Jun 23 16:03:16 2023: Loading rules from file /etc/falco/rules.d/k8saudit-rules-upm.yaml
    Fri Jun 23 16:03:16 2023: Loading rules from file /etc/falco/rules.d/rules-upm.yaml
    {
    "machine": "x86_64",
    "nodename": "falco-9bwd2",
    "release": "5.15.0-1037-aws",
    "sysname": "Linux",
    "version": "41~20.04.1-Ubuntu SMP Mon May 22 18:18:00 UTC 2023"
    }

  • Cloud provider or hardware configuration:
    EKS 1.24

  • OS:
    PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
    NAME="Debian GNU/Linux"
    VERSION_ID="11"
    VERSION="11 (bullseye)"
    VERSION_CODENAME=bullseye
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

  • Kernel:
    Linux falco-9bwd2 5.15.0-1037-aws 41~20.04.1-Ubuntu SMP Mon May 22 18:18:00 UTC 2023 x86_64 GNU/Linux

  • Installation method:
    Kubernetes

Cloudwatch Logs plugin

Motivation

In AWS, a lot of managed services store their logs in CloudWatch Logs, especially EKS. It would be really useful to have this source too.

Feature

A Cloudwatch Logs plugin

Alternatives

The community has tackled this need with some extra tooling: https://github.com/xebia/falco-eks-audit-bridge

Additional context

This logic may be replicated for other cloud providers

Add two users to eks_allowed_k8s_users list

Motivation

The rules document k8s_audit_rules.yaml contains the list eks_allowed_k8s_users for whitelisting Kubernetes users that are specific and standard to Amazon EKS.

As of now the list is not complete (which it never will be), leading to warnings when the rules are used without customization.

14:54:29.810991000: Warning K8s Operation performed by user not in allowed list of users (user=eks:cloud-controller-manager target=cloud-controller-manager/leases verb=get uri=/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s resp=200)
14:54:29.969626000: Warning K8s Operation performed by user not in allowed list of users (user=eks:vpc-resource-controller target=cp-vpc-resource-controller/configmaps verb=get uri=/api/v1/namespaces/kube-system/configmaps/cp-vpc-resource-controller resp=200)

Feature

Add the two users to the eks_allowed_k8s_users list

Alternatives

Users of the rules can add them themselves, for example like this:

- list: eks_allowed_k8s_users
  append: true
  items:
    # Used by official VPC CNI add-on for Amazon EKS.
    - eks:vpc-resource-controller
    # Used by official AWS cloud controller manager.
    - eks:cloud-controller-manager

RFE: GitHub - Support GitHub Enterprise Server

Motivation
We would like to use Falco to monitor/protect our on-prem GitHub deployment.

Feature
We would like to use Falco to monitor all activity on our GitHub Enterprise deployment. This would need two changes:

  1. Provide a configuration option to supply the GitHub base URL, e.g. https://github.mycompany.com
  2. Allow for manually creating the GitHub webhook at the org or enterprise level.

As GitHub administrators, we can define a webhook that fires on all repositories, so there is no need for Falco to create/remove webhooks dynamically. Attempting to add/remove webhooks dynamically would likely cause extreme startup delays with tens of thousands of repositories.
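
To gauge the size of change 1, here is a hedged sketch of how client construction could branch on a configured base URL, assuming the plugin uses google/go-github (an assumption; the option name and URL are illustrative):

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/google/go-github/github"
)

// newGithubClient returns a client for github.com when baseURL is empty,
// or one pointed at an on-prem GitHub Enterprise Server otherwise.
func newGithubClient(baseURL string, hc *http.Client) (*github.Client, error) {
	if baseURL == "" {
		return github.NewClient(hc), nil
	}
	// Recent go-github releases append /api/v3/ to the base URL if it
	// is missing, so "https://github.mycompany.com/" works as-is.
	return github.NewEnterpriseClient(baseURL, baseURL, hc)
}

func main() {
	// baseURL here stands in for the hypothetical init-config option from point 1.
	client, err := newGithubClient("https://github.mycompany.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(client.BaseURL.String()) // resolved API endpoint
}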

GCP audit events Plugin

A new Plugin Request

A plugin to read GCP audit events via a Pub/Sub topic.

I have noticed in the Slack community that there was some work regarding this. Is it still planned on the roadmap?
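
For context, consuming messages from a Pub/Sub subscription with the official Go client (cloud.google.com/go/pubsub) is straightforward; below is a minimal consumer sketch, not the plugin itself, with hypothetical project and subscription IDs:

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// Hypothetical project ID; the subscription would be attached to
	// the topic that a Cloud Audit Logs sink exports to.
	client, err := pubsub.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sub := client.Subscription("falco-gcp-audit") // hypothetical subscription ID
	// Receive blocks, invoking the callback for each message; a real
	// plugin would hand m.Data to the Falco event stream.
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		fmt.Println(string(m.Data)) // one audit log entry per message
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}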

change plugins git tag convention to be go-compatible

Motivation

Currently, every plugin release is marked by a git tag with the naming convention <plugin_name>-x.y.z. This is also used by our CI to build and release plugin artifacts.

However, most plugins are developed in Go, and their release tags cannot be used to import the plugins' code into other Go projects because they don't follow the Go version convention for modules in subdirectories (see: https://go.dev/ref/mod#vcs-version).

Feature

Change the naming convention from <plugin_name>-x.y.z to plugins/<plugin_name>/vx.y.z, which is compatible with the Go module system and improves the reusability of each plugin's code. We can easily adapt our CI to use this format when building release artifacts. Two birds with one stone.
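
As an illustration (the version number is made up), a tag such as plugins/k8saudit/v0.5.0 would let Go projects depend on the plugin's code directly:

go get github.com/falcosecurity/plugins/plugins/k8saudit@v0.5.0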


commit and release note naming conventions

What to document

We have no definite naming conventions for either commits or release notes (I don't think we have release notes at all).

In Falco, we have a naming convention for release notes touching rulesets (see: https://github.com/falcosecurity/.github/blob/main/CONTRIBUTING.md#rule-type), but although this repository is the official home of all our plugin rulesets, we don't have anything like that here.

At the same time, our changelog generation tool looks for commit messages following the Conventional Commits specification. For a given plugin, the changelog will filter commit messages in the form of xxx(plugins): ... or xxx(plugins/<plugin_name>): .... However, there is nothing enforcing this at the CI level, not even with a warning, so valuable commits can end up missing from the changelogs.

Proposals/Ideas

  • Add a CI job checking that:
    • If a file changes in a plugins/<plugin_name> path, then the commit must follow the xxx(plugins/<plugin_name>): ... message pattern
    • If a file changes in more than one plugins/<plugin_name> path, then the commit must follow the xxx(plugins): ... message pattern
  • Check or enforce that the rule(xxx): ... commit message format is respected, even in a looser form than what we have in Falco (see the example messages below)
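
For illustration, commit messages that would satisfy such checks could look like the following (scopes and descriptions are made up):

fix(plugins/k8saudit-eks): handle log entries split by CloudWatch
docs(plugins): clarify plugin registration instructions
rule(Some Rule Name): tune condition for EKS service users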
