kyverno / policy-reporter

Monitoring and Observability Tool for the PolicyReport CRD with an optional UI.

Home Page: https://kyverno.github.io/policy-reporter/

License: MIT License

Dockerfile 0.13% Makefile 0.26% Smarty 1.92% Go 87.86% HTML 9.83%
grafana kubernetes kyverno metrics observability prometheus-metrics

policy-reporter's Introduction


Cloud Native Policy Management 🎉



Kyverno is a policy engine designed for Kubernetes platform engineering teams. It enables security, automation, compliance, and governance using policy-as-code. Kyverno can validate, mutate, generate, and clean up configurations using Kubernetes admission controls, background scans, and source code repository scans. Kyverno policies can be managed as Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use, like kubectl, kustomize, and Git.
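
For illustration, a minimal validate policy of the kind Kyverno manages - a sketch based on the common require-labels example from the Kyverno docs:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit      # report violations instead of blocking requests
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"              # any non-empty value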


📙 Documentation

Kyverno installation and reference documents are available at kyverno.io.

👉 Quick Start

👉 Installation

👉 Sample Policies

🙋‍♂️ Getting Help

We are here to help!

👉 For feature requests and bugs, file an issue.

👉 For discussions or questions, join the Kyverno Slack channel.

👉 For community meeting access, join the mailing list.

👉 To get updates, ⭐️ star this repository.

➕ Contributing

Thanks for your interest in contributing to Kyverno! Here are some steps to help get you started:

✔ Read and agree to the Contribution Guidelines.

✔ Browse through the GitHub discussions.

✔ Read Kyverno design and development details on the GitHub Wiki.

✔ Check out the good first issues list. Add a comment with /assign to request assignment of the issue.

✔ Check out the Kyverno Community page for other ways to get involved.

Software Bill of Materials

All Kyverno images include a Software Bill of Materials (SBOM) in CycloneDX JSON format. SBOMs for Kyverno images are stored in a separate repository at ghcr.io/kyverno/sbom. More information on this is available at Fetching the SBOM for Kyverno.

Contributors

Kyverno is built and maintained by our growing community of contributors!


License

Copyright 2024, the Kyverno project. All rights reserved. Kyverno is licensed under the Apache License 2.0.

Kyverno is a Cloud Native Computing Foundation (CNCF) Incubating project and was contributed by Nirmata.

policy-reporter's People

Contributors

andersbennedsgaard, blakepettersson, boniek83, djerfy, eddycharly, fengshunli, fjogeleit, frezbo, guipal, kolikons, m-yosefpor, mikebryant, mjnagel, monotek, nikolay-o, nlamirault, nobletrout, oliverbaehler, realshuting, rgarcia89, rromic, rsicart, rufusnufus, skuethe, stone-z, sudoleg, thomaslachaux, vponoikoait, windowsrefund, yanehi

policy-reporter's Issues

Policy reporter Grafana dashboards stopped working

Hi!

So I have the latest policy-reporter Helm chart (2.10.0) and I have monitoring enabled.

monitoring:
  enabled: true
  serviceMonitor:
    labels:
      release: kube-prometheus-stack
  plugins:
    kyverno: true
  grafana:
    # required: namespace of your Grafana installation
    namespace: monitoring-system
    dashboards:
      # Enable the deployment of grafana dashboards
      enabled: true
      # Label to find dashboards using the k8s sidecar
      label: grafana_dashboard
    folder:
      # Annotation to enable folder storage using the k8s sidecar
      annotation: grafana_folder
      # Grafana folder in which to store the dashboards
      name: Big Brother

What is interesting is that these 3 dashboards worked on Monday, and I made no changes to policy-reporter itself.
But what I did do is a couple of kube-prometheus-stack Helm chart upgrades. I didn't see anything dangerous there, but my suspicion is that something policy-reporter depends on has changed.

On Monday I upgraded from 36.2.1 -> 36.6.1 and 36.6.1 -> 36.6.2 (nothing special in the values file).
On Wednesday 36.6.2 -> 37.0.0 (here they changed metricRelabelings and cAdvisorMetricRelabelings).
On Thursday 37.0.0 -> 37.2.0 (nothing special in the values file).

Not sure exactly when the dashboards stopped working, but they worked on Monday, and yesterday after the upgrade I got this (screenshots pr1 and pr2 attached).

Or maybe I am on the wrong track here?

Thanks!

Add drop-down filtering to top of `PolicyReports` dashboards

On both of the "Details" dashboards included in the Helm Chart, there are drop-down filters at the top for Policy, Category, Severity, Namespace (PolicyReport Details only) and Kind. I'd like to see these same filters on the PolicyReports dashboard.

ARM64 Docker image feature

It would be really good to have an arm64-compatible Docker image, to deploy the application on a Kubernetes Raspberry Pi cluster.

Installation Docs for Non-Helm Users

I happen to be one of the folks who don't use Helm, Kustomize, etc., so trying to get this stood up has been mostly reverse-engineering the Helm chart. There are some places where it isn't quite clear to me how to adapt things, so even a globbed YAML manifest of the associated resources to install via kubectl [apply|create] -f <some path to manifest>.yaml would be very welcome.

Kyverno's own Quick Start page has this, for example:
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml

Question: Can this be used without cluster-level access?

If one is restricted to a namespace in their company by a SaaS platform / Kubernetes team, could this be applied at the scope of a namespace? There is no ability to use any type of CRD or cluster-level role access.

But if so, I could focus on applying policies for network policies, labels and so on...

Prometheus metric labels not matching what Kyverno emits

Hi,

Prometheus metrics do not contain a validation mode label (audit or enforce), so we are not able to filter by this setting.

Potential solutions:

  1. Add a validation mode label to the Policy Reporter policy_report_result metric.
  2. Enable relabelings in the ServiceMonitor so we can conform Policy Reporter label names to Kyverno naming, and then use Prometheus metric joining based on the shared label name to append the validation mode data (see the sketch below).

Using only Kyverno metrics or only Policy Reporter metrics is not a silver bullet; they are different. Kyverno shows the validation mode when Policy Reporter does not, Policy Reporter shows the name of the affected resource when Kyverno does not, etc.

What do you think?
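
A hedged sketch of option 2, assuming a directly managed ServiceMonitor; the metric and label names are illustrative, not the verified schema of either project:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: policy-reporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: policy-reporter
  endpoints:
    - port: http
      metricRelabelings:
        # Rename Policy Reporter's `policy` label to Kyverno's `policy_name`
        # so the two metric families can be joined on a shared label.
        - sourceLabels: [policy]
          targetLabel: policy_name
          action: replace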

Failed to update Policy Report

Hi,
I'm trying to configure the trivy scan
I faced an issue

2022/08/19 06:26:25 [ERROR] Failed to update Policy Report pod-vault-0-vault (UNIQUE constraint failed: policy_report_result.id)

policy-report vault trivy

apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2022-08-18T16:10:25Z"
  generation: 1
  labels:
    pod-spec-hash: 764f764bb7
    trivy-adapter.container.name: vault
    trivy-adapter.resource.kind: Pod
    trivy-adapter.resource.name: vault-0
    trivy-adapter.resource.namespace: infra
  name: pod-vault-0-vault
  namespace: infra
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: true
    kind: Pod
    name: vault-0
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  resourceVersion: "425016218"
  uid: 3ea09e50-8b3e-43df-b875-2499eaea033c
results:
- category: libcrypto1.1
  message: 'openssl: AES OCB fails to encrypt some bytes'
  policy: CVE-2022-2097
  properties:
    FixedVersion: 1.1.1q-r0
    InstalledVersion: 1.1.1n-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-2097
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: error
  scored: true
  severity: high
  source: Trivy
  timestamp:
    nanos: -110037824
    seconds: 1660839024
- category: libssl1.1
  message: 'openssl: AES OCB fails to encrypt some bytes'
  policy: CVE-2022-2097
  properties:
    FixedVersion: 1.1.1q-r0
    InstalledVersion: 1.1.1n-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-2097
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: error
  scored: true
  severity: high
  source: Trivy
  timestamp:
    nanos: -110016824
    seconds: 1660839024
- category: zlib
  message: 'zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c
    via a large gzip header extra field'
  policy: CVE-2022-37434
  properties:
    FixedVersion: 1.2.12-r2
    InstalledVersion: 1.2.12-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-37434
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: skip
  scored: true
  source: Trivy
  timestamp:
    nanos: -110011824
    seconds: 1660839024
summary:
  error: 2
  fail: 0
  pass: 0
  skip: 1
  warn: 0

Some other scans and policy reports work. Example:

apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2022-08-19T03:09:52Z"
  generation: 1
  labels:
    pod-spec-hash: 7db8786b54
    trivy-adapter.container.name: victoria-metrics-agent
    trivy-adapter.resource.kind: Pod
    trivy-adapter.resource.name: vm-agent-victoria-metrics-agent-0
    trivy-adapter.resource.namespace: monitoring
  name: pod-vm-agent-victoria-metrics-agent-0-victoria-metrics-agent
  namespace: monitoring
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: true
    kind: Pod
    name: vm-agent-victoria-metrics-agent-0
    uid: dd2448f0-ef36-40e1-8ee0-4da4dbaad378
  resourceVersion: "426019219"
  uid: f5e531a1-3d61-48fd-8515-f1fcc5d87dcc
results:
- category: zlib
  message: 'zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c
    via a large gzip header extra field'
  policy: CVE-2022-37434
  properties:
    FixedVersion: 1.2.12-r2
    InstalledVersion: 1.2.12-r1
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-37434
  resources:
  - apiVersion: v1
    kind: Pod
    name: vm-agent-victoria-metrics-agent-0
    namespace: monitoring
    uid: dd2448f0-ef36-40e1-8ee0-4da4dbaad378
  result: skip
  scored: true
  source: Trivy
  timestamp:
    nanos: -1659053872
    seconds: 1660878592
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 1
  warn: 0

Thanks

Helm upgrade failed: unable to decode "": json

Hello,

I have been running the policy-reporter Helm chart 2.9.1 for a couple of days now.
Everything is working fine. I also use it together with Flux.
Yesterday the policy-reporter 2.9.2 Helm chart was released and I went for the upgrade.
I checked the commits and the values file, and the upgrade seemed quite straightforward: just increment the version to 2.9.2.

But the upgrade failed and I keep getting:
Helm upgrade failed: unable to decode "": json: cannot unmarshal number into Go struct field ObjectMeta.metadata.labels of type string
I couldn't get any more details out.

Since these last commits were related to the Grafana monitoring part, I also added

monitoring:
...
  grafana:
    # required: namespace of your Grafana installation
    namespace: monitoring-system
    dashboards:
      # Enable the deployment of grafana dashboards
      enabled: true
      # Label to find dashboards using the k8s sidecar
      label: grafana_dashboard
      value: "1"

value: "1" to the values.yaml file but that also didn't work.
Regarding other options in values.yaml I don't have anything fancy. I have metrics, monitoring, kyverno plugin and Slack webhook. All other stuff is set to default.

Yandex Object Storage Target

Motivation

Hi - I'm from the Yandex.Cloud solution architect team. I believe that Kyverno is the best way to manage Kubernetes policies, but Yandex.Cloud users demand export to Yandex.Cloud storage.

Feature

Yandex.Cloud has multiple services that could be targets, but the most demanded output right now is Yandex.Storage, which has an S3 API.

Additional context

I could implement this feature on my own - recently I added Yandex.Storage support to falcosidekick: falcosecurity/falcosidekick#261

Additional clusters behind a proxy don't work

While trying to use the amazing feature from #167, I noticed that external clusters behind a reverse proxy can't be accessed. E.g., my external policy-reporter is accessed through an nginx ingress.

The reason is

NewSingleHostReverseProxy does not rewrite the Host header. To rewrite Host headers, use ReverseProxy directly with a custom Director policy.

Requests to the ingress controller are submitted with the wrong Host header, so the root policy-reporter can't gather the data. Only non-proxied setups work for now.
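
Until the proxy rewrites the Host header, one possible workaround is to expose the external Policy Reporter through an ingress rule without a host field, since such a rule matches any Host header (names and ports below are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: policy-reporter
spec:
  ingressClassName: nginx
  rules:
    # Omitting `host:` makes this rule match all inbound HTTP traffic,
    # so the unrewritten Host header sent by the proxy still routes here.
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: policy-reporter
                port:
                  number: 8080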

Summary and violation reports don't generate logs

I've tested both the summary and violation report features of the Helm chart, but neither produces logs.
In my case, I only got the summary email but no violation email, although both jobs are marked as received.
It would be nice if the jobs could output what they are working on.

Grafana Dashboard PolicyReports panel columns need updating

I'm using the Grafana dashboard named PolicyReports provided in the Helm chart. To be consistent with other dashboards, the panel columns should be updated as follows:

Failing PolicyRules

  • Add category and severity columns in front of namespace

Failing ClusterPolicyRules

  • Filter out Pass and Skip statuses, since this is supposed to be for failing policies only
  • Reorder columns to be in line with the other dashboards (category, severity, kind, name, policy, rule, status)
  • Remove the container column

/metrics returns 404 even though it is enabled in the Helm values

I was trying to get Prometheus metrics (as described in the docs) from /metrics, but somehow it returns 404.

$ kubectl port-forward service/policy-reporter-ui 8082:8080 -n kyverno
$ helm get values policy-reporter

USER-SUPPLIED VALUES:
kyvernoPlugin:
  enabled: true
metrics:
  enabled: true
ui:
  enabled: true
  plugins:
    kyverno: true
$ curl http://localhost:8082/metrics

Not Found

How I installed it:

$ helm install policy-reporter policy-reporter/policy-reporter --set kyvernoPlugin.enabled=true --set ui.enabled=true --set ui.plugins.kyverno=true --set metrics.enabled=true -n kyverno --create-namespace

Anything I missed here? 🤔

Issue Helm installation with own values.yaml

Hi,

I'm trying to deploy the policy-reporter with Helm and I'm running into an issue when I try to apply our own values.yaml file. Not sure if it's a bug or I'm missing something here.

Versions

policy-reporter: v2.2.0
kubernetes: 1.24.1
helm: v3.8.0

Reproduction path:

  1. Install policy-reporter with the default command:
helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2

Release "policy-reporter" does not exist. Installing it now.
NAME: policy-reporter
LAST DEPLOYED: Fri Jan 28 19:14:22 2022
NAMESPACE: policy-reporter
STATUS: deployed
REVISION: 1
TEST SUITE: None
  2. Export the values used by the deployment and store them in a file:
    helm show values policy-reporter/policy-reporter > default_values.yaml

  3. Run helm upgrade using the newly created file:
    helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2 -f default_values.yaml

Expected behavior:
Successful upgrade showing REVISION: 2

Actual behavior:

Error: UPGRADE FAILED: template: policy-reporter/templates/deployment.yaml:31:28: executing "policy-reporter/templates/deployment.yaml" at <include (print .Template.BasePath "/config-secret.yaml") .>: error calling include: template: policy-reporter/templates/config-secret.yaml:10:18: executing "policy-reporter/templates/config-secret.yaml" at <tpl (.Files.Get "config.yaml") .>: error calling tpl: error during tpl function execution for "loki:\n  host: {{ .Values.target.loki.host | quote }}\n  minimumPriority: {{ .Values.target.loki.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.loki.skipExistingOnStartup }}\n  {{- with .Values.target.loki.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\nelasticsearch:\n  host: {{ .Values.target.elasticsearch.host | quote }}\n  index: {{ .Values.target.elasticsearch.index | default \"policy-reporter\" | quote }}\n  rotation: {{ .Values.target.elasticsearch.rotation | default \"dayli\" | quote }}\n  minimumPriority: {{ .Values.target.elasticsearch.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.elasticsearch.skipExistingOnStartup }}\n  {{- with .Values.target.elasticsearch.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\nslack:\n  webhook: {{ .Values.target.slack.webhook | quote }}\n  minimumPriority: {{ .Values.target.slack.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.slack.skipExistingOnStartup }}\n  {{- with .Values.target.slack.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\ndiscord:\n  webhook: {{ .Values.target.discord.webhook | quote }}\n  minimumPriority: {{ .Values.target.discord.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.discord.skipExistingOnStartup }}\n  {{- with .Values.target.discord.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\nteams:\n  webhook: {{ .Values.target.teams.webhook | quote }}\n  minimumPriority: {{ .Values.target.teams.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.teams.skipExistingOnStartup }}\n  {{- with .Values.target.teams.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\nui:\n  host: {{ include \"policyreporter.uihost\" . }}\n  minimumPriority: {{ .Values.target.ui.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.ui.skipExistingOnStartup }}\n  {{- with .Values.target.ui.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\ns3:\n  accessKeyID: {{ .Values.target.s3.accessKeyID }}\n  secretAccessKey:  {{ .Values.target.s3.secretAccessKey }}\n  region: {{ .Values.target.s3.region }}\n  endpoint: {{ .Values.target.s3.endpoint }}\n  bucket: {{ .Values.target.s3.bucket }}\n  prefix: {{ .Values.target.s3.prefix }}\n  minimumPriority: {{ .Values.target.s3.minimumPriority | quote }}\n  skipExistingOnStartup: {{ .Values.target.s3.skipExistingOnStartup }}\n  {{- with .Values.target.s3.sources }}\n  sources:\n    {{- toYaml . | nindent 4 }}\n  {{- end }}\n\n{{- with .Values.policyPriorities }}\npriorityMap:\n  {{- toYaml . | nindent 2 }}\n{{- end }}": template: policy-reporter/templates/deployment.yaml:49:11: executing "policy-reporter/templates/deployment.yaml" at <include "policyreporter.uihost" .>: error calling include: template: policy-reporter/templates/_helpers.tpl:68:47: executing "policyreporter.uihost" at <.Values.ui.views.logs>: nil pointer evaluating interface {}.logs

Digging through the error message, I managed to see something about the UI. So I tried to enable the UI, and if I do so, my deployment works.

helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2 -f default_values.yaml --set ui.enabled=true

Release "policy-reporter" has been upgraded. Happy Helming!

There is no need for us to have the UI enabled for the policy reporter, so I'd rather have it disabled.

Thanks!

`capabilities.drop["all"]` is case sensitive and triggers existing kyverno policy

Hey there,

first off, thank you for this helpful tool. It makes adopting Kyverno even easier.

I was "quick starting" kyverno and stumbled upon this issue, that the existing (strict) kyverno policy disallow-capabilities-strict will trigger on the policy-reporter deployment.

The problem:
The policy tests for the upper-case "ALL" as the required drop capability:
https://github.com/kyverno/policies/blob/b3d81ea30e8751a503abe9dd888cc6cc3d4ebd72/pod-security/restricted/disallow-capabilities-strict/disallow-capabilities-strict.yaml#L39-L41

All deployment templates in this repo use a lower-case "all" and therefore trigger the policy.

I first tried adding a to_upper function to that validate condition, but that just got messy, because we are already in a foreach loop over the containers.

Although I was unable to find a definition from the Kubernetes side on whether a lower-case "all" is allowed, all references I could find use the upper-case spelling.
So I am going the easy way first and will add a PR here that fixes it for policy-reporter.
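
For reference, the fix amounts to using the canonical upper-case spelling in the chart's container securityContext; a minimal sketch of a compliant security context:

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL   # upper-case spelling, as expected by disallow-capabilities-strict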

Policy-Reporter-UI behind a Rancher reverse proxy: Error 403

Is there a way to reach the UI internally via Rancher without an ingress? I can access the web interface, but I don't get any data displayed.
The problem is the following:
Example URL in Rancher:
https://rancher.com/k8s/clusters/xyz/api/v1/namespaces/policy-reporter/services/http:policy-reporter-ui:8080/proxy/

Request made by the UI:
https://rancher.com/api/
instead of
https://rancher.com/k8s/clusters/xyz/api/v1/namespaces/policy-reporter/services/http:policy-reporter-ui:8080/proxy/api

Or is there a solution for user login? Then the policy-reporter could also be made accessible from outside.

Error page is not shown on 404

When going to a wrong page in the UI, e.g. /does-not-exist, we're getting a UI glitch instead of an error page (screenshot attached).
I'm using the latest release, 2.9.0.

Allow custom webhook endpoint as a target

It would be great if it were possible to define custom webhook endpoints as targets. The predefined targets are great, but a custom HTTP endpoint would allow simple implementations to respond differently.
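
A hypothetical values sketch of what such a target could look like, mirroring the structure of the existing targets; the webhook block and its fields are illustrative, not an existing option:

target:
  webhook:
    host: "https://example.com/policy-hook"   # any HTTP endpoint that accepts results as JSON
    minimumPriority: "warning"
    skipExistingOnStartup: true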

Restart needed to remove old records for Pod (and maybe other) resources

Hi,

great product in combination with Kyverno!

Just noticed an issue with Policy Reporter showing old Pod names as failing after changes that make those resources pass validation.

Restarting Policy Reporter is a workaround (for now).

Not sure, but it seems some occasional garbage collection/removal of old records for resources that no longer exist in the cluster might be a solution?

Thank you,
Alen

Central policy reporter dashboard for multi-cluster

I deployed a set of Kyverno with policies, policy reporter and the policy reporter UI on cluster A, and I am able to see the policy reports in the UI (screenshot attached).

I then configured one more setup of Kyverno with policies and policy reporter on cluster B, but this time without the policy reporter UI. In the policy reporter Helm chart values.yaml, for the UI URL field, I gave the FQDN of cluster A's policy UI URL.

After installing the setup, I see the reports are pushed: I am able to see report errors in the policy reporter log, but unable to see them in the dashboard when filtering by cluster or namespace, etc. How can we do this multi-cluster UI setup?

SVG logo for policy-reporter

Hello!
I am from Yandex Cloud. Recently we contributed Yandex Cloud S3 support to Policy Reporter.
We want to add policy-reporter to our Yandex Cloud Kubernetes Marketplace, and we need the policy-reporter logo in SVG format.
I tried to convert your image from the docs to SVG, but unfortunately I couldn't achieve good quality and size. Could you please send me your logo in SVG format in good quality and size?
Thank you very much!

Kyverno's default `restrict-automount-sa-token` policy denies the installation of policy-reporter

Shouldn't we set automountServiceAccountToken: "false" in the deployment manifest? Any idea why we set it to true instead?

$ helm install policy-reporter policy-reporter/policy-reporter --set kyvernoPlugin.enabled=true --set ui.enabled=true --set ui.plugins.kyverno=true  -n policy-reporter --create-namespace

Error: INSTALLATION FAILED: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/policy-reporter/policy-reporter-kyverno-plugin was blocked due to the following policies

restrict-automount-sa-token:
  autogen-validate-automountServiceAccountToken: 'validation error: Auto-mounting
    of Service Account tokens is not allowed. Rule autogen-validate-automountServiceAccountToken
    failed at path /spec/template/spec/automountServiceAccountToken/'

Rule:

spec:
  background: true
  rules:
  - match:
      any:
      - resources:
          kinds:
          - Pod
    name: validate-automountServiceAccountToken
    validate:
      message: Auto-mounting of Service Account tokens is not allowed.
      pattern:
        spec:
          automountServiceAccountToken: "false"
  validationFailureAction: enforce

cc @developer-guy

No PolicyReport CRDs found

I am running 1.8.9 and see the following log entries when starting my policy-reporter pod. Is the ERROR legit?

2021/09/07 16:00:39 [INFO] UI configured
2021/09/07 16:00:52 [ERROR] No PolicyReport CRDs found
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports

The following CRDs exist on the system since this cluster is running Kyverno 1.4.2

clusterpolicies.kyverno.io                    2021-09-02T15:13:05Z
clusterreportchangerequests.kyverno.io        2021-09-02T15:13:05Z
generaterequests.kyverno.io                   2021-09-02T15:13:05Z
policies.kyverno.io                           2021-09-02T15:13:05Z
reportchangerequests.kyverno.io               2021-09-02T15:13:05Z

Multiple slack endpoints

It would be great if the policy-reporter configuration allowed multiple Slack targets.
Currently it's only possible to set up one Slack target.

Desired behaviour

  • Define multiple Slack targets
  • Allow each Slack target to have a different channel
  • Allow each Slack target to have different priorities set (see the sketch below)
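
A hypothetical values sketch of the requested behaviour; the channels list is illustrative, not an existing option:

target:
  slack:
    webhook: "https://hooks.slack.com/services/..."     # default target
    minimumPriority: "warning"
    channels:                                           # hypothetical: one entry per additional target
      - webhook: "https://hooks.slack.com/services/..."
        minimumPriority: "critical"                     # each channel with its own priority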

[Bug] Kyverno Image policy and validation of policy on resource crashes policyreporter

Environment

  1. Kubectl version: Client: 1.23.3, Server: 1.23.1
  2. Minikube: 1.25.1
  3. Kyverno: 1.6.0
  4. PolicyReporter image: ghcr.io/kyverno/policy-reporter:2.0.0

Bug Description

When creating an image policy and then creating a resource which triggers that policy (e.g. an unsigned image on a pod), it appears to crash Policy Reporter.

Steps to reproduce.

  1. Install the Helm repository:
helm repo add policy-reporter https://kyverno.github.io/policy-reporter
helm repo update
  2. Only install the core application:
helm upgrade --install policy-reporter policy-reporter/policy-reporter --create-namespace -n policy-reporter --set metrics.enabled=true --set api.enabled=true
  3. Install Kyverno.
  4. Install the attached test-image-policy.txt, converted to YAML: kubectl apply -f test-image-policy.yaml
  5. Create a pod with kubectl run unsigned --image=ghcr.io/kyverno/test-verify-image:unsigned.
  6. Reproduce with kubectl run signed --image=ghcr.io/kyverno/test-verify-image:signed.

Notice how policy-reporter produces an error in the logs and is constantly restarting.
The expected result is that policy-reporter does not crash.

Error

The errors found

2022/02/15 18:38:44 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:46 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:49 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:49 [INFO] Resource registered: wgpolicyk8s.io/v1alpha2, Resource=clusterpolicyreports
2022/02/15 18:38:49 [INFO] Resource registered: wgpolicyk8s.io/v1alpha2, Resource=policyreports
E0215 18:38:49.225534       1 runtime.go:78] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x1741cc0), concrete:(*runtime._type)(nil), asserted:(*runtime._type)(0x16fbc20), missingMethod:""} (interface conversion: interface {} is nil, not string)
goroutine 52 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x177cde0, 0xc000211d10})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40ed74})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x177cde0, 0xc000211d10})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).mapResult(0xc0000109f8, 0xc0003cdf80)
	/app/pkg/kubernetes/mapper.go:90 +0x734
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).MapPolicyReport(0xc000121cc0, 0xc0005cc230)
	/app/pkg/kubernetes/mapper.go:55 +0x485
github.com/kyverno/policy-reporter/pkg/kubernetes.(*k8sPolicyReportClient).watchCRD.func2({0x191f320, 0xc00037a7e8})
	/app/pkg/kubernetes/policy_report_client.go:108 +0x44
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fcf7c455e60)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000056f38, {0x1c78f20, 0xc0005d6000}, 0x1, 0xc0005d4000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0xc000056f88)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00010df00)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: interface conversion: interface {} is nil, not string [recovered]
	panic: interface conversion: interface {} is nil, not string

goroutine 52 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40ed74})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x177cde0, 0xc000211d10})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).mapResult(0xc0000109f8, 0xc0003cdf80)
	/app/pkg/kubernetes/mapper.go:90 +0x734
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).MapPolicyReport(0xc000121cc0, 0xc0005cc230)
	/app/pkg/kubernetes/mapper.go:55 +0x485
github.com/kyverno/policy-reporter/pkg/kubernetes.(*k8sPolicyReportClient).watchCRD.func2({0x191f320, 0xc00037a7e8})
	/app/pkg/kubernetes/policy_report_client.go:108 +0x44
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fcf7c455e60)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000056f38, {0x1c78f20, 0xc0005d6000}, 0x1, 0xc0005d4000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0xc000056f88)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00010df00)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88

Additional labels for Loki target

In a multi-cluster environment, we have one Policy Reporter (PR) instance running in each cluster. Each PR server sends its events to a central Loki server. It would be useful to specify additional Loki labels in the PR config so that events from different PR servers can be queried and alerted on separately.
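
A hypothetical sketch of such a config option; customLabels is illustrative, not necessarily an existing field:

target:
  loki:
    host: "http://loki.monitoring:3100"
    minimumPriority: "warning"
    customLabels:             # hypothetical: static labels attached to every pushed event
      cluster: prod-eu-1      # lets the central Loki tell the sending clusters apart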

What is the purpose of this namespace value in the monitoring sub chart?

Just wondering what I can configure with this option, and what the intention was of configuring a namespace in 2 places:

and then the only place it is referenced is here:

{{- define "monitoring.namespace" -}}
{{- if .Values.grafana.namespace -}}
{{- .Values.grafana.namespace -}}
{{- else if .Values.namespace -}}
{{- .Values.namespace -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end }}

So is this redundant to the grafana.namespace setting? At the moment I don't understand its purpose, so maybe I am using it wrong.

Helm chart sensitive data

Also, I don't like the secret file: you don't encode it to base64, and some CI can block when checking the manifest.
I'd recommend replacing https://github.com/fjogeleit/policy-reporter/blob/main/charts/policy-reporter/templates/targetssecret.yaml#L8 to use | b64enc, but it's a breaking change in the Helm chart (a sketch follows below).
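
A sketch of the suggested change in targetssecret.yaml, switching stringData to data so the rendered values are base64-encoded; the template body is abbreviated and illustrative:

data:
  config.yaml: {{ tpl (.Files.Get "config.yaml") . | b64enc }}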

And one more thing: what about moving the vars loki:, elasticsearch: inside an additional object, config.loki, config.elasticsearch?
https://github.com/fjogeleit/policy-reporter/blob/b71128448dcbfde8bd2937d4d60661103d9c52c3/charts/policy-reporter/values.yaml#L30

helm template ./
---
# Source: policy-reporter/templates/targetssecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: policy-reporter-targets
  labels:
    helm.sh/chart: policy-reporter-0.16.2
    app.kubernetes.io/name: policy-reporter
    app.kubernetes.io/instance: policy-reporter
    app.kubernetes.io/version: "0.12.0"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  config.yaml: |-
    loki:
      host: ""
      minimumPriority: ""
      skipExistingOnStartup: true

    elasticsearch:
      host: ""
      index: "policy-reporter"
      rotation: "dayli"
      minimumPriority: ""
      skipExistingOnStartup: true

    slack:
      webhook: ""
      minimumPriority: ""
      skipExistingOnStartup: true

    discord:
      webhook: ""
      minimumPriority: ""
      skipExistingOnStartup: true

Allow showing only (Kyverno) policy reports from specific namespaces

What

It would be great to have a config option in the Helm chart to filter which namespaces' policy reports are shown.

Why

We have a multi-tenant cluster with a single Kyverno instance. The policies are all the same for everyone. Now it would be very cool if I could give different teams on the cluster access to different deployments of policy-reporter, where they can only see their own reports and not everything from all the other teams, too.

How

Ideally this config option would allow configuring:

  • a list of namespaces
  • wildcards in the namespaces, such that teamA-* means all policy reports from teamA-namespace1, teamA-namespace2, ... are shown
  • all namespaces not matching the listed patterns are ignored (see the sketch below)
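
A hypothetical values sketch of the requested filter; the reportFilter block and field names are illustrative:

reportFilter:
  namespaces:
    include: ["teamA-*"]   # wildcard: reports from teamA-namespace1, teamA-namespace2, ... are shown
    # namespaces not matching any include pattern are ignored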

http proxy error causing policy-reporter-ui slowness

Our Policy Reporter UI regularly responds very slowly, and sometimes an error message appears: Unable to retrieve all Data from the Server

In the logs:

2022/06/02 16:20:38 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled

When we update a Kyverno policy, the information takes a long time to appear in the UI, even though the report CRD has already been updated.

Any idea about a configuration we can tune to improve this, please?

Additional notes:

  • Version: chart v2.8.0
  • We use Policy Reporter for Kyverno reports only

Support other Database Backend to avoid using persistentVolumes

We would like to use policy-reporter as an overview of our policy violations, but the clusters policy-reporter should run in cannot provide persistent volumes.

Is there a possibility to implement support for another database backend, such as PostgreSQL?
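
A hypothetical values sketch of what an external database backend could look like; the database block and field names are illustrative:

database:
  type: postgres                         # instead of the embedded SQLite file
  host: postgres.db.svc.cluster.local:5432
  database: policy_reporter
  username: reporter
  password: ""                           # ideally sourced from an existing Secret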

policy-reporter Unable to Locate policyreport and clusterpolicyreport CRDs

Inspecting logs for the policy-reporter pod, I'm seeing log entries like so:

2021/05/24 20:54:14 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha2, Resource=clusterpolicyreports                                                               
2021/05/24 20:54:14 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports
2021/05/24 21:50:29 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha1, Resource=policyreports
2021/05/24 21:52:36 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports 

Comparing to the CRDs in my k8s cluster, the policyreports and clusterpolicyreports CRDs are defined like so (truncated for brevity - these are Kyverno's CRDs and very lengthy):

### policyreports CRD ###
Name:         policyreports.wgpolicyk8s.io 
Namespace:
Labels: <none>
Annotations:  controller-gen.kubebuilder.io/version: v0.4.0 
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
# ...and so on
### clusterpolicyreports CRD ###
Name:         clusterpolicyreports.wgpolicyk8s.io
Namespace:
Labels: <none>
Annotations:  controller-gen.kubebuilder.io/version: v0.4.0 
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
# ...and so on

I'm running Kyverno v1.3.6 for reference, which I installed from their globbed manifest with minimal changes - I only modified the namespace label in each resource. The only reference to wgpolicyk8s.io/v1alpha* in their manifest is for the ClusterRole. So the CRDs exist, but aren't under the API policy-reporter seems to expect as defined in pkg/kubernetes/report_adapter.go.

Please let me know if there's any additional info I can provide!

sharedIndexInformer warning and UI oddity

Seeing 2 things after upgrading to 2.6.1 along with Kyverno 1.7.0.

2022-06-21T15:57:50.076802254-04:00 W0621 19:57:50.076713       1 shared_informer.go:401] The sharedIndexInformer has started, run more than once is not allowed

In the UI, I only see data related to a single namespace.

Happy to provide additional information as needed.

Multi-tenancy UI/reporter

Awesome project! Just wondering about options for how multiple teams in a cluster can have different access levels to the UI and get separate notification back-ends. Is there any roadmap in this direction?

Vulnerabilities found in Golang 1.17.2

New vulnerabilities were found in Golang 1.17.2; we need to bump the Golang version to 1.17.6 for all policy-reporter images:

  • ImportedSymbols in debug/macho (for Open or OpenFat) in Go before 1.16.10 and 1.17.x before 1.17.3 Accesses a Memory Location After the End of a Buffer, aka an out-of-bounds slice situation, link
  • Go before 1.16.10 and 1.17.x before 1.17.3 allows an archive/zip Reader.Open panic via a crafted ZIP archive containing an invalid name or an empty filename field, link
