
resmoio / kubernetes-event-exporter


This project is a fork of opsgenie/kubernetes-event-exporter.


Export Kubernetes events to multiple destinations with routing and filtering

License: Apache License 2.0

Go 99.14% Dockerfile 0.35% Makefile 0.51%

kubernetes-event-exporter's People

Contributors

akifkhan01, brianterry, dependabot[bot], evedel, ilteristabak, itsmefishy, jun06t, jylitalo, lesh366, lobshunter, mdemierre, mheck136, minchao, mrueg, msasaki666, muffl0n, mustafaakin, mustafaakin-atl, navidshariaty, njegosrailic, nlamirault, omauger, redlinkk, ronaknnathani, ryanbrainard, sergelogvinov, sshishov, superbrothers, xmcqueen, youyongsong


kubernetes-event-exporter's Issues

Pod is in CrashLoopBackOff state because of a ConfigMap error

I am trying to deploy the manifests from this folder into an infra namespace and I am getting the error below:

https://github.com/resmoio/kubernetes-event-exporter/tree/master/deploy

{"level":"fatal","error":"yaml: invalid map key: map[interface {}]interface {}{\".InvolvedObject.Kind\":interface {}(nil)}","time":"2023-01-09T11:30:02Z","message":"cannot parse config to YAML"}

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: infra
data:
  config.yaml: |
    logLevel: debug
    logFormat: json
    maxEventAgeSeconds: 10
    kubeQPS: 60
    kubeBurst: 60
    # namespace: my-namespace-only # Omitting it defaults to all namespaces.
    route:
      # Main route
      routes:
        # This route allows dumping all events because it has no fields to match and no drop rules.
        - match:
            - receiver: "kafka"
        # # This starts another route, drops all the events in *test* namespaces and Normal events
        # # for capturing critical events
        # - drop:
        #     - namespace: "*test*"
        #     - type: "Normal"
        #       minCount: 5
        #       apiVersion: "*beta*"
        #   match:
        #     - receiver: "kafka"
        # # This a final route for user messages
        # - match:
        #     - kind: "Pod|Deployment|ReplicaSet"
        #       receiver: "kafka"
    receivers:
      - name: "kafka"
        kafka:
          clientId: "kubernetes"
          topic: "logs"
          brokers:
            - "kafka:9092"
          compressionCodec: "snappy"
          layout: # optional
            kind: {{ .InvolvedObject.Kind }}
            namespace: {{ .InvolvedObject.Namespace }}
            name: {{ .InvolvedObject.Name }}
            reason: {{ .Reason }}
            message: {{ .Message }}
            type: {{ .Type }}
            createdAt: {{ .GetTimestampISO8601 }}

Can someone help me resolve this error?
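
A likely cause (my reading of the error, not confirmed by the maintainers): the unquoted Go-template values in the layout are parsed by YAML as flow mappings, so {{ .InvolvedObject.Kind }} becomes an invalid map key, which matches the "invalid map key" message. A minimal sketch of the layout with the template values quoted so the YAML parses, leaving the rest of the config unchanged:

    layout: # optional
      kind: "{{ .InvolvedObject.Kind }}"
      namespace: "{{ .InvolvedObject.Namespace }}"
      name: "{{ .InvolvedObject.Name }}"
      reason: "{{ .Reason }}"
      message: "{{ .Message }}"
      type: "{{ .Type }}"
      createdAt: "{{ .GetTimestampISO8601 }}"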

Empty JSONs in /dev/stdout

Hello, I have deployed the kube-event-exporter with mostly default configs:

config:
  logLevel: info
  logFormat: json

and I am seeing a lot of empty JSON logs in the output:

kubectl -n kubernetes-event-exporter logs -f kubernetes-event-exporter-6d7c58d7c4-bwqd8
{"level":"info","name":"dump","type":"*sinks.File","time":"2023-05-23T13:10:42Z","caller":"/bitnami/blacksmith-sandox/kubernetes-event-exporter-1.1.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/engine.go:25","message":"Registering sink"}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
...

Is there a way to prevent these empty JSONs from being written to /dev/stdout?

Thanks,
Thomas.
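
A guess rather than a confirmed answer: the log above shows a "dump" sink of type *sinks.File being registered, and a file receiver pointed at /dev/stdout with an empty layout would render every event as {}. A sketch of a config that keeps the stdout dump but writes the full event JSON instead (field names follow the examples elsewhere on this page; verify them against your chart version):

    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"
          # no layout here, so the full event is serialized as JSON
    route:
      routes:
        - match:
            - receiver: "dump"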

Release strategy/naming convention

@mustafaakin Can you clarify this project's release strategy? I see there was a v1.0 release followed by a v1.1 release last December. But then there was a release called "kubernetes-event-exporter-0.1.0" made available in March. Does that release represent the "next" release after v1.1? Or, does it represent a branch for some other purposes? Should we expect to see additional releases using one or both naming/numbering conventions moving forward?

I want to make sure I understand how things are being handled so I can be sure I'm using the "right" version of things. Thanks!

Vulnerabilities CVE-2021-43565 & CVE-2022-27664

Hi team,

In our trivy scan report there are 2 HIGH vulnerabilities in resmoio/kubernetes-event-exporter:v1.0:

Library Vulnerability Severity Installed Version Title
golang.org/x/crypto CVE-2021-43565 HIGH v0.0.0-20210220033148-5ea612d1eb83 golang.org/x/crypto: empty plaintext packet causes panic https://avd.aquasec.com/nvd/cve-2021-43565
golang.org/x/net CVE-2022-27664 HIGH v0.0.0-20220225172249-27dd8689420f golang: net/http: handle server errors after sending GOAWAY https://avd.aquasec.com/nvd/cve-2022-27664

Is there any plan to address these vulnerabilities? Thanks.

Not all Events Pushed

We have Metricbeat sending events to OpenSearch and we are trying to replace it with your product, but Metricbeat always has more events logged than event-exporter. For example, a deployment has 3 pods; when we restart it, each pod pulls its image, but I see from the debug logs that event-exporter reports only one of those events to OpenSearch. Running kubectl get events also shows the events for each pod.

[Question] Using IRSA for Opensearch Authentication Failed

I use OpenSearch and IAM Roles for Service Accounts (IRSA) for authorization:

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: monitoring
  name: event-exporter
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/EksCluster-eksblueprint-YSE1WGEEZD3K

Config

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: monitoring
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
    receivers:
    - name: "dump"
      opensearch:
        hosts:
          - https://dev-es.cloudopz.co
        index: kube-events
        indexFormat: "kube-events-{2006-01-02}"
        useEventID: true
        tls: # optional, advanced options for tls
          insecureSkipVerify: false # optional, if set to true, the tls cert won't be verified

Error

{"level":"error","time":"2022-07-29T16:26:19Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:26:19Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:26:19Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:26:19Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:26:19Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
I0729 16:26:21.160973       1 request.go:665] Waited for 1.197094422s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/authentication.k8s.io/v1?timeout=32s
{"level":"error","time":"2022-07-29T16:26:29Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:28:39Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
{"level":"error","time":"2022-07-29T16:28:39Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}
I0729 16:28:41.320471       1 request.go:665] Waited for 1.190263149s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/events.k8s.io/v1?timeout=32s
{"level":"error","time":"2022-07-29T16:28:50Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: Unauthorized"}

Event Flow Interruption During Restart

Analysis indicates that with the leaderElection activated, the interruption is about 17 seconds (the 15 second lease expiration period + 2 seconds for overhead). At high rates, that can be thousands of lost events. Certainly one option is to reduce defaultLeaseDuration, but I'd like it to be subsecond or even zero if possible.

I've been looking at the leader election code and I think we can make that interruption interval much lower. I'm going to start working on that unless somebody has a better idea. I don't see a way to get the existing setup to go much faster than the defaultLeaseDuration.

Index Template Management For Elasticsearch/Opensearch

I see that receivers like opensearch and elasticsearch do not add custom index templates; are there any plans to add this? We are migrating from Metricbeat to your project, and comparing the two I see that Metricbeat creates a large custom index template and additionally lets the user configure shards and similar settings.

Events do not log the username or user actions

Hello everyone.

I would like to know whether it is possible to log everything the users have done without using kube-apiserver audit logging. I configured the event-exporter, but I cannot see when a user scales down some pods, does something else in Kubernetes, and so on.

I am new to this exporter and need a guide for it, please. Thanks.

Add ClusterName to Events

When there are multiple clusters in an area, it's essential to have the source cluster name associated with each event. The Event object exposed by Kubernetes provides a clusterName field, but Kubernetes does not set it.

We've got some code here in testing that adds an optional "clusterName" to the config file, and if clusterName is set in the config file, then the clusterName is set in the event. This makes it easy to filter a stream of events by source cluster.

PR: #3

Problem with collecting logs with filebeat

Hello.
I'm using the event exporter with the stdout receiver. Filebeat collects data from stdout but gets an error like:

parsing input as JSON: invalid character 'I' looking for beginning of value

Maybe you have seen this error before? How can I fix it?

nil pointer dereference with 1.27 cluster

With the release of Kubernetes 1.27, I'm getting the following:

E0526 10:47:05.249149       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 49 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2685340?, 0x46d49b0})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x20?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x2685340, 0x46d49b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/client-go/discovery.convertAPIResource(...)
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:88
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000129980, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:69 +0x5f0
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc000128330, 0x15}, {0xc0004532e0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:35 +0x2f8
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:310 +0x47c
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:198 +0x5c
k8s.io/client-go/discovery.ServerGroupsAndResources({0x3059430, 0xc0007059e0})
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:392 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:356 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc000767690)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:621 +0x72
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:355 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x3059430?, 0xc0007059e0?})
	/go/pkg/mod/k8s.io/[email protected]/restmapper/discovery.go:148 +0x42
github.com/resmoio/kubernetes-event-exporter/pkg/kube.GetObject(0xc000677360, 0xc0001c4ea0, {0x302f540, 0xc0004a4410})
	/app/pkg/kube/objects.go:28 +0x108
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*LabelCache).GetLabelsWithCache(0xc000361e40, 0xc000677360)
	/app/pkg/kube/labels.go:40 +0xab
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*EventWatcher).onEvent(0xc0001595c0, 0xc000677258)
	/app/pkg/kube/watcher.go:95 +0x29d
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*EventWatcher).OnAdd(0x0?, {0x2a86d00?, 0xc000677258?})
	/app/pkg/kube/watcher.go:62 +0x32
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:911 +0x134
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x100047378a0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:157 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000074f38?, {0x302f500, 0xc00059a000}, 0x1, 0xc000168000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:158 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:135 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:92
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000379d00?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:905 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:75 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x12d1050]

goroutine 49 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x20?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x2685340, 0x46d49b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/client-go/discovery.convertAPIResource(...)
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:88
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000129980, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:69 +0x5f0
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc000128330, 0x15}, {0xc0004532e0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	/go/pkg/mod/k8s.io/[email protected]/discovery/aggregated_discovery.go:35 +0x2f8
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:310 +0x47c
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:198 +0x5c
k8s.io/client-go/discovery.ServerGroupsAndResources({0x3059430, 0xc0007059e0})
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:392 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:356 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc000767690)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:621 +0x72
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
	/go/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:355 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x3059430?, 0xc0007059e0?})
	/go/pkg/mod/k8s.io/[email protected]/restmapper/discovery.go:148 +0x42
github.com/resmoio/kubernetes-event-exporter/pkg/kube.GetObject(0xc000677360, 0xc0001c4ea0, {0x302f540, 0xc0004a4410})
	/app/pkg/kube/objects.go:28 +0x108
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*LabelCache).GetLabelsWithCache(0xc000361e40, 0xc000677360)
	/app/pkg/kube/labels.go:40 +0xab
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*EventWatcher).onEvent(0xc0001595c0, 0xc000677258)
	/app/pkg/kube/watcher.go:95 +0x29d
github.com/resmoio/kubernetes-event-exporter/pkg/kube.(*EventWatcher).OnAdd(0x0?, {0x2a86d00?, 0xc000677258?})
	/app/pkg/kube/watcher.go:62 +0x32
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:911 +0x134
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x100047378a0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:157 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000074f38?, {0x302f500, 0xc00059a000}, 0x1, 0xc000168000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:158 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:135 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:92
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000379d00?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:905 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:75 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x85

Document buffer configuration

My elastic sink takes 5 to 9 minutes to write sometimes. Is there a buffer setting? I see in the code BatchSize but I don't see anything in the docs.

kubernetes eventrouter had more data

I noticed that kubernetes eventrouter receives data for both the UPDATED and ADDED verbs, but kubernetes-event-exporter only gets ADDED data. Is it a problem that it's not getting the same data as eventrouter? What is the verb field? I'm not sure; I've started looking at the kubernetes eventrouter code but I can't tell what it is. Does anyone know? Is verb == UPDATED data important?

Looking at kubernetes-event-exporter, it looks like updates are discarded:

watcher.go
func (e *EventWatcher) OnUpdate(oldObj, newObj interface{}) {
    // Ignore updates
}

Does it work with ES datastreams?

For the opstree kubernetes-event-exporter I had to hack together a solution to support Elasticsearch data streams, see: opsgenie#165

I know that this is far from ideal, but I needed something that worked quickly at that point and couldn't spend more time on making it proper.

Has a fix like this (or a proper one 😉) been applied to this fork?

Prometheus Metrics Coming Soon

I've got a PR coming that adds metrics to the exporter. We need to know if this thing is up, down, or failing. It's a pretty small PR and it's working great in my testing.

Metrics are essential. If this thing is down in our clusters, it's a critical problem. Other downstream systems rely on these events. We watch the metrics and raise an alert if the metrics stop flowing.

Receiving many client-side throttling errors

I0928 09:51:51.117936       1 request.go:665] Waited for 1.169935677s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/custom.metrics.k8s.io/v1beta1?timeout=32s
I0928 09:52:01.118162       1 request.go:665] Waited for 9.19883911s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0928 09:52:12.918241       1 request.go:665] Waited for 1.052697189s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/types.kubefed.io/v1beta1?timeout=32s
I0928 09:52:23.118536       1 request.go:665] Waited for 9.219608953s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/networking.istio.io/v1alpha3?timeout=32s
I0928 09:52:33.317829       1 request.go:665] Waited for 4.095703063s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/scheduling.kubefed.io/v1alpha1?timeout=32s
I0928 09:52:43.517991       1 request.go:665] Waited for 8.397779104s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/security.istio.io/v1beta1?timeout=32s
I0928 09:53:48.924398       1 request.go:665] Waited for 1.173705111s due to client-side throttling, not priority and fairness, request: GET:https://10.240.0.1:443/apis/config.istio.io/v1alpha2?timeout=32s
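
These lines come from client-go's client-side rate limiter during API discovery, not from the sinks. A hedged mitigation, based on the kubeQPS and kubeBurst options shown in other configs on this page, is to raise those limits (the values below are illustrative, not recommendations):

    logLevel: info
    logFormat: json
    kubeQPS: 100    # allow more requests per second to the API server
    kubeBurst: 500  # allow short bursts, e.g. during the initial discovery sweep
    route:
      routes:
        - match:
            - receiver: "dump"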

Avoid `runAsNonRoot: true` and use `runAsUser: 65532`

Hello,

In deployment.yaml, a securityContext with runAsNonRoot: true is specified.

securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault

However, the deployment cannot be started due to this line. The event log says:

Error: container has runAsNonRoot and image has non-numeric user (nonroot), cannot verify user is non-root (pod: "event-exporter-6746869bb-wzc6p_monitoring(5090cc65-1e05-4504-ac7b-57b243c111e2)", container: event-exporter)

I think runAsUser: 65532 is preferable, and this works for me.

K8s version: 1.26
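
A sketch of the suggested change in deployment.yaml, mirroring the reporter's workaround (65532 is the conventional "nonroot" UID in distroless-style images; treat the exact UID as an assumption for your image):

    securityContext:
      runAsNonRoot: true
      runAsUser: 65532        # numeric UID so the kubelet can verify the user is non-root
      seccompProfile:
        type: RuntimeDefault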

loglevel

Hi,
What does logLevel in the ConfigMap actually do? Is it the log level for the container itself, or does it affect which events are exported?
Thank you

stale cache items and proposed solution

Hi, we've found that if we use just ev.InvolvedObject.UID as the cache key for the annotations and labels, we end up missing some required data that we pass via annotations. We have switched here to using ev.InvolvedObject.UID and ev.InvolvedObject.ResourceVersion together. I've made it optional here, where "strict caching" is activated by a CLI option. I've also added metrics to show the performance of the cache.

I'm going to let it run for a while here, but I think it's going to be good. I can see from the metrics that it's still getting a large percentage of cache hits.

I'm happy to get a PR going over here for it soon if you're interested. I'm asking because if you change the default cache key, it is a very small change and so you may want to do this yourself, using this new key structure as the new default. If you want to make it optional and with metrics, I'll send the PR over soon.

strings.Join([]string{string(ev.InvolvedObject.UID), ev.InvolvedObject.ResourceVersion}, "/")

Kubernetes events exporter with teams

Hi,
We have integrated the event exporter with Teams and we can see the alerts there. But we would like to customize the message in Teams, for example to include the cluster name and namespace in the alerts.

Truncate messages via char limitation.

Hi team, our Splunk agent has a limit of 15k characters; anything longer than that gets truncated, and those messages then do not get the proper formatting in Splunk.

We are using the stdout configuration; can we limit the message length in some way?

Project still active?

I notice there haven't been any new commits to this project since December. This is concerning, especially since this project was forked after work ceased on the original opsgenie project. Can the maintainers confirm this project is still active? Is Resmo still supportive of this effort? Is there a specific timeline for when a new release will be available?

I am NOT complaining, thank you for your efforts so far. I am just trying to understand the status of things so we can make good choices about the projects we use.

Future of this project/fork

My question is whether Resmo is behind this project and intends to look after it regularly. My impression is that there are already some PRs (including fixes for critical CVEs from me) that unfortunately have not been reviewed and merged. Don't get me wrong, I understand that there is no entitlement here. I'm happy to contribute too, but there has to be someone there to supervise it; I don't want to put my effort into a "dead" project.

cc @mustafaakin

RBAC `view` role is not enough sometimes

I'm getting this error

helmcharts.source.toolkit.fluxcd.io "karma-karma" is forbidden: User "system:serviceaccount:observability:event-exporter" cannot get resource "helmcharts" in API group "source.toolkit.fluxcd.io" in the namespace "flux-system""

Some applications that ship CRDs do not aggregate them into the default roles for you; in this case, FluxCD thinks that only FluxCD itself (or cluster-admin) should be able to see these resources.
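
A hedged sketch of an extra ClusterRole that grants read access to the resource named in the error (group and resource come from the error message itself; it still needs a ClusterRoleBinding to the exporter's service account, and other CRDs may need similar entries):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: event-exporter-extra-read
    rules:
      - apiGroups: ["source.toolkit.fluxcd.io"]
        resources: ["helmcharts"]   # add further FluxCD resources as similar errors appear
        verbs: ["get", "watch", "list"]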

OpenSearch indexing failed due to mapping error

I tried to run the latest Docker image of the exporter with this config:

    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
    receivers:
      - name: "dump"
        opensearch:
          hosts:
            - https://opensearch:9200
          index: kube-events
          username: {{ .Values.opensearch.username }}
          password: {{ .Values.opensearch.password }}
          tls:
            insecureSkipVerify: true

Result is:

{"level":"error","time":"2022-09-11T15:00:53Z","caller":"/app/pkg/sinks/opensearch.go:139","message":"Indexing failed: {\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"Could not dynamically add mapping for field [app.kubernetes.io/managed-by]. Existing mapping for [involvedObject.labels.app] must be of type object but found [text].\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"Could not dynamically add mapping for field [app.kubernetes.io/managed-by]. Existing mapping for [involvedObject.labels.app] must be of type object but found [text].\"},\"status\":400}"}

Error: container has runAsNonRoot and image has non-numeric user (nonroot), cannot verify user is non-root (pod: "event-exporter-*_monitoring(**)", container: event-exporter)

Hello, when I deploy kubernetes-event-exporter 1.1, I see this error:

Warning Failed 13s (x2 over 13s) kubelet Error: container has runAsNonRoot and image has non-numeric user (nonroot), cannot verify user is non-root (pod: "event-exporter-54df879f78-m9njp_monitoring(2290c8e3-e8f2-4476-8d11-c6a938f16830)", container: event-exporter)

ClusterRole rules are too permissive

role.yaml

{{- if .Values.rbac.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "kubernetes-event-exporter.fullname" . }}
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
{{- end -}}

ClusterRole rules are too permissive! The service account does not need to be able to get, watch, list everything (e.g., secrets). The default should include only what is necessary for the software to function (i.e., events).

If we could configure the rules from the values.yaml that would be awesome.
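
A minimal sketch of what a least-privilege default could look like, limited to events in both API groups (the exporter also looks up involved objects and their labels, so some setups may still need broader read access; this is a starting point, not the chart's actual default):

    {{- if .Values.rbac.create }}
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: {{ include "kubernetes-event-exporter.fullname" . }}
    rules:
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["get", "watch", "list"]
      - apiGroups: ["events.k8s.io"]
        resources: ["events"]
        verbs: ["get", "watch", "list"]
    {{- end -}}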

The layout section is not working

I am outputting the logs to kafka but somehow the layout section is not working.

config:
  logLevel: debug
  logFormat: json
  receivers:
    - name: "kafka"
      kafka:
        clientId: "kubernetes"
        topic: "event-logs"
        brokers:
          - "localhost:9092"
        layout:
          eventType: "kube-event"
          createdAt: "{{ .GetTimestampMs }}"
          details:
            message: "{{ .Message }}"
            reason: "{{ .Reason }}"
            tip: "{{ .Type }}"
            count: "{{ .Count }}"
            kind: "{{ .InvolvedObject.Kind }}"
            name: "{{ .InvolvedObject.Name }}"
            namespace: "{{ .Namespace }}"
            component: "{{ .Source.Component }}"
            host: "{{ .Source.Host }}"
            labels: "{{ toJson .InvolvedObject.Labels}}"
  route:
    routes:
      - match:
          - receiver: "kafka"

I don't see the layout applied in Kafka. Am I doing anything wrong in the config? Can someone show what the output will look like?
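
For reference only, not verified against this exact chart: when the layout is applied, each Kafka message should be the rendered template rather than the raw event, so with the layout above a single image-pull event would come out roughly like this (all values illustrative):

    {
      "eventType": "kube-event",
      "createdAt": "1695890511000",
      "details": {
        "message": "Successfully pulled image \"nginx:1.25\"",
        "reason": "Pulled",
        "tip": "Normal",
        "count": "1",
        "kind": "Pod",
        "name": "my-app-54989875d7-52xnl",
        "namespace": "prod",
        "component": "kubelet",
        "host": "node-1",
        "labels": "{\"app\":\"my-app\"}"
      }
    }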

Renaming fields

Is it possible to rename some fields? I see there is a layout option, but it overrides all fields; I wonder if it is possible to keep all fields and just rename some of them.

kubernetes-event-exporter stops collecting

kubectl get pod -n monitoring |grep event-exporter

event-exporter-585c6df544-4mggq 1/1 Running 0 27d

kubectl logs -f -n monitoring event-exporter-585c6df544-4mggq --tail=2

{"metadata":{"name":"basketball-live-api-20220224201321-9b7083b-54989875d7-52xnl.16d8c37329152d6b","namespace":"prod","selfLink":"/api/v1/namespaces/prod/events/basketball-live-api-20220224201321-9b7083b-54989875d7-52xnl.16d8c37329152d6b","uid":"f1d9552e-d621-4dff-bd33-830b59408cab","resourceVersion":"1613015268","creationTimestamp":"2022-03-03T04:09:23Z"},"reason":"Unhealthy","message":"Readiness probe failed: Get http://10.41.180.10:8080/health: dial tcp 10.41.180.10:8080: connect: connection refused","source":{"component":"kubelet","host":"cn-hangzhou.10.41.129.225"},"firstTimestamp":"2022-03-03T04:09:23Z","lastTimestamp":"2022-03-03T04:10:08Z","count":4,"type":"Warning","eventTime":null,"reportingComponent":"","reportingInstance":"","involvedObject":{"kind":"Pod","namespace":"prod","name":"basketball-live-api-20220224201321-9b7083b-54989875d7-52xnl","uid":"7b651605-5c3f-4961-86f0-7585fb5c7788","apiVersion":"v1","resourceVersion":"1613011795","fieldPath":"spec.containers{basketball-live-api}","labels":{"app":"basketball-live-api","apptype":"java-msv","datetime":"2022-02-24-20-21-44-036497","env":"prod","pod-template-hash":"54989875d7","version":"basketball-live-api-20220224201321-9b7083b"}}}
{"metadata":{"name":"basketball-live-api-20220224201321-9b7083b-54989875d7-ls7rz.16d8c373768fc603","namespace":"prod","selfLink":"/api/v1/namespaces/prod/events/basketball-live-api-20220224201321-9b7083b-54989875d7-ls7rz.16d8c373768fc603","uid":"440a8e10-324d-45d7-ab77-947175108ca0","resourceVersion":"1613015343","creationTimestamp":"2022-03-03T04:09:24Z"},"reason":"Unhealthy","message":"Readiness probe failed: Get http://10.41.225.235:8080/health: dial tcp 10.41.225.235:8080: connect: connection refused","source":{"component":"kubelet","host":"cn-hangzhou.10.41.129.239"},"firstTimestamp":"2022-03-03T04:09:24Z","lastTimestamp":"2022-03-03T04:10:09Z","count":4,"type":"Warning","eventTime":null,"reportingComponent":"","reportingInstance":"","involvedObject":{"kind":"Pod","namespace":"prod","name":"basketball-live-api-20220224201321-9b7083b-54989875d7-ls7rz","uid":"f19a6bca-af32-4582-bfce-feffd82b3025","apiVersion":"v1","resourceVersion":"1613011793","fieldPath":"spec.containers{basketball-live-api}","labels":{"app":"basketball-live-api","apptype":"java-msv","datetime":"2022-02-24-20-21-44-036497","env":"prod","pod-template-hash":"54989875d7","version":"basketball-live-api-20220224201321-9b7083b"}}}

date

Wed Mar 30 12:00:28 CST 2022

Question: File Receiver Path

Hi

In the path variable for the File Receiver, is /tmp/dump a directory or a file? If it's a directory, presumably the directory needs to exist and won't be automatically created. If it's a file, does that need to already exist? Also is this location on the k8s node where the exporter pod is running?

Cheers

Disable output of events in pod

Installation method: I use the Helm chart from Bitnami. I use the Syslog receiver as the main one and I don't need to duplicate events in the event-exporter log. Is there functionality to disable this?

Vulnerabilities in dependencies

Summary
Hi! I have recently analysed Kubernetes Event Exporter's latest release (0.1.0) with Trivy (a vulnerability scanner). The scanner mainly reports two vulnerabilities:

$ wget https://github.com/resmoio/kubernetes-event-exporter/archive/refs/tags/kubernetes-event-exporter-0.1.0.tar.gz
$ tar xfz kubernetes-event-exporter-0.1.0.tar.gz
$ trivy rootfs .

go.mod (gomod)

Total: 2 (UNKNOWN: 0, LOW: 1, MEDIUM: 1, HIGH: 0, CRITICAL: 0)

Library                   | Vulnerability | Severity | Installed Version | Fixed Version | Title
--------------------------|---------------|----------|-------------------|---------------|------------------------------------------------------------
github.com/aws/aws-sdk-go | CVE-2020-8911 | MEDIUM   | 1.44.162          |               | aws/aws-sdk-go: CBC padding oracle issue in AWS S3 Crypto SDK for golang... https://avd.aquasec.com/nvd/cve-2020-8911
github.com/aws/aws-sdk-go | CVE-2020-8912 | LOW      | 1.44.162          |               | aws-sdk-go: In-band key negotiation issue in AWS S3 Crypto SDK for golang... https://avd.aquasec.com/nvd/cve-2020-8912

What is the status on these? Is the software affected by these?

Cannot specify a username in a slack message

When defining a slack receiver I'm not able to specify the identifier of a user so that someone can be alerted directly.
In the "message" field it seems @here or @USERNAME are not taken into account; they just appear as plain text once the message is in Slack.

Here is a simple example of my slack receiver (I'd like each message to be sent to CHANNEL_ID but also to alert the USERNAME user in that channel).

    - name: "slack"
      slack:
        token: TOKEN
        channel: "CHANNEL_ID"
        message: "@USERNAME {{ .Message }}"
        fields:
          Reason: "{{ .Reason }}"
          Namespace: "{{ .Namespace }}"
          Kind: "{{ .InvolvedObject.Kind }}"
          Name: "{{ .InvolvedObject.Name }}"

In Slack the message appears as shown in the screenshot below (the user is not notified, as @USERNAME appears as regular text instead of a reference to the user):

(Screenshot: 2022-08-15 at 10:15)

Any idea what I'm doing wrong?
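
Not verified against the slack sink itself, but in Slack's message formatting a plain @USERNAME is not linkified; mentions have to use the escaped form, e.g. <@U0123456789> for a user (member ID, not display name) or <!here> for @here. A hedged sketch of the receiver with that form (the U… ID is a placeholder):

    - name: "slack"
      slack:
        token: TOKEN
        channel: "CHANNEL_ID"
        message: "<@U0123456789> {{ .Message }}"
        fields:
          Reason: "{{ .Reason }}"
          Namespace: "{{ .Namespace }}"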

Release

Hi, thanks for the new fork, are you planning a release so the latest changes can be used?

Speed up multi-arch builds

arm64 builds are very slow: they take around 30 minutes because docker buildx creates an emulated arm64 image. We can modify the build to cross-compile with GOARCH=arm64 and just copy the binary into a Docker image, which should be much faster; compiling lots of huge dependencies in an emulated VM is slow.
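
A rough sketch of the idea, not the project's actual build setup: cross-compile the binaries on the native build host and let buildx only assemble the images, so no Go compilation runs under emulation (this assumes a Dockerfile that picks the prebuilt binary by TARGETARCH):

    # cross-compile on the build host, no QEMU involved
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/kubernetes-event-exporter-amd64 .
    CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bin/kubernetes-event-exporter-arm64 .

    # buildx then only copies the matching binary into each platform image
    docker buildx build --platform linux/amd64,linux/arm64 -t ghcr.io/resmoio/kubernetes-event-exporter:dev --push .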

Events are missing on newer k8s version

Hi,
I am trying to use the Bitnami Helm chart for kubernetes-event-exporter, version 2.2.4, with these values:

resources:
  limits:
     cpu: 400m
     memory: 250Mi
  requests:
     cpu: 100m
     memory: 25Mi

image:
  registry: ghcr.io/resmoio
  repository: kubernetes-event-exporter
  tag: v1.1

config:
  logLevel: trace
  logFormat: json
  maxEventAgeSeconds: 10
  throttlePeriod: 10
  kubeQPS: 60
  kubeBurst: 60
  route:
      match:
          - receiver: "dump"
  receivers:
    - name: "dump"
      elasticsearch:
        hosts:
          - http://elasticsearch-master.logging:9200
        index: new-kube-events
        indexFormat: "new-kube-events-{2006-01-02}"
        useEventID: false
        deDot: false
        tls:
          insecureSkipVerify: true

In Elasticsearch I can see only
Successfully assigned POD to NODE
and that is all, no other events,
but in the kubernetes-event-exporter logs I can see all the other events.

In the configuration file, how does a route drop multiple namespaces?

I tried to drop events from namespaces containing test or temp in their names, but it didn't work.

    route:
      routes:
        - drop:
            - namespace: "*test*|*temp*"
            - type: "Normal"
          match:
            - kind: "Pod|Deployment"
              receiver: "file"

Events from namespaces containing these two keywords are still being collected.
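
Not confirmed, but the commented example earlier on this page lists each drop condition as a separate entry, so a sketch that splits the two patterns into separate drop rules might behave as intended:

    route:
      routes:
        - drop:
            - namespace: "*test*"
            - namespace: "*temp*"
            - type: "Normal"
          match:
            - kind: "Pod|Deployment"
              receiver: "file"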
