
kiwigrid / helm-charts

Helm charts for Kubernetes curated by Kiwigrid

Home Page: https://kiwigrid.github.io

License: MIT License

HTML 3.66% Smarty 6.50% Mustache 89.85%
helm charts kiwigrid kubernetes

helm-charts's Introduction

Kiwigrid Helm charts

Github Action

Add repo

$ helm repo add kiwigrid https://kiwigrid.github.io

Support

  • Please don't email the maintainers directly.
  • Use the Github issue tracker instead.

Adding charts

  • Use a fork of this repo
  • Always sign your commits (git commit -s -m 'useful commit message')
  • Do NOT touch the default (master) branch in your fork
  • Always create new branches to work on
  • Create a Github pull request and fill out the PR template
  • Follow Helm best practices: https://docs.helm.sh/chart_best_practices

helm-charts's People

Contributors

0x4c6565, alexsn, alien2150, andrewneudegg, aperigault, axdotl, bartlettc22, bnutt, brandonschmitt, chrisob, ghazgkull, gjtempleton, iemre, kovszilard, markusheiden, mauermann, monotek, muffix, niklaskhf, nvanheuverzwijn, nvtkaszpir, obeyler, oscarfh, pluies, pravarag, pulledtim, ritmas, rpahli, sshishov, tdinucci


helm-charts's Issues

Path customisation changes from PR #99 were reverted in #101

Hello,

PR #99 introduced a useful feature that allows customising the paths mounted from the host. This is useful in our case, where we are running Fluentd on Windows and the Docker container logs are in C:\ProgramData\docker\containers.

In PR #101 the volumeMounts mountPath was reverted to a hardcoded value instead of staying aligned with hostPath, which takes its value from the corresponding hostLogDir entry. I am not sure what the reasoning behind this is, but it makes it harder to accommodate custom setups.

It would also be helpful to be able to mount additional hostPaths; for example, in our case the kubelet logs are in C:\k (I guess this should be raised as a separate feature request).
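
For context, a rough sketch of the kind of override this issue is about, using the hostLogDir values shown later on this page (the Windows Docker path is taken from the report above; the varLog mapping is a hypothetical example):

hostLogDir:
  dockerContainers: C:\ProgramData\docker\containers    # Windows Docker log location from the report
  varLog: C:\k                                          # hypothetical: where kubelet logs live in this setup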

Prometheus Thanos: Add service manifest for Ruler

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please add a K8s Service manifest for the Ruler component. It will make it easier to add its StoreAPI to the Querier.
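
A minimal sketch of the kind of Service that could expose the Ruler's StoreAPI to the Querier (the name, labels, and ports below are assumptions, not the chart's actual helpers):

apiVersion: v1
kind: Service
metadata:
  name: thanos-ruler                             # hypothetical; the chart's fullname helper would apply
spec:
  clusterIP: None                                # headless, so the Querier can discover it via DNS
  selector:
    app.kubernetes.io/name: prometheus-thanos    # hypothetical labels; use the chart's selectors
    app.kubernetes.io/component: ruler
  ports:
    - name: grpc
      port: 10901                                # assumed default Thanos gRPC (StoreAPI) port
      targetPort: grpc
    - name: http
      port: 10902                                # assumed default Thanos HTTP port
      targetPort: http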

fluentd fails and restarts all the time

Hi, I get the following error when I install fluentd on k8s talking to AWS ES. Logs flow fine, but fluentd restarts all the time:

2019-05-02 19:53:59 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2019-05-02 19:53:59.814288919 +0000 record={"priority"=>"6", "boot_id"=>"896d98e4770742679351613906e814ab", "machine_id"=>"ec2e1846d08b4dce48cb0784eee1afc3", "hostname"=>"ip-10-104-127-75", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "message"=>"cbr0: port 5(veth8260447c) entered disabled statedevice veth8260447c left promiscuous modecbr0: port 5(veth8260447c) entered disabled state", "source_monotonic_timestamp"=>"1226774483531"}
2019-05-02 20:06:07 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=0 next_retry_seconds=2019-05-02 20:06:08 +0000 chunk="587ed298f69c6778a968deab34cbc42d" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"localhost", :port=>8080, :scheme=>"http", :user=>"", :password=>"obfuscated"}): read timeout reached"
2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.3.3/lib/fluent/plugin/out_elasticsearch.rb:703:in rescue in send_bulk' 2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.3.3/lib/fluent/plugin/out_elasticsearch.rb:679:in send_bulk'
2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.3.3/lib/fluent/plugin/out_elasticsearch.rb:585:in block in write' 2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.3.3/lib/fluent/plugin/out_elasticsearch.rb:584:in each'
2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.3.3/lib/fluent/plugin/out_elasticsearch.rb:584:in write' 2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.4.1/lib/fluent/plugin/output.rb:1125:in try_flush'
2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.4.1/lib/fluent/plugin/output.rb:1425:in flush_thread_run' 2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.4.1/lib/fluent/plugin/output.rb:454:in block (2 levels) in start'
2019-05-02 20:06:07 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.4.1/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-05-02 20:06:08 +0000 [warn]: [elasticsearch] retry succeeded. chunk_id="587ed299eae2e26c8bcfd7d4cd9a595d"
2019-05-02 20:08:49 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2019-05-02 20:08:49.814133959 +0000 record={"priority"=>"6", "boot_id"=>"896d98e4770742679351613906e814ab", "machine_id"=>"ec2e1846d08b4dce48cb0784eee1afc3", "hostname"=>"ip-*******", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "message"=>"cbr0: port 2(vethece8768c) entered disabled statedevice vethece8768c left promiscuous modecbr0: port 2(vethece8768c) entered disabled state", "source_monotonic_timestamp"=>"1227663843094"}

Can someone help me with this?

Help on deploying graphite_exporter inside the graphite pod

Hello, this is not a technical issue. I don't have much experience with this kind of "issue", so I thought someone might be able to help if I'm not the only one doing this:

I would like to know if there is a way to run graphite_exporter inside my graphite pod that I deploy with your Helm chart. I tried to do it by hand but the git & make packages are not present in the container.

help on configuring elasticsearch with fluentd

Is this a request for help?: yes

Inside the values.yaml I see that, to work with Elasticsearch, we also need to install the fluent-plugin-elasticsearch plugin:

# These logs are then submitted to Elasticsearch which assumes the
# installation of the fluent-plugin-elasticsearch & the
# fluent-plugin-kubernetes_metadata_filter plugins.
# See https://github.com/uken/fluent-plugin-elasticsearch &
# https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
# more information about the plugins.

I can't find the right way to install the plugins you've mentioned alongside the stable/elasticsearch Helm chart.
Have you got an example of how to do it?
Thanks in advance.

Empty message after deploy

Version of Helm and Kubernetes:
Helm: 2.13.0
K8s: v1.10.0

Which chart in which version:
fluentd-elasticsearch-2.5.0

What happened:
Installed helm chart and all is running fine. However in ES empty log messages are appearing. This is the way we are logging with Log4J:
<Property name="CONSOLE_LOG_PATTERN">%d{ISO8601} [%thread] %highlight{%-5level} %logger{36}.%M: %m%ex%throwable{none}%n</Property>

March 6th 2019, 09:40:24.601 | checkout-api | 2019-03-06T08:39:55,180 [main] �[1;31mERROR�[m org.springframework.boot.SpringApplication.reportFailure: Application startup failed org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
March 6th 2019, 09:40:24.601 | checkout-api

What you expected to happen:
I expect it to recognize multiline messages properly. Since there are no extra newlines in the console logs themselves, I also do not expect empty messages in ES.

How to reproduce it (as minimally and precisely as possible):
Hard for me to say since it's just a default install of the chart. Maybe by using the log format we use?

Anything else we need to know:
N/A

[Feature Request] - Thanos Compactor switch to StatefulSet

Is this a request for help?:

No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request

We want to create a Persistent Volume Claim for the Compactor so we can expand the disk space available to it as the data stored by Thanos and pulled down for compaction grows. Currently the Helm chart deploys the Compactor (https://thanos.io/components/compact.md/) as a Deployment. This doesn't allow dynamic provisioning of a volume for the recommended singleton to ensure it has the recommended 100GB of disk space to work with.

This would consist of a number of changes:

  • Change the compactor component from a Deployment to a StatefulSet
  • Remove the replicas option for the compactor - the official guidance from the Thanos docs is that this should only ever be a singleton
  • Add a number of new options for persistentVolumeClaim to allow users to configure the storage class, size, etc., as with the existing storeGateway options (a rough sketch follows at the end of this issue)

Version of Helm and Kubernetes:
Helm: v2.14.3
Kubernetes: v1.12.8

Which chart in which version:
Thanos, 1.2.1

What happened:
N/A

What you expected to happen:
The ability to create and configure a PVC for a Thanos compactor singleton.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

I intend to raise a PR this evening to resolve this issue, which I've already tested internally. However, if anything in my list of proposed changes above looks like the wrong approach, I'm happy to adjust it.
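
A rough sketch of what the new values could look like, mirroring the existing storeGateway options mentioned above (key names are assumptions, not the chart's actual schema):

compact:
  persistentVolumeClaim:
    enabled: true
    storageClassName: standard      # example values only
    accessModes:
      - ReadWriteOnce
    size: 100Gi                     # matches the recommended working space noted above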

Prometheus Thanos: Allow --objstore.config-file to be used

Problem

At the moment --objstore.config is defined inline in the Store Gateway, Compact, and Ruler with:

  • storeGateway.objStoreConfig
  • ruler.objStoreConfig
  • compact.objStoreConfig

This requires the values.yaml to contain secrets, repeated several times.

Feature Request

The ability to specify a secret outside the Helm chart (perhaps inside the chart too?) that can be referenced in the values file for each component.

Implementation Options

  1. The Helm chart already supports mounting volumes, so the secret could be mounted and the --objstore.config-file argument pointed at the mounted path.
  2. The secret could be exposed as an environment variable and --objstore.config used with the value of the variable, similar to the prometheus-operator Thanos sidecar. See https://github.com/coreos/prometheus-operator/blob/master/pkg/prometheus/statefulset.go#L705

Next Steps

I will raise a PR with option 1 above as I think it is best suited for this Helm chart; a rough sketch follows below. Comments are appreciated.
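
A rough sketch of option 1 under assumed value names (the chart's real keys for volumes and mounts may differ; the point is the Secret mount plus --objstore.config-file):

storeGateway:
  # hypothetical keys, for illustration only
  volumes:
    - name: objstore-config
      secret:
        secretName: thanos-objstore        # Secret created outside the chart
  volumeMounts:
    - name: objstore-config
      mountPath: /etc/thanos/objstore
      readOnly: true
  # the container args would then include:
  #   --objstore.config-file=/etc/thanos/objstore/objstore.yaml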

Kube 1.11 install error

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
helm version 2.9.1
Kubernetes master version 1.11.8_1552
Kubernetes workers version 1.11.8_1547

Which chart in which version:
graphite chart
image:
repository: graphiteapp/graphite-statsd
tag: 1.1.5-10

What happened:
The pod/container fails to create: CreateContainerError

in the kube logs: kubectl describe pod graphite-0

(combined from similar events): Error: failed to create containerd container: taking runtime copy of volume: open /var/data/cripersistentstorage/io.containerd.grpc.v1.cri/containers/922cfa93053db8e428823b201b481143f9cc230f18ff42b9f37fa77161db1b9f/volumes/bd1403ba9f315e8c02c1add7951089dda95a42c6e1d86893593749738fa8ed51: no such file or directory

What you expected to happen:
the pod to be created

How to reproduce it (as minimally and precisely as possible):
helm repo add kiwigrid https://kiwigrid.github.io
helm install kiwigrid/graphite --name graphite

Anything else we need to know:
This is on IBM's kubernetes infrastructure

Error: render error in fluentd-elasticsearch/templates/service.yaml

Is this a BUG REPORT?:

Version of Helm and Kubernetes:
helm version: v2.14.0
kubernetes version: v1.15.0

Which chart in which version:
fluentd-elasticsearch

What happened:
When trying to render the template I got

Error: render error in "fluentd-elasticsearch/templates/service.yaml": template: fluentd-elasticsearch/templates/service.yaml:10:3: executing "fluentd-elasticsearch/templates/service.yaml" at <include "fluentd-elasticsearch.labels" .>: error calling include: template: fluentd-elasticsearch/templates/_helpers.tpl:48:27: executing "fluentd-elasticsearch.labels" at <include "fluentd-elasticsearch.name" .>: error calling include: template: fluentd-elasticsearch/templates/_helpers.tpl:6:18: executing "fluentd-elasticsearch.name" at <.Chart.Name>: nil pointer evaluating interface {}.Name

What you expected to happen:
render fluentd manifest

How to reproduce it
values.yaml

image:
  pullPolicy: IfNotPresent
  repository: quay.io/fluentd_elasticsearch/fluentd
  tag: v2.7.0

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 200Mi

awsSigningSidecar:
  enabled: false

priorityClassName: ""

hostLogDir:
  varLog: /var/log
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64

elasticsearch:
  auth:
    enabled: false
  host: elasticsearch-client
  port: 9200
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  logstash_prefix: 'logstash'

fluentdArgs: "--no-supervisor -q"

rbac:
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

updateStrategy:
  type: RollingUpdate

livenessProbe:
  enabled: true

serviceMonitor:
  enabled: false

prometheusRule:
  enabled: false

annotations: {}
#  prometheus.io/scrape: "true"
#  prometheus.io/port: "24231"

podSecurityPolicy:
  enabled: false

ingress:
  enabled: false

configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: false
    outputConf: true

extraConfigMaps:
  containers.input.conf: |-
    <match fluent.**>
      @type null
    </match>
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter **>
        @type record_transformer
        enable_ruby
        <record>
          enviroment "${environment}"
          global_log $${ record["log"] || record["MESSAGE"] }
        </record>
    </filter>
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      type_name _doc
      host "#{ENV['OUTPUT_HOST']}"
      port "#{ENV['OUTPUT_PORT']}"
      scheme "#{ENV['OUTPUT_SCHEME']}"
      ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
      ssl_verify true
      logstash_format true
      logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
      reconnect_on_error true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
        queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
        overflow_action block
      </buffer>
    </match>


## Persist data to a persistent volume
persistence:
  enabled: false

service:
  ports:
    - name: fluentd-tcp
      port: 24224
      protocol: TCP
      targetPort: 24224
      type: ClusterIp
    - name: fluentd-udp
      port: 24224
      protocol: UDP
      targetPort: 24224
      type: ClusterIp

nodeSelector: {}

tolerations: []

affinity: {}

helm fetch kiwigrid/fluentd-elasticsearch --untar
helm template fluentd-elasticsearch --values values.yaml > fluentd.yaml

Upgrade prometheus-thanos chart to use thanos 0.4.0

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

Which chart in which version:
prometheus-thanos

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Thanos Compactor v0.4.0 has changed the flag --sync-delay to --consistency-delay. Since --sync-delay is hardcoded in the templates, changing the image.tag is not enough to upgrade Thanos Compactor

Code change

can't connect to AWS elasticsearch (6.7)

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-29T16:15:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:
apiVersion: v1
name: fluentd-elasticsearch
version: 4.4.0
appVersion: 2.6.0

What happened:
fluentd-elasticsearch-2.6.0]$ kubectl logs -n logging fluentd-elasticsearch-787zj
2019-07-25 14:43:30 +0000 [error]: unexpected error error_class=Elasticsearch::Transport::Transport::Errors::Forbidden error="[403] {"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"}"
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/elasticsearch-transport-7.1.0/lib/elasticsearch/transport/transport/base.rb:218:in __raise_transport_error' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/elasticsearch-transport-7.1.0/lib/elasticsearch/transport/transport/base.rb:338:in perform_request'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/elasticsearch-transport-7.1.0/lib/elasticsearch/transport/transport/http/faraday.rb:37:in perform_request' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/elasticsearch-transport-7.1.0/lib/elasticsearch/transport/client.rb:160:in perform_request'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/elasticsearch-api-7.1.0/lib/elasticsearch/api/actions/info.rb:32:in info' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:334:in detect_es_major_version'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:245:in block in configure' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/elasticsearch_index_template.rb:35:in retry_operate'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:244:in configure' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin.rb:164:in configure'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:130:in add_match' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:72:in block in configure'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:64:in each' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:64:in configure'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/root_agent.rb:150:in configure' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/engine.rb:131:in configure'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/engine.rb:96:in run_configure' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:801:in run_configure'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:548:in block in run_worker' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:730:in main_process'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:544:in run_worker' 2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/command/fluentd.rb:316:in <top (required)>'
2019-07-25 14:43:30 +0000 [error]: /usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in require' 2019-07-25 14:43:30 +0000 [error]: /usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in require'
2019-07-25 14:43:30 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-1.5.1/bin/fluentd:8:in <top (required)>' 2019-07-25 14:43:30 +0000 [error]: /usr/local/bin/fluentd:22:in load'
2019-07-25 14:43:30 +0000 [error]: /usr/local/bin/fluentd:22:in `

'

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

help on kubernetes metadata

Is this a request for help?: help
I am trying to add the kubernetes metadata filter, but no metadata is added.
Do you know if some other config is mandatory?

      <filter kubernetes.**>
        @id filter_kubernetes_metadata
        @type kubernetes_metadata
      </filter>

content of log file /var/log/containers/gogs-6dbc5d64df-t8ddv_gogs_gogs-fe724f9b94a28d6e43948d563462281c786d2e4e498a989d6be10feabeceec79.log

 
{"log":"groupmod: PAM: Permission denied\n","stream":"stderr","time":"2019-05-14T08:57:04.409682156Z"}
{"log":"usermod: no changes\n","stream":"stderr","time":"2019-05-14T08:57:04.413281472Z"}
{"log":"mkdir: can't create directory '/data/gogs/data': Permission denied\n","stream":"stderr","time":"2019-05-14T08:57:04.418772517Z"}

https://kiwigrid.github.io is not a valid chart repository or cannot be reached

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}

Which chart:

What happened:
helm repo add kiwigrid https://kiwigrid.github.io
Error: Looks like "https://kiwigrid.github.io" is not a valid chart repository or cannot be reached: Failed to fetch https://kiwigrid.github.io/index.yaml : 404 Not Found

What you expected to happen:
Being able to add the repo

How to reproduce it (as minimally and precisely as possible):
helm repo add kiwigrid https://kiwigrid.github.io

Anything else we need to know:

SSL_VERIFY??

Is there a way to set ssl_verify=false when installing this chart? I have an ES server with a self-signed cert that I want to send traffic to. If I set scheme=https and port=443 I get an SSL verification error back from Fluentd.

This is a question for the fluentd-elasticsearch chart.
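
Not an authoritative answer, but one pattern already shown elsewhere on this page is to replace the default output config map and set ssl_verify there. A rough sketch, assuming the default output.conf is disabled first via configMaps.useDefaults:

configMaps:
  useDefaults:
    outputConf: false
extraConfigMaps:
  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      host "#{ENV['OUTPUT_HOST']}"
      port "#{ENV['OUTPUT_PORT']}"
      scheme https
      ssl_verify false            # accept the self-signed certificate
      logstash_format true
      logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
    </match>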

where is sidecar component

Hi,
Where is the sidecar component in it?

I have a running Prometheus server that was deployed using Helm.
Now I want to implement Thanos separately without changing the Prometheus setup.

Please advise.

kiwigrid-ci-bot not working as intended

What happened:
Unable to add the kiwigrid repository using the command:
helm repo add kiwigrid https://kiwigrid.github.io.

Results in the following error:
Error: Looks like "https://kiwigrid.github.io" is not a valid chart repository or cannot be reached: Failed to fetch https://kiwigrid.github.io/index.yaml : 404 Not Found

What you expected to happen:
Repo to be added to helm

How to reproduce it (as minimally and precisely as possible):
helm repo add kiwigrid https://kiwigrid.github.io

Anything else we need to know:
It appears as though this is an issue with the new kiwigrid-ci-bot. A commit was made 30 minutes ago to the kiwigrid/kiwigrid.github.io github repository to remove the index.yaml. Commit reference: 2b56bcc

Trying to send logs to s3 and elasticsearch

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

If this is a FEATURE REQUEST, please:

  • Describe in detail the feature/behavior/change you'd like to see.
    Currently the Dockerfile used by the kiwigrid/fluentd-elasticsearch Helm chart only provides the Elasticsearch plugin; it would be nice if the s3 plugin were also included.

Version of Helm and Kubernetes:
helm version :- v2.13.1
kubernetes version :- v1.12.7

I have built a customized Docker image that includes both the elasticsearch and s3 plugins, and
added the configuration below to the kiwigrid/fluentd-elasticsearch Helm chart:

extraConfigMaps:
    output.conf: |-
      <match **>
        @type s3
        @id s3
        @log_level info
        s3_bucket "#{ENV['S3_BUCKET_NAME']}"
        s3_region "#{ENV['S3_BUCKET_REGION']}"
        aws_access_key_id "${ENV['AWS_ACCESS_KEY_ID']}"
        aws_secret_access_key "${ENV['AWS_SECRET_ACCESS_KEY']}"
        path logs/%Y/%m/%d/
        logstash_format true
        logstash_prefix logstash
        reconnect_on_error true
        <buffer>
          @type file
          path /var/log/fluentd-buffers/kubernetes.system.buffer
          flush_mode interval
          retry_type exponential_backoff
          flush_thread_count 8
          flush_interval 15s
          retry_forever
          retry_max_interval 30
          timekey 3600
          timekey_use_utc true
          overflow_action block
        </buffer>
      </match>
      <match **>
        @id elasticsearch
        @type elasticsearch
        @log_level error
        include_tag_key true
        host "#{ENV['OUTPUT_HOST']}"
        port "#{ENV['OUTPUT_PORT']}"
        path "#{ENV['OUTPUT_PATH']}"
        scheme "#{ENV['OUTPUT_SCHEME']}"
        ssl_verify "#{ENV['OUTPUT_SSL_VERIFY']}"
        ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
        logstash_format true
        logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
        reconnect_on_error true
        <buffer>
          @type file
          path /var/log/fluentd-buffers/kubernete.system.buffer
          flush_mode interval
          retry_type exponential_backoff
          flush_thread_count 2
          flush_interval 5s
          retry_forever
          retry_max_interval 30
          chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
          queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
          overflow_action block
        </buffer>
      </match>

This setup seems to work for a few minutes, then stops sending logs to both Elasticsearch and s3.
If I remove one of the outputs and redeploy, it works fine. Not sure what's happening; can someone please help?
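
For what it's worth, with two consecutive <match **> blocks only the first one receives events; the usual fluentd pattern for sending the same events to two destinations is the copy output with one <store> per destination. A rough sketch along the lines of the config above (buffer sections elided, env var names as already used above):

extraConfigMaps:
  output.conf: |-
    <match **>
      @type copy                          # fan each event out to every <store> below
      <store>
        @type s3
        s3_bucket "#{ENV['S3_BUCKET_NAME']}"
        s3_region "#{ENV['S3_BUCKET_REGION']}"
        path logs/%Y/%m/%d/
        # the s3 <buffer> section from the config above goes here
      </store>
      <store>
        @type elasticsearch
        host "#{ENV['OUTPUT_HOST']}"
        port "#{ENV['OUTPUT_PORT']}"
        scheme "#{ENV['OUTPUT_SCHEME']}"
        logstash_format true
        logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
        # the elasticsearch <buffer> section from the config above goes here
      </store>
    </match>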

The udp and tcp configs for statsd are not mapped correctly for the graphite chart

Is this a request for help?:
No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Kubernetes version: 1.14.1

Which chart in which version:
Graphite chart

$ helm ls
NAME            REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
graphite-statsd 1               Fri Jun 28 15:14:18 2019        DEPLOYED        graphite-0.3.4                  1.1.5-4         monitoring

What happened:
The udp and tcp config mountPaths in statefulset.yaml seem to be incorrect.

When I exec into the statefulset container, I can see that statsd is looking for those files in a different path

$ kubectl exec -it --namespace=monitoring graphite-statsd-0 sh
ps -aef | grep -i statsd
35 root      0:00 statsd /opt/statsd/config/udp.js

But the configmap for udp is getting mounted under /opt/statsd/config_udp.js instead.
So naturally, statsd ends up loading the default 10 second config in /opt/statsd/config/udp.js and I start losing data points.

What you expected to happen:
The configmaps to be loaded at container start by statsd.

How to reproduce it (as minimally and precisely as possible):
Simply modify either the udp or tcp statsd config on your values.yaml or via --set and check if statsd is reading the correct file.

Anything else we need to know:
I'm submitting a PR for this soon'ish.
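
A hedged sketch of the mount this issue seems to expect in statefulset.yaml, assuming the udp.js/tcp.js configs live in a single ConfigMap-backed volume (names are illustrative):

volumeMounts:
  - name: statsd-config
    mountPath: /opt/statsd/config/udp.js    # the path statsd actually loads, per the ps output above
    subPath: udp.js
  - name: statsd-config
    mountPath: /opt/statsd/config/tcp.js
    subPath: tcp.js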

Ruler sidecar crashing during startup due to SIGSEGV

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:
Kubernetes: 1.13.7
Helm: 2.14.3

Which chart in which version:
prometheus-thanos / 1.2.1

What happened:
During startup of the sidecar container of the Ruler component, it crashes with:

│ k8s-configmap-watcher time="2019-08-07T14:52:19Z" level=info msg="Using in cluster config"                                                                                                 │
│ k8s-configmap-watcher time="2019-08-07T14:52:20Z" level=error msg="unknown (get configmaps)"                                                                                               │
│ k8s-configmap-watcher panic: runtime error: invalid memory address or nil pointer dereference                                                                                              │
│ k8s-configmap-watcher [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1011693]                                                                                             │
│                                                                                                                                                                                            │
│ k8s-configmap-watcher goroutine 19 [running]:                                                                                                                                              │
│ k8s-configmap-watcher main.startWatchConfigMaps(0x0, 0x0, 0x0, 0x0, 0xc420042006, 0x13, 0xc420042027, 0xb, 0xc420040038, 0x1f, ...)                                                        │
│ k8s-configmap-watcher     /go/src/github.com/kiwigrid/k8s-configmap-watcher/main.go:81 +0x583                                                                                              │
│ k8s-configmap-watcher created by main.main.func1                                                                                                                                           │
│ k8s-configmap-watcher     /go/src/github.com/kiwigrid/k8s-configmap-watcher/main.go:60 +0x285

What you expected to happen:
Sidecar container should start normally.

How to reproduce it (as minimally and precisely as possible):
Enable sidecar for Ruler and install Helm Chart.

Anything else we need to know:
Currently, we do not have any ConfigMap with the matching label.

[fluentd-elasticsearch] PR #82 breaks metrics when using awsSigningSidecar

Is this a request for help?:
no

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:
fluentd-elasticsearch-3.0.1

What happened:
PR #82 introduced a bug (among others...) which assigns service ports (including the prom metrics port) to the signing sidecar container, instead of the fluentd container.

In other words, when awsSigningSidecar.enabled=true and some service ports are enabled, the service ports incorrectly forward to the sidecar instead of fluentd-elasticsearch.

What you expected to happen:
Service ports should correctly route to the fluentd-elasticsearch container when the signing proxy sidecar is enabled. For example, when the monitor-agent service on port 24231 is enabled, fluentd-elasticsearch container metrics should be scrape-able from the service.

How to reproduce it (as minimally and precisely as possible):

awsSigningSidecar:
  enabled: true

service:
  type: ClusterIP
  ports:
    - name: "monitor-agent"
      port: 24231

Anything else we need to know:

[BUG] fluentd-elasticsearch config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? BUG REPORT

Version of Helm and Kubernetes:
Helm version:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Kubernetes version: 1.11.9

Which chart in which version:
kiwigrid/fluentd-elasticsearch 4.8.5 (I assume this version, as I just installed using kiwigrid/fluentd-elasticsearch)

What happened:
One of the pods keeps failing with error:
config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://some-ip-address:443/api: Timed out connecting to server"

and logs are not available in my elk stack

What you expected to happen:
pod should not fail and logs should appear

How to reproduce it (as minimally and precisely as possible):
I keep deleting the deployment and reinstalling and it keeps failing

Elasticsearch Breaking Changes

fluentd-elasticsearch (version: 2.5.0)

Upstream elasticsearch has introduced a number of breaking changes (as discussed here). This can be trivially solved by changing _doc to doc in the default config here, but there may be more issues that I've yet to come across.

dial tcp 100.64.75.68:8080: i/o timeout

I simply did the install and tried to configure the data source in Grafana, but I'm getting the timeout error in the Grafana logs.

ENV:
Kubernetes v1.10.11 installed via kops
CNI - Flannel

Command used to install : (I have cloned the repo)

helm install -f values.yaml --name graphite --namespace monitoring .

Try to add the datasource from Grafana and you see the error in log:

2019/01/07 13:30:04 http: proxy error: dial tcp 100.64.75.68:8080: i/o timeout
[root@localhost charts]# kubectl get po -n monitoring -l "app.kubernetes.io/instance=graphite"
NAME         READY     STATUS    RESTARTS   AGE
graphite-0   1/1       Running   0          52m

[root@localhost charts]# kubectl get svc -n monitoring -l "app.kubernetes.io/instance=graphite"
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                          AGE
graphite   ClusterIP   100.64.75.68   <none>        2004/TCP,2003/TCP,8080/TCP,2023/TCP,2024/TCP,8125/UDP,8126/TCP   1h

Request for interest

Hey,

I've recently been using the fluentd-elasticsearch chart (thanks for it). We are currently using fluentd in our Amazon EKS cluster. We set up an AWS ES Domain, however, this requires all requests to ES to be signed using Amazon's IAM v4 signature format.

I've modified the fluentd-elasticsearch chart to add support for including a proxy sidecar that signs the requests coming from fluentd before sending them off to AWS ES. This is enabled with a templated flag.

Proxy project: https://github.com/abutaha/aws-es-proxy

Is there any interest in merging this change into the chart? I can prepare and open a PR for review if desired.

Cheers,
Sandy

suppress user/password if empty

Is this a request for help?: yes, or bug fix, not sure


Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug fix???

Version of Helm and Kubernetes:
helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
[ec2-user@ip-172-31-255-150 fluentd-elasticsearch]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-12-06T01:33:57Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.8-eks-7c34c0", GitCommit:"7c34c0d2f2d0f11f397d55a46945193a0e22d8f3", GitTreeState:"clean", BuildDate:"2019-03-01T22:49:39Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:
fluentd-elasticsearch
version: 2.7.0
appVersion: 2.4.0

What happened:
can't connect to AWS Elasticsearch, suppress user/password if empty?

What you expected to happen:
connect to AWS Elasticsearch (6.4)

How to reproduce it (as minimally and precisely as possible):
helm install . --name fluentd-elasticsearch --namespace logging -f values-mine.yaml
< host: 'vpc-eks-logging-aaaaaaaaaaaaa.us-east-1.es.amazonaws.com'
< port: 443
< scheme: 'https'

host: 'elasticsearch-client'
port: 9200
scheme: 'http'
39,40c37,40
user: ""
password: ""

Anything else we need to know:
[warn]: [elasticsearch] failed to flush the buffer. retry_time=5 next_retry_seconds=2019-03-27 22:13:38 +0000 chunk="58519368ae8d5ef06fb04c89b57fdcc7" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"vpc-eks-logging-6w44gqj2u5xkmsnic766dbezhm.us-east-1.es.amazonaws.com", :port=>443, :scheme=>"https", :user=>"", :password=>"obfuscated"}): [403] {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Basic PG5pbD46PG5pbD4="}"

AWS support says...
it looks like your Fluentd is adding an authorization header to the request.
For AWS ES only AWSSigV4 method[1] is supported for authentication, as such if you use any other type of authorization method, the value for the "Authorization" header will not be in the correct format and ES will reject the request with the error above.

In case of fluentd the authorization header is usually added when you have the "user" and "password" parameters[2] in your elasticsearch output configuration. Since you have an open access policy on your domain, if you simply remove these two parameters from the configuration and the security groups on your domain allow connections from the clients running fluentd, you should be able to connect to your AWS ES domains without any issues.
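
What this issue asks for essentially amounts to wrapping the credentials in a conditional inside the chart's output config template; a rough sketch of the idea (the values path and env var names are assumptions, not the chart's actual template):

# inside the chart's output.conf template (illustrative only)
{{- if .Values.elasticsearch.user }}
      user "#{ENV['OUTPUT_USER']}"
      password "#{ENV['OUTPUT_PASSWORD']}"
{{- end }}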

Bug fluentd mount issue in IBM IKS

Version of Helm and Kubernetes:

Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.8", GitCommit:"7d3d6f113e933ed1b44b78dff4baf649258415e5", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:16Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:
https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch
version: 4.8.4

What happened:
Helm chart works perfectly on GKE but doesn't work on IBM IKS

GKE fluentd pod /var/lib/docker/ mounted path exists:

fluentd-fluentd-elasticsearch-kzsv7:/# df -kh
Filesystem      Size  Used Avail Use% Mounted on
overlay          95G  4.2G   91G   5% /
tmpfs            64M     0   64M   0% /dev
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1        95G  4.2G   91G   5% /var/log
/dev/root       1.2G  691M  530M  57% /usr/lib64
shm              64M     0   64M   0% /dev/shm
shm              64M     0   64M   0% /var/lib/docker/containers/c7953b6a1fca9c99b2d77d12380733368fcf7adc6c370618ce956bc2514e4aa7/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/804fe84001046dfe68f982ac2559d48cfb00db973d0f027c056eac273c74f757/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/564c783f4de9557256b9d92a48d5a69991b16020ecc90e0d4db8593e0e24bdd8/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/9bc4d1143653e27c4c829a96272b237089dbb5d99db824709cf1b4ce50f4323d/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b63acc42a68335a87447de9bb5e42908282bc1e5bc36da6db4a03fcca1e06c58/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/61e1cbeb92ba08b1e49adb1025278bbdb75a96a59d615b1c612da35f4e1f5240/mounts/shm
tmpfs           1.9G   12K  1.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           1.9G     0  1.9G   0% /proc/acpi
tmpfs           1.9G     0  1.9G   0% /proc/scsi
tmpfs           1.9G     0  1.9G   0% /sys/firmware

IBM IKS fluentd pod /var/lib/docker/ mount path doesn't exist:

df -kh
Filesystem               Size  Used Avail Use% Mounted on
overlay                   98G   29G   65G  31% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                    7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/xvda2                25G  5.1G   18G  23% /var/log
/dev/mapper/docker_data   98G   29G   65G  31% /etc/hosts
shm                       64M     0   64M   0% /dev/shm
tmpfs                    7.9G   12K  7.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    7.9G     0  7.9G   0% /proc/acpi
tmpfs                    7.9G     0  7.9G   0% /proc/scsi
tmpfs                    7.9G     0  7.9G   0% /sys/firmware

Errors in IBM IKS fluentd pod:

2019-08-21 15:55:23 +0000 [warn]: [fluentd-containers.log] /var/log/containers/n63f277org1peer1-6686bb8585-m6ps7_n63f277_couchdb-79285b9489b5c51d94e478511f9a0854b9d3264b25e2cc9796fa28d57ee70c6b.log unreadable. It is excluded and would be examined next time.
2019-08-21 15:55:23 +0000 [warn]: [fluentd-containers.log] /var/log/containers/calico-kube-controllers-764b648775-4ftbt_kube-system_calico-kube-controllers-75dd90987cee4640b51812cea4532890ee42c04d6157a0a76a499623c2527ed4.log unreadable. It is excluded and would be examined next time.
2019-08-21 15:55:23 +0000 [warn]: [fluentd-containers.log] /var/log/containers/n63f277org1peer1-6686bb8585-m6ps7_n63f277_peer-7ff2d4c7d89294eaa8eab484379cf52a3f71c31526930462f8dca3d8c7c2186a.log unreadable. It is excluded and would be examined next time.

I can fix this myself if you provide the Dockerfile, because these fluentd daemonset versions are different:
https://github.com/fluent/fluentd-kubernetes-daemonset

What you expected to happen:
/var/lib/docker/containers/ mounts should be applied in IKS k8s fluentd pod

How to reproduce it (as minimally and precisely as possible):

  1. Deploy elasticsearch helm chart, I used repo elastic/elasticsearch but with mount fix (haven't created PR for repo)
  2. Deploy helm repo kiwigrid/fluentd-elasticsearch

Reference for why it doesn't work on IBM IKS (due to non-root account usage):
https://cloud.ibm.com/docs/containers?topic=containers-cs_troubleshoot_storage#file_app_failures

Update: I used the fluentd daemonset manifest without the Helm chart and the following images (with root user), but I still have the same mounting issue in IBM IKS:

        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.4/debian-elasticsearch/Dockerfile

        image: fluent/fluentd-kubernetes-daemonset:v1.2.6-debian-elasticsearch
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.2/debian-elasticsearch/Dockerfile

json log is not picked up

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
Helm v2.14.1
Kubernetes GKE-1.10

Which chart in which version:
kiwigrid/helm-charts/fluentd-elasticsearch

What happened:
After the chart is deployed, logs are generally picked up into Elasticsearch and available in Kibana; however, only syslog or plain-text stdout logs are collected. JSON logs are not picked up or parsed correctly.

What you expected to happen:
json log to be available in Kibana as well

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

1 pod fails to connect to AWS ES

Is this a request for help?:
Yes it is

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:41:50Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:

fluentd-elasticsearch-2.7.0 2.4.0

What happened:
1 pod does not manage to send logs to AWS Elasticsearch while the 2 others seem to work perfectly.

What you expected to happen:
All the pods work like a charm.

How to reproduce it (as minimally and precisely as possible):
I just deploy the chart on my Kubernetes stack with the default values.yaml file configured to target my AWS ES domain.

Anything else we need to know:
I added the following 3 settings to the AWS ES match:
resurrect_after 5s
reload_on_failure true
reload_connections false

Please find below the log file from the unhealthy pod :

2019-03-26 14:24:18 +0000 [error]: config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Exception encountered fetching metadata from Kubernetes API endpoint: pods is forbidden: User \"system:serviceaccount:monitoring:fluentd-monitoring-fluentd-elasticsearch\" cannot list pods at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"pods is forbidden: User \\\"system:serviceaccount:monitoring:fluentd-monitoring-fluentd-elasticsearch\\\" cannot list pods at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"pods\"},\"code\":403}\n)"
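
The config error quoted above is an RBAC denial: the chart's ServiceAccount cannot list pods at cluster scope, which the kubernetes_metadata filter needs. It is odd that only one pod reports it, but a minimal thing to verify, assuming the rbac and serviceAccount values shown elsewhere on this page, is:

rbac:
  create: true          # should create the ClusterRole/ClusterRoleBinding that allows listing pods
serviceAccount:
  create: true          # all daemonset pods then run under the bound account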

Thanos-prometheus additional labels

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please add functionality to specify additional labels and annotations on pods.
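
A rough sketch of the kind of per-component values this could expose (the key names are assumptions, not the chart's current schema):

ruler:
  podLabels:
    team: monitoring
  podAnnotations:
    prometheus.io/scrape: "true"
querier:
  podLabels:
    team: monitoring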

Trouble upgrading an existing release of fluentd-elasticsearch

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

Helm 2.12
K8s: 1.12.3

Which chart:

fluentd-elasticsearch

What happened:

We have a release of fluentd-elasticsearch from the helm/charts repository with version 1.0.3
I am trying to upgrade that to 2.3.0 from this repository.

We use helmfile to do the deploying of our helm releases.

This was the output I got when trying to run helmfile sync.

09:47 $ CONTEXT="APP-Dev" helmfile --kube-context=kubernetes-admin-dev --file=HelmfileBootstrap.yaml --selector name=logging-fluentd sync
Adding repo stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

Adding repo incubator https://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories

Adding repo elastic https://helm.elastic.co
"elastic" has been added to your repositories

Adding repo kiwigrid https://kiwigrid.github.io
"kiwigrid" has been added to your repositories

Adding repo stable_f5 https://f5networks.github.io/charts/stable
"stable_f5" has been added to your repositories

Updating repo
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable_f5" chart repository
...Successfully got an update from the "kiwigrid" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Upgrading kiwigrid/fluentd-elasticsearch
Error: UPGRADE FAILED: DaemonSet.apps "logging-fluentd-fluentd-elasticsearch" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"fluentd-elasticsearch", "app.kubernetes.io/instance":"logging-fluentd"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

err: release "logging-fluentd" in "HelmfileBootstrap.yaml" failed: exit status 1
exit status 1

If I try the upgrade with helm directly I get the same errors.

10:19 $ helm upgrade --install --debug --namespace=logging --kube-context=kubernetes-admin-dev logging-fluentd ./charts/fluentd-elasticsearch -f helmfile-values/logging-fluentd-elasticsearch.yaml
[debug] Created tunnel using local port: '54922'

[debug] SERVER: "127.0.0.1:54922"

Error: UPGRADE FAILED: DaemonSet.apps "logging-fluentd-fluentd-elasticsearch" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"logging-fluentd", "app.kubernetes.io/name":"fluentd-elasticsearch"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

What you expected to happen:

I expected the release to be upgraded to the new version.

How to reproduce it (as minimally and precisely as possible):

Install fluentd-elasticsearch 1.0.3 from helm/charts and upgrade it to fluentd-elasticsearch 2.3.0 from this repository.

Anything else we need to know:

PodSecurityPolicy / VolumeMount need to be more configurable

FEATURE REQUEST
context:
In our case (CFCR, a K8s deployed by BOSH) the *.log files are not located in /var/log/containers but in /var/vcap/data/root_log/containers,
so the default volumeMounts are not relevant in our case.
In the PodSecurityPolicy that is created, the allowed hostPaths are an already-defined list.

To solve that, I think there are several ways to implement it.
Either:
just add a new extraVolumes entry with hostPath /var/vcap/data/root_log/containers plus the matching extraVolumeMounts; the PodSecurityPolicy also needs to allow this new hostPath (see the sketch below).

Or, maybe the easiest solution:
give the opportunity to use a PodSecurityPolicy without creating one (so the policy is created manually), and also make the default volumeMounts configurable.
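
A rough sketch of what the first option could look like in values terms (all key names here are illustrative assumptions; the point is the extra hostPath plus the matching PodSecurityPolicy entry):

extraVolumes:
  - name: vcap-containers
    hostPath:
      path: /var/vcap/data/root_log/containers
extraVolumeMounts:
  - name: vcap-containers
    mountPath: /var/vcap/data/root_log/containers
    readOnly: true
podSecurityPolicy:
  enabled: true
  additionalAllowedHostPaths:            # hypothetical knob so the generated PSP admits the new path
    - pathPrefix: /var/vcap/data/root_log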

Error disabling default configmaps

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T19:44:19Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:

$ helm search kiwigrid/fluentd-elasticsearch
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                                 
kiwigrid/fluentd-elasticsearch  2.9.1           2.5.1           A Fluentd Helm chart for Kubernetes with Elasticsearch ou...

What happened:
Following your values.yaml file, I copy-pasted the following part:

configMaps:
  useDefaults:
    systemConf: false
extraConfigMaps:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
      <log>
        format json
      </log>
    </system>

When I try to apply the changes, it complains:

Error: UPGRADE FAILED: render error in "fluentd-elasticsearch/templates/daemonset.yaml": template: fluentd-elasticsearch/templates/daemonset.yaml:40:28: executing "fluentd-elasticsearch/templates/daemonset.yaml" at <include (print $.Tem...>: error calling include: template: fluentd-elasticsearch/templates/configmap.yaml:15:19: executing "fluentd-elasticsearch/templates/configmap.yaml" at <4>: wrong type for value; expected string; got map[string]interface {}

Following your README as well, I used

configMaps.useDefaults.systemConf: false
extraConfigMaps:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
      <log>
        format json
      </log>
    </system>

With this setting, it does not complain, but the default system conf is added anyway, and my custom configmap is ignored.

What you expected to happen:
The generated YAML would not include the default system.conf and would include my custom system.conf instead.

How to reproduce it (as minimally and precisely as possible):
Create a values.yaml file with this content:

configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: true
    outputConf: true

Run this:

helm upgrade --dry-run --debug --install -f values.yaml fluentd kiwigrid/fluentd-elasticsearch

Anything else we need to know:

buffer errors in fluentd-elasticsearch

2019-02-26 17:08:24 +0000 [error]: [elasticsearch] unexpected error while checking flushed chunks. ignored. error_class=RuntimeError error="can't enqueue buffer file: path = /var/log/fluentd-buffers/kubernetes.system.buffer/buffer.b582cf1a9610279adff4ff90342562a9a.log, error = 'No such file or directory @ rb_file_s_rename - (/var/log/fluentd-buffers/kubernetes.system.buffer/buffer.b582cf1a9610279adff4ff90342562a9a.log, /var/log/fluentd-buffers/kubernetes.system.buffer/buffer.q582cf1a9610279adff4ff90342562a9a.log)'

Error "log does not exist" after deploying

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report

Version of Helm and Kubernetes:
Helm: v2.14.0
Kubernetes: v1.13.5

Which chart in which version:
fluentd-elasticsearch-3.0.1

What happened:
After deployment, no logs end up in ES. The pods have the following errors:

2019-06-11 12:04:01 +0000 [warn]: dump an error event: error_class=ArgumentError error="log does not exist" 

After a while:

2019-06-11 12:04:01 +0000 [warn]: [elasticsearch] failed to write data into buffer by buffer overflow action=:block

What you expected to happen:
That the logs are forwarded to ES.

How to reproduce it (as minimally and precisely as possible):
I only executed:

helm install --name fluentd --namespace logging kiwigrid/fluentd-elasticsearch --set elasticsearch.host=elasticsearch-client.logging.svc.cluster.local,elasticsearch.port=9200

Anything else we need to know:
I don't think so.

Not able to connect to Amazon Elasticsearch Service

Hi there,

I'm trying to send the logs from my Kubernetes cluster to Amazon Elasticsearch Service. Unfortunately I'm getting an error. Below are the logs:
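For reference, the relevant part of my values looks roughly like this (a sketch reconstructed from the pod description below; the key names follow the chart's awsSigningSidecar and elasticsearch values, and the endpoint is redacted):

# Sketch: fluentd sends to the local signing proxy, which forwards to the AWS domain.
awsSigningSidecar:
  enabled: true
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9
elasticsearch:
  host: search-kubernetes-logs-xxxxxxxxxxxxxxxxxxxx.ap-southeast-2.es.amazonaws.com
  port: 443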

$ kubectl logs es-fluentd-elasticsearch-5p2sb -c es-fluentd-elasticsearch
2019-06-20 03:07:40 +0000 [error]: unexpected error error_class=NoMethodError error="undefined method `[]' for nil:NilClass"
2019-06-20 03:07:40 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.4.3/lib/fluent/plugin/out_elasticsearch.rb:324:in `detect_es_major_version'
2019-06-20 03:07:40 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.4.3/lib/fluent/plugin/out_elasticsearch.rb:250:in `block in configure'
2019-06-20 03:07:40 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.4.3/lib/fluent/plugin/elasticsearch_index_template.rb:35:in `retry_operate'
2019-06-20 03:07:40 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-elasticsearch-3.4.3/lib/fluent/plugin/out_elasticsearch.rb:249:in `configure'

$ kubectl describe po/es-fluentd-elasticsearch-5p2sb
Name: es-fluentd-elasticsearch-5p2sb
Namespace: logging
Priority: 0
PriorityClassName:
Node: aks-nodepool1
Start Time: Wed, 19 Jun 2019 19:01:55 +1200
Labels: app.kubernetes.io/instance=es
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=fluentd-elasticsearch
Annotations: checksum/config: 83a245c2df3e2121122
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 10.244.7.88
Controlled By: DaemonSet/es-fluentd-elasticsearch
Containers:
es-fluentd-elasticsearch:
Container ID: docker://b43d76073a8c2e52c55e8b8f205c77cf83d03eb10dd552a1ede9dd3ca200139e
Image: gcr.io/fluentd-elasticsearch/fluentd:v2.5.2
Image ID: docker-pullable://gcr.io/fluentd-elasticsearch/fluentd@sha256:6977aef7bad7ad590376a3d245473230ad1793a715b852360d058d6333913383
Port:
Host Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 20 Jun 2019 15:07:13 +1200
Finished: Thu, 20 Jun 2019 15:07:40 +1200
Ready: False
Restart Count: 220
Liveness: exec [/bin/sh -c LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300}; STUCK_THRESHOLD_SECONDS=${STUCK_THRESHOLD_SECONDS:-900}; if [ ! -e /var/log/fluentd-buffers ]; then
exit 1;
fi; touch -d "${STUCK_THRESHOLD_SECONDS} seconds ago" /tmp/marker-stuck; if [ -z "$(find /var/log/fluentd-buffers -type d -newer /tmp/marker-stuck -print -quit)" ]; then
rm -rf /var/log/fluentd-buffers;
exit 1;
fi; touch -d "${LIVENESS_THRESHOLD_SECONDS} seconds ago" /tmp/marker-liveness; if [ -z "$(find /var/log/fluentd-buffers -type d -newer /tmp/marker-liveness -print -quit)" ]; then
exit 1;
fi;
] delay=600s timeout=1s period=60s #success=1 #failure=3
Environment:
FLUENTD_ARGS: --no-supervisor -q
OUTPUT_HOST: localhost
OUTPUT_PORT: 8080
LOGSTASH_PREFIX: logstash
OUTPUT_SCHEME: http
OUTPUT_SSL_VERIFY: true
OUTPUT_SSL_VERSION: TLSv1_2
OUTPUT_BUFFER_CHUNK_LIMIT: 2M
OUTPUT_BUFFER_QUEUE_LIMIT: 8
K8S_NODE_NAME: (v1:spec.nodeName)
Mounts:
/etc/fluent/config.d from config-volume (rw)
/usr/lib64 from libsystemddir (ro)
/var/lib/docker/containers from varlibdockercontainers (ro)
/var/log from varlog (rw)
/var/run/secrets/kubernetes.io/serviceaccount from es-fluentd-elasticsearch-token-gg7kx (ro)
es-fluentd-elasticsearch-aws-es-proxy:
Container ID: docker://1940858aab2125c0a86350afcff5175540e7abb92d19bc0a35ef979cc7849819
Image: abutaha/aws-es-proxy:0.9
Image ID: docker-pullable://abutaha/aws-es-proxy@sha256:c45e94d79848394cf5425eaa66408d0bd619d5aba25d00a2f3bc1da4b52c0da6
Port:
Host Port:
Args:
-endpoint
https://search-kubernetes-logs-xxxxxxxxxxxxxxxxxxxx.ap-southeast-2.es.amazonaws.com:443
-listen
127.0.0.1:8080
State: Running
Started: Wed, 19 Jun 2019 19:02:03 +1200
Ready: True
Restart Count: 0
Environment:
PORT_NUM: 443
AWS_ACCESS_KEY_ID: AKPPPPPPPPPPPPPPPPPP
AWS_SECRET_ACCESS_KEY: xccxcvxvcxxxxxxxxxxxx
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from es-fluentd-elasticsearch-token-gg7kx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
libsystemddir:
Type: HostPath (bare host directory volume)
Path: /usr/lib64
HostPathType:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: es-fluentd-elasticsearch
Optional: false
es-fluentd-elasticsearch-token-gg7kx:
Type: Secret (a volume populated by a Secret)
SecretName: es-fluentd-elasticsearch-token-gg7kx
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message


Warning BackOff 7m8s (x4642 over 20h) kubelet, aks-nodepool1-11121221 Back-off restarting failed container
Normal Pulled 2m14s (x221 over 20h) kubelet, aks-nodepool1-11121221 Container image "gcr.io/fluentd-elasticsearch/fluentd:v2.5.2" already present on machine

Any help would be greatly appreciated

Detected ES 7.x or above: `_doc` will be used as the document `_type`.

Is this a request for help?:
yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
bug report

Version of Helm and Kubernetes:
kubernetes 1.11.5
helm 2.9.1

Which chart in which version: 2.9.0

What happened:
I keep getting this message, which kills my cluster:

fluentd-elasticsearch-b5rk9 fluentd-elasticsearch 2019-05-05 12:43:00 +0000 [warn]: [elasticsearch] Detected ES 7.x or above: `_doc` will be used as the document `_type`.

What you expected to happen:
values.yaml specifies the output settings with type_name _doc, but it does not seem to be applied?

How to reproduce it (as minimally and precisely as possible):
Open an account on Elastic Cloud and set the chart's output to it.

Anything else we need to know:
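For what it's worth, the message above looks like an informational warning from the Elasticsearch output plugin and may appear regardless of type_name. If the goal is to experiment with the output section anyway, the chart allows replacing the default output.conf entirely; a sketch mirroring the extraConfigMaps approach used elsewhere in these issues:

configMaps:
  useDefaults:
    outputConf: false
extraConfigMaps:
  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      host "#{ENV['OUTPUT_HOST']}"
      port "#{ENV['OUTPUT_PORT']}"
      scheme "#{ENV['OUTPUT_SCHEME']}"
      logstash_format true
      logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
    </match>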

'fluentd-elasticsearch' chart doesn't work as expected with secret option.

Is this a request for help?: Yeap.


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

tklym@Tarass-MacBook-Pro-2 Repositories $ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

tklym@Tarass-MacBook-Pro-2 Repositories $ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:14:49Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

Which chart in which version:

  chart     = "stable/fluentd-elasticsearch"
  name      = "fluentd"
  namespace = "logging"
  version   = "1.1.0"

What happened:

The expected change is not applied to the pods.

What you expected to happen:

The AWS_KEY and AWS_SECRET environment variables are created in the pods.

How to reproduce it (as minimally and precisely as possible):

Install the chart with a secret list provided (see the sketch below).
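A sketch of such a secret list in values.yaml, following the format the chart uses for ELASTICSEARCH_PASSWORD elsewhere in this document (the secret name aws-credentials and its keys are hypothetical):

secret:
- name: AWS_KEY
  secret_name: aws-credentials
  secret_key: aws_key
- name: AWS_SECRET
  secret_name: aws-credentials
  secret_key: aws_secret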

Anything else we need to know:
Nope.

Unable to connect to a secure ES cluster

Is this a request for help?: Yes


Version of Helm and Kubernetes:

Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Which chart in which version:

REVISION: 1
RELEASED: Mon Jun 10 11:04:13 2019
CHART: fluentd-elasticsearch-3.0.1
USER-SUPPLIED VALUES:
elasticsearch:
  auth:
    enabled: false
    password: xxx
    user: elastic
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  host: elasticsearch-master
  logstash_prefix: logstash
  port: 9200
  scheme: https
  ssl_version: TLSv1_2
env: null
extraConfigMaps: null
extraVolumeMounts:
- mountPath: /certs
  name: es-certs
  readOnly: true
extraVolumes:
- name: es-certs
  secret:
    defaultMode: 420
    secretName: elastic-certificate-pem
fluentdArgs: --no-supervisor -q
rbac:
  create: true
secret:
- name: ELASTICSEARCH_PASSWORD
  secret_key: password
  secret_name: elastic-credentials
serviceAccount:
  create: true
  name: ""

COMPUTED VALUES:
affinity: {}
annotations: {}
awsSigningSidecar:
  enabled: false
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9
configMaps:
  useDefaults:
    containersInputConf: true
    forwardInputConf: true
    monitoringConf: true
    outputConf: true
    systemConf: true
    systemInputConf: true
elasticsearch:
  auth:
    enabled: false
    password: xxx
    user: elastic
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  host: elasticsearch-master
  logstash_prefix: logstash
  port: 9200
  scheme: https
  ssl_version: TLSv1_2
extraVolumeMounts:
- mountPath: /certs
  name: es-certs
  readOnly: true
extraVolumes:
- name: es-certs
  secret:
    defaultMode: 420
    secretName: elastic-certificate-pem
fluentdArgs: --no-supervisor -q
hostLogDir:
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64
  varLog: /var/log
image:
  pullPolicy: IfNotPresent
  repository: gcr.io/fluentd-elasticsearch/fluentd
  tag: v2.5.2
livenessProbe:
  enabled: true
nodeSelector: {}
podAnnotations: {}
podSecurityPolicy:
  annotations: {}
  enabled: false
priorityClassName: ""
prometheusRule:
  enabled: false
  labels: {}
  prometheusNamespace: monitoring
rbac:
  create: true
resources: {}
secret:
- name: ELASTICSEARCH_PASSWORD
  secret_key: password
  secret_name: elastic-credentials
service: {}
serviceAccount:
  create: true
  name: ""
serviceMonitor:
  enabled: false
  interval: 10s
  labels: {}
  path: /metrics
tolerations: {}
updateStrategy:
  type: RollingUpdate

HOOKS:
MANIFEST:

---
# Source: fluentd-elasticsearch/templates/configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-elasticsearch
  labels:
    app.kubernetes.io/name: fluentd-elasticsearch
    helm.sh/chart: fluentd-elasticsearch-3.0.1
    app.kubernetes.io/instance: fluentd-elasticsearch
    app.kubernetes.io/managed-by: Tiller
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/startupscript.log.pos
      tag startupscript
    </source>

    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id docker.log
      @type tail
      format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/docker.log.pos
      tag docker
    </source>

    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
      path /var/log/etcd.log
      pos_file /var/log/etcd.log.pos
      tag etcd
    </source>

    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.
    # Example:
    # I0204 07:32:30.020537    3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kubelet.log
      pos_file /var/log/kubelet.log.pos
      tag kubelet
    </source>

    # Example:
    # I1118 21:26:53.975789       6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-proxy.log
      pos_file /var/log/kube-proxy.log.pos
      tag kube-proxy
    </source>

    # Example:
    # I0204 07:00:19.604280       5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-apiserver.log
      pos_file /var/log/kube-apiserver.log.pos
      tag kube-apiserver
    </source>

    # Example:
    # I0204 06:55:31.872680       5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-controller-manager.log
      pos_file /var/log/kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>

    # Example:
    # W0204 06:49:18.239674       7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-scheduler.log
      pos_file /var/log/kube-scheduler.log.pos
      tag kube-scheduler
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/glbc.log
      pos_file /var/log/glbc.log.pos
      tag glbc
    </source>

    # Example:
    # TODO Add a proper example here.
    <source>
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>
  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      type_name _doc
      host "#{ENV['OUTPUT_HOST']}"
      port "#{ENV['OUTPUT_PORT']}"
      scheme "#{ENV['OUTPUT_SCHEME']}"
      ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
      ssl_verify true
      logstash_format true
      logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
      reconnect_on_error true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
        queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
        overflow_action block
      </buffer>
    </match>
---
# Source: fluentd-elasticsearch/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-elasticsearch
  labels:
    app.kubernetes.io/name: fluentd-elasticsearch
    helm.sh/chart: fluentd-elasticsearch-3.0.1
    app.kubernetes.io/instance: fluentd-elasticsearch
    app.kubernetes.io/managed-by: Tiller
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
# Source: fluentd-elasticsearch/templates/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-elasticsearch
  labels:
    app.kubernetes.io/name: fluentd-elasticsearch
    helm.sh/chart: fluentd-elasticsearch-3.0.1
    app.kubernetes.io/instance: fluentd-elasticsearch
    app.kubernetes.io/managed-by: Tiller
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
# Source: fluentd-elasticsearch/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-elasticsearch
  labels:
    app.kubernetes.io/name: fluentd-elasticsearch
    helm.sh/chart: fluentd-elasticsearch-3.0.1
    app.kubernetes.io/instance: fluentd-elasticsearch
    app.kubernetes.io/managed-by: Tiller
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-elasticsearch
  namespace: logging
roleRef:
  kind: ClusterRole
  name: fluentd-elasticsearch
  apiGroup: rbac.authorization.k8s.io
---
# Source: fluentd-elasticsearch/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    app.kubernetes.io/name: fluentd-elasticsearch
    helm.sh/chart: fluentd-elasticsearch-3.0.1
    app.kubernetes.io/instance: fluentd-elasticsearch
    app.kubernetes.io/managed-by: Tiller
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  updateStrategy:
    type: RollingUpdate

  selector:
    matchLabels:
      app.kubernetes.io/name: fluentd-elasticsearch
      app.kubernetes.io/instance: fluentd-elasticsearch
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fluentd-elasticsearch
        helm.sh/chart: fluentd-elasticsearch-3.0.1
        app.kubernetes.io/instance: fluentd-elasticsearch
        app.kubernetes.io/managed-by: Tiller
        kubernetes.io/cluster-service: "true"
      annotations:
        # This annotation ensures that fluentd does not get evicted if the node
        # supports critical pod annotation based priority scheme.
        # Note that this does not guarantee admission on the nodes (#40573).
        # NB! this annotation is deprecated as of version 1.13 and will be removed in 1.14.
        # ref: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
        scheduler.alpha.kubernetes.io/critical-pod: ''
        checksum/config: ae070472c960fbf8a8fd16a51455814c6ba9d07842f2f23551de68d7ba4233bb
    spec:
      serviceAccountName: fluentd-elasticsearch
      containers:
      - name: fluentd-elasticsearch
        image:  "gcr.io/fluentd-elasticsearch/fluentd:v2.5.2"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: FLUENTD_ARGS
          value: "--no-supervisor -q"
        - name: OUTPUT_HOST
          value: "elasticsearch-master"
        - name: OUTPUT_PORT
          value: "9200"
        - name: LOGSTASH_PREFIX
          value: "logstash"
        - name: OUTPUT_SCHEME
          value: "https"
        - name: OUTPUT_SSL_VERSION
          value: "TLSv1_2"
        - name: OUTPUT_BUFFER_CHUNK_LIMIT
          value: "2M"
        - name: OUTPUT_BUFFER_QUEUE_LIMIT
          value: "8"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: "password"
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          {}

        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: libsystemddir
          mountPath: /usr/lib64
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
        - mountPath: /certs
          name: es-certs
          readOnly: true
          #pointing to fluentd Dockerfile
        # Liveness probe is aimed to help in situarions where fluentd
        # silently hangs for no apparent reasons until manual restart.
        # The idea of this probe is that if fluentd is not queueing or
        # flushing chunks for 5 minutes, something is not right. If
        # you want to change the fluentd configuration, reducing amount of
        # logs fluentd collects, consider changing the threshold or turning
        # liveness probe off completely.
        livenessProbe:
          initialDelaySeconds: 600
          periodSeconds: 60
          exec:
            command:
            - '/bin/sh'
            - '-c'
            - >
              LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300};
              STUCK_THRESHOLD_SECONDS=${STUCK_THRESHOLD_SECONDS:-900};
              if [ ! -e /var/log/fluentd-buffers ];
              then
                exit 1;
              fi;
              touch -d "${STUCK_THRESHOLD_SECONDS} seconds ago" /tmp/marker-stuck;
              if [ -z "$(find /var/log/fluentd-buffers -type d -newer /tmp/marker-stuck -print -quit)" ];
              then
                rm -rf /var/log/fluentd-buffers;
                exit 1;
              fi;
              touch -d "${LIVENESS_THRESHOLD_SECONDS} seconds ago" /tmp/marker-liveness;
              if [ -z "$(find /var/log/fluentd-buffers -type d -newer /tmp/marker-liveness -print -quit)" ];
              then
                exit 1;
              fi;
        ports:
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      # It is needed to copy systemd library to decompress journals
      - name: libsystemddir
        hostPath:
          path: /usr/lib64
      - name: config-volume
        configMap:
          name: fluentd-elasticsearch
      - name: es-certs
        secret:
          defaultMode: 420
          secretName: elastic-certificate-pem

What happened:

I tried to set up the chart by passing in the CA certificate I use with Kibana, in PEM format. When the pods start up they crash with this error:

2019-06-10 15:10:36 +0000 [error]: unexpected error error_class=Faraday::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n Excon.defaults[:ssl_ca_path] = path_to_certs\n ENV['SSL_CERT_DIR'] = path_to_certs\n Excon.defaults[:ssl_ca_file] = path_to_file\n ENV['SSL_CERT_FILE'] = path_to_file\n Excon.defaults[:ssl_verify_callback] = callback\n (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n Excon.defaults[:ssl_verify_peer] = false (less secure).\n"

The docs are unclear about what format the certificate needs to be in and where it should be placed. Simply giving the chart the same certificate Kibana uses does not seem to be enough.
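One thing that might be worth trying (purely an assumption based on the Excon hints in the error above, not something the chart documents): point Ruby's OpenSSL at the mounted CA file through the chart's env value, for example:

# Sketch: assumes env takes plain key/value pairs and that the file name
# matches the key stored in the elastic-certificate-pem secret (hypothetical).
env:
  SSL_CERT_FILE: /certs/elastic-certificate.pem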

What you expected to happen:

Pods start successfully and communicate with ES.

How to reproduce it (as minimally and precisely as possible):

I installed a secure version of ES off this chart:

https://github.com/elastic/helm-charts/tree/6.5.2-alpha1/elasticsearch#security

ES seems to be working and the cluster is green. I have also been able to get Kibana to talk to it.

Anything else we need to know:

fluentd-elasticsearch chart failed with extraVolumeMounts value

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG Report

Version of Helm and Kubernetes:
k8s 1.14.1
Helm 2.13.1

Which chart in which version:
fluentd-elasticsearch 2.10.1

What happened:
helm upgrade failed with:

Error: YAML parse error on fluentd-elasticsearch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 106: did not find expected key
YAML parse error on fluentd-elasticsearch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 106: did not find expected key

What you expected to happen:
helm upgrade should run without any errors

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
Fixed by moving the extraVolumeMounts template to the right position:

--- a/charts/fluentd-elasticsearch/templates/daemonset.yaml
+++ b/charts/fluentd-elasticsearch/templates/daemonset.yaml
@@ -121,7 +121,18 @@ spec:
           readOnly: true
         - name: config-volume
           mountPath: /etc/fluent/config.d
-{{- if .Values.livenessProbe.enabled }}  #pointing to fluentd Dockerfile
+{{- if .Values.extraVolumeMounts }}
+{{ toYaml .Values.extraVolumeMounts | indent 8 }}
+{{- end }}
+        ports:
+{{- range $port := .Values.service.ports }}
+          - name: {{ $port.name }}
+            containerPort: {{ $port.port }}
+{{- if $port.protocol }}
+            protocol: {{ $port.protocol }}
+{{- end }}
+{{- end }}
+{{- if .Values.livenessProbe.enabled }}
         # Liveness probe is aimed to help in situarions where fluentd
         # silently hangs for no apparent reasons until manual restart.
         # The idea of this probe is that if fluentd is not queueing or
@@ -165,18 +176,6 @@ spec:
         - name: PORT_NUM
           value: "{{ .Values.elasticsearch.port }}"
       {{- end }}
-{{- if .Values.extraVolumeMounts }}
-{{ toYaml .Values.extraVolumeMounts | indent 8 }}
-{{- end }}
-        ports:
-{{- range $port := .Values.service.ports }}
-          - name: {{ $port.name }}
-            containerPort: {{ $port.port }}
-{{- if $port.protocol }}
-            protocol: {{ $port.protocol }}
-{{- end }}
-{{- end }}
-
       terminationGracePeriodSeconds: 30
       volumes:
       - name: varlog

Skipped value (map[]) for 'extraConfigMaps', as it is not a table

Is this a request for help?:
YES

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
HELM: 2.12.2,2.14.1
Kubernetes: 1.13.5

Which chart in which version:
3.0.1

What happened:
The kiwigrid/fluentd-elasticsearch chart produces warnings (see 'How to reproduce it') when used as a subchart.

What you expected to happen:
No warnings

How to reproduce it (as minimally and precisely as possible):

  1. Create a simplistic parent chart and specify fluentd-elasticsearch in its requirements.yaml (see the sketch at the end of this issue)
  2. Override extraConfigMaps in values.yaml of parent chart

parent-chart/values.yaml:

fluentd-elasticsearch:
  configMaps:
    useDefaults:
      outputConf: false
  extraConfigMaps:
    output.conf: |-
      <match **>
        @id elasticsearch
        @type elasticsearch
        @log_level info
         .....
  3. Run helm template parent-chart > out.yaml:
Warning: Building values map for chart 'fluentd-elasticsearch'. Skipped value (map[]) for 'extraConfigMaps', as it is not a table
Warning: Building values map for chart 'fluentd-elasticsearch'. Skipped value (map[]) for 'extraConfigMaps', as it is not a table
Warning: Building values map for chart 'fluentd-elasticsearch'. Skipped value (map[]) for 'extraConfigMaps', as it is not a table
Warning: Building values map for chart 'fluentd-elasticsearch'. Skipped value (map[]) for 'extraConfigMaps', as it is not a table
.
.

Anything else we need to know:
Commenting out extraConfigMaps in values.yaml (kiwigrid/fluentd-elasticsearch) or setting it equal to {} helps
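For completeness, the parent chart mentioned in step 1 declares the dependency roughly like this (a sketch; the chart version is simply the one reported above):

# parent-chart/requirements.yaml (Helm v2 dependency syntax)
dependencies:
  - name: fluentd-elasticsearch
    version: 3.0.1
    repository: https://kiwigrid.github.io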
