splunk / fluent-plugin-splunk-hec

This is the Fluentd output plugin for sending events to Splunk via HEC.

License: Apache License 2.0


fluent-plugin-splunk-hec's Introduction

End of Support

Important: The fluent-plugin-splunk-hec will reach End of Support on January 1, 2024. After that date, this repository will no longer receive updates from Splunk and will no longer be supported by Splunk. Until then, only critical security fixes and bug fixes will be provided.

fluent-plugin-splunk-hec

Fluentd output plugin to send events and metrics to Splunk in 2 modes:

  1. Via Splunk's HEC (HTTP Event Collector) API
  2. Via the Splunk Cloud Services (SCS) Ingest API

Installation

RubyGems

$ gem install fluent-plugin-splunk-hec

Bundler

Add the following line to your Gemfile:

gem "fluent-plugin-splunk-hec"

And then execute:

$ bundle

Configuration

Example 1: Minimum HEC Configuration

<match **>
  @type splunk_hec
  hec_host 12.34.56.78
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000
</match>

This example is very basic: it tells the plugin to send events to Splunk HEC at https://12.34.56.78:8088 (https is the default protocol), using the HEC token 00000000-0000-0000-0000-000000000000. It will use whatever index, source, and sourcetype are configured in HEC, and the host of each event is the hostname of the machine which is running fluentd.

Example 2: SCS Ingest API Configuration

<match **>
  @type splunk_ingest_api
  service_client_identifier xxxxxxxx
  service_client_secret_key xxxx-xxxxx
  token_endpoint /token
  ingest_auth_host auth.scp.splunk.com
  ingest_api_host api.scp.splunk.com
  ingest_api_tenant <mytenant>
  ingest_api_events_endpoint /<mytenant>/ingest/v1beta2/events
  debug_http false
</match>

This example shows the configuration used for sending events to the ingest API. It shows how service_client_identifier and service_client_secret_key are used to get a token from token_endpoint and send events to ingest_api_host for the tenant ingest_api_tenant at the endpoint ingest_api_events_endpoint. The debug_http flag indicates whether to print debug logs to stdout.

Example 3: Overwrite HEC defaults

<match **>
  @type splunk_hec
  hec_host 12.34.56.78
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000

  index awesome
  source ${tag}
  sourcetype _json
</match>

This configuration will

  • send all events to the awesome index, and
  • set their source to the event tags. ${tag} is a special value which will be replaced by the event tags, and
  • set their sourcetype to _json.

Sometimes you want to use values from the input event for these parameters; this is where the *_key parameters help.

<match **>
  ...omitting other parameters...

  source_key file_path
</match>

In this example (to keep it concise, we omitted the repeating parameters, and we will keep doing so in the following examples), the source_key config sets the source of the event to the value of the event's file_path field. Given an input event like

{"file_path": "/var/log/splunk.log", "message": "This is an exmaple.", "level": "info"}

Then the source for this event will be "/var/log/splunk.log", and the "file_path" field will be removed from the input event, so what eventually gets ingested into Splunk is:

{"message": "This is an example.", "level": "info"}

If you want to keep "file_path" in the event, you can use keep_keys.
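
For example, a minimal sketch that keeps the file_path field in the event while still using it as the source (keep_keys is documented in the parameter list below):

<match **>
  ...omitting other parameters...

  source_key file_path
  keep_keys true
</match>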

Besides source_key there are other *_key parameters; check the parameter details below.

Example 4: Sending metrics

Metric indexes are available since Splunk 7.0.0. You can use this output plugin to send events as metrics to a Splunk metric index by setting data_type to "metric".

<match **>
  @type splunk_hec
  data_type metric
  hec_host 12.34.56.78
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000
</match>

With this configuration, the plugin will treat each input event as a collection of metrics, i.e. each key-value pair in the event is a metric name-value pair. For example, given an input event like

{"cpu/usage": 0.5, "cpu/rate": 10, "memory/usage": 100, "memory/rss": 90}

then 4 metrics will be sent to Splunk.

If the input events are not like this, and instead have the metric name and metric value as properties of the event, you can use metric_name_key and metric_value_key. Given an input event like

{"metric": "cpu/usage", "value": 0.5, "app": "web_ui"}

You should change the configuration to

<match **>
  @type splunk_hec
  data_type metric
  hec_host 12.34.56.78
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000

  metric_name_key metric
  metric_value_key value
</match>

All other properties of the input event (in this example, "app") will be sent as dimensions of the metric. You can use the <fields> section to customize the dimensions.

Type of plugin

@type

This value must be set to splunk_hec when using the HEC API, and to splunk_ingest_api when using the ingest API. Only one type, either splunk_hec or splunk_ingest_api, should be used when configuring this plugin.

Parameters for splunk_hec

protocol (enum) (optional)

This is the protocol to use for calling the HEC API. Available values are: http, https. This parameter is set to https by default.

hec_host (string) (required)

The hostname/IP for the HEC token or the HEC load balancer.

hec_port (integer) (optional)

The port number for the HEC token or the HEC load balancer. The default value is 8088.

hec_token (string) (required)

Identifier for the HEC token.

hec_endpoint (string) (optional)

The HEC REST API endpoint to use. The default value is services/collector.

metrics_from_event (bool) (optional)

When data_type is set to "metric", the plugin will treat every key-value pair in the input event as a metric name-value pair. Set metrics_from_event to false to disable this behavior and use metric_name_key and metric_value_key to define metrics instead. The default value is true.

metric_name_key (string) (optional)

Field name that contains the metric name. This parameter only works in conjunction with the metrics_from_event parameter. When this parameter is set, metrics_from_event is automatically set to false.

metric_value_key (string) (optional)

Field name that contains the metric value. This parameter is required when metric_name_key is configured.

coerce_to_utf8 (bool) (optional)

Indicates whether to allow non-UTF-8 characters in user logs. If set to true, any non-UTF-8 character is replaced by the string specified in non_utf8_replacement_string. If set to false, the Ingest API errors out any non-UTF-8 characters. This parameter is set to true by default.

non_utf8_replacement_string (string) (optional)

If coerce_to_utf8 is set to true, any non-UTF-8 character is replaced by the string you specify in this parameter. The parameter is set to ' ' by default.
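
A minimal sketch showing these two parameters together; the replacement string here is just an illustration:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  coerce_to_utf8 true
  non_utf8_replacement_string ?
</match>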

Parameters for splunk_ingest_api

service_client_identifier: (optional) (string)

Splunk uses the client identifier to make authorized requests to the ingest API.

service_client_secret_key: (string)

The secret key that the client identifier uses to make authorized requests to the ingest API.

token_endpoint: (string)

This value indicates which endpoint Splunk should look to for the authorization token necessary for requests to the ingest API.

ingest_api_host: (string)

Indicates which url/hostname to use for requests to the ingest API.

ingest_api_tenant: (string)

Indicates which tenant Splunk should use for requests to the ingest API.

ingest_api_events_endpoint: (string)

Indicates which endpoint to use for requests to the ingest API.

debug_http: (bool)

Set to true if you want to debug requests and responses to the ingest API. The default is false.

Parameters for both splunk_hec and splunk_ingest_api

index (string) (optional)

Identifier for the Splunk index to be used for indexing events. If this parameter is not set,
the index is decided by HEC. Cannot set both index and index_key parameters at the same time.

index_key (string) (optional)

The field name that contains the Splunk index name. Cannot set both index and index_key parameters at the same time.
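
A minimal sketch of index_key, assuming the input events carry a hypothetical target_index field:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  index_key target_index
</match>

Given an input event like {"target_index": "awesome", "message": "hello"}, the event is sent to the awesome index, and the target_index field is removed before ingestion unless keep_keys is set to true.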

host (string) (optional)

The host location for events. Cannot set both host and host_key parameters at the same time.
If the parameter is not set, the default value is the hostname of the machine running fluentd.

host_key (string) (optional)

Key for the host location. Cannot set both host and host_key parameters at the same time.

source (string) (optional)

The source field for events. If this parameter is not set, the source will be decided by HEC.
Cannot set both source and source_key parameters at the same time.

source_key (string) (optional)

Field name that contains the source. Cannot set both source and source_key parameters at the same time.

sourcetype (string) (optional)

The sourcetype field for events. When not set, the sourcetype is decided by HEC.
Cannot set both sourcetype and sourcetype_key parameters at the same time.

sourcetype_key (string) (optional)

Field name that contains the sourcetype. Cannot set both sourcetype and sourcetype_key parameters at the same time.

time_key (string) (optional)

Field name that contains the Splunk event time. By default, fluentd's time is used.
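
A minimal sketch of time_key, assuming the input events carry a hypothetical timestamp field whose value should be used as the Splunk event time:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  time_key timestamp
</match>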

fields (init) (optional)

Lets you specify the index-time fields for the event data type, or metric dimensions for the metric data type. Null value fields are removed.

keep_keys (boolean) (Optional)

By default, all the fields used by the *_key parameters are removed from the original input events. To change this behavior, set this parameter to true. This parameter is set to false by default. When set to true, all fields defined in index_key, host_key, source_key, sourcetype_key, metric_name_key, and metric_value_key are saved in the original event.

<fields> section (optional) (single)

Depending on the value of the data_type parameter, the parameters inside the <fields> section have different meanings. Regardless of the meaning, the syntax is the same.

app_name (string) (Optional)

Splunk app name using this plugin (defaults to hec_plugin_gem).

app_version (string) (Optional)

The version of the Splunk app using this plugin (defaults to the plugin version).

custom_headers (Hash) (Optional)

Hash of custom headers to be added to the HTTP request. Used to populate override_headers attribute of the underlying Net::HTTP::Persistent connection.
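
A minimal sketch of custom_headers; the header name and value are hypothetical, and the hash is written in Fluentd's JSON-style syntax for hash parameters:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  custom_headers {"X-Custom-Header": "some-value"}
</match>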

When data_type is event

In this case, parameters inside <fields> are used as indexed fields and removed from the original input events. See "Add a 'fields' property at the top JSON level" in the Splunk HEC documentation for details. Given a configuration like

<match **>
  @type splunk_hec
  ...omitting other parameters...

  <fields>
    file
    level
    app application
  </fields>
</match>

and an input event like

{"application": "webServer", "file": "server.rb", "lineNo": 100, "level": "info", "message": "Request finished in 30ms."}

Then the HEC request JSON payload will be:

{
   // omitting other fields
   // ...
   "event": "{\"lineNo\": 100, \"message\": \"Request finished in 30ms.\"}",
   "fields": {
     "file": "server.rb",
     "level": "info",
     "app": "webServer"
   }
}

As you can see, a parameter inside the <fields> section can be a key-value pair or just a key (a name). If a parameter is a key-value pair, the key will be the name of the field inside the "fields" JSON object, whereas the value is the field name in the input event. So a key-value pair is a rename.

If a parameter has just a key, it means its value is exactly the same as the key.

When data_type is metric

For metrics, parameters inside <fields> are used as dimensions. If <fields> is not present, the original input event will be used as dimensions. If an empty <fields></fields> is present, no dimensions are sent. For example, given the following configuration:

<match **>
  @type splunk_hec
  data_type metric
  ...omitting other parameters...

  metric_name_key name
  metric_value_key value
  <fields>
    file
    level
    app application
  </fields>
</match>

and the following input event:

{"application": "webServer", "file": "server.rb", "value": 100, "status": "OK", "message": "Normal", "name": "CPU Usage"}

Then, a metric of "CPU Usage" with value=100, along with 3 dimensions file="server.rb", status="OK", and app="webServer" are sent to Splunk.

<format> section (optional) (multiple)

The <format> section lets you define which formatter to use to format events. By default, it uses the json formatter.

Besides the @type parameter, you should define the other parameters for the formatter inside this section.

Multiple <format> sections can be defined to use different formatters for different tags. Each <format> section accepts an argument just like the <match> section does to define tag matching. By default, every event is formatted with json. For example:

<match **>
  @type splunk_hec
  ...

  <format sometag.**>
    @type single_value
    message_key log
  </format>

  <format some.othertag>
    @type csv
    fields ["some", "fields"]
  </format>
</match>

This example:

  • Formats events with tags that start with sometag. with the single_value formatter
  • Formats events with tags some.othertag with the csv formatter
  • Formats all other events with the json formatter (the default formatter)

If you want to use a different default formatter, you can add a <format **> (or <format>) section.
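
For example, a minimal sketch that makes single_value the default formatter for all events (message_key log is just an illustration, as in the example above):

<match **>
  @type splunk_hec
  ...

  <format **>
    @type single_value
    message_key log
  </format>
</match>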

@type (string) (required)

Specifies which formatter to use.

Net::HTTP::Persistent parameters (optional)

The following parameters can be used for tuning HTTP connections:

gzip_compression (boolean)

Whether to use gzip compression on outbound posts. This parameter is set to false by default for backwards compatibility.

idle_timeout (integer)

The default is five seconds. If a connection has not been used for five seconds, it is automatically reset at next use, in order to avoid attempting to send to a closed connection. Specify nil to disable the timeout.

read_timeout (integer)

The amount of time allowed between reading two chunks from the socket. The default value is nil, which means no timeout.

open_timeout (integer)

The amount of time to wait for a connection to be opened. The default is nil, which means no timeout.
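
A minimal sketch combining the connection-tuning parameters above; the values shown are illustrative, not recommendations:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  gzip_compression true
  idle_timeout 10
  read_timeout 60
  open_timeout 10
</match>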

SSL parameters

The following optional parameters let you configure SSL for HTTPS protocol.

client_cert (string)

The path to a file containing a PEM-format certificate for this client.

client_key (string)

The private key for this client.

ca_file (string)

The path to a file containing CA certificates in PEM format. The plugin will verify the TLS server certificate presented by Splunk against the certificates in this file, unless verification is disabled by the insecure_ssl option.

ca_path (string)

The path to a directory containing CA certificates in PEM format. The plugin will verify the TLS server certificate presented by Splunk against the certificates in this directory, unless verification is disabled by the insecure_ssl option.

ciphers (array)

List of SSL ciphers allowed.

insecure_ssl (bool)

Specifies whether an insecure SSL connection is allowed. If set to false (the default), the plugin will verify the TLS server certificate presented by Splunk against the CA certificates provided by the ca_file/ca_path options, and reject the certificate if verification fails.

require_ssl_min_version (bool)

When set to true (the default), the plugin will require TLSv1.1 or later for its connection to Splunk.
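
For reference, a minimal sketch of an HTTPS configuration that combines the SSL parameters above; all file paths are hypothetical placeholders:

<match **>
  @type splunk_hec
  protocol https
  ...omitting other parameters...

  ca_file /path/to/ca.pem
  client_cert /path/to/client_cert.pem
  client_key /path/to/client_key.pem
  insecure_ssl false
</match>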

consume_chunk_on_4xx_errors (bool)

Specifies whether any 4xx HTTP response status code consumes the buffer chunks. If set to false, the plugin will fail to flush the buffer on such status codes. This parameter is set to true by default for backwards compatibility.

About Buffer

This plugin sends events to HEC in batch mode: it batches all events in a chunk into one request. You therefore need to configure the <buffer> section carefully to get the best performance. Here are some hints:

  • Read through the fluentd buffer document to understand the buffer configurations.
  • Use chunk_limit_size and/or chunk_limit_records to define how big a chunk can be. And remember that all events in a chunk will be sent in one request.
  • Splunk has a limit on how big the payload of an HEC request can be, defined by max_content_length in the [http_input] section of limits.conf. In Splunk version 6.5.0 and later, the default value is 800MiB, while in versions before 6.5.0 it is just 1MB. Make sure your chunk size will not exceed this limit, or change the limit on your Splunk deployment.
  • Sending requests to HEC takes time, so if you flush your fluentd buffer too fast (for example, with a very small flush_interval), the plugin may not be able to keep up with the buffer flushing. There are two ways to handle this: increase flush_interval, or use multiple flush threads by setting flush_thread_count to a number bigger than 1. A sample <buffer> configuration is sketched below.
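
A minimal sketch of a <buffer> section following the hints above; the sizes and intervals are illustrative only and should be tuned for your workload and your Splunk max_content_length setting:

<match **>
  @type splunk_hec
  ...omitting other parameters...

  <buffer>
    chunk_limit_size 8MB
    chunk_limit_records 100000
    flush_interval 5s
    flush_thread_count 2
  </buffer>
</match>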

License

Please see LICENSE.

fluent-plugin-splunk-hec's People

Contributors

abisnar, ahma, albinvass, aryznar-splunk, bcotton, chaitanyaphalak, cjgibson, conorevans, dbaldwin-splunk, dependabot[bot], foram-splunk, gmcnaught, gp510, hbrewster-splunk, howdoicomputer, hvaghani221, inthemuse, jenworthington, liveaverage, luckyj5, mwang2016, rdooley, rockb1017, szymonpk, toscott, vears91, ventsislav-georgiev, vihasmakwana, vinzent, yrro


fluent-plugin-splunk-hec's Issues

unexpected error in namespace watcher thread

What happened:

Pod restarts multiple times an hour, exiting with an error mentioned in the issue below.

Same issue as fabric8io/fluent-plugin-kubernetes_metadata_filter#222

What you expected to happen:

No errors.

How to reproduce it (as minimally and precisely as possible):

Use the current version of the image which has v2.4.5 of fluent-plugin-kubernetes_metadata_filter.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): /
  • Ruby version (use ruby --version): /
  • OS (e.g: cat /etc/os-release): /
  • Splunk version: /
  • Others: Sorry, can't give much information.

Updating to v2.4.6 fixes the issue. fabric8io/fluent-plugin-kubernetes_metadata_filter#223

Sorry I didn't do a PR (never messed around with Ruby and not sure how to properly update a package version).

Cannot run plugin on fluentd debian docker image

In the current gemspec, this plugin requires ruby >= 2.4.0.

However, the most recent Debian docker image of fluentd (as of today) runs ruby 2.3.3p222, which means I can't run this plugin using the docker image supplied by fluentd without reinstalling ruby on it.

$ docker run --rm fluent/fluentd:v1.2.5-debian ruby --version
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]

Is it possible to make the plugin work on ruby >= 2.3.0, so I can run it on the fluentd-supplied image?

Showing a SSL error when trying to push logs to splunk using Fluentd

When trying to send logs to Splunk using Fluentd, I got the error

"error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol"

This happens even though I haven't enabled SSL on the Splunk server.
Here is my configuration.

<store>
    @type splunk_hec
    hec_host charithk
    hec_port 8088
    hec_token XXXX
    index fdindex
    <buffer>
      flush_interval 5s
    </buffer>
 </store>

Splunk runs on my local server.
Why does this error appear, and how do I solve it?

Error installing fluent-plugin-splunk-hec due to activesupport

What happened:
Installation failed with following error message

ERROR:  Error installing fluent-plugin-splunk-hec:
	There are no versions of activesupport (= 6.0.1) compatible with your Ruby & RubyGems. Maybe try installing an older version of the gem you're looking for?
	activesupport requires Ruby version >= 2.5.0. The current ruby version is 2.4.9.362.

As suggested in the message, I tried again after installing lower version of activesupport ( -v 5.2.3 ).

  • activesupport -v 5.2.3 got installed successfully, but fluent-plugin-splunk-hec is still failing with the same error message. As per td-agent, Ruby 2.5 is not supported for td-agent 1.7.4.
# /opt/td-agent/embedded/bin/fluent-gem install activesupport -v 5.2.3
Successfully installed activesupport-5.2.3
Parsing documentation for activesupport-5.2.3
Done installing documentation for activesupport after 3 seconds
1 gem installed
# /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-splunk-hec
ERROR:  Error installing fluent-plugin-splunk-hec:
	There are no versions of activesupport (= 6.0.1) compatible with your Ruby & RubyGems. Maybe try installing an older version of the gem you're looking for?
	activesupport requires Ruby version >= 2.5.0. The current ruby version is 2.4.9.362.
[root@xdpapi-asa-04-rev ~]# /opt/td-agent/embedded/bin/fluent-gem list

What you expected to happen:

fluent-plugin-splunk-hec should get installed successfully (not sure why it's looking for activesupport = 6.0.1 even though I installed the required lower version, -v 5.2.3).

How to reproduce it (as minimally and precisely as possible):

  1. Install td-agent3: curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh
    Ref: https://support.treasuredata.com/hc/en-us/articles/207048547-Installing-td-agent-on-RHEL-and-CentOS-
  2. Install /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-splunk-hec
  3. You should see an error message mentioned above.

Anything else we need to know?:

would be happy to provide more information if required.

Environment:

  • OS (e.g: cat /etc/os-release): CentOS Linux release 7.5.1804 (Core)
  • td-agent: td-agent 1.7.4
  • Ruby version (use /opt/td-agent/embedded/bin/ruby -v): ruby 2.4.9p362 (2019-10-02 revision 67824) [x86_64-linux]

Thanks for your help.

Unable to activate activesupport-6.0.0

What happened:
I cloned the v1.1.1 code to include the kubernetes_metadata plugin per my requirement. I used the v1.1.1 tag and then followed the steps below:

  • Modified the Dockerfile to include gem install -N fluent-plugin-kubernetes_metadata_filter -v "2.1.2"

  • Followed the below step to create a new docker image.

gem install fluent-plugin-splunk-hec --install-dir .
docker build --no-cache -t fluent-plugin-splunk-hec -f docker/Dockerfile .
  • When I deploy the daemon set with this new image, I get the below error. I am also able to reproduce this with the develop branch
/usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:2332:in `raise_if_conflicts': Unable to activate activesupport-6.0.0, because tzinfo-2.0.0 conflicts with tzinfo (~> 1.1) (Gem::ConflictError)
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:1441:in `activate'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:1475:in `block in activate_dependencies'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:1461:in `each'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:1461:in `activate_dependencies'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/specification.rb:1443:in `activate'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems.rb:224:in `rescue in try_activate'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems.rb:217:in `try_activate'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:128:in `rescue in require'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:39:in `require'
	from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-2.1.2/lib/fluent/plugin/filter_kubernetes_metadata.rb:157:in `configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/plugin.rb:164:in `configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/agent.rb:152:in `add_filter'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/agent.rb:70:in `block in configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/agent.rb:64:in `each'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/agent.rb:64:in `configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/root_agent.rb:150:in `configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/engine.rb:131:in `configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/engine.rb:96:in `run_configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/supervisor.rb:804:in `run_configure'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/supervisor.rb:581:in `dry_run'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/supervisor.rb:599:in `supervise'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/supervisor.rb:504:in `run_supervisor'
	from /usr/local/bundle/gems/fluentd-1.7.3/lib/fluent/command/fluentd.rb:314:in `<top (required)>'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /usr/local/bundle/gems/fluentd-1.7.3/bin/fluentd:8:in `<top (required)>'
	from /usr/local/bundle/bin/fluentd:23:in `load'
	from /usr/local/bundle/bin/fluentd:23:in `<main>'

Environment:

  • Kubernetes version (use kubectl version): v1.16.1
  • Ruby version (use ruby --version): 2.3.0
  • OS (e.g: cat /etc/os-release): linux

last few chunks of messages are not cleared out

What happened:

I am using the splunk/fluentd-hec:1.1.1 image in a daemon set. I observed that when I run a low load (i.e. 25 bytes of 1500 records per second), the last chunk of messages never gets sent to Splunk. Only when I stop the daemon set does it send all the remaining logs in memory.

  • Output plugin section of configMap.yaml. I tried with buffer types file and memory; same result.
      <filter **>
        @type grep
        <exclude>
          key log
          pattern \A\s*\z
        </exclude>
      </filter>
      <match **>
          @type splunk_hec
          protocol https
          hec_host "hec-url"
          hec_token "token"
          hec_port 443
          insecure_ssl true
          <buffer>
            @type file
            path  /var/log/splunk-fluentd/
            chunk_limit_size 50MB
            total_limit_size 500MB
            flush_interval 5s
            flush_thread_count 5
            overflow_action throw_exception
            flush_mode interval
            retry_max_times 5
          </buffer>
          # <buffer>
          #   @type memory
          #   chunk_limit_size 5MB
          #   total_limit_size 250MB
          #   flush_mode interval
          #   flush_interval 5s
          #   flush_thread_count 5
          #   retry_type exponential_backoff
          #   overflow_action throw_exception
          #   retry_max_times 5
          # </buffer>

          <format>
            # we just want to keep the raw logs, not the structure created by docker or journald
            @type single_value
            message_key log
            add_newline false
          </format>
      </match>

Logs that indicate the behavior

---last post call ---
2019-10-09 22:37:53 +0000 [debug]: #0 [Response] POST https://hec.com/services/collector: #<Net::HTTPOK 200 OK readbody=true>
2019-10-09 22:37:53 +0000 [trace]: #0 write operation done, committing chunk="59481ef208d0e994b2d1c3dfec0c4f6b"
2019-10-09 22:37:53 +0000 [trace]: #0 committing write operation to a chunk chunk="59481ef208d0e994b2d1c3dfec0c4f6b" delayed=false

---daemon set shutdown ---

2019-10-09 22:49:36 +0000 [info]: #0 shutting down fluentd worker worker=0
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on input plugin type=:tail plugin_id="containers.log"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on input plugin type=:prometheus plugin_id="object:3ff21d5b9178"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on input plugin type=:prometheus_output_monitor plugin_id="object:3ff21d5989f0"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on input plugin type=:monitor_agent plugin_id="fluentd-monitor-agent"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff2210777d8"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff22000e388"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff220fbda90"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff220f4ab44"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff2211a9188"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff21f2fa804"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff21f30d634"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff21f2b3968"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:concat plugin_id="object:3ff21f2c0e10"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:kubernetes_metadata plugin_id="object:3ff220176e00"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:record_modifier plugin_id="object:3ff21e2a0e90"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on filter plugin type=:grep plugin_id="object:3ff21e60a360"
2019-10-09 22:49:36 +0000 [debug]: #0 calling stop on output plugin type=:splunk_hec plugin_id="object:3ff21e28bbf8"

--- post calls before shutting down----


2019-10-09 22:49:38 +0000 [debug]: #0 [Response] POST https://hec.com/services/collector: #<Net::HTTPOK 200 OK readbody=true>
2019-10-09 22:49:38 +0000 [trace]: #0 writing events into buffer instance=70309120829800 metadata_size=1
2019-10-09 22:49:38 +0000 [trace]: #0 write operation done, committing chunk="594821970b32d79c10997f3dd87f168b"
2019-10-09 22:49:38 +0000 [trace]: #0 committing write operation to a chunk chunk="594821970b32d79c10997f3dd87f168b" delayed=false
2019-10-09 22:49:38 +0000 [trace]: #0 purging a chunk instance=70309120829800 chunk_id="594821970b32d79c10997f3dd87f168b" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 chunk purged instance=70309120829800 chunk_id="594821970b32d79c10997f3dd87f168b" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 done to commit a chunk chunk="594821970b32d79c10997f3dd87f168b"
2019-10-09 22:49:38 +0000 [trace]: #0 adding metadata instance=70309120829800 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 writing events into buffer instance=70309120829800 metadata_size=1
2019-10-09 22:49:38 +0000 [trace]: #0 enqueueing chunk instance=70309120829800 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 dequeueing a chunk instance=70309120829800
2019-10-09 22:49:38 +0000 [trace]: #0 chunk dequeued instance=70309120829800 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 trying flush for a chunk chunk="594821979be418a69c678d1c34981f06"
2019-10-09 22:49:38 +0000 [trace]: #0 adding write count instance=70309120621560
2019-10-09 22:49:38 +0000 [trace]: #0 executing sync write chunk="594821979be418a69c678d1c34981f06"
2019-10-09 22:49:38 +0000 [debug]: #0 Received new chunk, size=5021481
2019-10-09 22:49:38 +0000 [debug]: #0 Sending 5021481 bytes to Splunk.
2019-10-09 22:49:38 +0000 [trace]: #0 adding metadata instance=70309120829800 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>
2019-10-09 22:49:38 +0000 [trace]: #0 writing events into buffer instance=70309120829800 metadata_size=1
2019-10-09 22:49:38 +0000 [debug]: #0 [Response] POST https://hec.com/services/collector: #<Net::HTTPOK 200 OK readbody=true>
2019-10-09 22:49:38 +0000 [trace]: #0 adding metadata instance=70309120829800 metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil>

Environment:

  • Kubernetes version (use kubectl version): v1.16.1
  • Ruby version (use ruby --version): 2.3.0
  • OS (e.g: cat /etc/os-release): linux

Docs example for ciphers array option

What would you like to be added:

Docs example for ciphers array option

https://github.com/splunk/fluent-plugin-splunk-hec#ciphers-array

Why is this needed:

I am deploying SCK 1.3.0 charts and experimenting with security settings by hardening the sslVersions in server.conf & with HEC inputs.conf. They both have options to set sslVersions and ciphers.

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf#SSL_Configuration_details
https://docs.splunk.com/Documentation/Splunk/8.0.1/Admin/Inputsconf#http:_.28HTTP_Event_Collector.29

sslVersions = <versions_list>
* Comma-separated list of SSL versions to support for incoming connections.
* The versions available are "ssl3", "tls1.0", "tls1.1", and "tls1.2".
* The special version "*" selects all supported versions.
  The version "tls" selects all versions tls1.0 or newer.
* If a version is prefixed with "-" it is removed from the list.
* SSLv2 is always disabled; "-ssl2" is accepted in the version
  list but does nothing.
* When configured in FIPS mode, "ssl3" is always disabled regardless
  of this configuration.
* Default: The default can vary (see the 'sslVersions' setting in
  the $SPLUNK_HOME/etc/system/default/server.conf file for the
  current default)

I tried setting specific versions in server.conf but no dice. I didn't set them explicitly in inputs.conf, so I am thinking server.conf applied. I tried setting tls and that's when the connection failures/retries happened.

I am now seeing this from some of my fluentd pods:

2020-02-16 17:37:50 +0000 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2020-02-16 17:37:51 +0000 chunk="59eb4e57fdaa2189656a4acd3c8195e9" error_class=Net::HTTP::Persistent::Error error="too many connection resets (due to SSL_connect returned=1 errno=0 state=SSLv3/TLS write client hello: wrong version number - OpenSSL::SSL::SSLError) after 0 requests on 70019652579240, last used 1581874670.2335458 seconds ago"
  2020-02-16 17:37:50 +0000 [warn]: #0 suppressed same stacktrace
2020-02-16 17:37:51 +0000 [warn]: #0 retry succeeded. chunk_id="59eb4e57fdaa2189656a4acd3c8195e9"

The initial connection fails on sslv3 then succeeds on retry.

Issue starting td-agent after adding output following "Minimum HEC Configuration"

Hi, as the title states, I'm having an issue getting td-agent to start after adding this output using a minimum HEC config. As of now I'm not sure if this is a fluentd bug or particular to this plugin; however, if I replace this output with s3, td-agent starts up and functions fine. See the error below.

[root@server ~]# systemctl start td-agent
Job for td-agent.service failed because the control process exited with error code. See "systemctl status td-agent.service" and "journalctl -xe" for details.

Checking td-agent.log with log_level trace I see the following:

2020-07-23 01:45:59 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:45:59 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:45:59 +0000 [trace]: registered output plugin 'tdlog'
2020-07-23 01:45:59 +0000 [trace]: registered buffer plugin 'file'
2020-07-23 01:45:59 +0000 [trace]: registered output plugin 'file'
2020-07-23 01:45:59 +0000 [trace]: registered formatter plugin 'out_file'
2020-07-23 01:45:59 +0000 [warn]: [output_td] Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::TreasureDataLogOutput" secondary="Fluent::Plugin::FileOutput"
2020-07-23 01:45:59 +0000 [trace]: registered output plugin 'stdout'
2020-07-23 01:45:59 +0000 [trace]: registered buffer plugin 'memory'
2020-07-23 01:45:59 +0000 [trace]: registered formatter plugin 'stdout'
2020-07-23 01:45:59 +0000 [trace]: registered formatter plugin 'json'
2020-07-23 01:45:59 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:45:59 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:45:59 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:45:59 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_id.rb:38:in configure' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/log.rb:626:in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin/output.rb:247:in configure' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_helper/event_emitter.rb:73:in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-td-1.1.0/lib/fluent/plugin/out_tdlog.rb:54:in configure' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin.rb:173:in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:132:in add_match' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:74:in block in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in each' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/root_agent.rb:146:in configure' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:105:in configure'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:80:in run_configure' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/supervisor.rb:551:in run_supervisor'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/command/fluentd.rb:341:in <top (required)>' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/bin/fluentd:8:in <top (required)>'
2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in load' 2020-07-23 01:45:59 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in

'
2020-07-23 01:46:00 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:00 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:00 +0000 [trace]: registered output plugin 'tdlog'
2020-07-23 01:46:00 +0000 [trace]: registered buffer plugin 'file'
2020-07-23 01:46:00 +0000 [trace]: registered output plugin 'file'
2020-07-23 01:46:00 +0000 [trace]: registered formatter plugin 'out_file'
2020-07-23 01:46:00 +0000 [warn]: [output_td] Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::TreasureDataLogOutput" secondary="Fluent::Plugin::FileOutput"
2020-07-23 01:46:00 +0000 [trace]: registered output plugin 'stdout'
2020-07-23 01:46:00 +0000 [trace]: registered buffer plugin 'memory'
2020-07-23 01:46:00 +0000 [trace]: registered formatter plugin 'stdout'
2020-07-23 01:46:00 +0000 [trace]: registered formatter plugin 'json'
2020-07-23 01:46:00 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:00 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:00 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:00 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_id.rb:38:in configure' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/log.rb:626:in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin/output.rb:247:in configure' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_helper/event_emitter.rb:73:in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-td-1.1.0/lib/fluent/plugin/out_tdlog.rb:54:in configure' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin.rb:173:in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:132:in add_match' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:74:in block in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in each' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/root_agent.rb:146:in configure' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:105:in configure'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:80:in run_configure' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/supervisor.rb:551:in run_supervisor'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/command/fluentd.rb:341:in <top (required)>' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/bin/fluentd:8:in <top (required)>'
2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in load' 2020-07-23 01:46:00 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in '
2020-07-23 01:46:01 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:01 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:01 +0000 [trace]: registered output plugin 'tdlog'
2020-07-23 01:46:01 +0000 [trace]: registered buffer plugin 'file'
2020-07-23 01:46:01 +0000 [trace]: registered output plugin 'file'
2020-07-23 01:46:01 +0000 [trace]: registered formatter plugin 'out_file'
2020-07-23 01:46:01 +0000 [warn]: [output_td] Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::TreasureDataLogOutput" secondary="Fluent::Plugin::FileOutput"
2020-07-23 01:46:01 +0000 [trace]: registered output plugin 'stdout'
2020-07-23 01:46:01 +0000 [trace]: registered buffer plugin 'memory'
2020-07-23 01:46:01 +0000 [trace]: registered formatter plugin 'stdout'
2020-07-23 01:46:01 +0000 [trace]: registered formatter plugin 'json'
2020-07-23 01:46:01 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:01 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:01 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:01 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_id.rb:38:in configure' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/log.rb:626:in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin/output.rb:247:in configure' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_helper/event_emitter.rb:73:in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-td-1.1.0/lib/fluent/plugin/out_tdlog.rb:54:in configure' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin.rb:173:in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:132:in add_match' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:74:in block in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in each' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/root_agent.rb:146:in configure' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:105:in configure'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:80:in run_configure' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/supervisor.rb:551:in run_supervisor'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/command/fluentd.rb:341:in <top (required)>' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in require' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/bin/fluentd:8:in <top (required)>'
2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in load' 2020-07-23 01:46:01 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in '
2020-07-23 01:46:02 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:02 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:02 +0000 [trace]: registered output plugin 'tdlog'
2020-07-23 01:46:02 +0000 [trace]: registered buffer plugin 'file'
2020-07-23 01:46:02 +0000 [trace]: registered output plugin 'file'
2020-07-23 01:46:02 +0000 [trace]: registered formatter plugin 'out_file'
2020-07-23 01:46:02 +0000 [warn]: [output_td] Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::TreasureDataLogOutput" secondary="Fluent::Plugin::FileOutput"
2020-07-23 01:46:02 +0000 [trace]: registered output plugin 'stdout'
2020-07-23 01:46:02 +0000 [trace]: registered buffer plugin 'memory'
2020-07-23 01:46:02 +0000 [trace]: registered formatter plugin 'stdout'
2020-07-23 01:46:02 +0000 [trace]: registered formatter plugin 'json'
2020-07-23 01:46:02 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:02 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:02 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:02 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_id.rb:38:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/log.rb:626:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin/output.rb:247:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_helper/event_emitter.rb:73:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-td-1.1.0/lib/fluent/plugin/out_tdlog.rb:54:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin.rb:173:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:132:in `add_match'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:74:in `block in configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in `each'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/root_agent.rb:146:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:105:in `configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:80:in `run_configure'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/supervisor.rb:551:in `run_supervisor'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/command/fluentd.rb:341:in `<top (required)>'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in `require'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in `require'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/bin/fluentd:8:in `<top (required)>'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in `load'
2020-07-23 01:46:02 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in `<main>'
2020-07-23 01:46:03 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:03 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:03 +0000 [trace]: registered output plugin 'tdlog'
2020-07-23 01:46:03 +0000 [trace]: registered buffer plugin 'file'
2020-07-23 01:46:03 +0000 [trace]: registered output plugin 'file'
2020-07-23 01:46:03 +0000 [trace]: registered formatter plugin 'out_file'
2020-07-23 01:46:03 +0000 [warn]: [output_td] Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::TreasureDataLogOutput" secondary="Fluent::Plugin::FileOutput"
2020-07-23 01:46:03 +0000 [trace]: registered output plugin 'stdout'
2020-07-23 01:46:03 +0000 [trace]: registered buffer plugin 'memory'
2020-07-23 01:46:03 +0000 [trace]: registered formatter plugin 'stdout'
2020-07-23 01:46:03 +0000 [trace]: registered formatter plugin 'json'
2020-07-23 01:46:03 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.9'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-kafka' version '0.13.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.5.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-s3' version '1.3.3'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2020-07-23 01:46:03 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2020-07-23 01:46:03 +0000 [info]: gem 'fluentd' version '1.11.1'
2020-07-23 01:46:03 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_id.rb:38:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/log.rb:626:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin/output.rb:247:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin_helper/event_emitter.rb:73:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-td-1.1.0/lib/fluent/plugin/out_tdlog.rb:54:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/plugin.rb:173:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:132:in `add_match'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:74:in `block in configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in `each'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/agent.rb:64:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/root_agent.rb:146:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:105:in `configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/engine.rb:80:in `run_configure'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/supervisor.rb:551:in `run_supervisor'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/lib/fluent/command/fluentd.rb:341:in `<top (required)>'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in `require'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:92:in `require'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.11.1/bin/fluentd:8:in `<top (required)>'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in `load'
2020-07-23 01:46:03 +0000 [debug]: /opt/td-agent/bin/fluentd:23:in `<main>'

I'm unfamiliar with fluentd startup processes, but it appears to be looping of some sort. The first couple loops are successful until eventually I receive the following error:

2020-07-23 01:21:14 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Duplicated plugin id output_td. Check whole configuration and fix it."

I've tried a number of things in an attempt to resolve this, including removing everything except a single source and match statement, and I still get this issue. My td-agent.conf is currently the default file plus 4 sources (one commented out) and the output at the bottom of the file. See td-agent.conf below.

[root@server ~]# cat /etc/td-agent/td-agent.conf
####
## Output descriptions:
##

# Treasure Data (http://www.treasure-data.com/) provides cloud based data
# analytics platform, which easily stores and processes data from td-agent.
# FREE plan is also provided.
# @see http://docs.fluentd.org/articles/http-to-td
#
# This section matches events whose tag is td.DATABASE.TABLE

<match td.*.*>
  @type tdlog
  @id output_td
  apikey YOUR_API_KEY

  auto_create_table
  <buffer>
    @type file
    path /var/log/td-agent/buffer/td
  </buffer>

  <secondary>
    @type file
    path /var/log/td-agent/failed_records
  </secondary>
</match>

## match tag=debug.** and dump to console
<match debug.**>
  @type stdout
  @id output_stdout
</match>

####
## Source descriptions:
##

## built-in TCP input
## @see http://docs.fluentd.org/articles/in_forward
<source>
  @type forward
  @id input_forward
</source>

## built-in UNIX socket input
#<source>
#  type unix
#</source>

# HTTP input
# POST http://localhost:8888/<tag>?json=<json>
# POST http://localhost:8888/td.myapp.login?json={"user"%3A"me"}
# @see http://docs.fluentd.org/articles/in_http
<source>
  @type http
  @id input_http
  port 8888
</source>

## live debugging agent
<source>
  @type debug_agent
  @id input_debug_agent
  bind 127.0.0.1
  port 24230
</source>

####
## Examples:
##

## File input
## read apache logs continuously and tags td.apache.access
#<source>
#  @type tail
#  @id input_tail
#  <parse>
#    @type apache2
#  </parse>
#  path /var/log/httpd-access.log
#  tag td.apache.access
#</source>

## File output
## match tag=local.** and write to file
#<match local.**>
#  @type file
#  @id output_file
#  path /var/log/td-agent/access
#</match>

## Forwarding
## match tag=system.** and forward to another td-agent server
#<match system.**>
#  @type forward
#  @id output_system_forward
#
#  <server>
#    host 192.168.0.11
#  </server>
#  # secondary host is optional
#  <secondary>
#    <server>
#      host 192.168.0.12
#    </server>
#  </secondary>
#</match>

## Multiple output
## match tag=td.*.* and output to Treasure Data AND file
#<match td.*.*>
#  @type copy
#  @id output_copy
#  <store>
#    @type tdlog
#    apikey API_KEY
#    auto_create_table
#    <buffer>
#      @type file
#      path /var/log/td-agent/buffer/td
#    </buffer>
#  </store>
#  <store>
#    @type file
#    path /var/log/td-agent/td-%Y-%m-%d/%H.log
#  </store>
#</match>

<source>
  @type tail
  path /var/log/secure
  pos_file /var/log/td-agent/secure.pos
  <parse>
    @type syslog
  </parse>
  tag hec.secure
</source>

<source>
  @type tail
  path /var/log/messages
  pos_file /var/log/td-agent/messages.pos
  <parse>
    @type syslog
  </parse>
  tag hec.messages
</source>

<source>
  @type tail
  path /var/log/yum.log
  pos_file /var/log/td-agent/yum.pos
  <parse>
    @type syslog
  </parse>
  tag hec.yum
</source>

#<source>
#  @type tail
#  path /var/log/qualys/qualys-cloud-agent.log
#  pos_file /var/log/td-agent/qualys.pos
#  <parse>
#    @type syslog
#  </parse>
#  tag hec.qualys
#</source>

<match hec.*>
  @type splunk_hec
  hec_host hf1
  hec_port 8088
  hec_token uuid
</match>    

[Feature] Support gzip compression on POST

In many environments there is a cost to network traffic used. Allowing compression of the HTTP post to the HEC endpoint could greatly reduce these costs and transport time.
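A minimal Ruby sketch of what a gzip-compressed POST to HEC could look like (illustrative only, not the plugin's implementation; the host, token, and payload are placeholders, and it assumes the HEC endpoint accepts gzip-encoded bodies):

require "json"
require "net/http"
require "zlib"

# Build a small batch of HEC events and gzip the body before posting.
events  = [{ "event" => "hello", "sourcetype" => "_json" }]
payload = events.map(&:to_json).join

uri = URI("https://12.34.56.78:8088/services/collector/event")
req = Net::HTTP::Post.new(uri)
req["Authorization"]    = "Splunk 00000000-0000-0000-0000-000000000000"
req["Content-Encoding"] = "gzip"              # advertise the compressed body
req.body                = Zlib.gzip(payload)  # Zlib.gzip is available since Ruby 2.4

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code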

Ability to exclude time from output


What would you like to be added:
By default the plugin uses the fluentd time field as the event time, and optionally allows you to specify time_key. Could we please have the ability to exclude the time field entirely and let Splunk's props parsing extract a timestamp from the event?

Why is this needed:
I'm sending access_combined logs, and instead of using the time embedded in the event, the plugin uses the time the event passes through fluentd. This is causing all the timestamps to be incorrect.

hec_token as ENV var ?

Is it possible to pass the hec_token as an ENV var when using the plugin in Kubernetes?
I want to pass the token in as a secret, like splunk-kubernetes-logging does:

        - name: SPLUNK_HEC_TOKEN
          valueFrom:
            secretKeyRef:
              name: splunk-kubernetes-logging
              key: splunk_hec_token

However, the config would not load without it:

      <match **>
        log_level info
        @type splunk_hec
        hec_host the-host
        hec_port 8088
      </match>

2020-02-14 02:13:15 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="'hec_token' parameter is required"

Thanks
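For reference, Fluentd evaluates embedded Ruby inside double-quoted config values, so the token can usually be read from the environment rather than hard-coded (a sketch, assuming the SPLUNK_HEC_TOKEN variable above is set on the container):

<match **>
  @type splunk_hec
  hec_host the-host
  hec_port 8088
  hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
</match>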

Unable to activate fluent-plugin-splunk-hec

What happened:
I tried to create a custom fluentd container with the splunk-hec plugin. Here is my Dockerfile:

FROM fluent/fluentd:v1.4-1

USER root

RUN apk add --no-cache --update --virtual .build-deps \
        sudo build-base ruby-dev \
 && sudo gem install fluent-plugin-splunk-hec \
 && sudo gem sources --clear-all \
 && apk del .build-deps \
 && rm -rf /home/fluent/.gem/ruby/2.5.0/cache/*.gem

COPY fluent.conf /fluentd/etc/

USER fluent

And here is my fluent.conf (I replaced my actual hec_host and hec_token with example values):

<source>
  @type tail
  path /var/log/my_app/my_app.log
  tag my_app.log
  <parse>
    @type json
  </parse>
</source>

<match my_app.log>
  @type splunk_hec
  hec_host 12.34.56.78
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000
</match>

I run docker build and the image builds successfully.
Then I run the container as follows:

docker run -it --rm <image_id>

and get the following error:

Unable to activate fluent-plugin-splunk-hec-1.2.0, because fluentd-1.4.2 conflicts with fluentd (= 1.4) (Gem::ConflictError)

What you expected to happen:
Docker container runs successfully.

How to reproduce it (as minimally and precisely as possible):
Create docker image using Dockerfile and fluent.conf mentioned above.
Run it:

docker run -it --rm <image_id>

Anything else we need to know?:
Here is the full container output:

2019-11-05 16:30:54 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
Traceback (most recent call last):
	31: from /usr/bin/fluentd:23:in `<main>'
	30: from /usr/bin/fluentd:23:in `load'
	29: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/bin/fluentd:8:in `<top (required)>'
	28: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	27: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	26: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/command/fluentd.rb:310:in `<top (required)>'
	25: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/supervisor.rb:502:in `run_supervisor'
	24: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/supervisor.rb:597:in `supervise'
	23: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/supervisor.rb:579:in `dry_run'
	22: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/supervisor.rb:801:in `run_configure'
	21: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/engine.rb:96:in `run_configure'
	20: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/engine.rb:131:in `configure'
	19: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/root_agent.rb:150:in `configure'
	18: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/agent.rb:64:in `configure'
	17: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/agent.rb:64:in `each'
	16: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/agent.rb:72:in `block in configure'
	15: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/agent.rb:128:in `add_match'
	14: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/plugin.rb:104:in `new_output'
	13: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/plugin.rb:146:in `new_impl'
	12: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/registry.rb:44:in `lookup'
	11: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/registry.rb:99:in `search'
	10: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/registry.rb:99:in `each'
	 9: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.4.2/lib/fluent/registry.rb:102:in `block in search'
	 8: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	 7: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	 6: from /usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-splunk-hec-1.2.0/lib/fluent/plugin/out_splunk_hec.rb:6:in `<top (required)>'
	 5: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:39:in `require'
	 4: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:128:in `rescue in require'
	 3: from /usr/lib/ruby/2.5.0/rubygems.rb:217:in `try_activate'
	 2: from /usr/lib/ruby/2.5.0/rubygems.rb:224:in `rescue in try_activate'
	 1: from /usr/lib/ruby/2.5.0/rubygems/specification.rb:1438:in `activate'
/usr/lib/ruby/2.5.0/rubygems/specification.rb:2325:in `raise_if_conflicts': Unable to activate fluent-plugin-splunk-hec-1.2.0, because fluentd-1.4.2 conflicts with fluentd (= 1.4) (Gem::ConflictError)

Is splunk-hec compatible with http_parser.rb(0.6.0) & tzinfo(2.0.1) gems?

Hi,
I'm building a docker image with the td-agent-3.6.0 rpm and installing the latest fluent-plugin-splunk-hec.

What happened:
I see this error while listing or installing any gems.

bash-4.2# td-agent-gem list 
WARN: Unresolved specs during Gem::Specification.reset:
      http_parser.rb (< 0.7.0, >= 0.5.1)
      tzinfo (< 3.0, >= 1.0, >= 1.0.0)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.

This is happening because multiple versions of the same gems get installed.
The td-agent rpm installs these gems:

http_parser.rb (0.6.0)
tzinfo (2.0.1)

But installing fluent-plugin-splunk-hec pulls in older versions of these gems as dependencies: http_parser.rb-0.5.3 and tzinfo-1.2.6.

Question 1: Why is the splunk plugin using older gems as dependencies? Another example is its dependency on activesupport 5.2 while the latest activesupport version available is 6.x. Is there any specific reason to stay on older versions?

 td-agent-gem install --no-document --source http://rubygems fluent-plugin-splunk-hec
Fetching: http_parser.rb-0.5.3.gem (100%)
Building native extensions.  This could take a while...
Successfully installed http_parser.rb-0.5.3
Fetching: thread_safe-0.3.6.gem (100%)
Successfully installed thread_safe-0.3.6
Fetching: tzinfo-1.2.6.gem (100%)
Successfully installed tzinfo-1.2.6
Fetching: i18n-1.8.2.gem (100%)

HEADS UP! i18n 1.1 changed fallbacks to exclude default locale.
But that may break your application.

If you are upgrading your Rails application from an older version of Rails:

Please check your Rails app for 'config.i18n.fallbacks = true'.
If you're using I18n (>= 1.1.0) and Rails (< 5.2.2), this should be
'config.i18n.fallbacks = [I18n.default_locale]'.
If not, fallbacks will be broken in your app by I18n 1.1.x.

If you are starting a NEW Rails application, you can ignore this notice.

For more info see:
https://github.com/svenfuchs/i18n/releases/tag/v1.1.0

Successfully installed i18n-1.8.2
Fetching: activesupport-5.2.4.2.gem (100%)
Successfully installed activesupport-5.2.4.2
Fetching: aes_key_wrap-1.0.1.gem (100%)
Successfully installed aes_key_wrap-1.0.1
Fetching: bindata-2.4.6.gem (100%)
Successfully installed bindata-2.4.6
Fetching: json-jwt-1.11.0.gem (100%)
Successfully installed json-jwt-1.11.0
Fetching: attr_required-1.0.1.gem (100%)
Successfully installed attr_required-1.0.1
Fetching: rack-2.2.2.gem (100%)
Successfully installed rack-2.2.2
Fetching: rack-oauth2-1.10.1.gem (100%)
Successfully installed rack-oauth2-1.10.1
Fetching: webfinger-1.1.0.gem (100%)
Successfully installed webfinger-1.1.0
Fetching: swd-1.1.2.gem (100%)
Successfully installed swd-1.1.2
Fetching: activemodel-5.2.4.2.gem (100%)
Successfully installed activemodel-5.2.4.2
Fetching: mini_mime-1.0.2.gem (100%)
Successfully installed mini_mime-1.0.2
Fetching: mail-2.7.1.gem (100%)
Successfully installed mail-2.7.1
Fetching: validate_email-0.1.6.gem (100%)
Successfully installed validate_email-0.1.6
Fetching: validate_url-1.0.8.gem (100%)
Successfully installed validate_url-1.0.8
Fetching: openid_connect-1.1.8.gem (100%)
Successfully installed openid_connect-1.1.8
Fetching: connection_pool-2.2.2.gem (100%)
Successfully installed connection_pool-2.2.2
Fetching: net-http-persistent-3.1.0.gem (100%)
Successfully installed net-http-persistent-3.1.0
Fetching: fluent-plugin-splunk-hec-1.2.1.gem (100%)
Successfully installed fluent-plugin-splunk-hec-1.2.1

Question 2:
Is fluent-plugin-splunk-hec compatible with http_parser.rb (0.6.0) and tzinfo (2.0.1), i.e. the latest gems coming from td-agent-3.6.0?
Can we remove the older gem versions of http_parser.rb and tzinfo coming from the splunk plugin?
Do you foresee any issues with this?

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

  1. Install td-agent-3.6.0 rpm
  2. Install fluent-plugin-splunk-hec gem (i.e. 1.2.1) using td-agent-gem install --no-document --source http://rubygems fluent-plugin-splunk-hec
  3. Check td-agent-gem list

Anything else we need to know?:

Environment:

  • Ruby version (use ruby --version): ruby 2.4.9p362
  • OS (e.g: cat /etc/os-release): CentOS Linux 7
  • Splunk version: fluent-plugin-splunk-hec (1.2.1)

1.1.1 fails to start on AmazonLinux 1

What happened: 1.1.1 fails to start, see stack trace:

Starting td-agent: /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluent-plugin-splunk-hec-1.1.1/lib/fluent/plugin/out_splunk_hec (LoadError)
    from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/registry.rb:102:in `block in search'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/registry.rb:99:in `each'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/registry.rb:99:in `search'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/registry.rb:44:in `lookup'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/plugin.rb:146:in `new_impl'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/plugin.rb:104:in `new_output'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:128:in `add_match'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:72:in `block in configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:64:in `each'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/agent.rb:64:in `configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/root_agent.rb:150:in `configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/engine.rb:131:in `configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/engine.rb:96:in `run_configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:801:in `run_configure'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:579:in `dry_run'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:597:in `supervise'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/supervisor.rb:502:in `run_supervisor'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/lib/fluent/command/fluentd.rb:310:in `<top (required)>'
    from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.5.1/bin/fluentd:8:in `<top (required)>'
    from /opt/td-agent/embedded/bin/fluentd:23:in `load'
    from /opt/td-agent/embedded/bin/fluentd:23:in `<top (required)>'
    from /usr/sbin/td-agent:7:in `load'
    from /usr/sbin/td-agent:7:in `<main>'
td-agent           

How to reproduce it (as minimally and precisely as possible): service td-agent start

Environment:

  • Ruby version (use ruby --version): 2.4.0
  • OS (e.g: cat /etc/os-release): AmazonLinux 1

Crashes on "too old resource version" messages

What happened:

From time to time we have the following message in the pod/kubelet logs:

W0504 17:03:59.230317    4644 reflector.go:289] object-"splunk"/"splunk-sandbox-splunk-kubernetes-logging": watch of *v1.ConfigMap ended with: too old resource version: 56153661 (56154652) 

It is crashing fluentd-hec:

#<Thread:0x00005614d0fd6f70@/usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-2.4.5/lib/fluent/plugin/filter_kubernetes_metadata.rb:277 run> terminated with exception (report_on_exception is true):
/usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-2.4.5/lib/fluent/plugin/kubernetes_metadata_watch_namespaces.rb:43:in `rescue in set_up_namespace_thread': undefined method `<' for nil:NilClass (NoMethodError)
        from /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-2.4.5/lib/fluent/plugin/kubernetes_metadata_watch_namespaces.rb:38:in `set_up_namespace_thread'
        from /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-2.4.5/lib/fluent/plugin/filter_kubernetes_metadata.rb:277:in `block in configure'
2020-05-04 14:37:51 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Could not parse jq filter: .record.source = \"/var/log/journal/\" + .record.source | .record.sourcetype = (.tag | ltrimstr(\"journald.\")) | .record.cluster_name = \"sandbox\" | .record.index = \"sandbox\" |.record, error: undefined method `<' for nil:NilClass"

It is related to an upstream issue with the kubernetes metadata plugin. It works when you downgrade that plugin to 2.4.1.

What you expected to happen:

Fluentd shouldn't crash.

How to reproduce it (as minimally and precisely as possible):

I am not sure, I think too old resource version messages are expected but I do not know how to force them to appear.

Anything else we need to know?:

It happens with fluentd-hec 1.2.x image.

Environment:

  • Kubernetes version (use kubectl version): EKS 1.14
  • Ruby version (use ruby --version): 2.5
  • OS (e.g: cat /etc/os-release): Amazon Linux 2
  • Splunk version: 8.x
  • Others: n/a

<fields> section not working with Splunk's indexed field extraction

What happened:
I used the <fields> section to forward cluster_name on Splunk Connect for Kubernetes object logs (https://github.com/splunk/splunk-connect-for-kubernetes/tree/develop/helm-chart/splunk-kubernetes-objects). The logs themselves are showing up in Splunk, and Splunk reports that 100% of the logs have cluster_name populated with one value.
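For context, the kind of <fields> block being described looks roughly like this (a sketch only; a bare entry takes its value from the record key of the same name, and the connection parameters are omitted):

<match **>
  ...omitting other parameters...

  <fields>
    cluster_name
  </fields>
</match>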

However, if I perform an actual Splunk search with that cluster_name value, only a select few logs show up (after digging around, they probably show up because something in those events matches the cluster_name value).

What you expected to happen:
When I perform the Splunk search with the cluster_name value, all of the logs should show up.

How to reproduce it (as minimally and precisely as possible):
Use the Splunk Connect for Kubernetes' splunk-kubernetes-objects helm chart while making sure that cluster_name is a field being forwarded. After seeing logs show up on Splunk, do a Splunk search while filtering on the cluster_name.

Anything else we need to know?:
In the Splunk docs, it says that

Be aware that you must send HEC requests containing the "fields" property to the /collector/event endpoint. Otherwise, they will not be indexed.

However, in the source code (https://github.com/splunk/fluent-plugin-splunk-hec/blob/develop/lib/fluent/plugin/out_splunk_hec.rb#L259), it seems like the logs are being sent to the /collector endpoint rather than the /collector/event endpoint.

Environment:

  • Kubernetes version (use kubectl version):
  • Ruby version (use ruby --version):
  • OS (e.g: cat /etc/os-release):
  • Splunk version:
  • Others:

Specifying insecure_ssl with ca_file still verifies peer certificate

What happened:

While confirming communication between fluentd and Splunk, we noticed that if ca_file was specified as an SSL parameter alongside insecure_ssl true, the insecure_ssl option was ignored and the peer cert continued to be validated.

What you expected to happen:

insecure_ssl true should have bypassed peer cert validation despite the presence of ca_file

How to reproduce it (as minimally and precisely as possible):

Specify the following block for fluentd output:

<match **>
  @type splunk_hec
  hec_host splunk.local.domain
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000

  ca_file /etc/pki/interna-ca.crt
  insecure_ssl true
</match>

Anything else we need to know?:

This is likely just a doc bug since the class forces VERIFY_PEER: https://github.com/drbrain/net-http-persistent/blob/master/lib/net/http/persistent.rb#L301-L311

Logic could be added to only use ca_file or ca_path when insecure_ssl is not true: https://github.com/splunk/fluent-plugin-splunk-hec/blob/develop/lib/fluent/plugin/out_splunk_hec.rb#L131-L136
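A rough sketch of that suggestion, using the net-http-persistent accessors (illustrative only, not the plugin's actual code; the helper name is made up):

require "net/http/persistent"
require "openssl"

# Hypothetical helper: apply the configured SSL options to a
# net-http-persistent connection, letting insecure_ssl win over ca_file/ca_path.
def apply_ssl_options(http, insecure_ssl:, ca_file: nil, ca_path: nil)
  if insecure_ssl
    # Skip peer verification entirely, even when a CA bundle is configured.
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  else
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER
    http.ca_file = ca_file if ca_file
    http.ca_path = ca_path if ca_path
  end
  http
end

apply_ssl_options(Net::HTTP::Persistent.new(name: "fluentd"),
                  insecure_ssl: true, ca_file: "/etc/pki/interna-ca.crt")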

Environment:

  • Kubernetes version:
    openshift v3.11.157
    kubernetes v1.11.0+d4cacc0
    
  • Ruby version (use ruby --version):
    • ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux] (rh-ruby25-2.5-2.el7.x86_64)
  • OS (e.g: cat /etc/os-release):
    • Image version: registry.access.redhat.com/rhel7/rhel@sha256:d8e52fab67bc27384fe5f4022e8c5e5d83a7901b83df7ed41d4a216aea57b44c
    • Host OS: Red Hat Enterprise Linux Server release 7.7 (Maipo)
  • Splunk version: N/A

Interpolation of custom tags is not supported

What happened:
Setting source to a string that does not match ${tag} exactly will never include the tag information.

What you expected to happen:
If I have a string like "blah/app/thing/${tag}", I should have the "${tag}" placeholder replaced by the event's tag.

How to reproduce it (as minimally and precisely as possible):
Set source to a string including ${tag} and see the rendered source output.

Other:
This line looks to be the issue:
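The referenced line is not quoted here, but a minimal sketch of the interpolation being asked for (a hypothetical helper, not the plugin's actual code) would substitute ${tag} anywhere in the configured string rather than only when the whole value equals ${tag}:

# Hypothetical helper: replace every ${tag} placeholder inside the configured
# source string with the event's tag.
def interpolate_source(configured_source, tag)
  configured_source.gsub("${tag}", tag)
end

puts interpolate_source("blah/app/thing/${tag}", "kube.app.log")
# => blah/app/thing/kube.app.log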

Environment:

  • Ruby version (use ruby --version): 2.4
  • OS (e.g: cat /etc/os-release): bionic ubuntu
  • Splunk version: 7.2

Getting too many connection resets for fluentd-plugin-splunk-hec

I am using the splunk/splunk docker image. I have generated an HEC token and kept SSL enabled. I have generated self-signed certs following this documentation: https://docs.splunk.com/Documentation/Splunk/7.2.5/Security/Howtoself-signcertificates
https://docs.splunk.com/Documentation/Splunk/7.2.5/Security/HowtoprepareyoursignedcertificatesforSplunk
https://docs.splunk.com/Documentation/Splunk/7.2.5/Security/ConfigureSplunkforwardingtousesignedcertificates

Modified inputs.conf

[splunktcp-ssl:8088]
disabled=0
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/myNewServerCertificate.pem
sslPassword = <password>
rootCA= $SPLUNK_HOME/etc/auth/myCACertificate.pem
requireClientCert = false

Fluentd configuration

  <match **>
    @type splunk_hec
    hec_host <VM_Hostname_Where_Splunk_Container_is_Running>
    hec_port 8088
    hec_token <ssl_token>
    insecure_ssl true
    idle_timeout 10
    read_timeout 5
    ca_file "/fluentd/etc/tls/cacert.pem"
    <buffer>
      flush_thread_count 1
      flush_interval 65s
      chunk_limit_size 1M
      queue_limit_length 32
      retry_max_interval 30
      retry_forever false
    </buffer>
  </match>

I have set the server cert's CN to the VM hostname where the Splunk container is running.

After all these steps, I am getting the following error on the Fluentd side:

2019-03-21 23:43:08 +0000 [warn]: #0 failed to flush the buffer. retry_time=21 next_retry_seconds=2019-03-23 00:00:19 +0000 chunk="584b78a80f58364374f77ae06e4a0ebc" error_class=Net::HTTP::Persistent::Error error="too many connection resets (due to Connection reset by peer - Errno::ECONNRESET) after 0 requests on 69851140794400, last used 27.151896316 seconds ago"

And the following error in splunkd.log:

03-21-2019 23:44:08.343 +0000 DEBUG TcpInputConfig - connection_host=ip for <IP_Address>
03-21-2019 23:44:08.345 +0000 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1347375956 bytes from src=<IP_Address> in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
03-21-2019 23:44:41.009 +0000 DEBUG TcpInputConfig - connection_host=ip for <IP_Address>

I am sending only a single-line log, which is way below the data limits. I am getting this problem only with HTTPS and not with HTTP. I am also able to send test logs using curl.

Can you please let me know if I am misconfiguring something?

Getting "Unknown output plugin 'splunk_hec" after installation

I'm trying to redirect the console log on my local machine (Mac OS Sierra) to a local version of Splunk with Fluentd.

Installed the plugin as mentioned in the docs: gem install fluent-plugin-splunk-hec

What happened:
Getting the following messages in the log:
2019-05-02 15:48:38 -0700 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2019-05-02 15:48:38 -0700 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Unknown output plugin 'splunk_hec'. Run 'gem search -rd fluent-plugin' to find plugins"

What you expected to happen:
No error in log and plug-in runs properly.

Environment:

  • Kubernetes version (use kubectl version): NO

  • Ruby version (use ruby --version): ruby 2.3.7p456 (2018-03-28 revision 63024) [universal.x86_64-darwin17]

  • OS (e.g: cat /etc/os-release):
    ProductName: Mac OS X
    ProductVersion: 10.13.6
    BuildVersion: 17G6030

  • Splunk version:
    Splunk Enterprise
    Version: 7.2.6
    Build:c0bf0f679ce9

  • Others:


Configtest failing after installation of splunk hec plugin

What happened:

Hi guys,
We have been using fluent-plugin-splunk-hec for the last year without any issue. But yesterday I launched a new instance, installed td-agent on it, and it worked until I installed fluent-plugin-splunk-hec. After installing that plugin, configtest fails to verify the config with the following error. I am following the same procedure I used last year to install it on 30+ EC2 instances. I tried both the old version 1.1.0 and the latest version 1.2.1, and it fails for both.

/opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- ripper (LoadError)
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/lib/fluent/config/literal_parser.rb:22:in `<top (required)>'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/lib/fluent/config/element.rb:18:in `<top (required)>'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/lib/fluent/config.rb:18:in `<top (required)>'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/lib/fluent/supervisor.rb:20:in `<top (required)>'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/lib/fluent/command/fluentd.rb:19:in `<top (required)>'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.10.1/bin/fluentd:8:in `<top (required)>'
from /opt/td-agent/embedded/bin/fluentd:23:in `load'
from /opt/td-agent/embedded/bin/fluentd:23:in `<top (required)>'
from /usr/sbin/td-agent:7:in `load'
from /usr/sbin/td-agent:7:in `<main>'
td-agent [FAILED]

What you expected to happen:
td-agent should run after installing fluent-plugin-splunk-hec.

How to reproduce it (as minimally and precisely as possible):
Launch an EC2 instance from AMI ami-08111162.
Install td-agent by running: curl -L https://toolbelt.treasuredata.com/sh/install-amazon1-td-agent3.sh | sh
Install the fluent splunk plugin by running: /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-splunk-hec
Update /etc/td-agent/td-agent.conf to send metrics to Splunk over HEC. You can use the entries below for testing:

@type splunk_hec
hec_host http-inputs-.splunkcloud.com
hec_port 443
hec_token
index main
sourcetype apache:access

@type tail
@type apache2
path /var/log/httpd/access.log
tag httpd
pos_file /var/log/td-agent/httpd-rotate_1.pos

Run this cmd /etc/init.d/td-agent configtest

Environment:

  • Ruby version (use ruby --version): ruby 2.4.5p335 (2018-10-18 revision 65137) [x86_64-linux-gnu]
  • OS (e.g: cat /etc/os-release): Amazon Linux AMI release 2016.03

Current docker image contains ~125 known vulnerabilities

The current docker image for version 1.1.0 contains 125 known vulnerabilities according to Jfrog's Xray. Almost all of these vulnerabilities come from the old base layer. Bumping the base image to 2.6-slim should mitigate almost all of these vulnerabilities.

What would you like to be added:
Use 2.6-slim as a base layer.

Incompatible with td-agent 3 and 4!

What happened:
I tried to do 2 things:
Step 1: Install td-agent
Step 2: Using the bundled Ruby gem (td-agent-gem), install Splunk output plugin

Unfortunately, out of the box, the Splunk plugin is incompatible with both the latest major versions of td-agent (3 and 4)!

What you expected to happen:
Installing Splunk plugin for td-agent should just work without issues, similar to installing other output plugins like Kafka.

How to reproduce it (as minimally and precisely as possible):

  1. For td-agent 3 (specifically, 3.8.0):
  • Install td-agent 3 via apt
  • Add Ruby gems source:
    td-agent-gem sources --add https://rubygems.org/
  • Attempt to install Splunk plugin:
    td-agent-gem install fluent-plugin-splunk-hec
    At this point, the Splunk plugin installation will fail because it tries to install activesupport 6.0.3.2, which requires Ruby >= 2.5.0 -- unfortunately, the version of Ruby bundled with td-agent 3 is only 2.4.10:
> /opt/td-agent/embedded/bin/ruby --version
ruby 2.4.10p364 (2020-03-31 revision 67879) [x86_64-linux]

Thus, the Splunk plugin is incompatible with td-agent 3.


  1. For td-agent 4 (specifically, 4.0.1):
  • Install td-agent 4 via apt
  • Add Ruby gems source:
    td-agent-gem sources --add https://rubygems.org/
  • Install Splunk plugin:
    td-agent-gem install fluent-plugin-splunk-hec
    This time, the Splunk plugin installation (and activesupport 6.0.3.2 installation) will work, because the version of Ruby bundled with td-agent 4 is 2.7.1:
> /opt/td-agent/bin/ruby --version
ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]
  • However, when we attempt to actually start up Fluentd (and activate the newly installed Splunk plugin), it fails (assuming the Fluentd config sends logs to Splunk output):
    • If your Fluentd config is complex, you will see a misleading error message complaining about a "duplicated plugin id" (as seen in #138). This is a red herring -- there is no actual duplicate plugin ID.
    • If we simplify the Fluentd config to just 1 source and 1 match (Splunk output), we will see the real error:
/opt/td-agent/lib/ruby/2.7.0/rubygems/specification.rb:2243:in `raise_if_conflicts': Unable to activate activesupport-6.0.3.2, because tzinfo-2.0.2 conflicts with tzinfo (~> 1.1) (Gem::ConflictError)

i.e. When we install td-agent 4, it installs tzinfo 2.0.2 -- however, the Splunk plugin (specifically the activesupport dependency) cannot handle that newer version. As part of the Splunk plugin installation, it installs an older compatible version of tzinfo (1.2.7) -- however, since the newer incompatible version (2.0.2) is still present from the previous td-agent installation, activesupport fails to activate, which means the Splunk plugin fails to activate.

Thus, the Splunk plugin is incompatible with td-agent 4.

Anything else we need to know?:

Workaround:
As of now, I was not able to get the Splunk plugin working with td-agent 3. However, I was able to eventually get it working with td-agent 4, by manually uninstalling the newer version of tzinfo (2.0.2) (td-agent-gem uninstall tzinfo -v '>= 2.0') before starting up Fluentd (and activating the Splunk plugin).

Goal:
Ideally, it should be possible to just install the Splunk plugin for td-agent and have it work, regardless of whether it's v3 or v4, without us having to do any manual workarounds.
At the very least though, given how many people use td-agent, it would be helpful to have this incompatibility (and workaround) documented in the main README page.

Environment:

  • td-agent:
    td-agent 3: 3.8.0
    td-agent 4: 4.0.1

  • Ruby version:
    td-agent 3:

> /opt/td-agent/embedded/bin/ruby --version
ruby 2.4.10p364 (2020-03-31 revision 67879) [x86_64-linux]

td-agent 4:

> /opt/td-agent/bin/ruby --version
ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]
  • OS: Ubuntu 18.04.4 LTS

Output doesn't show up in Splunk when config contains JSON parse filter

My setup is fluentd deployed as a Docker container on Kubernetes. When I use a fairly plain config everything works as expected.

<source>
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/fluentd-containers.log.pos"
    tag "kubernetes.*"
    read_from_head true
    <parse>
        @type json
        time_type string
        time_format %Y-%m-%dT%H:%M:%S.%NZ
    </parse>
</source>
<match **fluent**>
    @type null
</match>
<match **linkerd**>
    @type null
</match>
<match **tiller**>
    @type null
</match>
<filter kubernetes.var.log.containers.**.log>
    @type kubernetes_metadata
</filter>
<match **>
    @type splunk_hec
    hec_host xxxxx
    hec_port 8088
    hec_token xxxxx

    index my-index
    source ${tag}
    sourcetype fluentd
    insecure_ssl true
</match>

and the output in Splunk looks something like this:

{
    "log": "<json string>",
    "stream": "stderr",
    "docker": { ... },
    "kubernetes": { ... }
}

The natural next step was for me to parse the log key as JSON so that it is searchable in queries.

<filter **>
    @type parser
    key_name log
    reserve_data true
    hash_value_field line
    <parse>
        @type json
    </parse>
</filter>

Now I would expect to see something like this:

{
    "log": "<json string>",
    "line": { ... },
    "stream": "stderr",
    "docker": { ... },
    "kubernetes": { ... }
}

And I can confirm as much when I use the stdout output, however, nothing shows up in Splunk. I also don't see any error messages from the fluentd container. So it seems that this plugin doesn't like something about the parse section.

Thoughts?

Base image vulnerable


What would you like to be added:
UBI images are updated every 6 weeks by Red Hat (https://access.redhat.com/articles/2208321 - https://developers.redhat.com/articles/ubi-faq/#support__lifecycle__and_updating). It would be nice to automatically rebuild your image on the same lifecycle in order to have an updated image all the time.

Why is this needed:
Your current image is vulnerable. It has 13 known vulnerabilities that have been fixed in the latest UBI image. I see you already had a similar feature request in the past that was resolved by republishing the image. By scheduling the build in your CD environment you might fix the problem in a more permanent way.

[Feature] Support raw API

It could be useful to add support for the /raw API of HEC.
When using the /event endpoint, some rules (like LINEMERGE/BREAK) don't work; they only apply when the logs are batched through the raw endpoint.
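For reference, a minimal Ruby sketch of a POST to the raw endpoint (/services/collector/raw is the standard HEC raw path; the host, token, and channel are placeholders, and a channel identifier may be required depending on HEC settings):

require "net/http"

uri = URI("https://12.34.56.78:8088/services/collector/raw")
req = Net::HTTP::Post.new(uri)
req["Authorization"]            = "Splunk 00000000-0000-0000-0000-000000000000"
req["X-Splunk-Request-Channel"] = "11111111-1111-1111-1111-111111111111"  # placeholder channel id
# Raw events are sent as-is, so the indexer's line breaking/merging props apply,
# unlike the JSON envelope used by the /event endpoint.
req.body = "127.0.0.1 - - [01/Jan/2020:00:00:00 +0000] \"GET / HTTP/1.1\" 200 123\n"

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code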

failed to send data

2018-07-01 22:39:04 +0000 [debug]: #0 Sending 305313 bytes to Splunk.
2018-07-01 22:39:10 +0000 [debug]: #0 Received new chunk, size=1186
2018-07-01 22:39:16 +0000 [debug]: #0 Received new chunk, size=19231
2018-07-01 22:40:04 +0000 [warn]: #0 thread exited by unexpected error plugin=Fluent::Plugin::SplunkHecOutput title=:"hec_worker_https://MyCloudInstance.cloud.splunk.com:8088/services/collector" error_class=Net::OpenTimeout error="execution expired"
2018-07-01 22:40:04 +0000 [error]: #0 unexpected error error_class=Net::OpenTimeout error="execution expired"
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `initialize'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `open'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `block in connect'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/timeout.rb:103:in `timeout'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:935:in `connect'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:920:in `do_start'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/lib/ruby/2.5.0/net/http.rb:915:in `start'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:692:in `start'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:622:in `connection_for'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:927:in `request'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-splunk-hec-1.0.0/lib/fluent/plugin/out_splunk_hec.rb:343:in `send_to_hec'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-splunk-hec-1.0.0/lib/fluent/plugin/out_splunk_hec.rb:311:in `block in start_worker_threads'
2018-07-01 22:40:04 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.2.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
#<Thread:0x00007f0b0f5c91f0@/usr/local/bundle/gems/fluentd-1.2.0/lib/fluent/plugin_helper/thread.rb:70 run> terminated with exception (report_on_exception is true): /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `initialize': execution expired (Net::OpenTimeout)
from /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `open'
from /usr/local/lib/ruby/2.5.0/net/http.rb:937:in `block in connect'
from /usr/local/lib/ruby/2.5.0/timeout.rb:103:in `timeout'
from /usr/local/lib/ruby/2.5.0/net/http.rb:935:in `connect'
from /usr/local/lib/ruby/2.5.0/net/http.rb:920:in `do_start'
from /usr/local/lib/ruby/2.5.0/net/http.rb:915:in `start'
from /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:692:in `start'
from /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:622:in `connection_for'
from /usr/local/bundle/gems/net-http-persistent-3.0.0/lib/net/http/persistent.rb:927:in `request'
from /usr/local/bundle/gems/fluent-plugin-splunk-hec-1.0.0/lib/fluent/plugin/out_splunk_hec.rb:343:in `send_to_hec'
from /usr/local/bundle/gems/fluent-plugin-splunk-hec-1.0.0/lib/fluent/plugin/out_splunk_hec.rb:311:in `block in start_worker_threads'
from /usr/local/bundle/gems/fluentd-1.2.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2018-07-01 22:40:04 +0000 [info]: fluent/log.rb:322:info: Worker 0 finished unexpectedly with status 1
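If the connection to Splunk Cloud is simply slow to open, raising the connection-related timeouts can help; idle_timeout and read_timeout appear in other configs in this document, while open_timeout is an assumption here and may not exist in every plugin version:

<match **>
  @type splunk_hec
  hec_host MyCloudInstance.cloud.splunk.com
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000

  open_timeout 30
  read_timeout 60
  idle_timeout 10
</match>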

Locale redirect from splunkcloud.com

What happened:
Splunk cloud sent a locale-based redirect instead of accepting the plugin's content:

2020-06-02 14:22:19 +0000 [error]: #0 Fluent::Plugin::SplunkHecOutput: Failed POST to https://<myinstance>.splunkcloud.com/services/collector, response: <!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=https://<myinstance>.splunkcloud.com/en-US/services/collector"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="https://<myinstance>.splunkcloud.com/en-US/services/collector">here</a>.</p></body></html>
What you expected to happen:

I expected splunk cloud's HEC endpoint to index my events

How to reproduce it (as minimally and precisely as possible):
Use a splunkcloud.com instance as hec_host, and use a valid token:

<match **>
  @type splunk_hec
  hec_host <myinstance>.splunkcloud.com
  hec_port 443
  hec_token 00000000-0000-0000-0000-000000000000 

  index dgw
  sourcetype _json
</match>

Anything else we need to know?:

Environment:

  • Ruby version (use ruby --version):
    ruby 2.5.8p224 (2020-03-31 revision 67882) [x86_64-linux-musl]

  • OS (e.g: cat /etc/os-release):

NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.6
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
  • Splunk version:
Splunk Version
    7.0.11.1
Splunk Build
    890181452bae 
  • Others:

File Buffer

I want to use file buffers with this plugin.
When setting this config the following messages appear in the fluentd logs.

Config:

<match **>
  @type splunk_hec
  hec_host hec.mydomain.com
  hec_port 58495
  hec_token A7A39208-0257-4B10-BEE7-88D6ACD32EF0
  index test
  source ${tag}
  sourcetype _json
  insecure_ssl true

  buffer_type file
  buffer_path /var/lib/fluentd/to-splunk/all/
</match>

Logs:
2019-03-25 13:56:08 +0000 [warn]: parameter 'buffer_type' in <match **> is not used
2019-03-25 13:56:08 +0000 [warn]: parameter 'buffer_path' in <match **> is not used

Can you add support for file buffers please?
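For what it's worth, the v1 <buffer> section (used with splunk_hec elsewhere in this document) is the way to get a file buffer with this plugin; a sketch with the same path:

<match **>
  @type splunk_hec
  hec_host hec.mydomain.com
  hec_port 58495
  hec_token A7A39208-0257-4B10-BEE7-88D6ACD32EF0
  index test
  source ${tag}
  sourcetype _json
  insecure_ssl true

  <buffer>
    @type file
    path /var/lib/fluentd/to-splunk/all/
  </buffer>
</match>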

because http_parser.rb-0.6.0 conflicts with http_parser.rb (= 0.5.3)

I'm building a docker image based on the official fluentd image using the provided instructions.
Building is fine, but launching it gives me a conflict problem.

What happened:

Dockerfile

FROM fluent/fluentd:v1.9-1

# Use root account to use apt
USER root

RUN apk add --no-cache --update --virtual .build-deps \
        sudo build-base ruby-dev \
 # customize the following instruction as you wish
 && sudo gem install fluent-plugin-splunk-hec \
 && sudo gem sources --clear-all \
 && apk del .build-deps \
 && rm -rf /home/fluent/.gem/ruby/2.5.0/cache/*.gem

COPY fluent.conf /fluentd/etc/
COPY entrypoint.sh /bin/

USER fluent

Error at launch

2020-03-16 09:14:57 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2020-03-16 09:14:57 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.4.4'
2020-03-16 09:14:57 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.1'
2020-03-16 09:14:57 +0000 [info]: gem 'fluentd' version '1.9.3'
Traceback (most recent call last):
        28: from /usr/bin/fluentd:23:in `<main>'
        27: from /usr/bin/fluentd:23:in `load'
        26: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/bin/fluentd:8:in `<top (required)>'
        25: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
        24: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
        23: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/command/fluentd.rb:330:in `<top (required)>'
        22: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/supervisor.rb:547:in `run_supervisor'
        21: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/engine.rb:80:in `run_configure'
        20: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/engine.rb:105:in `configure'
        19: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/root_agent.rb:146:in `configure'
        18: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/agent.rb:64:in `configure'
        17: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/agent.rb:64:in `each'
        16: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/agent.rb:74:in `block in configure'
        15: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/agent.rb:130:in `add_match'
        14: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/plugin.rb:109:in `new_output'
        13: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/plugin.rb:155:in `new_impl'
        12: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/registry.rb:44:in `lookup'
        11: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/registry.rb:99:in `search'
        10: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/registry.rb:99:in `each'
         9: from /usr/lib/ruby/gems/2.5.0/gems/fluentd-1.9.3/lib/fluent/registry.rb:102:in `block in search'
         8: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
         7: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
         6: from /usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-splunk-hec-1.2.1/lib/fluent/plugin/out_splunk_hec.rb:7:in `<top (required)>'
         5: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:39:in `require'
         4: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:128:in `rescue in require'
         3: from /usr/lib/ruby/2.5.0/rubygems.rb:217:in `try_activate'
         2: from /usr/lib/ruby/2.5.0/rubygems.rb:224:in `rescue in try_activate'
         1: from /usr/lib/ruby/2.5.0/rubygems/specification.rb:1438:in `activate'
/usr/lib/ruby/2.5.0/rubygems/specification.rb:2325:in `raise_if_conflicts': Unable to activate fluent-plugin-splunk-hec-1.2.1, because http_parser.rb-0.6.0 conflicts with http_parser.rb (= 0.5.3) (Gem::ConflictError)

fluentd.conf

<source>
  @type  forward
  port  24224
</source>

<filter **>
  @type stdout
</filter>

<match logs.infra>
  @type splunk_hec
  hec_host splip404.foo.net
  hec_port 8088
  hec_token xxxxxxx....
  insecure_ssl true
</match>

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

  1. take the Dockerfile
  2. build the image
  3. run the image with docker run -ti <hash>

Environment:

  • Kubernetes version (use kubectl version): Openshift 4.3.x
  • Ruby version (use ruby --version): 2.5
  • OS (e.g: cat /etc/os-release): Alpine 3.9.5
  • Splunk version: n/a
  • Others: n/a

Incorrect fluentd docker version

What happened: when building and starting the docker image, an error is thrown because the installed Fluentd version conflicts with the version specified in the gemspec:

/usr/local/lib/ruby/2.6.0/rubygems/specification.rb:2302:in `raise_if_conflicts': Unable to activate fluent-plugin-splunk-hec-1.2.0, because fluentd-1.4.2 conflicts with fluentd (= 1.4) (Gem::ConflictError)

How to reproduce it (as minimally and precisely as possible):

  1. Build the docker image: docker build . -t fluentd-splunk-hec
  2. Start the docker image with a fluent.conf file that includes @type splunk_hec

Unable to find git branch for 1.1.2 gem

What happened:
I am using the gem https://rubygems.org/gems/fluent-plugin-splunk-hec/versions/1.1.2 that was published on 12th June, 2019.
The 1.1.1 version of the same gem was yanked from the repository.

I am looking for the corresponding git branch for the 1.1.2 gem, but I can only see the
Fluent Plugin Splunk HEC v1.1.1 branch and no branch for the 1.1.2 gem at the releases link - https://github.com/splunk/fluent-plugin-splunk-hec/releases.

The 1.1.1 GitHub branch was also created on the same day, 12 June 2019, but its assets list shows only this:
Assets:
fluent-plugin-splunk-hec-1.1.1.gem

source code (zip)
source code (tar.gz)

What you expected to happen:
I expected to find a corresponding branch for the 1.1.2 gem as well.

Can you tell me where the branch/tag for the 1.1.2 gem is? Or was the 1.1.1 branch used to publish the 1.1.2 gem?

Latest Docker image has no splunk_hec plugin installed

What happened:
Fluentd fails when trying to run the official image splunk/fluentd-hec:1.2.1:
config error file="/fluentd/etc/fluentd.conf" error_class=Fluent::ConfigError error="Unknown output plugin 'splunk_hec'. Run 'gem search -rd fluent-plugin' to find plugins"

What you expected to happen:
The image works as usual, with the splunk_hec plugin installed.

How to reproduce it (as minimally and precisely as possible):
Run:
docker run --rm -ti --entrypoint="/bin/sh" splunk/fluentd-hec:1.2.1

Inside the container, run:
gem list | grep splunk
The output will be empty.

Anything else we need to know?:

Environment:
not important in this case

Support for docker image on arm

What would you like to be added:
This project does not provide ARM images via Docker Hub.
Why is this needed:
I have ARM-based nodes in my deployment from which I would like to collect logs.

Thank you!

SSL_connect SYSCALL : returned=5 SSLv3/TLS write client hello

Environment:

  • Ruby version: ruby 2.5.1p57
  • OS: Ubuntu 18.04.3 LTS Bionic
  • Splunk version: 7.2.0 (build 8c86330ac18)

Hi,

I'm trying to use the fluent-plugin-splunk-hec (install via gem install fluent-plugin-splunk-hec) to send messages to our Splunk server.

The Fluentd configuration:

    @type splunk_hec
    hec_host splunk.adm.toto.com
    hec_port 8088
    hec_token string_token

    client_cert /etc/ssl/certs/server-cert.pem
    client_key /etc/ssl/private/server-key.pem
    ca_file /usr/local/share/ca-certificates/ca.crt
    insecure_ssl true

    <buffer>
      @type file
      path /data/fluent/.buffer/siteauth_log_splunk
      total_limit_size 1024MB
      # chuck + enqueue
      chunk_limit_size 16MB
      flush_mode interval
      flush_interval 5s
      # flush thread
      flush_thread_count 4
      retry_type exponential_backoff
      retry_timeout 1h
      retry_max_interval 30
      overflow_action drop_oldest_chunk
    </buffer>

    <format>
      @type single_value
    </format>

The Http configuration on the Splunk server side:

$> cat /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf 
[http]
disabled=1
port=8088
enableSSL=1
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true

$> cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf 
[http]
disabled=0
serverCert=/etc/ssl/certs/splunk-ssl.pem
requireClientCert=true
caCertFile=/usr/local/share/ca-certificates/ca.crt
enableSSL=1
sslVersions=tls

[http://FluentTestAcl]
disabled = 0
token =  string_token
useACK = false

So here is the problem encountered: when we receive messages, the Fluentd logs show:

[warn]: #2 failed to flush the buffer. retry_time=0 next_retry_seconds=2019-12-17 14:23:07 +0100 chunk="599e63a3917ee9031dfcd6fdeaca6e42" error_class=OpenSSL::SSL::SSLError error="SSL_connect SYSCALL returned=5 errno=0 state=SSLv3/TLS write client hello"

On the Splunk server side into splunkd.log:

WARN  HttpListener - Socket error from fluentd_server_ip:58080 while idling: error:140D9115:SSL routines:ssl_get_prev_session:session id context uninitialized

What am I missing?
Thanks for your help.

Plugin does not work on fluentd 1.9.2

Fluentd 1.9.2 has a dependency on tzinfo (>= 1.0, < 3.0), whereas the activesupport dependency in the Splunk plugin requires tzinfo ~> 1.1.

What happened:
Error:
/usr/local/lib/ruby/2.6.0/rubygems/specification.rb:2302:in `raise_if_conflicts': Unable to activate activesupport-6.0.2.1, because tzinfo-2.0.1 conflicts with tzinfo (~> 1.1) (Gem::ConflictError)

What you expected to happen:
Successful splunk access via fluentd 1.9.2

How to reproduce it (as minimally and precisely as possible):
Configure fluentd 1.9.2 to forward data to splunk with latest splunk plugin.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Ruby version (use ruby --version):
  • OS (e.g: cat /etc/os-release):
  • Splunk version:
  • Others:
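
One possible local workaround for the tzinfo conflict described above (a sketch only, assuming the conflict comes from tzinfo 2.x being activated while activesupport still requires tzinfo ~> 1.1) is to pin tzinfo to the 1.x series in the Gemfile used to install Fluentd and this plugin:

gem "fluentd", "1.9.2"
gem "fluent-plugin-splunk-hec"
# pin tzinfo to the 1.x series so activesupport's ~> 1.1 constraint can be satisfied
gem "tzinfo", "~> 1.2"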

@type splunk_hec clashes with fluentd's splunk_hec

Both https://github.com/fluent/fluent-plugin-splunk/ and https://github.com/splunk/fluent-plugin-splunk-hec/ use @type splunk_hec in their configuration files. Because they share the same @type namespace, the configurations clash, so as far as I can tell you cannot use both plugins on the same Fluentd instance.

Examples of why you might want to mix and match the plugins -

One suggestion is to change the @type to something other than splunk_hec, such as splunk_http_event_collector, to avoid the clash (see the sketch below). Another is to split the current plugin into splunk_hec_json and splunk_hec_raw (if raw is supported at some point), avoiding the clash that way.
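
For illustration only, the first option would make a configuration look like the following; splunk_http_event_collector is a hypothetical type name, not something either plugin currently registers, and the parameter values are placeholders:

<match **>
  # hypothetical renamed type for this plugin; all other parameters unchanged
  @type splunk_http_event_collector
  hec_host <hec host>
  hec_port 8088
  hec_token <hec token>
</match>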

Q about the Dockerfile

The Dockerfile contains this line:
COPY *.gem /tmp/

Where can I see or get the list of gems? Many thanks.
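
For context, the *.gem files picked up by that COPY line would typically be produced by building the gem from this repository before running docker build; a sketch (the gemspec filename and image tag are assumptions):

gem build fluent-plugin-splunk-hec.gemspec
docker build -t fluent-plugin-splunk-hec .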

Error logging prints incorrect port

What happened:

The configured HEC endpoint (10.167.21.188:8088) was down for a short period and the following error messages were seen:

2020-04-17 12:08:50 +0000 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2020-04-17 12:08:51 +0000 chunk="5a37b69207d35c52e3a43f159b486375" error_class=Net::HTTP::Persistent::Error error="connection refused: 10.167.21.188:80"
  2020-04-17 12:08:50 +0000 [warn]: #0 suppressed same stacktrace
2020-04-17 12:08:52 +0000 [warn]: #0 failed to flush the buffer. retry_time=2 next_retry_seconds=2020-04-17 12:08:54 +0000 chunk="5a37b69207d35c52e3a43f159b486375" error_class=Net::HTTP::Persistent::Error error="connection refused: 10.167.21.188:80"
  2020-04-17 12:08:52 +0000 [warn]: #0 suppressed same stacktrace

What you expected to happen:

The log message should print the correct port that was being refused.

How to reproduce it (as minimally and precisely as possible):
Configure any HEC endpoint that's not up?

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
$ oc version
Client Version: v4.2.21
Server Version: 4.2.26
Kubernetes Version: v1.14.6-152-g117ba1f
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-13T18:08:14Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-152-g117ba1f", GitCommit:"072272a", GitTreeState:"clean", BuildDate:"2020-03-23T05:25:55Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  • Ruby version (use ruby --version):
  • OS (e.g: cat /etc/os-release):
  • Splunk version:
    Fluentd plugin version 1.2.1
  • Others:

How to leverage Splunk HEC plugin's Prometheus metrics

We are trying to troubleshoot latency and log loss issues on our end.
The metrics exposed by this plugin would be very useful for us. However, there is no documentation about how to use them.
Can the README be updated with that information?
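
For anyone in a similar situation, a minimal sketch of exposing Fluentd metrics over HTTP with fluent-plugin-prometheus looks like the following; this assumes the HEC plugin's counters are registered with the shared Prometheus registry, and the bind address, port, and path are illustrative:

<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
  metrics_path /metrics
</source>

<source>
  @type prometheus_monitor
</source>

With a configuration like this, the metrics would be scrapeable at http://<node>:24231/metrics.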

Improve SSL Documentation

How is each of the certs generated, who generates the certs, who signs the certs, and do self-signed certs work?

E.G.

  • client_cert - who provides it, and who signs it?
  • client_key - assuming this is the private key for the client_cert, is there a way to password-protect it? How does the password get protected?
  • ca_file - who is this certificate for? Who signs it?
  • ca_path - naming conventions? A single directory, or can things be nested?
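
As an illustration of the self-signed case only (not the project's documented procedure), a test CA plus a client certificate and key could be generated with openssl along these lines, with ca.crt then used for ca_file, client.crt for client_cert, and client.key for client_key:

# hypothetical self-signed CA for testing
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=test-ca" -keyout ca.key -out ca.crt
# client key and CSR, then a client certificate signed by the test CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=fluentd-client" -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt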
