
logstash-codec-sflow's People

Contributors

ashangit, dejongm, kzemek, robcowart, thewebface


logstash-codec-sflow's Issues

Logstash not receiving sflow as input

Hi,

Hope you can help me with the issue I'm having.

Current scenario:

We are currently integrating both netflow and sflow into Elastic with Logstash version 7.3.2 on a CentOS 7 server. We've managed to visualize netflow correctly, but when trying to ingest sflow we are not seeing anything.

For testing purposes, we are basically executing Logstash from the command line:

On port 6344, we are receiving sflow from a NEXUS 3000 with the following configuration:

feature sflow
sflow sampling-rate 4096
sflow max-sampled-size 128 -- 64
sflow counter-poll-interval 1
sflow max-datagram-size 1400
sflow collector-ip X.X.X.X vrf default
sflow collector-port 6344
sflow agent-ip Y.Y.Y.Y
no sflow extended switch
...
sflow data-source interface ...

Command line:


/usr/share/logstash/bin/logstash -e 'input { udp { port => 6344 codec => sflow }}' --debug
...
[DEBUG] 2019-10-17 13:35:57.248 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@snmp_interface = false
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@snmp_community = "public"
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@interface_cache_size = 1000
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@interface_cache_ttl = 3600
[INFO ] 2019-10-17 13:35:57.296 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6344"}
[DEBUG] 2019-10-17 13:35:57.420 [Api Webserver] agent - Starting puma
[DEBUG] 2019-10-17 13:35:57.557 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[INFO ] 2019-10-17 13:35:57.584 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6344", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[DEBUG] 2019-10-17 13:35:57.785 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 13:35:57.812 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 13:35:58.116 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-17 13:35:58.365 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-17 13:35:56.866 [[main]>worker3] CompiledPipeline - Compiled output
...
[DEBUG] 2019-10-17 11:08:58.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:08:59.329 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:08:59.329 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 11:09:03.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:09:04.336 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:09:04.336 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 11:09:08.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:09:09.347 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:09:09.347 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
...

As you can see, NO data is coming through Logstash, and no error or warning is shown. But if we capture packets, we can see sflow traffic coming in on port 6344:

[root@elastic-netflow ~]# tcpdump -vvv -i any port 6344
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:05:30.791669 IP (tos 0x0, ttl 63, id 8334, offset 0, flags [none], proto UDP (17), length 1128)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1100
13:05:30.803948 IP (tos 0x0, ttl 63, id 8335, offset 0, flags [none], proto UDP (17), length 1376)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
13:05:30.815049 IP (tos 0x0, ttl 63, id 8336, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:30.830503 IP (tos 0x0, ttl 63, id 8337, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:31.811592 IP (tos 0x0, ttl 63, id 8338, offset 0, flags [none], proto UDP (17), length 464)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 436
13:05:31.824073 IP (tos 0x0, ttl 63, id 8339, offset 0, flags [none], proto UDP (17), length 1376)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
13:05:31.836283 IP (tos 0x0, ttl 63, id 8340, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:31.852217 IP (tos 0x0, ttl 63, id 8341, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:32.833621 IP (tos 0x0, ttl 63, id 8342, offset 0, flags [none], proto UDP (17), length 464)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 436
...
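
As a further check (a sketch, assuming sflowtool is available on the collector), the datagrams can be decoded by hand to confirm they are valid sFlow v5:

# Prints decoded samples if the datagrams arriving on port 6344 are valid sFlow v5
sflowtool -p 6344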

Same test, but with netflow:

/usr/share/logstash/bin/logstash -e 'input { udp { port => 2055 codec => netflow }}' 
...
[INFO ] 2019-10-17 13:32:24.406 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:2055"}
[INFO ] 2019-10-17 13:32:24.502 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-10-17 13:32:24.606 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:2055", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] 2019-10-17 13:32:25.231 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
          "host" => "L.L.L.L",
       "netflow" => {
               "protocol" => 17,
          "ipv4_dst_addr" => "W.W.W.W",
              "tcp_flags" => 0,
               "in_bytes" => 80,
              "icmp_type" => 0,
             "input_snmp" => 0,
           "flow_seq_num" => 30583,
                "src_tos" => 0,
             "flowset_id" => 257,
            "l4_src_port" => 123,
          "mul_igmp_type" => 0,
                "in_pkts" => 1,
            "l4_dst_port" => 123,
          "ipv4_src_addr" => "Z.Z.Z.Z",
            "output_snmp" => 156,
              "direction" => 1,
          "flow_end_msec" => 1571311938000,
                "version" => 9,
        "flow_start_msec" => 1571311938000
    },
      "@version" => "1",
    "@timestamp" => 2019-10-17T11:32:25.000Z
}
...

Any idea?

Thanks in advance!!!

Idea: Add various field translations

Hello,

I just started using logstash-codec-sflow and added some dictionary files to add more information. In order to translate interface IDs from sflow messages to names I can more easily associate with my switches' interfaces, I first used this command to get a list of id => name:
snmpwalk -v2c -c public <switch IP> 1.3.6.1.2.1.2.2.1.2

Afterwards I modified that output a little to get a valid yaml dictionary file.
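For example, the resulting dictionary looks roughly like this (hypothetical ifIndex => name pairs; the actual values come from the snmpwalk output):

"512": "ge-0/0/0"
"513": "ge-0/0/1"
"514": "ae0"
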
I used this file to translate the following fields:
"source_id_index"
"input_interface"
"output_interface"

One example:

translate {
  field => "input_interface"
  dictionary_path => "/etc/logstash/dictionaries/ex2200_interface_names.yaml"
  fallback => "UNKNOWN"
  destination => "input_interface_name"
}

I got the idea from here: https://github.com/NETWAYS/sflow/blob/master/lib/sflow/snmp/iface_names.rb

I also looked at https://whiskeyalpharomeo.com/2015/06/13/logstash-and-sflow/ and included the iana_services and iana_protocols dictionary files:

translate {
  field => "ip_protocol"
  dictionary_path => "/etc/logstash/dictionaries/iana_protocols.yaml"
  fallback => "UNKNOWN"
  destination => "ip_protocol_name"
}
translate {
  field => "src_port"
  dictionary_path => "/etc/logstash/dictionaries/iana_services.yaml"
  fallback => "UNKNOWN"
  destination => "src_port_name"
}
translate {
  field => "dst_port"
  dictionary_path => "/etc/logstash/dictionaries/iana_services.yaml"
  fallback => "UNKNOWN"
  destination => "dst_port_name"
}

Would it be possible or make sense to include those translations directly into the plugin (services and protocols static, interface names dynamic)?
Thanks for your work.

Logstash error when using sflow codec

Hello,
Sometimes I see the following error in the Logstash logs, and afterwards the sflow input stops working.

[ERROR][logstash.inputs.udp      ] Exception in inputworker {
"exception"=>#<IOError: data truncated>,
"backtrace"=>[
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/io.rb:279:in `readbytes'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/string.rb:118:in `read_and_return_value'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/base_primitive.rb:127:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"(eval):2:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/array.rb:328:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/struct.rb:138:in `do_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/base.rb:147:in `read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/base.rb:254:in `start_read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/base.rb:145:in `read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.3.4/lib/bindata/base.rb:21:in `read'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-sflow-2.0.0/lib/logstash/codecs/sflow.rb:105:in `decode'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:118:in `inputworker'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:89:in `udp_listener'"
]}

Unable to start logstash with latest sflow codec

[ERROR][logstash.plugins.registry] Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- logstash/codecs/sflow>, :path=>"logstash/codecs/sflow", :type=>"codec", :name=>"sflow"}

Logstash >7.9.9 support

Is there a reason that logstash-core 7.9.9 is the highest version supported?
Is it possible to increase this to say <8.0.0?
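
For reference, the change being requested amounts to loosening the version constraint in the plugin's gemspec, roughly like this (a hypothetical sketch; the exact dependency line and bounds in this repo's gemspec may differ):

# Hypothetical gemspec line; the actual dependency name and versions may differ
s.add_runtime_dependency "logstash-core", ">= 5.0.0", "< 8.0.0"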

Feature Request: BGP information

BGP information would be great!!!

BGP next hop IP addr (or Next hop IP)
BGP adjacent prev/next AS

It is sent (if I'm correct) like this:

uint32_t as;                    /* AS number for this gateway */
uint32_t src_as;                /* AS number of source (origin) */
uint32_t src_peer_as;           /* AS number of source peer */
uint32_t dst_as_path_segments;  /* number of segments in path */
SFLExtended_as_path_segment *dst_as_path;  /* list of seqs or sets */
uint32_t communities_length;    /* number of communities */
uint32_t *communities;          /* set of communities */
uint32_t localpref;             /* LocalPref associated with this route */

extendedType ROUTER
uint32_t nextHop

Sflow Codec Throwing an error

When I try to start Logstash with the sflow codec, I get an error. I installed the codec with the logstash-plugin install command, which seems to pull from the rubygems repo. It looked like that was linked to this GitHub repo, so if I am wrong let me know.

This is the error I am getting:

{:timestamp=>"2016-07-27T12:41:17.315000-0600", :message=>"fetched an invalid config", :config=>"input { \n\tudp {\n\t\tport => 6343 \n\t\tcodec => sflow \n\t}\n}\n\noutput {\n\n\telasticsearch {\n\n\t\tindex => \"sflow-%{+YYYY.MM.dd}\"\n\n\t}\n}\n\n\n", :reason=>"field 'type' is a reserved name in EthernetFrameData", :level=>:error}

This is my config:

input { udp { port => 6343 codec => sflow type => sflow}}

output { file { path => "/tmp/logstashsflow.out"}}

Codec not parsing sflow metrics

So I have installed this plugin into Logstash using bin/plugin install logstash-codec-sflow and set up a new input:

input {
  udp {
    port => 6343
    codec => sflow
    type => sflow
  }
}

There are no filters, and its output should be sent directly to Elasticsearch. However, this is not happening; in actuality we are getting the following error: {:timestamp=>"2016-05-04T21:12:34.527000+0000", :message=>"Unknown record entreprise 0, format 0", :level=>:warn}

We have confirmed with tcpdump and sflowtool that our switches are sending to the configured port, and with these tools we have been able to confirm that we are emitting sflow v5 metrics.

logstash version: 2.2.0
codec version: 1.0.0

Any help would be greatly appreciated, as we have resorted to using sflowtool and a pipe input for Logstash, which feels wrong.
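
For reference, that pipe-input workaround looks roughly like this (a sketch, assuming sflowtool is on the PATH; -l selects sflowtool's line-by-line output):

input {
  pipe {
    command => "sflowtool -p 6343 -l"
  }
}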

Version 2.0.2 still missing LAG support?

Hi!

I'm running version 2.0.2 of the sFlow codec, but I'm still seeing issues with LAG interfaces. My logs are full of the following message:

[2019-01-09T18:12:00,676][WARN ][logstash.codecs.sflow ] Unknown record entreprise 0, format 7

This LAG support was supposed to be added about two years ago, so I'm not sure what's going wrong here. Can you please advise?

Thanks!
Joost

Support sFlow HTTP structures

Hi
I configured sflow to receive logs from an F5 LTM with elastiflow, but got the following errors:

[2018-12-12T00:27:09,070][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-12-12T00:28:38,990][WARN ][logstash.codecs.sflow    ] Unknown record entreprise 0, format 2102
[2018-12-12T00:28:38,993][WARN ][logstash.codecs.sflow    ] Unknown record entreprise 0, format 2100
[2018-12-12T00:28:38,994][WARN ][logstash.codecs.sflow    ] Unknown record entreprise 0, format 2206
[2018-12-12T00:28:38,996][DEBUG][logstash.codecs.sflow    ] sample: {:sample_entreprise=>0, :sample_format=>1, :sample_length=>448, :sample_data=>{:flow_sequence_number=>495, :source_id_type=>3, :source_id_index=>20, :sampling_rate=>1024, :sample_pool=>503912, :drops=>0, :input_interface=>928, :output_interface=>1073741823, :record_count=>3, :records=>[{:record_entreprise=>0, :record_format=>2102, :record_length=>20, :record_data=>""}, {:record_entreprise=>0, :record_format=>2100, :record_length=>20, :record_data=>""}, {:record_entreprise=>0, :record_format=>2206, :record_length=>352, :record_data=>""}]}}
[2018-12-12T00:28:46,191][ERROR][logstash.inputs.udp      ] UDP listener died {:exception=>java.nio.channels.ClosedSelectorException, :backtrace=>["sun.nio.ch.SelectorImpl.keys(SelectorImpl.java:68)", "org.jruby.util.io.SelectorPool.put(SelectorPool.java:88)", "org.jruby.util.io.SelectExecutor.selectEnd(SelectExecutor.java:59)", "org.jruby.util.io.SelectExecutor.go(SelectExecutor.java:44)", "org.jruby.RubyIO.select(RubyIO.java:3405)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:121)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:409)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$block$start_input$1(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:403)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)", "org.jruby.runtime.Block.call(Block.java:124)", "org.jruby.RubyProc.call(RubyProc.java:289)", "org.jruby.RubyProc.call(RubyProc.java:246)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104)", "java.lang.Thread.run(Thread.java:748)"]}
[2018-12-12T00:28:46,197][ERROR][logstash.inputs.udp      ] UDP listener died {:exception=>java.nio.channels.ClosedSelectorException, :backtrace=>["sun.nio.ch.SelectorImpl.keys(SelectorImpl.java:68)", "org.jruby.util.io.SelectorPool.put(SelectorPool.java:88)", "org.jruby.util.io.SelectExecutor.selectEnd(SelectExecutor.java:59)", "org.jruby.util.io.SelectExecutor.go(SelectExecutor.java:44)", "org.jruby.RubyIO.select(RubyIO.java:3405)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:121)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:409)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$block$start_input$1(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:403)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)", "org.jruby.runtime.Block.call(Block.java:124)", "org.jruby.RubyProc.call(RubyProc.java:289)", "org.jruby.RubyProc.call(RubyProc.java:246)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104)", "java.lang.Thread.run(Thread.java:748)"]}
[2018-12-12T00:28:46,201][ERROR][org.logstash.Logstash    ] java.lang.StackOverflowError
[2018-12-12T00:28:46,201][ERROR][logstash.inputs.udp      ] UDP listener died {:exception=>java.nio.channels.ClosedSelectorException, :backtrace=>["sun.nio.ch.SelectorImpl.keys(SelectorImpl.java:68)", "org.jruby.util.io.SelectorPool.put(SelectorPool.java:88)", "org.jruby.util.io.SelectExecutor.selectEnd(SelectExecutor.java:59)", "org.jruby.util.io.SelectExecutor.go(SelectExecutor.java:44)", "org.jruby.RubyIO.select(RubyIO.java:3405)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:121)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$udp_listener$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_udp_minus_3_dot_3_dot_4.lib.logstash.inputs.udp.RUBY$method$run$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:409)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$block$start_input$1(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:403)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)", "org.jruby.runtime.Block.call(Block.java:124)", "org.jruby.RubyProc.call(RubyProc.java:289)", "org.jruby.RubyProc.call(RubyProc.java:246)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104)", "java.lang.Thread.run(Thread.java:748)"]}

and then logstash will restart.

F5 version: 11.6.2

Logstash error with sflow codec

Hello,
I get an error when using the sflow codec with Logstash. After this error, the input stops working.

[2017-09-12T18:18:34,440][ERROR][logstash.inputs.udp      ] 
Exception in inputworker
 {"exception"=>#<EOFError: End of file reached>,
  "backtrace"=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/io.rb:314:in `read'",
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/io.rb:276:in `readbytes'", 
  "(eval):23:in `read_and_return_value'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/base_primitive.rb:129:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lifgb/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:322:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:322:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "(eval):2:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:322:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:322:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "org/jruby/RubyArray.java:1613:in `each'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/struct.rb:139:in `do_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/base.rb:147:in `read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/base.rb:254:in `start_read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/base.rb:145:in `read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/base.rb:21:in `read'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-sflow-2.0.0/lib/logstash/codecs/sflow.rb:105:in `decode'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:118:in `inputworker'", 
  "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.:89:in `udp_listener'"]}

I found the exact packets which cause this error:
EOFError packets.zip

regards,
Aleksey

Logstash codec fails listening on IPv6

Sorry to post this in multiple places, just trying to have more visibility. Unfortunately nobody has responded yet.

For details see post

The gist of the post is that IPv6 inputs/codecs fail using Logstash v6.2.2 but pass when using v5.2.8.

I did find other Logstash inputs that were failing with IPv6 as well. Again, see the post above.

UDP listener died {:exception=>java.nio.channels.UnsupportedAddressTypeException, :backtrace=>
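
For reference, a minimal input that exercises IPv6 listening looks like this (a sketch; host is the standard udp input option, and "::" binds the listener to all IPv6 addresses):

input {
  udp {
    host => "::"
    port => 6343
    codec => sflow
  }
}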

Thanks!
dave

Feature request: MPLS support

On the main page it is stated:

it is able to decode Ethernet, 802.1Q VLAN, IPv4, UDP and TCP header

I've inspected the decoded logs and it seems the plugin is unable to analyze MPLS packets. Is there a plan to add support for this?

Feature Request: extended_mpls_tunnel structure

I'm trying to get elastiflow working with Brocade MLXe-4 routers, 6 or so of them.
They are using MPLS.

Rob Cowart over at elastiflow investigated the error I got from the log:
[WARN ][logstash.codecs.sflow ] Invalid sflow packet received (End of file reached)

With an answer:

The issue is that the sFlow flow samples include the extended_mpls_tunnel structure, which is currently not supported by the Logstash sFlow Codec

Elastiflow Issue: #459

Can this be implemented?
Without it I can't parse any data into elastiflow, leaving a blank Kibana :(

Here is an sflow pcap for reference and troubleshooting/implementation:
sflow.zip

Regards
Kim

Feature Request: Hsflowd

Hsflowd support would be extremely beneficial for monitoring network data via applications such as Elastiflow.

sFlow metrics are not accurate

Despite the fact that the sflow protocol is sample-based and highly dependent on the sampling-rate parameter (the lower the value, the better the precision), some commercial tools can show ~99.9% accurate results. I have elastiflow + logstash-sflow-plugin and telegraf-snmp + prometheus + grafana installed, and I use these two setups to compare results. To have an apples-to-apples comparison, I take values from both setups for a specific interface which carries the traffic of just a single public IP. Counters (in/out) gathered through SNMP and those gathered from sflow (logstash sflow plugin) are very different. Even if you don't have telegraf-snmp and all that stuff, it's easy to check with the following script:

#!/bin/bash
#
#
delay=60

function getbytes_out()
{
    snmpwalk -v3 -l authPriv -u <myuser> -a SHA -A <super@uth> -x AES -X <secretp@ssword> <device name or ip> IF-MIB::ifHCOutOctets.<snmp ifIndex number> | awk '{print $4}'
    return $?
}

function getbytes_in()
{
    snmpwalk -v3 -l authPriv -u <myuser> -a SHA -A <super@uth> -x AES -X <secretp@ssword> <device name or ip> IF-MIB::ifHCInOctets.<snmp ifIndex number> | awk '{print $4}'
    return $?
}

while true
do
    bytes_start_out=$(getbytes_out)
    bytes_start_in=$(getbytes_in)
    sleep ${delay}
    bytes_end_out=$(getbytes_out)
    bytes_end_in=$(getbytes_in)
    result_out="$(($bytes_end_out - $bytes_start_out))"
    result_in="$(($bytes_end_in - $bytes_start_in))"
    speed_out=$(echo "scale=2; $result_out / ${delay} * 8 / 1000 / 1000" | bc)
    speed_in=$(echo "scale=2; $result_in / ${delay} * 8 / 1000 / 1000" | bc)
    echo "AVG Speed (Mbps) for last ${delay}s: ${speed_in} In  ${speed_out} Up"
done

exit $?

The values I get from prometheus and the script above are very close. But again, the values from prometheus (telegraf-snmp) / the script above and those from logstash are very different.
To give you an idea, for a 1G uplink, elastiflow+logstash shows me something like 1.6G of utilization in the out direction (I have filters in place on flow.src_addr and flow.dst_addr to distinguish in/out traffic). For the in direction it is the same thing. And the thing is that there's no pattern; I tried to find a correlation or a ratio between the results, but didn't find one. :(
My Cisco Nexus devices don't allow me to set a sampling rate lower than 4096, and this is what negatively impacts the precision.
But even with that said, some companies overcome this by leveraging interface counters.
Here's an interesting explanation about sFlow accuracy on plixer.com (their commercial product is called Scrutinizer): https://www.plixer.com/blog/why-doesnt-sflow-look-accurate.
From what I understand, they say that in order to make an sFlow collector more accurate, they use interface counters (which are part of sflow, i.e. the COUNTERSSAMPLE records coming from the sflow device) and scale all FLOWSAMPLE byte counts to fit the amount of octets that passed through that particular interface. COUNTERSSAMPLE is a special part of the sflow protocol which allows you to bring in not just bandwidth utilization but other metrics too, e.g. cpu, mem, etc.
It would be nice to fix this issue.
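
The correction described in that post can be sketched roughly as follows (hypothetical Ruby, not part of this codec): rescale the byte counts extrapolated from the flow samples so that, per interface and interval, they sum to the octet delta observed in the counter samples.

# Hypothetical sketch of counter-based renormalization (not part of this codec).
# flows: flow records for one interface over one interval, each with :bytes set
#   to frame_length * sampling_rate.
# counter_delta: the ifHCInOctets/ifHCOutOctets delta for the same interface and
#   interval, taken from the sFlow COUNTERSSAMPLE records.
def renormalize(flows, counter_delta)
  extrapolated = flows.sum { |f| f[:bytes] }
  return flows if extrapolated.zero?
  scale = counter_delta.to_f / extrapolated
  flows.map { |f| f.merge(bytes: (f[:bytes] * scale).round) }
end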

Logstash 6.2.0 StackOverflowError

Starting with Logstash 6.2.0 I get a StackOverflowError after 100 sflow packets.

Steps to reproduce:

  • download logstash (curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-6.2.0.tar.gz)
  • install plugin (cd logstash; ./bin/logstash-plugin install logstash-codec-sflow)
  • run with config (./bin/logstash -f ...)
  • replay the same sample of sflow packets multiple times (for i in $(seq 20); do echo $i; udpreplay -c 100 sample.pcap; done)

The sample contains 10 sflow packets. After about 8 replays Logstash will crash with a StackOverflowError.

This is reproducible with other samples of sflow packets.

I tested with 6.1.0 (works), 6.1.3 (works), 6.2.0 (crash), 6.2.2 (crash), 6.5.4 (crash)

The Logstash config is:

input {
	udp {
		port => 6343
		codec => sflow
		type => sflow
	}
}
output {
	stdout {
	}
}

With the sflow codec removed, it works in all versions.

Error parsing sflow

Dear,
the codec gives me this error:

{:timestamp=>"2016-10-20T15:28:38.220000+0200", :message=>"Unknown record entreprise 0, format 7", :level=>:warn}

referring to the existing lag_port_stats structure, as explained here: http://www.sflow.org/developers/structures.php.

Another thing I am not able to understand is what exactly frame_length_times_sampling_rate is:

frame length * sampling rate (1 packet every 1024 in my case) = bit/sec??
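
As a rough worked example of the usual extrapolation (a sketch, not an authoritative answer): each sampled frame is taken to represent sampling_rate frames of the same length, so the product estimates bytes on the wire, not bits/sec; dividing by a time window and multiplying by 8 turns it into a rate.

# Hypothetical numbers: one 1000-byte frame sampled at 1-in-1024 over a 60 s window
echo "scale=2; 1000 * 1024 * 8 / 60" | bc    # ~136533.33 bit/s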

Thanks

Unknown record entreprise 25506, format 1003

This warning is getting spammed in the log file for seemingly every packet received. Any way to stop it?

[2017-10-16T15:34:56,659][WARN ][logstash.codecs.sflow    ] Unknown record entreprise 25506, format 1003
