logstash-integration-rabbitmq's Introduction

Logstash Plugin

This is a plugin for Logstash.

It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.

Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will first be converted into asciidoc and then into HTML. All plugin documentation is placed under one central location.

Need Help?

Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

Developing

1. Plugin Development and Testing

Code

  • To get started, you'll need JRuby with the Bundler gem installed.

  • Create a new plugin or clone an existing one from the GitHub logstash-plugins organization. We also provide example plugins.

  • Install dependencies

bundle install

Test

  • Update your dependencies
bundle install
  • Run tests
bundle exec rspec

2. Running your unpublished Plugin in Logstash

2.1 Run in a local Logstash clone

  • Edit Logstash Gemfile and add the local plugin path, for example:
gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
  • Install plugin
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Run Logstash with your plugin
bin/logstash -e 'filter {awesome {}}'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.

2.2 Run in an installed Logstash

You can use the same method as in 2.1 to run your plugin in an installed Logstash by editing its Gemfile and pointing the :path to your local plugin development directory, or you can build the gem and install it:

  • Build your plugin gem
gem build logstash-filter-awesome.gemspec
  • Install the plugin from the Logstash home
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Start Logstash and proceed to test the plugin

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

logstash-integration-rabbitmq's People

Contributors

andrewvc, baybatu, borisyukd, colinsurprenant, dedemorton, edmocosta, electrical, garywl, jakelandis, jordansissel, jsvd, kaisecheng, karenzone, kares, kurtado, magnusbaeck, mashhurs, mburger, michaelklishin, ph, robbavey, saggarsunil, suyograo, talevy, yaauie, ycombinator

logstash-integration-rabbitmq's Issues

`on_cancellation` flow can cause multiple invocations of `basic.cancel`

When a RabbitMQ channel receives a basic.cancel, it calls back interested parties via handleCancel, implemented here as on_cancellation. As part of the on_cancellation callback implemented in the plugin, a call to shutdown_consumer is made, which issues a further basic.cancel, resulting in the following error:

[2021-01-04T17:00:23,884][INFO ][logstash.inputs.rabbitmq ] Received basic.cancel from , shutting down.
E, [2021-01-04T17:00:23.911945 #23465] ERROR -- #<MarchHare::Session:2056 guest@localhost:5672, vhost=/>: Consumer org.jruby.proxy.com.rabbitmq.client.DefaultConsumer$Proxy5@2ad31a91 (amq.ctag--xxxxxxxxx) method handleCancel for channel AMQChannel(amqp://[email protected]:8282/,1)threw an exception for channel AMQChannel(amqp://[email protected]:8282/,1)
E, [2021-01-04T17:00:23.913706 #23465] ERROR -- #<MarchHare::Session:2056 guest@localhost:5672, vhost=/>: Unknown consumerTag (Java::JavaIo::IOException)
com.rabbitmq.client.impl.ChannelN.basicCancel(com/rabbitmq/client/impl/ChannelN.java:1476)

In addition, when the cancellation happens due to a basic.cancel received from the broker, the plugin waits on the @consumer.terminated? condition to stop, but that flag is only set to true when this same method returns, causing an infinite loop.
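One way to avoid the double basic.cancel is to make shutdown idempotent. This is a hypothetical sketch, not the plugin's actual code: `ConsumerShutdown` and the channel object are stand-ins for the plugin's MarchHare objects. The first caller wins, and a broker-initiated cancel marks the consumer as cancelled without sending a redundant basic.cancel back to the broker.

```ruby
# Idempotent shutdown guard (sketch). A mutex-protected flag ensures the
# cancel path runs at most once, whether triggered by `stop` or by the
# broker-initiated handleCancel (on_cancellation) callback.
class ConsumerShutdown
  def initialize(channel)
    @channel = channel
    @lock = Mutex.new
    @cancelled = false
  end

  # broker_initiated: true when we are reacting to a basic.cancel from the
  # broker; in that case the consumer is already cancelled server-side and
  # we must not send basic.cancel again.
  def shutdown_consumer(broker_initiated: false)
    @lock.synchronize do
      return if @cancelled
      @cancelled = true
      @channel.basic_cancel unless broker_initiated
    end
  end
end
```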

Implement ECS-Compatibility Mode

This is a stub issue, and needs to be fleshed out with details specific to
this plugin.


As a part of the effort to make plugins able to run in an ECS-Compatible manner
by default in an upcoming release of Logstash, this plugin needs to either
implement an ECS-Compatibility mode or certify that it does not implicitly use
fields that conflict with ECS.

RabbitMQ output plugin does not engage backpressure when queue is full

Logstash information:

  1. Logstash version:
    # bin/logstash --version
    Using bundled JDK: /usr/share/logstash/jdk
    logstash 8.12.2
    
  2. Logstash installation source: logstash:8.12.2 Docker image
  3. How is Logstash being run: as a service managed by Docker Compose
  4. How was the Logstash plugin installed: I did not install it explicitly, so I assume the default version shipped with the Docker image is used

OS version

Docker host:

Linux pop-os 6.6.10-76060610-generic #202401051437~1709085277~22.04~31d73d8 SMP PREEMPT_DYNAMIC Wed F x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:

I have configured a queue with the following arguments:

"arguments": {
    "x-queue-type": "classic",
    "x-max-length": 1000,
    "x-overflow": "reject-publish"
},

Expected behaviour:

  1. Upon reaching queue limit of 1000 enqueued messages, Logstash stops publishing
  2. Once space on the queue is freed up, Logstash resumes publishing

Observed behaviour:

  1. Logstash ignores the full queue and publishes all messages anyway, leading to data loss
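With x-overflow: reject-publish, the broker nacks publishes on a full queue, but without publisher confirms the client never sees the nack. The following is a pure-Ruby sketch of how confirm-based retry could turn that nack into backpressure; the channel here is a stub standing in for a confirms-enabled channel (in MarchHare, confirms are enabled with confirm_select and checked with wait_for_confirms, though the exact wiring into the plugin is hypothetical).

```ruby
# Publish, wait for the broker's ack/nack, and retry nacked messages
# instead of dropping them. `channel` is any object responding to
# publish and wait_for_confirms (a stand-in for a real AMQP channel).
def publish_with_backpressure(channel, message, max_attempts: 5)
  max_attempts.times do
    channel.publish(message)
    return true if channel.wait_for_confirms # broker accepted the message
    sleep 0.0                                # nacked (queue full): back off, retry
  end
  false                                      # still rejected after all attempts
end
```

The key design point is that the publisher blocks (or retries) on a nack rather than fire-and-forgetting, which is what makes reject-publish behave as backpressure instead of silent loss.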

Related materials:

Steps to reproduce:

  1. docker compose up
  2. Wait for services to start up
  3. Navigate to RabbitMQ management console (http://<container_ip>:15672/#/queues/%2F/i3logs), browse i3logs Queue stats, confirm that queue has 1000 messages in the "Ready" state
  4. Use "Get messages" section at the bottom of the queue page to manually remove some messages (Ack mode: Reject requeue false, Messages: 100)
  5. Observe that the number of "Ready" messages dropped to 900 and does NOT go back up to 1000 again

Docker compose service definition:

version: '3'
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: rabbitmq
    volumes:
      - ./rabbitmq_conf/definitions.json:/etc/rabbitmq/definitions.json
      - ./rabbitmq_conf/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 10s
      timeout: 10s
      retries: 10

  logstash:
    depends_on:
      rabbitmq:
       condition: service_healthy
    image: logstash:8.12.2
    user: root
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
      - ./logs:/var/log
    environment:
      - xpack.monitoring.enabled=false
      - LOG_LEVEL=info
rabbitmq_conf/definitions.json
{
    "rabbit_version": "3.6.15",
    "users": [
        {
            "name": "guest",
            "password_hash": "roeR8CMxbpWbDUzBwN7eQ+rdnnG6UfwICGG1smu1GdssyyQ/",
            "hashing_algorithm": "rabbit_password_hashing_sha256",
            "tags": "administrator"
        }
    ],
    "vhosts": [
        {
            "name": "/"
        }
    ],
    "permissions": [
        {
            "user": "guest",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
        }
    ],
    "parameters": [],
    "global_parameters": [],
    "policies": [],
    "queues": [
        {
            "arguments": {
                "x-queue-type": "classic",
                "x-max-length": 1000,
                "x-overflow": "reject-publish"
            },
            "auto_delete": false,
            "durable": true,
            "name": "i3logs",
            "type": "classic",
            "vhost": "/"
        }
    ],
    "exchanges": [
        {
            "arguments": {},
            "auto_delete": false,
            "durable": true,
            "name": "i3logs",
            "type": "fanout",
            "vhost": "/"
        }
    ],
    "bindings": [
        {
            "arguments": {},
            "destination": "i3logs",
            "destination_type": "queue",
            "routing_key": "logstash",
            "source": "i3logs",
            "vhost": "/"
        }
    ]
}
rabbitmq_conf/rabbitmq.conf
loopback_users.guest = false
listeners.tcp.default = 5672
management.listener.port = 15672
management.listener.ssl = false
management.load_definitions = /etc/rabbitmq/definitions.json
log.console.level = info
logstash.conf
input {
  file {
    path => "/var/log/myproduct.stdout.log"
    start_position => "beginning"
  }
}

output {
  rabbitmq {
    exchange => "i3logs"
    exchange_type => "fanout"
    host => "rabbitmq"
    port => 5672
    persistent => true
    user => "guest"
    password => "guest"
    vhost => "/"
    codec => "json"
  }
}

Provide logs (if relevant):

INFO logs
logstash-1  | 2024/03/20 06:51:51 Setting 'xpack.monitoring.enabled' from environment.
logstash-1  | 2024/03/20 06:51:51 Setting 'log.level' from environment.
logstash-1  | Using bundled JDK: /usr/share/logstash/jdk
logstash-1  | /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
logstash-1  | /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
logstash-1  | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash-1  | [2024-03-20T06:52:03,576][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
logstash-1  | [2024-03-20T06:52:03,583][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
logstash-1  | [2024-03-20T06:52:03,584][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.12.2", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-linux]"}
logstash-1  | [2024-03-20T06:52:03,585][INFO ][logstash.runner          ] JVM bootstrap flags: [-XX:+HeapDumpOnOutOfMemoryError, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, -Djruby.regexp.interruptible=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11, -Dlog4j2.isThreadContextMapInheritable=true, -Xms1g, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Djdk.io.File.enableADS=true, -Dfile.encoding=UTF-8, --add-opens=java.base/java.io=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, -Djruby.compile.invokedynamic=true, -Xmx1g, -Djava.security.egd=file:/dev/urandom, -Djava.awt.headless=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED]
logstash-1  | [2024-03-20T06:52:03,587][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
logstash-1  | [2024-03-20T06:52:03,587][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
logstash-1  | [2024-03-20T06:52:03,593][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash-1  | [2024-03-20T06:52:03,595][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash-1  | [2024-03-20T06:52:03,769][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"0dbd11ae-c44d-4c81-b50c-796472497f17", :path=>"/usr/share/logstash/data/uuid"}
logstash-1  | [2024-03-20T06:52:04,282][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
rabbitmq-1  | 2024-03-20 06:52:04.394571+00:00 [info] <0.886.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq-1  | 2024-03-20 06:52:04.394839+00:00 [info] <0.886.0> Successfully synced tables from a peer
logstash-1  | [2024-03-20T06:52:04,620][INFO ][org.reflections.Reflections] Reflections took 91 ms to scan 1 urls, producing 132 keys and 468 values
logstash-1  | [2024-03-20T06:52:04,807][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
logstash-1  | [2024-03-20T06:52:04,831][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
rabbitmq-1  | 2024-03-20 06:52:04.877477+00:00 [info] <0.894.0> accepting AMQP connection <0.894.0> (10.22.99.3:52328 -> 10.22.99.2:5672)
rabbitmq-1  | 2024-03-20 06:52:04.898726+00:00 [info] <0.894.0> connection <0.894.0> (10.22.99.3:52328 -> 10.22.99.2:5672): user 'guest' authenticated and granted access to vhost '/'
logstash-1  | [2024-03-20T06:52:04,913][INFO ][logstash.outputs.rabbitmq][main] Connected to RabbitMQ {:url=>"amqp://guest:XXXXXX@localhost:5672/"}
logstash-1  | [2024-03-20T06:52:04,945][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x61765f71 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
logstash-1  | [2024-03-20T06:52:05,472][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.53}
logstash-1  | [2024-03-20T06:52:05,480][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_08cfe5e821a4884a8b77971020dcc599", :path=>["/var/log/myproduct.log"]}
logstash-1  | [2024-03-20T06:52:05,481][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash-1  | [2024-03-20T06:52:05,486][INFO ][filewatch.observingtail  ][main][7ad47ad9b8977afed9528ba0b335f1a77be695b9c7380d30afa97c0b7c37656b] START, creating Discoverer, Watch with file and sincedb collections
logstash-1  | [2024-03-20T06:52:05,489][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

Reject message on failure

If a message received by the plugin fails at the output stage (for example, due to an indexing issue in ES), is it possible to have the message rejected so that it is dead-lettered in RabbitMQ?

What I'm currently seeing is that a message that fails to be indexed to ES is still ACKed and therefore is lost forever.
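For reference, the AMQP mechanics the question asks about look like this. This is a stubbed sketch, not current plugin behavior: the stock input acks independently of downstream outputs, so it cannot see an Elasticsearch indexing failure; basic_ack/basic_reject here only mirror the shape of the channel calls.

```ruby
# Ack on success; reject with requeue=false on failure so that the queue's
# x-dead-letter-exchange routes the message instead of it being lost.
def handle_delivery(channel, delivery_tag, payload)
  yield payload                             # downstream processing (e.g. indexing)
  channel.basic_ack(delivery_tag)
rescue StandardError
  channel.basic_reject(delivery_tag, false) # requeue=false -> dead-letter
end
```

Wiring this end-to-end would require the input to defer its ack until the pipeline has fully processed the event, which the plugin does not currently do.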

  • Version: logstash 6.8.2
  • Operating System: ubuntu 18.04
  • Config File:
input {
    rabbitmq {
      host => "blah.eu-west-2.compute.amazonaws.com"
      queue => "reading-elastic"
      durable => true
      arguments => {
          "x-dead-letter-exchange" => "reading-retry"
          "x-dead-letter-routing-key" => "elastic"
      }
   }
}
filter{
    mutate {
        add_field => {
            "deviceId" => "%{[payload][deviceId]}"
            "readingDate" => "%{[payload][readingDate]}"
            "value" => "%{[payload][value]}"
            "readingTypeId" => "%{[payload][readingTypeId]}"
            "collectionDate" => "%{[payload][collectionDate]}"
            "propertyReference" => "%{[payload][propertyReference]}"
            "propertyId" => "%{[payload][propertyId]}"
            "deviceSerialNumber" => "%{[payload][deviceSerialNumber]}"
            "manufacturerReference" => "%{[payload][manufacturerReference]}"
            "modelReference" => "%{[payload][modelReference]}"
        }

        remove_field => [ "[payload]", "[queuedAt]" ]
    }
    mutate {
        convert => { "value" => "float" }
    }
    date {
        match => [ "readingDate", "UNIX_MS" ]
    }
    date {
        match => [ "collectionDate", "UNIX_MS" ]
        target => "collectionDate"
    }
    date {
        match => [ "readingDate", "UNIX_MS" ]
        target => "readingDate"
    }
}
output {
  elasticsearch { hosts => ["https://blah.eu-west-2.es.amazonaws.com:443"]
                  index => "readings-%{+YYYY.MM.dd}"
  }
}

RabbitMQ output plugin blocks logstash to start if server is down

  • Version: 7.9.2
  • Operating System: Linux, Centos, official Elastic Helm

We run Logstash with a RabbitMQ output, and our RabbitMQ server is down. Logstash never starts properly (the HTTP API is not started); it looks like it gets stuck at plugin registration, so I suspect the connection attempt inside the register method prevents Logstash from starting. Is that possible?

If so, what about lazily connecting to the RabbitMQ server when the first event arrives?

Our use case:

We use Logstash as an HTTP service to send data to RabbitMQ. The RabbitMQ server can be down for some hours. Our infra is a Kubernetes cluster, where Logstash is scaled via HPA; we use the Logstash in-memory queue as a buffer in front of RabbitMQ, so if the RabbitMQ server is down, new Logstash pods are created and the queue is "load-balanced".

But since the RabbitMQ output plugin blocks Logstash from starting if the server is down, nothing happens.
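The lazy-connect idea, sketched as plain Ruby. This is hypothetical, not how the plugin currently works (today the connection is opened during register): deferring the connect until the first event would let Logstash start, with the in-memory queue absorbing events while the broker is down.

```ruby
# Lazily open the broker connection on first publish, exactly once.
class LazyPublisher
  def initialize(&connector)
    @connector = connector # block that opens the real connection
    @connection = nil
    @lock = Mutex.new
  end

  # Connect on first use only; the mutex keeps concurrent pipeline
  # workers from opening duplicate connections.
  def connection
    @lock.synchronize { @connection ||= @connector.call }
  end

  def publish(message)
    connection.publish(message)
  end
end
```

The trade-off is that connection errors surface on the first event rather than at startup, so retry/backoff logic would still be needed around the publish path.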

(Reference > elastic/logstash#12330)

Set dynamic values on headers in message_properties

Hello,

We are using RabbitMQ output plugins to send values to our brokers.
We want to dynamically set values on headers based on values in the message, but it seems these are not parsed: the templated text is rendered verbatim in RabbitMQ.

  • Logstash Version: 7.5.1 with plugin version 7.0.2. Please find below all available RabbitMQ plugins installed:

[root@ logstash-7.5.1]# bin/logstash-plugin list --verbose | grep rabbit
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash-integration-rabbitmq (7.0.2)
├── logstash-input-rabbitmq
└── logstash-output-rabbitmq

  • Operating System: RHEL 7.3
  • Config File (if you have sensitive info, please remove it):

output {
  file { path => "/tmp/logstash-output.log" }
  rabbitmq {
    exchange => "test_exchange"
    exchange_type => "topic"
    user => "xxxxx"
    password => "xxxxxx"
    key => "test"
    host => "xxxxxxxxxxxx:5672"
    message_properties => {
      headers => {
        user_id => "%{[id]}"
        version => "1.0"
      }
    }
  }
}

  • Sample Data:
    {"id" : "367ab74e-252e-11ea-a4b0-fa6b20acc2b4"}

  • Steps to Reproduce:
    Launch Logstash server and examine output file and RabbitMQ content.

What we expect: the user_id header is filled with the value of the id. Instead, we get the literal text %{[id]} as the value.

Could someone give some hints on how to achieve this, or clarify whether it is implemented yet?

Thanks in advance,

add parameter truststore and keyStore

Please post all product and debugging questions on our forum. Your questions will reach our wider community members there, and if we confirm that there is a bug, then we can open a new issue here.

For all general issues, please provide the following details for fast resolution:

  • Version: 6x
  • Operating System: Centos7
  • Config File (if you have sensitive info, please remove it):
    rabbitmq {
    exchange => ''
    exchange_type => 'direct'
    host => 'yyyyyy'
    password => 'xxxxx'
    user => 'prebilling'
    vhost => 'sli'
    ssl => true
    ssl_certificate_password => '123456'
    ssl_certificate_path => '/tmp/yyy.p12'
    }
    }

Hi, I have a problem:
[2019-02-15T13:49:48,045][ERROR][logstash.outputs.rabbitmq] RabbitMQ connection error, will retry. {:error_message=>"sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :exception=>"Java::JavaxNetSsl::SSLHandshakeException"}

How do I set a truststore and keystore, as with https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html?

Rabbitmq output plugin : dynamic values on headers not working

I would like to set dynamic values on headers in the properties of the message published to RabbitMQ, but the fields are not replaced with their values.

I saw the feature was added in version 7.1.0 of the RabbitMQ integration plugin (#27 (comment)), but it doesn't seem to work.

According to this discussion, it seems that the code sprintf the top-level items in message_properties (such as app_id) but does not iterate over the items nested inside them.

Version of the plugin I use :

    bash-4.2$ logstash-plugin list --verbose
    (...)
    logstash-integration-rabbitmq (7.1.1)
     ├── logstash-input-rabbitmq
     └── logstash-output-rabbitmq

My configuration :

    output {
      rabbitmq {
        id => "mwx-rabbitmq-output"
        host =>"rabbitmq"
        port => "5672"
        user => "***"
        password => "***"
        ssl => "false"
        vhost => "/"
        durable => true 
        automatic_recovery => true
        persistent => true
        exchange => "sys-logstash-out"
        exchange_type => "topic"
        key => "app-%{[properties][app-id]}-default"
        # Fill the payload with "message" field :
        codec => plain { format => "%{[message]}" }
        message_properties => {
          "content_type" => "application/json"
          "priority" => 2
          "app_id" => "%{[properties][app-id]}"
          "headers" => {
            "test" => "%{[message]}"
          }
        }
      }
    }

The resulting message in RabbitMQ :

    Exchange          sys-logstash-out
    Routing Key	      app-app-test-MWX-default
    Redelivered       ●
    Properties	
       app_id:        app-test-MWX
       priority:      2
       delivery_mode: 2
       headers:	
          test:       %{[message]}
       content_type:  application/json
    Payload           Test header 1

As you can see:

  • "%{[message]}" exists and is well replaced in payload part.
  • the "app_id" property is also OK
  • but the "test" header is not OK

Thanks.
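The single-level limitation described above (top-level message_properties values are sprintf'd, but values nested inside headers are not) could be lifted by interpolating recursively. A pure-Ruby sketch follows; deep_sprintf and the plain-hash event are hypothetical stand-ins for Logstash's event.sprintf, which is what actually resolves %{[field]} references.

```ruby
# Recursively apply %{[field]} interpolation to every string nested
# anywhere inside message_properties, not just the top-level values.
# Unresolvable references are left as-is, and non-strings pass through.
def deep_sprintf(value, event)
  case value
  when Hash   then value.transform_values { |v| deep_sprintf(v, event) }
  when Array  then value.map { |v| deep_sprintf(v, event) }
  when String then value.gsub(/%\{\[?([^\}\]]+)\]?\}/) { event.fetch($1, $&) }
  else value # numbers etc. untouched
  end
end
```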

Add more information about ssl_certificate_path for RabbitMQ output

(Somewhat related to logstash-plugins/logstash-output-rabbitmq#39)

The output plugin only supports passing in a .p12 for both trusts + any client cert/key pair. It'd be helpful to expand the docs by:

  • Mentioning that the .p12 can contain a client cert/key pair.
  • How to create a .p12 that's usable by the plugin.

Bullet two might seem a bit too out of scope, but I've encountered problems across different versions of Logstash. For example, the following worked great back with Logstash 5.4.0:

openssl pkcs12 -export -in chain.pem -inkey logstash.key -out openssl-only.p12

(where chain.pem is a concatenated file containing Logstash's public cert + the signing authority of RabbitMQ's cert).

But using the same method with Logstash 7.6.1 results in the following error:

RabbitMQ connection error, will retry. {:error_message=>"sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :exception=>"Java::JavaxNetSsl::SSLHandshakeException"}

In order to build a working .p12 on Logstash 7.6.1 (using existing PEM encoded certs/keys), I had to use both openssl + Java's keytool:

openssl pkcs12 -export -out logstash.p12 -inkey logstash.key -in logstash.crt

keytool -import -file ca.crt  -alias ca_cert -keystore logstash.p12

Add support for RabbitMQ streams

RabbitMQ 3.9 has added a new feature called streams. See here: https://www.rabbitmq.com/streams.html

As described on the above page, one of the advantages of RabbitMQ streams versus queues is throughput performance.

If Logstash added support for RabbitMQ streams both as input and output, then Logstash users who integrate with RabbitMQ, and who are able to switch from queues to streams, would get much better throughput performance.

Today we are using RabbitMQ queues as both Logstash input and output, and we are not happy with the performance.

Logstash 7.12.1 RabbitMQ Not Working With TLS 1.2 and Disable Cert Validation

I'm using Logstash 7.12.1 hosted on Docker (docker.elastic.co/logstash/logstash:7.12.1).
I'm getting an issue when using the RabbitMQ input/output: it works with TLS 1.1 but not with TLS 1.2.
Here are my logs:

[2021-05-05T14:03:43,327][WARN ][com.rabbitmq.client.TrustEverythingTrustManager][main][e232da8129404babfb53e3bde190f83c76f9dd988c85f0c7c50275ee0d994dfc] SECURITY ALERT: this trust manager trusts every certificate, effectively disabling peer verification. This is convenient for local development but offers no protection against man-in-the-middle attacks. Please see https://www.rabbitmq.com/ssl.html to learn more about peer certificate verification.
[2021-05-05T14:03:43,398][ERROR][com.rabbitmq.client.impl.SocketFrameHandler][main][e232da8129404babfb53e3bde190f83c76f9dd988c85f0c7c50275ee0d994dfc] TLS connection failed: Received fatal alert: handshake_failure
[2021-05-05T14:03:43,405][ERROR][logstash.inputs.rabbitmq ][main][e232da8129404babfb53e3bde190f83c76f9dd988c85f0c7c50275ee0d994dfc] RabbitMQ connection error, will retry. {:error_message=>"Received fatal alert: handshake_failure", :exception=>"Java::JavaxNetSsl::SSLHandshakeException"}

And Logstash pipeline:

input {

    rabbitmq {
        host => "rabbitmq"
        vhost => "/"
        queue => "my_queue"
        exchange => "my_exchange"
        exchange_type => "direct"
        heartbeat => 30
        key => "event"
        durable => true
        user => "USERNAME"
        password => "PASSWORD"
        port => 5671
        ssl => true
        ssl_version => "TLSv1.2"
    }
}

output {
      stdout { codec => rubydebug }
}

I'm using RabbitMQ 3.7.17; here is my config.
Note that there is no issue on the RabbitMQ side: I have tested RabbitMQ with a simple client, and it worked with TLS 1.2 while ignoring certificate validation.

[
  {rabbit,
   [
     {tcp_listeners, [5672]},
     {disk_free_limit, 1000000000 },
     {cluster_partition_handling, ignore },
     {default_vhost, <<"/">>},
     {default_user, <<"USERNAME">>},
     {default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
     {ssl_listeners, [5671]},
     {ssl_options, [{cacertfile,"/ca.crt"},
     {certfile,"/server.crt"},
     {keyfile,"/server.key"},
     {verify, verify_peer},
     {fail_if_no_peer_cert, false}]}
   ]
  },
  {rabbitmq_management,
   [
     {listener, [{port, 15672 }, {ip, "0.0.0.0"}]}
   ]
  }
]

Is anyone getting the same issue? Your advice is very much appreciated!

[Doc] ssl_certificate_path config could be more clear

In the doc file https://github.com/logstash-plugins/logstash-integration-rabbitmq/blob/master/docs/input-rabbitmq.asciidoc#plugins-inputs-rabbitmq-ssl_certificate_path, the definition of the config variable ssl_certificate_path states: "Path to an SSL certificate in PKCS12 (.p12) format used for verifying the remote host." On a rough reading it is not clear whether this refers to the server's certificate or the client's. It should be the client's certificate, because client and server need to do peer validation.

Multiple consumers to support rabbitmq-sharding

rabbitmq-sharding is an opinionated plugin which uses multiple "physical" queues behind one logical queue, taking advantage of more CPU cores.

This sounds like a great fit for the Logstash use case. However, to support it, the input needs to use multiple consumers.

This can be an option or an alternative input. Would there be any interest in this?

Port parameter is not working

If I change the default port, the configuration sets an array of elements instead of replacing the port.

My conf:

input {
rabbitmq {
    host => "172.20.0.21"
    port => "5673"
    queue => "collectd"
    durable => true
    key => "collectd"
    exchange => "collectd"
    threads => 3
    prefetch_count => 50
    port => 5672
    user => "guest"
    password => "guest"
    codec => "plain"
    add_field => [ "cliente", "CLIENTEB" ]
}
}

Logstash log:

[2017-10-16T09:42:57,073][ERROR][logstash.inputs.rabbitmq ] Invalid setting for rabbitmq input plugin:

input {
rabbitmq {
  # This setting must be a number
  # Expected number, got ["5673", 5672] (type Array)
  port => ["5673", 5672]
  ...
}
}
[2017-10-16T09:42:57,149][ERROR][logstash.agent           ] Cannot create pipeline {:reason=>"Something is wrong with your configuration."}

If I remove the port => "5673" part, it works perfectly. (The config above sets port twice, once as "5673" and once as 5672; Logstash merges duplicate settings into an array, which is what the error shows.)

`stop`/`shutdown_consumer` can throw on exit, leading to upstream crashes

When exiting the rabbitmq plugin, if the connection was already closed for some reason, the shutdown_consumer method called by stop can throw an exception, leading to a fatal crash of the Logstash process.

Logstash information:

Observed on logstash 7.10.1, plugin version 7.1.1

Steps to reproduce:

This behavior was observed, while Logstash was being shut down -

[2021-02-02T11:41:21,337][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Stop/pipeline_id:mgsf5, :exception=>"Java::ComRabbitmqClient::AlreadyClosedException", :message=>"connection is already closed due to connection error; cause: com.rabbitmq.client.MissedHeartbeatException: Heartbeat missing with heartbeat = 60 seconds", :backtrace=>["com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(com/rabbitmq/client/impl/AMQChannel.java:258)", "com.rabbitmq.client.impl.AMQChannel.rpc(com/rabbitmq/client/impl/AMQChannel.java:341)", "com.rabbitmq.client.impl.ChannelN.basicCancel(com/rabbitmq/client/impl/ChannelN.java:1491)", "jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "jdk.internal.reflect.NativeMethodAccessorImpl.invoke(jdk/internal/reflect/NativeMethodAccessorImpl.java:62)", "jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:566)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:426)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:293)", "org.jruby.RubyClass.finvokeWithRefinements(org/jruby/RubyClass.java:514)", "org.jruby.RubyBasicObject.send(org/jruby/RubyBasicObject.java:1755)", "org.jruby.RubyBasicObject$INVOKER$i$send.call(org/jruby/RubyBasicObject$INVOKER$i$send.gen)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.march_hare_minus_4_dot_3_dot_0_minus_java.lib.march_hare.channel.method_missing(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.3.0-java/lib/march_hare/channel.rb:888)", 
"usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java.lib.logstash.inputs.rabbitmq.shutdown_consumer(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-rabbitmq-7.1.1-java/lib/logstash/inputs/rabbitmq.rb:318)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java.lib.logstash.inputs.rabbitmq.RUBY$method$shutdown_consumer$0$__VARARGS__(usr/share/logstash/vendor/bundle/jruby/$2_dot_5_dot_0/gems/logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java/lib/logstash/inputs//usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-rabbitmq-7.1.1-java/lib/logstash/inputs/rabbitmq.rb)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java.lib.logstash.inputs.rabbitmq.stop(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-rabbitmq-7.1.1-java/lib/logstash/inputs/rabbitmq.rb:312)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java.lib.logstash.inputs.rabbitmq.RUBY$method$stop$0$__VARARGS__(usr/share/logstash/vendor/bundle/jruby/$2_dot_5_dot_0/gems/logstash_minus_integration_minus_rabbitmq_minus_7_dot_1_dot_1_minus_java/lib/logstash/inputs//usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-rabbitmq-7.1.1-java/lib/logstash/inputs/rabbitmq.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.inputs.base.do_stop(/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:103)", "usr.share.logstash.logstash_minus_core.lib.logstash.inputs.base.RUBY$method$do_stop$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash/inputs//usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb)", 
"org.jruby.RubySymbol$SymbolProcBody.yieldInner(org/jruby/RubySymbol.java:1440)", "org.jruby.RubySymbol$SymbolProcBody.doYield(org/jruby/RubySymbol.java:1456)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1809)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.stop_inputs(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:458)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$stop_inputs$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash//usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.shutdown(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:447)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$shutdown$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash//usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.stop.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/stop.rb:30)", "RUBY.terminate_pipeline(/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:173)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.stop.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/stop.rb:29)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.converge_state(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:365)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:318)", "java.lang.Thread.run(java/lang/Thread.java:834)"]}
[2021-02-02T11:41:21,432][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:"placement-betting"}
[2021-02-02T11:41:21,442][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::ComRabbitmqClient::AlreadyClosedException` for `PipelineAction::Stop<mgsf5>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:129:in `create'", "org/logstash/execution/ConvergeResultExt.java:57:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:378:in `block in converge_state'"]}
[2021-02-02T11:41:21,452][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
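A minimal sketch of the fix shape (all names here are hypothetical stand-ins, not the plugin's actual internals): rescue the "already closed" error during shutdown so a dead connection cannot crash the whole process.

```ruby
# Illustrative only: FakeConsumer stands in for the March Hare consumer.
class AlreadyClosedError < StandardError; end

class FakeConsumer
  def initialize(closed)
    @closed = closed
  end

  def cancel
    raise AlreadyClosedError, "connection is already closed" if @closed
    :cancelled
  end
end

def shutdown_consumer(consumer, log = [])
  consumer.cancel
rescue AlreadyClosedError => e
  # During shutdown, a connection that is already gone is not fatal;
  # log it and let the pipeline finish stopping cleanly.
  log << "consumer already closed: #{e.message}"
  nil
end

shutdown_consumer(FakeConsumer.new(false)) # cancels normally
shutdown_consumer(FakeConsumer.new(true))  # logs and returns nil
```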

Log connection info on start

It would be nice to log which host we have connected to, both before and after the connection is established, for auditability.

`tls_certificate_password` config is leaked in debug logs.

Issue description

When Logstash registers the plugin, its tls_certificate_password configuration is printed in the debug logs.

Expectation

Since tls_certificate_password is a sensitive setting, it should not appear in any logs except when --config.debug is explicitly enabled.
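A sketch of the usual mitigation (a hand-rolled stand-in, not Logstash's actual password type, though its `:password` config validation works along these lines): wrap the secret so accidental logging via to_s or inspect prints a mask, while .value still exposes it to code that needs it.

```ruby
# Hypothetical wrapper: masks the secret in string contexts.
class MaskedSecret
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def to_s
    "<password>"
  end
  alias inspect to_s
end

secret = MaskedSecret.new("hunter2")
"debug log line: #{secret}" # interpolation calls to_s, so only the mask leaks
secret.value                # the real secret, for code that genuinely needs it
```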

Cannot reference a field inside string

Hi,

I'm trying to reference an event field from a string inside message_properties. Instead of the field value, what I get is the literal template string; in this case: %{myfield}.

  • Version: Logstash 7.4.2
  • Operating System: Windows Server 2016 Standard (1607)
  • Config File:
input {
    tcp {
        ...
    }
}

filter {
    if [somefield] == "something" {
        ...
        split {
            field => "somefield2"
        }
        csv {
            columns => [ "c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10", "c11" ]
            separator => ","
        }
        if [c11] =~ someregex {
            ...
            mutate {
                rename => { "somefield3" => "myfield" }
            }
           ...
        } else {
            drop {}
        }

    }

}

output {
    if "_jsonparsefailure" in [tags] {
        ...
    } else {
        ...
        rabbitmq {
            ...
            message_properties => {
              "content_type" => "application/json"
              "priority" => 1
              "app_id" => "%{myfield}"
            }
            ...
        }
    }
}
  • Sample Data:
{"somefield": "something", "somefield2":"field1,field2,field3,field4,field5,field6,field7,field8,field9,field10,field11", "somefield3": "something"}

META: complete transition to this repo being source-of-truth

This repository will be the source-of-truth for rabbitmq-related logstash plugins, but there are some loose threads that need to be tied up to ensure users and contributors have a clear path to reporting and fixing bugs, requesting features, etc.

  • ensure outside contributors (e.g., those who are not members of the logstash-plugins org), if any, have equivalent permissions on this repository
  • update the original plugin repositories to point to this repository in README.md, .github/*
    • logstash-input-rabbitmq
    • logstash-output-rabbitmq
  • migrate existing issues
  • migrate in-flight pull-requests
  • absorb upstream contributions

Error when enabling SAC (single active consumer) on a queue

  • Version:
    logstash-7.7.0

  • Operating System:
    CentOS Linux release 7.1.1503

  • Config File:
    input {
      rabbitmq {
        ...
        arguments => { "x-single-active-consumer" => true }
      }
    }

  • Sample Data:

  • Steps to Reproduce:
    When Logstash started, I got the warning below; it seems the type of "x-single-active-consumer" is not being sent as a boolean,
    even though my config uses true, not "true".

[2020-09-19T16:20:32,508][WARN ][logstash.inputs.rabbitmq ][main][d57ca799b361a4e4fd873b803117f80df523979f6d5c3c183faa9e437307ff14] Error while setting up connection for rabbitmq input! Will retry. {:message=>"#method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-single-active-consumer' for queue 'xxx' in vhost '/': received 'true' but current is 'true', class-id=50, method-id=10)", :class=>"MarchHare::ChannelAlreadyClosed", :location=>"/data/server/logstash-7.7.0/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'"}
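A hypothetical illustration of the suspicion in this report: if queue arguments reach the broker as strings, an argument configured as true can be compared against the string "true" and fail the equivalence check. A coercion step of roughly this shape (not the plugin's actual code) would normalise boolean-looking strings before the queue is declared.

```ruby
# Normalise boolean-looking string values in queue arguments.
def coerce_queue_arguments(args)
  args.transform_values do |v|
    case v
    when "true"  then true
    when "false" then false
    else v
    end
  end
end

coerce_queue_arguments("x-single-active-consumer" => "true")
# => { "x-single-active-consumer" => true }
```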

Rabbitmq output plugin : dynamic values on headers not working as type is Hash not String

constant_properties = template.reject { |_,v| templated?(v) }
With this, the headers attribute of message_properties always falls into constant_properties, because its value is a Hash rather than a String, so templated? never matches it.

A solution could be to extract constant_properties[:headers], split it the same way as variable_properties, and reset it in constant_properties before freezing. Something like:

def build_headers(event)
  return unless @constant_properties[:headers]

  constant_headers = @constant_properties[:headers].reject { |_, v| templated?(v) }
  variable_headers = @constant_properties[:headers].select { |_, v| templated?(v) }

  headers = variable_headers.each_with_object(constant_headers.dup) do |(k, v), memo|
    memo.store(k, event.sprintf(v))
  end

  @constant_properties[:headers] = headers
  @constant_properties.freeze
end

@yaauie could you help us fix this issue ? Thank you in advance

When binding to exchange that doesn't exist, retry loop cannot be escaped.

When we attempt to bind to an exchange that doesn't exist, retries in LogStash::Inputs::RabbitMQ#setup! raise a MarchHare::ChannelAlreadyClosed exception when attempting to declare a queue that has already been declared; this means that despite being in an infinite retry loop, we cannot succeed at binding to the exchange after it has been externally created.


/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/channel.rb:980:in `converting_rjc_exceptions_to_ruby'
/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/channel.rb:499:in `queue_declare'
/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/queue.rb:238:in `declare!'
/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/channel.rb:477:in `block in queue'
org/jruby/RubyKernel.java:1871:in `tap'
/{REDACTED}/gems/march_hare-3.1.1-java/lib/march_hare/channel.rb:476:in `queue'
/{REDACTED}/gems/logstash-input-rabbitmq-6.0.3/lib/logstash/inputs/rabbitmq.rb:211:in `declare_queue'
/{REDACTED}/gems/logstash-input-rabbitmq-6.0.3/lib/logstash/inputs/rabbitmq.rb:207:in `declare_queue!'
/{REDACTED}/gems/logstash-input-rabbitmq-6.0.3/lib/logstash/inputs/rabbitmq.rb:184:in `setup!'
/{REDACTED}/gems/logstash-input-rabbitmq-6.0.3/lib/logstash/inputs/rabbitmq.rb:177:in `run'
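A sketch of one possible fix shape (entirely illustrative stubs, not the plugin's code): once a bind fails, the broker closes the channel, so every retry must begin with a fresh channel; reusing the closed one guarantees a ChannelAlreadyClosed on the next declare.

```ruby
class ChannelClosedError < StandardError; end

class FakeConnection
  # Fails the first `failures` binds, then succeeds -- standing in for an
  # exchange that is created externally while we are retrying.
  def initialize(failures)
    @failures = failures
  end

  def create_channel
    Object.new
  end

  def bind(_channel)
    raise ChannelClosedError if (@failures -= 1) >= 0
    :bound
  end
end

def setup_with_retry(conn)
  conn.bind(conn.create_channel) # fresh channel on every attempt
rescue ChannelClosedError
  retry
end

setup_with_retry(FakeConnection.new(2)) # succeeds on the third attempt
```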

Connection is not recovered

After the RabbitMQ connection is reset and closed, it is not recovered. The messages below are the last output on stdout; nothing has been reported since, and nothing on stderr.

E, [2021-03-13T07:22:24.998952 #1] ERROR -- #<MarchHare::Session:2046 mailerq-logstash@localhost:5672, vhost=mailerq>: An unexpected connection driver error occured
E, [2021-03-13T07:22:24.998959 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: An unexpected connection driver error occured
E, [2021-03-13T07:22:24.998952 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: An unexpected connection driver error occured
E, [2021-03-13T07:22:24.998953 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: An unexpected connection driver error occured
E, [2021-03-13T07:22:25.002854 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: Connection reset (Java::JavaNet::SocketException)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:293)
com.rabbitmq.client.impl.Frame.readFrom(Frame.java:91)
com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:184)
com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:598)
java.base/java.lang.Thread.run(Thread.java:834)
E, [2021-03-13T07:22:25.002896 #1] ERROR -- #<MarchHare::Session:2046 mailerq-logstash@localhost:5672, vhost=mailerq>: Connection reset (Java::JavaNet::SocketException)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:293)
com.rabbitmq.client.impl.Frame.readFrom(Frame.java:91)
com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:184)
com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:598)
java.base/java.lang.Thread.run(Thread.java:834)
E, [2021-03-13T07:22:25.002925 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: Connection reset (Java::JavaNet::SocketException)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:293)
com.rabbitmq.client.impl.Frame.readFrom(Frame.java:91)
com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:184)
com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:598)
java.base/java.lang.Thread.run(Thread.java:834)
E, [2021-03-13T07:22:25.002876 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: Connection reset (Java::JavaNet::SocketException)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:271)
java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:293)
com.rabbitmq.client.impl.Frame.readFrom(Frame.java:91)
com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:184)
com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:598)
java.base/java.lang.Thread.run(Thread.java:834)
E, [2021-03-13T07:22:30.158860 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering queue success
E, [2021-03-13T07:22:30.159491 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: NOT_FOUND - home node '[email protected]' of durable queue 'success' in vhost 'mailerq' is down or inaccessible (MarchHare::NotFound)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:509:in `queue_declare'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:238:in `declare!'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:252:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:272:in `block in recover_queues'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:269:in `recover_queues'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:208:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
E, [2021-03-13T07:22:30.162298 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering queue failure
E, [2021-03-13T07:22:30.162720 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: NOT_FOUND - home node '[email protected]' of durable queue 'failure' in vhost 'mailerq' is down or inaccessible (MarchHare::NotFound)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:509:in `queue_declare'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:238:in `declare!'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:252:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:272:in `block in recover_queues'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:269:in `recover_queues'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:208:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
E, [2021-03-13T07:22:30.163722 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering queue retry
E, [2021-03-13T07:22:30.164080 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: NOT_FOUND - home node '[email protected]' of durable queue 'retry' in vhost 'mailerq' is down or inaccessible (MarchHare::NotFound)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:509:in `queue_declare'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:238:in `declare!'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/queue.rb:252:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:272:in `block in recover_queues'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:269:in `recover_queues'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:208:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
E, [2021-03-13T07:22:30.196345 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering consumer amq.ctag-kZudDDsqBlqTtlsM9yvUlA
E, [2021-03-13T07:22:30.196345 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering consumer amq.ctag-qAkYqYvKMO-VmsRI671nmA
E, [2021-03-13T07:22:30.197071 #1] ERROR -- #<MarchHare::Session:2050 mailerq-logstash@localhost:5672, vhost=mailerq>: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node '[email protected]' of durable queue 'failure' in vhost 'mailerq' is down or inaccessible, class-id=50, method-id=10) (MarchHare::ChannelAlreadyClosed)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:629:in `basic_consume'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/consumers/base.rb:87:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:287:in `block in recover_consumers'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:283:in `recover_consumers'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:209:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
E, [2021-03-13T07:22:30.197010 #1] ERROR -- #<MarchHare::Session:2052 mailerq-logstash@localhost:5672, vhost=mailerq>: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node '[email protected]' of durable queue 'success' in vhost 'mailerq' is down or inaccessible, class-id=50, method-id=10) (MarchHare::ChannelAlreadyClosed)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:629:in `basic_consume'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/consumers/base.rb:87:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:287:in `block in recover_consumers'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:283:in `recover_consumers'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:209:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
E, [2021-03-13T07:22:30.197524 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: Caught exception when recovering consumer amq.ctag-ExTe9XX05ukOvJzklyHmAQ
E, [2021-03-13T07:22:30.198195 #1] ERROR -- #<MarchHare::Session:2048 mailerq-logstash@localhost:5672, vhost=mailerq>: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node '[email protected]' of durable queue 'retry' in vhost 'mailerq' is down or inaccessible, class-id=50, method-id=10) (MarchHare::ChannelAlreadyClosed)
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:996:in `converting_rjc_exceptions_to_ruby'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:629:in `basic_consume'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/consumers/base.rb:87:in `recover_from_network_failure'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:287:in `block in recover_consumers'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:283:in `recover_consumers'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/channel.rb:209:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:348:in `block in automatically_recover'
org/jruby/RubyArray.java:1814:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:346:in `automatically_recover'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/session.rb:300:in `block in add_automatic_recovery_hook'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/march_hare-4.1.1-java/lib/march_hare/shutdown_listener.rb:12:in `shutdown_completed'
[2021-03-13T07:22:30,205][WARN ][logstash.inputs.rabbitmq ][main] RabbitMQ connection was closed! {:url=>"amqp://mailerq-logstash:XXXXXX@localhost:5672mailerq", :automatic_recovery=>true, :cause=>com.rabbitmq.client.ShutdownSignalException: connection error}
[2021-03-13T07:22:30,205][WARN ][logstash.inputs.rabbitmq ][main] RabbitMQ connection was closed! {:url=>"amqp://mailerq-logstash:XXXXXX@localhost:5672mailerq", :automatic_recovery=>true, :cause=>com.rabbitmq.client.ShutdownSignalException: connection error}
[2021-03-13T07:22:30,205][WARN ][logstash.inputs.rabbitmq ][main] RabbitMQ connection was closed! {:url=>"amqp://mailerq-logstash:XXXXXX@localhost:5672mailerq", :automatic_recovery=>true, :cause=>com.rabbitmq.client.ShutdownSignalException: connection error}
[2021-03-13T07:22:30,205][WARN ][logstash.inputs.rabbitmq ][main] RabbitMQ connection was closed! {:url=>"amqp://mailerq-logstash:XXXXXX@localhost:5672mailerq", :automatic_recovery=>true, :cause=>com.rabbitmq.client.ShutdownSignalException: connection error}

new attribute 'rabbitmq_payload' in @metadata

  • Version: 6.03
  • Operating System: Windows NT
  • Config File (if you have sensitive info, please remove it):
  • Sample Data:
  • Steps to Reproduce:

It would be useful to add a new attribute to @metadata carrying the raw payload of the message. This is important because it is not always possible to identify the attributes of the payload, and sometimes the raw data is needed.
I implemented this locally to solve a situation that occurred here, and I believe it will be useful to many.
(Apologies: machine translated.)

attributes:

    if @metadata_enabled
      event.set("[@metadata][rabbitmq_headers]", get_headers(metadata))
      event.set("[@metadata][rabbitmq_properties]", get_properties(metadata))
      event.set("[@metadata][rabbitmq_payload]", get_payload(data))
    end

method:

    def get_payload(data)
      data || {}
    end

Single fixed routing key value prevents some RabbitMQ plugins from doing their job

This output can only publish using a single pre-configured routing key value; some RabbitMQ plugins highly relevant for the users of this output, e.g. rabbitmq_sharding and rabbitmq_consistent_hash_exchange which allow for workload parallelization beyond a single queue, assume that routing keys are used for message stream "sharding" (distribution).
If all routing keys are identical, all messages will end up on a single shard.

It would be very nice to have a way to tell the plugin to use per-message values instead of a hardcoded routing key, possibly produced by a user-provided Proc (lambda, anonymous function, etc.).
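A sketch of the requested knob (names invented for illustration): let the routing key be produced per event, e.g. from an event field with a random fallback, so a consistent-hash or sharding exchange can actually distribute the stream across queues.

```ruby
require "securerandom"

# Per-event routing key: use the event's shard_key if present,
# otherwise fall back to a random 8-character hex string.
routing_key_for = lambda do |event|
  event["shard_key"] || SecureRandom.hex(4)
end

routing_key_for.call("shard_key" => "user-42") # => "user-42"
routing_key_for.call({}).length                # => 8 (random fallback)
```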

Queue documentation is inaccurate

Summary

The queue section in the Rabbitmq input plugin v7.3.0
documentation is inaccurate as it contains seemingly unrelated information.

A corrected version should be:

The name of the queue Logstash will consume events from. If left empty, a transient queue with a randomly chosen name will be created.

I tried going back in the git history to see when this unrelated information was added, but it seems like it has always been this way since the beginning.

If this is not the right place to report documentation-related errors on the Elastic website, let me know.
I will also work on creating a PR for this change. I assume once the PR is merged, the changes will be reflected on the Elastic website.

RabbitMQ messages limited to 64MiB

Attempting to receive messages larger than 64MiB from a RabbitMQ queue currently fails with the following error:

[2024-04-28T13:11:12,192][ERROR][com.rabbitmq.client.impl.ForgivingExceptionHandler][err][rabbitmq] An unexpected connection driver error occurred
java.lang.IllegalStateException: Message body is too large (103456140), maximum configured size is 67108864. See ConnectionFactory#setMaxInboundMessageBodySize if you need to increase the limit.
at com.rabbitmq.client.impl.CommandAssembler.consumeHeaderFrame(CommandAssembler.java:109) ~[rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.CommandAssembler.handleFrame(CommandAssembler.java:172) ~[rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.AMQCommand.handleFrame(AMQCommand.java:108) ~[rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:123) ~[rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:761) [rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.AMQConnection.access$400(AMQConnection.java:48) [rabbitmq-client.jar:5.20.0]
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:688) [rabbitmq-client.jar:5.20.0]
at java.lang.Thread.run(Thread.java:829) [?:?]

This error effectively blocks the queue from being processed by Logstash, with the oversized message acting as a "poison pill".
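The numbers in the error line are consistent: the configured maximum is the Java client's default of exactly 64 MiB, and the rejected body is well above it, so every redelivery attempt will fail the same way.

```ruby
default_limit = 64 * 1024 * 1024 # => 67_108_864, matching the error text
message_size  = 103_456_140      # the rejected body size from the log above
message_size > default_limit     # => true, so the frame is always rejected
```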

Support sending to priority queues where priority is taken from the event

RabbitMQ supports Priority Queues.

To send a message with a priority, there must be a message property named priority with an integer value. Unfortunately, it seems impossible to pull this priority from the Logstash event.

Logstash Config Errors

message_properties => {"priority" => [priority]}
org.jruby.exceptions.TypeError: (TypeError) cannot convert instance of class org.jruby.specialized.RubyArrayOneObject to class java.lang.Integer

message_properties => {"priority" => "%{[priority]}" }
org.jruby.exceptions.TypeError: (TypeError) cannot convert instance of class org.jruby.RubyString to class java.lang.Integer

Fuller stack traces are available here:
https://discuss.elastic.co/t/error-sending-to-rabbitmq-with-priority-from-event-cannot-convert-instance-of-class-org-jruby-specialized-rubyarrayoneobject-to-class-java-lang-integer/199965/3

The reason for pulling the priority from the event is so that it can be calculated based on message contents.
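The second error above is explained by Logstash's sprintf templating: %{[priority]} always interpolates to a Ruby String, while the Java properties builder expects an Integer. A minimal stand-in sketch (plain Ruby, no Logstash; the event hash and the regex are simplifications of the real field-reference machinery) shows the type mismatch and the coercion a plugin-side fix would need:

```ruby
# Stand-in for Logstash sprintf templating: interpolating a field always
# yields a String, even when the underlying event value is an Integer.
event = { "priority" => 7 }
templated = "%{[priority]}".gsub(/%\{\[(\w+)\]\}/) { event[$1].to_s }

puts templated.class     # String -- what the Java side rejects
puts Integer(templated)  # 7 -- the explicit coercion a fix would need
```

In other words, supporting per-event priorities likely requires the plugin to coerce the interpolated value to an Integer before building the AMQP properties, rather than passing the templated String through.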

Use a shared ExecutorService for all AMQP connections / plugin instances

Currently, for each AMQP connection, an ExecutorService is created with a number of threads equal to twice the number of CPUs/cores. These threads seem to sit permanently in a sleeping/blocking state (checked with VisualVM).
On a system with many cores and many AMQP connections this can waste a large amount of memory and sometimes slow down the whole pipeline.

Example on my environment:

  • Logstash version : 5.1.1 (but the behavior is the same with the old versions)
  • RabbitMQ : 3.6.5
  • OS : RHEL 6
  • System : 2 sockets / 6 cores per CPU / hyperthreading (=24 threads)
  • 40 queues from each of 4 RabbitMQ instances (160 queues total)

Total sleeping threads: 7680 (160 connections × 24 cores × 2), for about 2 GB of thread stacks.

If we look at the code, the RabbitMQ client library creates a FixedThreadPool with 2 × cores threads in the constructor of com.rabbitmq.client.impl.ConsumerWorkService if no pool is provided:

private static final int DEFAULT_NUM_THREADS = Runtime.getRuntime().availableProcessors() * 2;

this.executor = (executor == null ? Executors.newFixedThreadPool(DEFAULT_NUM_THREADS, threadFactory) : executor);

Ideally, the March Hare library (which wraps the RabbitMQ client) should supply this pool through the connection factory (com.rabbitmq.client.ConnectionFactory).

Here is a quick-and-dirty workaround that creates a pool with only one thread per connection by patching March Hare (/usr/share/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.20.0-java/lib/march_hare/session.rb). Add this line in self.connect:

cf.setSharedExecutor(Executors.newSingleThreadExecutor)

and this import at the top of the file:

java_import java.util.concurrent.Executors

On my setup:

  • before: 7739 threads for a single Logstash process
  • after: 2130 threads

We can even use a single thread for all the connections without any loss of throughput (around 4000 events/s with filters). In that case the thread is busy but not heavily loaded; it appears to handle consumer delivery inside the RabbitMQ client library.
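The thread counts above follow directly from the pool sizing (plain arithmetic, no JVM needed). The memory figure is also roughly consistent if one assumes about 256 KiB actually committed per mostly-idle thread (the reserved JVM stack is typically larger, but idle threads commit only part of it):

```ruby
connections   = 160         # 40 queues from each of 4 RabbitMQ instances
cores         = 24          # 2 sockets x 6 cores x hyperthreading
per_conn_pool = 2 * cores   # DEFAULT_NUM_THREADS in ConsumerWorkService

puts connections * per_conn_pool   # 7680 idle consumer threads
puts connections * 1               # 160 with the single-thread patch

# Assumed ~256 KiB committed per idle thread stack:
stack_kib = 256
puts (connections * per_conn_pool * stack_kib) / (1024 * 1024.0)
# 1.875 (GiB), consistent with the ~2 GB observed
```

This is why a shared executor pays off so quickly: the per-connection pool size scales with core count, while the actual work per connection stays near zero.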
