logstash-output-jdbc's Introduction

logstash-output-jdbc

This plugin is provided as an external plugin and is not part of the Logstash project.

This plugin allows you to output to SQL databases, using JDBC adapters. See below for tested adapters, and example configurations.

This has not yet been extensively tested with all JDBC drivers and may not yet work for you.

If you do find this works for a JDBC driver without an example, let me know and provide a small example configuration if you can.

This plugin does not bundle any JDBC jar files, and expects them to be in a particular location. Please ensure you read the installation notes below.

Support & release schedule

I no longer have time at work to maintain this plugin in step with Logstash's releases, and I am not completely immersed in the Logstash ecosystem. If something is broken for you I will do my best to help, but I cannot guarantee timeframes.

Pull requests are always welcome.

If you want to act as an additional maintainer, please reach out to discuss.

Changelog

See CHANGELOG.md

Versions

Released versions are available via rubygems, and typically tagged.

For development:

  • See master branch for logstash v5 & v6 ⚠️ This is untested under Logstash 6.3 at this time, and there has been 1 unverified report of an issue. Please use at your own risk until I can find the time to evaluate and test 6.3.
  • See v2.x branch for logstash v2
  • See v1.5 branch for logstash v1.5
  • See v1.4 branch for logstash 1.4

Installation

  • Run bin/logstash-plugin install logstash-output-jdbc in your logstash installation directory
  • Now either:
    • Use driver_jar_path in your configuration to specify a path to your jar file
  • Or:
    • Create the directory vendor/jar/jdbc in your logstash installation (mkdir -p vendor/jar/jdbc/)
    • Add JDBC jar files to vendor/jar/jdbc in your logstash installation
  • And then configure (examples can be found in the examples directory)
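
A minimal sketch tying these steps together, using the driver_jar_path approach (the SQLite jar path, database file, table, and columns are illustrative, not prescribed by this README):

output {
    jdbc {
        driver_jar_path => "/path/to/sqlite-jdbc-3.8.11.2.jar"
        connection_string => "jdbc:sqlite:/tmp/example.db"
        # "host" and "message" are event properties, bound to the placeholders in order
        statement => [ "INSERT INTO log (host, message) VALUES(?, ?)", "host", "message" ]
    }
}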

Configuration options

| Option | Type | Description | Required? | Default |
| --- | --- | --- | --- | --- |
| driver_class | String | Specify a driver class if autoloading fails | No | |
| driver_auto_commit | Boolean | If the driver does not support auto commit, you should set this to false | No | True |
| driver_jar_path | String | File path to the jar file containing your JDBC driver. This is optional, and all JDBC jars may be placed in $LOGSTASH_HOME/vendor/jar/jdbc instead. | No | |
| connection_string | String | JDBC connection URL | Yes | |
| connection_test | Boolean | Run a JDBC connection test. Some drivers do not function correctly, and you may need to disable the connection test to suppress an error. Cockroach with the postgres JDBC driver is one such example. | No | Yes |
| connection_test_query | String | Connection test and init query string, required for some JDBC drivers that don't support isValid(). Typically you'd set this to "SELECT 1". | No | |
| username | String | JDBC username - this is optional as it may be included in the connection string for many drivers | No | |
| password | String | JDBC password - this is optional as it may be included in the connection string for many drivers | No | |
| statement | Array | An array of strings representing the SQL statement to run. Index 0 is the SQL statement that is prepared; all other array entries are passed in as parameters (in order). A parameter may either be a property of the event (i.e. "@timestamp", or "host") or a formatted string (i.e. "%{host} - %{message}" or "%{message}"). If a key is passed then it will be automatically converted as required for insertion into SQL. If it's a formatted string then it will be passed in verbatim. | Yes | |
| unsafe_statement | Boolean | If yes, the statement is evaluated for event fields - this allows you to use dynamic table names, etc. This is highly dangerous and you should not use this unless you are 100% sure that the field(s) you are passing in are 100% safe. Failure to do so will result in possible SQL injections. Example statement: [ "insert into %{table_name_field} (column) values(?)", "fieldname" ] | No | False |
| max_pool_size | Number | Maximum number of connections to open to the SQL server at any one time | No | 5 |
| connection_timeout | Number | Number of milliseconds before a SQL connection is closed | No | 10000 |
| flush_size | Number | Maximum number of entries to buffer before sending to SQL, if this is reached before idle_flush_time | No | 1000 |
| max_flush_exceptions | Number | Number of sequential flushes which cause an exception before the set of events is discarded. Set to a value less than 1 if you never want it to stop. This should be carefully configured with respect to retry_initial_interval and retry_max_interval if your SQL server is not highly available. | No | 10 |
| retry_initial_interval | Number | Number of seconds before the initial retry in the event of a failure. On each failure it will be doubled until it reaches retry_max_interval. | No | 2 |
| retry_max_interval | Number | Maximum number of seconds between each retry | No | 128 |
| retry_sql_states | Array of strings | An array of custom SQL state codes you wish to retry until max_flush_exceptions. Useful if you're using a JDBC driver which returns retryable, but non-standard, SQL state codes in its exceptions. | No | [] |
| event_as_json_keyword | String | The magic keyword that the plugin looks for to convert the entire event into a JSON object. As Logstash does not support this out of the box with its sprintf implementation, you can use whatever this field is set to in the statement parameters. | No | @event |
| enable_event_as_json_keyword | Boolean | Enables the magic keyword set in the configuration option event_as_json_keyword. Without this enabled the plugin will not convert the event_as_json_keyword into a JSON encoding of the entire event. | No | False |
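
As an illustration of the last two options, a hedged sketch that stores each whole event as JSON alongside its timestamp (the Postgres connection string, table, and columns are illustrative):

output {
    jdbc {
        connection_string => "jdbc:postgresql://localhost:5432/logs?user=logstash&password=changeme"
        enable_event_as_json_keyword => true
        # "@event" (the default event_as_json_keyword) expands to the whole event as JSON
        statement => [ "INSERT INTO events (received_at, payload) VALUES(?, ?)", "@timestamp", "@event" ]
    }
}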

Example configurations

Example logstash configurations can now be found in the examples directory. Where possible we try to link every configuration with a tested jar.

If you have a working sample configuration for a DB that's not listed, pull requests are welcome.

Development and Running tests

For development, tests are best run inside a virtual machine (a Vagrantfile is included in the repo), as they require access to various database engines and could completely destroy any data in a live system.

If you have vagrant available (this is temporary whilst I'm hacking on v5 support; I'll make this more streamlined later):

  • vagrant up
  • vagrant ssh
  • cd /vagrant
  • gem install bundler
  • cd /vagrant && bundle install && bundle exec rake vendor && bundle exec rake install_jars
  • ./scripts/travis-before_script.sh && source ./scripts/travis-variables.sh
  • bundle exec rspec

Releasing

  • Update Changelog
  • Bump version in gemspec
  • Commit
  • Create tag git tag v<version-number-in-gemspec>
  • bundle exec rake install_jars
  • bundle exec rake pre_release_checks
  • gem build logstash-output-jdbc.gemspec
  • gem push


logstash-output-jdbc's Issues

statement sql

jdbc {
            driver_jar_path => "E:/www/analyze/doc/mysql-connector-java-5.1.36-bin.jar"
            driver_class => "com.mysql.jdbc.Driver"
            connection_string => "jdbc:mysql://127.0.0.1/demo?user=root&password=root"      
            statement => [ "INSERT INTO `user_events_log_2016_08`(user_id,event,time,name,package,version,created_at,updated_at) values(?,?,?,?,?,?,?,?)","%{[@fields][ctxt_user_id]}","%{[@fields][ctxt_event]}","%{[@fields][ctxt_time]}","%{[@fields][ctxt_name]}","%{[@fields][ctxt_package]}","%{[@fields][ctxt_version]}","%{[@fields][ctxt_created_at]}","%{[@fields][ctxt_updated_at]}" ]
        }

The table name needs to change each day.

Example: today is 2016-08-24, so the table name needs to change to

INSERT INTO user_events_log_2016_08_24(user_id,event,time,name,package,version,created_at,updated_at) values(?,?,?,?,?,?,?,?)

How can I do this?

Thank you for your help!
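
A hedged sketch of one way to do this with the unsafe_statement option documented above: with it enabled, the whole statement is sprintf-evaluated, so a field holding the dated table name can be interpolated. Note the SQL-injection warning attached to that option; the ruby filter deriving table_name is hypothetical, and the statement is abbreviated to two columns:

filter {
    ruby {
        # Hypothetical: derive the dated table name from the event timestamp
        code => "event['table_name'] = 'user_events_log_' + event['@timestamp'].time.strftime('%Y_%m_%d')"
    }
}
output {
    jdbc {
        unsafe_statement => true
        statement => [ "INSERT INTO %{table_name} (user_id, event) VALUES(?,?)", "%{[@fields][ctxt_user_id]}", "%{[@fields][ctxt_event]}" ]
    }
}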

Investigate jar-dependencies

  • Was/Is used by Logstash. Some issues suggest it was removed, however it does appear to be in Logstash somewhere
  • Can reduce the size of the gem if setup to download at install time
    • Would require internet access to install gem (at least 1 user of this gem would be affected by this)
  • Jars could stay vendored, but managed by jar-dependencies
  • Would make keeping dependent jars up to date slightly easier

Improving repeat exception handling

Before releasing 0.2.0 proper I want better exception handling. The current max_repeat_* is somewhat poor. I also want to make it optional.

Work in progress.

Add tests

At minimum add tests for sqlite, and possibly other JDBC drivers.

Backport v5 retry logic to v2

The retry logic in v5 (master) is arguably better than the current v2 implementation, which can get stuck with badly crafted SQL.

RangeError: too big for int

Hi,

I'm using version 2.x of the plugin and I'm trying to load in the value 4000881632477184, which is 16 digits long. I am using a MySQL DB and have defined the column as BIGINT, so it should hold the field. The plugin returns the output below:
JDBC - Exception. Not retrying. {:exception=>RangeError: too big for int: 4000881632477184

Is the plugin limited to a 10-digit int field?

I'm running logstash 2.1.1

Thanks

How to connect to cockroachDB using logstash jdbc output plugin?

I am using logstash to create a pipeline from Postgres to CockroachDB. Below is the config.

input {
    jdbc {
        jdbc_connection_string => "jdbc:postgresql://<host_ip>:5432/database"
        jdbc_user => "user"
        jdbc_password => "password"
        jdbc_driver_library => "/path/to/postgresql-9.4-1209.jdbc41.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        statement => "SELECT * FROM source_table"
    }
}
output {
    jdbc {
        connection_string => "jdbc:postgresql://<host_ip>:26257/database"
        driver_jar_path => "/path/to/postgresql-9.4-1209.jdbc41.jar"
        driver_class => "org.postgresql.Driver"
        driver_auto_commit => false
        statement => ["INSERT INTO dest_table(col1,col2) VALUES (?,?)","col1","col2"]
    }
}

The input plugin (source is Postgres) is working fine, but I am unable to establish a connection in the output plugin (CockroachDB) using JDBC. I am facing the below error.

JDBC - Connection is not valid. Please check connection string or that your JDBC endpoint is available. {:level=>:error, :file=>"logstash/outputs/jdbc.rb", :line=>"154", :method=>"setup_and_test_pool!"}

Destination(cockroachDB) is open for connection at the specified ip and port.

Nmap scan report for <host>
Host is up (0.000070s latency).
PORT      STATE SERVICE
26257/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds

As the CockroachDB JDBC connection string is very similar to Postgres, I tried the connection strings below, and still got the same error.

jdbc:postgresql://host/database
jdbc:postgresql://host/database?sslmode=disable
jdbc:postgresql://host:port/database
jdbc:postgresql://host:port/database?sslmode=disable
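
Per the connection_test note in the configuration options above, CockroachDB with the postgres JDBC driver is a known case where the connection test must be disabled. A hedged sketch of the output block with the test turned off, otherwise unchanged from the config above:

output {
    jdbc {
        connection_string => "jdbc:postgresql://<host_ip>:26257/database?sslmode=disable"
        driver_jar_path => "/path/to/postgresql-9.4-1209.jdbc41.jar"
        driver_class => "org.postgresql.Driver"
        driver_auto_commit => false
        # Skip the isValid() connection test, which fails on some driver/database combinations
        connection_test => false
        statement => ["INSERT INTO dest_table(col1,col2) VALUES (?,?)","col1","col2"]
    }
}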

Insert into multiple tables?

Pretty simple question, however I'm stuck (maybe because it's the end of the day):

Basically, can I have multiple statements, or multiple "INSERT INTO"s in one statement? It doesn't seem to be working for me right now. Below is my configuration (keep in mind that I have it working inserting into one table):

connection_string => 'jdbc:sqlite:myexample.db'
statement => [ "INSERT INTO req(filename, cval, querystring, datestring, ref_hash, ua_hash) VALUES(?,?,?,?,?,?); INSERT INTO reflookup(refhash,referrer) VALUES(?,?); INSERT INTO ualookup(uahash,useragent) VALUES(?,?);", "%{name}", "%{cval}", "%{querystring}", "%{datestring}", "%{refhash}", "%{uahash}", "%{refhash}", "%{ref}", "%{uahash}", "%{useragent}"]

I apologize if I missed something in the README or something similar.
BTW, great plugin; it's been pretty simple and useful.

How to get the error message in the backtrace?

Hi,

Now I have to dig into the source code of Hikari to see what kind of error it is.

Any ideas? Thank you!

The backtrace I got looks like this (it only says that it is a "Java::JavaLang::RuntimeException"):

Failed to flush outgoing items {:outgoing_count=>10, :exception=>"Java::JavaLang::RuntimeException", :backtrace=>["com.zaxxer.hikari.util.DriverDataSource.<init>(com/zaxxer/hikari/util/DriverDataSource.java:88)", "com.zaxxer.hikari.pool.PoolBase.initializeDataSource(com/zaxxer/hikari/pool/PoolBase.java:296)", "com.zaxxer.hikari.pool.PoolBase.<init>(com/zaxxer/hikari/pool/PoolBase.java:84)", "com.zaxxer.hikari.pool.HikariPool.<init>(com/zaxxer/hikari/pool/HikariPool.java:103)", "com.zaxxer.hikari.HikariDataSource.getConnection(com/zaxxer/hikari/HikariDataSource.java:94)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:497)", "RUBY.safe_flush(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:168)", "RUBY.flush(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:116)", "RUBY.buffer_flush(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1342)", "RUBY.buffer_flush(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216)", "RUBY.buffer_receive(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159)", "RUBY.receive(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:109)", "RUBY.handle(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/outputs/base.rb:81)", "RUBY.output_func((eval):27)", "RUBY.outputworker(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:277)", "RUBY.start_outputs(/Users/yichao/odps/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:194)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

Unable to insert timestamp into SqlServer

I'm trying to output the event timestamp to my SqlServer 2012 instance but keep getting a conversion exception from the JDBC driver:

JDBC Exception {:exception=>java.sql.BatchUpdateException: Conversion failed when converting date and/or time from character string., :level=>:error, :file=>"logstash/outputs/jdbc.rb", :line=>"90"}

Here's my config section:

jdbc {
    driver_class => 'com.microsoft.sqlserver.jdbc.SQLServerDriver'
    connection_string => ...
    statement => [ "INSERT INTO logs(host, message, timestamp ) VALUES(?, ?, ?)", "%{host}", "%{message}", "%{@timestamp}" ]
}

Is this supported?
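
An untested workaround to consider: @timestamp renders as an ISO8601 string (see the Timestamp issue below), so SQL Server can be told to parse it explicitly. CONVERT style 127 matches ISO8601 with a trailing Z; everything else is unchanged from the config above:

    statement => [ "INSERT INTO logs(host, message, timestamp) VALUES(?, ?, CONVERT(datetime2, ?, 127))", "%{host}", "%{message}", "%{@timestamp}" ]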

MariaDB Connection issues

Wondering if there is a way to test the connection string to the database. Is that possible?

Using MariaDB with the jar from your example in vendor/jar/jdbc, and basically the same connection string. I am getting this output in my logstash.log:

{:timestamp=>"2016-01-06T16:44:06.699000-0500", :message=>"The error reported is: \n Exception during pool initialization: HikariPool-0 - Connection is not available, request timed out after 2800ms."}

Kind of stuck on this; any help would be appreciated.

MSSQL, Infinite loop on "String or binary data would be truncated"

Hello, an error occurred yesterday when the plugin was unsuccessfully trying to send more than the maximum supported characters to a MS SQL column of type varchar(MAX).

In this case you can get stuck in an infinite retry loop with "String or binary data would be truncated", and the logstash pipeline will be stuck in this loop.

Is there a way to force the plugin to "not halt and not retry" when we try to insert any data which is more than the size of the column?
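
The max_flush_exceptions option documented above is the relevant knob: a hedged sketch that discards the batch after five consecutive failed flushes instead of retrying forever (the connection string and statement are illustrative, and whether a given SQL state is retried also depends on the plugin version):

output {
    jdbc {
        connection_string => "jdbc:sqlserver://localhost:1433;databaseName=logs"
        # Give up on a batch after 5 consecutive flush failures instead of retrying forever
        max_flush_exceptions => 5
        statement => [ "INSERT INTO logs(message) VALUES(?)", "%{message}" ]
    }
}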

Exception infinite loop (100% CPU load)

Exception handling is busted, causing an infinite loop and 100% CPU usage: the logging loop calls getNextException() on the original exception every iteration, so any SQLException with a chained exception never terminates. Here's a patch to fix it:

--- logstash-output-jdbc-0.2.5/lib/logstash/outputs/jdbc.rb~    2016-05-02 11:06:57.000000000 -0400
+++ logstash-output-jdbc-0.2.5/lib/logstash/outputs/jdbc.rb     2016-05-02 11:57:22.547306200 -0400
@@ -282,7 +282,7 @@
     ce = e
     loop do
       @logger.error("JDBC Exception encountered: Will automatically retry.", :exception => ce)
-      ce = e.getNextException()
+      ce = ce.getNextException()
       break if ce == nil
     end
   end

Exception when insert

I have a table whose fields are id, name, and type.
When I insert a row, the program throws an exception.
You can try to reproduce it.

mysql5.7 Connection issues

My Environment

  • Version of plugin used: 0.2.10
  • Version of Logstash used: 2.2.1
  • Version of Mysql used: 5.7.13
  • Database engine & version you're connecting to:

com.mysql.jdbc.Driver & mysql-connector-java-5.1.39

  • Have you checked you've met the Logstash requirements for Java versions?:

Yes, java version "1.8.0_45"

Using MySQL with the jar from your example in vendor/jar/jdbc, and basically the same connection string.

output {
    jdbc {
        driver_class => "com.mysql.jdbc.Driver"
        connection_string => "jdbc:mysql://172.16.66.184:3306/jsontest?user=root&password=Cmcc_123&useSSL=false"
        #connection_timeout => 5000
        statement => [ "replace into user (id,name, lastlogininfo) values (10,?,?)", "%{name}", "%{lastlogininfo}" ]
    }
}

I am getting this output in my logstash.log:

The error reported is:
Exception during pool initialization: HikariPool-30 - Connection is not available, request timed out after 10000ms.

With the same connection string in the configuration, connecting to MySQL 5.6 works fine, but connecting to MySQL 5.7 errors with the log information above.

And I have tried the JDBC connection with a simple Java class instead:

HikariDataSource ds = new HikariDataSource();
ds.setJdbcUrl("jdbc:mysql://172.16.66.184:3306/jsontest?useSSL=false");
ds.setUsername("root");
ds.setPassword("Cmcc_123");
ds.setDriverClassName(com.mysql.jdbc.Driver.class.getName());
ds.setAutoCommit(false);

That works fine too.

Kind of stuck on this; any help would be appreciated.

Losing Events when Server is down

Hi,
I have faced a big problem:
If the JDBC server is down then I lose all the logstash events until the server is up and running again. (FYI: I am using this nice plugin in combination with the Crate JDBC connector.)

What can be done about that?
Is it possible to configure or introduce something like a retry policy?
It would be great if this plugin were as robust as the elasticsearch output plugin.
Currently this is the best and fastest plugin for writing data from Logstash into the Crate Elasticsearch DB; the http output plugin is slow and has no retry policy at all!

Thanks for writing this plugin, and hopefully you can give me a hint or even a solution!
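
The retry options in the configuration table are the relevant knobs here, assuming a plugin version that supports them. A hedged sketch that never discards a batch (max_flush_exceptions below 1 means never stop), with the delay doubling from 2 seconds up to a 128-second cap; the Crate connection details and statement are illustrative:

output {
    jdbc {
        driver_jar_path => "/path/to/crate-jdbc-standalone.jar"
        connection_string => "crate://localhost:4300"
        # A value less than 1 means never discard the batch; keep retrying with backoff
        max_flush_exceptions => 0
        retry_initial_interval => 2
        retry_max_interval => 128
        statement => [ "INSERT INTO logs (message) VALUES(?)", "%{message}" ]
    }
}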

Logstash v5 Support

Logstash v5 will be out Soon(TM).

Known, possibly incomplete things to alter

  • logstash-core-plugin-api dependency - elastic/logstash#4829
  • Stud::Buffer replacement with multi_receive
  • Exception handling - how many exceptions are really actually re-tryable?
  • Tests
    • Max exception retries
    • Unretry-able exceptions
    • Connection-loss handling
    • Fatal exceptions
    • Add additional content verification (not just number of rows with correct message)
  • HikariCP logging - may need to be changed in respect to however elastic/logstash#4548 finishes up

Use with Apache Phoenix

I would like to output events from logstash to HBase, via Apache Phoenix:

https://phoenix.apache.org

My output configuration:

jdbc
{
        connection_string => 'jdbc:phoenix://hbase'
        statement => [ "upsert into test values (2,'World!')"]
}

I can see the JAR is loaded because of the duplicate SLF4J binding warnings:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/vendor/jar-dependencies/runtime-jars/slf4j-nop-1.7.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/logstash/vendor/jar/jdbc/phoenix-4.6.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]

Error:

{
  :outgoing_count=>1,
  :exception=>"Java::JavaLang::RuntimeException",
  :backtrace=>[
    "com.zaxxer.hikari.util.DriverDataSource.<init>(com/zaxxer/hikari/util/DriverDataSource.java:88)",
    "com.zaxxer.hikari.pool.PoolBase.initializeDataSource(com/zaxxer/hikari/pool/PoolBase.java:296)",
    "com.zaxxer.hikari.pool.PoolBase.<init>(com/zaxxer/hikari/pool/PoolBase.java:84)",
    "com.zaxxer.hikari.pool.HikariPool.<init>(com/zaxxer/hikari/pool/HikariPool.java:103)",
    "com.zaxxer.hikari.HikariDataSource.getConnection(com/zaxxer/hikari/HikariDataSource.java:94)",
    "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:497)",
    "LogStash::Outputs::Jdbc.safe_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:168)",
    "LogStash::Outputs::Jdbc.safe_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:168)",
    "LogStash::Outputs::Jdbc.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:116)",
    "LogStash::Outputs::Jdbc.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:116)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219)",
    "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1342)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:193)",
    "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:193)",
    "RUBY.buffer_receive(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159)",
    "RUBY.receive(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.1/lib/logstash/outputs/jdbc.rb:109)",
    "RUBY.handle(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.0-java/lib/logstash/outputs/base.rb:81)",
    "RUBY.output_func((eval):323)",
    "RUBY.outputworker(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.0-java/lib/logstash/pipeline.rb:276)",
    "RUBY.start_outputs(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.0-java/lib/logstash/pipeline.rb:193)",
    "java.lang.Thread.run(java/lang/Thread.java:745)"
  ],
  :level=>:warn
}

Any idea? Thanks,

Failed to flush outgoing items

Expected & Actual Behavior

I've been seeing this error pop up in my logstash logs ever since I updated the plugin to 0.2.7.
Usually everything works fine, and then after a while I get these errors repeatedly in the logs, and my outputs (jdbc and elasticsearch) stop getting new logs.
Restarting logstash usually fixes the problem, but it keeps recurring.

Using mysql-connector-java-5.1.38-bin.jar with logstash-output-jdbc-0.2.7
Logstash version 2.1.1

{:timestamp=>"2016-06-28T16:45:11.285000-0400", :message=>"Failed to flush outgoing items", :outgoing_count=>307, :exception=>"NameError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.7/lib/logstash/outputs/jdbc.rb:337:in retry_exception?'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.7/lib/logstash/outputs/jdbc.rb:246:insafe_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.7/lib/logstash/outputs/jdbc.rb:243:in safe_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.7/lib/logstash/outputs/jdbc.rb:230:insafe_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.7/lib/logstash/outputs/jdbc.rb:143:in flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:inbuffer_flush'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:inbuffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:193:in buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:112:inbuffer_initialize'", "org/jruby/RubyKernel.java:1479:in loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:110:inbuffer_initialize'"], :level=>:warn}

Blank rows are getting inserted when I call a different table than "log"

Expected & Actual Behavior

I am picking up data through logstash and inserting it into a Postgres DB.

input {
    stdin { }
}
output {
    jdbc {
        driver_jar_path => 'D:\OpenSource\postgres\postgresql-9.4.1208.jre6.jar'
        connection_string => 'jdbc:postgresql://localhost:5432/loglog?user=postgres&password=password'
        statement => [ "INSERT INTO ping (action, url, description) VALUES(?, ?, ?)", "action", "url", "description" ]
    }
}

Although I see the number of rows increase each time I provide input through stdin, I do not see the data.

Please help!

Your Environment

  • Version of plugin used: logstash-output-jdbc-2.x(Windows7)
  • Version of Logstash used: logstash-2.3.3(Windows7)
  • Database engine & version you're connecting to: postgresql-9.5.3-1-windows-x64
  • Have you checked you've met the Logstash requirements for Java versions?: YES

failed to install

Why did the installation fail with the following error?

Installing logstash-output-jdbc
Error Bundler::InstallError, retrying 1/10
An error occurred while installing logstash-output-jdbc (0.2.7), and Bundler cannot continue.
Make sure that gem install logstash-output-jdbc -v '0.2.7' succeeds before bundling.
WARNING: SSLSocket#session= is not supported

Why is this not in the logstash-plugins organisation?

I've been asked this several times over various media, and I'm getting a bit fed up of answering this.

I'll be frank, I'm not overly fond of the logstash-plugins.

  • Many plugins there seem to have a standard template README, which is useless or wrong:
    • For instance describing the plugin as being a filter when it's an output plugin - a minor point I'll concede
    • Nothing else about what it does or what it's capabilities, drawbacks or limitations are
    • No example configurations
  • I find the auto-generated documentation on elastic.co to be borderline acceptable in most of the plugins, which is something I find a bit perturbing, since Jordan effectively described documentation as a first-class bug in his early talks on logstash. I'll admit that the lack of tests in this repo is probably borderline as bad as little/no documentation; however, I do use this plugin in production myself and I have a vested interest in not breaking it. As and when I have time I will sort out the lack of tests
  • I don't trust elastic.co: I find their frequent naming, version scheme, and repository alterations irritating. Lumberjack is a prime example. First it had a name change. Then it moved to one domain. Then it moved to another domain. Then it got switched out for Beats. I'm all for improving a product, but if you don't live in the ELK ecosystem this can and will bite you in the ass

I've marked this as wontfix for now. I'd be happy to see my opinion changed.

Metrics support

Logstash v5 looks like it's getting metrics support. Once the implementation is complete let's see if we can join the party.

v2.0 support

Currently waiting on testing that the master branch works correctly with v2.0.

Buffered output

There should be an option to switch to buffered output, rather than immediate.

Closed Connection Exception

The database connection is getting closed after running for a few hours (it doesn't follow a pattern). I need to run logstash indefinitely and insert the required fields into the database using the logstash-output-jdbc plugin in the output.

Below is the trace:
{:timestamp=>"2015-11-09T11:26:22.870000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>21, :exception=>java.sql.SQLRecoverableException: Closed Connection, :backtrace=>["oracle.jdbc.driver.PhysicalConnection.prepareStatement(oracle/jdbc/driver/PhysicalConnection.java:3587)", "oracle.jdbc.driver.PhysicalConnection.prepareStatement(oracle/jdbc/driver/PhysicalConnection.java:3488)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)", "LogStash::Outputs::Jdbc.flush(/home/oracle/Logstash/logstash-1.5.3/vendor/local_gems/97ce7616/logstash-output-jdbc-0.1.1/lib/logstash/outputs/jdbc.rb:104)", "LogStash::Outputs::Jdbc.flush(/home/oracle/Logstash/logstash-1.5.3/vendor/local_gems/97ce7616/logstash-output-jdbc-0.1.1/lib/logstash/outputs/jdbc.rb:104)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1341)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193)", "Stud::Buffer.buffer_flush(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193)", "RUBY.buffer_initialize(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112)", "org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1511)", "RUBY.buffer_initialize(/home/oracle/Logstash/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

Here is my logstash config file.
trace.txt

JDBC Exception. No retry. {:exception=>#<NoMethodError: undefined method `start_with?' for 1:Fixnum>, :level=>:error}

Expected & Actual Behavior

Exception message
JDBC Exception. No retry. {:exception=>#<NoMethodError: undefined method `start_with?' for 1:Fixnum>, :level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>"NoMethodError", :backtrace=>["C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.9/lib/logstash/outputs/jdbc.rb:332:in `log_jdbc_exception'", "org/jruby/RubyKernel.java:1479:in `loop'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.9/lib/logstash/outputs/jdbc.rb:326:in `log_jdbc_exception'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.9/lib/logstash/outputs/jdbc.rb:339:in `retry_exception?'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.9/lib/logstash/outputs/jdbc.rb:244:in `safe_flush'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-jdbc-0.2.9/lib/logstash/outputs/jdbc.rb:143:in `flush'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1479:in `loop'", "C:/Program Files/elastic/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}

Your Environment

Logstash version 2.3.3
Plugin version 0.2.9
JDBC driver Sql Server JDBC Driver
https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774

Untested with other JDBC drivers

Known working SQL engines (note: these may not all yet be actively tested on every release). Example configurations can be found in the examples/ directory.

  • DerbyDB
  • Apache Phoenix
  • MariaDB
  • MySQL
  • odps
  • Oracle
  • Postgres
  • Microsoft SQL Server
  • SQLite

If you'd like to add to this list, we do accept pull requests.

Specifying invalid sql in the statement causes an infinite loop of attempted flushes

With an invalid SQL statement like:

    statement => [ "INSERT INTO logs(host, message, timestamp ) VALUES( ?, ?)", "%{host}", "%{message}", "%{timestamp}" ]

the output plugin will loop (seemingly) forever trying to flush the message to the db (I'm using SqlServer 2012).

With debug on, the error message is a SqlServerException: "The index 3 is out of range.", which makes sense since the SQL is invalid.

Since this won't work until the SQL is fixed in the configuration, it should give up.
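
For reference, the statement as presumably intended, with the placeholder count matching the three parameters:

    statement => [ "INSERT INTO logs(host, message, timestamp) VALUES(?, ?, ?)", "%{host}", "%{message}", "%{timestamp}" ]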

Ability to use fields inside sql statement (not just for column values)

First off, thanks for the awesome plugin!

I was hoping to use a field value ("table_name") to control what table an event is inserted into. As it currently stands, the plugin just passes the 0th string as the sql so something like "INSERT INTO %{table_name}" gets taken literally.

As I understand it, this is how prepared statements work, so @statement[0] would have to be parsed ahead of time. Do you think this would be prohibitively slow? Maybe it could take the string literally by default, but parse for field values if a certain configuration option was set.

I am relatively new to the logstash world so I'll defer to you, but if you think something like this would work, let me know and I would be happy to write it.

Add connection string config file path

I'm reading the database connection properties from a configuration file and putting them in a field on the event, using the ruby filter plugin.
Would it be difficult to add string interpolation to the connection string, so I can keep sensitive information out of my logstash.conf file?
I think the change would just be something like this:

@connection = driver.connect(event.sprintf(@connection_string), java.util.Properties.new)

In this line: https://github.com/theangryangel/logstash-output-jdbc/blob/master/lib/logstash/outputs/jdbc.rb#L74

Does that make sense?
Thanks.

Validation failed for <field_name>: 'NULL' cannot be cast to type double

I am using logstash 2.3.1 version. Input plugin is elasticsearch and I am trying to insert data into crate.io database using logstash output jdbc plugin.
There is a field 'M_body_t', of type DOUBLE, which has both integer values and null values at the source. When the config below is run, it throws the error 'Validation failed for bench_pose_m_body_t: 'NULL' cannot be cast to type double'.

input {
    elasticsearch {
        hosts => "host_name:9200"
        index => "index1"
        query => '{ "size":10,"query": {"match_all": {} } }'
    }
}
filter {
    if !([bench_pose][M_Body_t]) {
        mutate {
            add_field => {"[bench_pose][M_Body_t]" => "NULL"}
        }
    }
    if !([bench_pose][M_Jaw_t]) {
        mutate {
            add_field => {"[bench_pose][M_Jaw_t]" => "NULL"}
        }
    }
}
output {
    jdbc {
        driver_class => "io.crate.client.jdbc.CrateDriver"
        driver_auto_commit => false
        driver_jar_path => "/path/to/crate-jdbc-standalone-1.12.3.jar"
        connection_string => "crate://host_ip:4300"
        statement => ["INSERT INTO chartd_ls(path,bench_pose_m_Body_t,bench_pose_m_jaw_t) VALUES(?,?,?)",'path','%{[bench_pose][M_Body_t]}','%{[bench_pose][M_Jaw_t]}']
    }
}

If I remove the filter plugin, then I face the error 'Validation failed for <field_name>: '%{[bench_pose][M_Body_t]}' cannot be cast to type double' when the source has a null value.

I know the DOUBLE datatype does not allow null values to be inserted, but how do I retain NULL values in these fields in the destination table?
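
A hedged guess based on the statement documentation above: event properties passed as keys are "automatically converted as required", while %{} strings are passed verbatim, so passing the fields as keys (and dropping the filter block) may let a missing field be bound as a real SQL NULL rather than literal text. Untested:

    statement => ["INSERT INTO chartd_ls(path,bench_pose_m_body_t,bench_pose_m_jaw_t) VALUES(?,?,?)", "path", "[bench_pose][M_Body_t]", "[bench_pose][M_Jaw_t]"]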

Unable to connect to database when connection string uses TCPS.

I am using the ojdbc6.jar driver along with JRE 1.8 while trying to connect to a database instance. The connection string uses the TCPS protocol, referring to a SHA-1 certificate.
As per our new rules we had to switch to a SHA-2 certificate for secured connections. I have imported the new certificate into my cacerts, and the plugin is not able to connect to the database anymore.
Is this something that we have to fix here?
We have tried using JRE 1.7 and ojdbc7.jar to check compatibility, but that didn't work.
Is it possible to use a WebLogic data source in place of a connection string? That would resolve my issue.

Thanks,
Shravan

call SP (stored procedure)

Hi
first, thanks for the great job you've done.
I have a question: can I use this plugin to call a stored procedure in MSSQL Server?
Thanks
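
Untested, but since the plugin prepares its statement through JDBC, the JDBC call escape syntax may work for procedures that take only input parameters (the connection string, procedure name, and parameters are hypothetical):

output {
    jdbc {
        connection_string => "jdbc:sqlserver://localhost:1433;databaseName=demo"
        # {call ...} is the JDBC escape syntax for invoking a stored procedure
        statement => [ "{call log_event(?, ?)}", "%{host}", "%{message}" ]
    }
}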

Timestamp

Some drivers are particularly fussy about timestamps:

  • Previously we've found some drivers don't/won't accept java.sql.Timestamp properly
  • Currently we're converting to an iso8601 string, and this works most of the time
  • Some drivers (i.e. DerbyDB) won't accept iso8601 with a 'T' in it, and instead want a space

Options:

  • Switch to java.sql.Timestamp and provide a configuration option to format to iso8601 string
    • Con: This may break some deployments
    • Pro: I believe most drivers should support java.sql.Timestamp properly
  • Switch to java.sql.Timestamp with a blacklist
    • Con: Can't recall which driver caused the switch to iso8601 in the first place
    • Con: Requires maintaining a blacklist
  • Add a configuration option to format timestamps, but otherwise stay as things are
    • Pro: Existing deployments should continue working
    • Pro: Minimal effort to support
    • Con: Requires users to correctly enter time formatting strings
    • Con: Annoying to validate configuration
