
influxdb-java's Introduction

influxdb-java


This is the official (and community-maintained) Java client library for InfluxDB (1.x), the open source time series database that is part of the TICK (Telegraf, InfluxDB, Chronograf, Kapacitor) stack.

For InfluxDB 3.0 users, this library is succeeded by the lightweight v3 client library.

Note: This library is for use with InfluxDB 1.x and the InfluxDB 2.x compatibility API. For full support of InfluxDB 2.x features, please use the influxdb-client-java client.

Adding the library to your project

The library artifact is published in Maven Central, available at https://search.maven.org/artifact/org.influxdb/influxdb-java.

Release versions

Maven dependency:

<dependency>
  <groupId>org.influxdb</groupId>
  <artifactId>influxdb-java</artifactId>
  <version>${influxdbClient.version}</version>
</dependency>

Gradle dependency:

compile group: 'org.influxdb', name: 'influxdb-java', version: "${influxdbClientVersion}"

Features

Quick start

// Create an object to handle the communication with InfluxDB.
// (best practice tip: reuse the 'influxDB' instance when possible)
final String serverURL = "http://127.0.0.1:8086", username = "root", password = "root";
final InfluxDB influxDB = InfluxDBFactory.connect(serverURL, username, password);

// Create a database...
// https://docs.influxdata.com/influxdb/v1.7/query_language/database_management/
String databaseName = "NOAA_water_database";
influxDB.query(new Query("CREATE DATABASE " + databaseName));
influxDB.setDatabase(databaseName);

// ... and a retention policy, if necessary.
// https://docs.influxdata.com/influxdb/v1.7/query_language/database_management/
String retentionPolicyName = "one_day_only";
influxDB.query(new Query("CREATE RETENTION POLICY " + retentionPolicyName
        + " ON " + databaseName + " DURATION 1d REPLICATION 1 DEFAULT"));
influxDB.setRetentionPolicy(retentionPolicyName);

// Enable batch writes to get better performance.
influxDB.enableBatch(
    BatchOptions.DEFAULTS
      .threadFactory(runnable -> {
        Thread thread = new Thread(runnable);
        thread.setDaemon(true);
        return thread;
      })
);

// Close it if your application is terminating or you are not using it anymore.
Runtime.getRuntime().addShutdownHook(new Thread(influxDB::close));

// Write points to InfluxDB.
influxDB.write(Point.measurement("h2o_feet")
    .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
    .tag("location", "santa_monica")
    .addField("level description", "below 3 feet")
    .addField("water_level", 2.064d)
    .build());

influxDB.write(Point.measurement("h2o_feet")
    .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
    .tag("location", "coyote_creek")
    .addField("level description", "between 6 and 9 feet")
    .addField("water_level", 8.12d)
    .build());

// Wait a few seconds in order to let the InfluxDB client
// write your points asynchronously (note: you can adjust the
// internal time interval if you need via 'enableBatch' call).
Thread.sleep(5_000L);

// Query your data using InfluxQL.
// https://docs.influxdata.com/influxdb/v1.7/query_language/data_exploration/#the-basic-select-statement
QueryResult queryResult = influxDB.query(new Query("SELECT * FROM h2o_feet"));

System.out.println(queryResult);
// It will print something like:
// QueryResult [results=[Result [series=[Series [name=h2o_feet, tags=null,
//      columns=[time, level description, location, water_level],
//      values=[
//         [2020-03-22T20:50:12.929Z, below 3 feet, santa_monica, 2.064],
//         [2020-03-22T20:50:12.929Z, between 6 and 9 feet, coyote_creek, 8.12]
//      ]]], error=null]], error=null]

Contribute

For version change history have a look at ChangeLog.

Build Requirements

  • Java 1.8+
  • Maven 3.5+
  • Docker (for Unit testing)

Then you can build influxdb-java with all tests with:

$> export INFLUXDB_IP=127.0.0.1

$> mvn clean install

There is a shell script that runs InfluxDB and Maven from inside a Docker container; you can execute it by running:

$> ./compile-and-test.sh

Useful links

License

The MIT License (MIT)

Copyright (c) 2014 Stefan Majer

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

influxdb-java's People

Contributors

andrewdodd, asashour, ashamanur, bednar, bentatham, caoli5288, csokol, dependabot[bot], dubsky, edemsegbedzi, eranl, fmachado, fredo994, gkatzioura, hampelratte, ivankudibal, jiafu1115, joelmarty, jordmoz, jvshahid, lxhoan, majst01, raphaelaudet, rhajek, sfeilmeier, shanexu, shanielh, simon04, tomklapka, wasnertobias


influxdb-java's Issues

Implement batch writes

Does the Java driver support batch writes? They are described here: http://influxdb.com/docs/v0.8/api/reading_and_writing_data.html .

To clarify, I'm talking about multiple data points with different timestamps in the same series, in one column.

If it exists, it should be documented.

If it would be helpful, I could implement it and send a pull request, although this is my first contact with InfluxDB. I would really need batch writes, since I have writes with a couple of million data points.

Error writing JSON points to InfluxDB

Hi Stefan,

sorry for reopening the case of #47 , but my solution is not working as expected. If I convert my points via json to string and convert them back to points (GSON), I am getting the following error while writing them to database:

07-07 07:24:06.418   9821-11560 E/DatabaseInfluxDB: write failed: field type conflict: input field "value" is type int64, already exists as type %!s(influxql.DataType=1)
    java.lang.RuntimeException: write failed: field type conflict: input field "value" is type int64, already exists as type %!s(influxql.DataType=1)
            at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)
            at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)
            at java.lang.reflect.Proxy.invoke(Proxy.java:397)
            at org.influxdb.impl.$Proxy0.writePoints(Unknown Source)
            at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:156)
            at de.fhg.ipa.wearatwork.DatabaseInfluxDB$1.run(DatabaseInfluxDB.java:63)
            at java.lang.Thread.run(Thread.java:818)

When I delete and recreate my database, it is working fine. But then I am getting the same error when writing points to database which weren't converted to json before. I am not able to write points which were converted to json and back as well as points which were never converted to the same database. So it seems like converting points to json and back is destroying some information about the type of fields. Do you have any idea how to fix this?

Greetings
Dennis

Release 1.6 does not exist

The current README on master says:
To connect to InfluxDB 0.8.x you need to use influxdb-java version 1.6.

But I cannot find any release nor tag for 1.6.

Is this release missing or maybe it's a typo in the README?

Implement last missing REST API calls

from api.go:

// force a raft log compaction
self.registerEndpoint(p, "post", "/raft/force_compaction", self.forceRaftCompaction)

// fetch current list of available interfaces
self.registerEndpoint(p, "get", "/interfaces", self.listInterfaces)

// cluster config endpoints
self.registerEndpoint(p, "get", "/cluster/servers", self.listServers)
self.registerEndpoint(p, "post", "/cluster/shards", self.createShard)
self.registerEndpoint(p, "get", "/cluster/shards", self.getShards)
self.registerEndpoint(p, "del", "/cluster/shards/:id", self.dropShard)

// return whether the cluster is in sync or not
self.registerEndpoint(p, "get", "/sync", self.isInSync)

Precision goes wrong

  1. BatchPoints.time does not affect a Point with a null time, so the batch goes out with the BatchPoints' time precision while the Point defaults to TimeUnit.NANOSECONDS.
  2. A Point's default precision is NANOSECONDS, yet its default value is System.currentTimeMillis(), whose precision is milliseconds.
    Actually, the test passes because we set ms precision on the BatchPoints!

Two negatives make a positive :-)
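The mismatch described above can be reproduced with plain JDK arithmetic. This is a minimal sketch (not the library's code; the misreadAsNanos helper is hypothetical) showing how a millisecond timestamp interpreted under nanosecond precision collapses into January 1970:

```java
import java.util.concurrent.TimeUnit;

public class PrecisionMismatch {
    // A millisecond timestamp that is interpreted as nanoseconds is
    // effectively divided by 10^6, landing the point in January 1970.
    static long misreadAsNanos(long millisTimestamp) {
        return TimeUnit.NANOSECONDS.toMillis(millisTimestamp);
    }

    public static void main(String[] args) {
        long millis = 1_426_692_963_231L;           // 2015-03-18, in epoch millis
        System.out.println(misreadAsNanos(millis)); // 1426692 -> shortly after 1970-01-01
    }
}
```

This is why two matching mistakes (ms value, ms precision on the batch) cancel out and the test still passes.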

mvn 2

Hi,
can't the influxdb client be compatible with Maven 2?

Maven 2 builds are still used in a lot of software...

java.lang.RuntimeException: bad timestamp

This code gives an error when the tag "host" is present:

Exception in thread "main" java.lang.RuntimeException: bad timestamp
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)

Point point1 = Point.measurement(metric)
                        .time(time, TimeUnit.MILLISECONDS)
                        .field("value", cnt)
                            .tag("host", host)
                            .tag("region", region)                          
                        .build();

If I just remove the tag("host", host) call, it works.

InfluxDB 0.9
with Java client 2.0-SNAPSHOT

InfluxDB.write NullPointerException

Hi,

Using InfluxDB.write(final String database, final String retentionPolicy, final Point point) throws a NullPointerException when used with a valid database, retentionPolicy, and point.

Stack Trace:
java.lang.NullPointerException
org.influxdb.impl.TimeUtil.toTimePrecision(TimeUtil.java:21)
org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:155)
org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:140)

It seems that the precision of the BatchPoints object is being accessed, but it is null as it is never set when the BatchPoints object is created (InfluxDBImpl.java:138).

I am using version 2.0.

Thanks
Anton

Scientific notation for large double values

Hi,

We have noticed a problem when writing Doubles to InfluxDB. Above a certain magnitude, doubles will be formatted in scientific notation. We have worked around this by adding an extra if statement to handle doubles specifically in the concatenateFields method of the Point class:

if (value instanceof String) {
    String stringValue = (String) value;
    sb.append("\"").append(FIELD_ESCAPER.escape(stringValue)).append("\"");
} else if (value instanceof Double) {
    // Addition starts here.
    DecimalFormat df = new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
    df.setMaximumFractionDigits(340); // 340 = DecimalFormat.DOUBLE_FRACTION_DIGITS
    sb.append(df.format(value));
    // Addition ends here.
} else {
    sb.append(value);
}
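The workaround relies only on java.text.DecimalFormat from the JDK. A self-contained sketch of the same idea (the plain() helper is illustrative, not part of influxdb-java):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PlainDoubleFormat {
    // Format a double in plain decimal notation, never scientific.
    static String plain(double value) {
        DecimalFormat df = new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
        df.setMaximumFractionDigits(340); // 340 = DecimalFormat.DOUBLE_FRACTION_DIGITS
        return df.format(value);
    }

    public static void main(String[] args) {
        System.out.println(String.valueOf(3.0E7)); // "3.0E7"    -- the problem
        System.out.println(plain(3.0E7));          // "30000000" -- the workaround
    }
}
```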

java write data

I use the web client to write data:
{"table":"write_test","points":[{"table":"count","fields":{"value":5}}]}
It returns "bad request".
I want to call the HTTP API myself; writing the data with HttpURLConnection is not working.

Reading timestamp "Cannot convert String to Long"

When writing into influxDB, I provided a long as Timestamp
Point.measurement(/**/).time(observation.getTimestamp(), TimeUnit.MILLISECONDS).(/**/)

However, when I read the Point back I find a formatted String
2015-07-03T09:34:25.714Z
and not the long as I would expect.
(Of course I get a Cast exception when trying to initialize my Pojo)

Is this wanted?
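For reference, the RFC 3339 string that comes back can be converted to the original epoch-millis long with the JDK's java.time API. A minimal sketch (the toMillis helper is illustrative, not part of the client):

```java
import java.time.Instant;

public class Rfc3339Time {
    // InfluxDB returns the time column as an RFC 3339 UTC string; convert it
    // back to the epoch-millis long that was originally written.
    static long toMillis(String rfc3339) {
        return Instant.parse(rfc3339).toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toMillis("2015-07-03T09:34:25.714Z")); // 1435916065714
    }
}
```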

InfluxDB#describeContinuousQueries queries wrong endpoint resulting in 404 / Exception

The following call:

influxDb.describeContinuousQueries("events");

produces a call to:

http://localhost:8086/db/events/continuous_queries?u=root&p=root

Which results in a 404 and related exception.

However, the admin interface generates:

http://localhost:8086/cluster/configuration?u=root&p=root

which produces the expected response. The implementation should be updated to point to the correct endpoint.

NPE in non-batch request when using latest (2.0) influxdb-java client

Attempted to use https://jitpack.io/#influxdb/influxdb-java/influxdb-java-2.0 to get access to the 2.0 client, but ran into this error:

Point point = Point.measurement("span")
        .time(span.timestamp, TimeUnit.MILLISECONDS);
        // ...

influxDB.write(dbName, "default", point);

results in:

java.lang.NullPointerException
    at org.influxdb.impl.TimeUtil.toTimePrecision(TimeUtil.java:21)
    at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:155)
    at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:140)
    at Consumer.Consumer$$handleEvent$1(Main.scala:72)
    at Consumer$$anonfun$run$1.apply(Main.scala:37)
    at Consumer$$anonfun$run$1.apply(Main.scala:34)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at Consumer.run(Main.scala:34)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

retrofit.RetrofitError: Unexpected status line: w.onload = function () {

Is this a bug in OkHttp?

retrofit.RetrofitError: Unexpected status line: w.onload = function () {
at retrofit.RetrofitError.networkError(RetrofitError.java:27)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:390)
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:240)
at org.influxdb.impl.$Proxy75.write(Unknown Source)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:126)
at com.zsuper.controller.HomeController.updatePriceAndBroadcast(HomeController.java:79)
at com.zsuper.controller.HomeController.access$000(HomeController.java:56)
at com.zsuper.controller.HomeController$1.run(HomeController.java:92)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ProtocolException: Unexpected status line: w.onload = function () {
at com.squareup.okhttp.internal.http.StatusLine.parse(StatusLine.java:53)
at com.squareup.okhttp.internal.http.HttpConnection.readResponse(HttpConnection.java:189)
at com.squareup.okhttp.internal.http.HttpTransport.readResponseHeaders(HttpTransport.java:101)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:676)
at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:426)
at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:371)
at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:466)
at retrofit.client.UrlConnectionClient.readResponse(UrlConnectionClient.java:73)
at retrofit.client.UrlConnectionClient.execute(UrlConnectionClient.java:38)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:321)
... 14 more

documentation of batching and NullPointerException

The page where batching is described contains wrong source code:
https://github.com/influxdb/influxdb-java

is:
influxDB.write(point1);
influxDB.write(point1);

should be:
influxDB.write(dbName, retentionPolicy, point1);
influxDB.write(dbName, retentionPolicy, point2);

However with these code changes a NullPointerException happens:
Caused by: java.lang.NullPointerException: while trying to invoke the method java.util.concurrent.TimeUnit.ordinal() of a null object loaded from local variable 't'
at org.influxdb.impl.TimeUtil.toTimePrecision(TimeUtil.java:21)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:150)
at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:164)
at org.influxdb.impl.BatchProcessor.put(BatchProcessor.java:179)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:136)

string values parse as boolean

Hello,

There seems to be a problem with the Point.Builder class and using String values. Example:

db = InfluxDBFactory.connect(dbUrl, dbUser, dbPass);
BatchPoints batchPoints = BatchPoints
              .database("my_db")
              .time(time, TimeUnit.MILLISECONDS)
              .tag("async", "true")
              .retentionPolicy("default")
              .consistency(InfluxDB.ConsistencyLevel.ALL)
              .build();
Point.Builder builder = Point.measurement("my_type");
builder.field("my_field", "string_value");
Point point = builder.build();
db.write(batchPoints);

This results in a field of type "boolean" typically set to false in the DB.

Influxd console shows errors such as
unable to parse bool value 'string_value': strconv.ParseBool: parsing "string_value": invalid syntax

I can repeat this with curl using

$ curl -i -XPOST 'http://localhost:8086/write?db=test1&precision=ms' -d 'cpu_load_short,host=server01,region=us-west value=string_value 1435148346112'

the above creates a field called "value" with boolean type and value of false, with the matching parse error.

$ curl -i -XPOST 'http://localhost:8086/write?db=test1&precision=ms' -d 'cpu_load_short,host=server01,region=us-west value2="string_value" 1435148346112'

the above with the double quotes creates a string field with the given string_value.

The problem seems to be in the Point.java file, in the method "public String lineProtocol()", where you do

sb.append(field.getKey()).append("=").append(field.getValue());

I would suggest checking whether the value is of type String and adding the double quotes to fix this.

This is using the 2.0 version built from the repository.

Cheers,
T
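The suggested fix amounts to quoting string field values when serializing to line protocol. A self-contained sketch of the idea (the field() helper is hypothetical, not influxdb-java's actual implementation):

```java
public class LineProtocolField {
    // Serialize one field for the line protocol: string values must be
    // double-quoted (with embedded quotes escaped), otherwise the server
    // tries to parse the bare token as a boolean or number.
    static String field(String key, Object value) {
        StringBuilder sb = new StringBuilder(key).append('=');
        if (value instanceof String) {
            sb.append('"').append(((String) value).replace("\"", "\\\"")).append('"');
        } else {
            sb.append(value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(field("my_field", "string_value")); // my_field="string_value"
        System.out.println(field("value", 5));                 // value=5
    }
}
```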

Data is written with delay

When I repeatedly write to the influxdb without interval, with the following code:

public void write_fastWrite2_unknow()
{
    m_log.debug("==================== START TEST: write_fastWrite2_unknow");

    CInfluxDriver drv = new CInfluxDriver(m_prop);

    String tag = "new1";

    for (int i=0; i<5; i++)
    {
        CTag ctag = new CTag(tag, new CValue(i, CState.GOOD, Calendar.getInstance().getTime()));
        boolean result = drv.write(ctag);
        m_log.debug("Write ctag: "+ctag.toString());
        m_log.debug("CTag time of long: "+ctag.getValues().get(0).getTimestamp().getTime());
        assertTrue("Failed to write ctag!", result);


        try { Thread.sleep(250); } 
        catch (InterruptedException e) { m_log.error("Failed sleep. "+e.toString()); }

    }
}

I have the following contents in the influxdb series:
Result write

And also the following log:

19:36:03,181 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:362 - ==================== START TEST: write_fastWrite2_unknow
19:36:04,061 INFO CInfluxDriver:connect:87 - Done connection to InfluxDB.
19:36:04,153 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 0 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:36:03 State: GOOD Param: null]
19:36:04,153 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426692963231

19:36:04,468 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 1 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:36:04 State: GOOD Param: null]
19:36:04,469 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426692964403
19:36:04,768 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426692963231E12, sequence_number=1.697800001E9, time=1.426692963231E12, value=0.0}

19:36:04,774 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 2 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:36:04 State: GOOD Param: null]
19:36:04,777 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426692964719
19:36:05,076 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426692964719E12, sequence_number=1.697800001E9, time=1.426692963231E12, value=0.0}

19:36:05,087 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 3 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:36:05 State: GOOD Param: null]
19:36:05,087 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426692965028
19:36:05,378 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426692964719E12, sequence_number=1.697800001E9, time=1.426692963231E12, value=0.0}

19:36:05,384 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 4 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:36:05 State: GOOD Param: null]
19:36:05,385 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426692965338

Although I expect the following result:
Result write 2

With the following log:

19:59:02,746 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:362 - ==================== START TEST: write_fastWrite2_unknow
19:59:04,330 INFO CInfluxDriver:connect:87 - Done connection to InfluxDB.
19:59:04,472 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 0 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:59:02 State: GOOD Param: null]
19:59:04,472 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426694342848
19:59:06,062 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426694342848E12, sequence_number=1.697870001E9, time=1.426694342848E12, value=0.0}

19:59:06,070 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 1 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:59:05 State: GOOD Param: null]
19:59:06,070 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426694345973
19:59:07,696 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426694345973E12, sequence_number=1.697880001E9, time=1.426694345973E12, value=1.0}
19:59:07,702 DEBUG CInfluxDriver:write:530 - ____[2] Serie row: {time_last=1.426694345973E12, sequence_number=1.697870001E9, time=1.426694342848E12, value=0.0}

19:59:07,709 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 2 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:59:07 State: GOOD Param: null]
19:59:07,709 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426694347571
19:59:09,263 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426694347571E12, sequence_number=1.697890001E9, time=1.426694347571E12, value=2.0}
19:59:09,264 DEBUG CInfluxDriver:write:530 - ____[2] Serie row: {time_last=1.426694347571E12, sequence_number=1.697880001E9, time=1.426694345973E12, value=1.0}
19:59:09,264 DEBUG CInfluxDriver:write:530 - ____[3] Serie row: {time_last=1.426694345973E12, sequence_number=1.697870001E9, time=1.426694342848E12, value=0.0}

19:59:09,274 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 3 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:59:09 State: GOOD Param: null]
19:59:09,274 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426694349209
19:59:10,897 DEBUG CInfluxDriver:write:530 - ____[1] Serie row: {time_last=1.426694349209E12, sequence_number=1.697900001E9, time=1.426694349209E12, value=3.0}
19:59:10,898 DEBUG CInfluxDriver:write:530 - ____[2] Serie row: {time_last=1.426694349209E12, sequence_number=1.697890001E9, time=1.426694347571E12, value=2.0}
19:59:10,898 DEBUG CInfluxDriver:write:530 - ____[3] Serie row: {time_last=1.426694347571E12, sequence_number=1.697880001E9, time=1.426694345973E12, value=1.0}
19:59:10,898 DEBUG CInfluxDriver:write:530 - ____[4] Serie row: {time_last=1.426694345973E12, sequence_number=1.697870001E9, time=1.426694342848E12, value=0.0}

19:59:10,908 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:372 - Write ctag: Item: new1 Value: [Value: 4 Type: java.lang.Integer Timestamp: 18/03/2015 > 19:59:10 State: GOOD Param: null]
19:59:10,909 DEBUG TestCInfluxDriver:write_fastWrite2_unknow:373 - CTag time of long: 1426694350774

Such scenario happens if to change a write interval from 250 ms to 1500 ms.

    {
        //...
        try { Thread.sleep(1500); } 
        catch (InterruptedException e) { m_log.error("Failed sleep. "+e.toString()); }
        //...
    }

Records in the series form a linked-list structure: each row points to the previous record via the time_last field. Given the time of a record, the previous one can be retrieved with a query such as 'select * from new1 where time < 1426694347571000000 limit 2', but it is impossible to retrieve the record that follows the current one.

The WRITE procedure runs several queries before writing the data; the elapsed time does not exceed 60-120 ms.

Can anyone tell me why these strange events occur when the interval between records in InfluxDB is very small?

Cast time column from java.lang.String to java.util.Date

The time column of a query result is of String type, in formats like:
"2015-07-01T09:06:46.852663Z"
"2015-07-01T09:06:46.852Z"
The time string is UTC time, not local time.

Returning java.util.Date would be better.

I also suggest adding a method like:
influxDB.query(Query query, TimeZone timeZone)
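Since Java 8, these RFC3339 strings can be converted without any library change; a minimal sketch (InfluxTime is a hypothetical helper, not part of influxdb-java):

```java
import java.time.Instant;
import java.util.Date;

// Hypothetical helper (not part of influxdb-java): convert InfluxDB's RFC3339
// UTC time strings to java.util.Date. Instant.parse accepts both millisecond
// and microsecond precision; the trailing 'Z' marks the value as UTC.
class InfluxTime {
    static Date toDate(String rfc3339) {
        // Date only keeps millisecond precision, so microseconds are truncated.
        return Date.from(Instant.parse(rfc3339));
    }
}
```

Note that Date.from truncates to millisecond precision, so both sample strings above map to the same Date.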

Result of influxDB#write()

Hi Stefan,

first of all thanks for your great work on InfluxDB implementation for java. Including your commits from the last days it is working quite well in my 0.9.0 environment.

I was wondering if there is any way to get the result of writing a point (or batched points) to the database, something like a boolean true/false. I'd like to store my data locally if there is any error (e.g. no connection) and try again later.

Thanks,
Dennis
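The retry-on-failure pattern the question describes can be approximated today by treating a thrown exception as failure and buffering the point locally; a rough sketch under that assumption (Backend stands in for the real influxDB.write call, and all names here are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: buffer points locally when a write fails, retry them later.
class BufferedWriter {
    // Stand-in for influxDB.write(...); assumed to throw on failure.
    interface Backend { void write(String lineProtocol) throws Exception; }

    private final Backend backend;
    private final Deque<String> pending = new ArrayDeque<>();

    BufferedWriter(Backend backend) { this.backend = backend; }

    // Returns true on success, false if the point was buffered for retry.
    boolean write(String point) {
        try {
            backend.write(point);
            return true;
        } catch (Exception e) {
            pending.add(point); // keep for a later retry
            return false;
        }
    }

    // Re-attempt buffered points; stop at the first one that still fails.
    void retryPending() {
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            if (!write(pending.poll())) {
                break;
            }
        }
    }

    int pendingCount() { return pending.size(); }
}
```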

Running under JDK 1.6 causes a java.lang.UnsupportedClassVersionError

java.lang.UnsupportedClassVersionError: com/squareup/okhttp/OkHttpClient : Unsupported major.minor version 51.0
Caused by: java.lang.UnsupportedClassVersionError: com/squareup/okhttp/OkHttpClient : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:421)
at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:383)
at org.influxdb.impl.InfluxDBImpl.<init>(InfluxDBImpl.java:72)
at org.influxdb.InfluxDBFactory.connect(InfluxDBFactory.java:30)
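For context: "major.minor version 51.0" is the class-file format version (50 = Java 6, 51 = Java 7, 52 = Java 8), so this trace says okhttp was compiled for Java 7 and JDK 1.6 cannot load it. The version sits in bytes 6-7 of the class-file header, as this small sketch shows:

```java
// Read the class-file "major version" from a class file header.
// Header layout: bytes 0-3 magic 0xCAFEBABE, 4-5 minor, 6-7 major (big-endian).
class ClassFileVersion {
    static int major(byte[] classFile) {
        return ((classFile[6] & 0xFF) << 8) | (classFile[7] & 0xFF);
    }
}
```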

write cache after adding a batch entry

Hi,

I think I found the place where the cache is written: the method BatchProcessor.put(). It would probably be better to add the batch entry to the cache before writing the entries.

Regards,
Bernhard

is:

void put(final BatchEntry batchEntry) {
    if (this.issuedBatches.incrementAndGet() >= this.actions) {
        this.issuedBatches.set(0);
        write();
    }
    this.cache.add(batchEntry);
}

better?

void put(final BatchEntry batchEntry) {
    this.cache.add(batchEntry);
    if (this.issuedBatches.incrementAndGet() >= this.actions) {
        this.issuedBatches.set(0);
        write();
    }
}
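A toy model (not the real BatchProcessor) makes the difference concrete: with write-before-add, the entry that triggers a flush is always left out of that flush and only shipped one batch later.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the two put() orderings; the "write" drains the cache
// into a list of flushed batches so the effect can be observed.
class BatchModel {
    final List<String> cache = new ArrayList<>();
    final List<List<String>> flushed = new ArrayList<>();
    final int actions;
    final boolean addBeforeWrite;
    int issued;

    BatchModel(int actions, boolean addBeforeWrite) {
        this.actions = actions;
        this.addBeforeWrite = addBeforeWrite;
    }

    void put(String entry) {
        if (addBeforeWrite) cache.add(entry);     // suggested ordering
        if (++issued >= actions) {
            issued = 0;
            flushed.add(new ArrayList<>(cache));  // the "write"
            cache.clear();
        }
        if (!addBeforeWrite) cache.add(entry);    // current ordering
    }
}
```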

Http Connection not getting closed ( CLOSE_WAIT)

Recently we ran into an issue where InfluxDB went down and the client publishing metrics started using a lot of file descriptors.

Upon debugging, I found that the InfluxDB Java client does not close the connection if the InfluxDB host is not reachable.

lsof -a -p showed too many IPv4 connections in the CLOSE_WAIT state.

I think the InfluxDB client is not using connection pooling, which is causing the issue.

point.setField(Map)?

hi,
when I use the API, I find that Point's field is private, so fields can only be added one by one. A method like setField(Map) would make the code cleaner.
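Until such a method exists, a small helper can loop a Map through the one-at-a-time builder; a sketch with stand-in names (PointBuilderStub merely mimics the field(name, value) shape of Point's builder and is not the real class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for Point.Builder: only accepts one field at a time.
class PointBuilderStub {
    final Map<String, Object> fields = new LinkedHashMap<>();

    PointBuilderStub field(String name, Object value) {
        fields.put(name, value);
        return this;
    }
}

class Points {
    // The convenience the issue asks for: bulk-add every map entry.
    static PointBuilderStub fields(PointBuilderStub builder, Map<String, Object> all) {
        for (Map.Entry<String, Object> e : all.entrySet()) {
            builder.field(e.getKey(), e.getValue());
        }
        return builder;
    }
}
```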

Changelog missing?

Hi,

I'm missing a Changelog. You just released 1.1; what are the changes?

Regards,
Theo

getting 'bad timestamp' exception

Does anyone know what may cause the following exception?
java.lang.RuntimeException: bad timestamp
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)
at org.influxdb.impl.$Proxy58.writePoints(Unknown Source)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:150)

relevant code:

long time = System.currentTimeMillis();
BatchPoints points = BatchPoints.database(dbName)
        .time(time, TimeUnit.MILLISECONDS)
        .tag("async", "true")
        .retentionPolicy("default")
        .consistency(ConsistencyLevel.ALL)
        .build();

and in a loop:

Point point = Point.measurement(entry.getKey())
        .field("value", value)
        .time(time, TimeUnit.MILLISECONDS)
        .build();
points.point(point);

at the end:

db.write(points);

NPE thrown when working with InfluxDb 0.9 rc2

The ping() method worked properly with InfluxDB 0.8.
When we started to use InfluxDB 0.9 rc2, ping() failed with the following exception:
2015-02-23 17:23:05,388 SEVERE [com.gigaspaces.webui.common] - java.lang.NullPointerException; Caused by: java.lang.NullPointerException
at org.influxdb.impl.InfluxDBImpl.ping(InfluxDBImpl.java:107)

Unable to write to influx

Oct 14 19:07:36 ip-10-0-5-65.ec2.internal bash[2762]: retrofit.RetrofitError: null
Oct 14 19:07:36 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.RetrofitError.networkError(RetrofitError.java:27) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:36 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:389) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:36 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:240) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at org.influxdb.impl.$Proxy64.write(Unknown Source) ~[na:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:105) ~[influxdb-java-1.2.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at io.multicloud.controller.AWSProvider.reportCost(AWSProvider.java:222) ~[controller-0.1.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at io.multicloud.controller.CloudControl.main(CloudControl.java:53) ~[controller-0.1.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: Caused by: java.io.EOFException: null
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:154) ~[okio-1.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.http.HttpConnection.readResponse(HttpConnection.java:189) ~[okhttp-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.http.HttpTransport.readResponseHeaders(HttpTransport.java:101) ~[okhttp-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:676) ~[okhttp-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:426) ~[okhttp-urlconnection-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:371) ~[okhttp-urlconnection-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:466) ~[okhttp-urlconnection-2.0.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.client.UrlConnectionClient.readResponse(UrlConnectionClient.java:73) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.client.UrlConnectionClient.execute(UrlConnectionClient.java:38) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:321) ~[retrofit-1.6.0.jar:na]
Oct 14 19:07:37 ip-10-0-5-65.ec2.internal bash[2762]: ... 5 common frames omitted

Remove time from BatchPoints API

This is a misleading API: time can be set on both BatchPoints and Point, so it is unclear which one is used. Remove it from BatchPoints and clarify the API.

retrofit.RetrofitError: null

Hi,
I just started to use InfluxDB (single node) to store system metrics from a real-time processing engine, Apache Storm. Everything was working really well before encountering this error. It did not even occur when the server was under heavy load, only with normal traffic. Any insight?

Caused by: retrofit.RetrofitError: null
        at retrofit.RetrofitError.networkError(RetrofitError.java:27) ~[stormjar.jar:na]
        at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:390) ~[stormjar.jar:na]
        at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:240) ~[stormjar.jar:na]
        at org.influxdb.impl.$Proxy0.write(Unknown Source) ~[na:na]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:126) ~[stormjar.jar:na]
Caused by: java.io.EOFException: null
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:154) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.http.HttpConnection.readResponse(HttpConnection.java:189) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.http.HttpTransport.readResponseHeaders(HttpTransport.java:101) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:676) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:426) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:371) ~[stormjar.jar:na]
        at com.squareup.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:466) ~[stormjar.jar:na]
        at retrofit.client.UrlConnectionClient.readResponse(UrlConnectionClient.java:73) ~[stormjar.jar:na]
        at retrofit.client.UrlConnectionClient.execute(UrlConnectionClient.java:38) ~[stormjar.jar:na]
        at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:321) ~[stormjar.jar:na]
        ... 22 common frames omitted

P.S. I have removed unnecessary log lines.
