

spark-influx-sink

A Spark metrics sink that pushes to InfluxDB

Why is this useful?

Collecting diagnostic metrics from Apache Spark can be difficult because of Spark's distributed nature. Polling Spark executor processes or scraping logs becomes tedious when executors run on an arbitrary number of remote hosts. This package instead pushes metrics to a central host running InfluxDB, where they can be analyzed in one place.

How to deploy

  1. Run ./gradlew build
  2. Copy the JAR that it outputs to a path where Spark can read it, and add it to Spark's extraClassPath, along with izettle/metrics-influxdb (available on Maven); an example fragment is shown after this list
  3. Add your new sink to Spark's conf/metrics.properties
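
For step 2, a spark-defaults.conf fragment along these lines would put both JARs on the driver and executor classpaths (the paths below are illustrative, not part of this project; point them at wherever you copied the JARs):

spark.driver.extraClassPath=/opt/spark/extra-jars/spark-influx-sink.jar:/opt/spark/extra-jars/metrics-influxdb.jar
spark.executor.extraClassPath=/opt/spark/extra-jars/spark-influx-sink.jar:/opt/spark/extra-jars/metrics-influxdb.jar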

Example metrics.properties snippet:

*.sink.influx.class=org.apache.spark.metrics.sink.InfluxDbSink
*.sink.influx.protocol=https
*.sink.influx.host=localhost
*.sink.influx.port=8086
*.sink.influx.database=my_metrics
*.sink.influx.auth=metric_client:PASSWORD
*.sink.influx.tags=product:my_product,parent:my_service

Notes

  • This takes a dependency on the Apache2-licensed com.izettle:dropwizard-metrics-influxdb library, an improved version of Dropwizard's upstream InfluxDB support, which exists only in the Dropwizard Metrics 4.0 branch.
  • The package that this code lives in is org.apache.spark.metrics.sink, which is necessary because Spark makes its Sink interface package-private; see the sketch after this list.
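
To illustrate the second point, here is a minimal sketch of the shape any custom sink must take, assuming Spark 2.x, where the Sink trait is private[spark] and sinks are instantiated reflectively with a (Properties, MetricRegistry, SecurityManager) constructor (the class name MyCustomSink is hypothetical):

package org.apache.spark.metrics.sink

import java.util.Properties

import com.codahale.metrics.MetricRegistry
import org.apache.spark.SecurityManager

// Must live in org.apache.spark.metrics.sink: the Sink trait it
// extends is private[spark], so it is invisible from other packages.
private[spark] class MyCustomSink(
    val property: Properties,     // settings from metrics.properties
    val registry: MetricRegistry, // Spark's Dropwizard metric registry
    securityMgr: SecurityManager) extends Sink {

  // Called by Spark's MetricsSystem when the sink is registered.
  override def start(): Unit = { /* start the underlying reporter */ }

  // Called on shutdown.
  override def stop(): Unit = { /* stop the reporter */ }

  // Called when Spark forces a one-off, synchronous report.
  override def report(): Unit = { /* flush metrics once */ }
}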

License

This project is made available under the Apache 2.0 License.

Contributors

gatesn, robert3005, svc-excavator-bot, tstearns


spark-influx-sink's Issues

Executor in YARN mode: ClassNotFoundException: org.apache.spark.metrics.sink.InfluxDbSink

Hello,

I stumbled on this project, which appears quite new but is just what I'm looking for. I have followed the instructions in the README for the most part. Here is my setup:

  • spark_2.11 : 2.1.0
  • your JAR, built per the instructions in the README
  • my application project compiled into an uber-jar
  • the uber-jar uses JobLauncher to launch the job in yarn-cluster mode (and fills in a bunch of configuration from our configuration system)
  • I did add my uber-jar to spark.executor.extraClassPath per the README instructions, as shown below
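
For concreteness, the setting was along these lines (the exact value is whatever puts the jar name at the front of the container classpath, as the launch-context CLASSPATH below shows):

spark.executor.extraClassPath=reindex-spark-job-0.3-SNAPSHOT-all.jar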

I DO see the metrics library load in the driver in YARN (cluster mode). I added a few info log statements where the reporter starts up, and I see them in my container log for the driver. I also see metrics for the driver in InfluxDB, so I know the reporter is loading and reporting OK (at least in the driver).

However, the executor containers are getting the following exception:

17/04/10 16:08:46 ERROR metrics.MetricsSystem: Sink class org.apache.spark.metrics.sink.InfluxDbSink cannot be instantiated
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.metrics.sink.InfluxDbSink
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:229)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:198)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:194)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at org.apache.spark.metrics.MetricsSystem.registerSinks(MetricsSystem.scala:194)
        at org.apache.spark.metrics.MetricsSystem.start(MetricsSystem.scala:102)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:364)
        at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:200)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:223)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:67)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        ... 4 more

I can see that my jar is being added to the front of the classpath as well, looking at the driver output of its "executor launch context" (reindex-spark-job-0.3-SNAPSHOT-all.jar is my uber-jar):

17/04/10 16:09:26 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    SPARK_YARN_USER_ENV -> PYSPARK_PYTHON=/opt/rh/rh-python35
    CLASSPATH -> reindex-spark-job-0.3-SNAPSHOT-all.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
    SPARK_DIST_CLASSPATH -> /etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.6.1-1.cdh5.6.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
    SPARK_YARN_STAGING_DIR -> hdfs://nameservice1/user/tstumpges/.sparkStaging/application_1480652198027_0357
    SPARK_USER -> tstumpges
    SPARK_YARN_MODE -> true
    PYSPARK_PYTHON -> /opt/rh/rh-python35

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx4096m \
      '-Ddconfig.consul.keyStores=global,dev,host/hdpdev-01.cb.ntent.com,jobs/test-job/global,jobs/test-job/dev,jobs/test-job/host/hdpdev-01.cb.ntent.com' \
      -Djava.io.tmpdir={{PWD}}/tmp \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.CoarseGrainedExecutorBackend \
      --driver-url \
      spark://[email protected]:47361 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      2 \
      --app-id \
      application_1480652198027_0357 \
      --user-class-path \
      file:$PWD/__app__.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    __app__.jar -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/reindex-spark-job-0.3-SNAPSHOT-all.jar" } size: 191145201 timestamp: 1491865690837 type: FILE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/__spark_libs__1980285003983078799.zip" } size: 197750381 timestamp: 1491865671716 type: ARCHIVE visibility: PRIVATE
    spark-metrics.properties -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/spark-metrics.properties" } size: 301 timestamp: 1491862976743 type: FILE visibility: PUBLIC
    __spark_conf__ -> resource { scheme: "hdfs" host: "nameservice1" port: -1 file: "/user/tstumpges/.sparkStaging/application_1480652198027_0357/__spark_conf__.zip" } size: 33484 timestamp: 1491865690906 type: ARCHIVE visibility: PRIVATE

===============================================================================

Any thoughts or suggestions? I can't see why it would find the class and load it fine in the driver, but not in the executor. All my other application classes are loading just fine, and I can see the InfluxDbSink class in my uber-jar.

Thanks!

Build failing on missing ScalaStyle config

I get the following error when I run ./gradlew build with the latest code:

Caused by: java.lang.Exception: configLocation /home/eagle/src/spark-influx-sink/project/scalastyle_config.xml does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)
        at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:249)
        at org.github.ngbinh.scalastyle.ScalaStyleTask.extractAndValidateProperties(ScalaStyleTask.groovy:144)
        at org.github.ngbinh.scalastyle.ScalaStyleTask.scalaStyle(ScalaStyleTask.groovy:76)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103)
        ... 92 more

Job hangs when configuring sinks

After following the README and setting up both the JARs and the configuration, the application hangs before allocating any task on any executor.

It's worth noting that I've experienced this issue on the Amazon EMR build of Spark 2.2.1; I've not tested it on another distribution.

The application was submitted with both JARs documented in the README, and with the metrics.properties file configured only to connect to my own InfluxDB host.

After checking the Influx database, I noticed that the series were created but never populated with any values.

InfluxDbReporter: Unable to report to InfluxDB with error 'Unrecognized SSL message, plaintext connection?'. Discarding data

I am trying to use your implementation with InfluxDB and spark-shell for testing.

I have rebuilt your code with Maven, but when launching spark-shell I can't get it to write to Influx; it fails with this error message:
'Unrecognized SSL message, plaintext connection?'

I have tried many combinations of settings, but none work. Any idea what's wrong?

Thanks

-- Here is my metrics.properties

# For INFLUXDB

*.sink.influx.class=org.apache.spark.metrics.sink.InfluxDbSink
*.sink.influx.protocol=http
*.sink.influx.host=ip-xxx-xxx-xx-xx
*.sink.influx.port=8086
*.sink.influx.database=mydb
*.sink.influx.auth=user:pass
*.sink.influx.tags=

-- Here is my influxdb.conf

[http]
# Determines whether HTTP endpoint is enabled.
enabled = true

# The bind address used by the HTTP service.
bind-address = ":8086"

# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false

# The default realm sent back when issuing a basic auth challenge.
#realm = "InfluxDB"

# Determines whether HTTP request logging is enabled.
#log-enabled = true

# Determines whether detailed write logging is enabled.
write-tracing = false

# Determines whether the pprof endpoint is enabled. This endpoint is used for
# troubleshooting and monitoring.
pprof-enabled = true

# Determines whether HTTPS is enabled.
https-enabled = false

# The SSL certificate to use when HTTPS is enabled.
#https-certificate = "/etc/ssl/influxdb.pem"
