datadog / dd-trace-java
Datadog APM client for Java
Home Page: https://docs.datadoghq.com/tracing/languages/java
License: Apache License 2.0
What: dd-java-agent v0.7.0
Attempting to run some tests moving from v0.6.0 to v0.7.0 agent, I noticed I was no longer getting trace continuation through an executor service.
Looking at the diff here: v0.6.0...v0.7.0
It seems that in ExecutorInstrumentation.java the Wrap*Advice methods (intercepting ExecutorService.submit) were changed to only continue the current scope's span if the asyncPropagation flag is already set on the TraceScope. But this is confusing: how would that flag ever get set other than by explicitly asking for it?
Say I have some jax-rs resource that submits some work to an executor which further creates new spans, say with an apache-httpclient request to somewhere:
@Path("/foo")
class Foo {
  ExecutorService someExecutor = Executors.newFixedThreadPool(2);

  @GET
  String foo() throws Exception {
    return someExecutor.submit(() -> someContrivedAsyncHttpCallThatReturnsAString()).get();
  }
}
In this case a brand new trace is opened when the callable is executed by the executor and the instrumented apache-httpclient is invoked -- and it is not linked to the trace started for the jax-rs method.
However, if I fool it by explicitly setting the asyncPropagation flag it will work:
@Path("/foo")
class Foo {
  ExecutorService someExecutor = Executors.newFixedThreadPool(2);

  @GET
  String foo() throws Exception {
    ((TraceScope) GlobalTracer.get().scopeManager().active()).setAsyncPropagation(true);
    return someExecutor.submit(() -> someContrivedAsyncHttpCallThatReturnsAString()).get();
  }
}
I.e. I see the resulting continuation log lines:
...
[XNIO-1 task-3] DEBUG datadog.trace.agent.ot.PendingTrace - traceId: 4832245528826328661 -- registered continuation datadog.trace.agent.ot.scopemanager.ContinuableScope$Continuation@d0cf6b2. count = 2
[XNIO-1 task-3] DEBUG datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation$DatadogWrapper - created continuation datadog.trace.agent.ot.scopemanager.ContinuableScope$Continuation@d0cf6b2 from scope datadog.trace.agent.ot.scopemanager.ContinuableScope@497bfe34
...
Is it intentional that this flag be set?
If yes, then bummer. It breaks existing functionality, leaves a bunch of disconnected traces, and makes code using the agent clumsy, because now some work has to be done to explicitly set the async propagation flag, and that requires bringing in the dd-trace-ot dependency... unless I'm missing something?
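For background on why a wrapper is needed at all: thread-locals (which the tracer's scope manager relies on) do not cross an executor boundary on their own, so the agent has to capture the active scope at submit() time and restore it in the worker thread. A minimal sketch with a plain ThreadLocal standing in for the tracer's scope (names here are illustrative, not dd-trace-java internals):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextPropagationDemo {
    // Stand-in for the tracer's thread-local active scope.
    static final ThreadLocal<String> ACTIVE_TRACE = new ThreadLocal<>();

    // Without wrapping, the pool thread sees no trace context.
    static String unwrappedResult() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            ACTIVE_TRACE.set("trace-123");
            return pool.submit(() -> String.valueOf(ACTIVE_TRACE.get())).get();
        } finally {
            ACTIVE_TRACE.remove();
            pool.shutdown();
        }
    }

    // Capturing the caller's context and restoring it in the pool thread
    // is roughly what the agent's Wrap*Advice does when it continues a scope.
    static String wrappedResult() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            ACTIVE_TRACE.set("trace-123");
            final String captured = ACTIVE_TRACE.get();
            return pool.submit(() -> {
                ACTIVE_TRACE.set(captured);
                try {
                    return ACTIVE_TRACE.get();
                } finally {
                    ACTIVE_TRACE.remove();
                }
            }).get();
        } finally {
            ACTIVE_TRACE.remove();
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("unwrapped: " + unwrappedResult()); // prints "unwrapped: null"
        System.out.println("wrapped:   " + wrappedResult());   // prints "wrapped:   trace-123"
    }
}
```

The open question above is only about when the agent decides to do this capture, not whether the capture is needed.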
I am looking to enable client-side rate sampling (sampling only a percentage of transactions) in my Java service by following your documentation.
I am looking at this file but don't see any option.
How do I set this?
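For anyone landing here: newer versions of the Java agent expose a trace sample rate setting; whether your version supports it needs checking against that release's docs. A hedged sketch of the invocation:

```shell
# Assumes a dd-java-agent version that supports dd.trace.sample.rate
# (check the release notes for the version you run). Keep ~20% of traces:
java -javaagent:/path/to/dd-java-agent.jar \
     -Ddd.trace.sample.rate=0.2 \
     -jar my-service.jar

# Equivalent environment-variable form:
# DD_TRACE_SAMPLE_RATE=0.2 java -javaagent:/path/to/dd-java-agent.jar -jar my-service.jar
```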
Is there a simple way to create custom instrumentations without manually updating source code? The OpenTracing agent had Byteman, which allowed easily creating custom spans using .btm files. I see Datadog moved to Byte Buddy, and it appears you can create "instrumentations" like those in the agent project, but it doesn't look like a simple build process.
We have run a couple of tests on a Spring Boot 1.5.x Java 1.8 app that uses Tomcat, and we have found that enabling the APM agent adds about 180MB of RAM to the JVM. There are exactly two controllers in addition to the actuator endpoints, for a total of ~10 accessible HTTP endpoints, rather small.
A few questions:
Sample | APM Enabled 1 (MB) | APM Enabled 2 (MB) | APM Disabled 1 (MB) | APM Disabled 2 (MB) | Avg Enabled (MB) | Avg Disabled (MB) | Avg Difference (MB)
---|---|---|---|---|---|---|---
1 | 107.9 | 159.5 | 30.52 | 85.91 | 133.7 | 58.215 | 75.485 | |
2 | 171.9 | 196.2 | 50.76 | 31.02 | 184.05 | 40.89 | 143.16 | |
3 | 183.6 | 181 | 50.76 | 160.5 | 182.3 | 105.63 | 76.67 | |
4 | 189.6 | 140.4 | 172.7 | 97.76 | 165 | 135.23 | 29.77 | |
5 | 139.9 | 133.3 | 58.69 | 107.6 | 136.6 | 83.145 | 53.455 | |
6 | 144.8 | 200 | 173.7 | 99.91 | 172.4 | 136.805 | 35.595 | |
7 | 240.5 | 261.2 | 104.9 | 56.14 | 250.85 | 80.52 | 170.33 | |
8 | 158.9 | 219.1 | 162.2 | 47.98 | 189 | 105.09 | 83.91 | |
9 | 153.6 | 282 | 140.2 | 121.4 | 217.8 | 130.8 | 87 | |
10 | 174.8 | 314.7 | 85.75 | 59.49 | 244.75 | 72.62 | 172.13 | |
11 | 222.8 | 204.2 | 92.83 | 140.4 | 213.5 | 116.615 | 96.885 | |
12 | 290.5 | 204 | 187.1 | 174.6 | 247.25 | 180.85 | 66.4 | |
13 | 301.7 | 345.1 | 74.09 | 144.8 | 323.4 | 109.445 | 213.955 | |
14 | 212.9 | 284.9 | 161.1 | 125.4 | 248.9 | 143.25 | 105.65 | |
15 | 304.4 | 217 | 201.2 | 59.11 | 260.7 | 130.155 | 130.545 | |
16 | 284.6 | 261.4 | 126.6 | 167.8 | 273 | 147.2 | 125.8 | |
17 | 218.5 | 335.3 | 182.4 | 123.4 | 276.9 | 152.9 | 124 | |
18 | 239.6 | 264.8 | 81.1 | 199.4 | 252.2 | 140.25 | 111.95 | |
19 | 325.4 | 320.3 | 118 | 218.6 | 322.85 | 168.3 | 154.55 | |
20 | 304.7 | 378 | 174.3 | 199.5 | 341.35 | 186.9 | 154.45 | |
21 | 290.9 | 263.8 | 160.3 | 199.6 | 277.35 | 179.95 | 97.4 | |
22 | 310.5 | 241.1 | 212 | 150.2 | 275.8 | 181.1 | 94.7 | |
23 | 268.7 | 317.3 | 204.2 | 96.4 | 293 | 150.3 | 142.7 | |
24 | 344.5 | 318 | 209.4 | 89.4 | 331.25 | 149.4 | 181.85 | |
25 | 380.2 | 318.9 | 203.1 | 149.7 | 349.55 | 176.4 | 173.15 | |
26 | 325.1 | 320.7 | 145.6 | 151 | 322.9 | 148.3 | 174.6 | |
27 | 326.8 | 321.1 | 146.5 | 151.8 | 323.95 | 149.15 | 174.8 | |
28 | 298.5 | 321.9 | 148.3 | 153.3 | 310.2 | 150.8 | 159.4 | |
29 | 310.3 | 323.5 | 149.8 | 153.9 | 316.9 | 151.85 | 165.05 | |
30 | 310.9 | 323.9 | 150.1 | 154.4 | 317.4 | 152.25 | 165.15 | |
31 | 313.1 | 324.3 | 151.3 | 156.7 | 318.7 | 154 | 164.7 | |
32 | 313.6 | 325.9 | 151.7 | 157.1 | 319.75 | 154.4 | 165.35 | |
33 | 318.3 | 326.4 | 151.7 | 157.7 | 322.35 | 154.7 | 167.65 | |
34 | 319.9 | 327.2 | 153.2 | 158.8 | 323.55 | 156 | 167.55 | |
35 | 321 | 328.9 | 153.6 | 158.9 | 324.95 | 156.25 | 168.7 | |
36 | 321.5 | 329.2 | 153.9 | 159.1 | 325.35 | 156.5 | 168.85 | |
37 | 323 | 330.2 | 155.4 | 160.5 | 326.6 | 157.95 | 168.65 | |
38 | 323.4 | 331.9 | 155.9 | 160.5 | 327.65 | 158.2 | 169.45 | |
39 | 324.2 | 332.1 | 156.1 | 160.7 | 328.15 | 158.4 | 169.75 | |
40 | 325.7 | 332.8 | 157.6 | 162.1 | 329.25 | 159.85 | 169.4 | |
41 | 326.2 | 334.5 | 158 | 162.1 | 330.35 | 160.05 | 170.3 | |
42 | 326.5 | 334.9 | 158.3 | 162.6 | 330.7 | 160.45 | 170.25 | |
43 | 328.5 | 335.2 | 159.9 | 164 | 331.85 | 161.95 | 169.9 | |
44 | 328.9 | 336.8 | 160.3 | 164.3 | 332.85 | 162.3 | 170.55 | |
45 | 329.8 | 337.1 | 160.6 | 165 | 333.45 | 162.8 | 170.65 | |
46 | 331.3 | 338 | 161.8 | 166.4 | 334.65 | 164.1 | 170.55 | |
47 | 331.8 | 339.5 | 162.4 | 166.4 | 335.65 | 164.4 | 171.25 | |
48 | 332.2 | 340 | 162.7 | 166.9 | 336.1 | 164.8 | 171.3 | |
49 | 333.8 | 340.2 | 163.9 | 168.6 | 337 | 166.25 | 170.75 | |
50 | 334.2 | 342 | 164.4 | 169 | 338.1 | 166.7 | 171.4 | |
51 | 335.1 | 342.3 | 164.4 | 169.2 | 338.7 | 166.8 | 171.9 | |
52 | 336.6 | 343.6 | 165.6 | 170.3 | 340.1 | 167.95 | 172.15 | |
53 | 336.9 | 345 | 166.3 | 170.6 | 340.95 | 168.45 | 172.5 | |
54 | 337.3 | 345.5 | 166.5 | 171.1 | 341.4 | 168.8 | 172.6 | |
55 | 339.4 | 345.8 | 167.8 | 172.8 | 342.6 | 170.3 | 172.3 | |
56 | 339.8 | 347.5 | 168.2 | 172.8 | 343.65 | 170.5 | 173.15 | |
57 | 340.7 | 348 | 168.6 | 173.3 | 344.35 | 170.95 | 173.4 | |
58 | 342.3 | 349 | 170 | 175 | 345.65 | 172.5 | 173.15 | |
59 | 342.7 | 350.6 | 170.9 | 175 | 346.65 | 172.95 | 173.7 | |
60 | 343.1 | 351.1 | 170.9 | 175.6 | 347.1 | 173.25 | 173.85 | |
61 | 344.6 | 351.4 | 172.3 | 177.2 | 348 | 174.75 | 173.25 | |
62 | 345 | 353 | 172.7 | 177.6 | 349 | 175.15 | 173.85 | |
63 | 345.8 | 353.4 | 172.7 | 178.1 | 349.6 | 175.4 | 174.2 | |
64 | 347.5 | 354.6 | 174.2 | 179.2 | 351.05 | 176.7 | 174.35 | |
65 | 347.8 | 356.1 | 175.2 | 179.5 | 351.95 | 177.35 | 174.6 | |
66 | 348.2 | 356.5 | 175.2 | 179.7 | 352.35 | 177.45 | 174.9 | |
67 | 350.3 | 356.8 | 176.4 | 181.1 | 353.55 | 178.75 | 174.8 | |
68 | 350.7 | 358.4 | 176.6 | 181.4 | 354.55 | 179 | 175.55 | |
69 | 351.5 | 358.9 | 176.6 | 183.4 | 355.2 | 180 | 175.2 | |
70 | 352.9 | 359.6 | 177.7 | 184.8 | 356.25 | 181.25 | 175 | |
71 | 353.3 | 361.1 | 178.3 | 185.1 | 357.2 | 181.7 | 175.5 | |
72 | 354.7 | 361.8 | 178.4 | 185.6 | 358.25 | 182 | 176.25 | |
73 | 356.1 | 362.2 | 179.8 | 187.1 | 359.15 | 183.45 | 175.7 | |
74 | 356.5 | 363.9 | 180.2 | 187.4 | 360.2 | 183.8 | 176.4 | |
75 | 357.2 | 364.6 | 180.4 | 187.6 | 360.9 | 184 | 176.9 | |
76 | 359.2 | 365.3 | 181.8 | 189 | 362.25 | 185.4 | 176.85 | |
77 | 359.6 | 366.9 | 183 | 189 | 363.25 | 186 | 177.25 | |
78 | 360 | 367.3 | 183.4 | 189.5 | 363.65 | 186.45 | 177.2 |
When I upload to S3 this happens:
Caused by: java.lang.NoClassDefFoundError: datadog/trace/agent/deps/fasterxml/jackson/databind/ObjectMapper
at datadog.trace.instrumentation.aws.SpanDecorator.<clinit>(SpanDecorator.java:29)
at datadog.trace.instrumentation.aws.TracingRequestHandler.beforeRequest(TracingRequestHandler.java:74)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:724)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1749)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1611)
Caused by: java.lang.ClassNotFoundException: datadog.trace.agent.deps.fasterxml.jackson.databind.ObjectMapper
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_162]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_162]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) ~[na:1.8.0_162]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_162]
OracleJDK 1.8.0_162
com.amazonaws:aws-java-sdk-s3:1.11.263
com.datadoghq:dd-java-agent:0.4.0
In our application, we have several housekeeping tasks that run in the background that we don't want to trace in APM.
There are two places where this is happening:
Hi,
Even though I'm setting dd.service.name to plt-stg-sbd and dd.agent.host to 172.17.0.1, I'm getting logging output that looks like this:
New instance: DDTracer-75cd8043{ service-name=plt-stg-sbd, writer=DDAgentWriter { api=DDApi { tracesEndpoint=http://localhost:8126/v0.3/traces }
The service name was set successfully, but it's still using localhost for tracesEndpoint, which is the default value.
We have had a problem with a memory leak in one of our applications. After some digging, it appears that it happens with the auto instrumentation of JaxRS clients (Jersey implementation in this case).
If I've tracked it down properly, the ClientRequestFilter is executed creating the span. Then, the networking tries to actually connect to the server which throws an exception and the ClientResponseFilter is never executed leaving the span dangling.
There also appears to be an error with the cleanup of PendingTraces up to version 0.12.0 but I believe that has been fixed in master. Once that goes through, this won't cause unbounded leaks as it appears to be doing today. However, as these spans are never finished, the traces don't get sent to the agent and we have no record of these failures even if there was no memory leak.
How to reproduce:
The easiest way is to try to connect a client to a server endpoint that doesn't exist. It generates a javax.ws.rs.ProcessingException from a java.net.ConnectException: Connection refused. Inspecting Datadog debug logs, it's clear the trace isn't closed or written.
I added the dd-trace artifact into my project using the dependency below. After adding it, I keep getting the error shown below:
<dependency>
  <groupId>com.datadoghq</groupId>
  <artifactId>dd-trace</artifactId>
  <!-- I could not get 0.0.4-SNAPSHOT to be found even locally. -->
  <version>0.0.3</version>
</dependency>
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/bmontague/code/hello-dropwizard/target/helloworld-1.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/bmontague/code/hello-dropwizard/dd-java-agent-0.0.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.byteman.agent.Main.premain(Main.java:287)
at io.opentracing.contrib.agent.AnnotationsTracingAgent.premain(AnnotationsTracingAgent.java:30)
... 6 more
Caused by: java.lang.NoSuchMethodError: ch.qos.logback.core.util.Loader.getResources(Ljava/lang/String;Ljava/lang/ClassLoader;)Ljava/util/Set;
at ch.qos.logback.classic.util.ContextInitializer.multiplicityWarning(ContextInitializer.java:183)
at ch.qos.logback.classic.util.ContextInitializer.statusOnResourceSearch(ContextInitializer.java:175)
at ch.qos.logback.classic.util.ContextInitializer.findConfigFileURLFromSystemProperties(ContextInitializer.java:111)
at ch.qos.logback.classic.util.ContextInitializer.findURLOfDefaultConfigurationFile(ContextInitializer.java:120)
at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:148)
at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at com.datadoghq.trace.resolver.FactoryUtils.<clinit>(FactoryUtils.java:14)
at io.opentracing.contrib.agent.TraceAnnotationsManager.initialize(TraceAnnotationsManager.java:61)
... 12 more
This is a dropwizard service so I am simply running:
java -Dlogback.configurationFile=configs/logback.xml \
-javaagent:dd-java-agent-0.0.3.jar \
-cp dd-trace.yaml \
-jar target/helloworld-1.0-SNAPSHOT.jar server configs/dev.yaml
When the traced process main finishes, DDAgentWriter enters a wait that seems to hang the process and requires an interruption to terminate.
I'm interested in testing the snapshot on master now that #443 is merged, but the build is failing at HEAD: 76876e7. https://circleci.com/gh/DataDog/dd-trace-java/9897?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
Hello,
I am getting an exception when starting the agent on my system. I am able to start my program directly without the agent and no exceptions are thrown.
java -javaagent:dd-java-agent-0.9.0.jar -jar head-app.jar
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(Unknown Source)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(Unknown Source)
Caused by: java.lang.IllegalArgumentException
at sun.instrument.InstrumentationImpl.appendToClassLoaderSearch0(Native Method)
at sun.instrument.InstrumentationImpl.appendToBootstrapClassLoaderSearch(Unknown Source)
at datadog.trace.agent.TracingAgent.startAgent(TracingAgent.java:55)
at datadog.trace.agent.TracingAgent.premain(TracingAgent.java:32)
... 6 more
FATAL ERROR in native method: processing of -javaagent failed
I have also supplied my java version:
java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
I have also tried version 0.6.0 of the agent, but I am receiving the same error.
Thanks!
We use Kotlin in production, and the lack of checked exceptions means that our error metadata isn't being persisted into our traces. This is due to TracingServerInterceptor only catching RuntimeException and Error.
This is a known issue in gRPC in general, but I wanted to raise awareness ahead of time since the discussion is already in progress in the grpc-java project. They initially proposed the solution of catching Exception:
grpc/grpc-java#4864
grpc/grpc-java#2668
If PRs are welcome, I'll gladly submit the work for the necessary updates.
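To illustrate the gap: Kotlin can throw checked exceptions without declaring them, and a catch limited to RuntimeException and Error lets those escape untagged, while catching Exception covers them. A self-contained sketch (the sneaky-throw helper simulates Kotlin's behavior from plain Java; this is illustrative, not the interceptor's actual code):

```java
import java.io.IOException;

public class CheckedExceptionDemo {
    // Kotlin has no checked exceptions, so Kotlin code can throw IOException
    // without declaring it. Simulate that from Java with an unchecked throw.
    @SuppressWarnings("unchecked")
    static <T extends Throwable> void sneakyThrow(Throwable t) throws T {
        throw (T) t;
    }

    // Mirrors a catch limited to RuntimeException and Error: the checked
    // exception slips past and would never be recorded on the span.
    static String narrowCatch() {
        try {
            sneakyThrow(new IOException("boom"));
            return "ok";
        } catch (RuntimeException | Error e) {
            return "tagged";
        } catch (Throwable t) {
            return "escaped";
        }
    }

    // Catching Exception also covers checked exceptions thrown from Kotlin.
    static String broadCatch() {
        try {
            sneakyThrow(new IOException("boom"));
            return "ok";
        } catch (Exception e) {
            return "tagged";
        }
    }

    public static void main(String[] args) {
        System.out.println("narrow: " + narrowCatch()); // prints "narrow: escaped"
        System.out.println("broad:  " + broadCatch());  // prints "broad:  tagged"
    }
}
```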
The previous #313 was solved: we are able to see all API calls with duration. But I found that the APM service trace still does not show the inner Redis calls. In Spring Boot 2 we use ReactiveRedisTemplate to make all our Redis operations asynchronous as well.
The typical call stack will be something like this:
@GetMapping(...)
Mono<String> restfulApiMethod(@PathVariable String key) {
  return reactiveRedisTemplate.opsForHash()
      .get(key)
      .map(result -> String.valueOf(result))
      .defaultIfEmpty("");
}
We have the snapshot agent from #313 and have -Ddd.integration.lettuce.enabled=true enabled as well.
Thanks
I've followed https://docs.datadoghq.com/tracing/setup/java/, but the Netty 4.0 Client instrumentation doesn't seem to work with Play 2.5 or even just AHC 2.0 by itself.
I expect x-datadog-trace-id and x-datadog-parent-id headers attached to the outbound HTTP request, and a child span named netty.client.request.
I get neither.
I've replicated Netty40ClientTest in a new project: https://github.com/htmldoug/datadog-netty4-failing.
You can repro with sbt run.
Relates to #352. cc: @tylerbenson @realark
For async applications it is necessary to be able to use a ScopeManager other than the default ThreadLocalScopeManager. Currently DDTracer extends ThreadLocalScopeManager, which makes it impossible to use a different scope manager. It would be better if DDTracer had a ScopeManager property that could be overridden by clients.
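A sketch of the shape this request implies (hypothetical API, not actual DDTracer code; the OpenTracing types are reduced to minimal stand-ins): the tracer composes a ScopeManager instead of extending one, with the thread-local implementation as the default.

```java
public class ScopeManagerSketch {
    // Minimal stand-ins for the OpenTracing types (illustrative only).
    interface ScopeManager { }
    static class ThreadLocalScopeManager implements ScopeManager { }

    // Instead of `class DDTracer extends ThreadLocalScopeManager`, the
    // tracer holds a ScopeManager field that clients can override.
    static class DDTracerSketch {
        private final ScopeManager scopeManager;

        DDTracerSketch() {
            this(new ThreadLocalScopeManager()); // sensible default
        }

        DDTracerSketch(ScopeManager scopeManager) {
            this.scopeManager = scopeManager; // async apps inject their own
        }

        ScopeManager scopeManager() {
            return scopeManager;
        }
    }

    public static void main(String[] args) {
        DDTracerSketch defaultTracer = new DDTracerSketch();
        ScopeManager custom = new ScopeManager() { };
        DDTracerSketch asyncTracer = new DDTracerSketch(custom);
        System.out.println(defaultTracer.scopeManager() instanceof ThreadLocalScopeManager);
        System.out.println(asyncTracer.scopeManager() == custom);
    }
}
```

Composition over inheritance here keeps the default behavior intact while opening the extension point the issue asks for.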
While Datadog supports instrumenting Spring Boot applications, it doesn't take into account the detailed trace data generated by the Spring Cloud Sleuth module. One way to deal with this would be for Sleuth to report traces to Datadog's built-in trace API (something I've created an issue for already). However, it might be nicer for the agent to automatically recognize Sleuth traces in instrumented applications; the API approach might also result in duplicate trace data (one set from Sleuth over the API, the other coming directly from the tracing agent).
What do you think? A worthwhile goal, or one better solved by the issue I linked above?
dd-agent 4088 1 0 Jul02 ? 00:36:57 java -Xmx200m -Xms50m -classpath /opt/datadog-agent/bin/agent/dist/jmx/jmxfetch-0.20.1-jar-with-dependencies.jar org.datadog.jmxfetch.App --ipc_host localhost --ipc_port 5001 --check_period 15000 --log_level INFO --reporter statsd:localhost:8125 collect
dd-agent 13411 1 0 Jul12 ? 00:19:52 java -Xmx200m -Xms50m -classpath /opt/datadog-agent/bin/agent/dist/jmx/jmxfetch-0.20.1-jar-with-dependencies.jar org.datadog.jmxfetch.App --ipc_host localhost --ipc_port 5001 --check_period 15000 --log_level INFO --reporter statsd:localhost:8125 collect
dd-agent 13642 1 0 Jul09 ? 00:25:12 java -Xmx200m -Xms50m -classpath /opt/datadog-agent/bin/agent/dist/jmx/jmxfetch-0.20.1-jar-with-dependencies.jar org.datadog.jmxfetch.App --ipc_host localhost --ipc_port 5001 --check_period 15000 --log_level INFO --reporter statsd:localhost:8125 collect
dd-agent 13762 1 0 Jul09 ? 00:25:23 java -Xmx200m -Xms50m -classpath /opt/datadog-agent/bin/agent/dist/jmx/jmxfetch-0.20.1-jar-with-dependencies.jar org.datadog.jmxfetch.App --ipc_host localhost --ipc_port 5001 --check_period 15000 --log_level INFO --reporter statsd:localhost:8125 collect
dd-agent 15834 1 0 Jul19 ? 00:06:54 java -Xmx200m -Xms50m -classpath /opt/datadog-agent/bin/agent/dist/jmx/jmxfetch-0.20.1-jar-with-dependencies.jar org.datadog.jmxfetch.App --ipc_host localhost --ipc_port 5001 --check_period 15000 --log_level INFO --reporter statsd:localhost:8125 collect
These org.datadog.jmxfetch.App processes are spun up whenever datadog-agent is started, but stopping the agent doesn't seem to stop them. We're using AWS Elastic Beanstalk, and I followed this documentation on starting and stopping datadog-agent for the different event hooks; while the agent itself is always stopped successfully, these processes aren't. In addition, a new org.datadog.jmxfetch.App process spawns on each call to datadog-agent start.
This issue exists in dd-java-agent 0.9.0 - 0.11.0; we haven't tested earlier versions.
The Datadog documentation recommends reducing the cardinality of resource.name, and most of the instrumentation is cognizant of this. What's the scenario where the Status404Decorator is useful?
The decorator here rewrites 404 responses with a custom resource.name. This was very confusing for me because I was expecting to look at the ratio of 404s to non-404s for my resource.
Can we add an option to turn this decorator off? I would like the ability to see my 404 responses with their original resource.name.
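To make the request concrete, the behavior being asked for is essentially a switch around the rewrite (illustrative sketch only, not the decorator's actual code; the flag and method names are made up):

```java
public class Status404Sketch {
    // Sketch of the requested toggle: when rewrite404 is off, a 404 keeps
    // the original resource name instead of being collapsed to "404".
    static String resourceName(String original, int status, boolean rewrite404) {
        if (rewrite404 && status == 404) {
            return "404";
        }
        return original;
    }

    public static void main(String[] args) {
        System.out.println(resourceName("GET /users/{id}", 404, true));  // prints "404"
        System.out.println(resourceName("GET /users/{id}", 404, false)); // prints "GET /users/{id}"
    }
}
```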
We're seeing quite a few of these debug messages.
[qtp1841695610-94-b15922e4-b495-4d81-9d44-c621c6fd997e] DEBUG datadog.trace.agent.tooling.AgentInstaller$LoggingListener - Failed to handle compile__stub.hugsql.adapter.clojure_java_jdbc.HugsqlAdapterClojureJavaJdbc for transformation on classloader clojure.lang.DynamicClassLoader@557e42a4: Cannot resolve type description for hugsql.adapter.HugsqlAdapter
[qtp1841695610-94-b15922e4-b495-4d81-9d44-c621c6fd997e] DEBUG datadog.trace.agent.tooling.AgentInstaller$LoggingListener - Failed to handle hugsql.adapter.clojure_java_jdbc.HugsqlAdapterClojureJavaJdbc for transformation on classloader clojure.lang.DynamicClassLoader@557e42a4: Cannot resolve type description for hugsql.adapter.HugsqlAdapter
From experience with NewRelic, we've opted to blacklist the classloader. See http://corfield.org/blog/2016/07/29/clojure-new-relic-slow-startup/
Is this an option we can potentially support?
The current dd-trace-ot library uses Guava in a few places for some simple helper functions around collections (from what I can see). It would be nice if a library like this, something that anyone who wants to use Datadog APM in Java is forced to depend on, aimed to reduce its dependencies as much as possible. Adding Datadog APM forces me to add almost 3MB to my application and in theory pins my application to a specific version of Guava (they are generally pretty good about compatibility, though).
So far Datadog seems to follow a principle of enabling simple integration for customers. By using Guava instead of the more cumbersome out-of-the-box JDK functions, you are shifting the integration effort from Datadog devs onto customer devs; it would be nice if that were reversed =) fix it once and many get the benefit.
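As an example of the kind of swap being suggested (the actual Guava usages in dd-trace-ot may differ; this is illustrative): a helper like Guava's ImmutableList.of has a plain-JDK, if wordier, equivalent.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class NoGuavaDemo {
    // Plain-JDK stand-in for Guava's ImmutableList.of(a, b): a few extra
    // words at the call site, but no multi-megabyte dependency.
    static List<String> immutableListOf(String a, String b) {
        return Collections.unmodifiableList(Arrays.asList(a, b));
    }

    public static void main(String[] args) {
        List<String> tags = immutableListOf("env:prod", "service:web");
        System.out.println(tags); // prints "[env:prod, service:web]"
    }
}
```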
We currently have a Spring app that has Datadog APM tracing set up using the Java agent. The app runs on Tomcat and uses JDBC and Apache HttpClient, both of which are instrumented by the agent. We also use the dd-trace-api dependency to manually mark methods for instrumentation using the @Trace annotation:
<dependency>
<groupId>com.datadoghq</groupId>
<artifactId>dd-trace-api</artifactId>
<version>0.17.0</version>
</dependency>
To get more granular trace spans, we want to use OpenTracing as recommended by the docs. We set up the required dependencies:
<dependency>
<groupId>io.opentracing</groupId>
<artifactId>opentracing-api</artifactId>
<version>0.31.0</version>
</dependency>
<dependency>
<groupId>io.opentracing</groupId>
<artifactId>opentracing-util</artifactId>
<version>0.31.0</version>
</dependency>
After this, tracing of both JDBC and Apache HttpClient calls stopped working, even without any tracer.buildSpan calls in the app.
Looking at the debug output of the Java agent (-Ddatadog.slf4j.simpleLogger.defaultLogLevel=debug) confirms that something is off. This is the 'Applying instrumentation' output from an instance without the OpenTracing libraries on the classpath vs. with them:
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ThreadPoolExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: spring-web -- datadog.trace.instrumentation.springweb.HandlerAdapterInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: spring-web -- datadog.trace.instrumentation.springweb.HandlerAdapterInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: spring-web -- datadog.trace.instrumentation.springweb.HandlerAdapterInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: spring-web -- datadog.trace.instrumentation.springweb.HandlerAdapterInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: spring-web -- datadog.trace.instrumentation.springweb.DispatcherServletInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on java.net.URLClassLoader@56673b2c
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: trace -- datadog.trace.instrumentation.trace_annotation.TraceAnnotationsInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: trace -- datadog.trace.instrumentation.trace_annotation.TraceAnnotationsInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[PostgreSQL JDBC driver connection thread] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.ConnectionInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jdbc -- datadog.trace.instrumentation.jdbc.StatementInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: httpclient -- datadog.trace.instrumentation.apachehttpclient.ApacheHttpClientInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: httpclient -- datadog.trace.instrumentation.apachehttpclient.ApacheHttpClientInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: httpclient -- datadog.trace.instrumentation.apachehttpclient.ApacheHttpClientInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: httpclient -- datadog.trace.instrumentation.apachehttpclient.ApacheHttpClientInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: httpclient -- datadog.trace.instrumentation.apachehttpclient.ApacheHttpClientInstrumentation on ParallelWebappClassLoader
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ThreadPoolExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on java.net.URLClassLoader@5c44c582
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on java.net.URLClassLoader@5c44c582
[main] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on java.net.URLClassLoader@5c44c582
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.FilterChain3Instrumentation on java.net.URLClassLoader@5c44c582
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jsp -- datadog.trace.instrumentation.jsp.JasperJSPCompilationContextInstrumentation on java.net.URLClassLoader@5c44c582
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: jsp -- datadog.trace.instrumentation.jsp.JSPInstrumentation on java.net.URLClassLoader@5c44c582
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: servlet -- datadog.trace.instrumentation.servlet3.HttpServlet3Instrumentation on java.net.URLClassLoader@5c44c582
[http-nio-8080-exec-8] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation on null
[http-nio-8080-exec-8] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
[http-nio-8080-exec-8] DEBUG datadog.trace.agent.tooling.Instrumenter$Default - Applying instrumentation: java_concurrent -- datadog.trace.instrumentation.java.concurrent.FutureInstrumentation on null
As can be seen from the logs, there is no instrumentation for spring-web, jdbc, or httpclient when OpenTracing is on the classpath.
One thing that stands out is the following errors that appear in the logs when OpenTracing is added:
[main] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - loader ParallelWebappClassLoader
context: ROOT
delegate: false
----------> Parent Classloader:
java.net.URLClassLoader@5c44c582
failed to delegate bootstrap opentracing class
--
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - loader org.apache.jasper.servlet.JasperLoader@5b2ba8f0 failed to delegate bootstrap opentracing class
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - skipping classloader instance org.apache.jasper.servlet.JasperLoader@5b2ba8f0 of type org.apache.jasper.servlet.JasperLoader
--
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - loader org.apache.jasper.servlet.JasperLoader@43f9f2c7 failed to delegate bootstrap opentracing class
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - skipping classloader instance org.apache.jasper.servlet.JasperLoader@43f9f2c7 of type org.apache.jasper.servlet.JasperLoader
--
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - loader org.apache.jasper.servlet.JasperLoader@4999d5ad failed to delegate bootstrap opentracing class
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - skipping classloader instance org.apache.jasper.servlet.JasperLoader@4999d5ad of type org.apache.jasper.servlet.JasperLoader
--
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - loader org.apache.jasper.servlet.JasperLoader@4f7a84de failed to delegate bootstrap opentracing class
[http-nio-8080-exec-1] DEBUG datadog.trace.agent.tooling.ClassLoaderMatcher - skipping classloader instance org.apache.jasper.servlet.JasperLoader@4f7a84de of type org.apache.jasper.servlet.JasperLoader
I'm not quite sure what the cause of this behaviour is. Maybe #433 is related?
Using version 0.15.0 with debug tracing enabled I see quite a few exceptions similar to the following:
[main] DEBUG datadog.trace.agent.tooling.ByteBuddyElementMatchers - Instrumentation type matcher unexpected exception: datadog.trace.instrumentation.jms.JMSMessageProducerInstrumentationjava.lang.IllegalStateException: Cannot resolve type description for net.bytebuddy.dynamic.Nexus
at net.bytebuddy.pool.TypePool$Resolution$Illegal.resolve(TypePool.java:135)
at net.bytebuddy.pool.TypePool$Default$WithLazyResolution$LazyTypeDescription.delegate(TypePool.java:1252)
at net.bytebuddy.description.type.TypeDescription$AbstractBase$OfSimpleType$WithDelegation.getModifiers(TypeDescription.java:7121)
at net.bytebuddy.matcher.ModifierMatcher.matches(ModifierMatcher.java:31)
at net.bytebuddy.matcher.ModifierMatcher.matches(ModifierMatcher.java:12)
at net.bytebuddy.matcher.NegatingMatcher.matches(NegatingMatcher.java:29)
at net.bytebuddy.matcher.ElementMatcher$Junction$Conjunction.matches(ElementMatcher.java:101)
at datadog.trace.agent.tooling.ByteBuddyElementMatchers$SafeMatcher.matches(ByteBuddyElementMatchers.java:262)
at net.bytebuddy.agent.builder.AgentBuilder$RawMatcher$ForElementMatchers.matches(AgentBuilder.java:1222)
at net.bytebuddy.agent.builder.AgentBuilder$RawMatcher$Conjunction.matches(AgentBuilder.java:1079)
at net.bytebuddy.agent.builder.AgentBuilder$Default$Transformation$Simple.matches(AgentBuilder.java:9033)
at net.bytebuddy.agent.builder.AgentBuilder$Default$Transformation$Compound.matches(AgentBuilder.java:9267)
at net.bytebuddy.agent.builder.AgentBuilder$RedefinitionStrategy$Collector.consider(AgentBuilder.java:6186)
at net.bytebuddy.agent.builder.AgentBuilder$RedefinitionStrategy.apply(AgentBuilder.java:4383)
at net.bytebuddy.agent.builder.AgentBuilder$Default.installOn(AgentBuilder.java:8523)
at net.bytebuddy.agent.builder.AgentBuilder$Default$Delegator.installOn(AgentBuilder.java:10182)
at datadog.trace.agent.tooling.AgentInstaller.installBytebuddyAgent(AgentInstaller.java:92)
at datadog.trace.agent.tooling.AgentInstaller.installBytebuddyAgent(AgentInstaller.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at datadog.trace.agent.TracingAgent.startAgent(TracingAgent.java:72)
at datadog.trace.agent.TracingAgent.premain(TracingAgent.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
In my particular case, I'm trying to get JMS producer instrumentation working, but I'm not seeing any spans created for it. I am getting my JAX-RS server spans, HttpURLConnection spans, etc., but not JMS.
I'm guessing that something is failing with the instrumentation and that's why it's not working. Is that a reasonable guess?
Trying to use the tracing agent in a Kubernetes-deployed application that has no writable filesystems at runtime. All disk writes happen at Docker image build time, so we don't need a writable disk.
The Datadog agent seems to need a temp file to extract/bootstrap some internal dependencies.
Seems to be done here:
https://github.com/DataDog/dd-trace-java/blob/master/dd-java-agent/src/main/java/datadog/trace/agent/TracingAgent.java
Failing stacktrace:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:401)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:416)
Caused by: java.io.IOException: Read-only file system
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:2024)
at java.io.File.createTempFile(File.java:2070)
at datadog.trace.agent.TracingAgent.extractToTmpFile(TracingAgent.java:143)
at datadog.trace.agent.TracingAgent.startAgent(TracingAgent.java:49)
at datadog.trace.agent.TracingAgent.premain(TracingAgent.java:37)
... 6 more
It would be great if this bootstrapping could happen another way, or if we could move it forward to image build time and pass in the JARs ourselves; there are plenty of other ways to do it.
Compromising our application's security to add APM isn't worth it, so this is a showstopper for our use of Datadog APM.
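Since the stack trace above goes through `File.createTempFile`, which writes to the directory named by the `java.io.tmpdir` system property, one possible workaround (a sketch, not something verified against this agent version) is to mount a small writable scratch volume just for temp files while keeping the root filesystem read-only. Volume names and paths here are illustrative:

```shell
# Kubernetes pod spec (illustrative names):
#   volumes:
#     - name: jvm-tmp
#       emptyDir: { medium: Memory, sizeLimit: 16Mi }
#   volumeMounts:
#     - name: jvm-tmp
#       mountPath: /jvm-tmp
#
# Then point the JVM's temp directory at the writable mount so the agent's
# File.createTempFile call can succeed:
java -Djava.io.tmpdir=/jvm-tmp \
     -javaagent:/opt/dd-java-agent.jar \
     -jar app.jar
```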
The AWS Java SDK instrumentation may leak PII when the params field is populated.
The current implementation iterates through all the request parameters and writes them to a string. In the case of SQS, this extracts the MessageBody of an SQS message, which we don't want to show up in Datadog because it may contain PII or other sensitive information.
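One way to avoid tagging sensitive values is an allow-list applied while serializing the parameters. This is a standalone sketch of the idea only; the class, method, and allow-listed key names are hypothetical and are not the agent's actual implementation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical helper: serialize request params to a tag value, but only
// keep values for allow-listed keys; everything else (e.g. MessageBody)
// is replaced with a placeholder before it ever reaches a span tag.
public class ParamTagRedactor {
    private static final Set<String> ALLOWED =
        new HashSet<>(Arrays.asList("QueueUrl", "DelaySeconds"));

    public static String toTagValue(Map<String, String> params) {
        Map<String, String> safe = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            safe.put(e.getKey(),
                ALLOWED.contains(e.getKey()) ? e.getValue() : "<redacted>");
        }
        return safe.toString();
    }
}
```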
Clone failing project: https://github.com/htmldoug/datadog-netty4-failing/tree/master/2-play-scala-seed
cd 2-play-scala-seed
sbt stage
cd target/universal/stage
java -cp 'conf/:lib/*' "-Ddd.service.name=datadog_test_app" "-Ddd.trace.span.tags=env:dev" "-Ddd.integration.netty.enabled=true" "-Ddd.writer.type=LoggingWriter" "-Ddd.priority.sampling=true" -javaagent:~/.ivy2/cache/com.datadoghq/dd-java-agent/jars/dd-java-agent-0.13.0-SNAPSHOT.jar play.core.server.ProdServerStart
curl -X POST -d 'hai' localhost:9000/post
Expected: one trace_id for this request.
Actual: two separate trace_id values.
[application-akka.actor.default-dispatcher-7] INFO datadog.trace.agent.common.writer.LoggingWriter - write(trace): [{"type":"web","error":0,"meta":{"http.status_code":"200","component":"play-action","span.kind":"server","http.url":"/post","env":"dev","thread.name":"application-akka.actor.default-dispatcher-7","http.method":"POST","thread.id":"42","span.type":"web"},"metrics":{"_sample_rate":1.0,"_sampling_priority_v1":1},"duration":495080,"name":"play.request","resource":"POST /post","service":"datadog_test_app","trace_id":3394962754312157747,"span_id":4291744685937419745,"start":1533693728276018428,"parent_id":0}]
[netty-event-loop-6] INFO datadog.trace.agent.common.writer.LoggingWriter - write(trace): [{"type":"web","error":0,"meta":{"http.status_code":"200","component":"netty","span.kind":"server","http.url":"http://localhost:9000/post","peer.hostname":"localhost","env":"dev","peer.port":"63781","thread.name":"netty-event-loop-6","http.method":"POST","thread.id":"27","span.type":"web"},"metrics":{"_sample_rate":1.0,"_sampling_priority_v1":1},"duration":9722625,"name":"netty.request","resource":"POST /post","service":"datadog_test_app","trace_id":2725413885196180439,"span_id":199824513983472727,"start":1533693728268121976,"parent_id":0}]
Formatted:
[
{
"type": "web",
"error": 0,
"meta": {
"http.status_code": "200",
"component": "play-action",
"span.kind": "server",
"http.url": "/post",
"env": "dev",
"thread.name": "application-akka.actor.default-dispatcher-7",
"http.method": "POST",
"thread.id": "42",
"span.type": "web"
},
"metrics": {
"_sample_rate": 1.0,
"_sampling_priority_v1": 1
},
"duration": 495080,
"name": "play.request",
"resource": "POST /post",
"service": "datadog_test_app",
"trace_id": 3394962754312157747,
"span_id": 4291744685937419745,
"start": 1533693728276018428,
"parent_id": 0
}
]
[
{
"type": "web",
"error": 0,
"meta": {
"http.status_code": "200",
"component": "netty",
"span.kind": "server",
"http.url": "http://localhost:9000/post",
"peer.hostname": "localhost",
"env": "dev",
"peer.port": "63781",
"thread.name": "netty-event-loop-6",
"http.method": "POST",
"thread.id": "27",
"span.type": "web"
},
"metrics": {
"_sample_rate": 1.0,
"_sampling_priority_v1": 1
},
"duration": 9722625,
"name": "netty.request",
"resource": "POST /post",
"service": "datadog_test_app",
"trace_id": 2725413885196180439,
"span_id": 199824513983472727,
"start": 1533693728268121976,
"parent_id": 0
}
]
When adding the javaagent to the JVM args and running a Spring Boot 2 application that uses
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
The following is logged as the agent is started, but running requests against endpoints doesn't produce any traces in the UI.
app_1 | Picked up JAVA_TOOL_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+UseCompressedOops -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Djboss.as.management.blocking.timeout=700 -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:-UseParallelOldGC
app_1 | [main] INFO datadog.trace.agent.ot.DDTraceOTInfo - dd-trace - version: 0.6.0~0ca84e8
app_1 | [main] INFO datadog.trace.agent.ot.DDTracer - New instance: DDTracer-76494737{ service-name=hello-world, writer=DDAgentWriter { api=DDApi { tracesEndpoint=http://localhost:8126/v0.4/traces } }, sampler=AllSampler { sample=true }, tags={}}
app_1 | [main] INFO datadog.trace.api.DDTraceApiInfo - dd-trace-api - version: unknown
app_1 | [main] INFO datadog.trace.agent.tooling.DDJavaAgentInfo - dd-java-agent - version: 0.6.0~0ca84e8
Using JDBC instrumentation, a SELECT USER( ) query is sent with each SQL statement, effectively doubling the number of SQL queries sent by the app.
Screenshot from Datadog APM dashboard, after turning on the java agent. The number of requests is equal to the total number of sql queries sent by the app.
Looking into connection metrics, HikariCP is properly reusing connections from its pool, so the caching mechanism in [Prepared]StatementInstrumentation.java should kick in, and there should be only one SELECT USER( ) statement for each new connection created.
Play 2.6.13 (Scala), Slick 3.2.1, HikariCP 2.7.8
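The per-connection caching the report expects can be sketched generically: remember which connection objects have already been probed so an expensive extra query runs at most once per physical connection. This is an illustration of the idea under that assumption, not the agent's actual code:

```java
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

// Illustrative once-per-connection guard. Weak keys mean entries vanish
// when the connection object itself is garbage collected, so the set
// cannot grow without bound as connections are churned.
public class ConnectionProbe {
    private static final Set<Object> seen =
        Collections.synchronizedSet(Collections.newSetFromMap(new WeakHashMap<>()));

    // Returns true only the first time a given connection object is seen,
    // i.e. the one time the extra query would actually need to run.
    public static boolean probeIfNew(Object connection) {
        return seen.add(connection);
    }
}
```

If the pool really is reusing connection objects, every call after the first for a given connection returns false and the extra query should be skipped.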
Using Slick 3.2.1, I see the following exceptions in the logs:
Caused by: java.lang.ClassCastException: datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation$RunnableWrapper cannot be cast to slick.util.AsyncExecutor$PrioritizedRunnable
at slick.util.ManagedArrayBlockingQueue.offer(ManagedArrayBlockingQueue.scala:13)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
at slick.util.AsyncExecutor$$anon$2$$anon$3.execute(AsyncExecutor.scala:120)
...
at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction(BasicBackend.scala:231)
at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction$(BasicBackend.scala:229)
at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:208)
at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:140)
at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.runInternal(BasicBackend.scala:76)
at slick.basic.BasicBackend$DatabaseDef.runInternal$(BasicBackend.scala:75)
at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.run(BasicBackend.scala:73)
at slick.basic.BasicBackend$DatabaseDef.run$(BasicBackend.scala:73)
at slick.jdbc.JdbcBackend$DatabaseDef.run(JdbcBackend.scala:37)
Looks like the Java APM agent is interfering with a cast that occurs in Slick. Let me know if you need any other details.
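The failure mode is easy to reproduce in isolation: once a Runnable that implements an extra marker interface is wrapped in a plain Runnable wrapper, downstream code that casts back to the marker type fails, because the wrapper only implements Runnable. The type names below are illustrative, not Slick's or the agent's actual classes:

```java
public class WrapperCastDemo {
    // Stand-in for something like slick.util.AsyncExecutor$PrioritizedRunnable
    interface Prioritized extends Runnable { int priority(); }

    // Stand-in for the agent's RunnableWrapper: it implements only Runnable,
    // so the delegate's extra interfaces are no longer visible on the wrapper.
    static class RunnableWrapper implements Runnable {
        private final Runnable delegate;
        RunnableWrapper(Runnable delegate) { this.delegate = delegate; }
        public void run() { delegate.run(); }
    }

    public static void main(String[] args) {
        Prioritized task = new Prioritized() {
            public void run() {}
            public int priority() { return 1; }
        };
        Runnable wrapped = new RunnableWrapper(task);
        // The cast Slick performs would throw ClassCastException here:
        System.out.println(wrapped instanceof Prioritized); // false
    }
}
```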
Missing HelperInjector entries and a runtime dependency on Google Collections (which is relocated in the dd-java-agent shadow configuration) cause several NoClassDefFoundErrors to be thrown. Tests do not have this problem.
Using the current released Java agent version (0.9.0) WITHOUT enabling the (still in beta) Jetty or Spark instrumentation causes my Jetty application to error out when trying to make outgoing GET requests.
The stack trace I get:
java.lang.NullPointerException: null
at org.glassfish.jersey.model.internal.CommonConfig.configureFeatures(CommonConfig.java:709)
at org.glassfish.jersey.model.internal.CommonConfig.configureMetaProviders(CommonConfig.java:648)
at org.glassfish.jersey.client.ClientConfig$State.configureMetaProviders(ClientConfig.java:372)
at org.glassfish.jersey.client.ClientConfig$State.initRuntime(ClientConfig.java:405)
at org.glassfish.jersey.client.ClientConfig$State.access$000(ClientConfig.java:90)
at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:122)
at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:119)
at org.glassfish.jersey.internal.util.collection.Values$LazyValueImpl.get(Values.java:340)
at org.glassfish.jersey.client.ClientConfig.getRuntime(ClientConfig.java:733)
at org.glassfish.jersey.client.ClientRequest.getConfiguration(ClientRequest.java:286)
at org.glassfish.jersey.client.JerseyInvocation.validateHttpMethodAndEntity(JerseyInvocation.java:135)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:105)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:101)
at org.glassfish.jersey.client.JerseyInvocation.<init>(JerseyInvocation.java:92)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:411)
at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:311)
at <application code, call to client.target(uri).request().get()>
at spark.RouteImpl$1.handle(RouteImpl.java:72)
at spark.http.matching.Routes.execute(Routes.java:61)
at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:130)
at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:258)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
This error DOES NOT occur when the Datadog Java agent is not present on the box. The javaagent string I am using is:
-javaagent:/opt/dd-java-agent.jar -Ddd.service.name=$DD_SERVICE_NAME -Ddd.agent.host=$DD_HOST_IP
And the agent running on the box is set up using the chef-datadog recipe. I have this code deployed to a handful of servers and deploy several times a day. It is usually only one of the servers that errors out on all outgoing requests, and which server it is changes with each new deploy.
Edit:
Some more details: the application is running in a Docker container and sends the trace data to a Datadog agent running on the host machine. The application uses spark-core 2.6.0.
More of a feature request, but could the class/method names of the invoked resource be added to the span/trace name, or to the attributes? It would provide helpful context.
My team is currently experimenting with tracing a system that in part consists of Python and Java applications. On the Python side of things we use dd-trace-py. All outgoing requests include the following headers for distributed tracing:
{ 'x-datadog-trace-id': str(tracer.current_span().trace_id), 'x-datadog-parent-id': str(tracer.current_span().span_id) }
These ids are, as far as I understand, 64-bit unsigned integers.
When a Java application receives a request with such headers and tries to submit them to the trace agent, the id will be incorrectly (in my opinion) encoded by msgpack as float64 rather than uint64. The trace agent expects the trace id to be an int, so it throws the following error and drops the traces:
ERROR (receiver.go:386) - cannot decode v0.4 traces payload: msgp: attempted to decode type "float64" with method for "int"
I have created a ticket in the msgpack-java repository which can be found here. I'm posting this here because there may be others that have this problem, and I thought I'd let you know.
Currently we've worked around this by swapping out the objectMapper in DDApi (with the unfortunate use of reflection) for a subclass that uses a MessagePackGenerator containing the fix suggested in the pull request related to the issue ticket linked above.
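For reference, Java itself can round-trip the full unsigned 64-bit range using the unsigned helpers on Long, even though the long type is signed; the problem described above is purely in how the value is then encoded by the msgpack serializer:

```java
public class UnsignedIds {
    public static void main(String[] args) {
        // Max uint64 arrives as a decimal string in x-datadog-trace-id
        String header = "18446744073709551615";

        // Parses into the signed long bit pattern (-1 in two's complement)
        long id = Long.parseUnsignedLong(header);
        System.out.println(id);                        // -1

        // Renders back as the unsigned decimal value without loss
        System.out.println(Long.toUnsignedString(id)); // 18446744073709551615
    }
}
```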
We see direct interactions with JMS, but when using Spring's JmsTemplate to interact with ActiveMQ we lose the trace. Are there any workarounds?
I see the traceId being generated in this code.
Is there a way to override this trace id with our own unique id (a base64 UUID), which we currently use to track a request across our systems, in order to maintain uniformity?
If yes, will the Datadog backend be able to handle this?
It would be really nice if we could disable traces for code that is auto-traced by the agent in some cases. For example, I would like to exclude my http healthcheck endpoints as they skew my APM metrics. Not sure of the best way to accomplish this, but I was thinking having a boolean property in the Trace annotation could allow disabling spans for example. Another "unnamed" APM product allows doing this...
We upgraded our Java agent to version 0.12.0 and noticed that we no longer received any traces for servlet.request in the Datadog APM dashboard. To the best of our knowledge we still received all the traces for all other types. We reverted back to 0.11.0 and started receiving the servlet.request traces again. We are running Jetty version 9.3.6.v20151106 and Jersey version 2.22.1.
Couchbase includes memcached functionality, a traditional NoSQL store, and an advanced query language, so there are several different buckets this fits into from an APM perspective.
The link to the official client is here: https://github.com/couchbase/couchbase-java-client
I am not using the JVM agent.
I am trying to manually use dd-trace-ot 0.11.0 in a Spring Boot application along with opentracing-spring-cloud-starter.
I want to disable the Datadog tracer in various profiles and had thought it would be as simple as not instantiating the DDAgentWriter and Tracer beans, i.e.:
@Bean
@ConditionalOnProperty(value = "opentracing.datadog.enabled", havingValue = "true", matchIfMissing = false)
public io.opentracing.Tracer tracer() {
DDAgentWriter writer = new DDAgentWriter(...);
Sampler sampler = new AllSampler();
Tracer tracer = new DDTracer("serviceName", writer, sampler);
return tracer;
}
Unfortunately, it turns out that not declaring this bean makes no difference, because there's a DDTracerResolver that creates one anyway. The only way I can find to prevent the tracer from starting is, with the agent, to set system properties such as dd.integrations.enabled=false (or the equivalent environment variables), but these don't appear to have any effect when using the tracer manually.
I can also use the system property (or equivalent environment variable) tracerresolver.disabled=true, which does actually work (when I pull the latest TracerResolver from opentracing) but also stomps on everything else and requires external config.
How can I prevent the DDTracer from instantiating without setting system properties or environment variables, and without removing the dependency from my code altogether?
*Update: I have figured out that I can set dd.writer.type=LoggingWriter, and while that works more or less for my purposes, it emits a ridiculous amount of log output at INFO!
Thanks in advance.
I realize that typically the agent is toggled on or off by adding/removing the requisite -javaagent argument. In some environments, like Heroku, it's fairly tricky to enable the agent in one place and disable it in another.
This is a feature request to support something like a trace.enabled flag (a DD_TRACE_ENABLED environment variable, etc.) that would completely disable tracing.
Alternatively (or in addition), it would be nice to be able to supply a NoOpWriter. Similar situation/cause: if the Datadog agent itself is disabled, the Java tracer will still collect traces and consistently fail to write them to a non-running agent, which is basically just a waste of logging, memory, and CPU.
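The requested writer could be very small. This sketch defines its own stand-in Writer interface because the tracer's real interface and method signatures are not shown here; the point is only that a no-op implementation discards traces instead of attempting delivery:

```java
import java.util.Collections;
import java.util.List;

// Stand-in for the tracer's writer abstraction (method names are assumptions).
public class NoOpWriterSketch {
    interface Writer {
        void write(List<?> trace);
        void start();
        void close();
    }

    // Discards every trace: no buffering, no network calls, no log spam.
    static class NoOpWriter implements Writer {
        public void write(List<?> trace) { /* intentionally drop the trace */ }
        public void start() {}
        public void close() {}
    }

    public static void main(String[] args) {
        Writer w = new NoOpWriter();
        w.start();
        w.write(Collections.emptyList());
        w.close();
    }
}
```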
Following the steps in setting up apm and restarting tomcat we get a nasty exception message in our log:
java.lang.ClassCastException: datadog.trace.instrumentation.java.concurrent.ExecutorInstrumentation$CallableWrapper cannot be cast to company.common.asynchronous.immediatetasking.TaskWrapper
at company.common.asynchronous.immediatetasking.WrapperFutureTask.<init>(WrapperFutureTask.java:42)
at company.common.asynchronous.immediatetasking.TaskExecutorThreadPool.newTaskFor(TaskExecutorThreadPool.java:57)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:133)
at company.common.asynchronous.immediatetasking.TaskExecutorThreadPool.submit(TaskExecutorThreadPool.java:48)
at company.common.asynchronous.immediatetasking.TaskExecutor.addTask(TaskExecutor.java:246)
at company.common.asynchronous.immediatetasking.TaskExecutor.addTask(TaskExecutor.java:218)
at company.common.asynchronous.immediatetasking.TaskExecutor.execute(TaskExecutor.java:99)
at company.service.cache.IntervalCache.updateCacheIfNecessary(IntervalCache.java:51)
at company.service.helptexts.HelpTextCache.getHelpLinks(HelpTextCache.java:63)
at company.service.helptexts.HelpTextServiceImplementation.getHelpLinks(HelpTextServiceImplementation.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
...
Before committing too much time to debugging, I figured the problem might be on Datadog's end.
Is there a way to represent on the APM Service Map that each of the apps has its own MySQL instance, instead of all of them connecting to a single mysql "node"?
While setting up APM for a Java project I naively copied the environment configuration from a Python project. However, it turns out the environment variable names are different: for example, the Java APM agent uses DD_AGENT_HOST to configure the agent host, whereas the Python trace client expects to find the agent host in DATADOG_TRACE_AGENT_HOSTNAME.
To prevent configuration mistakes, it would be great to have these basic configuration options named consistently across all supported APM languages.
At the moment the LoggingWriter outputs vast amounts of log data at INFO.
It would be great if all the INFO logging for the LoggingWriter output could be changed to TRACE, so that folks don't have to treat the LoggingWriter as special (i.e. set it to > WARN) for normal operation.
Currently, setting a default service name is supported by the tracer.
PR: #64
The user can set the default service:
new DDTracer("service-name", ...)
We want to allow a third way for the user to change the service name, at the contribution level: for instance, renaming the mongo service to my-app-mongo when using the provided mongo contrib.
At this point we have the notion of decorators, but that concept should remain private from the user's perspective. So we will provide a simple API to allow the user to rename an existing service name:
tracer.addRule(Rules.SERVICENAME, new MappingRule("mongo","my-app-mongo"))
rules:
service-name:
- mapping: ["mongo", "my-app-mongo"]
- mapping: ["jdbc", "my-app-jdbc"]
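The rename rule proposed above is essentially a lookup applied to each span's service name before it is reported. A minimal sketch of that idea, assuming the rules resolve to a plain map (Rules.SERVICENAME and MappingRule are only a proposal, so the class below is illustrative, not the eventual API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical rename rule: map a contrib's default service name to the
// user's preferred name; unknown names pass through unchanged.
class ServiceNameMapper {
    private final Map<String, String> mapping = new HashMap<>();

    void addMapping(final String from, final String to) {
        mapping.put(from, to);
    }

    // Applied once per span, just before the service name is written out.
    String resolve(final String serviceName) {
        return mapping.getOrDefault(serviceName, serviceName);
    }
}
```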
Trying to get our APM going with a mix of custom and agent-woven instrumentation using a GlobalTracer, running on Java 8 in an Undertow servlet container. Our custom code is very similar to the example code:
...
Tracer tracer = GlobalTracer.get();
Scope scope = tracer.buildSpan("operation-name").startActive(true);
try {
scope.span().setTag(DDTags.SERVICE_NAME, "my-new-service");
// The code you're tracing
Thread.sleep(1000);
// If you don't call close(), the span data will NOT make it to Datadog!
} finally {
scope.close();
}
...
But these custom events never appear in the APM UI, or in the logs via LoggingWriter. Digging into your tracing code, it appears write() is never called for these traces in the expireReference method:
private void expireReference() {
final int count = pendingReferenceCount.decrementAndGet();
if (count == 0) {
write();
}
log.debug("traceId: {} -- Expired reference. count = {}", traceId, count);
}
because the pendingReferenceCount is always greater than 1 for custom traces, but it is 0 for agent-woven instrumentation. It's not clear to me what this counter aims to achieve or how and when it gets incremented. Is this style of tracing (custom plus auto agent) supported? Are there any gotchas? From the docs it looks like I only have to make sure to call close() on the span and things should work; per the code, close() is definitely being called, but it ends up in the expireReference method in PendingTrace.java and nothing ever gets written out. I can see the counter gets incremented in registerContinuation and registerSpan, and we are running in a servlet container so continuations are possible. I'm not sure what to try next.
Hi, I'm trying to use dev or staging as the environment, instead of none.
The document below says there are three ways to change the environment:
https://docs.datadoghq.com/tracing/environments/
I cannot use the 3rd way, because I have no control over okhttp, java-aws-sdk, etc. So I tried the 1st way, which sets a host tag.
I'm not sure if I'm looking at the right document, but the document below says:
DD_TAGS: host tags, separated by spaces. For example: simple-tag-0 tag-key-1:tag-value-1
https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/README.md
So I've added an environment variable DD_TAGS=env:dev on both my application container and the Datadog Agent container. But env is still none on my dashboard.
I would appreciate it if someone could tell me what I'm doing wrong.
Thanks.
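For reference, the DD_TAGS format quoted above is space-separated tags, each either bare or key:value. A sketch of how an env value would be picked out of such a string (the helper itself is hypothetical; only the format comes from the agent README):

```java
public class EnvTag {
    // Parse an "env:<value>" tag out of a DD_TAGS-style string,
    // e.g. "simple-tag-0 tag-key-1:tag-value-1 env:dev".
    // Falls back to "none", matching the dashboard's default.
    public static String envFrom(final String ddTags) {
        if (ddTags == null) return "none";
        for (final String tag : ddTags.trim().split("\\s+")) {
            if (tag.startsWith("env:")) {
                return tag.substring("env:".length());
            }
        }
        return "none";
    }
}
```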
We have an issue where our RabbitMQ messages destined for MySQL are failing due to a data violation. Under normal circumstances these go through a 3x retry cycle and then bail out. However, when we have APM enabled, the TracedDelegatingConsumer seems to "get in the way": it either eats the exception or alters the underlying default Spring behavior, and the message is stuck in an infinite retry loop.
When we turn off the APM tool, this behavior goes away.
The exception that SHOULD be causing the item to stop retrying is o.s.a.r.r.RejectAndDontRequeueRecoverer : Retries exhausted for message. However, the following is also logged, and loops infinitely, when APM is enabled:
2018-11-27 15:50:04.660-06:00 ERROR- [pool-1-thread-26] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@1b993fb7 (amq.ctag-ztJhz2pW_o919UV5_ltoSg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773), class-id=0, method-id=0)
--
| 2018-11-27 15:50:04.658-06:00 ERROR- [pool-1-thread-26] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@1b993fb7 (amq.ctag-ztJhz2pW_o919UV5_ltoSg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.598-06:00 ERROR- [pool-1-thread-16] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@70bc6d0e (amq.ctag-Mn3tr6a-Dxt8hiUgma9EyA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773), class-id=0, method-id=0)
| 2018-11-27 15:50:04.598-06:00 ERROR- [pool-1-thread-16] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@70bc6d0e (amq.ctag-Mn3tr6a-Dxt8hiUgma9EyA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64773) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.595-06:00 ERROR- [pool-1-thread-8] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@308fc027 (amq.ctag-7S5wgXceOldEEvYJ0_vPgg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772), class-id=0, method-id=0)
| 2018-11-27 15:50:04.593-06:00 ERROR- [pool-1-thread-8] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@308fc027 (amq.ctag-7S5wgXceOldEEvYJ0_vPgg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.564-06:00 ERROR- [pool-1-thread-18] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@5fd4299 (amq.ctag-jBQGiGL6uPi4BEdD0gu-KQ) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772), class-id=0, method-id=0)
| 2018-11-27 15:50:04.563-06:00 ERROR- [pool-1-thread-18] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@5fd4299 (amq.ctag-jBQGiGL6uPi4BEdD0gu-KQ) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.498-06:00 ERROR- [pool-1-thread-25] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@6dc1b602 (amq.ctag-eVXMUgZzirJkyYG2fhgwFg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772), class-id=0, method-id=0)
| 2018-11-27 15:50:04.496-06:00 ERROR- [pool-1-thread-25] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@6dc1b602 (amq.ctag-eVXMUgZzirJkyYG2fhgwFg) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64772) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.496-06:00 ERROR- [pool-1-thread-7] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@3c212de3 (amq.ctag-xPibB-_KrfwniVU83-0Q-g) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64771), class-id=0, method-id=0)
| 2018-11-27 15:50:04.494-06:00 ERROR- [pool-1-thread-7] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@3c212de3 (amq.ctag-xPibB-_KrfwniVU83-0Q-g) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64771) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64771) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.471-06:00 ERROR- [pool-1-thread-20] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@7b014a3d (amq.ctag-nbFCrYuc-__Dc2RFRrGRhw) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770), class-id=0, method-id=0)
| 2018-11-27 15:50:04.469-06:00 ERROR- [pool-1-thread-20] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@7b014a3d (amq.ctag-nbFCrYuc-__Dc2RFRrGRhw) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.391-06:00 ERROR- [pool-1-thread-12] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@42e69391 (amq.ctag-5nJKe6c17JZdKdvUc5s1pA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64769), class-id=0, method-id=0)
| 2018-11-27 15:50:04.390-06:00 ERROR- [pool-1-thread-12] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@42e69391 (amq.ctag-5nJKe6c17JZdKdvUc5s1pA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64769) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64769) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.390-06:00 ERROR- [pool-1-thread-14] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@1ce46f8 (amq.ctag-b9RPANn6PE3f05BCZ5YYYA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770), class-id=0, method-id=0)
| 2018-11-27 15:50:04.388-06:00 ERROR- [pool-1-thread-14] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@1ce46f8 (amq.ctag-b9RPANn6PE3f05BCZ5YYYA) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64770) java.lang.NullPointerException: null
| 2018-11-27 15:50:04.375-06:00 ERROR- [pool-1-thread-16] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=Closed due to exception from Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@235112d9 (amq.ctag-464kxmffh3vP6XdYzkjRjQ) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64768), class-id=0, method-id=0)
| 2018-11-27 15:50:04.374-06:00 ERROR- [pool-1-thread-16] c.r.c.impl.ForgivingExceptionHandler : Consumer datadog.trace.instrumentation.rabbitmq.amqp.TracedDelegatingConsumer@235112d9 (amq.ctag-464kxmffh3vP6XdYzkjRjQ) method handleDelivery for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64768) threw an exception for channel AMQChannel(amqp://[scrubbed]/[scrubbed],64768) java.lang.NullPointerException: null
I have 3 Spring Boot microservices (A, B, C) running on separate servers. My request flow is A -> B -> C.
This is a single request hitting multiple servers, so I should get 3 spans stacked in one trace, but the spans are reported as 3 separate traces, each containing only a single span. How can I correlate all spans together in one trace?
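Correlation across services requires the trace context to travel with the request: the caller injects it into the outgoing HTTP headers, and the callee extracts it before starting its own span (the auto-instrumented HTTP clients and servers do this for you when both sides are traced). A minimal sketch of the mechanism; the header names mirror the ones the Datadog Java tracer propagates (x-datadog-trace-id, x-datadog-parent-id), but the classes below are illustrative stand-ins, not the tracer's API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative trace context carried between services via HTTP headers.
class TraceContext {
    final long traceId;
    final long parentId;

    TraceContext(final long traceId, final long parentId) {
        this.traceId = traceId;
        this.parentId = parentId;
    }

    // Caller side (service A): copy the context into the request headers.
    void inject(final Map<String, String> headers) {
        headers.put("x-datadog-trace-id", Long.toString(traceId));
        headers.put("x-datadog-parent-id", Long.toString(parentId));
    }

    // Callee side (service B): rebuild the context from incoming headers.
    static TraceContext extract(final Map<String, String> headers) {
        return new TraceContext(
            Long.parseLong(headers.get("x-datadog-trace-id")),
            Long.parseLong(headers.get("x-datadog-parent-id")));
    }
}
```

If the headers are dropped anywhere along the path (a proxy, a hand-rolled client, an uninstrumented framework), each service starts a fresh trace, which matches the symptom of three separate single-span traces.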