datadog / dd-trace-go
Datadog Go Library including APM tracing, profiling, and security monitoring.
Home Page: https://docs.datadoghq.com/tracing/
License: Other
Is there a plan?
I set up tracing for redis-go, and data flows through from a redis call and sends a trace off to datadog fine. I noticed that when I call redis (e.g. GET "key") and the value doesn't exist, datadog picks it up as an error and logs a stack trace. A key not being found in redis is a valid use case in our application and should not generate an error. http://prntscr.com/ff6it6 <- screenshot of what it looks like in datadog. The error is "internal.RedisError: redis: nil"; note that "redis: nil" just means there was no value to find for the key.
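For context, here is a minimal sketch (assuming the go-redis client, where a missing key surfaces as the sentinel error redis.Nil) of how application code distinguishes "key absent" from a real failure — the integration could apply the same check before flagging the span as errored:

package main

import (
	"fmt"

	"github.com/go-redis/redis"
)

// get returns (value, found, error); redis.Nil marks a missing key,
// which is a normal outcome rather than a failure.
func get(client *redis.Client, key string) (string, bool, error) {
	val, err := client.Get(key).Result()
	if err == redis.Nil {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return val, true, nil
}

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	fmt.Println(get(client, "key"))
}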
math/rand needs to be seeded. If it's not, it is seeded with 1 every time.
Because of this, trace and span ID generation is deterministic across restarts (and across different applications); this really confuses the DataDog dashboard ;)
There should be some documentation about seeding the random source before use. Or, even better, the tracer should hold its own random source that it ensures is seeded, or use crypto/rand.
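A minimal sketch of the latter suggestion, assuming math/rand (a rand.Rand is not safe for concurrent use, hence the mutex; all names here are hypothetical):

package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// idGenerator holds a tracer-private, explicitly seeded random source.
type idGenerator struct {
	mu  sync.Mutex
	rng *rand.Rand
}

func newIDGenerator() *idGenerator {
	return &idGenerator{rng: rand.New(rand.NewSource(time.Now().UnixNano()))}
}

// nextID returns a fresh trace/span ID.
func (g *idGenerator) nextID() uint64 {
	g.mu.Lock()
	defer g.mu.Unlock()
	return uint64(g.rng.Int63())
}

func main() {
	g := newIDGenerator()
	fmt.Println(g.nextID())
}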
I am having a hard time trying to understand the logic behind SetServiceInfo and the difference between the name, service, and resource.
I re-read the source code several times and I still end up just trying different variations of names in NewRootSpan to get the result I want.
Is there any documentation that explains the difference between them? It seems that it would be much simpler to just have a name and type for each span and use meta for everything else.
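For reference, here is the kind of call I'm experimenting with, under my current guess at the roles (an assumption pieced together from the README, not an authoritative answer): name is the operation type, service is the emitting application, resource is the specific endpoint or query.

package main

import "github.com/DataDog/dd-trace-go/tracer"

func main() {
	// name: kind of operation; service: the app; resource: the endpoint.
	span := tracer.NewRootSpan("http.request", "web-backend", "/users/:id")
	defer span.Finish()
}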
I noticed that tracedStmt (https://github.com/DataDog/dd-trace-go/blob/master/tracer/contrib/sqltraced/sqltraced.go#L324) gets its span from the statement's context, i.e. the context used to prepare the driver statement. I believe it should be using the context passed into the ExecContext or QueryContext method instead, as prepared statements may be used for multiple requests.
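To illustrate why this matters, a small sketch (assuming lib/pq, though any database/sql driver behaves the same): the statement is prepared once with a long-lived context, while each request arrives with its own context carrying its own span.

package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any database/sql driver works.
)

func main() {
	db, err := sql.Open("postgres", "host=localhost dbname=test sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	// Prepared once at startup; this context is long gone when requests arrive.
	stmt, err := db.PrepareContext(context.Background(), "SELECT name FROM users WHERE id = $1")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	// Each request supplies its own context; a tracing wrapper should
	// parent the query span on this context, not the Prepare-time one.
	reqCtx := context.Background()
	var name string
	if err := stmt.QueryRowContext(reqCtx, 1).Scan(&name); err != nil {
		log.Println(err)
	}
}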
I often find myself writing:
span := tracer.NewChildSpanFromContext("name", ctx)
ctx = span.Context(ctx)
I would love a version of tracer.NewChildSpanFromContext that also returns a context with the created span:
span, ctx := tracer.Span("name", ctx) // name tbd
We can set ErrorMsg, ErrorType, ErrorStack, but we can't set the Error boolean to mark the trace as an error.
A possible approach would be to set it automatically whenever any of those three tags is set.
After the fix for #46 was applied, if SpanFromContextDefault is called when there is no existing span, a new one is created with a zero valued Transport, including a nil randGen.
Using this span previously would allow you to record data, but it wouldn't go anywhere. Now, trying to use this span's tracer to make a new child span will cause a panic when it tries to generate a new span id for the child.
We have a bunch of public fields on Span that cannot be safely accessed in multi-threaded code. These fields include Error, Name, Meta, and Metrics. We can't even ask the user to lock the span themselves before accessing these fields, as the mutex that protects them is private.
This is particularly concerning for Meta and Metrics, as concurrent hash accesses can cause panics in newer versions of Go.
We should make these fields private and write public, thread-safe getters and setters to access this data.
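A minimal sketch of what such accessors could look like (names hypothetical, assuming the span's internal mutex guards all mutable fields):

package main

import "sync"

// Span with private fields guarded by an internal mutex.
type Span struct {
	mu   sync.RWMutex
	name string
	meta map[string]string
}

// SetMeta is a thread-safe setter for the meta map.
func (s *Span) SetMeta(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.meta == nil {
		s.meta = make(map[string]string)
	}
	s.meta[key] = value
}

// GetMeta is the matching thread-safe getter.
func (s *Span) GetMeta(key string) string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.meta[key]
}

func main() {
	s := &Span{}
	s.SetMeta("http.status", "200")
	_ = s.GetMeta("http.status")
}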
In the opentracing code, https://github.com/DataDog/dd-trace-go/blob/master/opentracing/propagators.go#L34 we are writing a base16 version of the trace-id and parent-id, whereas in the non opentracing libraries we are expecting a base10 version of the trace-id: https://github.com/DataDog/dd-trace-rb/blob/6534aaf725bd17df0363340595ebf3fc741e673e/lib/ddtrace/propagation/distributed_headers.rb#L19
This means we can't get distributed traces between opentracing (golang) services and legacy (ruby) services.
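To make the mismatch concrete, a tiny illustration of the two encodings:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// The same ID as written by the Go opentracing propagator (base 16)
	// and as expected by the legacy tracers (base 10).
	id := uint64(123456789)
	fmt.Println(strconv.FormatUint(id, 16)) // "75bcd15"
	fmt.Println(strconv.FormatUint(id, 10)) // "123456789"
}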
It looks like this library has swapped the meaning of component and operation versus the opentracing spec.
The operation should be passed to StartSpanFromContext, but instead you're setting it onto a custom tag resource.name.
The component should be set via e.g. ext.Component.Set(sp, "http.request"), but instead you're passing it in as the operation name to StartSpanFromContext.
[EDIT]
It looks like the opentracing branch is perpetuating this mistake. Given that it is going to be such a big breaking change anyway, perhaps it could be fixed in that branch.
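For reference, a sketch of the spec-conformant shape using the vanilla opentracing-go API (the operation name "get_user" is a made-up example):

package main

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
)

func handle(ctx context.Context) {
	// The operation name describes the work being done; the component
	// tag names the instrumented library or framework.
	sp, ctx := opentracing.StartSpanFromContext(ctx, "get_user")
	defer sp.Finish()
	ext.Component.Set(sp, "http.request")
	_ = ctx
}

func main() {
	handle(context.Background())
}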
If I use the HTTP integrations, I can enable apdex on the web interface.
However, if I build my own spans with Go opentracing, apdex is not displayed.
Which tag is required in the span in order to enable apdex?
These tags? https://github.com/DataDog/dd-trace-go/blob/master/opentracing/tags.go
The basictracer example demonstrates this: https://github.com/opentracing/basictracer-go/blob/c7c0202a8a77f658aeb2193a27b6c0cfcc821038/propagation_ot.go#L96
The comment in the opentracing-go code: https://github.com/opentracing/opentracing-go/blob/master/propagation.go#L20
With the current code, a ParentId and TraceId of 0 are used, and that is rejected by the datadog-apm agent with the following log line: 2018-01-02 23:25:06 ERROR (receiver.go:219) - dropping trace reason: invalid span Span[t_id:0,s_id:8489972898621406477,p_id:0,ser:...
We actually saw this in production: with large traces (around 2 MiB, which is an issue in itself) and heavy load, the channel got filled until the host memory became exhausted.
The channel can hold 1000 traces, and a flush is forced when it is half-filled.
https://github.com/DataDog/dd-trace-go/blob/master/tracer/channels.go#L8
We need to ensure two properties:
Glide fetches the latest available tag to properly set a version (0.3.0). This version doesn't provide all the features available in your documentation (such as .SetMeta on Tracer).
Adding a new version tag on your latest stable commit should fix this issue.
We started implementing tracing in one of our low volume Go applications (30 requests/s). This process has 4 worker threads processing data. All traces and spans are within a single thread and never span multiple threads. I noticed that a large majority of our traces were missing spans (see attached screenshots for two identical traces, one missing spans). After some digging, it appears that the issue is that when you reach the max buffer size (a hard-coded const of 10000), any push to the buffer randomly picks an index and overwrites the span at that index: https://github.com/DataDog/dd-trace-go/blob/master/tracer/buffer.go#L34
Even at a 1% sample rate we are still not getting full traces; 1 or 2 out of 10 are complete, which makes the tracing unreliable at best, unusable at worst.
Unfortunately, both the const for the buffer size and the buffer struct are private, so there's no way to override this value. Is there an undocumented reason for this? Would you guys be open to making the buffer struct public so that a custom-sized buffer can be created if need be? (I can make a PR for this if approved.) If not, the only option I have is to reimplement the buffer.go file in my own package to allow the size to be configurable; however, if there's a valid reason this size was chosen, I don't want to circumvent that.
Full trace
Trace with missing spans
Hello. I am trying to incorporate dd-trace-go for tracing our postgres queries. We use GORM, which sets up a new sql.DB like this:
import (
	"log"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres"
)

func main() {
	db, err := gorm.Open("postgres", "host=myhost user=gorm dbname=gorm sslmode=disable password=mypassword")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// use db for all queries, db.Query etc
}
This seems to be incompatible with using sqltraced, as in the following code:
// The first argument is a reference to the driver to trace.
// The second argument is the dataSourceName.
// The third argument is used to specify the name of the service under which traces will appear in the Datadog app.
// The last argument allows you to specify a custom tracer to use for tracing.
db, err := sqltraced.OpenTraced(&pq.Driver{}, "postgres://pqgotest:password@localhost/pqgotest?sslmode=disable", "web-backend")
if err != nil {
log.Fatal(err)
}
// Use the database/sql API as usual and see traces appear in the Datadog app.
rows, err := db.Query("SELECT name FROM users WHERE age=$1", 27)
if err != nil {
log.Fatal(err)
}
defer rows.Close()
I guess I need to pick one, as I can't use both db pointers to query my actual db? Hoping my issue is clear and that there is a solution.
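One possible workaround (a sketch, not a confirmed recipe: it relies on GORM v1's documented support for initializing from an existing connection): open the traced *sql.DB first, then hand it to gorm.Open in place of the connection string.

package main

import (
	"log"

	"github.com/DataDog/dd-trace-go/tracer/contrib/sqltraced"
	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres"
	"github.com/lib/pq"
)

func main() {
	// Traced connection, as in the sqltraced example above.
	tdb, err := sqltraced.OpenTraced(&pq.Driver{}, "host=myhost user=gorm dbname=gorm sslmode=disable password=mypassword", "web-backend")
	if err != nil {
		log.Fatal(err)
	}
	// GORM v1 accepts an existing connection in place of a DSN.
	db, err := gorm.Open("postgres", tdb)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}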
When a host only (no port) is given to the tracer.WithAgentAddr configuration, unexpected results happen because of the SplitHostPort return value. This needs to be handled differently (better).
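A minimal sketch of more tolerant handling, assuming the default trace-agent port 8126 when none is supplied (helper name hypothetical):

package main

import (
	"fmt"
	"net"
)

// resolveAgentAddr fills in defaults instead of propagating the
// SplitHostPort error for a port-less address.
func resolveAgentAddr(addr string) string {
	host, port, err := net.SplitHostPort(addr)
	if err != nil {
		// No port present: assume the default trace-agent port.
		return net.JoinHostPort(addr, "8126")
	}
	if host == "" {
		host = "localhost"
	}
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(resolveAgentAddr("myhost"))      // myhost:8126
	fmt.Println(resolveAgentAddr("myhost:9999")) // myhost:9999
}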
We run continuous integration and local testing, neither of which we want tracing for. Please provide a simple way to toggle tracing on/off. Our tests have started running slow, trying to connect to a dd-trace agent that does not exist.
It might be nice to allow environment variables to configure the tracing client. I think we need the following, and it should be consistent across all clients (with the implied defaults):
DATADOG_TRACE_TARGET=localhost:7777
DATADOG_TRACE_ENABLED=true
Gorilla's context is a map keyed off request references, meaning that it breaks rather catastrophically when mixing with the more recent request.Context() stdlib approach.
Taken from the gorilla docs:
Note: gorilla/context, having been born well before context.Context existed, does not play well with the shallow copying of the request that http.Request.WithContext (added to net/http Go 1.7 onwards) performs. You should either use just gorilla/context, or moving forward, the new http.Request.Context().
In light of this, it's probably best to consider Gorilla deprecated in favour of the standard library context and routers that make use of it.
If you want to keep a gorilla implementation around, muxtrace.SetRequestSpan and muxtrace.GetRequestSpan should be updated to use Gorilla's context.
I'm trying to set up the APM in our codebase, and during development I'm getting the following message:
2017/02/24 20:57:56 cannot flush traces: Post http://localhost:7777/v0.3/traces: dial tcp [::1]:7777: getsockopt: connection refused
2017/02/24 20:57:56 lost 3 spans
I have two questions:
Thank you!
API should inspect incoming request headers and context (r.Context) for distributed tracing.
#115 provides new integrations for net/http.Mux and gorilla/mux.Router.
However, it doesn't allow wrapping a simple http.Handler with a tracer.
Some of my projects don't use a mux/router. They just use a simple http.Handler.
It could be interesting to provide a wrapper function for this use case.
The implementation is very simple:
func WrapHandler(h http.Handler, service, resource string, tr *tracer.Tracer) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
internal.Trace(h, w, req, service, resource, tr)
})
}
What do you think?
I'm using the contrib/gorilla/mux.Router with the NotFoundHandler field.
I think there is a bug in the current code, because it panics (nil pointer dereference).
dd-trace-go/contrib/gorilla/mux/mux.go
Lines 43 to 60 in 6ba9d7a
dd-trace-go/contrib/gorilla/mux/mux.go
Line 51 in 6ba9d7a
match.Route is nil even though r.Match() returned true!
This is expected behavior:
https://github.com/gorilla/mux/blob/c0091a029979286890368b4c7b301261e448e242/mux.go#L103-L108
When the NotFoundHandler matches, Match() returns true while Route is nil and MatchErr is defined.
My suggestion: contrib/gorilla/mux.Router should also check that match.Route is not nil.
I think we don't need to check match.MatchErr.
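A sketch of the suggested guard (helper name hypothetical):

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// resourceName only consults match.Route when it is non-nil, since
// Match() can return true with a nil Route for the NotFoundHandler.
func resourceName(router *mux.Router, req *http.Request) string {
	var match mux.RouteMatch
	if router.Match(req, &match) && match.Route != nil {
		if tmpl, err := match.Route.GetPathTemplate(); err == nil {
			return tmpl
		}
	}
	return "unknown"
}

func main() {
	req, _ := http.NewRequest("GET", "http://localhost/", nil)
	_ = resourceName(mux.NewRouter(), req)
}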
Hi!
It would be nice to have an implementation for Labstack's Echo web framework.
Hello, are there plans to allow setting tags at the tracer level? Basically the equivalent of tracer.set_tags() from DD's Python tracing library.
This package should have a RoundTripper wrapper (https://golang.org/pkg/net/http/#RoundTripper). It would help trace outgoing HTTP requests.
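A minimal sketch of the shape such a wrapper could take (the tracing calls are left as comments, since they depend on the tracer API):

package main

import "net/http"

// tracedTransport wraps another RoundTripper so every outgoing request
// can be observed.
type tracedTransport struct {
	base http.RoundTripper
}

func (t *tracedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Start a span and inject propagation headers into req here.
	resp, err := t.base.RoundTrip(req)
	// Finish the span here, recording resp.StatusCode and err.
	return resp, err
}

func main() {
	client := &http.Client{Transport: &tracedTransport{base: http.DefaultTransport}}
	_ = client
}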
Line 307 in 6ba9d7a
My linter reports this:
should not use basic type string as key in context.WithValue (golint)
IIRC, the best practice is to use an unexported type (rather than a bare string) as the context key.
Currently, if a third party service wraps the context with the same string, it will overwrite it.
The best practice mentioned above prevents that.
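A minimal sketch of that practice:

package main

import "context"

// An unexported key type cannot collide with keys defined in other
// packages, even if they use the same underlying value.
type spanKeyType struct{}

var spanKey = spanKeyType{}

func withSpan(ctx context.Context, span interface{}) context.Context {
	return context.WithValue(ctx, spanKey, span)
}

func spanFromContext(ctx context.Context) interface{} {
	return ctx.Value(spanKey)
}

func main() {
	ctx := withSpan(context.Background(), "demo-span")
	_ = spanFromContext(ctx)
}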
Would it be reasonable to make it configurable whether or not the Ping() call is traced on the DB conn? In my case Ping() is used in a service-level health check with a high call frequency, and those traces are just noise.
Add and make use of x-datadog-sampling-priority in propagators.
The datadog agent seems to be receiving empty, zero-value spans, even though it appears that dd-trace-go is sending them fully formed.
Logs from Trace Agent
I'm receiving this log from /var/log/datadog/trace-agent.log:
2017-07-21 21:32:39 ERROR (receiver.go:232) - dropping trace reason: invalid span Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:]: span.normalize: empty `Service` (debug for more info), [Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:] Span[t_id:0,s_id:0,p_id:0,ser:,name:,res:]]
Below is my tracer setup & logs from the webserver process with the trace (I forked dd-trace-go and added log statements in tracer/span.go & tracer/buffer.go).
Tracer and Span Setup
func sendToken() {
t := tracer.NewTracer()
t.SetEnabled(true)
t.DebugLoggingEnabled = true
t.SetServiceInfo("gladys", "go", "webserver")
span := t.NewRootSpan("sendToken.http.request", "gladys", "/token")
defer span.Finish()
childSpan := t.NewChildSpan("Sending auth token", span)
if err := sendAuthToken(); err != nil {
childSpan.FinishWithErr(err)
return
}
childSpan.Finish()
return
}
Logs from Webserver Process
I created a fork and added logs; the following is a reference to the log statement and the corresponding log:
------------------ DoFlush called with spans:
Span #0: Name: sendToken.http.request
Service: gladys
Resource: /token
TraceID: 5807396313473053053
SpanID: 5807396313473053053
ParentID: 0
Start: 2017-07-21 21:32:37.328912911 +0000 UTC
Duration: 21.678358ms
Error: 0
Type:
Tags:
system.pid:26686
http.status:200
Span #1: Name: Sending auth token
Service: gladys
Resource: Sending auth token
TraceID: 5807396313473053053
SpanID: 393932223006074207
ParentID: 5807396313473053053
Start: 2017-07-21 21:32:37.330201662 +0000 UTC
Duration: 17.738033ms
Error: 0
Type:
Tags:
------------ SPAN FINISHED with ID: 393932223006074207 & Service: gladys
------------ SPAN FINISHED with ID: 5807396313473053053 & Service: gladys
These are the logs I received from setting tracer.DebugLoggingEnabled = true:
Sending 1 traces
TRACE: 5807396313473053053
SPAN:
Name: sendToken.http.request
Service: gladys
Resource: /token
TraceID: 5807396313473053053
SpanID: 5807396313473053053
ParentID: 0
Start: 2017-07-21 21:32:37.328912911 +0000 UTC
Duration: 21.678358ms
Error: 0
Type:
Tags:
system.pid:26686
http.status:200
SPAN:
Name: Sending auth token
Service: gladys
Resource: Sending auth token
TraceID: 5807396313473053053
SpanID: 393932223006074207
ParentID: 5807396313473053053
Start: 2017-07-21 21:32:37.330201662 +0000 UTC
Duration: 17.738033ms
Error: 0
Type:
Tags:
Questions
Why might the trace agent not be receiving the fully formed spans I'm attempting to send to it?
I'm running into a lot of DataDog Tracer Errors on the different services we run.
Datadog Tracer Error: Post http://dd-agent.kube-system:8126/v0.3/traces: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Datadog Tracer Error: unable to flush traces, lost 500 traces
This issue occurs on both of our clusters, both running Kubernetes: one self-managed, the other managed by Tectonic. We don't have connectivity issues between our services.
Please note that this issue is intermittent. We do receive traces on DataDog, but there's always a few that get lost.
# go version
go version go1.9.2 darwin/amd64
Our service uses this standard... http://opentracing.io/. Is dd-trace-go compatible with this standard?
The opentracing API allows propagating spans to upstream servers and lets upstream servers extract spans from incoming requests:
func makeSomeRequest(ctx context.Context) ... {
if span := opentracing.SpanFromContext(ctx); span != nil {
httpClient := &http.Client{}
httpReq, _ := http.NewRequest("GET", "http://myservice/", nil)
// Transmit the span's TraceContext as HTTP headers on our
// outbound request.
opentracing.GlobalTracer().Inject(
span.Context(),
opentracing.HTTPHeaders,
opentracing.HTTPHeadersCarrier(httpReq.Header))
resp, err := httpClient.Do(httpReq)
...
}
...
}
Extract the span on the upstream server:
http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
var serverSpan opentracing.Span
appSpecificOperationName := ...
wireContext, err := opentracing.GlobalTracer().Extract(
opentracing.HTTPHeaders,
opentracing.HTTPHeadersCarrier(req.Header))
if err != nil {
// Optionally record something about err here
}
// Create the span referring to the RPC client if available.
// If wireContext == nil, a root span will be created.
serverSpan = opentracing.StartSpan(
appSpecificOperationName,
ext.RPCServerOption(wireContext))
defer serverSpan.Finish()
ctx := opentracing.ContextWithSpan(context.Background(), serverSpan)
...
}
Hi friends,
I am getting this error:
github.com/DataDog/dd-trace-go/tracer/contrib/tracegrpc/grpc.go:112:15: undefined: metadata.FromContext
Seems like the dependencies have changed?
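For what it's worth (my understanding of grpc-go, stated as an assumption): newer versions of google.golang.org/grpc split the old metadata.FromContext into incoming- and outgoing-side helpers, so the integration likely needs something like:

package main

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// readIncoming pulls the metadata attached to an inbound RPC, replacing
// the removed metadata.FromContext call.
func readIncoming(ctx context.Context) (metadata.MD, bool) {
	return metadata.FromIncomingContext(ctx)
}

func main() {
	_, _ = readIncoming(context.Background())
}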
I want to contribute to this library, but I can't get the tests to pass locally.
vagrant@ubuntu-1204:~/src/dd-trace-go/tracer [16:44:50][master] $ go test
2017/03/29 16:44:58 tracer.SetSpansBufferSize max size must be greater than 0, current: 10000
--- FAIL: TestTracesAgentIntegration (0.00s)
Location: transport_test.go:63
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073220)}
Location: transport_test.go:65
Error: Not equal: 200 (expected)
!= 0 (actual)
Location: transport_test.go:63
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073310)}
Location: transport_test.go:65
Error: Not equal: 200 (expected)
!= 0 (actual)
Location: transport_test.go:63
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc420073400)}
Location: transport_test.go:65
Error: Not equal: 200 (expected)
!= 0 (actual)
Location: transport_test.go:63
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/traces", Err:(*net.OpError)(0xc42018c370)}
Location: transport_test.go:65
Error: Not equal: 200 (expected)
!= 0 (actual)
--- FAIL: TestAPIDowngrade (0.00s)
Location: transport_test.go:77
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.0/traces", Err:(*net.OpError)(0xc420088460)}
Location: transport_test.go:79
Error: Not equal: 200 (expected)
!= 0 (actual)
--- FAIL: TestEncoderDowngrade (0.00s)
Location: transport_test.go:90
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.2/traces", Err:(*net.OpError)(0xc420088550)}
Location: transport_test.go:92
Error: Not equal: 200 (expected)
!= 0 (actual)
--- FAIL: TestTransportServices (0.00s)
Location: transport_test.go:101
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.3/services", Err:(*net.OpError)(0xc4200887d0)}
Location: transport_test.go:103
Error: Not equal: 200 (expected)
!= 0 (actual)
--- FAIL: TestTransportServicesDowngrade_0_0 (0.00s)
Location: transport_test.go:113
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.0/services", Err:(*net.OpError)(0xc420089180)}
Location: transport_test.go:115
Error: Not equal: 200 (expected)
!= 0 (actual)
--- FAIL: TestTransportServicesDowngrade_0_2 (0.00s)
Location: transport_test.go:125
Error: Expected nil, but got: &url.Error{Op:"Post", URL:"http://localhost:8126/v0.2/services", Err:(*net.OpError)(0xc420073770)}
Location: transport_test.go:127
Error: Not equal: 200 (expected)
!= 0 (actual)
FAIL
exit status 1
FAIL _/home/vagrant/src/dd-trace-go/tracer 0.065s
Can you write some docs to get my development environment set up?
I've noticed that this integration uses the Client type.
Problem: there are also other types of clients (Ring and ClusterClient).
All these types implement the same interface: https://godoc.org/github.com/go-redis/redis#UniversalClient
I think that the DataDog/Redis integration should use it instead.
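A sketch of what accepting the interface could look like (function name hypothetical; the actual instrumentation is elided):

package main

import "github.com/go-redis/redis"

// WrapClient accepts the UniversalClient interface, so a single wrapper
// covers Client, Ring, and ClusterClient alike.
func WrapClient(c redis.UniversalClient) redis.UniversalClient {
	// Tracing instrumentation would be attached here.
	return c
}

func main() {
	var _ = WrapClient(redis.NewClient(&redis.Options{Addr: "localhost:6379"}))
}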
tracer/contrib/gin-gonic/gintrace.go is currently broken, as it has the wrong context key.
It looks like Context.spanKey was changed recently from a string to a struct.
I'd open a PR to fix the problem, but unfortunately gin's implementation of context.Context only accepts string keys.
I started updating the vendored copy of dd-trace-go today and noticed that the number of dependencies became huge. It was not like this just a few months ago.
2017/08/16 16:05:34 Fetching: github.com/DataDog/dd-trace-go/tracer
2017/08/16 16:05:35 · Fetching recursive dependency: github.com/gocql/gocql
2017/08/16 16:05:36 ·· Fetching recursive dependency: github.com/golang/snappy
2017/08/16 16:05:37 ·· Fetching recursive dependency: gopkg.in/inf.v0
2017/08/16 16:05:39 ·· Fetching recursive dependency: github.com/hailocab/go-hostpool
2017/08/16 16:05:40 · Fetching recursive dependency: github.com/DataDog/dd-trace-go/vendor/github.com/ugorji/go/codec
2017/08/16 16:05:40 · Fetching recursive dependency: github.com/gorilla/mux
2017/08/16 16:05:41 ·· Fetching recursive dependency: github.com/gorilla/context
2017/08/16 16:05:42 · Skipping (existing): github.com/garyburd/redigo/redis
2017/08/16 16:05:42 · Fetching recursive dependency: github.com/go-redis/redis
2017/08/16 16:05:43 · Fetching recursive dependency: github.com/stretchr/testify/assert
2017/08/16 16:05:44 ·· Fetching recursive dependency: github.com/stretchr/testify/vendor/github.com/pmezard/go-difflib/difflib
2017/08/16 16:05:44 ·· Fetching recursive dependency: github.com/stretchr/testify/vendor/github.com/davecgh/go-spew/spew
2017/08/16 16:05:44 · Fetching recursive dependency: github.com/cihub/seelog
2017/08/16 16:05:45 · Fetching recursive dependency: golang.org/x/net/context
2017/08/16 16:05:47 · Fetching recursive dependency: github.com/gin-gonic/gin
2017/08/16 16:05:48 ·· Fetching recursive dependency: github.com/ugorji/go/codec
2017/08/16 16:05:49 ·· Fetching recursive dependency: github.com/thinkerou/favicon
2017/08/16 16:05:49 ·· Fetching recursive dependency: github.com/json-iterator/go
2017/08/16 16:05:51 ·· Fetching recursive dependency: github.com/dustin/go-broadcast
2017/08/16 16:05:52 ·· Fetching recursive dependency: github.com/mattn/go-isatty
2017/08/16 16:05:53 ··· Fetching recursive dependency: golang.org/x/sys/unix
2017/08/16 16:05:55 ·· Fetching recursive dependency: github.com/gin-gonic/autotls
2017/08/16 16:05:55 ··· Fetching recursive dependency: golang.org/x/crypto/acme/autocert
2017/08/16 16:05:57 ···· Fetching recursive dependency: golang.org/x/crypto/acme
2017/08/16 16:05:57 ·· Fetching recursive dependency: github.com/manucorporat/stats
2017/08/16 16:05:58 ·· Fetching recursive dependency: gopkg.in/yaml.v2
2017/08/16 16:06:02 ·· Fetching recursive dependency: gopkg.in/go-playground/validator.v8
2017/08/16 16:06:05 ·· Fetching recursive dependency: github.com/golang/protobuf/proto
2017/08/16 16:06:08 ··· Fetching recursive dependency: github.com/golang/protobuf/ptypes/any
2017/08/16 16:06:08 ·· Fetching recursive dependency: github.com/gin-contrib/sse
2017/08/16 16:06:09 · Fetching recursive dependency: github.com/jmoiron/sqlx
2017/08/16 16:06:10 · Fetching recursive dependency: google.golang.org/grpc/metadata
2017/08/16 16:06:12 · Fetching recursive dependency: golang.org/x/sys/windows
2017/08/16 16:06:12 · Fetching recursive dependency: google.golang.org/grpc
2017/08/16 16:06:12 ·· Fetching recursive dependency: golang.org/x/net/trace
2017/08/16 16:06:12 ··· Fetching recursive dependency: golang.org/x/net/internal/timeseries
2017/08/16 16:06:12 ·· Fetching recursive dependency: golang.org/x/oauth2
2017/08/16 16:06:13 ··· Fetching recursive dependency: cloud.google.com/go/compute/metadata
2017/08/16 16:06:17 ··· Fetching recursive dependency: google.golang.org/appengine/urlfetch
2017/08/16 16:06:19 ···· Fetching recursive dependency: google.golang.org/appengine/internal/urlfetch
2017/08/16 16:06:19 ···· Fetching recursive dependency: google.golang.org/appengine/internal
2017/08/16 16:06:19 ··· Fetching recursive dependency: google.golang.org/appengine
2017/08/16 16:06:19 ·· Fetching recursive dependency: github.com/golang/protobuf/ptypes
2017/08/16 16:06:19 ·· Fetching recursive dependency: github.com/golang/glog
2017/08/16 16:06:20 ·· Fetching recursive dependency: golang.org/x/net/http2/hpack
2017/08/16 16:06:20 ·· Fetching recursive dependency: github.com/golang/protobuf/protoc-gen-go/descriptor
2017/08/16 16:06:20 ·· Fetching recursive dependency: github.com/golang/mock/gomock
2017/08/16 16:06:21 ·· Fetching recursive dependency: golang.org/x/net/http2
2017/08/16 16:06:21 ··· Fetching recursive dependency: golang.org/x/crypto/ssh/terminal
2017/08/16 16:06:21 ··· Fetching recursive dependency: go4.org/syncutil/singleflight
2017/08/16 16:06:23 ··· Fetching recursive dependency: google.golang.org/api/compute/v1
2017/08/16 16:06:26 ···· Fetching recursive dependency: google.golang.org/api/gensupport
2017/08/16 16:06:26 ····· Fetching recursive dependency: google.golang.org/api/googleapi
2017/08/16 16:06:26 ··· Fetching recursive dependency: golang.org/x/net/idna
2017/08/16 16:06:26 ···· Fetching recursive dependency: golang.org/x/text/secure/bidirule
2017/08/16 16:06:29 ····· Fetching recursive dependency: golang.org/x/text/unicode/bidi
2017/08/16 16:06:29 ······ Fetching recursive dependency: golang.org/x/text/unicode/rangetable
2017/08/16 16:06:29 ······· Fetching recursive dependency: golang.org/x/text/internal/gen
2017/08/16 16:06:29 ········ Fetching recursive dependency: golang.org/x/text/unicode/cldr
2017/08/16 16:06:29 ······· Fetching recursive dependency: golang.org/x/text/internal/ucd
2017/08/16 16:06:29 ······ Fetching recursive dependency: golang.org/x/text/internal/triegen
2017/08/16 16:06:29 ····· Fetching recursive dependency: golang.org/x/text/transform
2017/08/16 16:06:29 ···· Fetching recursive dependency: golang.org/x/text/unicode/norm
2017/08/16 16:06:29 ··· Fetching recursive dependency: golang.org/x/net/lex/httplex
2017/08/16 16:06:29 ·· Fetching recursive dependency: google.golang.org/genproto/googleapis/rpc/status
We closely monitor all dependencies in our project and review changes in every vendored library. This update more than doubles the number of libraries we depend on.
I understand that it might not be a big issue for other projects but I am not sure we can continue using DataDog tracing in 1Password :(
Hello,
I'm currently using the master branch, with opentracing (I know it's not yet supported).
I wrapped net/http.DefaultTransport, see #172.
Problem: the outgoing HTTP requests to the DataDog agent are now visible in my traces.
Solution: dd-trace-go should either
not use the global net/http.DefaultClient/DefaultTransport and create its own client,
OR
provide a way to customize the HTTP client/transport.
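A minimal sketch of the first option (the timeout and connection settings are arbitrary assumptions):

package main

import (
	"net/http"
	"time"
)

// agentClient is private to the transport layer, so wrapping the global
// http.DefaultTransport for application tracing never touches it.
var agentClient = &http.Client{
	Timeout:   2 * time.Second,
	Transport: &http.Transport{MaxIdleConnsPerHost: 2},
}

func main() {
	_ = agentClient
}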
Actually, my real problem is not that the DataDog outgoing requests are visible in my traces (that's not a big issue).
My real problem is that, for one of my projects, the http.request_out trace has replaced the most important trace, amqp.consume.
(My hypothesis is that the DataDog trace is written more often than the other one.)
If you want to check (or I can contact your support):
mta-create-queue
Is there a plan to implement the logging functionality for spans in opentracing?
func (s *Span) LogFields(fields ...log.Field) {
// TODO: implementation missing
}
I'm trying to use the opentracing "bridge" + integrations (like gorilla/mux).
I'm initializing my code with the example from the front page https://github.com/DataDog/dd-trace-go .
In my configuration, I set some GlobalTags: env and version (the version of my app).
It works properly if I start my own span with the official opentracing lib.
I see my tags in my spans.
However, if I use the gorilla/mux integration, it doesn't work as expected.
I don't see the tags/meta or the service name in the spans created by gorilla/mux.
(I create the router with dd-trace-go/contrib/gorilla/mux/mux.go, Line 22 in 6ba9d7a.)
I think that the issue is somewhere here: dd-trace-go/opentracing/tracer.go, Lines 151 to 178 in 6ba9d7a.
ddtrace.DefaultTracer is initialized, but the meta/tags and service name are not copied.
For now I will fix this on my side and initialize DefaultTracer with my own config.
I'm using opentracing/dd for tracing throughout my gRPC application, and on each new call I start a new span from the global tracer (initialized to be Datadog's). However, since I initialize the new span for each call with the method name, say grpc_Status, the root span and resource are thus set to that. So now when I go to my dashboard, I can't see the different resources listed; I only see the one resource with whatever root span name happened to be chosen.
https://github.com/DataDog/dd-trace-go/blob/master/opentracing/tracer.go#L62
Here the root span name == root span resource name, which doesn't allow me to see all my different 'resources' in the DataDog dashboard, for now the way I've gotten around this is: https://github.com/processout/dd-trace-go/pull/1/files
func (t *Tracer) startSpanWithOptions(operationName string, options ot.StartSpanOptions) ot.Span {
	// ... span creation elided ...
	otSpan := &Span{
		Span: span,
		context: SpanContext{
			traceID:  span.TraceID,
			spanID:   span.SpanID,
			parentID: span.ParentID,
			sampled:  span.Sampled,
		},
	}
	// ...
}
We need to set the tracer in the struct initialization shown above.
Simple fix:
otSpan := &Span{
	Span: span,
	context: SpanContext{
		traceID:  span.TraceID,
		spanID:   span.SpanID,
		parentID: span.ParentID,
		sampled:  span.Sampled,
	},
	tracer: t,
}
Hello
I have the following error, "no span buffer", since the recent changes to the lib.
I create child spans across services by sending the parent info (traceid, parentid, spanid...) in HTTP headers (see https://github.com/gchaincl/dd-go-opentracing).
Today the parent's buffer is required to create a child, but since it's a private field it is inaccessible.
Do you think it is possible to add a nil check before span.buffer = parent.buffer in the NewChildSpan func?
There's no way to set the AppType from the opentracing tracer config.
It'd probably be easy enough to add the full service description to the config.
I'd like to do the same as tracer.Span.SetError(): https://github.com/DataDog/dd-trace-go/blob/master/tracer/span.go#L183-L202
I can see how to set the same meta tags ("error.msg" etc.), but what about marking the span.error = 1?
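With the vanilla opentracing API, the standard mechanism is the boolean error span tag; whether the Datadog bridge maps that tag onto the span's error flag is exactly the open question here. A sketch:

package main

import (
	"errors"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
	otlog "github.com/opentracing/opentracing-go/log"
)

// markError records an error on a span the opentracing way: the boolean
// "error" tag plus structured log fields.
func markError(sp opentracing.Span, err error) {
	ext.Error.Set(sp, true)
	sp.LogFields(otlog.Error(err))
}

func main() {
	sp := opentracing.StartSpan("demo")
	defer sp.Finish()
	markError(sp, errors.New("boom"))
}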