open-telemetry / opentelemetry-erlang-contrib
OpenTelemetry instrumentation for Erlang & Elixir
Home Page: https://opentelemetry.io
License: Apache License 2.0
Not a big deal, but it appears Oban stopped setting priority by default on all jobs, so it is now nil.
1) test records span on job insertion (OpentelemetryObanTest)
Error: test/opentelemetry_oban_test.exs:34
match (=) failed
code: assert %{
"messaging.destination": "events",
"messaging.destination_kind": :queue,
"messaging.oban.job_id": _job_id,
"messaging.oban.max_attempts": 1,
"messaging.oban.priority": 0,
"messaging.oban.worker": "TestJob",
"messaging.system": :oban
} = :otel_attributes.map(attributes)
left: %{
"messaging.destination": "events",
"messaging.destination_kind": :queue,
"messaging.oban.job_id": _job_id,
"messaging.oban.max_attempts": 1,
"messaging.oban.priority": 0,
"messaging.oban.worker": "TestJob",
"messaging.system": :oban
}
right: %{
"messaging.destination": "events",
"messaging.destination_kind": :queue,
"messaging.oban.job_id": 7,
"messaging.oban.max_attempts": 1,
"messaging.oban.priority": nil,
"messaging.oban.worker": "TestJob",
"messaging.system": :oban
}
stacktrace:
test/opentelemetry_oban_test.exs:47: (test)
I debugged this and it turns out resp_status is not an integer but "200 OK" in my case (cowboy 2.9.0 and cowboy_telemetry 0.3.1), while the code (as far as I can see) assumes it's an integer?
According to https://github.com/ninenines/cowboy/blob/8795233c57f1f472781a22ffbf186ce38cc5b049/src/cowboy.erl#L37 the resp_status can indeed be a binary as well.
Furthermore, shouldn't http.status be http.status_code?
Assuming this https://github.com/open-telemetry/opentelemetry-specification/pull/3478/files is merged we need to update all the HTTP instrumentation libraries to follow it.
Is your feature request related to a problem? Please describe.
I would like to send selective metrics instead of every query executed.
Describe the solution you'd like
Optionally enable selective emitting of events via configuration:
config :opentelemetry_ecto,
explicit_capture: true,
capture_key: :capture,
capture_value: true
# In my application that is a hot code path
# I care about this query's metrics
MyRepo.all(query, telemetry_options: [capture: true])
# Somewhere else in my application that is NOT a hot code path
# I don't care about this query, it's just noise and expensive to report and store
MyRepo.all(query)
In handle_event/4, if :explicit_capture is enabled, we only emit events with the appropriate capture key and value.
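The filtering described above could be sketched roughly as follows. This is a hypothetical illustration of the proposal, not existing opentelemetry_ecto code; the option names mirror the config example above and the `SelectiveCapture` module name is made up:

```elixir
defmodule SelectiveCapture do
  # Decides whether handle_event/4 should produce a span for this query.
  # `metadata.options` is where Ecto puts the repo call's telemetry_options.
  def capture?(metadata, config) do
    if config[:explicit_capture] do
      key = config[:capture_key] || :capture
      expected = Keyword.get(config, :capture_value, true)

      metadata
      |> Map.get(:options, [])
      |> Keyword.get(key) == expected
    else
      # Default behaviour: capture every query, as today.
      true
    end
  end
end
```

With `explicit_capture: true`, only calls like `MyRepo.all(query, telemetry_options: [capture: true])` would pass the check; with the option off, behaviour is unchanged.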
Describe alternatives you've considered
I can't think of an alternative to conditionally emitting events besides implementing my own subscribe and handle_event/4.
Additional context
Every metric that you store and emit has a cost to it; I'm trying to limit that.
Would you welcome an MR that attempts to implement some version of this?
Using the Finch.stream/5 function fails with a %KeyError{} and detaches the :request_stop telemetry handler.
I've tracked it down to this line
The issue is that when you're using the stream function, meta.result
is going to have whatever the developer returned in their stream function, which is not guaranteed to have anything, let alone be a map. Maybe this is something that needs to be augmented in the Finch telemetry calls.
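A defensive pattern along these lines could avoid the crash until Finch's telemetry is augmented. This is a hypothetical sketch (the `FinchResultGuard` module and function name are made up, not the instrumentation's actual code):

```elixir
defmodule FinchResultGuard do
  # With Finch.stream/5, meta.result holds whatever accumulator the
  # caller's stream function returned, so it may be any term. Only read
  # the status when the result has the {:ok, %{status: _}} shape that
  # Finch.request/3 produces; otherwise set no response attributes.
  def response_attrs({:ok, %{status: status}}) when is_integer(status) do
    %{"http.status_code": status}
  end

  def response_attrs(_other), do: %{}
end
```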
Under the About section of this repo (which you can access from the main repo page -- click the gear icon next to the About title):
We're looking to do this uniformly for all OTel org repos.
For example:
Describe the bug
The OpenTelemetryPhoenix :telemetry handler named :router_dispatch_start is detached on the first request the endpoint receives.
Expected behavior
The handler should handle the event [:phoenix, :router_dispatch, :start] without errors, OR the documentation should state the requirement for Plug.Telemetry to be configured.
Additional context
Reproducible with a Phoenix application without Plug.Telemetry, instrumented with OpenTelemetryPhoenix.
The given plug is responsible for emitting the event [:phoenix, :endpoint, :start], which starts the trace that should collect the span of the :router_dispatch_start handler. The plug may not be present by default if the Phoenix application was generated with an older version of Phoenix.
A mix task that adds dependencies like opentelemetry_phoenix to a project's mix.exs file if it sees that phoenix is a dependency.
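The core of such a task might boil down to a dependency scan like the one below. This is a hypothetical sketch (the `OtelDepSuggester` module is invented for illustration); a real mix task would additionally edit mix.exs:

```elixir
defmodule OtelDepSuggester do
  # Given the deps list from mix.exs (e.g. [{:phoenix, "~> 1.7"}, ...]),
  # return instrumentation packages worth suggesting to the user.
  def suggestions(deps) do
    names = Enum.map(deps, &elem(&1, 0))

    if :phoenix in names and :opentelemetry_phoenix not in names do
      [:opentelemetry_phoenix]
    else
      []
    end
  end
end
```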
Opening both for discussion and in hopes someone would be interested in picking up this project to add to -contrib.
With Bandit now having full support for being a drop-in cowboy replacement for Phoenix, we'll need an instrumentation library for Thousand Island from its telemetry.
No matter which handler I try to use with OpenTelemetry (OpentelemetryPhoenix, OpentelemetryEcto, etc.), none of them are able to attach. I followed the docs and made sure that Plug.Telemetry exists. Any ideas why things won't attach?
Handler {OpentelemetryPhoenix, :endpoint_stop} has failed and has been detached. Class=:error
Reason={:badmatch, :undefined}
Stacktrace=[
{:otel_telemetry, :end_telemetry_span, 2,
[file: 'src/otel_telemetry.erl', line: 59]},
{:telemetry, :"-execute/3-fun-0-", 4,
[
file: '',
line: 150
]},
{:lists, :foreach, 2, [file: 'lists.erl', line: 1342]},
{Plug.Telemetry, :"-call/2-fun-0-", 4,
[file: 'lib/plug/telemetry.ex', line: 76]},
{Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 2385]},
{Plug.Conn, :run_before_send, 2, [file: 'lib/plug/conn.ex', line: 1690]},
{Plug.Conn, :send_resp, 1, [file: 'lib/plug/conn.ex', line: 399]},
{Web.LivestreamController, :action, 2,
[file: 'lib/web/controllers/livestream_controller.ex', line: 1]},
{Web.LivestreamController, :phoenix_controller_pipeline, 2,
[file: 'lib/web/controllers/livestream_controller.ex', line: 1]},
{Phoenix.Router, :__call__, 2, [file: 'lib/phoenix/router.ex', line: 355]},
{Web.Endpoint, :plug_builder_call, 2,
[file: 'endpoint.ex', line: 1]},
{Web.Endpoint, :"call (overridable 3)", 2,
[file: 'lib/web/endpoint.ex', line: 1]},
{Web.Endpoint, :call, 2,
[file: 'endpoint.ex', line: 1]},
{Phoenix.Endpoint.Cowboy2Handler, :init, 4,
[file: 'lib/phoenix/endpoint/cowboy2_handler.ex', line: 43]},
{:cowboy_handler, :execute, 2,
[
file: '',
line: 37
]},
{:cowboy_stream_h, :execute, 3,
[
file: '',
line: 306
]},
{:cowboy_stream_h, :request_process, 3,
[
file: '',
line: 295
]},
{:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}
]
Hello, we've implemented two libraries for OpenTelemetry that we'd like to offer to this project.
Happy to keep maintaining these separately, but I'd love to see them maintained alongside the other libraries if that's useful.
Describe the bug
opentelemetry_phoenix does not include the tags otel.library.name and otel.library.version.
I'm using the latest opentelemetry libraries:
Expected behavior
Spans generated by opentelemetry_phoenix must include otel.library.name and otel.library.version.
Additional context
This problem was previously also present in the Ecto library, but a commit in the opentelemetry_api repository fixed that. As far as I can tell, this happens because the Phoenix instrumentation library uses opentelemetry_telemetry, which uses the get_tracer call, but the tracer isn't registered (properly).
I also did some debugging inside the start_telemetry_span function:
erlang:display(opentelemetry:get_application_tracer(opentelemetry_phoenix)) -- returns default tracer
erlang:display(opentelemetry:get_application_tracer('Elixir.OpentelemetryPhoenix')) -- returns CORRECT tracer
erlang:display(opentelemetry:get_tracer('opentelemetry_phoenix')) -- returns default tracer (current code)
erlang:display(opentelemetry:get_tracer('Elixir.OpentelemetryPhoenix')) -- returns default tracer
I would fix this myself but I'm unsure what's the correct solution here.
Currently opentelemetry_telemetry relies on telemetry_registry, but I am wondering if that's an anti-pattern, because it takes control away from users. Instead, we should add a public API to register spans:
:opentelemetry_telemetry.register(["foo.bar.baz", "bar.bar.bat"])
If you want to rely on telemetry_registry, then you could do:
:opentelemetry_telemetry.register(:telemetry_registry.spannable_events())
Even that has advantages, because now I can add or remove anything I'd like. So I think the user should be in control of what is being reported and what isn't. Thoughts?
Once open-telemetry/opentelemetry-specification#3127 is merged, the Ecto instrumentation needs to be updated to not record the statement by default.
Right now each instrumentation only uses the global TracerProvider. They need to be able to be configured to use a specific TracerProvider (defined as a pid, registered atom, or {via, ...} tuple).
Describe the bug
Add a dependency in mix.exs that depends on opentelemetry_telemetry "~> 1.0", for example opentelemetry_phoenix "~> 1.1". Run mix test. The error is:
** (Mix) Could not start application telemetry_registry: could not find application file: telemetry_registry.app
This is because (I think) opentelemetry_telemetry "1.1.0" in mix.lock does not depend on telemetry_registry.
Expected behavior
Previously, opentelemetry_telemetry "1.0.0" would depend on telemetry_registry "~> 0.3.0" and include telemetry_registry in the build.
Additional context
If I add opentelemetry_telemetry "1.0.0" to mix.exs, telemetry_registry is included in the build. Alternatively, I can add telemetry_registry "~> 0.3.0" to mix.exs.
Describe the bug
When using the cowboy_telemetry_h and cowboy_metrics_h handlers together, the metrics callback fails with:
2024-03-25T12:25:21.637585+01:00 <0.3469.0> telemetry:execute/3 : error: Handler opentelemetry_cowboy_handlers has failed and has been detached. Class=error
Reason={case_clause,[]}
Stacktrace=[{otel_telemetry,pop_from_tracer_stack,1,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/opentelemetry_telemetry/src/otel_telemetry.erl"},
{line,110}]},
{otel_telemetry,end_telemetry_span,2,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/opentelemetry_telemetry/src/otel_telemetry.erl"},
{line,49}]},
{opentelemetry_cowboy,handle_event,4,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/opentelemetry_cowboy/src/opentelemetry_cowboy.erl"},
{line,79}]},
{telemetry,'-execute/3-fun-0-',4,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/telemetry/src/telemetry.erl"},
{line,160}]},
{lists,foreach_1,2,[{file,"lists.erl"},{line,1686}]},
{cowboy_metrics_h,terminate,3,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/cowboy/src/cowboy_metrics_h.erl"},
{line,302}]},
{cowboy_stream,terminate,3,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/cowboy/src/cowboy_stream.erl"},
{line,138}]},
{cowboy_http2,terminate_stream_handler,4,
[{file,
"/usr/src/erlang/erccn/_build/default/lib/cowboy/src/cowboy_http2.erl"},
{line,1334}]}]
Additionally, cowboy_telemetry_h unconditionally overwrites the metrics_callback. That fact is neither documented nor desirable.
Expected behavior
The handler should not fail.
Additional context
cowboy setup:
ProtoOpts =
#{env => #{dispatch => Dispatch},
logger => logger,
metrics_callback => fun prometheus_cowboy2_instrumenter:observe/1,
stream_handlers => [cowboy_telemetry_h, cowboy_metrics_h, cowboy_stream_h]
},
cowboy:start_clear(?MODULE, TransOpts, ProtoOpts).
Each HTTP client/server instrumentation in this repo needs to add an option for including headers as attributes that follows the semconv:
Describe the bug
The opentelemetry_httpoison package is not published publicly to the Hex package manager.
Expected behavior
opentelemetry_httpoison should be installable through Hex, similar to the other instrumentation packages.
Additional context
Before moving the existing projects into this repo we need to settle on a couple of items.
Directory Structure
There are a couple of patterns being used by the other langs. The Go repo https://github.com/open-telemetry/opentelemetry-go-contrib seems to have a nice structure which is also close to the Python structure https://github.com/open-telemetry/opentelemetry-python-contrib. The basic pattern on both of these is organizing libraries by function, e.g. instrumentation, exporter, propagator, etc. The JS repo groups packages slightly differently but it seems like more a function of the server/web split.
Package Naming
Most of the languages seem to have settled on including the library's function as part of the name, e.g. opentelemetry-instrumentation-redis or opentelemetry-exporter-zipkin. This is a divergence from our current naming patterns, but it seems useful for easing adoption of OTel concepts. This seems like the time to undertake the migration if we choose to do so.
Git History
Most of the history probably isn't worth keeping and we can archive the old repos under https://github.com/opentelemetry-beam as they migrate.
I propose we adopt the patterns in Python and Go for directory structure, Python and JS for package naming, and not worry about keeping the Git history. We can migrate one at a time to get things moving and archive as we go along.
We now have a dice roll (open-telemetry/opentelemetry.io#2623) example in Erlang (#177) and are looking for someone to contribute an Elixir (Phoenix) version.
@marcdel maybe you can contribute what you did in open-telemetry/opentelemetry.io#2685
Is your feature request related to a problem? Please describe.
LiveView can be very noisy, and the current implementation of opentelemetry_phoenix only supports enabled/disabled (I believe).
Describe the solution you'd like
I'm not sure the best general solution, presumably an allow and/or deny list for filtering these events.
Describe alternatives you've considered
My current solution, as suggested in the Elixir slack, is a custom sampler.
defmodule SpanNamesSampler do
  def setup(attributes) when is_map(attributes) do
    attributes
  end

  def setup(_), do: %{}

  def description(_), do: "Allows you to drop spans based on their name."

  def should_sample(_ctx, _trace_id, _links, span_name, _span_kind, _attributes, config_attributes) do
    spans_to_drop = Map.get(config_attributes, :spans_to_drop, [])

    if Enum.member?(spans_to_drop, span_name) do
      {:drop, [], []}
    else
      {:record_and_sample, [], []}
    end
  end
end
config :opentelemetry, sampler: {SpanNamesSampler, %{spans_to_drop: ["super-annoying-event-handler"]}}
Describe the bug
Hello everyone,
I have encountered a problem where the OpenTelemetry handler for Cowboy (Erlang) is detached:
Handler opentelemetry_cowboy_handlers has failed and has been detached. Class=error
This happens when I compile the Cowboy project with a command like rebar3 as prod release and then start the Erlang binary with /path/to/erlang-release [foreground/console]. If I start the project with rebar3 shell instead, I do not encounter the handler detach problem.
Running the Erlang project with /path/to/erlang-release [foreground/console] produces error logs like this:
Erlang/OTP 24 [erts-12.1.5] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1] [jit] [dtrace]
=======Before==============start_link=======Eshell V12.1.5 (abort with ^G)
(dockerwatch@cloudnative)1> *SPANS FOR DEBUG*
=ERROR REPORT==== 17-Dec-2021::10:31:29.735444 ===
Handler opentelemetry_cowboy_handlers has failed and has been detached. Class=error
Reason=undef
Stacktrace=[{otel_telemetry,start_telemetry_span,
[opentelemetry_cowboy,<<"HTTP GET">>,
#{req =>
#{body_length => 0,cert => undefined,has_body => false,
headers =>
#{<<"accept">> =>
<<"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9">>,
<<"accept-encoding">> =>
<<"gzip, deflate, br">>,
<<"accept-language">> => <<"en-US,en;q=0.9">>,
<<"connection">> => <<"keep-alive">>,
<<"cookie">> =>
<<"s_fid=737CE1562D9C2574-0BD2980522487B59; _ga=GA1.1.154107132.1625138202; ai_user=3BShZm2OqNNQggPlYdNDXo|2021-08-23T08:23:54.731Z; MicrosoftApplicationsTelemetryDeviceId=e81308c7-b4d5-4479-9b90-6a7cc10880a8; MSFPC=GUID=f0b25916852047deb2fcd15c12d84da6&HASH=f0b2&LV=202106&V=4&LU=1622518798816; django_language=en; optimizely-user-id=3yvznvqrba.9; AMCV_D10F27705ED7F5130A495C99%40AdobeOrg=359503849%7CMCMID%7C21585024227443696071629298091785840441%7CMCAAMLH-1630552041%7C3%7CMCAAMB-1630552041%7CRKhpRz8krg2tLO6pguXWp5olkAcUniQYPHaMWWgdJ3xzPWQmdj0y%7CMCOPTOUT-1629954441s%7CNONE%7CvVersion%7C5.0.1; OPTOUTMULTI=0:0%7Cc1:1%7Cc2:0%7Cc3:0; utag_main=v_id:017b7d1689740004500708d7b69100087000c07f00781$_sn:3$_se:5$_ss:0$_st:1629953950624$is_country_requiring_explicit_consent:false$dc_visit:3$ses_id:1629952150171%3Bexp-session$_pn:1%3Bexp-session$dc_event:1%3Bexp-session$mm_sync:1%3Bexp-session$mm_ga_sync:1%3Bexp-session$dc_region:ap-east-1%3Bexp-session; _mkto_trk=id:157-GQE-382&token:_mch--1629707035373-12244; ajs_anonymous_id=63beef76-095f-4e7b-b106-cc406694c7d0">>,
<<"host">> => <<"localhost:8083">>,
<<"purpose">> => <<"prefetch">>,
<<"sec-ch-ua">> =>
<<"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"96\", \"Google Chrome\";v=\"96\"">>,
<<"sec-ch-ua-mobile">> => <<"?0">>,
<<"sec-ch-ua-platform">> => <<"\"macOS\"">>,
<<"sec-fetch-dest">> => <<"document">>,
<<"sec-fetch-mode">> => <<"navigate">>,
<<"sec-fetch-site">> => <<"none">>,
<<"sec-fetch-user">> => <<"?1">>,
<<"upgrade-insecure-requests">> => <<"1">>,
<<"user-agent">> =>
<<"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36">>},
host => <<"localhost">>,method => <<"GET">>,
path => <<"/">>,
peer => {{127,0,0,1},62613},
pid => <0.626.0>,port => 8083,qs => <<>>,ref => http,
scheme => <<"http">>,
sock => {{127,0,0,1},8083},
streamid => 1,version => 'HTTP/1.1'},
streamid => 1},
#{}],
[]},
{opentelemetry_cowboy,handle_event,4,
[{file,
"/Users/yyyy/ProjectCode/yyy/yyyy-opentracing-demo/tracing-tracking-service/dockerwatch/_build/default/lib/opentelemetry_cowboy/src/opentelemetry_cowboy.erl"},
{line,49}]},
{telemetry,'-execute/3-fun-0-',4,
[{file,
"/Users/yyyy/ProjectCode/yyy/yyyy-opentracing-demo/tracing-tracking-service/dockerwatch/_build/default/lib/telemetry/src/telemetry.erl"},
{line,150}]},
{lists,foreach,2,[{file,"lists.erl"},{line,1342}]},
{cowboy_telemetry_h,init,3,
[{file,
"/Users/yyyy/ProjectCode/yyy/yyyy-opentracing-demo/tracing-tracking-service/dockerwatch/src/cowboy_telemetry_h.erl"},
{line,11}]},
{cowboy_stream,init,3,
[{file,
"/Users/yyyy/ProjectCode/yyy/yyyy-opentracing-demo/tracing-tracking-service/dockerwatch/_build/default/lib/cowboy/src/cowboy_stream.erl"},
{line,112}]},
{cowboy_http,after_parse,1,
[{file,
"/Users/yyyy/ProjectCode/yyy/yyyy-opentracing-demo/tracing-tracking-service/dockerwatch/_build/default/lib/cowboy/src/cowboy_http.erl"},
{line,352}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]
*SPANS FOR DEBUG*
Project Setup
I copied the whole cowboy_telemetry_h.erl file from your GitHub repository and put it into my handlers; is that correct?
Also, opentelemetry_cowboy:setup() is called before the supervisor is started, as shown in my picture.
Thank you so much for helping!
Expected behavior
Everything works fine when using rebar3 shell. I assume I may have set a dependency incorrectly, but after inspecting the file paths it seems like I already have it, and I can run the project with rebar3 shell without any problem.
Additional context
By default, Ecto executes preloads (if there are more than one) in parallel, e.g.,
Repo.all(from(Demo.Post, preload: [:authors, :comments]))
or
Repo.all(Demo.Post)
|> Repo.preload([:authors, :comments])
This will result in the two queries for authors and comments not being linked to the parent span.
Solution
It would be nice if the preload spans were children of the parent span (the query for Demo.Post in the above example) or were created with a link to this parent span (similar to how it's done with Oban jobs in #6).
Alternatives
One workaround for this is passing in_parallel: false to Repo.all or Repo.preload, but this might impact performance.
Additional context
Previously reported in opentelemetry-beam/opentelemetry_ecto#15
Is your feature request related to a problem? Please describe.
The Phoenix instrumentation does not inject tracing headers into the response.
Describe the solution you'd like
Similar to how :otel_propagator_text_map.extract already extracts tracing headers, there should be an :otel_propagator_text_map.inject step to add tracing headers to the response.
Describe alternatives you've considered
This can be done on the application side with a simple custom plug, but it would make more sense integrated into the instrumentation, so it just works.
Additional context
This is very useful if you are running an application in a Kubernetes cluster with a proxy layer between applications, or running a microservice architecture.
Describe the bug
I recently experimented with adding OpenTelemetry to one of the apps my team owns. It's been over a week now since I made the swap and we've had a few exceptions. However, they all show up in Datadog with no stacktrace info and this message: exit:{{#{'__exception__' => true,...},[...]},{'Elixir.MyAppWeb.Endpoint',...}}.
It looks like the exceptions are coming in as events, which as far as I know is compliant with the OTel spec.
It seems like Datadog doesn't handle errors this way; instead they're included as span attributes (sorry about the redaction; the point is that there is a message and a full stacktrace there).
It looks like I might need to add a span processor to add that span attribute. Perhaps this is more of a compatibility issue than anything else. Regardless, if this issue results in a bridge library or a migration guide to follow for Spandex, that would be much appreciated. Of course, I'd be happy to collaborate in any way I can!
Expected behavior
I had expected something like what Spandex presents.
Also, it seems like the exception.message attribute is missing on the event.
Additional context
{:opentelemetry, "~> 1.3"},
{:opentelemetry_api, "~> 1.2"},
{:open_telemetry_decorator, "~> 1.4"},
{:opentelemetry_exporter, "~> 1.6"},
{:opentelemetry_phoenix, "~> 1.1"},
{:opentelemetry_ecto, "~> 1.2"},
{:opentelemetry_cowboy, "~> 0.3"},
Currently CI checks formatting on every version of Elixir we support, which will at times result in check failures between versions. We should only run the format check on the latest version.
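A minimal sketch of how that gate could look in a GitHub Actions workflow. The matrix variable name and the version value are assumptions for illustration, not the repo's actual workflow:

```yaml
# Run the formatter check only against the newest Elixir in the test matrix.
- name: Check formatting
  run: mix format --check-formatted
  if: ${{ matrix.elixir == '1.15' }}  # assumed latest supported version
```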
We'd like to update all of our OpenTelemetry packages to their newest versions, but because of this package's pinned versions we're currently unable to. Is there any way an updated version of opentelemetry_tesla can be published, even if all it contains is the relaxed version requirements?
At the moment we have 3 HTTP server instrumenters (cowboy, phoenix and elli) and 4 HTTP client ones (finch, httpoison, req and tesla).
They expose some common options, some options that somehow overlap but are not equal, and some options that are specific to that instrumenter.
The set of common options I would like to have on each HTTP instrumenter is the following:
{req|resp}_headers_to_span_attributes - allows giving a list of request/response headers to be used as span attributes, per the semantic conventions. A PR adding this is already open, see #184.
span_name - this option should be a string (for fixed span names) or a function that takes the request as input (the representation of the request depends on the library) and returns the span name.
TBD if the string value is needed or if it's OK to allow only the function; it should behave like request_hook below.
This is already implemented (partially) by some instrumenters:
- httpoison takes ot_span_name with value the actual span name (a string)
- req takes span_name with value the actual span name (a string)
- tesla takes span_name with value the actual span name (a string) or a function taking the %Tesla.Env{} struct as argument
request_hook - function that takes the request as input and returns an enumerable of span attributes to be set on span creation. This is useful to add attributes that can be used for sampling decisions.
TBD: not fully satisfied with the name; it should somehow convey that it returns span attributes.
TBD: should it be possible to give a fixed enumerable of attributes instead of a function? It should behave like span_name above.
This is already implemented (partially) by some instrumenters:
- cowboy in PR #128
- elli has the server_name option to add a specific http.server_name attribute
- httpoison has the ot_attributes option, but it is a fixed map of attributes and not a function. It also has convoluted logic that takes the ot_resource_route and infer_route options into account to set the http.route span attribute.
propagate_ctx - boolean, whether to propagate the trace context in outgoing requests (only for HTTP clients).
TBD if a boolean is sufficient or if taking a propagator as input (as done in tesla, see below) can be useful.
This is already implemented (partially) by some instrumenters:
- req with the propagate_trace_ctx option (bool)
- tesla with the propagator option that can be an otel_propagator:t() or the atom :none
In addition to these options, some libraries expose their own specific ones:
- tesla has the mark_status_ok option to allow setting the span status to OK for requests that return an HTTP status code normally mapped as an error. This is against the semantic conventions; we should decide whether to extend this option to other instrumenters or remove it from tesla.
- elli has the excluded_paths option to allow excluding some paths from automatic span creation; may be useful to add to other instrumenters?
- req has the no_path_params option, whose use is not so clear to me
- phoenix has some specific options that do not need any change (endpoint_prefix and adapter)
Describe the bug
I have an Elixir project using the family of opentelemetry libraries. Thanks for maintaining those!
It is a Phoenix application that creates one or more Oban jobs on each request. I wanted to make sure those jobs are connected to the original request. I have opentelemetry_oban set up and I am using OpentelemetryOban.insert to enqueue the job. I am checking otel_trace_id in the logger metadata.
In the controller, I have:
CONTROLLER: [
otel_span_id: '1c53a8b205dc1601',
otel_trace_flags: '01',
otel_trace_id: 'e681d5e0ed8d30000000000000000001'
]
In Oban's perform function I have:
PERFORM: [
otel_span_id: 'ab23dc61e7d8a801',
otel_trace_flags: '01',
otel_trace_id: '9e6d8a8528d2a8000000000000000001'
]
Expected behavior
As far as I understand, using OpentelemetryOban.insert should ensure the trace_id is the same in the Oban job.
Additional context
I started debugging it by adding a bunch of IO.inspects in instrumentation/opentelemetry_oban/lib/opentelemetry_oban/job_handler.ex. I've found the code that extracts the traceparent, and it seems to be correct:
%{"traceparent" => "00-e681d5e0ed8d30000000000000000001-61f8a6206fdd7001-01"}
The trace_id is the same as in the controller up to this point:
https://github.com/open-telemetry/opentelemetry-erlang-contrib/blob/main/instrumentation/opentelemetry_oban/lib/opentelemetry_oban/job_handler.ex#L78
However, the call to start_telemetry_span forgets the trace_id and makes a new one, so everything in Oban's perform has a different trace_id.
Maybe, while creating the new span, we need to somehow pass the parent? I am not sure how to do it properly, but with some guidance, I am happy to try and open a PR.
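One way to pass the parent might look like the sketch below, using the same propagator API the extract step already relies on. This is a hypothetical illustration; exactly where this runs inside the job handler, and the carrier shape expected by the configured propagator, are assumptions to verify:

```elixir
# Hypothetical sketch: extract the remote parent from the traceparent
# stored with the job and make it the current context before starting
# the span, so the new span joins the controller's trace instead of
# starting a fresh one.
traceparent = "00-e681d5e0ed8d30000000000000000001-61f8a6206fdd7001-01"

# extract/1 reads the carrier with the configured propagators and sets
# the extracted span context as the current context for this process.
:otel_propagator_text_map.extract([{"traceparent", traceparent}])

# Any span started from here on (e.g. via start_telemetry_span) should
# then have the original trace_id as its parent.
```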
Describe the bug
Adding :opentelemetry_cowboy.setup() to the start/2 function in application.ex in a Phoenix app results in an error message on server start.
Expected behavior
No error and Cowboy spans emitted.
Additional context
Versions used:
{:opentelemetry, "1.0.0-rc.3"}
{:opentelemetry_api, "1.0.0-rc.3.2"}
{:opentelemetry_cowboy, "0.1.0"}
Also reproduced by @maciej-szlosarczyk on an empty Phoenix project: https://gist.github.com/maciej-szlosarczyk/91c257b738c429784889fb579f6e3636
Tracking which Semantic Conventions couldn't be replaced to figure out what the path forward will be with them:
We now have https://hex.pm/packages/opentelemetry_semantic_conventions which is generated from the spec semantic conventions https://github.com/open-telemetry/opentelemetry-specification/tree/main/semantic_conventions yaml
All the existing libraries must be updated to use these macros for attribute keys instead of hardcoding the keys.
Future PRs won't be merged unless they use this library for attributes that are semantic conventions.
Describe the bug
Tons of UPDATE "public"."oban_producers" AS o0 SET "updated_at" = $1 WHERE (o0."uuid" = $2) queries showing up as parentless spans.
Expected behavior
The span from engine.ex should be respected and used as the parent span.
Additional context
Offending code: https://github.com/sorentwo/oban/blob/main/lib/oban/queue/engine.ex#L117
I suspect this is probably happening for all the functions in engine.ex as there seems to be nothing special about this function compared to all the other ones.
TL;DR: provide an option for users to set up Ecto telemetry via the repo auto-discovery event, in addition to the current explicit setup.
At the current version (1.0.0-rc.3), Ecto setup is done explicitly for each repo. Given that Ecto provides repo init events, it would be very useful to have some way to set up OpenTelemetry to listen for those events and "auto-detect" repos. While the explicit approach we have today has many merits (it doesn't rely on init order, for example), the auto-detect approach gives a bit more of a "drop-in" experience for Ecto.
Assuming this feature is desirable, I think the ideal scenario is auto-discovery by default: by convention, the setup function would listen to [:ecto, :repo, :init] and defer to explicit setup per repo (the current behaviour).
Some pseudo-code that can achieve this solution:
def setup(config \\ []) do
  :telemetry.attach({__MODULE__, :repo_init}, [:ecto, :repo, :init], &__MODULE__.handle_repo_init/4, config)
end

def handle_repo_init(_event_name, _measurements, metadata, config) do
  event_prefix = metadata.opts[:telemetry_prefix] ++ [:query]
  OpentelemetryEcto.setup_repo(event_prefix, config) # a rename of the current setup/2 impl
end
If the feature is desirable but no breaking changes are preferred, the implementation will be exactly the same, just with different function names (something like setup_all_repos/1?).
If this feature fits OpenTelemetry vision, I'm open to work on some PR for that after guidance around which approach to take.
Because the Cowboy telemetry handlers execute in a different process than the user's code, they must use fetch_parent_ctx from opentelemetry_process_propagator to get what should be the active context of the process.
We can't simply move the start/end of the span to a middleware (middlewares do run in the same process as the user's code), because each middleware has only an execute function and no way to end the span. Because of that, we should have a middleware that does the fetch_parent_ctx for the user. The middleware can then set the current context in the pdict.
This will give the user an experience much more like that of other web servers, both in Erlang and other languages.
Describe the bug
Create a new mix project. Add {:opentelemetry_cowboy, "~> 0.2"} to deps. Run:
$ mix test
** (Mix) Could not start application telemetry_registry: could not find application file: telemetry_registry.app
When I check opentelemetry_cowboy, it lists the dependency:
opentelemetry_cowboy/src/opentelemetry_cowboy.app.src:6: telemetry_registry]},
but there is no telemetry_registry in rebar.config for opentelemetry_cowboy.
Expected behavior
No complaints about missing telemetry_registry when running tests.
Additional context
I mistakenly thought this was due to opentelemetry_telemetry, but this time I am right. Or I will be next time.
Describe the bug
Maybe not really a bug but more a lack of configuration or adherence to the spec.
We're seeing this kind of error:
OTLP grpc export failed with GRPC status 8 : grpc: received message after decompression larger than max (13104533 vs. 4194304)
This suggests the message is too large and hits the 4194304-byte (4 MiB)
limit. However, some projects, as in the issue linked here, seem to have configurable ways of increasing this limit.
Possibly a workaround is to play with the batch span processor env vars, decreasing the scheduled delay and max queue size so batches are exported before they reach this size limit?
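For reference, the batch span processor knobs mentioned above map to the standard OpenTelemetry SDK environment variables; a sketch (the right values depend on your span volume and sizes):

```shell
# Flush batches more often and cap how much can accumulate between exports,
# so a single OTLP message stays well under the server's 4 MiB limit.
export OTEL_BSP_SCHEDULE_DELAY=1000        # export every 1s (spec default: 5000ms)
export OTEL_BSP_MAX_QUEUE_SIZE=1024        # spec default: 2048
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=128  # spec default: 512
```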
Expected behavior
A way to avoid this error, either by a new configuration or some documentation explaining how to tackle it.
Additional context
The global propagators should only be used if the user doesn't supply a propagator. This applies anywhere propagators are used, but it is largely the HTTP instrumentation libraries that we have, and we need to make sure this is the case for each of them.
Describe the bug
I have multiple repos but the only setup command that seems to work for opentelemetry_ecto is :repo
Expected behavior
I expect to run OpentelemetryEcto.setup([:reel, :repo])
as well as OpentelemetryEcto.setup([:reel, MyApp.Repo])
and OpentelemetryEcto.setup([:reel, MyApp.Repo2])
and have them all work
Additional context
It is entirely possible that I don't understand how this is supposed to work, so forgive me if that is the case. How do I specify more than one Repo if more than one repo exists?
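If it helps to illustrate the confusion: setup/1 takes a telemetry event prefix (a list of atoms), not a repo module, and can be called once per repo. A sketch, assuming the second repo's default telemetry prefix is [:reel, :repo2] (the prefix is derived from the repo module name, so verify it against your config):

```elixir
# In the application's start/2 callback: one setup call per repo, each with
# that repo's telemetry event prefix (not the module name).
OpentelemetryEcto.setup([:reel, :repo])   # handles [:reel, :repo, :query] events
OpentelemetryEcto.setup([:reel, :repo2])  # handles [:reel, :repo2, :query] events
```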
Opening this in hopes there is a kind soul interested in tackling this -- well, first assuming @bryannaegele supports this plan.
I'd like publishing to Hex done through GitHub Actions. There are a couple of ways to do this, but I'm not sure which is best. My initial idea is to base it on tags, so when a tag phoenix-1.1.1
is created, version 1.1.1
of opentelemetry_phoenix
is published. The problem is that it would be nice to have the application version based on the tag if we aren't doing it off of a PR where it can be reviewed, and this would be a pain for the Erlang apps.
Best would be if a PR could be opened to bump and release an application and it is done when that PR is merged. I don't know if that is even possible in Github Actions.
Some other languages' OpenTelemetry HTTP instrumentation libraries provide convenient options for filtering and modifying the spans created by the instrumentation libraries. It would be great to provide these options in Erlang/Elixir. See this discussion in Slack.
Describe the solution you'd like
The interface could mirror either the go or node.js implementations:
Specifically, from the go instrumentation, the concept of Labelers and Filters, and from the Node.js instrumentation the headersToSpanAttributes
configuration object.
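As a strawman for the Elixir side (the option names :filter_fn and :headers_to_span_attributes are entirely hypothetical and exist in none of our libraries today), the interface could look like:

```elixir
# Hypothetical configuration sketch, not a real API.
OpentelemetryPhoenix.setup(
  # Like go's otelhttp Filters: return false to skip creating a span.
  filter_fn: fn conn -> conn.request_path != "/health" end,
  # Like Node's headersToSpanAttributes: capture these headers as attributes.
  headers_to_span_attributes: ["x-request-id", "user-agent"]
)
```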
Is your feature request related to a problem? Please describe.
To fully enable APM backends, gRPC spans should be properly marked with SpanKind and RpcSystem.
Describe the solution you'd like
Set SpanKind=SERVER and RpcSystem=grpc.
Additional context
PR forthcoming.
Maybe this context should be set on the start event, but it was called on the stop event.
My spans are shown in two different traces, like this:
I want all of my own spans to be under the main HTTP POST span.
Thank you for your help, or for guidance on how to achieve this.
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
instrumentation/opentelemetry_dataloader/docker-compose.yml
postgres 16.2
instrumentation/opentelemetry_ecto/docker-compose.yml
postgres 16.2
instrumentation/opentelemetry_oban/docker-compose.yml
postgres 16.2
instrumentation/opentelemetry_redix/docker-compose.yml
.github/workflows/elixir.yml
actions/checkout v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
circleci/postgres 13.5-ram
ubuntu 22.04
circleci/postgres 13.5-ram
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
circleci/postgres 13.5-ram
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
.github/workflows/erlang.yml
actions/checkout v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
actions/checkout v4
erlef/setup-beam v1
actions/cache v4
.github/workflows/labeler.yml
actions/labeler v4
.github/workflows/publish-mix-hex-release.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/github-script v7@60a0d83039c74a4aee543508d2ffcb1c3799cdea
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/github-script v7@60a0d83039c74a4aee543508d2ffcb1c3799cdea
erlef/setup-beam v1@2f0cc07b4b9bea248ae098aba9e1a8a1de5ec24c
peter-evans/create-pull-request v6@a4f52f8033a6168103c2538976c07b467e8163bc
.github/workflows/release-drafter.yml
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
release-drafter/release-drafter v6
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
ubuntu 22.04
instrumentation/opentelemetry_bandit/mix.exs
opentelemetry_api ~> 1.2
opentelemetry_semantic_conventions ~> 0.2
opentelemetry_telemetry ~> 1.0
plug >= 1.15.0
telemetry ~> 1.2
ex_doc >= 0.0.0
excoveralls ~> 0.18
bandit ~> 1.0
dialyxir ~> 1.4
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
req ~> 0.4
instrumentation/opentelemetry_dataloader/mix.exs
telemetry ~> 1.0
opentelemetry_api ~> 1.2
opentelemetry_telemetry ~> 1.0
dataloader ~> 2.0
opentelemetry_exporter ~> 1.0
opentelemetry ~> 1.0
ex_doc ~> 0.31
ecto_sql >= 3.0.0
postgrex >= 0.15.0
dialyxir ~> 1.1
opentelemetry_process_propagator ~> 0.2.1
instrumentation/opentelemetry_ecto/mix.exs
telemetry ~> 0.4 or ~> 1.0
opentelemetry_api ~> 1.0
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
ex_doc ~> 0.31
ecto_sql >= 3.0.0
postgrex >= 0.15.0
dialyxir ~> 1.1
opentelemetry_process_propagator ~> 0.3
instrumentation/opentelemetry_finch/mix.exs
telemetry ~> 0.4 or ~> 1.0
opentelemetry_api ~> 1.0
opentelemetry_semantic_conventions ~> 0.2
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
ex_doc ~> 0.31.0
finch ~> 0.18
dialyxir ~> 1.2
bypass ~> 2.0
instrumentation/opentelemetry_httpoison/mix.exs
httpoison ~> 1.6 or ~> 2.0
opentelemetry_api ~> 1.0
opentelemetry_semantic_conventions ~> 0.2
instrumentation/opentelemetry_nebulex/mix.exs
dialyxir ~> 1.1
ex_doc ~> 0.31.0
nebulex ~> 2.1
opentelemetry ~> 1.0
opentelemetry_api ~> 1.0
opentelemetry_exporter ~> 1.0
opentelemetry_telemetry ~> 1.0
telemetry ~> 0.4 or ~> 1.0
instrumentation/opentelemetry_oban/mix.exs
oban ~> 2.0
opentelemetry_api ~> 1.2
opentelemetry_telemetry ~> 1.1
opentelemetry_semantic_conventions ~> 0.2
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
telemetry ~> 0.4 or ~> 1.0
ex_doc ~> 0.31
postgrex >= 0.0.0
instrumentation/opentelemetry_phoenix/mix.exs
nimble_options ~> 0.5 or ~> 1.0
opentelemetry_api ~> 1.0
opentelemetry_telemetry ~> 1.0
opentelemetry_process_propagator ~> 0.3
opentelemetry_semantic_conventions ~> 0.2
telemetry ~> 0.4 or ~> 1.0
plug >= 1.11.0
cowboy_telemetry ~> 0.4
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
ex_doc ~> 0.31
plug_cowboy ~> 2.4
dialyxir ~> 1.1
instrumentation/opentelemetry_redix/mix.exs
dialyxir ~> 1.1
ex_doc ~> 0.31.0
opentelemetry ~> 1.0
opentelemetry_api ~> 1.0
opentelemetry_process_propagator ~> 0.3
opentelemetry_semantic_conventions ~> 0.2
opentelemetry_exporter ~> 1.0
redix ~> 1.0
telemetry ~> 0.4 or ~> 1.0
instrumentation/opentelemetry_req/mix.exs
jason ~> 1.3
opentelemetry_api ~> 1.0
opentelemetry_semantic_conventions ~> 0.2
req >= 0.3.5
ex_doc ~> 0.31
opentelemetry ~> 1.0
instrumentation/opentelemetry_tesla/mix.exs
opentelemetry ~> 1.0
opentelemetry_api ~> 1.2
opentelemetry_telemetry ~> 1.1
opentelemetry_semantic_conventions ~> 0.2
tesla ~> 1.4
ex_doc ~> 0.31
bypass ~> 2.1
jason ~> 1.3
mix.exs
propagators/opentelemetry_process_propagator/mix.exs
dialyxir ~> 1.1
ex_doc ~> 0.31
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
utilities/opentelemetry_telemetry/mix.exs
dialyxir ~> 1.4.0
ex_doc ~> 0.31
opentelemetry ~> 1.0
opentelemetry_exporter ~> 1.0
The Semantic Conventions spec introduced a significant breaking change (opinion) to the HTTP Tracing spec's span name conventions in v1.18, along with a number of other breaking changes in v1.20. This was recognized as a disruptive change for end users by several maintainer groups, and the spec WG decided that the languages should hold off on implementing that version until the HTTP section of the semantic conventions spec was marked stable. This evolved into a broader migration plan for semantic convention adoption choices by end users across the entire spec.
Open question on the http span name issue.
Resolution: HTTP span names are included in the "should not change until HTTP conventions stabilized" plan. Clarified in this open PR
Summary of the determinations that have come out of these plans:
OTEL_SEMCONV_STABILITY_OPT_IN
env var: with the http/dup
value, where a conflict exists between stable and unstable conventions, the stable version should be emitted.
Reference the following resources for any questions when dealing with semantic convention updates:
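For concreteness, the opt-in mechanism named above is driven by a single environment variable, with values per the spec's migration plan:

```shell
# Emit only the stable HTTP conventions:
export OTEL_SEMCONV_STABILITY_OPT_IN=http
# Or emit both the old and the stable conventions during migration:
# export OTEL_SEMCONV_STABILITY_OPT_IN=http/dup
```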
This raises some open questions:
Describe the bug
When Phoenix.LiveView assign_async/3
is used, any spans from the async task will not have a parent context, even when OpentelemetryProcessPropagator is used. This means that any Ecto query becomes a detached span.
Expected behavior
I'd expect parent context to be propagated.
Additional context
Repro script:
#!/usr/bin/env elixir
Application.put_env(:sample, Example.Endpoint,
http: [ip: {127, 0, 0, 1}, port: 5001],
server: true,
live_view: [signing_salt: "you-are-the-best"],
secret_key_base: String.duplicate("a", 64)
)
Application.put_env(:opentelemetry, :traces_exporter, {:otel_exporter_stdout, []})
Mix.install([
{:plug_cowboy, "~> 2.5"},
{:jason, "~> 1.0"},
{:phoenix, "~> 1.7"},
{:phoenix_live_view, "~> 0.20.1"},
{:opentelemetry, "~> 1.3"},
{:opentelemetry_api, "~> 1.2"},
{:opentelemetry_exporter, "~> 1.6"},
{:opentelemetry_process_propagator, "~> 0.3.0"},
{:opentelemetry_phoenix, "~> 1.1"}
])
defmodule Example.ErrorView do
def render(template, _), do: Phoenix.Controller.status_message_from_template(template)
end
defmodule Example.HomeLive do
use Phoenix.LiveView, layout: {__MODULE__, :live}
require OpenTelemetry.Tracer, as: Tracer
def mount(_params, _session, socket) do
IO.inspect({Tracer.current_span_ctx(), connected?(socket)})
span_ctx = Tracer.current_span_ctx()
{:ok,
socket
|> assign_async(:items, fn ->
# detached spans are expected here I guess
Tracer.with_span "span-without-propagation" do
{:ok, %{items: []}}
end
end)
|> assign_async(:special_items, fn ->
with_telemetry_ctx(fn ->
# BUG! this is also detached span?
Tracer.with_span "span-with-propagation" do
{:ok, %{special_items: []}}
end
end)
end)
|> assign_async(:special_items_ctx, fn ->
Tracer.set_current_span(span_ctx)
Tracer.with_span "span-with-set-context" do
{:ok, %{special_items_ctx: []}}
end
end)}
end
def render("live.html", assigns) do
~H"""
<script src="/assets/phoenix/phoenix.js"></script>
<script src="/assets/phoenix_live_view/phoenix_live_view.js"></script>
<script>
let liveSocket = new window.LiveView.LiveSocket("/live", window.Phoenix.Socket)
liveSocket.connect()
</script>
<style>
* { font-size: 1.1em; }
</style>
<%= @inner_content %>
"""
end
def render(assigns) do
~H"""
Hello world
"""
end
# Basically what Ecto is doing:
# https://github.com/open-telemetry/opentelemetry-erlang-contrib/blob/3f13bc8b2042071b723d0ea8c7b118fd10f42d26/instrumentation/opentelemetry_ecto/lib/opentelemetry_ecto.ex#L116
defp with_telemetry_ctx(fun) do
parent_context =
case OpentelemetryProcessPropagator.fetch_ctx(self()) do
:undefined ->
OpentelemetryProcessPropagator.fetch_parent_ctx(1, :"$callers")
ctx ->
ctx
end
parent_token =
if parent_context != :undefined do
OpenTelemetry.Ctx.attach(parent_context)
else
:undefined
end
result = fun.()
if parent_token != :undefined do
OpenTelemetry.Ctx.detach(parent_token)
end
result
end
end
defmodule Example.Router do
use Phoenix.Router
import Phoenix.LiveView.Router
pipeline :browser do
plug(:accepts, ["html"])
end
scope "/", Example do
pipe_through(:browser)
live("/", HomeLive, :index)
end
end
defmodule Example.Endpoint do
use Phoenix.Endpoint, otp_app: :sample
socket("/live", Phoenix.LiveView.Socket)
plug(Plug.Static, from: {:phoenix, "priv/static"}, at: "/assets/phoenix")
plug(Plug.Static, from: {:phoenix_live_view, "priv/static"}, at: "/assets/phoenix_live_view")
plug(Example.Router)
end
{:ok, _} = Supervisor.start_link([Example.Endpoint], strategy: :one_for_one)
:ok = OpentelemetryPhoenix.setup(adapter: :cowboy2)
Process.sleep(:infinity)
This gives the following debug log:
*SPANS FOR DEBUG*
{span,258017312708833371964411571289189239406,8451101793218774242,
{tracestate,[]},
undefined,<<"Example.HomeLive.mount">>,server,-576460749570026539,
-576460749554976171,
{attributes,128,infinity,0,#{}},
{events,128,128,infinity,0,[]},
{links,128,128,infinity,0,[]},
undefined,1,false,
{instrumentation_scope,<<"opentelemetry_phoenix">>,<<"1.2.0">>,
undefined}}
{span,114061670040808529142589516162197117224,3262951029120267145,
{tracestate,[]},
# THIS LOOKS OK
undefined,<<"span-without-propagation">>,internal,-576460749554952607,
-576460749554903105,
{attributes,128,infinity,0,#{}},
{events,128,128,infinity,0,[]},
{links,128,128,infinity,0,[]},
undefined,1,false,undefined}
{span,258017312708833371964411571289189239406,7524343052582475866,
{tracestate,[]},
# THIS LOOKS OK, CONTEXT MANUALLY SET
8451101793218774242,<<"span-with-set-context">>,internal,
-576460749554866196,-576460749554842512,
{attributes,128,infinity,0,#{}},
{events,128,128,infinity,0,[]},
{links,128,128,infinity,0,[]},
undefined,1,false,undefined}
{span,156269372488035503806460351374740489154,16606203842975826459,
{tracestate,[]},
# THIS LOOKS BROKEN!
undefined,<<"span-with-propagation">>,internal,-576460749544575511,
-576460749544498287,
{attributes,128,infinity,0,#{}},
{events,128,128,infinity,0,[]},
{links,128,128,infinity,0,[]},
undefined,1,false,undefined}
I'd expect to have a parent id in span-with-propagation.
I'm unsure if it's an issue with OpentelemetryProcessPropagator or the way async assign in LiveView works.