Comments (15)
Another option is to use a tuple type of [seconds, nanos], like process.hrtime returns.
from opentelemetry-js.
In general I think I would vote for a tuple representation internally, and accept either a tuple or a Number as input.
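A minimal sketch of what accepting both input forms could look like. The helper name and the rounding choices are illustrative, not the actual opentelemetry-js API:

```javascript
// Hypothetical helper: normalize either input form to an
// hrtime-style [seconds, nanoseconds] tuple for internal storage.
function toHrTime(time) {
  if (Array.isArray(time)) {
    // Already a [seconds, nanoseconds] tuple.
    return time;
  }
  // Treat a plain Number as milliseconds (possibly with a decimal part).
  const seconds = Math.trunc(time / 1000);
  const nanos = Math.round((time - seconds * 1000) * 1e6);
  return [seconds, nanos];
}
```

For example, `toHrTime(1500.5)` yields `[1, 500500000]`, while a tuple passes through unchanged.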
from opentelemetry-js.
Well, most other languages just have nice nanosecond-level timestamps, or at least native 64-bit ints that could make representing a nanosecond-level epoch time easy :(
If we were going to write something in the specs for this, my sense would be to say that implementations may choose to use a monotonic clock timestamp for span start/end times and event times if that helps improve accuracy or ease of use, so that for Node/browser we could use the performance.now timestamp.
One thing we could also do is keep the Number type but allow either monotonic (millis since performance.timeOrigin) or epoch (millis since 1970) in the public API, but store all timestamps internally as monotonic performance.now timestamps. We can figure out which type of timestamp was intended from the public API based on its size (if it's > say last month's epoch timestamp then it's an epoch time, else it's a monotonic performance.now time). We can always determine that because there were no Node processes running in 1970....
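The size heuristic above could be sketched as follows. The threshold date, function names, and the `timeOrigin` parameter (standing in for performance.timeOrigin) are illustrative assumptions:

```javascript
// Any fixed past epoch timestamp works as the cutoff: a monotonic
// performance.now() value can never be that large, since no process
// has been running since before the chosen date.
const EPOCH_THRESHOLD_MS = Date.UTC(2019, 0, 1);

function isEpochTimestamp(ms) {
  return ms > EPOCH_THRESHOLD_MS;
}

// Convert either kind of input to epoch milliseconds.
// `timeOrigin` would be performance.timeOrigin in a real implementation.
function toEpochMillis(ms, timeOrigin) {
  return isEpochTimestamp(ms) ? ms : timeOrigin + ms;
}
```

The cutoff only needs to be later than any plausible process start and earlier than any timestamp the API will receive, so its exact value is not critical.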
from opentelemetry-js.
More granularity is better :) I would go with something that has ns; if that's not available, µs is good.
from opentelemetry-js.
Here is a quick summary of the differences between a Number (for example, returned by performance.now) and a tuple (for example, returned by process.hrtime):
- Number: Maximum granularity of 0.2µs. Easier to work with.
- Tuple: Maximum granularity of 1ns. A bit clunkier to work with but it's just 2 numbers so not too bad.
What is the use case for granularity higher than 0.2µs?
from opentelemetry-js.
To clarify my above comment, the Number input would be milliseconds with a decimal component. This is the precision that is used by JavaScript for timestamps.
from opentelemetry-js.
One of the weirdnesses with using Number for milliseconds is that it gets progressively less accurate as the number increases. So if you get values from performance.now() they will have nanosecond accuracy, since they are generally close to zero (assuming your browser window hasn't been open for too long and your Node process has been running for < 90 days). However, for epoch millis you only have ~0.5 microsecond accuracy (try running Date.now() + 1e-6 vs Date.now() + 1e-3).
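The precision loss can be demonstrated directly; the specific magnitudes below are illustrative stand-ins for performance.now()- and Date.now()-scale values:

```javascript
// A double's spacing between representable values (its ulp) grows with
// magnitude, so tiny millisecond increments survive near zero but are
// rounded away entirely at epoch scale.
const small = 10.5;   // performance.now()-scale value (ms since timeOrigin)
const epoch = 1.7e12; // Date.now()-scale value (ms since 1970)

const nanoVisible = (small + 1e-6) !== small;  // 1 ns step survives near zero
const nanoLost = (epoch + 1e-6) === epoch;     // 1 ns step is rounded away
const microVisible = (epoch + 1e-3) !== epoch; // 1 µs step still survives
```

At epoch magnitude the ulp is roughly 0.5 µs of milliseconds, which is where the ~0.5 microsecond figure above comes from.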
from opentelemetry-js.
In any case we would always have 0.2µs precision, since the result from performance.now() needs to be added to the timestamp when the process was started to get the correct start/end times. Personally I think this is an acceptable precision loss for the simplicity of using numbers. For more precision, we can additionally support tuples (which would also need to be added to the process start timestamp).
from opentelemetry-js.
In any case we would always have 0.2µs precision since the result from performance.now() needs to be added to the timestamp when the process was started to get the correct start/end times.
Well, to convert a performance.now timestamp to a nanosecond-precision ISO timestamp, you can separate the fractional and whole-number parts of both the origin and performance.now numbers, add them separately, and then create the ISO timestamp string based on the separately added whole-millisecond and fractional-millisecond parts; see this code that does that. That approach will still have 0.2 microsecond precision relative to the epoch, but it will have nanosecond precision for the relative events within the spans of the Node process.
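A sketch of that whole/fractional split, assuming `origin` stands in for performance.timeOrigin and `now` for a performance.now() reading (names and formatting choices are illustrative, not the code referenced above):

```javascript
function hiResIsoTimestamp(origin, now) {
  // Whole milliseconds are safe integers; add them separately.
  const wholeMs = Math.floor(origin) + Math.floor(now);
  // Each fractional part is near zero, so their sum keeps full
  // double precision instead of being swallowed by the epoch magnitude.
  let fracMs = (origin - Math.floor(origin)) + (now - Math.floor(now));
  const ms = wholeMs + Math.floor(fracMs); // carry any whole ms from the sum
  fracMs -= Math.floor(fracMs);
  // Render the sub-millisecond part as 6 extra digits (nanosecond precision).
  const subMs = String(Math.round(fracMs * 1e6)).padStart(6, '0');
  return new Date(ms).toISOString().replace('Z', subMs + 'Z');
}
```

For instance, `hiResIsoTimestamp(1000.25, 0.5)` renders `'1970-01-01T00:00:01.000750000Z'`: the two 0.2µs-scale fractions combine without loss because both are small.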
Anyway, I'm fine with epoch millis everywhere and I think I'm convinced we should just go with that for simplicity.
from opentelemetry-js.
This would mean a change in how the time is reported, though: it means end() should be called with the duration and not the end time. This would indeed allow higher precision in most cases if internally it's always stored as a tuple, depending on how the data is reported.
I think it's worth a discussion whether that makes high precision possible with Number. The only downside is that precision would be lost over time.
from opentelemetry-js.
This was discussed at today's SIG meeting, and for now a Number will be used as milliseconds for the end time.
This is (potentially) temporary, however, and we still need to discuss what would be the best approach. One discussion that probably needs to happen in the specification repository first is whether end() should accept a duration or an explicit end time.
from opentelemetry-js.
Opened open-telemetry/opentelemetry-specification#139 to discuss end time VS duration and making the timestamps available on the cross-language API.
from opentelemetry-js.
I like the idea of just picking epoch millis and moving on for now.
But to clarify a bit of what I meant: what I was envisioning was that both the start and end times would be performance.now timestamps, such that both could be high-precision relative to each other and to other events in the spans collected from the Node process for the request.
Only when they need to be converted to epoch timestamps, when sent to an exporter, would they drop to 0.2 microsecond accuracy relative to the epoch; they would still all have nanosecond accuracy relative to each other (true for both start and end times and for trace annotations).
By storing start time and duration, we make the end time nanosecond-precise, but the start time still has only 0.2 microsecond precision relative to other spans/events in the process for the request (the portion of the overall trace).
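The start-plus-duration idea reduces, internally, to tuple addition: a duration measured between two performance.now() readings keeps full precision, so adding it to the start tuple yields a nanosecond-accurate end time relative to the start. A minimal sketch, with an illustrative helper name:

```javascript
// Add two hrtime-style [seconds, nanoseconds] tuples,
// carrying overflowed nanoseconds into the seconds field.
function addHrTimes([aSec, aNs], [bSec, bNs]) {
  let sec = aSec + bSec;
  let ns = aNs + bNs;
  if (ns >= 1e9) {
    sec += 1;
    ns -= 1e9;
  }
  return [sec, ns];
}
```

So an end time could be computed as `addHrTimes(startTuple, durationTuple)` without ever squeezing the sum into a single double.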
from opentelemetry-js.
I like this idea a lot. However, this may not be possible in other languages. How should this be described in the spec?
from opentelemetry-js.
Closed via #206
from opentelemetry-js.