uber-go / zap
Blazing fast, structured, leveled logging in Go.
Home Page: https://pkg.go.dev/go.uber.org/zap
License: MIT License
If the logger level is set to ignore all panic/fatal logs (e.g., by using FatalLevel+1), then there is an inconsistency in how we handle the logs.
If we do logger.Panic(...), then the log is skipped, but the process still crashes.
If we use the Check method and pass in PanicLevel or FatalLevel, then we return a not-OK checked message and we don't actually panic or fatal. Normally, Check will cause a panic on the Write if the level is enabled.
I've written some tests to reproduce this behaviour:
https://github.com/uber-go/zap/compare/pv_panic_always?expand=1
I think the best option is for Check to always return true for PanicLevel and FatalLevel.
It can often be convenient for logs to have a name that is part of each log line. For example, if multiple systems are writing to the same log file, or when pasting log messages somewhere, a name makes it obvious which system a log line came from.
I would like to propose adding a name argument to the logger.New and logger.With methods. When a child logger is created, the two log names are concatenated together with a '.'.
So for the example in the docs:
{"level":"warn","msg":"Log without structured data..."}
Would be something like:
{"name":"myapp","level":"warn","msg":"Log without structured data..."}
or for a child logger:
{"name":"myapp.child","level":"warn","msg":"Log without structured data..."}
Hi,
I type & run:
package main

import (
	"os"

	"github.com/uber-go/zap"
)

func main() {
	writeSyncer := zap.AddSync(os.Stderr)
	logger := zap.NewJSON(
		zap.Output(writeSyncer),
		zap.ErrorOutput(writeSyncer),
	)
	logger.With(zap.Bool("wrong_bool", false)).Info("Hi!")
}
then, I got
{"msg":"Hi!","level":"info","ts":1463991525501189122,"fields":{"wrong_bool":true}}
The value of the field named wrong_bool should be false, but the output shows true.
I tried using this logger in a concurrent application, but the messages run over top of each other in the log. I am wondering if I am using it wrong, or whether the library isn't concurrency-safe. I have a simple example:
package main

import "github.com/uber-go/zap"

var logger zap.Logger

func main() {
	logger = zap.NewJSON()
	for i := 0; i < 800; i++ {
		go printer(i)
	}
}

func printer(i int) {
	logger.Info("Counting",
		zap.Int("i", i),
	)
}
Here is a section of output from the above program:
{"msg":"Counting","level":"info","ts":1469196244924860064,"fields":{"i":8}}
{"msg":"Counting","level":"info","ts":1469196244924867134,"fields":{"i":42}}
{"msg":"Counting","level":"info","ts":1469196244924866589,"fields":{"i":20}}
{"msg":"Counti{"msg":"Counting","level":"info","ts":1469196244924879429,"fields":{"i":21}}
ng","level":"info","ts":1469196244924870959,"fields":{"i":9}}
{"msg":"Counting","level":"info","ts":1469196244924879050,"fields":{"i":36}}
{"msg":"Counting","level":"info","ts":1469196244924873937,"fields":{"i":43}}
{"msg":"Counting","level":"info","ts":1469196244924904543,"fields":{"i":22}}
{"msg":"Counting","level":"info","ts":1469196244924910632,"fields":{"i":10}}
{"msg":"Counting","level":"info","ts":1469196244924916792,"fields":{"i":23}}
zap supports nested JSON objects with zap.Nest or zap.Marshaler, but does not support JSON arrays yet. The current workaround is to use zap.Object and implement MarshalJSON to output a JSON array. How about built-in support for JSON arrays, like zap.Array?
Thanks to @osamingo, we have more benchmarks!
If we run into an error encoding a value, we're currently just putting the error message under the original key (i.e., we output "foo":"failed to serialize User"). Instead, we should put the error message under a different key (i.e., "foo-error":"failed to serialize User"). This will help users emit data with a more consistent schema.
zap.Logger is really fast, but it's quite verbose - users must repeatedly reference the zap package. This is a good trade-off for performance-critical applications, but it's not good for most users. Consider the impression that this snippet from the README leaves on potential users and contributors:
logger.Info("Failed to fetch URL.",
	zap.String("url", url),
	zap.Int("attempt", tryNum),
	zap.Duration("backoff", sleepFor),
)
The wrapper provided by the zbark sub-package is great for our internal use, since we're using the bark.Logger interface all over. However, it's also unpleasantly verbose:
logger.WithFields(bark.Fields{
	"url":     url,
	"attempt": tryNum,
	"backoff": sleepFor,
}).Info("Failed to fetch URL.")
To show off how easy it is to write a clean wrapper on top of the core zap.Logger type, spur adoption, and generally make a better impression on first-time users, I'd like to include a higher-level zap.SugaredLogger (name TBD - we could even rename the current logger to CoreLogger and call the new thing Logger). I'm open to other ideas, but I'd like something similar to the log15 API:
logger := zap.Sugar(coreLogger)
logger.Info("Failed to fetch URL",
	"url", url,
	"attempt", tryNum,
	"backoff", sleepFor,
)
anotherCoreLogger := zap.Desugar(logger)
This isn't a blocking issue for 1.0, but we should provide a Sentry hook (either in a subpackage or a completely separate repo). It'll be useful for internal users and as an example of hook-based external integrations.
Add a TextEncoder for pretty, non-JSON console output. This is definitely a nice-to-have for the wider community, and it forces us to keep loggers and encoders strongly separate.
AddCaller appears to run from within Check() instead of at Write() time.
This program:
package main

import "github.com/uber-go/zap"

func main() {
	log := zap.NewJSON(zap.AddCaller())
	if cm := log.Check(zap.InfoLevel, "check"); cm.OK() {
		cm.Write(zap.String("line", "write"))
	}
}
generates this output on my machine:
{"msg":"checked_message.go:57: check","level":"info","ts":1466538422225435174,"fields":{"line":"write"}}
Add a field factory for stacktraces. We should probably add a convenient ErrStack(error) factory, too, since this will be a common combination.
We can use the following to convert between float64 and int64:
https://golang.org/pkg/math/#Float64bits
https://golang.org/pkg/math/#Float64frombits
Provide a wrapper that's compatible with bark.
Are there any plans to add support for []byte type? I'm not gonna lie, I did a TL;DR on the source code.
I noticed that if I do something like this:
logger := zap.NewJSON()
logger.Info("dupe fields", zap.String("foo", "bar"), zap.String("foo", "baz"))
I end up getting a JSON record like this:
{"msg":"dupe fields","level":"info","ts":1466086541235126983,"fields":{"foo":"bar","foo":"baz"}}
Now, the above example probably doesn't make a lot of sense, but it is more realistic if you consider an application that passes loggers around, with various layers adding context via zap.Logger.With.
I can't find anything that says duplicate fields are against the spec, and it doesn't seem to break json.Unmarshal or Python's json.loads, which both end up taking the last "foo":"baz", but it still might be worth handling (or even just documenting).
Since NewJSON already takes a variadic number of fields (and we don't want to take an options struct that'll almost always be nil), add a Configure() method to loggers.
That will also let us (maybe) disable or override the time.Now() output included in each log message, which will let us make the examples runnable.
Loggers typically provide a singleton root logger for convenience. I'm not seeing that in zap.
I realize it's trivial to implement in application code, but I'm curious about the rationale behind the omission from zap. Is it planned and just not implemented yet?
Hello,
zap.Error() is a convenience method, so I feel like making it handle nil errors gracefully is a reasonable move.
Example use case: I have an http middleware that logs all requests. The wrapped http handler returns an error, and then the next line might be something like this:
logger.Info("time to process request", zap.Error(err), zap.Duration("delta", time.Since(t0)), zap.String("path", path))
With the current zap.Error(), this would panic if err == nil and require a nil check to prevent just that.
I'd be happy to submit a PR if this seems like a sane addition.
I noticed that of all the functions that return a zap.Field, Err is the only one that doesn't let you specify the name.
I don't have any use case where this is a problem, but it just popped out at me as inconsistent and I didn't see any comments or docs around it explaining why it is different.
As-is, sampling is okay...but calls to the leveled logging methods still require allocating a slice to hold the variadic ...Field argument. Add a token-like API so that we allocate only if we're actually going to log a message.
They don't add much value over DebugLevel and FatalLevel, respectively, and they make any code that handles user-supplied levels unnecessarily complex.
zbark.Barkify wraps a zap logger in the bark interface; Debarkify should do the opposite. This makes migrating from bark to zap (or vice versa) easier, since libraries and applications don't have to move in lock-step.
Allocating the fields currently happens outside the benchmark, so they're artificially good-looking. Allocate within the benchmark instead.
Require callers to construct loggers with something a bit richer than a plain io.Writer, and also require a Sync method. Use Sync to flush on panics and higher.
They're basically the same thing -- one takes an interface, one takes a function directly. We should be able to reduce the API by removing Nest and instead creating a type, LogMarshalerFunc, that takes a func(KeyValue) error and wraps it to make it a LogMarshaler.
Update the readme to reflect the current APIs and benchmarks.
As currently implemented, the AddCaller hook doesn't work when wrapped (e.g., by Standardize) - we're hard-coding the number of stack frames to skip when finding the caller.
All the other Field wrappers use the full type name, only zap.Err uses a shortened name.
Currently, the timestamp is defined to be Unix nanos within the field "ts". I'd like to be able to pump the logs directly into third-party services like Loggly, which have a different naming requirement and format for the timestamp field.
Was thinking of something like this:
TestSamplerTicks is flaky on Travis, likely because Travis is extremely slow. We should either extend the sleep to a few ticks, or add a tchannel-style timeout multiplier flag.
So I know there's a request to support fancier outputs during development, but currently, how are you handling the logs if you are prettifying them (assuming someone is doing this)? Is there a process manager that redirects STDOUT to an ELK node (Vagrant or Docker) that is being set up?
This is more a personal preference than anything, really. So feel free to close this issue if writing to STDOUT is what is done.
Build a bridge back to uber-common/bark.
Add a constructor that takes a zap logger and returns an HTTP handler to manage common runtime changes (e.g., changing the log level). Applications can mount that handler wherever they like.
https://github.com/uber-go/zap/blob/master/spy/logger.go#L57
// Logs returns a copy of the sink's accumulated logs.
func (s *Sink) Logs() []Log {
	logs := make([]Log, len(s.logs)) // <------ s.logs access must be inside of lock/unlock
	s.Lock()
	for i, log := range s.logs {
		logs[i] = log
	}
	s.Unlock()
	return logs
}
Provide a sampling wrapper that allows callers to log the first n messages and every k'th message thereafter.
Using zap, I see logs are generated in JSON format, which is actually good.
What if we want to log messages to a file in text format, like:
2014-07-23 16:19:53,897:DEBUG :http-/0.0.0.0:28080-6: This is Error Message
2014-07-23 16:19:53,897:ERROR :http-/0.0.0.0:28080-6: UnHandled Exception
Thanks,
DNadar
Add unit tests for the Development option, which requires stubbing calls to os.Exit.
The New() method of the JSON encoder pool creates a new jsonEncoder with a reasonably sized buffer (jsonEncoder.go (51)).
The truncate() method of jsonEncoder resets the buffer length to 0, but does not modify its capacity; i.e., the memory acquired to accommodate large chunks of logs and pointed to by the enc.bytes slice stays as-is after the truncate() call. As log data is appended to the enc.bytes slice, the capacity (and memory usage) will inevitably grow to the size that can fit the largest log message seen so far, which can be quite large if stack traces or custom marshalers are included.
I know that even very large logs would increase memory usage very little, but it still happens, and my issue is actually a question: is this by design, or would you rather avoid uncontrolled buffer allocations?
One way of fixing it would be to set a threshold for the buffer capacity and check against it in the truncate() call (or maybe better in the Free() call). If the capacity is beyond the threshold, a new buffer could be allocated with a predefined fixed size, and the old one left for GC.
The new buffer size could also be determined at runtime to be big enough for, e.g., 90% of messages (based on the log history).
While attempting to write a simple stub Logger implementation for some tests, I noticed that it doesn't currently seem possible to do this outside of the zap package (assuming you don't want to ignore the log fields).
The main problem is that there doesn't appear to be a way for an encoder to get any information from a Field (both addFields and Field.addTo are unexported).
Am I missing something? I'll admit that I'm not familiar with the codebase.
Once we flip uber-go/atomic to be public, use it here.
I am not sure if this is already supported, but I propose that the fields section of the log be configurable to a name desired by the user of this package.
Now zap can write to stderr and stdout...
Is there any plan to support syslog?
Copying from #81 (comment)
Currently a logger always has a reference to the top level fields. This is more likely to cause field clashes as the logger is passed between multiple libraries.
We could add support for a logger that only had a reference to a nested field:
logger := New(...)
_ = newConnection(logger.Nested("conn"))
_ = newRequest(logger.Nested("request"))
The nested logger would not be able to add anything to the top level fields, so you could pass in the logger to other components (internal or external) without having to worry about clashes.
I would like to be able to log unsigned ints as part of the structured data.
Add some comparative benchmarks and mention them in the readme.