
Blackhole - eating your logs with pleasure


Blackhole is an attribute-based logger with a strong focus on achieving the maximum performance possible for this kind of logger.

Features

Attributes

Attributes are the core feature of Blackhole. Technically speaking, they are key-value pairs attached to every logging record.

For example, suppose we have an HTTP/1.1 server which produces access logs like:

[::] - esafronov [10/Oct/2000:13:55:36 -0700] 'GET /porn.png HTTP/1.0' 200 2326 - SUCCESS

It can be split into indexes, or attributes:

message:   SUCCESS
host:      [::]
user:      esafronov
timestamp: 10/Oct/2000:13:55:36 -0700
method:    GET
uri:       /porn.png
protocol:  HTTP/1.0
status:    200
elapsed:   2326

Blackhole allows you to specify any number of attributes, and provides the ability to work with them before or while writing them into their final destination, for example, Elasticsearch.
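
The split above can be sketched in plain C++ (an illustration only, not the Blackhole API):

```cpp
// The access-log record above, decomposed into key-value attributes. Once
// split this way, a consumer such as Elasticsearch can index each field
// independently. Illustration only, not the Blackhole API.
#include <map>
#include <string>

std::map<std::string, std::string> make_attributes() {
    return {
        {"message",   "SUCCESS"},
        {"host",      "[::]"},
        {"user",      "esafronov"},
        {"timestamp", "10/Oct/2000:13:55:36 -0700"},
        {"method",    "GET"},
        {"uri",       "/porn.png"},
        {"protocol",  "HTTP/1.0"},
        {"status",    "200"},
        {"elapsed",   "2326"}
    };
}
```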

Shared library

Despite its header-only past, Blackhole is now developed as a shared library. This radical change in the distribution process was made for several reasons.

Mainly, header-only libraries have one big disadvantage: any code change may (or may not) result in recompiling all of its dependencies; otherwise you get weird runtime errors caused by symbol loading races.

The other reason was the personal aim to reduce compile time, because it was huge.

Of course, there are disadvantages, such as virtual function call cost and closed doors for inlining, but here benchmark-driven development helped to avoid performance degradation.

Planning

  • Shared library.
  • Inline namespaces.
  • Optional compile-time inline messages transformation (C++14).
    • Compile-time placeholder type checking.
    • Compile-time placeholder spec checking (?).
  • Python-like formatting (no printf-like formatting support) for both inline and result messages.
  • Attributes.
  • Scoped attributes.
  • Wrappers.
  • Custom verbosity.
  • Custom attributes formatting.
  • Optional asynchronous pipelining.
    • Queue with block on overload.
    • Queue with drop on overload (count dropped message).
    • The same but for handlers.
  • Formatters.
    • String by pattern.
      • Optional placeholders.
      • Configurable leftover placeholder.
    • JSON with tree reconstruction.
  • Sinks.
    • Colored terminal output.
    • Files.
    • Syslog.
    • Socket UDP.
    • Socket TCP.
      • Blocking.
      • Non blocking.
  • Scatter-gather IO (?)
  • Logger builder.
  • Macro with line and filename attributes.
  • Initializer from JSON (filename, string).
  • Inflector.
  • Filter category.
    • Category type.
    • For sinks.
    • For handlers.
    • For loggers.

Experimental

Note that some symbols are wrapped into the experimental namespace. These symbols don't adhere to semantic versioning and are, well... experimental. Use them with caution and only where you want to try unstable features, which can be changed or even dropped.

Formatters

Formatters in Blackhole are responsible for converting every log record passing through into some byte-array representation. It can be a human-readable string, a JSON tree or even a protobuf-packed frame.

String

The string formatter provides the ability to configure your logging output using pattern mechanics with powerful customization support.

Unlike previous Blackhole versions, the string formatter now uses a Python-like syntax for describing patterns, with {} placeholders and format specifications inside them. Moreover, you can now specify the timestamp specification directly inside the general pattern, or even format it as the number of microseconds since epoch.

For example, take the given pattern:

[{severity:>7}] [{timestamp:{%Y-%m-%d %H:%M:%S.%f}s}] {scope}: {message}

After applying some log events we expect to receive something like this:

[  DEBUG] [2015-11-19 19:02:30.836222] accept: HTTP/1.1 GET - / - 200, 4238
[   INFO] [2015-11-19 19:02:32.106331] config: server has reloaded its config in 200 ms
[WARNING] [2015-11-19 19:03:12.176262] accept: HTTP/1.1 GET - /info - 404, 829
[  ERROR] [2015-11-19 19:03:12.002127] accept: HTTP/1.1 GET - /info - 503, 829

As you may notice, the severity field is aligned to the right border (see the >7 spec in the pattern), the timestamp is formatted using the default representation with a microseconds extension, and so on. Because Blackhole is all about attributes, you can place and format any custom attribute you want, as we just did with the scope attribute.

Blackhole supports several predefined attributes with convenient specifications:

Placeholder                  Description
{severity:s}                 User-provided severity string representation
{severity}, {severity:d}     Numeric severity value
{timestamp:d}                Number of microseconds since the Unix epoch
{timestamp:{spec}s}          String representation using the strftime specification, in UTC
{timestamp:{spec}l}          String representation using the strftime specification, in the local timezone
{timestamp}, {timestamp:s}   The same as {timestamp:{%Y-%m-%d %H:%M:%S.%f}s}
{process:s}                  Process name
{process}, {process:d}       PID
{thread}, {thread:#x}        Thread hex id as an opaque value returned by pthread_self(3)
{thread:s}                   Thread name, or <unnamed>
{message}                    Logging message
{...}                        All user-declared attributes
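
The %f microsecond extension in the timestamp placeholder is not part of standard strftime; one way a formatter can produce it is to render the seconds with strftime and append the fraction manually. A minimal sketch, not the Blackhole implementation:

```cpp
// Sketch of the %f microsecond extension: standard strftime has no %f, so we
// render the seconds part with strftime and append the microsecond fraction
// manually. Not the Blackhole implementation.
#include <cstdio>
#include <ctime>
#include <string>

std::string format_timestamp_utc(std::time_t seconds, long microseconds) {
    std::tm tm{};
    gmtime_r(&seconds, &tm);                        // thread-safe UTC conversion
    char date[32];
    std::strftime(date, sizeof date, "%Y-%m-%d %H:%M:%S", &tm);
    char out[48];
    std::snprintf(out, sizeof out, "%s.%06ld", date, microseconds);
    return out;
}
```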

For more information please read the documentation.

Note that if you need to include a brace character in the literal text, it can be escaped by doubling: {{ and }}.

There is a special attribute placeholder, {...}, which prints all non-reserved attributes, in the reverse order they were provided, in a key-value manner separated by a comma. This kind of attribute can be configured using a special syntax, similar to the timestamp attribute, with an optional separator.

For example, the following placeholder {...:{{name}={value}:p}{\t:s}s} results in tab-separated key-value pairs like id=42\tmethod=GET.
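
Assuming the pattern "{name}={value}" and a tab separator, the leftover rendering can be sketched like this (illustrative, not the library's code):

```cpp
// Sketch of rendering the leftover placeholder: each attribute goes through
// the "{name}={value}" pattern and pairs are joined with the configured
// separator. Illustrative only, not Blackhole's implementation.
#include <string>
#include <utility>
#include <vector>

std::string render_leftover(
    const std::vector<std::pair<std::string, std::string>>& attributes,
    const std::string& separator)
{
    std::string result;
    for (std::size_t i = 0; i < attributes.size(); ++i) {
        if (i != 0) {
            result += separator;                      // the separator extension
        }
        result += attributes[i].first + "=" + attributes[i].second;  // pattern
    }
    return result;
}
```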

For pedants, here is the full placeholder grammar in EBNF:

Grammar     = Ph
            | OptPh
            | VarPh
Ph          = "{" Name "}"
OptPh       = "{" Name ":" Spec? "}"
VarPh       = "{...}"
            | "{...:" Ext? s "}"
Ext         = Pat
            | Sep
            | Pat Sep
            | Sep Pat
Name        = [a-zA-Z0-9_]
Spec        = Fill? Align? Width? Type
Fill        = [a character other than '{' or '}']
Align       = [>^<]
Width       = [1-9][0-9]*
Type        = [su]
Pat         = "{" PatSpec ":p}"
Sep         = "{" SepLit* ":s}" ("}" SepLit* ":s}")*
SepLit      = . ! (":s" | "}" | "}}" | "{" | "{{")
            | LeBrace
            | RiBrace
LeBrace     = "{{" -> "{"
RiBrace     = "}}" -> "}"
PatSpec     = (AtName | AtValue | PatLit)*
AtName      = "{name}"
            | AtNameSpec
AtNameSpec  = "{name:" AtSpec "}"
AtSpec      = Align? Width? AtType
AtType      = [sd]
AtValue     = "{value}"
            | AtValueSpec
AtValueSpec = "{value:" AtSpec "}"
PatLit      = . ! ("}" | "}}" | "{" | "{{")
            | LeBrace
            | RiBrace

Let's describe it more precisely. Given a complex leftover placeholder, let's parse it manually to see what Blackhole sees. Given: {...:{{name}={value}:p}{\t:s}>50s}.

Parameter Description
... Reserved placeholder name indicating to Blackhole that this is a leftover placeholder.
: Optional spec marker placed after the placeholder name where you want to apply one of several extensions. There are pattern, separator, prefix, suffix and format extensions. All of them except format should be surrounded by curly braces.
{{name}={value}:p} Pattern extension that describes how each attribute should be formatted using the typical Blackhole notation. The suffix :p, which is required for extension identification, means pattern. Inside this pattern you can write any pattern you like using the two available sub-placeholders for attribute name and value; a format spec can be applied to each of them using the cppformat grammar. Finally, a format spec can also be applied to the entire placeholder, e.g. :>50p.
{\t:s} Separator extension for configuring how each key-value pair is separated. Nuff said.
{[:r} (Not implemented yet.) Prefix extension that is prepended to the entire result if it is not empty.
{]:u} (Not implemented yet.) Suffix extension that is appended to the entire result if it is not empty.
>50s Format of the entire result. See cppformat rules for the specification.

JSON

The JSON formatter provides the ability to format a logging record into a structured JSON tree with attribute handling features, like renaming, routing, mutating and much more.

In brief, the JSON formatter allows building fully dynamic JSON trees for further processing with various external tools, like logstash or rsyslog, while keeping them in a human-readable manner.

Blackhole allows you to control the JSON tree building process using several predefined options.

Without options it produces a plain tree with zero depth. For example, for a log record with a severity of 3, the message "fatal error, please try again" and the pair of attributes {"key": 42, "ip": "[::]"}, the resulting string will look like:

{
    "message": "fatal error, please try again",
    "severity": 3,
    "timestamp": 1449859055,
    "process": 12345,
    "thread": 57005,
    "key": 42,
    "ip": "[::]"
}

Using configuration parameters, this formatter supports attribute renaming and routing.

Attribute renaming is exactly as transparent as it sounds: it just replaces the given attribute name with the specified alternative.

Attribute routing specifies the location in the tree where the listed attributes will be placed during construction. You can also specify a default location for all attributes, which is otherwise "/", meaning the root.

For example, with routing {"/fields": ["message", "severity"]} and "/" as the default pointer, the JSON above will look like:

{
    "fields": {
        "message": "fatal error, please try again",
        "severity": 3
    },
    "timestamp": 1449859055,
    "process": 12345,
    "thread": 57005,
    "key": 42,
    "ip": "[::]"
}

Attribute renaming occurs after routing, so the mapping "message" => "#message" simply replaces the old name with its new alternative.

To gain maximum speed during tree construction no filtering occurs, so by default this formatter allows duplicate keys, which means an invalid JSON tree (though most parsers are fine with it). If you really require unique keys, you can enable the unique option, but it involves heap allocation and may slow down formatting.
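
The unique option's behavior (only the last inserted value per name survives) can be sketched as follows; this is an illustration, not the actual implementation:

```cpp
// Sketch of "unique" filtering: for duplicated attribute names only the last
// inserted value survives. Illustrative only, not the actual implementation.
#include <algorithm>
#include <set>
#include <string>
#include <utility>
#include <vector>

using attribute = std::pair<std::string, std::string>;

std::vector<attribute> unique_attributes(const std::vector<attribute>& input) {
    std::set<std::string> seen;
    std::vector<attribute> result;
    // Walk backwards so the last inserted value for each name wins.
    for (auto it = input.rbegin(); it != input.rend(); ++it) {
        if (seen.insert(it->first).second) {
            result.push_back(*it);
        }
    }
    std::reverse(result.begin(), result.end());   // restore original order
    return result;
}
```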

The formatter can also automatically append a newline character to the end of the tree, which is, oddly, required by some consumers, like logstash.

Note that the JSON formatter formats the tree using a compact style, without excess spaces, tabs, etc.

For convenient formatter construction a special builder class is implemented, allowing you to create and configure instances of this class using a streaming API. For example:

auto formatter = blackhole::formatter::json_t::builder_t()
    .route("/fields", {"message", "severity", "timestamp"})
    .route("/other")
    .rename("message", "#message")
    .rename("timestamp", "#timestamp")
    .newline()
    .unique()
    .build();

This avoids hundreds of constructors and makes formatter creation look nicer.

The full table of options:

Option             Type                       Description
/route             object of [string] or "*"  Configures nested tree mapping. Each key must satisfy the JSON Pointer specification and sets the new location of attributes in the tree. Each value must be either an array of strings, listing the attributes assigned to the new place, or the literal "*", meaning all other attributes.
/mapping           object of string           Simple attribute renaming from key to value.
/newline           bool                       If true, a newline is appended to the end of the result message. The default is false.
/unique            bool                       If true, removes all backward-consecutive duplicate elements from the attribute list. For example, if two attributes named "name" with values "v1" and "v2" are inserted, after filtering only the last inserted one, i.e. "v2", remains. The default is false.
/mutate/timestamp  string                     Replaces the timestamp field by transforming it with the given strftime pattern.
/mutate/severity   [string]                   Replaces the severity field with the string value at the current severity value's index.

For example:

"formatter": {
    "type": "json",
    "newline": true,
    "unique": true,
    "mapping": {
        "message": "@message",
        "timestamp": "@timestamp"
    },
    "routing": {
        "": ["message", "timestamp"],
        "/fields": "*"
    },
    "mutate": {
        "timestamp": "%Y-%m-%dT%H:%M:%S.%fZ",
        "severity": ["D", "I", "W", "E"]
    }
}

Sinks

Null

Sometimes we need to just drop all logging events no matter what, for example for benchmarking purposes. For these cases there is the null output (or sink), which simply ignores all records.

The common configuration for this sink looks like:

"sinks": [
    {
        "type": "null"
    }
]

Console

Represents a console sink, which is responsible for writing all incoming log events directly to the terminal using one of the standard outputs, with the ability to optionally colorize the resulting strings.

The sink automatically detects whether the destination stream is a TTY and disables colored output otherwise, which makes it possible to redirect standard output to a file without escape-code garbage.
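
The TTY check that drives this behavior is typically POSIX isatty(3); a minimal sketch:

```cpp
// Sketch of the TTY detection behind automatic color disabling: POSIX
// isatty(3) reports whether a stream refers to a terminal. A redirected
// stream (file, pipe) is not a terminal, so coloring would be turned off.
#include <cstdio>
#include <unistd.h>

bool is_terminal(std::FILE* stream) {
    return ::isatty(::fileno(stream)) == 1;
}
```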

Note that although C++ guarantees that std::cout and std::cerr are thread-safe and free of undefined behavior, this guarantee is insufficient for safely working with them from multiple threads: resulting messages can be intermixed. To avoid this a global mutex is used internally, which is kind of a hack. Any other stdout/stderr usage outside the logger will probably result in character mixing, but no undefined behavior will be invoked.

The configuration:

"sinks": [
    {
        "type": "console"
    }
]

Note that currently coloring cannot be configured through the dynamic factory (i.e. through JSON, YAML, etc.), but it can be through the builder.

enum severity {
    debug = 0,
    info,
    warn,
    error
};

auto console = blackhole::builder<blackhole::sink::console_t>()
    .colorize(severity::debug, blackhole::termcolor_t())
    .colorize(severity::info, blackhole::termcolor_t::blue())
    .colorize(severity::warn, blackhole::termcolor_t::yellow())
    .colorize(severity::error, blackhole::termcolor_t::red())
    .stdout()
    .build();

File

Represents a sink that writes formatted log events to a file or files located at the specified path.

The path can contain attribute placeholders, meaning that the real destination name will be deduced at runtime using the provided log record (not ready yet). No file is opened at construction time. All files are opened in append mode by default, meaning a seek to the end of the stream immediately after opening.

This sink supports custom flushing policies, allowing control over the hardware write load. There are three implemented policies right now:

  • Fully automatic (no configuration): the sink decides whether to flush after each record consumed.
  • Count of records written: a simple counter meaning "flush at least every N records consumed", although the underlying implementation may decide to flush more often. A value of 1 means the sink flushes after every logging event, but this results in dramatic performance degradation.
  • Count of bytes written: Blackhole knows about bytes, megabytes, even mebibytes, etc.
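
The record-counting policy can be sketched as a simple counter (the class and method names here are hypothetical):

```cpp
// Sketch of the record-counting flush policy: "flush at least every N records
// consumed". Hypothetical names; the real sink may flush more often.
#include <cstddef>

class count_flush_policy {
public:
    explicit count_flush_policy(std::size_t every) : every_(every) {}

    // Called after each record is consumed; returns true when a flush is due.
    bool consumed() {
        return ++counter_ % every_ == 0;
    }

private:
    std::size_t every_;
    std::size_t counter_ = 0;
};
```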

Note that the sink is guaranteed to always flush its buffers at destruction time. This guarantee, in conjunction with thread-safe logger reassignment, allows implementing the common SIGHUP file-reopening pattern during log rotation.

Blackhole won't create intermediate directories, because of potential troubles with ACLs. Instead an exception will be thrown, which will be caught anyway by the internal logging system, which notifies about it through stdout.

Note that the associated files are opened on demand, during the first write operation.

"sinks": [
    {
        "type": "file",
        "flush": "10MB",
        "path": "/var/log/blackhole.log"
    }
]

Blackhole knows about the following decimal and binary units:

  • Bytes (B).
  • Kilobytes (kB).
  • Megabytes (MB).
  • Gigabytes (GB).
  • Kibibytes (KiB).
  • Mebibytes (MiB).
  • Gibibytes (GiB).

You can read more at https://en.wikipedia.org/wiki/Binary_prefix.
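
Parsing such a unit suffix from a config value like "10MB" can be sketched as follows (a hypothetical helper, not the library API):

```cpp
// Sketch of parsing size strings like "10MB" from the sink config into a byte
// count, covering the decimal and binary units listed above. Hypothetical
// helper, not the Blackhole API.
#include <cctype>
#include <cstdint>
#include <map>
#include <string>

std::uint64_t parse_size(const std::string& text) {
    static const std::map<std::string, std::uint64_t> units = {
        {"B", 1ULL},
        {"kB", 1000ULL}, {"MB", 1000ULL * 1000}, {"GB", 1000ULL * 1000 * 1000},
        {"KiB", 1024ULL}, {"MiB", 1024ULL * 1024}, {"GiB", 1024ULL * 1024 * 1024}
    };
    // Split the numeric prefix from the unit suffix.
    std::size_t pos = 0;
    while (pos < text.size() && std::isdigit(static_cast<unsigned char>(text[pos]))) {
        ++pos;
    }
    const std::uint64_t value = std::stoull(text.substr(0, pos));
    return value * units.at(text.substr(pos));   // throws on an unknown unit
}
```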

Socket

The socket sinks category contains sinks that write their output to a remote destination specified by a host and port. Currently the data can be sent over either TCP or UDP.

TCP

This appender emits formatted logging events using a connected TCP socket.

Option  Type    Description
host    string  Required. The name or address of the system that is listening for log events.
port    u16     Required. The port on the host that is listening for log events.

UDP

Nuff said.

Syslog

Option      Type    Description
priorities  [i16]   Required. Priority mapping from severity numbers.

Configuration

Blackhole can be configured in two main ways:

  • Using the experimental builder.
  • Using an abstract factory (GoF, yeah).

Builder

The first way involves the still-experimental builder. For each library component (formatter, sink, etc.) there should be an appropriate builder specialization that is used to create instances of the associated component in a fluent way.

For example:

// Here we are going to configure our string/console handler and to build the logger.
auto log = blackhole::experimental::partial_builder<blackhole::root_logger_t>()
   // Add the blocking handler.
   .handler<blackhole::handler::blocking_t>()
       // Configure string formatter.
       //
       // Pattern syntax behaves like as usual substitution for placeholder. For example if
       // the attribute named `severity` has value `2`, then pattern `{severity}` will invoke
       // severity mapping function provided and the result will be `W`.
       .set<blackhole::formatter::string_t>("{severity}, [{timestamp}]: {message}")
           .mapping(&sevmap)
           .build()
       // Configure console sink to write into stdout (also stderr can be configured).
       .add<blackhole::sink::console_t>()
           .build()
       // And build the handler. Multiple handlers can be added to a single logger, but right
       // now we confine ourselves with a single handler.
       .build()
   // Build the logger.
   .build();

The result is a std::unique_ptr<C> where C: Component, sorry for my Rust.

This is also called static initialization, because you must know the configuration of your logging system at compile time. If this doesn't suit you, there is another way.

Factory

Also called dynamic initialization, this is the recommended way to configure Blackhole, because it implements a kind of dependency injection through an external source, like a JSON file, XML, or folly::dynamic.

For now Blackhole implements only initialization from JSON, but it can be easily extended with a plugin, because all you need is to implement the proper interface to allow tree-like traversal of your config object.

Here is an example of how to configure the library from a JSON file.

// Here we are going to build the logger using registry. The registry's responsibility is to
// track registered handlers, formatter and sinks, but for now we're not going to register
// anything else, since there are predefined types.
auto log = blackhole::registry::configured()
    // Specify the concrete builder type we want to use. It may be JSON, XML, YAML or whatever
    // else.
    ->builder<blackhole::config::json_t>(std::ifstream(argv[1]))
        // Build the logger named "root".
        .build("root");

The result is a std::unique_ptr<logger_t> object.

For more information see the blackhole::registry_t class and the include/blackhole/config directory, where all the magic happens. If you're looking for an example of how to implement your own factory, please see the src/config directory.

Facade

One could say that the raw logger interface is inconvenient, and this is unfortunately true, because it must work both in simple cases, where intermediate message formatting is not required and no attributes are involved, and in complex cases, where lazy message formatting occurs with attributes provided, all while remaining as fast as possible to give a high-performance solution.

Let's take a look at the interface:

class logger_t {
public:
    virtual ~logger_t() = 0;
    virtual auto log(severity_t severity, const message_t& message) -> void = 0;
    virtual auto log(severity_t severity, const message_t& message, attribute_pack& pack) -> void = 0;
    virtual auto log(severity_t severity, const lazy_message_t& message, attribute_pack& pack) -> void = 0;

    virtual auto manager() -> scope::manager_t& = 0;
};

To avoid creating all these structures manually, a special extension is provided: the facade. In two words, it is a thin template adapter over any given logger which extends its interface, providing methods that make logging convenient again. We describe all these methods using a random HTTP logging event for a successfully served file.

For simple cases there is a thin wrapper that transforms a string into a string view and passes it further.

logger.log(0, "GET /static/image.png HTTP/1.1 436 200");

Sometimes we want to provide additional attributes. In these cases they can be passed using an initializer list.

logger.log(0, "GET /static/image.png HTTP/1.1 436 200", {
    {"cache", true},
    {"elapsed", 435.72},
    {"user-agent", "Mozilla Firefox"}
});

Often we want to format a message using a predefined pattern, but with arguments obtained at runtime.

logger.log(0, "{} {} HTTP/1.1 {} {}", "GET", "/static/image.png", 436, 200);

Finally, we can combine the two previous examples to obtain something really useful. Note that the attribute list argument must be the last one.

logger.log(0, "{} {} HTTP/1.1 {} {}", "GET", "/static/image.png", 436, 200, attribute_list{
    {"cache", true},
    {"elapsed", 435.72},
    {"user-agent", "Mozilla Firefox"}
});

To use it, all you need is to create a logger, import the facade definition and wrap the logger with it. Here is an improved example:

/// This example demonstrates how to initialize Blackhole from configuration file using JSON
/// builder.
/// In this case the entire logging pipeline is initialized from file including severity mapping.
/// The logging facade is used to allow runtime formatting and attributes provisioning.

#include <fstream>
#include <iostream>

#include <blackhole/attribute.hpp>
#include <blackhole/attributes.hpp>
#include <blackhole/config/json.hpp>
#include <blackhole/extensions/facade.hpp>
#include <blackhole/extensions/writer.hpp>
#include <blackhole/registry.hpp>
#include <blackhole/root.hpp>

using namespace blackhole;

/// As always specify severity enumeration.
enum severity {
    debug   = 0,
    info    = 1,
    warning = 2,
    error   = 3
};

auto main(int argc, char** argv) -> int {
    if (argc != 2) {
        std::cerr << "Usage: 3.config PATH" << std::endl;
        return 1;
    }

    /// Here we are going to build the logger using registry. The registry's responsibility is to
    /// track registered handlers, formatter and sinks, but for now we're not going to register
    /// anything else, since there are predefined types.
    auto inner = blackhole::registry::configured()
        /// Specify the concrete builder type we want to use. It may be JSON, XML, YAML or whatever
        /// else.
        ->builder<blackhole::config::json_t>(std::ifstream(argv[1]))
            /// Build the logger named "root".
            .build("root");

    /// Wrap the logger with facade to obtain an ability to format messages and provide attributes.
    auto log = blackhole::logger_facade<blackhole::root_logger_t>(inner);

    log.log(severity::debug, "{} {} HTTP/1.1 {} {}", "GET", "/static/image.png", 404, 347);
    log.log(severity::info, "nginx/1.6 configured", {
        {"elapsed", 32.5}
    });
    log.log(severity::warning, "client stopped connection before send body completed");
    log.log(severity::error, "file does not exist: {}", "/var/www/favicon.ico", blackhole::attribute_list{
        {"Cache", true},
        {"Cache-Duration", 10},
        {"User-Agent", "Mozilla Firefox"}
    });

    return 0;
}

Runtime Type Information

The library can be successfully compiled and used without RTTI (with -fno-rtti flag).

Possible bottlenecks

  • Timestamp formatting.
  • Using the system clock - can be replaced with OS-specific clocks.
  • Using gmtime - manual std::tm generation without mutexes.
  • Temporary buffer - affects performance, but not by much.
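
The gmtime point above refers to gmtime(3)'s shared static buffer (and the internal locking some libcs do around it); the reentrant gmtime_r(3) avoids both. A sketch:

```cpp
// Sketch of the gmtime bottleneck fix: gmtime(3) returns a pointer to a
// shared static buffer (and may lock internally), while gmtime_r(3) fills a
// caller-provided std::tm with no shared state.
#include <ctime>

std::tm utc_breakdown(std::time_t t) {
    std::tm result{};
    gmtime_r(&t, &result);   // reentrant, no hidden global buffer
    return result;
}
```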

Why another logging library?

That's the first question I ask myself when seeing yet another silver-bullet library.

First of all, we required a logger with attribute support. Here boost::log was fine, but it didn't compile with our compilers. Sad. After that we realized that one of our bottlenecks was in the logging part, which is why boost::log and log4cxx didn't fit our requirements. Thirdly, we develop for stable but old Linux distributions with relatively old compilers that support only a basic subset of C++11.

Last, but not least, all those libraries have one fatal disadvantage - NIH.

So here we are.

To be honest, let's describe some popular logging libraries, their advantages and disadvantages, as one of them may fit your requirements and you may want to use it instead. It's okay.

Boost.LogV2

Developed by another crazy Russian programmer using dark template magic and vodka (not sure which came first). It's a perfect and powerful library, seriously.

Pros:

  • It's Boost! Many people don't want to depend on yet another library, wishing to just apt-get install instead.
  • Has attributes too.
  • Large community, fewer bugs.
  • Highly configurable.
  • Good documentation.

Cons:

  • Sadly, you are restricted to the latest Boost versions.
  • Hard to hack and extend unless you are fine with templates, templates of templates and variadic templates of templated templates with templates. Or you are Andrei Alexandrescu.
  • Relatively poor performance. Higher than log4cxx has, but not enough for us.
  • Requires RTTI.

Log4cxx

Logging framework for C++ patterned after Apache log4j. Yeah, Java.

Pros:

  • Absolutely zero barrier to entry. Really, you just copy-paste the code from the tutorial and it works. Amazing!

Cons:

  • Leaking.
  • APR.
  • No attributes.
  • Really slow performance.
  • Seems like it's not really supported anymore.

Spdlog

An extremely, ultra fast logging library. At least the documentation says so. Faster than the speed of light!

But everyone knows that even light is unable to escape from a black hole.

Pros:

  • Really fast, I checked.
  • Header-only. Not sure it's an advantage, but for small projects it's fine.
  • Easy to extend, because the code itself is plain, straightforward and magically easy to understand.
  • No dependencies.
  • Nice kitty in author's avatar.

Cons:

  • Again: no attributes, no custom filtering, no custom verbosity levels. You are restricted to the functionality provided by the library, nothing more.

Notable changes

First of all, the entire library was completely rewritten for performance reasons.

  • No more attribute copying unless it's really required (for asynchronous logging, for example). Nested attributes are now organized in a flattened range.
  • Dropped boost::format into Hell. It's hard to find a slower formatting library, both at compilation stage and at runtime. Instead, the excellent cppformat library with our own compile-time constexpr extensions is used.
  • There are predefined attributes with fast read access, like message, severity, timestamp, etc.
  • With cppformat's participation there is a new Python-like format syntax using placeholder replacement.
  • Severity mapping from its numeric representation to strings can now be configured from a generic configuration source (from a file, for example).
  • ...

Requirements

  • C++11/14/17 compiler (yep, using C++17 enables additional functionality).
  • Boost.Thread - for TLS.

Development

Git workflow

Each feature and fix is developed in a separate branch. Bugs discovered during development of a certain feature may be fixed in the same branch as their parent issue. This is also true for small features.

Branch structure:

  • master: master branch - contains a stable, working version of the code.
  • develop: development branch - all fixes and features are first merged here.
  • issue/<number>/<slug> or issue/<slug>: for issues (both enhancement and bug fixes).

blackhole's People

Contributors

3hren, abu-zakaria, andrusha97, antmat, bacek, bayonet, bioothod, ijon, maturin, minaevmike, shaitan, vitalyisaev2


blackhole's Issues

boost 1.58 compilation error

Hi

boost::variant 1.58 introduced static asserts which fail compilation in the following way.
Here is code which calls blackhole::dynamic_t::to(), which in turn invokes boost::get<T>(&value):

typedef blackhole::dynamic_t dynamic_t;
...
const dynamic_t m_value;
m_value.to<const dynamic_t::array_t &>().size()
In file included from /usr/include/boost/intrusive/detail/generic_hook.hpp:29:0,
                 from /usr/include/boost/intrusive/list_hook.hpp:23,
                 from /usr/include/boost/intrusive/list.hpp:20,
                 from /home/zbr/rpmbuild/BUILD/elliptics-2.26.9.2/cache/cache.hpp:33,
                 from /home/zbr/rpmbuild/BUILD/elliptics-2.26.9.2/cache/cache.cpp:18:
/usr/include/boost/variant/get.hpp: In instantiation of 'typename boost::add_pointer<const U>::type boost::strict_get(const boost::variant<T0, TN ...>*) [with U = const std::vector<blackhole::dynamic_t>&; T0 = blackhole::dynamic_t::null_t; TN = {bool, long unsigned int, long int, double, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<blackhole::dynamic_t, std::allocator<blackhole::dynamic_t> >, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, blackhole::dynamic_t, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<const std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, blackhole::dynamic_t> > >}; typename boost::add_pointer<const U>::type = const std::vector<blackhole::dynamic_t>*]':
/usr/include/boost/variant/get.hpp:269:25:   required from 'typename boost::add_pointer<const U>::type boost::get(const boost::variant<T0, TN ...>*) [with U = const std::vector<blackhole::dynamic_t>&; T0 = blackhole::dynamic_t::null_t; TN = {bool, long unsigned int, long int, double, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<blackhole::dynamic_t, std::allocator<blackhole::dynamic_t> >, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, blackhole::dynamic_t, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<const std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, blackhole::dynamic_t> > >}; typename boost::add_pointer<const U>::type = const std::vector<blackhole::dynamic_t>*]'
/usr/include/blackhole/dynamic.hpp:396:36:   required from 'typename std::enable_if<(blackhole::dynamic_t::is_convertible<T>::value && (! blackhole::type_traits::is_integer<T>::value)), T>::type blackhole::dynamic_t::to() const [with T = const std::vector<blackhole::dynamic_t>&; typename std::enable_if<(blackhole::dynamic_t::is_convertible<T>::value && (! blackhole::type_traits::is_integer<T>::value)), T>::type = const std::vector<blackhole::dynamic_t>&]'
/home/zbr/rpmbuild/BUILD/elliptics-2.26.9.2/cache/../example/config.hpp:237:49:   required from here
/usr/include/boost/variant/get.hpp:195:5: error: static assertion failed: boost::variant does not contain specified type U, call to boost::get<U>(const boost::variant<T...>*) will always return NULL
     BOOST_STATIC_ASSERT_MSG(
     ^

Thread placeholder

Possible types:

  • d - means the platform-specific thread id that your debugger prints. For Linux it is the LWP; for OS X it is a small hex number, obtained through the ::pthread_threadid_np(nullptr, &tid) call. Otherwise 0.
  • x - means the POSIX thread ID (i.e., the opaque value returned by pthread_self(3)), which is the same large hex number that GDB or LLDB shows.
  • s - means the thread name, <unnamed> otherwise.

By default the spec is :#x, meaning the “alternate form” with a 0x prefix, because it's platform-independent.

Generally these placeholders look like {thread:d}, {thread:x}, {thread:s}, or with a spec, for example {thread:^20#x}.

files sink doesn't always rotate logs

We frequently see this issue in elliptics:

# ls -l /proc/19808/fd/ | grep log
l-wx------ 1 root root 64 Jul 28 01:49 3 -> /home/admin/elliptics/log/ioserv.log-2015.07.22-1437535502 (deleted)

ioserv.log - the elliptics log file - was moved by logrotate 6 days ago, but Blackhole didn't notice that and didn't reopen the log file.
Here is the relevant config section:

        "logger": {
                "frontends": [
                        {
                                "formatter": {
                                        "type": "string",
                                        "pattern": "%(timestamp)s %(request_id)s/%(lwp)s/%(pid)s %(severity)s: %(message)s %(...L)s"
                                },
                                "sink": {
                                        "type": "files",
                                        "path": "/home/admin/elliptics/log/ioserv.log",
                                        "autoflush": true,
                                        "rotation": {
                                                "move": 0
                                        }
                                }
                        }
                ],
                "level": "info"
        },

We use blackhole 0.2, if that matters. Rotation has changed a little bit since then - additional checks were introduced, which might explain why 'size' rotation doesn't work in 0.2 - but the common 'move' logic is the same as far as I can see.

It doesn't happen all the time, but still quite frequently.

Sink-attached filter (with all attributes).

Currently filtering occurs only over all attributes except the local ones, to prevent unnecessary initialization when filtering fails.

But filtering that includes all available attributes would also be useful.

Inline namespaces

It turned out that even an easy combination with Blackhole v0.2 (Elliptics and friends) is impossible due to symbol conflicts.

An easy solution is to introduce an inline namespace v1 inside each blackhole namespace.

Refactor scoped attributes

The problems:

  • To implement the logger interface the user must include the scoped.hpp file, which requires attribute.hpp. This makes migration from version 0.5 to 1.0 much more painful.
  • It's impossible to implement custom scoped-attribute behavior, for example lazy evaluation.

The solution: invert the API.

Attributes container.

Consider using a flat map (with std::vector as the underlying type) or std::map instead of std::unordered_map for attribute storage and lookup.

Blackhole raises weird exceptions leading to crash

Under heavy tests Blackhole v0.2 raises weird exceptions, most of which lead to a crash.
Here is one of them. I noticed the attributes = std::unordered_map with 140106538842976 elements line, which looks wrong. The elliptics log string elliptics: self: addr: no address, resetting state: 0xa73e50 is quite ordinary; it is emitted when a client disconnects.

I have a coredump if needed.

(gdb) bt full
#0  0x00007f6e68df79c8 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x00007f6e68df965a in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x00007f6e69731b4d in __gnu_cxx::__verbose_terminate_handler() () from /lib64/libstdc++.so.6
No symbol table info available.
#3  0x00007f6e6972f996 in ?? () from /lib64/libstdc++.so.6
No symbol table info available.
#4  0x00007f6e6972e989 in ?? () from /lib64/libstdc++.so.6
No symbol table info available.
#5  0x00007f6e6972f2e5 in __gxx_personality_v0 () from /lib64/libstdc++.so.6
No symbol table info available.
#6  0x00007f6e69192f13 in ?? () from /lib64/libgcc_s.so.1
No symbol table info available.
#7  0x00007f6e69193437 in _Unwind_Resume () from /lib64/libgcc_s.so.1
No symbol table info available.
#8  0x00007f6e6c6fdc54 in blackhole::logger_base_t::get_event_attributes (this=0x7f6d7c001a50) at /usr/include/blackhole/implementation/logger.ipp:165
        tv = {tv_sec = 140112243037770, tv_usec = 1}
        attributes = std::unordered_map with 140106538842976 elements
#9  0x00007f6e6c727b66 in boost::detail::variant::make_initializer_node::apply<boost::mpl::pair<boost::detail::variant::initializer_root, mpl_::int_<0> >, boost::mpl::l_iter<boost::mpl::list8<blackhole::dynamic_t::null_t, bool, unsigned long, long, double, std::string, std::vector<blackhole::dynamic_t, std::allocator<blackhole::dynamic_t> >, std::map<std::string, blackhole::dynamic_t, std::less<std::string>, std::allocator<std::pair<std::string const, blackhole::dynamic_t> > > > > >::initializer_node::initialize(void*, blackhole::dynamic_t::null_t&&) (dest=0xa385e8, operand=<unknown type in /home/zbr/awork/elliptics/build/library/libelliptics.so.2.26, CU 0xb7c69, DIE 0x12ac2f>) at /usr/include/boost/variant/detail/initializer.hpp:115
No locals.
#10 0x00007f6e6c71ae3e in boost::move<blackhole::dynamic_t::null_t&> (t=...) at /usr/include/boost/move/utility_core.hpp:183
No locals.
#11 0x00007f6e6c79976f in dnet_sink_t::emit (this=0xa5ef40, prio=cocaine::logging::error, app="storage/core", message="elliptics: self: addr: no address, resetting state: 0xa73e50 ") at /home/zbr/awork/elliptics/srw/srw.cpp:274
        record = {attributes = std::unordered_map with 8 elements = {["source"] = {value = {which_ = 7, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "\270\026\000|m\177\000\000.nonbloc", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::local}, ["app"] = {value = {which_ = 7, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "\030\230\245\000\000\000\000\000\000\000\000\000m\177\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::local}, ["message"] = {value = {which_ = 7, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "(\026\000|m\177\000\000\270\210\r\000\000\000\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::event}, ["severity"] = {value = {which_ = 0, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "\004\000\000\000\000\000\000\000\245\207\r\000\000\000\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::event}, ["pid"] = {value = {which_ = 1, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "\001\062\000\000\000\000\000\000\000\000\000\000n\177\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::universe}, ["lwp"] = {value = {which_ = 5, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "Q3\000\000\000\000\000\000\371\206\r\000\000\000\000", align_ = {<No data fields>}}}, static size = <optimized out>, static 
alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::thread}, ["timestamp"] = {value = {which_ = 8, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = "D0\356U\000\000\000\000\027\211\r\000\000\000\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::event}, ["request_id"] = {value = {which_ = 5, storage_ = {<boost::detail::aligned_storage::aligned_storage_imp<16ul, 8ul>> = {data_ = {buf = '\000' <repeats 12 times>, "n\177\000", align_ = {<No data fields>}}}, static size = <optimized out>, static alignment = <optimized out>}}, scope = blackhole::log::attribute::scope::event}}}
        level = blackhole::defaults::severity::error
#12 0x00007f6e6bc9f80d in cocaine::logging::log_t::emit<std::string> (this=0xa60ea0, level=10882720, format=...) at /usr/src/debug/libcocaine-core2-0.11.3.1/include/cocaine/logging.hpp:69
No locals.
#13 0x00007f6e0832c078 in cocaine::storage::log_adapter_impl_t::handle (this=0xa63030, record=...) at /home/zbr/awork/elliptics/cocaine/plugins/storage.cpp:70
        level = blackhole::defaults::severity::error
        cocaine_level = cocaine::logging::error
#14 0x00007f6e6c6fdbe3 in blackhole::logger_base_t::push(blackhole::log::record_t&&) const (this=0xa62bc0, record=<unknown type in /home/zbr/awork/elliptics/build/library/libelliptics.so.2.26, CU 0xb7c69, DIE 0x17c014>) at /usr/include/blackhole/implementation/logger.ipp:161
        lock = {m = 0xa62da0, is_locked = true}
#15 0x00007f6e6c727b66 in boost::detail::variant::make_initializer_node::apply<boost::mpl::pair<boost::detail::variant::initializer_root, mpl_::int_<0> >, boost::mpl::l_iter<boost::mpl::list8<blackhole::dynamic_t::null_t, bool, unsigned long, long, double, std::string, std::vector<blackhole::dynamic_t, std::allocator<blackhole::dynamic_t> >, std::map<std::string, blackhole::dynamic_t, std::less<std::string>, std::allocator<std::pair<std::string const, blackhole::dynamic_t> > > > > >::initializer_node::initialize(void*, blackhole::dynamic_t::null_t&&) (dest=0xa648b8, operand=<unknown type in /home/zbr/awork/elliptics/build/library/libelliptics.so.2.26, CU 0xb7c69, DIE 0x12ac2f>) at /usr/include/boost/variant/detail/initializer.hpp:115
No locals.
#16 0x00007f6e6aacd35a in dnet_log_write (logger=0xa648b8, record=0x7f6d187aa5d0, format=0x7f6e6ab156e0 "self: addr: %s, resetting state: %p") at /home/zbr/awork/elliptics/bindings/cpp/logger.cpp:263
        args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7f6d187a8480, reg_save_area = 0x7f6d187a83c0}}
#17 0x00007f6e6aabe213 in dnet_io_process_network (data_=0xa65cf8) at /home/zbr/awork/elliptics/library/pool.c:896
        local_dnet_log = 0xa648b8
        local_dnet_record = 0x7f6d187aa5d0
        addr_str = "no address", '\000' <repeats 117 times>
        nio = 0xa65cf8
        n = 0xa64cd0
        data = 0xa74240
        st = 0xa73e50
        evs_size = 100
        evs = 0x7f6d7c0008c0
        evs_tmp = 0x0
        ts = {tv_sec = 140106128162816, tv_nsec = 10888720}
        tmp = 1
        err = -104
        num_events = 1
        i = 0
        prev_tv = {tv_sec = 1441673143, tv_usec = 269220}
        curr_tv = {tv_sec = 10888720, tv_usec = 10888720}
#18 0x00007f6e6a6f2555 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#19 0x00007f6e68ec5b9d in clone () from /lib64/libc.so.6
No symbol table info available.

[Formatter.String] Extended variadic support.

For user defined attributes make configurable:

  • Separator.
  • Open and close braces.
  • Pattern.

For example:

{
    "type": "string",
    "pattern": "%(message)s %(([...%k: %v])|, )s.",   
}

And the next lines:

BH_LOG(log, debug, "Blah-blah")("host", "localhost", "port", 42000);
BH_LOG(log, debug, "Blah-blah");

Will result in:

Blah-blah ('host': 'localhost', 'port': 42000).
Blah-blah.

Version namespacing

To isolate ABI-incompatible versions from each other.

In theory this allows using several libraries that depend on different Blackhole versions in a single project.

[Detail] Move LWP to the configuration

Currently Blackhole shows the LWP instead of the thread id on Linux. It's likely that you want either the TID or the LWP, but not both.

Create an option like BLACKHOLE_USE_LWP and move this choice into the configuration.

blackhole::scoped_attributes_concept_t exception and crash

Under extensive elliptics testing I've gotten the following crash due to some exception.
It is the latest v0.2 branch.

(gdb) bt full
#0  0x00007f6a538bb9c8 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x00007f6a538bd65a in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x00007f6a538b4187 in __assert_fail_base () from /lib64/libc.so.6
No symbol table info available.
#3  0x00007f6a538b4232 in __assert_fail () from /lib64/libc.so.6
No symbol table info available.
#4  0x00007f6a46f8f87c in blackhole::scoped_attributes_concept_t::~scoped_attributes_concept_t (this=0x7f69e8007448, 
    __in_chrg=<optimized out>) at /usr/include/blackhole/implementation/logger.ipp:198
        __PRETTY_FUNCTION__ = "virtual blackhole::scoped_attributes_concept_t::~scoped_attributes_concept_t()"
#5  0x00007f6a46fa835f in blackhole::scoped_attributes_t::~scoped_attributes_t (this=0x7f69e8007448, __in_chrg=<optimized out>)
    at /usr/include/blackhole/scoped_attributes.hpp:8
No locals.
#6  0x00007f6a467ccc2e in dnet_node_unset_trace_id () at /home/zbr/awork/elliptics/bindings/cpp/logger.cpp:178
        local_attributes = @0x7f69e8007670: 0x7f69e8007448

...

(gdb) frame 4
#4  0x00007f6a46f8f87c in blackhole::scoped_attributes_concept_t::~scoped_attributes_concept_t (this=0x7f69e8007448, 
    __in_chrg=<optimized out>) at /usr/include/blackhole/implementation/logger.ipp:198
198     BOOST_ASSERT(m_logger->state.attributes.scoped.get() == this);

Elasticsearch Frontend

Features:

  • Asynchronous.
  • Thread-safe.
  • Bulk write (the bulk size must be a configurable option).
  • Automatically discovering Elasticsearch cluster state.

Lazy attribute supplier

Consider lazy attribute supplier:

supplier = 
    | fn -> string
    | fn writer -> ().

It allows easily implementing #46, #54, #47 and any userland type based on an external trait. But if it hurts performance greatly (>5%), just drop it.

Message supplier

Currently a logger's message supplier returns only a string view, which is inconvenient, because the string view must point either at some externally allocated buffer or at a string literal. It's possible to allow returning an ADT of string | string_view instead.

  • Implement message_t class as a wrapper over variant.
  • Check the performance.

[Formatter.String] Optional.

Sometimes it's useful to declare optional attribute fields in the string pattern, to be able to perform extended formatting depending on the presence of that attribute in the set.

{
    "pattern": "[%(timestamp)s]: 0%(/[request_id])?s: %(message)s"
}
BH_LOG(log, debug, "Le shit");
BH_LOG(log, debug, "Le shit")("request_id", 100500);
[2014-05-29 15:30:51]: 0: Le shit
[2014-05-29 15:30:51]: 0/100500: Le shit

Embed libcppformat

There are two main solutions how to provide an extension formatting library to the Blackhole users:

  • Include it as an external dependency.
  • Embed it directly into the Blackhole.

Let's see what pros and cons have these solutions.

External dependency

Pros

  • No external code in Blackhole's codebase.

Cons

  • Another external linkage and dev dependency. Blackhole at this moment depends only on Boost.Thread, so doubling the dependencies hurts.
  • Possible API/ABI breakage due to external lib updates.

Embedding

Pros

  • No more API/ABI breakage because of external libraries. Embedding means that the borrowed API is a part of Blackhole and I watch over it myself.
  • Fewer external dependencies. Once I drop the Boost.Thread usage, Blackhole will become a dependency-free library.

Cons

  • External code in Blackhole's codebase.
  • Need to suppress warnings with pragmas, because cppformat isn't prepared for -Weverything flag.

Ideal

If cppformat had a stateful API for writers that allowed feeding arguments one by one (instead of as a variadic pack), then I could possibly hide cppformat inside Blackhole, providing wrappers. Think about it, but later.

Document RTTI requirement

One of the major shortcomings of Boost.Log is that it requires RTTI, even after adopting Boost.TypeIndex in 1.59.0. A quick grep in your repository shows that dynamic_cast is only used in a unit test and typeid is not used at all. However, it's hard to verify whether RTTI isn't pulled in from a different source, and whether disabling it would affect binary compatibility with the shared library part.

It would be great if the README spent a few words on documenting these things.

[Evolution] Filtering review

Checklist:

  • Primary filtering.
    • Create auxiliary attribute combined view class, which provides lightweight combined access to multiple attribute sets.
      • Copyable.
      • Movable.
      • Constructor signature: ctor({ set_t, ... }).
      • Access with casting: get<T>(const K&) -> optional<const V&>.
      • Access without casting: get(const K&) -> optional<const attribute::value&>.
    • Teach attribute::view to provide partial view: view.partial<external/internal>(); Internally should use combined view with single underlying set.
    • Logger: primary verbosity filter - set only verbosity level.
    • Logger: primary verbosity filter - set function.
  • Secondary filtering.
  • Sink filtering.

Smart thread safety.

Currently the logger object is not thread-safe.

Its thread-safety is achieved using a synchronized wrapper, which simply guards every method with a mutex. It is a pain in the ass for some people (me included).

The only part of the entire logging system that needs synchronization is the sinks. Some of them are thread-safe by design (e.g. a file sink using unbuffered writes via writev, syslog, or the Elasticsearch sink), some of them aren't.

Think about design:

  • Make the logger object itself thread-safe (remember, it keeps attributes) as the only available way. Partially achieved using a synchronized sink.
  • Make all sinks thread-safe.
  • Mark some sinks as non-thread-safe and use this mark explicitly in the frontends configuration.
  • Any other ideas.

Log wrapper that keeps some user-specified attributes while it lives.

It must act like the logger object itself.

For example:

logger_base_t log;
LOG(log, "le message")("id", 100500); // Attributes: {"id": 100500}

{
    scoped_wrapper_t<logger_base_t> wrapper(log, log::attributes_t({{"answer", 42}}));
    LOG(log, "le message")("id", 100500); // Attributes: {"id": 100500}
    LOG(wrapper, "le message")("id", 100500); // Attributes: {"id": 100500, "answer": 42}
}

LOG(log, "le message")("id", 100500); // Attributes: {"id": 100500}

[Bug] Broken build on OS X.

It doesn't compile on Mac, because googletest couldn't find something from tr1.

More verbose:

In file included from /Users/esafronov/sandbox/blackhole/foreign/gtest/src/gtest-all.cc:39:
In file included from /Users/esafronov/sandbox/blackhole/foreign/gtest/include/gtest/gtest.h:57:
In file included from /Users/esafronov/sandbox/blackhole/foreign/gtest/include/gtest/internal/gtest-internal.h:40:
/Users/esafronov/sandbox/blackhole/foreign/gtest/include/gtest/internal/gtest-port.h:484:13: fatal error: 'tr1/tuple' file not found
#   include <tr1/tuple>  // NOLINT
            ^
1 error generated.

Performance optimization for multi-threaded applications

The elliptics server node writes logs to /dev/null on our load-test stand.
If the log level is "debug", 8k rps are processed, but if the log level is "error", the elliptics server node can process up to 100k rps.
Elliptics writes logs from multiple threads, and Blackhole has many lock waits: a single lock guards the critical section in which a message is written to the log sink.

sudo strace -f -p $(pidof dnet_ioserv) 2> trace2

This log (trace2) contains many records like this:

[pid 15786] <... futex resumed> ) = -1 EAGAIN (Resource temporarily unavailable)

If the log level is "error", the log contains far fewer EAGAINs on the mutex compared to the "debug" log level.
