
fluent-logger-python's Introduction

A Python structured logger for Fluentd/Fluent Bit

Many web/mobile applications generate huge amounts of event logs (e.g. login, logout, purchase, follow, etc.). Analyzing these event logs can be really valuable for improving the service. However, the challenge is collecting these logs easily and reliably.

Fluentd and Fluent Bit solve that problem by offering easy installation, a small footprint, plugins, reliable buffering, log forwarding, and more.

fluent-logger-python is a Python library for recording events from Python applications.

Requirements

  • Python 3.7+
  • msgpack
  • IMPORTANT: Version 0.8.0 is the last version supporting Python 2.6, 3.2 and 3.3
  • IMPORTANT: Version 0.9.6 is the last version supporting Python 2.7 and 3.4
  • IMPORTANT: Version 0.10.0 is the last version supporting Python 3.5 and 3.6

Installation

This library is distributed as the 'fluent-logger' Python package. Run the following command to install it:

$ pip install fluent-logger

Configuration

The Fluentd daemon must be launched with a TCP (forward) source configuration:

<source>
  type forward
  port 24224
</source>

To quickly test your setup, add a matcher that logs to stdout:

<match app.**>
  type stdout
</match>

Usage

FluentSender Interface

sender.FluentSender is a structured event logger for Fluentd.

By default, the logger assumes the fluentd daemon is running locally. You can also send to a remote fluentd by passing host and port options.

from fluent import sender

# for local fluent
logger = sender.FluentSender('app')

# for remote fluent
logger = sender.FluentSender('app', host='host', port=24224)

To send an event, call the emit method with your event. The following example sends an event to fluentd with the tag 'app.follow' and the attributes 'from' and 'to'.

import time

# Use the current time
logger.emit('follow', {'from': 'userA', 'to': 'userB'})

# Specify an optional timestamp
cur_time = int(time.time())
logger.emit_with_time('follow', cur_time, {'from': 'userA', 'to': 'userB'})

To send events with nanosecond-precision timestamps (Fluentd 0.14 and up), specify nanosecond_precision on FluentSender.

# Use nanosecond
logger = sender.FluentSender('app', nanosecond_precision=True)
logger.emit('follow', {'from': 'userA', 'to': 'userB'})
logger.emit_with_time('follow', time.time(), {'from': 'userA', 'to': 'userB'})

You can detect errors via emit's return value: if an error occurs, emit returns False, and you can retrieve the error object via last_error.

if not logger.emit('follow', {'from': 'userA', 'to': 'userB'}):
    print(logger.last_error)
    logger.clear_last_error() # clear the stored error after handling it

To shut down the client, call the close() method.

logger.close()

Event-Based Interface

This API is a wrapper for sender.FluentSender.

First, you need to call sender.setup() to create the global sender.FluentSender instance. This needs to be done only once, for example at application startup.

The initialization code for the Event-Based API looks like this:

from fluent import sender

# for local fluent
sender.setup('app')

# for remote fluent
sender.setup('app', host='host', port=24224)

Then create events like this. This sends the event to fluentd with the tag 'app.follow' and the attributes 'from' and 'to'.

from fluent import event

# send event to fluentd, with 'app.follow' tag
event.Event('follow', {
  'from': 'userA',
  'to':   'userB'
})

event.Event has one limitation: it cannot return a success/failure result.

Other methods for the Event-Based Interface:

sender.get_global_sender # get instance of global sender
sender.close # Call FluentSender#close
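
Since event.Event cannot report failures directly, one pattern (a sketch building on the methods above) is to inspect the global sender afterwards:

from fluent import sender, event

sender.setup('app', host='host', port=24224)

event.Event('follow', {'from': 'userA', 'to': 'userB'})

# event.Event returns no result, but the global sender keeps the last error
global_sender = sender.get_global_sender()
if global_sender.last_error is not None:
    print(global_sender.last_error)
    global_sender.clear_last_error()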

Handler for buffer overflow

You can inject your own custom callback to handle buffer overflow in the event of connection failure. This mitigates data loss instead of simply throwing the data away.

import msgpack
from io import BytesIO

def overflow_handler(pendings):
    unpacker = msgpack.Unpacker(BytesIO(pendings))
    for unpacked in unpacker:
        print(unpacked)

logger = sender.FluentSender('app', host='host', port=24224, buffer_overflow_handler=overflow_handler)

You should handle any exceptions inside the handler yourself; fluent-logger ignores exceptions raised by buffer_overflow_handler.

This handler is also called when pending events exist during close().

Python logging.Handler interface

This library also provides a FluentHandler class for the standard Python logging module.

import logging
from fluent import handler

custom_format = {
  'host': '%(hostname)s',
  'where': '%(module)s.%(funcName)s',
  'type': '%(levelname)s',
  'stack_trace': '%(exc_text)s'
}

logging.basicConfig(level=logging.INFO)
l = logging.getLogger('fluent.test')
h = handler.FluentHandler('app.follow', host='host', port=24224, buffer_overflow_handler=overflow_handler)
formatter = handler.FluentRecordFormatter(custom_format)
h.setFormatter(formatter)
l.addHandler(h)
l.info({
  'from': 'userA',
  'to': 'userB'
})
l.info('{"from": "userC", "to": "userD"}')
l.info("This log entry will be logged with the additional key: 'message'.")

You can also customize the formatter via logging.config.dictConfig:

import logging.config
import yaml

with open('logging.yaml') as fd:
    conf = yaml.safe_load(fd)

logging.config.dictConfig(conf['logging'])

As with FluentSender, you can inject your own callback to handle buffer overflow in the event of connection failure. The overflow_handler referenced by the sample configuration below must be defined before the configuration is loaded, e.g.:

import msgpack
from io import BytesIO

def overflow_handler(pendings):
    unpacker = msgpack.Unpacker(BytesIO(pendings))
    for unpacked in unpacker:
        print(unpacked)

A sample configuration logging.yaml would be:

logging:
    version: 1

    formatters:
      brief:
        format: '%(message)s'
      default:
        format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
        datefmt: '%Y-%m-%d %H:%M:%S'
      fluent_fmt:
        '()': fluent.handler.FluentRecordFormatter
        format:
          level: '%(levelname)s'
          hostname: '%(hostname)s'
          where: '%(module)s.%(funcName)s'

    handlers:
        console:
            class: logging.StreamHandler
            level: DEBUG
            formatter: default
            stream: ext://sys.stdout
        fluent:
            class: fluent.handler.FluentHandler
            host: localhost
            port: 24224
            tag: test.logging
            buffer_overflow_handler: overflow_handler
            formatter: fluent_fmt
            level: DEBUG
        none:
            class: logging.NullHandler

    loggers:
        amqp:
            handlers: [none]
            propagate: False
        conf:
            handlers: [none]
            propagate: False
        '': # root logger
            handlers: [console, fluent]
            level: DEBUG
            propagate: False

Asynchronous Communication

Besides the regular interfaces - the event-based one provided by sender.FluentSender and the Python logging one provided by handler.FluentHandler - there are also corresponding asynchronous versions in asyncsender and asynchandler, respectively. These versions use a separate thread to handle communication with the remote fluentd server. This way the client of the library is not blocked while logging events, does not risk timing out if the fluentd server becomes unreachable, and is not slowed down by network overhead.

The interfaces in asyncsender and asynchandler are exactly the same as those in sender and handler, so it's just a matter of importing from a different module.

For instance, for the event-based interface:

from fluent import asyncsender as sender

# for local fluent
sender.setup('app')

# for remote fluent
sender.setup('app', host='host', port=24224)

# do your work
...

# IMPORTANT: before program termination, close the sender
sender.close()

or for the Python logging interface:

import logging
from fluent import asynchandler as handler

custom_format = {
  'host': '%(hostname)s',
  'where': '%(module)s.%(funcName)s',
  'type': '%(levelname)s',
  'stack_trace': '%(exc_text)s'
}

logging.basicConfig(level=logging.INFO)
l = logging.getLogger('fluent.test')
h = handler.FluentHandler('app.follow', host='host', port=24224, buffer_overflow_handler=overflow_handler)
formatter = handler.FluentRecordFormatter(custom_format)
h.setFormatter(formatter)
l.addHandler(h)
l.info({
  'from': 'userA',
  'to': 'userB'
})
l.info('{"from": "userC", "to": "userD"}')
l.info("This log entry will be logged with the additional key: 'message'.")

...

# IMPORTANT: before program termination, close the handler
h.close()

NOTE: it is important to close the sender or the handler at program termination. This ensures the communication thread terminates and is joined correctly. Otherwise the program won't exit, waiting for the thread, unless forcibly killed.

Circular queue mode

In some applications it can be especially important to guarantee that the logging process won't block under any circumstance, even when it's logging faster than the sending thread can handle (backpressure). In this case you can enable circular queue mode by passing True in the queue_circular parameter of asyncsender.FluentSender or asynchandler.FluentHandler. With this mode the logging thread won't block even when the queue is full: the new event is added by discarding the oldest one (see the sketch below).

WARNING: setting queue_circular to True will cause loss of events if the queue fills up completely! Make sure this can't happen, or that it's acceptable for your application.
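
A minimal sketch of enabling circular queue mode; queue_maxsize is assumed here to be the companion parameter bounding the queue's capacity:

from fluent import asyncsender as sender

# queue_circular=True: when the bounded queue is full, the oldest event
# is discarded so that emit() never blocks the caller
logger = sender.FluentSender('app', host='host', port=24224,
                             queue_circular=True,
                             queue_maxsize=10000)  # assumed capacity parameter

logger.emit('follow', {'from': 'userA', 'to': 'userB'})

# IMPORTANT: before program termination, close the sender
logger.close()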

Testing

Testing can be done using pytest.

$ pytest tests

Release

$ # Download dist.zip for release from GitHub Action artifact.
$ unzip -d dist dist.zip
$ pipx twine upload dist/*

Contributors

Patches contributed by these people:

arcivanov, billykern, chrisseto, dimaqq, ento, evasdk, frsyuki, hirokiky, johnpaulett, kiyoto, ksato9700, kzk, luhn, mastak, methane, mmasaki, montaro, panta, quasipedia, repeatedly, sleongkoan, sylvainde, takashibagura, vgavro, william-p, yoichi, yyuu

License

Apache License, Version 2.0


fluent-logger-python's Issues

Python3 RecordFormatter is broken (and undocumented)

In order for the plugin to output data that can be correctly parsed by fluentd when forwarding it to storage such as Elasticsearch, it is necessary to use the RecordFormatter class on the logging handler.

However, there are two issues:

  1. The need for this step is undocumented
  2. The code in the plugin is Python 2-compatible only, as it uses basestring

problem with exception logging

Logging an exception using _logger.exception(ex) causes an exception inside the fluent package.

  (...)
    _logger.exception(ex)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1198, in exception
    self.error(msg, *args, **kwargs)

  File "/usr/lib64/python2.7/logging/__init__.py", line 1191, in error
    self._log(ERROR, msg, args, **kwargs)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1284, in _log
    self.handle(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1294, in handle
    self.callHandlers(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1334, in callHandlers
    hdlr.handle(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 757, in handle
    self.emit(record)
  File "/home/bozydar/workspaces/data-mining/venv/lib/python2.7/site-packages/fluent/handler.py", line 106, in emit
    self.sender.emit(None, data)
  File "/home/bozydar/workspaces/data-mining/venv/lib/python2.7/site-packages/fluent/sender.py", line 54, in emit
    self.emit_with_time(label, cur_time, data)
  File "/home/bozydar/workspaces/data-mining/venv/lib/python2.7/site-packages/fluent/sender.py", line 57, in emit_with_time
    bytes_ = self._make_packet(label, timestamp, data)
  File "/home/bozydar/workspaces/data-mining/venv/lib/python2.7/site-packages/fluent/sender.py", line 68, in _make_packet
    return msgpack.packb(packet)
  File "/home/bozydar/workspaces/data-mining/venv/lib/python2.7/site-packages/msgpack/__init__.py", line 47, in packb
    return Packer(**kwargs).pack(o)
  File "msgpack/_packer.pyx", line 223, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:223)

  File "msgpack/_packer.pyx", line 225, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:225)

  File "msgpack/_packer.pyx", line 213, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:213)

  File "msgpack/_packer.pyx", line 184, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:184)

  File "msgpack/_packer.pyx", line 220, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:220)

exceptions.TypeError: can't serialize ValueError('No JSON object could be decoded',)

Used configuration:

logger = logging.getLogger('sentiment')
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel('DEBUG')
custom_format = {
            'host': '%(hostname)s',
            'where': '%(module)s.%(funcName)s',
            'type': '%(levelname)s',
            'stack_trace': '%(exc_text)s'
        }
h = handler.FluentHandler('sentiment', host='localhost', port=24224)
formatter = handler.FluentRecordFormatter(custom_format)
h.setFormatter(formatter)
logger.addHandler(h)

include tag and time IndexError

I've created a handler as per the test here (https://github.com/fluent/fluent-logger-python/blob/master/tests/test_handler.py), and I have the following fluentd config:

<source>
  type forward
  port 24224
  bind 0.0.0.0
</source>

<match app.**>
  type file
  path /var/log/td-agent/dev.log
  append true
  format json
  include_time_key true
  time_key time
  include_tag_key true
  tag_key tag
</match>

Without including the tag key there is no issue; including the tag key results in the following error:

[error]: forward error error=#<IndexError: string not matched> error_class=IndexError
[error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.10.60/lib/fluent/formatter.rb:58:in `[]='

Also, include_tag_key results in the following output:

"{'foo': 'bar', '2015-01-14T09:54:17+00:00': '2015-01-14 09:54:17'}"

instead of

"{'foo': 'bar', 'time': '2015-01-14 09:54:17'}"

I am looking to get the following output

"{'foo': 'bar', 'time': '2015-01-14 09:54:17', '_key': 'app.follow'}"

Am I missing something, or is this a potential bug?

AttributeError: 'NoneType' object has no attribute 'emit_with_time'

(py3.6) ➜  ocrla git:(master) ✗ pip install fluent-logger==0.5.2
Collecting fluent-logger==0.5.2
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/76/f6/58fa7355a484b9d9c236d6c6b97e1bb148f1b076e5d252c2bc8570fde079/fluent_logger-0.5.2-py2.py3-none-any.whl
Requirement already satisfied: msgpack-python in /Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/site-packages (from fluent-logger==0.5.2)
Installing collected packages: fluent-logger
  Found existing installation: fluent-logger 0.5.3
    Uninstalling fluent-logger-0.5.3:
      Successfully uninstalled fluent-logger-0.5.3
Successfully installed fluent-logger-0.5.2
(py3.6) ➜  ocrla git:(master) ✗ python local_image.py -i../../IMG_20170922_142015.jpg 
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/process.py", line 61, in run
    log_utils.log_info('Starting thread: %s' % self.method, self.guid)
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/utils/log_utils.py", line 41, in log_info
    event.Event('INFO', {'message': message, 'environment': config.environment, 'guid': guid})
  File "/Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/site-packages/fluent/event.py", line 13, in __init__
    sender_.emit_with_time(label, timestamp, data)
AttributeError: 'NoneType' object has no attribute 'emit_with_time'

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/process.py", line 61, in run
    log_utils.log_info('Starting thread: %s' % self.method, self.guid)
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/utils/log_utils.py", line 41, in log_info
    event.Event('INFO', {'message': message, 'environment': config.environment, 'guid': guid})
  File "/Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/site-packages/fluent/event.py", line 13, in __init__
    sender_.emit_with_time(label, timestamp, data)
AttributeError: 'NoneType' object has no attribute 'emit_with_time'

Traceback (most recent call last):
  File "local_image.py", line 55, in <module>
    begin_processing_image(image_path, image_key, output_path, guid, None, is_local=True)
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/process.py", line 435, in begin_processing_image
    log_utils.log_info('Processing complete. processing_time:%s' % (time_end - time_start), guid)
  File "/Users/wanghaisheng/workspace/OCR/binarization-experiments/ocrap/ocrla/utils/log_utils.py", line 41, in log_info
    event.Event('INFO', {'message': message, 'environment': config.environment, 'guid': guid})
  File "/Users/wanghaisheng/anaconda/envs/py3.6/lib/python3.6/site-packages/fluent/event.py", line 13, in __init__
    sender_.emit_with_time(label, timestamp, data)
AttributeError: 'NoneType' object has no attribute 'emit_with_time'
(py3.6) ➜  ocrla git:(master) ✗ 

Python3 pip installation is broken

After creating a Python 3 virtualenv with the command pyvenv-3.3 efk, the pip installation fails with:

(efk) --- efk/fluent ‹master* ?› » pip install fluent-logger
Downloading/unpacking fluent-logger
  Running setup.py egg_info for package fluent-logger

Downloading/unpacking msgpack-python (from fluent-logger)
  Running setup.py egg_info for package msgpack-python

Installing collected packages: fluent-logger, msgpack-python
  Running setup.py install for fluent-logger
    error: could not create '/usr/lib/python2.7/site-packages/fluent': Permission denied
    Complete output from command /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build-mac/fluent-logger/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EAWTAq-record/install-record.txt --single-version-externally-managed:
    running install
running build
running build_py
running install_lib
creating /usr/lib/python2.7/site-packages/fluent
error: could not create '/usr/lib/python2.7/site-packages/fluent': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build-mac/fluent-logger/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EAWTAq-record/install-record.txt --single-version-externally-managed failed with error code 1 in /tmp/pip-build-mac/fluent-logger
Storing complete log in /home/mac/.pip/pip.log

retry support

The current version of the module doesn't retry if an error occurs on the last connection (connection lost, timeout, ...), so a message is either sent or discarded without notice.

This is quite different from the behavior of Java, Ruby client.

Please fix that. Thank you very much.

Reconnect flow does not check for dead sockets

It seems that reconnect() only checks whether the socket exists and does not check for disconnected sockets.

To reproduce the issue:

  1. Run a python fluentd client and a fluentd service
  2. Send data from client to fluentd
  3. Restart fluentd while keeping client running
  4. Try to send data again to fluentd using same client

Step 4 fails due to a connection reset by peer error.

Support cloud-based FluentD deployment

This means supporting:

  1. Server host resolving to multiple IPs.
  2. Refreshing server host IPs at A/AAAA/CNAME record TTL expiration.
  3. Supporting fail-over to the next available host.
  4. Supporting round-robin load balancing.

FluentRecordFormatter fails when given a string that contains only numbers.

Simple script to illustrate:

import os
import logging
from fluent import handler


FLUENT_HOST = os.getenv('FLUENT_HOST', 'fluentd-logger')
FLUENT_PORT = int(os.getenv('FLUENT_PORT', 24224))
FLUENT_TAG = os.getenv('FLUENT_TAG', 'docker.docker-tag')

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger('fluent.test')
    h = handler.FluentHandler(FLUENT_TAG, host=FLUENT_HOST, port=FLUENT_PORT, verbose=True)
    formatter = handler.FluentRecordFormatter()
    h.setFormatter(formatter)
    logger.addHandler(h)

    logger.info('1')

extra fields

I wasn't able to pass extra fields. Have you tested something like this:

from fluent import handler
custom_format = {
  'host': '%(hostname)s',
  'where': '%(module)s.%(funcName)s',
  'type': '%(levelname)s',
  'stack_trace': '%(exc_text)s'
}
log = logging.getLogger('fluent.test')
log.setLevel(logging.DEBUG)
h = handler.FluentHandler('app.python', host='127.0.0.1', port=24224)
formatter = handler.FluentRecordFormatter(custom_format)
h.setFormatter(formatter)
log.addHandler(h)

log.debug('python fluent log test', extra={'extraField': 'test'} )

On the last line I am expecting fluentd (td-agent) to pass extraField to the collector.

Is there something I'm missing?
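
A possible workaround, assuming the formatter only emits keys listed in its format dict (see the 'Formatter attribute exclusion set' issue below): list the extra field in custom_format so the formatter picks it up from the record.

custom_format = {
  'host': '%(hostname)s',
  'where': '%(module)s.%(funcName)s',
  'type': '%(levelname)s',
  'stack_trace': '%(exc_text)s',
  # assumption: an extra field is only emitted if it appears in the format dict
  'extraField': '%(extraField)s'
}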

Support `chunk` option to enable "at-least-once" delivery

Background

According to the "Forward Protocol Specification v1", fluentd supports an option named chunk which enables at-least-once delivery of messages.

This option is very useful in cases where data loss is not acceptable.

https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1#option

The problem

The current design of fluent-logger-python, however, makes it difficult to support this new option.
Specifically:

  1. Events are buffered inside the FluentSender class as a single bytes sequence (self.pendings). There is no efficient way to reconstruct a specific event from the buffer and resend it.
  2. And this bytes-sequence buffer is effectively part of the API, so we cannot modify the format in which FluentSender buffers messages (at least, not casually) without breaking many user-defined buffer_overflow_handlers.
  3. Also, for now we lack some of the building blocks for supporting the "at-least-once" semantics. For example, there is no reliable mechanism for users to tell if a message has been delivered successfully [^]

So we need to ...

The bottom line is, we need to apply some architectural changes to make this library support the (newly-introduced) "at-least-once" semantics. Of course, we need to do it without breaking many existing programs.

What do you think about this? Or is there already a plan to make this library compliant with the v1 specification?


[^] Yes, FluentSender.emit() is supposed to notify this via its return value. But even if the method returns False, the message might be delivered anyway through the pending buffer, and this "retry" part is totally opaque to users.

override time

I am reading records from a file, and the time should be set to the time recorded in the file, not the time I send the record to the fluent receiver (i.e. in_forwarder).

I am trying to override the time by including a time key in the event message. I tried combinations of both time and timestamp with different formats. It seems the receiver disregards this key and honours the time set by the handler.

Is there any way to override the event time?

https://github.com/fluent/fluent-logger-python/blob/master/fluent/sender.py#L53

Add option not to parse json message

json.loads is quite expensive and is not needed in most cases where native logging is used. It is, however, sitting on a critical code path in the formatter, wasting CPU in the 99% of cases where it's not used.

handling connection error between fluentd

I want to use this module within a cron script for my project.
However, I can't find a way to handle connection errors between my script and fluentd.

As far as I understand, this module works as follows, so cron scripts (as opposed to daemon programs) cannot handle connection problems:

  1. When it can't connect to fluentd, data is stored in a buffer.
  2. It does not return any error, nor call an error handler function.

At the very least, I need to know that my script has remaining logs in the buffer.
Please tell me how to handle connection errors between my script and fluentd.
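
One pattern grounded in the README sections above (a sketch): emit() returns False on failure, and a buffer_overflow_handler also receives any still-pending events during close(), so a short-lived cron script can at least detect and persist unsent data.

import msgpack
from io import BytesIO
from fluent import sender

def save_unsent(pendings):
    # receives the raw msgpack buffer of undelivered events;
    # persist or log them instead of losing them silently
    unpacker = msgpack.Unpacker(BytesIO(pendings))
    for ev in unpacker:
        print('unsent:', ev)

logger = sender.FluentSender('cron.task', buffer_overflow_handler=save_unsent)
if not logger.emit('run', {'status': 'started'}):
    print(logger.last_error)  # connection errors surface here
logger.close()  # any pending events are handed to save_unsent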

FluentHandler incorrectly timestamps the records

All log records already have a timestamp from when they were created. FluentHandler loses that value completely, substituting it with the time the message is emitted, which is not terribly appropriate.

Fluent logger fails to serialize datetime

Exception:
TypeError: can't serialize datetime.datetime(2017, 1, 11, 18, 35, 57, 863758)

Script to test:

import logging
from datetime import datetime

logger = logging.getLogger("exampleApp")
# datetime serialize exception
logger.info(dict(datetime=datetime.today()))
# list, sets and nested dict exception
logger.info(dict(list=["string", 44.6, datetime.today()],
                 nested_dict=dict(available=True)))
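
A hedged workaround, continuing the script above: convert unsupported types to msgpack-friendly values before logging.

# serialize the datetime yourself before handing it to the logger
logger.info(dict(datetime=datetime.today().isoformat()))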

Please provide sample for out_file plugin of v0.14

I am trying to upgrade fluentd from 0.12 to 0.14 so that I can use nanosecond precision, but I found the configuration format has been modified; the old-style configuration is unusable under v0.14.

What is worse, I could only find the 0.12 manual for the out_file plugin.

So please provide the manual and a sample for 0.14, especially for when * is used in the path parameter; it now always raises an exception saying that * is used in path but there is no time_key.

Stop support for EOL Python versions

2.7 is EOL in 2020 so it should be kept

3.2 is EOL Februrary 2016
3.3 is EOL September 2017

We should stop supporting these versions and clean up the code to remove hacks and workarounds.
Also, add a python_requires constraint.

How to add buffer_overflow_handler correctly so that pending data is sent?

As the title says: sender.pendings is already larger than self.bufmax. If I want to send this pending data, how should I do it?
In the _send_internal function:

    if self.pendings:
        self.pendings += bytes_
        bytes_ = self.pendings

I think we should not directly append these bytes if the buffer will overflow. How about sending the pending data first, then sending the new bytes?

Formatter attribute exclusion set

The current logic behind FluentRecordFormatter is to log only the attributes that are specified as keys, with the values being formatted. This is too limiting, as LoggerAdapter and extra fields allow the same formatter to be used with different sets of extra fields, all of which need to be transmitted to FluentD. Furthermore, this logic requires creating a handler-formatter pair and adding each one to a distinct logger if different data set types are sent from different loggers.

There are standard fields that will always occur on the LogRecord. All other fields would be custom user-specified ones, including extra.

The solution, to my mind, is simple: provide a mode where, instead of an inclusion dictionary, an exclusion set is used to filter out the fields that we don't want to send through the formatter. If the exclusion set is None, we assume current behavior. If the exclusion set is set()/[]/(), we send everything. In all other cases we exclude the keys in the specified set.

related to #47
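
A sketch of what the proposed usage could look like (the exclude_attrs parameter name follows the related proposal and is hypothetical here):

from fluent import handler

# None      -> current include-only behavior
# set()     -> send every LogRecord attribute
# otherwise -> send everything except the listed attributes
formatter = handler.FluentRecordFormatter(
    exclude_attrs={'args', 'msecs', 'relativeCreated'})  # hypothetical parameter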

Cannot connect to IPv6.

Hi,

I am trying to connect to an IPv6 address using this library. But while trying to connect to an IPv6 server it raises an error: gaierror(-9, 'Address family for hostname not supported')

reconnect and send after close method; sender doesn't have a "closed" state

I can't shutdown the client with the close() method.
I am using python 2.7 on jupyter notebook and structured the code as your example:

from fluent import sender
from fluent import event

host_name = (my_host_url)
port = 24224
logger = sender.FluentSender('app', host=host_name, port=port)

if not logger.emit("test", {'message': 'potato'}):
       print(logger.last_error)
       logger.clear_last_error()

logger.close()

if not logger.emit("test", {'message': 'Oops'}):
       print(logger.last_error)
       logger.clear_last_error()

The second log with the "Oops" message is emitted as if nothing happened :(

I tried both the FluentSender Interface and the Event-Based Interface, and both did the same thing.
Did I misunderstand the purpose of closing the client (I supposed it would cut the connection and prevent me from emitting more logs), or am I forgetting to do something when I try to close it?

Thanks,
Débora.

Can this package be used with TLS?

Hi,
I'm using fluent-logger with a fluentd instance on a remote host and would like to secure the data transfer using TLS.

On the fluentd side, I follow the instructions set here.

IIUC, there's no support for TLS/SSL in this package yet, and in order to get that, we need to wrap the socket when needed (e.g., given a parameter in the sender's constructor) as described here.

Please tell me if I'm getting something wrong here, and if not, may I submit a pull request with this functionality?

Windows bug: 'socket' has no attribute 'MSG_DONTWAIT'

Recent versions of the package have a bug that prevents using it on Windows, since Python's socket module on Windows doesn't have the MSG_DONTWAIT option. Here's a sample traceback:

  File "<src_path>\app.py", line 43, in main
    logger.info('Account api started')
  File "<src_path>\appdata\local\programs\python\python36\Lib\logging\__init__.py", line 1306, in info
    self._log(INFO, msg, args, **kwargs)
  File "<src_path>\appdata\local\programs\python\python36\Lib\logging\__init__.py", line 1442, in _log
    self.handle(record)
  File "<src_path>\appdata\local\programs\python\python36\Lib\logging\__init__.py", line 1452, in handle
    self.callHandlers(record)
  File "<src_path>\appdata\local\programs\python\python36\Lib\logging\__init__.py", line 1514, in callHandlers
    hdlr.handle(record)
  File "<src_path>\appdata\local\programs\python\python36\Lib\logging\__init__.py", line 863, in handle
    self.emit(record)
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\handler.py", line 225, in emit
    data)
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\sender.py", line 97, in emit_with_time
    return self._send(bytes_)
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\sender.py", line 139, in _send
    return self._send_internal(bytes_)
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\sender.py", line 148, in _send_internal
    self._send_data(bytes_)
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\sender.py", line 190, in _send_data
    self._check_recv_side()
  File "<src_path>\Envs\pms.account.queryapi\lib\site-packages\fluent\sender.py", line 173, in _check_recv_side
    recvd = self.socket.recv(4096, socket.MSG_DONTWAIT)
AttributeError: module 'socket' has no attribute 'MSG_DONTWAIT'

I had to switch to version 0.6.0, which has no failing _check_recv_side method.
Here's a similar problem and the solution used in another lib: https://github.com/banjiewen/bernhard/issues/15

logging handler yields 'record' as string instead of map

When using the logging handler, the emitted record has type string (fixedstr) rather than map.

According to the fluent plugin docs at http://docs.fluentd.org/articles/plugin-development, 'record' is required to be a Hash object, so I believe the implication is that this module should only emit 'map'.

Example code (based on the examples from the docs), followed by some extra logging I added to fluentd's in_forward plugin:

Example using 'handler':

import logging
from fluent import handler

logging.basicConfig(level=logging.INFO)
l = logging.getLogger('fluent.test')
l.addHandler(handler.FluentHandler('app.follow'))
l.info({
  'message': 'test from info',
})

2015-04-10 22:15:07 +0000 [info]: on_read_msgpack:
00000000  93 aa 61 70 70 2e 66 6f 6c 6c 6f 77 ce 55 28 4b  |..app.follow.U(K|
00000010  6b bd 7b 27 6d 65 73 73 61 67 65 27 3a 20 27 74  |k.{'message': 't|
00000020  65 73 74 20 66 72 6f 6d 20 69 6e 66 6f 27 7d     |est from info'}|
2015-04-10 22:15:07 +0000 [info]: on_message:  msg=["app.follow", 1428704107, "{'message': 'test from info'}"] chunk_size=47 source="host: 127.0.0.1, addr: 127.0.0.1, port: 49307"
2015-04-10 22:15:07 +0000 [info]: record_class=String

Example using 'event' directly, which works correctly:

from fluent import sender
from fluent import event
sender.setup('app')
event.Event('follow', {
  'message': 'this is a test',
})

2015-04-10 22:15:07 +0000 [info]: on_read_msgpack:
00000000  93 aa 61 70 70 2e 66 6f 6c 6c 6f 77 ce 55 28 4b  |..app.follow.U(K|
00000010  6b 81 a7 6d 65 73 73 61 67 65 ae 74 68 69 73 20  |k..message.this |
00000020  69 73 20 61 20 74 65 73 74                       |is a test|
2015-04-10 22:15:07 +0000 [info]: on_message:  msg=["app.follow", 1428704107, {"message"=>"this is a test"}] chunk_size=41 source="host: 127.0.0.1, addr: 127.0.0.1, port: 49306"
2015-04-10 22:15:07 +0000 [info]: record_class=Hash

Some output plugins do not care about the type of 'record', so this may appear to work, but other plugins discard the record or have issues if it is not a hash.
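
As the 'Python3 RecordFormatter is broken (and undocumented)' issue above notes, attaching a FluentRecordFormatter to the handler is what makes the record arrive as a map; a sketch of the corrected example:

import logging
from fluent import handler

logging.basicConfig(level=logging.INFO)
l = logging.getLogger('fluent.test')
h = handler.FluentHandler('app.follow')
h.setFormatter(handler.FluentRecordFormatter())  # record is now serialized as a map
l.addHandler(h)
l.info({'message': 'test from info'})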

Fluent logger fails to serialize exceptions.

Here is a simple script to illustrate the problem:


import os
import logging
from fluent import handler


FLUENT_HOST = os.getenv('FLUENT_HOST', 'fluentd-logger')
FLUENT_PORT = int(os.getenv('FLUENT_PORT', 24224))
FLUENT_TAG = os.getenv('FLUENT_TAG', 'docker.docker-tag')

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger('fluent.test')
    h = handler.FluentHandler(FLUENT_TAG, host=FLUENT_HOST, port=FLUENT_PORT, verbose=True)
    formatter = handler.FluentRecordFormatter()
    h.setFormatter(formatter)
    logger.addHandler(h)

    logger.info(Exception('foo'))

this produces:
('docker.docker-tag', 1481107415, {'sys_module': 'script', 'message': Exception('foo',), 'sys_name': 'fluent.test', 'sys_host': 'K501LB'})
('docker.docker-tag', 1481107415, {'message': "Can't output to log", 'traceback': 'Traceback (most recent call last):\n File "/tmp/fl/.venv/local/lib/python2.7/site-packages/fluent/sender.py", line 69, in emit_with_time\n bytes_ = self._make_packet(label, timestamp, data)\n File "/tmp/fl/.venv/local/lib/python2.7/site-packages/fluent/sender.py", line 101, in _make_packet\n return msgpack.packb(packet)\n File "/tmp/fl/.venv/local/lib/python2.7/site-packages/msgpack/init.py", line 47, in packb\n return Packer(**kwargs).pack(o)\n File "msgpack/_packer.pyx", line 231, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:3661)\n File "msgpack/_packer.pyx", line 233, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:3503)\n File "msgpack/_packer.pyx", line 221, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:3230)\n File "msgpack/_packer.pyx", line 192, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:2657)\n File "msgpack/_packer.pyx", line 228, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:3382)\nTypeError: can't serialize Exception('foo',)\n', 'level': 'CRITICAL'})
INFO:fluent.test:foo

Fluent send via UDP

Hi folks

I've tried to send logs to fluentd with this code:

from fluent import sender

logger = sender.FluentSender('app', host='127.0.0.1', port=24224)
logger.emit('test', {'from': 'localhost', 'to': 'fluentd'})
logger.close()

And success!
Now, do you have any idea how to send that via UDP?
If it's already a feature, please document it in readme.md. I think I'm not the only one who wants to send logs via UDP.

Thank you!

Automate releases

Currently Travis isn't uploading the release. We need to make sure that tagged releases in master get uploaded automatically.

Updating version and git tags

Hi,

1/ Could you please add tags to the repository?
2/ Could you please bump the version number in setup.py? This would allow people (like me) who need the newer code to use dependency_links and adequate version requirements in setup.py to automatically fetch code from GitHub rather than PyPI.

TIA.

Allow fully customizable formatter

Currently Fluent Logger supports fmt as a dict and exclusion attributes via exclude_attrs. Sometimes, however, it's necessary to have a format mechanism flexible enough to allow fully dynamic formatting.

This change introduces the ability to specify a callable in fmt which accepts a record as a parameter and returns a data dictionary. Additionally, that callable object should itself have a callable attribute usesTime which returns a bool in accordance with Formatter.usesTime() (see the sketch below).
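
A minimal sketch of such a callable fmt (the names below are chosen for illustration; passing a callable to FluentRecordFormatter is the proposal, not necessarily the current API):

def fmt(record):
    # receives a logging.LogRecord and returns the data dictionary to emit
    return {
        'where': '%s.%s' % (record.module, record.funcName),
        'level': record.levelname,
        'message': record.getMessage(),
    }

# mirrors logging.Formatter.usesTime(): tells the formatter whether
# the format needs record.asctime to be computed
fmt.usesTime = lambda: False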

Why kwargs in FluentSender?

I noticed that FluentSender has **kwargs, but it's a black hole: the values are never used.

Is there a reason for that to be there? If not, it should be removed.

This would be a backwards-incompatible change, so it would need to wait until the next major release.

Document if Sender is fork-safe.

Consider a scenario where:

  • master process attaches fluent Sender or logging Handler
  • master process forks off a bunch of worker processes
  • workers use handler

Will the socket get cloned or reopened?
Does fluent logger expect a response from fluentd?
If it does, can the response be read by the wrong sender process?

Transport endpoint is not connected after td-agent restart

After restarting td-agent, the socket can't be closed properly and fails with the trace below.
How to reproduce: just restart td-agent.

2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self.logger.info(msg, *args, **kwargs)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1167, in info
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self._log(INFO, msg, args, **kwargs)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1286, in _log
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self.handle(record)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1296, in handle
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self.callHandlers(record)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1336, in callHandlers
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     hdlr.handle(record)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 759, in handle
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self.emit(record)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/fluent/handler.py", line 187, in emit
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     data)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/fluent/sender.py", line 98, in emit_with_time
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     return self._send(bytes_)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/fluent/sender.py", line 127, in _send
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     return self._send_internal(bytes_)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/fluent/sender.py", line 149, in _send_internal
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self._close()
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/fluent/sender.py", line 200, in _close
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     self.socket.shutdown(socket.SHUT_RDWR)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/socket.py", line 228, in meth
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server     return getattr(self._sock,name)(*args)
2017-12-04 12:11:04,751.751 2165 ERROR oslo_messaging.rpc.server error: [Errno 107] Transport endpoint is not connected


FluentSender suppresses too many exceptions

In _send_internal(), all exceptions are captured and suppressed, even if they are the result of a basic configuration error. This results in a total, silent failure of the library.

In my case, I inadvertently set the port number to a string, rather than an int. Fluentd failed silently.

Ultimately, I modified the code to log the exception in _send_internal, and received:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/fluent/sender.py", line 97, in _send_internal
    self._reconnect()
  File "/usr/local/lib/python3.5/site-packages/fluent/sender.py", line 124, in _reconnect
    sock.connect((self.host, self.port))
TypeError: an integer is required (got type str)

I would suggest:

  1. Add basic type checking to the constructor (a sketch follows this list).
  2. Revise the overly broad exception handling to only catch transient connection errors, allowing other exceptions to be handled appropriately by the calling code.
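
A minimal sketch of suggestion 1 (a hypothetical standalone check, not the library's actual code):

def check_connection_params(host, port):
    # fail fast on configuration errors instead of failing silently later
    if not isinstance(host, str):
        raise TypeError('host must be str, got %s' % type(host).__name__)
    if not isinstance(port, int):
        raise TypeError('port must be int, got %s' % type(port).__name__)

check_connection_params('localhost', 24224)    # ok
check_connection_params('localhost', '24224')  # raises TypeError immediately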

FluentRecordFormatter._structuring may raise AttributeError

json.loads(str(msg)) may not return a dict; it can return:

  • int if msg is a json dumped from an int, for example msg = '123'
  • list if msg is a json dumped from a list, for example msg = '["s", "1"]'

Only dict have items() method.

So if you get a logger, add a fluent.handler to it, and then execute
logger.info('1')
you will get an AttributeError.
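
A short demonstration of the failure mode:

import json

print(type(json.loads('123')))         # <class 'int'>
print(type(json.loads('["s", "1"]')))  # <class 'list'>

# only dicts have .items(), so _structuring's msg.items() call fails here
try:
    json.loads('1').items()
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'items'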

Massive threading and concurrency issues in async

While investigating closure issues in async I realized it was suboptimally implemented, with several concurrency issues. Firstly, there was about three times as much code as necessary. Secondly, flush, drain and queue_timeout were all added to paper over the fact that the async thread couldn't be stopped reliably.

This became a massive rewrite of the entire async handler and sender, which shrank the code dramatically, removed the queue timeout, made flush the default (observing the expected behavior of the Handler), and removed all possible race conditions on closing and sending.

Finally, virtually all tests have been rewritten and cleaned up, as they were reporting failures as successes. Tests now take only a few seconds to complete due to the removal of sleeps that pretended to synchronize several threads.

Original issue description:

Currently the async handler will flush but will also discard on close.
These parameters need to be controllable at handler instantiation to make sure that no logs are lost.

https://github.com/fluent/fluent-logger-python/blob/master/fluent/asyncsender.py#L76

Support sending data over HTTP(s)

It would be nice to add the option to log data to in_http.

A common use case is for people who run Fluentd on Heroku (ex: see this repo.)

NOTE to the maintainers/contributors: this is meant to be one of the projects for Facebook Open Academy =)

Asynchandler sending guarantees

How/when does the async handler decide to flush its buffers? What's the maximum time it would take for any buffered data to be pumped to its outbound TCP socket?

We send our logs to a centralized log collector, which then ships logs (via HTTP) to Sumologic for indexing, and we have seen lag times of up to a few minutes between a logger being invoked and the logs appearing in Sumo searches; I understand there's unavoidable end-to-end network delay, but I would like to get a sense of the local guarantees.

StreamHandler format customisation

Hello.

I noticed that the logs are also output to stdout via a StreamHandler with the default format.

How do I disable this so that I can attach my own StreamHandler and customise it, or how can I change the format of the one that the module attaches? I would like to send one format to fluentd and another to the console.

From what I can see in the README, the example using logging.config.dictConfig seems to be able to change that; however, I am looking for a simpler way.

Thank you.

FluentRecordFormatter not compatible with Python 3.2+ Formatter

Since Python 3.2, the logging.Formatter __init__ method has an extra parameter, style:

def __init__(self, fmt=None, datefmt=None, style='%'):

but fluent.handler.FluentRecordFormatter does not have it:

def __init__(self, fmt=None, datefmt=None):

The problem appears when reading logging configuration from a ConfigParser-format file using
logging.config.fileConfig (e.g. loading logging configuration from a Pyramid ini file).
When logging.config._create_formatters runs f = c(fs, dfs, stl), the following exception is raised:
TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given

I suggest adding the style parameter to the fluent.handler.FluentRecordFormatter __init__ definition.

Warnings with msgpack 0.5.0

/usr/local/lib/python3.6/site-packages/msgpack/__init__.py:47: FutureWarning: use_bin_type option is not specified. Default value of the option will be changed in future version.
return Packer(**kwargs).pack(o)

TODO: Need to determine how we handle Unicode strings and how FluentD processes them.

Issues with the logging.Handler interface

So, I'm trying to integrate fluent into one of our apps, and I'm running into some issues using the logging.Handler interface.

First off, when I use the event interface, everything looks like I expect it to:

fluent_sender.setup('app', host='localhost', port=24224)
fluent_event.Event('info', {'connect': ''})

This gives me this output, which is exactly what I want:
2012-12-02T14:00:09-08:00 app.info {"connect":""}

When I try to use the logging.Handler interface, I get something different:

self.logger = logging.getLogger(self.__class__.__name__)
self.logger.addHandler(handler.FluentHandler('app', host='localhost', port=24224))
self.logger.info({'connect': ''})

This gives me this:
2012-12-02T13:08:05-08:00 app {"sys_module":"app","sys_name":"fluent","sys_host":"hostname","connect":""}
(that is all on one line)

Ideally, I would like the output to be the same as when I use the event interface. Also, when I specify multiple items in the log, like so:

self.logger.info({'level': 'info', 'connect': ''})

The fields in the log are not ordered and show up in a different order every time.

Am I doing something wrong here? Can you point me in the right direction if so?

Thanks,
Andy

Events not delivered when connection is dropped

Problem

We have a Fluentd instance running behind an Amazon Load Balancer. Load balancer has a timeout setting (defaults to 60s, for purposes of testing I have set it to 5s). As per docs: "If no data has been sent or received by the time that the idle timeout period elapses, the load balancer closes the connection."

When sending data, the FluentSender tries to reuse the connection if possible, and reconnect in case of a socket.error. If we keep sending events one after another (time between sends is < timeout) everything works fine. But if we consider the following example:

  • Send event no. 1
  • Wait for timeout (e.g. 10s)
  • Send event no. 2

Since the connection has been closed by the load balancer, I would expect to receive a socket error when sending event no.2. But the FluentSender (or specifically, the socket object) does not seem to register that the connection was dropped. There is no error when sending event no. 2 (emit will return True), but the event never reaches the destination.

Debugging info

This was tested on Python 2.7.6 and Python 3.4.3.

I have prepared some examples to demonstrate the issue. For simplicity, we are using the socket directly, without using the FluentSender.

Example 1

In this example we send 1 event, then wait for timeout, and send a few more events. There is no socket error, even though the connection was dropped.

Code:

import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3.0)
sock.connect(('myhost', 24224))

# send event no. 1
print('Sending event 1')
sock.sendall('event 1')

# wait for timeout
print('Waiting for timeout...')
time.sleep(10)

# send more events
for i in range(2, 11):
    print('Sending event {}'.format(i))
    sock.sendall('event {}'.format(i))

Output:

Sending event 1
Waiting for timeout...
Sending event 2
Sending event 3
Sending event 4
Sending event 5
Sending event 6
Sending event 7
Sending event 8
Sending event 9
Sending event 10

Example 2

This is similar to Example 1, but with a short delay between additional events. In this case we do get a socket error, but only when trying to send the 4th event after the timeout.

Code:

# ... snip

# send more events
for i in range(2, 11):
    print('Sending event {}'.format(i))
    sock.sendall('event {}'.format(i))

    # add a short delay before sending the next event
    time.sleep(0.1)

Output:

Sending event 1
Waiting for timeout...
Sending event 2
Sending event 3
Sending event 4
Sending event 5
Traceback (most recent call last):
  File "socktst.py", line 18, in <module>
    sock.sendall('event {}'.format(i))
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 32] Broken pipe

Solution?

One way of solving this would be to add an option to always reconnect (False by default) instead of reusing the connection. The problem might also be caused by the way the load balancer drops connections, but I'm not sure how to check this. So any help on this is appreciated.
