
Name

lua-resty-logger-socket - nonblocking remote access logging for Nginx

Table of Contents

Status

This library is still experimental and under early development.

Description

This Lua library is a remote logging module for ngx_lua:

http://wiki.nginx.org/HttpLuaModule

It is aimed at replacing Nginx's standard ngx_http_log_module by pushing access logs to a remote server over a nonblocking socket. A common remote log server that accepts socket input is syslog-ng.

This Lua library takes advantage of ngx_lua's cosocket API, which ensures 100% nonblocking behavior.

Synopsis

    lua_package_path "/path/to/lua-resty-logger-socket/lib/?.lua;;";

    server {
        location / {
            log_by_lua '
                local logger = require "resty.logger.socket"
                if not logger.initted() then
                    local ok, err = logger.init{
                        host = "xxx",
                        port = 1234,
                        flush_limit = 1234,
                        drop_limit = 5678,
                    }
                    if not ok then
                        ngx.log(ngx.ERR, "failed to initialize the logger: ",
                                err)
                        return
                    end
                end

                -- construct the custom access log message in
                -- the Lua variable "msg"

                local bytes, err = logger.log(msg)
                if err then
                    ngx.log(ngx.ERR, "failed to log message: ", err)
                    return
                end
            ';
        }
    }

Back to TOC

Methods

This logger module is designed to be shared by all requests inside an Nginx worker process, so currently only one remote log server is supported. Sharding across multiple log servers may be supported in the future.

Back to TOC

init

syntax: ok, err = logger.init(user_config)

Initializes the logger with the user's configuration. The logger must be initialized via this method before use; calling the other methods first will return an error.

Available user configurations are listed as follows:

  • flush_limit

    If the buffered messages' size plus the current message size reaches (>=) this limit (in bytes), the buffered log messages will be written to the log server. Defaults to 4096 (4KB).

  • drop_limit

    If the buffered messages' size plus the current message size exceeds this limit (in bytes), the current log message will be dropped because the buffer is full. Defaults to 1048576 (1MB).

  • timeout

    Sets the timeout (in ms) for subsequent socket operations, including the connect method. Defaults to 1000 (1 second).

  • host

    Log server host.

  • port

    Log server port number.

  • sock_type

    IP protocol type to use for transport layer. Can be either "tcp" or "udp". Default is "tcp".

  • path

    If the log server uses a stream-typed unix domain socket, path is the socket file path. Note that host/port and path cannot both be empty. At least one must be supplied.

  • max_retry_times

    Maximum number of retries after a connect to the log server or a send of log messages fails.

  • retry_interval

    The delay (in ms) before retrying a connect to the log server or a resend of log messages. Defaults to 100 (0.1 second).

  • pool_size

    Keepalive pool size used by sock:setkeepalive. Defaults to 10.

  • max_buffer_reuse

    Maximum number of times the internal logging buffer is reused before a new one is created (to prevent memory leaks).

  • periodic_flush

    Periodic flush interval (in seconds). Set to nil to turn off this feature.

  • ssl

    Boolean, enable or disable connecting via SSL. Default to false.

  • ssl_verify

    Boolean, enable or disable verifying host and certificate match. Default to true.

  • sni_host

    Set the hostname to send in SNI and to use when verifying certificate match.
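Putting the options above together, an init call might look like the following sketch. The host, port, and limit values here are placeholders, not recommendations:

```lua
local logger = require "resty.logger.socket"

local ok, err = logger.init{
    host            = "127.0.0.1",  -- placeholder log server address
    port            = 514,          -- placeholder port
    sock_type       = "tcp",
    flush_limit     = 4096,         -- flush once >= 4KB is buffered
    drop_limit      = 1048576,      -- drop new messages past 1MB
    timeout         = 1000,         -- ms
    max_retry_times = 3,
    retry_interval  = 100,          -- ms
    pool_size       = 10,
    periodic_flush  = 5,            -- seconds; nil disables periodic flushing
}
if not ok then
    ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
end
```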

Back to TOC

initted

syntax: initted = logger.initted()

Get a boolean value indicating whether this module has been initted (by calling the init method).

Back to TOC

log

syntax: bytes, err = logger.log(msg)

Logs a message. By default, the message is buffered in the logger module until flush_limit is reached, at which point the logger flushes all buffered messages to the remote log server over a socket. bytes is the number of bytes successfully buffered in the logger. If bytes is nil, err is a string describing the error that occurred on this call. If bytes is not nil, err (when non-nil) carries a previous error message from a background flush; err can also be nil in that case.
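The return-value convention above can be handled like this (msg is assumed to be a prepared log line):

```lua
local bytes, err = logger.log(msg)
if err then
    -- err may describe this call's failure (bytes == nil) or a
    -- previous background flush failure (bytes ~= nil)
    ngx.log(ngx.ERR, "failed to log message: ", err)
end
if bytes then
    ngx.log(ngx.DEBUG, "buffered ", bytes, " bytes")
end
```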

Back to TOC

flush

syntax: bytes, err = logger.flush()

Flushes any buffered messages to the remote server immediately. You usually do not need to call this manually, because flushing happens automatically when the buffer reaches flush_limit.
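A minimal sketch of forcing a flush manually, e.g. after logging a final batch of messages:

```lua
-- push out whatever is currently buffered, without waiting
-- for flush_limit to be reached
local bytes, err = logger.flush()
if err then
    ngx.log(ngx.ERR, "failed to flush log messages: ", err)
end
```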

Back to TOC

Installation

You need to compile at least ngx_lua 0.9.0 with your Nginx.

You need to configure the lua_package_path directive to add the path of your lua-resty-logger-socket source tree to ngx_lua's Lua module search path, as in

# nginx.conf
http {
    lua_package_path "/path/to/lua-resty-logger-socket/lib/?.lua;;";
    ...
}

and then load the library in Lua:

local logger = require "resty.logger.socket"

Back to TOC

TODO

  • Multiple log server sharding and/or failover support.
  • "match_similar" utf8 support test.

Back to TOC

Authors

Jiale Zhi [email protected], CloudFlare Inc.

Yichun Zhang (agentzh) [email protected], CloudFlare Inc.

Back to TOC

Copyright and License

This module is licensed under the BSD license.

Copyright (C) 2013, by Jiale Zhi [email protected], CloudFlare Inc.

Copyright (C) 2013, by Yichun Zhang [email protected], CloudFlare Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Back to TOC

lua-resty-logger-socket's People

Contributors

agentzh, calio, doujiang24, guanlan, hamishforbes, p0pr0ck5, robholland, tianchaijz


lua-resty-logger-socket's Issues

request help: use this lib in nginx stream subsystem

Hi folks, thank you for open sourcing this lib.

I tried using this lib in the Nginx stream subsystem and it works well, but I found that this warning always comes up when running the test cases:

    if not ngx.config or not ngx.config.ngx_lua_version
        or ngx.config.ngx_lua_version < 9003 then
        is_exiting = function() return false end
        ngx_log(CRIT, "We strongly recommend you to update your ngx_lua module to "
                      .. "0.9.3 or above. lua-resty-logger-socket will lose some log "
                      .. "messages when Nginx reloads if it works with ngx_lua module "
                      .. "below 0.9.3")

In the Nginx http subsystem, ngx.config looks like this:

{
  debug = true,
  nginx_configure = <function 1>,
  nginx_version = 1019009,
  ngx_lua_version = 10020,
  prefix = <function 2>,
  subsystem = "http"
}

and in the Nginx stream subsystem, ngx.config looks like this:

{
  debug = true,
  nginx_configure = <function 1>,
  nginx_version = 1019009,
  ngx_lua_version = 10,
  prefix = <function 2>,
  subsystem = "stream"
}

I would be happy to submit a PR to fix this if you agree to make this library compatible with the Nginx stream subsystem.

Datagram Unix socket doesn't work

_do_connect() fails if sock_type is "udp" and path is specified instead of host and port.

In this case, a UDP socket is created and then its :connect() method is called, but no such method exists, so an attempt to call method 'connect' (a nil value) error is thrown.

A simple fix is here: #35

cosocket disabled in the context of log_by_lua*

OpenResty 1.11.2.1.

According to the current docs, the cosocket API is disabled in the log_by_lua context.

I need to log some variables like ngx.var.status; which context should I put the logging code in? Thanks!
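For context: this library itself works around that restriction by buffering messages in log_by_lua* and deferring the actual socket send to a ngx.timer.at callback, where cosockets are enabled. A minimal sketch of that pattern (the address 127.0.0.1:5140 and the message format are placeholders, and this is not the library's actual code):

```lua
log_by_lua_block {
    local msg = ngx.var.status .. " " .. ngx.var.request_uri .. "\n"

    -- cosockets are disabled in log_by_lua*, so hand the message off
    -- to a 0-delay timer; the timer callback runs in a context where
    -- cosockets are allowed
    local ok, err = ngx.timer.at(0, function(premature, msg)
        if premature then return end
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)  -- 1s timeout
        local ok, err = sock:connect("127.0.0.1", 5140)  -- placeholder
        if not ok then
            ngx.log(ngx.ERR, "connect failed: ", err)
            return
        end
        local bytes, err = sock:send(msg)
        if not bytes then
            ngx.log(ngx.ERR, "send failed: ", err)
            return
        end
        sock:setkeepalive(0, 10)  -- return connection to the pool
    end, msg)
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
}
```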

Why does ngx_lua delete the unix domain socket file when it connects to the server?

We are using a unix domain socket to connect to the server in my project. When testing, I found that the unix domain socket file (which is generated by the server when it listens) was deleted every time ngx.socket.tcp called connect. Checking the source code, I found the following in the function ngx_parse_unix_domain_url:

    /* delete domain socket file. */
    ngx_delete_file(saun->sun_path);

It seems ngx.socket.tcp deletes the socket file every time it connects, which causes connect failures with the error "unix:/home/nginx/nginx/server.sock failed (2: No such file or directory)".

So my question is: why does it delete the socket file? Does anybody know? Please advise.

Test case failures on rhel7.6 ppc64le platform

Hi All,

I built the nginx binary on RHEL 7.6 ppc64le (version 1.17.1.1rc0) from the source code at https://github.com/openresty/openresty.
Please note that I copied and used ppc64le-compiled LuaJIT code while building openresty (nginx).
Below is the command I used to compile openresty:

./configure --with-cc-opt="-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC" --with-http_image_filter_module --with-http_dav_module --with-http_auth_request_module --with-poll_module --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module --with-http_iconv_module --with-http_drizzle_module --with-http_postgres_module --with-http_addition_module --add-module=/usr/openresty/openresty_test_modules/nginx-eval-module --add-module=/usr/openresty/openresty_test_modules/replace-filter-nginx-module

I then tried to execute the test cases for 'lua-resty-logger-socket' as follows:

[root]# pwd
/usr/openresty/openresty_test_modules/lua-resty-logger-socket
[root]# prove -r t

NOTE: the 'lua-resty-logger-socket' module source code is the latest, cloned from https://github.com/cloudflare/lua-resty-logger-socket

But I am getting the following kinds of repeated errors:

	[root ]# prove -r t/

	#   Failed test 'TEST 1: small flush_limit, instant flush - grep_error_log_out (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1149.
	# @@ -1,3 +0,0 @@
	# -resend log messages to the log server: timeout
	# -resend log messages to the log server: timeout
	# -resend log messages to the log server: timeout

	#   Failed test 'TEST 15: max reuse - pattern "log buffer reuse limit (1) reached, create a new "log_buffer_data"" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 15: max reuse - pattern "log buffer reuse limit (1) reached, create a new "log_buffer_data"" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	t/sanity.t ... 21/198
	#   Failed test 'TEST 15: max reuse - tcp_query ok'
	#   Failed test 'TEST 15: max reuse - tcp_query ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 188.
	#          got: ''
	#     expected: '111222333444555'

	#   Failed test 'TEST 15: max reuse - TCP query length ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 1276.
	#          got: '0'
	#     expected: '15'
	t/sanity.t ... 41/198
	#   Failed test 'TEST 12: drop log test - pattern "logger buffer is full, this log message will be dropped" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 12: drop log test - pattern "logger buffer is full, this log message will be dropped" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	t/sanity.t ... 183/198 # Looks like you failed 6 tests of 198.
	t/sanity.t ... Dubious, test returned 6 (wstat 1536, 0x600)
	Failed 6/198 subtests
	t/timeout.t .. 1/46
	#   Failed test 'ERROR: client socket timed out - TEST 1: connect timeout
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 1: connect timeout - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 1: connect timeout - response_body - response is expected (repeated req 0, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	#          got: ''
	#     expected: 'foo
	# '

	#   Failed test 'TEST 1: connect timeout - pattern "tcp socket connect timed out" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 1: connect timeout - pattern "reconnect to the log server" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	t/timeout.t .. 6/46
	#   Failed test 'ERROR: client socket timed out - TEST 1: connect timeout
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 1: connect timeout - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 1: connect timeout - response_body - response is expected (repeated req 1, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	#          got: ''
	#     expected: 'foo
	# '
	t/timeout.t .. 9/46
	#   Failed test 'TEST 1: connect timeout - pattern "tcp socket connect timed out" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 1: connect timeout - pattern "reconnect to the log server" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 2: send timeout - pattern "resend log messages to the log server: timeout" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 2: send timeout - pattern "resend log messages to the log server: timeout" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 3: risk condition - tcp_query ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 188.
	#          got: ''
	#     expected: '12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849501234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950'

	#   Failed test 'TEST 3: risk condition - TCP query length ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 1276.
	#          got: '0'
	#     expected: '182'
	t/timeout.t .. 25/46
	#   Failed test 'TEST 3: risk condition - tcp_query ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 188.
	#          got: ''
	#     expected: '12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849501234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950'

	#   Failed test 'TEST 3: risk condition - TCP query length ok'
	#   at /usr/local/share/perl5/Test/Nginx/Util.pm line 1276.
	#          got: '0'
	#     expected: '182'
	t/timeout.t .. 31/46
	#   Failed test 'ERROR: client socket timed out - TEST 4: return previous log error
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 4: return previous log error - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 4: return previous log error - response_body - response is expected (repeated req 0, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	# @@ -1,3 +0,0 @@
	# -foo
	# -bar
	# -foo
	t/timeout.t .. 34/46
	#   Failed test 'TEST 4: return previous log error - pattern "lua tcp socket connect timed out" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 4: return previous log error - pattern "reconnect to the log server: timeout" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 4: return previous log error - pattern "log error:try to send log messages to the log server failed after 1 retries: try to connect to the log server failed after 1 retries: timeout" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	t/timeout.t .. 37/46
	#   Failed test 'ERROR: client socket timed out - TEST 4: return previous log error
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 4: return previous log error - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 4: return previous log error - response_body - response is expected (repeated req 1, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	# @@ -1,3 +0,0 @@
	# -foo
	# -bar
	# -foo

	#   Failed test 'TEST 4: return previous log error - pattern "lua tcp socket connect timed out" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 4: return previous log error - pattern "reconnect to the log server: timeout" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.

	#   Failed test 'TEST 4: return previous log error - pattern "log error:try to send log messages to the log server failed after 1 retries: try to connect to the log server failed after 1 retries: timeout" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	t/timeout.t .. 43/46
	#   Failed test 'ERROR: client socket timed out - TEST 5: flush race condition
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 5: flush race condition - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 5: flush race condition - response_body - response is expected (repeated req 0, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	#          got: ''
	#     expected: 'foo
	# '
	t/timeout.t .. 46/46
	#   Failed test 'TEST 5: flush race condition - pattern "previous flush not finished" should match a line in error.log (req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	t/timeout.t .. 48/46
	#   Failed test 'ERROR: client socket timed out - TEST 5: flush race condition
	# '
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 2062.

	#   Failed test 'TEST 5: flush race condition - status code ok'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 948.
	#          got: ''
	#     expected: '200'

	#   Failed test 'TEST 5: flush race condition - response_body - response is expected (repeated req 1, req 0)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1589.
	#          got: ''
	#     expected: 'foo
	# '
	WARNING: TEST 5: flush race condition - TCP server: failed to accept: Connection timed out
	t/timeout.t .. 51/46
	#   Failed test 'TEST 5: flush race condition - pattern "previous flush not finished" should match a line in error.log (req 1)'
	#   at /usr/local/share/perl5/Test/Nginx/Socket.pm line 1213.
	t/timeout.t .. Failed 32/46 subtests

	Test Summary Report
	-------------------
	t/bug.t    (Wstat: 512 Tests: 16 Failed: 2)
	  Failed tests:  3, 6
	  Non-zero exit status: 2
	t/sanity.t (Wstat: 1536 Tests: 198 Failed: 6)
	  Failed tests:  15, 20-22, 51, 56
	  Non-zero exit status: 6
	t/timeout.t (Wstat: 0 Tests: 52 Failed: 38)
	  Failed tests:  1-10, 14, 18, 23-24, 29-52
	  Parse errors: Bad plan.  You planned 46 tests but ran 52.
	Files=5, Tests=298, 77 wallclock secs ( 0.13 usr  0.00 sys +  2.11 cusr  0.84 csys =  3.08 CPU)
	Result: FAIL
	[root ]#
	[root ]#

Please suggest whether I need to export any specific environment variables, set up any additional service, try any compiler flag, or somehow increase a timeout value to make these test cases pass.

nginx version (compiled with libdrizzle 1.0 and with radius, mariadb, and postgresql services set up):

# nginx -V
nginx version: openresty/1.17.1.1rc0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-cc-opt='-O2 -DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC' --add-module=../ngx_devel_kit-0.3.1rc1 --add-module=../iconv-nginx-module-0.14 --add-module=../echo-nginx-module-0.61 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../drizzle-nginx-module-0.1.11 --add-module=../ngx_postgres-1.0 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.15 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.7 --with-ld-opt=-Wl,-rpath,/usr/local/openresty/luajit/lib --with-http_image_filter_module --with-http_dav_module --with-http_auth_request_module --with-poll_module --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module --with-http_addition_module --add-module=/usr/openresty/openresty_test_modules/nginx-eval-module --add-module=/usr/openresty/openresty_test_modules/replace-filter-nginx-module --with-stream --with-stream_ssl_preread_module

Install via luarocks?

It would be nice to be able to install this via luarocks. There are already lots of other resty packages present.

Would you develop lua-resty-logger as a lib that writes logs into local files?

as follows:

    -- log file
    local logger_0 = logger.new("PATH")

    logger_0.debug("debug")  -- for debug
    logger_0.info("info")    -- for info
    logger_0.error("error")  -- for error

    -- another log file
    local logger_1 = logger.new("PATH")

    logger_1.debug("debug")  -- for debug
    logger_1.info("info")    -- for info
    logger_1.error("error")  -- for error

Thanks

log message be dropped and socket:setkeepalive error

Hello! When I use openresty and syslog-ng to log message, I had two problems . My server parameters: sock_type=tcp,flush_limit=2048,periodic_flush=10,pool_size=100.

First, drop message problem, if request rate is fast,ngx.timer pending count increase. My understanding is that the scheduled task is waiting to be executed and flush data is too slow(milliseconds), resulting the buffer_size is not zero, so msg_len + buffer_size > drop_limit.Right?

Second, socket:setkeepalive error, i get the error message: "resend log messages to the log server: connection in dubious state". I think this error send data multiple times. how to resolve it?

Thanks

can I filter nginx request_method like HEAD?

Hello, I am just an OP (operator), and I use this program to gather nginx's access log. Recently I found many health-check requests, like HEAD /index.php; these requests are sent to Elasticsearch (our syslog server) and cost too much disk. Could you tell me how I can filter out the HEAD requests? Thanks

Unable to use this plugin to log into local server log files

Trying to use this plugin to log into local server log files like /var/log/messages or some custom log file apart from the nginx access or error log, e.g. /var/log/testing.log, but I am unable to do this. I get the following errors while trying:

failed to flush log message: try to send log messages to the log server failed after 3 retries: try to connect to the log server failed after 3 retries: connection refused
failed to flush log message: try to send log messages to the log server failed after 3 retries: try to connect to the log server failed after 3 retries: permission denied

Am I configuring something wrong?

    local ok, err = logger.init{
        host = 'localhost',
        path = '/var/log/messages',
    }
    if not ok then
        ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
        return
    end

    -- construct the custom access log message in
    -- the Lua variable "msg"

    local bytes, err = logger.log(msg)
    if err then
        ngx.log(ngx.ERR, "failed to log message: ", err)
        return
    end

    local bytes, err = logger.flush()
    if err then
        ngx.log(ngx.ERR, "failed to flush log message: ", err)
        return
    end
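For reference, this library cannot write to a plain file such as /var/log/messages: per the README, path must point to a stream-typed unix domain socket that a log daemon is listening on, not a regular file. A hedged sketch of sending to a local syslog daemon through such a socket (the socket path /run/mysyslog.sock is a placeholder; your syslog daemon must be configured to listen there):

```lua
local logger = require "resty.logger.socket"

if not logger.initted() then
    -- path replaces host/port; host/port and path cannot both be empty
    local ok, err = logger.init{
        path        = "/run/mysyslog.sock",  -- placeholder socket path
        flush_limit = 4096,
        drop_limit  = 1048576,
    }
    if not ok then
        ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
        return
    end
end
```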

When log messages are generated too quickly, problems may arise

When log messages are generated too quickly, can logs be split during transmission? For instance, when originally sending '...abcdefg...', which is a relatively long string, might the actual transmission deliver '...abcd' and 'efg...' as two separate strings? This is not the expected behavior.

Make use of table.clear() when it is available

The latest LuaJIT v2.1 branch implements a new Lua primitive, table.clear(), which clears all the contents in a Lua table without freeing any memory in the table itself. This is much cheaper than iterating the table ourselves with pairs() or ipairs(), and can also be JIT compiled.

We should make use of this API function when it is available in this logger library. We can fall back to the existing way of clearing Lua tables when it is missing. Basically, we can do something like this:

local ok, clear_tab = pcall(require, "table.clear")
if not ok then
    clear_tab = function (tab)
                    for k, _ in pairs(tab) do
                        tab[k] = nil
                    end
                end
end

And when we need to clear a Lua table in variable foo, we can just do

clear_tab(foo)

And that's it! :)

Don't drop messages upon Nginx worker shutdown

Right now, this library may drop messages unnecessarily upon Nginx worker shutdown.

The latest ngx_lua git master now allows creating 0-delay timers when the worker starts exiting. Also, ngx_lua provides the new API function ngx.worker.exiting() for testing whether the worker is already shutting down.

Basically, we should consider at least the following things:

  1. This library should always use 0 delay upon its own retries (for connect or send) when worker is shutting down.
  2. Upon every retry, this library should also accumulate any in-buffer messages into the data it tries to resend.
  3. Disable message buffering when the worker is shutting down.
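The retry-delay choice described in point 1 can be sketched as follows. This is an illustrative fragment, not the library's actual code, and retry_interval_ms stands in for the configured retry_interval:

```lua
-- pick the delay (in seconds) before the next connect/send retry
local function retry_delay(retry_interval_ms)
    if ngx.worker.exiting() then
        -- worker is shutting down: retry immediately so buffered
        -- messages get a chance to go out before the worker dies
        return 0
    end
    return retry_interval_ms / 1000  -- ngx.sleep takes seconds
end

ngx.sleep(retry_delay(100))
```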

Only users who are on the same network can communicate on socket.io

Hello,
I'm using nginx on Linux to put my nodejs and socket.io signaling server online. I don't know why, but only users who are on the same network can communicate over socket.io. How can I configure nginx and the socket.io server to allow communication between users outside the network?

Here is my nginx and socket.io server configuration:

  1. /etc/nginx/conf.d/app.com.conf

server {
    listen 80;
    listen [::]:80;
    server_name 5902483.lovecames.com;
    root /usr/share/nginx/html;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 5902483.lovecames.com;
    root /usr/share/nginx/html;

    ssl_certificate /...;
    ssl_certificate_key /...;
    ssl_session_timeout ...;
    ssl_session_cache ...;
    ssl_session_tickets off;

    # intermediate configuration
    ssl_protocols ...;
    ssl_ciphers ...
    ssl_prefer_server_ciphers off;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    resolver 0.0.0.0;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location ~* .io {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy false;

        proxy_pass http://localhost:8080;
        proxy_redirect off;

        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~* / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

  2. /etc/nginx/nginx.conf

include /etc/nginx/conf.d/*.conf;

server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

  3. nodejs server:

var fs = require('fs');

var _static = require('node-static');
var file = new _static.Server('./static', {
    cache: false
});

var app = require('http').createServer(serverCallback);

function serverCallback(request, response) {
    request.addListener('end', function () {
        response.setHeader('Access-Control-Allow-Origin', '*');
        response.setHeader('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
        response.setHeader('Access-Control-Allow-Headers', 'Content-Type');

        file.serve(request, response);
    }).resume();
}

var io = require('socket.io').listen(app, {
    // log: true,
    origins: ':'
});

io.set('transports', [
    //'websocket',
    'xhr-polling',
    'jsonp-polling'
]);

var channels = {};
var users = [];

io.sockets.on('connection', function (socket) {
    var initiatorChannel = '';
    if (!io.isConnected) {
        io.isConnected = true;
    }

    socket.on('new-channel', function (data) {
        if (!channels[data.channel]) {
            initiatorChannel = data.channel;
        }

        channels[data.channel] = data.channel;
        onNewNamespace(data.channel, data.sender);
    });

    socket.on('presence', function (channel) {
        var isChannelPresent = !!channels[channel];
        socket.emit('presence', isChannelPresent);
    });

    socket.on('disconnect', function (channel) {
        if (initiatorChannel) {
            delete channels[initiatorChannel];
        }
    });
});

function onNewNamespace(channel, sender) {
    io.of('/' + channel).on('connection', function (socket) {
        var username;
        if (io.isConnected) {
            io.isConnected = false;
            socket.emit('connect', true);
            socket.emit('user-video-stream', JSON.stringify(users));
        }

        socket.on('message', function (data) {
            if (data.sender == sender) {
                if (!username) username = data.data.sender;

                socket.broadcast.emit('message', data.data);
            }
        });

        socket.on('disconnect', function () {
            if (username) {
                socket.broadcast.emit('user-left', username);
                username = null;
            }
        });
    });
}

function isInArray(users, newUser) {
    var found = false;
    users.forEach(function (element) {
        if (element.videoId == newUser.videoId) found = true;
    }, this);
    return found;
}

app.listen(8080, '0.0.0.0', function () {
    console.log('listening on *:8080');
});

  1. client-side:

var SIGNALING_SERVER = 'https://5902483.lovecames.com/';

Multi-node acceptance of data

Hi! If I need to send logs to multiple nodes, can this be supported?
I found that with multiple nodes, I can't control which node receives which data.
Here is an example:

influx_config = {
    {
        host = "192.168.1.2",
        port = 8911,
        sock_type = "udp",
    },
    {
        host = "192.168.1.3",
        port = 8911,
        sock_type = "udp",
        percent = "50%",
    },
    {
        host = "192.168.1.4",
        port = 8911,
        sock_type = "udp",
    },
}

if not logger.initted() then
    local ok, err = logger.init(
        influx_config[index] -- index is 1, 2, or 3; I will perform hash sharding
    )
    if not ok then
        ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
        return
    end
end

must take lua_code_cache on

I have used OpenResty for several months. I'm still new to it and don't fully understand what happens behind the Lua code. When I use this library, for convenience, I turned lua_code_cache off, and found that logs were no longer accumulated according to the flush_limit parameter; with lua_code_cache on, everything runs normally. Someone explained to me that when lua_code_cache is on, the top-level variables of a Lua module are cached, and this library relies on exactly that, but there is no mention of it anywhere. I suggest adding a note about this to the README; if the lua-nginx-module documentation already describes it, a pointer to that would be even better.
Thanks.
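For readers hitting the same behavior: the logger keeps its buffer and flush_limit accounting in module-level variables, which only survive across requests when the Lua code cache is enabled. A minimal configuration sketch (the package path is a placeholder):

```nginx
http {
    # With lua_code_cache off, every request reloads the module from
    # scratch, so the module-level log buffer is reset and flush_limit
    # never accumulates. Keep the cache on in production.
    lua_code_cache on;

    lua_package_path "/path/to/lua-resty-logger-socket/lib/?.lua;;";
}
```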

Sent log data to logstash, log loss occurred?

I use UDP to send log data to Logstash, at about 10 requests per second. Out of every thousand messages sent, only about 400-600 are received. This is strange: it's UDP on a LAN and the concurrency is not high, so how are the logs being lost?

local logger = require "resty.logger.socket"
local json = require "cjson"

if not logger then
    ngx.log(ngx.ERR, "failed to require the logger module")
end
if not logger.initted() then
    local ok, err = logger.init{
        host = '10.105.7.43',
        port = 5000,
        sock_type = 'udp',
        flush_limit = 1,
        drop_limit = 5678,
    }
    if not ok then
        ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
    end
end

local log = {}

local client = ngx.var.http_x_forwarded_for
local url = ngx.var.uri
local x_auth_token = ngx.req.get_headers()['X-AUTH-TOKEN']
local method = ngx.var.request_method

log['client'] = client
log['url'] = url
log['method'] = method
log['X-AUTH-TOKEN'] = x_auth_token

local bytes, err = logger.log(json.encode(log))
if bytes then
    ngx.log(ngx.INFO, "send done")
end
if err then
    ngx.log(ngx.ERR, "failed to log message: ", err)
end
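A guess at the cause, not a confirmed diagnosis: with flush_limit = 1, every single message becomes its own UDP datagram (and its own flush), so bursts can overflow the receiver's socket buffer. Batching newline-terminated records into fewer, larger datagrams may help; the limits below are illustrative:

```lua
local ok, err = logger.init{
    host = '10.105.7.43',
    port = 5000,
    sock_type = 'udp',
    -- batch several records per datagram instead of one datagram
    -- per message; 1400 keeps each datagram under a typical MTU
    flush_limit = 1400,
    -- hard cap: the maximum UDP payload is 65507 bytes
    drop_limit = 65507,
}

-- newline-terminate each record so a line-oriented codec on the
-- Logstash side can split a batched datagram back into events
local bytes, err = logger.log(json.encode(log) .. "\n")
```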

_do_flush connects without ever closing -- a question

Hello. Looking at the code, every call to _do_flush reconnects to the server, but after the data is sent the connection is never closed. Won't this cause the number of connections to explode?
Thanks!

Possible race condition while nginx is running on more worker_processes

I have setup a simple configuration which just send the logs to localhost on port 4789:

worker_processes  1;
error_log logs/debug.log debug;

events {
    worker_connections  1024;
}

http {

    lua_package_path "/path/to/lua-resty-logger-socket/lib/?.lua;;";

    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {

        listen       80;
        server_name  localhost;

        location / {

            log_by_lua '
                local logger = require "resty.logger.socket"
                if not logger.initted() then
                    local ok, err = logger.init{
                        host = "127.0.0.1",
                        port = 4789,
                        flush_limit = 0, -- disable buffer, else flume cannot seperate the events
                        drop_limit = 8388608,
                    }
                    if not ok then
                        ngx.log(ngx.ERR, "failed to initialize the logger: ",
                                err)
                        return
                    end
                end

                -- construct the custom access log message in
                -- the Lua variable "msg"

                local bytes, err = logger.log("t")
                if err then
                    ngx.log(ngx.ERR, "failed to log message: ", err)
                    return
                end
            ';

            root   html;
            index  index.html index.htm;
        }
    }
}

A netcat server will pipe all logs to a file:

nc -l -q -1 -k -p 4789 > file.log

Then a simple 'stress' test to generate the logs:

for i in `seq 7000`; do sleep 0.1; curl 'http://127.0.0.1/?test=test' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:32.0) Gecko/20100101 Firefox/32.0' -H 'Accept: text/javascript, application/javascript, application/ecmascript, application/x-ecmascript, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate' -H 'X-Requested-With: XMLHttpRequest' -H 'Referer: https://www.openindex.io/dev/' -H 'Connection: keep-alive' ; done

When worker_processes is set to one, after a while there are 7000 logs in my file and everything is fine.
But when I change worker_processes to 2, collection stops at a random point. Could there be a race condition? Or is this script simply not compatible with nginx 1.6.2?

Thanks in advance,

Ron

Can this be used together with rsyslogd?

When using this library with rsyslog, I found that:
1. The msg I send cannot be received normally; I have to change it to #msg .. " " .. msg before it gets through, and on the syslog side it can only be received via %rawmsg%.
2. When receiving the data, the syslog service cannot extract the syslogtag content.
Please help explain. Thanks.
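For reference, the workaround described above (prefixing the message with its length) is effectively RFC 6587 octet-counted TCP framing; without any framing, the syslog receiver sees a byte stream with no message boundaries, which is why only %rawmsg% captures anything and no syslogtag gets parsed. A hedged sketch, assuming msg already holds a formatted syslog line:

```lua
-- Octet-counted framing (RFC 6587): "<length> <message>".
-- #msg is Lua's length operator, so this is exactly the
-- "#msg .. ' ' .. msg" trick from the report above.
local framed = #msg .. " " .. msg
local bytes, err = logger.log(framed)
```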

cannot send logs to logstash over the TCP protocol

env

host

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"

elk version 5.5.2

openresty version 1.11.2.5

lua-resty-logger-socket version 0.03 (latest code version)

openresty configure params

./configure --prefix=/etc/openresty \
--user=nginx \
--group=nginx \
--with-cc-opt='-O2 -I/usr/local/openresty/zlib/include -I/usr/local/openresty/pcre/include -I/usr/local/openresty/openssl/include' \
--with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/usr/local/openresty/zlib/lib -L/usr/local/openresty/pcre/lib -L/usr/local/openresty/openssl/lib -Wl,-rpath,/usr/local/openresty/zlib/lib:/usr/local/openresty/pcre/lib:/usr/local/openresty/openssl/lib' \
--with-pcre-jit \
--with-stream \
--with-stream_ssl_module \
--with-http_v2_module \
--with-http_stub_status_module \
--with-http_realip_module \
--with-http_gzip_static_module \
--with-http_sub_module \
--with-http_gunzip_module \
--with-threads \
--with-file-aio \
--with-http_ssl_module \
--with-http_auth_request_module \
--without-mail_pop3_module \
--without-mail_imap_module \
--without-mail_smtp_module \
--without-http_fastcgi_module \
--without-http_uwsgi_module \
--without-http_scgi_module \
--without-http_autoindex_module \
--without-http_memcached_module \
--without-http_empty_gif_module \
--without-http_ssi_module \
--without-http_userid_module \
--without-http_browser_module \
--without-http_rds_json_module \
--without-http_rds_csv_module \
--without-http_memc_module \
--without-http_redis2_module \
--without-lua_resty_memcached \
--without-lua_resty_mysql \
-j4

nginx conf

lua_package_path "/path/to/lua-resty-logger-socket/lib/?.lua;;";

    server {
        location / {
            log_by_lua '
                local logger = require "resty.logger.socket"
                local cjson = require "cjson"

                if not logger.initted() then
                    local ok, err = logger.init{
                        host = "127.0.0.1", -- double quotes: single quotes would terminate the enclosing log_by_lua string
                        port = 5044,
                        flush_limit = 0,
                        drop_limit = 1048576, --1024*1024=1mb
                    }
                    if not ok then
                        ngx.log(ngx.ERR, "failed to initialize the logger: ",
                                err)
                        return
                    end
                end

                local bytes, err = logger.log(cjson.encode({name = "test"}))
                if err then
                    ngx.log(ngx.ERR, "failed to log message: ", err)
                    return
                end
            ';
        }
    }

logstash conf

input {
    tcp {
        port => "5044"
        codec => "json_lines" #or json
    }
}
output {
  stdout { codec => rubydebug }
}

Logstash cannot receive the TCP data.

Wireshark shows many TCP dup ACKs and TCP out-of-order segments.

UDP works, but the maximum size of a UDP datagram is 64 KB.
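One likely cause, offered as a guess rather than a confirmed fix: the json_lines codec splits events on newlines, and logger.log() sends bytes verbatim without appending one, so Logstash never sees a complete line on the TCP stream. A hedged sketch:

```lua
-- json_lines expects each event to be terminated by "\n";
-- append it explicitly, since logger.log() adds no delimiter
local bytes, err = logger.log(cjson.encode({name = "test"}) .. "\n")
```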

Not buffering unix socket packets.

I've submitted a pull request to get Unix socket logging working. Thinking about it some more, though, I'm not sure it makes sense to buffer the messages at all unless the connection is down. Since packing them together doesn't save us any time, I think it's probably better to send them as we get them?

What do you think?

Option to NOT concatenate log_buffer_data into single message?

First off, awesome library.

I'm using syslog-ng to collect messages from Lua. I need each log message to be processed by syslog individually, but resty-logger concatenates the contents of log_buffer_data into a single message unless I explicitly flush() after each log statement.

Is there any value (performance or functionality) in sending spooled messages individually (without concatenation) once the timeout fires or the number of messages in log_buffer_data exceeds a certain size?

It seems there is a lock hit every time flush is called.
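For anyone with the same requirement, one partial workaround (assuming a version of the library that supports the periodic_flush init option): let a timer drain the buffer instead of calling flush() per message, and newline-terminate each record so syslog-ng can split the concatenated buffer back into individual messages. A hedged sketch with placeholder host/port:

```lua
local ok, err = logger.init{
    host = "127.0.0.1",   -- placeholder syslog-ng host
    port = 514,
    flush_limit = 4096,
    periodic_flush = 1,   -- flush the buffer at most every second
}

-- newline-terminate each record so the collector can split the
-- concatenated buffer into individual messages
logger.log(msg .. "\n")
```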
