driskell / log-courier
419.0 25.0 107.0 3.77 MB

License: Other

Ruby 7.32% Makefile 1.06% Shell 1.26% Go 90.06% JavaScript 0.29%

log-courier's Introduction

Log Courier Suite


The Log Courier Suite is a set of lightweight tools created to ship and process log files speedily and securely, with low resource usage, to Elasticsearch or Logstash instances.

Log Courier

Log Courier is a lightweight shipper. It reads from log files and transmits events over the Courier protocol to a remote Logstash or Log Carver instance.

  • Reads from files or from standard input, following log file rotations and movements
  • Complements log events with additional fields
  • Live configuration reload
  • Transmits securely using TLS with server and client verification
  • Codecs for client-side preprocessing of multiline events and filtering of unwanted events
  • Native JSON reader to support JSON files, even those without line termination, which makes line-based reading problematic
  • Remote Administration Utility to inspect monitored log files and connections in real time.
  • Compatible with all supported versions of Logstash. At the time of writing this is >= 7.7.x.
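
As a rough illustration of how these features fit together, a minimal hypothetical configuration might look like the following. The paths and server names are placeholders, and the option names follow the examples quoted elsewhere on this page; consult the Reference documentation for the authoritative list.

```json
{
    "network": {
        "servers": [ "logstash.example.com:5043" ],
        "ssl ca": "/etc/log-courier/logstash.crt"
    },
    "files": [
        {
            "paths": [ "/var/log/app/*.log" ],
            "fields": { "type": "app" }
        }
    ]
}
```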

Log Carver

Log Carver is a lightweight event processor and an alternative to Logstash. It receives events over the Courier protocol and performs actions against them to manipulate them into the required format for storage in Elasticsearch, or for further processing in Logstash. Connected clients do not receive acknowledgements until the events are acknowledged by the endpoint, whether that is Elasticsearch or another, more centralised Log Carver, providing an end-to-end delivery guarantee.

Philosophy

  • Keep resource usage low and predictable at all times
  • Be efficient, reliable and scalable
  • At-least-once delivery of events, a crash should never lose events
  • Offer secure transports
  • Be easy to use

Documentation

Installation

Reference

Upgrading from 1.x to 2.x

There are many breaking changes in the configuration between 1.x and 2.x. Please carefully check the list of breaking changes here: Change Log.

Packages also now default to running as a log-courier user. If you require the old behaviour of running as root, be sure to modify the /etc/sysconfig/log-courier (CentOS/RedHat) or /etc/default/log-courier (Ubuntu) file.

log-courier's People

Contributors

atwardowski, avleen, camerondavison, codec, cyberplant, d-lord, dependabot[bot], donjohnson, driskell, igalic, jamtur01, jordansissel, josegonzalez, kargig, lblasc, matejzero, mcnewton, mheese, nickethier, pilif, promisedlandt, shoenig, shurane, solarce, steeef, sysmonk, tzahari, willie, yath, yggdrasil


log-courier's Issues

Protocol error - packet too large

Hi,

Recently I started seeing "packet too large" errors in logstash:

{:timestamp=>"2014-09-04T13:57:56.467000+0000", :message=>"[LogCourierServer] Protocol error on connection from 1.2.3.4:17009: Packet too large (1435480)", :level=>:warn}

Log-courier side:

Sep  4 14:04:45 es13 log-courier[25423]: Connected to 2.3.4.5
Sep  4 14:04:45 es13 log-courier[25423]: Transport error, will try again: write tcp 2.3.4.5:9001: connection reset by peer
Sep  4 14:04:55 es13 log-courier[25423]: Attempting to connect to 2.3.4.5:9001 (logstash2.sat.wordpress.com)

The client tries to send a too-large packet, gets a protocol error, reconnects, and gets the same error again: an infinite loop. Unfortunately, switching to smaller spool sizes would only let smaller packets through until it hit the huge log message that is too big.

Not sure what's the best solution here:
1) try smaller spools until the packet gets through
2) split the message if it's too big?
3) log-courier knows the biggest packet it can send - don't send it if it's too big, and do 1) and/or 2) when it hits this limit?
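
Option 1) could be sketched as a pre-send check that splits a spool into size-bounded chunks. This is purely an illustrative sketch in Go, not Log Courier's actual implementation; splitSpool and maxBytes are made-up names:

```go
package main

import "fmt"

// splitSpool splits a batch of serialized events into chunks whose total
// size stays under maxBytes, so no single packet exceeds the receiver's
// limit. Illustrative only — not Log Courier code.
func splitSpool(events [][]byte, maxBytes int) [][][]byte {
	var chunks [][][]byte
	var cur [][]byte
	size := 0
	for _, ev := range events {
		// Flush the current chunk before this event would overflow it.
		if size+len(ev) > maxBytes && len(cur) > 0 {
			chunks = append(chunks, cur)
			cur, size = nil, 0
		}
		cur = append(cur, ev)
		size += len(ev)
	}
	if len(cur) > 0 {
		chunks = append(chunks, cur)
	}
	return chunks
}

func main() {
	events := [][]byte{make([]byte, 400), make([]byte, 700), make([]byte, 300)}
	chunks := splitSpool(events, 1000)
	fmt.Println(len(chunks)) // 2
}
```

Note that a single event larger than maxBytes still ends up alone in an oversized chunk, so a complete fix would also need option 2)'s message splitting or an explicit error path.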

fields issues when using several paths

Hi,

This might be related to #4

I'm using this configuration:

  "files": [
    {
      "paths": [ "/tmp/website.log"],
      "fields": { "type": "Tomcat", "tags": ["website","random"], "server":"random" }
    }, {
      "paths": [ "/tmp/admin.log"],
      "fields": { "type": "Tomcat", "tags": ["admin","random"] }
    }, {
      "paths": [ "/tmp/services.log"],
      "fields": { "type": "Tomcat", "tags": ["services","random"], "server":"raandom" }
    }
  ]

With that configuration I'd expect any logs created in /tmp/website.log to have the tags (website,random) and to have server:random. But no matter which file the events come from, they all have the tags "services","random" and server:raandom.

I have tested this with v0.9 and v0.10 too.

By the way, nice job with the latest release 👍 It's always nice to have good docs.
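
For readers hitting the same symptom: "last entry wins" behaviour like this is what you get in Go when every file entry ends up holding a reference to the same underlying map. The following is a hypothetical illustration of that pitfall, not the actual Log Courier code:

```go
package main

import "fmt"

// Hypothetical map-aliasing bug: each "file entry" stores the same map
// reference, so the last mutation is visible through every entry.
func main() {
	shared := map[string]string{}
	var entries []map[string]string
	for _, t := range []string{"website", "admin", "services"} {
		shared["type"] = t                // mutate the shared map...
		entries = append(entries, shared) // ...and store the same reference
	}
	fmt.Println(entries[0]["type"]) // prints "services", not "website"
}
```

The fix for code shaped like this is to allocate a fresh map per entry (or deep-copy the fields) instead of reusing one.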

running logcourier through a loadbalancer (ELB)? debugging?

I have the logstash integration set up, and I have log-courier set up. I'm using the tcp transport for both. My logstash servers sit behind an AWS ELB that has port 5550 set to TCP.

I can't send logs through the ELB. I get the following errors.

Sep  3 17:07:38 ip-2-3-4-5 log-courier[3054]: Attempting to connect to 1.2.3.4:5550 (loadbalancer.domain.com)
Sep  3 17:07:38 ip-2-3-4-5 log-courier[3054]: Connected to 1.2.3.4
Sep  3 17:07:38 ip-2-3-4-5 log-courier[3054]: Transport error, will try again: EOF
Sep  3 17:07:39 ip-2-3-4-5 log-courier[3054]: Attempting to connect to 1.2.3.4:5550 (loadbalancer.domain.com)
Sep  3 17:07:39 ip-2-3-4-5 log-courier[3054]: Connected to 1.2.3.4
Sep  3 17:07:39 ip-2-3-4-5 log-courier[3054]: Transport error, will try again: EOF

This is a problem, obviously. If I send logs to 'localhost' on one of the logstash servers, it works properly, so clearly I have the logstash input set up correctly. It's just that the log-courier protocol doesn't work through a TCP load balancer, or I have it poorly configured for that case.

Looking through the ELB stats, I can see zero connection errors but a high value in "spillovercount". That implies the connections are reaching the load balancer but the receiving server (logstash) is failing to answer them, so the connections are dropped. That's consistent with the above.

Here's my logstash input:

input {
  courier {
    port => 5550
    transport => tcp

    # key is required even when using TLS.
    ssl_key => "/path/to/logstash-forwarder.key"
  }
}

I don't see anything strange. I can telnet to that port from localhost and get a connection (though I don't know what to do with it other than prove it connects). However, when I do this to the LB, the connection is dropped/closed:

$ telnet loadbalancer.domain.com 5550
Trying 172.31.28.126...
Connected to loadbalancer.domain.com.
Escape character is '^]'.
Connection closed by foreign host.

Do you have any suggestions about how to debug this further? Do you know what's going on?

Documentation and usage information do not mention the "lc-admin ping" command

While this was tested with version 0.14, looking at the source the same should be true for the current version 0.15.

# lc-admin help
Log Courier version 0.14 client

Attempting connection to unix:/var/lib/log-courier/admin.sock...
Connected

Available commands:
  reload    Reload configuration
  status    Display the current shipping status
  exit      Exit
# lc-admin ping
Log Courier version 0.14 client

Attempting connection to unix:/var/lib/log-courier/admin.sock...
Connected

Pong

It would be great if there were more example log-courier configurations.

https://github.com/driskell/log-courier/blob/develop/docs/Configuration.md is awesome documentation, but I would love to see some example configurations, much in the vein of the examples in @coolacid's https://github.com/coolacid/GettingStartedWithELK repo.

It's probably worth mentioning if log-courier is backwards compatible with logstash-forwarder configs as well.

I'm drawing inspiration for a well detailed config mainly from this ansible playbook.yml and this elasticsearch.yml.

Documentation typos

Regarding the multiline codec, I'm wondering what the correct spelling of the "previous timeout" setting should be: the documentation (docs/codecs/Multiline.md) states "previous_timeout" while the code says "previous timeout".

Furthermore, in the documentation both the "transport" and "codec" sections (of docs/Configuration.md) contain a minor typo where log-courer -list-supported should read log-courier -list-supported.

ssl_verify failing for me

I'm trying to use the following config:

courier {
   port            => "6782"
   ssl_certificate => "/etc/ssl/certs/wildcard.pem"
   ssl_key         => "/etc/ssl/private/wildcard.key"
   ssl_verify      => true
   ssl_verify_ca   => "/etc/ssl/certs/wildcard-bundle.crt"
}

When I run configtest all checks out fine:

root@in-lss-s-01:/etc/logstash# /opt/logstash/logstash-1.4.2/bin/logstash -f /etc/logstash/logstash.conf --configtest
Using milestone 1 input plugin 'courier'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 output plugin 'graphite'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Configuration OK

However, when I start logstash, it fails:

{:timestamp=>"2014-09-04T14:45:53.427000+0000", :message=>"+---------------------------------------------------------+
| An unexpected error occurred. This is probably a bug.   |
| You can find help with this problem in a few places:    |
|                                                         |
| * chat: #logstash IRC channel on freenode irc.          |
|     IRC via the web: http://goo.gl/TI4Ro                |
| * email: [email protected]                |
| * bug system: https://logstash.jira.com/                |
|                                                         |
+---------------------------------------------------------+
The error reported is: 
  [LogCourierServer] Either 'ssl_verify_default_ca' or 'ssl_verify_ca' must be specified when ssl_verify is true"}

cannot set spool size?

Hi,

Trying the newest log-courier, I'm not able to set the spool size. I get this error:

# /usr/local/bin/log-courier -config /etc/log-courier.conf -log-to-syslog=true
Configuration error: Option /general/spool size must be int64 or similar

The configuration I use:

{
        "general": {
                "spool size": 1024,
                "spool timeout": 5,
                "log level" : "debug"
        },
        "network": {
                "servers": [ ... ],
                "ssl ca": "...",
                "timeout": 15,
                "reconnect": 10
        },
        "includes": [
                "/etc/log-courier.d/*.conf"
        ]
}

By the way, I liked the ability to set the spool size on the command line; that way I could have different spool sizes based on the server role / hostname when starting from the init script. Now I need to have separate configs for different servers :(

gem.version needs to be bumped

The log-courier.gemspec of release 0.11 sets gem.version to 0.10, whereas I would expect this to be in sync with the release version.

Read configuration issue

Hi there,

I can't find a way to run log-courier. It says that something is wrong in my conf file, but I'm using a provided conf file from «log-courier-0.12/docs/examples/*».

I guess I'm doing something wrong:

root@pla121lx057:/etc/log-courier# log-courier -config="/etc/log-courier/log-courier.conf" -config-test
Configuration test failed: invalid character '}' looking for beginning of object key string
root@pla121lx057:/etc/log-courier# cat "/etc/log-courier/log-courier.conf"
{
        "network": {
                "servers": [ "localhost:5043" ],
                        "ssl ca": "./logstash.cer",
        },
                "files": [
                {
                        "paths": [ "/var/log/httpd/access.log" ],
                        "fields": { "type": "apache" }
                }
        ]
}
root@pla121lx057:/etc/log-courier# log-courier -version
Log Courier version 0.12
root@pla121lx057:/etc/log-courier#

(I tried with my real parameters, a good CA file, a good logstash server, and other log files: same result)

Can anyone point out what I'm doing wrong?
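
The parse error is Go's encoding/json rejecting the trailing comma after the "ssl ca" line: strict JSON does not allow a comma before a closing brace. Removing it (and straightening the indentation) gives a config that parses:

```json
{
        "network": {
                "servers": [ "localhost:5043" ],
                "ssl ca": "./logstash.cer"
        },
        "files": [
                {
                        "paths": [ "/var/log/httpd/access.log" ],
                        "fields": { "type": "apache" }
                }
        ]
}
```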

Improper handling of multiple files & types? (release 0.10)

It seems that whenever I add the second set of paths in my config below, all files are sent to logstash as type "jetty", even when read from phplog. Is this a known issue and, if so, has it been addressed? I've run numerous tests and my issues all seem to boil down to this addition.

{
"network": {
"servers": [ "...:5043" ],
"ssl ca": "/etc/log-courier/ssl/logstash.crt"
},
"files": [
{
"paths": [ "/Users/_/www/gslogging/logs/phplog" ],
"fields": { "type": "php" }
}, {
"paths": [ "/Users/_/www/gslogging/logs/jettylog" ],
"fields": { "type": "jetty" }
}
]
}

Unexpected PONG received

After idling for about 15 minutes, log-courier gets into a loop of errors. Using logstash-1.4.2

2014/07/11 16:06:54.890511 Log Courier starting up
2014/07/11 16:06:54.902403 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:06:54.934343 Launching harvester on new file: /big/logs/logstash/0.log
2014/07/11 16:06:54.934426 Launching harvester on new file: /big/logs/logstash/1.log
2014/07/11 16:06:54.934458 Launching harvester on new file: /big/logs/logstash/2.log
2014/07/11 16:06:54.934505 Started harvester at position 20 (requested 20): /big/logs/logstash/0.log
2014/07/11 16:06:54.934683 Started harvester at position 9 (requested 9): /big/logs/logstash/1.log
2014/07/11 16:06:54.934895 Started harvester at position 6 (requested 6): /big/logs/logstash/2.log
2014/07/11 16:06:55.062264 Connected with 127.0.0.1
2014/07/11 16:07:12.978471 Registrar received 1 event
2014/07/11 16:22:12.980888 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:22:14.981360 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:22:15.034020 Connected with 127.0.0.1
2014/07/11 16:22:27.981560 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:22:29.982181 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:22:30.042036 Connected with 127.0.0.1
2014/07/11 16:22:42.980862 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:22:44.981424 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:22:45.034031 Connected with 127.0.0.1
2014/07/11 16:22:57.981973 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:22:59.982566 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:23:00.042030 Connected with 127.0.0.1
2014/07/11 16:23:12.982068 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:23:14.982682 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:23:15.042029 Connected with 127.0.0.1
2014/07/11 16:23:27.981322 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:23:29.981773 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:23:30.034053 Connected with 127.0.0.1
2014/07/11 16:23:42.981547 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:23:44.982026 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:23:45.038050 Connected with 127.0.0.1
2014/07/11 16:23:57.981451 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:23:59.982091 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:24:00.042055 Connected with 127.0.0.1
2014/07/11 16:24:12.982295 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:24:14.982757 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:24:15.042026 Connected with 127.0.0.1
2014/07/11 16:24:27.982573 Transport error, will reconnect: Unexpected PONG received
2014/07/11 16:24:29.983079 Connecting to 127.0.0.1:12345 (localhost) 
2014/07/11 16:24:30.042009 Connected with 127.0.0.1

Stack dump

SIGQUIT: quit
PC=0x443e61

goroutine 0 [idle]:
runtime.futex(0xc2080192f0, 0x0, 0x0, 0x0)
    /usr/local/go/src/pkg/runtime/sys_linux_amd64.s:268 +0x21
runtime.futexsleep(0xc2080192f0, 0x0, 0xffffffffffffffff)
    /usr/local/go/src/pkg/runtime/os_linux.c:49 +0x47
runtime.notesleep(0xc2080192f0)
    /usr/local/go/src/pkg/runtime/lock_futex.c:135 +0x86
stopm()
    /usr/local/go/src/pkg/runtime/proc.c:954 +0xe0
findrunnable()
    /usr/local/go/src/pkg/runtime/proc.c:1262 +0x445
schedule()
    /usr/local/go/src/pkg/runtime/proc.c:1345 +0xe3
park0(0xc208003560)
    /usr/local/go/src/pkg/runtime/proc.c:1410 +0xfe
runtime.mcall(0x5c7424)
    /usr/local/go/src/pkg/runtime/asm_amd64.s:181 +0x4b

goroutine 16 [chan receive, 19 minutes]:
main.main()
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/log-courier.go:91 +0x2db

goroutine 19 [finalizer wait, 19 minutes]:
runtime.park(0x4295d0, 0x82bdf0, 0x82a8c9)
    /usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x82bdf0, 0x82a8c9)
    /usr/local/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
    /usr/local/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
    /usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 20 [syscall, 19 minutes]:
os/signal.loop()
    /usr/local/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
    /usr/local/go/src/pkg/os/signal/signal_unix.go:27 +0x32

goroutine 21 [select]:
main.(*Prospector).Prospect(0xc20802c930, 0xc20800e900, 0xc208030440, 0xc2080700e0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:219 +0xb96
created by main.(*LogCourier).StartCourier
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/log-courier.go:190 +0xb13

goroutine 22 [select]:
main.(*Spooler).Spool(0xc20800e960, 0xc2080700e0, 0xc20802c8c0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/spooler.go:51 +0x335
created by main.(*LogCourier).StartCourier
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/log-courier.go:192 +0xb45

goroutine 23 [select]:
main.(*Publisher).Publish(0xc20802c9a0, 0xc20802c8c0, 0xc208030440)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/publisher1.go:205 +0x142d
created by main.(*LogCourier).StartCourier
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/log-courier.go:194 +0xb77

goroutine 24 [chan receive, 16 minutes]:
main.(*Registrar).Register(0xc208030440, 0xc20800e930)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/registrar.go:106 +0xbe
created by main.(*LogCourier).StartCourier
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/log-courier.go:196 +0xb99

goroutine 17 [syscall, 19 minutes]:
runtime.goexit()
    /usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 26 [select]:
main.(*Harvester).readline(0xc20801a410, 0xc208004600, 0xc20802c230, 0x2540be400, 0x0, 0x0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:202 +0x331
main.(*Harvester).Harvest(0xc20801a410, 0xc2080700e0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:97 +0x499
main.func·004()
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:379 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:384 +0xfb

goroutine 27 [select]:
main.(*Harvester).readline(0xc20801a690, 0xc208004660, 0xc20802c310, 0x2540be400, 0x0, 0x0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:202 +0x331
main.(*Harvester).Harvest(0xc20801a690, 0xc2080700e0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:97 +0x499
main.func·004()
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:379 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:384 +0xfb

goroutine 28 [select]:
main.(*Harvester).readline(0xc20801a780, 0xc2080046c0, 0xc20802c380, 0x2540be400, 0x0, 0x0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:202 +0x331
main.(*Harvester).Harvest(0xc20801a780, 0xc2080700e0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/harvester.go:97 +0x499
main.func·004()
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:379 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/prospector.go:384 +0xfb

goroutine 78 [IO wait]:
net.runtime_pollWait(0x7f7f379d98f8, 0x72, 0x0)
    /usr/local/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc2082b2990, 0x72, 0x0, 0x0)
    /usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc2082b2990, 0x0, 0x0)
    /usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc2082b2930, 0xc208279000, 0x800, 0x800, 0x0, 0x7f7f379d82b8, 0xb)
    /usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc20803c0d8, 0xc208279000, 0x800, 0x800, 0x0, 0x0, 0x0)
    /usr/local/go/src/pkg/net/net.go:122 +0xe7
main.(*TransportTlsWrap).Read(0xc2082606f0, 0xc208279000, 0x800, 0x800, 0x82c518, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:345 +0x78
crypto/tls.(*block).readFromUntil(0xc208260840, 0x7f7f379d9a58, 0xc2082606f0, 0x5, 0x0, 0x0)
    /usr/local/go/src/pkg/crypto/tls/conn.go:451 +0xd9
crypto/tls.(*Conn).readRecord(0xc20806eb00, 0x17, 0x0, 0x0)
    /usr/local/go/src/pkg/crypto/tls/conn.go:536 +0x1ff
crypto/tls.(*Conn).Read(0xc20806eb00, 0xc208271f20, 0x8, 0x8, 0x0, 0x0, 0x0)
    /usr/local/go/src/pkg/crypto/tls/conn.go:901 +0x16a
main.(*TransportTls).receiverRead(0xc20801a7d0, 0xc208271f20, 0x8, 0x8, 0x0, 0x0, 0x0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:280 +0x1a1
main.(*TransportTls).receiver(0xc20801a7d0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:235 +0xba
created by main.(*TransportTls).Connect
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:197 +0xcef

goroutine 77 [select]:
main.(*TransportTls).sender(0xc20801a7d0)
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:205 +0x246
created by main.(*TransportTls).Connect
    /home/hoenig/Documents/indeed/go/src/log-courier/src/log-courier/transport_tls.go:196 +0xcd4

rax     0xca
rbx     0x82d140
rcx     0xffffffffffffffff
rdx     0x0
rdi     0xc2080192f0
rsi     0x0
rbp     0xc208012900
rsp     0x7f7f36119da0
r8      0x0
r9      0x0
r10     0x0
r11     0x286
r12     0xa
r13     0x0
r14     0x0
r15     0x14
rip     0x443e61
rflags  0x286
cs      0x33
fs      0x0
gs      0x0

Specify logfile output for log-courier

Right now it seems log-courier can only send its own log output to stdout. It would be nice to be able to specify a file to send its log output to instead.

Empty line in log file causes event corruption

When a log file contains an empty line (regex: '^$') log-courier message lines become corrupted. Symptoms include messages beginning with '\n' or if using the multiline codec this can cause an anchored RegEx to fail.

The error is in 'linereader.go' line 59:

if n := bytes.IndexByte(lr.buf[lr.start:lr.end], '\n'); n > 0 {

should be:

if n := bytes.IndexByte(lr.buf[lr.start:lr.end], '\n'); n >= 0 {

'>' should be '>='

fields issue

Hi, I'm trying to add more than one field to logs parsed by log-courier, however I'm facing a weird behaviour. Using the configuration from examples, like this:
"fields": { "type": "ftp"}
all works just fine, but if I want to set more than one field and I use this syntax:
"fields": { "type": "ftp", "tags": "system" }
The result is quite messy: "type" is set to "system", and "tags" is also set to "system"; basically it seems the second field's value is used for the first field as well. If I try with three fields, the results are even weirder. It could be an error on my side, but I can't figure out how to fix it, and no error messages are reported by log-courier.
Thanks
Please note: built with Go 1.3.

Prospector taking long time to scan 300k files

We are using log-courier to scan folders that contain events written as XML files. We have around 25 folders with different names (by business event type). In each folder, we create a subfolder every day where events are logged in XML format. We log around 300k XML events a day (all folders combined).

We have observed that after a day or so the prospector takes a long time (around 20-25 minutes between file creation and harvest) to pick up new files created in the directories. This latency increases even further after 3-4 days. I think this is because the prospector has to check each file for changes before harvesting it. In our case, the XML events on the file system are immutable (i.e. created once and never modified).

Question: Is there any setting through which we can tell log-courier to delete the folders that have already been scanned from its internal prospector registry after a day (through some pattern)? This will speed up the subsequent scanning as it does not have to scan all those 300k files in the filesystem.

backwards-compatible mode?

Hi, after reviewing the configuration and integration docs, it appears it is necessary to install both log-courier (forwarder) and log-courier (server). Is that correct?

To put it another way, does this mean there's no backwards-compatible mode using the lumberjack input?

It appears the answer is "no", based on #16.

feature-request: make log-courier logs configurable

Hi,

It would be nice if the log-courier logging were more configurable. Right now there's only an option to log to syslog or not. I'd like to log to syslog but have different logging levels: 'verbose' (the current logging) and 'info' or something like that, which would only log file rotations/reconnects/failures.

Use case: log-courier running on busy server, sending thousands of log events per second, and writing the 'Registrar received X events' every second/few times per second and flooding the logs with that. It'd be great to disable the registrar log lines, but still be able to see when log-courier fails/disconnects/reconnects. This probably can be achieved by configuring the syslog server, but... you know, it's nicer to just be able to toggle a switch in log-courier config :)

Support for plain JSON log files

I currently log my application events in JSON. Is there an easy way to ship these into logstash with the log-courier input? With the lumberjack input this was accomplished with "format => json_event" and no additional parsing; however, courier seems to lack any datatype or format classification in the logstash plugin.

Transport error, will reconnect: EOF

I'm excited about this project you've got going! It's great to see a robust logstash-forwarder alternative.

Unfortunately, I'm seeing something eerily similar to #12. However, this looping behavior begins as soon as log-courier connects to Logstash.

log-courier -config /etc/log-courier.conf
2014/07/15 00:16:26.279688 Starting pipeline
2014/07/15 00:16:26.279792 Loading registrar data from /var/run/logstash/.log-courier
2014/07/15 00:16:26.294909 Pipeline ready
2014/07/15 00:16:26.294979 Resuming harvester on a previously harvested file: /var/log/audit/audit.log
2014/07/15 00:16:26.295013 Resuming harvester on a previously harvested file: /var/log/cron
2014/07/15 00:16:26.295043 Resuming harvester on a previously harvested file: /var/log/httpd/access_log
2014/07/15 00:16:26.295073 Resuming harvester on a previously harvested file: /var/log/httpd/error_log
2014/07/15 00:16:26.295101 Resuming harvester on a previously harvested file: /var/log/maillog
2014/07/15 00:16:26.295129 Resuming harvester on a previously harvested file: /var/log/mcelog
2014/07/15 00:16:26.295159 Resuming harvester on a previously harvested file: /var/log/messages
2014/07/15 00:16:26.295187 Resuming harvester on a previously harvested file: /var/log/mysqld.log
2014/07/15 00:16:26.295224 Resuming harvester on a previously harvested file: /var/log/opensm.log
2014/07/15 00:16:26.295253 Resuming harvester on a previously harvested file: /var/log/secure
2014/07/15 00:16:26.295293 Resuming harvester on a previously harvested file: /var/log/spooler
2014/07/15 00:16:26.295321 Resuming harvester on a previously harvested file: /var/log/yum.log
2014/07/15 00:16:26.295604 Connecting to 127.0.0.1:5043 (127.0.0.1)
2014/07/15 00:16:26.295833 Started harvester at position 1152663 (requested 1152663): /var/log/audit/audit.log
2014/07/15 00:16:26.295971 Started harvester at position 94951 (requested 94951): /var/log/cron
2014/07/15 00:16:26.296015 Started harvester at position 143453 (requested 143453): /var/log/httpd/access_log
2014/07/15 00:16:26.296079 Started harvester at position 1840 (requested 1840): /var/log/httpd/error_log
2014/07/15 00:16:26.296125 Started harvester at position 14094 (requested 14094): /var/log/maillog
2014/07/15 00:16:26.296170 Started harvester at position 0 (requested 0): /var/log/mcelog
2014/07/15 00:16:26.296205 Started harvester at position 330600 (requested 330600): /var/log/messages
2014/07/15 00:16:26.296258 Started harvester at position 18098 (requested 18098): /var/log/mysqld.log
2014/07/15 00:16:26.296292 Started harvester at position 23041 (requested 23041): /var/log/opensm.log
2014/07/15 00:16:26.296331 Started harvester at position 15046 (requested 15046): /var/log/secure
2014/07/15 00:16:26.296371 Started harvester at position 0 (requested 0): /var/log/spooler
2014/07/15 00:16:26.297529 Started harvester at position 33195 (requested 33195): /var/log/yum.log
2014/07/15 00:16:26.356679 Connected with 127.0.0.1
2014/07/15 00:16:26.936316 Transport error, will reconnect: EOF
2014/07/15 00:16:27.936713 Connecting to 127.0.0.1:5043 (127.0.0.1)
2014/07/15 00:16:27.998643 Connected with 127.0.0.1
2014/07/15 00:16:28.942697 Transport error, will reconnect: EOF
2014/07/15 00:16:29.943142 Connecting to 127.0.0.1:5043 (127.0.0.1)
2014/07/15 00:16:29.999691 Connected with 127.0.0.1
2014/07/15 00:16:30.950611 Transport error, will reconnect: EOF
2014/07/15 00:16:31.950976 Connecting to 127.0.0.1:5043 (127.0.0.1)
2014/07/15 00:16:32.014703 Connected with 127.0.0.1
2014/07/15 00:16:32.958883 Transport error, will reconnect: EOF
2014/07/15 00:16:33.959236 Connecting to 127.0.0.1:5043 (127.0.0.1)
2014/07/15 00:16:34.023653 Connected with 127.0.0.1

The stack trace:

SIGQUIT: quit
PC=0x43d529

runtime.epollwait(0x7fff00000004, 0x7fff6c3d6e90, 0xffffffff00000080)
        /usr/lib/golang/src/pkg/runtime/sys_linux_amd64.s:385 +0x19
runtime.netpoll(0x8e1f01)
        /usr/lib/golang/src/pkg/runtime/netpoll_epoll.c:71 +0x7d
findrunnable()
        /usr/lib/golang/src/pkg/runtime/proc.c:1222 +0x386
schedule()
        /usr/lib/golang/src/pkg/runtime/proc.c:1320 +0xe3
park0(0xc210060ea0)
        /usr/lib/golang/src/pkg/runtime/proc.c:1361 +0xd8
runtime.mcall(0x43b13d)
        /usr/lib/golang/src/pkg/runtime/asm_amd64.s:178 +0x4b

goroutine 1 [select]:
main.(*LogCourier).Run(0xc21001e2d0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:179 +0x54c
main.main()
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:43 +0x2b

goroutine 3 [syscall]:
os/signal.loop()
        /usr/lib/golang/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
        /usr/lib/golang/src/pkg/os/signal/signal_unix.go:27 +0x31

goroutine 4 [select]:
main.(*Prospector).Prospect(0xc210038280, 0xc210046620)
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:249 +0xad6
created by main.(*LogCourier).Run
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:163 +0x37e

goroutine 5 [select]:
main.(*Spooler).Spool(0xc210062420, 0xc210046620, 0xc210069150)
        /root/mcms/install/distfiles/log-courier/src/log-courier/spooler.go:50 +0x519
created by main.(*LogCourier).Run
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:165 +0x3a9

goroutine 6 [select]:
main.(*Publisher).Publish(0xc210038200, 0xc210069150)
        /root/mcms/install/distfiles/log-courier/src/log-courier/publisher1.go:224 +0x135e
created by main.(*LogCourier).Run
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:167 +0x3ca

goroutine 7 [chan receive]:
main.(*Registrar).Register(0xc210050680)
        /root/mcms/install/distfiles/log-courier/src/log-courier/registrar.go:188 +0x87
created by main.(*LogCourier).Run
        /root/mcms/install/distfiles/log-courier/src/log-courier/log-courier.go:169 +0x3e1

goroutine 8 [select]:
main.(*Harvester).readline(0xc21006d780, 0xc210076960, 0xc2100523f0, 0x2540be400, 0xc210067670, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21006d780, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 9 [select]:
main.(*Harvester).readline(0xc21006d960, 0xc2100769c0, 0xc210052540, 0x2540be400, 0xc210067690, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21006d960, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 10 [select]:
main.(*Harvester).readline(0xc21006dbe0, 0xc210076c60, 0xc210052700, 0x2540be400, 0xc2100676e0, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21006dbe0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 11 [select]:
main.(*Harvester).readline(0xc21006deb0, 0xc210076d20, 0xc210052850, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21006deb0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 12 [select]:
main.(*Harvester).readline(0xc21006d190, 0xc210076d80, 0xc210052a80, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21006d190, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 13 [select]:
main.(*Harvester).readline(0xc21001e320, 0xc210076de0, 0xc210052d20, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21001e320, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 14 [select]:
main.(*Harvester).readline(0xc21001e5f0, 0xc210076e40, 0xc210052f50, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21001e5f0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 15 [select]:
main.(*Harvester).readline(0xc21001e8c0, 0xc210076ea0, 0xc2100b5150, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc21001e8c0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 16 [select]:
main.(*Harvester).readline(0xc2100570f0, 0xc210076f00, 0xc2100b5310, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc2100570f0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 17 [select]:
main.(*Harvester).readline(0xc2100573c0, 0xc210076f60, 0xc2100b54d0, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc2100573c0, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 18 [select]:
main.(*Harvester).readline(0xc210057690, 0xc210076660, 0xc2100b5690, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc210057690, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 19 [select]:
main.(*Harvester).readline(0xc210057960, 0xc2100391e0, 0xc2100b5cb0, 0x2540be400, 0x3, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:201 +0x2d9
main.(*Harvester).Harvest(0xc210057960, 0xc210046620, 0x0, 0x0)
        /root/mcms/install/distfiles/log-courier/src/log-courier/harvester.go:96 +0x47c
main.func·004()
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:409 +0x3e
created by main.(*Prospector).startHarvesterWithOffset
        /root/mcms/install/distfiles/log-courier/src/log-courier/prospector.go:414 +0xfb

goroutine 21 [syscall]:
runtime.goexit()
        /usr/lib/golang/src/pkg/runtime/proc.c:1394

goroutine 30 [select]:
main.(*TransportTcp).sender(0xc210039720)
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:239 +0x1c0
created by main.(*TransportTcp).Connect
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:230 +0xcb7

goroutine 31 [IO wait]:
net.runtime_pollWait(0x7fddc1c84890, 0x72, 0x0)
        /usr/lib/golang/src/pkg/runtime/netpoll.goc:116 +0x6a
net.(*pollDesc).Wait(0xc210103fb0, 0x72, 0x7fddc1c82f88, 0xb)
        /usr/lib/golang/src/pkg/net/fd_poll_runtime.go:81 +0x34
net.(*pollDesc).WaitRead(0xc210103fb0, 0xb, 0x7fddc1c82f88)
        /usr/lib/golang/src/pkg/net/fd_poll_runtime.go:86 +0x30
net.(*netFD).Read(0xc210103f50, 0xc210204c00, 0x400, 0x400, 0x0, ...)
        /usr/lib/golang/src/pkg/net/fd_unix.go:204 +0x2a0
net.(*conn).Read(0xc210000720, 0xc210204c00, 0x400, 0x400, 0x7fddc1c84a40, ...)
        /usr/lib/golang/src/pkg/net/net.go:122 +0xc5
main.(*TransportTcpWrap).Read(0xc2100f01e0, 0xc210204c00, 0x400, 0x400, 0x1, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:385 +0x53
crypto/tls.(*block).readFromUntil(0xc2100f0240, 0x7fddc1c84a40, 0xc2100f01e0, 0x5, 0xc2100f01e0, ...)
        /usr/lib/golang/src/pkg/crypto/tls/conn.go:459 +0xb6
crypto/tls.(*Conn).readRecord(0xc210053a00, 0x17, 0x0, 0x0)
        /usr/lib/golang/src/pkg/crypto/tls/conn.go:539 +0x107
crypto/tls.(*Conn).Read(0xc210053a00, 0xc210000a70, 0x8, 0x8, 0x0, ...)
        /usr/lib/golang/src/pkg/crypto/tls/conn.go:897 +0x135
main.(*TransportTcp).receiverRead(0xc210039720, 0xc210000a70, 0x8, 0x8, 0x8, ...)
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:314 +0x154
main.(*TransportTcp).receiver(0xc210039720)
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:269 +0x93
created by main.(*TransportTcp).Connect
        /root/mcms/install/distfiles/log-courier/src/log-courier/transport_tcp.go:231 +0xcd1

rax     0xfffffffffffffffc
rbx     0x8e1f60
rcx     0xffffffffffffffff
rdx     0x80
rdi     0x4
rsi     0x7fff6c3d6e90
rbp     0x0
rsp     0x7fff6c3d6e50
r8      0xffffffff
r9      0xc210039300
r10     0xffffffff
r11     0x202
r12     0x64f340
r13     0xa5004c88
r14     0x21d6c483
r15     0x8443168f
rip     0x43d529
rflags  0x202
cs      0x33
fs      0x0
gs      0x0

Encoding issues with UTF-8 characters

I am just about to migrate from logstash-forwarder to log-courier but am experiencing encoding issues with UTF-8 characters outside the ASCII subset, specifically German umlauts.
In a very simple setup I am running both shippers (logstash-forwarder 0.3.1 and log-courier 0.11) on the very same machine as logstash 1.4.2. While the same test string "Pufferüberlauf" (German for "buffer overflow") is processed correctly by logstash-forwarder, with log-courier the umlaut "ü" gets replaced by "??".

rubydebug output written by Logstash for the logstash-forwarder input:

{"message":"Pufferüberlauf","@version":"1","@timestamp":"2014-07-20T21:39:31.419Z","type":"forwarder","file":"/var/tmp/forwarder","host":"XX","offset":"78"}

rubydebug output written by Logstash for the log-courier input:

{"file":"/var/tmp/courier","host":"XX","message":"Puffer??berlauf","offset":72,"type":"courier","@version":"1","@timestamp":"2014-07-20T21:39:34.885Z"}
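The doubled "??" is telling: "ü" is a two-byte sequence in UTF-8 (0xC3 0xBC), so any code path that replaces each non-ASCII byte individually would produce exactly this output. A hypothetical sketch reproducing the symptom (this is not log-courier's actual code, just an illustration of the suspected byte-wise handling):

```go
package main

import "fmt"

// asciiOnly replaces every non-ASCII byte with '?'. Byte-wise
// handling like this turns the two-byte UTF-8 sequence for "ü"
// into "??", matching the rubydebug output above.
func asciiOnly(s string) string {
	b := []byte(s)
	for i, c := range b {
		if c >= 0x80 {
			b[i] = '?'
		}
	}
	return string(b)
}

func main() {
	fmt.Println(asciiOnly("Pufferüberlauf")) // Puffer??berlauf
}
```

A correct shipper would instead pass valid UTF-8 bytes through untouched and only substitute genuinely invalid sequences.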

log-courier reload does not update filter codec settings

Hi,

I've noticed that log-courier reload does not affect filter codec settings.

Example:

[
 {
     "paths": [ "/tmp/test.log" ],
     "fields": { "type": "testlog" },
     "codec": {
      "name": "filter",
      "patterns": [ 
        ".*test2.*"
      ],
      "negate": true
     }
 }
]

If a log entry contains 'test' it is not filtered. After modifying the patterns to

"patterns": [ 
   ".*test.*"
]

then reloading log-courier (I do see log-courier[6434]: Configuration reload successful in the logs) and writing a 'test' event to the log, it is still not filtered.

If I restart log-courier and then write the test log event, it is filtered.

reconnect after logstash restart

Hi,
another report :)

When I restart logstash for some reason, log-courier (ssl) emits this error:
2014/06/30 10:40:35 Transport error, will reconnect: write tcp 192.168.0.15:5003: connection reset by peer
That is fine, but afterwards nothing happens; log-courier is basically stuck and needs a restart before it sends logs again.

transport error, will try again: EOF

I get a lot of this message, typically about one per minute. Here they are, with the hostname scrubbed:

Sep  5 22:26:45 host1 log-courier[10937]: Transport error, will try again: EOF
Sep  5 22:27:46 host1 log-courier[10937]: Transport error, will try again: EOF
Sep  5 22:28:50 host1 log-courier[10937]: Transport error, will try again: EOF
Sep  5 22:29:51 host1 log-courier[10937]: Transport error, will try again: EOF

It looks like the logs are being sent; I can see them in Elasticsearch/Kibana.

I have other errors in my logs, which I put in #43, but perhaps they are related.

Does not work with zmq and plainzmq

I am trying log-courier with logstash 1.4.2 under Debian 7.
It works fine with "tcp" and "tls",
but I cannot get it to work with zmq or plainzmq (and I need the load-balancing features).

After the initial connection to logstash (which I can see with tcpdump), no more packets are sent; it first hits the spool timeout and then the network timeout.

Logstash is launched with /opt/logstash/bin/logstash -v -v but the logs say nothing.

/etc/log-courier/test.conf:

{
  "general": {
    "admin enabled": true,
    "admin listen address": "tcp:127.0.0.1:1234",
    "log level": "debug",
    "log file": "/var/log/log-courier.log"
  },
  "network": {
    "transport": "zmq",
    "servers": [ "server:6001" ],
    "curve server key": "<retracted>",
    "curve public key": "<retracted>",
    "curve secret key": "<retracted>",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/opt/DEV_LOG_COURIER/nginx-access.log" ],
      "fields": { "type": "api-nginx-access" }
    }
  ]
}

log-courier.log:

2014/10/07 15:41:43.756547 Log Courier version 0.15 pipeline starting
2014/10/07 15:41:43.757542 Loading registrar data from ./.log-courier
2014/10/07 15:41:43.757971 Pipeline ready
2014/10/07 15:41:43.758770 Resuming harvester on a previously harvested file: /opt/DEV_LOG_COURIER/nginx-access.log
2014/10/07 15:41:43.759126 Started harvester at position 0 (requested 0): /opt/DEV_LOG_COURIER/nginx-access.log
2014/10/07 15:41:43.760794 Registered server:6001 (server) with ZMQ
2014/10/07 15:41:43.761554 Connected to tcp://server:6001
2014/10/07 15:41:48.758999 Spooler flushing 3 events due to spool timeout exceeded
2014/10/07 15:42:03.761506 Transport error, will try again: Server did not respond within network timeout

lc-admin status:

Log Courier version 0.15 client

Attempting connection to tcp:127.0.0.1:1234...
Connected

Publisher:
  Status: Connected
  Speed (Lps): 0.00
  Published lines: 0
  Pending Payloads: 1
Prospector:
  Watched files: 1
  Active states: 1
"State: /opt/DEV_LOG_COURIER/nginx-access.log (0xc208004840)":
  Status: Running
  Harvester:
    Speed (Lps): 0.00
    Speed (Bps): 0.00
    Processed lines: 3
    Current offset: 1973
    Last EOF: 1973

logstash input.conf:

input {
  courier {
    transport =>  "zmq"
    port => 6001
    curve_secret_key => "<retracted>"
  }
}

Am I missing something?

thx

Multiline filter and generation of an event when no line-ending at EOF

We have events stored on the file system where one XML file represents one event. These files are created in such a way that every file is a new event, and the last line of each file has no newline terminator. Following is a sample event whose last line does not end in a newline; it is simply the end of the file.

<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">

<soap:Body xmlns:m="http://www.example.org/stock">
  <m:GetStockPriceResponse>
    <m:Price>34.5</m:Price>
  </m:GetStockPriceResponse>
</soap:Body>

</soap:Envelope>

Following is how my configuration looks:

{
    "general": {
        "log level" : "debug"
    },
    "network": {
        "servers": [ "127.0.0.1:5043" ],
        "ssl ca": "/log-courier/bin/selfsigned.crt"
    },
    "files": [
        {
            "paths": [ "/logstash/logs/*.xml" ],
            "fields": { "type": "soap_response" },
            "codec": {
                "name": "multiline",
                "pattern": "<\/.*:Envelope>",
                "negate": true,
                "what": "next"
            }
        }
    ]
}

In the above case, the harvester keeps waiting for a newline character and will never return the event. Is there any configuration that will return the event when the harvester hits the end of the file?

admin enabled by default

Hi,

Just started updating to log-courier 0.14 (well, the devel branch) and noticed that the admin interface is enabled by default.

tcp        0      0 127.0.0.1:1234          0.0.0.0:*               LISTEN      45406/log-courier

But the documentation says:

"admin enabled"

Boolean. Optional. Default: false

Either the documentation is wrong, or it should be disabled by default.

Protocol error on connection with very long log lines

I started getting this message and can't connect to the logstash server. Any clues? It worked fine after installation and is still working on 3 identical servers, but this one started failing after 6 hours of uptime. Restarting the Go client doesn't help.

LogCourierServer] Protocol error on connection from xx.xx.xx.xx:pp: LogCourier::ProtocolError", :level=>:warn}

TLS InsecureSkipVerify

I'm trying to use log courier but I'm facing this issue:
TLS Handshake failure with 192.168.0.10: tls: either ServerName or InsecureSkipVerify must be specified in the tls.Config

I understand that this could be a security issue, but I need to turn off this check (internal test server).
Looking at the configuration, I see no way to disable TLS verification or set a ServerName to make log-courier happy.

Being totally oblivious about go, I placed a
ret.tls_config.InsecureSkipVerify = true
at the end of func NewTransportTlsFactory(config_path string, config map[string]interface{}) (TransportFactory, error)

as a quick and dirty workaround, but I'm quite sure a better solution exists; does someone have a helpful hint?

Thanks.

Unexpected registrar conflict on log rotation

Hi, I'm using commit 43c2aa8 of log-courier

After logrotate rotated the /var/log/syslog file, log-courier seemed to get confused and eventually stopped sending any logs. Around the time the file was rotated log-courier started outputting lines like this:

2014/08/27 06:51:16.868045 BUG: Unexpected registrar conflict detected for /var/log/syslog
2014/08/27 06:51:17.164555 BUG: Unexpected registrar conflict detected for /var/log/syslog
2014/08/27 06:51:22.235496 BUG: Unexpected registrar conflict detected for /var/log/syslog
2014/08/27 06:51:26.662544 BUG: Unexpected registrar conflict detected for /var/log/syslog
2014/08/27 06:51:27.164598 BUG: Unexpected registrar conflict detected for /var/log/syslog

It did that every few seconds for about 2.5 hours, then stopped sending logs to logstash or outputting any other messages. Once I restarted it, it worked OK.

I've seen this happen before too; please let me know if you need any other information.

Thanks much.

Can't build log-courier on FreeBSD 10

Hi,

I'm getting the following error when trying to build it under FreeBSD 10 (amd64):

# make with=zmq
make: "/usr/home/daniel/log-courier/Makefile" line 9: Missing dependency operator
make: "/usr/home/daniel/log-courier/Makefile" line 13: Need an operator
make: Fatal errors encountered -- cannot continue
make: stopped in /usr/home/daniel/log-courier

I can run go build inside src/log-courier and it compiles the binary, so I guess this is just a problem with the Makefile.

Running it with gmake throws the following error:

gmake: *** No rule to make target `bin/log-courier', needed by `all'.  Stop.

gem build instructions

The instruction steps don't appear to be correct:

git clone https://github.com/driskell/log-courier
cd log-courier
gem build

This fails with:

ERROR:  While executing gem ... (Gem::CommandLineError)
    Please specify a gem name on the command line (e.g. gem build GEMNAME)

Building the gem with gem build log-courier.gemspec does work, though. Bad instructions?

# gem -v
1.8.24

Log-courier upgrade affects logstash boot time

After upgrading from 0.11 to 0.12, my logstash indexer takes about 5 seconds to boot and the logstash log file is spammed with the following lines:

...
{:timestamp=>"2014-08-06T15:43:21.703000+0200", :message=>"[LogCourierServer] Invalid message: not enough data", :level=>:warn}
{:timestamp=>"2014-08-06T15:43:21.704000+0200", :message=>"[LogCourierServer] ZMQ recv_string failed: recv_string error: -1", :level=>:warn}
{:timestamp=>"2014-08-06T15:43:21.704000+0200", :message=>"[LogCourierServer] Invalid message: not enough data", :level=>:warn}
{:timestamp=>"2014-08-06T15:43:21.704000+0200", :message=>"[LogCourierServer] ZMQ recv_string failed: recv_string error: -1", :level=>:warn}
...

redis support

How hard would it be to add redis support? It's not a stream per se, but it looks like it could be accomplished.

"lc-admin help" should always work

I would expect lc-admin help to always print the usage information, regardless of whether a connection to log-courier can be made (actually, I would furthermore expect help to print that information only, without performing any action).

# /etc/init.d/log-courier start
Starting log-courier:                                      [  OK  ]
# lc-admin help
Log Courier version 0.14 client

Attempting connection to unix:/var/lib/log-courier/admin.sock...
Connected

Available commands:
  reload    Reload configuration
  status    Display the current shipping status
  exit      Exit
# /etc/init.d/log-courier stop
Stopping log-courier:                                      [  OK  ]
# lc-admin help
Log Courier version 0.14 client

Attempting connection to unix:/var/lib/log-courier/admin.sock...
Failed to connect: dial unix /var/lib/log-courier/admin.sock: no such file or directory

Send to multiple Logstash servers in parallel

Is it possible to send the same messages to multiple (2) Logstash servers in parallel? My issue is that I'd like to test configurations in dev before pushing them to prod. To do this, I'd want a subset of my production servers to send to both the production AND dev Logstash servers.

make error

I get the following error when I run make from the log-courier directory:

go get -d -tags "" log-courier
package log-courier: unrecognized import path "log-courier"
make: *** [bin/log-courier] Error 1

previous timeout doesn't flush buffer

I have added the multiline codec to collect log messages from a PHP application logfile containing PHP stack traces.

The last log event isn't sent to logstash, and with lc-admin I can see there are pending lines.
Only when a new message is written to the logfile is the previous one sent.

My config is like this (files section):

    {
        "paths": [ "/srv/www/logs/php_server_error.log" ],
        "fields": { "type": "php_server_error" },
        "codec": {
             "name": "multiline",
             "pattern": "^[[][0-9]{1,2}?-[a-zA-Z]{3}?-[0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2} [a-zA-Z/]+?[]]",
             "negate": true,
             "what": "previous",
             "previous timeout": "30s"
        } 
    },

An example log line (with some paths stripped):

[18-Sep-2014 15:59:56 Europe/Berlin] exception 'Core_ErrorException' with message 'Illegal string offset 'result'' in /srv/www/PHP_ROOT//Action.php:294
Stack trace:
#0 /srv/www/PHP_ROOT///Action.php(294): exception_error_handler(2, 'Illegal string ...', '/srv/www/PHP_RO...', 294, Array)
#1 /srv/www/PHP_ROOT//BonusController.php(72): Bonus_Controller_Action->getBonusList(Array)
#2 /srv/www/PHP_ROOT//Action.php(516): Bonus_BonusController->selectedAction()
#3 /srv/www/PHP_ROOT//Standard.php(308): Zend_Controller_Action->dispatch('selectedAction')
#4 /srv/www/PHP_ROOT//Front.php(954): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
#5 /srv/www/PHP_ROOT//App.php(550): Zend_Controller_Front->dispatch()
#6 /srv/www/PHP_ROOT/www/index.php(82): Core_App->run()
#7 {main}
Request: selected?format=json&format=json&items=6&random=17&page=1
POST Params:
array (
  '__before_filter' => 
  array (
  ),
)
GET Params:
array (
  'format' => 'json',
  'items' => '6',
  'random' => '17',
  'page' => '1',
  '__before_filter' => 
  array (
    'format' => 'json',
    'items' => '6',
    'random' => '17',
    'page' => '1',
  ),
)

Rotated log files issue?

Hi,
I'm using log-courier to get logs from files that are rotated hourly, using a symlink to the current one, so the log dir is similar to this example:
log.current -> log.07
log.07
log.06
log.05
[...]

In log-courier.conf I have the path pointing to log.current and "dead file": "1h".
What happens is that rotated files are still kept open by log-courier for hours (at least 20 or more). This puzzles me, as it is not what I expected; I assumed files would be released an hour after rotation. Is this (likely) a wrong assumption on my part about log-courier's behaviour, or is something not working in log-courier's handling of rotated logs?
Thanks for any answer.

Compiling on Windows

I am trying to make this work on a Windows XP 32-bit OS to collect logback logs that are rotated daily (and also backed up as zip files for the last 15 days).

With logstash-forwarder I noticed that the collection of logs is fine, but the zip files fail to generate on the logback side, which I think means that logstash-forwarder locks the original log file.

It seems the compile support for Windows is not ready, since log-courier_windows.go (which has errors) doesn't look like log-courier_nonwindows.go.

I can test Win 7 x32 & x64 as well as Windows XP x32

failed to flush outgoing items

I get the above error messages quite a bit in my logs. Here's an example.

{:timestamp=>"2014-09-04T18:30:19.434000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>#<Errno::EBADF: Bad file descriptor - Bad file descriptor>, :backtrace=>["org/jruby/RubyIO.java:2097:in `close'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:173:in `connect'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:168:in `connect'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:156:in `connect'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:139:in `connect'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:406:in `connect'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/pool.rb:48:in `fetch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:403:in `connect'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:319:in `execute'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:217:in `post!'", "/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:228:in `post'", "/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:223:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:201:in `receive'", "/opt/logstash/lib/logstash/outputs/base.rb:86:in `handle'", "(eval):325:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/lib/logstash/pipeline.rb:266:in 
`output'", "/opt/logstash/lib/logstash/pipeline.rb:225:in `outputworker'", "/opt/logstash/lib/logstash/pipeline.rb:152:in `start_outputs'"], :level=>:warn}

feature-request: ability to drop some log lines

Howdy!

I've got a feature request. I understand that you might not wish to implement it, as there are ways around it, but it's worth a try!

In some places I've got plenty of logs that I ship and then just drop 80% of. This consumes lots of traffic and CPU on the logstash servers. An example would be postfix logs; each email generates multiple log events:

Jul 29 06:25:03 smtp1 postfix/smtpd[3533]: connect from unknown[x.x.x.x]
Jul 29 06:25:03 smtp1 postfix/smtpd[3533]: ABCQUEUEID: client=unknown[x.x.x.x]
Jul 29 06:25:03 smtp1 postfix/cleanup[3504]: ABCQUEUEID: message-id=<[email protected]>
Jul 29 06:25:03 smtp1 postfix/qmgr[24275]: ABCQUEUEID: from=<[email protected]>, size=20425, nrcpt=1 (queue active)
Jul 29 06:25:03 smtp1 postfix/smtpd[32632]: SECONDQID: client=localhost.localdomain[127.0.0.1], orig_queue_id=ABCQUEUEID, orig_client=unknown[x.x.x.x]
Jul 29 06:25:03 smtp1 postfix/smtp[3641]: ABCQUEUEID: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10037, conn_use=72, delay=0.14, delays=0.04/0/0/0.09, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as SECONDQID)
Jul 29 06:25:03 smtp1 postfix/qmgr[24275]: ABCQUEUEID: removed

And then a log line from antispam, and then queued back to postfix with the same amount of logs + additional stuff for my own needs. Basically, around 15-20 log lines for one email. I only need 2-3 of them, and the rest is dropped on logstash side.

Would it be possible to add a feature to log-courier that filters out / drops log lines based on a regexp? This would really reduce the CPU and network usage on my side.

logging strangeness?

Hi,

Updated my log-courier to 0.14 (devel build from today) and I'm seeing something strange in the logs:

Sep 20 16:31:51 es2 log-courier[37917]: Connected to X.X.X.X
Sep 20 16:31:53 es2 log-courier[26545]: Registrar received offsets for 44 log entries
Sep 20 16:31:55 es2 log-courier[37917]: Spooler flushing 56 events due to spool timeout exceeded
Sep 20 16:31:55 es2 log-courier[37917]: Registrar received offsets for 56 log entries
Sep 20 16:31:58 es2 log-courier[26545]: Registrar received offsets for 25 log entries
Sep 20 16:32:00 es2 log-courier[37917]: Spooler flushing 23 events due to spool timeout exceeded
Sep 20 16:32:00 es2 log-courier[37917]: Registrar received offsets for 23 log entries
Sep 20 16:32:03 es2 log-courier[26545]: Registrar received offsets for 12 log entries
Sep 20 16:32:05 es2 log-courier[37917]: Spooler flushing 2 events due to spool timeout exceeded
Sep 20 16:32:05 es2 log-courier[37917]: Registrar received offsets for 2 log entries

As you can see, the 'Spooler flushing' entries don't show up for all of the 'Registrar received' messages (i.e. there are none for the 44 / 25 / 12 log entry batches).

Long shutdown times

Hi,

I've noticed that sometimes log-courier takes quite a long time to shut down. Sometimes it's a few seconds, sometimes half a minute or maybe even more; the init script kills log-courier if it doesn't stop on its own within 30 seconds, so 30 seconds is the maximum I've observed.

An example log output:

Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Log Courier starting up
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Setting trusted CA from file: /usr/local/share/logstash/logstash.crt
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Loading registrar data from ./.log-courier
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /tmp/test.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /tmp/php-errors
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/mail.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/auth.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/messages
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/kern.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/debug
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/daemon.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Resuming harvester on a previously harvested file: /var/log/firewall-noise.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 1181 (requested 1181): /tmp/test.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 0 (requested 0): /tmp/php-errors
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 33763 (requested 33763): /var/log/mail.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 4810700 (requested 4810700): /var/log/auth.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 11000 (requested 11000): /var/log/messages
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 2152 (requested 2152): /var/log/kern.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 0 (requested 0): /var/log/debug
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 385242 (requested 385242): /var/log/daemon.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Started harvester at position 98335 (requested 98335): /var/log/firewall-noise.log
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Connecting to xxx.xxx.xxx.xxx:9001 (logstash2.example.com)    
Jul 10 13:56:47 server log-courier[332]: 2014/07/10 13:56:47 Connected with xxx.xxx.xxx.xxx
Jul 10 13:56:52 server log-courier[332]: 2014/07/10 13:56:52 Registrar received 1 event
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Log Courier shutting down
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Spooler shutdown complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/daemon.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /tmp/php-errors complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /tmp/test.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/debug complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/auth.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/mail.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/firewall-noise.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/kern.log complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Harvester shutdown for /var/log/messages complete
Jul 10 13:56:57 server log-courier[332]: 2014/07/10 13:56:57 Prospector shutdown complete
Jul 10 13:57:14 server log-courier[332]: 2014/07/10 13:57:14 Registrar received 43 events
Jul 10 13:57:14 server log-courier[332]: 2014/07/10 13:57:14 Registrar received 5 events
Jul 10 13:57:15 server log-courier[332]: 2014/07/10 13:57:15 Publisher shutdown complete
Jul 10 13:57:15 server log-courier[332]: 2014/07/10 13:57:15 Registrar shutdown complete

As you can see, log-courier received a shutdown call at 13:56:57 and finished shutting down at 13:57:15 - that's 18 seconds.

Is there anything that can be done to make the shutdowns faster?

Is it because log-courier waits for ACKs from Logstash before writing the state file and exiting gracefully? Maybe Logstash is too busy, and that's why it happens; would adding more servers help?

Unable to use the zmq transport option

Oye,

I'm trying to use the zmq transport option to ship logs from log-courier to logstash.

But there is no documentation on how to use the Logstash courier input with zmq. When I generate my curve keys with the lc-curvekey utility, I get the following output:

Generating configuration keys...
(Use 'genkey --single' to generate a single keypair.)

Copy and paste the following into your Log Courier configuration:
    "curve server key": " L{+e!%gT!z(7!<pK4:]1av[xU=<jrA3A086cbGZY ",
    "curve public key": " pxU5^I>@UhJ]L@vi9<l9/o+@2AsSAlQLR#I9IqBE ",
    "curve secret key": " bv]>#M9]UI9vLkMYS3{EJX7{E*G@D%uRwl()%rc1 ",

Copy and paste the following into your LogStash configuration:
    curve_secret_key => " ^?Vye}w(GIy[?vjPkyU.0e2Hcq7=C>Rf8-^@Z?Hv ",

But when I add the last line to my Logstash configuration, I get the following error in the Logstash log:

{:timestamp=>"2014-07-17T16:50:16.371000+0200", :message=>"Using milestone 1 input plugin 'courier'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to
 improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-07-17T16:50:16.389000+0200", :message=>"Unknown setting 'curve_secret_key' for courier", :level=>:error}
{:timestamp=>"2014-07-17T16:50:16.392000+0200", :message=>"Error: Something is wrong with your configuration."}
{:timestamp=>"2014-07-17T16:50:16.393000+0200", :message=>"You may be interested in the '--configtest' flag which you can\nuse to validate logstash's configuration before you choose\nto restart a running system."}

I'm using release v0.11 of log-courier.

log to syslog not working anymore?

After upgrading log-courier to the newest version, the syslog output doesn't work anymore...

I'm running log-courier with:

/usr/local/bin/log-courier -config /etc/log-courier.conf -log-to-syslog=true

But it does not write anything to syslog. Looking at the lsof output, it does not even open a socket to syslog.

Old version:

...
log-couri 3636 root    0u   CHR                1,3      0t0        12 /dev/null
log-couri 3636 root    1u   CHR                1,3      0t0        12 /dev/null
log-couri 3636 root    2u   CHR                1,3      0t0        12 /dev/null
log-couri 3636 root    3u  unix 0xffff88000ddd7a80      0t0 128880045 socket
log-couri 3636 root    4u  0000                0,9        0      2353 anon_inode
log-couri 3636 root    5r   REG              202,2     1828    139032 /var/log/mail.log
...

New version:

# lsof -p 12344 | grep -i socket
#

It does log to STDOUT though.

Build requirements on Windows

The Build Requirements section is missing at least a couple of things for building log-courier on a Windows Server machine:

  • Some kind of 'make' installed
  • Some kind of 'git' installed and usable

That's all I've found so far. I can read Makefiles and was able to winkle out what commands I need to run to build the programs, but after setting my GOPATH variable properly I get

    >go get -d -tags '' log-courier
    go: missing Git command. See http://golang.org/s/gogetcmd
    package github.com/op/go-logging: exec: "git": executable file not found in %PATH%

at which point I'm stuck. Gonna have to set up my own VM to work on this where I can install whatever is needed, I guess.
