Hi @java2kus
The problem here is that the glob match will still return 300k files, and to know whether a file is old or new it will still need to check each one via stat().
If we tell it to stop remembering them, it will just see them again.
It's a complex issue and I can't think of any solution within courier. You may be best running your own archive script to move old files to another directory.
Having so many files in one folder will cause other issues too and wouldn't be isolated to courier.
Jason
from log-courier.
Hi Jason,
Thanks for the response. We have these files in multiple folders. The folder structure goes something like this:
/<businesseventname>/YYMMDDHH/*.xml
We actually move these files every week. I guess we will have to change the job to move them every hour. Thanks for the suggestion.
I was thinking about it and I can see a use for a dead time action option. It would work by the user setting dead time to about an hour, then the action to delete or, say, move:/archive/ or something. This would be the best way to reliably ensure files are only archived after they are completed, as using a job to do it might cause skipped files if Logstash was really busy or down. (dead time only triggers once the file is fully processed and received by Logstash, and the last modification is older.)
Just for a bit more background - so you end up with 300k files each day? That's quite significant!
I'll log down dead time action as an idea. I can't commit to adding it but I will definitely explore it at some point if it sounds feasible to you. What do you think?
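If the idea were implemented, the configuration might look something like the sketch below. "dead time" is an existing per-file option in log-courier's configuration; "dead time action" is purely hypothetical here, and the glob is a placeholder rather than the actual folder layout.

```json
{
  "files": [
    {
      "paths": [ "/var/log/events/*/*/*.xml" ],
      "dead time": "1h",
      "dead time action": "move:/archive/"
    }
  ]
}
```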
Sorry just saw you already mentioned it's 300k a day! Disregard that question.
300k files a day is a bit unusual; it happened because logging events for auditing and troubleshooting purposes came as an afterthought. Typically we would have designed the logging architecture around a high-performance message broker, but it started with logging only around 10k events a day (where that was not felt necessary) and grew to the current volume.
On the idea, I like the dead time action concept. This way we won't be skipping events (due to an hourly job running to move events into the archive) even if Logstash/Elasticsearch is overwhelmed with events. This dead time action should only be triggered after receiving confirmation from Logstash that the event has been processed (I know there is a bug in Logstash, because of the internal buffer pool, which may result in some events getting skipped in case of a crash). I am just learning programming in Go, so I might modify the source to implement a quick hack of the idea until you decide on its feasibility. Thanks!
from log-courier.
Closing as it's a fairly big task and there's no bandwidth. It would also mean a large rewrite of the prospector.