gerritjvv / bigstreams
bigstreams: big data kafka hadoop and file based imports
License: Eclipse Public License 1.0
Start creating a simple independent monitoring UI.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2010 at 12:52
We need to research how to make the current coordination service
distributed and capable of failover.
Distributed mode is feasible, but it should not greatly impact performance.
Original issue reported on code.google.com by [email protected]
on 14 Oct 2010 at 11:48
Streams provides strict, documented guarantees about data integrity even under
application or machine failure.
This document must record the areas where data duplication is possible and
what streams has done to guard against it.
Original issue reported on code.google.com by [email protected]
on 20 Oct 2010 at 9:19
Complete the home page with
-> a project overview
-> rationale
-> Help wanted
-> related projects
-> How to contribute
Original issue reported on code.google.com by [email protected]
on 23 Sep 2010 at 1:38
This is the collector user guide.
Document
-> the properties
-> the CLI commands
-> the rest monitoring
-> startup and shutdown
Original issue reported on code.google.com by [email protected]
on 23 Sep 2010 at 1:35
If the pool size is smaller than the number of send threads, the agent can be
slowed down while sending files to the collector because a compressor is not
available for each thread.
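A minimal sketch of why an undersized pool stalls the send threads (all class names here are hypothetical, not the agent's actual implementation): a bounded pool blocks callers once every compressor is checked out.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical illustration: if poolSize < number of send threads,
// some send threads must block in acquire() until a compressor
// is returned to the pool.
public class CompressorPool {
    private final BlockingQueue<Object> pool;

    public CompressorPool(int poolSize) {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++)
            pool.offer(new Object()); // stand-in for a real compressor
    }

    public Object acquire() {
        try {
            return pool.take(); // blocks when the pool is exhausted
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public void release(Object compressor) {
        pool.offer(compressor);
    }

    public int available() {
        return pool.size();
    }
}
```

Sizing the pool to at least the send thread count avoids this blocking entirely.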
Original issue reported on code.google.com by [email protected]
on 14 Jun 2011 at 9:01
Sending of files requires the following monitor and management services:
Log Delete Service
Delete files after sent.
Some properties:
deleteWhenSent = $secondsAfterSent
compressWhenSent = $secondsAfterSent (-1 disable)
moveWhenSent= $secondsAfterSent:$directory to move to
execWhenSent= $secondsAfterSent:"$cmd"
LogDeleteResource:
Shows the actions of the LogDeleteService
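Under the property scheme above, an agent configuration could look like this (all values are illustrative, and `$file` is a hypothetical placeholder for the sent file):

```properties
# delete a file 3600 seconds after it was sent
deleteWhenSent=3600
# post-send compression disabled
compressWhenSent=-1
# move sent files to an archive directory after 60 seconds
moveWhenSent=60:/var/log/streams/sent
# run a custom command 60 seconds after the file was sent
execWhenSent=60:"gzip -9 $file"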
Original issue reported on code.google.com by [email protected]
on 25 Apr 2011 at 6:33
java.lang.NumberFormatException: For input string: "E.21106E2"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1222)
at java.lang.Double.parseDouble(Double.java:510)
at java.text.DigitList.getDouble(DigitList.java:151)
at java.text.DecimalFormat.parse(DecimalFormat.java:1303)
at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1591)
at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1312)
at java.text.DateFormat.parse(DateFormat.java:335)
at org.streams.commons.file.impl.SimpleFileDateExtractor.parse(SimpleFileDateExtractor.java:46)
at org.streams.commons.file.impl.SimpleFileDateExtractor.parse(SimpleFileDateExtractor.java:58)
at org.streams.agent.file.impl.DirectoryPollingThread.createFileStatus(DirectoryPollingThread.java:177)
at org.streams.agent.file.impl.DirectoryPollingThread.run(DirectoryPollingThread.java:131)
at org.streams.agent.file.impl.ThreadedDirectoryWatcher$1.run(ThreadedDirectoryWatcher.java:47)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
(The same trace is repeated in the log for each directory polling run.)
Original issue reported on code.google.com by [email protected]
on 13 Jun 2011 at 5:40
The agent configuration should be viewable from the REST interface.
This will allow any UI to see the configuration without requiring login to the
agent machine.
Original issue reported on code.google.com by [email protected]
on 20 Apr 2011 at 8:55
The Metrics instance is not showing on the agent status.
Original issue reported on code.google.com by [email protected]
on 29 Sep 2010 at 3:15
Under extreme conditions, when errors are continuously sent by the agents, the
Collector runs out of memory.
A fix for this problem is underway and will be tested before release.
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 1:58
Create a GRing client implementation that will send Message instances.
The client is designed so that a Channel is kept open to send N Messages, and
the channel is only closed from the close method.
On any error or close event other than one triggered by the close method, the
client will set its state to closed.
We do not need to accept an ACK message from the server because the IO is ring
asynchronous, which is an internal term meaning that the response to a write
travels through the GRing (all members) and comes back in on another port.
This is done because sending an ack between 2 members is not as useful as
sending and receiving a global ack confirming that all active members have
received the message.
Original issue reported on code.google.com by [email protected]
on 28 Oct 2010 at 12:31
Create initial installation documentation covering the installation steps for
the agent, collector and coordination services.
This document should also cover how to setup LZO compression.
Original issue reported on code.google.com by [email protected]
on 23 Sep 2010 at 1:34
Create a list of nagios checks which Operations teams can use as a guideline on
how to monitor the streams instances via Nagios.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2010 at 12:53
java.io.IOException: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:633)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:95)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at com.hadoop.compression.lzo.LzoDecompressor.<init>(LzoDecompressor.java:186)
at com.hadoop.compression.lzo.LzoCodec.createDecompressor(LzoCodec.java:202)
at com.hadoop.compression.lzo.LzoCodec.createInputStream(LzoCodec.java:170)
at org.streams.commons.io.impl.ProtocolImpl.read(ProtocolImpl.java:90)
at org.streams.collector.server.impl.LogWriterHandler.messageReceived(LogWriterHandler.java:98)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndfireMessageReceived(ReplayingDecoder.java:516)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:497)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:434)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 4:21
UI should show each agent's configuration
Original issue reported on code.google.com by [email protected]
on 20 Apr 2011 at 8:55
Have the coordination service keep track of the current agents in the system.
We already keep track of all agents that have connected to the system, but for
better monitoring and reporting we need the agents to send a heart beat to the
coordination service.
This will allow the coordination service to report on all agents in the
collection system.
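A minimal sketch of the agent-side heartbeat described above (all class and method names here are hypothetical): a scheduled task periodically performs the beat, which a real implementation would send to the coordination service.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: records the timestamp of the last heartbeat.
// A real implementation would transmit the beat to the coordination
// service so it can report on all live agents.
public class AgentHeartbeat {
    private final AtomicLong lastBeat = new AtomicLong();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    // the work done on every heartbeat tick
    void beat() {
        lastBeat.set(System.currentTimeMillis());
    }

    public void start(long periodSeconds) {
        timer.scheduleAtFixedRate(this::beat, 0, periodSeconds,
                TimeUnit.SECONDS);
    }

    public long lastBeat() {
        return lastBeat.get();
    }

    public void stop() {
        timer.shutdownNow();
    }
}
```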
Original issue reported on code.google.com by [email protected]
on 28 Oct 2010 at 8:28
The agent should recognize a file lock error, i.e. when the collector sent a
request and the coordination service told the server that the current file has
already been locked. On such an error the agent should include an informative
error line in its log files. This will make it easier to track why two requests
from the same agent happen for the same file.
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 4:00
See GzipCodec java 1.6 _02 - hadoop Memory Leak for the trail.
We will continue to look into this issue by debugging either the gzip library
itself or the hadoop native JNI code.
For the moment we have decided that the best fix is to:
-> use LZO
-> or use the Java Gzip implementation
Original issue reported on code.google.com by [email protected]
on 19 Oct 2010 at 4:04
The following error was introduced in the new, unreleased version:
The agent goes into an infinite loop and does not recover, even with a kill signal.
Error description: the agent hangs when reporting an error to the collector.
Cause:
AppStartCommand.shutdown() calls AppLifeCycleManager.shutdown(),
which calls FileSendService.shutdown(),
which calls ClientResourceFactoryImpl.destroy(),
which calls ClientConnectionFactory.close(),
which calls org.jboss.netty.channel.socket.ClientSocketChannelFactory.destroyExternalResources() and hangs there.
Solution:
Edit the close() method in org.streams.agent.send.impl.ClientConnectionFactoryImpl to just set the org.jboss.netty.channel.socket.ClientSocketChannelFactory
instance to null and never call destroyExternalResources.
Add a ThreadResourceService that implements the ApplicationService interface and add it to the ApplicationLifeCycleManager.
On shutdown the ThreadResourceService will ensure that all threads are shut down.
Original issue reported on code.google.com by [email protected]
on 25 Nov 2010 at 1:35
The hsqldb 2.0.0 version crashed with this version of the Java JDK.
Similar errors have been found in other software used on the same boxes; the
JVM crashed with a SIGSEGV error.
Using a newer JVM works, but we are switching to hsqldb 1.8 because that
version does not cause the JVM to crash.
It is recommended to use the 1.6.0_20-b02 JVM or newer.
Original issue reported on code.google.com by [email protected]
on 28 Sep 2010 at 1:37
Thread pools are not reused on each connection call but are recreated. This
causes the agent to use a large amount of memory.
Original issue reported on code.google.com by [email protected]
on 26 Sep 2010 at 12:23
The following garbage collection parameters are to be added to the
streams-env.sh script for collector, agent and coordination service:
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
-XX:MaxPermSize=128m
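In streams-env.sh these flags could be appended to the JVM options along the following lines; the JAVA_OPTS variable name is an assumption about the script, not confirmed by the source:

```shell
# streams-env.sh (variable names are assumptions; adjust to the
# actual script): append the GC flags to the JVM options.
JAVA_GC_OPTS="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:+DisableExplicitGC -XX:MaxPermSize=128m"
export JAVA_OPTS="$JAVA_OPTS $JAVA_GC_OPTS"
```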
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 2:01
It seems that the hadoop GzipCodec with native libraries has a memory leak that
slowly consumes virtual memory.
On our boxes the agent process using the GzipCodec has consumed 90 GB of
virtual (swap + resident) memory, and this grows unbounded.
None of our machines that run the LZO codec report this problem, and their
memory usage is extremely low.
We will investigate this issue and try to solve it by either:
-> putting the fix into bigstreams directly
-> and/or opening a Jira on the hadoop core site with a fix.
Original issue reported on code.google.com by [email protected]
on 19 Oct 2010 at 2:20
Currently the metrics are outputting 4 lines at a time. This should be reduced
to 1.
Original issue reported on code.google.com by [email protected]
on 20 Apr 2011 at 9:54
The agent -ls command is not showing all entries correctly.
Original issue reported on code.google.com by [email protected]
on 18 Apr 2011 at 1:35
HazelcastFileTrackerStorage needs to save the file history to a persistent
storage map.
We require:
-> IMap<HistoryKey, FileTrackingStatusHistory>
-> remove logTypeSet, agentSet
-> Add DB JPA mappings for history
-> Add persistent store implementation
Original issue reported on code.google.com by [email protected]
on 25 Apr 2011 at 11:16
The NIOClientSocketChannelFactory must be a singleton and should be reused.
Not doing so would lead to direct memory errors.
The NIOClientSocketChannelFactory pre-creates as many NIOWorker instances
as processors * a constant. Each NIOWorker pre-allocates 64KB of direct memory.
On a 16 processor machine this means 64KB * 16 = 1MB per instance of
NIOClientSocketChannelFactory.
Creating an instance on each client connect for the CoordinationService means
2MB per send lock and send unlock combination.
The ClientConnectionResource class in the commons and the agent client
connection will be changed to accept an instance of
NIOClientSocketChannelFactory in the Constructor.
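The sharing described above can be sketched as follows (a minimal illustration: the nested Factory class is a stand-in for Netty's NioClientSocketChannelFactory, which pre-allocates direct buffers per worker):

```java
// Hypothetical sketch of holding one channel-factory instance for the
// whole application instead of creating one per client connect.
public class SharedChannelFactory {
    static class Factory {
        static int instances = 0; // counts constructions, for illustration
        Factory() { instances++; }
    }

    // created exactly once when this class is initialized
    private static final Factory INSTANCE = new Factory();

    public static Factory get() {
        return INSTANCE;
    }
}
```

Callers such as the ClientConnectionResource would then receive this shared instance through their constructor rather than constructing their own.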
Original issue reported on code.google.com by [email protected]
on 4 Oct 2010 at 8:19
This is currently a bash script. We should move away from it and design, as
part of the collector project, another component daemon that will load the
collected data to HDFS.
It should support describing bucketing through configuration, etc.
Original issue reported on code.google.com by [email protected]
on 14 Oct 2010 at 11:51
This has been identified as a good feature for identifying the rpms.
Original issue reported on code.google.com by [email protected]
on 28 Oct 2010 at 10:52
If the threads parameter is <= 0, this value should be set equal to the
number of entries in stream_directories.
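The defaulting rule above can be sketched as (the method and class names are illustrative, not the agent's actual API):

```java
import java.util.List;

public class ThreadConfig {
    // If the configured thread count is <= 0, fall back to one thread
    // per entry in stream_directories.
    public static int resolveThreads(int configured,
                                     List<String> streamDirectories) {
        return configured > 0 ? configured : streamDirectories.size();
    }
}
```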
Original issue reported on code.google.com by [email protected]
on 13 Jan 2011 at 1:07
[deleted issue]
This is the coordination service user guide.
Document
-> the properties
-> the CLI commands
-> the rest monitoring
-> startup and shutdown
Original issue reported on code.google.com by [email protected]
on 23 Sep 2010 at 1:36
This is a bug that we've only seen on one machine, and it happened when a huge
cp -R command was run on the directory combined with multiple agent restarts.
The directory polling should check all of its files for entries where
status=DONE and filePointer is less than the file size. This can be executed as a single HSQL query, and any entries found must be set to status READY.
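The single recovery statement could take the following shape (the table and column names are assumptions about the agent's HSQL schema, not taken from the source):

```java
public class RecoveryQuery {
    // Hypothetical HSQL statement: reset files that are marked DONE but
    // whose file pointer never reached the end of the file back to READY
    // so they are picked up and sent again.
    public static final String RESET_PARTIAL_DONE =
            "UPDATE file_tracking_status "
          + "SET status = 'READY' "
          + "WHERE status = 'DONE' AND file_pointer < file_size";
}
```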
Original issue reported on code.google.com by [email protected]
on 3 Nov 2010 at 4:46
Collector garbage collection takes 2 minutes for ParNew.
A possible solution is to set the number of concurrent GC threads lower.
see http://forums.sun.com/thread.jspa?threadID=5244426
Quote:
After the exciting reply I did get on my question, we did some more
investigations on the problem and it seems that we finally found the solution
to our problem.
The number of garbage collection threads used by the virtual machine defaults
to the number of cpus of the machine.
This is ok for small machines or machines where the main load is produced by
the java application itself.
In our environment the main load is not produced by the java application but
oracle database processes.
When java tries to do its garbage collection using 120 threads (# CPU) on a
machine which is already overloaded by non-java processes, the thread
synchronization seems to produce an exorbitant overhead.
My theory is that spin locking is used on memory access, causing threads to
spin while waiting for other blocking threads not getting cpu because of the
heavy load on the system.
The solution is now to limit the number of garbage collection threads.
We did that on the first try by setting -XX:ParallelGCThreads=8
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 4:14
Documentation for the Agent Java Code.
i.e. UML diagrams, architecture explanation etc.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2010 at 12:50
The new agent logic does not close files correctly when an error is
encountered from the server side.
A fix is underway and currently in the testing phase.
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 1:59
Branch the current trunk version for release 0.1.0 and create the new version 0.1.1.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2010 at 12:46
Lock memory currently blocks on a ConcurrentHashMap instance.
We are improving this by using a fixed-size array of ReentrantLock instances,
similar to the lock striping used in ConcurrentHashMap.
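A minimal lock-striping sketch in the spirit described above (a generic illustration of the technique, not the project's actual implementation): each key hashes to one stripe, so unrelated keys rarely contend on the same lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Fixed-size array of locks; a key always maps to the same stripe,
// while different keys are spread across the array.
public class StripedLocks {
    private final ReentrantLock[] stripes;

    public StripedLocks(int size) {
        stripes = new ReentrantLock[size];
        for (int i = 0; i < size; i++)
            stripes[i] = new ReentrantLock();
    }

    public ReentrantLock lockFor(Object key) {
        int h = key.hashCode();
        h ^= (h >>> 16); // spread the hash bits
        return stripes[(h & 0x7fffffff) % stripes.length];
    }
}
```

Contention is now bounded by the stripe count rather than by a single shared monitor.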
Original issue reported on code.google.com by [email protected]
on 28 Sep 2010 at 1:35
This is the agent user guide.
Document
-> the properties
-> the CLI commands
-> the rest monitoring
-> startup and shutdown
Original issue reported on code.google.com by [email protected]
on 23 Sep 2010 at 1:36
When the coordination service fails and the collector has already written data
to the local file, a rollback is done. During the rollback process the
collector itself might fail.
The only way to roll back then is when the collector starts up. We need to
track this so that, on startup, the collector can see the failed rollback and
roll back correctly.
Original issue reported on code.google.com by [email protected]
on 20 Oct 2010 at 11:40
The HashedWheelTimer should only ever be created once per application.
This should be implemented as a singleton factory pattern.
Currently the CoordinationServiceClient in the commons project creates a new
instance each time the getAndLock and saveAndFreeLock methods are called.
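One way to express the single-instance requirement is the initialization-on-demand holder idiom, sketched below (the nested Timer class is a stand-in for Netty's HashedWheelTimer; this is an illustration of the pattern, not the project's code):

```java
// Holder idiom: the Holder class is loaded lazily and exactly once,
// so only one timer instance ever exists per JVM, regardless of how
// many times get() is called.
public class TimerSingleton {
    static class Timer {
        static int created = 0; // counts constructions, for illustration
        Timer() { created++; }
    }

    private static class Holder {
        static final Timer INSTANCE = new Timer();
    }

    public static Timer get() {
        return Holder.INSTANCE;
    }
}
```

getAndLock and saveAndFreeLock would then both call get() instead of constructing a new timer per invocation.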
Original issue reported on code.google.com by [email protected]
on 13 Jun 2011 at 4:28
The collector must be able to:
Roll back a file even though its contents have already been written.
This rollback is only applied during a single request.
Even if the rollback fails, the collector should be able, on restart, to see
that a file was to be rolled back and roll it back.
Original issue reported on code.google.com by [email protected]
on 1 Oct 2010 at 9:24
java.lang.NullPointerException
at org.streams.collector.write.impl.FileOutputStreamPoolImpl.checkFilesForRollover(FileOutputStreamPoolImpl.java:297)
at org.streams.collector.write.impl.FileOutputStreamPoolFactoryImpl.checkFilesForRollover(FileOutputStreamPoolFactoryImpl.java:59)
at org.streams.collector.write.impl.LocalLogFileWriter$RolloverChecker.run(LocalLogFileWriter.java:253)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
2011-06-14/10:21:32.272/CEST ERROR LocalFileWriter-LogRolloverCheck
org.streams.collector.write.impl.FileOutputStreamPoolImpl -
java.lang.NullPointerException
Original issue reported on code.google.com by [email protected]
on 14 Jun 2011 at 8:27
Documentation for the Coordination Service Java Code.
i.e. UML diagrams, architecture explanation etc.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2010 at 12:51
A separate thread should be running to check for locks that have timed out.
If a lock has timed out it should be removed so that the coordination service
status page displays the correct number of locks.
Maybe add a timeout counter to the coordination status?
Original issue reported on code.google.com by [email protected]
on 27 Sep 2010 at 3:58
The Coordination Service should save the agent send history.
Remember that the coordination service is a distributed service, so all
information should be shared in a distributed fashion.
This information can be stored in a distributed map with backup == 1.
We need to know:
Agent -- last time contacted (timestamp per file)
Agent -- send information:
Keep history information:
Size of message (we know this from comparing the last file pointer sent)
Collector it was sent to (we know this from the remoteAddress:InetSocketAddress in the coordination service CoordinationLockHandler)
Items to do:
(1) Update CollectorFileTrackerMemory so that it contains a history part.
(2) Update HazelcastFileTrackerStorage so that it implements CollectorFileTrackerMemory.
(3) Test resources.
Original issue reported on code.google.com by [email protected]
on 25 Apr 2011 at 7:18
After a file is sent, some kind of status update needs to run.
AgentFileMonitorService
Required is:
LineCount: on sent, count the lines of the file just sent
FileSize: read the file size
If file age > X and not sent: notify someone and show on status
AgentFileMonitorResource
Shows the actions and status of the AgentFileMonitorService
Original issue reported on code.google.com by [email protected]
on 25 Apr 2011 at 6:36
The coordination service line counter display is not always accurate: although
the actual file data is 100% correct, looking at the line counter suggests that
lines have been duplicated. This is not the case and has been confirmed. It
might be the way in which the counter is updated.
Original issue reported on code.google.com by [email protected]
on 8 Oct 2010 at 7:59