
gerritjvv / bigstreams


bigstreams: big data Kafka, Hadoop and file-based imports

License: Eclipse Public License 1.0

Java 42.70% Shell 2.41% Lua 0.01% JavaScript 3.14% CSS 0.33% Protocol Buffer 0.01% Scala 0.32% Batchfile 0.07% XSLT 0.07% HTML 20.68% Makefile 0.87% M4 0.79% C++ 10.58% Perl 2.57% XS 1.51% C 6.41% Perl 6 0.80% Python 1.99% Groff 4.58% Mako 0.16%

bigstreams's People

Contributors: gerritjvv

bigstreams's Issues

DocumentHowStreamsSolvesDataDuplication

Streams provides strict, documented guarantees about data integrity, even under
application or machine failure.

This document must write down the areas where data duplication is possible and
what Streams has done to guard against it.


Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 9:19

Home page completion

Complete the home page with
-> a project overview
-> rationale
-> help wanted
-> related projects
-> how to contribute


Original issue reported on code.google.com by [email protected] on 23 Sep 2010 at 1:38

Collector Documentation

This is the collector user guide.
Document
-> the properties
-> the CLI commands
-> the REST monitoring
-> startup and shutdown

Original issue reported on code.google.com by [email protected] on 23 Sep 2010 at 1:35

FileSendMonitorService

Sending of files requires the following monitor and management services:

Log Delete Service
  Deletes files after they have been sent.
  Some properties:
   deleteWhenSent = $secondsAfterSent
   compressWhenSent = $secondsAfterSent (-1 disables)
   moveWhenSent = $secondsAfterSent:$directory (the directory to move to)
   execWhenSent = $secondsAfterSent:"$cmd"

LogDeleteResource:
   Shows the actions of the LogDeleteService.
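A minimal sketch of how these property values might be parsed; the property names come from the list above, but the parser class, its method names, and its behaviour are assumptions, not the real implementation:

```java
// Hypothetical parser for values of the form "$secondsAfterSent" or
// "$secondsAfterSent:$argument" (e.g. moveWhenSent=30:/archive).
public class LogDeletePropertyParser {

    // Returns the seconds part before the optional ':' separator.
    public static long seconds(String value) {
        int sep = value.indexOf(':');
        String part = (sep < 0) ? value : value.substring(0, sep);
        return Long.parseLong(part.trim());
    }

    // Returns the argument after ':' (directory or command), or null.
    public static String argument(String value) {
        int sep = value.indexOf(':');
        return (sep < 0) ? null : value.substring(sep + 1).trim();
    }

    // -1 disables the action, per the property list above.
    public static boolean disabled(String value) {
        return seconds(value) == -1L;
    }
}
```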


Original issue reported on code.google.com by [email protected] on 25 Apr 2011 at 6:33

agent number format exception

java.lang.NumberFormatException: For input string: "E.21106E2"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1222)
    at java.lang.Double.parseDouble(Double.java:510)
    at java.text.DigitList.getDouble(DigitList.java:151)
    at java.text.DecimalFormat.parse(DecimalFormat.java:1303)
    at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1591)
    at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1312)
    at java.text.DateFormat.parse(DateFormat.java:335)
    at org.streams.commons.file.impl.SimpleFileDateExtractor.parse(SimpleFileDateExtractor.java:46)
    at org.streams.commons.file.impl.SimpleFileDateExtractor.parse(SimpleFileDateExtractor.java:58)
    at org.streams.agent.file.impl.DirectoryPollingThread.createFileStatus(DirectoryPollingThread.java:177)
    at org.streams.agent.file.impl.DirectoryPollingThread.run(DirectoryPollingThread.java:131)
    at org.streams.agent.file.impl.ThreadedDirectoryWatcher$1.run(ThreadedDirectoryWatcher.java:47)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)
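The trace shows java.text.DateFormat.parse leaking a NumberFormatException (a RuntimeException) for the malformed token "E.21106E2". A hedged sketch of a defensive wrapper, assuming the extractor can treat an unparseable file name as "no date"; the class below is illustrative, not the real SimpleFileDateExtractor:

```java
import java.text.ParsePosition;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative defensive date parse: returns null instead of
// letting a RuntimeException escape into the polling thread.
public class SafeDateParse {

    public static Date tryParse(String pattern, String text) {
        try {
            SimpleDateFormat fmt = new SimpleDateFormat(pattern);
            fmt.setLenient(false);
            ParsePosition pos = new ParsePosition(0);
            Date d = fmt.parse(text, pos);
            // parse(String, ParsePosition) signals failure via errorIndex
            return (pos.getErrorIndex() >= 0) ? null : d;
        } catch (RuntimeException e) {
            // e.g. NumberFormatException from DecimalFormat internals
            return null;
        }
    }
}
```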


Original issue reported on code.google.com by [email protected] on 13 Jun 2011 at 5:40

Collector OutOfMemory Error

Under extreme conditions, when errors are continuously sent by the agents, the
Collector runs out of memory.

A fix is under way for this problem and will be tested before release.

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 1:58

GRing - Create client implementation

Create a GRing client implementation that will send Message instances.
The client is created such that a Channel is kept open to send N Messages;
then, on close, the channel is closed.

On any error or close event other than one from the close method, the client
will set its state to closed.

We do not need to accept an ACK message from the server because the IO is
"ring asynchronous", which is just an internal term for: the response to this
write will travel through the GRing (all members) and a response will come in
on another port.

This is done because sending an ACK between 2 members is not as useful as
sending and receiving a global ACK that all active members have received the
message.
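The close semantics above could be sketched as follows; the class and method names are illustrative only, not the real GRing API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical state handling for the GRing client: the client stays
// open until close() is called, and any other error/close event also
// transitions it to the closed state.
public class GRingClientState {

    private final AtomicBoolean open = new AtomicBoolean(true);
    private volatile boolean closedByUser = false;

    // Called from the client's own close() method.
    public void close() {
        closedByUser = true;
        open.set(false);
    }

    // Called on any error or remote close event: mark the client
    // closed unless the close originated from our own close() call.
    public void onErrorOrRemoteClose() {
        if (!closedByUser) {
            open.set(false);
        }
    }

    public boolean isOpen() {
        return open.get();
    }
}
```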


Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 12:31

Installation documentation

Create initial installation documentation covering the installation steps for
the agent, collector and coordination services.

This document should also cover how to set up LZO compression.

Original issue reported on code.google.com by [email protected] on 23 Sep 2010 at 1:34

Nagios Checks

Create a list of Nagios checks that operations teams can use as a guideline
for monitoring the streams instances via Nagios.

Original issue reported on code.google.com by [email protected] on 7 Oct 2010 at 12:53

Collector Direct Memory Error on extreme blasts

java.io.IOException: java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:633)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:95)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
    at com.hadoop.compression.lzo.LzoDecompressor.<init>(LzoDecompressor.java:186)
    at com.hadoop.compression.lzo.LzoCodec.createDecompressor(LzoCodec.java:202)
    at com.hadoop.compression.lzo.LzoCodec.createInputStream(LzoCodec.java:170)
    at org.streams.commons.io.impl.ProtocolImpl.read(ProtocolImpl.java:90)
    at org.streams.collector.server.impl.LogWriterHandler.messageReceived(LogWriterHandler.java:98)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndfireMessageReceived(ReplayingDecoder.java:516)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:497)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:434)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory



Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 4:21

Agents should report health to coordination service

Have the coordination service keep track of the current agents in the system.
We already keep track of all agents that have connected to the system, but for
better monitoring and reporting we need the agents to send a heartbeat to the
coordination service.

This will allow the coordination service to report on all agents in the
collection system.

Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 8:28

Agent File Lock error logging

The agent should recognize a file lock error, i.e. when the collector sent a
request and the coordination service told the server that the current file has
already been locked. On such an error the agent should write an informative
error line to the log files. This will make it easier to track why two
requests from the same agent happen for the same file.

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 4:00

Hadoop native gzip libraries memory leak

See the issue "GzipCodec java 1.6 _02 - hadoop Memory Leak" for the trail.

We will continue to look into this issue by debugging either the gzip library
itself or the hadoop native JNI code.

For the moment we have decided that the best fix is to:
 -> Use LZO
 -> Or use the Java Gzip implementation

Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 4:04

agent hangs on shutdown during error

The following error was introduced in the new, unreleased version:

The agent goes into an infinite loop, not recovering even with a kill signal.

Error description: the agent hangs when reporting an error to the collector.

Cause:
 AppStartCommand.shutdown() calls AppLifeCycleManager.shutdown(),
 which calls FileSendService.shutdown(),
 which calls ClientResourceFactoryImpl.destroy(),
 which calls ClientConnectionFactory.close(),
 which calls org.jboss.netty.channel.socket.ClientSocketChannelFactory.destroyExternalResources() and hangs there.

Solution:
 Edit the close() method in org.streams.agent.send.impl.ClientConnectionFactoryImpl to just set the org.jboss.netty.channel.socket.ClientSocketChannelFactory
 instance to null and never call destroyExternalResources.
 Add a ThreadResourceService that implements the ApplicationService interface and add it to the ApplicationLifeCycleManager.
 On shutdown the ThreadResourceService will ensure that all threads are shut down.
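A minimal sketch of the proposed close() change; the Netty factory type is replaced by Object so the example compiles standalone (the real field is an org.jboss.netty.channel.socket.ClientSocketChannelFactory):

```java
// Illustrative only: drop the factory reference instead of calling
// destroyExternalResources(), which can hang during shutdown.
public class ClientConnectionFactorySketch {

    private Object channelFactory = new Object(); // stand-in for the Netty factory

    // Proposed fix: never call destroyExternalResources(); just
    // release the reference and let the ThreadResourceService stop
    // any remaining threads on application shutdown.
    public void close() {
        channelFactory = null;
    }

    public boolean isClosed() {
        return channelFactory == null;
    }
}
```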

Original issue reported on code.google.com by [email protected] on 25 Nov 2010 at 1:35

Issue with java 1.6.0_02-b05

The hsqldb 2.0.0 version crashes with this version of the Java JDK.
Similar errors have been found in other software used on the same boxes; the
JVM crashes with a SIGSEGV error.
Using a newer version of the JVM works, but we are switching to hsqldb 1.8
because that version does not cause the JVM to crash.

It is recommended to use the 1.6.0_20-b02 JVM or newer.

Original issue reported on code.google.com by [email protected] on 28 Sep 2010 at 1:37

Use Concurrent Garbage collection

The following garbage collection parameters are to be added to the 
streams-env.sh script for collector, agent and coordination service:

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC 
-XX:MaxPermSize=128m 
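Assuming streams-env.sh collects its JVM options in a variable (the variable name below is an assumption, not the script's real contents), the addition might look like:

```shell
# Sketch: GC flags from the issue above, added to streams-env.sh for
# the collector, agent and coordination service.
JAVA_GC_OPTS="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
 -XX:+DisableExplicitGC -XX:MaxPermSize=128m"
```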

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 2:01

GzipCodec java 1.6 _02 - hadoop Memory Leak

It seems that the hadoop GzipCodec with native libraries has a memory leak
that slowly consumes virtual memory.

On our boxes the agent process using the GzipCodec has consumed 90 GB of
virtual (swap + resident) memory, and it grows unbounded.
None of our machines that run the LZO codec report this problem, and their
memory usage is extremely low.

We will investigate this issue and try to solve it by either:
 -> putting the fix into bigstreams directly
 -> and/or opening a JIRA on the hadoop core site with a fix.


Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 2:20

HazelcastFileTrackerStorage--saveHistory

HazelcastFileTrackerStorage needs to save the file history to a persistent 
storage map.

We require:
   -> IMap<HistoryKey, FileTrackingStatusHistory>
   -> remove logTypeSet, agentSet
   -> Add DB JPA mappings for history
   -> Add persistent store implementation
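A sketch of what the HistoryKey for the IMap could look like; the choice of fields (agent, log type, file name) is an assumption based on the surrounding issues, and value-based equals/hashCode is what a distributed map key needs:

```java
import java.util.Objects;

// Hypothetical key for IMap<HistoryKey, FileTrackingStatusHistory>.
// Serializable so it can travel between Hazelcast members.
public class HistoryKey implements java.io.Serializable {

    private final String agentName;
    private final String logType;
    private final String fileName;

    public HistoryKey(String agentName, String logType, String fileName) {
        this.agentName = agentName;
        this.logType = logType;
        this.fileName = fileName;
    }

    // Value-based equality so lookups work across JVMs.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof HistoryKey)) return false;
        HistoryKey k = (HistoryKey) o;
        return agentName.equals(k.agentName)
                && logType.equals(k.logType)
                && fileName.equals(k.fileName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(agentName, logType, fileName);
    }
}
```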



Original issue reported on code.google.com by [email protected] on 25 Apr 2011 at 11:16

NIOClientSocketChannelFactory must be reused - Collector, Coordination service and Agent

The NIOClientSocketChannelFactory must be a singleton and should be reused.
Not doing so leads to Direct Memory Errors.

The NIOClientSocketChannelFactory will pre-create as many NIOWorker instances
as processors * a constant. Each NIOWorker pre-allocates 64 KB of direct
memory. On a 16-processor machine this means 64 KB * 16 = 1 MB per instance of
NIOClientSocketChannelFactory.

Creating an instance on each client connect for the CoordinationService means
2 MB per send-lock and send-unlock combination.

The ClientConnectionResource class in the commons and the agent client
connection will be changed to accept an instance of
NIOClientSocketChannelFactory in the constructor.
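The proposed reuse could be sketched as a lazily created singleton; the real type is Netty's NioClientSocketChannelFactory, replaced here by a placeholder so the example stands alone:

```java
// Illustrative singleton holder: every client connection receives
// the same factory instance, so the per-NIOWorker direct-memory
// pre-allocation happens only once per application.
public final class ChannelFactoryHolder {

    private static Object instance;

    private ChannelFactoryHolder() {}

    // Lazily create the factory once and always return that instance.
    public static synchronized Object get() {
        if (instance == null) {
            instance = new Object(); // stand-in for the Netty factory
        }
        return instance;
    }
}
```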

Original issue reported on code.google.com by [email protected] on 4 Oct 2010 at 8:19

Coordination Service Userguide

This is the coordination service user guide.
Document
-> the properties
-> the CLI commands
-> the REST monitoring
-> startup and shutdown

Original issue reported on code.google.com by [email protected] on 23 Sep 2010 at 1:36

Agent - report files as DONE when bash cp is run on file

This is a bug that we've only seen on one machine, and it happened when a huge
cp -R command was run on the directory combined with multiple agent restarts.

The directory polling should check all of its files for when status=DONE and
the filePointer is less than the file size. This can be executed as a single
HSQL query, and any entries found must be set to status READY.
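A hypothetical shape for that single query; the table and column names are assumptions inferred from the issue text, not the real schema:

```java
// Sketch only: an UPDATE that flips inconsistent DONE entries
// (file grew past the recorded pointer) back to READY in one query.
public class DoneFileRecovery {

    public static final String RECOVER_SQL =
        "UPDATE file_tracking_status "
      + "SET status = 'READY' "
      + "WHERE status = 'DONE' AND file_pointer < file_size";
}
```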


Original issue reported on code.google.com by [email protected] on 3 Nov 2010 at 4:46

Collector GC 2 minutes on ParNew (30,430 collections)

Collector garbage collection is 2 minutes for ParNew

A possible solution is to set the number of garbage collection threads lower.

see http://forums.sun.com/thread.jspa?threadID=5244426

Quote:
After the exciting reply I did get on my question, we did some more 
investigations on the problem and it seems that we finally found the solution 
to our problem.

The number of garbage collection threads used by the virtual machine defaults 
to the number of cpus of the machine.
This is ok for small machines or machines where the main load is produced by 
the java application itself.
In our environment the main load is not produced by the java application but 
oracle database processes.
When java tries to do it's garbage collection using 120 threads (# CPU) on the 
machine which is already overloaded by non java processes, the thread 
synchronization seems to produce an exorbitant overhead.
My theory is that spin locking is used on memory access, causing threads to 
spin while waiting for other blocking threads not getting cpu because of the 
heavy load on the system.
The solution is now to limit the number of garbage collection threads.

We did that on the first try by setting -XX:ParallelGCThreads=8

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 4:14

New Agent too many open files on error

The new agent logic does not close files correctly when an error is
encountered from the server side.

The fix is under way and currently in the testing phase.

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 1:59

Agent userguide

This is the agent user guide.
Document
-> the properties
-> the CLI commands
-> the REST monitoring
-> startup and shutdown

Original issue reported on code.google.com by [email protected] on 23 Sep 2010 at 1:36

Allow collector file rollback even in event of collector failure during rollback


When the coordination service fails and the collector has already written data
to the local file, a rollback is done. During the rollback process the
collector might fail.

The only way to roll back then is when the collector starts up. We need to be
able to track this so that, on startup, the collector can see the failed
rollback and roll the file back correctly.

Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 11:40

CoordinationServiceClient creates HashedWheelTimer on each connection

The HashedWheelTimer should only ever be created once per application.

This should be added as a singleton factory pattern.

Currently the CoordinationServiceClient in the commons project creates a new 
instance each time the getAndLock and saveAndFreeLock methods are called.
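A once-per-application holder could look like the following sketch; java.util.Timer stands in for org.jboss.netty.util.HashedWheelTimer so the example is self-contained:

```java
import java.util.Timer;

// Illustrative initialization-on-demand holder: the timer is created
// once, lazily, and shared by every getAndLock/saveAndFreeLock call.
public final class TimerHolder {

    private static final class Holder {
        // Daemon thread so the timer does not block JVM shutdown.
        static final Timer TIMER = new Timer("coordination-timer", true);
    }

    private TimerHolder() {}

    public static Timer timer() {
        return Holder.TIMER;
    }
}
```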





Original issue reported on code.google.com by [email protected] on 13 Jun 2011 at 4:28

Transactional rollback for compressed and text files

The collector must be able to:
Roll back a file even though its contents have already been written.
This rollback is only applied during a single request.

Even if the rollback fails, the collector should be able, on restart, to see
that a file was to be rolled back and roll the file back.

Original issue reported on code.google.com by [email protected] on 1 Oct 2010 at 9:24

NullPointerException in FileOutputStreamPoolImpl checkFilesForRollover

java.lang.NullPointerException
        at org.streams.collector.write.impl.FileOutputStreamPoolImpl.checkFilesForRollover(FileOutputStreamPoolImpl.java:297)
        at org.streams.collector.write.impl.FileOutputStreamPoolFactoryImpl.checkFilesForRollover(FileOutputStreamPoolFactoryImpl.java:59)
        at org.streams.collector.write.impl.LocalLogFileWriter$RolloverChecker.run(LocalLogFileWriter.java:253)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)
2011-06-14/10:21:32.272/CEST ERROR LocalFileWriter-LogRolloverCheck 
org.streams.collector.write.impl.FileOutputStreamPoolImpl - 
java.lang.NullPointerException


Original issue reported on code.google.com by [email protected] on 14 Jun 2011 at 8:27

Coordination Service Lock Timeout Check

A separate thread should be running to check for locks that have timed out.
If a lock has timed out it should be removed so that the coordination service
status page displays the correct number of locks.

Maybe add a timeout counter to the coordination status?
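An illustrative sweep over timed-out locks; modelling the lock store as a map from file name to acquisition timestamp is an assumption about the real coordination service:

```java
import java.util.Map;

// Sketch: remove every lock older than the timeout and return how
// many were removed (a candidate value for the timeout counter the
// issue suggests). Pass a concurrent map so removal during
// iteration is safe.
public class LockTimeoutSweeper {

    public static int removeTimedOut(Map<String, Long> locks,
                                     long nowMillis, long timeoutMillis) {
        int removed = 0;
        for (Map.Entry<String, Long> e : locks.entrySet()) {
            if (nowMillis - e.getValue() > timeoutMillis) {
                locks.remove(e.getKey());
                removed++;
            }
        }
        return removed;
    }
}
```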

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 3:58

CoordinationServiceSaveAgentSendHistory

Coordination Service should save the agent send history.

Remember that the coordination service is a distributed service, such that all
information should be shared in a distributed fashion.
This information can be stored in a distributed map with backup == 1.

We need to know:
  Agent -- last time contacted (timestamp per file)
  Agent -- send information:
       Keep history information:
       Size of message (we know this from comparing the last file pointer sent)
       Collector it sent to (we know this from the remoteAddress:InetSocketAddress in the coordination service CoordinationLockHandler)


Items todo:
  (1) Update CollectorFileTrackerMemory such that it contains a history part.
  (2) Update HazelcastFileTrackerStorage so that it implements CollectorFileTrackerMemory
  (3) Test Resources

  (3) Test Resources


Original issue reported on code.google.com by [email protected] on 25 Apr 2011 at 7:18

FileSendMonitorService - 2 FileStats

After a file is sent, some kind of status update needs to run.

AgentFileMonitorService

Required is:
   LineCount: onSent, count the lines of the file just sent
   FileSize: read the file size
   If file age > X and not sent: notify someone and show on status

AgentFileMonitorResource
   Shows the actions and status of the AgentFileMonitorService
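The LineCount requirement can be sketched as follows; reading from a Reader (rather than opening the real just-sent file) keeps the example self-contained, and the class name is illustrative:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;

// Sketch of the onSent line count: stream through the file once and
// count the lines.
public class SentFileStats {

    public static long countLines(Reader in) {
        try (BufferedReader reader = new BufferedReader(in)) {
            long lines = 0;
            while (reader.readLine() != null) {
                lines++;
            }
            return lines;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```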


Original issue reported on code.google.com by [email protected] on 25 Apr 2011 at 6:36

Coordination Service Line counter display error


The coordination service line counter display is not always accurate: although
the actual file data is 100% correct, looking at the line counter suggests
that lines have been duplicated. This is not the case, as has been confirmed.
It might be the way in which the counter is updated.

Original issue reported on code.google.com by [email protected] on 8 Oct 2010 at 7:59
