threadly / threadly

A library of tools to assist with safe concurrent Java development, providing unique priority-based thread pools and ways to distribute threaded work safely.

Home Page: http://threadly.org

License: Mozilla Public License 2.0

Language: Java (100%)

Topics: thread-pool, scheduler, executor, threading, concurrency, asynchronous, priority-scheduler, high-performance, priority-tasks, recurring-tasks

threadly's Introduction

Threadly

A library of Java tools to assist with the development of concurrent Java applications. It includes a collection of tools to help with a wide range of concurrent development and testing needs. It is designed to complement java.util.concurrent, and builds on java.util.concurrent in its implementations where that makes sense.

Include the threadly library in your project from Maven Central:

<dependency>
	<groupId>org.threadly</groupId>
	<artifactId>threadly</artifactId>
	<version>7.0</version>
</dependency>

For information about compiling, importing into Eclipse, or contributing to the project, please look at the 'BUILD_INSTRUCTIONS' file.

For a complete list of features in Threadly, please view the features page on the wiki:

https://github.com/threadly/threadly/wiki/Threadly-Features

Current Project Status

Launched in May 2013, Threadly is a mature library employed in a range of production environments. Over the last decade, it has evolved in tandem with advancements in Java concurrency. With Java's recent introduction of Virtual Threads, some of the pooling concerns that Threadly initially addressed have become less critical. However, Threadly still offers unique benefits depending on specific use cases. While the rate of significant updates has slowed in the past year, the emphasis has shifted to ensuring stability and maturity. The project continues to be actively maintained.

Library Tool Highlights

-- General Concurrency Tools --

  • PriorityScheduler - A thread pool which makes different trade-offs from java.util.concurrent.ScheduledThreadPoolExecutor. It offers a few advantages and disadvantages, and will often perform better, or at least as well.

Advantages compared to ScheduledThreadPoolExecutor:

Better .execute task performance. Because scheduled/recurring tasks use different structures than tasks submitted via execute, we are able to use a structure which fits each job better. This provides a DRAMATIC improvement in the performance of executed jobs. Our pools also need to trap to the kernel for the system clock less frequently.

The ability to provide a priority with a task means that more critical tasks are impacted less by recurring tasks, or by tasks where a delay won't matter, as long as those low-priority tasks are not starved. Low-priority tasks will be delayed longer when the pool is heavily used, up until they reach their maximum wait time (assuming that high-priority tasks are not further delayed). Using multiple priorities also reduces lock contention between the different priorities.

PriorityScheduler focuses only on the Runnable provided to it. For example, to remove a task you can provide the original runnable (rather than a returned future), and the runnables returned from .shutdownNow() are the original runnables, not wrapped in future tasks.

PriorityScheduler provides calls that do and do not return a future, so if a future is not necessary the performance hit can be avoided.

If you need a thread pool that implements java.util.concurrent.ScheduledExecutorService you can wrap it in PrioritySchedulerServiceWrapper.

The other large difference compared to ScheduledThreadPoolExecutor is that the pool size is adjustable at runtime. With ScheduledThreadPoolExecutor you can only provide one size, and that pool can grow, but never shrink, once started. In this implementation you construct with a starting size, and if you ever want to adjust the size it can be done at any point with calls to setPoolSize(int). This will NEVER interrupt or stop running tasks, but as they finish the excess threads will be destroyed.
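
For illustration, a minimal sketch of how PriorityScheduler might be used. The constructor and method signatures are assumed from recent threadly versions and may differ slightly in yours:

import org.threadly.concurrent.PriorityScheduler;
import org.threadly.concurrent.TaskPriority;

public class PrioritySchedulerExample {
	public static void main(String[] args) {
		// start with 4 threads; the size can be changed later at runtime
		PriorityScheduler scheduler = new PriorityScheduler(4);

		// latency sensitive work at the default (high) priority
		scheduler.execute(() -> System.out.println("high priority work"));

		// background work that may wait while the pool is busy
		scheduler.execute(() -> System.out.println("low priority work"), TaskPriority.Low);

		// recurring low priority task: first run after 1 second, then 10 seconds between runs
		scheduler.scheduleWithFixedDelay(() -> System.out.println("periodic cleanup"),
		                                 1_000, 10_000, TaskPriority.Low);

		// grow the pool at runtime without interrupting anything already running
		scheduler.setPoolSize(8);
	}
}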

  • UnfairExecutor - A VERY high performance executor implementation. This executor has few features and relaxed guarantees (particularly around execution order) in order to gain the highest task throughput possible. Since this pool works best for tasks with similar computational complexity, it can be an excellent backing pool for handling client requests (i.e. servlets).
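
A short sketch of how UnfairExecutor might be used (the constructor argument and the shutdown call shown here are assumptions based on recent versions):

import org.threadly.concurrent.UnfairExecutor;

public class UnfairExecutorExample {
	public static void main(String[] args) {
		// thread count is typically sized to the available processors
		UnfairExecutor executor = new UnfairExecutor(Runtime.getRuntime().availableProcessors());

		for (int i = 0; i < 100; i++) {
			final int request = i;
			// tasks may not run in submission order; throughput is favored over ordering
			executor.execute(() -> System.out.println("handled request " + request));
		}

		executor.shutdown(); // stop accepting new tasks, let submitted work finish
	}
}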

  • ExecutorLimiter, OrderedExecutorLimiter, SimpleSchedulerLimiter, SchedulerServiceLimiter - These are designed so you can control the amount of concurrency in different parts of the code, while still getting the maximum benefit of having one large thread pool.

The design is such that you create one large pool and then wrap it in one of these wrappers. You then pass the wrapper to the different parts of your code. The wrapper relies on the large pool to actually get a thread, but prevents any one section of code from completely dominating the thread pool.
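
As a sketch of that intended usage (in recent versions the limiter classes live under org.threadly.concurrent.wrapper.limiter; older versions used a different package):

import org.threadly.concurrent.PriorityScheduler;
import org.threadly.concurrent.wrapper.limiter.ExecutorLimiter;

public class LimiterExample {
	public static void main(String[] args) {
		// one large shared pool for the whole application
		PriorityScheduler sharedPool = new PriorityScheduler(32);

		// each subsystem gets a view that can never use more than N threads at once
		ExecutorLimiter imageResizing = new ExecutorLimiter(sharedPool, 4);
		ExecutorLimiter reportGeneration = new ExecutorLimiter(sharedPool, 2);

		// both draw threads from sharedPool, but neither can dominate it
		imageResizing.execute(() -> System.out.println("resize image"));
		reportGeneration.execute(() -> System.out.println("build report"));
	}
}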

  • KeyDistributedExecutor and KeyDistributedScheduler - Provide the ability to execute (or schedule) tasks with a given key such that tasks with the same key hash code will NEVER run concurrently. This is designed to spare the developer from having to deal with concurrency issues wherever possible. It allows you to have multiple runnables or tasks that share memory, without forcing the developer to deal with synchronization and memory barriers (assuming they all share the same key). These now also allow you to continue to use Futures with key-based execution.
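
A hedged sketch of key-based execution (the package name is assumed from recent versions):

import org.threadly.concurrent.PriorityScheduler;
import org.threadly.concurrent.wrapper.KeyDistributedExecutor;

public class KeyDistributedExample {
	public static void main(String[] args) {
		PriorityScheduler pool = new PriorityScheduler(8);
		KeyDistributedExecutor distributor = new KeyDistributedExecutor(pool);

		// tasks sharing the same key never run concurrently, so per-account
		// state can be mutated without additional synchronization
		String accountId = "account-42";
		distributor.execute(accountId, () -> System.out.println("debit account"));
		distributor.execute(accountId, () -> System.out.println("credit account"));

		// a task with a different key may run in parallel with the tasks above
		distributor.execute("account-7", () -> System.out.println("unrelated account"));
	}
}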

  • NoThreadScheduler - Sometimes even one thread is too many. This provides the ability to schedule tasks, or execute tasks on the scheduler, but they won't be run until you call .tick() on the scheduler. This allows you to control which thread these tasks run on (since you have to explicitly call .tick()). A great example of where this is useful is scheduling tasks which can only run on a GUI thread. Another example is NIO programming: when you want to modify the selector, you can simply call .tick() before you call .select() to apply any modifications you need in a thread-safe way (without worrying about blocking).
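
A minimal sketch; the tick(ExceptionHandler) signature shown here is assumed from recent versions (older releases used a no-argument tick()):

import org.threadly.concurrent.NoThreadScheduler;

public class NoThreadSchedulerExample {
	public static void main(String[] args) throws InterruptedException {
		NoThreadScheduler scheduler = new NoThreadScheduler();

		scheduler.execute(() -> System.out.println("runs on the ticking thread"));
		scheduler.schedule(() -> System.out.println("eligible to run on a tick at least 100ms from now"), 100);

		// nothing above runs until the owning thread decides to tick; in a GUI or
		// NIO loop this call would sit inside that loop's own thread
		int tasksRun = scheduler.tick(null); // null: let task exceptions propagate
		System.out.println("ran " + tasksRun + " tasks");
	}
}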

  • FutureUtils - Provides some nice utilities for working with futures, particularly collections of futures, for example canceling any in a collection which have not finished. Other nice operations include the ability to combine several futures and either block until all have finished, or get a future which combines all the results once they have completed, as well as getting different views of the results (for example, a future which provides a list of all futures in the collection which had an error).
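
A small sketch of working with a collection of futures (the FutureUtils method names used here, blockTillAllComplete and cancelIncompleteFutures, are assumed to match your threadly version):

import java.util.ArrayList;
import java.util.List;

import org.threadly.concurrent.PriorityScheduler;
import org.threadly.concurrent.future.FutureUtils;
import org.threadly.concurrent.future.ListenableFuture;

public class FutureUtilsExample {
	public static void main(String[] args) throws InterruptedException {
		PriorityScheduler pool = new PriorityScheduler(4);

		List<ListenableFuture<?>> futures = new ArrayList<>();
		for (int i = 0; i < 10; i++) {
			final int id = i;
			futures.add(pool.submit(() -> System.out.println("task " + id)));
		}

		// block until every future in the collection has completed
		FutureUtils.blockTillAllComplete(futures);

		// or, instead of waiting, cancel anything that has not finished yet
		FutureUtils.cancelIncompleteFutures(futures, false);
	}
}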

  • ListenerHelper and RunnableListenerHelper - Listeners are a very common design pattern in asynchronous designs. ListenerHelper helps in building these designs (no matter what the listener interface is), while RunnableListenerHelper is a very efficient implementation designed around the common Runnable interface. We have all written similar implementations a million times; this is one robust implementation that will hopefully reduce duplicated code in the future. In addition there are variants of these, AsyncCallListenerHelper and DefaultExecutorListenerHelper (the same exist for the Runnable version as well), which allow different threading designs around how listeners are called.
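
A sketch using the Runnable flavor (the constructor boolean and method names are assumptions about the RunnableListenerHelper API):

import org.threadly.concurrent.event.RunnableListenerHelper;

public class ListenerHelperExample {
	public static void main(String[] args) {
		// 'true' is assumed to mean listeners are only ever called once, with
		// listeners added after that point invoked immediately
		RunnableListenerHelper doneListeners = new RunnableListenerHelper(true);

		doneListeners.addListener(() -> System.out.println("operation finished"));
		doneListeners.addListener(() -> System.out.println("clean up resources"));

		// typically invoked by the asynchronous operation when it completes
		doneListeners.callListeners();
	}
}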

-- Debugging utilities --

  • Profiler and ControlledThreadProfiler - These are utilities to help understand where the bottlenecks of your application are. They break down what each thread is doing, as well as the system as a whole, so you can understand where your CPU-heavy operations are, where your lock contention exists, and how other resources are used.
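
A rough sketch of profiling a slow section (the start/stop/dump call names are assumptions about the Profiler API):

import org.threadly.util.debug.Profiler;

public class ProfilerExample {
	public static void main(String[] args) throws InterruptedException {
		Profiler profiler = new Profiler(); // default poll interval
		profiler.start();                   // begin sampling thread stacks in the background

		Thread.sleep(500); // stand-in for the workload under investigation

		profiler.stop();
		// dump a per-thread and whole-process breakdown of where time was spent
		System.out.println(profiler.dump());
	}
}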

  • DebugLogger - Logging often gives a misleading picture of the order in which operations happen in concurrent designs, or just adding logging can cause race conditions to disappear. DebugLogger attempts to solve (or at least help with) those problems. It does this by collecting log messages, storing the time at which they came in (in nanoseconds), ordering them, and then printing the log messages out in batches. Since the logging happens asynchronously and storing a log message is kept very cheap, it hopefully will not impact the race condition you're attempting to investigate.

-- Unit Test Tools --

Starting in 6.0 unit test tools are provided by the threadly-test artifact: https://github.com/threadly/threadly-test

threadly's People

Contributors

altsysrq, axelarge, jentfoo, jmizell, liry, lwahlmeier, peter-kehl


threadly's Issues

Improve fairness of TaskExecutorDistributor

In the 0.8 release the TaskExecutorDistributor will get a form of fairness added to it, by limiting the number of tasks a key can execute before it releases control. This strategy works well when tasks are fairly uniform with each other.

It might make sense to have multiple TaskQueueWorker implementations such that you can have different fairness strategies (maybe constructed using the builder pattern). An example alternative strategy would be to limit the amount of time spent on a specific key (instead of execution count).

A time-based strategy, or any strategy which requires checking while tasks are executing, would need to use a different TaskQueueWorker implementation, so that if no fairness is desired there is no performance loss compared to what we have today.

Do not allow override of done() in ListenableFutureTask

Overriding the done() call in ListenableFutureTask can be dangerous if the subclass does not call the super implementation, so we should prevent it from being overridden. It should not be necessary anyway, since an extending class can always add a listener to be called once done has occurred.

Even though this is only a protected function, we should probably wait to make this change till 2.0.0 since it could break compatibility.

Publish 0.7 artifacts and update resources

Go through the release process for 0.7. This is to be the last ticket closed for this release; once it is closed, 0.7 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.8-SNAPSHOT

Look at simplifying future handling for PrioritySchedulerLimiter

This is the second part of the original issue #50. We should consider if we want to or can simplify the Future handling for PrioritySchedulerLimiter. Right now we have this concept of a "FutureListenableFuture" which provides a future which depends on the implementation of another Future coming in later.

In the ExecutorLimiter and SchedulerLimiter, however, we are just wrapping the task in a ListenableFutureTask and calling execute. I think we could simplify things by doing the same for the PrioritySchedulerLimiter.

Move threadly repo under threadly organization

It makes sense to just have an organization for this project instead of putting it under my name directly. I have created the organization, but the following needs to happen:

  • Stylize and fill in details for organization
  • Move threadly repo to organization
  • Update wiki links
  • Update domain forwarding and website

Need to standardize logging

Right now logging for debugging during implementation still exists....do we want to remove it, or how do we want to handle it?

Create a wrapper for java.util.concurrent scheduler service interface

Our priority thread pool is great, but let's create a wrapper so that it can be a drop-in replacement for a java.util.concurrent scheduler service. This will need to include handling futures, which we are not currently doing at all.

In addition we should be able to have a wrapper to go the opposite direction. So you can use the java.util.concurrent thread pools with our SimpleScheduler interface.

Perform actions to release version 1.2.0

Go through release process for 1.2.0. Once this issue is closed, it will indicate that 1.2.0 has been released, and is available.

Items to update:

  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Wiki changelog
  • Wiki homepage for java docs
  • Wiki page for complete list of javadocs links
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 1.3.0-SNAPSHOT
  • Update git unstable_staging to 1.3.0-SNAPSHOT

Perform actions to release version 0.9

Go through the release process for 0.9. This is to be the last ticket closed for this release; once it is closed, 0.9 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 1.0.0-SNAPSHOT

It is unlikely that you can cancel a callable from a subpool

I recently discovered that the FutureFuture class was a bit too loose in returning true for cancel. I fixed this by only returning true if it was absolutely able to cancel the future.

The side effect of this is that you are a lot less likely to be able to cancel tasks from sub pools. I was able to solve this for runnables without too much effort, but callables are a little more tricky. For runnables we can cancel easily and simply not run the task if cancel was called.

For callables, since .call must return a result, our only option at that point is to throw an exception. We could throw a specific exception type (probably a CancellationException) and then in ListenableFutureTask override the .get() functions, catch ExecutionException, and check if the cause is a CancellationException (and if it is, throw a new CancellationException). I hate this because I hate catching an exception, checking its type, and then throwing yet another new exception. But I am not sure there is any way around it.

Look at simplifying future handling for SubmitterSchedulerLimiter and PrioritySchedulerLimiter

We should consider if we want to or can simplify the Future handling for SubmitterSchedulerLimiter and PrioritySchedulerLimiter. Right now we have this concept of a "FutureListenableFuture" which provides a future which depends on the implementation of another Future coming in later.

In the ExecutorLimiter, however, we are just wrapping the task in a ListenableFutureTask and calling execute. I think we could simplify things by doing the same for the other scheduler limiters. This would also allow us to take in a SimpleSchedulerInterface as the scheduler in the constructor and fulfill the SubmitterSchedulerInterface.

Publish 0.3 artifacts and update resources

Go through release process for 0.3. This is to be the last ticket to be closed which will indicate that 0.3 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.4-SNAPSHOT

This release is large enough that in addition the following should be updated:

  • README.md
  • Features wiki page

Create a striped lock structure for VirtualLock abstraction

TaskExecutorDistributor and the new CallableDistributor could benefit from a striped lock based off a given key. Let's go ahead and provide this implementation so that these classes and others can take advantage of it.

This may eventually become obsolete once VirtualLocks implements the java.util.concurrent.locks.Lock interface.

Publish 0.5 artifacts and update resources

Go through the release process for 0.5. This is to be the last ticket closed for this release; once it is closed, 0.5 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.6-SNAPSHOT

It may be possible for idle worker threads to spin at 100% if the thread has been interrupted

With the recent changes in 0.7 to use LockSupport.park when waiting to accept work into the thread pool worker threads, it looks like this may have regressed. When working in the threadly example with the prime tester, even after the currently working threads have been canceled (with interrupts allowed), if left running some of the formerly working threads would spin at 100%.

In some ways even worse: in versions prior to 0.7, depending on how and when this interrupt occurs, the worker may be in the available workers list, and then when it is used for a future task, an exception will be thrown because an old (and stopped) worker was provided the new task.

The PriorityScheduledExecutor may give new runnables threads which are in interrupted status

It seems that the implementation of the FutureTask may interrupt a thread after it has already been given back to the thread pool. Presumably these threads can get interrupted via other means as well. The java.util.concurrent.ThreadPoolExecutor solves this by resetting the interrupted status on the thread before starting the task (unless the thread pool is shutting down). We should do the same, particularly now that we piggy back on the FutureTask.

Reduce synchronization in TaskExecutorDistributor

I was thinking, it may be possible to reduce synchronization such that when a worker has finished a task it could check for the next one without synchronization. I will need to investigate this further later.

Create a simple abstraction for getting results via asynchronous calls

The interface should be pretty simple:
void submit(Object key, Callable<T> toRun);  // submit a callable with a key so results can be collected later
T get(Object key);                           // once ready for results, block for and return the next result for the key
List<T> getAll(Object key);                  // once at least one result is ready for the key, return all ready results

The big advantage this has over the future interface is that you can control how tasks are distributed across threads. If multiple Callables are submitted with the same key, they will only run on the same thread (using the TaskDistributor as the back end). Then you can easily get the results once you're ready for them.

LowPriority tasks can cut in line with high priority tasks

I have known about this issue for a while, but this morning it started bugging me enough that I think we need to look into it. Because low priority tasks have their own queue, imagine the following situation:

  • You schedule a ton of high priority tasks (enough that the thread pool is now completely consumed)
  • You schedule one low priority task
  • The low priority task waits for an available worker, but is able to get one before the waiting high priority tasks.

One possible solution would be to have a single queue which both low and high priority tasks belong in. Of course, if we do that, we have to figure out how to prevent consuming a low priority task from blocking the consumption of high priority tasks.

Perform actions to release version 1.0.0

Go through the release process for 1.0.0. This is to be the last ticket closed for this release; once it is closed, 1.0.0 has been released.

Items to update:

  • SPECIAL FOR 1.0.0 release update version guide https://github.com/threadly/threadly/wiki/Version-Guide
  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Wiki changelog
  • Wiki homepage for java docs
  • Update wiki page for complete list of javadocs links
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 1.0.1-SNAPSHOT
  • Update git unstable_staging to 1.1.0-SNAPSHOT

Publish 0.6 artifacts and update resources

Go through the release process for 0.6. This is to be the last ticket closed for this release; once it is closed, 0.6 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.7-SNAPSHOT

TestablePriorityScheduler still does not work quite right

This is the most important issue before the .1 release. The .wait and awake operations can overlap somewhat, which can have undesirable effects when progressed very quickly within unit tests. We need to create a barrier that consistently prevents threads from waking up until the previous thread has fully gone to waiting. The concurrent_testability-debug branch is focused on fixing this issue.

Improve TestableLock and TestablePriorityScheduler to do deadlock detection

Since the TestableLock is only used in unit testing, it is an ideal place to do inspection for deadlocks. This will help vet concurrent code within unit test. If a unit test of concurrent code can have a possible deadlock, an exception should be thrown which will cause the test to fail (so likely it will need to be thrown from the actual tick method of TestablePriorityScheduler.)

Update documentation

Need to publish javadocs on the website, as well as update description and give a short feature list.

Publish 0.4 artifacts and update resources

Go through release process for 0.4. This is to be the last ticket to be closed which will indicate that 0.4 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.5-SNAPSHOT

Perform actions to release version 1.1.0

Go through release process for 1.1.0. Once this issue is closed, it will indicate that 1.1.0 has been released, and is available.

Items to update:

  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Wiki changelog
  • Wiki homepage for java docs
  • Wiki page for complete list of javadocs links
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 1.2.0-SNAPSHOT
  • Update git unstable_staging to 1.2.0-SNAPSHOT

Perform actions to release version 0.8

Go through the release process for 0.8. This is to be the last ticket closed for this release; once it is closed, 0.8 has been released.

Items to update:

  • Wiki homepage for java docs
  • Wiki changelog
  • Homepage stable download
  • Homepage javadocs
  • Homepage news
  • Add release information to github

Code operations:

  • Deploy artifact to sonatype
  • Tag git
  • Keep build forever in jenkins
  • Update git master to 0.9-SNAPSHOT

PriorityScheduledExecutor shutdown function does not match java.util.concurrent.ExecutorService

This is not really a bug, but could be considered confusing. I personally like the contract with this shutdown more than the one in java.util.concurrent.ExecutorService.

The current implementation refuses new submissions after this call and will NOT interrupt any tasks which are currently running (much like ExecutorService). However, any tasks which are waiting in the queue to be run (but have not started yet) will not be run (unlike ExecutorService).

Should we change shutdown to match ExecutorService to prevent confusion? For now I have improved the javadoc to better explain what this behavior is doing.

Recent changes for SchedulerLimiter submit functions do not handle TestableSchedulers correctly

The recent changes for Issue #50 caused this minor regression. It is only an issue if using the SchedulerLimiter with a TestableScheduler. Because we are using the ListenableFutureTask instead of the ListenableFutureVirtualTask, we cannot pass the factory through to the class.

After reflection we should do one of the following:

  • Remove VirtualRunnable/VirtualCallable/VirtualLock design, and abandon the TestableScheduler
  • Change to the slightly less efficient ListenableFutureVirtualTask
