
copper-engine

COPPER - the high performance Java workflow engine.

COPPER is an open-source, powerful, light-weight, and easily configurable workflow engine. The power of COPPER is that it uses Java as a description language for workflows. The project artifacts can be found on Maven Central. See copper-engine.org for more information.

How to build

COPPER is built with Gradle. However, you don't need to install Gradle, because COPPER uses the Gradle wrapper. Note: if you are behind an internet proxy, you must configure the corresponding system properties in Gradle. See Accessing the web via a proxy.

To build all COPPER projects, just execute the following in the project's root directory:

./gradlew assemble

To build everything and run all tests, just execute:

./gradlew build

To generate Eclipse project files, run:

./gradlew eclipse

once in the project's root directory and open the corresponding projects with the Eclipse IDE. (You must repeat this step every time the project dependencies change.)

How to contribute

  1. Create an issue on GitHub.
  2. Create a fork on GitHub.
  3. Configure your IDE (Eclipse, IntelliJ IDEA) as described below.
  4. Run ./gradlew assemble once if you haven't done so in step 3. This will generate some WSDL stubs needed for some tests.
  5. Commit your changes, including an update to WHATSNEW.txt.
    • Ensure that your sources are UTF-8 encoded!
    • Ensure that your sources start with our Apache License header. (The build will fail if they don't.)
  6. Build all and run tests: ./gradlew clean build
  7. Push your changes to your fork.
  8. Create a pull request on GitHub.

Have fun!

How to configure your IDE

Eclipse

Run ./gradlew eclipse once. This will create Eclipse project files which you can import. This also creates proper code style settings. Before committing you should always reformat the code. You can configure Eclipse to do this automatically on each save.

Every time a dependency changes in build.gradle you must run ./gradlew eclipse again. You don't need to restart Eclipse for this, simply press F5 on the projects.

IntelliJ IDEA

Before you open the project in IntelliJ for the first time, run ./gradlew assemble once. This also creates proper code style settings, which IntelliJ picks up automatically. After that, open build.gradle via "File->Open", follow the instructions, and accept the defaults.

Before committing you should always reformat the code. You can configure IntelliJ to do this automatically on each commit.

Performance Test

See PERFORMANCE_TEST_HOWTO.MD

License

Copyright 2002-2018 Copper Engine Development Team

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


copper-engine's Issues

Persistent LockManager

We need a lock manager service that manages locks persistently in the underlying database.
Interface draft:

  • void acquireLock(String lockId, String correlationId, String workflowInstanceId, long timeout)
  • void releaseLock(String lockId)
  • void releaseAllLocks(String workflowInstanceId)
  • readLockOwner(String lockId)
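The draft above could be sketched in Java as follows. All names are taken from the draft or are hypothetical; the in-memory class below only illustrates the intended ownership semantics, not the proposed database-backed implementation, and the String return type of readLockOwner is an assumption since the draft omits it. In the real service, a granted lock would presumably be signaled by sending a Response for the given correlationId rather than observed synchronously.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the drafted interface (method names and parameters from the issue).
interface PersistentLockManager {
    void acquireLock(String lockId, String correlationId, String workflowInstanceId, long timeout);
    void releaseLock(String lockId);
    void releaseAllLocks(String workflowInstanceId);
    String readLockOwner(String lockId); // return type assumed; the draft omits it
}

// Purely illustrative in-memory variant. The real service would persist lock
// state in the database and answer acquire requests asynchronously via the
// correlationId; here we only model ownership and a FIFO wait queue.
class InMemoryLockManager implements PersistentLockManager {
    private final Map<String, String> owners = new HashMap<>();          // lockId -> owning workflowInstanceId
    private final Map<String, Deque<String>> waiters = new HashMap<>();  // lockId -> queued instances

    public synchronized void acquireLock(String lockId, String correlationId, String workflowInstanceId, long timeout) {
        if (!owners.containsKey(lockId)) {
            owners.put(lockId, workflowInstanceId);  // lock is free: grant immediately
        } else {
            waiters.computeIfAbsent(lockId, k -> new ArrayDeque<>()).add(workflowInstanceId);
        }
    }

    public synchronized void releaseLock(String lockId) {
        Deque<String> queue = waiters.get(lockId);
        if (queue != null && !queue.isEmpty()) {
            owners.put(lockId, queue.poll());        // hand the lock to the next waiter
        } else {
            owners.remove(lockId);
        }
    }

    public synchronized void releaseAllLocks(String workflowInstanceId) {
        owners.entrySet().removeIf(e -> e.getValue().equals(workflowInstanceId));
        waiters.values().forEach(q -> q.remove(workflowInstanceId));
    }

    public synchronized String readLockOwner(String lockId) {
        return owners.get(lockId);
    }
}
```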

java.lang.IllegalArgumentException with PojoDependencyInjector

Running de.scoopgmbh.copper.monitoring.example.MonitoringExampleMain, I get the following exception:

java.lang.IllegalArgumentException: argument type mismatch
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_25]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_25]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_25]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_25]
at de.scoopgmbh.copper.AbstractDependencyInjector.inject(AbstractDependencyInjector.java:81) ~[copper-coreengine/:na]
at de.scoopgmbh.copper.monitoring.server.wrapper.MonitoringDependencyInjector.inject(MonitoringDependencyInjector.java:55) ~[copper-monitoring-server/:na]
at de.scoopgmbh.copper.persistent.PersistentProcessor$1.run(PersistentProcessor.java:53) ~[copper-coreengine/:na]
at de.scoopgmbh.copper.persistent.PersistentProcessor$1.run(PersistentProcessor.java:47) ~[copper-coreengine/:na]
at de.scoopgmbh.copper.persistent.txn.CopperTransactionController.run(CopperTransactionController.java:51) ~[copper-coreengine/:na]
at de.scoopgmbh.copper.persistent.PersistentProcessor.process(PersistentProcessor.java:47) ~[copper-coreengine/:na]
at de.scoopgmbh.copper.common.Processor.run(Processor.java:77) ~[copper-coreengine/:na]

Request: support for offline transformation

Currently COPPER only supports "online" load-time transformation of workflow classes, which also means that a Java compiler must be available at runtime, i.e. tools.jar from the JDK must be found in the system CLASSPATH.

It would be great if COPPER also supported offline transformation of workflow classes, for example through an Ant, Maven, or Gradle task. This would have the following benefits:

  • COPPER could run on a normal JRE without tools.jar (or any other Java compiler, e.g. ecj).
  • You no longer need to redistribute the source code of your workflow classes with your application.
  • Workflow classes could optionally reside in the system classpath. You then won't need a separate ClassLoader, which avoids some class loading errors when using workflow classes in an unexpected way.
  • Startup time is reduced.

Technically this could be solved by providing another implementation of WorkflowRepository which complements FileBasedWorkflowRepository.

INVOKESTATIC on Java8 Interface needs ASM 5

When working in a Java 8 environment and calling a method implemented in an interface, I get a message saying "IllegalArgumentException: INVOKESPECIAL/STATIC on interfaces require ASM 5".

Delegate to methods containing wait()

Currently you can extend COPPER workflows using polymorphism, i.e. classes that extend another class. If you want to avoid that because you get too many inheritance trees, it would be nice if you could delegate to other methods containing wait().

Technically it's possible (because the Workflow constructor is protected), but it seems it does not work, and it's not officially supported. Please either state that it is officially supported, or create another way to delegate to wait() that avoids inheritance.

AuditTrail for not batched persistent engines

Our engine can run with the batcher set to null; no batching is then done, which slows everything down quite a bit. However, some people might want to run COPPER without the batcher.

Currently, we implement only a MockAuditTrail (where we write the audit just to the logger, not to the database) and a BatchingAuditTrail (plus SpringTxnAuditTrail, which extends BatchingAuditTrail). BatchingAuditTrail, as the name suggests, throws a NullPointerException when no batcher is set and a log method is called.

=> I would suggest renaming BatchingAuditTrail to DefaultAuditTrail and batching if a batcher is set, or otherwise just executing directly.


EDIT: After looking a bit more into the audit trail code, I have come to think that it is quite messed up.

As JDBC SQL operations are always blocking, the async log method should be a feature of the batched audit trail only, not of the audit trail interface itself.

SpringTxnAuditTrail should then inherit from AuditTrail and not from BatchingAuditTrail. Currently, one could think that SpringTxnAuditTrail can also run an async log, which in turn is executed by the batcher and thus in its own transaction.

However, these changes will break API compatibility and should therefore be made with COPPER 5.0.
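The original suggestion (batch if a batcher is set, otherwise execute directly) could be sketched like this. All class and method names here are hypothetical and do not match the real COPPER audit trail API; the in-memory list merely stands in for the JDBC insert:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical batcher abstraction: accepts work items for deferred execution.
interface Batcher {
    void submit(Runnable command);
}

// Hypothetical sketch of the suggested DefaultAuditTrail behavior: use the
// batcher when one is configured, otherwise run the insert synchronously
// instead of throwing a NullPointerException.
class DefaultAuditTrail {
    private final Batcher batcher;                   // may be null: no batching
    final List<String> written = new ArrayList<>();  // stand-in for audit rows in the DB

    DefaultAuditTrail(Batcher batcher) {
        this.batcher = batcher;
    }

    void log(String message) {
        Runnable insert = () -> written.add(message); // stand-in for the JDBC insert
        if (batcher != null) {
            batcher.submit(insert);  // batched, asynchronous path
        } else {
            insert.run();            // direct, synchronous path
        }
    }
}
```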

Implement a PersistentLockManager analog for the transient engine

I think the PersistentLockManager is quite a useful helper class for persistent workflows and has a bunch of use cases. For instance, suppose we have a remote system to which we can send messages. The remote system could accept an arbitrary number of messages, but only one at a time with a given ID as parameter (e.g. a coupon code). It would be quite a burden to implement the adapter in a way that tracks which IDs it has currently sent a message for. (Not to speak of a distributed system with multiple engines that could possibly work on the same coupon code.)

There are probably many other use cases, but I just wanted to mention this one, and I don't see why this feature should not be offered to transient engines as well, as it seems equally helpful to me. OK, we cannot work distributed in that case, but it is still a useful feature.

:projects:copper-coreengine:fetchSecretKeyRingFile FAILED

gradlew clean assemble

...

:projects:copper-coreengine:fetchSecretKeyRingFile FAILED

FAILURE: Build failed with an exception.

  • Where:
    Build file 'C:\dev\sandbox\copper\build.gradle' line: 145

  • What went wrong:
    Execution failed for task ':projects:copper-coreengine:fetchSecretKeyRingFile'.

    Could not find property 'secretKeyRingFileDestFile' on task ':projects:copper-coreengine:fetchSecretKeyRingFile'.

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 12.722 secs

issue on using FileBasedWorkflowRepository under spring boot multi-module projects

Hi,

I have a Spring Boot multi-module project with web and core modules.
The web module depends on the core module in its pom.xml. I am using COPPER in the core module, with a FileBasedWorkflowRepository to locate the workflow classes. I put all the workflow classes under the path src\workflow\java in the core module.

It works when I run the tests on the workflows in the core module. But when I run the web module, it reports:

java.lang.IllegalArgumentException: source directory src/workflow/java does not exist!
at org.copperengine.core.wfrepo.FileBasedWorkflowRepository.createWfClassMap(FileBasedWorkflowRepository.java:282)
at org.copperengine.core.wfrepo.FileBasedWorkflowRepository.start(FileBasedWorkflowRepository.java:253)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)

How can I fix this exception? And what's the best practice for using workflows in a multi-module project? I also noticed that there is a ClasspathWorkflowRepository, but I prefer the hot deploy feature supported by FileBasedWorkflowRepository.

Thanks

Speed up PersistentLockManager

In version 4.0 and below, a COPPER workflow using the PersistentLockManager needs at least one COPPER wait/notify cycle to acquire a lock, even if the lock is free. Due to the current implementation, the mechanisms in ScottyDBStorage that speed up queue updates and dequeue operations do not work for locks, i.e. there is usually a latency of around 500 ms.
We should find a way to speed this up, because from a technical point of view there is no need to wait that long.

Race condition in test "LocalVarTest.testWorkflow2"

The test workflow "LocalVarTransientWorkflow2" calls reply() twice.

It replies first with the value "10" and then with the value "5". If the test is fast enough, it "sees" the first response and fails!

I have no idea what the test is supposed to test and whether the right fix is to just remove the first reply...

Maybe you could have a look.

Reduce size of full-source.jar

The source.jar built by the fullSourcesJar task is currently ~44MB. That's because of the 40MB "copper_2x_objects.zip" in copper-regtest resources.

If we removed this file, the source.jar would shrink to ~7MB.

Any chance to make "copper_2x_objects.zip" smaller? E.g. remove duplicates.

Remove dependency on c3p0 in copper-coreengine

A quick grep reveals that it is used in MySQL|OracleConnectionCustomizer and OracleConnectionTester, and those are beans used in copper-regtest only. (There may be more.)

So those classes could probably be moved to copper-regtest, unless some other project depends on those classes. In that case those classes should be moved to copper-spring or a new subproject copper-extras or something like that...

if ecj is found in classpath, workflow classes cannot be transformed

Since COPPER 3.0-RC.5, COPPER advertises that it can use ecj to transform workflow classes when FileBasedWorkflowRepository is used. (My fault, I wrote that... :-}) However, this is not true.

ecj creates slightly different bytecode, which the byte code transformer cannot transform.

It would be nice if ecj worked as advertised, because then it would be possible to run COPPER on a plain JRE without tools.jar, while still having hot deployment, as opposed to using ClasspathWorkflowRepository.

preserve response order during batch insert

As discussed with Michael Austermann: if responses are inserted very fast, their order of arrival becomes random, because the batcher inserts them with the same timestamp. This makes some order-dependent workflows fail.

@austermann
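One way to preserve arrival order, sketched below purely as an illustration (none of these classes exist in COPPER), is to tag each response with a monotonically increasing sequence number at enqueue time, so that responses sharing the same database timestamp can still be ordered by arrival:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative response wrapper: timestamp may collide within a fast batch,
// so seq acts as a tie-breaker that preserves arrival order.
class SequencedResponse {
    final String correlationId;
    final long timestampMillis;
    final long seq;
    SequencedResponse(String correlationId, long timestampMillis, long seq) {
        this.correlationId = correlationId;
        this.timestampMillis = timestampMillis;
        this.seq = seq;
    }
}

// Illustrative batcher: assigns the sequence number on add() and orders the
// batch by (timestamp, seq) before the insert.
class ResponseBatcher {
    private final AtomicLong counter = new AtomicLong();
    private final List<SequencedResponse> batch = new ArrayList<>();

    void add(String correlationId, long timestampMillis) {
        batch.add(new SequencedResponse(correlationId, timestampMillis, counter.getAndIncrement()));
    }

    List<SequencedResponse> sortedForInsert() {
        List<SequencedResponse> copy = new ArrayList<>(batch);
        copy.sort(Comparator.comparingLong((SequencedResponse r) -> r.timestampMillis)
                            .thenComparingLong(r -> r.seq));
        return copy;
    }
}
```

In a real fix the sequence column would have to live in the response table itself, so that dequeue queries can order by it.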

Where is the MySQL tables' initialization DDL when using PersistentWorkflow?

I am using PersistentWorkflow with MySQL, but when I try to start a PersistentScottyEngine I get the error below:

Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'dtb_tangram.cop_response' doesn't exist

I think there is DDL to initialize the database before starting an engine. Where can I find it? Thank you!

Extend JMX Interface

There is a plan to develop a new monitoring GUI.
For this GUI we need some new monitoring data from the engines:

  • Engine Activity
    -- engine up/running since
    -- last processing TS
    -- number of workflow instances started in the last N minutes, e.g. the last 60 minutes
  • ProcessorPools
    -- queue size
    -- number of active threads
    -- processor pool state (like running, down, suspended,...)

COPPER doesn't work with MySQL with case-sensitive table names

When MySQL is installed on a filesystem that supports case-sensitive filenames (such as ext2-4, basically all Unix filesystems), then by default MySQL has case-sensitive table names too. E.g. you can have two different tables 'COP_WAIT' and 'cop_wait' at the same time. If you only create COP_WAIT and try to use 'cop_wait', the table won't be found.

This contradicts SQL ANSI standard, but anyway it's the default on most Unix MySQL installations. See http://dev.mysql.com/doc/refman/5.5/en/identifier-case-sensitivity.html

To overcome this, the suggestion is usually to set the global MySQL configuration parameter lower_case_table_names=1. But this has two major drawbacks:

  1. The parameter is global and affects all databases on the MySQL server.
  2. You cannot easily change this parameter if there are already some databases installed on the MySQL server. Migration is difficult, because all existing tables must be renamed to their lowercase counterpart during a global downtime.

Therefore, at the moment it would be best to fix COPPER so that it doesn't mix case when accessing table names and uses UPPERCASE consistently in all DML and DDL statements.

Add rescue capabilities for workflow instances

If a workflow instance is in state error or invalid, it should be possible to change

  • data (input data for getData())
  • members
  • stack variables

The right place to start a "change workflow instance state variables" action is the GUI that shows the workflow instance list.

copper-monitoring-server has no main class

The Gradle project copper-monitoring-server references a main class that doesn't exist:

manifest {
  attributes 'Main-Class': 'de.scoopgmbh.copper.monitor.server.ServerMain'
}

Change executing engine of a workflow instance at runtime

Sometimes you need a fast transient workflow, that needs persistence only exceptionally, e.g. in case of errors.
Today you need to split your logical workflow into two physical ones: one for the transient part and one for the persistent part.
It might be helpful to change the executing engine of a workflow instance at runtime, e.g. move a workflow instance from a transient engine to a persistent one.

Add support for Java8

  • support for persistent workflow classes containing new Java8 language features, i.e. closures etc.
  • support for new Java8 class file format

Can be done only after ASM 5.0 is ready and stable, because ASM 5.0 adds Java8 compatibility. See http://asm.ow2.org/history.html

Synchronization barrier

We could implement a synchronization barrier for our workflows as another feature besides wait and resubmit.

I am used to synchronization barriers from graphics card programming where they are quite useful. You can basically start a group of threads and by calling barrier, you make sure, all threads need to hit that barrier/breakpoint before any (i.e. all) of them continue with execution. On GPU programming, you mainly use this to make sure that the memory is synced in some sort of way, e.g. all threads finished writing to memory before the first one starts reading from the results of any of those threads.

If I transfer this concept to COPPER and currently wanted a group of 100 workflows to synchronize on a barrier, I would need to "pollute" the database with 100*99 correlation IDs. This is due to the fact that each correlation ID belongs to and is consumed by exactly one workflow. So each workflow would need to notify the other 99 workflows of its completion and then wait for the other 99 to send this workflow a (unique) notify.

It might be interesting to discuss whether we would like to implement something like one correlation ID for multiple workflows, but I don't think this is going anywhere. We would end up treating each response as a kind of early response, because at any time another workflow could start waiting for it. I think this is not really desired.

We could, however, do this for internal usage only, i.e. when we know beforehand how many and which workflows are going to wait for a barrier/correlation ID.

For barriers, you always need some kind of grouping. We could offer a command like engine.startGroup, and for each engine.run following it, the started workflow belongs to that group until engine.endGroup is called. This way, the application developer has the freedom to declare groups in any way and size he likes, but COPPER also knows (when endGroup is called) which and how many workflows belong to the group. With this knowledge, COPPER can sync the workflows on a given barrier more effectively.
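The internal bookkeeping this would enable can be sketched as follows. engine.startGroup/endGroup and the class below are hypothetical, nothing like them exists in COPPER; the point is only that once the group size is fixed, a single shared counter replaces the 100*99 pairwise correlation IDs:

```java
// Hypothetical barrier bookkeeping for one group of workflows whose size
// becomes known when endGroup is called. Each workflow hitting the barrier
// calls arrive(); once all members have arrived, the whole group may continue.
class BarrierGroup {
    private final int size;   // number of workflows in the group
    private int arrived = 0;

    BarrierGroup(int size) {
        this.size = size;
    }

    // Returns true once every member of the group has reached the barrier.
    synchronized boolean arrive() {
        arrived++;
        return arrived >= size;
    }
}
```

In the persistent case, "arrive" would of course mean parking the workflow instance via wait() and notifying all members once the counter reaches the group size.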

Now the main question is: how useful is this idea? Since workflows are persisted at each barrier, it doesn't really make sense to use this just for memory synchronization. And the concept doesn't really fit into the main application domain of COPPER, where workflows are started asynchronously by some kind of event and don't really want to sync groupwise with other instances. It could, however, be used for some kind of batched processing and would thus extend the domain of COPPER.

I am looking forward to your comments on this idea.

Request for Timestamp column in COP_WAIT table

(originally reported by dweidenhammer, from https://code.google.com/p/copper/issues/detail?id=39)

I'm facing the following problem:
Each day one or more copper workflows which want to send a SOAP message fail because of HTTP connection problems. Therefore, we have implemented a retry scheme which restarts the workflow after a certain amount of time. As the external service is quite often unavailable, I have a growing number of workflows which are in this retry loop.

COPPER manages these workflows internally in the COP_WAIT table. Unfortunately, this table contains neither a creation timestamp nor a counter. Therefore, I'm unable to easily figure out (using COPPER means) for how long and how many times the retries for a specific workflow have been executed.

It would be nice to add two columns to the COP_WAIT table which are maintained by COPPER:

CREATION_DATE    TIMESTAMP(6)
COUNTER          NUMBER(...)

Offer new application restart options after crash?

I just tried out a small setup:

I create some workflows which send an asynchronous request to a remote JMS service and go to sleep for, let's say, 15 seconds.
After 5 seconds, I let the application crash.
After another 5 seconds, the server sent its responses.

As for JMS, the server can send responses to the message queue and doesn't care that the application crashes. When the application is relaunched, it can consume all the messages it should have got in the past.

If I restart the application after 20 seconds, COPPER will initialize and restart all old workflows. Thereby the engine will let all workflows run into a timeout and (in my setup) fail.
After the engine has started up, I, as a developer, can start putting the "old messages" into COPPER (calling engine.notify). By this time, the workflows have already run into their timeout, and the old responses are internally treated as early responses.

The problem is that I can't put the responses into COPPER before COPPER resumes with the old workflows.

So the question is whether and how we should offer a crash-restart routine on startup to the application developer?

We could provide an initialization callback where the application can add responses before COPPER continues its own initialization by restarting workflows. In doing so, we should also offer the opportunity to set the response timestamp, as COPPER internally always sets the response timestamp to the time when engine.notify() is called. (In the case of JMS we could use the message timestamp.)

It is also questionable what should happen when the response arrived after 20 seconds but the workflow wanted to wait only 15 seconds. My feeling is that the answer didn't come in time, so the workflow should run as if it hadn't arrived at all. However, COPPER's timeouts are always somewhat lazy, not fixed. A timeout of 15 seconds means that the workflow is rerun no earlier than 15 seconds later, but it can also be rerun hours later if the application was too busy with other stuff. What do you think about this? Maybe another optional parameter for the wait method?

Another open question: we want to keep COPPER as small as possible and implement only features that are required in production use. So do you think this is a production use case that will hit us in the future, or not?
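The proposed initialization callback could be sketched like this. Everything here is hypothetical (no such hook exists in COPPER); the fake engine merely demonstrates the intended ordering: buffered responses, e.g. from a JMS queue, are replayed with their original timestamps before crashed workflow instances are resumed, so they are not degraded to early responses after a timeout:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical response buffered externally while the application was down,
// carrying the original (e.g. JMS message) timestamp.
class BufferedResponse {
    final String correlationId;
    final long originalTimestampMillis;
    BufferedResponse(String correlationId, long originalTimestampMillis) {
        this.correlationId = correlationId;
        this.originalTimestampMillis = originalTimestampMillis;
    }
}

// Hypothetical two-phase startup: phase 1 lets the application replay old
// responses, phase 2 resumes the crashed workflow instances.
class RestartingEngine {
    final List<String> events = new ArrayList<>(); // records the startup order

    void start(Consumer<RestartingEngine> replayCallback) {
        if (replayCallback != null) {
            replayCallback.accept(this);   // phase 1: replay buffered responses
        }
        events.add("resumeWorkflows");     // phase 2: resume crashed instances
    }

    void notifyResponse(BufferedResponse r) {
        events.add("notify:" + r.correlationId);
    }
}
```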

Support for Java 8 features in Workflows?

If I add the following line to my workflow with a lambda expression:

IntStream.range(0, 10).forEach(i -> { waitIDs[i] = "WorkflowSwitchProcessorPool-" + i; });

I get the following error from the workflow transformation procedure:

ERROR | startup failed
Exception in thread "main" java.lang.Error: Startup failed
at org.copperengine.ext.wfrepo.classpath.ClasspathWorkflowRepository.start(ClasspathWorkflowRepository.java:176)
at org.copperengine.core.tranzient.TransientScottyEngine.startup(TransientScottyEngine.java:227)
at org.copperengine.examples.exampleApplications.TransientSwitchProcessorPoolMain.main(TransientSwitchProcessorPoolMain.java:25)
Caused by: java.lang.IllegalArgumentException: INVOKESPECIAL/STATIC on interfaces require ASM 5
at org.objectweb.asm.MethodVisitor.visitMethodInsn(Unknown Source)
at org.objectweb.asm.ClassReader.a(Unknown Source)
at org.objectweb.asm.ClassReader.b(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.copperengine.ext.wfrepo.classpath.ClasspathWorkflowRepository.findInterruptableMethods(ClasspathWorkflowRepository.java:231)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
at org.copperengine.ext.wfrepo.classpath.ClasspathWorkflowRepository.start(ClasspathWorkflowRepository.java:115)
... 7 more

This is obviously ASM-related. It would be nice if we could use Java 8 fully within our workflows.

Remove copyright statement in every file header; clarify copyright owner

This will be a long issue description because we need to solve this whole copyright matter once and for all.

First: Copyright line

We can remove the "Copyright ..." line in all file headers because the Apache License 2.0 doesn't require it. See http://www.apache.org/dev/apply-license.html#copy-per-file :

Do I have to have a copy of the license in each source file?
Only one full copy of the license is needed per distribution. See the policy.

So, strictly speaking, we don't need any license header at all in every source file, as long as we have a NOTICE file at the top, which contains a) the license, b) a copyright statement and c) the sources it applies to. The Apache License 2.0 (as opposed to 1.x) has been tailored specifically for this case, so that you don't have to include the header in every file. Instead, the NOTICE file serves as a link between the LICENSE file and the sources it applies to (source).

But, just to be safe, I think it's better to include at least the standard Apache license header (without the copyright statement) in every file, in case a file gets copied from our project to another project, so that the license is still retained. Apache itself adds the following to all of their own source files:

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. [...]

(source) I think linking to the NOTICE file is a good convention. We should do the same.

Second: Copyright owner

There's still the open question who is the copyright owner. Copyright laws are different in each country, but common to most of them is that whoever created the file first or whoever contributed significant work to the file, is the copyright owner automatically and this cannot be changed. So, the real copyright owner can be determined by looking at the VCS (Git), but for Java we could also make use of the @author annotation. We really should use the @author tag more.

But we can add another copyright owner, if the original copyright owner approves it. This second copyright owner could also be a company, an organisation, or a team, e.g. "COPPER Workflow Engine development team". Notice that Apache does the same: they always add "Copyright [yyyy] The Apache Software Foundation" to the NOTICE file in all of their projects.

In the NOTICE file we could attribute the copyright of the whole source tree to this second copyright owner:

COPPER Workflow Engine
Copyright 2002-2014 The COPPER Workflow Engine development team

This product includes software developed by
the COPPER Workflow Engine development team.
The members and copyright holders of the COPPER Workflow Engine
development team are the current members of the copper-engine
GitHub project at https://github.com/copper-engine.
See https://github.com/copper-engine?tab=members for a list
of current team members.

Apache does the same with its own files. I think this clarifies the situation in an elegant way.

Third: copyright approval

This brings us to the situation that

  1. the current copyright owners need to approve that this second copyright owner may be added. This is trivial; I think we all comply.
  2. future contributors who sign up on the GitHub project must also approve that the COPPER team will be added as the second copyright owner. This can be done implicitly by adding a contributors agreement to both the NOTICE file and the README.md file, which contains instructions about how to contribute.

So, putting it all together, we don't need a copyright line in every header, and each year we'll only have to change the year in NOTICE, that's all. :)

add location information to DBStorageMXBean

(forwarded from Henning Brackmann)

Display the database COPPER is connected with in DBStorageMXBean, e.g. the IP address or the JDBC connect string or both. E.g. add a new method DBStorageMXBean.getDescription().

Copper might remove too many responses when a workflow instance finishes

When a workflow finishes, Copper removes all responses with a correlation ID that was used in the last wait() call.

Copper removes those responses even if they were received after the last wait() and thus never processed by the workflow.

It would be helpful if Copper only deleted those responses that were actually received by the workflow when it finishes.

PersistentProcessor Dependencies

ProcessingEngine should not be abused as a Dependency context.
e.g. instead of:

engine.getDependencyInjector().inject(pw);
engine.inject(pw);

the constructor should expose the correct ProcessingEngine dependency.

SortedReponseList StackOverflow

at java.util.Collections.sort(Collections.java:216)
at org.copperengine.core.SortedReponseList.makeSureListIsSorted(SortedReponseList.java:18)
at org.copperengine.core.SortedReponseList.toArray(SortedReponseList.java:66)
at java.util.Collections.sort(Collections.java:216)
at org.copperengine.core.SortedReponseList.makeSureListIsSorted(SortedReponseList.java:18)
at org.copperengine.core.SortedReponseList.toArray(SortedReponseList.java:66)
at java.util.Collections.sort(Collections.java:216)
at org.copperengine.core.SortedReponseList.makeSureListIsSorted(SortedReponseList.java:18)
at org.copperengine.core.SortedReponseList.toArray(SortedReponseList.java:66)
at java.util.Collections.sort(Collections.java:216)
at org.copperengine.core.SortedReponseList.makeSureListIsSorted(SortedReponseList.java:18)
at org.copperengine.core.SortedReponseList.toArray(SortedReponseList.java:66)
at java.util.Collections.sort(Collections.java:216)

...

Make log4j a runtime dependency

All COPPER subprojects have a compile time dependency on log4j, but COPPER should use the slf4j facade only. Change the log4j dependency into a runtime dependency at least for copper-coreengine and copper-jmx-interface, so that COPPER users are free to choose another slf4j backend.

PS: Even better would be a dependency with scope "provided" instead of "runtime", but this is a Maven-only concept and Gradle doesn't support it. (See http://issues.gradle.org/browse/GRADLE-784; there are workarounds, however.)

Support for OSGi

Hi,

I'm evaluating workflow engines and I was wondering if you have plans to add OSGi metadata to the manifest? I suspect it won't be too difficult as there don't appear to be many dependencies on the core library at least.

I would consider creating a pull request but just wanted to see if you object to the idea of OSGi metadata at all, or perhaps have plans for it already.

regards,
ben

PersistentScottyEngine.notifyProcessorPoolsOnResponse without effect when using PersistentLockManager

COPPER's PersistentLockManager uses org.copperengine.core.persistent.PersistentScottyEngine.notify(Response<?>, Connection) to notify the engine about its responses.
In turn, notify(Response<?>, Connection) is called from outside of COPPER, so the engine does not know when the transaction is committed, and thus it cannot notify the processor pools. If it did, it might be too early to read the new data due to transaction isolation.
