jenkinsci / concurrent-step-plugin
Jenkins plugin to use utils in the Java concurrent package.
Home Page: https://plugins.jenkins.io/concurrent-step/
License: MIT License
If a running pipeline is aborted in Blue Ocean, it can occur that the next job waits indefinitely. After a restart of the master, everything is fine again.
Hello! I'm trying to wait for a barrier within a script section, but it seems to hang there forever, even when I set a timeout. My code looks something like this, nested within a declarative pipeline steps{} block.
def testBarrier = createBarrier count: numberOfTestNodes // 5 in my case
def testGroups = [:]
script {
    for (int i = 0; i < numberOfTestNodes; i++) {
        def num = i
        testGroups["node $num"] = {
            node('workers') {
                def javaHome = tool name: 'openjdk-11'
                // do some prep work
                awaitBarrier barrier: testBarrier, timeout: 10, unit: 'SECONDS'
                // main work goes here
                stash name: "node $num", includes: '**/simulation.log'
            }
        }
    }
    parallel testGroups
}
In the Jenkins build console log, I can see awaitBarrier being printed out, but it hangs on the last one forever. I counted them, and there are definitely 5 instances of awaitBarrier printed out.
I'm using version 1.0.0 of the plugin in Jenkins 2.219
Thanks!
This is mainly about the semaphore code of the project; I haven't tested the other tasks. The plug-in uses the common ForkJoinPool for background jobs, but the size of that pool is limited (by default) to approximately the number of CPUs. If there are more threads waiting for semaphores than CPUs, this leads to deadlocks or low throughput (all threads are waiting to acquire semaphores but no-one releases one, or most threads are waiting and only a few are free to trigger the actual Pipeline steps; see my PR). The limitation also leads to situations where a job acquiring semaphore A affects a totally different job acquiring semaphore B, just because there are no pool threads left to actually wait for the semaphore.
The limitation of the ForkJoinPool is fine for CPU-bound tasks, but the tasks executed by the plug-in are not CPU-bound; they are just waiting for semaphores or for the step to finish.
One option to solve this is to increase the size of the ForkJoinPool or to use a dedicated pool per Jenkins job. Another option would be the following:
Do not acquire semaphores in dedicated futures. Instead, use a single thread that polls one or more semaphores every few hundred milliseconds and, on success, launches dedicated executor threads (not bound to a pool, or at least bound to a sufficiently sized pool) that trigger the Pipeline steps. This limits the number of threads waiting for semaphores to one and ensures that acquire steps always execute. The first part could be achieved by a custom thread that takes requests (consisting of a semaphore, a count, an optional timeout, and a Runnable to start on success) via a queue, iterates over the queue every few hundred milliseconds, tries to acquire each semaphore with a zero or very short timeout, and launches the task on success; or by scheduled CompletableFutures that do the same but reschedule themselves to run again a few hundred milliseconds later if needed. For the second part, I would in any case avoid running the body invokers in the common ForkJoinPool, so that they can always execute and release the semaphore again, no matter whether the pool is exhausted. I would either use dedicated threads, a custom pool (resizeable without a Jenkins restart, if possible), or check whether there is a better way to run the body invokers asynchronously without wasting a thread waiting here.
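The pool limit described here is standard JDK behavior: unless overridden via the `java.util.concurrent.ForkJoinPool.common.parallelism` system property, the common pool targets one thread fewer than the number of available processors. A minimal plain-Java check (not plugin code) makes the ceiling visible:

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        // Default target parallelism of the shared pool: max(1, cpus - 1).
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        System.out.println("CPUs: " + cpus + ", common pool parallelism: " + parallelism);
        // Once this many tasks block (e.g. waiting on a semaphore),
        // no further common-pool tasks can make progress.
    }
}
```

On a 4-CPU machine this reports a parallelism of 3, which matches the "only 3 concurrent" symptoms reported against this plugin.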
Not seeing any activity since 2020. Is this EoL and no longer supported?
It would be nice if synchronization objects had global- or folder-level subtypes that would allow synchronization across different pipelines, similar to the lockable-resources-plugin.
When a running job (with an active barrier) gets aborted (e.g. manually), the barrier does not get released.
When the job is restarted, awaitBarrier does not get a free slot.
I started a parallel job with 4 parallel steps (sub build-jobs) and a barrier count of 3 slots.
After starting the parallel job, the expected 3 parallel steps started up.
When the 3 steps finished, the 4th and last job started.
So far so good, everything as expected up to here. At this point I aborted the running sub build-job and the pipeline execution in the parent job.
When I restarted the parallel job, only 2 sub build-jobs started up. (Guess the 3rd slot was still locked.)
I also aborted the two running jobs and the parent job.
I restarted the parent job again, and no free slot was available anymore.
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }
I would expect these behaviors:
1st: When a step finishes, under any condition, the barriers it used get released
awaitBarrier (barrier){ // whatever step to do here }
2nd: When a new barrier is created, it is really a new barrier with n slots (no dependency on already existing ones) => scope
def barrier = createBarrier count: 3
3rd: A command to release a barrier slot is missing, maybe for custom try/catch release actions
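For reference, the JDK primitive underneath already supports the forced release asked for in the 3rd point: CyclicBarrier.reset() breaks the barrier and wakes every waiting thread with a BrokenBarrierException. A plain-Java sketch (not plugin code):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicBoolean;

public class BarrierReset {
    public static void main(String[] args) throws InterruptedException {
        CyclicBarrier barrier = new CyclicBarrier(3); // 3 slots, but only 1 party will arrive
        AtomicBoolean sawBroken = new AtomicBoolean(false);

        Thread waiter = new Thread(() -> {
            try {
                barrier.await(); // blocks: the other 2 parties never arrive
            } catch (BrokenBarrierException e) {
                sawBroken.set(true); // woken up by reset()
            } catch (InterruptedException ignored) {
            }
        });
        waiter.start();

        while (barrier.getNumberWaiting() < 1) {
            Thread.sleep(10); // wait until the thread is parked on the barrier
        }
        barrier.reset(); // forcibly release the stuck waiter
        waiter.join();
        System.out.println("waiter saw BrokenBarrierException: " + sawBroken.get());
    }
}
```

A plugin-level release step could plausibly be built on this, so that abort handling resets the barrier instead of leaving slots occupied.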
This issue seems to be related to #10 but is quite a bit different. The old issue should be reopened too, since there are some more reports with details of problems after it was closed.
```
def barrier = createBarrier count: 3
parameters {
    booleanParam(defaultValue: true, description: 'Install on: Windows 10 Pro', name: 'INSTALL_ON_WIN10')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2016 Standard', name: 'INSTALL_ON_WINS2016')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2016 Standard (Member of AD)', name: 'INSTALL_ON_WINS2016_AD')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2019 Standard', name: 'INSTALL_ON_WINS2019')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2019 Standard (Member of AD)', name: 'INSTALL_ON_WINS2019_AD')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.6.15', name: 'UPGRADE_REL_1_6_15')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.7.10', name: 'UPGRADE_REL_1_7_10')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.8.15', name: 'UPGRADE_REL_1_8_15')
}
stages {
    stage("Install / Upgrade") {
        parallel {
            stage('Install on WIN10') {
                when {
                    expression { params.INSTALL_ON_WIN10 == true }
                }
                steps {
                    awaitBarrier (barrier) {
                        build job: 'Pipeline-VM-vSphere_develop',
                            parameters: [
                                string(name: 'TARGET', value: "WIN10")
                            ]
                    }
                }
            }
            stage('Install on WINS2016') {
                when {
                    expression { params.INSTALL_ON_WINS2016 == true }
                }
                steps {
                    awaitBarrier (barrier) {
                        build job: 'Pipeline-VM-vSphere_develop',
                            parameters: [
                                string(name: 'TARGET', value: "WINS2016")
                            ]
                    }
                }
            }
            stage('Install on WINS2016_AD') { ... }
            stage('Install on WINS2019') { ... }
            ...
        }
    }
}
```
I see "This plugin steps can't recover from Jenkins crash" in the docs.
How would I manually reset the plugin's state after a Jenkins crash (especially semaphores)?
This plugin came up in discussion on Jenkins JIRA:
https://issues.jenkins.io/browse/JENKINS-44085
Jesse Glick made this comment:
From a brief glance at https://github.com/jenkinsci/concurrent-step-plugin I would say that it is designed incorrectly (confuses “native” Java threads with “virtual” CPS VM threads) and should not be used. Most or all of its steps probably could be reimplemented correctly while using the same Pipeline script interface.
Jenkins and plugins versions report:
Jenkins: 2.263.4
OS: Linux - 4.15.0-130-generic
---
gradle:1.36
credentials-binding:1.24
external-monitor-job:1.7
bootstrap4-api:4.6.0-2
momentjs:1.1.1
dtkit-api:3.0.0
workflow-aggregator:2.6
git:4.6.0
jjwt-api:0.11.2-9.c8b45b8bb173
file-operations:1.11
git-server:1.9
workflow-basic-steps:2.22
timestamper:1.11.8
ssh-slaves:1.31.5
xunit:2.3.9
docker-java-api:3.1.5.2
bouncycastle-api:2.20
lockable-resources:2.10
docker-workflow:1.26
plain-credentials:1.7
resource-disposer:0.15
mapdb-api:1.0.9.0
mailer:1.32.1
analysis-model-api:9.8.1
subversion:2.14.0
script-security:1.76
forensics-api:1.0.0
git-parameter:0.9.13
pipeline-rest-api:2.19
echarts-api:5.0.1-1
pipeline-build-step:2.13
github:1.33.1
matrix-auth:2.6.5
credentials:2.3.15
branch-api:2.6.2
jsch:0.1.55.2
workflow-api:2.41
pipeline-stage-tags-metadata:1.8.4
ansicolor:0.7.5
pipeline-model-extensions:1.8.4
workflow-support:3.8
pipeline-input-step:2.12
apache-httpcomponents-client-4-api:4.5.13-1.0
ldap:1.26
git-client:3.6.0
font-awesome-api:5.15.2-2
scm-api:2.6.4
checks-api:1.5.0
workflow-step-api:2.23
workflow-cps:2.90
ace-editor:1.1
jdk-tool:1.5
command-launcher:1.5
Parameterized-Remote-Trigger:3.1.5.1
pipeline-model-definition:1.8.4
pipeline-stage-view:2.19
workflow-scm-step:2.12
ws-cleanup:0.39
http_request:1.8.27
pipeline-model-declarative-agent:1.1.1
pipeline-model-api:1.8.4
htmlpublisher:1.25
jquery-detached:1.2.1
structs:1.22
email-ext:2.82
workflow-cps-global-lib:2.18
pipeline-graph-analysis:1.10
ant:1.11
workflow-durable-task-step:2.38
build-timeout:1.20
github-branch-source:2.9.7
pam-auth:1.6
antisamy-markup-formatter:2.1
junit:1.48
windows-slaves:1.7
github-api:1.123
snakeyaml-api:1.27.0
jquery:1.12.4-1
data-tables-api:1.10.23-3
concurrent-step:1.0.0
warnings-ng:8.10.0
plugin-util-api:2.0.0
okhttp-api:3.14.9
trilead-api:1.0.13
handlebars:1.1.1
workflow-multibranch:2.22
greenballs:1.15.1
display-url-api:2.3.4
jackson2-api:2.12.1
pipeline-milestone-step:1.3.2
token-macro:2.13
workflow-job:2.40
pipeline-stage-step:2.5
jquery3-api:3.5.1-3
durable-task:1.35
cloudbees-folder:6.15
pipeline-github-lib:1.0
authentication-tokens:1.4
docker-plugin:1.2.2
docker-commons:1.17
popper-api:1.16.1-2
jaxb:2.3.0.1
matrix-project:1.18
ssh-credentials:1.18.1
Debian 10
Expected result:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] createLatch
[Pipeline] parallel
[Pipeline] { (Branch: wait)
[Pipeline] { (Branch: countdown1)
[Pipeline] { (Branch: countdown2)
[Pipeline] awaitLatch
[Pipeline] sleep
Sleeping for 3 sec
[Pipeline] sleep
Sleeping for 2 sec
[Pipeline] countDownLatch
[Pipeline] }
[Pipeline] countDownLatch
[Pipeline] echo
var1=true
[Pipeline] echo
var2=true
[Pipeline] }
[Pipeline] }
[Pipeline] // parallel
[Pipeline] End of Pipeline
[Checks API] No suitable checks publisher found.
Finished: SUCCESS
Actual result:
The pipeline hangs and, after aborting, displays a backtrace:
java.lang.IllegalStateException: countDownLatch step must be called with a body
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:246)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:157)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:20)
at ___cps.transform___(Native Method)
...
This pipeline used to work fine. I upgraded from Jenkins 2.249.2 to 2.263.4; it still worked. Then I upgraded all the plugins, which caused this error. However, there were a lot of plugins and I don't have the exact versions.
As the title says: a semaphore created in a pipeline library that is imported into multiple projects is shared between those jobs.
Is this intended?
How can I prevent this?
Hi,
I am using the plugin in the following way:
def semaphore = createSemaphore permit: 7
def s = [:]
for (int i = 0; i < 100; i++) {
    def id = "${i}"
    s.put("semaphore" + id, { ->
        acquireSemaphore (semaphore) {
            sleep time: 100, unit: "MILLISECONDS"
            echo "semaphore" + id + " body"
        }
    })
}
parallel s
This seems to run only 3 processes concurrently, no matter what the value of permit is. Is the number of concurrent steps limited by the CPU count or something else, irrespective of the permit parameter?
Thanks.
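For comparison, the JDK Semaphore that backs this kind of step really does allow `permit` concurrent holders. A plain-Java check (not plugin code) shows all 7 permits can be held at once, which suggests the 3-way limit observed above comes from the thread pool executing the steps, not from the semaphore itself:

```java
import java.util.concurrent.Semaphore;

public class SemaphorePermits {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(7);
        // The semaphore hands out all 7 permits without blocking:
        for (int i = 0; i < 7; i++) {
            System.out.println("acquire " + i + ": " + semaphore.tryAcquire());
        }
        System.out.println("remaining permits: " + semaphore.availablePermits());
        // Only an 8th attempt fails; 7 holders really are concurrent.
        System.out.println("8th acquire: " + semaphore.tryAcquire());
    }
}
```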
This plugin is extremely nice. It seems the beta release of this plugin was successful in November 2019. Is there a plan to move forward with an official release that will appear in the Jenkins Update Center soon?
The following Jenkinsfile results in a deadlock when run with parallelism greater than the fork-join pool size. On my PC, 5 is enough.
def PARALLELISM = 5
def latches = []
for (int i = 0; i < PARALLELISM + 1; i++) {
    latches[i] = createLatch count: 1
}
def s = [:]
for (int i = 0; i < PARALLELISM; i++) {
    def id = i
    s.put("latch" + id, {
        countDownLatch(latches[id + 1]) {
            echo "awaiting latch " + id
            awaitLatch(latches[id])
            echo "got latch " + id
            sleep time: 100, unit: "MILLISECONDS"
            echo "releasing latch " + (id + 1)
        }
        echo "released latch " + (id + 1)
    })
}
echo "releasing latch 0"
countDownLatch(latches[0])
parallel s
Here is the console output:
Started
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] countDownLatch
[Pipeline] parallel
[Pipeline] { (Branch: latch0)
[Pipeline] { (Branch: latch1)
[Pipeline] { (Branch: latch2)
[Pipeline] { (Branch: latch3)
[Pipeline] { (Branch: latch4)
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] countDownLatch
[Pipeline] echo
awaiting latch 0
[Pipeline] awaitLatch
[Pipeline] echo
awaiting latch 1
[Pipeline] awaitLatch
[Pipeline] echo
awaiting latch 2
[Pipeline] awaitLatch
[Pipeline] echo
got latch 0
[Pipeline] sleep
Sleeping for 0.1 sec
[Pipeline] echo
releasing latch 1
[Pipeline] }
[Pipeline] // countDownLatch
[Pipeline] {
[Pipeline] echo
released latch 1
[Pipeline] }
[Pipeline] echo
got latch 1
[Pipeline] sleep
Sleeping for 0.1 sec
[Pipeline] echo
awaiting latch 3
[Pipeline] awaitLatch
[Pipeline] echo
releasing latch 2
[Pipeline] }
[Pipeline] {
[Pipeline] echo
awaiting latch 4
[Pipeline] awaitLatch
And the relevant threads from the thread dump:
ForkJoinPool.commonPool-worker-1
threadId:107 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@1643a3e
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
ForkJoinPool.commonPool-worker-2
threadId:108 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@368f088b
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
ForkJoinPool.commonPool-worker-3
threadId:109 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@4888f736
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#1]
threadId:110 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@73d4066e
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@5910cbd9
org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#2]
threadId:112 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@9aa2002
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@694929f6
org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#3]
threadId:113 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@78c1372d
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@158f949d
Converting the Jenkinsfile to an equivalent try/finally construct solves the issue:
def PARALLELISM = $PARALLELISM$
def latches = []
for (int i = 0; i < PARALLELISM + 1; i++) {
    latches[i] = createLatch count: 1
}
def s = [:]
for (int i = 0; i < PARALLELISM; i++) {
    def id = i
    s.put("latch" + id, {
        try {
            echo "awaiting latch " + id
            awaitLatch(latches[id])
            echo "got latch " + id
            sleep time: 100, unit: "MILLISECONDS"
        } finally {
            echo "releasing latch " + (id + 1)
            countDownLatch(latches[id + 1])
        }
        echo "released latch " + (id + 1)
    })
}
echo "releasing latch 0"
countDownLatch(latches[0])
parallel s
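The same discipline the workaround relies on exists at the JDK level: counting a CountDownLatch down inside a finally block guarantees the next waiter is unblocked even if the body throws. A plain-Java sketch:

```java
import java.util.concurrent.CountDownLatch;

public class CountDownInFinally {
    public static void main(String[] args) {
        CountDownLatch next = new CountDownLatch(1);
        try {
            doWork(); // may throw
        } catch (RuntimeException e) {
            System.out.println("work failed: " + e.getMessage());
        } finally {
            next.countDown(); // always release the next stage, even on failure
        }
        System.out.println("latch count: " + next.getCount());
    }

    static void doWork() {
        throw new RuntimeException("boom");
    }
}
```

This is why the try/finally form of the Jenkinsfile avoids the hang: the release no longer depends on a pool thread surviving long enough to run the step body to completion.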