
concurrent-step-plugin's Introduction

Jenkins Concurrent Step Plugin

A Jenkins plugin to synchronize state among parallel stages in a pipeline.
**These plugin steps can't recover from a Jenkins crash and can only be used in pipeline scripts.**

Blocking and Asynchronous Execution

This plugin doesn't block the pipeline's main thread. It leverages Jenkins' asynchronous execution, so parallel stages in other branches continue to run while a step waits.

Barrier

def barrier = createBarrier count: 3;
boolean out = false
parallel(
        await1: {
            awaitBarrier barrier
            echo "out=${out}"
        },
        await2: {
            awaitBarrier (barrier){
                sleep 2 // simulate a long-running task
            }
            echo "out=${out}"
        },
        await3: {
            awaitBarrier (barrier){
                sleep 3 // simulate a long-running task
                out = true
            }
            echo "out=${out}"
        }
)

Latch

def latch = createLatch count: 2;
def var1 = false;
def var2 = false;
parallel(
        wait: {
            awaitLatch latch
            echo "var1=${var1}"
            echo "var2=${var2}"
        },
        countdown1: {
            countDownLatch (latch) {
                sleep 3 // simulate a long-running task
                var1 = true
            }
        },
        countdown2: {
            countDownLatch (latch) {
                sleep 2 // simulate a long-running task
                var2 = true
            }
        }
)

Semaphore

def semaphore = createSemaphore permit:1
def out2=0
parallel(
        semaphore1: {
            acquireSemaphore (semaphore){
                echo "out1 1"   // actions after acquiring the semaphore and before release; the semaphore is released automatically
                sleep 3
                out2=2
            }
        },
        semaphore2: {
            sleep 1
            acquireSemaphore (semaphore){
                echo "out2 ${out2}"
            }
        }
)

Condition

def condition = createCondition()
def out = false
parallel(
        wait1: {
            awaitCondition condition
            echo "out=${out}"
        },
        wait2: {
            awaitCondition condition
            echo "out=${out}"
        },
        signalAll: {
            signalAll (condition:condition) {
                sleep 3 // simulate a long-running task
                out = true
            }
        }
)

Release Steps with closure body

Correct release usually requires try/finally handling in the releasing branch. To simplify pipeline code, all release steps have an optional closure parameter: the lock is released immediately after the closure finishes, even if an exception is thrown inside the block.
Because of its semantics, the semaphore's closure is a parameter of acquireSemaphore, and the semaphore must not be released manually again.
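
To illustrate (a sketch using only the createLatch/countDownLatch/awaitLatch steps shown above), the closure form replaces the manual try/finally pattern:

```groovy
def latch = createLatch count: 1
parallel(
        // Without a closure: the release must be guarded manually,
        // or an exception would leave the waiter blocked forever.
        producer: {
            try {
                sleep 3 // long-running work
            } finally {
                countDownLatch latch
            }
        },
        consumer: {
            awaitLatch latch
        }
)

def latch2 = createLatch count: 1
parallel(
        // With a closure: the latch is counted down automatically
        // when the body exits, even if it throws.
        producer: {
            countDownLatch (latch2) {
                sleep 3 // long-running work
            }
        },
        consumer: {
            awaitLatch latch2
        }
)
```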

Samples

See more samples at src/test/resources

concurrent-step-plugin's People

Contributors

rschuetz, topikachu


concurrent-step-plugin's Issues

acquireSemaphore permit seems to be limited

Hi,
I am using the plugin in the following way

def semaphore = createSemaphore permit: 7

def s = [:]

for (int i = 0; i < 100; i++) {
    def id = "${i}"
    s.put("semaphore" + id, { ->
        acquireSemaphore (semaphore) {
            sleep time: 100, unit: "MILLISECONDS"
            echo "semaphore" + id + " body"
        }
    })
}

parallel s

this seems to run only 3 processes concurrently, no matter what the value of permit is. Is the number of concurrent steps limited by the CPU count, or by something else, irrespective of the permit parameter?

Thanks.

Is official release of this plugin planned?

This plugin is extremely nice. The beta releases of this plugin appear to have been successful as of November 2019. Is there a plan to move forward with an official release that will appear in the Jenkins Update Center soon?

Would be nice to have global synchronization.

It would be nice if the synchronization objects had global or folder-level subtypes that would allow synchronization across different pipelines, similar to lockable-resources-plugin.
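
As a possible interim workaround, the lock step from the lockable-resources-plugin already provides cross-pipeline synchronization (the resource name 'shared-resource' below is illustrative):

```groovy
// Requires the lockable-resources plugin; the named resource is
// shared globally, so this serializes across different pipelines.
lock(resource: 'shared-resource') {
    sleep 3 // critical section
}
```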

How to manually recover from a Jenkins crash?

Describe your use-case which is not covered by existing documentation.

I see:

This plugin steps can't recover from Jenkins crash

in the docs.

How would I manually reset the plugin's state after a Jenkins crash (especially semaphores)?

Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.

No response

Await Barrier never releases after Job abort

When a running job (with an active barrier) is aborted (possibly manually), the barrier is not released.
When the job is restarted, awaitBarrier does not get a free slot.

I started a parallel job with 4 parallel steps (sub build-jobs) and a barrier count of 3 slots.
After starting the parallel job, the expected 3 parallel steps started up.
When the 3 steps finished, the 4th and last job started.
So far, everything was as expected. At this point I aborted the running sub build-job and the pipeline execution in the parent job.
When I restarted the parallel job, only 2 sub build-jobs started up. (Guess the 3rd was still locked.)
I also aborted the two running jobs and the parent job.
After restarting the parent job again, no free slot was available anymore.

15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] awaitBarrier
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] // stage
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }
15:25:49 [Pipeline] }

I would expect the following behaviors:
1st: When a step finishes, under any condition, the barriers it used are released:
awaitBarrier (barrier){ // whatever step to do here }
2nd: When a new barrier is created, it is really a new barrier with n slots (no dependencies on already existing ones) => scope:
def barrier = createBarrier count: 3;
3rd: A command to release a barrier slot is missing, e.g. for custom try/catch release actions.

This issue seems to be related to #10 but is quite a bit different. The old issue should be reopened too, since there are more reports with details of problems after it was closed.

def barrier = createBarrier count: 3

parameters {
    booleanParam(defaultValue: true, description: 'Install on: Windows 10 Pro', name: 'INSTALL_ON_WIN10')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2016 Standard', name: 'INSTALL_ON_WINS2016')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2016 Standard (Member of AD)', name: 'INSTALL_ON_WINS2016_AD')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2019 Standard', name: 'INSTALL_ON_WINS2019')
    booleanParam(defaultValue: true, description: 'Install on: Windows Server 2019 Standard (Member of AD)', name: 'INSTALL_ON_WINS2019_AD')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.6.15', name: 'UPGRADE_REL_1_6_15')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.7.10', name: 'UPGRADE_REL_1_7_10')
    booleanParam(defaultValue: true, description: 'Upgrade: Release 1.8.15', name: 'UPGRADE_REL_1_8_15')
}

stages {
    stage("Install / Upgrade") {
        parallel {
            stage('Install on WIN10') {
                when {
                    expression { params.INSTALL_ON_WIN10 == true }
                }
                steps {
                    awaitBarrier (barrier) {
                        build job: 'Pipeline-VM-vSphere_develop',
                                parameters: [
                                        string(name: 'TARGET', value: "WIN10")
                                ]
                    }
                }
            }
            stage('Install on WINS2016') {
                when {
                    expression { params.INSTALL_ON_WINS2016 == true }
                }
                steps {
                    awaitBarrier (barrier) {
                        build job: 'Pipeline-VM-vSphere_develop',
                                parameters: [
                                        string(name: 'TARGET', value: "WINS2016")
                                ]
                    }
                }
            }
            stage('Install on WINS2016_AD') { ... }
            stage('Install on WINS2019') { ... }
            ...
        }
    }
}

Thread usage in AcquireStep

This is mainly about the Semaphore code of the project; I haven't tested the other steps. The plug-in uses the common ForkJoinPool for background jobs, but the pool's size is limited (by default) to approximately the number of CPUs. When more threads are waiting for semaphores than there are CPUs, this leads to deadlocks or low throughput (all threads are waiting to acquire semaphores, but no-one releases one; or most threads are waiting and only a few are free to trigger the actual pipeline steps - see my PR). The limitation also means that a job acquiring semaphore A can affect a totally different job acquiring semaphore B, simply because there are no threads left to actually wait for the semaphore.

The limitation of the ForkJoinPool is fine for CPU-bound tasks, but the tasks executed by the plug-in are not CPU-bound; they just wait for semaphores or for the step to finish.

One option to solve this is to increase the size of the ForkJoinPool or to use a dedicated pool per Jenkins job. Another option would be the following:

Do not acquire semaphores in dedicated Futures. Instead, use a single thread that polls one or more semaphores every few hundred ms and, on success, launches dedicated executor threads (not bound to a pool, or at least bound to a sufficiently sized pool) to trigger the pipeline steps. This limits the number of threads waiting for semaphores to one and makes sure acquire steps always execute.

The first part could be achieved by a custom thread that takes requests (consisting of a semaphore, a count, an optional timeout, and a Runnable to start on success) via a queue, iterates over the queue every few hundred ms, tries to acquire each semaphore with a zero or very short timeout, and launches the task; or by scheduled CompletableFutures that do the same but reschedule themselves to run again in a few hundred ms if needed.

For the second part, I would in any case avoid running the body invoker in the common ForkJoinPool, so that bodies can always execute and release the semaphore again, no matter whether the pool is exhausted. I would either use dedicated threads, a custom pool (if possible resizable without restarting Jenkins), or check whether there is a better way to run the body invokers asynchronously without wasting a thread waiting here.
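
A minimal sketch of the proposed single-poller design, in plain Groovy (all names are illustrative, not the plugin's actual code): one scheduled thread repeatedly tries each pending request with a zero-timeout tryAcquire, and successful bodies run on an unbounded pool so they can always release the semaphore again:

```groovy
import java.util.concurrent.*

class AcquireRequest {
    Semaphore semaphore
    Runnable  body      // the pipeline step body to run once acquired
}

def pending = new ConcurrentLinkedQueue<AcquireRequest>()
def poller  = Executors.newSingleThreadScheduledExecutor()
def workers = Executors.newCachedThreadPool() // not the common ForkJoinPool

// Callers just enqueue:
// pending.add(new AcquireRequest(semaphore: sem, body: stepBody))

// A single thread polls all pending requests every few hundred ms.
poller.scheduleWithFixedDelay({
    def iter = pending.iterator()
    while (iter.hasNext()) {
        def req = iter.next()
        if (req.semaphore.tryAcquire()) { // zero-timeout attempt
            iter.remove()
            workers.submit {
                try {
                    req.body.run()
                } finally {
                    req.semaphore.release() // always released, pool never exhausted
                }
            }
        }
    }
}, 0, 200, TimeUnit.MILLISECONDS)
```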

Job waits indefinitely in acquireSemaphore

If a running pipeline is aborted in Blue Ocean, it can happen that the next job waits indefinitely. After a restart of the master, everything is fine again.

Deadlock when using countDownLatch with closure

The following Jenkinsfile results in a deadlock when run with parallelism greater than the fork-pool size. On my PC, 5 is enough.

def PARALLELISM = 5

def latches = []
for(int i = 0; i < PARALLELISM+1; i++) {
  latches[i] = createLatch count: 1
}

def s = [:]

for(int i = 0; i < PARALLELISM; i++) {
  def id = i
  s.put("latch" + id, {
    countDownLatch(latches[id+1]) {
      echo "awaiting latch " + id
      awaitLatch(latches[id])
      echo "got latch " + id
      sleep time: 100, unit: "MILLISECONDS"
      echo "releasing latch " + (id+1)
    }
    echo "released latch " + (id+1)
  })
}

echo "releasing latch 0"
countDownLatch(latches[0])
parallel s

Here is console out:

Started
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] createLatch
[Pipeline] countDownLatch
[Pipeline] parallel
[Pipeline] { (Branch: latch0)
[Pipeline] { (Branch: latch1)
[Pipeline] { (Branch: latch2)
[Pipeline] { (Branch: latch3)
[Pipeline] { (Branch: latch4)
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] {
[Pipeline] countDownLatch
[Pipeline] countDownLatch
[Pipeline] echo
awaiting latch 0
[Pipeline] awaitLatch
[Pipeline] echo
awaiting latch 1
[Pipeline] awaitLatch
[Pipeline] echo
awaiting latch 2
[Pipeline] awaitLatch
[Pipeline] echo
got latch 0
[Pipeline] sleep
Sleeping for 0.1 sec
[Pipeline] echo
releasing latch 1
[Pipeline] }
[Pipeline] // countDownLatch
[Pipeline] {
[Pipeline] echo
released latch 1
[Pipeline] }
[Pipeline] echo
got latch 1
[Pipeline] sleep
Sleeping for 0.1 sec
[Pipeline] echo
awaiting latch 3
[Pipeline] awaitLatch
[Pipeline] echo
releasing latch 2
[Pipeline] }
[Pipeline] {
[Pipeline] echo
awaiting latch 4
[Pipeline] awaitLatch

And relevant threads from tracedump:

ForkJoinPool.commonPool-worker-1
threadId:107 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@1643a3e
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

ForkJoinPool.commonPool-worker-2
threadId:108 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@368f088b
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

ForkJoinPool.commonPool-worker-3
threadId:109 - state:WAITING
stackTrace:
at java.lang.Object.wait(Native Method)
- waiting on org.jenkinsci.plugins.workflow.cps.CpsBodyExecution@4888f736
at java.lang.Object.wait(Object.java:502)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:305)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution.lambda$start$0(CountDownStep.java:81)
at com.github.topikachu.jenkins.concurrent.latch.CountDownStep$Execution$$Lambda$83/961550691.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1618)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#1]
threadId:110 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@73d4066e
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@5910cbd9

org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#2]
threadId:112 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@9aa2002
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@694929f6

org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution [#3]
threadId:113 - state:WAITING
stackTrace:
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.CountDownLatch$Sync@78c1372d
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.lambda$run$0(AwaitStep.java:91)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution$$Lambda$87/1373648057.apply(Unknown Source)
at java.util.Optional.map(Optional.java:215)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:80)
at com.github.topikachu.jenkins.concurrent.latch.AwaitStep$Execution.run(AwaitStep.java:69)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$$Lambda$80/1455251952.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@158f949d

Converting the Jenkinsfile to the equivalent try/finally construct solves the issue:

def PARALLELISM = $PARALLELISM$

def latches = []
for(int i = 0; i < PARALLELISM+1; i++) {
  latches[i] = createLatch count: 1
}

def s = [:]

for(int i = 0; i < PARALLELISM; i++) {
  def id = i
  s.put("latch" + id, {
    try {
      echo "awaiting latch " + id
      awaitLatch(latches[id])
      echo "got latch " + id
      sleep time: 100, unit: "MILLISECONDS"
    } finally {
      echo "releasing latch " + (id+1)
      countDownLatch(latches[id+1])
    }
    echo "released latch " + (id+1)
  })
}

echo "releasing latch 0"
countDownLatch(latches[0])
parallel s

Await Barrier never releases

Hello! I'm trying to wait for a barrier within a script section, but it seems to hang forever, even when I set a timeout. My code looks something like this, nested within a declarative pipeline's steps{} block.

def testBarrier = createBarrier count: numberOfTestNodes; // 5 in my case
def testGroups = [:]
script {
    for (int i = 0; i < numberOfTestNodes; i++) {
        def num = i
        testGroups["node $num"] = {
            node('workers') {
                def javaHome = tool name: 'openjdk-11'
                // do some prep work
                awaitBarrier barrier: testBarrier, timeout: 10, unit: 'SECONDS'
                // main work goes here
                stash name: "node $num", includes: '**/simulation.log'
            }
        }
    }
    parallel testGroups
}

In the Jenkins build console log, I can see awaitBarrier being printed out, but it hangs on the last one forever. I counted them and there are definitely 5 instances of awaitBarrier printed out.

I'm using version 1.0.0 of the plugin in Jenkins 2.219

Thanks!

Consider refactoring plugin to use "virtual" CPS VM threads

This plugin came up in discussion on Jenkins JIRA:

https://issues.jenkins.io/browse/JENKINS-44085

Jesse Glick made this comment:

https://issues.jenkins.io/browse/JENKINS-44085?focusedCommentId=405935&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-405935

From a brief glance at https://github.com/jenkinsci/concurrent-step-plugin I would say that it is designed incorrectly (confuses “native” Java threads with “virtual” CPS VM threads) and should not be used. Most or all of its steps probably could be reimplemented correctly while using the same Pipeline script interface.

java.lang.IllegalStateException: countDownLatch step must be called with a body

Version report

Jenkins and plugins versions report:

Jenkins: 2.263.4
OS: Linux - 4.15.0-130-generic
---
gradle:1.36
credentials-binding:1.24
external-monitor-job:1.7
bootstrap4-api:4.6.0-2
momentjs:1.1.1
dtkit-api:3.0.0
workflow-aggregator:2.6
git:4.6.0
jjwt-api:0.11.2-9.c8b45b8bb173
file-operations:1.11
git-server:1.9
workflow-basic-steps:2.22
timestamper:1.11.8
ssh-slaves:1.31.5
xunit:2.3.9
docker-java-api:3.1.5.2
bouncycastle-api:2.20
lockable-resources:2.10
docker-workflow:1.26
plain-credentials:1.7
resource-disposer:0.15
mapdb-api:1.0.9.0
mailer:1.32.1
analysis-model-api:9.8.1
subversion:2.14.0
script-security:1.76
forensics-api:1.0.0
git-parameter:0.9.13
pipeline-rest-api:2.19
echarts-api:5.0.1-1
pipeline-build-step:2.13
github:1.33.1
matrix-auth:2.6.5
credentials:2.3.15
branch-api:2.6.2
jsch:0.1.55.2
workflow-api:2.41
pipeline-stage-tags-metadata:1.8.4
ansicolor:0.7.5
pipeline-model-extensions:1.8.4
workflow-support:3.8
pipeline-input-step:2.12
apache-httpcomponents-client-4-api:4.5.13-1.0
ldap:1.26
git-client:3.6.0
font-awesome-api:5.15.2-2
scm-api:2.6.4
checks-api:1.5.0
workflow-step-api:2.23
workflow-cps:2.90
ace-editor:1.1
jdk-tool:1.5
command-launcher:1.5
Parameterized-Remote-Trigger:3.1.5.1
pipeline-model-definition:1.8.4
pipeline-stage-view:2.19
workflow-scm-step:2.12
ws-cleanup:0.39
http_request:1.8.27
pipeline-model-declarative-agent:1.1.1
pipeline-model-api:1.8.4
htmlpublisher:1.25
jquery-detached:1.2.1
structs:1.22
email-ext:2.82
workflow-cps-global-lib:2.18
pipeline-graph-analysis:1.10
ant:1.11
workflow-durable-task-step:2.38
build-timeout:1.20
github-branch-source:2.9.7
pam-auth:1.6
antisamy-markup-formatter:2.1
junit:1.48
windows-slaves:1.7
github-api:1.123
snakeyaml-api:1.27.0
jquery:1.12.4-1
data-tables-api:1.10.23-3
concurrent-step:1.0.0
warnings-ng:8.10.0
plugin-util-api:2.0.0
okhttp-api:3.14.9
trilead-api:1.0.13
handlebars:1.1.1
workflow-multibranch:2.22
greenballs:1.15.1
display-url-api:2.3.4
jackson2-api:2.12.1
pipeline-milestone-step:1.3.2
token-macro:2.13
workflow-job:2.40
pipeline-stage-step:2.5
jquery3-api:3.5.1-3
durable-task:1.35
cloudbees-folder:6.15
pipeline-github-lib:1.0
authentication-tokens:1.4
docker-plugin:1.2.2
docker-commons:1.17
popper-api:1.16.1-2
jaxb:2.3.0.1
matrix-project:1.18
ssh-credentials:1.18.1
  • What Operating System are you using (both controller, and any agents involved in the problem)?

Debian 10

Reproduction steps

Results

Expected result:

Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] createLatch
[Pipeline] parallel
[Pipeline] { (Branch: wait)
[Pipeline] { (Branch: countdown1)
[Pipeline] { (Branch: countdown2)
[Pipeline] awaitLatch
[Pipeline] sleep
Sleeping for 3 sec
[Pipeline] sleep
Sleeping for 2 sec
[Pipeline] countDownLatch
[Pipeline] }
[Pipeline] countDownLatch
[Pipeline] echo
var1=true
[Pipeline] echo
var2=true
[Pipeline] }
[Pipeline] }
[Pipeline] // parallel
[Pipeline] End of Pipeline
[Checks API] No suitable checks publisher found.
Finished: SUCCESS

Actual result:

The pipeline hangs and, after aborting, displays a backtrace:

java.lang.IllegalStateException: countDownLatch step must be called with a body
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:246)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:157)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at WorkflowScript.run(WorkflowScript:20)
	at ___cps.transform___(Native Method)
...

More info

This pipeline used to work fine. I upgraded from Jenkins 2.249.2 to 2.263.4, and it still worked. Then I upgraded all the plugins, which caused this error. However, there were a lot of plugins and I don't have their exact previous versions.
