fabric8io / fabric8-pipeline-library
Fabric8 Pipeline for Jenkins
License: Apache License 2.0
The pod templates we currently use (e.g. the maven template) refer to resources managed by gofabric8 and are used to store things like settings, SSH keys, GnuPG keys and more.
It would be nice to have a flavor of the templates without all these fixed resources, or to make those resources optional.
Something like this would let users get started regardless of how they set up the environment or which Jenkins image they use. Of course, they wouldn't be able to enjoy Fabric8 to its full extent, but they could easily hack a pipeline that does a maven build, runs the integration/system tests and even updates internal environments, then gradually add more things to the mix. I think a step-by-step approach is really important: it gives users time to digest and better understand how to use our stuff, and it also gives us more flexibility.
The implementation is the tricky part...
What I'd like to avoid is an endless chain of if/then/else.
What I'd also like to avoid is having tons of different templates for the same thing.
What could make sense here is to leverage template nesting / composition.
So we could have something like a light maven template called withMaven, plus additional templates that attach the secrets or the rest of the resources (e.g. withSsh to define the SSH keys). We could then bind them together:
withMaven(mavenImage: 'maven:3.3.9') {
  withSsh('jenkins-ssh') {
    withGpg('jenkins-gpg') {
      //do stuff
    }
  }
}
And if this starts getting verbose, we could hack a withFabric8 that adds the things we need with a single declaration.
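Such a withFabric8 wrapper could simply compose the lighter templates. A minimal sketch, assuming withMaven/withSsh/withGpg exist as proposed (the step names, parameter names and defaults below are all illustrative, not existing API):

```groovy
// vars/withFabric8.groovy - hypothetical convenience wrapper that composes
// the lighter templates so the common case stays a one-liner.
// All step names and defaults here are illustrative assumptions.
def call(Map config = [:], Closure body) {
    withMaven(mavenImage: config.get('mavenImage', 'maven:3.3.9')) {
        withSsh(config.get('sshSecret', 'jenkins-ssh')) {
            withGpg(config.get('gpgSecret', 'jenkins-gpg')) {
                body()
            }
        }
    }
}
```

A user who wants everything would then just write withFabric8 { /* do stuff */ }, while the individual templates stay usable on their own.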
So I'm loving the GitHub release notes we generate for npm projects:
https://github.com/fabric8-ui/fabric8-runtime-console/releases
e.g. if a project uses the Conventional Commits (http://conventionalcommits.org/) format for commit messages, then we can generate nice release notes for it.
I wonder if we could start to enable this on all Java and Go projects too, if they opt in to using Conventional Commits? Maybe it could be a flag we enable in the Jenkinsfile or something?
Aloha,
currently the Fabric8-Jenkins-Build-Job runs into a weird exception when creating a new project using v2.2.192.
I suppose it's somehow (or not) related to the following line, but I'm not sure:
The original stack trace can be found here:
https://gist.github.com/anonymous/8b6b08236331677d24e42ff62edf571b
Thanks for any hint,
Qaiser
We define the environments for a team in the fabric8-environments ConfigMap.
Rather than having lots of different jobs that include Staging and/or Production, it might be nice to have a PromoteAll job and function that promotes a release to all environments defined in the ConfigMap?
Possibly adding an include/exclude list too? e.g. you typically want to exclude Test.
So something like promoteToEnvironments(), which would default to something like promoteToEnvironments(excludes=['Test'], includes=['*']). Or something like that?
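The filtering part of such a function could be kept as pure logic. A sketch (spelled promoteToEnvironments; loading the environment list from the ConfigMap and the promote step itself are assumed helpers, not existing API):

```groovy
// Select which environments from the fabric8-environments ConfigMap a
// release should be promoted to. A '*' in includes means "all".
def selectEnvironments(List<String> all,
                       List<String> includes = ['*'],
                       List<String> excludes = ['Test']) {
    def selected = includes.contains('*') ? all : all.findAll { includes.contains(it) }
    return selected.findAll { !excludes.contains(it) }
}

// promoteToEnvironments() itself would then loop over the selection, e.g.:
// selectEnvironments(loadEnvironmentsFromConfigMap()).each { env -> promote(env) }
```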
Folks want to customise the generated deployment / deployment config YAMLs. In Java this is done with the help of the fabric8-maven-plugin; for non-Java pipelines we use the shared function https://github.com/fabric8io/fabric8-pipeline-library/blob/master/vars/getDeploymentResources.groovy. We should raise a PR to merge the parameterised YAML so folks can customise it in their repo, and when the pipeline runs it will still replace the version number, project name, labels etc.
Seems to happen when resources are low or multiple jobs are running. This has been seen on OSO and GKE.
Executing shell script inside container [maven] of pod [kubernetes-137ebf2065f949d4acac4e019ed07af7-1e96524904d1e]
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
GitHub has been notified of this commit’s build result
java.io.IOException: Pipe not connected
at java.io.PipedOutputStream.write(PipedOutputStream.java:140)
at java.io.OutputStream.write(OutputStream.java:75)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:125)
at hudson.Launcher$ProcStarter.start(Launcher.java:384)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:157)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:63)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:172)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:184)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:126)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
at stageProject.call(/var/jenkins_home/jobs/fabric8-cd/jobs/fabric8-maven-plugin/branches/master/builds/16/libs/github.com/fabric8io/fabric8-pipeline-library/vars/stageProject.groovy:18)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor239.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:74)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE
When I have a pipeline without any hubot configured I still see:
[Pipeline] hubotApprove
Hubot sending to room fabric8_default => Would you like to promote version 2.0.1 to the next environment?
to Proceed reply: fabric8 jenkins proceed job maxandersen/mirror/master build 1
to Abort reply: fabric8 jenkins abort job maxandersen/mirror/master build 1
No service hubot is running!!!
No service found!
Two things come to mind:
Why does it even tell me about hubot when I don't have it running? I assume for many this would be the default, so 9 lines of output is wasteful. If it must say something, maybe keep it to one line like Service hubot is not running!
Shouldn't enabling/disabling of features in the shared Jenkins pipeline be something controlled by the user, i.e. the user enabling/disabling extensions, rather than it being defined globally in github.com/fabric8io/fabric8-pipeline-library@master?
After a while the timeout can get to an hour or two. If you update the PR status at all, it'd be nice to reset the timer or something.
Maybe a UI hint in the build log could trigger the reset of the sleep?
Can you guys provide an operation for mvn install -DskipTests?
There are a number of maven plugins and tools out there for generating release notes based on git commit history and fixed issues on GitHub etc. Here are some of them:
It'd be nice to include this OOTB in our release pipelines. So I guess we need to try some of these tools and see which ones work well, generate nice HTML output and play well with GitHub issues etc.
Then if we can package it up in a docker image we can start to include it OOTB in our release pipelines (maybe making it optional via an environment variable or something) so folks can disable it if they wish?
I had a job waiting on the approval step that I left for 11 hours on DevTools OSO and the Proceed & Abort links in the jenkins console didn't seem to do anything any more. I eventually had to just kill the build.
I wonder if the build pods go unresponsive after a while?
It'd be awesome if we automatically invoked SonarQube builds whenever it's running in a developer namespace.
There are occasions where a readme update or a CI change (to, say, a Jenkins plugin) may mean a developer doesn't want a full release automatically triggered.
CD pipelines should check the PR comment to see if we have a @fabric8cd skip release comment or something similar.
The old version is scaled down too quickly, before the new version is ready. Not sure why, but it seems to be a regression.
I created a .NET app and the version used is the short git sha 0517806. When the application deployment config YAML was applied, the version changed to 517806.0, which means the image stream isn't found.
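The mangling is consistent with the all-digit short sha being coerced to a number somewhere in the templating path. Plain Groovy reproduces the exact broken value (illustrative of the failure mode, not the actual code path):

```groovy
// A short git sha that happens to be all digits is fine as a String...
def sha = '0517806'
assert sha.length() == 7

// ...but if anything along the templating path coerces it to a number,
// the leading zero is dropped and it round-trips as a float:
def coerced = (sha as Double).toString()
assert coerced == '517806.0'   // exactly the broken version observed

// Keeping the version quoted (a string) all the way through avoids this.
```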
Currently our maven Jenkinsfile library works out the release version using the Jenkins build number, which is bad if the Jenkins job is ever recreated.
We could extract the semver code that Java fabric8 projects themselves use to work out the next version.
Then call this new function from the Jenkinsfiles here https://github.com/fabric8io/fabric8-jenkinsfile-library/blob/master/maven/CanaryReleaseStageAndApprovePromote/Jenkinsfile#L25
Bonus points for adding the first unit test for the library too ;)
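A sketch of deriving the next version from existing git tags instead of the build number (pure version arithmetic only; fetching the tags and the exact semver rules of the fabric8 code are left out, and the fallback value is an assumption):

```groovy
// Given the release tags that already exist, bump the patch segment of the
// highest semver tag. Falls back to 1.0.0 for a repo with no release tags.
// Assumes version segments stay below 1000 for the comparison key.
def nextVersion(List<String> tags) {
    def versions = tags.findAll { it ==~ /v?\d+\.\d+\.\d+/ }
                       .collect { it.replaceFirst('v', '').tokenize('.')*.toInteger() }
    if (!versions) {
        return '1.0.0'
    }
    def latest = versions.max { v -> v[0] * 1_000_000 + v[1] * 1_000 + v[2] }
    return "${latest[0]}.${latest[1]}.${latest[2] + 1}".toString()
}
```

Being tag-derived, the result survives the Jenkins job being deleted and recreated, which the build-number scheme does not. This would also be an easy first unit test for the library.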
If we can't find the openshift.yml file, let's iterate through the local file system in a multi-module maven project until we find it:
https://github.com/fabric8io/fabric8-pipeline-library/blob/master/src/io/fabric8/Fabric8Commands.groovy#L496
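One possible approach, assuming the Pipeline Utility Steps plugin is available (its findFiles step globs the workspace), is a sketch like:

```groovy
// Look for openshift.yml anywhere below the workspace root, so multi-module
// maven projects work without hardcoding a single module's target path.
def resources = findFiles(glob: '**/openshift.yml')
if (resources.length == 0) {
    error 'No openshift.yml found in any module'
}
// Use the first match (or prefer the shallowest path if several modules match).
def resourcePath = resources[0].path
```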
Testing using a Jenkinsfile of:
#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')
def dummy

mavenNode {
    container('maven') {
        echo 'inside build pod'
    }
}
node {
    approve {
        room = null
        version = '1.0.0'
        console = null
        environment = 'Stage'
    }
}
The build pod is kept running until the job has finished rather than being torn down at the closing brace of the mavenNode block. This means build pods stick around during the approve step, which is a waste of resources.
@iocanel suggested trying fabric8io/kubernetes-plugin@2b4f6d8, which works great. I wonder, however, whether we instead need to mark the build pod as complete, like the OpenShift S2I build pods?
The kubernetes-plugin also allows the user to name the cloud.
Some images come with a preconfigured cloud named kubernetes (e.g. the Fabric8 Jenkins image), while others use other names like openshift (e.g. the OpenShift image) etc.
In any case, the pipelines should be able to optionally accept the cloud name as a parameter, defaulting to kubernetes if none is specified.
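The defaulting itself is trivial; a sketch (the helper name, parameter and environment-variable names are illustrative assumptions):

```groovy
// Resolve the cloud name: an explicit parameter wins, then an environment
// variable, then the conventional default used by the Fabric8 image.
def resolveCloud(String param = null, Map env = [:]) {
    return param ?: env['CLOUD_NAME'] ?: 'kubernetes'
}

// A template would then pass it through, e.g.:
// podTemplate(cloud: resolveCloud(config.cloud, env), ...) { ... }
```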
The next release of the kubernetes-plugin will allow the user to name the build pod based on the name set on the PodTemplate (pods are currently named kubernetes-xxx-yyy-zzz, which is meaningless).
So it would be great if we were able to optionally pass a name.
Since for the most part we are composing PodTemplates, I am not sure it makes sense to name by type (e.g. maven, go, nodejs), though that could possibly be a default value.
Another approach would be to name templates in the same manner as we label them (by job name and build number). This would allow us to easily correlate a build pod with a specific Jenkins build; for example, the pod would be named something like myproject-12-xxx-yyy-zzz.
Bugs, quotas and provisioning cost (it often takes a while until a PVC is bound) sometimes make working with PVCs a PITA.
I should be able to pass different template names as parameters, and if none is passed the podTemplate should be ephemeral.
The current isCI() and isCD() functions should now delegate to a getPipeline() helper method, which should lazily invoke this code: https://github.com/fabric8io/fabric8/blob/master/components/kubernetes-api/src/main/java/io/fabric8/kubernetes/api/pipelines/Pipelines.java#L34 and cache the Pipeline object (transiently!) for the lifetime of a job, lazily re-querying if it's null.
something kinda like...
import io.fabric8.kubernetes.client.DefaultKubernetesClient
import io.fabric8.kubernetes.api.pipelines.Pipeline
import io.fabric8.kubernetes.api.pipelines.Pipelines

def isCI() {
    return getPipeline().isCI()
}

def isCD() {
    return getPipeline().isCD()
}

// TODO not sure if this works ;) just trying to cache this value for later
@groovy.transform.Field transient Pipeline _pipeline = null

def getPipeline() {
    if (_pipeline == null) {
        def kubernetes = new DefaultKubernetesClient()
        def namespace = kubernetes.getNamespace()
        // TODO ensure that BRANCH_NAME and GIT_URL are populated!
        _pipeline = Pipelines.getPipeline(kubernetes, namespace, env)
    }
    return _pipeline
}
So it's easier to keep an eye on which tests passed / failed etc.
In some cases we may need to add something extra to a container without having to rebuild an image for it. Since the pod templates already leverage multiple containers, it would be nice to have a tool that lets us ask a container to share something found on its path by moving it into the workspace. Other containers could then use it if needed.
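A sketch of the idea in a pipeline, assuming a multi-container pod template where the workspace volume is mounted into every container (container names, tool names and paths below are illustrative):

```groovy
// The workspace volume is shared by all containers of the pod, so copying
// a binary out of one container's filesystem into the workspace makes it
// visible to the others. Names and paths here are illustrative.
container('tools') {
    sh 'mkdir -p ${WORKSPACE}/bin && cp /usr/bin/some-tool ${WORKSPACE}/bin/'
}
container('maven') {
    sh 'PATH=${WORKSPACE}/bin:$PATH some-tool --version'
}
```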
I just got this after a pipeline was in the approve state for a while. I'm guessing the Jenkins pod got killed:
Proceed or Abort
Resuming build at Tue Apr 18 18:50:27 UTC 2017 after Jenkins restart
Waiting to resume Unknown Pipeline node step: jenkins-slave-rcklt-5vn7r is offline
[...repeated many times...]
Waiting to resume Unknown Pipeline node step: Jenkins doesn’t have label jenkins-slave-rcklt-5vn7r
[...repeated many times...]
Steps to reproduce
Add the following to a Jenkinsfile:
def envStage = utils.environmentNamespace('my-project')
Expected
A new namespace 'my-project' is created.
Actual
'default-my-project' is created.
When using OSO we're going to restrict builds to one concurrent build per user, in which case it's safe to have a ReadWriteOnce PV for the local mvn repo for doing CD releases or snapshot builds.
However, when using OSD we probably want to use the job workspace as the local maven repository, so that we can have parallel builds running without overwriting each other or causing inconsistencies in the builds.
So maybe we need configuration to know whether to use a ReadWriteOnce volume, a ReadWriteMany volume or workspace-based persistence for builds. Some folks may want to disable persistence entirely too.
Maybe we need a ConfigMap we load to configure these kinds of things?
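The decision could collapse into a single function driven by one configuration value. A sketch (the mode names, the ConfigMap key and the mount paths are all assumptions):

```groovy
// Pick the maven-repo persistence strategy for a build. The mode would come
// from a ConfigMap (e.g. a key like 'maven.repo.mode'); mode names, key and
// paths here are assumed, not existing configuration.
def mavenRepoArgs(String mode) {
    switch (mode) {
        case 'pvc-rwo':   return '-Dmaven.repo.local=/var/m2/repository'   // single concurrent build (OSO)
        case 'workspace': return '-Dmaven.repo.local=${WORKSPACE}/.m2'     // parallel-safe (OSD)
        case 'none':      return ''                                        // persistence disabled
        default:          return '-Dmaven.repo.local=${WORKSPACE}/.m2'     // safe default
    }
}
```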
It'd be awesome to create milestones every time we do a release where there are closed issues not yet associated with a milestone.
Then we can easily see which issues got fixed in which release, all done mostly automatically (folks can always update the milestone on the issue after the release).
So how about a function, githubCreateMilestone(String version), which would:
for background see this issue:
fabric8-services/fabric8-wit#726
Essentially, if we can detect that the planner / workitem-tracker is running (e.g. via a kubernetes Service being present, or via configuration as per this issue: #74), then once we have the new REST API as per fabric8-services/fabric8-wit#726, whenever a kubernetesApply() is done and the deployment has completed we should post the necessary JSON to the REST API so that the workitem tracker can update the issue with a comment that something is ready for test etc.
Just to avoid duplicating code.
The following code from a jenkinsfile used to work:
kubernetes.pod('buildpod').withImage('<ip address>:80/shiftwork/jhipster-build')
    .withPrivileged(true)
    .withHostPathMount('/var/run/docker.sock', '/var/run/docker.sock')
    .withEnvVar('DOCKER_CONFIG', '/home/jenkins/.docker/')
    .withSecret('jenkins-docker-cfg', '/home/jenkins/.docker')
    .withSecret('jenkins-maven-settings', '/root/.m2')
    .withServiceAccount('jenkins')
    .inside {
Now however it results in an error.
hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: static io.fabric8.kubernetes.pipeline.Kubernetes.withPrivileged() is applicable for argument types: (java.lang.Boolean) values: [true]
at groovy.lang.MetaClassImpl.invokeStaticMissingMethod(MetaClassImpl.java:1503)
at groovy.lang.MetaClassImpl.invokeStaticMethod(MetaClassImpl.java:1489)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:897)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:168)
at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.methodMissing(Kubernetes.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaClassImpl.invokeMissingMethod(MetaClassImpl.java:941)
at groovy.lang.MetaClassImpl.invokePropertyOrMissing(MetaClassImpl.java:1264)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1024)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:812)
at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.invokeMethod(Kubernetes.groovy)
at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:103)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
at WorkflowScript.run(WorkflowScript:33)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor240.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:58)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Perhaps I missed a blog post, but I am unaware of any upgrade path from the previous version.
It would seem the withPrivileged() method has not been deprecated; it has simply been removed.
https://github.com/fabric8io/fabric8-jenkinsfile-library/search?utf8=%E2%9C%93&q=withPrivileged
https://github.com/fabric8io/fabric8-pipeline-library/search?utf8=%E2%9C%93&q=withPrivileged
It would be good to know how to upgrade.
Below are the relevant plugins we have installed.
These two plugins appear to be child plugins of https://github.com/jenkinsci/kubernetes-pipeline-plugin.
It seems there are several related projects; to get more clarity I've compiled the table below.
Name | Active | Has withPrivileged() | Comments
---|---|---|---
jenkins-pipeline-library | No - Deprecated | Yes |
fabric8-pipeline-library | Yes | No |
kubernetes-plugin | Yes | Yes | Kubernetes Pipeline is a Jenkins plugin which extends Jenkins Pipeline to allow building and testing inside Kubernetes pods, reusing kubernetes features like pods, build images, service accounts, volumes and secrets while providing an elastic slave pool (each build runs in new pods).
fabric8-jenkinsfile-library | Yes | No |
kubernetes-pipeline-plugin | Yes 1.4-SNAPSHOT | Yes | Uses the io.fabric8.kubernetes.pipeline package name, yet is not in the fabric8 github project. Kubernetes Pipeline is a Jenkins plugin which extends Jenkins Pipeline to provide native support for using Kubernetes pods, secrets and volumes to perform builds.
We noticed today that if we get an error in the pipeline when updating downstream projects, the entire build fails. Perhaps we don't want that; should we catch the error, log it and continue to the next project?
The error in this case was no permission to create the updateVersion branch in the downstream project.
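A sketch of catching per-project failures so one bad downstream repo doesn't fail the whole build (updateProject here is a stand-in closure for the real update logic, not existing library code):

```groovy
// Try every downstream project; remember failures instead of aborting,
// so the caller can log them all at the end (and e.g. mark the build
// UNSTABLE) rather than failing on the first permission error.
def updateDownstream(List<String> projects, Closure updateProject) {
    def failed = [:]
    projects.each { p ->
        try {
            updateProject(p)
        } catch (Exception e) {
            // e.g. no permission to create the updateVersion branch
            failed[p] = e.message
        }
    }
    return failed
}
```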
e.g. things like which branches are CD release branches versus CI branches/PRs versus developer branches (run tests + re-run apps fast) - see #3
We should also make it easy to enable/disable various features like:
I'm not sure of the perfect approach; do we use the fabric8.yml file to enable/disable those features, or use a ConfigMap?
Either way we should come up with a standard function to wrap that up, so that we can make the pipelines configurable (feature flags toggled from a nice UI or CLI tool) without users having to hack groovy source etc.
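Whichever store wins (fabric8.yml or a ConfigMap), the call sites could standardise on one helper now. A sketch (helper name and flag names are assumptions; loading the flags map once per build is left out):

```groovy
// Single lookup point for pipeline feature flags. The 'flags' map would be
// loaded once per build from fabric8.yml or a ConfigMap; the source is TBD.
def isFeatureEnabled(Map flags, String name, boolean defaultValue = false) {
    def v = flags[name]
    if (v == null) {
        return defaultValue
    }
    return v.toString().toBoolean()
}

// e.g. if (isFeatureEnabled(flags, 'sonarqube', true)) { sonarQubeScanner() }
```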
as right now I'm not sure when to use one or the other; should we just have a single class of functions?
Or maybe some are k8s related?
Using fabric8-pipeline-library@master (commit 3f84b0b).
Script approvals configuration is missing in Jenkins CI.
I've added approvals manually on the "In-process Script Approval" page, but is there a way to configure the "Signatures already approved" list at fabric8 CI/CD creation time?
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod jenkins.model.Jenkins getInstance
at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:192)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onStaticCall(SandboxInterceptor.java:142)
at org.kohsuke.groovy.sandbox.impl.Checker$2.call(Checker.java:180)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedStaticCall(Checker.java:177)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:91)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
at WorkflowScript.run(WorkflowScript:30)
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method jenkins.model.Jenkins getCloud java.lang.String
at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:178)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:119)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
at WorkflowScript.run(WorkflowScript:30)
One last:
approval: method io.fabric8.kubernetes.client.KubernetesClient services
callers:
at io.fabric8.Fabric8Commands.hasService(Fabric8Commands.groovy:637)
at io.fabric8.Fabric8Commands$hasService$1.call(Unknown Source)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at sonarQubeScanner.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/sonarQubeScanner.groovy:15)
at mavenCanaryRelease.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenCanaryRelease.groovy:62)
for background see fabric8io/fabric8-platform#65 and for an implementation see fabric8io/fabric8-platform#70
Without any changes to our Jenkinsfile, our build started to fail. In the Jenkins log we see:
Feb 27, 2017 10:49:12 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
SEVERE: Error in provisioning; slave=KubernetesSlave name: kubernetes-8ca638566b324447ad9fed48eccf8a81-33a3f3091084a, template=org.csanchez.jenkins.plugins.kubernetes.PodTemplate@6db9af81
java.lang.NullPointerException
at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:59)
at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.lambda$combine$14(PodTemplateUtils.java:118)
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:118)
at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.unwrap(PodTemplateUtils.java:164)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.getPodTemplate(KubernetesCloud.java:375)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.access$000(KubernetesCloud.java:87)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:555)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:532)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This started happening after commit 088d36f.
The Jenkinsfile is:
#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')
def localFailIfNoTests = ""
try {
localFailIfNoTests = ITEST_FAIL_IF_NO_TEST
} catch (Throwable e) {
localFailIfNoTests = "false"
}
def localItestPattern = ""
try {
localItestPattern = ITEST_PATTERN
} catch (Throwable e) {
localItestPattern = "*KT"
}
def versionPrefix = ""
try {
versionPrefix = VERSION_PREFIX
} catch (Throwable e) {
versionPrefix = "1.0"
}
def utils = new io.fabric8.Utils()
def canaryVersion = "${versionPrefix}.${env.BUILD_NUMBER}"
def label = "buildpod.${env.JOB_NAME}.${env.BUILD_NUMBER}".replace('-', '_').replace('/', '_')
mavenNode{
checkout scm
echo 'NOTE: running pipelines for the first time will take longer as build and base docker images are pulled onto the node'
container(name: 'maven') {
stage 'Build Release'
mavenCanaryRelease {
version = canaryVersion
}
stage 'Integration Test'
mavenIntegrationTest {
environment = 'Testing'
failIfNoTests = localFailIfNoTests
itestPattern = localItestPattern
}
}
}
I found that by using @Library('github.com/fabric8io/fabric8-pipeline-library@versionUpdate3f1bf454-700d-4274-9e22-3b6bab4361bc') it does build.
It would be good if there were a stable branch and perhaps more user-friendly branch names.
The pipeline library steps should allow generating gh-pages based on the Maven profiles -Pdoc-html and -Pdoc-pdf, typically replicating what is done as part of the fabric8 tools/ci-docs.sh utility.
We can then add a method to release.groovy like:
def documentation(project) {
def m = readMavenPom file: 'pom.xml'
generateWebsiteDocs {
project = project[0]
releaseVersion = project[1]
artifactId = m.artifactId
}
}
which will generate the documentation and push it to the gh-pages branch of the repo.
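A minimal sketch of what such a generateWebsiteDocs step could look like, following the proposal above. The step name, the config fields and the git/push mechanics are all assumptions; only the -Pdoc-html / -Pdoc-pdf profiles come from the issue itself:

```groovy
// Hypothetical vars/generateWebsiteDocs.groovy sketch: build the docs with
// the doc profiles, then push the output to the gh-pages branch.
// The output directory and git commands are illustrative assumptions.
def call(body) {
    // evaluate the body block and collect configuration into the object
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    sh "mvn clean install -Pdoc-html -Pdoc-pdf"
    sh """
        git clone -b gh-pages https://github.com/${config.project}.git gh-pages
        cp -r target/generated-docs/* gh-pages/
        cd gh-pages
        git add .
        git commit -m 'update website docs for ${config.releaseVersion}'
        git push origin gh-pages
    """
}
```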
The following assumption in the code is not true in any of the OpenShift deployments I have tried:
def findTagSha(OpenShiftClient client, String imageStreamName, String namespace) {
...
// latest tag is the first
TAG_EVENT_LIST:
for (def list : tags) {
The order of the tags in an ImageStream seems to be random, so picking the first tag found does not work reliably.
e.g.:
status:
dockerImageRepository: 172.30.209.124:5000/mta/simontest123
tags:
Fabric8 always picks up old image 7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3 and deploys it to staging and production, when it should have used the newer image 59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
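Since the tag order is not guaranteed, a more reliable approach would be to sort the tag events by their creation timestamp and pick the newest one. A sketch against the fabric8 OpenShift client model, where the exact accessors (status.tags, items, created, image) should be double-checked against the model classes:

```groovy
// Sketch: instead of assuming the first tag is the latest, scan all tag
// events and return the image SHA of the event with the newest "created"
// timestamp (timestamps are RFC 3339 strings, so string comparison works).
def findLatestTagSha(OpenShiftClient client, String imageStreamName, String namespace) {
    def imageStream = client.imageStreams().inNamespace(namespace).withName(imageStreamName).get()
    def newestSha = null
    def newestCreated = null
    for (tag in imageStream.status.tags) {
        for (event in tag.items) {
            if (newestCreated == null || event.created > newestCreated) {
                newestCreated = event.created
                newestSha = event.image
            }
        }
    }
    return newestSha
}
```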
e.g. we don't want to pollute the CD release Maven repo with snapshots; we only want fixed releases.
For snapshots we may want to use a separate location.
Right now a pipeline can fail to get a new version of a pod running in an environment (e.g. the pod never becomes ready, maybe due to quota issues or a missing environment-specific Service, Secret or ConfigMap).
Currently, once the apply is done, kubernetesApply() just assumes everything is fine and carries on.
It would be nice to have a better flavour of this which does the Arquillian equivalent of this line:
assertThat(kubernetesClient).deployments().pods().isPodReadyForPeriod();
Then the pipeline would wait for the pods to go green and be ready (readiness checks + liveness checks kick in); if things don't work it would fail the build.
Maybe extra bonus points would be to automatically rollback the Deployment change if the new version doesn't startup correctly?
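One way to sketch this without pulling in Arquillian is to watch the rollout after the apply and roll back on failure. The step and deployment names below are illustrative; `oc rollout status` blocks until the rollout completes or fails, and `oc rollout undo` reverts it (use `dc/` instead of `deployment/` for a DeploymentConfig):

```groovy
// Sketch: after kubernetesApply(), poll the rollout instead of assuming
// success, and roll back the change if the new version never becomes ready.
def applyAndWait(String deploymentName, String namespace) {
    try {
        timeout(time: 10, unit: 'MINUTES') {
            sh "oc rollout status deployment/${deploymentName} -n ${namespace}"
        }
    } catch (err) {
        echo "Deployment ${deploymentName} never became ready, rolling back: ${err}"
        sh "oc rollout undo deployment/${deploymentName} -n ${namespace}"
        error "Rollout of ${deploymentName} failed"
    }
}
```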
We could use branch names and naming conventions/patterns to decide which kind of build each branch gets.
Variables could define the patterns used to differentiate between the kinds of builds: e.g. branches called master or starting with release could be the default production releases; branches starting with editing- could be developer editing branches; and anything else could be assumed to be CI / PR branches.
Then if folks fork a master branch, they get a new CI build for the changes they push
This is just a quick thought whilst it's on my mind..
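The convention could be sketched as a small helper that classifies a branch by configurable patterns. The function name and the default patterns below are just the conventions suggested above, not an existing API:

```groovy
// Sketch: classify a branch by naming patterns to decide which kind of
// build to run. Patterns are overridable so teams can pick their own.
def buildKindFor(String branchName,
                 String releasePattern = /^(master|release.*)$/,
                 String editingPattern = /^editing-.*/) {
    if (branchName ==~ releasePattern) {
        return 'release'   // full production release pipeline
    } else if (branchName ==~ editingPattern) {
        return 'editing'   // developer editing branch
    }
    return 'ci'            // anything else: CI / PR build
}
```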
To get the next pom version we could use something like this in mavenCanaryRelease.groovy.
What's missing is the PR to update the next pom version number. This isn't a great approach; we could instead do what fabric8 does and base the next version on incrementing the latest git tag. That way no code changes are needed for the next version.
#!/usr/bin/groovy
def call(body) {
// evaluate the body block, and collect configuration into the object
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
body()
def flow = new io.fabric8.Fabric8Commands()
def s2iMode = flow.isOpenShiftS2I()
echo "s2i mode: ${s2iMode}"
def m = readMavenPom file: 'pom.xml'
def version
sh "git checkout -b ${env.JOB_NAME}-${env.BUILD_NUMBER}"
if (config.version){
version = config.version
sh "mvn org.codehaus.mojo:versions-maven-plugin:2.2:set -U -DnewVersion=${version}"
} else {
sh 'mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} '
m = readMavenPom file: 'pom.xml'
version = m.version
}
sh "mvn clean -e -U deploy"
if (flow.isSingleNode()){
echo 'Running on a single node, skipping docker push as not needed'
def groupId = m.groupId.split( '\\.' )
def user = groupId[groupId.size()-1].trim()
def artifactId = m.artifactId
if (!s2iMode) {
sh "docker tag ${user}/${artifactId}:${version} ${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}/${user}/${artifactId}:${version}"
}
} else {
if (!s2iMode) {
retry(3){
sh "mvn fabric8:push -Ddocker.push.registry=${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}"
}
}
}
if (flow.hasService("content-repository")) {
try {
sh 'mvn site site:deploy'
} catch (err) {
// lets carry on as maven site isn't critical
echo 'unable to generate maven site'
}
} else {
echo 'no content-repository service so not deploying the maven site report'
}
}
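The tag-based alternative mentioned above could be sketched as a small helper that increments the latest git tag instead of rewriting the pom. The function name is hypothetical, and it assumes tags of the form 1.0.3 or v1.0.3:

```groovy
// Sketch: derive the next version by incrementing the patch component of
// the latest git tag, so no pom change (and no follow-up PR) is needed.
def nextVersionFromTag() {
    // --sort=-version:refname puts the highest version tag first
    def latestTag = sh(script: "git tag --sort=-version:refname | head -n 1",
                       returnStdout: true).trim()
    if (!latestTag) {
        return "1.0.0"  // no tags yet: start from a default
    }
    def parts = latestTag.minus('v').tokenize('.')
    parts[-1] = ((parts[-1] as int) + 1).toString()
    return parts.join('.')  // e.g. "1.0.3" -> "1.0.4"
}
```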