openshift / ci-tools
DPTP Tooling
License: Apache License 2.0
This repository contains tooling used in OpenShift CI. Please refer to the documentation for details.
Those point to https://github.com/kubernetes/community/blob/master/contributors/devel/flaky-tests.md#filing-issues-for-flaky-tests which doesn't exist.
When launching a pod that is a step name subset (so `<JOB_PREFIX>-<STEP_NAME>`), we can omit duplicating `<STEP_NAME>` in the container name, which reduces annoyances when getting into those steps and debugging them (and puts some consistency in place). We should probably have conventions for the container name, but `setup`, `teardown`, and `test` are all good conventions (i.e. what we use right now in the template). Pre and post are setup and teardown; test is anything in the middle.
We should also make sure that when the step registry is launching a pod, the "user facing" container is first in the containers list, which makes `rsh`, `rsync`, `exec`, `log`, and other commands happier (simplifies debugging).
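A hypothetical sketch of what these conventions could look like (all names here are illustrative, not taken from any real job):

```yaml
# Hypothetical sketch of the proposed convention; names are illustrative.
# The pod name is <JOB_PREFIX>-<STEP_NAME> and the step name is not repeated
# inside; the user-facing container is listed first so `oc rsh`/`oc exec`
# pick it by default.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-aws-setup          # <JOB_PREFIX>-<STEP_NAME>
spec:
  containers:
  - name: test                 # user-facing container first
    image: registry.example/tests:latest
  - name: sidecar
    image: registry.example/sidecar:latest
```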
We're going to have to annotate some of the pods/secrets so that cluster bot can get auth info out, but that can wait (bot basically needs to get at the right container to get the kubeadmin).
We currently pin the `v3.9.1` tag, which was apparently deleted: openshift/api#456
I tried to fix this up but ended up in go mod hell; not quite sure what was going on.
ci-tools/cmd/ocp-build-data-enforcer/main.go
Line 291 in f34cc59
match?
TL;DR: ci-operator fails to operate when `4.3-art-latest` is used in the `tag_specification` stanza. This is because it iterates over `.Spec.Tags` of that imagestream, and `Spec` is empty/changing in `4.3-art-latest`. It looks like ci-operator should iterate over `.Status.Tags`, but it is not clear to me whether the current behavior is a bug or a feature, and whether such a change would be a risk.
Gory details:
This PR openshift/release#6254 was attempting to add a ci-operator config using the `4.3-art-latest` imagestream in its `tag_specification` stanza (advised by Steve):
tag_specification:
  name: "4.3-art-latest"
  namespace: ocp
Rehearsals of test jobs generated from such configs failed with the following error (see the full error in [1]):
parameter RELEASE_IMAGE_LATEST is required and must be specified
This was reported on Slack. We discovered that ci-operator is not building a release payload like usual, accompanied by the following log output:
No latest release image necessary, stable image stream does not include a cluster-version-operator image
(This is likely a first issue to fix: if we need a release, we should either have `RELEASE_IMAGE_*` set, or error out on `...stable image stream does not include...` instead of continuing.)
Manual inspection in the ci-operator namespace showed that their `stable` imagestreams contained no images; we expected images from `4.3-art-latest` to be tagged there. This varied a lot: in other runs, we saw e.g. only the `machine-os-content` image tagged in `stable`, and once, we even saw all expected images tagged there and ci-operator even attempted to assemble a release payload.
It looks like ci-operator iterates over `.Spec.Tags` of the source imagestream, followed by some filtering/validation based on the image's presence in its `.Status`:
ci-tools/pkg/steps/release/release_images.go
Lines 183 to 197 in 766aafa
We asked ART on Slack; Justin Pierce suggested ci-operator should use `.Status.Tags` instead of `.Spec.Tags`, but given that ci-operator now basically uses both, it's not clear to me if the change is really that simple and what risk it implies.
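For illustration, here is a minimal sketch of what iterating `.Status.Tags` instead of `.Spec.Tags` would look like. The types below are simplified stand-ins, not the real k8s.io/api or openshift/api imagestream types:

```go
package main

import "fmt"

// Simplified stand-ins for the imagestream types (NOT the real API types);
// just enough to sketch the .Spec.Tags vs .Status.Tags difference.
type TagReference struct{ Name string }
type TagEvent struct{ Tag string }
type ImageStreamSpec struct{ Tags []TagReference }
type ImageStreamStatus struct{ Tags []TagEvent }
type ImageStream struct {
	Spec   ImageStreamSpec
	Status ImageStreamStatus
}

// tagsFromStatus collects tag names from .Status.Tags, which reflects what
// was actually imported, instead of .Spec.Tags, which is empty/changing in
// imagestreams like 4.3-art-latest.
func tagsFromStatus(is *ImageStream) []string {
	var names []string
	for _, t := range is.Status.Tags {
		names = append(names, t.Tag)
	}
	return names
}

func main() {
	// Mimic 4.3-art-latest: Spec carries no tags, Status does.
	is := &ImageStream{
		Status: ImageStreamStatus{Tags: []TagEvent{{Tag: "cli"}, {Tag: "machine-os-content"}}},
	}
	fmt.Println(tagsFromStatus(is))
}
```

Whether the real fix is this simple depends on the filtering/validation against `.Status` that the current code already does.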
/cc @stevekuznetsov @lioramilbaum @droslean
[1] linebreaks mine
could not wait for template instance to be ready:
could not determine if template instance was ready:
failed to create objects:
Template.template.openshift.io "e2e-test" is invalid:
template.parameters[10]: Required value:
template.parameters[10]:
parameter RELEASE_IMAGE_LATEST is required and must be specified
Hi CI Operator team,
The OpenShift Quay QE and OpenShift InterOp teams now have a requirement to send the test report to multiple Slack channels; please add support for this.
PROW CI Docs: https://docs.ci.openshift.org/docs/how-tos/notification/
Example:
```yaml
reporter_config:
  slack:
    channel:
    - '#quay-qe1'
    - '#quay-qe2'
    - '#quay-qe3'
    job_states_to_report:
    - success
    - failure
    - error
    report_template: '{{if eq .Status.State "success"}} :rainbow: Job *{{.Spec.Job}}*
      ended with *{{.Status.State}}*. <{{.Status.URL}}|View logs> :rainbow: {{else}}
      :volcano: Job *{{.Spec.Job}}* ended with *{{.Status.State}}*. <{{.Status.URL}}|View
      logs> :volcano: {{end}}'
```
In podspec.go, the default podspec has three VolumeMounts defined. However, there are only two Volumes defined; the Volume definition for `gcs-credentials` is missing. When I run this against ci-operator configs, the jobs fail because of the missing Volume.
In RH Prow, this Volume definition is being injected somehow. I thought it might have been with Prow presets, but that doesn't seem to be the case.
```go
var defaultPodSpec = corev1.PodSpec{
	ServiceAccountName: "ci-operator",
	Containers: []corev1.Container{
		{
			Args: []string{
				"--image-import-pull-secret=/etc/pull-secret/.dockerconfigjson",
				"--gcs-upload-secret=/secrets/gcs/service-account.json",
				"--report-credentials-file=/etc/report/credentials",
			},
			Command:         []string{"ci-operator"},
			Image:           "ci-operator:latest",
			ImagePullPolicy: corev1.PullAlways,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{"cpu": *resource.NewMilliQuantity(10, resource.DecimalSI)},
			},
			VolumeMounts: []corev1.VolumeMount{
				{
					Name:      "pull-secret",
					MountPath: "/etc/pull-secret",
					ReadOnly:  true,
				},
				{
					Name:      "result-aggregator",
					MountPath: "/etc/report",
					ReadOnly:  true,
				},
				{
					Name:      "gcs-credentials",
					MountPath: cioperatorapi.GCSUploadCredentialsSecretMountPath,
					ReadOnly:  true,
				},
			},
		},
	},
	Volumes: []corev1.Volume{
		{
			Name: "pull-secret",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "registry-pull-credentials"},
			},
		},
		{
			Name: "result-aggregator",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "result-aggregator"},
			},
		},
	},
}
```
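As a minimal illustration (using simplified stand-in types, not the real corev1 API), a consistency check like the following would have flagged the mount with no backing Volume:

```go
package main

import "fmt"

// Minimal stand-ins for the corev1 fields involved (not the real API types);
// just enough to show the check that would catch a VolumeMount, such as
// "gcs-credentials", that has no matching Volume definition.
type VolumeMount struct{ Name string }
type Volume struct{ Name string }

// missingVolumes returns the names of mounts that have no backing Volume.
func missingVolumes(mounts []VolumeMount, volumes []Volume) []string {
	defined := map[string]bool{}
	for _, v := range volumes {
		defined[v.Name] = true
	}
	var missing []string
	for _, m := range mounts {
		if !defined[m.Name] {
			missing = append(missing, m.Name)
		}
	}
	return missing
}

func main() {
	// The three mounts vs. two volumes from the default podspec above.
	mounts := []VolumeMount{{"pull-secret"}, {"result-aggregator"}, {"gcs-credentials"}}
	volumes := []Volume{{"pull-secret"}, {"result-aggregator"}}
	fmt.Println(missingVolumes(mounts, volumes)) // reports gcs-credentials
}
```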
Observed the following panic in a job kicked off by cluster-bot: https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/673/build-log.txt
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x1704aee]
goroutine 1 [running]:
github.com/openshift/ci-tools/pkg/results.(*reporter).Report(0xc000e815c0, 0x1e85480, 0xc000620690)
/go/src/github.com/openshift/ci-tools/pkg/results/report.go:144 +0x2fe
main.(*options).Report(0xc0002062c0, 0x1e83ec0, 0xc000620690)
/go/src/github.com/openshift/ci-tools/cmd/ci-operator/main.go:504 +0xf7
main.main()
/go/src/github.com/openshift/ci-tools/cmd/ci-operator/main.go:201 +0x30e
Many times I have observed CI failing a test when no actual test case failed. I suspect that if CI finds certain keywords like `error` or `time out` in the output, it also fails the test case.
Specifically, can you point out why this test case has failed? https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-logging-operator/934/pull-ci-openshift-cluster-logging-operator-master-e2e-operator/1372384244810125312
So Clayton and I were looking at openshift/release#8154. After making a few small edits, the only code left in the changeset belonged to the template:
ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml
At this point, the job defined below no longer triggered for rehearsals:
https://github.com/openshift/release/blob/master/ci-operator/jobs/openshift/release/openshift-release-release-4.2-periodics.yaml#L324
Clayton suggested I make sure the rehearsal annotation was present, which it is, as you can see in the job linked directly above.
I have run into this behavior before, and thought this was expected.
In order to force a run of the job, I left a dummy variable in an interim commit:
openshift/release@c9ef6a2
This produced job log:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/8154/rehearse-8154-release-openshift-origin-installer-e2e-remote-libvirt-s390x-4.2/7
However, the following commit (with the dummy var removed) (openshift/release@16c4dbf) did not retrigger this job.
I asked Steve Kuznetsov about this as well, and he linked me to:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/8154/pull-ci-openshift-release-master-pj-rehearse/18031#1:build-log.txt%3A15
When using multi-secrets, at least in a rehearsal, the job fails with:
failed to create or restart test pod: unable to create pod: Pod "addon-prow-operator-test" is invalid: spec.volumes[2].name: Duplicate value: "test-secret"
This happens despite the fact that the job YAML doesn't reference this secret. For the moment it makes the multi-secret support unusable.
$ make build
hack/build-go.sh
go: downloading vbom.ml/util v0.0.0-20180919145318-efcd4e0f9787
../../../go/pkg/mod/github.com/!google!cloud!platform/[email protected]/util/gcs/read.go:32:2: unrecognized import path "vbom.ml/util": https fetch: Get "https://vbom.ml/util?go-get=1": dial tcp: lookup vbom.ml on 127.0.0.1:53: no such host
[ERROR] PID 2449932: hack/build-go.sh:13: `go build ./cmd/...` exited with status 1.
[INFO] Stack Trace:
[INFO] 1: hack/build-go.sh:13: `go build ./cmd/...`
[INFO] Exiting with code 1.
[ERROR] hack/build-go.sh exited with code 1 after 00h 00m 01s
make: *** [Makefile:33: build] Error 1
See fvbommel/util#7
We needed to revert the PR "Default artifacts directory, set env" because of reported breakage of some Prow jobs.
The reason is likely the interaction with the test-specific `secret:` stanza in the ci-operator config.
I'm attaching different outputs of ci-operator (revision a7bff05, before revert) local execution:
ci-operator --git-ref openshift/osde2e@master --config $OSRELEASE/ci-operator/config/openshift/osde2e/openshift-osde2e-master.yaml --target=e2e-prod-4.2 --artifact-dir /tmp --dry-run
/assign @stevekuznetsov
2020/05/01 15:35:47 error: unable to signal to artifacts container to terminate in pod format, triggering deletion: could not run remote command: unable to upgrade connection: container not found ("artifacts")
2020/05/01 15:35:47 error: unable to retrieve artifacts from pod format: could not read gzipped artifacts: unable to upgrade connection: container not found ("artifacts")
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_ci-tools/744/pull-ci-openshift-ci-tools-master-format/2348#1:build-log.txt%3A21
https://search.apps.build01.ci.devcluster.openshift.com/chart?search=error%3A+unable+to+retrieve+artifacts+from+pod+.*%3A+could+not+read+gzipped+artifacts%3A+unable+to+upgrade+connection%3A+container+not+found&maxAge=48h&context=1&type=build-log&name=.*&maxMatches=5&maxBytes=20971520&groupBy=job
The diff between the failed job and success is removing the `from:` statement, which I suspect is coming from:
https://github.com/openshift/ci-tools/blame/master/pkg/steps/source.go#L102C5-L102C5
https://github.com/openshift/ci-tools/blame/2db1b19e7a50e0b8fb690ad2ba7757f961792168/pkg/steps/source.go#L102C5-L102C5
The `baremetalds-e2e-test` step was canceled because of a timeout. The artifacts directory of the step doesn't exist in the artifacts of the job: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_release/13120/rehearse-13120-pull-ci-openshift-metal3-dev-scripts-master-e2e-metal-ipi-upgrade/1321109185345097728/artifacts/e2e-metal-ipi-upgrade/
cc @andfasano
Compare the last good and first bad jUnit files:
Broken, at 2019-12-03 21:22:13 +0000 UTC:
<testsuites>
<testsuite name="operator" tests="4" skipped="0" failures="0" time="4961.711833966">
<testcase name="Find all of the input images from ocp/4.3:${component} and tag them into the output image stream" time="16.180708508"></testcase>
<testcase name="All images are built and tagged into stable" time="2.219e-06"></testcase>
<testcase name="Create the release image containing all images built by this job" time="61.258032765"></testcase>
<testcase name="Run template e2e-aws" time="4884.272640237"></testcase>
</testsuite>
</testsuites>
Good, at 2019-12-03 09:51:52 +0000 UTC
<testsuites>
<testsuite name="operator" tests="7" skipped="0" failures="0" time="4247.361443821">
<testcase name="Find all of the input images from ocp/4.3:${component} and tag them into the output image stream" time="12.935432193"></testcase>
<testcase name="All images are built and tagged into stable" time="1.464e-06"></testcase>
<testcase name="Create the release image containing all images built by this job" time="64.139096316"></testcase>
<testcase name="Run template e2e-aws - e2e-aws container lease" time="57"></testcase>
<testcase name="Run template e2e-aws - e2e-aws container setup" time="1874"></testcase>
<testcase name="Run template e2e-aws - e2e-aws container teardown" time="523"></testcase>
<testcase name="Run template e2e-aws - e2e-aws container test" time="1694"></testcase>
</testsuite>
</testsuites>
Within https://github.com/openshift/ci-tools/blob/master/CONFIGURATION.md, clicking on a link towards "Upstream documentation" returns a 404. I didn't check every link, but quite a few no longer work.
Example: https://github.com/openshift/ci-tools/blob/master/CONFIGURATION.md#imagesnamedockerfile_path
We want to use the `repo-brancher` tool to fast-forward our code from master to our release branch (our job can be found here), but because our promotion namespace is not `ocp`, `repo-brancher` fails the check at https://github.com/openshift/ci-tools/blob/master/pkg/promotion/promotion.go#L39; in the `repo-brancher` code, I found the promotion namespace is a const (https://github.com/openshift/ci-tools/blob/master/pkg/promotion/promotion.go#L15).
Can we add a `PromotionNamespace` option to the `repo-brancher` `Options`? If the `PromotionNamespace` option is not specified, `repo-brancher` runs the current logic; if it is specified, we check whether the promotion namespace option matches `configuration.PromotionConfiguration.Namespace`. The code could be:
```go
func (o *Options) matches(configuration *cioperatorapi.ReleaseBuildConfiguration) bool {
	if isDisabled(configuration) {
		return false
	}
	promotionNamespace := extractPromotionNamespace(configuration)
	if o.PromotionNamespace != "" {
		return promotionNamespace == o.PromotionNamespace && configuration.PromotionConfiguration.Name == o.CurrentRelease
	}
	promotionName := extractPromotionName(configuration)
	return RefersToOfficialImage(promotionName, promotionNamespace) && configuration.PromotionConfiguration.Name == o.CurrentRelease
}
```
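A runnable sketch of the proposed semantics, with simplified local types (the `Config`, `Options`, and hard-coded `"ocp"` check below are stand-ins for the real ci-tools types and the official-image check, not the actual implementation):

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real ci-tools types.
type PromotionConfiguration struct{ Namespace, Name string }
type Config struct{ Promotion *PromotionConfiguration }
type Options struct{ PromotionNamespace, CurrentRelease string }

func (o *Options) matches(c *Config) bool {
	if c.Promotion == nil { // promotion disabled: never matches
		return false
	}
	// If the new option is set, compare against it; otherwise fall back to
	// the current hard-coded official-namespace check (stubbed as "ocp").
	if o.PromotionNamespace != "" {
		return c.Promotion.Namespace == o.PromotionNamespace && c.Promotion.Name == o.CurrentRelease
	}
	return c.Promotion.Namespace == "ocp" && c.Promotion.Name == o.CurrentRelease
}

func main() {
	cfg := &Config{Promotion: &PromotionConfiguration{Namespace: "custom-ns", Name: "4.6"}}
	withOption := &Options{PromotionNamespace: "custom-ns", CurrentRelease: "4.6"}
	withoutOption := &Options{CurrentRelease: "4.6"}
	fmt.Println(withOption.matches(cfg))    // custom namespace accepted when opted in
	fmt.Println(withoutOption.matches(cfg)) // current behavior: non-ocp rejected
}
```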
any comments?
Which Git release is required? Is 2.x needed, does 1.8.3 suffice, or does any version work?
Which golang version should be used? It didn't work with golang 1.11, but when I upgraded to 1.13.5, it did.
The error today is:
error: failed to load configuration: Response from configresolver != 200
We should improve this to:
/assign @AlexNPavel
Currently, there's no way to see which jobs are mandatory and which are not, unless going to a repository's config file or commenting `/test ?` on a PR.
It would be nice to have all this information centralized in one place.
Show on https://steps.ci.openshift.org/search the type of the jobs (periodic, presubmit) and if the job is optional.
As a user, I'd like for jobs to have access to multiple secrets so that individual jobs can have common secrets and individual secrets. This is to support an addon testing effort that will involve users supplying us tokens that supplant the values that we use for most of our job runs.
We in OCP storage team use cmd/pr-reminder and I'd appreciate if a PR age was not counted since the PR was created, but since it got a new comment, label, new title, new push / force-push or similar activity. It happens to me that I'm waiting for review comments to be addressed in several PRs, they get orange / red and I tend to ignore them. I then miss a new comment or a new commit. It does not need to get green, any other form of visible emphasis would be enough.
@psalajova, what do you think?
In #3068, @zaneb added support for specifying `always_run: false` in `configs/`, but it only works right when creating new job files; the logic for merging changes to existing jobs didn't get updated, so once a job has been created, changing the value of `always_run` in its config will have no effect; you have to manually update the job. (FTR, there are 80 jobs where the config specifies `always_run: false` but the job itself is currently `always_run: true`.)
We could change it to always use the value of `always_run` from the config, but that would require fixing all of the existing configs first; most of them never got updated to specify `always_run: false`. (There are currently 2804 jobs where the job specifies `always_run: false` but the config does not.)
Alternatively, we could change it to do `merged.AlwaysRun = old.AlwaysRun && new.AlwaysRun` (so that if either the config or the pre-existing job specifies `always_run: false`, then it becomes `always_run: false`). That would fix the 80 currently-broken jobs without affecting the 2804 "secretly always-run-false" jobs. Though it doesn't really fix the problem, because it would mean that you can convert existing always-run jobs to not-always-run, but you'd still need manual jobs-file editing when converting a not-always-run job back to always-run. So, meh.
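The proposed merge rule is easy to sketch; here `oldJob` stands for the pre-existing job's value and `newConfig` for the config's value (names are illustrative, not from the real merging code):

```go
package main

import "fmt"

// mergeAlwaysRun sketches the proposed rule: a job stays always_run only if
// both the pre-existing job and the config say so.
func mergeAlwaysRun(oldJob, newConfig bool) bool {
	return oldJob && newConfig
}

func main() {
	fmt.Println(mergeAlwaysRun(true, false))  // the 80 broken jobs become false
	fmt.Println(mergeAlwaysRun(false, true))  // the 2804 "secret" jobs stay false
	fmt.Println(mergeAlwaysRun(true, true))   // still always_run
}
```

Note the asymmetry described above: this rule can only ever turn always-run off, never back on, which is why converting a job back to always-run would still need manual editing.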
I was hoping we could just use the same code that's used in other places to determine the images to be promoted and get rid of this partial (re-)implementation.
Originally posted by @bbguimaraes in #3615 (comment)
See this configuration from Origin:
tests:
- as: images-artifacts
commands: '# noop, just to force the building the `artifacts` image'
container:
from: artifacts
We should build this in the `[images]` target, if we don't already.
Hi,
The `clonerefs` binary is required to run the test cases in Prow and is added to the environment in the ci-tools source here:
Line 69 in bcb9b44
Now, this comes from an amd64 image and doesn't work for non-amd64 arches. To run it in a multi-arch environment, a workaround could be to get the `clonerefs` binary using `go get`. I have been able to run `clonerefs` using `go get k8s.io/test-infra/prow/cmd/clonerefs` on a ppc64le system.
Would this be the right way to enable ci-tools to run on multi-arch systems, or should there be another way to support non-amd64 arches? Thanks.
The bot that clones the jira tickets while backporting should maintain the assignee field. e.g https://issues.redhat.com/browse/OCPBUGS-4805 is a clone of https://issues.redhat.com/browse/OCPBUGS-4101. If the assignee field is not maintained then it slips out of original assignee's list of bugs and could potentially get lost.
Is there any way to enable support for spinning up an OpenShift cluster on a specified machine via CRC (or any mechanism that yields a small, single node cluster)?
I have a fairly small set of tests that are not particularly resource intensive and I'd rather not have to go through AWS/Google Cloud/whatever.
Thanks for the consideration
According to the Kubernetes naming docs, object names can contain `.`; I believe this implies that names are valid DNS 1123 subdomains. The subdomain regexp is here. The regexp used by `api/config`, however, is for DNS 1123 labels. Secret names that contain `.`, such as quay.io `Secret`s, will fail prowgen.
I can submit a PR to use `k8s.io/apimachinery/pkg/util/validation.IsDNS1123Subdomain()`, or simply copy the regexp if there's a dependency issue.
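A quick demonstration of the difference; the regexps below are the standard RFC 1123 label and subdomain patterns as used by Kubernetes validation:

```go
package main

import (
	"fmt"
	"regexp"
)

// RFC 1123 label vs. subdomain patterns (anchored), as used by Kubernetes
// validation: a subdomain is one or more labels joined by dots.
var (
	dns1123Label     = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
	dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)
)

func main() {
	name := "quay.io" // a Secret name containing "."
	fmt.Println(dns1123Label.MatchString(name))     // false: labels forbid "."
	fmt.Println(dns1123Subdomain.MatchString(name)) // true: subdomains allow it
}
```

This is why a label-only check rejects otherwise valid Secret names like `quay.io`.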
There should be an option to use the signoff feature for the created commit.
The test-infra dco plugin that we use for kubevirt org does not have the option to whitelist the bot explicitly. Therefore dco check fails on the created PR.
We fixed that by amending the commit with a signoff and re-pushing it again. https://github.com/kubevirt/project-infra/pull/429/files#diff-e0a716b1a6d9204c23041c4e82ea7360R178
Those 3 jobs have the rehearse label:
But they are not rehearsed.
They should be, since we have changed the config file with the `cluster` params.
openshift/release#6565 updated the base image to golang-1.13 and rehearsed image builds, but it didn't rehearse jobs which depend on it, like e2e. This caused a few issues.
cc @petr-muller
When trying to run tests locally, I get the following error:
[irosenzw@IdoWork ci-tools]$ ./ci-operator --config openshift-console-master.yaml --git-ref openshift/console@master
2020/06/02 11:54:42 unset version 0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xa8 pc=0x16b590c]
goroutine 1 [running]:
main.(*options).Complete(0xc00046a000, 0xc00061ff20, 0x4)
/home/irosenzw/openshift-ci/ci-tools/cmd/ci-operator/main.go:374 +0x11c
main.main()
/home/irosenzw/openshift-ci/ci-tools/cmd/ci-operator/main.go:196 +0x1f8
Seen in an integration job run:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x13cec21]
goroutine 118 [running]:
github.com/openshift/ci-tools/pkg/load.Registry.func1(0xc00039e2c0, 0x3f, 0x0, 0x0, 0x19840e0, 0xc0003cee40, 0x0, 0x0)
/go/src/github.com/openshift/ci-tools/pkg/load/load.go:93 +0x81
path/filepath.walk(0xc000402270, 0x28, 0x19cdf00, 0xc0002049c0, 0xc000373cf8, 0x0, 0x0)
/usr/local/go/src/path/filepath/path.go:378 +0x20c
path/filepath.walk(0xc000043440, 0x20, 0x19cdf00, 0xc000204680, 0xc000373cf8, 0x0, 0x0)
/usr/local/go/src/path/filepath/path.go:382 +0x2ff
path/filepath.walk(0x7ffcfddbc352, 0x1c, 0x19cdf00, 0xc0002045b0, 0xc000373cf8, 0x0, 0xc000373cc0)
/usr/local/go/src/path/filepath/path.go:382 +0x2ff
path/filepath.Walk(0x7ffcfddbc352, 0x1c, 0xc000373cf8, 0x153df64b408bb, 0xc021590a06)
/usr/local/go/src/path/filepath/path.go:404 +0xff
github.com/openshift/ci-tools/pkg/load.Registry(0x7ffcfddbc352, 0x1c, 0x24a9800, 0xc0003cecf0, 0xc0003ced20, 0xc0003ced50, 0x0, 0x0)
/go/src/github.com/openshift/ci-tools/pkg/load/load.go:92 +0xfe
github.com/openshift/ci-tools/pkg/load.(*registryAgent).loadRegistry(0xc000151220, 0x0, 0x0)
/go/src/github.com/openshift/ci-tools/pkg/load/registryAgent.go:118 +0xdc
github.com/openshift/ci-tools/pkg/coalescer.(*coalescer).Run.func1()
/go/src/github.com/openshift/ci-tools/pkg/coalescer/coalescer.go:40 +0x6c
sync.(*Once).doSlow(0xc000442850, 0xc0004ef760)
/usr/local/go/src/sync/once.go:66 +0xe3
sync.(*Once).Do(...)
/usr/local/go/src/sync/once.go:57
github.com/openshift/ci-tools/pkg/coalescer.(*coalescer).Run(0xc0002e4ca0, 0x0, 0x0)
/go/src/github.com/openshift/ci-tools/pkg/coalescer/coalescer.go:34 +0xb8
github.com/openshift/ci-tools/pkg/load.reloadWatcher.func1(0x1982a40, 0xc0002e4ca0)
/go/src/github.com/openshift/ci-tools/pkg/load/agent_utils.go:42 +0x35
created by github.com/openshift/ci-tools/pkg/load.reloadWatcher
/go/src/github.com/openshift/ci-tools/pkg/load/agent_utils.go:41 +0x32e
/assign @AlexNPavel
`whitelist` into account. `OWNERS` file is not changed.
When using the following configuration instead of `tag_specification`, the `IMAGE_FORMAT` variable is empty in the test environment:
releases:
  latest:
    release:
      channel: stable
      version: "4.5"
This is a problem because the images that are built by ci-operator for the current run are not easily accessible.
Link to the test run that has the empty env variable: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/11443/rehearse-11443-pull-ci-openshift-knative-serverless-operator-master-4.5-upstream-e2e-aws-ocp-45/1300444362731163648/build-log.txt
Description
`oc sa create-kubeconfig` does not work any more against a 4.12 cluster:
https://bugzilla.redhat.com/show_bug.cgi?id=2108241
It has an impact on our tools because of
https://docs.openshift.com/container-platform/4.10/authentication/bound-service-account-tokens.html:
"The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours."
So we rely on `ci-secret-generator` more than ever.
The workaround before the bug is fixed (it might not be fixed ever), which needs `4.11:cli`.
However, it turns out that the generator cannot use `4.11:cli` because it generates the deprecated message in the output of `oc sa create-kubeconfig`.
Blocker before bumping to `4.11:cli`: @bear-redhat is working on it. `oc apply`: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/branch-ci-openshift-release-master-app-ci-apply/1552582797707710464#1:build-log.txt%3A18
Next steps:
Option 1: `4.11:cli`
Option 2: stop using `oc sa create-kubeconfig`. I bet `pull-ci-openshift-release-master-build-clusters` will block us on the way and we have to figure out how to fix that too. (Update: Checked the code again. Found no blocker there.)
As part of https://issues.redhat.com/browse/MCO-392, we are working on removing `machine-os-content` from the payload for 4.16+. This repo has a few references to it. From a cursory look, most of the references are to older releases, which should be fine. There are some other references I'm less sure about.
Please make sure this repo will work fine once `machine-os-content` is no longer in 4.16+ payloads.
This checks if _any_ tag is created by the commit, not just `isTagRef`.
Originally posted by @bbguimaraes in #3615 (comment)
I have a bug that is targeted at 4.10.0, and I am attempting to use BZ backporter to backport it to relevant branches. Unfortunately, I am only able to see 4.1.z and 3.11.z as options.
openshift/release#4938 added a new job together with a new template used by this new job. The rehearsal created for this job got stuck on failing to mount the template CM:
MountVolume.SetUp failed for volume "job-definition" : configmaps "rehearse-template-cluster-launch-installer-ipi-e2e-dd409aae" not found
The pj-rehearse output suggests that the CM was not even created:
time="2019-09-12T16:19:57Z" level=info msg="Rehearsing Prow jobs for a configuration PR" pr=4938
time="2019-09-12T16:20:00Z" level=info msg="templates changed" pr=4938 templates="[{ci-operator/templates/openshift/installer/cluster-launch-installer-ipi-e2e.yaml dd409aae952175ffec1a2c6f38822f919a85e1ab}]"
time="2019-09-12T16:20:00Z" level=info msg="Job has been chosen for rehearsal" diffs=" .Agent: a: '' b: 'kubernetes'" job-name=pull-ci-openshift-installer-master-e2e-ipi pr=4938 repo=openshift/installer
time="2019-09-12T16:20:00Z" level=info msg="Created a rehearsal job to be submitted" pr=4938 rehearsal-job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi target-job=pull-ci-openshift-installer-master-e2e-ipi target-repo=openshift/installer
time="2019-09-12T16:20:00Z" level=info msg="Submitting a new prowjob." job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi name=2b7aacd1-d579-11e9-9688-0a58ac100856 org=openshift pr=4938 repo=release type=presubmit
time="2019-09-12T16:20:00Z" level=info msg="Submitted rehearsal prowjob" job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi name=2b7aacd1-d579-11e9-9688-0a58ac100856 org=openshift pr=4938 repo=release type=presubmit
I have copied the commit from the problematic PR here for reference:
petr-muller/release@d69407a