
aos-cd-jobs's People

Contributors

0xmichalis adammhaile ashwindasr bbguimaraes bparees chaitanyaenr codificat dennisperiquet dobbymoodge ingvagabund jhadvig joepvd jpeeler jupierce locriandev markllama mwoodson openshift-merge-bot[bot] openshift-merge-robot richm runcom sdodson smarterclayton sosiouxme stevekuznetsov tbielawa thegreyd thiagoalessio vfreex ximinhan

aos-cd-jobs's Issues

Investigate breaking down the integration test job into two separate jobs

The integration test job today runs hack/test-integration.sh and hack/test-end-to-end{,-docker}.sh. While the latter tests normally take up to 8 minutes, they may end up hanging for more than an hour (see openshift/origin#15093). We need to investigate whether we can run the integration tests in less than an hour* so it won't make any difference billing-wise to have the additional job.

  • also assume that the tests will get slower in the future, so initially they shouldn't run anywhere near an hour.

Ensure GCE Jobs Generate Appropriate Artifacts

Today, we have artifact generation and fetching tasks for all the jobs that run off of AWS, as that only requires one hop from the Jenkins master, using the origin-ci-tool SSH configuration, onto the AWS host. We do not have the same level of artifact gathering for the GCE job, as that job uses the AWS host as an intermediary and connects to GCE from there -- so we grab e.g. the origin-node log from the AWS host, but that host is not running origin.

We need to determine whether we can get the normal artifact-gathering logic to apply cleanly to the GCE jobs (maybe by using SSH with a -J jump host, as sketched below), or whether we should just hard-code a list of artifacts to get us to a better state quickly, if we think the former will be hard to do and we do not expect to have more jobs on GCE in the future.
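
For reference, a minimal sketch of the jump-host idea, assuming the existing origin-ci-tool SSH configuration on the Jenkins master can be extended (the GCE host alias, user, and address below are hypothetical):

    # Hypothetical addition to the generated ssh_config: reach the GCE instance
    # through the existing AWS host in a single logical hop via ProxyJump.
    Host gce-node
        HostName <gce-instance-ip>
        User cloud-user
        ProxyJump openshiftdevel

    # Equivalent one-off invocation using -J from the Jenkins master:
    ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -J openshiftdevel cloud-user@<gce-instance-ip> 'sudo journalctl --no-pager --unit origin-node'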

/cc @deads2k @smarterclayton

Stream rcm scripts for stage-to-prod and use pipeline-scripts/buildlib.groovy

Testing changes before merging

Opening this as an issue for visibility, to discuss the current process, and perhaps to find actionable improvements.

I would like to understand the current process for merging changes to this repo and what measures are in place to verify changes before they affect people working on other repos that depend on the Jenkins jobs.

In the last couple of weeks I've seen errors in merge jobs of openshift-ansible:

  • introduction of a job (tox) that didn't install all of its dependencies and failed
  • a job trying to run a file that doesn't exist because of changes in the branch structure (openshift/origin#13833)
  • others that I did not keep a record of

Since bugs here affect a number of developers and PRs, I believe it is worth having a process to reduce the risk of human error.

auto-sync openshift/kubernetes

The openshift level of kubernetes carries multiple patches. Sometimes we need to be able to pull a level of kube that matches our vendor tree. We should automate the process of keeping these patches up to date by copying the UPSTREAM commits to openshift/kubernetes, then auto-creating a bump(k8s.io/kubernetes):<openshift/kubernetes sha> commit in openshift/origin.

hack/move-upstream.sh makes this possible (I think), but it's not set up to be run automatically.
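
A rough sketch of what an automated job could chain together, under the assumption that hack/move-upstream.sh prepares the rebased UPSTREAM commits (paths, branch names, and the vendoring step are illustrative, not a final design):

    # Rough sketch only; how move-upstream.sh exposes its result is an assumption.
    cd /data/src/github.com/openshift/origin
    hack/move-upstream.sh                                              # assumed to produce a branch of UPSTREAM commits
    git push git@github.com:openshift/kubernetes.git HEAD:autosync     # hypothetical sync branch
    kube_sha="$(git rev-parse HEAD)"

    # After re-vendoring k8s.io/kubernetes at ${kube_sha} in openshift/origin,
    # record the matching bump commit:
    git commit -am "bump(k8s.io/kubernetes): ${kube_sha}"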

@sttts you are well versed in bash, do you think you could put together a script that does this for us?
@stevekuznetsov once we have a script, can you wire up the job?

git rebase -i fails on empty commits

We need to find a way for devs to remove CARRY commits without force-pushing. Creating a revert of a CARRY commit as a SQUASH does not work because git rebase -i doesn't know what to do with the resulting empty commit. We may want to add logic to identify empty commits before actually doing the rebase and drop all of those commits automatically (see the sketch below).
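
One possible building block, sketched here: list commits that are already empty relative to their parent so they can be dropped from the rebase todo list automatically (pairing a CARRY commit with its SQUASH revert would need extra logic that this sketch does not attempt; the upstream ref is illustrative):

    # Print commits that introduce no changes relative to their parent; these
    # are the ones `git rebase -i` chokes on and that we could drop up front.
    upstream_ref="${UPSTREAM_REF:-origin/master}"   # illustrative
    for sha in $(git rev-list --no-merges "${upstream_ref}..HEAD"); do
        if git diff --quiet "${sha}^" "${sha}"; then
            echo "empty commit, drop before rebasing: $(git log -1 --oneline "${sha}")"
        fi
    done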

Found in #258

@stevekuznetsov @jupierce

Improve Semantics of Post-Build Tasks

Today, we run a number of post-build publishers. As only one org.jenkinsci.plugins.postbuildscript.PostBuildScript can be run as a publisher, we end up with a block of buildSteps of type hudson.tasks.Shell. This means that if we have a post-build flow like -- generate artifacts, retrieve artifacts, fetch systemd journals, deprovision cloud resources -- and one of the earlier hudson.tasks.Shell steps fails, the rest will not run. Although the larger org.jenkinsci.plugins.postbuildscript.PostBuildScript is set to run regardless of whether the job failed or not, the linear flow of hudson.tasks.Shell steps will exit early on any individual failure. We could try to address this by adding || true to our actions in these steps, but in reality we just need a way to parameterize a named_shell_action so that we don't always add set -o errexit. That way, failures will be silently ignored and all post-build tasks will run (see the sketch below).
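
For illustration, a sketch of the behaviour a parameterized named_shell_action could give us, assuming each post-build task lives in its own script (the step names and paths below are made up):

    # Run every post-build step, note any failure, but never stop the chain
    # early the way the current linear hudson.tasks.Shell block does.
    overall=0
    for step in generate-artifacts retrieve-artifacts fetch-journals deprovision; do
        if ! bash "post-build/${step}.sh"; then    # hypothetical script locations
            echo "post-build step '${step}' failed; continuing" >&2
            overall=1
        fi
    done
    exit "${overall}"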

/cc @soltysh

Clean up push-to-mirrors.sh

Stage reported as SUCCESS even though it failed

########## STARTING STAGE: SYNC ORIGIN PULL REQUEST 15034 ##########
+ [[ -s /var/lib/jenkins/jobs/kargakis_test/workspace/activate ]]
+ source /var/lib/jenkins/jobs/kargakis_test/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285
++ export PATH=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/kargakis_test/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/kargakis_test/workspace/.config
/tmp/hudson2337358816745890171.sh: line 3: PULL_REFS: unbound variable
++ set +o xtrace
########## FINISHED STAGE: SUCCESS: SYNC ORIGIN PULL REQUEST 15034 [00h 00m 00s] ##########
Build step 'Execute shell' marked build as failure
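
One way to make the banner reflect reality, sketched with illustrative names rather than the actual generated script: keep status at FAILURE until the stage body finishes under errexit/nounset, and print the banner from an EXIT trap, so an early exit (like the unbound PULL_REFS above) can never report SUCCESS.

    # Minimal sketch of a stage wrapper whose banner cannot lie about status.
    set -o errexit -o nounset -o pipefail
    stage="SYNC ORIGIN PULL REQUEST"
    status="FAILURE"
    trap 'echo "########## FINISHED STAGE: ${status}: ${stage} ##########"' EXIT

    # ... stage body runs here; any failure leaves status as FAILURE ...

    status="SUCCESS"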

fork_ami support for private repos

The initial version of the new fork_ami job works only with public repos; changing the flow to clone locally and update the remote repo with the local copy does not work, for some reason related to git+ssh. We probably want to investigate further.

Weird output in service-catalog job

During e2e tests. Example:

ha:////4MeVcH72wR3UhhDqyvcTQ7177yig6/hEwOTecrrdEgO0AAAAox+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDDI2RQXJOYpFJdU5qTaKoEttlJQdnMzAAJrJTsABbRw/aUAAAA=ha:////4Izl/FHSTYTHykrmlzZzvlpaz1KQtn8ISZGCH/weetbEAAAAjR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDAw2yTZAQAIfTy0igAAAA==• Failure [35.369 seconds]ha:////4KkXsqZwcoBRYjJhzhKwR9zNVPMps4dDi1lVCC6jyPfqAAAAjh+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2Ogn2QEAspZgwYsAAAA=ha:////4FLDFiTSrtYr0TzBc1BS5w7gH7FascK4PgOXbRGyV30+AAAAkR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2+gXFyTm2QEAI9P8iI4AAAA=
[service-catalog] walkthrough
ha:////4E0uZ+vhxmbt1sosEgabVrkGkIVzGZSiKgVaFAIjWEFYAAAAoh+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDDI2RQXJOYpFJdU5qTaKoEttlJQNnEGQWslOwD8ozaepQAAAA==/data/src/github.com/openshift/origin/cmd/service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/test/e2e/framework/framework.go:89ha:////4FLDFiTSrtYr0TzBc1BS5w7gH7FascK4PgOXbRGyV30+AAAAkR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2+gXFyTm2QEAI9P8iI4AAAA=
  ha:////4MeVcH72wR3UhhDqyvcTQ7177yig6/hEwOTecrrdEgO0AAAAox+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDDI2RQXJOYpFJdU5qTaKoEttlJQdnMzAAJrJTsABbRw/aUAAAA=ha:////4Izl/FHSTYTHykrmlzZzvlpaz1KQtn8ISZGCH/weetbEAAAAjR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDAw2yTZAQAIfTy0igAAAA==Run walkthrough-example  [It]ha:////4KkXsqZwcoBRYjJhzhKwR9zNVPMps4dDi1lVCC6jyPfqAAAAjh+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2Ogn2QEAspZgwYsAAAA=ha:////4FLDFiTSrtYr0TzBc1BS5w7gH7FascK4PgOXbRGyV30+AAAAkR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2+gXFyTm2QEAI9P8iI4AAAA=
  ha:////4E0uZ+vhxmbt1sosEgabVrkGkIVzGZSiKgVaFAIjWEFYAAAAoh+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDDI2RQXJOYpFJdU5qTaKoEttlJQNnEGQWslOwD8ozaepQAAAA==/data/src/github.com/openshift/origin/cmd/service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/test/e2e/walkthrough.go:338ha:////4FLDFiTSrtYr0TzBc1BS5w7gH7FascK4PgOXbRGyV30+AAAAkR+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDCw2+gXFyTm2QEAI9P8iI4AAAA=

  ha:////4MeVcH72wR3UhhDqyvcTQ7177yig6/hEwOTecrrdEgO0AAAAox+LCAAAAAAAAP9b85aBtbiIQT2jNKU4P0+vIKc0PTOvWC8xrzgzOT8nv0gvODO3ICfVoyQ3xy+/JNU2Yj/Tagmf50wMjD4M7CWJ6SCJEgYhn6zEskT9nMS8dP3gkqLMvHTriiIGKaihyfl5xfk5qXrOEBpkDgMEMDIxMFQUlDDI2RQXJOYpFJdU5qTaKoEttlJQdnMzAAJrJTsABbRw/aUAAAA=failed to wait ClusterServiceBroker to be ready
  Expected error:
      <*errors.errorString | 0xc420260c30>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition

package-dockertested: skopeo version skew breaks pre-flight checks

We install docker using the rhel7next* suite of repos, which are synced from the latest RHEL 7 Extras compose in Brew. Recently, this process has been bringing in skopeo-containers as a dependency:

$ sudo yum --disablerepo=\* --enablerepo=rhel7next\* install docker
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
--> Processing Dependency: docker-client = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: docker-common = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: docker-rhel-push-plugin = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: container-selinux >= 2:2.12-2 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: oci-register-machine >= 1:0-3.10 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: skopeo-containers for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: libseccomp.so.2()(64bit) for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.15-1.git583ca40.el7 will be installed
---> Package docker-client.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package docker-common.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package docker-rhel-push-plugin.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package libseccomp.x86_64 0:2.3.1-2.el7 will be installed
---> Package oci-register-machine.x86_64 1:0-3.11.gitdd0daef.el7 will be installed
---> Package oci-systemd-hook.x86_64 1:0.1.7-4.gite533efa.el7 will be installed
--> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.7-4.gite533efa.el7.x86_64
---> Package skopeo-containers.x86_64 1:0.1.20-2.el7 will be installed
--> Running transaction check
---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================
 Package                                 Arch                   Version                                     Repository                        Size
===================================================================================================================================================
Installing:
 docker                                  x86_64                 2:1.12.6-32.git88a4867.el7                  rhel7next-extras                  14 M
Installing for dependencies:
 container-selinux                       noarch                 2:2.15-1.git583ca40.el7                     rhel7next-extras                  29 k
 docker-client                           x86_64                 2:1.12.6-32.git88a4867.el7                  rhel7next-extras                 3.3 M
 docker-common                           x86_64                 2:1.12.6-32.git88a4867.el7                  rhel7next-extras                  76 k
 docker-rhel-push-plugin                 x86_64                 2:1.12.6-32.git88a4867.el7                  rhel7next-extras                 1.5 M
 libseccomp                              x86_64                 2.3.1-2.el7                                 rhel7next                         56 k
 oci-register-machine                    x86_64                 1:0-3.11.gitdd0daef.el7                     rhel7next-extras                 1.0 M
 oci-systemd-hook                        x86_64                 1:0.1.7-4.gite533efa.el7                    rhel7next-extras                  30 k
 skopeo-containers                       x86_64                 1:0.1.20-2.el7                              rhel7next-extras                 7.9 k
 yajl                                    x86_64                 2.0.4-4.el7                                 rhel7next                         39 k

Transaction Summary
===================================================================================================================================================
Install  1 Package (+9 Dependent packages)

This means that when pre-flight checks in the installer try to install and use skopeo, they need to grab the bleeding-edge version from the rhel7next* repositories as well; otherwise they fail like so:

    Failure summary:
     
      1. Host:     localhost
         Play:     Verify Requirements
         Task:     openshift_health_check
         Message:  One or more checks failed
         Details:  check "docker_image_availability":
                   Some dependencies are required in order to check Docker image availability.
                   Error: Package: 1:skopeo-0.1.19-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                              Requires: skopeo-containers = 1:0.1.19-1.el7
                              Installed: 1:skopeo-containers-0.1.20-2.el7.x86_64 (@rhel7next-extras)
                                  skopeo-containers = 1:0.1.20-2.el7
                              Available: 1:skopeo-containers-0.1.17-0.7.git1f655f3.el7.x86_64 (oso-rhui-rhel-server-extras)
                                  skopeo-containers = 1:0.1.17-0.7.git1f655f3.el7
                              Available: 1:skopeo-containers-0.1.17-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                                  skopeo-containers = 1:0.1.17-1.el7
                              Available: 1:skopeo-containers-0.1.18-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                                  skopeo-containers = 1:0.1.18-1.el7
                              Available: 1:skopeo-containers-0.1.19-1.el7.x86_64 (oso-rhui-rhel-server-extras)
                                  skopeo-containers = 1:0.1.19-1.el7
                   
     
    The execution of "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml"
    includes checks designed to fail early if the requirements
    of the playbook are not met. One or more of these checks
    failed. To disregard these results, you may choose to
    disable failing checks by setting an Ansible variable:
     
       openshift_disable_check=docker_image_availability
     
    Failing check names are shown in the failure details above.
    Some checks may be configurable by variables if your requirements
    are different from the defaults; consult check documentation.
    Variables can be set in the inventory or passed on the
    command line using the -e flag to ansible-playbook.

We should probably add an oct prepare skopeo command (oct should be well suited for this, but maybe we want a more generic oct prepare package?). Then, we need to add logic to the package-dockertested job to install skopeo from rhel7next* with the new command, as it does for docker (see the sketch below). Furthermore, we need to update the call to the update-dockertested-repo.sh script to add the package containing skopeo to the dockertested repo. Then, we can finally update the job configuration for ami_build_origin_int_rhel_base so that new base AMIs install the correct version of skopeo.
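
For illustration, the host-side step that the new command would need to boil down to is roughly the following, mirroring how docker is installed above (the exact oct plumbing is still to be designed):

    # Install skopeo and skopeo-containers from the same bleeding-edge repos
    # that docker comes from, so the pre-flight check sees matching versions.
    sudo yum --disablerepo=\* --enablerepo=rhel7next\* install -y skopeo skopeo-containers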

Github URL incorrect in reconciliation email

The GitHub URL needs "tree" in the path and should start with https://.

OIT has detected a change in the Dockerfile for openshift3/ose-ansible
Source file: github.com/openshift/openshift-ansible/images/installer/Dockerfile.rhel7
This has been automatically reconciled and the new file can be seen here:
http://pkgs.devel.redhat.com/cgit/rpms/aos3-installation-docker/tree/Dockerfile?id=a2b02117af95716e20f383c27ef984effaf41800
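
For comparison, the corrected source link would presumably look like this (assuming the default branch):

    https://github.com/openshift/openshift-ansible/tree/master/images/installer/Dockerfile.rhel7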

sjb: +o errexit is not respected in host scripts

https://ci.openshift.redhat.com/jenkins/view/All/job/ami_build_origin_int_rhel_fork/16/consoleFull#-206914687858b6e51eb7608a5981914356

########## STARTING STAGE: MAKE A TRELLO COMMENT ##########
+ [[ -s /var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/activate ]]
+ source /var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345
++ export PATH=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/.config
+ set +o nounset +o errexit
+ [[ -n <none> ]]
+ trello comment 'A fork ami has been created for this card: `_16`' --card-url '<none>'
++ export status=FAILURE
++ status=FAILURE
+ set +o xtrace
########## FINISHED STAGE: FAILURE: MAKE A TRELLO COMMENT ##########
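
One hedged workaround, sketched against the log above: guard the call against the literal <none> placeholder (which the [[ -n <none> ]] test lets through) and tolerate a failure explicitly instead of relying on errexit being off (the variable name is illustrative):

    if [[ "${trello_card_url:-<none>}" != '<none>' ]]; then
        trello comment 'A fork ami has been created for this card: `_16`' --card-url "${trello_card_url}" || true
    fi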

cc: @stevekuznetsov

In GCE CI testing, don't allow stage binaries to be paired with master images?

Build break openshift/origin#16882 was caused by a binary under test built from stage being paired with incompatible newer docker images from master.

I understand that on GCE, a binary under test may see older docker images, and that this artifact of the setup helps somewhat in demonstrating system upgradability without requiring lockstep.

@smarterclayton is the intention to do this the other way around as well? If not, can we have separate sets of images for separate branches in GCE?

Lots of Logs Not Collected From systemd Journal

We want to add a new way to gather artifacts where we capture the output of an arbitrary command, so that we can collect all sorts of things, including but not limited to (see the sketch after this list):

  • journals for:
    • docker
    • origin master
    • origin node(s)
  • yum list installed
  • ???
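
A minimal sketch of what the command-output collection could look like on the remote host (the unit names, artifact directory, and ARTIFACT_DIR variable are illustrative):

    artifact_dir="${ARTIFACT_DIR:-/tmp/artifacts}"   # illustrative location
    mkdir -p "${artifact_dir}"
    for unit in docker origin-master origin-node; do
        sudo journalctl --no-pager --unit "${unit}" > "${artifact_dir}/${unit}.journal" || true
    done
    yum list installed > "${artifact_dir}/yum-list-installed.txt" || true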

sjb/hack/determine_install_upgrade_version.py errors

https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_install_update/1489/console

+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890
++ export PATH=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/.config
++ mktemp
+ script=/tmp/tmp.M96xyH5KQb
+ cat
+ chmod +x /tmp/tmp.M96xyH5KQb
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.M96xyH5KQb openshiftdevel:/tmp/tmp.M96xyH5KQb
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "/tmp/tmp.M96xyH5KQb"'
+ cd /data/src/github.com/openshift/aos-cd-jobs
++ cat ./PKG_NAME
+ pkg_name=origin
+ [[ origin == \o\r\i\g\i\n ]]
+ deployment_type=origin
+ echo origin
++ cat ORIGIN_BUILT_VERSION
+ sudo python sjb/hack/determine_install_upgrade_version.py origin-3.6.0-0.alpha.2.433.d922159.x86_64 --dependency_branch master
Traceback (most recent call last):
  File "sjb/hack/determine_install_upgrade_version.py", line 126, in <module>
    available_pkgs = sort_pkgs(available_pkgs)
  File "sjb/hack/determine_install_upgrade_version.py", line 73, in sort_pkgs
    exceptional_pkg["original_pkg"] = copy.deepcopy(pkg)
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 298, in _deepcopy_inst
    state = deepcopy(state, memo)
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "/usr/lib64/python2.7/copy.py", line 329, in _reconstruct
    y = callable(*args)
  File "/usr/lib64/python2.7/copy_reg.py", line 93, in __newobj__
    return cls.__new__(cls, *args)
TypeError: object.__new__(thread.lock) is not safe, use thread.lock.__new__()
Exception AttributeError: AttributeError("'YumRepository' object has no attribute '_sack'",) in <bound method YumRepository.__del__ of <yum.yumRepo.YumRepository object at 0x2cc5590>> ignored
Exception AttributeError: AttributeError("'YumSqlitePackageSack' object has no attribute 'primarydb'",) in <bound method YumSqlitePackageSack.__del__ of <yum.sqlitesack.YumSqlitePackageSack object at 0x2cc54d0>> ignored
++ export status=FAILURE
++ status=FAILURE
+ set +o xtrace
########## FINISHED STAGE: FAILURE: INSTALL ORIGIN [00h 00m 03s] ##########
