
soe-ci's Introduction

Continuous Integration Scripts for Satellite 6


  • Author: Janine Eichler
  • Email: [email protected]
  • Revision: 0.4 (introduces the Jenkins Pipeline)

  • Domain Architect: Eric Lavarde
  • Email: [email protected]
  • Consultant: Patrick C. F. Ernzer
  • Email: [email protected]
  • Date: February to November 2016
  • Revision: 0.3
  • Satellite 6.2 is now a minimum requirement, 6.1 will not work.
  • Satellite 6.3, 6.4 and 6.5 have been tested.
  • As a general rule, the latest version of Satellite is the one Patrick develops against.

Standard Operating Environment Overview

It is extremely helpful to read the following blog posts, which explain the concepts behind this repo, before attempting an implementation.

Introduction

Continuous Integration for Infrastructure (CII) is a process by which the Operating System build ("build") component of a Standard Operating Environment (SOE) can be rapidly developed, tested and deployed.

A build is composed of the following components:

  • Red Hat provided base packages
  • 3rd party and in-house developed packages
  • Deployment instructions in the form of kickstart templates
  • Configuration instructions in the form of puppet modules
  • Test instructions in the form of BATS scripts

The CII system consists of the following components:

  • A git repository, containing the 3rd party and in-house packages, kickstarts, puppet modules and BATS scripts. This is where development of the build takes place.
  • A Jenkins instance. This is responsible for building artefacts such as RPMs and Puppet modules, pushing artefacts into the Red Hat Satellite, and orchestrating and reporting tests.
  • Red Hat Satellite 6. This acts as the repository for Red Hat-provided and 3rd party packages, kickstarts and puppet modules. The Foreman module is also used to deploy test clients.
  • A virtualisation infrastructure to run test clients. I have used KVM/Libvirt, VMware and RHEV in different engagements.

The architecture is shown in this yEd diagram.

Setup

The following steps should help you get started with CII.

Jenkins Server

NB I have SELinux enabled on the Jenkins server and it poses no problems.

Installation

  • Install a standard RHEL 7 server with a minimum of 4GB RAM, 50GB available in /var/lib/jenkins and 10GB available in /var/lib/mock if you intend to do non-CI-triggered mock builds (highly recommended for debugging). It's fine to use a VM for this.
  • verify with timedatectl that your timezone is set correctly (for correct timestamps in Jenkins).
  • Register the server to RHN, enabling the RHEL7 base and rhel-7-server-satellite-tools repos. You need the Satellite Tools repo for puppet.
  • Configure the server for access to the EPEL and Jenkins repos.
    • note that for EPEL 7, in addition to the 'optional' repository (rhel-7-server-optional-rpms), you also need to enable the 'extras' repository (rhel-7-server-extras-rpms).
  • Install httpd, mock, createrepo, git, nc and puppet on the system. All but mock are available from the standard RHEL repos, so they will install with a plain yum; mock is available from EPEL.
    • [root@jenkins ~]# yum install httpd mock createrepo git nc puppet
  • Ensure that httpd is enabled, running and reachable.
[root@jenkins ~]# systemctl enable httpd ; systemctl start httpd
[root@jenkins ~]# firewall-cmd --get-active-zones
[root@jenkins ~]# firewall-cmd --zone=public --add-service=http --permanent
[root@jenkins ~]# firewall-cmd --zone=public --add-service=https --permanent
[root@jenkins ~]# firewall-cmd --reload
[root@jenkins ~]# firewall-cmd --zone=public --list-all
  • Configure mock by copying the rhel-7-x86_64.cfg or rhel-6-x86_64.cfg file to /etc/mock on the Jenkins server and setting MOCK_CONFIG for the relevant Jenkins job.
    • edit the file and replace the placeholder YOUROWNKEY with your key as found in the /etc/yum.repos.d/redhat.repo file on the Jenkins server.
    • please see this post on the Satellite blog for a more detailed explanation of mock with Satellite 6.
    • make sure the baseurl points at your Satellite server. The easiest way to do this is to copy the relevant repo blocks from the Jenkins server's /etc/yum.repos.d/redhat.repo.
    • if your Jenkins server is able to access the Red Hat CDN, you can leave the baseurls pointing at https://cdn.redhat.com
    • if you are getting mock errors related to systemd-nspawn, add config_opts['use_nspawn'] = False to the relevant mock config files.
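For orientation, here is a minimal sketch of one repo section as it might appear inside the config_opts['yum.conf'] string of the mock config; the hostname, pulp path, CA path and YOUROWNKEY certificate names are placeholders to be replaced with the values from your own redhat.repo:

[rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7 Server (RPMs)
# copy the real baseurl from /etc/yum.repos.d/redhat.repo on the Jenkins server
baseurl=https://satellite.example.com/pulp/repos/ACME/Library/content/dist/rhel/server/7/7Server/x86_64/os
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslcacert=/etc/rhsm/ca/katello-server-ca.pem
sslclientkey=/etc/pki/entitlement/YOUROWNKEY-key.pem
sslclientcert=/etc/pki/entitlement/YOUROWNKEY.pem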
  • Install jenkins, tomcat and Java. If you have setup the Jenkins repo correctly you should be able to simply use yum.
    • [root@jenkins ~]# yum install jenkins tomcat java
  • Ensure that jenkins is enabled, running and reachable.
[root@jenkins ~]# systemctl enable jenkins ; systemctl start jenkins
[root@jenkins ~]# firewall-cmd --zone=public --add-port="8080/tcp" --permanent
[root@jenkins ~]# firewall-cmd --reload
[root@jenkins ~]# firewall-cmd --zone=public --list-all
  • Now that Jenkins is running, browse to its console at http://jenkinsserver:8080/

  • Select the 'Manage Jenkins' link, followed by 'Manage Plugins'. You will need to add the following plugins:

  • Select 'Configure System'

    • Enable 'Environment variables' in the Global properties section and click Save (there is no need to add any). Failing to enable this property leads to #48
  • Restart Jenkins

  • Add the jenkins user to the mock group (usermod -a -G mock jenkins). This will allow Jenkins to build RPMs.

  • Create /var/www/html/pub/soe-repo and /var/www/html/pub/soe-puppet and assign their ownership to the jenkins user. These will be used as the upstream repositories from which artefacts are published to the satellite.

    • Create /var/www/html/pub/soe-puppet-only as well; it serves the puppet-only workflow. Assign its ownership to the jenkins user too.
    • The pipeline handles both full builds and puppet-only builds via variables, so set up all three directories.
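For example (default paths; this assumes the jenkins user and group exist once Jenkins is installed):

[root@jenkins ~]# mkdir -p /var/www/html/pub/{soe-repo,soe-puppet,soe-puppet-only}
[root@jenkins ~]# chown -R jenkins:jenkins /var/www/html/pub/{soe-repo,soe-puppet,soe-puppet-only}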
  • su to the jenkins user (su jenkins -s /bin/bash) and use ssh-keygen to create an ssh keypair. This keypair will be used for authentication to both the git repository and the satellite server.
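For example (an empty passphrase keeps the Jenkins jobs non-interactive; adapt to your security policy):

[root@jenkins ~]# su jenkins -s /bin/bash
bash-4.2$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa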

Jenkins Jobs

Overview

First of all, let's look at the bigger picture. You'll create at least two jobs. One is solely responsible for polling your SCM; when changes are detected, it triggers the second job. The second job then takes care of building, pushing to Satellite and running the tests. It is important to note that these steps (building the RPMs and/or puppet modules, pushing them to Satellite, running the tests, etc.) are performed by a Jenkins Pipeline, i.e. code written in Groovy that is part of the soe-ci git repository. Documentation for the pipeline can be found here. We use a scripted pipeline, not the declarative one.

The configuration options will be explained later on.

In general you might want several of these job pairs, e.g. one pair for EL6 and one for EL7, or one building only puppet modules and another doing the "full" build including RPMs.

Job parameters and what they do

Right now a couple of job parameters are used which you need to configure:

  • SOE_CI_REPO_URL: the git repository URL of your soe-ci project
  • SOE_CI_BRANCH: the branch containing the Jenkinsfile and the config files used to build the RPMs and puppet modules
  • CREDENTIALS_ID_SOE_CI_IN_JENKINS: the ID of the credentials configured in Jenkins for accessing the source code of the soe-ci project in git
  • ACME_SOE_REPO_URL: the git repository URL to check the 'acme-soe' project out from
  • ACME_SOE_BRANCH: the branch to check out of the 'acme-soe' repo
  • CREDENTIALS_ID_ACME_SOE_IN_JENKINS: analogous to CREDENTIALS_ID_SOE_CI_IN_JENKINS; can be the same, depending on your setup
  • RHEL_VERSION: indirectly configures two things: which mock config to use (the pattern is /etc/mock/<RHEL_VERSION>-x86_64.cfg) and which config file the pipeline reads its environment parameters from (<RHEL_VERSION>-script-env-vars-puppet-only.groovy for a PUPPET_ONLY build, <RHEL_VERSION>-script-env-vars-rpm.groovy for an RPM and puppet module build)
  • REBUILD_VMS: whether or not to reinstall the test VMs before running the tests
  • POWER_OFF_VMS_AFTER_BUILD: whether or not to power off the VMs after the build; the build result (successful / failed) is not taken into consideration
  • PUPPET_ONLY: whether or not only puppet modules should be built and pushed. This reduces the duration of the build significantly (provided you update a CV that only contains puppet modules), however it ignores RPMs completely
  • CLEAN_WORKSPACE: whether or not to clean the Jenkins workspace before the job execution
  • VERBOSE: whether or not to run the executed scripts in verbose mode, i.e. with 'bash -x' instead of plain 'bash'

Create the jobs

Create a Job which runs the pipeline

  • Create a directory matching the job name in /var/lib/jenkins/jobs (e.g. /var/lib/jenkins/jobs/soe-el7) and copy the config-jenkinsfile.xml file into it as /var/lib/jenkins/jobs/<job-name>/config.xml. Make sure the jenkins user is the owner of both.
  • Reload the configuration from disk using 'Manage Jenkins -> Reload Configuration from Disk'.
  • Check that the build plan is visible and correct via the Jenkins UI; you will most likely need to adapt the parameter values to your environment.
    • Make sure that in the "Pipeline" section of the job configuration the "Lightweight checkout" is ticked and that the value for "Script Path" points to the "Jenkinsfile" in your repository.
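Put together, and assuming the job name soe-el7 and a checkout of soe-ci under /root (both placeholders), the directory setup looks something like:

[root@jenkins ~]# mkdir /var/lib/jenkins/jobs/soe-el7
[root@jenkins ~]# cp /root/soe-ci/config-jenkinsfile.xml /var/lib/jenkins/jobs/soe-el7/config.xml
[root@jenkins ~]# chown -R jenkins:jenkins /var/lib/jenkins/jobs/soe-el7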

Create a job which polls the SCMs and then triggers the previously created job

  • Create a directory matching the job name in /var/lib/jenkins/jobs (e.g. /var/lib/jenkins/jobs/scm-poll-for-soe-el7) and copy the config.xml file into it. Make sure the jenkins user is the owner of both.
  • Reload the configuration from disk using 'Manage Jenkins -> Reload Configuration from Disk'.
  • Check that the build plan is visible and correct via the Jenkins UI; you will most likely need to adapt the parameter values to your environment.

How do I change the pipeline

It's simple: the pipeline is written in Groovy. If you want to test your changes before pushing multiple commits to the repository, you can simply create a new job that is a copy of your "second" job (remember, there are two jobs: one that polls and one that builds), and in the "Pipeline" section change the "Definition" from "Pipeline Script from SCM" to "Pipeline Script". Copy and paste your pipeline into the box (it's a Groovy sandbox), and you're good to go. Remember this is for testing only. Once you're done, push your changes accordingly and delete the job you created for testing purposes.

Git Repository

  • Clone the two git repos (soe-ci and acme-soe).
  • Push these to a private git remote (or branch/fork on github).
  • Edit the build plan on your Jenkins instance so that the two SCM checkouts (one for acme-soe, the other for soe-ci) point to your private git remote - you will need to edit both of these.
  • Make sure to set up the files script-env-vars.groovy, script-env-vars-puppet-only.groovy and script-env-vars-rpm.groovy.
    • be sure to use a different PUPPET_REPO_ID for the full and puppet-only builds.
  • Maintain your pipeline script Jenkinsfile in soe-ci.git
  • Commit and push to git

Satellite 6

  • Install and register a Red Hat Satellite 6 as per the instructions.
  • Enable the following repos: RHEL 7 Server Kickstart 7Server, RHEL 7 Server RPMs 7Server, RHEL 7 Server - RH Common RPMs 7Server
  • Create a sync plan that does a daily sync of the RHEL product
  • Do an initial sync
  • Create a product called 'ACME SOE'
  • Create two Puppet repos: one with an upstream repo of http://jenkinsserver/pub/soe-puppet (full builds) and one with an upstream of http://jenkinsserver/pub/soe-puppet-only (the puppet-only workflow)
  • Create an RPM repository called 'RPMs' with an upstream repo of http://jenkinsserver/pub/soe-repo
  • Do NOT create a sync plan for the ACME SOE product. This will be synced by Jenkins when needed.
    • keep an eye on RHBZ #1132980 if you use a web proxy at your site to download packages to the Satellite.
    • see here or here for a workaround until this is fixed.
  • Take a note of the repo IDs for the Puppet and RPMs repos. You can find these by hovering over the repository names in the Products view on the Repositories tab. The digits at the end of the URL are the repo IDs.
  • Create a jenkins user on the satellite.
  • Configure hammer for passwordless usage by creating a ~jenkins/.hammer/cli.modules.d/foreman.yml file (older Satellite versions use ~jenkins/.hammer/cli_config.yml); see the sketch after this list. More details here.
  • Copy over the public key of the jenkins user on the Jenkins server to the jenkins user on the satellite and ensure that jenkins on the Jenkins server can do passwordless ssh to the satellite.
  • Configure a Compute Resource on the satellite - I use libvirt, but most people are using VMWare or RHEV. This will be used to deploy test machines.
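A minimal sketch of that hammer config file, with hostname and credentials as placeholders:

:foreman:
  :host: 'https://satellite.example.com/'
  :username: 'jenkins'
  :password: 'changeme'

And copying the key over can be as simple as running, as the jenkins user on the Jenkins server:

[jenkins@jenkins ~]$ ssh-copy-id jenkins@satellite.example.com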

Bootstrapping

In order to create a Content View on the satellite, you need some initial content. That can be generated by Jenkins.

Now manually trigger both

  • a normal build
  • a puppet-only build

This will fail; however, it will create some content in the output directories by building the demo RPMs and Puppet modules. Check that these are available, then do the following tasks:

  • On the satellite, do a manual sync of your ACME SOE product. Check that it syncs correctly and you have got the RPMs and puppet modules that Jenkins built for you.
  • Add the ACME SOE RPM and Puppet repos to the Content View, along with the RHEL 7 RPMs and RHEL 7 Common repos, and any third party puppet modules that are needed.
  • Publish the Content View - ensure that it contains your RPMs and puppet modules.
  • Create a lifecycle environment for your test clients to live in. I called mine 'SOE Test'. You will need the ID of this environment; most likely it will be 2, or you can find it with 'hammer lifecycle-environment list --organization="Default_Organization"'
  • Create an activation key that provides access to the RHEL 7 RPMs, RHEL 7 Common, RPMS, and Puppet repos. (you don't need access to the kickstart repo after installation)
  • Create a hostgroup (I called mine 'Test Servers') that deploys machines on to the Compute Resource that you configured earlier, and uses the activation key that you created. Create a default root password and make a note of it.
  • Create a couple of initial test servers and deploy them. Ensure that they can see your private RPM and puppet repositories as well as the Red Hat repositories.
    • If you plan to use the conditional VM build feature, edit the comment field of your test host(s) with the names of the Puppet modules, RPM packages and/or kickstart files that are relevant to this specific host, each surrounded by '#'. E.g. if the 'ssh' module is modified, a host will only be rebuilt and tested if its comment field contains the string '#ssh#'.
  • Create one or two Host Collections and configure TESTVM_HOSTCOLLECTION in script-env-vars-puppet-only.groovy and script-env-vars-rpm.groovy (see the sketch below)
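hammer can create the host collections; a minimal sketch (the collection name and organization are placeholders, and options vary somewhat between hammer versions, so check hammer host-collection --help):

[root@satellite ~]# hammer host-collection create --name "Test Servers el7" --organization "Default_Organization"

Afterwards add your test VMs to the collection (via the WebUI or hammer host-collection add-host) and put the collection name into TESTVM_HOSTCOLLECTION.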

FIXME: add instructions on HC

Getting Started

At this point, you should be good to go. In fact Jenkins may have already kicked off a build for you when you pushed to github.

Develop your build in your checkout of acme-soe. Software that you want packaged goes in 'rpms', puppet modules in 'puppet' and BATS tests in 'tests'. You MUST update versions (in specfiles and metadata.json files) whenever you make a change, otherwise Satellite 6 will not pick up that you have new versions, even though Jenkins will have repackaged them.

COMING SOON

soe-ci's People

Contributors

abradshaw, aldavud, ericzolf, evgeni, ggatward, hhenkel, jeichler, nstrug, opuk, pcfe


soe-ci's Issues

buildtestvms.sh requires vmtools

buildtestvms.sh is using "hammer host reboot" which needs vmtools (either vmware-tools or open-vm-tools) to be installed and active.

If OTOH you use hammer host stop followed by hammer host start, you can power cycle a VM hard without needing the vmtools.

As the VM is going to be rebuilt anyway, we might as well power cycle it hard and do without the requirement for vmtools (and save the couple of seconds the VM would take to cleanly shut down).
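That hard power cycle could look like this (the host name is a placeholder):

# hammer host stop --name testvm01.example.com
# hammer host start --name testvm01.example.com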

[enh] add support for multiple mock configs

currently mock is called without the -r flag. This means that only the release that the symlink /etc/mock/default.cfg points to can be built.

change the scripts to have one more param in the jenkins job so as to be able to select the mock config (e.g. rhel-6-x86_64, rhel-7-x86_64)
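With such a parameter in place, the mock call would look something like this (package name is a placeholder):

# mock -r rhel-7-x86_64 --rebuild mypackage-1.0-1.el7.src.rpm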

publishcv.sh assumes that hammer content-view info lists CV versions in chronological order

In Sat 6.2.6 I just noticed that CV versions are not necessarily listed in chronological order.

This breaks publishcv.sh (lines 39 through 42)

Example

# hammer content-view info --name "cv-soe-testing-6Server" --organization-id 1 | sed -n "/Versions:/,/Components:/p"
Versions:
 1) ID:        744
    Version:   74.0
    Published: 2017/01/10 13:15:34
2) ID:        739
    Version:   72.0
    Published: 2017/01/10 07:27:22
3) ID:        742
    Version:   73.0
    Published: 2017/01/10 12:10:11
4) ID:        696
    Version:   61.0
    Published: 2017/01/04 12:50:57
5) ID:        697
    Version:   62.0
    Published: 2017/01/04 15:53:22
6) ID:        700
    Version:   63.0
    Published: 2017/01/04 16:40:46
7) ID:        701
    Version:   64.0
    Published: 2017/01/04 17:30:44
8) ID:        703
    Version:   65.0
    Published: 2017/01/04 18:37:44
9) ID:        704
    Version:   66.0
    Published: 2017/01/04 19:14:22
10)ID:        705
    Version:   67.0
    Published: 2017/01/05 11:08:41
11)ID:        708
    Version:   68.0
    Published: 2017/01/05 13:06:46
12)ID:        709
    Version:   69.0
    Published: 2017/01/05 14:24:48
13)ID:        711
    Version:   70.0
    Published: 2017/01/05 15:25:52
14)ID:        728
    Version:   71.0
    Published: 2017/01/09 18:27:05
Components:
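One possible workaround is to sort the versions instead of trusting the listing order. A sketch, assuming hammer's --csv output for content-view version list carries the version number in the third column (verify against your hammer version):

# hammer --csv content-view version list --content-view "cv-soe-testing-6Server" --organization-id 1 | tail -n +2 | sort -t, -k3 -V | tail -n 1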

add wait for SUT install done to buildtestvms.sh

currently we have this bit of code:

# we need to wait until all the test machines have been rebuilt by foreman
[...]

in pushtests.sh

  • That's not nice for the pipeline, as time spent installing is counted towards pushtests.sh instead of buildtestvms.sh.
  • Replicate the test (for a System Under Test (SUT) being in build mode) from pushtests.sh to buildtestvms.sh.
  • Do not remove the test from pushtests.sh though; in a puppet-only build, buildtestvms.sh is never called.
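A sketch of such a wait loop for buildtestvms.sh. This assumes the foreman search syntax 'build = true' matches hosts still in build mode and reuses the ORG variable from common.sh; verify both against your setup:

# wait until no host of the organisation is in build mode any more
while [ -n "$(hammer --csv host list --search 'build = true' --organization "${ORG}" | tail -n +2)" ]; do
    sleep 30
done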

mock and systemd nspawn

note to self: add

config_opts['use_nspawn'] = False

to

  • docs
  • example config el7
  • example config el6

if in testing I continue having issues with mock builds with the latest mock.

Introduce (additional) puppet only Jenkins job

As a user I want to easily use "Option 2: Using a Content View as a Puppet Environment" directly, from "A separate lifecycle for Puppet modules in Satellite 6" by @wzzrd

The idea is that there are 2 host groups (HGs):
a) one for full SOE CI runs
b) one for puppet only

Jenkins should build the job for full SOE CI runs once or twice a day for the HG members of a) (so build by schedule)
and do a puppet run on the HG members of b) as soon as there is a git commit (so poll git every minute).

capsule-sync.check.sh should have a timeout

I should have thought of this earlier.
I just came across a customer whose capsule syncs had been stuck for many days, but nobody had looked into the cause.

Introduce a timeout to capsule-sync.check.sh so that after a maximum of N hours the script does
exit 1
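A sketch of such a timeout, reusing the capsule sync task query from the 'handle test clients behind a capsule' issue below (the 4 hour ceiling is an example value, and the hammer call runs on the satellite or via ssh as elsewhere in these scripts):

MAX_SECONDS=$(( 4 * 3600 ))  # example ceiling
SECONDS=0                    # bash builtin, counts elapsed seconds
while [ -n "$(hammer --csv task list --search 'state = running and label = Actions::Katello::ContentView::CapsuleGenerateAndSync' | tail -n +2)" ]; do
    [ ${SECONDS} -ge ${MAX_SECONDS} ] && exit 1
    sleep 300
done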

common.sh's function get_test_vm_list() breaks on Satellite 6.2

common.sh uses

"hammer content-host list --organization \"${ORG}\" \
--host-collection \"$TESTVM_HOSTCOLLECTION\" \

While this works fine on Satellite 6.1, Satellite 6.2's hammer has changed in that regard.

# hammer content-host list --organization "Sat Test" --host-collection "Test Servers RHEL7"
Error: Unrecognised option '--host-collection'

See: 'hammer content-host list --help'

Parameterise build

As a release manager I want to specify locations of satellite servers, repository IDs, host groups etc in the build plan, rather than in common.sh

Update diagram

Use latest version of deployment diagram and upload as svg

handle test clients behind a capsule

currently we do not wait for capsule sync to finish before trying to install and test.

so either we need to make it clear that the test machines must not be behind a capsule, or have buildtestvms.sh wait until capsule syncs are finished.

In case of the latter, a crude but workable solution might be to run
hammer --csv task list --search 'state = running and label = Actions::Katello::ContentView::CapsuleGenerateAndSync' | tail -n +2
in a loop until we get 0 lines. But more elegant would be to actually wait on just the capsule sync we need.

@ericzolf comments?
I'll start poking at this once I know your preference. (just assign ticket to me after you commented)

Handle puppet module deletion

We need to be able to delete a puppet module from git and have it deleted from the export repo, and hence from the new CV version.

Puppet Test Harness

As a build engineer, I want to be able to test puppet modules against real-world configuration prior to committing them.

rpmpush.sh should only run if new RPMs were built

As a user, I do not want to wait for Satellite 6 to run hammer repository synchronize on REPO_ID if no new RPMs were built; the repo sync operation is quite expensive.

The following 2 commands should only run if rpmbuild.sh wrote entries to MODIFIED_CONTENT_FILE

# refresh the upstream yum repo
createrepo ${YUM_REPO}

# use hammer on the satellite to push the RPMs into the repo
# the ID of the ACME Test repository is 16
ssh -q -l ${PUSH_USER} -i /var/lib/jenkins/.ssh/id_rsa ${SATELLITE} \
    "hammer repository synchronize --id ${REPO_ID}" || \
  { err "Repository '${REPO_ID}' couldn't be synchronized."; exit 1; }
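A sketch of the guard, using a simple non-empty test on MODIFIED_CONTENT_FILE (a stricter variant could grep for RPM entries specifically):

if [ -s "${MODIFIED_CONTENT_FILE}" ]; then
    # refresh the upstream yum repo
    createrepo ${YUM_REPO}

    # use hammer on the satellite to push the RPMs into the repo
    ssh -q -l ${PUSH_USER} -i /var/lib/jenkins/.ssh/id_rsa ${SATELLITE} \
        "hammer repository synchronize --id ${REPO_ID}" || \
      { err "Repository '${REPO_ID}' couldn't be synchronized."; exit 1; }
fi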

Image creation

Output an Openstack/RHEV-ready image as part of the build.

Always display all stages in jenkins' (classic) stage view

the pipeline in Jenkins' classic view (aka the "not Blue Ocean" view) doesn't display stages which come after a failed stage. Therefore the stage view on the job overview page is lost every time the number of executed stages differs.
Assuming that Blue Ocean isn't used everywhere, we need to work around this behaviour.

related open Jenkins issue: https://issues.jenkins-ci.org/browse/JENKINS-43995, which basically means that the display options per step and stage are limited right now, but it's sufficient.

side note: if stages are added / removed / renamed in the pipeline itself, the stage view loses the history again, but that makes sense.

RHEL mock config files need a note for baseurl

please add a note to both mock config files and the readme:
the user needs to take not only the cert and key from their existing /etc/yum.repos.d/redhat.repo, but also the baseurl.

wait for initial puppet run to complete before running bats tests

As a user of soe-ci, I want any initial puppet run to complete before running the bats tests.

pushtests.sh waits 30 secs between being fairly certain that the box is up

 if [[ ${status} == 3 ]] && ping -c 1 -q $I && nc -w 1 $I 22

and starting the testing ("yum install -y bats rsync" and 'cd tests ; bats -t *.bats').
That's too tight in some setups.
I just ran into a machine that was still busy installing packages that puppet ensures are installed, so the "yum install -y bats rsync" failed, making my tests fail when there was no reason to.

Ideally we would detect that the initial run has finished before running any of the bats tests.

As a dirty work-around, I've added a 5 min sleep to my current setup (not pushed upstream), but that's not elegant. So fix this properly in the future.

Jenkins Pipeline hard-codes the config groovy files

currently:

def loadEnvVars() {
  loadEnvVarsFromFile("script-env-vars.groovy")
  if (params.PUPPET_ONLY == true) {
    loadEnvVarsFromFile("script-env-vars-puppet-only.groovy")
  } else {
    loadEnvVarsFromFile("script-env-vars-rpm.groovy")
  }
}

if you want to use the Jenkinsfile, for example, with both el6 and el7, you need to use another branch or copy it. That's far from good.

PR is coming this week to resolve this.

power status on VMware comes back with poweredOn / poweredOff

when I query a host on VMware I get poweredOn / poweredOff and not running / shutoff

could someone using libvirt (maybe @ericzolf or @nstrug ) please verify what they get with Sat 6.1.7 (or later)
hammer host status --id sometesthost.example.com

if it's also poweredOn / poweredOff then I'll make a PR (with a0507bd) for you.
If OTOH it's still running / shutoff, then the script buildtestvms.sh needs extending to cover both cases.

Ansible Tower integration

We need to provide ansible support as an alternative/addition to puppet.

  • provide an ansible-only job/pipeline
  • use ansible tower provisioning callback in kickstart template
  • ensure ansible user created

Unexplained difference between chroot_setup_cmd for RHEL 6 vs. 7 mock configs

If I look at the differences between the RHEL 6 and the RHEL 7 config files for mock, regarding config_opts['chroot_setup_cmd'],

common packages are:
bash
bzip2
cpio
diffutils
gcc-c++
gzip
make
patch
rpm-build
sed
shadow-utils
tar
unzip
which

Packages only for RHEL 6 are:
-coreutils
-findutils
-gawk
-gcc
-grep
-info
-redhat-release
-redhat-rpm-config
-util-linux-ng
-xz

Packages only for RHEL 7 are:
+gdd
+perl

And I don't understand why there should be such differences.

puppet done test should stop after some time

currently the code does not have a timeout for running puppet-done-test.sh

Ideally we want to wait some maximum (e.g. 30 minutes) and then simply continue so that the test can fail instead of running indefinitely.

It should be sufficient to change the invocation to /usr/bin/timeout 30m /root/puppet-done-test.sh

Handle RPM deletion

Similar to #83
We should delete RPMs from YUM_REPO if there is no longer a corresponding directory in git.

puppetpush.sh should only run if new puppet modules were built

As a user, I do not want to wait for Satellite 6 to run hammer repository synchronize on PUPPET_REPO_ID if no new puppet modules were built; the repo sync operation is quite expensive.

The repo sync command should only run if puppetbuild.sh wrote entries to MODIFIED_CONTENT_FILE.

# use hammer on the satellite to push the modules into the repo
ssh -q -l ${PUSH_USER} -i ${RSA_ID} ${SATELLITE} \
    "hammer repository synchronize --id ${PUPPET_REPO_ID}" || \
  { err "Repository '${PUPPET_REPO_ID}' couldn't be synchronized."; exit 1; }

the rest of puppetpush.sh should run though.

[Question] This looks simple for a single group / team

Hello,

how about multiple teams working on different projects? For example, we have 100 different teams working on different projects; we created multiple products and repos for them in Satellite. The issue is that you can only bind a single content view to a particular consumer, so if App1 wants to use a product or repo from App2, they have to add those repos to App1's content view (you cannot simply assign 2 CVs to a particular system).

how do we segregate OS-related repos from application repos? For example, when we update the OS repo with some erratas, we would like to update only the CV for the OS repos, keeping the App CVs separate, but the problem is that only one CV can be assigned to a particular host.

Regards,
DJ

Kickstart

I just came across your project and see that adding kickstart functionality is on the to-do list. I am the author of the jaks project, which makes use of %pre and arguments passed to the kernel via the initramfs argument list to kickstart anaconda-based distros. It also slipstreams post configurations with %post.

It is all written in shell, so it is portable. Is there a desired place to hook it in? I was looking at the kickstart script currently in place, and it looks as if it is just picking up some recently modified .erb files rather than doing any type of OS build.

I currently don't have an environment to facilitate testing and implementation but think it might be a good method of kickstart usage in your project.

Stale parameter MODAUTHOR in Jenkins' config.xml

There is a MODAUTHOR parameter defined in config.xml, but no script is making use of it. As we would, at a typical customer, have Puppet modules coming from different sources, I'm not really sure what the purpose of this parameter would be. Hence I'd suggest removing it; I can do that myself, but before I do, I wanted to be sure I didn't overlook anything.

Local hammer

Rewrite scripts to use local hammer on the Jenkins host instead of ssh-ing to satellite.

Bug in publishcv.sh

Hi Nick,

Just noticed a bug in publishcv.sh.

"hammer content-view version promote --content-view "${CV}" --organization "${ORG}"
--lifecycle-environment-id "${TESTVM_ENV}" --id ${VER}"

Should read:
"hammer content-view version promote --content-view "${CV}" --organization "${ORG}"
--to-lifecycle-environment-id "${TESTVM_ENV}" --id ${VER}"

Perhaps this changed in v 6.0.6?

Chris

publishcv.sh should only run if MODIFIED_CONTENT_FILE is populated

As a user, I do not want to wait for the extremely expensive hammer content view publish and/or hammer content-view version promote to run if nothing changed (e.g. when debugging buildtestvms.sh).

Ideally only affected (C)CVs are published and promoted.

But that would probably need a cleaner hammer way than grep -E in the following mess:

[root@satellite ~]# hammer content-view info  --id 6 --organization-id 1
ID:                     6
Name:                   cv-Jenkins-SOE-el7
Label:                  cv-Jenkins-SOE-el7
Composite:              false
Description:            the CV that the SOE-el7 Jenkins job updates
Content Host Count:     2
Organisation:           Sat Test
Yum Repositories:       
 1) ID:    3
    Name:  Red Hat Satellite Tools 6.2 for RHEL 7 Server RPMs x86_64
    Label: Red_Hat_Satellite_Tools_6_2_for_RHEL_7_Server_RPMs_x86_64
 2) ID:    15
    Name:  Red Hat Enterprise Linux 7 Server - Fastrack RPMs x86_64
    Label: Red_Hat_Enterprise_Linux_7_Server_-_Fastrack_RPMs_x86_64
 3) ID:    1
    Name:  Red Hat Enterprise Linux 7 Server Kickstart x86_64 7.2
    Label: Red_Hat_Enterprise_Linux_7_Server_Kickstart_x86_64_7_2
 4) ID:    2
    Name:  Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server
    Label: Red_Hat_Enterprise_Linux_7_Server_RPMs_x86_64_7Server
 5) ID:    14
    Name:  Red Hat Enterprise Linux 7 Server - Extras RPMs x86_64
    Label: Red_Hat_Enterprise_Linux_7_Server_-_Extras_RPMs_x86_64
 6) ID:    17
    Name:  Red Hat Enterprise Linux 7 Server - Optional Fastrack RPMs x86_64
    Label: Red_Hat_Enterprise_Linux_7_Server_-_Optional_Fastrack_RPMs_x86_64
 7) ID:    16
    Name:  Red Hat Enterprise Linux 7 Server - Optional RPMs x86_64 7Server
    Label: Red_Hat_Enterprise_Linux_7_Server_-_Optional_RPMs_x86_64_7Server
 8) ID:    9
    Name:  EPEL RHEL7 x86_64
    Label: EPEL_RHEL7_x86_64
 9) ID:    70
    Name:  RPMS RHEL7
    Label: RPMS_RHEL7
Docker Repositories:    

OSTree Repositories:    

Puppet Modules:         
 1) ID:      14
    Name:    dmz
    Author:  acme
    Created: 2016/07/30 17:59:29
    Updated: 2016/07/30 17:59:29
 2) ID:      13
    Name:    role
    Author:  acme
    Created: 2016/07/30 17:59:27
    Updated: 2016/07/30 17:59:27
 3) ID:      12
    Name:    qpid
    Author:  acme
    Created: 2016/07/30 17:59:26
    Updated: 2016/07/30 17:59:26
 4) ID:      11
    Name:    bats
    Author:  acme
    Created: 2016/07/30 17:59:24
    Updated: 2016/07/30 17:59:24
 5) ID:      10
    Name:    profile_nfs
    Author:  acme
    Created: 2016/07/30 17:59:22
    Updated: 2016/07/30 17:59:22
 6) ID:      9
    Name:    profile
    Author:  acme
    Created: 2016/07/30 17:59:20
    Updated: 2016/07/30 17:59:20
 7) ID:      8
    Name:    ssh
    Author:  acme
    Created: 2016/07/30 17:59:19
    Updated: 2016/07/30 17:59:19
 8) ID:      7
    Name:    auto
    Author:  acme
    Created: 2016/07/30 17:59:17
    Updated: 2016/07/30 17:59:17
 9) ID:      6
    Name:    role_www
    Author:  acme
    Created: 2016/07/30 17:59:16
    Updated: 2016/07/30 17:59:16
 10)ID:      5
    Name:    role_db
    Author:  acme
    Created: 2016/07/30 17:59:14
    Updated: 2016/07/30 17:59:14
 11)ID:      4
    Name:    profile_apache
    Author:  acme
    Created: 2016/07/30 17:59:12
    Updated: 2016/07/30 17:59:12
 12)ID:      3
    Name:    profile_base
    Author:  acme
    Created: 2016/07/30 17:59:11
    Updated: 2016/07/30 17:59:11
 13)ID:      2
    Name:    firewall
    Author:  acme
    Created: 2016/07/30 17:59:09
    Updated: 2016/07/30 17:59:09
 14)ID:      1
    Name:    profile_postgres
    Author:  acme
    Created: 2016/07/30 17:59:08
    Updated: 2016/07/30 17:59:08
Lifecycle Environments: 
 1) ID:   2
    Name: Engineering
 2) ID:   1
    Name: Library
Versions:               
 1) ID:        6
    Version:   1.0
    Published: 2016/07/30 17:59:34
 2) ID:        7
    Version:   2.0
    Published: 2016/07/30 19:01:46
 3) ID:        8
    Version:   3.0
    Published: 2016/07/30 19:25:58
 4) ID:        9
    Version:   4.0
    Published: 2016/07/30 19:38:19
 5) ID:        10
    Version:   5.0
    Published: 2016/07/30 20:25:07
 6) ID:        11
    Version:   6.0
    Published: 2016/07/31 13:52:05
 7) ID:        12
    Version:   7.0
    Published: 2016/07/31 14:08:27
 8) ID:        13
    Version:   8.0
    Published: 2016/07/31 14:32:49
 9) ID:        14
    Version:   9.0
    Published: 2016/07/31 14:46:36
Components:             

Activation Keys:        
 1) ak-Jenkins-SOE-el7

[root@satellite ~]# 

Conditional Test Execution

As a build engineer, I want to be able to write a test that will only execute if the component that it is testing is deployed to the test machine.

power off VMs and cleanup only if configured && build was successful or unstable

request from @pcfe
the VMs should only be shut down when

  • configured in the job config
  • and build was successful or unstable

right now I tend to put that in a subsequent job because of:

  • limitations in the pipeline (status for stages and steps rather than just on the build itself)
  • adds the possibility to cleanup and shutdown manually, via cron plus triggered by the pipeline job
  • can be reused by all existing soe-ci-pipeline jobs probably

@pcfe: you might want to state your opinion on that first, before I start implementing this

Update diagram

The graphml deployment diagram is out of date, replace it with the latest version, and in svg format.

[ENH] add first step to check script validity

As we also test the scripts when we check a new version in, it would make sense to have a quick check at the beginning, just to avoid the most obvious errors, something like:

for i in scripts/*.sh; do bash -n $i; done

(probably need to concatenate the return values or something like this)
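One way to accumulate the return values, as suggested:

rc=0
for i in scripts/*.sh; do
    bash -n "$i" || rc=1  # remember any syntax error but keep checking the rest
done
exit $rc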

BUILD_URL not populated

In publishcv.sh the variable BUILD_URL is used but it's empty.
This makes finding a specific build a major pain.
Need to fix this.

"hammer content-view publish --name "${cv}" --organization "${ORG}" --description "Build ${BUILD_URL}""

@nstrug do I assume correctly that this is meant to point to Jenkins so that a user can determine which CV version got created by which build job?

@ericzolf you touched that line last according to git, did that ever work for you?
