satelliteqe / robottelo-ci
Jenkins jobs configuration files to be used to run Robottelo against Satellite 6.
License: GNU General Public License v3.0
Purpose
Historically we have had build dates in the compose, but that is not always the case and may not be in the future. Thus, it is now even more important to be able to readily provide dev with details about which specific components have been tested.
During an automated install, we should set those details aside in a text file of some sort, both for easier retrieval and as a record of what is presently installed (in case dev asks to install updated RPMs for debugging, etc.).
Proposed Implementation
for i in $(rpm -qa | grep -iE "^katello|^pulp|^candlepin|^foreman|^headpin|^thumbslug|^elasticsearch|ldap|signo|ruby193-rubygem-runcible" | sort); do echo "* $i"; done
This package list may need to be updated, however.
foreman-debug can provide said details as well.
Mockup
[root@hostname ~]# cat compose-details.txt
Install date: $date
Package details for "my.hostname.example.com"
abc-1.0.0.x86_64.rpm
xyz-1.0.0.x86_64.rpm
pdq-1.0.0.x86_64.rpm
...
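A minimal sketch of the proposed implementation, assuming the output path shown in the mockup and a shortened version of the package filter above (both are illustrative, not a fixed convention):

```shell
#!/bin/bash
# Sketch: record install date and key package versions during an automated
# install. OUT and the (trimmed) grep filter are assumptions for illustration.
OUT=${OUT:-compose-details.txt}
{
  echo "Install date: $(date)"
  echo "Package details for \"$(hostname -f 2>/dev/null || hostname)\""
  rpm -qa 2>/dev/null \
    | grep -iE "^katello|^pulp|^candlepin|^foreman" \
    | sort \
    | sed 's/^/* /'
} > "$OUT"
```

Keeping the file on the installed host gives dev a record to consult later, even after packages are updated for debugging.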
Other Ideas
Maybe we could also include other important details in this output log, just some thoughts.
Currently those tasks are run only if the user is omaciel. To improve the job, create two boolean parameters to control whether the user wants to run those tasks or not.
It is difficult to identify the Jenkins job runs, as they are currently named with build numbers like #1, #2, etc. To improve this, maybe we should accept the compose name as an input parameter and name the Jenkins builds automatically, e.g. 6.1.4 Compose 1.
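A sketch of deriving such a label in shell, assuming a COMPOSE_ID in the downstream format seen in the compose URLs elsewhere in this document (the exact format, e.g. Satellite-6.1.4-RHEL-7-20160104.1, is an assumption):

```shell
#!/bin/bash
# Sketch: turn a COMPOSE_ID parameter into a readable build label.
# The COMPOSE_ID layout (Product-Version-OS-Rel-Build) is an assumption.
compose_label() {
  ver=$(printf '%s' "$1" | cut -d- -f2)              # e.g. "6.1.4"
  build=$(printf '%s' "$1" | awk -F- '{print $NF}')  # e.g. "20160104.1"
  printf '%s Compose %s' "$ver" "$build"
}

compose_label "Satellite-6.1.4-RHEL-7-20160104.1"
```

The label could then be applied with a build-name plugin or written into the build description.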
I triggered the above job with TEST_TYPE = smoke-api and it failed with the following error: the 'not stubbed' marker expression is not being quoted properly.
++ which py.test
+ PYTEST='/home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/py.test -v --junit-xml=foreman-results.xml -m '\''not stubbed'\'''
+ '[' -n '' ']'
+ case "${TEST_TYPE}" in
++ echo smoke-api
++ cut -d- -f2
+ TEST_TYPE=api
+ /home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/py.test -v --junit-xml=foreman-results.xml -m ''\''not' 'stubbed'\''' tests/foreman/smoke/test_api_smoke.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.10, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/python2.7
cachedir: .cache
rootdir: /home/jenkins/workspace/satellite6-standalone-automation, inifile:
plugins: xdist-1.14
collecting ...
generated xml file: /home/jenkins/workspace/satellite6-standalone-automation/foreman-results.xml
========================= no tests ran in 0.00 seconds =========================
ERROR: file not found: stubbed'
Build step 'Virtualenv Builder' marked build as failure
Archiving artifacts
Recording test results
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Finished: FAILURE
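The failure above comes from storing the py.test invocation in a flat string: when the string is expanded, word splitting re-tokenizes '\''not stubbed'\'' into two words, and py.test then treats stubbed' as a file path. A likely fix, sketched here, is to keep the options in a bash array so "not stubbed" survives as a single argument (paths and options mirror the log above):

```shell
#!/bin/bash
# Sketch: build py.test options as an array instead of a flat string so the
# marker expression "not stubbed" stays one argument after expansion.
PYTEST_ARGS=(-v --junit-xml=foreman-results.xml -m "not stubbed")

# Expanding "${PYTEST_ARGS[@]}" preserves each element verbatim:
printf '%s\n' "${PYTEST_ARGS[@]}"

# The real invocation would then be:
#   py.test "${PYTEST_ARGS[@]}" tests/foreman/smoke/test_api_smoke.py
```

Arrays avoid the nested-quote escaping entirely, which is why they are the usual recommendation over `eval` or hand-rolled quoting.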
Create a job to run Betelgeuse and update information on Polarion for Robottelo test cases on a daily basis.
Add some default values: the initial values should be defined to store a large number of builds. Also add a description saying that the job is managed by JJB and that any change made in the Jenkins UI is going to be overridden when the job gets updated.
It would be great to always include logs (foreman logs included) with every single installer job so that one can grab them from Jenkins itself.
Create a new job that will run Satellite upgrade automation from automation-tools.
tests/foreman/rhai
Requirements:
Satellite 6.0.7, 6.0.8 and 6.1.0 have been released, more releases will land in the future, and nightly builds are available too. Each version acts a little bit differently, and NailGun currently makes use of that versioning information when determining how to talk to the server. In addition, other parts of Satellite QE's software suite may make use of versioning information in the future. Jenkins should be updated to somehow make use of this versioning information. At the very least, version numbers should be passed to NailGun.
Currently the parameter names are OS, CAPSULE and TOOLS URLs; OS should be renamed to SAT or SATELLITE, since OS stands for Operating System.
Depends on:
SatelliteQE/automation-tools#185
SatelliteQE/automation-tools#186
Integrate capsule installation/configuration from automation-tools into something consumable within jenkins.
REQUIREMENTS
Possible parameters for installation (not sure of the best way to implement this in Jenkins)
https://wiki.jenkins-ci.org/display/JENKINS/URLTrigger+Plugin
By polling a specific URL on a cron-like schedule, we can trigger builds automatically.
For upstream we can poll at ->
https://fedorapeople.org/groups/katello/releases/yum/nightly/katello/RHEL/7/x86_64/repodata/repomd.xml
For downstream we can poll at ->
http://satellite6.server.com/devel/candidate-trees/Satellite/latest-Satellite-6.1-RHEL-7/COMPOSE_ID
Downstream, however, there is a formal QE hand-off process; note that automation can run even on a compose that has not (yet) been handed off.
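The polling logic can be sketched as follows, assuming we persist the last seen COMPOSE_ID in a state file and trigger a build only when it changes (the state-file path and the trigger mechanism are placeholders, not existing job code):

```shell
#!/bin/bash
# Sketch: compare the freshly fetched COMPOSE_ID against the last one seen.
# Returns 0 (i.e. "trigger a build") only when the ID has changed.
check_new_compose() {
  new_id=$1
  state=${2:-.last_compose_id}
  old_id=$(cat "$state" 2>/dev/null)
  if [ "$new_id" != "$old_id" ]; then
    printf '%s' "$new_id" > "$state"
    return 0
  fi
  return 1
}

# Usage from cron would look roughly like (network call not executed here):
#   id=$(curl -fsS "$COMPOSE_ID_URL")
#   check_new_compose "$id" && trigger_jenkins_build
```

The URLTrigger plugin linked above implements equivalent change detection natively, so this script is only needed if the plugin is not an option.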
Robottelo expects that a project of type sam is configured in order to run SAM-related tests, so the SAM-related jobs need to provide that information.
Right now the clean-up script runs as a cron job.
The Upgrade Trigger job is failing with the error 'Bag Trigger ${{OPENSTACK_CONFIG}}'.
The automation pipeline currently runs as follows:
The proposal is to maintain the same order but create a new job to orchestrate the entire pipeline. This way, if provisioning has already completed but the tests are still running, the next build will start only when the previous execution finishes.
The new approach will have the advantages:
On the other hand, these will be the disadvantages:
If you have any other suggestion, please let me know.
The import tests take more than 2 hours to execute, and they currently run single-threaded in the same Jenkins job as tier1, right after tier1 completes. It may be beneficial to create a new Jenkins job just for the import tests and trigger it in parallel with the tier1 tests.
It'd be nice to have the ability to point the automation to our robottelo repo forks in order to test our code modifications on jenkins before merging them to production. We can keep the SatQE repo as a default choice.
The argument --test-run-id used in the satellite6-betelgeuse-test-run.sh script needs to be sanitized so as not to contain the following characters: \/.:*"<>|~!@#$?%^&'*()+,=`
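A minimal shell sketch of such sanitization; deleting the offending characters outright (rather than replacing them with a placeholder) is an assumption about the desired behavior:

```shell
#!/bin/bash
# Sketch: strip every character from the forbidden list above out of the
# test run ID before handing it to Betelgeuse/Polarion.
sanitize_test_run_id() {
  # The set covers \ / . : * " < > | ~ ! @ # $ ? % ^ & ( ) + , = ` and '
  printf '%s' "$1" | tr -d '\\/.:*"<>|~!@#$?%^&()+,=`'"'"
}

sanitize_test_run_id 'Run: 6.1/tier1 *new*'
```

Spaces and hyphens are left untouched, since they are not in the forbidden list.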
When triggering the Satellite 6 installer job, the user should be able to select whether to install with SELinux in permissive or enforcing mode.
We are modifying Jenkins manually to test out a few things - this might take 12 to 15 hours.
Getting the following every time that generate_jobs.sh or update_job.sh is run:
Traceback (most recent call last):
File "/home/elyezer/.virtualenvs/robottelo-ci/bin/jenkins-jobs", line 11, in <module>
sys.exit(main())
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 172, in main
execute(options, config)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 321, in execute
output=options.output_dir)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 288, in update_job
self.parser.generateXML()
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 311, in generateXML
self.xml_jobs.append(self.getXMLForJob(job))
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 321, in getXMLForJob
self.gen_xml(xml, data)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 328, in gen_xml
module.gen_xml(self, xml, data)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/modules/triggers.py", line 1121, in gen_xml
self.registry.dispatch('trigger', parser, trig_e, trigger)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/registry.py", line 200, in dispatch
parser, xml_parent, b, component_data)
File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/registry.py", line 204, in dispatch
format(name, component_type))
jenkins_jobs.errors.JenkinsJobsException: Unknown entry point or macro 'gitlab' for component type: 'trigger'.
Create a job that will install from CDN and, after that, update to a more recent compose build.
Components to consider when upgrading:
Core satellite
Capsule
katello-disconnected
katello-agent
puppet agent
Others....?
Assertions/scenarios that need to be exercised:
All existing content within a populated instance is still available/exposed in upgraded instance
All existing content within upgraded components connected to an upgraded, populated instance is still available/exposed
Baseline functionality works in an upgraded instance
What happens when an older component (see above) tries to communicate with an upgraded instance?
Check for availability of any new communication ports that might be opened up in upgraded instance.
Ability to roll back if an upgrade fails, and/or provide a --dry-run option that runs through the motions but does not actually make system changes.
Connection of external components (see above) acts sanely -- does not cause instability; perhaps deprecation warnings are given, or auto-upgrades happen? TBD.
Approach
Populate an older instance and all external components - perhaps use automation to populate known, constant values (rather than random data). We may want to save an image of this "dirty" system for subsequent upgrade tests.
Upgrade core server
Assure content is still composed on upgraded server
Attempt to populate new data onto upgraded instance
Attempt to exercise all new functionality that is a delta between old and new instance
Attempt to connect upgraded components to new server and interact with them
Attempt to connect non-upgraded components to new server and interact with them.
Attempt to connect newly installed components of the latest version to new server and interact with them.
Deploy a fresh instance of the newest release and populate it. Compare schema/data with that of the upgraded instance.
Running the satellite6-betelgeuse-test-run-rhel6 job fails because cloning pylarion fails: the checkout already exists:
+ git clone https://EDITED
fatal: destination path 'pylarion' already exists and is not an empty directory.
Build step 'Virtualenv Builder' marked build as failure
Perhaps the satellite6-betelgeuse.sh script should perform a clean-up and remove things before proceeding?
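That clean-up could be sketched as a small helper that makes the clone step idempotent (the clone URL stays elided here, exactly as in the console log above):

```shell
#!/bin/bash
# Sketch: remove any checkout left behind by a previous run before cloning,
# so repeated builds on the same workspace do not fail.
clone_fresh() {
  dir=$1
  url=$2
  rm -rf "$dir"            # safe whether or not a stale checkout exists
  git clone "$url" "$dir"
}

# Usage in the job script (URL elided as in the console log):
#   clone_fresh pylarion https://EDITED
```

An alternative would be to `git fetch` into an existing checkout instead of recloning, which is faster but keeps any local modifications around.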
In #80 (comment) an improvement was suggested for the ISO jobs in order to at least make sure that the ISO build is OK. Running end-to-end or tier1 tests will ensure that.
Create a job to run betelgeuse test-run for every completed downstream automation job.
Presently, we specifically disable GPG checking of RPMs because we are generally running against test composes. However, we may want to check against production builds from time to time. Thus, let's have an option in the installer to toggle gpgcheck=1. Our default should remain "0" for the moment, however.
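A sketch of how the installer job could expose this, assuming a GPGCHECK job parameter and an illustrative repo file (the parameter name and file contents are assumptions):

```shell
#!/bin/bash
# Sketch: let a GPGCHECK job parameter toggle RPM GPG verification.
# Parameter name, default, and repo file contents are illustrative only.
GPGCHECK=${GPGCHECK:-0}   # default stays "0", as proposed above

cat > satellite.repo <<EOF
[satellite]
name=Satellite 6
baseurl=${BASE_URL:-http://example.com/satellite}
enabled=1
gpgcheck=${GPGCHECK}
EOF
```

Triggering the job with GPGCHECK=1 would then exercise signature verification against production builds.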
Allow installing Satellite 6 using the beta channel.
Transitioning tests are supposed to be run serially because of the characteristics of the import feature. This will reduce the failures on Jenkins.
This is related to SatelliteQE/robottelo#3291
Our Jenkins installation makes use of several plugins. robottelo-ci should be able to manage the plugins that Jenkins has installed. I envision robottelo-ci being able to accomplish the following tasks:
I've been told that when people look at our automation results, it is hard for them to see the information about which system was used so that DEV can look into them (ssh or access via the web ui).
Perhaps we should put that information somewhere in the job description (not the logs)?
Currently the installer jobs default to passing -d to katello-configure, which is useful when determining what went wrong in failed jobs. However, -d causes the installer not to save the answer file, which can also be useful for other tasks.
We should modify our existing jobs to allow the user to determine whether to pass -d to the installer or not (see this).
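That toggle could be sketched as follows, assuming a boolean INSTALLER_DEBUG job parameter (the parameter name is an assumption):

```shell
#!/bin/bash
# Sketch: emit -d only when the (assumed) INSTALLER_DEBUG parameter is true,
# so jobs can keep the answer file by opting out of debug mode.
installer_opts() {
  if [ "${INSTALLER_DEBUG:-true}" = "true" ]; then
    printf '%s' '-d'
  fi
}

# Usage (not executed here):
#   katello-configure $(installer_opts)
```

Defaulting to true preserves the current behavior, so existing jobs keep working unchanged.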
Fix the current 'SATELLITE_INSTANCE' and 'CAPSULE_INSTANCE' names so that automation will delete the previous run's instance automatically, instead of requiring human/user intervention in OpenStack.
The Robottelo properties file will be updated when SatelliteQE/robottelo#1946 gets merged. Because of this, the automation jobs should be prepared to update the Docker URLs in the robottelo.properties file.
We currently wait until our last Tiered job is completed to trigger a Polarion Test Run creation. This Test Run is then populated with the test results obtained from each individual Tiered test's jUnit file. One issue with this approach is that non-stubbed tests are not added to said Test Runs, as they are not contained within the jUnit files.
What we want is to create a Test Run shortly after we update all Test Cases, and include both automated and notautomated Test Cases, so that we can then run the 'stubbed' tests manually and therefore make sure that we cover all the features and cases identified by our team.
Once the Test Run is populated at the end of the Tiered jobs, we should have a Test Run containing automated tests and their results, as well as a series of tests that will have to be verified manually.
With the transition to Docker, it looks like we have lost the ability to save screenshots for UI failures. I do not currently know how to fix the issue.
Hey, I think it would be great to have proper labels for the builds of the satellite6-downstream-trigger job, as it is the root of the whole downstream automation chain. This way it would be easier to browse downstream builds for a specific compose.
Make all jobs send email notifications to the QA list.
Currently we have several helper methods/functions for tests/foreman/cli/test_import.py which, IMHO, should be moved to a separate module.
[PostBuildScript] - Execution post build scripts.
[PostBuildScript] Build is not failure : do not execute script
Archiving artifacts
ERROR: No artifacts found that match the file pattern "foreman-debug.tar.xz". Configuration error?
ERROR: ‘foreman-debug.tar.xz’ doesn’t match anything
Build step 'Archive the artifacts' changed build result to FAILURE
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Finished: FAILURE
Right now, by default, satellite6-installer chooses the following. The problem here is that updating the latest-stable symlinks is a manual task and is often missed, which results in faulty builds.
In satellite6-installer, give the user an option to choose between them.