satelliteqe / automation-tools

A set of tools to help to automate testing Foreman with Robottelo (https://github.com/SatelliteQE/robottelo)

License: GNU General Public License v3.0

Python 97.07% Shell 1.98% Makefile 0.82% Dockerfile 0.13%
satellite6 satellite6qe katello foreman redhat-qe python hacktoberfest

automation-tools's People

Contributors

abalakh, cswiii, devendra104, elyezer, ichimonji10, jacobcallahan, jameerpathan111, jhutar, jyejare, kbidarkar, ldjebran, lhellebr, lpramuk, ntkathole, omaciel, omkarkhatavkar, pgagne, pondrejk, renzon, rochacbruno, rplevka, san7ket, sghai, sthirugn, swadeley, tkolhar, tstrych, vijay8451


automation-tools's Issues

Setup proxy for all jobs

Goal:

Ability to test Sat6 and all relevant components through proxy (via both auth'd and non-auth'd methods)

Proposed implementation

  1. Provide automation with the ability to use optional proxy url/port/user/passwd data.
  2. Pass this data to whichever service requires it (in the capsule's case, we might for now have to tweak config files).
  3. Be able to pass those values to/from the job itself.
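A minimal sketch of the first two points, assuming the proxy data arrives as environment variables (the PROXY_* names are illustrative; the --katello-proxy-* flags match the installer invocation seen in the nightly-install logs below):

```python
import os

def proxy_installer_args(environ=os.environ):
    """Map optional proxy settings onto katello-installer flags.

    Only variables that are actually set produce a flag, so the proxy
    configuration stays optional.
    """
    mapping = {
        'PROXY_URL': '--katello-proxy-url',
        'PROXY_PORT': '--katello-proxy-port',
        'PROXY_USER': '--katello-proxy-username',
        'PROXY_PASSWORD': '--katello-proxy-password',
    }
    args = []
    for var, flag in sorted(mapping.items()):
        value = environ.get(var)
        if value:
            args.append('{0}="{1}"'.format(flag, value))
    return ' '.join(args)
```

A Fabric task would then append this string to the installer command it runs on the target host.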

Caveats and Limitations

  1. Sat requires an http proxy and fails if you don't use one; katello-disconnected requires that you not use http and fails if you do.
  2. There's no way in the installer to configure a proxy for the capsule even though it supports one (manual config file tweaking required).
  3. We currently seem to serve up content fine through an auth'd proxy but not through a non-auth'd one.

Relevant BZs

https://bugzilla.redhat.com/show_bug.cgi?id=1128296
https://bugzilla.redhat.com/show_bug.cgi?id=1033011
https://bugzilla.redhat.com/show_bug.cgi?id=1136593
https://bugzilla.redhat.com/show_bug.cgi?id=1136595
https://bugzilla.redhat.com/show_bug.cgi?id=1127397
https://bugzilla.redhat.com/show_bug.cgi?id=1114083
https://bugzilla.redhat.com/show_bug.cgi?id=1155651
(there may be more, simply do a query for "proxy" in Sat6 bugs if you're so inclined)

Some Other Thoughts

I am not sure how much I want to focus on katello-disconnected yet since it may get replaced. Focus on the others first.
ehelms will apparently be working on BZ 1114083 soon.

Update docs

Need to update docs now that we have a single point of entry for installing Satellite or Katello via product_install.

Installation from Github is failing

When installing from GitHub, if Fabric is not already installed then the installation fails with:

  File "/var/folders/wb/p6f5wntj2mgbmz2g59cpm32m0000gn/T/pip-bbDD6S-build/setup.py", line 4, in <module>
    import automation_tools
  File "automation_tools/__init__.py", line 15, in <module>
    from fabric.api import cd, env, execute, local, put, run
ImportError: No module named fabric.api

Handle different server software versions

Satellite 6.0.7, 6.0.8 and 6.1.0 have been released, more releases will land in the future, and nightly builds are available too. Each version acts a little bit differently, and NailGun currently makes use of that versioning information when determining how to talk to the server. In addition, other parts of Satellite QE's software suite may make use of versioning information in the future. automation-tools should be updated to somehow make use of this versioning information. At the very least, version numbers should be passed to NailGun.
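One low-tech way to make that version information usable, sketched here under the assumption that it reaches automation-tools as a plain string (e.g. from an environment variable):

```python
def parse_sat_version(version):
    """Turn a Satellite version string like '6.0.8' into a comparable tuple."""
    return tuple(int(part) for part in version.split('.'))

# Tuples compare element-wise, so version gates read naturally:
# if parse_sat_version(sat_version) >= (6, 1):
#     ...enable 6.1-only behavior, or hand the version on to NailGun...
```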

Capsules: Implement parent-side (satellite) prerequisites for setting up external capsule

Complement to #186

In preparation for being able to automate capsule installation/configuration, there are bits that need to be done on the satellite side of the house. There may or may not be some of this in automation-tools already; if there is, we'll simply need to be able to tweak params.

REQUIREMENTS

  • Ability to install multiple capsules in serial or parallel
  • Ability to sync multiple capsules in serial or parallel (preferably async)
  • Ability to halt process if something goes wrong (i.e., sync not triggered if a capsule install/configuration fails)

Steps

  • sync capsule content
  • create content view with RHEL+Tools+capsule (RHEL7)
  • publish/promote content view
  • create activation key associated with content view
  • generate capsule cert

Proposed automation-tools namespace [and possible parameters/inputs]

  • capsule_sync [URL_for_capsule_content]
  • capsule_create_cv
  • capsule_publish_cv
  • capsule_ak_create
  • capsule_generate_cert
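As an illustration of the last item, a command-builder for the cert-generation step. The `capsule-certs-generate` tool name and its flags are assumptions about Satellite 6 tooling, not something this repo confirms:

```python
def capsule_certs_command(capsule_fqdn, certs_tar=None):
    """Build the cert-generation command for an external capsule.

    Run on the Satellite (parent) host; the resulting tarball is later
    copied to the capsule. Tool name and flags are assumptions.
    """
    certs_tar = certs_tar or '/root/{0}-certs.tar'.format(capsule_fqdn)
    return (
        'capsule-certs-generate --capsule-fqdn "{0}" --certs-tar "{1}"'
        .format(capsule_fqdn, certs_tar)
    )

# A Fabric task (capsule_generate_cert) would just execute this remotely:
# from fabric.api import run
# run(capsule_certs_command('capsule01.example.com'))
```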

TypeError right after running satellite5_iso_install task

[dell-pe2950-01.lab.eng.rdu.redhat.com] out: Installation complete.
[dell-pe2950-01.lab.eng.rdu.redhat.com] out: Visit https://${FQDN} to create the Red Hat Satellite administrator account.
[dell-pe2950-01.lab.eng.rdu.redhat.com] out:

[dell-pe2950-01.lab.eng.rdu.redhat.com] run: yum -y update
[dell-pe2950-01.lab.eng.rdu.redhat.com] out: Loaded plugins: product-id, rhnplugin, security
[dell-pe2950-01.lab.eng.rdu.redhat.com] out: This system is receiving updates from RHN Classic or RHN Satellite.
[dell-pe2950-01.lab.eng.rdu.redhat.com] out: Setting up Update Process

[dell-pe2950-01.lab.eng.rdu.redhat.com] out: No Packages marked for Update
[dell-pe2950-01.lab.eng.rdu.redhat.com] out:

Traceback (most recent call last):
  File "/home/jenkins/shiningpanda/jobs/35403905/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/home/jenkins/shiningpanda/jobs/35403905/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 384, in execute
    multiprocessing
  File "/home/jenkins/shiningpanda/jobs/35403905/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 274, in _execute
    return task.run(*args, **kwargs)
  File "/home/jenkins/shiningpanda/jobs/35403905/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 174, in run
    return self.wrapped(*args, **kwargs)
  File "/home/jenkins/workspace/satellite5-installer/automation_tools/__init__.py", line 1011, in product_install
    )[host])
TypeError: 'NoneType' object is not iterable

product_install:satellite6-upstream - out: /bin/bash: katello-installer: command not found

I installed automation-tools on RHEL 6, and when I ran:

# RHN_USERNAME=<user> RHN_PASSWORD=<pass> RHN_POOLID=<pool> fab -H root@$( hostname ) product_install:satellite6-upstream

it failed with:

[...]
[<fqdn>] Executing task 'katello_installer'
[<fqdn>] run: katello-installer -d -v --foreman-admin-password="changeme" 
[<fqdn>] out: /bin/bash: katello-installer: command not found
[<fqdn>] out: 


Fatal error: run() received nonzero return code 127 while executing!

Requested: katello-installer -d -v --foreman-admin-password="changeme" 
Executed: /bin/bash -l -c "katello-installer -d -v --foreman-admin-password=\"changeme\" "

Aborting.
Disconnecting from <fqdn>... done.

Maybe I'm missing something. I wanted to follow the docs: http://automation-tools.readthedocs.org/en/latest/index.html#satellite-installation

setup_default_capsule should handle situations when an FQDN is not present

If you don't configure a server with a valid FQDN, then setup_default_capsule fails with the following stack trace:

Executing task 'setup_default_capsule'
Traceback (most recent call last):
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 384, in execute
    multiprocessing
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 274, in _execute
    return task.run(*args, **kwargs)
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 174, in run
    return self.wrapped(*args, **kwargs)
  File "/home/jenkins/workspace/satellite6-installer/automation_tools/__init__.py", line 968, in product_install
    execute(setup_default_capsule, host=host)
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 384, in execute
    multiprocessing
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 274, in _execute
    return task.run(*args, **kwargs)
  File "/home/jenkins/shiningpanda/jobs/90156090/virtualenvs/d41d8cd9/lib/python2.7/site-packages/fabric/tasks.py", line 174, in run
    return self.wrapped(*args, **kwargs)
  File "/home/jenkins/workspace/satellite6-installer/automation_tools/__init__.py", line 302, in setup_default_capsule
    domain = hostname.split('.', 1)[1]
IndexError: list index out of range
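A defensive split, as a sketch of what setup_default_capsule could do instead of indexing blindly (the helper name is illustrative):

```python
def split_fqdn(hostname):
    """Split a hostname into (shortname, domain).

    Raises a clear ValueError instead of an IndexError when the hostname
    has no domain part.
    """
    parts = hostname.split('.', 1)
    if len(parts) < 2:
        raise ValueError(
            'Expected a fully qualified domain name, got "{0}"; '
            'setup_default_capsule needs a hostname with a domain part.'
            .format(hostname)
        )
    return parts[0], parts[1]
```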

Make sure that `non-authorized` proxy installation passes expected arguments to installer

Yesterday I noticed that, when I chose the non-authorized proxy configuration, the installation completed and I was able to use the system, but whenever my Satellite system attempted to reach the CDN (after importing a manifest and trying to enable a Red Hat repository from the web UI), it failed to reach it. I'm not sure whether the squid server is at fault, but for what it's worth, only the port and url flags were passed to the installer, not the username and password.

Separate task for enabling/disabling SELinux

We have a total of 4 places in the code where we handle turning SELinux ON/OFF. We should create a specific task for this and update the code to use it, as it would simplify editing this feature's default behavior.
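A sketch of what the task's core could look like, written as a testable command-builder (a real Fabric task would run() each command on the host). The mode argument also anticipates making the SELinux mode configurable, defaulting to Enforcing:

```python
def selinux_commands(mode='enforcing'):
    """Commands to set SELinux both at runtime and persistently.

    Mode names follow the setenforce/sestatus conventions; the default
    is Enforcing.
    """
    mode = mode.lower()
    if mode not in ('enforcing', 'permissive', 'disabled'):
        raise ValueError('unknown SELinux mode: {0}'.format(mode))
    commands = [
        "sed -i 's/^SELINUX=.*/SELINUX={0}/' /etc/selinux/config".format(mode),
    ]
    if mode != 'disabled':
        # setenforce cannot switch to 'disabled' at runtime; that needs
        # the config change plus a reboot.
        commands.insert(0, 'setenforce {0}'.format(
            '1' if mode == 'enforcing' else '0'))
    return commands
```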

Provide 'update_rpms' (or something similar)

Provide the ability to do a system-wide package update at any point in the process. I presume this would be little more than a "yum -y update".

This has the added bonus, with a little help, of being usable when we have a new compose that we want to install atop an existing one (i.e., upgrades).
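A sketch under that assumption; the reboot flag is a guess at how the new-compose upgrade case might be handled:

```python
def update_rpms_command(reboot=False):
    """Build the shell command for a system-wide package update."""
    cmd = 'yum -y update'
    if reboot:
        # For the upgrade-atop-existing case a restart may be wanted.
        cmd += ' && reboot'
    return cmd

# In automation-tools this would be wrapped in a Fabric task:
# from fabric.api import run, task
# @task
# def update_rpms():
#     run(update_rpms_command())
```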

Nightly installation is failing in Jenkins, please fix it

For RHEL 6:

[10.8.29.200] run: katello-installer -d -v --foreman-admin-password="*****" --katello-proxy-url="http://ginger.lab.eng.rdu2.redhat.com" --katello-proxy-port="3128" --katello-proxy-password="r****" --katello-proxy-username="admin" 
[10.8.29.200] out: /bin/bash: katello-installer: command not found
[10.8.29.200] out: 

Fatal error: run() received nonzero return code 127 while executing!

For RHEL 7:
[10.8.30.35] run: usermod -aG dockerroot apache
[10.8.30.35] out: usermod: user 'apache' does not exist
[10.8.30.35] out: 

Fatal error: run() received nonzero return code 6 while executing!

Requested: usermod -aG dockerroot apache
Executed: /bin/bash -l -c "usermod -aG dockerroot apache"

epel repos need yum-plugin-fastestmirror package for mirrorlists to work

  1. katello/katello-deploy uses bootstrap-epel.repo to fetch the right repos automatically for each OS.
  2. The link in the above .repo file uses a mirror list, which in turn requires the yum-plugin-fastestmirror package from the rhel-optional repo.
  3. As we are using ./setup.rb, we need this fix for provisioning to succeed.

Put RHEL version logic into separate service() type function

Now that we are (or will be) supporting various service calls, it would probably be better to isolate the logic on whether to use SysV's 'service' as seen in RHEL 6 or systemd's 'systemctl' as seen in RHEL 7 into its own function.

Doing this would:

a) help avoid faulty logic and/or duplicated code every time a new service needs to be executed
b) help ensure that there is automation coverage for both RHEL versions rather than inadvertently missing one.

Logic might be something like this (run() is Fabric's; determine_distro stands in for however we detect the distro):

def os_service(daemon, state):
    rhelver = determine_distro()
    if rhelver == "rhel7":
        run('systemctl {0} {1}'.format(state, daemon))
    elif rhelver == "rhel6":
        run('service {0} {1}'.format(daemon, state))

Thus, subsequent automation could be triggered easily and mostly hassle-free, regardless of distro:

os_service('abrtd', 'start')
os_service('squid', 'restart')

Error while running errata_upgrade task

out: ****-config-system: rebooting at Mon Mar 2 14:29:11 EST 2015
out: Warning: run() received nonzero return code -1 while executing 'tail -f /var/log/***sd'!

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/usr/lib/python2.7/site-packages/fabric/tasks.py", line 384, in execute
    multiprocessing
  File "/usr/lib/python2.7/site-packages/fabric/tasks.py", line 274, in _execute
    return task.run(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/fabric/tasks.py", line 174, in run
    return self.wrapped(*args, **kwargs)
  File "/automation_tools/__init__.py", line 1393, in errata_upgrade
    sock.connect((env['host'], 22))
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 111] Connection refused

Provide ability to create new repofile after installing

Let's say we want to test an upgrade after installing a product (say, GA compose). The cycle might be something like

  • Install product
  • Run populate test
  • Point server to repo with latest compose
  • Install new updates and restart

If we get issue #71 resolved, it seems it could work something like

  • cdn_install
  • (automated population)
  • generate_compose_repofile
  • update_rpms

This would help catch dependency issues in rpm or the like, as well.

SELinux should be a flag for SELinux task

Assuming we implement #133, I would like to suggest making the SELinux mode an optional argument that can be passed from the command line. If no value is passed, we should default to Enforcing.

BASE_URL environmental variable not being properly set on repo file

It looks like BASE_URL for a downstream build is not being properly set on the repo file:

16:19:34 sharath: one other thing i noticed was, while using downstream the satellite.repo hostname was http://[EDITED]/latest-stable-Satellite-6.0-RHEL-/compose/Satellite/x86_64/os/
16:20:09 sharath: "latest-stable-Satellite-6.0-RHEL-"     instead of    "latest-stable-Satellite-6.0-RHEL-6"

Note that the OS_VERSION is missing from the repo file.
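A defensive URL builder that would have caught this, assuming the baseurl is assembled from a base URL plus the RHEL major version (the path shape follows the pasted URL above):

```python
def compose_base_url(base_url, os_version):
    """Build the downstream compose baseurl.

    Fails loudly when the OS version is unset instead of silently
    emitting a broken '...-RHEL-/compose/...' URL.
    """
    if not os_version:
        raise ValueError('OS_VERSION must be set to build the compose URL')
    return '{0}-RHEL-{1}/compose/Satellite/x86_64/os/'.format(
        base_url.rstrip('/'), os_version)
```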

Capsules: Implement child-side (capsule) prerequisites for setting up external capsules

Complement to #185

Implement the child-side (capsule) functionality in automation-tools that will allow us to complete a capsule install. There may be some functionality in here that can piggyback off existing automation-tools.

  • copy capsule cert to child
  • register target capsule(s) using content view
    • retrieve sat ca cert
    • register
  • install rpms on child
  • install product via capsule-installer
  • partition_disk
  • trigger sync on all content

Proposed namespace [and possible parameters]

  • capsule_copy_cert [cert name, child_server]
  • capsule_register [URL_for_ca_cert, parent_server, child_server]
  • capsule_install_rpms
  • capsule_installer
  • capsule_sync
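For the register step, a command-builder sketch. Registering the capsule via an activation key tied to the capsule content view is one plausible shape (the org/key names are illustrative):

```python
def capsule_register_command(org, activation_key):
    """Registration command for the capsule against the parent Satellite.

    Assumes registration happens via an activation key; org and key
    names here are placeholders.
    """
    return (
        'subscription-manager register --org="{0}" '
        '--activationkey="{1}"'.format(org, activation_key)
    )

# The cert-copy step could use Fabric's put() to push the parent-generated
# tarball to the capsule:
# from fabric.api import put
# put('capsule01-certs.tar', '/root/capsule01-certs.tar')
```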

Register only to the base repos and not point releases

The following code needs to be fixed: as I found in recent IRC discussions, we should be subscribed to either 6Server or 7Server, not to minor releases.

run(
    'subscription-manager register --force --user={0} --password={1} '
    '--release="{2}{3}" {4}'
    .format(
        os.environ['RHN_USERNAME'],
        os.environ['RHN_PASSWORD'],
        major_version,
        minor_version,
        '--autosubscribe' if autosubscribe else ''
    )
)
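A fixed version might drop minor_version and pin the base release, e.g. (a sketch; variable names follow the snippet above):

```python
import os

def register_command(major_version, autosubscribe=False):
    """subscription-manager registration pinned to the base release
    ('6Server'/'7Server') rather than a point release."""
    return (
        'subscription-manager register --force --user={0} --password={1} '
        '--release="{2}Server" {3}'.format(
            os.environ['RHN_USERNAME'],
            os.environ['RHN_PASSWORD'],
            major_version,
            '--autosubscribe' if autosubscribe else '',
        )
    )
```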

setup_vm_provisioning checks for a nested virtualization support prior to virtualization support itself

setup_vm_provisioning first checks for nested virtualization support on the machine by reading /sys/module/kvm_intel/parameters/nested.
If nested virtualization is not supported, it tries to enable it and prompts the user to reboot the host.
After that, it is supposed to proceed and check for virtualization support itself. That stage can never be reached when the host has no virtualization support at all, so I believe these checks should be swapped.
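The swapped order can be expressed as a pure check (the inputs are the file contents, so the logic is testable off-host):

```python
def check_virt_order(cpuinfo, nested_flag):
    """Verify host virtualization support before the nested-KVM knob.

    cpuinfo is the text of /proc/cpuinfo; nested_flag the content of
    /sys/module/kvm_intel/parameters/nested. Returns True when nested
    virtualization is already enabled.
    """
    if 'vmx' not in cpuinfo and 'svm' not in cpuinfo:
        # Bail out first: no point enabling nested KVM on a host
        # without hardware virtualization at all.
        raise RuntimeError('no hardware virtualization support (vmx/svm)')
    return nested_flag.strip() in ('Y', '1')
```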

Task to fetch information for a Beaker job

I want to start adding tasks that will allow us to handle Beaker tasks such as reserving systems, etc. For starters, given a Beaker job ID I would like to get the FQDN for the system and its status (i.e. Reserved, Running, Failed, etc.).
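A sketch of the parsing half. Fetching the XML would be something like `bkr job-results J:12345`; the `system` and `status` recipe attributes are assumptions about the results format:

```python
import xml.etree.ElementTree as ET

def parse_job_results(xml_text):
    """Extract (fqdn, status) pairs for every recipe in Beaker
    job-results XML."""
    root = ET.fromstring(xml_text)
    return [
        (recipe.get('system'), recipe.get('status'))
        for recipe in root.iter('recipe')
    ]
```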

refactor product_install task for Sat5

We need further refactoring to make the product_install task install Satellite 5.
Currently we used the iso_install task as a template for the sat5_iso_install task, so sat5_iso_install has to mimic what iso_install returns to product_install.
