openshift / community.okd
OKD/OpenShift collection for Ansible
Home Page: http://galaxy.ansible.com/community/okd
License: GNU General Public License v3.0
The openshift inventory plugin currently does not have any integration tests.
plugins/inventory/openshift.py
Somehow this plugin was accepted without proper integration tests. Go figure.
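A starting point for such a test (the file layout and target name here are assumptions, not the repo's actual structure) is a minimal inventory config that can be exercised against a live cluster with ansible-inventory:

```yaml
# tests/integration/targets/inventory_openshift/files/inventory.openshift.yml
# (hypothetical location) -- verified in a test script with:
#   ansible-inventory -i inventory.openshift.yml --list
plugin: community.okd.openshift
connections:
  - namespaces:
      - testing
```

The test would then assert that the expected pod, service, and route groups appear in the generated inventory.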
The k8s_auth
module that's currently part of the community.kubernetes
collection is essentially built for (and, I believe, only works with) OpenShift.
We can move that module into this repository, since its maintenance and testing really need to be done against an OpenShift cluster.
k8s_auth
N/A
How can Route-specific annotations be passed while creating a route on OCP?
Example:
haproxy.router.openshift.io/balance : source
openshift_route
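If the openshift_route module doesn't expose an option for this, one workaround (the names hello-world and testing are illustrative) is to create the Route resource directly and set metadata.annotations:

```yaml
- name: Create a route with a balance annotation (workaround via k8s)
  community.okd.k8s:
    definition:
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: hello-world
        namespace: testing
        annotations:
          haproxy.router.openshift.io/balance: source
      spec:
        to:
          kind: Service
          name: hello-world
```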
See related issues: https://github.com/ansible-collections/community.kubernetes/labels/openshift
Mostly, we want to get the OpenShift-specific plugins and code out of the community.kubernetes
collection and into this one. Once that's done, we can release a new major version of that collection, release 1.0.0
of this collection, and update the routing in the Ansible build scripts to pull at least the openshift
inventory source from this collection.
N/A
N/A
CRITICAL Idempotence test failed because of the following tasks:
This is the diff between the two runs of Create deployment config.
--- before.json 2021-09-20 16:47:39.459881578 -0400
+++ after.json 2021-09-20 16:46:41.527679534 -0400
@@ -4,20 +4,20 @@
"diff": {
"after": {
"metadata": {
- "generation": 20,
- "resourceVersion": "167809"
+ "generation": 21,
+ "resourceVersion": "167928"
},
"status": {
- "observedGeneration": 20
+ "observedGeneration": 21
}
},
"before": {
"metadata": {
- "generation": 19,
- "resourceVersion": "166980"
+ "generation": 20,
+ "resourceVersion": "167809"
},
"status": {
- "observedGeneration": 19
+ "observedGeneration": 20
}
}
},
@@ -29,7 +29,7 @@
"kind": "DeploymentConfig",
"metadata": {
"creationTimestamp": "2021-09-20T15:57:22Z",
- "generation": 20,
+ "generation": 21,
"managedFields": [
{
"apiVersion": "apps.openshift.io/v1",
@@ -163,7 +163,7 @@
],
"name": "hello-world",
"namespace": "testing",
- "resourceVersion": "167809",
+ "resourceVersion": "167928",
"uid": "1345760f-e069-4f9c-b3b6-28388cbcd8d2"
},
"spec": {
@@ -270,7 +270,7 @@
"message": "config change"
},
"latestVersion": 1,
- "observedGeneration": 20,
+ "observedGeneration": 21,
"readyReplicas": 1,
"replicas": 1,
"unavailableReplicas": 0,
Note:
While working on the 1.0.0 release in #51, we noticed a couple of issues.
downstream.sh
There is a reference to OKD
in the README that should probably be updated to OKD/OpenShift
instead. The version_added
in the openshift_route
module's embedded docs says 1.1 rather than 0.3, the next slated release of this collection.
Testing OpenShift functionality has additional, specialized requirements that go beyond standard Kubernetes CI testing. This meta-issue tracks the tasks for setting up the means of testing this collection's content against OpenShift.
[More tasks likely to be added]
When using community.okd.openshift_adm_groups_sync, the following error is logged: AttributeError: 'OpenshiftGroupsSync' object has no attribute '_OpenshiftGroupsSync__ldap_connection'
This is the task that fails:
- name: oc adm groups sync
community.okd.openshift_adm_groups_sync:
#api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
username: "{{ openshift_admin_username }}"
password: "{{ openshift_admin_password }}"
host: "{{ openshift_admin_url }}"
validate_certs: "{{ openshift_validate_certs }}"
type: ldap
sync_config:
kind: LDAPSyncConfig
apiVersion: v1
url: 'ldaps://{{ ipa_ldap_server }}'
insecure: true
validate_certs: true
#ca: ca.crt
bindDN: '{{ oauth_company_ipa_ldap_binddn }}'
bindPassword: '{{ oauth_company_ipa_ldap_bindpassword }}'
augmentedActiveDirectory:
groupsQuery:
derefAliases: 'never'
pageSize: '0'
groupUIDAttribute: 'dn'
groupNameAttributes: '[ cn ]'
usersQuery:
baseDN: "{{ ipa_ldap_basedn_users }}"
scope: 'sub'
derefAliases: 'never'
filter: '(objectclass=inetOrgPerson)'
pageSize: 0
userNameAttributes: '[ uid ]'
groupMembershipAttributes: '[ memberOf ]'
allow_groups:
- cn=openshift-cluster-admin,cn=groups,cn=accounts,dc=company,dc=net
ignore_errors: true
Any idea what is going on here?
Thanks.
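The mangled name in the error is a hint: Python rewrites double-underscore attributes as _ClassName__attr, so an attribute that was never assigned (for example, because the LDAP connection setup was skipped or failed earlier) surfaces under the mangled name. A minimal sketch, not the module's actual code:

```python
# Sketch of why '_OpenshiftGroupsSync__ldap_connection' appears: Python
# name-mangles double-underscore attributes, so accessing one that was
# never assigned raises AttributeError under the mangled name.
class OpenshiftGroupsSync:
    def close(self):
        # If nothing ever assigned self.__ldap_connection, this lookup fails.
        self.__ldap_connection.unbind()

sync = OpenshiftGroupsSync()  # __ldap_connection never set
try:
    sync.close()
except AttributeError as exc:
    print(exc)  # message mentions '_OpenshiftGroupsSync__ldap_connection'
```

This suggests the module hit an error path before establishing the LDAP connection, then tried to clean the connection up anyway.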
SUMMARY
When using openshift_process a MODULE FAILURE error is encountered.
ISSUE TYPE
COMPONENT NAME
Ansible module community.okd.openshift_process 1.1.2
ANSIBLE VERSION
$ ansible --version
ansible 2.9.17
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dmorrissette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
CONFIGURATION
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 10
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['~/repos/ansible/scripts/ansible-list-hosts.py']
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /etc/ansible/.vault_pass.txt
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore
OS / ENVIRONMENT
Red Hat Enterprise Linux release 8.3 (Ootpa)
Python 3.6.8
STEPS TO REPRODUCE
The following task definition is failing for me; I tried replacing the vars too, just in case. The okd.k8s module seems fine so far.
- name: Process migrator template
community.okd.openshift_process:
api_key: "{{ capps_deploy_token }}"
host: "{{ ocp_host }}"
name: migrator-template
namespace: "{{ capps_project }}"
parameters:
VERSION: "{{ capps_deploy_version }}"
state: rendered
validate_certs: no
EXPECTED RESULTS
To process the template or respond with a configuration error instead of a module failure error.
ACTUAL RESULTS
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py", line 102, in <module>
_ansiballz_main()
File "/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.community.okd.plugins.modules.openshift_process', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py", line 389, in <module>
File "/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py", line 385, in main
File "/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py", line 334, in execute_module
KeyError: 'message'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py\", line 102, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1620167971.6559699-404515-124202413047542/AnsiballZ_openshift_process.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.okd.plugins.modules.openshift_process', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py\", line 389, in <module>\n File \"/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py\", line 385, in main\n File \"/tmp/ansible_community.okd.openshift_process_payload_jrzkt864/ansible_community.okd.openshift_process_payload.zip/ansible_collections/community/okd/plugins/modules/openshift_process.py\", line 334, in execute_module\nKeyError: 'message'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
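The KeyError suggests execute_module indexes a 'message' key that isn't always present in the API response. A hypothetical reconstruction (the response dict below is illustrative, not the actual payload):

```python
# Hypothetical sketch: the traceback points at a bare dict index such as
# result["message"]; when the response carries no 'message' key, that
# index raises the KeyError seen above.
response = {"status": "Failure", "reason": "Invalid"}  # no 'message' key

try:
    msg = response["message"]  # what the traceback suggests happens today
except KeyError:
    msg = None

# A guarded lookup degrades gracefully instead of crashing the module:
msg = response.get("message", "template processing failed (no message in response)")
print(msg)
```

With a guarded lookup the module could report a configuration error instead of a MODULE FAILURE.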
We are going to release 1.0.1 with one major change: the inclusion of referenced docs via the downstream build script, to work around Automation Hub's current inability to install dependent collections as part of the import process that generates docs using ansible-doc.
See #57 for more context.
Releases
N/A
I am unable to add a users list entry to the non-custom Kubernetes resource named 'privileged' of kind SecurityContextConstraints without overwriting all existing users list entries, because strategic-merge does not work.
k8s
ansible 2.9.14
config file = /home/user/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Mar 20 2020, 17:08:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
None
/bin/pip3 show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.6/site-packages
Requires: jinja2, kubernetes, python-string-utils, ruamel.yaml, six
Kubernetes environment is Openshift:
Server Version: 4.3.35
Kubernetes Version: v1.16.2+7279a4a
The 'privileged' SecurityContextConstraints in the K8s environment contains a list of users. After executing the following task, the users list contains only one entry: "system:serviceaccount:myproject:default"
- name: Add default service account user to privileged SCC
k8s:
definition:
kind: SecurityContextConstraints
metadata:
name: privileged
users:
- "system:serviceaccount:myproject:default"
The task fails when explicitly specifying strategic-merge as the merge_type:
- name: Add default service account user to privileged SCC
k8s:
merge_type: strategic-merge
definition:
kind: SecurityContextConstraints
metadata:
name: privileged
users:
- "system:serviceaccount:myproject:default"
User "system:serviceaccount:myproject:default" is added to the list of users in the 'privileged' SCC, while keeping already listed users.
Either a list with only 1 user or the following exception when explicitly specifying strategic-merge.
fatal: [<remote_ip>]: FAILED! => {
"changed": false,
"error": 415,
"invocation": {
"module_args": {
"api_key": null,
"api_version": "v1",
"append_hash": false,
"apply": false,
"ca_cert": null,
"client_cert": null,
"client_key": null,
"context": null,
"force": false,
"host": null,
"kind": null,
"kubeconfig": null,
"merge_type": [
"strategic-merge"
],
"name": null,
"namespace": null,
"password": null,
"proxy": null,
"resource_definition": {
"apiVersion": "security.openshift.io/v1",
"kind": "SecurityContextConstraints",
"metadata": {
"name": "privileged"
},
"users": [
"system:serviceaccount:myproject:default"
]
},
"src": null,
"state": "present",
"username": null,
"validate": null,
"validate_certs": null,
"wait": false,
"wait_condition": null,
"wait_sleep": 5,
"wait_timeout": 120
}
},
"msg": "Failed to patch object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml\",\"reason\":\"UnsupportedMediaType\",\"code\":415}\\n'",
"reason": "Unsupported Media Type",
"status": 415
}
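Since the API rejects strategic-merge for this resource, a common workaround (a sketch, not confirmed as the module's intended usage) is to read the current list, append to it in Jinja, and apply the full list back:

```yaml
- name: Read the current privileged SCC
  k8s_info:
    api_version: security.openshift.io/v1
    kind: SecurityContextConstraints
    name: privileged
  register: scc

- name: Re-apply the SCC with the extra user appended
  k8s:
    definition:
      apiVersion: security.openshift.io/v1
      kind: SecurityContextConstraints
      metadata:
        name: privileged
      users: "{{ (scc.resources[0].users | default([]) + ['system:serviceaccount:myproject:default']) | unique }}"
```

This keeps the existing entries because the full users list is sent, at the cost of a read-modify-write cycle.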
The community.general
collection still has an oc
connection plugin that was never moved into this collection. We should probably move it here, then figure out whether it should be redirected to kubectl
(if they're effectively the same) or eventually moved into an OpenShift-specific collection.
https://github.com/ansible-collections/community.general/blob/main/plugins/connection/oc.py
oc
I pulled over this issue from the kubernetes repo: ansible-collections/community.kubernetes#174
Dear maintainers,
This is important for your collections!
In accordance with the Community decision, we have created the news-for-maintainers repository for announcements of changes impacting collection maintainers (see the examples) instead of Issue 45, which will be closed soon.
Please subscribe to it by clicking the Watch
button in the upper right corner on the repository's home page and selecting Issues
.
Also, we would like to remind you about the Bullhorn contributor newsletter, which has recently started to be released weekly. To learn what it looks like, see the past releases. Please subscribe and talk to the Community via Bullhorn!
Join us in #ansible-social (for news reporting & chat), #ansible-community (for discussing collection & maintainer topics), and other channels on Matrix/IRC.
Help the Community and the Steering Committee to make right decisions by taking part in discussing and voting on the Community Topics that impact the whole project and the collections in particular. Your opinion there will be much appreciated!
Thank you!
We are going to tag 0.3.0 as 1.0.0 and release it to Automation Hub as well, but as redhat.openshift.
Releases
N/A
This happens on ARO (Azure Red Hat OpenShift); I would assume the same behavior happens in other OCP 4.x clusters.
I believe the issue here is that we hardcode a /
instead of using urllib.parse.urljoin
or os.path.join
to deal with redundant slashes.
The following Ansible tasks
- name: check cluster
azure_rm_openshiftmanagedcluster_info:
resource_group: "{{ azr_resource_group }}"
name: "{{ azr_aro_cluster }}"
register: _aro_cluster
- name: get credentials
azure_rm_openshiftmanagedcluster_credentials_info:
resource_group: "{{ azr_resource_group }}"
name: "{{ azr_aro_cluster }}"
register: _aro_creds
- debug:
var: _aro_cluster
- set_fact:
kube_api: "{{ _aro_cluster.clusters.properties.apiserverProfile.url }}"
ocp_console: "{{ _aro_cluster.clusters.properties.consoleProfile.url }}"
kube_username: "{{ _aro_creds.credentials.kubeadminUsername }}"
kube_password: "{{ _aro_creds.credentials.kubeadminPassword }}"
# no_log: true
- name: get access token from openshift
community.okd.openshift_auth:
host: "{{ kube_api }}"
username: "{{ kube_username }}"
password: "{{ kube_password }}"
register: openshift_auth_results
results in
fatal: [localhost]: FAILED! => changed=false
msg: Couldn't find OpenShift's OAuth API
req_method: GET
req_reason: Forbidden
req_status_code: 403
req_url: https://api.xxxxx.eastus.aroapp.io:6443//.well-known/oauth-authorization-server
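The doubled slash in req_url is exactly what naive concatenation produces when the ARO API URL already ends in a slash. A sketch of the difference (the host below is illustrative, since the real URL is redacted above):

```python
from urllib.parse import urljoin

# A trailing slash on the API host plus a leading slash on the
# well-known path yields the doubled slash seen in the error.
host = "https://api.example.eastus.aroapp.io:6443/"
path = "/.well-known/oauth-authorization-server"

naive = host + path          # ...:6443//.well-known/... (the failing URL)
clean = urljoin(host, path)  # ...:6443/.well-known/...  (single slash)
print(naive)
print(clean)
```

urljoin treats the absolute path as replacing the base path, so the redundant slash disappears regardless of whether the host has a trailing slash.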
Module(s) for automating the lifecycle of builds in OpenShift
TBD
This implementation would only support v2 of the build API, which is currently under development.
Issue moved from: ansible-collections/community.kubernetes#10
Currently, the tests run by ansible-test
(e.g. ansible-test integration --docker -v --color
) are running on OpenShift 3.9.0 (openshift/origin:v3.9.0
), which has been out of support for some time.
The current release of OpenShift 3 is 3.11, though the Docker Hub image for that version is 2 years old (https://hub.docker.com/r/openshift/origin/tags), and also uses Kubernetes 1.11 as a base, which has not been supported upstream for some time either (see the K8s version skew policy).
It would be good to use a supported image for the CI environment's k8s cluster testing. For Operator SDK, there's the bsycorp/kind
image (see tags). There's also the official SIG kind project, which I've successfully used on other Ansible testing projects (example).
Ideally, we would have something from CRC or OKD that's equivalent to the single-container approach in openshift/origin
, but I'm not sure if there's any timeline for something CI/local-friendly for OpenShift 4.x.
tests / CI
N/A
The downstream script, which changes some things for the AH release, overwrites the repo: key value, changing it to a repo that does not exist (redhat.openshift) when in fact it should remain the community.okd repo URL. This also affects the issues: key.
downstream.sh
n/a
n/a
All environments affected
run the downstream script and it will change the key
repo url stays the same
repo key altered to https://github.com/ansible-collections/redhat.openshift/issues
Dear maintainers,
Could anyone please:
Also there were 2 other announcements recently that need maintainer attention:
When you create PRs, could you please put something like Relates to <corresponding_issue>
so that it'll be easier for me to track repos that fixed CI (but please don't put Fixes ...
as it'll close the issue).
Looking forward to your feedback,
Thanks!
Currently community.okd requires kubernetes.core 2.1.x or 2.2.x, but Ansible 5 and 6 have included kubernetes.core 2.3.0 since Ansible 5.6.0 (released on April 5th). This effectively breaks community.okd in Ansible 5.6.0 and should be fixed ASAP. Right now ansible-core does not check whether the correct versions are installed, but ansible-galaxy does, so manually installing the collections from Ansible 5.6.0 with ansible-galaxy collection install
is not possible (when using https://github.com/ansible-community/ansible-build-data/blob/main/5/galaxy-requirements.yaml), which in particular makes it impossible to build an Execution Environment for Ansible 5.6.0 (ansible-community/images#22). The next Ansible 5 release is supposed to happen on April 26th; it would be great if this could be fixed by then.
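One possible fix (the exact bounds are an assumption, not a decided policy) is to widen the dependency pin in galaxy.yml so that the 2.3.x series is admitted while still excluding the next major version:

```yaml
# galaxy.yml (fragment) -- widened pin, upper bound excludes 3.x
dependencies:
  kubernetes.core: '>=2.0.1,<3.0.0'
```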
As a convenience for users, all OKD modules should let them use module_defaults to feed values to common parameters. This should be consistent with the approach found in the kubernetes collection:
https://github.com/ansible-collections/community.kubernetes/blob/main/meta/runtime.yml
meta/runtime.yml
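A sketch of what this could look like in a playbook, assuming meta/runtime.yml defines an action group covering the collection's modules (the group name okd and the host/token values here are assumptions):

```yaml
- hosts: localhost
  module_defaults:
    group/community.okd.okd:
      host: https://api.example.com:6443
      api_key: "{{ api_token }}"
      validate_certs: false
  tasks:
    - name: Connection options are inherited from module_defaults
      community.okd.k8s:
        kind: Namespace
        name: testing
        state: present
```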
The downstream build process changes the links for issues and repository in the galaxy.yml file to point to a non-existent redhat.openshift repo. We'll need to copy the file, but not change the link. This problem is manifest in the link to the repo/issues on AH. Also, the link should be updated to point to the new repo, though there is at least a redirect in place.
Create a module for creating and effectively managing OpenShift DeploymentConfigs in a straightforward, idempotent way.
plugins/modules/k8s.py
While a DeploymentConfig could be created and managed through the k8s
module, you have to manage the relationships to associated resources like ImageStreams in your play. This module encodes and encapsulates that logic for managing these relationships and does so in a safe and idempotent way.
This feature request is based on feedback from the field regarding the management of DeploymentConfigs in Operators using Ansible where, incorrectly handled, one can create a recursive reconciliation loop.
UPDATE: This feature request has been updated to add logic to the k8s
module rather than a dedicated module, based on feedback in the comments here.
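For reference, creating a DeploymentConfig through the generic k8s module today looks roughly like this (the names and image are illustrative); the relationships to associated ImageStreams still have to be managed separately in the play:

```yaml
- name: Create a DeploymentConfig (associated ImageStreams managed by hand)
  community.okd.k8s:
    definition:
      apiVersion: apps.openshift.io/v1
      kind: DeploymentConfig
      metadata:
        name: hello-world
        namespace: testing
      spec:
        replicas: 1
        selector:
          app: hello-world
        template:
          metadata:
            labels:
              app: hello-world
          spec:
            containers:
              - name: hello-world
                image: image-registry.openshift-image-registry.svc:5000/testing/hello-world:latest
        triggers:
          - type: ConfigChange
```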
I am trying to backup a postgres DB running on openshift. I have an inventory file using the community.okd.openshift
plugin and am able to list all the hosts in my namespace using this config.
@all:
|[email protected]_6443:
| |--@namespace_ansible-db-test:
| | |--@namespace_ansible-db-test_pods:
| | | |--postgresql-1-deploy_deployment
| | | |--postgresql-1-zvk9k_postgresql
| | |--@namespace_ansible-db-test_routes:
| | |--@namespace_ansible-db-test_services:
| | | |--postgresql
|--@label_deployment_postgresql-1:
| |--postgresql-1-zvk9k_postgresql
|--@label_deploymentconfig_postgresql:
| |--postgresql-1-zvk9k_postgresql
|--@label_name_postgresql:
| |--postgresql-1-zvk9k_postgresql
|--@label_openshift.io/deployer-pod-for.name_postgresql-1:
| |--postgresql-1-deploy_deployment
|--@label_template.openshift.io/template-instance-owner_ffc16a7a-5243-48a4-993a-5b61e3e8ea92:
| |--postgresql
|--@label_template_postgresql-ephemeral-template:
| |--postgresql
|--@ungrouped:
But when I try to ping the pod or even run a debug message, the run fails in the gather_facts step. This is the message that I get.
<postgresql-1-zvk9k_postgresql> ESTABLISH oc CONNECTION
<postgresql-1-zvk9k_postgresql> EXEC ['/usr/local/bin/oc', '-n', 'ansible-db-test', 'exec', '-i', 'postgresql-1-zvk9k', '-c', 'postgresql', '--', '/bin/sh', '-c', '/bin/sh -c \'( umask 77 && mkdir -p "` echo /tmp/ `"&& mkdir "` echo /tmp/ansible-tmp-1649114380.5317209-68473-280544582421228 `" && echo ansible-tmp-1649114380.5317209-68473-280544582421228="` echo /tmp/ansible-tmp-1649114380.5317209-68473-280544582421228 `" ) && sleep 0\'']
fatal: [postgresql-1-zvk9k_postgresql]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/ `\"&& mkdir \"` echo /tmp/ansible-tmp-1649114380.5317209-68473-280544582421228 `\" && echo ansible-tmp-1649114380.5317209-68473-280544582421228=\"` echo /tmp/ansible-tmp-1649114380.5317209-68473-280544582421228 `\" ), exited with result 1",
"unreachable": true
}
This is my inventory file
plugin: community.okd.openshift
connections:
- host: <REDACTED>
api_key: "<REDACTED>"
verify_ssl: false
namespaces:
- ansible-db-test
And this is my playbook
---
- name: Backup Databases
hosts: namespace_ansible-db-test:&label_name_postgresql
tasks:
- name: debug message on consul-0 pod
debug:
var: hostvars[inventory_hostname]
- shell: hostname
Ansible configuration
pi$ ansible --version
ansible 2.10.17
config file = None
configured module search path = ['/Users/pi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/pi/.pyenv/versions/miniconda3-latest/envs/rnoc/lib/python3.9/site-packages/ansible
executable location = /Users/pi/.pyenv/versions/miniconda3-latest/envs/rnoc/bin/ansible
python version = 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ]
The strange thing is that when I run the playbook after logging in using oc login
, it runs perfectly fine.
Am I missing something here? I thought the connection encapsulated the login flow as well, and that we should be able to use the host as is.
OpenShift Template resources are reported to have "quirks" in how they are implemented in the APIs. To shield users from having to deal with this, logic needs to be added to the k8s
module in this collection to handle Template resources as a developer would expect.
plugins/modules/k8s.py
This issue is to track the preparation and publishing of 0.1.0 of this collection to Galaxy. It is a community-supported release containing only the OpenShift parts that have been extracted from the community.kubernetes collection.
Unable to install the okd collection: ERROR! Failed to find collection community.okd:0.1.0
Requirements.yaml:
---
collections:
- name: community.kubernetes
version: 1.0.0
- name: community.okd
version: 0.1.0
ansible 2.9.11
config file = /Users/test/Documents/REPO/deployment-automation/ansible.cfg
configured module search path = ['/Users/test/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.11/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.6
COLLECTIONS_PATHS(/Users/test/Documents/REPO/deployment-automation/ansible.cfg) = ['/Users/test/Documents/REPO/deployment-automation/collections']
Mac OS Catalina
requirements.yml file
ansible-galaxy collection install -r requirements.yml
The OKD collection should be installed:
Process install dependency map
ERROR! Failed to find collection community.okd:0.1.0
The molecule tests for openshift_auth
have the host hardcoded to https://kubernetes.default.svc
, which is the FQDN for the API server when running as a pod in the cluster. We should make the task able to determine whether it's running in-cluster or not, and have it determine the proper host. Ideally, this can be done via the planned community.kubernetes.k8s_cluster_info
module.
tests
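A common in-cluster check (a sketch, not the collection's code; the fallback host is an assumption for local runs) looks for the service account token that Kubernetes mounts into every pod:

```python
import os

# Kubernetes mounts a service account token into pods by default and
# injects KUBERNETES_SERVICE_HOST, so their presence is a reasonable
# signal that we're running inside a cluster.
SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def running_in_cluster():
    return os.path.exists(SA_TOKEN_PATH) and "KUBERNETES_SERVICE_HOST" in os.environ

def auth_host(override=None):
    # An explicit override always wins; otherwise pick the in-cluster FQDN
    # or a local development endpoint (the local port is an assumption).
    if override:
        return override
    return "https://kubernetes.default.svc" if running_in_cluster() else "https://127.0.0.1:8443"
```

The molecule task could set the host from such a lookup instead of hardcoding the in-cluster FQDN.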
Update: See this comment for the current issues.
When uploading version 1.0.0 of the downstream redhat.openshift
collection, we ran into a few errors.
Error running ansible-doc: cmd="/usr/bin/env ANSIBLE_COLLECTIONS_PATHS=/tmp/tmp58gaersy ansible-doc --type module --json redhat.openshift.openshift_route redhat.openshift.openshift_auth redhat.openshift.k8s redhat.openshift.k8s_auth redhat.openshift.openshift_process" returncode="1" b"[WARNING]: Failed to create the directory '/.ansible': [Errno 13] Permission\ndenied: b'/.ansible'\nERROR! module redhat.openshift.openshift_route missing documentation (or could not parse documentation): unknown doc_fragment(s) in file /tmp/tmp58gaersy/ansible_collections/redhat/openshift/plugins/modules/openshift_route.py: kubernetes.core.k8s_auth_options, kubernetes.core.k8s_wait_options, kubernetes.core.k8s_state_options\n"
...
ERROR: Found 1 shebang issue(s) which need to be resolved:
ERROR: ci/incluster_integration.sh:1:1: unexpected non-module shebang: b'#!/bin/bash'
See documentation for help: https://docs.ansible.com/ansible/2.9/dev_guide/testing/sanity/shebang.html
Running sanity test 'shellcheck'
ERROR: Found 8 shellcheck issue(s) which need to be resolved:
ERROR: ci/downstream.sh:35:21: SC2068: Double quote array expansions to avoid re-splitting elements.
ERROR: ci/incluster_integration.sh:14:1: SC2034: component appears unused. Verify it or export it.
ERROR: ci/incluster_integration.sh:15:12: SC2086: Double quote to prevent globbing and word splitting.
ERROR: ci/incluster_integration.sh:19:23: SC2086: Double quote to prevent globbing and word splitting.
ERROR: ci/incluster_integration.sh:21:23: SC2086: Double quote to prevent globbing and word splitting.
ERROR: ci/incluster_integration.sh:24:12: SC2086: Double quote to prevent globbing and word splitting.
ERROR: ci/incluster_integration.sh:55:82: SC2086: Double quote to prevent globbing and word splitting.
ERROR: ci/incluster_integration.sh:63:80: SC2086: Double quote to prevent globbing and word splitting.
See documentation for help: https://docs.ansible.com/ansible/2.9/dev_guide/testing/sanity/shellcheck.html
We should also see if we can get the errors to reproduce in our own automated tests so we don't run into similar errors in the future.
ALSO, I just noticed that the repo
that ends up being referenced is https://github.com/ansible-collections/redhat.openshift, and the issue tracker is https://github.com/ansible-collections/redhat.openshift/issues, neither of which currently exists.
Automation Hub downstream release.
Related: #51
See release log: https://cloud.redhat.com/ansible/automation-hub/redhat/openshift/import-log?version=1.0.0 (requires authentication).
I just hit a case where perform_action returns None, which triggers a traceback. I'm using kubernetes.core 2.1.1.
TASK [Create a project] ********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'NoneType' object is not subscriptable
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/opt/ansible/.ansible/tmp/ansible-tmp-1632244793.0376081-183-47114061504571/AnsiballZ_k8s.py\", line 102, in <module>\n _ansiballz_main()\n File \"/opt/ansible/.ansible/tmp/ansible-tmp-1632244793.0376081-183-47114061504571/AnsiballZ_k8s.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/opt/ansible/.ansible/tmp/ansible-tmp-1632244793.0376081-183-47114061504571/AnsiballZ_k8s.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.okd.plugins.modules.k8s', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.okd.k8s_payload_wsyicxux/ansible_community.okd.k8s_payload.zip/ansible_collections/community/okd/plugins/modules/k8s.py\", line 317, in <module>\n File \"/tmp/ansible_community.okd.k8s_payload_wsyicxux/ansible_community.okd.k8s_payload.zip/ansible_collections/community/okd/plugins/modules/k8s.py\", line 313, in main\n File \"/tmp/ansible_community.okd.k8s_payload_wsyicxux/ansible_community.okd.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 524, in execute_module\nTypeError: 'NoneType' object is not subscriptable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
We have a commented-out test in molecule/default/tasks/openshift_auth.yml
for the state: absent
case: the token was not revoked, and so the test failed. We should determine why the token was not revoked, and whether that's a bug or expected behavior. We should also consider adding additional output when revocation fails, so that users can determine whether it's a real error or not.
community.okd.openshift_auth
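The state: absent case under test presumably looks something like this (a sketch based on the module's documented options; the variable names are assumptions):

```yaml
- name: Revoke the previously obtained access token
  community.okd.openshift_auth:
    host: "{{ openshift_host }}"
    api_key: "{{ openshift_auth_results.openshift_auth.api_key }}"
    state: absent
  register: revoke_result
```

A follow-up task could then attempt an authenticated request with the revoked token and assert that it is refused.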
The kubeconfig contains information for the current context and, if the user is logged in,
it also contains the token the user is logged in with. The modules should have
a way to look up that token. It is the equivalent of oc whoami -t
Lookup Plugins - k8s, k8s_info
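The lookup itself is a small traversal of the kubeconfig structure. A sketch (not the collection's implementation) operating on an already-parsed kubeconfig mapping:

```python
# Given a parsed kubeconfig mapping, return the current context's user
# token -- the equivalent of `oc whoami -t`.
def kubeconfig_token(cfg):
    current = cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == current)
    user = next(u["user"] for u in cfg["users"] if u["name"] == ctx["user"])
    return user.get("token")  # None if the user entry has no token

# Minimal illustrative kubeconfig (values are made up):
cfg = {
    "current-context": "dev",
    "contexts": [{"name": "dev", "context": {"cluster": "c1", "user": "alice"}}],
    "users": [{"name": "alice", "user": {"token": "sha256~abc123"}}],
}
print(kubeconfig_token(cfg))  # sha256~abc123
```

A real lookup plugin would additionally load the file from KUBECONFIG or ~/.kube/config and parse the YAML before applying this traversal.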
We'd like to release the 0.3.0 version of this collection.
We will need to go through past PRs and make sure they are all accounted for in changelog fragments, then I'll coordinate its release into Galaxy.
This release will serve as an "alpha" of 1.0.0.
Releases
N/A
This connection plugin currently exists in the community.general collection. With active development moving to this collection, this plugin should be migrated once a release of community.okd is made, according to the documented procedures.
Once a release of community.okd is available:
I'm trying to patch the default storage class to disable it, as shown below, but when doing so the annotations are not applied. Perhaps I'm doing it wrong. Also, I'm not sure whether this is an issue in the upstream k8s modules that we redirect to.
(The equivalent oc command is: oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
)
❯ oc get storageclass -o json standard
{
"allowVolumeExpansion": true,
"apiVersion": "storage.k8s.io/v1",
"kind": "StorageClass",
"metadata": {
"annotations": {
"storageclass.kubernetes.io/is-default-class": "true"
},
"creationTimestamp": "2022-05-09T08:15:10Z",
"name": "standard",
"resourceVersion": "44960",
"uid": "a4b9a1a4-f66a-41d8-9cca-18f486f76945"
},
"provisioner": "kubernetes.io/cinder",
"reclaimPolicy": "Delete",
"volumeBindingMode": "WaitForFirstConsumer"
}
- name: Disable default storage class
community.okd.k8s:
state: patched
definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
annotations:
"storageclass.kubernetes.io/is-default-class": "false"
TASK [Disable default storage class] ***************************************************************************************************************************************************
redirecting (type: action) community.okd.k8s to kubernetes.core.k8s_info
redirecting (type: action) community.okd.k8s to kubernetes.core.k8s_info
ok: [localhost] => changed=false
method: patch
result:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: 'true'
creationTimestamp: '2022-05-09T08:15:10Z'
managedFields:
- apiVersion: storage.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:allowVolumeExpansion: {}
f:metadata:
f:annotations: {}
f:provisioner: {}
f:reclaimPolicy: {}
f:volumeBindingMode: {}
manager: Go-http-client
operation: Update
time: '2022-05-09T08:15:10Z'
- apiVersion: storage.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:storageclass.kubernetes.io/is-default-class: {}
manager: kubectl-patch
operation: Update
time: '2022-05-09T09:24:22Z'
name: standard
resourceVersion: '44960'
uid: a4b9a1a4-f66a-41d8-9cca-18f486f76945
provisioner: kubernetes.io/cinder
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
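An untested workaround sketch: an RFC 6902 JSON patch via kubernetes.core.k8s_json_patch (available in kubernetes.core >= 2.0.0) bypasses strategic-merge handling and may apply the annotation where the merge above does not. Whether it avoids the reported behavior is an assumption. Note the ~1 escape for the / in the annotation key:

```yaml
# Untested sketch; not confirmed against the bug reported above.
- name: Disable default storage class via JSON patch
  kubernetes.core.k8s_json_patch:
    api_version: storage.k8s.io/v1
    kind: StorageClass
    name: standard
    patch:
      - op: replace
        # "/" inside the annotation key is escaped as "~1" per RFC 6901
        path: /metadata/annotations/storageclass.kubernetes.io~1is-default-class
        value: "false"
```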
See kubernetes collection contributing guide for a template: https://github.com/ansible-collections/community.kubernetes/blob/main/CONTRIBUTING.md
contributing guide
N/A
When I use community.okd.k8s, I get a warning message.
[WARNING]: class KubernetesRawModule is deprecated and will be removed in 2.0.0. Please use K8sAnsibleMixin instead.
community.okd.k8s
2.10.3
N/A
macOS/Linux
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- redhat.openshift.k8s:
name: testing
api_version: v1
kind: Namespace
state: present
Task should work, no errors or warnings.
 _____________________________
< TASK [redhat.openshift.k8s] >
 -----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
[WARNING]: class KubernetesRawModule is deprecated and will be removed in 2.0.0. Please use K8sAnsibleMixin instead.
changed: [127.0.0.1]
Create a module that emulates the logic of the oc expose command in an Ansible-native way.
plugins/modules/openshift_route.py
One could create the necessary route interfaces to something like a service using a series of lookups and templatized k8s resource declarations without this module. The idea is to make that easier and more efficient here by abstracting automation developers away from the need to understand and handle it themselves. This also has the benefit of making Ansible plays more concise and readable.
UPDATE: It was noted that there is a kubectl expose command that oc expose builds upon. Further, it was suggested that we refrain from overloading the term "expose", given its different implementations in Kubernetes and OpenShift, and instead opt for terms that describe the objects they modify. We will create this module first and look to create related module(s) in the kubernetes collection with only the native functionality.
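To make the proposal concrete, here is a sketch of what a task using this module might look like. The parameter names mirror oc expose flags and are assumptions until the interface is settled; the service and hostname are hypothetical:

```yaml
# Sketch of a possible interface, not the final module contract.
- name: Expose the hello-world service as an edge-terminated route
  community.okd.openshift_route:
    namespace: default
    service: hello-world              # hypothetical service to expose
    hostname: hello.apps.example.com  # hypothetical route host
    termination: edge
    state: present
```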
I wanted to note this in a new bug report just because I tend to run things directly on my Mac, and the ball of wax that is the downstream build process seems to be a breeding ground for Linux/bash-specific issues, with sed especially.
When I run make downstream-test-sanity on my Mac, I get a few strange errors like:
ERROR: Found 3 pep8 issue(s) which need to be resolved:
ERROR: plugins/modules/k8s.py:442:1: E265: block comment should start with '# '
ERROR: plugins/modules/openshift_process.py:386:1: E265: block comment should start with '# '
ERROR: plugins/modules/openshift_route.py:475:1: E265: block comment should start with '# '
(And some others that made me scratch my head.) It looks like the sed commands near the end of f_handle_doc_fragments_workaround() in downstream.sh end up not rewriting the contents of the file on my Mac, but rather prepending the generated docs on top of the existing file...
Current workaround is to build the downstream artifact on a Linux OS instead of on macOS, or to install GNU sed on the Mac.
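The likely portability pitfall: GNU sed accepts a bare -i for in-place editing, while BSD (macOS) sed requires a backup-suffix argument, so a bare -i on macOS swallows the next token and misbehaves. Writing the suffix attached (-i.bak) is accepted by both. A small sketch with a throwaway file:

```shell
# demo.txt is a scratch file created just for this sketch.
printf 'old\n' > demo.txt
# -i.bak (no space) works with both GNU and BSD sed; the original is
# preserved as demo.txt.bak and can be deleted afterwards.
sed -i.bak -e 's/old/new/' demo.txt
cat demo.txt   # prints "new"
```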
Downstream build.
N/A
N/A
macOS Mojave
make downstream-test-sanity
Downstream sanity tests pass, as they do in CI.
Downstream sanity tests do not pass.
--config doesn't seem to be a flag for oc 4.8.11, though I'm seeing it in documentation.
❯ oc version
Client Version: 4.8.11
Kubernetes Version: v1.21.4+6438632
❯ oc --config
Error: unknown flag: --config
See 'oc --help' for usage.
❯ oc --kubeconfig
Error: flag needs an argument: --kubeconfig
See 'oc --help' for usage.
❯ podman run -it --rm openshift/origin-cli:v3.11.0 oc --config
Error: flag needs an argument: --config
opened a docs issue as well openshift/openshift-docs#42594
Develop a module for OpenShift specific resources that can be managed in a declarative way. This k8s
module would be analogous to the community.kubernetes.k8s
module in the community.kubernetes collection, but optimized for managing declarative resources such as Project
on OpenShift systems. It was decided to re-use the k8s
module name here so users need only switch the namespace for openshift-optimized functions rather than dealing with a search-and-replace on the module name throughout their content.
Theoretically this module could be used to manage native Kubernetes resources, but that use is not advised and will not be supported (tested).
k8s
Ansible once included a module called oc, named after the OpenShift command line tool. This module is not a continuation of that one and therefore should not be named oc, to avoid confusion or the expectation that the module is meant to be compatible with that command line tool. This is consistent with the k8s module, which is not called "kubectl" for similar reasons.
This module should be built upon the openshift-restclient-python library.
Imperative functions of the oc
command line tool would be implemented over time as purpose specific modules.
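As a sketch of the intended usage, assuming the interface stays compatible with community.kubernetes.k8s (the project name is hypothetical):

```yaml
# Sketch only: assumes k8s-compatible parameters; project name is made up.
- name: Ensure an OpenShift Project exists
  community.okd.k8s:
    state: present
    definition:
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        name: example-project
```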
When using k8s_info module to get select resources under a namespace, none of kind "Route" are returned.
k8s_info
$ ansible --version
ansible 2.9.13
config file = None
configured module search path = ['/home/ngillett/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ngillett/dev/python3/lib/python3.7/site-packages/ansible
executable location = /home/ngillett/dev/python3/bin/ansible
python version = 3.7.9 (default, Aug 20 2020, 14:07:20) [GCC 10.2.1 20200723 (Red Hat 10.2.1-1)]
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 32 (Thirty Two)
Release: 32
Codename: ThirtyTwo
Create route resources in OpenShift, use k8s_info to fetch them.
- name: Find existing resources
k8s_info:
api_key: "{{ exodus_gw_ocp_token }}"
host: "{{ exodus_gw_ocp_host }}"
namespace: "{{ exodus_gw_ocp_namespace }}"
kind: "{{ item }}"
ca_cert: "{{ exodus_gw_ca_cert }}"
register: exodus_gw_resources
with_items:
- Pod
- Service
- ImageStream
- DeploymentConfig
- ReplicationController
- NetworkPolicy
- Route
Given that route resources exist under the namespace, they should be listed by the above.
$ oc get --namespace=exodus-gw-qa route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
exodus-gw-qa.apps.ocp.prod.psi.redhat.com-host exodus-gw-qa.apps.ocp.prod.psi.redhat.com exodus-gw http edge None
exodus-gw.qa.psi.redhat.com-cname exodus-gw.qa.psi.redhat.com exodus-gw http edge None
ok: [exodus-gw-qa-project] => (item=Route) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_version": "v1",
"ca_cert": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",
"client_cert": null,
"client_key": null,
"context": null,
"field_selectors": [],
"host": "https://api.ocp.prod.psi.redhat.com:6443",
"kind": "Route",
"kubeconfig": null,
"label_selectors": [],
"name": null,
"namespace": "exodus-gw-qa",
"password": null,
"proxy": null,
"username": null,
"validate_certs": null
}
},
"item": "Route",
"resources": []
}
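The empty result lines up with the api_version: v1 default visible in module_args above: the core v1 API group has no Route kind, since OpenShift Routes live in the route.openshift.io group. A sketch of the task with the group specified, which should return them (variable names as in the report; the register name here is new):

```yaml
- name: Find existing routes
  k8s_info:
    api_version: route.openshift.io/v1   # Route's API group on OCP 4
    kind: Route
    api_key: "{{ exodus_gw_ocp_token }}"
    host: "{{ exodus_gw_ocp_host }}"
    namespace: "{{ exodus_gw_ocp_namespace }}"
    ca_cert: "{{ exodus_gw_ca_cert }}"
  register: exodus_gw_routes
```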
This issue is to track preparation and publishing of 0.2.0 of this collection to Galaxy.
This feature is to supplement Template resources in OpenShift. (Template resource support is being handled through #22.) The k8s module can handle the standard CRUD operations for managing Template resources. A supplementary module is needed to handle the parts of the OpenShift Templates implementation that cannot be expressed via the generic k8s interface.
Since these additional operations correspond pretty much exactly to the oc process command, this module will be called openshift_process. This naming will also help differentiate Template resources from these imperative operations.
This module would basically accept either a template name, file, or definition, and the parameters that should be set, and will properly send that request to the API server.
plugins/modules/openshift_process.py
Here is an example of what a task would look like using this proposed module:
community.okd.openshift_process:
name: nginx-example
namespace: openshift # only needed if using a template already on the server
parameters:
NAMESPACE: default
NAME: test123
GITHUB_WEBHOOK_SECRET: '{{ gh_secret }}'
Need to create a downstream release script and make target.
Makefile
We are running sanity tests across every collection included in the Ansible community package (as part of this issue) and found that ansible-test sanity --docker
against community.okd 2.1.0 fails with ansible-core 2.13.0rc1 in ansible 6.0.0a2.
n/a
ansible [core 2.13.0rc1]
2.1.0
ansible-test sanity --docker
Tests are either passing or ignored.
ERROR: Found 5 import issue(s) on python 3.10 which need to be resolved:
ERROR: plugins/connection/oc.py:150:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: plugins/inventory/openshift.py:116:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: plugins/module_utils/k8s.py:10:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: plugins/module_utils/openshift_process.py:9:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: plugins/modules/openshift_route.py:319:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: Found 5 validate-modules issue(s) which need to be resolved:
ERROR: plugins/connection/oc.py:0:0: invalid-documentation: DOCUMENTATION.author: Invalid author for dictionary value @ data['author']. Got ['xuxinkun']
ERROR: plugins/connection/oc.py:0:0: invalid-documentation: DOCUMENTATION.connection: extra keys not allowed @ data['connection']. Got 'oc'
ERROR: plugins/connection/oc.py:0:0: invalid-documentation: DOCUMENTATION.name: required key not provided @ data['name']. Got None
ERROR: plugins/inventory/openshift.py:0:0: invalid-documentation: DOCUMENTATION.author: Invalid author for dictionary value @ data['author']. Got ['Chris Houseknecht <@chouseknecht>']
ERROR: plugins/inventory/openshift.py:0:0: invalid-documentation: DOCUMENTATION.plugin_type: extra keys not allowed @ data['plugin_type']. Got 'inventory'
ERROR: The 2 sanity test(s) listed below (out of 43) failed. See error output above for details.
import --python 3.10
validate-modules
ERROR: Command "podman exec ansible-test-controller-W0Mik85U /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/community/okd LC_ALL=en_US.UTF-8 /usr/bin/python3.10 /root/ansible/bin/ansible-test sanity --containers '{}' --skip-test pylint --metadata tests/output/.tmp/metadata-hmfuy9w6.json --truncate 0 --color no --host-path tests/output/.tmp/host-6n79qpjg" returned exit status 1.
We are happy to announce that the registration for the Ansible Contributor Summit is open!
This is a great opportunity for interested people to meet, discuss related topics, share their stories and opinions, get the latest important updates and just to hang out together.
There will be different announcements & presentations by Community, Core, Cloud, Network, and other teams.
Current contributors will be happy to share their stories and experience with newcomers.
There will be links to interactive, self-paced Instruqt scenarios shared during the event that help newcomers learn different aspects of development.
Online on Matrix and Youtube. Tuesday, April 12, 2022, 12:00 - 20:00 UTC.
Add the event to your calendar. Use the ical URL (for example, in Google Calendar "Add other calendars" > "Import from URL") instead of importing the .ics file so that any updates to the event will be reflected in your calendar.
Check out the Summit page:
We are looking forward to seeing you! :)
Implement a Probot/stale bot consistent with ansible-collections/community.kubernetes#53.
Organization / maintenance
N/A