ansible-collections / google.cloud


GCP Ansible Collection https://galaxy.ansible.com/google/cloud

Home Page: https://cloud.google.com

License: GNU General Public License v3.0

Python 99.69% Jinja 0.19% Shell 0.12%
ansible googlecloudplatform gcp devops ansible-collection

google.cloud's Introduction

Google Cloud Platform Ansible Collection

This collection provides a series of Ansible modules and plugins for interacting with the Google Cloud Platform (GCP).

This collection works with Ansible 2.16+

Communication

  • Join the Ansible forum:

    • Get Help: get help or help others. Please use appropriate tags, for example cloud.
    • Social Spaces: gather and interact with fellow enthusiasts.
    • News & Announcements: track project-wide announcements including social events.
  • The Ansible Bullhorn newsletter: used to announce releases and important changes.

For more information about communication, see the Ansible communication guide.

Installation

ansible-galaxy collection install google.cloud
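
Once installed, modules can be referenced by their fully qualified collection names. Below is a minimal, hedged sketch of what usage looks like; the project ID and credentials path are placeholders, not values taken from this repository:

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: List Compute Engine addresses in a region
      google.cloud.gcp_compute_address_info:
        region: us-central1
        project: my-project-id                    # placeholder
        auth_kind: serviceaccount
        service_account_file: /path/to/key.json   # placeholder
      register: address_list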

Resources Supported

  • App Engine FirewallRule (gcp_appengine_firewall_rule, gcp_appengine_firewall_rule_info)
  • BigQuery Dataset (gcp_bigquery_dataset, gcp_bigquery_dataset_info)
  • BigQuery Table (gcp_bigquery_table, gcp_bigquery_table_info)
  • Cloud Bigtable Instance (gcp_bigtable_instance, gcp_bigtable_instance_info)
  • Cloud Build Trigger (gcp_cloudbuild_trigger, gcp_cloudbuild_trigger_info)
  • Cloud Functions CloudFunction (gcp_cloudfunctions_cloud_function, gcp_cloudfunctions_cloud_function_info)
  • Cloud Scheduler Job (gcp_cloudscheduler_job, gcp_cloudscheduler_job_info)
  • Cloud Tasks Queue (gcp_cloudtasks_queue, gcp_cloudtasks_queue_info)
  • Compute Engine Address (gcp_compute_address, gcp_compute_address_info)
  • Compute Engine Autoscaler (gcp_compute_autoscaler, gcp_compute_autoscaler_info)
  • Compute Engine BackendBucket (gcp_compute_backend_bucket, gcp_compute_backend_bucket_info)
  • Compute Engine BackendService (gcp_compute_backend_service, gcp_compute_backend_service_info)
  • Compute Engine RegionBackendService (gcp_compute_region_backend_service, gcp_compute_region_backend_service_info)
  • Compute Engine Disk (gcp_compute_disk, gcp_compute_disk_info)
  • Compute Engine Firewall (gcp_compute_firewall, gcp_compute_firewall_info)
  • Compute Engine ForwardingRule (gcp_compute_forwarding_rule, gcp_compute_forwarding_rule_info)
  • Compute Engine GlobalAddress (gcp_compute_global_address, gcp_compute_global_address_info)
  • Compute Engine GlobalForwardingRule (gcp_compute_global_forwarding_rule, gcp_compute_global_forwarding_rule_info)
  • Compute Engine HttpHealthCheck (gcp_compute_http_health_check, gcp_compute_http_health_check_info)
  • Compute Engine HttpsHealthCheck (gcp_compute_https_health_check, gcp_compute_https_health_check_info)
  • Compute Engine HealthCheck (gcp_compute_health_check, gcp_compute_health_check_info)
  • Compute Engine InstanceTemplate (gcp_compute_instance_template, gcp_compute_instance_template_info)
  • Compute Engine Image (gcp_compute_image, gcp_compute_image_info)
  • Compute Engine Instance (gcp_compute_instance, gcp_compute_instance_info)
  • Compute Engine InstanceGroup (gcp_compute_instance_group, gcp_compute_instance_group_info)
  • Compute Engine InstanceGroupManager (gcp_compute_instance_group_manager, gcp_compute_instance_group_manager_info)
  • Compute Engine RegionInstanceGroupManager (gcp_compute_region_instance_group_manager, gcp_compute_region_instance_group_manager_info)
  • Compute Engine InterconnectAttachment (gcp_compute_interconnect_attachment, gcp_compute_interconnect_attachment_info)
  • Compute Engine Network (gcp_compute_network, gcp_compute_network_info)
  • Compute Engine NetworkEndpointGroup (gcp_compute_network_endpoint_group, gcp_compute_network_endpoint_group_info)
  • Compute Engine NodeGroup (gcp_compute_node_group, gcp_compute_node_group_info)
  • Compute Engine NodeTemplate (gcp_compute_node_template, gcp_compute_node_template_info)
  • Compute Engine RegionAutoscaler (gcp_compute_region_autoscaler, gcp_compute_region_autoscaler_info)
  • Compute Engine RegionDisk (gcp_compute_region_disk, gcp_compute_region_disk_info)
  • Compute Engine RegionUrlMap (gcp_compute_region_url_map, gcp_compute_region_url_map_info)
  • Compute Engine RegionHealthCheck (gcp_compute_region_health_check, gcp_compute_region_health_check_info)
  • Compute Engine ResourcePolicy (gcp_compute_resource_policy, gcp_compute_resource_policy_info)
  • Compute Engine Route (gcp_compute_route, gcp_compute_route_info)
  • Compute Engine Router (gcp_compute_router, gcp_compute_router_info)
  • Compute Engine Snapshot (gcp_compute_snapshot, gcp_compute_snapshot_info)
  • Compute Engine SslCertificate (gcp_compute_ssl_certificate, gcp_compute_ssl_certificate_info)
  • Compute Engine Reservation (gcp_compute_reservation, gcp_compute_reservation_info)
  • Compute Engine SslPolicy (gcp_compute_ssl_policy, gcp_compute_ssl_policy_info)
  • Compute Engine Subnetwork (gcp_compute_subnetwork, gcp_compute_subnetwork_info)
  • Compute Engine TargetHttpProxy (gcp_compute_target_http_proxy, gcp_compute_target_http_proxy_info)
  • Compute Engine TargetHttpsProxy (gcp_compute_target_https_proxy, gcp_compute_target_https_proxy_info)
  • Compute Engine RegionTargetHttpProxy (gcp_compute_region_target_http_proxy, gcp_compute_region_target_http_proxy_info)
  • Compute Engine RegionTargetHttpsProxy (gcp_compute_region_target_https_proxy, gcp_compute_region_target_https_proxy_info)
  • Compute Engine TargetInstance (gcp_compute_target_instance, gcp_compute_target_instance_info)
  • Compute Engine TargetPool (gcp_compute_target_pool, gcp_compute_target_pool_info)
  • Compute Engine TargetSslProxy (gcp_compute_target_ssl_proxy, gcp_compute_target_ssl_proxy_info)
  • Compute Engine TargetTcpProxy (gcp_compute_target_tcp_proxy, gcp_compute_target_tcp_proxy_info)
  • Compute Engine TargetVpnGateway (gcp_compute_target_vpn_gateway, gcp_compute_target_vpn_gateway_info)
  • Compute Engine UrlMap (gcp_compute_url_map, gcp_compute_url_map_info)
  • Compute Engine VpnTunnel (gcp_compute_vpn_tunnel, gcp_compute_vpn_tunnel_info)
  • Google Kubernetes Engine Cluster (gcp_container_cluster, gcp_container_cluster_info)
  • Google Kubernetes Engine NodePool (gcp_container_node_pool, gcp_container_node_pool_info)
  • Cloud DNS ManagedZone (gcp_dns_managed_zone, gcp_dns_managed_zone_info)
  • Cloud DNS ResourceRecordSet (gcp_dns_resource_record_set, gcp_dns_resource_record_set_info)
  • Filestore Instance (gcp_filestore_instance, gcp_filestore_instance_info)
  • Cloud IAM Role (gcp_iam_role, gcp_iam_role_info)
  • Cloud IAM ServiceAccount (gcp_iam_service_account, gcp_iam_service_account_info)
  • Cloud IAM ServiceAccountKey (gcp_iam_service_account_key, gcp_iam_service_account_key_info)
  • Cloud Key Management Service KeyRing (gcp_kms_key_ring, gcp_kms_key_ring_info)
  • Cloud Key Management Service CryptoKey (gcp_kms_crypto_key, gcp_kms_crypto_key_info)
  • Cloud (Stackdriver) Logging Metric (gcp_logging_metric, gcp_logging_metric_info)
  • ML Engine Model (gcp_mlengine_model, gcp_mlengine_model_info)
  • ML Engine Version (gcp_mlengine_version, gcp_mlengine_version_info)
  • Cloud Pub/Sub Topic (gcp_pubsub_topic, gcp_pubsub_topic_info)
  • Cloud Pub/Sub Subscription (gcp_pubsub_subscription, gcp_pubsub_subscription_info)
  • Memorystore (Redis) Instance (gcp_redis_instance, gcp_redis_instance_info)
  • Resource Manager Project (gcp_resourcemanager_project, gcp_resourcemanager_project_info)
  • Runtime Configurator Config (gcp_runtimeconfig_config, gcp_runtimeconfig_config_info)
  • Runtime Configurator Variable (gcp_runtimeconfig_variable, gcp_runtimeconfig_variable_info)
  • Service Usage Service (gcp_serviceusage_service, gcp_serviceusage_service_info)
  • Cloud Source Repositories Repository (gcp_sourcerepo_repository, gcp_sourcerepo_repository_info)
  • Cloud Spanner Instance (gcp_spanner_instance, gcp_spanner_instance_info)
  • Cloud Spanner Database (gcp_spanner_database, gcp_spanner_database_info)
  • Cloud SQL Instance (gcp_sql_instance, gcp_sql_instance_info)
  • Cloud SQL Database (gcp_sql_database, gcp_sql_database_info)
  • Cloud SQL User (gcp_sql_user, gcp_sql_user_info)
  • Cloud SQL SslCert (gcp_sql_ssl_cert, gcp_sql_ssl_cert_info)
  • Cloud Storage Bucket (gcp_storage_bucket, gcp_storage_bucket_info)
  • Cloud Storage BucketAccessControl (gcp_storage_bucket_access_control, gcp_storage_bucket_access_control_info)
  • Cloud Storage DefaultObjectACL (gcp_storage_default_object_acl, gcp_storage_default_object_acl_info)
  • Cloud TPU Node (gcp_tpu_node, gcp_tpu_node_info)
  • Secret Manager (gcp_secret_manager)
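
Most resources above come in pairs: a module that manages the resource and an _info module that queries it. A rough, hedged sketch of how such a pair is typically used (project ID and credentials path are placeholders):

- name: Create a network
  google.cloud.gcp_compute_network:
    name: example-network                       # placeholder
    auto_create_subnetworks: true
    project: my-project-id                      # placeholder
    auth_kind: serviceaccount
    service_account_file: /path/to/key.json     # placeholder
    state: present

- name: Look the network up again
  google.cloud.gcp_compute_network_info:
    filters:
      - name = example-network
    project: my-project-id                      # placeholder
    auth_kind: serviceaccount
    service_account_file: /path/to/key.json     # placeholder
  register: network_facts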

google.cloud's People

Contributors

alancoding, bcoca, c2thorn, chrisst, danawillow, dcostakos, dmsimard, drebes, emilymye, ericsysmin, jefferbrecht, kustodian, lucasboisserie, megan07, modular-magician, nat-henderson, nkakouros, nonoctis, rambleraptor, rebecca-pete, rileykarson, ritmas, rmoriar1, s-hertel, safaci2000, sirgitsalot, sjshuck, slevenick, thedoubl3j, toumorokoshi


google.cloud's Issues

gcp_storage_bucket fails with unhelpful error message

SUMMARY

I'm trying to execute an Ansible task that creates multiple buckets. It used to work with Ansible 2.9 in July/August 2020.

I came back to my playbook last week to add a few buckets, and the same code with the same Ansible version no longer worked.

It's possible I'm missing or misconfiguring some of the variables or SSH keys, but the error message is unclear and doesn't help me move forward.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_storage_bucket

ANSIBLE VERSION
ansible 2.10.3
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0]
CONFIGURATION
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

This is happening on Ubuntu 20.04 and 20.10 from my workstation with the relevant keys appropriately passed in.

STEPS TO REPRODUCE

The variables are correctly set, and the task returns ok for items that already exist. However, it returns an error message that doesn't clarify which value is invalid and offers almost no help in resolving the problem.

- name: create buckets with CORS
  gcp_storage_bucket:
    name: "{{ item }}-{{ project_prefix }}"
    project: "{{ project }}"
    auth_kind: serviceaccount
    location: us-central-1
    service_account_file: "{{ gcp_cred_file }}"
    state: present
    cors:
      - max_age_seconds: 3600
        method:
          - GET
          - HEAD
          - DELETE
        origin: "*"
        response_header: "Content-Type"
  with_items:
    - new-bucket
    - existing-bucket-1
    - existing-bucket-2
EXPECTED RESULTS

I expect to get [ok] or [modified] returns for all listed entries.

ACTUAL RESULTS

It returns [ok] for the existing items.

It returns this error when trying to create new-bucket:

TASK [base_platform_vm : create buckets with CORS] *******************************************************************************************************************************************************************************************************************
task path: /home/john/workspace/tools2/vm_provisioning/roles/base_platform_vm/tasks/main.yml:520
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: john
<localhost> EXEC /bin/sh -c 'echo ~john && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090 `" && echo ansible-tmp-1605000552.5869176-3137747-26869247795090="` echo /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090 `" ) && sleep 0'
redirecting (type: modules) ansible.builtin.gcp_storage_bucket to google.cloud.gcp_storage_bucket
Using module file /home/john/.ansible/collections/ansible_collections/google/cloud/plugins/modules/gcp_storage_bucket.py
<localhost> PUT /home/john/.ansible/tmp/ansible-local-3135940lxzin2mr/tmplfs6xfm_ TO /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090/AnsiballZ_gcp_storage_bucket.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090/ /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090/AnsiballZ_gcp_storage_bucket.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090/AnsiballZ_gcp_storage_bucket.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1605000552.5869176-3137747-26869247795090/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_gcp_storage_bucket_payload_l7h6mofg/ansible_gcp_storage_bucket_payload.zip/ansible_collections/google/cloud/plugins/module_utils/gcp_utils.py", line 312, in raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
failed: [localhost] (item=ground-truth) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "acl": null,
            "auth_kind": "serviceaccount",
            "cors": [
                {
                    "max_age_seconds": 3600,
                    "method": [
                        "GET",
                        "HEAD",
                        "DELETE"
                    ],
                    "origin": [
                        "*"
                    ],
                    "response_header": [
                        "Content-Type"
                    ]
                }
            ],
            "default_event_based_hold": null,
            "default_object_acl": null,
            "env_type": null,
            "labels": null,
            "lifecycle": null,
            "location": "us-central-1",
            "logging": null,
            "metageneration": null,
            "name": "ground-truth-preprod",
            "owner": null,
            "predefined_default_object_acl": null,
            "project": "project-key",
            "scopes": [
                "https://www.googleapis.com/auth/devstorage.full_control"
            ],
            "service_account_contents": null,
            "service_account_email": null,
            "service_account_file": "/home/john/.ssh/key.json",
            "state": "present",
            "storage_class": null,
            "versioning": null,
            "website": null
        }
    },
    "item": "ground-truth",
    "msg": "GCP returned error: {'error': {'code': 400, 'message': 'Invalid Value', 'errors': [{'message': 'Invalid Value', 'domain': 'global', 'reason': 'invalid'}]}}"
}

Multiple errors when trying to generate GCE inventory via Inventory Plugin

SUMMARY

When trying to generate a Google Compute Engine inventory via the Inventory Plugin, I get multiple errors.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

Inventory Plugin

ANSIBLE VERSION
ansible 2.9.14
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/cheo/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0]
CONFIGURATION
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = [u'host_list', u'virtualbox', u'yaml', u'constructed']
OS / ENVIRONMENT

Ubuntu 18.04.5 LTS
Linux 4.15.0-118-generic #119-Ubuntu SMP x86_64

STEPS TO REPRODUCE

ansible-inventory -i inventory-gcp_compute.yml --graph

plugin: gcp_compute
zones: # populate inventory with instances in these zones
  - us-central1-a
projects:
  - vpn-server-sasp
#filters:
  #- machineType = n1-standard-1
  #- scheduling.automaticRestart = true AND machineType = n1-standard-1
service_account_file: /home/cheo/sergio/ansible-gce/vpn-server-sasp-d5e0d0f06446.json
auth_kind: serviceaccount
scopes:
  - 'https://www.googleapis.com/auth/cloud-platform'
  - 'https://www.googleapis.com/auth/compute.readonly'
keyed_groups:
  # Create groups from GCE labels
  - prefix: gcp
    key: labels
hostnames:
  # List host by name instead of the default public ip
  - name
compose:
  # Set an inventory parameter to use the Public IP address to connect to the host
  # For Private ip use "networkInterfaces[0].networkIP"
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP
EXPECTED RESULTS

A list of GCE instances

ACTUAL RESULTS
ansible-inventory 2.9.14
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/cheo/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible-inventory
  python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as it did not pass its verify_file() method
virtualbox declined parsing /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as it did not pass its verify_file() method
[WARNING]:  * Failed to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory
  File "/usr/lib/python2.7/dist-packages/ansible/inventory/manager.py", line 280, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
    raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]:  * Failed to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml with constructed plugin: Incorrect plugin name in file: gcp_compute
  File "/usr/lib/python2.7/dist-packages/ansible/inventory/manager.py", line 280, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/constructed.py", line 109, in parse
    self._read_config_data(path)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/__init__.py", line 224, in _read_config_data
    raise AnsibleParserError("Incorrect plugin name in file: %s" % config.get('plugin', 'none found'))
[WARNING]: Unable to parse /home/cheo/sergio/ansible-gce/inventory-gcp_compute.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available

GCE inventory plugin env var override

SUMMARY

Hi,

I'm using the gcp_compute plugin to retrieve my instances on Google Cloud. In my CD pipeline, the location of the service account file is not the same as the one described in the inventory file (local vs. CD). But the env var GCP_SERVICE_ACCOUNT_FILE does not override the value described in the service_account_file entry.

Run

GCP_SERVICE_ACCOUNT_FILE="/Users/me/real-account.json" ansible-inventory -i inventories/project-id.gcp.yml --list

The result is:

[WARNING]:  * Failed to parse /Users/me/code/ansible/inventories/project-id.gcp.yml with gcp_compute plugin: [Errno 2] No such file or directory: '/Users/me/.fake-file-account.json'

In notes

Environment variables values will only be used if the playbook values are not set.

So maybe this is not a bug but the intended behavior.
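
Given that precedence, one hedged workaround sketch is to leave service_account_file out of the inventory file entirely, so that the GCP_SERVICE_ACCOUNT_FILE environment variable is actually consulted:

---
plugin: gcp_compute
projects:
  - project-id
auth_kind: serviceaccount
# service_account_file intentionally omitted; GCP_SERVICE_ACCOUNT_FILE is used instead
keyed_groups:
  - prefix: fr
    separator: '-'
    key: tags['items']
hostnames:
  - name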

ISSUE TYPE
  • Bug Report
COMPONENT NAME

plugins/inventory/gcp_compute.py

ANSIBLE VERSION
ansible 2.8.15
  config file = /Users/me/code/MeilleursAgents/MA-Infra/ansible/ansible.cfg
  configured module search path = ['/Users/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/me/.venvs/p3-ma-infrar/lib/python3.7/site-packages/ansible
  executable location = /Users/me/.venvs/p3-ma-infrar/bin/ansible
  python version = 3.7.3 (default, Mar  6 2020, 22:34:30) [Clang 11.0.3 (clang-1103.0.32.29)]
CONFIGURATION
CACHE_PLUGIN(/Users/me/code/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/me/code/ansible/ansible.cfg) = ~/.ansible-fact-cache
CACHE_PLUGIN_TIMEOUT(/Users/me/code/ansible/ansible.cfg) = 3600
DEFAULT_FILTER_PLUGIN_PATH(/Users/me/code/ansible/ansible.cfg) = ['/Users/me/code/ansible/filter_plugins']
DEFAULT_MANAGED_STR(/Users/me/code/ansible/ansible.cfg) = Managed by Ansible
DEFAULT_ROLES_PATH(/Users/me/code/ansible/ansible.cfg) = ['/Users/me/code/ansible/roles']
DEFAULT_VAULT_IDENTITY_LIST(env: ANSIBLE_VAULT_IDENTITY_LIST) = ['pass.txt']
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = pass.txt
HOST_KEY_CHECKING(/Users/me/code/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/me/code/ansible/ansible.cfg) = auto
INVENTORY_ENABLED(/Users/me/code/ansible/ansible.cfg) = ['gcp_compute', 'script', 'ini']
RETRY_FILES_ENABLED(/Users/me/code/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/me/code/ansible/ansible.cfg) = ignore
OS / ENVIRONMENT
Mac OS 10.15.6
STEPS TO REPRODUCE

Inventory

---
plugin: gcp_compute
projects:
  - project-id
auth_kind: serviceaccount
# Fake path
service_account_file: ~/.fake-file-account.json
keyed_groups:
  - prefix: fr
    separator: '-'
    key: tags['items']
hostnames:
  - name
GCP_SERVICE_ACCOUNT_FILE="/Users/me/real-account.json" ansible-inventory -i inventories/project-id.gcp.yml --list
EXPECTED RESULTS

The env var GCP_SERVICE_ACCOUNT_FILE should override the value described in the file.

ACTUAL RESULTS
[WARNING]:  * Failed to parse /Users/me/code/ansible/inventories/project-id.gcp.yml with gcp_compute plugin: [Errno 2] No such file or directory: '/Users/me/.fake-file-account.json'

Link ansible/ansible#71683

Line breaks in documentation strings cause problems

SUMMARY

This line doesn't look like it's getting parsed as expected:

- The Cloud Pub/Sub service\naccount associated with the enclosing subscription's

There's another example or two.

Basically, this isn't a raw string, so Python itself isn't going to differentiate between line breaks that are intended to be part of the YAML entry and actual line breaks (this happens before any YAML parsing).

The auto-generator tool should probably sanitize these out.

I've tried to figure out what the effect is with ansible-doc, and rather oddly it just doesn't show this option instead of reporting an explicit error, but I think some YAML error is being produced there.
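
As a hypothetical illustration of the effect: once the "\n" becomes a literal newline, the rendered DOCUMENTATION entry ends up with an unindented continuation line, which is what makes the YAML invalid.

description:
- The Cloud Pub/Sub service
account associated with the enclosing subscription's parent project.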

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION

most recent commit, 4944f92

CONFIGURATION

defaults

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE

use ansible-doc, see behavior I mentioned with missing full options from this module

or try to parse module's DOCUMENTATION with yaml

EXPECTED RESULTS

valid yaml

ACTUAL RESULTS

not valid yaml

Making this a raw string seems to fix it.

Apply Resource Policy to Compute Instance

SUMMARY

Hello, I hope you all are doing well.

The Ansible collection does not allow creating an instance under a resource policy. I currently have to use the gcloud SDK to bring my VMs into a COLLOCATED placement policy, which a resource policy provides. Ansible can create both the resource policy and the instance, but there is no way for them to work together, which breaks my heart. I don't mind installing gcloud solely to connect these components, but this feature would be a better way: provide some means to apply a resource policy to a VM instance, for example a field in the instance module that attaches a resource policy during creation.

This PR tried to complete the feature but ran into quota errors (#194). I'm able to apply resource policies to instances while consistently avoiding the quota errors I encountered, using a workaround I gathered from the documentation. It seems to me that quota errors should not stop this feature request.

Thank you.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

gcp_compute_resource_policy field for google.cloud.gcp_compute_instance module.

ADDITIONAL INFORMATION

Forums about quota error mentioned above :
https://groups.google.com/g/gce-discussion/c/asJA71NDkZM/m/2qon8TQZBQAJ
https://groups.google.com/g/gce-discussion/c/Lfyk38giqK8
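
For illustration only, the request amounts to something like the sketch below; resource_policies is a hypothetical parameter that does not exist in google.cloud.gcp_compute_instance at the time of this report, and the other values are placeholders:

- name: Launch an instance under a placement policy (hypothetical)
  google.cloud.gcp_compute_instance:
    name: my-instance                           # placeholder
    machine_type: c2-standard-4                 # placeholder
    zone: us-central1-a                         # placeholder
    project: my-project-id                      # placeholder
    auth_kind: serviceaccount
    service_account_file: /path/to/key.json     # placeholder
    disks:
      - auto_delete: true
        boot: true
        initialize_params:
          source_image: projects/debian-cloud/global/images/family/debian-11   # placeholder
    resource_policies:                          # hypothetical field requested in this issue
      - projects/my-project-id/regions/us-central1/resourcePolicies/my-collocated-policy
    state: present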

'NoneType' object has no attribute 'get'

From @ColFouks on Mar 24, 2020 08:33

SUMMARY

The gcp dynamic inventory fails.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute

ANSIBLE VERSION
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/anatolii_kaliuzhnyi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['gcp_compute']
OS / ENVIRONMENT

Red Hat Enterprise Linux Server release 7.7 (Maipo)

STEPS TO REPRODUCE

Enable gcp_compute plugin in the inventory section of ansible.cfg:

[inventory]
enable_plugins = gcp_compute
Make inventory.gcp.yml file:

plugin: gcp_compute
projects:
  - test-project-124
keyed_groups:
  - prefix: gcp
    key: project
auth_kind: serviceaccount
service_account_file: key.json

Try to create graph of inventories:

ansible-inventory -i inventory.gcp.yml --graph

EXPECTED RESULTS

Graph of inventories

ACTUAL RESULTS

[WARNING]: Unable to parse inventory.gcp.yml as an inventory source

[WARNING]: No inventory was parsed, only implicit localhost is available

@all:
  |--@ungrouped:

Copied from original issue: ansible/ansible#68423

gcp_iam_service_account_key - TypeError: 'NoneType' object is not subscriptable

SUMMARY

I'm receiving an error when running the gcp_iam_service_account_key module. It creates an empty JSON key file and the task fails. Rerunning the task after deleting the empty file works and the file is created successfully.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_iam_service_account_key

ANSIBLE VERSION
python 3.7.4
ansible 2.9.6
ansible 2.9.9
CONFIGURATION
ANSIBLE_SSH_CONTROL_PATH(/Users/bluo/repos/ansiblesite/ansible.cfg) = /tmp/%%h-%%r
DEFAULT_CALLBACK_WHITELIST(/Users/bluo/repos/ansiblesite/ansible.cfg) = ['datadog_callback']
DEFAULT_GATHERING(/Users/bluo/repos/ansiblesite/ansible.cfg) = smart
DEFAULT_MODULE_PATH(/Users/bluo/repos/ansiblesite/ansible.cfg) = ['/Users/bluo/repos/ansiblesite/playbooks/library']
HOST_KEY_CHECKING(/Users/bluo/repos/ansiblesite/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/Users/bluo/repos/ansiblesite/ansible.cfg) = /Users/bluo/.ansible/retry
OS / ENVIRONMENT

macOS Catalina 10.15.5

STEPS TO REPRODUCE

Run the playbook as directed in the instructions. The last task fails; delete the empty file and rerun the task via the debugger, and the file is created successfully.

- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    auth_kind: serviceaccount
    sa_json: ~/.gcp_service_admin.json
  vars_prompt:
    - name: project_name
      private: no

  tasks:
    - name: get project name and append random numbers
      set_fact:
        project_id: "{{ project_name | lower | replace (' ', '') }}-{{ 100000 | random }}"
        service_account_display_name: "{{ project_name.replace(' ', '-') | lower }}"

    - name: create GCP project
      gcp_resourcemanager_project:
        name: "{{ project_name }}"
        id: "{{ project_id }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ sa_json }}"
        parent:
          type: organization
          id: "REDACTED"
        state: present
      debugger: always

    - name: create service account
      gcp_iam_service_account:
        name: "sa-{{ service_account_display_name }}@{{ project_id }}.iam.gserviceaccount.com"
        display_name: "{{ service_account_display_name }}"
        project: "{{ project_id }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ sa_json }}"
        state: present
      register: serviceaccount
      debugger: always

    - name: create service account key
      gcp_iam_service_account_key:
        service_account: "{{ serviceaccount }}"
        private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
        path: "~/sa-{{ service_account_display_name }}.json"
        project: "{{ project_id }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ sa_json }}"
        state: present
      debugger: on_failed
EXPECTED RESULTS

Create a GCP project, create a service account, create a service account key, and have it saved on my computer.

ACTUAL RESULTS

empty ~/sa-{{ service_account_display_name }}.json file and the task fails

The full traceback is:
Traceback (most recent call last):
  File "/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py", line 102, in <module>
    _ansiballz_main()
  File "/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_iam_service_account_key', init_globals=None, run_name='__main__', alter_sys=True)
  File "/Users/bluo/anaconda3/lib/python3.7/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/Users/bluo/anaconda3/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/Users/bluo/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py", line 279, in <module>
  File "/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py", line 201, in main
  File "/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py", line 213, in create
TypeError: 'NoneType' object is not subscriptable
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/Users/bluo/.ansible/tmp/ansible-tmp-1592427610.3913631-4932-201653545614182/AnsiballZ_gcp_iam_service_account_key.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_iam_service_account_key', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/Users/bluo/anaconda3/lib/python3.7/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/Users/bluo/anaconda3/lib/python3.7/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/Users/bluo/anaconda3/lib/python3.7/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py\", line 279, in <module>\n  File \"/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py\", line 201, in main\n  File \"/var/folders/fz/m4s7z4p53fn_384hq601kn380000gn/T/ansible_gcp_iam_service_account_key_payload_b83tndte/ansible_gcp_iam_service_account_key_payload.zip/ansible/modules/cloud/google/gcp_iam_service_account_key.py\", line 213, in create\nTypeError: 'NoneType' object is not subscriptable\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

gcp_container_cluster kubectl config malformed yaml

SUMMARY

When using the gcp_container_cluster module with the kubectl_path argument, the output YAML contains PyYAML object tags. The resulting YAML file is invalid when tags are present and cannot be properly read by kubectl.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

This bug is present in the gcp_container_cluster module.

ANSIBLE VERSION
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/tatemz/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 9.2.1 20191008]
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
- name: Create a GKE cluster
  gcp_container_cluster:
    name: my-gke-cluster
    initial_node_count: 1
    location: us-central1-a
    project: myproject-id
    auth_kind: serviceaccount
    service_account_file: "/path/to/service/account.json"
    state: present
    kubectl_path: "/tmp/kubectl.config"
EXPECTED RESULTS

The output Kube config should have valid cluster server endpoints.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERRT_DATA
    server: 'https://12.345.678.90'
  name: my-gke-cluster
...
ACTUAL RESULTS

The invalid output Kube config may look like below (note the clusters[0].cluster.server).

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERRT_DATA
    server: !!python/unicode 'https://12.345.678.90'
  name: my-gke-cluster
...

HA VPN modules missing

SUMMARY

HA VPN modules should be here since this pull request has been merged:
#309

But checking the code that was changed by the pull request, there are only changes to the gcp_compute_vpn_tunnel.py module.

the pull request reports:

compute: added a deprecation warning to `google_compute_vpn_gateway`

compute: promoted `google_compute_ha_vpn_gateway` to GA

compute: promoted `google_compute_external_vpn_gateway` to GA

compute: promoted HA VPN fields in `google_compute_vpn_tunnel` to GA

but those changes don't seem to have been applied; for example, the google_compute_ha_vpn_gateway and google_compute_external_vpn_gateway modules don't exist at all.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

google_compute_ha_vpn_gateway
google_compute_external_vpn_gateway

ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS

gcp_compute_address is not idempotent when registering an INTERNAL address.

SUMMARY

gcp_compute_address is not idempotent when registering an INTERNAL address.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_address.py

ANSIBLE VERSION
ansible 2.9.10
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE

Run the task to reserve an internal address. Run the same task again; a new IP will be allocated.

  tasks:
    - name: create an internal address
      gcp_compute_address:
        name: test-internal-ip
        region: us-west1
        project: project-name
        address_type: INTERNAL
        subnetwork: subnet/self/link
EXPECTED RESULTS

Should have been idempotent.

ACTUAL RESULTS

On subsequent runs of the task, a new IP was allocated every time.


A possible cause is that

"networkTier": "PREMIUM"

is added as a default parameter for INTERNAL addresses as well.

Removal of parameter enable_flow_logs in gcp_compute_subnetwork

SUMMARY

When using the gcp_* modules installed with Ansible 2.10 from pip, the parameter enable_flow_logs is missing from the gcp_compute_subnetwork module. To the best of my knowledge, because of this there is now no way to enable flow logs using any Ansible module.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_network

ANSIBLE VERSION
 $ ansible --version 
ansible 2.10.1
  ....
  python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
CONFIGURATION
DEFAULT_HOST_LIST ...
DEFAULT_LOG_PATH ...
DEFAULT_ROLES_PATH ...
DEFAULT_STDOUT_CALLBACK ...
OS / ENVIRONMENT

target OS Ubuntu,
Google Cloud Platform

STEPS TO REPRODUCE
---
- hosts: 127.0.0.1
  gather_facts: false
  connection: local
  tasks:

  - set_fact: &c
      auth_kind: serviceaccount
      project: some_project

  - name: Get subnetworks
    gcp_compute_subnetwork_info:
      <<: *c
      filters:
      - name = subnet-1
      region: us-east1
    register: subnet_output
  - set_fact:
      subnets: "{{subnet_output.resources}}"
  - name: Subnet info
    debug:
      var: subnets
  - name: For each subnet enable log flow
    gcp_compute_subnetwork:
      <<: *c
      name: "{{ item.name}}"
      ip_cidr_range: "{{item.ipCidrRange}}"
      region: "{{ item.region | urlsplit('path') | basename }}"
      enable_flow_logs: true
      # workaround to strange requirement of module 
      # it requires network url but passed as dict with key selfLink so
      network: "{{ dict(['selfLink'] | zip(item.network))  }}" # | urlspplit('path') | dirname }}"
    loop: "{{ subnets  }}"
    register: enabled_logflow
EXPECTED RESULTS

ok: [127.0.0.1]

ACTUAL RESULTS
Unsupported parameters for (gcp_compute_subnetwork) module: enable_flow_logs Supported parameters include: auth_kind, description, env_type, ip_cidr_range, name, network, private_ip_google_access, project, region, scopes, secondary_ip_ranges, service_account_contents, service_account_email, service_account_file, state'

gcp_storage_object plugin does not support downloading file types other than flat files

From GoogleCloudPlatform/magic-modules#2428

Related issue from @sparampalli
ansible/ansible#56173

SUMMARY

The gcp_storage_object plugin does not support downloading file types other than flat files. For example, if you attempt to download .pkg/.zip files, the files are downloaded but corrupted: the downloaded files' MD5/checksum does not match the original files'.
ISSUE TYPE

Bug Report

COMPONENT NAME

gcp_storage_object
ANSIBLE VERSION

Ansible version - ansible 2.8.0.dev0

CONFIGURATION

ansible-playbook example.yml
[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *****************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************
ok: [localhost]

TASK [SaaS Actions | Patch Controller | Register the temporary Storage] **********************************************************************************************************
ok: [localhost]

TASK [SaaS Actions | Patch Controller | Get the package from GCP Storage bucket] *************************************************************************************************
ok: [localhost]

TASK [debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"bucket": "XXXXXXXX",
"changed": false,
"contentType": "application/octet-stream",
"crc32c": "grZeMQ==",
"etag": "CMTGuN6DkeECEAE=",
"failed": false,
"generation": "1553095537337156",
"id": "XXXXXXX/17.2.13/1p1/patch.pkg/1553095537337156",
"kind": "storage#object",
"md5Hash": "cQ1nu3ZXBkDA46MNSJ5KBQ==",
"mediaLink": "https://www.googleapis.com/download/storage/v1/b/XXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg?generation=1553095537337156&alt=media",
"metageneration": "1",
"name": "17.2.13/1p1/patch.pkg",
"selfLink": "https://www.googleapis.com/storage/v1/b/XXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg",
"size": "135010",
"storageClass": "MULTI_REGIONAL",
"timeCreated": "2019-03-20T15:25:37.336Z",
"timeStorageClassUpdated": "2019-03-20T15:25:37.336Z",
"updated": "2019-03-20T15:25:37.336Z"
}
}

PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

OS / ENVIRONMENT

Ubuntu 16.04
Mac OS
STEPS TO REPRODUCE
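
The steps were left blank, but the module invocation can be reconstructed from the verbose output further below; roughly (bucket name and credentials redacted as in that output):

- name: SaaS Actions | Patch Controller | Get the package from GCP Storage bucket
  gcp_storage_object:
    action: download
    bucket: XXXXXXXX
    src: "17.2.13%2F1p1%2Fpatch.pkg"
    dest: /Users/username/Downloads/17.2.13_1p1_patch.pkg
    overwrite: true
    project: saastest-202018
    auth_kind: serviceaccount
    service_account_file: XXXXXXXX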


EXPECTED RESULTS

The file should be downloaded with the same MD5/checksum as the file in the bucket.
ACTUAL RESULTS

File md5 downloaded through ansible module
$ md5 patch.pkg
MD5 (patch.pkg) = 14ad8dac4c6a05e544eeda078a47744a
$ cksum patch.pkg
4179052778 269112 patch.pkg

File md5 downloaded directly from bucket
$ md5 *.pkg
MD5 (patch.pkg) = 710d67bb76570640c0e3a30d489e4a05
$ cksum *.pkg
3630494301 135010 patch.pkg

$ ansible-playbook -vvvv getfile_single.yml
ansible-playbook 2.8.0.dev0
config file = None
configured module search path = [u'/Users/username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 2.7.10 (default, Feb 22 2019, 21:17:52) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

Loading callback plugin default of type stdout, v2.0 from /Library/Python/2.7/site-packages/ansible/plugins/callback/default.pyc

PLAYBOOK: getfile_single.yml *****************************************************************************************************************************************************
Positional arguments: getfile_single.yml
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in getfile_single.yml

PLAY [localhost] *****************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************
task path: /Users/username/sri-working/ansible_plabooks/gcp/getfile_single.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c 'echo ~username && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969" && echo ansible-tmp-1557238791.06-108244522845969="echo /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969" ) && sleep 0'
Using module file /Library/Python/2.7/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/username/.ansible/tmp/ansible-local-628558fHMIu/tmpS0ivRT TO /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969/ /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/username/.ansible/tmp/ansible-tmp-1557238791.06-108244522845969/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [SaaS Actions | Patch Controller | Register the temporary Storage] **********************************************************************************************************
task path: /Users/username/sri-working/ansible_plabooks/gcp/getfile_single.yml:5
ok: [localhost] => {
"ansible_facts": {
"bucket_path": "17.2.13%2F1p1%2Fpatch.pkg",
"local_path": "/Users/username/Downloads/17.2.13_1p1_patch.pkg"
},
"changed": false
}

TASK [SaaS Actions | Patch Controller | Get the package from GCP Storage bucket] *************************************************************************************************
task path: /Users/username/sri-working/ansible_plabooks/gcp/getfile_single.yml:10
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c 'echo ~username && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005" && echo ansible-tmp-1557238792.52-205689716613005="echo /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005" ) && sleep 0'
Using module file /Library/Python/2.7/site-packages/ansible/modules/cloud/google/gcp_storage_object.py
<127.0.0.1> PUT /Users/username/.ansible/tmp/ansible-local-628558fHMIu/tmp535NoH TO /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005/AnsiballZ_gcp_storage_object.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005/ /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005/AnsiballZ_gcp_storage_object.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005/AnsiballZ_gcp_storage_object.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/username/.ansible/tmp/ansible-tmp-1557238792.52-205689716613005/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"bucket": "XXXXXXXX",
"changed": false,
"contentType": "application/octet-stream",
"crc32c": "grZeMQ==",
"etag": "CMTGuN6DkeECEAE=",
"generation": "1553095537337156",
"id": "XXXXXXXX/17.2.13/1p1/patch.pkg/1553095537337156",
"invocation": {
"module_args": {
"action": "download",
"auth_kind": "serviceaccount",
"bucket": "XXXXXXXX",
"dest": "/Users/username/Downloads/17.2.13_1p1_patch.pkg",
"overwrite": true,
"project": "saastest-202018",
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloud-platform.read-only",
"https://www.googleapis.com/auth/devstorage.full_control",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/devstorage.read_write"
],
"service_account_email": null,
"service_account_file": "XXXXXXXX",
"src": "17.2.13%2F1p1%2Fpatch.pkg",
"state": "present"
}
},
"kind": "storage#object",
"md5Hash": "cQ1nu3ZXBkDA46MNSJ5KBQ==",
"mediaLink": "https://www.googleapis.com/download/storage/v1/b/XXXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg?generation=1553095537337156&alt=media",
"metageneration": "1",
"name": "17.2.13/1p1/patch.pkg",
"selfLink": "https://www.googleapis.com/storage/v1/b/XXXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg",
"size": "135010",
"storageClass": "MULTI_REGIONAL",
"timeCreated": "2019-03-20T15:25:37.336Z",
"timeStorageClassUpdated": "2019-03-20T15:25:37.336Z",
"updated": "2019-03-20T15:25:37.336Z"
}

TASK [debug] *********************************************************************************************************************************************************************
task path: /Users/username/sri-working/ansible_plabooks/gcp/getfile_single.yml:28
ok: [localhost] => {
"msg": {
"bucket": "XXXXXXXX",
"changed": false,
"contentType": "application/octet-stream",
"crc32c": "grZeMQ==",
"etag": "CMTGuN6DkeECEAE=",
"failed": false,
"generation": "1553095537337156",
"id": "XXXXXXXX/17.2.13/1p1/patch.pkg/1553095537337156",
"kind": "storage#object",
"md5Hash": "cQ1nu3ZXBkDA46MNSJ5KBQ==",
"mediaLink": "https://www.googleapis.com/download/storage/v1/b/XXXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg?generation=1553095537337156&alt=media",
"metageneration": "1",
"name": "17.2.13/1p1/patch.pkg",
"selfLink": "https://www.googleapis.com/storage/v1/b/XXXXXXXX/o/17.2.13%2F1p1%2Fpatch.pkg",
"size": "135010",
"storageClass": "MULTI_REGIONAL",
"timeCreated": "2019-03-20T15:25:37.336Z",
"timeStorageClassUpdated": "2019-03-20T15:25:37.336Z",
"updated": "2019-03-20T15:25:37.336Z"
}
}
META: ran handlers
META: ran handlers

PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0

gcp_compute_instance throws Module error when trying to deconstruct "service_account_contents"

From @rangapv on Jul 08, 2020 09:13

SUMMARY

When trying to create a GCP instance in Ansible using the "service_account_contents" parameter, the playbook fails with a module error. For "service_account_contents" I tried many different options:

  • as a variable (string myvar22, defined in the global_vars directory) containing JSON key:value pairs: service_account_contents: "{{ myvar22 | string }}"
  • as service_account_contents: "{{ JSON file | string }}"
  • as service_account_contents: "{{ path to json file with key:values }}"
  • with a lookup: service_account_contents: "lookup('file','path to json file with key:values')"

In every case the error is the same module error (partial message):

"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
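
For reference, service_account_contents expects the full JSON content of the key file, typically supplied through a templated lookup. A hedged sketch (the path is a placeholder):

- name: Launch instances
  gcp_compute_instance:
    name: kubenode32
    auth_kind: serviceaccount
    service_account_contents: "{{ lookup('file', '/path/to/key.json') }}"   # full JSON key content; placeholder path
    machine_type: "{{ machine_type }}"
    disks:
      - auto_delete: true
        boot: true
        initialize_params:
          disk_size_gb: 10
          source_image: "{{ image }}"
    project: "{{ project_id }}"
    zone: "{{ zone }}"
    state: present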

ISSUE TYPE
  • Bug Report
COMPONENT NAME

Module name: "gcp_compute_instance"
the parameter "service_account_contents: is not deconstructing the supplied json file

ANSIBLE VERSION
$ ansible --version
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/rangapv08/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.5.2 (default, Apr 16 2020, 17:47:17) [GCC 5.4.0 20160609]
CONFIGURATION
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = rangapv07
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

STEPS TO REPRODUCE

ansible-playbook gcp.yaml -vvv

- name: Create instance(s)
  hosts: localhost
  connection: local
  gather_facts: no

  tasks:

   - debug:
       msg: The Value os myvar22 is {{ myvar22 | string }}
   - name: Launch instances
     gcp_compute_instance:
         name: kubenode32
         auth_kind: serviceaccount
         machine_type: "{{ machine_type }}"
         disks:
         - auto_delete: 'true'
           boot: 'true'
           initialize_params:
             disk_size_gb: 10
             source_image: "{{ image }}"
         service_account_file: "{{ credentials_file }}"
         service_account_contents: "{{ myvar22 }}"
         project: "{{ project_id }}"
         state: present
         zone: "{{ zone }}"
         scopes:
           - storage-full
           - cloud-platform
     register: gce
EXPECTED RESULTS

The Playbook should run to completion with no errors, and a new GCP instance needs to be created

ACTUAL RESULTS
$ ansible-playbook ./gcp.yaml -vvvv
ansible-playbook 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/rangapv08/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.5.2 (default, Apr 16 2020, 17:47:17) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: Found both group and host with same name: l1
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.5/dist-packages/ansible/plugins/callback/default.py

PLAYBOOK: gcp.yaml *************************************************************************************************
Positional arguments: ./gcp.yaml
remote_user: rangapv76
inventory: ('/etc/ansible/hosts',)
become_method: sudo
tags: ('all',)
forks: 5
verbosity: 4
connection: smart
timeout: 10
1 plays in ./gcp.yaml
PLAY [Create instance(s)] ******************************************************************************************
META: ran handlers

TASK [debug] *******************************************************************************************************
task path: /home/rangapv08/myansible/gcp.yaml:8
ok: [localhost] => {
    "msg": "The Value os myvar22 is { \"private_key_id\": \"a92e78b18244fff535879d8cb7dcf4b65bb2385e\" }"
}

TASK [Launch instances] ********************************************************************************************
task path: /home/rangapv08/myansible/gcp.yaml:10
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: rangapv08
<127.0.0.1> EXEC /bin/sh -c 'echo ~rangapv08 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/rangapv08/.ansible/tmp `"&& mkdir /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677 && echo ansible-tmp-1594197593.8106487-11724-166051136371677="` echo /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677 `" ) && sleep 0'
Using module file /usr/local/lib/python3.5/dist-packages/ansible/modules/cloud/google/gcp_compute_instance.py
<127.0.0.1> PUT /home/rangapv08/.ansible/tmp/ansible-local-11716jigud28e/tmpey0lja_y TO /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/ /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py", line 102, in <module>
    _ansiballz_main()
  File "/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_compute_instance', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib/python3.5/runpy.py", line 196, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py", line 1739, in <module>
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py", line 1056, in main
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py", line 1159, in fetch_resource
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py", line 85, in get
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py", line 150, in full_get
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py", line 195, in session
  File "/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py", line 222, in _credentials
  File "/home/rangapv08/.local/lib/python3.5/site-packages/google/oauth2/service_account.py", line 226, in from_service_account_file
    filename, require=["client_email", "token_uri"]
  File "/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/_service_account_info.py", line 74, in from_filename
    return data, from_dict(data, require=require)
  File "/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/_service_account_info.py", line 55, in from_dict
    signer = crypt.RSASigner.from_service_account_info(data)
File "/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/crypt/base.py", line 114, in from_service_account_info
    info[_JSON_FILE_PRIVATE_KEY], info.get(_JSON_FILE_PRIVATE_KEY_ID)
  File "/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/crypt/_python_rsa.py", line 171, in from_string
    raise ValueError("No key could be detected.")
ValueError: No key could be detected.
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/home/rangapv08/.ansible/tmp/ansible-tmp-1594197593.8106487-11724-166051136371677/AnsiballZ_gcp_compute_instance.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_compute_instance', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.5/runpy.py\", line 196, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.5/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.5/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py\", line 1739, in <module>\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py\", line 1056, in main\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/modules/cloud/google/gcp_compute_instance.py\", line 1159, in fetch_resource\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py\", line 85, in get\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py\", line 150, in full_get\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py\", line 195, in session\n  File \"/tmp/ansible_gcp_compute_instance_payload_057hypv2/ansible_gcp_compute_instance_payload.zip/ansible/module_utils/gcp_utils.py\", line 222, in _credentials\n  File \"/home/rangapv08/.local/lib/python3.5/site-packages/google/oauth2/service_account.py\", line 226, in from_service_account_file\n    filename, require=[\"client_email\", \"token_uri\"]\n  File \"/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/_service_account_info.py\", line 74, in from_filename\n    return data, from_dict(data, require=require)\n  File \"/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/_service_account_info.py\", line 55, in from_dict\n    signer = crypt.RSASigner.from_service_account_info(data)\n  File \"/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/crypt/base.py\", line 114, in from_service_account_info\n    info[_JSON_FILE_PRIVATE_KEY], info.get(_JSON_FILE_PRIVATE_KEY_ID)\n  File \"/home/rangapv08/.local/lib/python3.5/site-packages/google/auth/crypt/_python_rsa.py\", line 171, in from_string\n    raise ValueError(\"No key could be detected.\")\nValueError: No key could be detected.\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
PLAY RECAP *********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Copied from original issue: ansible/ansible#70519

Support assigning IAM roles to users

SUMMARY

Right now it's not possible to bind an IAM role to a user in an Ansible-friendly way. This use case should be supported, since roles are only created to be assigned to users; a role without users is useless.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Either add the functionality to the gcp_iam_role module so roles can be created and bound in one task, or create a new module exclusively for binding roles to users.

ADDITIONAL INFORMATION

There's no use in creating roles if they can't be assigned to users. Right now my workaround is to use the command module to add the binding using gcloud:

- name: assign role to user
  command: gcloud projects add-iam-policy-binding {{gcp_project}} --member "serviceAccount:myAccount@{{gcp_project}}.iam.gserviceaccount.com" --role "projects/{{gcp_project}}/roles/myRole"

But that has a lot of drawbacks: you need to have gcloud installed locally, you need to be authenticated, it is not idempotent, and so on.

The simplest way I can think of for this new feature would be to add a bindings parameter to the gcp_iam_role module so the role can be bound to a list of users, e.g.:

- name: Create and bind my role
  gcp_iam_role:
    name: "myRole"
    title: Dummy role
    project: "{{gcp_project}}"
    included_permissions:
      - compute.addresses.create
      - compute.addresses.get
    bindings: # new parameter
      - serviceAccount:myAccount@{{gcp_project}}.iam.gserviceaccount.com

Missing module for managing router nats

SUMMARY

I would like to request a module for manipulating router nat configuration.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

google.cloud.gcp_compute_router_nats

ADDITIONAL INFORMATION

The equivalent gcloud command is gcloud compute routers nats. The rationale is that I am using the google.cloud collection to configure private GKE clusters, which need a Cloud NAT for external access.
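
Until a dedicated module exists, a rough, non-idempotent interim sketch is to drive the same configuration through gcloud from a task (names and region below are placeholders):

- name: Create a Cloud NAT on an existing router (workaround sketch)
  ansible.builtin.command: >
    gcloud compute routers nats create my-nat
    --router=my-router
    --region=us-central1
    --auto-allocate-nat-external-ips
    --nat-all-subnet-ip-ranges
  changed_when: true  # the gcloud call is not idempotent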

gcp_storage_object module changing the size of downloaded file

SUMMARY
I'm using gcp_storage_object to download an exe file from a GCP storage bucket. The download succeeds, but the size of the downloaded file is changed.

ISSUE TYPE

  • Bug Report

COMPONENT NAME
gcp_storage_object

ANSIBLE VERSION
ansible 2.9

OS / ENVIRONMENT
ubuntu 18.0

STEPS TO REPRODUCE

-  name: create an object
   run_once: true
   delegate_to: 127.0.0.1
   gcp_storage_object:
    action: download
    overwrite: yes
    bucket: MyBucket
    src: test.exe
    dest: "~/temp/test.exe"
    auth_kind: application
    project: xxx-xxx-xxx
    state: present

EXPECTED RESULTS
Should download the required file without any change in size

ACTUAL RESULTS
The downloaded file size is double the actual file size.

gcp_storage_object generates Python 3 incompatible code

Description

Using the gcp_storage_object module with Python 3 raises an error on the write() call, as the module encodes the data as UTF-8 bytes while Python 3 expects a string, not a bytes object.

Environment

→ ansible --version
ansible 2.9.0
  python version = 3.7.5rc1 (default, Oct  8 2019, 16:47:45) [GCC 9.2.1 20191008]

Input

  gcp_storage_object:
    action: download
    overwrite: true
    bucket: "my-bucket"
    src: "my-file"
    dest: "my-file"
    project: "my-project"
    state: present
    auth_kind: serviceaccount
    service_account_file: "ansible.json"

Output

failed: [**redacted** (item=my-file) => {"ansible_loop_var": "item", "changed": false, "item": "my-file", "module_stderr": "Shared connection to **REDACTED** closed.\r\n", "module_stdout": "bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)\r\nTraceback (most recent call last):\r\n  File \"/tmp/ansible-gustavo/ansible-tmp-1572983343.09006-245847309513835/AnsiballZ_gcp_storage_object.py\", line 102, in <module>\r\n    _ansiballz_main()\r\n  File \"/tmp/ansible-gustavo/ansible-tmp-1572983343.09006-245847309513835/AnsiballZ_gcp_storage_object.py\", line 94, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/tmp/ansible-gustavo/ansible-tmp-1572983343.09006-245847309513835/AnsiballZ_gcp_storage_object.py\", line 40, in invoke_module\r\n    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_storage_object', init_globals=None, run_name='__main__', alter_sys=False)\r\n  File \"/usr/lib/python3.6/runpy.py\", line 208, in run_module\r\n    return _run_code(code, {}, init_globals, run_name, mod_spec)\r\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n    exec(code, run_globals)\r\n  File \"/tmp/ansible_gcp_storage_object_payload_bvybg4th/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 286, in <module>\r\n  File \"/tmp/ansible_gcp_storage_object_payload_bvybg4th/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 188, in main\r\n  File \"/tmp/ansible_gcp_storage_object_payload_bvybg4th/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 199, in download_file\r\nTypeError: write() argument must be str, not bytes\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

gcp_compute_instance always reports changed

SUMMARY

gcp_compute_instance always returns changed.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_instance

ANSIBLE VERSION
$ poetry run ansible --version                                                                                  
ansible 2.9.10
  config file = None
  configured module search path = ['/Users/zzzzz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/zzzzzz/Library/Caches/pypoetry/virtualenvs/replicaset-assertions-QUQBoPOI-py3.8/lib/python3.8/site-packages/ansible
  executable location = /Users/zzzzzz/Library/Caches/pypoetry/virtualenvs/replicaset-assertions-QUQBoPOI-py3.8/bin/ansible
  python version = 3.8.0+ (heads/3.8:694c03f, Nov 13 2019, 17:42:51) [Clang 11.0.0 (clang-1100.0.33.8)]
CONFIGURATION

It's empty.

OS / ENVIRONMENT

OSX 10.15.5
Python 3.8.0+

STEPS TO REPRODUCE
- name: Create instances
  hosts: dbservers
  gather_facts: False
  tasks:
    - name: assert instances
      delegate_to: localhost
      block:
        - name: get_host
          set_fact:
            zone: "{{ zones[(zones|length) % (inventory_hostname.split('-')[4] | int)]}}"
        - name: create data disks
          gcp_compute_disk:
            name: "{{inventory_hostname}}-storage"
            size_gb: "{{ data_disk_size }}"
            zone: "{{ zone }}"
            project: "{{ project }}"
            auth_kind: "{{ auth_kind }}"
            state: present
          register: data_disk
        - name: create instances
          register: gce_instance_vars
          gcp_compute_instance:
            name: "{{inventory_hostname}}"
            machine_type: "{{ machine_type}}"
            # deletion_protection should be set to false, otherwise even a change to the instance leads to error
            deletion_protection: false
            disks:
              - auto_delete: true
                boot: true
                initialize_params:
                  disk_size_gb: "{{boot_disk_size}}"
                  source_image: "{{ source_image }}"
              - auto_delete: false
                boot: false
                source: "{{ data_disk }}"
                device_name: storage
            labels:
              environment: "{{ project }}"
              cluster: "{{ role }}-{{ cluster }}"
            network_interfaces:
              - network:
                  selfLink: projects/ctp-playground/global/networks/default
            zone: "{{ zone }}"
            project: "{{ project }}"
            auth_kind: "{{ auth_kind }}"
            state: present
        - name: save response to file
          copy:
            dest: /tmp/{{inventory_hostname}}-b.json
            content: "{{gce_instance_vars | to_nice_json }}"
EXPECTED RESULTS

The task should not report changed, and the saved variable output should not show any diff between runs.

ACTUAL RESULTS

The instances are always marked changed, but the variable output from the gcp_compute_instance does not change between runs.


Creating logging metrics is not idempotent

SUMMARY

When using gcp_logging_metric, if the metric already exists, the run fails.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_logging_metric

ANSIBLE VERSION
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]

google.cloud collection version 0.0.9

CONFIGURATION
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'yaml', 'ini', 'gcp_compute']
OS / ENVIRONMENT

Linux, Python3

STEPS TO REPRODUCE
  • Run the playbook to create a log-based metric. The metric will be created properly.
  • Run the same playbook a second time; it will fail
- name: Create my metric
  gcp_logging_metric:
    name: metrics/status_code
    description: Count of response statuses
    project: "{{ gcp_project }}"
    filter: resource.type="gce_instance"
    metric_descriptor:
      display_name: Response status count
      metric_kind: DELTA
      unit: "1"
      value_type: INT64
      labels:
        - key: response_code
          description: HTTP status code for the request
          value_type: INT64
        - key: path
          description: The request path
          value_type: STRING
        - key: service
          description: The upstream service
          value_type: STRING
    label_extractors:
      response_code: EXTRACT(jsonPayload.code)
      path: REGEXP_EXTRACT(jsonPayload.path, "(.*?)(?:\\?|\\z)")
      service: REGEXP_EXTRACT(jsonPayload.path, "^/(\\w+/\\w+)/.*")
EXPECTED RESULTS

Ansible reporting no change needed in the task

ACTUAL RESULTS
The full traceback is:
  File "/tmp/ansible_gcp_logging_metric_payload_tgmaipyk/ansible_gcp_logging_metric_payload.zip/ansible_collections/google/cloud/plugins/module_utils/gcp_utils.py", line 311, in raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
fatal: [localhost]: FAILED! => changed=false 
  invocation:
    module_args:
      auth_kind: serviceaccount
      bucket_options: null
      description: Count of response statuses
      env_type: null
      filter: resource.type="gce_instance" 
      label_extractors:
        path: REGEXP_EXTRACT(jsonPayload.path, "(.*?)(?:\\?|\\z)")
        response_code: EXTRACT(jsonPayload.code)
        service: REGEXP_EXTRACT(jsonPayload.path, "^/(\\w+/\\w+)/.*")
      metric_descriptor:
        display_name: Response status count
        labels:
        - description: HTTP status code for the request
          key: response_code
          value_type: INT64
        - description: The request path
          key: path
          value_type: STRING
        - description: The upstream service
          key: service
          value_type: STRING
        metric_kind: DELTA
        unit: '1'
        value_type: INT64
      name: metrics/status_code
      project: my-project
      scopes:
      - https://www.googleapis.com/auth/cloud-platform
      service_account_contents: null
      service_account_email: null
      service_account_file: ......
      state: present
      value_extractor: null
  msg: 'GCP returned error: {''error'': {''code'': 409, ''message'': ''Metric metrics/status_code already exists.'', ''status'': ''ALREADY_EXISTS''}}'


Add support for Google-managed certificates

Support for ACL at the object level

SUMMARY

gcp_storage_object does not allow setting a specific permission (ACL) on an object. When using fine-grained access control on a bucket, it would be useful to support that. The content type cannot be specified either.

ISSUE TYPE
  • Feature Idea

Missing module to configure BGP for a VPN tunnel

SUMMARY

It seems there is no module to configure a BGP session for a VPN tunnel: after creating the tunnel with gcp_compute_vpn_tunnel, we still have to assign an IP address (equivalent to gcloud compute routers add-interface) and set up the BGP session (equivalent to gcloud compute routers add-bgp-peer).
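
Until such modules exist, a rough, non-idempotent interim sketch is to script both steps through the command module (router, tunnel, link-local addresses and ASN below are placeholders):

- name: Add a router interface for the VPN tunnel (workaround sketch)
  ansible.builtin.command: >
    gcloud compute routers add-interface my-router
    --interface-name=my-tunnel-if
    --vpn-tunnel=my-tunnel
    --ip-address=169.254.0.1
    --mask-length=30
    --region=us-central1
  changed_when: true  # the gcloud call is not idempotent

- name: Add the BGP peer on that interface (workaround sketch)
  ansible.builtin.command: >
    gcloud compute routers add-bgp-peer my-router
    --peer-name=my-peer
    --interface=my-tunnel-if
    --peer-ip-address=169.254.0.2
    --peer-asn=65001
    --region=us-central1
  changed_when: true  # the gcloud call is not idempotent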

ISSUE TYPE
  • Bug Report

gcp_compute inventory plugin not compatible with ansible-base

SUMMARY

Old imports from ansible core module locations are still hanging around, but they no longer work with current Ansible devel.

Proposed fix up at #218

ISSUE TYPE
  • Bug Report
COMPONENT NAME

plugins/inventory

ANSIBLE VERSION
ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under
development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.10.0.dev0
  config file = /Users/alancoding/Documents/repos/tower-qa/ansible.cfg
  configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible
  executable location = /Users/alancoding/.virtualenvs/akit/bin/ansible
  python version = 3.7.7 (default, Mar 10 2020, 15:43:03) [Clang 11.0.0 (clang-1100.0.33.17)]
CONFIGURATION

defaults

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE
ansible-inventory -i gcp_compute.yml --list --export

where gcp_compute.yml is a valid inventory file
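
For context, a minimal inventory file of roughly this shape is enough to exercise the plugin (project and credential path are placeholders):

# gcp_compute.yml
plugin: google.cloud.gcp_compute
projects:
  - my-project
auth_kind: serviceaccount
service_account_file: /path/to/service-account.json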

EXPECTED RESULTS

runs

ACTUAL RESULTS
[WARNING]:  * Failed to parse
/tmp/awx_1899_wnfsg0zr/project/inventories/gcp_compute.yml with auto plugin: No
module named 'ansible.module_utils.gcp_utils'
  File "/usr/local/lib/python3.6/site-packages/ansible/inventory/manager.py", line 287, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/auto.py", line 50, in parse
    plugin = inventory_loader.get(plugin_name)
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/loader.py", line 761, in get
    self._module_cache[path] = self._load_module_source(name, path)
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/loader.py", line 730, in _load_module_source
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/tmp/awx_1899_wnfsg0zr/requirements_collections/ansible_collections/google/cloud/plugins/inventory/gcp_compute.py", line 148, in <module>
    from ansible.module_utils.gcp_utils import (

(I'm really just filing this issue to use the pytest-github library to skip some tests)

`gcp_iam_role` does not work

SUMMARY

For more than a year, gcp_iam_role has not been working. It fails with a Python traceback. As mentioned in the previous discussion, the error happens when updating the role, not when creating it.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_iam_role

STEPS TO REPRODUCE
    - name: Create role
      gcp_iam_role:
        name: myrole
        description: created by me
        included_permissions:
          - storage.buckets.get
          - storage.buckets.list
          - storage.objects.get
          - storage.objects.list
ACTUAL RESULTS
The full traceback is:
Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/lib/python3.8/runpy.py", line 207, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py", line 354, in <module>
  File "/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py", line 205, in main
  File "/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py", line 234, in update
TypeError: put() got an unexpected keyword argument 'params'
fatal: [viktor-cloud-hopper -> localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py\", line 354, in <module>\n  File \"/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py\", line 205, in main\n  File \"/tmp/ansible_gcp_iam_role_payload_2qm_h4z6/ansible_gcp_iam_role_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_iam_role.py\", line 234, in update\nTypeError: put() got an unexpected keyword argument 'params'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

gcp_compute_instance doesn't add service account to the instance

SUMMARY

gcp_compute_instance doesn't apply changes when adding a service account and scopes to an existing (stopped) VM instance.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_instance

ANSIBLE VERSION
ansible 2.9.7
  config file = None
  ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.7.4 (default, Oct 12 2019, 18:55:28) [Clang 11.0.0 (clang-1100.0.33.8)]
CONFIGURATION
ansible-config dump --only-changed -v
No config file found; using defaults
OS / ENVIRONMENT
uname -a
Darwin 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64
STEPS TO REPRODUCE
- name: Update instance
  gcp_compute_instance:
    name: "my-test-instance"
    zone: us-central1-a
    project: my-test-project
    auth_kind: serviceaccount
    deletion_protection: no
    service_account_file: "/Users/<user>/Downloads/credentials.json"
    service_accounts:
      - email: [email protected] 
        scopes:
          - "https://www.googleapis.com/auth/devstorage.read_only"
          - "https://www.googleapis.com/auth/logging.write"
          - "https://www.googleapis.com/auth/monitoring.write"
          - "https://www.googleapis.com/auth/service.management.readonly"
          - "https://www.googleapis.com/auth/servicecontrol"
          - "https://www.googleapis.com/auth/trace.append"
    status: TERMINATED
    state: present
EXPECTED RESULTS
ok: [localhost] => (item=my-test-instance) => {
    "ansible_loop_var": "item",
    "canIpForward": false,
    "changed": false,
    "cpuPlatform": "Unknown CPU Platform",
    "creationTimestamp": "2020-04-21T06:11:57.224-07:00",
    "deletionProtection": false,
    "description": "",
    "disks": [
        {
            "autoDelete": true,
            "boot": true,
            "deviceName": "my-test-instance",
            "diskSizeGb": "10",
            "guestOsFeatures": [
                {
                    "type": "VIRTIO_SCSI_MULTIQUEUE"
                }
            ],
            "index": 0,
            "interface": "SCSI",
            "kind": "compute#attachedDisk",
            "licenses": [
                "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/licenses/debian-9-stretch"
            ],
            "mode": "READ_WRITE",
            "source": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/disks/my-test-instance",
            "type": "PERSISTENT"
        }
    ],
    "displayDevice": {
        "enableDisplay": false
    },
    "fingerprint": "WY2SwusMoUE=",
    "id": "6986067812958818291",
    "invocation": {
        "module_args": {
            "auth_kind": "serviceaccount",
            "can_ip_forward": null,
            "deletion_protection": false,
            "disks": null,
            "env_type": null,
            "guest_accelerators": null,
            "hostname": null,
            "labels": null,
            "machine_type": null,
            "metadata": null,
            "min_cpu_platform": null,
            "name": "my-test-instance",
            "network_interfaces": null,
            "project": "my-test-project",
            "scheduling": null,
            "scopes": [
                "https://www.googleapis.com/auth/compute"
            ],
            "service_account_contents": null,
            "service_account_email": null,
            "service_account_file": "/Users/<user>/Downloads/my-test-project-c7549fc3f4db.json",
            "service_accounts": [
                {
                    "email": "[email protected]",
                    "scopes": [
                        "https://www.googleapis.com/auth/devstorage.read_only",
                        "https://www.googleapis.com/auth/logging.write",
                        "https://www.googleapis.com/auth/monitoring.write",
                        "https://www.googleapis.com/auth/service.management.readonly",
                        "https://www.googleapis.com/auth/servicecontrol",
                        "https://www.googleapis.com/auth/trace.append"
                    ]
                }
            ],
            "shielded_instance_config": null,
            "state": "present",
            "status": "TERMINATED",
            "tags": null,
            "zone": "us-central1-a"
        }
    },
    "item": "my-test-instance",
    "kind": "compute#instance",
    "labelFingerprint": "42WmSpB8rSM=",
    "machineType": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/machineTypes/f1-micro",
    "metadata": {
        "ssh-keys": "omissis"
    },
    "name": "my-test-instance",
    "networkInterfaces": [
        {
            "accessConfigs": [
                {
                    "kind": "compute#accessConfig",
                    "name": "External NAT",
                    "networkTier": "PREMIUM",
                    "type": "ONE_TO_ONE_NAT"
                }
            ],
            "fingerprint": "eedH4rIkuwQ=",
            "kind": "compute#networkInterface",
            "name": "nic0",
            "network": "https://www.googleapis.com/compute/v1/projects/my-test-project/global/networks/default",
            "networkIP": "10.128.0.8",
            "subnetwork": "https://www.googleapis.com/compute/v1/projects/my-test-project/regions/us-central1/subnetworks/default"
        }
    ],
    "reservationAffinity": {
        "consumeReservationType": "ANY_RESERVATION"
    },
    "scheduling": {
        "automaticRestart": false,
        "onHostMaintenance": "TERMINATE",
        "preemptible": true
    },
    "selfLink": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/instances/my-test-instance",
    "serviceAccounts": [
        {
            "email": "[email protected]",
            "scopes": [
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/service.management.readonly",
                "https://www.googleapis.com/auth/servicecontrol",
                "https://www.googleapis.com/auth/trace.append"
            ]
        }
    ],
    "startRestricted": false,
    "status": "TERMINATED",
    "tags": {
        "fingerprint": "42WmSpB8rSM="
    },
    "zone": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a"
}
ACTUAL RESULTS
ok: [localhost] => (item=my-test-instance) => {
    "ansible_loop_var": "item",
    "canIpForward": false,
    "changed": false,
    "cpuPlatform": "Unknown CPU Platform",
    "creationTimestamp": "2020-04-21T06:19:23.517-07:00",
    "deletionProtection": false,
    "disks": [
        {
            "autoDelete": true,
            "boot": true,
            "deviceName": "my-test-instance",
            "diskSizeGb": "10",
            "guestOsFeatures": [
                {
                    "type": "VIRTIO_SCSI_MULTIQUEUE"
                }
            ],
            "index": 0,
            "interface": "SCSI",
            "kind": "compute#attachedDisk",
            "licenses": [
                "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/licenses/debian-9-stretch"
            ],
            "mode": "READ_WRITE",
            "source": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/disks/my-test-instance",
            "type": "PERSISTENT"
        }
    ],
    "displayDevice": {
        "enableDisplay": false
    },
    "fingerprint": "wKws2l_yhiY=",
    "id": "7680208216746314805",
    "invocation": {
        "module_args": {
            "auth_kind": "serviceaccount",
            "can_ip_forward": null,
            "deletion_protection": false,
            "disks": null,
            "env_type": null,
            "guest_accelerators": null,
            "hostname": null,
            "labels": null,
            "machine_type": null,
            "metadata": null,
            "min_cpu_platform": null,
            "name": "my-test-instance",
            "network_interfaces": null,
            "project": "my-test-project",
            "scheduling": null,
            "scopes": [
                "https://www.googleapis.com/auth/compute"
            ],
            "service_account_contents": null,
            "service_account_email": null,
            "service_account_file": "/Users/<user>/Downloads/my-test-project-c7549fc3f4db.json",
            "service_accounts": [
                {
                    "email": "[email protected]",
                    "scopes": [
                        "https://www.googleapis.com/auth/devstorage.read_only",
                        "https://www.googleapis.com/auth/logging.write",
                        "https://www.googleapis.com/auth/monitoring.write",
                        "https://www.googleapis.com/auth/service.management.readonly",
                        "https://www.googleapis.com/auth/servicecontrol",
                        "https://www.googleapis.com/auth/trace.append"
                    ]
                }
            ],
            "shielded_instance_config": null,
            "state": "present",
            "status": "TERMINATED",
            "tags": null,
            "zone": "us-central1-a"
        }
    },
    "item": "my-test-instance",
    "kind": "compute#instance",
    "labelFingerprint": "42WmSpB8rSM=",
    "machineType": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/machineTypes/f1-micro",
    "metadata": {
        "ssh-keys": "omissis"
    },
    "name": "my-test-instance",
    "networkInterfaces": [
        {
            "accessConfigs": [
                {
                    "kind": "compute#accessConfig",
                    "name": "external-nat",
                    "networkTier": "PREMIUM",
                    "type": "ONE_TO_ONE_NAT"
                }
            ],
            "fingerprint": "fOIsyTl-u34=",
            "kind": "compute#networkInterface",
            "name": "nic0",
            "network": "https://www.googleapis.com/compute/v1/projects/my-test-project/global/networks/default",
            "networkIP": "10.128.0.9",
            "subnetwork": "https://www.googleapis.com/compute/v1/projects/my-test-project/regions/us-central1/subnetworks/default"
        }
    ],
    "reservationAffinity": {
        "consumeReservationType": "ANY_RESERVATION"
    },
    "scheduling": {
        "automaticRestart": false,
        "onHostMaintenance": "TERMINATE",
        "preemptible": true
    },
    "selfLink": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a/instances/my-test-instance",
    "startRestricted": false,
    "status": "TERMINATED",
    "tags": {
        "fingerprint": "42WmSpB8rSM="
    },
    "zone": "https://www.googleapis.com/compute/v1/projects/my-test-project/zones/us-central1-a"
}

Please note that in the actual result I'm also expecting to see (just under selfLink):

"serviceAccounts": [
        {
            "email": "[email protected]",
            "scopes": [
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/service.management.readonly",
                "https://www.googleapis.com/auth/servicecontrol",
                "https://www.googleapis.com/auth/trace.append"
            ]
        }
    ],

which is missing

gcp_compute_instance module doesn't update instance tags

SUMMARY

The gcp_compute_instance module doesn't update instance tags for an existing instance.
For example, the same module can update labels, but apparently not tags.
The code to update labels:

def label_fingerprint_update(module, request, response):
    auth = GcpSession(module, 'compute')
    auth.post(
        ''.join(["https://compute.googleapis.com/compute/v1/", "projects/{project}/zones/{zone}/instances/{name}/setLabels"]).format(**module.params),
        {u'labelFingerprint': response.get('labelFingerprint'), u'labels': module.params.get('labels')},
    )

but there is no equivalent code to update tags.

FYI, this issue was also reported in the Ansible repository a few years ago:
ansible/ansible#47883
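
Until the module issues an equivalent setTags call, a rough, non-idempotent workaround sketch is to shell out to gcloud (instance name, zone and tags below are placeholders):

- name: Add network tags to an existing instance (workaround sketch)
  ansible.builtin.command: >
    gcloud compute instances add-tags my-instance
    --zone=us-central1-a
    --tags=http-server,https-server
  changed_when: true  # the gcloud call is not idempotent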

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_instance

ANSIBLE VERSION
ansible 2.9.11
  config file = /home/alex/ansible-project/ansible.cfg
  configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/alex/.local/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Aug 12 2020, 00:00:00) [GCC 10.2.1 20200723 (Red Hat 10.2.1-1)]
CONFIGURATION
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/alex/.vault_pass.txt
HOST_KEY_CHECKING(/home/alex/ansible-project/ansible.cfg) = False
INVENTORY_ENABLED(/home/alex/ansible-project/ansible.cfg) = ['google.cloud.gcp_compute', 'auto', 'yaml', 'ini', 'toml']
OS / ENVIRONMENT

Fedora 32

STEPS TO REPRODUCE
  • create instance without defining tags with gcp_compute_instance module
  • define some tags
  • run the playbook again
EXPECTED RESULTS

machine has the tags defined

ACTUAL RESULTS

machine doesn't have tags defined

Allow gcp_cloudfunctions_cloud_function to upload a local file

SUMMARY

Currently, only sources that are available in a Google Cloud Storage bucket or a source repository can be used. However, the gcloud utility does have a --source flag that accepts local files.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

gcp_cloudfunctions_cloud_function
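
A rough interim sketch is to upload the local archive to a bucket with gcp_storage_object and then point the function at the resulting gs:// URL; the bucket, function names and the source_archive_url parameter below are assumptions to be checked against the module documentation:

- name: Upload the local source archive to a bucket (workaround sketch)
  google.cloud.gcp_storage_object:
    action: upload
    bucket: my-functions-bucket
    src: ./function-source.zip
    dest: function-source.zip
    project: my-project
    auth_kind: serviceaccount
    service_account_file: /path/to/service-account.json
    state: present

- name: Deploy the function from the uploaded archive (workaround sketch)
  google.cloud.gcp_cloudfunctions_cloud_function:
    name: my-function
    location: us-central1
    runtime: python39
    entry_point: handler
    trigger_http: true
    source_archive_url: gs://my-functions-bucket/function-source.zip  # assumed parameter name
    project: my-project
    auth_kind: serviceaccount
    service_account_file: /path/to/service-account.json
    state: present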

Add support for Cloud Deployment Manager

SUMMARY

Add support for deployment manager

ISSUE TYPE
  • Feature idea
ADDITIONAL INFORMATION

It would allow Ansible to leverage GCP's native Deployment Manager to augment, complement, or even serve as an alternative to some of the GCP modules.

- name: cloud deployment
  gcp_deployment:
    name: my-deployment
    config:
      resources:
        - name: my-vm
          type: compute.v1.instance
          properties:
            machineType: zones/us-central1-a/machineTypes/n1-standard-1
    ...

gcp_sql_instance does not support private IP during creation

SUMMARY

Allow the gcp_sql_instance module to expose a private IP instead of a public IP for security purposes.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

gcp_sql_instance

ADDITIONAL INFORMATION

This would solve our current database connectivity from a k8s cluster, where policy allows access only through a private IP.
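
For illustration, a hypothetical sketch of what requesting a private IP at creation time could look like if the module exposed the API's ipConfiguration fields; the ip_configuration options below are assumptions mirroring the Admin API, not current module parameters:

- name: Create a Cloud SQL instance with a private IP (hypothetical parameters)
  google.cloud.gcp_sql_instance:
    name: my-private-instance
    database_version: MYSQL_5_7
    region: us-central1
    settings:
      tier: db-n1-standard-1
      ip_configuration:            # assumed suboption, not currently supported
        ipv4_enabled: false        # no public IP
        private_network: projects/my-project/global/networks/default
    project: my-project
    auth_kind: serviceaccount
    service_account_file: /path/to/service-account.json
    state: present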

[DOCS] Fix broken links in the gcp modules

This is the batch of broken links on gcp modules.

NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them. Also note this report was generated from docs.ansible.com/ansible, but I'm guessing the same links are broken within the collection as well.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

docs.ansible.com

BROKEN LINKS

https://docs.ansible.com/ansible/devel/modules/gcp_compute_forwarding_rule_module.html#gcp-compute-forwarding-rule-module
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/regions/region/addresses/address (HTTP_401)
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/regions/region/addresses/address (HTTP_401)
├─BROKEN─ https://cloud.google.com/compute/docs/reference/v1/forwardingRule

https://docs.ansible.com/ansible/devel/modules/gcp_container_cluster_info_module.html#gcp-container-cluster-info-module
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html

https://docs.ansible.com/ansible/devel/modules/gcp_container_node_pool_info_module.html#gcp-container-node-pool-info-module
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html

https://docs.ansible.com/ansible/devel/modules/gcp_dns_resource_record_set_info_module.html#gcp-dns-resource-record-set-info-module
├─BROKEN─ http://0.0.0.0:1337/modules/www.example.com

https://docs.ansible.com/ansible/devel/modules/gcp_pubsub_subscription_info_module.html#gcp-pubsub-subscription-info-module
├─BROKEN─ https://example.com/push%22

https://docs.ansible.com/ansible/devel/modules/gcp_redis_instance_info_module.html#gcp-redis-instance-info-module
└─BROKEN─ http://redis.io/topics/config

https://docs.ansible.com/ansible/devel/modules/gcp_cloudscheduler_job_module.html#gcp-cloudscheduler-job-module
├─BROKEN─ https://www.googleapis.com/auth/cloud-platform%22
├─BROKEN─ https://www.googleapis.com/auth/cloud-platform%22

https://docs.ansible.com/ansible/devel/modules/gcp_cloudscheduler_job_info_module.html#gcp-cloudscheduler-job-info-module
├─BROKEN─ https://www.googleapis.com/auth/cloud-platform%22

https://docs.ansible.com/ansible/devel/modules/gcp_compute_autoscaler_module.html#gcp-compute-autoscaler-module
├─BROKEN─ http://0.0.0.0:1337/modules/www.googleapis.com/compute/instance/network/received_bytes_count
├─BROKEN─ http://0.0.0.0:1337/modules/www.googleapis.com/compute/instance/network/received_bytes_count

https://docs.ansible.com/ansible/devel/modules/gcp_compute_autoscaler_info_module.html#gcp-compute-autoscaler-info-module
├─BROKEN─ http://0.0.0.0:1337/modules/www.googleapis.com/compute/instance/network/received_bytes_count

https://docs.ansible.com/ansible/devel/modules/gcp_compute_firewall_module.html#gcp-compute-firewall-module
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/myproject/global/
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/myproject/global/

https://docs.ansible.com/ansible/devel/modules/gcp_compute_route_module.html#gcp-compute-route-module
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/global/gateways/default-internet-gateway
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/regions/region/forwardingRules/forwardingRule (HTTP_401)
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/zones/zone/ (HTTP_401)
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/global/gateways/default-internet-gateway
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/regions/region/forwardingRules/forwardingRule (HTTP_401)
├─BROKEN─ https://www.googleapis.com/compute/v1/projects/project/zones/zone/ (HTTP_401)

https://docs.ansible.com/ansible/devel/modules/gcp_container_cluster_module.html#gcp-container-cluster-module
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html

https://docs.ansible.com/ansible/devel/modules/gcp_container_node_pool_module.html#gcp-container-node-pool-module
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html
├─BROKEN─ http://kubernetes.io/v1.1/docs/user-guide/labels.html

https://docs.ansible.com/ansible/devel/modules/gcp_redis_instance_module.html#gcp-redis-instance-module
├─BROKEN─ http://redis.io/topics/config
├─BROKEN─ http://redis.io/topics/config

initialClusterVersion is not possible on google.cloud.gcp_container_cluster

SUMMARY

In an Ansible uri call I can specify this as per the Google Cloud API:

  uri:
    url: https://container.googleapis.com/v1beta1/projects/saastest-202018/locations/{{ k8s_gcp_region }}/clusters
    method: POST
    headers:
      Authorization: Bearer {{ gcloud_access_token.stdout }}
    body_format: json
    body:
      cluster:
        name: "{{ cluster_name }}"
        masterAuth:
          clientCertificateConfig: {}
        loggingService: logging.googleapis.com/kubernetes
        monitoringService: monitoring.googleapis.com/kubernetes
        network: projects/saastest-202018/global/networks/default
        addonsConfig:
          httpLoadBalancing: {}
          horizontalPodAutoscaling: {}
          kubernetesDashboard:
            disabled: true
          istioConfig:
            disabled: true
        subnetwork: projects/saastest-202018/regions/{{ k8s_gcp_region }}/subnetworks/default
        nodePools:
          - name: default-pool
            config:
              machineType: n1-standard-8
              diskSizeGb: 100
              oauthScopes:
                - 'https://www.googleapis.com/auth/devstorage.read_only'
                - 'https://www.googleapis.com/auth/logging.write'
                - 'https://www.googleapis.com/auth/monitoring'
                - 'https://www.googleapis.com/auth/servicecontrol'
                - 'https://www.googleapis.com/auth/service.management.readonly'
                - 'https://www.googleapis.com/auth/trace.append'
              metadata:
                disable-legacy-endpoints: 'true'
              imageType: UBUNTU
              labels:
                saas_controller_node_tier: small
              diskType: pd-standard
            initialNodeCount: 1
            autoscaling:
              enabled: true
              minNodeCount: 1
              maxNodeCount: 100
            management:
              autoRepair: true
            version: "{{ k8s_initial_cluster_version }}"
        networkPolicy: {}
        ipAllocationPolicy:
          useIpAliases: true
        masterAuthorizedNetworksConfig:
          enabled: true
          cidrBlocks:
            - displayName: Any
              cidrBlock: 0.0.0.0/0
        defaultMaxPodsConstraint:
          maxPodsPerNode: '110'
        authenticatorGroupsConfig: {}
        privateClusterConfig:
          enablePrivateNodes: true
          masterIpv4CidrBlock: "{{ k8s_master_ipv4_cidr_block }}"
        databaseEncryption:
          state: DECRYPTED
        shieldedNodes:
          enabled: true
        **initialClusterVersion: "{{ k8s_initial_cluster_version }}"**
        location: "{{ k8s_gcp_region }}"

As you can see, I can specify initialClusterVersion; however, in Ansible that apparently is not possible at the moment.

We are also missing the ability to set shieldedNodes.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

google.cloud.gcp_container_cluster

ANSIBLE VERSION
ansible 2.9.7
  config file = None
  configured module search path = ['/Users/aeric/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ansible
  executable location = /Library/Frameworks/Python.framework/Versions/3.6/bin/ansible
  python version = 3.6.8 (v3.6.8:3c6b436a57, Dec 24 2018, 02:04:31) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS

1:1 matching with the Google Cloud API endpoints, which would include being able to set initialClusterVersion.

ACTUAL RESULTS

"changed": false, "msg": "Unsupported parameters for (google.cloud.gcp_container_cluster) module: database_encryption, initial_cluster_version, shielded_nodes

[Bug] Compute instance always reports as changed

SUMMARY

Creating a compute instance reports back as changed even if it was already created in a previous run.
This results in non-idempotent behaviour, which is usually not expected of Ansible modules unless mentioned otherwise in the documentation.

This was tested against master and version 1.0.1 of this collection.

The issue is related to #257 but that one was closed by the author without solving the root issue.
Pinging @Rylon who was also involved in the previous issue.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_instance.py

ANSIBLE VERSION
ansible 2.10.2
  config file = /home/user/git-repo/policy/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/git-repo/policy/venv/lib/python3.7/site-packages/ansible
  executable location = /home/user/git-repo/policy/venv/bin/ansible
  python version = 3.7.8 (default, Jun 29 2020, 05:44:46) [GCC 7.5.0]
CONFIGURATION
DEFAULT_HOST_LIST(/home/user/git-repo/policy/ansible.cfg) = ['/home/user/git-repo/policy/inventories']
DEFAULT_REMOTE_USER(/home/user/git-repo/policy/ansible.cfg) = ans
DEFAULT_ROLES_PATH(/home/user/git-repo/policy/ansible.cfg) = ['/home/user/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles']
DEFAULT_VAULT_IDENTITY_LIST(/home/user/git-repo/policy/ansible.cfg) = ['[email protected]_pass.production', '[email protected]_pass.testing', '[email protected]_pass.development']
INTERPRETER_PYTHON(/home/user/git-repo/policy/ansible.cfg) = auto
INVENTORY_ENABLED(/home/user/git-repo/policy/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml', 'gcp_compute']
OS / ENVIRONMENT

Running on Ubuntu 18.04.

STEPS TO REPRODUCE

Taken from the Ansible documentation with minor modifications.

#!ansible-playbook
---
- name: Create an instance
  hosts: localhost
  gather_facts: no
  vars:
      gcp_project: your-project
      gcp_cred_kind: application
      zone: "us-central1-a"
      region: "us-central1"

  tasks:
   - name: create a disk
     gcp_compute_disk:
         name: 'disk-instance'
         size_gb: 20
         source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
         zone: "{{ zone }}"
         project: "{{ gcp_project }}"
         auth_kind: "{{ gcp_cred_kind }}"
         scopes:
           - https://www.googleapis.com/auth/compute
         state: present
     register: disk
   - name: create a address
     gcp_compute_address:
         name: 'address-instance'
         region: "{{ region }}"
         project: "{{ gcp_project }}"
         auth_kind: "{{ gcp_cred_kind }}"
         scopes:
           - https://www.googleapis.com/auth/compute
         state: present
     register: address
   - name: create a instance
     gcp_compute_instance:
         state: present
         name: test-vm
         machine_type: n1-standard-1
         disks:
           - auto_delete: true
             boot: true
             source: "{{ disk }}"
         network_interfaces:
             - network: null # use default
               access_configs:
                 - name: 'External NAT'
                   nat_ip: "{{ address }}"
                   type: 'ONE_TO_ONE_NAT'
         zone: "{{ zone }}"
         project: "{{ gcp_project }}"
         auth_kind: "{{ gcp_cred_kind }}"
         scopes:
           - https://www.googleapis.com/auth/compute
     register: instance

   - name: Wait for SSH to come up
     wait_for: host={{ address.address }} port=22 delay=10 timeout=60
EXPECTED RESULTS

In the first run all create tasks should be listed as changed in the log, but in a subsequent run these tasks should all report ok and not changed.
This is the case for the modules gcp_compute_address and gcp_compute_disk but not for gcp_compute_instance.

Below you find two hypothetical runs showing the expected behaviour.

First run:

./gcp-issue.yml

PLAY [Create an instance] *****************************************************************************************

TASK [create a disk] **********************************************************************************************
changed: [localhost]

TASK [create a address] *******************************************************************************************
changed: [localhost]

TASK [create a instance] ******************************************************************************************
changed: [localhost]

TASK [Wait for SSH to come up] ************************************************************************************
ok: [localhost]

PLAY RECAP ********************************************************************************************************
localhost                  : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Second run (right after without changing anything):

./gcp-issue.yml

PLAY [Create an instance] *****************************************************************************************

TASK [create a disk] **********************************************************************************************
ok: [localhost]

TASK [create a address] *******************************************************************************************
ok: [localhost]

TASK [create a instance] ******************************************************************************************
ok: [localhost]

TASK [Wait for SSH to come up] ************************************************************************************
ok: [localhost]

PLAY RECAP ********************************************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  

NOTE: The difference between the two runs should be that the task create instance is not marked as changed.

ACTUAL RESULTS

What happens instead is that the task create instance is always marked as changed even if the VM was just created in the previous run and no modifications were performed.
See the log output with increased verbosity as a gist here.

Further digging into gcp_compute_instance.py reveals that the is_different() method does not work properly. Or, to be more precise, it correctly reports that the request and response differ on a syntactic level, even though they should not be treated as different.

For example, the module uses the address prefix https://compute.googleapis.com/ for the machine type, whereas the response uses https://www.googleapis.com/. The request versus response values contained in their respective dictionaries (request_vals, response_vals) are listed below.

Requested values

{
    'disks': [{
        'autoDelete': True,
        'boot': True,
        'source': 'https://www.googleapis.com/compute/v1/projects/your-project/zones/us-central1-a/disks/disk-instance'
    }],
    'machineType': 'https://compute.googleapis.com/compute/v1/projects/your-project/zones/us-central1-a/machineTypes/n1-standard-1',
    'name': 'test-vm',
    'networkInterfaces': [{
        'accessConfigs': [{
            'name': 'External NAT',
            'natIP': '34.123.210.249',
            'type': 'ONE_TO_ONE_NAT'
        }]
    }]
}

Response values:

{
    'disks': [{
        'autoDelete': True,
        'boot': True,
        'source': 'https://www.googleapis.com/compute/v1/projects/your-project/zones/us-central1-a/disks/disk-instance'
    }],
    'machineType': 'https://www.googleapis.com/compute/v1/projects/your-project/zones/us-central1-a/machineTypes/n1-standard-1',
    'name': 'test-vm',
    'networkInterfaces': [{
        'accessConfigs': [{
            'name': 'External NAT',
            'natIP': '34.123.210.249',
            'type': 'ONE_TO_ONE_NAT',
            'networkTier': 'PREMIUM'
        }],
        'network': 'https://www.googleapis.com/compute/v1/projects/your-project/global/networks/default',
        'networkIP': '10.128.0.34',
        'subnetwork': 'https://www.googleapis.com/compute/v1/projects/your-project/regions/us-central1/subnetworks/default'
    }]
}

From here onwards I am not sure how to proceed: one could update the module to ignore certain values or introduce equivalence mappings between certain values (e.g. for the address prefixes).
I would appreciate some pointers in the right direction, or a statement on whether this behaviour is expected, as you are probably seeing similar behaviour internally as well.
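
As a rough sketch of the equivalence-mapping idea (the helper below is hypothetical, not the module's actual code), the API host could be normalised before comparing request and response values:

import re

# Hypothetical helper, not the module's actual code: treat
# https://compute.googleapis.com/... and https://www.googleapis.com/...
# self links as equivalent before diffing request and response values.
_API_HOST = re.compile(r'^https://(www|compute)\.googleapis\.com/')

def normalize_self_link(value):
    if isinstance(value, str):
        return _API_HOST.sub('https://www.googleapis.com/', value)
    return value

# Both machineType values from the dictionaries above normalise to the same
# string, so they would no longer be reported as a difference.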

Thanks!

google.cloud.gcp_container_cluster is not stateful

SUMMARY

When using the Ansible module to manage the state of the cluster, editable values such as master_authorized_networks_config are not matched, so the module does not update the cluster as expected.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

google.cloud.gcp_container_cluster

ANSIBLE VERSION
ansible 2.9.7
  config file = None
  configured module search path = ['/Users/aeric/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ansible
  executable location = /Library/Frameworks/Python.framework/Versions/3.6/bin/ansible
  python version = 3.6.8 (v3.6.8:3c6b436a57, Dec 24 2018, 02:04:31) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
  tasks:
    - name: create a cluster
      google.cloud.gcp_container_cluster:
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_service_account_file }}"
        project: saastest-202018
        name: "{{ k8s_gcp_region }}-saas-cluster-ansible"
        initial_node_count: 1
        addons_config:
          http_load_balancing:
            disabled: no
          horizontal_pod_autoscaling:
            disabled: no
        private_cluster_config:
          enable_private_nodes: yes
          master_ipv4_cidr_block: "{{ k8s_master_ipv4_cidr_block }}"
        subnetwork: projects/saastest-202018/regions/{{ k8s_gcp_region }}/subnetworks/default
        resource_labels:
          type: saas
        location: "{{ k8s_gcp_region }}"
        ip_allocation_policy:
          use_ip_aliases: yes
        master_authorized_networks_config:
          enabled: true
          cidr_blocks:
            - display_name: Any
              cidr_block: 10.0.0.0/8
        default_max_pods_constraint:
          max_pods_per_node: '110'

Then change the master_authorized_networks_config, which is a modifiable field.

  tasks:
    - name: create a cluster
      google.cloud.gcp_container_cluster:
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_service_account_file }}"
        project: saastest-202018
        name: "{{ k8s_gcp_region }}-saas-cluster-ansible"
        initial_node_count: 1
        addons_config:
          http_load_balancing:
            disabled: no
          horizontal_pod_autoscaling:
            disabled: no
        private_cluster_config:
          enable_private_nodes: yes
          master_ipv4_cidr_block: "{{ k8s_master_ipv4_cidr_block }}"
        subnetwork: projects/saastest-202018/regions/{{ k8s_gcp_region }}/subnetworks/default
        resource_labels:
          type: saas
        location: "{{ k8s_gcp_region }}"
        ip_allocation_policy:
          use_ip_aliases: yes
        master_authorized_networks_config:
          enabled: true
          cidr_blocks:
            - display_name: Any
              cidr_block: 0.0.0.0/0
        default_max_pods_constraint:
          max_pods_per_node: '110'
EXPECTED RESULTS
TASK [Gathering Facts] ***********************************************************************************************************************************************
ok: [localhost]

TASK [create a cluster] **********************************************************************************************************************************************
changed: [localhost]
ACTUAL RESULTS
TASK [Gathering Facts] ***********************************************************************************************************************************************
ok: [localhost]

TASK [create a cluster] **********************************************************************************************************************************************
ok: [localhost]

gcp_dns_resource_record_set invalid value for parameter name

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_dns_resource_record_set

ANSIBLE VERSION

Tried with older versions (2.8, 2.7) as well and got the same issue.

ansible 2.9.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/nepallink/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Apr 15 2020, 17:20:14) [GCC 7.5.0]

CONFIGURATION
- name: creating/adding a cname record
  gcp_dns_resource_record_set:
    name: "example.com."
    managed_zone: 
      name: "{{ lookup('env','ZONE_NAME') }}"
      dnsName: "{{ lookup('env','DNS_NAME') }}"
    type: CNAME
    ttl: 300
    target:
    - "{{ lookup('env','CNAME_VALUE') }}" 
    project: "{{ lookup('env','GCP_PROJECT') }}"
    auth_kind: serviceaccount
    service_account_file: "{{ credentials_file }}"
    state: present
OS / ENVIRONMENT
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
EXPECTED RESULTS

DNS CNAME record created

ACTUAL RESULTS
PLAY [localhost] ************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]

TASK [zeroonedns : creating/adding a cname record] **************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "GCP returned error: {u'error': {u'message': u\"Invalid value for 'parameters.name': ''\", u'code': 400, u'errors': [{u'reason': u'invalid', u'message': u\"Invalid value for 'parameters.name': ''\", u'domain': u'global'}]}}"}

PLAY RECAP ******************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

gcp_storage_object Python3 UnicodeDecodeError: 'utf-8' codec can't decode

From @bucklander on Jul 31, 2020 22:41

SUMMARY

While using Python 3, the gcp_storage_object module is unable to upload binary files (and perhaps other file types) without throwing:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 522: invalid start byte

This is possibly related to Ansible's explicit string conversion under Python 3 (or not). With other file types (e.g. simple text files), or with Python 2 and any file type, the problem does not exist.

Below are two example playbooks: one uploads a gzipped tarball and the other uploads a simple text file. Each playbook is executed with Python 2 and again with Python 3.

I'm certainly no expert in file compression or file types; however, in my particular case the gzipped file format seems to cause the issue. I also see this problem with binary files inside a regular tarball (no gzip compression). You could also replace the compressed file examples below with seemingly any randomly generated binary file, and the result appears to be the same.
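
For what it's worth, a minimal sketch of what I suspect happens (assuming the module reads the source file in text mode under Python 3); opening the file in binary mode avoids the implicit UTF-8 decode:

# Hypothetical illustration, not the module's actual upload code.
# Opening the source in text mode makes Python 3 decode it as UTF-8, which
# raises UnicodeDecodeError on arbitrary binary content; binary mode
# sidesteps the decode entirely:
with open("binary.tar", "rb") as f:
    data = f.read()  # bytes, never decoded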

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_storage_object

ANSIBLE VERSION
$ ansible --version
ansible 2.9.11
  config file = None
  configured module search path = ['${HOME}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
  executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
  python version = 3.8.5 (v3.8.5:580fbb018f, Jul 20 2020, 12:11:27) [Clang 6.0 (clang-600.0.57)]
CONFIGURATION
$ ansible-config dump --only-changed
$ 
OS / ENVIRONMENT

This can be reproduced on a few different systems. Here are a few specific examples:

macOS Mojave 10.14.6
Python 2.7.15
Python 3.8.5

CentOS Linux release 8.1.1911 (Core) 
Python 3.6.8
STEPS TO REPRODUCE

Execute the following example playbooks with a simple, flat text file as well as a tarball.

echo "hello" > foobar.txt
tar -zcf binary.tar foobar.txt

Example playbook to upload binary file to GCS:

# upload-bin-file.yml
---

- name: Upload a binary file to Google Cloud Storage bucket using gcp_storage_object module
  hosts: all
  connection: local
  tasks:
      - name: Upload Latest Backup(s) to GCS Bucket
        gcp_storage_object:
          action: upload
          overwrite: yes
          bucket: gcs-bucket-name
          src: binary.tar
          dest: "binary.tar"
          project: google-cloud-project-name
          auth_kind: serviceaccount
          service_account_file: "jwt.json"
          state: present

Example playbook to upload text file to GCS:

# upload-text-file.yml
---

- name: Upload a text file to Google Cloud Storage bucket using gcp_storage_object module
  hosts: all
  connection: local
  tasks:
      - name: Upload Latest Backup(s) to GCS Bucket
        gcp_storage_object:
          action: upload
          overwrite: yes
          bucket: gcs-bucket-name
          src: foobar.txt
          dest: "foobar.txt"
          project: google-cloud-project-name
          auth_kind: serviceaccount
          service_account_file: "jwt.json"
          state: present

Run upload-text-file.yml using Python 2 (modify Python envs per your system):

ansible-playbook --connection=local --inventory localhost, upload-text-file.yml -e ansible_python_interpreter=`which python2`

Run upload-text-file.yml using Python 3:

ansible-playbook --connection=local --inventory localhost, upload-text-file.yml -e ansible_python_interpreter=`which python3`

Run upload-bin-file.yml using Python 2:

ansible-playbook --connection=local --inventory localhost, upload-bin-file.yml -e ansible_python_interpreter=`which python2`

Run upload-bin-file.yml using Python 3:

ansible-playbook --connection=local --inventory localhost, upload-bin-file.yml -e ansible_python_interpreter=`which python3`
EXPECTED RESULTS

It's expected that all files upload successfully to GCS, regardless of whether Python 2 or Python 3 is used and regardless of file type.

ACTUAL RESULTS

Python 2 Text File Upload (successful):

$ ansible-playbook --connection=local --inventory localhost, upload-text-file.yml -e ansible_python_interpreter=/Library/Frameworks/Python.framework/Versions/2.7/bin/python2

PLAY [Upload a file to GCS using gcp_storage_object module] ********************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************
ok: [localhost]

TASK [Upload Latest Backup(s) to GCS Bucket] ***********************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Python 3 Text File Upload (successful):

$ ansible-playbook --connection=local --inventory localhost, upload-text-file.yml -e ansible_python_interpreter=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3

PLAY [Upload a file to GCS using gcp_storage_object module] *****************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [localhost]

TASK [Upload Latest Backup(s) to GCS Bucket] ********************************************************************************************************************************
changed: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Python 2 Binary File Upload (successful):

$ ansible-playbook --connection=local --inventory localhost, upload-bin-file.yml -e ansible_python_interpreter=/Library/Frameworks/Python.framework/Versions/2.7/bin/python2

PLAY [Upload a file to GCS using gcp_storage_object module] *****************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [localhost]

TASK [Upload Latest Backup(s) to GCS Bucket] ********************************************************************************************************************************
changed: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Python 3 Binary File Upload (fails):

$ ansible-playbook --connection=local --inventory localhost, upload-bin-file.yml -e ansible_python_interpreter=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 -vvvv
ansible-playbook 2.9.11
  config file = None
  configured module search path = ['${HOME}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
  executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible-playbook
  python version = 3.8.5 (v3.8.5:580fbb018f, Jul 20 2020, 12:11:27) [Clang 6.0 (clang-600.0.57)]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible/plugins/callback/default.py

PLAYBOOK: upload-bin-file.yml ***********************************************************************************************************************************************************************************
Positional arguments: upload-bin-file.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
extra_vars: ('ansible_python_interpreter=/Library/Frameworks/Python.framework/Versions/3.8/bin/python3',)
forks: 5
1 plays in upload-bin-file.yml

PLAY [Upload a file to GCS using gcp_storage_object module] *****************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************
task path: ${HOME}/Desktop/upload-bin-file.yml:3
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: bwallander
<localhost> EXEC /bin/sh -c 'echo ~bwallander && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ${HOME}/.ansible/tmp `"&& mkdir ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500 && echo ansible-tmp-1596236353.566118-46540-87177398296500="` echo ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500 `" ) && sleep 0'
Using module file /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible/modules/system/setup.py
<localhost> PUT ${HOME}/.ansible/tmp/ansible-local-46536i64et516/tmpj1i25bxy TO ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500/AnsiballZ_setup.py
<localhost> EXEC /bin/sh -c 'chmod u+x ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500/ ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c '/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r ${HOME}/.ansible/tmp/ansible-tmp-1596236353.566118-46540-87177398296500/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [Upload Latest Backup(s) to GCS Bucket] ********************************************************************************************************************************************************************
task path: ${HOME}/Desktop/upload-bin-file.yml:7
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: bwallander
<localhost> EXEC /bin/sh -c 'echo ~bwallander && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ${HOME}/.ansible/tmp `"&& mkdir ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804 && echo ansible-tmp-1596236354.939475-46572-195465935008804="` echo ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804 `" ) && sleep 0'
Using module file /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible/modules/cloud/google/gcp_storage_object.py
<localhost> PUT ${HOME}/.ansible/tmp/ansible-local-46536i64et516/tmp1a31n78a TO ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py
<localhost> EXEC /bin/sh -c 'chmod u+x ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/ ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py && sleep 0'
<localhost> EXEC /bin/sh -c '/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r ${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py", line 102, in <module>
    _ansiballz_main()
  File "${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_storage_object', init_globals=None, run_name='__main__', alter_sys=True)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 207, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py", line 286, in <module>
  File "/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py", line 190, in main
  File "/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py", line 206, in upload_file
  File "/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/module_utils/gcp_utils.py", line 99, in post_contents
  File "/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/module_utils/gcp_utils.py", line 159, in full_post
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py", line 578, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/google/auth/transport/requests.py", line 448, in request
    response = super(AuthorizedSession, self).request(
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen
    httplib_response = self._make_request(
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1301, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1039, in _send_output
    for chunk in chunks:
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 994, in _read_readable
    datablock = readable.read(self.blocksize)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 522: invalid start byte
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"${HOME}/.ansible/tmp/ansible-tmp-1596236354.939475-46572-195465935008804/AnsiballZ_gcp_storage_object.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_storage_object', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 286, in <module>\n  File \"/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 190, in main\n  File \"/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/modules/cloud/google/gcp_storage_object.py\", line 206, in upload_file\n  File \"/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/module_utils/gcp_utils.py\", line 99, in post_contents\n  File \"/var/folders/t4/mfbl431x7hx6_w64l1p01yjm0000gn/T/ansible_gcp_storage_object_payload_9pk0efwx/ansible_gcp_storage_object_payload.zip/ansible/module_utils/gcp_utils.py\", line 159, in full_post\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py\", line 578, in post\n    return self.request('POST', url, data=data, json=json, **kwargs)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/google/auth/transport/requests.py\", line 448, in request\n    response = super(AuthorizedSession, self).request(\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py\", line 530, in request\n    resp = self.send(prep, **send_kwargs)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/sessions.py\", line 643, in send\n    r = adapter.send(request, **kwargs)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/requests/adapters.py\", line 439, in send\n    resp = conn.urlopen(\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 670, in urlopen\n    httplib_response = self._make_request(\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 392, in _make_request\n    conn.request(method, url, **httplib_request_kw)\n  File 
\"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py\", line 1255, in request\n    self._send_request(method, url, body, headers, encode_chunked)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py\", line 1301, in _send_request\n    self.endheaders(body, encode_chunked=encode_chunked)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py\", line 1250, in endheaders\n    self._send_output(message_body, encode_chunked=encode_chunked)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py\", line 1039, in _send_output\n    for chunk in chunks:\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py\", line 994, in _read_readable\n    datablock = readable.read(self.blocksize)\n  File \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/codecs.py\", line 322, in decode\n    (result, consumed) = self._buffer_decode(data, self.errors, final)\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 522: invalid start byte\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ******************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Copied from original issue: ansible/ansible#71034

Add capability to 'gcp_storage_object' to enable folder object downloads and recursive capabilities

SUMMARY

As of right now, gcp_storage_object isn't the most useful module due to its lack of support for recursive and multi-file downloads. Instead we have to wrap gsutil rsync/cp commands in Ansible shell tasks.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

gcp_storage_object

ADDITIONAL INFORMATION

The feature would be used to recursively download folders, files, or the entirety of a bucket's contents.

I'm fairly indifferent on the syntax/usage, and mainly interested in having the capability.

- name: Download bucket contents or folder recursively
    gcp_storage_object:
      bucket: bucket-name
      action: download
      src: "*" || src: "./" || src: "." || src: "foldername" || src: "foldername/*"
      dest: "destfolder/"
      state: present

Inclusion of google.cloud in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap.) Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility... you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragment as the ansible/ansible repository (see the documentation).
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yaml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)

`*_info` modules import from `ansible.module_utils.gcp_utils`

SUMMARY

The import should be ansible_collections.google.cloud.plugins.module_utils.gcp_utils; importing from the old location makes the modules fail.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION

Devel checkouts of both ansible and the google.cloud collection, only a few minutes old.

CONFIGURATION
None
OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS
fatal: [localhost]: FAILED! => {
    "msg": "Could not find imported module support code for gcp_compute_instance_info.  Looked for either navigate_hash.py or gcp_utils.py"
}
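
For reference, a minimal sketch of the corrected import path (assuming the navigate_hash and GcpSession helpers live in the collection's gcp_utils, which is what the error message looks for):

# Collection-relative import instead of ansible.module_utils (assumed
# location of the helpers in this repository):
from ansible_collections.google.cloud.plugins.module_utils.gcp_utils import (
    navigate_hash,
    GcpSession,
    GcpModule,
)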

Behaviour of kubectl_path of gcp_container_cluster is unintuitive, deprecated and undocumented

SUMMARY

Specifying the parameter kubectl_path of gcp_container_cluster seems to cause the master auth credentials to be written to the kubeconfig file created at the specified path (judging from the exception below). This is

  • undocumented
  • unintuitive, as gcloud container clusters create produces a kubeconfig with state-of-the-art token-based authentication, not master auth
  • not only favouring but enforcing a deprecated practice (master auth for clusters)
ISSUE TYPE
  • Bug Report
  • Documentation report
  • Feature request
COMPONENT NAME

gcp_container_cluster

ANSIBLE VERSION
ansible-playbook 2.9.2
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.8 (default, Aug  7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

and

  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.8 (default, Aug  7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
No config file found; using defaults
OS / ENVIRONMENT

Docker image centos/centos7

STEPS TO REPRODUCE

Build

FROM centos:centos7

RUN yum upgrade -y && yum install -y epel-release git
RUN yum install -y python3-pip openssh-clients && yum clean all
RUN git clone https://github.com/ansible/ansible.git && cd ansible && pip3 install . && cd .. && rm -rf ansible
RUN pip3 install --upgrade pip lxml requests google-auth

Build the image with docker build -t dev ., then run docker run -v "$(pwd):/mnt" -it dev sh -c 'cd /mnt; ansible-config dump --only-changed; ansible-playbook --ask-vault-pass install_gke_cluster.yml -vvv' with the following playbook

---
- hosts: localhost
  roles:
    - role: k8s.cluster.gke
...

and task roles/k8s.cluster.gke/tasks/main.yml:

- name: "Create Google Kubernetes Engine Cluster"
  gcp_container_cluster:
    name: "{{cluster_name}}"
    project: "{{project_id}}"
    auth_kind: "serviceaccount"
    location: "{{cluster_location}}"
    logging_service: "none"
    monitoring_service: "none"
    service_account_contents: "{{service_account_contents}}"
    initial_node_count: 1
    kubectl_path: "/tmp/1"
  register: cluster

Put your own values in roles/k8s.cluster.gke/defaults/main.yml, as I can't give you my GKE credentials :)

EXPECTED RESULTS

A kubeconfig to be created at /tmp/1 (what gcloud container clusters create writes to .kube/config). The kubeconfig should contain token-based credentials, as that is what gcloud creates.

If you tackle the feature request aspect and provide token-based authentication, you still might want to support username/password authentication for backwards compatibility. In that case the failure below should be handled more gracefully, at least with an intuitive error message that doesn't require looking into the source code.

Both the current requirement for master credentials and how to get a token-based kubeconfig (as soon as implemented) should be documented.
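
To illustrate the kind of graceful failure I mean, a rough sketch (names are illustrative, not the module's actual code):

# Hypothetical guard, not the module's actual code: fail with a clear
# message when the cluster response has no basic-auth master credentials
# (the default for newly created clusters).
def kubeconfig_credentials(cluster, module):
    master_auth = cluster.get('masterAuth') or {}
    username = master_auth.get('username')
    password = master_auth.get('password')
    if not username or not password:
        module.fail_json(
            msg="kubectl_path requires basic-auth master credentials, which "
                "this cluster does not expose; unset kubectl_path or enable "
                "master auth (deprecated)."
        )
    return username, password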

ACTUAL RESULTS

The command fails due to

host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'


PLAYBOOK: install_gke_cluster.yml *********************************************************************************************************************************************************************************
1 plays in install_gke_cluster.yml

PLAY [localhost] **************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
task path: /mnt/install_gke_cluster.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818 `" && echo ansible-tmp-1577455971.2539594-163372358451818="` echo /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818 `" ) && sleep 0'
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-7udgzyfs8/tmpaeb2oavb TO /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818/ /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1577455971.2539594-163372358451818/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [k8s.cluster.gke : Create Google Kubernetes Engine Cluster] **************************************************************************************************************************************************
task path: /mnt/roles/k8s.cluster.gke/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943 `" && echo ansible-tmp-1577455972.5203164-148877134187943="` echo /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943 `" ) && sleep 0'
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/cloud/google/gcp_container_cluster.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-7udgzyfs8/tmpwxrdvmj0 TO /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/ /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_container_cluster', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py", line 2055, in <module>
  File "/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py", line 1373, in main
  File "/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py", line 1598, in write_file
  File "/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py", line 1634, in _contents
KeyError: 'username'

fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1577455972.5203164-148877134187943/AnsiballZ_gcp_container_cluster.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.google.gcp_container_cluster', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py\", line 2055, in <module>\n  File \"/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py\", line 1373, in main\n  File \"/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py\", line 1598, in write_file\n  File \"/tmp/ansible_gcp_container_cluster_payload_d0jf0pd_/ansible_gcp_container_cluster_payload.zip/ansible/modules/cloud/google/gcp_container_cluster.py\", line 1634, in _contents\nKeyError: 'username'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

gcp_dns_resource_record_set does not work with service_account_contents

SUMMARY

Migrating an old issue over from ansible/ansible#58242. Still happening for me on ansible 2.9.10.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_dns_resource_record_set

ANSIBLE VERSION
ansible 2.9.10
  config file = /Users/user/workspace/devops-ansible/ansible.cfg
  configured module search path = ['/Users/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
STEPS TO REPRODUCE
- name: DNS setup
  hosts: localhost
  gather_facts: false
  become: false
  vars:
    target_ip: "123.123.123.123"
    gcp_project: "PROJECT"
    gcp_cred_kind: serviceaccount
    gcp_cred_content: "{{ gcp_service_account_PROJECT }}"
    gcp_cred_file: "./PROJECT-12345678.json"
  tasks:
    - name: create dns zones
      gcp_dns_managed_zone:
        name: foo-zone
        dns_name: bar.foo.
        description: "foo"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: registered_zones

    - name: Create dns records
      gcp_dns_resource_record_set:
        managed_zone: "{{ registered_zones }}"
        name: "www.bar.foo."
        target:
          - "{{ target_ip }}"
        type: "A"
        ttl: 3600
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_contents: "{{ gcp_cred_content }}"
        # this works
        #service_account_file: "{{ gcp_cred_file }}"
        state: present
EXPECTED RESULTS

I expect the DNS records to actually be created regardless of whether service_account_contents or service_account_file is used.

ACTUAL RESULTS
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Credential type 'serviceaccount' not implemented"}

gcp_compute_instance doesn't attach persistent disk to already launched instance

SUMMARY

gcp_compute_instance doesn't attach a persistent disk to an already created instance.
When the instance is initially created with the persistent disk, the disk is attached.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_instance

ANSIBLE VERSION
2.9.6
CONFIGURATION
ANSIBLE_PIPELINING(/home/***/***/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/***/***/ansible.cfg) = -o "ControlMaster=auto" -o "ControlPersist=3600s"
ANSIBLE_SSH_CONTROL_PATH(/home/***/***/ansible.cfg) = %(directory)s/%%h-%%r
ANSIBLE_SSH_RETRIES(/home/***/***/ansible.cfg) = 5
OS / ENVIRONMENT

Ubuntu 18.04.4 LTS

STEPS TO REPRODUCE

Launch instance
Create persistent disk
Try to attach persistent disk

    - name: main.yml | launch gce instances
      local_action:
        module: gcp_compute_instance
        # Account variables
        project: "{{ _project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        # Environments variables
        name: "{{ instance_name }}"
        machine_type: "{{ _machine_type }}"
        zone: "{{ _zone }}"
        disks:
          - auto_delete: true
            boot: true
            source: "{{ system_disk_result }}"
        network_interfaces:
          - network: "{{ system_network_result }}"
            access_configs:
              - name: 'External NAT'
                nat_ip: "{{ external_ip_address_result }}"
                type: 'ONE_TO_ONE_NAT'
        tags:
          items: "{{ _tags }}"
        state: present
      register: instance_result

    - name: main.yml | create additional disk
      local_action:
        module: gcp_compute_disk
        # Account variables
        project: "{{ _project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        # Environments variables
        name: "{{ persistent_disk_name }}"
        type: "{{ persistent_disk_type }}"
        size_gb: "{{ persistent_disk_size }}"
        labels:
          instance: "{{ instance_name }}"
        zone: "{{ _zone }}"
        state: present
      register: persistent_disk_result

    - name: main.yml | attach persistent disk
      local_action:
        module: gcp_compute_instance
        # Account variables
        project: "{{ _project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        # Environments variables
        name: "{{ instance_name }}"
        machine_type: "{{ _machine_type }}"
        zone: "{{ _zone }}"
        disks:
          - auto_delete: false
            boot: false
            interface: 'SCSI'
            mode: 'READ_WRITE'
            device_name: "{{ persistent_disk_name }}"
            source: "{{ persistent_disk_result }}"
            type: "PERSISTENT"
        state: present
      register: attach_persistent_disk_result
EXPECTED RESULTS

Persistent disk is attached to instance

ACTUAL RESULTS

Persistent disk is not attached to instance when I check in GCP portal

TASK [main.yml | attach persistent disk] **************************************************************************************************************************************************************************************************************************************
task path: /home/***/***/gcp-deploy-instance.yml:141
Wednesday 08 April 2020  11:47:07 +0300 (0:00:04.605)       0:00:26.557 ******* 
Using module file /home/***/***/env/lib/python3.6/site-packages/ansible/modules/cloud/google/gcp_compute_instance.py
Pipelining is enabled.
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c '/home/***/***/env/bin/python3.6 && sleep 0'
ok: [localhost -> localhost] => changed=false 
  cpuPlatform: Intel Broadwell
  creationTimestamp: '2020-04-08T01:46:54.844-07:00'
  deletionProtection: false
  disks:
  - autoDelete: true
    boot: true
    deviceName: persistent-disk-0
    diskSizeGb: '35'
    index: 0
    interface: SCSI
    kind: compute#attachedDisk
    mode: READ_WRITE
    source: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a/disks/test2-instance-name
    type: PERSISTENT
  fingerprint: *********
  id: *********
  invocation:
    module_args:
      auth_kind: serviceaccount
      can_ip_forward: null
      deletion_protection: null
      disks:
      - auto_delete: false
        boot: false
        device_name: test2-instance-name-disk-01
        disk_encryption_key: null
        index: null
        initialize_params: null
        interface: SCSI
        mode: READ_WRITE
        source:
          changed: true
          creationTimestamp: '2020-04-08T01:47:04.551-07:00'
          failed: false
          id: *********
          kind: compute#disk
          labelFingerprint: *********
          labels:
            instance: test2-instance-name
          name: test2-instance-name-disk-01
          physicalBlockSizeBytes: '4096'
          selfLink: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a/disks/test2-instance-name-disk-01
          sizeGb: '30'
          status: READY
          type: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a/diskTypes/pd-standard
          zone: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a
        type: PERSISTENT
      env_type: null
      guest_accelerators: null
      hostname: null
      labels: null
      machine_type: n1-standard-2
      metadata: null
      min_cpu_platform: null
      name: test2-instance-name
      network_interfaces: null
      project: *********
      scheduling: null
      scopes:
      - https://www.googleapis.com/auth/compute
      service_account_contents: null
      service_account_email: null
      service_account_file: .google/test2-cluster-name.json
      service_accounts: null
      shielded_instance_config: null
      state: present
      status: null
      tags: null
      zone: europe-west2-a
  kind: compute#instance
  labelFingerprint: *********
  machineType: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a/machineTypes/n1-standard-2
  metadata: {}
  name: test2-instance-name
  networkInterfaces:
  - accessConfigs:
    - kind: compute#accessConfig
      name: External NAT
      natIP: *********
      networkTier: PREMIUM
      type: ONE_TO_ONE_NAT
    fingerprint: *********
    kind: compute#networkInterface
    name: nic0
    network: https://www.googleapis.com/compute/v1/projects/***/global/networks/default
    networkIP: *********
    subnetwork: https://www.googleapis.com/compute/v1/projects/***/regions/europe-west2/subnetworks/default
  scheduling:
    automaticRestart: true
    onHostMaintenance: MIGRATE
    preemptible: false
  selfLink: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a/instances/test2-instance-name
  startRestricted: false
  status: RUNNING
  tags:
    fingerprint: *********
    items:
    - prometheus
    - webcp
  zone: https://www.googleapis.com/compute/v1/projects/***/zones/europe-west2-a
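
As a stop-gap, a rough workaround sketch that calls the Compute Engine attachDisk endpoint directly (project, zone, key file and disk names below are illustrative; requires the google-auth library):

# Hypothetical workaround sketch, not part of the module.
import google.auth.transport.requests
from google.oauth2 import service_account

project, zone = "my-project", "europe-west2-a"
instance, disk = "test2-instance-name", "test2-instance-name-disk-01"

creds = service_account.Credentials.from_service_account_file(
    ".google/test2-cluster-name.json",  # illustrative key file path
    scopes=["https://www.googleapis.com/auth/compute"],
)
session = google.auth.transport.requests.AuthorizedSession(creds)
resp = session.post(
    "https://compute.googleapis.com/compute/v1/projects/{}/zones/{}"
    "/instances/{}/attachDisk".format(project, zone, instance),
    json={
        "deviceName": disk,
        "source": "projects/{}/zones/{}/disks/{}".format(project, zone, disk),
        "mode": "READ_WRITE",
        "type": "PERSISTENT",
        "autoDelete": False,
    },
)
resp.raise_for_status()  # returns a zone operation that can be polled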

`gcp_compute_route` fails every time for more than a year

SUMMARY

gcp_compute_route fails with a TypeError.

This is a transfer of ansible/ansible#55153.

It has been more than a year that you have been offering a module that breaks your users' apps. Your modules deprecated community ones that could at least perform what they claimed to do. Could there at least be some professionalism and some community feedback here?

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_compute_route

ANSIBLE VERSION
ansible 2.10.0.dev0
  config file = None
  configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
CONFIGURATION
None
OS / ENVIRONMENT

irrelevant

STEPS TO REPRODUCE
         gcp_compute_route:
            name: name-here
            dest_range: 192.168.0.0/24
            next_hop_instance:
              selfLink: zones/europe-west1-b/instances/vpn
            priority: 100
EXPECTED RESULTS

No errors.

ACTUAL RESULTS
The full traceback is:
Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/lib/python3.8/runpy.py", line 206, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec) 
  File "/usr/lib/python3.8/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_compute_route.py", line 539, in <module>
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_compute_route.py", line 391, in main
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py", line 2043, in exit_json
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py", line 2012, in _return_formatted
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py", line 722, in warn
  File "/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/common/warnings.py", line 18, in warn
    return _formatwarnmsg_impl(msg)
TypeError: warn requires a string not a <class 'dict'>
fatal: [slagfalt-vpn -> localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.8/runpy.py\", line 206, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 96, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 86, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_compute_route.py\", line 539, in <module>\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_compute_route.py\", line 391, in main\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py\", line 2043, in exit_json\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py\", line 2012, in _return_formatted\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/basic.py\", line 722, in warn\n  File \"/tmp/ansible_gcp_compute_route_payload_yo8rsw04/ansible_gcp_compute_route_payload.zip/ansible/module_utils/common/warnings.py\", line 18, in warn\n    return _formatwarnmsg_impl(msg)\nTypeError: warn requires a string not a <class 'dict'>\n",                                                              
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", 
    "rc": 1
}
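
The traceback points at AnsibleModule.warn() receiving a dict instead of a string. As a rough illustration only (the helper name safe_warn and the json serialization are assumptions, not the collection's actual fix), a module could guard against this by serializing structured warnings before handing them to warn():

import json

def safe_warn(module, warning):
    # AnsibleModule.warn() accepts only strings and raises
    # "TypeError: warn requires a string" for anything else,
    # so serialize structured payloads (dicts, lists) first.
    if not isinstance(warning, str):
        warning = json.dumps(warning)
    module.warn(warning)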

repo name change to just "google.cloud"

SUMMARY

As this repo now lives in an organization called "ansible-collections", it might be worth renaming the repo to just "google" and dropping the "ansible_collections" prefix.

`gcp_storage_object` does not work (raises ValueError: None could not be converted to unicode)

SUMMARY

When trying to upload an object, the module gives a python traceback and no object is uploaded.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

gcp_storage_object

ANSIBLE VERSION

Just checked out the latest google.cloud collection (exact Ansible version not given).

ACTUAL RESULTS
The full traceback is:
Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/lib/python3.8/runpy.py", line 207, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_gcp_storage_object_payload_gqp67u8p/ansible_gcp_storage_object_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_storage_object.py", line 301, in <module>
  File "/tmp/ansible_gcp_storage_object_payload_gqp67u8p/ansible_gcp_storage_object_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_storage_object.py", line 203, in main
  File "/usr/lib/python3.8/site-packages/google/cloud/storage/blob.py", line 180, in __init__
    name = _bytes_to_unicode(name)
  File "/usr/lib/python3.8/site-packages/google/cloud/_helpers.py", line 389, in _bytes_to_unicode
    raise ValueError("%r could not be converted to unicode" % (value,))
ValueError: None could not be converted to unicode
failed: [viktor-cloud-hopper -> localhost] (item=/home/nikos/Projects/ethical-hacking/EN2720/data/.tmp/roles/twmn.mamma-nubes/function.spec) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "/home/nikos/Projects/ethical-hacking/EN2720/data/.tmp/roles/twmn.mamma-nubes/function.spec",
    "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_gcp_storage_object_payload_gqp67u8p/ansible_gcp_storage_object_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_storage_object.py\", line 301, in <module>\n  File \"/tmp/ansible_gcp_storage_object_payload_gqp67u8p/ansible_gcp_storage_object_payload.zip/ansible_collections/google/cloud/plugins/modules/gcp_storage_object.py\", line 203, in main\n  File \"/usr/lib/python3.8/site-packages/google/cloud/storage/blob.py\", line 180, in __init__\n    name = _bytes_to_unicode(name)\n  File \"/usr/lib/python3.8/site-packages/google/cloud/_helpers.py\", line 389, in _bytes_to_unicode\n    raise ValueError(\"%r could not be converted to unicode\" % (value,))\nValueError: None could not be converted to unicode\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
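
For context, the ValueError is raised by the google-cloud-storage client itself when a Blob is constructed with name=None; the module reaches that point when the remote object name it derives from its parameters ends up unset. A minimal, standalone reproduction sketch (the bucket name is a placeholder, and the failing code path inside the module is inferred from the traceback, not quoted):

from google.cloud import storage

# Constructing the Bucket needs no credentials or network access here.
bucket = storage.Bucket(client=None, name="my-example-bucket")

# Blob() passes its name through _bytes_to_unicode(), so a None name fails
# immediately with "ValueError: None could not be converted to unicode",
# which is the same error gcp_storage_object reports above.
blob = storage.Blob(name=None, bucket=bucket)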
