
The IBM Operator Collection SDK provides the resources and tools that are needed to develop Operator Collections against the IBM® z/OS® Cloud Broker as part of the IBM Z® and Cloud Modernization Stack.

License: Apache License 2.0


operator-collection-sdk's Introduction

IBM Operator Collection SDK

License Test Release

Overview

The IBM Operator Collection SDK provides the resources and tools that are needed to develop Operator Collections against the IBM® z/OS® Cloud Broker which is part of the IBM Z® and Cloud Modernization Stack.

Operator Collections are simply Ansible Collections that are dynamically converted to Ansible Operators in OpenShift when imported into the IBM® z/OS® Cloud Broker. This allows you to write any Ansible playbook, develop and iterate on it locally, publish it to OpenShift, and expose a catalog of new, statefully managed services.

The IBM Operator Collection SDK simplifies the development of these Operator Collections by providing:

  • The ability to scaffold a new Operator Collection with a preconfigured set of requirements
  • The ability to quickly debug your Ansible automation in an operator in OpenShift, using a local build of your latest Ansible modifications

This project also provides the documented Operator Collection specification, along with a tutorial to guide you along the development process.

Documentation Table of Contents

How to contribute

Check out the contributor documentation.

Copyright

© Copyright IBM Corporation 2023.

operator-collection-sdk's People

Contributors

freemanlatrell, ibm-open-source-bot, kberkos-public, some-ibmer, yemi-kelani, zohiba


operator-collection-sdk's Issues

There is no description of what the RACF example operator does - there should be

What is the link to the document from the "main" branch?

Which section(s) is the issue in?

Ideally, there should be a readme here that provides a high-level description of the operator and how it functions

What needs fixing?

There is no description of how the sample operator works - this is required.

Additional context

Since there is only one example right now, we probably don't need anything at the 'examples' level but if we add more there should probably be some sort of readme at the examples level too to direct people to the different examples.

Add support for air-gapped operator collections

Feature Request

Describe the problem you need a feature to resolve.

When users need to develop collections that are supported in air-gapped environments, there's currently no way to know how to support collection dependencies for these operators.

Describe the solution you'd like.

  • Update the init_collection.yml playbook with a requirements.yml template containing an example of how to support local dependencies
  • Update the RACF Operator example to support local dependencies
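A hedged sketch of what such a requirements.yml template could look like; the local_dependencies directory, tarball names, and versions are purely illustrative, not the SDK's actual layout:

```yaml
# Illustrative requirements.yml for an air-gapped install.
# Local collection tarballs are referenced with type: file so
# ansible-galaxy never reaches out to an external server.
collections:
  - name: ./local_dependencies/ibm-ibm_zos_core-1.5.0.tar.gz
    type: file
  - name: ./local_dependencies/community-general-7.0.0.tar.gz
    type: file
```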

Enhancements to create ZosCloudBroker instance

Feature Request

Describe the problem you need a feature to resolve.

It would be helpful if we could make the following enhancements to the Create ZosCloudBroker function:

  1. Either not display block Storage Classes (as they are not valid for ZosCloudBroker), or add hover help to guide users away from block Storage Classes. Also, the documentation at: https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=broker-creating-zos-cloud-instance could be enhanced to make this clear.

  2. When creating a ZosCloudBroker, one should not be permitted to specify the name of an existing ZosCloudBroker instance. I had a failed instance and attempted to recreate it with the same name, which failed, but I did not see the failure until I had entered all the fields and attempted to create it. Once I specified a different name, I was able to create a second instance.

  3. As Volume Access Mode can only be set to ReadWriteMany, it would be nice if this was prefilled by default. I think this can be done by setting a default: tag in the config.yml file for the create ZosCloudBroker panel.
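As a sketch of point 3 (the real schema of the broker's config.yml may differ; field names here are assumptions), the attribute definition could carry a default: tag:

```yaml
# Hypothetical fragment of the create-ZosCloudBroker panel config.
# Only the default: tag is the point of this example.
- name: volumeAccessMode
  displayName: Volume Access Mode
  type: string
  default: ReadWriteMany  # the only valid value, prefilled for the user
```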

Describe the solution you'd like.

Proposed solutions are specified with the problem statement above. Thank you.

Troubleshooting doc should include guidance on when to update YAML

What is the link to the document from the "main" branch?

I do not know if we currently have any troubleshooting documentation for the z/OS Cloud Broker or for Operator Collections, so I do not have a link.

Which section(s) is the issue in?

Troubleshooting

What needs fixing?

I've encountered numerous cases where attempts to delete resources fail, usually after I have installed a new version of the z/OS Cloud Broker or of a sub-operator, because the resources being deleted relied on function in the earlier version. A 'Resource is being deleted' message then flashes periodically, presumably because we internally keep retrying the delete even though the delete request fails. This is very irritating.

As the act of deleting a resource depends on the finalizers tag, the issue can be alleviated by editing the YAML of the resource and deleting the entries immediately after the finalizers: tag. Latrell showed me this trick, but I think we need to document it in a troubleshooting section, as customers will run into this. Incidentally, deleting the sub-operator did not help, because the resources created by the sub-operator still existed and hence the 'Resource is being deleted' message kept flashing.
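As a sketch, the edit amounts to clearing the finalizers list in the resource's metadata (the finalizer string shown is a placeholder, not the broker's actual finalizer name):

```yaml
# Before: deletion hangs because the controller that owns the
# finalizer can no longer complete its cleanup.
metadata:
  finalizers:
    - example.ibm.com/cleanup   # placeholder finalizer name
---
# After editing: with the list cleared, Kubernetes completes the deletion.
metadata:
  finalizers: []
```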

Additional context

I am raising this against the operator-collection repository, but I think it applies to the broker too. We should look to add this to a generic troubleshooting section.

When creating a ZosEndpoint, field endpointType is not documented and its purpose is unclear

What is the link to the document from the "main" branch?

https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=interface-importing-operator-collection

Which section(s) is the issue in?

Define a z/OS endpoint.

What needs fixing?

Explain the purpose of field endpointType and maybe give some suggestions for suitable values for this field.
The description below the field currently states: 'ZosEndpointType defines the valid ZosEndpoint Types' but this is not very helpful.

Additional context

I've raised this as a doc issue but if field endpointType is not required then we should delete it from the product.

I created an endpoint and left endpointType blank. This worked fine. Also, when I then looked at the defined endpoint, the endpointType did not appear anywhere on the panel. If this field does serve a purpose, I would have expected to see a blank field that I could possibly edit to set a value for it.

delete_operator playbook fails to clean up credential secret

Bug Report

When executing the delete_operator.yml playbook, the final task that attempts to remove the credential secret fails with the following error:

TASK [Remove Credential Secret] ************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'credential_name'. 'dict object' has no attribute 'credential_name'\n\nThe error appears to be in '/Users/latrellfreeman/.ansible/collections/ansible_collections/ibm/operator_collection_sdk/playbooks/delete_operator.yml': line 83, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - name: Remove Credential Secret\n      ^ here\n"}
...ignoring
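Until this is fixed, a defensive version of the task could guard on the variable. This is only a sketch; the module, variable paths, and namespace variable in the real delete_operator.yml are assumptions:

```yaml
# Skip the secret removal cleanly when credential_name was never set,
# instead of failing on an undefined variable.
- name: Remove Credential Secret
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Secret
    name: "{{ suboperatorconfig.spec.credential_name }}"  # assumed variable path
    namespace: "{{ operator_namespace }}"                 # assumed variable
  when: suboperatorconfig.spec.credential_name is defined
```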

What did you do?

Execute the delete_operator.yml playbook

What did you expect to see?

Remove Credential Secret task should complete successfully

Improve formatting of doc links on panels

Feature Request

While creating a z/OS endpoint, I notice that the doc links to the right of the panel are not well aligned. This makes the panel look very messy. Also, the links are not actual links. And finally, the links point to generic doc sections rather than sections specific to the current panel.

For example, we have:

ZosEndpoint is the Schema for the zosendpoints API. View documentation: ibm.biz/ibm-zoscb-doc. License ibm.biz/ibm-zoscb-license.

Describe the solution you'd like.

It would be better to align this text with the words:

z/OS Endpoint
provided by IBM

(Though should the p be a capital P, or should it read 'z/OS Endpoint provided by IBM'?)

on the same panel.

The text would look better if formatted as follows:

ZosEndpoint is the Schema for the zosendpoints API.
View documentation: ibm.biz/ibm-zoscb-doc.
License ibm.biz/ibm-zoscb-license.

Ideally, all links should be proper links to documents that users can just click on. I had to cut and paste the provided link into a browser.

Also, it would be better to direct users to https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=interface-importing-operator-collection, as this documentation specifically discusses defining z/OS Endpoints.

Btw, the same issue occurs on other panels too (e.g. Create Operator Collection) so it would be good to fix this on all new panels added for z/OS Cloud Broker.

Importing an operator collection section is unclear and looks out of date compared to the code

What is the link to the document from the "main" branch?

https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=interface-importing-operator-collection

Which section(s) is the issue in?

Import an operator collection

What needs fixing?

Point 1. states 'In the z/OS® Cloud Broker navigation pane, select Import operator collections.'

It is not clear which pane is the navigation pane. The instructions really need to spell out which panel one is in and precisely what options/actions one is expected to perform from that panel. At present, the user has to do a lot of guesswork.

When I look at the list of Installed Operators, I see an API of Operator Collection. When I select that, I can see an action to Create OperatorCollection. I do not see any actions to do with Importing an operator collection.

Additional context

I am running z/OS Cloud Broker V2.2, and the release notes at the following link state that the documentation that I am looking at is valid for V2.2:

https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=broker-zos-cloud-release-notes

So, it does look like the function has moved on a bit and the documentation has not been kept in sync.

Typo in ocsdk-create-operator output message

Bug Report

What did you do?

When running the ocsdk-create-operator command, I see a typo in one of the message prompts.

What did you expect to see?

The typo should be corrected

What did you see instead? Under which circumstances?

The message reads:
Enter you SSH Username for this endpoint (Press Enter to skip if the zoscb-encrypt CLI isn't installed):

Collection Version

$ ansible-galaxy collection verify ibm.operator_collection_sdk

ibm.operator_collection_sdk:0.2.0

Possible Solution

Additional context


Better documentation needed for z/OS Cloud Broker operator installation

What is the link to the document from the "main" branch?

https://github.com/IBM/operator-collection-sdk/blob/main/ibm/operator_collection_sdk/README.md

Which section(s) is the issue in?

Installation

What needs fixing?

We describe how the Operator Collection SDK can be installed from GitHub or Docker. However, later, in Step 1, we state 'Install the z/OS Cloud Broker Operator in your namespace ..'. This is confusing, and it is not clear how the Operator Collection that was installed locally can be installed into the namespace. Also, one would actually expect the Operator Collection SDK to be available with OpenShift. After discussing with Latrell, we think that it would make more sense if the first step in Installation directed users to this link: https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=broker-creating-zos-cloud-instance.

Additional context

sdk-repo - ssh passphrase validation when creating operator collection

Feature Request

Describe the problem you need a feature to resolve.

I entered an incorrect passphrase but this was not validated up front. Instead, we proceeded to create the operator, which eventually failed.

Describe the solution you'd like.

Add logic to validate the passphrase when it is entered, and fail create operator collection if the passphrase is incorrect.
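One way to validate up front (a sketch only; the variable names are placeholders) is to check that the passphrase actually unlocks the private key before any resources are created. ssh-keygen's -y mode fails with a non-zero exit code if the passphrase is wrong, which would fail the play immediately:

```yaml
# Fail fast on an incorrect passphrase before creating the operator.
- name: Validate SSH key passphrase
  ansible.builtin.command:
    cmd: ssh-keygen -y -P "{{ ssh_passphrase }}" -f "{{ ssh_key_path }}"
  changed_when: false
  no_log: true   # never print the passphrase in task output
```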

Change z/OC to z/OS

What is the link to the document from the "main" branch?

/ibm/operator_collection_development/README.md

Which section(s) is the issue in?

Prerequisites

What needs fixing?

Link 'z/OC Cloud Broker Encryption CLI' should read 'z/OS Cloud Broker Encryption CLI'

Additional context

Issue in generating offline Operator Collections for the ones downloaded from Ansible Galaxy

Bug Report

What did you do?

When executing the Ansible playbook create_offline_requirements.yml against the ZPM and IMS Operator Collections downloaded from Ansible Galaxy, it fails due to the missing galaxy.yml file.

What did you expect to see?

Execution of Ansible playbook create_offline_requirements.yml against the ZPM and IMS Operator Collections downloaded from Ansible Galaxy should generate the offline Operator Collections for ZPM and IMS successfully.

Collection Version

$ ansible-galaxy collection verify ibm.operator_collection_sdk

1.0.0

Possible Solution

For the Operator Collections downloaded from Ansible Galaxy, bypass the validation check for galaxy.yml file.
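A sketch of the bypass (task layout and file names are assumptions): check whether galaxy.yml exists and only run the validation when it does, since collections installed from Ansible Galaxy ship a MANIFEST.json instead of a galaxy.yml:

```yaml
- name: Check for galaxy.yml
  ansible.builtin.stat:
    path: "{{ collection_path }}/galaxy.yml"   # collection_path is assumed
  register: galaxy_file

- name: Validate galaxy.yml
  ansible.builtin.include_tasks: validate_galaxy.yml   # hypothetical task file
  when: galaxy_file.stat.exists
```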

Additional context

Inconsistent names used in the create and verify steps for ZosCloudBroker

What is the link to the document from the "main" branch?

https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=broker-creating-zos-cloud-instance

Which section(s) is the issue in?

Procedure, step 6. and Verify step 3.

What needs fixing?

In Procedure step 6., name is set to zoscloudbroker but in Verify step 3., ac-zoscloudbroker-01 is specified. This can be confusing as users could infer that we add a prefix and a suffix to the name they supplied. It would be better to specify a consistent name in both cases.

Additional context

Create credential secret from OC SDK

Feature Request

Describe the problem you need a feature to resolve.

Currently, when users try to create a credential secret in an air-gapped environment (without access to GitHub or other sources), creating the secret is very difficult.

Describe the solution you'd like.

I'd like to be able to run a playbook that will exec into the ansible pod and create the credential secret for me automatically.

Use TIMEDOUT/TIMEOUT instead of FAILED

Feature Request

Describe the problem you need a feature to resolve.

When creating an operator collection, we issue messages like:

TASK [Create OperatorCollection] *******************************************************************************************************************************
changed: [localhost]
FAILED - RETRYING: [localhost]: Validate OperatorCollection installed successfully (5 retries left).

TASK [Validate OperatorCollection installed successfully] *******************************************************************************************************************************
ok: [localhost]

TASK [Create SSH Credential Secret with passphrase] *******************************************************************************************************************************
changed: [localhost]

TASK [Create SubOperatorConfig with mapped credential] *******************************************************************************************************************************
changed: [localhost]
FAILED - RETRYING: [localhost]: Validate SubOperatorConfig installed successfully (30 retries left).
FAILED - RETRYING: [localhost]: Validate SubOperatorConfig installed successfully (29 retries left).
....

End users find it alarming when they see the word FAILED.

Describe the solution you'd like.

For cases where there is a genuine timeout, can we replace the word FAILED with TIMEDOUT or TIMEOUT ?
If we fail to validate after all retry attempts have been exhausted, then we can say FAILED.

The other thing to consider is to increase the time interval between validation checks and hence reduce the frequency of issuing such messages.
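For instance (a sketch, not the playbook's actual task; module choice and variable names are assumptions), the retry cadence on the validation task could be slowed down:

```yaml
- name: Validate SubOperatorConfig installed successfully
  kubernetes.core.k8s_info:
    kind: SubOperatorConfig
    name: "{{ operator_name }}"            # placeholder
    namespace: "{{ operator_namespace }}"  # placeholder
  register: soc
  until: soc.resources | length > 0
  retries: 10
  delay: 30   # fewer, slower checks mean fewer "FAILED - RETRYING" lines
```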

Thank you.

Operator Collection SDK tutorial.md file should indicate how to create Endpoints

What is the link to the document from the "main" branch?

https://github.com/IBM/operator-collection-sdk/blob/main/docs/tutorial.md

Which section(s) is the issue in?

The Create the Operator section mentions Endpoint wazi-sandbox but does not describe how to create an Endpoint or how an Endpoint comes into play.

What needs fixing?

Explain how an Endpoint is created (if created automatically as part of installing an Operator see https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.2?topic=interface-importing-operator-collection) or if created manually. It is possible that we may be able to just link to existing sections on Endpoints in the z/OS Cloud Broker Docs.

Btw, sometimes we say importing and sometimes we say installing an operator. This is confusing. It would be nice if we could use consistent terminology.

Additional context

If we document how to create Endpoints in the tutorial, the README files for all Operators can reference the tutorial and hence avoid the need to describe how to create an endpoint in multiple places. I believe the documentation for the IMS Operator already describes the specifics of Endpoints. It would be nice if it just referenced the tutorial. This is what I would like to do in the README file for the MQ Operator.

Operator Collection Tutorial indicates that users will need to update the operator-config.yml but this is not always the case

What is the link to the document from the "main" branch?

https://github.com/IBM/operator-collection-sdk/blob/main/docs/tutorial.md

Which section(s) is the issue in?

The Update operator config section.

What needs fixing?

I do not wish to duplicate doc that is in the tutorial.md file in the MQ Operator README file. So, I would like to be able to reference the tutorial.md file as much as possible. However, the tutorial.md file currently states that users will need to update the operator-config.yml file. But, this is not necessarily true because for the MQ Operator, the operator-config.yml file is provided complete. Users do not need to update it.

I think the tutorial needs to differentiate between the steps required to install an existing/provided Operator and those required to create a brand-new Operator.

At present, the instructions are somewhat misleading.

Additional context

Add ability to leverage ocsdk-extra-vars.yml variables during operator creation

Feature Request

Describe the problem you need a feature to resolve.

With the latest enhancements in the OC SDK VS Code Extension, the endpoint variables needed to execute the create_operator playbook can now be stored in a file named ocsdk-extra-vars.yml. This allows the user to bypass the prompts needed to retrieve these values in the VS Code Extension. For consistency, and to provide the same ability to users who aren't using the extension, we should leverage the same file to bypass the Ansible prompts when executing the playbooks outside of the VS Code extension.

Describe the solution you'd like.

Add ability for the create_operator playbook to retrieve variables stored in the ocsdk-extra-vars.yml file.
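A minimal sketch of how the playbook could pick up the file when present (task layout assumed; only the file name comes from this issue):

```yaml
- name: Check for ocsdk-extra-vars.yml
  ansible.builtin.stat:
    path: "{{ playbook_dir }}/ocsdk-extra-vars.yml"
  register: extra_vars_file

- name: Load variables from ocsdk-extra-vars.yml when present
  ansible.builtin.include_vars:
    file: "{{ playbook_dir }}/ocsdk-extra-vars.yml"
  when: extra_vars_file.stat.exists
```

Variables loaded this way would take the place of the corresponding vars_prompt entries, so the prompts can be skipped when the file supplies the values.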

Permission Denied error when running ansible-galaxy collection install command

Bug Report

What did you do?

I ran the following command on the vm I am installing the tool on:
ansible-galaxy collection install git@github.com:IBM/operator-collection-sdk.git#ibm/operator_collection_sdk,v0.x -f

What did you expect to see?

I expected for the install to complete successfully

What did you see instead? Under which circumstances?

it returned the following error:

git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Collection Version

$ ansible-galaxy collection verify ibm.operator_collection_sdk

ibm.operator_collection_sdk:0.2.0

Possible Solution

Additional context


Update documentation "configure extra-vars to bypass prompts" section

What is the link to the document from the "main" branch?

https://github.com/IBM/operator-collection-sdk/blob/main/ibm/operator_collection_sdk/README.md#configure-extra-vars-file-to-bypass-prompts

Which section(s) is the issue in?

"Configure extra-vars file to bypass prompts"

What needs fixing?

Documentation is outdated with changes from #56

Additional context

The playbook in issue #56 reads the variables in from a YAML file. It looks like this is a different playbook that inputs variables via a different method (--extra-vars vars.json, as opposed to reading them implicitly from ocsdk_extra_vars.yaml/yml). Update the documentation to say that if an ocsdk_extra_vars.yaml file is detected, it will be used automatically.

Persistent Volume Claim remains bound to z/OS Cloud Broker instance after the instance has been deleted

Bug Report

What did you do?

I created zoscloudbroker, which failed because I had not set a storage class. While this broker instance was in a failed state, I created zoscloudbroker2 and set the storage class. This was successful.

I then deleted both broker instances. The deletions were both successful.

I then recreated zoscloudbroker and set the storage class. This was successful.

I then selected Storage and Persistent Volume Claims and noticed that there was a bound PVC for zoscloudbroker2.

What did you expect to see?

Given that zoscloudbroker2 had been successfully deleted, I would not have expected to see a PVC for it.

What did you see instead? Under which circumstances?

I saw instances of resources associated with a deleted broker instance. This should not occur as it requires end users to perform manual cleanup and hence adds to their admin overhead.

Collection Version

$ ansible-galaxy collection verify ibm.operator_collection_sdk

Note: The above command failed on my system so I got the version from the OpenShift panel:

IBM® z/OS® Cloud Broker
2.2.0 provided by IBM

Possible Solution

All resources associated with zoscloudbroker2 should ideally be deleted when the broker is deleted. If it is a requirement to first delete the PVC, then deletion of the broker should have failed with an indication that the PVC must first be deleted.

Additional context

For now, I manually deleted the PVC for zoscloudbroker2.

Control subsequent attributes displayed, based on the value selected for an earlier attribute

Feature Request

Describe the problem you need a feature to resolve.

With the operator-config.yml file, there is no way to control subsequent attributes displayed based on the value of a previous attribute that has been selected.

Currently, I have added another API to achieve this, but this clutters up the views of operators in the OpenShift Console.

Describe the solution you'd like.

If I select attribute A, I would like to limit the display to attributes A1 and A2, and if I select attribute B, I would like to limit the display to just attribute B1.

This would also help with creating a hierarchical grouping of related resources. For example, I could have an API for MQ Queues and group the 4 types of queues MQ supports under this API, but then limit the properties to only those specific properties that apply to the type of queue currently being defined.

I have discussed this with Latrell. He said other people have also requested this type of function, and asked if I could raise a Feature Request for it.

'Resource is being deleted' message is a nuisance

Feature Request

When deleting a resource using a z/OS Cloud Broker sub-operator collection, the 'Resource is being deleted' message gets to be a nuisance and hinders the ability to enter actions against neighbouring listed resources. It would be nice if this could be improved in some way.

I am not sure if this is a zCB or OpenShift issue.

Describe the solution you'd like.

It would be better to show a different state for defined/being-deleted resources, or maybe the flashing message should be suppressed and only made visible when one hovers over the resource. The present design is unusable. And if this is an OpenShift issue, it would be nice if a person who deals with OpenShift issues could raise it with the OpenShift team, please. Thank you.

z/OS Endpoint doc appears to be outdated

What is the link to the document from the "main" branch?

https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.1?topic=interface-importing-operator-collection

Which section(s) is the issue in?

Define a z/OS endpoint

What needs fixing?

Steps 4 and 5 request the user to hit Submit, but there is no Submit button anymore. The action is Create.
I think this whole section needs to be revised to relate to the current panel.

It is possible that there is an internal draft document being worked on that I do not have access to. If that is the case, please close out this issue and accept my apologies.

Additional context

Add ability to specify existing ZosEndpoint during operator creation

Feature Request

Describe the problem you need a feature to resolve.

When using the create_operator playbook, it would be nice to be able to specify an existing ZosEndpoint instead of having to configure a new endpoint every time this playbook is executed

Describe the solution you'd like.

  1. User is prompted to enter either a new or an existing ZosEndpoint name
  2. If an existing endpoint is selected, its name is entered and used during operator configuration, and the host, port, and SSH prompts are skipped

Leading and trailing spaces are not being stripped off from data entered on a create operator command

Bug Report

What did you do?

While creating an operator using the ocsdk-create-operator command, I accidentally added a space at the end of host:

Enter your ZosEndpoint host: winmvs4c.hursley.ibm.com

What did you expect to see?

I expected leading and trailing spaces to be removed so that a valid host name was set.

It is important to fix this as I think customers will definitely be caught out by this.

What did you see instead? Under which circumstances?

This space was accepted as a valid character and so resulted in an invalid host "winmvs4c.hursley.ibm.com " that was unreachable. However, the fact that it was unreachable was only highlighted when I attempted to use an operator API to define a resource (I actually tried to define a MQ local queue).

Collection Version

IBM® z/OS® Cloud Broker 2.2.0

Possible Solution

Add logic to strip off leading and trailing spaces.
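A sketch of the fix, applied right after the prompt (the variable name is a placeholder): Jinja2's trim filter removes leading and trailing whitespace before the value is used.

```yaml
# Normalize the prompted host so a stray trailing space cannot
# produce an unreachable hostname.
- name: Strip leading/trailing whitespace from the prompted host
  ansible.builtin.set_fact:
    zosendpoint_host: "{{ zosendpoint_host | trim }}"
```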

Additional context

I received the following error when I tried to define an MQ local queue:

Failed to connect to the host via ssh: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
debug1: auto-mux: Trying existing master
debug1: Control socket "/tmp/cp/686e1f108f" does not exist
debug2: resolving "winmvs4c.hursley.ibm.com " port 22
ssh: Could not resolve hostname winmvs4c.hursley.ibm.com : Name or service not known

The extra space can be seen in debug2.

Fields are not displayed in OpenShift in the order in which they are defined in the sub operator-config.yml file

Bug Report

In the MQ sub operator-config.yml config file, I specifically defined fields to appear in the following order:

Command Prefix
Queue Name
Queue Description
Queue Type
Name of like queue

because logically, this makes more sense from a usability perspective.

However, in OpenShift, the fields appear in the following order:

Queue Type
Command Prefix
Queue Name
Queue Description
Name of like queue

What did you do?

Defined some fields in an operator configuration file.

What did you expect to see?

The fields to be displayed in the same order in which I defined them in the operator configuration file.

What did you see instead? Under which circumstances?

Fields appearing in the OpenShift panels in a different order.

Collection Version

2.2.1

Possible Solution

Honour the order of fields specified in the operator configuration file so that they are displayed in the same order in OpenShift.

Additional context

I did not check to see if there were any specific options to indicate that the specified order of fields should be honoured. It is possible that the order of fields is being adjusted due to some default option that permits this to take place.

Actual operators installed

What is the link to the document from the "main" branch?

Apologies but I do not have a link because I do not know if we document this anywhere, but I wanted to raise the issue as it surprised me a little.

Which section(s) is the issue in?

Installation of Operators

What needs fixing?

Prior to installing the operator for IBM z/OS Cloud Broker, I had no operators installed.

After installing the operator for IBM z/OS Cloud Broker, I had two operators installed: IBM Cloud Pak foundational services and IBM z/OS Cloud Broker. The Last updated date and time stamps for both operators are identical and are set to the time I installed the IBM z/OS Cloud Broker. As I had not specifically installed IBM Cloud Pak foundational services, it must have been installed as part of the broker install.

If we do not already document this anywhere, then I think we should mention it, as we would potentially be installing additional resources in user environments. Also, are there any implications to us installing the extra resource? What if users already have a different version of IBM Cloud Pak foundational services installed in their namespace and we install a different (maybe newer) version; is it likely to impact their existing environment? Or do we only install IBM Cloud Pak foundational services if it does not already exist in the namespace?

If we already document this somewhere then please close out this issue and accept my apologies.

Additional context

Another thing to note is that if the IBM z/OS Cloud Broker operator is uninstalled, the other operator is left intact. Is this expected behaviour? Is it not wrong to perform only a partial uninstall of resources? Ideally, we should uninstall all resources that we had created.

Error in init_operator.yml when directory name has space in it

Bug Report

What did you do?

Tried to run init_operator.yml in a directory that had a space in its name. I ran the command in a directory called OC Workspace.

What did you expect to see?

Successful creation

What did you see instead? Under which circumstances?

Failure, see attached image.

Collection Version

I used the develop branch, v1.1.0

Possible Solution

Additional context

Not sure if this is an operator_sdk or an ansible-galaxy issue.

Screenshot 2023-11-28 at 2 21 47 PM
