
provider-upjet-gcp's People

Contributors

bradkwadsworth-mw, donovanmuller, dverveiko, erhancagirici, ezgidemirel, hasheddan, jastang, jeanduplessis, jonathano, ldalorion, lsviben, meerkat-b, mergenci, muvaf, mykolalosev, myzataras, nullable-eth, piotr1215, plumbis, pocokwins, redbackthomson, renovate[bot], sergenyalcin, steperchuk, stevendborrelli, svscheg, turkenf, turkenh, ulucinar, ytsarev

provider-upjet-gcp's Issues

Subscription (PubSub). Expiration period (Never expire)

In the configuration of a PubSub Subscription, it appears that the user cannot specify "never expire" in the expiration configuration at the subscription level.

It would be great if I could set ttl to "" to configure the subscription to never expire, as other IaC tools allow.
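For reference, the underlying Terraform resource google_pubsub_subscription models this as an expiration_policy block with a ttl field, where an empty string means "never expire". A hypothetical manifest sketch — the expirationPolicy/ttl field names are assumed from the provider's usual camel-case generation, and the topic reference is illustrative:

```yaml
apiVersion: pubsub.gcp.upbound.io/v1beta1
kind: Subscription
metadata:
  name: my-subscription
spec:
  forProvider:
    topicRef:
      name: my-topic
    expirationPolicy:
      # Hypothetical: an empty TTL would mean the subscription never expires,
      # mirroring Terraform's expiration_policy { ttl = "" }.
      - ttl: ""
```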

Configure datalossprevention (4), dataplex (1), dataproc (10), datastore (1), deploymentmanager (1), dialogflow (4)

External name configuration of 4 resources in the datalossprevention group:

External name configuration of 1 resource in the dataplex group:

External name configuration of 10 resources in the dataproc group:

External name configuration of 1 resource in the datastore group:

External name configuration of 1 resource in the deploymentmanager group:

External name configuration of 4 resources in the dialogflow group:

Configure appengine (2), assuredworkloads (1), bigquery (15), bigtable (10), billing (2)

sql: `User` CRD should fail fast on invalid passwordSecretRef

What happened?

Given a passwordSecretRef referencing a non-existent secret,
When the User reconciles,
It silently sets an empty password without reflecting any error in the CRD status or in the provider logs.

Sample provider log:

1.6715338598788996e+09    DEBUG    provider-gcp    External resource is up to date    {"controller": "managed/sql.gcp.upbound.io/v1beta1, kind=user", "request": "/kuttl-medium-mysql-instance-w6qpx-pr9tl", "uid": "af071a1a-82f4-427d-a337-64fcd44cdcdb", "version": "103110491", "external-name": "app", "requeue-after": 1671534459.878897}

Sample User CR status:

  NAME                                                              READY   SYNCED   EXTERNAL-NAME   AGE
  user.sql.gcp.upbound.io/kuttl-medium-mysql-instance-w6qpx-pr9tl   True    True     app             22h

  status:
    atProvider:
      id: app/%/kuttl-medium-mysql-instance-w6qpx-zv8tz
    conditions:
    - lastTransitionTime: "2022-12-19T12:08:52Z"
      reason: Available
      status: "True"
      type: Ready
    - lastTransitionTime: "2022-12-19T12:07:32Z"
      reason: ReconcileSuccess
      status: "True"
      type: Synced
    - lastTransitionTime: "2022-12-19T12:08:04Z"
      reason: Finished
      status: "True"
      type: AsyncOperation
    - lastTransitionTime: "2022-12-19T12:08:04Z"
      reason: Success
      status: "True"
      type: LastAsyncOperation

How can we reproduce it?

Set an invalid passwordSecretRef pointing to a non-existent secret:

https://github.com/upbound/provider-gcp/blob/e0252bbec3e86726f82df44ffb91df9c6fcb7076/examples/sql/user.yaml#L10-L15
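A minimal sketch of the failing setup, with assumed names (the referenced secret intentionally does not exist):

```yaml
apiVersion: sql.gcp.upbound.io/v1beta1
kind: User
metadata:
  name: example-user
spec:
  forProvider:
    instanceRef:
      name: example-instance      # illustrative instance name
    passwordSecretRef:
      name: does-not-exist        # intentionally missing secret
      namespace: upbound-system
      key: password
```

With a manifest like this, the User still reports Ready/Synced instead of failing fast.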

What environment did it happen in?

  • Universal Crossplane Version: image: crossplane/crossplane:v1.10.1
  • Provider Version: v0.20.0

Upgrade underlying GCP terraform provider

What problem are you facing?

The current Terraform provider is outdated.

How could Official GCP Provider help solve your problem?

Upgrade to the latest Terraform provider for GCP to enable access to more resources and bug fixes

Incorrect error message when creating a cloud resource that already exists

What happened?

An obscure permissions error is returned, when the actual issue is that a resource with the same name already exists.

How can we reproduce it?

Create a storage bucket with a name that already exists (storage bucket names are global, so it might exist in another project). An easy way to check is to try to create a bucket manually with a name that is already taken; GCP will warn that the name is taken.
Crossplane gives an obscure error about permissions (which is incorrect).


Change the bucket name to a unique value and all works as expected.
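An illustrative manifest for the failing case — the bucket name is assumed to collide with an existing, globally unique name:

```yaml
apiVersion: storage.gcp.upbound.io/v1beta1
kind: Bucket
metadata:
  name: taken-name    # any name that is already in use globally
spec:
  forProvider:
    location: US
```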

What environment did it happen in?

  • Crossplane Version: v1.8.0-rc.0.72.gdd23304b

  • Provider Name: provider-gcp

  • Provider Version: xpkg.upbound.io/upbound/provider-gcp:v0.5.0

  • Cloud provider or hardware configuration

  • Kubernetes version (use kubectl version)

kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-03-06T21:32:53Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes distribution (e.g. Tectonic, GKE, OpenShift)
    KIND
  • OS (e.g. from /etc/os-release)
  • Kernel (e.g. uname -a)
Linux pop-os 5.18.10-76051810-generic #202207071639~1657252310~22.04~7d5e891 SMP PREEMPT_DYNAMIC Fri J x86_64 x86_64 x86_64 GNU/Linux

upbound gcp DatabaseInstance Postgres does not emit root password into written secret

We allocated a DatabaseInstance:

apiVersion: sql.gcp.upbound.io/v1beta1
kind: DatabaseInstance
metadata:
  name: mydb
spec:
  deletionPolicy: "Delete"
  forProvider:
    deletionProtection: false
    databaseVersion: POSTGRES_13
    region: us-central1
    settings:
      - ipConfiguration:
          - privateNetwork: projects/.../mynetworkvpc
            requireSsl: true
        tier: db-custom-1-3840
        databaseFlags:
          - name: max_connections
            value: "500"
  writeConnectionSecretToRef:
    name: myrootcreds
    namespace: mynamespace

We verified that it populated the secret with some certs and secrets.
Then we handed it to a ProviderConfig:

apiVersion: postgresql.sql.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: mynewdbinstance
spec:
  sslMode: disable
  credentials:
    source: PostgreSQLConnectionSecret
    connectionSecretRef:
      namespace: mynamespace
      name: myrootcreds

and then tried to create a database schema using the provider-sql with:

apiVersion: postgresql.sql.crossplane.io/v1alpha1
kind: Database
metadata:
  name: mydb
spec:
  providerConfigRef:
    name: mynewdbinstance
  forProvider: {}

it fails with

status:
  conditions:
    - lastTransitionTime: '2022-11-19T00:23:33Z'
      message: >-
        observe failed: cannot select database: pq: Could not detect default
        username. Please provide one explicitly
      reason: ReconcileError
      status: 'False'
      type: Synced

If you look at the secret, it does not contain the password field.

How can we reproduce it?

What environment did it happen in?

Crossplane version: 1.10.1
provider-sql version: crossplane/provider-sql:v0.5.0
upbound gcp version: xpkg.upbound.io/upbound/provider-gcp:v0.18.0

We also tried this with provider-sql version crossplane/provider-sql:v0.6.0, to no avail.

`connectionDetails` is not working

What happened?

I rewrote one of my compositions to use the official GCP provider. Everything seems to be working except connectionDetails.

apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: google-postgresql-official
  labels:
    provider: google-official
    db: postgresql
spec:
  ...
  resources:
  - name: sql
    base:
      apiVersion: sql.gcp.upbound.io/v1beta1
      kind: DatabaseInstance
      spec:
        ...
    connectionDetails:
    - type: FromValue
      name: username
      value: postgres
    - fromConnectionSecretKey: password
    - fromConnectionSecretKey: publicIP
      name: endpoint
    - type: FromValue
      name: port
      value: "5432"

The full source code is in https://github.com/vfarcic/devops-toolkit-crossplane/blob/master/packages/sql/gcp-official.yaml#L171.

The secret does not contain any of the fields specified in connectionDetails. On the other hand, an almost identical Composition based on the "other" GCP provider works (https://github.com/vfarcic/devops-toolkit-crossplane/blob/master/packages/sql/gcp.yaml#L171).

How can we reproduce it?

Full instructions on how to reproduce it are in https://github.com/vfarcic/devops-toolkit-crossplane/blob/master/packages/sql/gcp.md.

After the DB instance is created, check the fields in the secret my-db located in the Namespace infra.

gcp: Support BucketObject 'source' type fields (file path references)

What problem are you facing?

Provider Name: provider-gcp
Provider Version: development

https://github.com/upbound/official-providers/blob/cba2980241265105e38d027b79b7486a69abb31b/provider-gcp/examples-generated/storage/bucketobject.yaml#L13

GCP BucketObject resource contains two fields that represent the actual data to upload into a Bucket:

  • content: String value uploaded as-is. This works as expected
  • source: A reference to a file path, whose contents are uploaded. This is currently not supported

The source field is currently generated as a string. https://github.com/upbound/official-providers/blob/cba2980241265105e38d027b79b7486a69abb31b/provider-gcp/package/crds/storage.gcp.upbound.io_bucketobjects.yaml#L236-L240 However, this value cannot contain an actual file path, as it would be relative to the Terraform workspace in the provider environment. Marking the field as sensitive: true will generate a sourceSecretRef, however, this will also not work, since the provider implementation only accepts a file path and not actual file content encoded into a Secret.

How could Official Providers help solve your problem?

Allow the source field to reference a Secret with encoded file content, and make a file path available to the underlying Terraform provider, which will have that file path set in the source field in main.tf.json. To support an example like the one below:

apiVersion: storage.gcp.upbound.io/v1beta1
kind: BucketObject
metadata:
  labels:
    testing.upbound.io/example-name: bucket-object
  name: bucket-object
spec:
  forProvider:
    bucketSelector: 
      matchLabels:
        testing.upbound.io/example-name: bucket-object
    name: bucket-object
    sourceSecretRef:
      name: bucket-object
      namespace: upbound-system
      key: upbound.txt
    contentType: text/plain

---

apiVersion: storage.gcp.upbound.io/v1beta1
kind: Bucket
metadata:
  labels:
    testing.upbound.io/example-name: bucket-object
  name: bucket-object-34234
spec:
  forProvider:
    location: US
    storageClass: MULTI_REGIONAL
    
---

apiVersion: v1
kind: Secret
metadata:
  labels:
    testing.upbound.io/example-name: bucket-object
  name: bucket-object
  namespace: upbound-system
data:
  upbound.txt: VlhCaWIzVnVaQW89Cg==

`Cluster.container.gcp.upbound.io` can get into an irreconcilable state

What happened?

When updating a Cluster, I observed the following error:

  Warning  CannotObserveExternalResource  49s (x309 over 5h6m)  managed/container.gcp.upbound.io/v1beta1, kind=cluster  cannot run plan: plan failed: 1 error occurred:
           * node_version can only be specified if remove_default_node_pool is not true

This is from https://github.com/hashicorp/terraform-provider-google/blob/89affbee97e69004aeb7d09b535a43bbed2590bb/google/resource_container_cluster.go#L4818, and makes sense given that specifying a node version when you don't want the default node pool to exist would be ineffectual. However, I don't believe that we manually set the node version, but rather that it was late initialized despite us using removeDefaultNodePool: true. Manually removing the node version appeared to be ineffectual as it continued to be re-added.

How can we reproduce it?

I believe just creating a Cluster with removeDefaultNodePool: true could do the trick, but the specific case in which we saw it was updating an existing cluster to enable workload identity.
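A minimal sketch of such a Cluster, with illustrative values; per the report, nodeVersion is left unset yet appears to be late-initialized anyway:

```yaml
apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
metadata:
  name: repro-cluster
spec:
  forProvider:
    location: us-central1-a
    initialNodeCount: 1
    removeDefaultNodePool: true
    # nodeVersion is intentionally omitted; the report suggests it is
    # late-initialized regardless, after which the Terraform plan fails.
```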

What environment did it happen in?

  • Universal Crossplane Version: v1.10.1-up.1
  • Provider Version: v0.20.0

Request for field `gpu_sharing_config` on `google_container_node_pool` resource

What resource do you need?

Terraform Resource Name: google_container_node_pool

Field: node_config.guest_accelerator.gpu_sharing_strategy

Note that the only docs I could find are on google_container_cluster, not google_container_node_pool.
Hopefully that's just a documentation discrepancy and, under the hood, both node_config objects have the same schema, but I'm not sure.
Although for our use case it's more useful on google_container_node_pool, we could find a workaround if it were only available on google_container_cluster.

What is your use case?

We want to have Crossplane manage a node pool with a GPU-sharing strategy.
We are currently managing our GPU node pools outside of Crossplane in order to have this feature. Without it we would significantly increase our cloud costs.
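On google_container_cluster the Terraform schema nests this under node_config.guest_accelerator.gpu_sharing_config. A hypothetical NodePool manifest, assuming the field were generated with the provider's usual camel-case convention (it is not currently available):

```yaml
apiVersion: container.gcp.upbound.io/v1beta1
kind: NodePool
metadata:
  name: gpu-pool
spec:
  forProvider:
    clusterRef:
      name: my-cluster
    nodeConfig:
      - guestAccelerator:
          - type: nvidia-tesla-t4
            count: 1
            # Hypothetical field, mirroring Terraform's gpu_sharing_config block.
            gpuSharingConfig:
              - gpuSharingStrategy: TIME_SHARING
                maxSharedClientsPerGpu: 2
```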

Configure remaining GCP resources without testing

What problem are you facing?

We will have contractors coming on board to increase coverage. Currently, we configure and test the resources, but configuration requires some knowledge of Upjet and Crossplane that may be hard for them to pick up, i.e. we may end up spending the same amount of effort reviewing their configuration as configuring it on our own.

How could Official Providers help solve your problem?

Let's configure the remaining resources (312 CRDs) without testing and leave only the testing part to them. The exact set of API groups this issue targets will be listed below.

Request for `storage_hmac_key` resource

What resource do you need?

Terraform Resource Name: storage_hmac_key

What is your use case?

Lack of storage_hmac_key prevents clients from using GCS buckets as AWS S3 buckets.

storage_hmac_key is required for a GCS bucket to be used by clients through the AWS S3 API instead of the GCS API.

See the following related documentation:
https://cloud.google.com/storage/docs/aws-simple-migration
https://cloud.google.com/storage/docs/authentication/hmackeys
https://cloud.google.com/storage/docs/authentication/managing-hmackeys#terraform

This resource was present in https://doc.crds.dev/github.com/crossplane-contrib/provider-jet-gcp/storage.gcp.jet.crossplane.io/HMACKey/[email protected]


Configure cloudplatform (17), cloudrun (5), cloudscheduler (1), cloudtasks (1), composer (1), compute (5)

Error with "enable_components" on Cluster resource

Hello,

I have the following error:
observe failed: cannot run refresh: refresh failed: Missing required argument: The argument "enable_components" is required, but no definition was found.

This is due to the loggingConfig, which is an array containing enableComponents.
When loggingService is none, loggingConfig can be ignored, but I believe the API (or the Terraform representation) returns something like

loggingConfig: [{
    enableComponents: []
}]

And if I'm not mistaken, Crossplane requires a non-empty value for a property when it's the only one in the object. This means the content returned by the API is not valid (only loggingConfig: [] would be valid).

It happened on GKE, Crossplane 1.10.1, GCP provider 0.19.0.
And:

apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
spec:
  forProvider:
    project: test-error
    location: us-central1-a
    initialNodeCount: 1
    networkingMode: VPC_NATIVE
    networkRef:
      name: base
    subnetworkRef:
      name: cluster-subnet
    loggingService: none
    loggingConfig: []             # with and without this line
    releaseChannel:
      - channel: RAPID
    removeDefaultNodePool: true
    addonsConfig:
      - httpLoadBalancing:
          - disabled: false
    privateClusterConfig:
      - enablePrivateNodes: true
        enablePrivateEndpoint: false
        masterIpv4CidrBlock: 172.16.0.16/28
    ipAllocationPolicy:
      - clusterIpv4CidrBlock: 5.0.0.0/16
        servicesIpv4CidrBlock: 5.1.0.0/16
    defaultSnatStatus:
      - disabled: true
    masterAuthorizedNetworksConfig:
      - cidrBlocks:
          - cidrBlock: 0.0.0.0/0
            displayName: World

Configure _iam_binding and _iam_policy resources

Pending resolution of #14 we are collecting questionable resources here:

External name configuration of 9 resources in the iap group:

(Extracted from upbound/official-providers#281)

External name configuration of 2 resources in the storage group:

(Extracted from upbound/official-providers#444)

External name configuration of 2 resources in the secretmanager group:

External name configuration of 2 resources in the sourcerepo group:

External name configuration of 4 resources in the spanner group:

(Extracted from upbound/official-providers#284)

External name configuration of 2 resources in the notebooks group:

External name configuration of 1 resources in the orgpolicy group:

External name configuration of 4 resources in the privateca group:

(Extracted from upbound/official-providers#283)

External name configuration of 1 resources in the notebooks group:

(Extracted from #17)

External name configuration of resources in the compute group:

(Extracted from #18)

External name configuration of resources in the iap group:

External name configuration of resources in the healthcare group:

Fails to reconcile on openshift: invalid RBAC for providerconfigusages.gcp.upbound.io

What happened?

When using the upbound/provider-gcp provider on OpenShift 4.10, where the OwnerReferencesPermissionEnforcement admission plugin is turned on by default, the MR never reconciles and displays the following error message:

   message: 'connect failed: cannot get terraform setup: cannot track ProviderConfig usage: 
   cannot apply ProviderConfigUsage: cannot create object: providerconfigusages.gcp.upbound.io
      "ad06b191-1412-432b-bc6a-435f43de0228" is forbidden: cannot set blockOwnerDeletion
      if an ownerReference refers to a resource you can''t set finalizers on: , <nil>'
    reason: ReconcileError

This seems quite similar to crossplane/crossplane#3443

How can we reproduce it?

Turn on the OwnerReferencesPermissionEnforcement plugin (see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) in the k8s cluster running upbound/provider-gcp provider integration tests

What environment did it happen in?

  • Opensource Crossplane Version: image: crossplane/crossplane:v1.10.1
  • Provider Version: xpkg.upbound.io/upbound/provider-gcp:v0.20.0
  • OpenShift version 4.10
 kubectl version 
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:26:51Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+012e945", GitCommit:"3c28e7a79b58e78b4c1dc1ab7e5f6c6c2d3aedd3", GitTreeState:"clean", BuildDate:"2022-07-13T08:38:41Z", GoVersion:"go1.17.12", Compiler:"gc", Platform:"linux/amd64"}

upbound sslcertificate resource tries to recreate even with external-name annotation

We found this problem while moving from the Crossplane jet GCP provider to the Upbound GCP provider.

old:

apiVersion: compute.gcp.jet.crossplane.io/v1alpha1
kind: SSLCertificate
metadata:
  name: my-ssl-cert-2
spec:
  forProvider:
    name: my-ssl-cert-2
    certificateSecretRef:
      name: tls-wildcard-crt
      key: tls.crt
      namespace: mynamespace
    privateKeySecretRef:
      name: tls-wildcard-crt
      key: tls.key
      namespace: mynamespace

new:

apiVersion: compute.gcp.upbound.io/v1beta1
kind: SSLCertificate
metadata:
  name: my-ssl-cert-2
  annotations:
    crossplane.io/external-name: my-ssl-cert-2
spec:
  forProvider:
    #name: my-ssl-cert-2
    certificateSecretRef:
      name: tls-wildcard-crt
      key: tls.crt
      namespace: mynamespace
    privateKeySecretRef:
      name: tls-wildcard-crt
      key: tls.key
      namespace: mynamespace

What happened?

We expected that the SSL certificate would not be recreated and that the status data would simply be populated, given that
crossplane.io/external-name: my-ssl-cert-2 was specified in the annotations of the new version.

How can we reproduce it?

The YAML is included above.

What environment did it happen in?

  • Universal Crossplane Version:
    not using uxp

  • Provider Version:
    old:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-jet-gcp
spec:
  package: crossplane/provider-jet-gcp:v0.2.0-preview

new:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-upbound-gcp
spec:
  package: xpkg.upbound.io/upbound/provider-gcp:v0.18.0

gcp.project: `Project` cannot create projects

What happened?

I attempted to create a Project using v0.9.0 of Project.cloudplatform.gcp.upbound.io/v1beta1 and received the following error:

1.662006147674133e+09 DEBUG events Warning {"object": {"kind":"Project","name":"aaron-platform-test-87654","uid":"c150507c-f0e9-44f7-bc07-ca5921bd7096","apiVersion":"cloudplatform.gcp.upbound.io/v1beta1","resourceVersion":"29423"}, "reason": "CannotObserveExternalResource", "message": "cannot run refresh: refresh failed: the user does not have permission to access Project "aaron-platform-test-87654" or it may not exist: "}

Importing existing projects works fine.

How can we reproduce it?

Add a unique hash to the following:

apiVersion: cloudplatform.gcp.upbound.io/v1beta1
kind: Project
metadata:
  name: aaron-platform-test-<HASH>
spec:
  deletionPolicy: Orphan
  forProvider:
    folderId: '761317604662'
    labels:
      owner: squad-platform
      service: testing
    projectId: aaron-platform-test-<HASH>
    skipDelete: true

What environment did it happen in?

  • Crossplane Version: 1.9.0
  • Provider Name: provider-gcp
  • Provider Version: v0.9.0

RegionTargetHTTPProxy - Use SelfLink instead of resource ID

What problem are you facing?

I would like to create an internal load balancer in a Shared VPC environment. As part of this work I have to create a RegionTargetHTTPProxy, which has to select a RegionURLMap. When I select it with the matchLabels selector, the provider replaces the RegionURLMap field with the resource ID, which points to a different project from where it is actually located. For this reason I get an error message that the RegionTargetHTTPProxy cannot be created, because it cannot find the selected resources.

This issue could be fixed by using the self-link instead of the resource ID when resolving the matchLabels selector.

How could Official GCP Provider help solve your problem?

The above-mentioned issue could be solved by editing the resource configurator for google_compute_region_target_http_proxy in the following way:

	p.AddResourceConfigurator("google_compute_region_target_http_proxy", func(r *config.Resource) {
		config.MarkAsRequired(r.TerraformResource, "region")

		r.References["url_map"] = config.Reference{
			Type:      "RegionURLMap",
			Extractor: common.PathSelfLinkExtractor,
		}

	})

gcp.secretVersion: error response from GCP API

What happened?

https://marketplace.upbound.io/providers/upbound/provider-gcp/v0.17.0/resources/secretmanager.gcp.upbound.io/SecretVersion/v1beta1

When secretVersion attempts to create, I receive this error:

Conditions:
    Last Transition Time:  2022-11-09T20:23:19Z
    Reason:                ReconcileSuccess
    Status:                True
    Type:                  Synced
    Last Transition Time:  2022-11-09T20:23:19Z
    Reason:                Creating
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-11-09T20:23:57Z
    Reason:                Finished
    Status:                True
    Type:                  AsyncOperation
    Last Transition Time:  2022-11-09T20:23:20Z
    Message:               apply failed: Error creating SecretVersion: googleapi: got HTTP response code 404 with body: <!DOCTYPE html>
<html lang=en>
  <meta charset=utf-8>
  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
  <title>Error 404 (Not Found)!!1</title>
  <style>
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
  </style>
  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
  <p><b>404.</b> <ins>That's an error.</ins>
  <p>The requested URL <code>/v1/test:addVersion?alt=json</code> was not found on this server.  <ins>That's all we know.</ins>
: 
    Reason:  ApplyFailure
    Status:  False
    Type:    LastAsyncOperation

How can we reproduce it?

Apply the manifests below:

---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    meta.upbound.io/example-id: secretmanager/v1beta1/secretversion
  labels:
    testing.upbound.io/example-name: test
  name: test
  namespace: default
type: Opaque
stringData:
  secret: "test"
---
apiVersion: secretmanager.gcp.upbound.io/v1beta1
kind: SecretVersion
metadata:
  annotations:
    meta.upbound.io/example-id: secretmanager/v1beta1/secretversion
  labels:
    testing.upbound.io/example-name: test
  name: test
spec:
  forProvider:
    secretDataSecretRef:
      key: secret
      name: test
      namespace: default
    secretSelector:
      matchLabels:
        testing.upbound.io/example-name: test
---
apiVersion: secretmanager.gcp.upbound.io/v1beta1
kind: Secret
metadata:
  annotations:
    meta.upbound.io/example-id: secretmanager/v1beta1/secretversion
  labels:
    testing.upbound.io/example-name: test
  name: test
spec:
  forProvider:
    labels:
      environment: dev
    replication:
      - automatic: true

What environment did it happen in?

UXP v1.10.1-up.1
provider-gcp v0.17.0
K8s v1.25.0

cloud sql user manual-intervention "Depends on SQL instance to be successfully deleted"

As https://github.com/upbound/upjet/blob/main/docs/add-new-resource-short.md mentions:

There are cases where the resource requires the user to take an action that is not possible with a Crossplane provider or automated testing tool. In such cases, we should leave the actions to be taken as an annotation on the resource, like the following:

annotations:
  upjet.upbound.io/manual-intervention: "User needs to upload an authorization script and give its path in spec.forProvider.filePath"

An issue in the official-providers repo explaining the situation should be opened, preferably with the example manifests (and any resource configuration) already tried.

What happened?

The CloudSql User can not be deleted unless the DatabaseInstance was deleted first:

https://github.com/upbound/provider-gcp/blob/090f08482cf66c04771bbcf4d983c4138e97d234/examples/sql/user.yaml#L4-L5

status:
  atProvider:
    id: default//kuttl-medium-mysql-instance-w6qpx-zv8tz
  conditions:
  - lastTransitionTime: "2022-12-16T16:27:20Z"
    message: 'observe failed: cannot run plan: plan failed: Instance cannot be destroyed:
      Resource google_sql_user.kuttl-medium-mysql-instance-w6qpx-pr9tl has lifecycle.prevent_destroy
      set, but the plan calls for this resource to be destroyed. To avoid this error
      and continue with the plan, either disable lifecycle.prevent_destroy or reduce
      the scope of the plan using the -target flag.'
    reason: ReconcileError
    status: "False"
    type: Synced
  - lastTransitionTime: "2022-12-16T16:25:34Z"
    reason: Available
    status: "True"
    type: Ready
  - lastTransitionTime: "2022-12-16T16:24:47Z"
    reason: Finished
    status: "True"
    type: AsyncOperation
  - lastTransitionTime: "2022-12-16T16:24:47Z"
    reason: Success
    status: "True"
    type: LastAsyncOperation

How can we reproduce it?

https://github.com/upbound/platform-ref-gcp/blob/ecd7de576e9da710360e17952b2c3cfd1be851c9/package/database/postgres/composition.yaml#L37-L52

Workaround

Imperative workaround:

https://github.com/upbound/platform-ref-gcp/blob/ecd7de576e9da710360e17952b2c3cfd1be851c9/examples/testhooks/delete-sql-user.sh#L4-L7

# Delete the SQL user before deleting the database, so as not to orphan the user object.
# Use explicit ordering of the SQL resources to avoid the database getting stuck.
# Note(turkenh): This is a workaround for the infamous dependency problem during deletion.
${KUBECTL} delete user.sql.gcp.upbound.io --all

Declarative workaround:

  • a Kyverno Policy could enforce a specific deletion order for the MRs,

@bobh66 is preparing a write up for this, see https://crossplane.slack.com/archives/CEG3T90A1/p1671041047666789?thread_ts=1671039517.972969&cid=CEG3T90A1

What environment did it happen in?

  • Universal Crossplane Version:
  • Provider Version:

Moving appengine (3) resources to v1beta1 version

Moving 3 resources to v1beta1 version


google_app_engine_domain_mapping - The resource requires a verified domain.


google_app_engine_flexible_app_version - returns an error:

 Message:               apply failed: Error creating FlexibleAppVersion: googleapi: Error 404: <eye3 title='/CloudGaia.GenerateAccessToken, NOT_FOUND'/> APPLICATION_ERROR;google.iam.credentials.v1/CloudGaia.GenerateAccessToken;Account disabled: 57634375066;AppErrorCode=5;StartTimeMs=1677241408857;unknown;Deadline(sec)=5.0;ResFormat=uncompressed;ServerTimeSec=0.011431517;LogBytes=-1;FailFast;EffSecLevel=privacy_and_integrity;ReqFormat=uncompressed;ReqID=aadd8d4552d84583;GlobalID=0;Server=[2002:a17:903:2cb::]:9887: 

google_app_engine_service_split_traffic - Needs its configuration clarified: returns several errors, one of them:

Error creating ServiceSplitTraffic: googleapi: Error 409: Cannot operate on apps/official-provider-testing/services/liveapp/versions/apps/official-provider-testing/services/liveapp/versions/liveapp-v1 because an operation is already in progress for apps/official-provider-testing/services/liveapp/versions/apps/official-provider-testing/services/liveapp/versions/liveapp-v1 by c85518c3-d54c-4a4d-8186-97cbab0e5239.

servicesplittraffic.txt

user.sql.gcp.upbound.io should populate connection secret

What problem are you facing?

The user/password for a SQL DatabaseInstance is configured by a standalone User.sql resource.

Currently, it does not populate any connection secret.

How could Official GCP Provider help solve your problem?

Populate the sensitive fields into a connection secret so they can be reused within Compositions.

A good example of such Composition can be found upbound/platform-ref-gcp#39 where we have GCP-based Database abstraction implementation.
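The desired behavior could look like the manifest below (a sketch: resource and secret names are placeholders, and today the referenced connection secret stays empty):

```yaml
apiVersion: sql.gcp.upbound.io/v1beta1
kind: User
metadata:
  name: example-user
spec:
  forProvider:
    instanceRef:
      name: example-instance
    # Input password, read from an existing Secret.
    passwordSecretRef:
      name: example-user-password
      namespace: crossplane-system
      key: password
  # The ask: have the provider write username/password details here,
  # so a Composition can propagate them to claim consumers.
  writeConnectionSecretToRef:
    name: example-user-conn
    namespace: crossplane-system
```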

gcp: skip potentially dangerous `kms_crypto/kms_key_ring_iam_policy/binding`

What problem are you facing?

Provider Name: provider-gcp
Provider Version:

Moved from https://github.com/upbound/official-providers/issues/446

The above resources are a powerful mechanism and similarly to iam roles it can lead to cluster-wide outage (see example: https://upboundio.slack.com/archives/C013YNJ423Y/p1659622122579009).

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_kms_crypto_key_iam

Three different resources help you manage your IAM policy for KMS crypto key. Each of these resources serves a different use case:

google_kms_crypto_key_iam_policy: Authoritative. Sets the IAM policy for the crypto key and replaces any existing policy already attached.
google_kms_crypto_key_iam_binding: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the crypto key are preserved.
google_kms_crypto_key_iam_member: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the crypto key are preserved.

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_kms_key_ring_iam

Three different resources help you manage your IAM policy for KMS key ring. Each of these resources serves a different use case:

google_kms_key_ring_iam_policy: Authoritative. Sets the IAM policy for the key ring and replaces any existing policy already attached.
google_kms_key_ring_iam_binding: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the key ring are preserved.
google_kms_key_ring_iam_member: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the key ring are preserved.

More details and discussion about the dangers of using those resources can be found here: #14

How could Official Providers help solve your problem?

The suggestion is not to implement those, but to use google_kms_crypto_key_iam_member and google_kms_key_ring_iam_member exclusively. A similar decision has been made by the platform team regarding the usage of other IAM resources.
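For the KMS case, the member-style resource would be used like this (a sketch; the project, key ring, key, and member values are placeholders):

```yaml
apiVersion: kms.gcp.upbound.io/v1beta1
kind: CryptoKeyIAMMember
metadata:
  name: allow-decrypt
spec:
  forProvider:
    cryptoKeyId: projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
    role: roles/cloudkms.cryptoKeyDecrypter
    # Non-authoritative: only this (role, member) pair is reconciled;
    # other bindings on the key are left untouched.
    member: serviceAccount:app@my-project.iam.gserviceaccount.com
```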

Allow us to set `account_id` on ServiceAccounts

What problem are you facing?

I often run into this issue:

 Events:                                                                                                                                                                                                                                                                                                                                                                  
   Type     Reason                         Age                From                                                               Message                                                                                                                                                                                                                                  
   ----     ------                         ----               ----                                                               -------                                                                                                                                                                                                                                  
   Warning  CannotObserveExternalResource  18s (x7 over 74s)  managed/cloudplatform.gcp.upbound.io/v1beta1, kind=serviceaccount  cannot run refresh: refresh failed: "account_id" ("jonnys-subscription-ngpq4-bf2ws") doesn't match regexp "^[a-z](?:[-a-z0-9]{4,28}[a-z0-9])$":

This happens for ServiceAccount resources: their names can only be so long, but the name is auto-generated and 12 of the 28 available characters are wasted by random IDs.

How could Official GCP Provider help solve your problem?

I would love to be able to override the account_id of a service account.
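For example, if the provider wired the standard Crossplane external-name annotation to account_id (an assumption about how the external-name configuration would work), the short ID could be set explicitly:

```yaml
apiVersion: cloudplatform.gcp.upbound.io/v1beta1
kind: ServiceAccount
metadata:
  name: jonnys-subscription
  annotations:
    # Assumption: the provider maps the external name to account_id,
    # decoupling it from the (long) generated Kubernetes object name.
    crossplane.io/external-name: jonnys-sub
spec:
  forProvider:
    displayName: Jonny's subscription service account
```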

gcp: project/org level permissions required to test resources

What problem are you facing?

The following resources from https://github.com/upbound/official-providers/issues/284 could not be tested due to a lack of project/org-level permissions.

All of the untested resources require org-level permissions to actually create them.

External name configuration of 1 resources in the recaptcha group:

External name configuration of 1 resources in the resourcemanager group:

External name configuration of 2 resources in the scc group:

Both resources require additional permissions at the org level, and the org must be enrolled in the Security Command Center program

  • Organization Admin roles/resourcemanager.organizationAdmin
  • Security Center Admin roles/securitycenter.admin
  • Security Admin roles/iam.securityAdmin
  • Create Service Accounts roles/iam.serviceAccountCreator

How could Official Providers help solve your problem?

Provide a way to safely test resources requiring project/org level resources

SSLPolicy has no equivalent in gcp upbound provider

kind: SSLPolicy
metadata:
  name: my-ssl-policy-main
spec:
  providerConfigRef:
    name: default
  forProvider:
    project: PROJECT_ID_X
    profile: MODERN
    minTlsVersion: "TLS_1_2" 

It was in the Jet GCP provider but is not in the Upbound GCP provider... would appreciate it if you could bring that over!

Add networkSelector for ManagedZone

What problem are you facing?

The ManagedZone does not allow you to select the correct network when using cross-project binding.

How could Official GCP Provider help solve your problem?

Use the self_link instead of the resource ID to select the network. This will make it possible to reference networks in a different project using a Shared VPC.
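A hypothetical shape for such a reference (the selector field name is an assumption; the structure mirrors the Terraform private_visibility_config schema, and the selector would resolve to the network's self_link):

```yaml
apiVersion: dns.gcp.upbound.io/v1beta1
kind: ManagedZone
metadata:
  name: private-zone
spec:
  forProvider:
    dnsName: example.internal.
    visibility: private
    privateVisibilityConfig:
      - networks:
          # Hypothetical selector: resolves to the self_link of a Network
          # living in the Shared VPC host project.
          - networkUrlSelector:
              matchLabels:
                vpc: shared
```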

Drop support for *_iam_binding and *_iam_policy resources in official-providers

What problem are you facing?

Provider Name: provider-gcp
Provider Version: all...

This issue grew out of a discussion in #squad-control-planes. Google has multiple apis which can be used to configure IAM for a resource. These APIs come in three varieties:

  • *_iam_binding
  • *_iam_policy
  • *_iam_member

The behavior of these API calls make them incompatible with one another. Terraform calls this out in a collection of standard Notes which appear on all IAM resource pages:

Note:
google_project_iam_policy cannot be used in conjunction with google_project_iam_binding, google_project_iam_member, or google_project_iam_audit_config or they will fight over what your policy should be.

Note:
google_project_iam_binding resources can be used in conjunction with google_project_iam_member resources only if they do not grant privilege to the same role.

Note:
The underlying API method projects.setIamPolicy has a lot of constraints which are documented here. In addition to these constraints, IAM Conditions cannot be used with Basic Roles such as Owner. Violating these constraints will result in the API returning 400 error code so please review these if you encounter errors with this resource.

Lastly, terraform offers this warning:

Be careful!
You can accidentally lock yourself out of your project using this resource. Deleting a google_project_iam_policy removes access from anyone without organization-level access to the project. Proceed with caution. It's not recommended to use google_project_iam_policy with your provider project to avoid locking yourself out, and it should generally only be used with projects fully managed by Terraform. If you do use this resource, it is recommended to import the policy before applying the change.

If two users were to implement their permissions using different resources -- say one uses a *_iam_member and another uses a *_iam_policy, those resources would "fight" over the ultimate permissions available in the account. It is also possible, with google_project_iam_policy and google_organization_iam_policy to catastrophically leave an account or organization with zero permissions configured (to wipe out all IAM data).

How could Official Providers help solve your problem?

Given that Crossplane Managed Resources are meant to be deployed and reconciled individually, only *_iam_member behaves as a Managed Resource. We should drop support for the other two apis.
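For example, a project-level grant expressed as a standalone managed resource (project, role, and member values are placeholders) reconciles cleanly on its own, without fighting any other IAM resource:

```yaml
apiVersion: cloudplatform.gcp.upbound.io/v1beta1
kind: ProjectIAMMember
metadata:
  name: ci-logging-writer
spec:
  forProvider:
    project: my-project
    role: roles/logging.logWriter
    # Non-authoritative: only this single grant is managed; all other
    # members and roles on the project are preserved.
    member: serviceAccount:ci@my-project.iam.gserviceaccount.com
```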

Configure healthcare (8), iap (5)

gcp: Community Parity

All Access Tokens for Authentication

What problem are you facing?

We are currently in a GCP environment that restricts the creation of service account keys.

How could Official GCP Provider help solve your problem?

Would like to have the ability to use temporary access tokens that can be generated via a CronJob for provider authentication.
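One possible shape (a sketch only: the `AccessToken` credentials source name is an assumption, and the CronJob needs a Kubernetes ServiceAccount with RBAC to manage the Secret):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: refresh-gcp-token
  namespace: crossplane-system
spec:
  schedule: "*/30 * * * *"   # refresh well inside the token lifetime
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: token-refresher   # hypothetical SA with Secret write access
          restartPolicy: OnFailure
          containers:
            - name: refresh
              image: google/cloud-sdk:slim
              command:
                - /bin/sh
                - -c
                - |
                  TOKEN="$(gcloud auth print-access-token)"
                  kubectl -n crossplane-system create secret generic gcp-access-token \
                    --from-literal=token="${TOKEN}" --dry-run=client -o yaml | kubectl apply -f -
---
apiVersion: gcp.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: token-based
spec:
  projectID: my-project
  credentials:
    source: AccessToken        # hypothetical source name for this feature
    secretRef:
      namespace: crossplane-system
      name: gcp-access-token
      key: token
```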

Configure accessapproval (3), accesscontextmanager (9), activedirectory (2), apigee (10), apikeys (1)

ForwardingRule - Use SelfLink instead of resource ID

What problem are you facing?

I would like to create an internal load balancer in a Shared VPC environment. As part of this work I have to create a ForwardingRule, which has to select an Address, a Network, a Subnetwork, and a TargetProxy. When I select these resources with the matchLabels selector, the provider replaces the Address, Network, Subnetwork, and Target fields with the resource IDs, which point to a different project, where the resources are actually located. For this reason I get an error message that the ForwardingRule cannot be created because it cannot find the selected resources.

This issue could be fixed by using the self_link instead of the resource ID when resolving the matchLabels selector.

How could Official GCP Provider help solve your problem?

The above-mentioned issue could be solved by editing the resource configurator for google_compute_forwarding_rule in the following way:

	p.AddResourceConfigurator("google_compute_forwarding_rule", func(r *config.Resource) {
		// Note(donovanmuller): See https://github.com/upbound/upjet/issues/95
		// BackendService is also a valid reference Type
		r.References["backend_service"] = config.Reference{
			Type:      "RegionBackendService",
			Extractor: common.PathSelfLinkExtractor,
		}
		r.References["ip_address"] = config.Reference{
			Type:      "Address",
			Extractor: common.PathSelfLinkExtractor,
		}
		r.References["network"] = config.Reference{
			Type:      "Network",
			Extractor: common.PathSelfLinkExtractor,
		}
		r.References["subnetwork"] = config.Reference{
			Type:      "Subnetwork",
			Extractor: common.PathSelfLinkExtractor,
		}
		r.References["target"] = config.Reference{
			Type:      "RegionTargetHTTPProxy",
			Extractor: common.PathSelfLinkExtractor,
		}
		config.MarkAsRequired(r.TerraformResource, "region")
	})

Configure billing (1), binaryauthorization (5), certificate (2), cloudasset (3), cloudbuild (2), clouddeploy (2), cloudfunctions (4), cloudidentity (2), cloudiot (2), cloudplatform (7)

External name configuration of 1 resources in the billing group:

External name configuration of 5 resources in the binaryauthorization group:

External name configuration of 2 resources in the certificate group:

External name configuration of 3 resources in the cloudasset group:

External name configuration of 2 resources in the cloudbuild group:

External name configuration of 2 resources in the clouddeploy group:

External name configuration of 4 resources in the cloudfunctions group:

External name configuration of 2 resources in the cloudidentity group:

External name configuration of 2 resources in the cloudiot group:

External name configuration of 7 resources in the cloudplatform group:

Configure logging (13), memcache (1), mlengine (1), monitoring (5), network (2), networkmanagement (1), networkservices (3), notebooks (5)

External name configuration of 13 resources in the logging group:

External name configuration of 1 resources in the memcache group:

External name configuration of 1 resources in the mlengine group:

External name configuration of 5 resources in the monitoring group:

External name configuration of 2 resources in the network group:

External name configuration of 1 resources in the networkmanagement group:

External name configuration of 3 resources in the networkservices group:

External name configuration of 5 resources in the notebooks group:

Request for `google_iam_workload_identity_pool*` resources

What resource do you need?

Terraform Resource Name:

What is your use case?

I want to configure GitHub Actions auth to access GCP (a GKE cluster). To do so, I need two things:

  • Add a new GCP service account (available in Upbound GCP provider).
  • Add a new GCP workload identity pool and provider (missing in Upbound GCP provider).

Related docs:

Note: The 2 requested resources are not mentioned in #13
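Hypothetical manifests for the two requested resources, assuming they were generated under an iam group with field names mirroring the Terraform schema (group, version, and kind names are all assumptions):

```yaml
apiVersion: iam.gcp.upbound.io/v1beta1
kind: WorkloadIdentityPool
metadata:
  name: github-pool
spec:
  forProvider:
    workloadIdentityPoolId: github-pool
    project: my-project
---
apiVersion: iam.gcp.upbound.io/v1beta1
kind: WorkloadIdentityPoolProvider
metadata:
  name: github-provider
spec:
  forProvider:
    workloadIdentityPoolId: github-pool
    workloadIdentityPoolProviderId: github
    attributeMapping:
      google.subject: assertion.sub
    oidc:
      - issuerUri: https://token.actions.githubusercontent.com
```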

Address - Use selflink instead of resource ID

What problem are you facing?

I would like to create an internal load balancer in a Shared VPC environment. As part of this work I have to create an Address resource, which has to select a Subnetwork. When I select it with the matchLabels selector, the provider replaces the Subnetwork field with the resource ID, which points to a different project, where the Subnetwork is actually located. For this reason I get an error message that the Address cannot be created because it cannot find the selected Subnetwork.

This issue could be fixed by using the self_link instead of the resource ID when resolving the matchLabels selector.

How could Official GCP Provider help solve your problem?

The above-mentioned issue could be solved by editing the resource configurator for google_compute_address in the following way:

	p.AddResourceConfigurator("google_compute_address", func(r *config.Resource) {
		r.References["network"] = config.Reference{
			Type:      "Network",
			Extractor: common.PathSelfLinkExtractor,
		}
		r.References["subnetwork"] = config.Reference{
			Type:      "Subnetwork",
			Extractor: common.PathSelfLinkExtractor,
		}
		config.MarkAsRequired(r.TerraformResource, "region")
	})

upbound gcp compute Instance .status does not show resulting private ip after provisioning

What happened?

created a vm with : https://marketplace.upbound.io/providers/upbound/provider-gcp/v0.20.0/resources/compute.gcp.upbound.io/Instance/v1beta

using :

apiVersion: compute.gcp.upbound.io/v1beta1
kind: Instance
metadata:
  name: CLOWN_NAME_X
spec:
  forProvider:
    bootDisk:
      - initializeParams:
          #- image: debian-cloud/debian-11
          - image: GCP_IMAGE_ID_X
    machineType: f1-micro
    networkInterface:
      - subnetwork: VPC_REGIONAL_SUBNET_X
        # for a public IP address, include the following section
        #accessConfig:
        #  - networkTier: PREMIUM
    zone: FIRST_ZONE_X
    tags:
      - pfpt-clowns
    metadata:
      enable-oslogin: "true"
    project: PROJECT_ID_X

I expected the status field to have all of the IP information, but it does not show any:

  status:
    atProvider:
      bootDisk:
      - diskEncryptionKeySha256: ""
      cpuPlatform: Intel Broadwell
      currentStatus: RUNNING
      id: projects/mynetwork/zones/europe-west3-a/instances/clown4-private
      instanceId: "123"
      labelFingerprint: ----
      metadataFingerprint: -----
      networkInterface:
      - ipv6AccessType: ""
        name: nic0
      selfLink: https://www.googleapis.com/compute/v1/----
      tagsFingerprint: ----
    conditions:
    - lastTransitionTime: "2022-12-08T23:03:04Z"
      reason: Available
      status: "True"
      type: Ready
    - lastTransitionTime: "2022-12-08T23:02:44Z"
      reason: ReconcileSuccess
      status: "True"
      type: Synced
    - lastTransitionTime: "2022-12-08T23:03:00Z"
      reason: Finished
      status: "True"
      type: AsyncOperation
    - lastTransitionTime: "2022-12-08T23:03:00Z"
      reason: Success
      status: "True"
      type: LastAsyncOperation

Using gcloud describe shows the IP info.
I really don't want to pull in another provider just to read it; that seems overkill and complex.

How can we reproduce it?

Straightforward: apply something similar to the above.

What environment did it happen in?

  • Universal Crossplane Version: 1.10.1
  • Provider Version: xpkg.upbound.io/upbound/provider-gcp:v0.18.0
