
Terraform Exoscale provider

Home Page: https://www.terraform.io/docs/providers/exoscale/

License: Mozilla Public License 2.0


terraform-provider-exoscale's Introduction

Exoscale Terraform Provider

Requirements

Installation

From the Terraform Registry (recommended)

The Exoscale provider is available on the Terraform Registry. To use it, simply execute the terraform init command in a directory containing Terraform configuration files referencing Exoscale provider resources:
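
For example, a minimal configuration declaring the provider could look as follows (the version pin shown here matches the output below):

terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = "0.18.2"
    }
  }
}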

terraform init

Output:

Initializing the backend...

Initializing provider plugins...
- Finding exoscale/exoscale versions matching "0.18.2"...
- Installing exoscale/exoscale v0.18.2...
- Installed exoscale/exoscale v0.18.2 (signed by a HashiCorp partner, key ID 8B58C61D4FFE0C86)

...

From Sources

If you prefer to build the plugin from source, clone the GitHub repository locally and run make build from the root of the source tree. Upon successful compilation, a terraform-provider-exoscale_vdev plugin binary can be found in the bin/ directory. Then follow the Terraform documentation on how to install provider plugins.

Usage

The complete and up-to-date documentation for the Exoscale provider is available on the Terraform Registry. Additionally, you can find information on general Terraform usage on the HashiCorp Terraform website.

Contributing

  • If you think you've found a bug in the code or you have a question regarding the usage of this software, please reach out to us by opening an issue in this GitHub repository.
  • Contributions to this project are welcome: if you want to add a feature or fix a bug, please do so by opening a Pull Request in this GitHub repository. In case of a feature contribution, we kindly ask you to open an issue to discuss it beforehand.
  • The documentation in the docs folder is generated from the descriptions in the source code. If you change the .Description of a resource attribute, for example, you will need to run go generate in the root folder of the repository to update the generated docs. This is necessary to merge any PR, as our CI checks whether the docs are up to date with the sources.
  • Code changes require associated acceptance tests: the complete provider test suite (make test-acc) is executed as part of the project's GitHub repository CI workflow; however, you can execute targeted tests locally before submitting a Pull Request to ensure they pass (e.g. for the exoscale_compute resource only):

make GO_TEST_EXTRA_ARGS="-v -run ^TestAccResourceCompute$" test-acc

  • We are migrating the provider to the new plugin framework. If you'd like to implement new resources, please do so in the framework. The zones data source may provide the necessary inspiration (see the sketch after this list).
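
As a rough illustration of using that data source — a minimal sketch, assuming the exoscale_zones data source exposes the zone names through a zones attribute:

data "exoscale_zones" "example" {}

output "zone_names" {
  value = data.exoscale_zones.example.zones
}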

Development Setup

If you would like to use the Terraform provider you have built and try configurations against it as you develop, we recommend setting up a dev override. Create a file named dev.tfrc in the root directory of this repository:

provider_installation {
  dev_overrides {
    "exoscale/exoscale" = "/path/to/the/repository/root/directory"
  }

  direct {}
}

Now export TF_CLI_CONFIG_FILE=$PWD/dev.tfrc in your shell. From then on, whenever you run a terraform command in this shell and the configuration references the exoscale/exoscale provider, Terraform will use the provider you built locally instead of downloading an official release. For this to work, make sure you run go build after each change so that your changes are compiled into a provider binary in the root directory of the repository.

terraform-provider-exoscale's People

Contributors

7felf, brutasse, cab105, cgriggs01, exo-cedric, falzm, greut, horakg, illi-j, jessicatoscani, kobajagi, llambiel, ltupin, marcaurele, markschmid, maxlareo, mayeu, mcorbin, olmoser, paultyng, philippechepy, pierre-emmanuelj, pst, pyr, sauterp, secustor, ste-m, stffabi, sxzxgx, tgrondier


terraform-provider-exoscale's Issues

Feature Request: IAM support

Hello πŸ‘‹

I was checking the egoscale project and it seems that Exoscale's IAM support has been added there.

Is there an ETA or work in progress to add IAM configuration support to the Terraform provider?
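
For illustration only, here is a hypothetical shape such a resource could take in a configuration (the resource name and attributes below are invented, not an existing API):

resource "exoscale_iam_api_key" "read_only" {
  # hypothetical resource and attributes, for illustration only
  name       = "read-only"
  operations = ["list-instances", "list-zones"] # hypothetical operation names
}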

API error ParamError 431 (CSErrorCode(0) 0) when creating a security group rule

When trying to create the following security group rule:

resource "exoscale_security_group_rule" "nessus_server-egress-tcp" {
  security_group_id = exoscale_security_group.nessus_server.id
  type              = "EGRESS"
  protocol          = "TCP"
  cidr              = "0.0.0.0/0"
  start_port        = 1
  end_port          = 65535
  description       = "allow scanning of all ports"
}

I get the following error:

β•·
β”‚ Error: API error ParamError 431 (CSErrorCode(0) 0): No new rule would be created.
β”‚ 
β”‚   on server_nessus.tf line 19, in resource "exoscale_security_group_rule" "nessus_server-egress-tcp":
β”‚   19: resource "exoscale_security_group_rule" "nessus_server-egress-tcp" {
β”‚ 
β•΅

Do you know why this error is occurring?

Empty `name` attribute not applied on existing `exoscale_domain_record`

If a domain record exists with a non-empty name attribute, it cannot be changed to be empty.

The plan looks like this:

[...]
  # exoscale_domain_record.google-site-verification-1 will be updated in-place
  ~ resource "exoscale_domain_record" "google-site-verification-1" {
        id          = "24385174"
      - name        = "google-site-verification-1" -> null
        # (6 unchanged attributes hidden)
    }
[...]

Applying tells us it succeeded:

[...]
exoscale_domain_record.google-site-verification-1: Modifications complete after 0s [id=24385174]
[...]
Apply complete! Resources: 0 added, 11 changed, 0 destroyed.

But after applying, the record is not changed, and subsequent terraform plan calls keep reporting that the record has to be changed...

Thanks for this provider, and kudos to your support team for always being very responsive and helpful!

Instance pool availability

Hello,

Is there an upcoming release embedding the instance pool resource ? I can see that they are available on the Exoscale portal.

Best regards
Jonathan Rubiero

Changing a VM's state from "Stopped" to "Running" after provisioning it "Stopped" causes an error when trying to create a DNS record for the VM

Given the following Terraform configuration

terraform {
  required_version = ">= 0.14"
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = "~> 0.23"
    }
  }
}

variable "ssh_key" {
  type        = string
  description = "Name of an already present SSH Keypair"
}

variable "node_state" {
  type    = string
  default = "Stopped"
}

variable "domain" {
  type        = string
  description = "Domain name to create in Exoscale DNS"
}

data "exoscale_compute_template" "ubuntu2004" {
  zone = "ch-dk-2"
  name = "Linux Ubuntu 20.04 LTS 64-bit"
}

resource "exoscale_domain" dom {
  name = var.domain
}

resource "exoscale_compute" "vm" {
  display_name       = "test-vm"
  hostname           = "test-vm"
  key_pair           = var.ssh_key
  zone               = "ch-dk-2"
  template_id        = data.exoscale_compute_template.ubuntu2004.id
  size               = "Micro"
  disk_size          = 10
  state              = var.node_state
}

resource "exoscale_domain_record" "vm" {
  domain      = exoscale_domain.dom.id
  name        = "test-vm"
  ttl         = 60
  record_type = "A"
  content     = exoscale_compute.vm.ip_address
}

creating the DNS A record for the VM in "Stopped" state fails with:

$ export EXOSCALE_API_KEY=EXO....
$ export EXOSCALE_API_SECRET=....
$ terraform apply -var ssh_key=simon.gerber@vshn2021 -var domain=c-exo-ocp4-test.appuio-beta.ch

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # exoscale_compute.vm will be created
  + resource "exoscale_compute" "vm" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 10
      + display_name       = "test-vm"
      + gateway            = (known after apply)
      + hostname           = "test-vm"
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "simon.gerber@vshn2021"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = (known after apply)
      + size               = "Micro"
      + state              = "Stopped"
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "985a353f-2e5a-4b67-ab9f-13f12d5a69f7"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-dk-2"
    }

  # exoscale_domain.dom will be created
  + resource "exoscale_domain" "dom" {
      + auto_renew = (known after apply)
      + expires_on = (known after apply)
      + id         = (known after apply)
      + name       = "c-exo-ocp4-test.appuio-beta.ch"
      + state      = (known after apply)
      + token      = (known after apply)
    }

  # exoscale_domain_record.vm will be created
  + resource "exoscale_domain_record" "vm" {
      + content     = (known after apply)
      + domain      = (known after apply)
      + hostname    = (known after apply)
      + id          = (known after apply)
      + name        = "test-vm"
      + prio        = (known after apply)
      + record_type = "A"
      + ttl         = 60
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

exoscale_domain.dom: Creating...
exoscale_compute.vm: Creating...
exoscale_domain.dom: Creation complete after 2s [id=c-exo-ocp4-test.appuio-beta.ch]
exoscale_compute.vm: Creation complete after 5s [id=eabb91be-5388-4bbd-9bcc-3e16d062b428]
exoscale_domain_record.vm: Creating...

Error: dns error: Validation failed (content: can't be blank, is an invalid IPv4 address)

  on main.tf line 21, in resource "exoscale_domain_record" "vm":
  21: resource "exoscale_domain_record" "vm" {


This can be worked around by ensuring the domain record is only created when the requested node state is "Running":

resource "exoscale_domain_record" "vm" {
  count       = var.node_state == "Running" ? 1 : 0
  domain      = exoscale_domain.dom.id
  name        = "test-vm"
  ttl         = 60
  record_type = "A"
  content     = exoscale_compute.vm.ip_address
}

but then a subsequent terraform apply with an explicit request of node_state=Running fails with:

$ terraform apply -var ssh_key=simon.gerber@vshn2021 -var domain=c-exo-ocp4-test.appuio-beta.ch -var node_state=Running
exoscale_domain.dom: Refreshing state... [id=c-exo-ocp4-test.appuio-beta.ch]
exoscale_compute.vm: Refreshing state... [id=eabb91be-5388-4bbd-9bcc-3e16d062b428]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # exoscale_compute.vm will be updated in-place
  ~ resource "exoscale_compute" "vm" {
        id                 = "eabb91be-5388-4bbd-9bcc-3e16d062b428"
        name               = "test-vm"
      ~ state              = "Stopped" -> "Running"
        tags               = {}
        # (16 unchanged attributes hidden)
    }

  # exoscale_domain_record.vm[0] will be created
  + resource "exoscale_domain_record" "vm" {
      + domain      = "c-exo-ocp4-test.appuio-beta.ch"
      + hostname    = (known after apply)
      + id          = (known after apply)
      + name        = "test-vm"
      + prio        = (known after apply)
      + record_type = "A"
      + ttl         = 60
    }

Plan: 1 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

exoscale_compute.vm: Modifying... [id=eabb91be-5388-4bbd-9bcc-3e16d062b428]
exoscale_compute.vm: Still modifying... [id=eabb91be-5388-4bbd-9bcc-3e16d062b428, 10s elapsed]
exoscale_compute.vm: Modifications complete after 19s [id=eabb91be-5388-4bbd-9bcc-3e16d062b428]

Error: Provider produced inconsistent final plan

When expanding the plan for exoscale_domain_record.vm[0] to include new values
learned so far during apply, provider
"registry.terraform.io/exoscale/exoscale" produced an invalid new value for
.content: was cty.StringVal(""), but now cty.StringVal("159.100.253.137").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Note that the field content for the exoscale_domain_record resource is missing in the preview of the changes and that the provider doesn't signal that the exoscale_compute resource's field ip_address will change.

As far as I can tell, the root cause is that the provider doesn't correctly signal that an exoscale_compute resource's ip_address attribute may change when the resource is updated in place.

After upgrading from 0.30.1 to 0.31.1, resources are marked for destruction and re-creation

It seems that the main reason is that some properties were converted to all lowercase.

I think only the exoscale_security_group and exoscale_security_group_rules resources are affected.

Here is a small excerpt from a terraform apply.

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":
[...]
  # exoscale_security_group.bastion has changed
  ~ resource "exoscale_security_group" "bastion" {
        id          = "e14c5d37-7f4c-4fdd-bde5-a611c260dc7f"
      ~ name        = "Bastion" -> "bastion"
        # (1 unchanged attribute hidden)
    }
[...]

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:
[...]
  # exoscale_security_group.bastion must be replaced
-/+ resource "exoscale_security_group" "bastion" {
      ~ id          = "e14c5d37-7f4c-4fdd-bde5-a611c260dc7f" -> (known after apply)
      ~ name        = "bastion" -> "Bastion" # forces replacement
        # (1 unchanged attribute hidden)
    }
[...]

Plan: 36 to add, 1 to change, 36 to destroy.

I'm a bit reluctant to blindly destroy and recreate every single security group I've created, as I didn't change anything in my Terraform config.

I can provide the full log by email or dm on request.

Feature Request: data source existing zones

Summary:
To be able to browse the entire infrastructure, we need a data source with the list of zones.

Example use case:

  1. A project instantiates compute instances in different zones, with standardized name and labels
  2. The holder of the DNS domain keeps control of DNS configuration in the top most organization and does not delegate domain management
  3. To generate the DNS records, we need to query all machines having the given labels with the datasource compute_instance, which requires a zone name
  4. By iterating on all zones, the configuration that manages the DNS zone doesn't need to be updated to track architecture changes (see the sketch below)
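
As a sketch of what this would enable — assuming a zones data source like the one mentioned in the Contributing section above, exposing a zones list, paired with a per-zone instance listing data source (exoscale_compute_instance_list is used here for illustration):

data "exoscale_zones" "all" {}

data "exoscale_compute_instance_list" "per_zone" {
  # one query per zone, without hard-coding zone names
  for_each = toset(data.exoscale_zones.all.zones)
  zone     = each.value
}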

resourceSSHKeypairCreate does not return the privatekey value

Hello πŸ‘‹

I'm currently using terraform with the Exoscale provider and I noticed an error in the resourceSSHKeypairCreate function.

According to Exoscale's API documentation, the request should return the private key.

But if we create an exoscale_ssh_keypair resource like this:

resource "exoscale_ssh_keypair" "admin" {
  name       = "mykeypairname"
}
output "key_pair" {
  value = exoscale_ssh_keypair.admin
}

The output gives us:

Outputs:

ssh_key_private = {
  "fingerprint" = <some fingerprint>
  "id" = "mykeypairname"
  "name" = "mykeypairname"
}

But if we log the Terraform execution, we can see that the response to the create request contains our privateKey:

 2020/04/09 13:23:39 [DEBUG] exoscale_ssh_keypair (ID = <new resource>): beginning create
 2020/04/09 13:23:39 [DEBUG] exoscale API Request Details:
 ---[ REQUEST ]---------------------------------------
 GET /v1[...]
 Host: api.exoscale.com
 User-Agent: Exoscale-Terraform-Provider/0.16.1 egoscale/0.22.0 (go1.12.6; linux/amd64)
 Accept-Encoding: gzip
 
 
 -----------------------------------------------------
 2020/04/09 13:23:39 [DEBUG] exoscale API Response Details:
 ---[ RESPONSE ]--------------------------------------
 HTTP/2.0 200 OK
 Content-Type: application/json;charset=utf-8
 Date: Thu, 09 Apr 2020 11:23:39 GMT
 Referrer-Policy: no-referrer-when-downgrade
 Strict-Transport-Security: max-age=15724800; includeSubDomains
 Vary: Accept-Encoding, User-Agent
 X-Content-Type-Options: nosniff
 X-Request-Id: <requestId>
 X-Xss-Protection: 1; mode=block
 
 {
  "createsshkeypairresponse": {
   "keypair": {
    "privatekey": "THE PRIVATE KEY WAS HERE",
    "name": "mykeypairname",
    "fingerprint": <some fingerprint>
   }
  }
 }
 -----------------------------------------------------

But in the next log output, we can see that a new request has been made:

2020/04/09 13:23:40 [DEBUG] exoscale_ssh_keypair (ID = mykeypairname ): beginning read
2020/04/09 13:23:40 [DEBUG] exoscale API Request Details:
 ---[ REQUEST ]---------------------------------------
 GET /v1?[...]
 Host: api.exoscale.com
 User-Agent: Exoscale-Terraform-Provider/0.16.1 egoscale/0.22.0 (go1.12.6; linux/amd64)
 Accept-Encoding: gzip
 
 
 -----------------------------------------------------
2020/04/09 13:23:40 [DEBUG] exoscale API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 200 OK
Content-Type: application/json;charset=utf-8
Date: Thu, 09 Apr 2020 11:23:40 GMT
Referrer-Policy: no-referrer-when-downgrade
Strict-Transport-Security: max-age=15724800; includeSubDomains
Vary: Accept-Encoding, User-Agent
X-Content-Type-Options: nosniff
X-Request-Id: <requestId>
X-Xss-Protection: 1; mode=block

{
 "listsshkeypairsresponse": {
  "count": 1,
  "sshkeypair": [
   {
    "name": "mykeypairname",
    "fingerprint": <some fingerprint>
   }
  ]
  }
 }
 -----------------------------------------------------

So I looked into the code and saw that resourceSSHKeypairCreate() returns resourceSSHKeypairRead(d, meta).

But it should call resourceSSHKeypairApply() in order to capture the create response.

In the Terraform documentation about ssh_key_pair for Exoscale, it says:

public_key - A SSH public key that will be copied into the instances at first boot. If not provided, a SSH keypair is generated and saved locally (see the private_key attribute).

Does it really save the private key somewhere?

What do you think about this issue ?

Doc links not working

After a check, it looks like these links are not working anymore for version 0.31.2 of the exoscale provider.

[d-anti_affinity_group]: ../d/anti_affinity_group.html
[d-compute_template]: ../d/compute_template.html
[d-elastic_ip]: ../d/elastic_ip.html
[d-private_network]: ../d/private_network.html
[d-security_group]: ../d/security_group.html
[r-anti_affinity_group]: anti_affinity_group.html
[r-elastic_ip]: ../r/elastic_ip.html
[r-private_network]: ../r/private_network.html

I can't figure out where the problem lies, sorry for not contributing more on this one.

Apple M1 ARM builds missing for v0.29.0

β•·
β”‚ Error: Incompatible provider version
β”‚
β”‚ Provider registry.terraform.io/exoscale/exoscale v0.29.0 does not have a package available for your current platform, darwin_arm64.
β”‚
β”‚ Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions of this
β”‚ provider may have different platforms supported.

Darwin ARM builds are available for v0.28.0

Issue with exoscale_network resource creation

Hi,

I tried to create a private network with the code below.

resource "exoscale_network" "okd" {
  zone             = var.zone  
  name             = "okd"  
  network_offering = "PrivNet"  
  start_ip         = "192.168.0.20"  
  end_ip           = "192.168.0.254"  
  netmask          = "255.255.255.0"  
}

and I obtained this error.

Error: API error ParamError 431 (ServerAPIException 9999): Wrong input parameters:

  • The network range is invalid.

on main.tf line 19, in resource "exoscale_network" "okd":
19: resource "exoscale_network" "okd" {

It works with the following, after changing the end_ip address:

resource "exoscale_network" "okd" {
  zone             = var.zone  
  name             = "okd"  
  network_offering = "PrivNet"  
  start_ip         = "192.168.0.20"  
  end_ip           = "192.168.0.253"  
  netmask          = "255.255.255.0"  
}

Feature Request: Separate TF Resource for managing databases of a DBaaS instance

Hello,

It would be nice to have different resources for database instances and the databases themselves.
This would allow us to also manage the databases directly with Terraform.

Example:

locals {
  zone = "ch-dk-2"
}

resource "exoscale_database_instance" "pg_prod" {
  zone = local.zone
  name = "pg-prod"
  type = "pg"
  plan = "startup-4"

  maintenance_dow  = "sunday"
  maintenance_time = "23:00:00"

  termination_protection = true

  pg {
    version = "13"
    backup_schedule = "04:00"
  }
}

resource "exoscale_database" "my_database" {
    database_instance = exoscale_database_instance.pg_prod
    database_name = "<DATABASE_NAME>"
}

Links from other projects:

provider panic: runtime error: invalid memory address or nil pointer dereference

Hello guys,

We are receiving the following error using the latest version of the provider while creating a number of sks_nodepool resources (12 nodepools). Tested on macOS and Linux with the same result.

Stack trace from the terraform-provider-exoscale_v0.31.2 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x13cf625]

goroutine 330 [running]:
github.com/exoscale/egoscale/v2.sksNodepoolFromAPI(0xc0008d8f28)
        github.com/exoscale/[email protected]/v2/sks_nodepool.go:125 +0x2a5
github.com/exoscale/egoscale/v2.sksClusterFromAPI.func3(0xc0001a0cb0)
        github.com/exoscale/[email protected]/v2/sks_cluster.go:116 +0x145
github.com/exoscale/egoscale/v2.sksClusterFromAPI(0xc0001a0cb0, {0xc000458cb0, 0x8})
        github.com/exoscale/[email protected]/v2/sks_cluster.go:120 +0xa8
github.com/exoscale/egoscale/v2.(*Client).GetSKSCluster(0xc0003edda0, {0x1b42d20, 0xc0003b40c0}, {0xc000458cb0, 0x8}, {0xc00052c8a0, 0x24})
        github.com/exoscale/[email protected]/v2/sks_cluster.go:231 +0xaa
github.com/exoscale/terraform-provider-exoscale/exoscale.resourceSKSNodepoolRead({0x1b42d20, 0xc0002e9620}, 0xc000183380, {0x19cce00, 0xc0005e20e0})
        github.com/exoscale/terraform-provider-exoscale/exoscale/resource_exoscale_sks_nodepool.go:313 +0x3f5
github.com/exoscale/terraform-provider-exoscale/exoscale.resourceSKSNodepoolCreate({0x1b42ce8, 0xc0006b4000}, 0xc0008d96c0, {0x19cce00, 0xc0005e20e0})
        github.com/exoscale/terraform-provider-exoscale/exoscale/resource_exoscale_sks_nodepool.go:299 +0x1065
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc000436b60, {0x1b42c78, 0xc000580340}, 0x2, {0x19cce00, 0xc0005e20e0})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000436b60, {0x1b42c78, 0xc000580340}, 0xc0000fb790, 0xc000651d80, {0x19cce00, 0xc0005e20e0})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x871
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0001234e8, {0x1b42c78, 0xc000580340}, 0xc00053a000)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xd8a
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000183580, {0x1b42d20, 0xc000881170}, 0xc000585810)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x30e
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x19d0820, 0xc000183580}, {0x1b42d20, 0xc000881170}, 0xc000471620, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002f3dc0, {0x1b51ca8, 0xc0002a0780}, 0xc000904600, 0xc000452a20, 0x213b980, 0x0)
        google.golang.org/[email protected]/server.go:1194 +0xc8f
google.golang.org/grpc.(*Server).handleStream(0xc0002f3dc0, {0x1b51ca8, 0xc0002a0780}, 0xc000904600, 0x0)
        google.golang.org/[email protected]/server.go:1517 +0xa2a
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/[email protected]/server.go:859 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:857 +0x294

Error: The terraform-provider-exoscale_v0.31.2 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Terraform version: v1.1.4 on darwin_amd64
Provider version: 0.31.2

I've tested the following resources:

exoscale_anti_affinity_group
exoscale_private_network
exoscale_security_group_rule
exoscale_security_group
exoscale_sks_cluster
exoscale_sks_nodepool

How to add labels to a sks nodepool

When describing an exoscale_sks_nodepool resource, I would like to have a label argument.

This is already supported by exo sks create using the --nodepool-label argument.

Any plans to make it available in the terraform provider?
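
For reference, the desired configuration could look something like this sketch, where the labels argument is the requested (not yet existing) feature and the surrounding attributes are illustrative:

resource "exoscale_sks_nodepool" "workers" {
  zone          = "ch-gva-2"
  cluster_id    = exoscale_sks_cluster.example.id # assumes a cluster defined elsewhere
  name          = "workers"
  instance_type = "standard.medium"
  size          = 3

  # requested feature, mirroring `exo sks create --nodepool-label`
  labels = {
    role = "worker"
  }
}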

Unable to provision a SOS bucket via AWS Plugin

It is currently impossible for me to provision an SOS bucket via Terraform (with the AWS provider), although until very recently we were able to do so.

We use Terraform v0.12.17 and version 2.41 of the AWS plugin (we have the same problem with version 2.20, which used to work until now).

Here is the code we use:

provider "aws" {
  version = "~> 2.41"

  access_key = var.api_key
  secret_key = var.api_secret

  region = var.zone
  endpoints {
    s3 = "https://sos-${var.zone}.exo.io"
  }

  # Deactivate the AWS specific behaviours
  #
  # https://www.terraform.io/docs/backends/types/s3.html#skip_credentials_validation
  skip_credentials_validation = true
  skip_metadata_api_check = true
  skip_requesting_account_id = true
  skip_get_ec2_platforms = true
  skip_region_validation = true
}

resource "aws_s3_bucket" "bucket" {
  bucket = var.name

  lifecycle {
    ignore_changes = [object_lock_configuration]
  }
}

Here is the terraform output:

> terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.state-backups.aws_s3_bucket.bucket will be created
  + resource "aws_s3_bucket" "bucket" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "a12s-production-platform-state-backups"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.state-backups.aws_s3_bucket.bucket: Creating...
module.state-backups.aws_s3_bucket.bucket: Still creating... [10s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [20s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [30s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [40s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [50s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m0s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m10s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m20s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m30s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m40s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [1m50s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [2m0s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [2m10s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [2m20s elapsed]
module.state-backups.aws_s3_bucket.bucket: Still creating... [2m30s elapsed]

Error: error getting S3 Bucket website configuration: timeout while waiting for state to become 'success' (timeout: 2m0s)

  on modules/exo-s3/main.tf line 22, in resource "aws_s3_bucket" "bucket":
  22: resource "aws_s3_bucket" "bucket" {

case sensitive instance state

The documentation says state = "Running" is the correct way of setting the instance state. However, this leads to constant updates of the instance state, because the API returns the state in lower case.
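
Until the comparison is made case-insensitive, one possible workaround is to let Terraform ignore the attribute, at the cost of no longer reconciling state changes — a minimal sketch (template ID reused from another issue in this tracker):

resource "exoscale_compute" "example" {
  zone         = "ch-gva-2"
  display_name = "example"
  template_id  = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
  size         = "Tiny"
  disk_size    = 20
  state        = "Running"

  lifecycle {
    # prevent perpetual diffs caused by the lower-case state returned by the API
    ignore_changes = [state]
  }
}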

apply fails when instance pool is being destroyed

Given that Terraform has created an instance pool and that instance pool is then manually deleted, terraform apply will result in the following error:

data.exoscale_compute_template.ubuntu: Refreshing state... [id=ea595a7f-6a76-4539-9b8e-a0ca78488900]
exoscale_instance_pool.test: Refreshing state... [id=1dd6e335-13da-c51b-2b3c-08a47b66b094]
exoscale_instance_pool.test: Modifying... [id=1dd6e335-13da-c51b-2b3c-08a47b66b094]

Error: API error ErrorCode(409) 409 (ServerAPIException 9999): The instance pool is already being destroyed.

  on pool.tf line 6, in resource "exoscale_instance_pool" "test":
   6: resource "exoscale_instance_pool" "test" {

Failure to destroy on NLB manual removal

Abstract

When creating an NLB, then manually removing it, and running terraform destroy, the operation fails with Error: resource not found.

Steps to reproduce

  1. Create the following Terraform config, run terraform init, and terraform apply
variable "exoscale_key" {}
variable "exoscale_secret" {}

terraform {
  required_providers {
    exoscale = {
      source  = "terraform-providers/exoscale"
    }
  }
}

provider "exoscale" {
  key = var.exoscale_key
  secret = var.exoscale_secret
}

resource "exoscale_nlb" "website" {
  zone = "at-vie-1"
  name = "test"
}
  2. Delete the created NLB manually
  3. Run terraform destroy

What happens?

exoscale_nlb.website: Refreshing state... [id=447c880b-0e2b-42eb-aa06-b3daf7d1a9dd]

Error: resource not found

terraform version

Terraform v0.13.3
+ provider registry.terraform.io/terraform-providers/exoscale v0.19.0

Using IDs for template in the exoscale_compute resource triggers resource re-creation on terraform apply

Hi,

Considering this code:

resource "exoscale_compute" "k8s-master" {
  display_name = "ex-master-${terraform.workspace}"
  template = "${module.custom_templates.ubuntu_k8s.id}"
...

This will always result in resource re-creation when calling terraform apply:

-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.compute_instances.exoscale_compute.k8s-master must be replaced
-/+ resource "exoscale_compute" "k8s-master" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 10
        display_name       = "ex-master-default"
      ~ gateway            = "159.100.252.1" -> (known after apply)
      ~ id                 = "d9685101-c478-4d5f-a656-8eee851772fe" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "159.100.252.162" -> (known after apply)
        key_pair           = "cluster"
      ~ name               = "ex-master-default" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "edf52b3d-0d97-4f7f-a08d-fdf4261989b9",
          - "fb28883e-2862-4308-9394-3aa4ad329570",
        ] -> (known after apply)
        security_groups    = [
            "default",
            "k8s-master",
        ]
        size               = "Small"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Ubuntu 18.04 LTS Kubernetes Ready" -> "64e5af24-3859-4ff9-8f04-ed5b9331e2d2" # forces replacement
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-dk-2"

This issue prevents me from using my own custom template hosted at Exoscale.

Do you know of any workaround? Using the template name will not work, as described in the documentation (using a name only permits the usage of "featured" templates).

Best regards
J.R
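
One possible direction, sketched here under the assumption that the exoscale_compute_template data source accepts a filter argument for selecting non-featured templates, is to resolve the custom template by name and feed its ID into template_id:

data "exoscale_compute_template" "custom" {
  zone   = "ch-dk-2"
  name   = "my-custom-template" # hypothetical custom template name
  filter = "mine"               # assumption: restricts the lookup to templates you own
}

resource "exoscale_compute" "k8s-master" {
  zone         = "ch-dk-2"
  display_name = "ex-master"
  template_id  = data.exoscale_compute_template.custom.id
  size         = "Small"
  disk_size    = 10
}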

terraform import exoscale_compute fails

I am trying to import an existing compute instance that was created with Terraform before, but the state has been lost.

The resources for the compute instance, the IP, and the secondary IP look like this:
resource "exoscale_ipaddress" "xxx-production" {
  zone                     = "${local.primary_zone}"
  healthcheck_mode         = "http"
  healthcheck_port         = 80
  healthcheck_path         = "/status"
  healthcheck_interval     = 15
  healthcheck_timeout      = 5
  healthcheck_strikes_ok   = 2
  healthcheck_strikes_fail = 4

  lifecycle {
    prevent_destroy = true
  }
}

resource "exoscale_secondary_ipaddress" "xxx-production01" {
  compute_id = "${exoscale_compute.xxx-web-production01.id}"
  ip_address = "${exoscale_ipaddress.xxx-production.ip_address}"
}

resource "exoscale_compute" "xxx-web-production01" {
  zone         = "${local.primary_zone}"
  display_name = "xxx-web-production01"
  template_id  = "${data.exoscale_compute_template.ubuntu.id}"

  size      = "Micro"
  disk_size = 10
  key_pair  = "${local.ssh_key_pair_name}"
  state     = "Running"

  affinity_group_ids = ["${exoscale_affinity.xxx-web-production.id}"]
  security_group_ids = ["${exoscale_security_group.xxx-web.id}"]

  ip6      = false
  keyboard = "de-ch"

  tags = {
    production = "true"
  }

  timeouts {
    create = "60m"
    delete = "2h"
  }
}

I tried it with the ID of the instance (some data is redacted), as described in the blog post:

$ terraform import exoscale_compute.xxx-web-production01 xxxxxxxx-db9f-4b2f-b70b-f8defb649411
exoscale_compute.xxx-web-production01: Importing from ID "xxxxxxxx-db9f-4b2f-b70b-f8defb649411"...

Error: compute_id: '' expected type 'string', got unconvertible type 'egoscale.UUID'

Looking at the code, it seems to be related to the secondary IP of the compute instance:
https://github.com/terraform-providers/terraform-provider-exoscale/blob/ef99805cf7bcb348d56e052c1cf652a8cbaa30bb/exoscale/resource_exoscale_compute.go#L847


Using the name instead of the ID gets further, but fails too.

$ terraform import exoscale_compute.xxx-web-production01 xxx-web-production01
exoscale_compute.xxx-web-production01: Importing from ID "xxx-web-production01"...
exoscale_compute.xxx-web-production01: Import prepared!
  Prepared exoscale_compute for import
  Prepared exoscale_secondary_ipaddress for import
exoscale_secondary_ipaddress.xxx-web-production01: Refreshing state... [id=xxxxxxxx-cc96-4998-b67b-d1591a351fc6_159.100.xx.xx]
exoscale_compute.xxx-web-production01: Refreshing state... [id=xxx-web-production01]

Error: uuid: incorrect UUID length: xxx-web-production01

exoscale_domain_record update with name="@"

When updating an exoscale_domain_record with name = "@", terraform always tries to update the record, even though nothing has changed; see below:

  # module.exoscale-dns.exoscale_domain_record.default-domain["1"] will be updated in-place
  ~ resource "exoscale_domain_record" "default-domain" {
        id          = "XXXX"
      + name        = "@"
        # (6 unchanged attributes hidden)
    }

How to enable IPv6 on instances of pool ?

I need to enable IPv6 on instances created with the exoscale_instance_pool resource.
It works with exoscale_compute, but I don't know how to do it with exoscale_instance_pool.

Thanks in advance
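
For reference, the desired behaviour would look something like this sketch, where the ipv6 flag on the pool is the requested (not yet existing) feature, mirroring the flag on exoscale_compute:

resource "exoscale_instance_pool" "pool" {
  zone             = "at-vie-1"
  name             = "pool"
  service_offering = "micro"
  size             = 2
  disk_size        = 10
  template_id      = data.exoscale_compute_template.ubuntu.id # assumes a template lookup as in other examples here

  # requested feature: enable IPv6 on pool members
  ipv6 = true
}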

Failure to destroy on privnet manual removal

Abstract

When creating a private network, then manually removing it, and running terraform destroy, the destroy fails with Error: no network found for ID b995fbe3-494c-742a-09a0-2dbe247b0c2a.

Steps to reproduce

  1. Create the following Terraform config, run terraform init, and terraform apply
variable "exoscale_key" {}
variable "exoscale_secret" {}

terraform {
  required_providers {
    exoscale = {
      source  = "terraform-providers/exoscale"
    }
  }
}

provider "exoscale" {
  key = var.exoscale_key
  secret = var.exoscale_secret
}

resource "exoscale_network" "test" {
  name = "test"
  zone = "at-vie-1"
}
  2. Delete the created network manually
  3. Run terraform destroy

What happens?

exoscale_network.test: Refreshing state... [id=b995fbe3-494c-742a-09a0-2dbe247b0c2a]

Error: no network found for ID b995fbe3-494c-742a-09a0-2dbe247b0c2a

terraform version

Terraform v0.13.3
+ provider registry.terraform.io/terraform-providers/exoscale v0.19.0

case sensitive security group rule handling

When creating an exoscale_security_group_rule with a lower-case protocol (e.g. tcp), a subsequent plan will result in an unnecessary replacement:

  # exoscale_security_group_rule.ssh[0] must be replaced
-/+ resource "exoscale_security_group_rule" "ssh" {
        cidr                = "194.166.232.118/32"
        description         = "Managed by Terraform!"
        end_port            = 22
      - icmp_code           = 0 -> null
      - icmp_type           = 0 -> null
      ~ id                  = "a1b9b61c-a8e2-418a-a41a-eb473a8ca9a1" -> (known after apply)
      ~ protocol            = "TCP" -> "tcp" # forces replacement
      ~ security_group      = "autoscaling" -> (known after apply)
        security_group_id   = "81b815ed-5d90-4d6c-a082-d5c548583ca1"
        start_port          = 22
        type                = "INGRESS"
      + user_security_group = (known after apply)
    }

All nodes removed when removing one node pool

I created a cluster with two node pools. Then I removed one node pool, but all the nodes were removed from the cluster.

Tainting the node pool and applying again restored new nodes to the cluster.

case sensitive instance size

I have this Terraform configuration:

resource "exoscale_compute" "test" {

  display_name = "test-bug-nic"
  template_id  = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"

  zone = "ch-gva-2"

  size            = "tiny"
  disk_size       = 20
  key_pair        = "perso"
  security_groups = ["default"]
}

Terraform apply:

Terraform will perform the following actions:

  # exoscale_compute.test will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-bug-nic"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "perso"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

If I run terraform plan again:

Terraform will perform the following actions:

  # exoscale_compute.test will be updated in-place
  ~ resource "exoscale_compute" "test" {
        affinity_group_ids = []
        affinity_groups    = []
        disk_size          = 20
        display_name       = "test-bug-nic"
        gateway            = "185.19.28.1"
        id                 = "ad76a840-d407-4e76-bead-ddda00db347d"
        ip4                = true
        ip6                = false
        ip_address         = "185.19.31.122"
        key_pair           = "perso"
        name               = "test-bug-nic"
        password           = (sensitive value)
        security_group_ids = [
            "f5332702-1fdb-42dd-a68c-fa1008911650",
        ]
        security_groups    = [
            "default",
        ]
      ~ size               = "Tiny" -> "tiny"
        state              = "Running"
        tags               = {}
        template           = "Linux Debian 10 (Buster) 64-bit"
        template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
        user_data_base64   = true
        username           = "root"
        zone               = "ch-gva-2"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

As you can see, configuring the instance size with a lowercase offering name will cause Terraform to always want to update the size. It would be nice to have the size be case-insensitive if possible.

Apply/Destroy fails on exoscale_nlb_service if NLB has been manually destroyed

Apply fails on the NLB or the NLB service resource if the NLB has been manually deleted.

Steps to reproduce:

  1. Apply the following code:
variable "exoscale_key" {}
variable "exoscale_secret" {}

terraform {
  required_providers {
    exoscale = {
      source  = "terraform-providers/exoscale"
    }
  }
}

provider "exoscale" {
  key = var.exoscale_key
  secret = var.exoscale_secret
}

data "exoscale_compute_template" "ubuntu" {
  zone = "at-vie-1"
  name = "Linux Ubuntu 20.04 LTS 64-bit"
}

resource "exoscale_instance_pool" "test" {
  name = "autoscaling"
  service_offering = "micro"
  size = 1
  disk_size = 10
  template_id = data.exoscale_compute_template.ubuntu.id
  zone = "at-vie-1"

  user_data = <<EOF
#!/bin/bash

apt update
apt install -y nginx
EOF
}

resource "exoscale_nlb" "test" {
  zone = "at-vie-1"
  name = "test"
}

resource "exoscale_nlb_service" "test" {
  instance_pool_id = exoscale_instance_pool.test.id
  name = "HTTP"
  nlb_id = exoscale_nlb.test.id
  port = 80
  target_port = 80
  zone = "at-vie-1"

  healthcheck {
    port = 8080
    mode = "http"
    uri = "/"
    interval = 5
    timeout = 3
    retries = 1
  }
}
  2. Delete the NLB
  3. Run apply again or destroy

Result:

data.exoscale_compute_template.ubuntu: Refreshing state... [id=c19542b7-d269-4bd4-bf7c-2cae36d066d3]
exoscale_nlb.test: Refreshing state... [id=0c02dc78-e390-44e8-8760-6f89e83fb70c]
exoscale_instance_pool.test: Refreshing state... [id=e60e1237-ba46-babf-5405-e4f31684e3a7]

Error: resource not found

Cannot create ESP security group rule using exoscale_security_group_rules resource

Exoscale Terraform Provider Version: 0.31.1

Using the exoscale_security_group_rules resource to generate a list of rules for a group doesn't appear to create the rule for the ESP protocol.

Here is an example Terraform script where I create a security group and add the rules using the rules resource, and then do the same thing in a different security group using single exoscale_security_group_rule resources, where the ESP protocol does appear in the security group:

resource "exoscale_security_group" "swarm_prod_rulegroup" {
  name             = "swarm-prod-rulegroup"
  description      = "Swarm security group ruleset for production cluster"
}

resource "exoscale_security_group_rules" "ruleset_swarm_prod_rulegroup" {
  security_group = exoscale_security_group.swarm_prod_rulegroup.name

  ingress {
    protocol                 = "TCP"
    ports                    = ["2377"]
    user_security_group_list = [exoscale_security_group.swarm_prod_rulegroup.name]
  }

  ingress {
    protocol                 = "TCP"
    ports                    = ["4789"]
    user_security_group_list = [exoscale_security_group.swarm_prod_rulegroup.name]
  }

  ingress {
    protocol                 = "TCP"
    ports                    = ["7946"]
    user_security_group_list = [exoscale_security_group.swarm_prod_rulegroup.name]
  }

  ingress {
    protocol                 = "UDP"
    ports                    = ["7946"]
    user_security_group_list = [exoscale_security_group.swarm_prod_rulegroup.name]
  }

  ingress {
    protocol                 = "ESP"
    user_security_group_list = [exoscale_security_group.swarm_prod_rulegroup.name]
  }
}

resource "exoscale_security_group" "swarm_prod_rule" {
  name             = "swarm-prod-rule"
  description      = "Swarm security group ruleset for production cluster"
}

resource "exoscale_security_group_rule" "rule_prod_swarm_rule_tcp_2377" {
  security_group      = exoscale_security_group.swarm_prod_rule.name
  type                = "INGRESS"
  protocol            = "TCP"
  start_port          = 2377
  end_port            = 2377
  user_security_group_id = exoscale_security_group.swarm_prod_rule.id
}

resource "exoscale_security_group_rule" "rule_prod_swarm_rule_tcp_4789" {
  security_group      = exoscale_security_group.swarm_prod_rule.name
  type                = "INGRESS"
  protocol            = "TCP"
  start_port          = 4789
  end_port            = 4789
  user_security_group_id = exoscale_security_group.swarm_prod_rule.id
}

resource "exoscale_security_group_rule" "rule_prod_swarm_rule_tcp_7946" {
  security_group      = exoscale_security_group.swarm_prod_rule.name
  type                = "INGRESS"
  protocol            = "TCP"
  start_port          = 7946
  end_port            = 7946
  user_security_group_id = exoscale_security_group.swarm_prod_rule.id
}

resource "exoscale_security_group_rule" "rule_prod_swarm_rule_udp_7946" {
  security_group      = exoscale_security_group.swarm_prod_rule.name
  type                = "INGRESS"
  protocol            = "UDP"
  start_port          = 7946
  end_port            = 7946
  user_security_group_id = exoscale_security_group.swarm_prod_rule.id
}

resource "exoscale_security_group_rule" "rule_prod_swarm_rule_esp" {
  security_group      = exoscale_security_group.swarm_prod_rule.name
  type                = "INGRESS"
  protocol            = "ESP"
  user_security_group_id = exoscale_security_group.swarm_prod_rule.id
}

There don't appear to be any particular error messages during the apply phase:

exoscale_security_group.swarm_prod_rule: Creating...
exoscale_security_group.swarm_prod_rulegroup: Creating...
exoscale_security_group.swarm_prod_rulegroup: Creation complete after 4s [id=1fb79502-b3a1-4966-b04a-b676dc776e0f]
exoscale_security_group.swarm_prod_rule: Creation complete after 4s [id=8e4fcc17-3303-4a91-9ff3-71a7d9802a63]
exoscale_security_group_rule.rule_prod_swarm_rule_esp: Creating...
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_2377: Creating...
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_7946: Creating...
exoscale_security_group_rule.rule_prod_swarm_rule_udp_7946: Creating...
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_4789: Creating...
exoscale_security_group_rules.ruleset_swarm_prod_rulegroup: Creating...
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_7946: Creation complete after 4s [id=e67e335f-bf8e-4f8b-b80e-448473c7c60d]
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_4789: Creation complete after 4s [id=c2fcdd74-b365-44bb-aa3e-67469945ad88]
exoscale_security_group_rule.rule_prod_swarm_rule_esp: Creation complete after 4s [id=c9eeadcc-5910-40c2-9394-c79c64eace28]
exoscale_security_group_rule.rule_prod_swarm_rule_udp_7946: Creation complete after 4s [id=2b524e02-ca33-4e21-94ac-68869e4939eb]
exoscale_security_group_rule.rule_prod_swarm_rule_tcp_2377: Creation complete after 4s [id=a1263ca9-d286-4100-8ad5-5a89e549a707]
exoscale_security_group_rules.ruleset_swarm_prod_rulegroup: Still creating... [10s elapsed]
exoscale_security_group_rules.ruleset_swarm_prod_rulegroup: Creation complete after 11s [id=5577006791947779410]

As a side note, trying to create the ESP rule manually through the web admin, or creating a single exoscale_security_group_rule after the exoscale_security_group_rules resource to add the ESP entry, causes a hard crash of the Terraform run, even with all mentions of the ESP rule commented out. I'm not sure whether that is expected behaviour when mixing rule and rules resources; just an observation.

Thanks

Strange cycle error

I have this Terraform configuration:


resource "exoscale_network" "testbug1" {
  name = "testbug1"
  display_text = "demo intra privnet"
  zone = "ch-gva-2"
}

resource "exoscale_compute" "test" {
  count = 3
  display_name = "test-bug-${count.index}"
  template_id  = "3e0ef678-b8c8-4467-9c39-eae7906be840"

  zone = "ch-gva-2"

  size            = "Tiny"
  disk_size       = 20
  key_pair        = "perso"
  security_groups = ["default"]
}

resource "exoscale_nic" "eth_static1" {
  count = length(exoscale_compute.test)

  compute_id = exoscale_compute.test.*.id[count.index]
  network_id = exoscale_network.testbug1.id

}

Terraform apply output:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # exoscale_compute.test[0] will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-bug-0"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "perso"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "Tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

  # exoscale_compute.test[1] will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-bug-1"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "perso"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "Tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

  # exoscale_compute.test[2] will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-bug-2"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "perso"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "Tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

  # exoscale_network.testbug1 will be created
  + resource "exoscale_network" "testbug1" {
      + display_text = "demo intra privnet"
      + id           = (known after apply)
      + name         = "testbug1"
      + tags         = (known after apply)
      + zone         = "ch-gva-2"
    }

  # exoscale_nic.eth_static1[0] will be created
  + resource "exoscale_nic" "eth_static1" {
      + compute_id  = (known after apply)
      + gateway     = (known after apply)
      + id          = (known after apply)
      + ip_address  = (known after apply)
      + mac_address = (known after apply)
      + netmask     = (known after apply)
      + network_id  = (known after apply)
    }

  # exoscale_nic.eth_static1[1] will be created
  + resource "exoscale_nic" "eth_static1" {
      + compute_id  = (known after apply)
      + gateway     = (known after apply)
      + id          = (known after apply)
      + ip_address  = (known after apply)
      + mac_address = (known after apply)
      + netmask     = (known after apply)
      + network_id  = (known after apply)
    }

  # exoscale_nic.eth_static1[2] will be created
  + resource "exoscale_nic" "eth_static1" {
      + compute_id  = (known after apply)
      + gateway     = (known after apply)
      + id          = (known after apply)
      + ip_address  = (known after apply)
      + mac_address = (known after apply)
      + netmask     = (known after apply)
      + network_id  = (known after apply)
    }

Plan: 7 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

exoscale_network.testbug1: Creating...
exoscale_compute.test[1]: Creating...
exoscale_compute.test[0]: Creating...
exoscale_compute.test[2]: Creating...
exoscale_network.testbug1: Creation complete after 1s [id=73128b07-785a-1701-e9af-8648e9f1cd77]
exoscale_compute.test[1]: Still creating... [10s elapsed]
exoscale_compute.test[2]: Still creating... [10s elapsed]
exoscale_compute.test[0]: Still creating... [10s elapsed]
exoscale_compute.test[1]: Still creating... [20s elapsed]
exoscale_compute.test[2]: Still creating... [20s elapsed]
exoscale_compute.test[0]: Still creating... [20s elapsed]
exoscale_compute.test[2]: Creation complete after 28s [id=370fa860-7358-4b19-98f1-5058d445ba53]
exoscale_compute.test[0]: Creation complete after 28s [id=2610e38f-bd55-4d98-86fe-41015ab36fa9]
exoscale_compute.test[1]: Creation complete after 28s [id=79e0e7c5-4685-4257-8274-dafba60fdcb1]
exoscale_nic.eth_static1[1]: Creating...
exoscale_nic.eth_static1[2]: Creating...
exoscale_nic.eth_static1[0]: Creating...
exoscale_nic.eth_static1[2]: Creation complete after 4s [id=78674fd6-ea33-84a4-15e2-45a75f64d31d]
exoscale_nic.eth_static1[1]: Creation complete after 8s [id=413ffd23-3bec-9db6-9efc-294e8bf4d9bc]
exoscale_nic.eth_static1[0]: Creation complete after 10s [id=7e15135e-259d-55ec-5b9b-4bb76b5d899a]

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

Now, I update the template_id to a new one (for example c00df33a-df5b-47d2-b9c8-a2eee9ef24a3), and re-run terraform apply:

$ terraform apply
exoscale_network.testbug1: Refreshing state... [id=73128b07-785a-1701-e9af-8648e9f1cd77]
exoscale_compute.test[1]: Refreshing state... [id=79e0e7c5-4685-4257-8274-dafba60fdcb1]
exoscale_compute.test[2]: Refreshing state... [id=370fa860-7358-4b19-98f1-5058d445ba53]
exoscale_compute.test[0]: Refreshing state... [id=2610e38f-bd55-4d98-86fe-41015ab36fa9]
exoscale_nic.eth_static1[2]: Refreshing state... [id=78674fd6-ea33-84a4-15e2-45a75f64d31d]
exoscale_nic.eth_static1[1]: Refreshing state... [id=413ffd23-3bec-9db6-9efc-294e8bf4d9bc]
exoscale_nic.eth_static1[0]: Refreshing state... [id=7e15135e-259d-55ec-5b9b-4bb76b5d899a]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # exoscale_compute.test[0] must be replaced
-/+ resource "exoscale_compute" "test" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 20
        display_name       = "test-bug-0"
      ~ gateway            = "159.100.242.1" -> (known after apply)
      ~ id                 = "2610e38f-bd55-4d98-86fe-41015ab36fa9" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "159.100.243.172" -> (known after apply)
        key_pair           = "perso"
      ~ name               = "test-bug-0" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "f5332702-1fdb-42dd-a68c-fa1008911650",
        ] -> (known after apply)
        security_groups    = [
            "default",
        ]
        size               = "Tiny"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Ubuntu 19.10 64-bit" -> (known after apply)
      ~ template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840" -> "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3" # forces replacement
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-gva-2"
    }

  # exoscale_compute.test[1] must be replaced
-/+ resource "exoscale_compute" "test" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 20
        display_name       = "test-bug-1"
      ~ gateway            = "185.19.28.1" -> (known after apply)
      ~ id                 = "79e0e7c5-4685-4257-8274-dafba60fdcb1" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "185.19.31.122" -> (known after apply)
        key_pair           = "perso"
      ~ name               = "test-bug-1" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "f5332702-1fdb-42dd-a68c-fa1008911650",
        ] -> (known after apply)
        security_groups    = [
            "default",
        ]
        size               = "Tiny"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Ubuntu 19.10 64-bit" -> (known after apply)
      ~ template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840" -> "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3" # forces replacement
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-gva-2"
    }

  # exoscale_compute.test[2] must be replaced
-/+ resource "exoscale_compute" "test" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 20
        display_name       = "test-bug-2"
      ~ gateway            = "159.100.242.1" -> (known after apply)
      ~ id                 = "370fa860-7358-4b19-98f1-5058d445ba53" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "159.100.242.77" -> (known after apply)
        key_pair           = "perso"
      ~ name               = "test-bug-2" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "f5332702-1fdb-42dd-a68c-fa1008911650",
        ] -> (known after apply)
        security_groups    = [
            "default",
        ]
        size               = "Tiny"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Ubuntu 19.10 64-bit" -> (known after apply)
      ~ template_id        = "3e0ef678-b8c8-4467-9c39-eae7906be840" -> "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3" # forces replacement
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-gva-2"
    }

  # exoscale_nic.eth_static1[0] must be replaced
-/+ resource "exoscale_nic" "eth_static1" {
      ~ compute_id  = "2610e38f-bd55-4d98-86fe-41015ab36fa9" -> (known after apply) # forces replacement
      + gateway     = (known after apply)
      ~ id          = "7e15135e-259d-55ec-5b9b-4bb76b5d899a" -> (known after apply)
      + ip_address  = (known after apply)
      ~ mac_address = "0a:57:6e:00:85:28" -> (known after apply)
      + netmask     = (known after apply)
        network_id  = "73128b07-785a-1701-e9af-8648e9f1cd77"
    }

  # exoscale_nic.eth_static1[1] must be replaced
-/+ resource "exoscale_nic" "eth_static1" {
      ~ compute_id  = "79e0e7c5-4685-4257-8274-dafba60fdcb1" -> (known after apply) # forces replacement
      + gateway     = (known after apply)
      ~ id          = "413ffd23-3bec-9db6-9efc-294e8bf4d9bc" -> (known after apply)
      + ip_address  = (known after apply)
      ~ mac_address = "0a:3d:b2:00:85:28" -> (known after apply)
      + netmask     = (known after apply)
        network_id  = "73128b07-785a-1701-e9af-8648e9f1cd77"
    }

  # exoscale_nic.eth_static1[2] must be replaced
-/+ resource "exoscale_nic" "eth_static1" {
      ~ compute_id  = "370fa860-7358-4b19-98f1-5058d445ba53" -> (known after apply) # forces replacement
      + gateway     = (known after apply)
      ~ id          = "78674fd6-ea33-84a4-15e2-45a75f64d31d" -> (known after apply)
      + ip_address  = (known after apply)
      ~ mac_address = "0a:4c:b8:00:85:28" -> (known after apply)
      + netmask     = (known after apply)
        network_id  = "73128b07-785a-1701-e9af-8648e9f1cd77"
    }

Plan: 6 to add, 0 to change, 6 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes


Error: Cycle: exoscale_compute.test[2] (destroy), exoscale_compute.test[2], exoscale_nic.eth_static1[0] (destroy), exoscale_nic.eth_static1[1] (destroy), exoscale_compute.test[0] (destroy), exoscale_compute.test[0], exoscale_compute.test[1], exoscale_nic.eth_static1 (prepare state), exoscale_nic.eth_static1[2] (destroy), exoscale_compute.test[1] (destroy)

I was expecting Terraform to destroy and then recreate the instances with the new template, but instead I get a cycle error, and I'm not sure I understand why.
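
For reference, a minimal sketch of the configuration shape behind this plan, reconstructed from the output above (all attribute values are taken from the plan; the NIC's compute_id reference is the dependency edge involved in the cycle):

resource "exoscale_compute" "test" {
  count = 3

  display_name    = "test-bug-${count.index}"
  template_id     = "3e0ef678-b8c8-4467-9c39-eae7906be840"
  zone            = "ch-gva-2"
  size            = "Tiny"
  disk_size       = 20
  key_pair        = "perso"
  security_groups = ["default"]
}

resource "exoscale_network" "testbug1" {
  zone         = "ch-gva-2"
  name         = "testbug1"
  display_text = "demo intra privnet"
}

resource "exoscale_nic" "eth_static1" {
  count = 3

  # Replacing the compute instance invalidates this reference, which is
  # what the cycle error above revolves around.
  compute_id = exoscale_compute.test[count.index].id
  network_id = exoscale_network.testbug1.id
}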

feature-request: sks: export kubeconfig string (or even separate credentials) of the resource

Hello!
I'd really like the ability not just to create an SKS cluster using Terraform, but also to get the connection credentials at the same time, for use in-place with the kubernetes/helm Terraform providers.
This would be something of an enabler for me to use SKS on Exoscale.
As far as I can tell, implementing such a feature would be fairly straightforward.
However, if there are any drawbacks that prevent doing so, please let me know.

PS: most managed Kubernetes providers, like DO, GCP, AWS, and YACloud, have this ability.
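
To illustrate the request, a minimal sketch of what consuming such an attribute could look like. The kubeconfig attribute below is hypothetical (not part of the current exoscale_sks_cluster schema); everything else uses standard hashicorp/local and helm provider features:

resource "exoscale_sks_cluster" "example" {
  zone = "ch-gva-2"
  name = "example"
}

# Hypothetical attribute: a raw kubeconfig document exported by the cluster.
resource "local_file" "kubeconfig" {
  content  = exoscale_sks_cluster.example.kubeconfig
  filename = "${path.module}/kubeconfig"
}

provider "helm" {
  kubernetes {
    config_path = local_file.kubeconfig.filename
  }
}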

[exoscale_security_group] Cannot import existing resource

Currently I'm not able to import an existing security group.

Terraform version: 1.1.6
Provider version: 0.31.2
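
The import was invoked as follows (resource address and security group ID as they appear, redacted, in the trace logs below):

$ terraform import exoscale_security_group.sks aaaaaaaa-1f9d-4a5b-a30a-d6dd01f7efd2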

Importing the security group leads to the following error.

╷
│ Error: nil entry in ImportState results. This is always a bug with
│ the resource that is being imported. Please report this as
│ a bug to Terraform.
│
│
╵

The logs show that the API calls are successful and return the group and its rules. Therefore I guess this is not API-related, but a bug somewhere in the provider code base.

Trace logs:
exoscale_security_group.sks: Importing from ID "aaaaaaaa-1f9d-4a5b-a30a-d6dd01f7efd2"...
2022-02-21T10:59:30.005+0100 [TRACE] vertex "exoscale_domain_record.sks_alias_root": visit complete
2022-02-21T10:59:30.005+0100 [TRACE] vertex "output.sks_endpoint": starting visit (*terraform.NodeApplyableOutput)
2022-02-21T10:59:30.005+0100 [TRACE] GRPCProvider: ImportResourceState
2022-02-21T10:59:30.005+0100 [TRACE] vertex "output.sks_id": starting visit (*terraform.NodeApplyableOutput)
2022-02-21T10:59:30.005+0100 [TRACE] setValue: Saving Create change for output.sks_instance_pool_id in changeset
2022-02-21T10:59:30.005+0100 [TRACE] vertex "output.nlb_ip_address": starting visit (*terraform.NodeApplyableOutput)
2022-02-21T10:59:30.005+0100 [TRACE] setValue: Saving value for output.sks_instance_pool_id in state
2022-02-21T10:59:30.005+0100 [TRACE] vertex "output.sks_instance_pool_id": visit complete
2022-02-21T10:59:30.005+0100 [TRACE] vertex "exoscale_security_group_rule.log_ipv4": visit complete
2022-02-21T10:59:30.006+0100 [TRACE] provider.terraform-provider-exoscale_v0.31.2: Received request: tf_proto_version=5 tf_req_id=b5b6c813-9886-ba74-6b34-6f7fc7b814ea tf_rpc=ImportResourceState @module=sdk.proto tf_resource_type=exoscale_security_group @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:620 tf_provider_addr=provider timestamp=2022-02-21T10:59:30.005+0100
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Saving Create change for output.nlb_ip_address in changeset
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Saving NoOp change for output.sks_id in changeset
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Removing output.nlb_ip_address from state (it is now null)
2022-02-21T10:59:30.006+0100 [TRACE] provider.terraform-provider-exoscale_v0.31.2: Calling downstream: @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:627 tf_provider_addr=provider tf_rpc=ImportResourceState tf_req_id=b5b6c813-9886-ba74-6b34-6f7fc7b814ea tf_resource_type=exoscale_security_group @module=sdk.proto tf_proto_version=5 timestamp=2022-02-21T10:59:30.005+0100
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Saving value for output.sks_id in state
2022-02-21T10:59:30.006+0100 [TRACE] vertex "output.nlb_ip_address": visit complete
2022-02-21T10:59:30.006+0100 [TRACE] vertex "output.sks_id": visit complete
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Saving NoOp change for output.sks_endpoint in changeset
2022-02-21T10:59:30.006+0100 [TRACE] setValue: Saving value for output.sks_endpoint in state
2022-02-21T10:59:30.006+0100 [TRACE] vertex "output.sks_endpoint": visit complete
2022-02-21T10:59:30.006+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Request Details:
---[ REQUEST ]---------------------------------------
GET /v2/security-group HTTP/1.1
Host: api-ch-gva-2.exoscale.com
User-Agent: Go-http-client/1.1
Authorization: EXO2-HMAC-SHA256 credential=EXOaaaaaaaaaaaaaaaaaaaaa,expires=1645438170,signature=BwoexeaaaaaaaaaahMlT9Wj9xLCrSZ+30ast+XlQjpXQ=
Accept-Encoding: gzip


-----------------------------------------------------: timestamp=2022-02-21T10:59:30.006+0100
2022-02-21T10:59:30.234+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 200 OK
Connection: close
Content-Security-Policy: frame-ancestors https://portal.exoscale.com/ https://www.exoscale.com; frame-src https://portal.exoscale.com/ https://sos-ch-dk-2.exo.io/
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Feb 2022 09:59:30 GMT
Referrer-Policy: no-referrer-when-downgrade
Strict-Transport-Security: max-age=31557600; includeSubDomains; preload
X-Content-Type-Options: nosniff always
X-Xss-Protection: 1; mode=block always

{
 "security-groups": [
  {
   "id": "1d2e3a75-b425-4c5a-a4fa-be6057a9129c",
   "name": "default",
   "description": "Default Security Group",
   "rules": [
    {
     "id": "72a91e54-27b1-47d8-a574-40ae897cc1f4",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "SSH",
     "network": "0.0.0.0/0",
     "start-port": 22,
     "end-port": 22
    },
    {
     "id": "dc89affa-e664-4d7f-889f-429f84120f2b",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "network": "0.0.0.0/0",
     "start-port": 443,
     "end-port": 443
    },
    {
     "id": "d210ff82-8cf2-45c2-b497-f0f7debb1144",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "network": "0.0.0.0/0",
     "start-port": 80,
     "end-port": 80
    }
   ]
  },
  {
   "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
   "name": "jentis-prod-sks",
   "rules": [
    {
     "id": "3c0b303e-5069-4aae-a92b-f06b6dfca0bc",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "Nodes logs/exec",
     "network": "0.0.0.0/0",
     "start-port": 10250,
     "end-port": 10250
    },
    {
     "id": "ab7d1ae3-49e7-4f16-83ca-3859f417672d",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "Nodes logs/exec",
     "network": "::/0",
     "start-port": 10250,
     "end-port": 10250
    },
    {
     "id": "782b82e0-c838-445d-ab6c-400e47c62037",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "NodePort services",
     "network": "0.0.0.0/0",
     "start-port": 30000,
     "end-port": 32767
    },
    {
     "id": "e9eb5a9b-4c8e-4f2a-993b-bdbff301882c",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "NodePort services",
     "network": "::/0",
     "start-port": 30000,
     "end-port": 32767
    },
    {
     "id": "e7460a46-dd54-4fa8-b074-3fd2cf76d02f",
     "protocol": "udp",
     "flow-direction": "ingress",
     "description": "Calico traffic",
     "start-port": 4789,
     "end-port": 4789,
     "security-group": {
      "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
      "name": "jentis-prod-sks"
     }
    }
   ]
  },
  {
   "id": "ed365e01-3f1d-4d5b-89b5-f9caad0a4812",
   "name": "jentis-dev-sks",
   "rules": [
    {
     "id": "6593b7ee-bdf5-45c7-a90c-19675f51fc53",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "Nodes logs/exec",
     "network": "0.0.0.0/0",
     "start-port": 10250,
     "end-port": 10250
    },
    {
     "id": "87ada82e-4df5-4c0a-91a7-b3ff2a021963",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "Nodes logs/exec",
     "network": "::/0",
     "start-port": 10250,
     "end-port": 10250
    },
    {
     "id": "2688e0ac-9e9b-4d01-aca4-8aff4f1e4c00",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "NodePort services",
     "network": "0.0.0.0/0",
     "start-port": 30000,
     "end-port": 32767
    },
    {
     "id": "b8f7b420-31cb-47b8-b982-41e3b71ab04e",
     "protocol": "tcp",
     "flow-direction": "ingress",
     "description": "NodePort services",
     "network": "::/0",
     "start-port": 30000,
     "end-port": 32767
    },
    {
     "id": "1c4747d5-1c51-46b8-b772-a810bc46309c",
     "protocol": "udp",
     "flow-direction": "ingress",
     "description": "Calico traffic",
     "start-port": 4789,
     "end-port": 4789,
     "security-group": {
      "id": "ed365e01-3f1d-4d5b-89b5-f9caad0a4812",
      "name": "jentis-dev-sks"
     }
    }
   ]
  }
 ]
}
-----------------------------------------------------: timestamp=2022-02-21T10:59:30.234+0100
2022-02-21T10:59:30.234+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Request Details:
---[ REQUEST ]---------------------------------------
GET /v2/security-group/a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2 HTTP/1.1
Host: api-ch-gva-2.exoscale.com
User-Agent: Go-http-client/1.1
Authorization: EXO2-HMAC-SHA256 credential=EXOaaaaaaaaaaaaaacdd6208944c,expires=1645438170,signature=ISaaaaaaaaaaBf0mEiZ9kdfN6r7KY4lAfYnWEuOI=
Accept-Encoding: gzip


-----------------------------------------------------: timestamp=2022-02-21T10:59:30.234+0100
2022-02-21T10:59:30.324+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 200 OK
Connection: close
Content-Security-Policy: frame-ancestors https://portal.exoscale.com/ https://www.exoscale.com; frame-src https://portal.exoscale.com/ https://sos-ch-dk-2.exo.io/
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Feb 2022 09:59:30 GMT
Referrer-Policy: no-referrer-when-downgrade
Strict-Transport-Security: max-age=31557600; includeSubDomains; preload
X-Content-Type-Options: nosniff always
X-Xss-Protection: 1; mode=block always

{
 "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
 "name": "jentis-prod-sks",
 "rules": [
  {
   "id": "3c0b303e-5069-4aae-a92b-f06b6dfca0bc",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "Nodes logs/exec",
   "network": "0.0.0.0/0",
   "start-port": 10250,
   "end-port": 10250
  },
  {
   "id": "ab7d1ae3-49e7-4f16-83ca-3859f417672d",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "Nodes logs/exec",
   "network": "::/0",
   "start-port": 10250,
   "end-port": 10250
  },
  {
   "id": "782b82e0-c838-445d-ab6c-400e47c62037",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "NodePort services",
   "network": "0.0.0.0/0",
   "start-port": 30000,
   "end-port": 32767
  },
  {
   "id": "e9eb5a9b-4c8e-4f2a-993b-bdbff301882c",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "NodePort services",
   "network": "::/0",
   "start-port": 30000,
   "end-port": 32767
  },
  {
   "id": "e7460a46-dd54-4fa8-b074-3fd2cf76d02f",
   "protocol": "udp",
   "flow-direction": "ingress",
   "description": "Calico traffic",
   "start-port": 4789,
   "end-port": 4789,
   "security-group": {
    "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
    "name": "jentis-prod-sks"
   }
  }
 ]
}
-----------------------------------------------------: timestamp=2022-02-21T10:59:30.323+0100
2022-02-21T10:59:30.324+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Request Details:
---[ REQUEST ]---------------------------------------
GET /v2/security-group/a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2 HTTP/1.1
Host: api-ch-gva-2.exoscale.com
User-Agent: Go-http-client/1.1
Authorization: EXO2-HMAC-SHA256 credential=EXOaaaaaaaaaaaaa8944c,expires=1645438170,signature=ISSih2uQRgghaaaaaaaaaaa6r7KY4lAfYnWEuOI=
Accept-Encoding: gzip


-----------------------------------------------------: timestamp=2022-02-21T10:59:30.324+0100
2022-02-21T10:59:30.608+0100 [INFO]  provider.terraform-provider-exoscale_v0.31.2: 2022/02/21 10:59:30 [DEBUG] exoscale API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 200 OK
Connection: close
Content-Security-Policy: frame-ancestors https://portal.exoscale.com/ https://www.exoscale.com; frame-src https://portal.exoscale.com/ https://sos-ch-dk-2.exo.io/
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Feb 2022 09:59:30 GMT
Referrer-Policy: no-referrer-when-downgrade
Strict-Transport-Security: max-age=31557600; includeSubDomains; preload
X-Content-Type-Options: nosniff always
X-Xss-Protection: 1; mode=block always

{
 "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
 "name": "jentis-prod-sks",
 "rules": [
  {
   "id": "3c0b303e-5069-4aae-a92b-f06b6dfca0bc",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "Nodes logs/exec",
   "network": "0.0.0.0/0",
   "start-port": 10250,
   "end-port": 10250
  },
  {
   "id": "ab7d1ae3-49e7-4f16-83ca-3859f417672d",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "Nodes logs/exec",
   "network": "::/0",
   "start-port": 10250,
   "end-port": 10250
  },
  {
   "id": "782b82e0-c838-445d-ab6c-400e47c62037",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "NodePort services",
   "network": "0.0.0.0/0",
   "start-port": 30000,
   "end-port": 32767
  },
  {
   "id": "e9eb5a9b-4c8e-4f2a-993b-bdbff301882c",
   "protocol": "tcp",
   "flow-direction": "ingress",
   "description": "NodePort services",
   "network": "::/0",
   "start-port": 30000,
   "end-port": 32767
  },
  {
   "id": "e7460a46-dd54-4fa8-b074-3fd2cf76d02f",
   "protocol": "udp",
   "flow-direction": "ingress",
   "description": "Calico traffic",
   "start-port": 4789,
   "end-port": 4789,
   "security-group": {
    "id": "a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2",
    "name": "jentis-prod-sks"
   }
  }
 ]
}
-----------------------------------------------------: timestamp=2022-02-21T10:59:30.608+0100
2022-02-21T10:59:30.608+0100 [TRACE] provider.terraform-provider-exoscale_v0.31.2: Called downstream: tf_resource_type=exoscale_security_group tf_rpc=ImportResourceState @module=sdk.proto tf_proto_version=5 tf_provider_addr=provider tf_req_id=b5b6c813-9886-ba74-6b34-6f7fc7b814ea @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:633 timestamp=2022-02-21T10:59:30.608+0100
2022-02-21T10:59:30.608+0100 [TRACE] provider.terraform-provider-exoscale_v0.31.2: Served request: @module=sdk.proto tf_proto_version=5 tf_req_id=b5b6c813-9886-ba74-6b34-6f7fc7b814ea @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:639 tf_provider_addr=provider tf_resource_type=exoscale_security_group tf_rpc=ImportResourceState timestamp=2022-02-21T10:59:30.608+0100
2022-02-21T10:59:30.608+0100 [ERROR] vertex "exoscale_security_group.sks (import id \"a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2\")" error: nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform.
2022-02-21T10:59:30.608+0100 [TRACE] vertex "exoscale_security_group.sks (import id \"a19d2acb-1f9d-4a5b-a30a-d6dd01f7efd2\")": visit complete, with errors
2022-02-21T10:59:30.608+0100 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/exoscale/exoscale\"] (close)" errored, so skipping
2022-02-21T10:59:30.608+0100 [TRACE] dag/walk: upstream of "root" errored, so skipping
╷
│ Error: nil entry in ImportState results. This is always a bug with
│ the resource that is being imported. Please report this as
│ a bug to Terraform.
│
│
╵

2022-02-21T10:59:30.608+0100 [TRACE] statemgr.Filesystem: removing lock metadata file terraform.tfstate.d/prod/.terraform.tfstate.lock.info
2022-02-21T10:59:30.608+0100 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate.d/prod/terraform.tfstate using fcntl flock
2022-02-21T10:59:30.609+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2022-02-21T10:59:30.610+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/exoscale/exoscale/0.31.2/linux_amd64/terraform-provider-exoscale_v0.31.2 pid=118026
2022-02-21T10:59:30.610+0100 [DEBUG] provider: plugin exited

exoscale_compute fails to apply when instance is stopping

When an exoscale_compute resource that was created with Terraform is deleted manually, the next terraform apply fails with the following error message:

data.exoscale_compute_template.ubuntu: Refreshing state... [id=ea595a7f-6a76-4539-9b8e-a0ca78488900]
exoscale_compute.test: Refreshing state... [id=3e7f5ab6-d704-49d2-a4f7-8401abbccf4e]
exoscale_compute.test: Modifying... [id=3e7f5ab6-d704-49d2-a4f7-8401abbccf4e]

Error: VM 3e7f5ab6-d704-49d2-a4f7-8401abbccf4e must be either Running or Stopped. got Stopping

This only happens when the instance takes longer than usual to destroy and a terraform apply is run in the meantime.

When the instance is in the Stopping state, the provider should wait for the transient state to pass before executing further operations.

Cannot loop on instance pool instances

Using the exoscale_instance_pool resource type (provider v0.18.2), I'm trying to retrieve the instances linked to the resource and loop on them to create DNS entries.

I've seen the virtual_machines attribute in the code (though it is not listed on https://registry.terraform.io/providers/exoscale/exoscale/latest/docs/resources/instance_pool#argument-reference), but I cannot loop on it, as Terraform (v0.12.19) complains that virtual_machines has no indices:

Error: Invalid index

  on /home/raphink/dev/provisionning/terraform-rke-cluster-exoscale/node-group/dns.tf line 10, in resource "aws_route53_record" "this":
  10:     exoscale_instance_pool.this.virtual_machines[count.index].ip_address
    |----------------
    | count.index is 2

This value does not have any indices.

Am I missing something?
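
For reference, the resource producing the error above looks roughly like the following (the aws_route53_record arguments other than records are illustrative):

resource "aws_route53_record" "this" {
  count   = 3
  zone_id = var.zone_id            # illustrative
  name    = "node-${count.index}"  # illustrative
  type    = "A"
  ttl     = 300

  records = [
    # This is the indexing expression Terraform rejects.
    exoscale_instance_pool.this.virtual_machines[count.index].ip_address,
  ]
}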

exoscale_security_group_rules does not use stable sorting of rules

When using exoscale_security_group_rules to define multiple rules at once, Terraform detects changes on every run.

Terraform fetches the current state and compares it to the desired state; if there are differences, it applies them. It seems that when the provider fetches the IP rules, they come back in a "random" order, so the resource gets updated even when neither the state nor the definition has changed.

I guess it would make sense to apply a stable sort before the comparison, but I have no insight into the Terraform internals.
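
A minimal configuration exhibiting the behavior might look like this (a sketch; rule values are illustrative, and the exact schema should be checked against the resource docs). The point is that several rules are declared in one resource and compared against an unordered API response:

resource "exoscale_security_group" "web" {
  name = "web"
}

resource "exoscale_security_group_rules" "web" {
  security_group_id = exoscale_security_group.web.id

  ingress {
    protocol  = "TCP"
    ports     = ["80", "443"]
    cidr_list = ["0.0.0.0/0"]
  }

  ingress {
    protocol  = "TCP"
    ports     = ["22"]
    cidr_list = ["0.0.0.0/0"]
  }
}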

Impossible to provision new servers

Hi there,

We're running into an issue provisioning new servers (using Chef, but it fails when trying to connect to the new host via SSH).

This is using the latest provider (0.13.0) and the latest Terraform release (v0.12.12). Previously we were using version 0.11.0 from the old repo, and that worked, but I'm now getting the same issue with that old version too, so I guess something changed in Terraform itself?

The security groups I'm using for this new server include a rule opening SSH from any source.

It looks like the IP of the newly created server was not set (empty host in the logs):

exoscale_compute.lb_77 (remote-exec):   Host:
exoscale_compute.lb_77 (remote-exec):   User: root
exoscale_compute.lb_77 (remote-exec):   Password: false
exoscale_compute.lb_77 (remote-exec):   Private key: false
exoscale_compute.lb_77 (remote-exec):   Certificate: false
exoscale_compute.lb_77 (remote-exec):   SSH Agent: true
exoscale_compute.lb_77 (remote-exec):   Checking Host Key: false

After 5 minutes I'm getting a timeout:

Error: timeout - last error: dial tcp :22: connect: connection refused

I can log in manually to the new servers using SSH, as both root and ubuntu.

Let me know if you need more information; we're not doing anything funky, and our config looks like https://www.terraform.io/docs/providers/exoscale/r/compute.html
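
For context, a minimal sketch of the kind of configuration in use (attribute values are illustrative). The relevant part is the remote-exec connection, whose host should resolve to the instance's ip_address, the field that shows up empty in the logs above:

resource "exoscale_compute" "lb_77" {
  display_name    = "lb-77"                          # illustrative
  template        = "Linux Ubuntu 18.04 LTS 64-bit"  # illustrative
  zone            = "ch-gva-2"                       # illustrative
  size            = "Medium"                         # illustrative
  disk_size       = 50                               # illustrative
  key_pair        = "deploy"                         # illustrative
  security_groups = ["default"]                      # must allow SSH

  provisioner "remote-exec" {
    inline = ["echo connected"]

    connection {
      type = "ssh"
      user = "root"
      host = self.ip_address  # the attribute that appears empty as "Host:" above
    }
  }
}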

compute: ssh key issue

Hello,

I have an issue creating virtual machines with an SSH key.

My Exoscale account has one SSH key (not managed by Terraform), which is my default key. I want to create a new key and two virtual machines using that new key:

resource "exoscale_ssh_keypair" "test-keypair" {
  name       = "test-bug"
  public_key = file("/home/mathieu/.ssh/id_rsa.pub")
}


resource "exoscale_compute" "test" {
  count = 2

  display_name = "test-${count.index}"
  template_id  = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"

  zone = "ch-gva-2"

  size            = "tiny"
  disk_size       = 20
  key_pair        = exoscale_ssh_keypair.test-keypair.name
  security_groups = ["default"]

}

variable "exoscale_api_key" {
  description = "Exoscale API key"
}

variable "exoscale_secret_key" {
  description = "Exoscale secret key"
}

provider "exoscale" {
  version = "~> 0.15"
  key = var.exoscale_api_key
  secret = var.exoscale_secret_key
}

Terraform apply output:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # exoscale_compute.test[0] will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-0"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "test-bug"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

  # exoscale_compute.test[1] will be created
  + resource "exoscale_compute" "test" {
      + affinity_group_ids = (known after apply)
      + affinity_groups    = (known after apply)
      + disk_size          = 20
      + display_name       = "test-1"
      + gateway            = (known after apply)
      + id                 = (known after apply)
      + ip4                = true
      + ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      + ip_address         = (known after apply)
      + key_pair           = "test-bug"
      + name               = (known after apply)
      + password           = (sensitive value)
      + security_group_ids = (known after apply)
      + security_groups    = [
          + "default",
        ]
      + size               = "tiny"
      + state              = (known after apply)
      + tags               = (known after apply)
      + template           = (known after apply)
      + template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
      + user_data_base64   = (known after apply)
      + username           = (known after apply)
      + zone               = "ch-gva-2"
    }

  # exoscale_ssh_keypair.test-keypair will be created
  + resource "exoscale_ssh_keypair" "test-keypair" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "test-bug"
      + private_key = (sensitive value)
      + public_key  = "<redacted>"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

exoscale_ssh_keypair.test-keypair: Creating...
exoscale_ssh_keypair.test-keypair: Creation complete after 1s [id=test-bug]
exoscale_compute.test[1]: Creating...
exoscale_compute.test[0]: Creating...
exoscale_compute.test[1]: Still creating... [10s elapsed]
exoscale_compute.test[0]: Still creating... [10s elapsed]
exoscale_compute.test[0]: Creation complete after 12s [id=efc5d72f-1e31-4a6c-a0ff-3d40f3ce7571]
exoscale_compute.test[1]: Creation complete after 14s [id=64c25e45-d53d-4994-b377-3901e617967e]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

As you can see, the plan indicates key_pair = "test-bug".

But my virtual machines were created with my default key pair (I can see it under Base SSH Key in the portal). If I run terraform apply again, Terraform tries to recreate my machines because the key pair is wrong:

terraform apply
exoscale_ssh_keypair.test-keypair: Refreshing state... [id=test-bug]
exoscale_compute.test[1]: Refreshing state... [id=64c25e45-d53d-4994-b377-3901e617967e]
exoscale_compute.test[0]: Refreshing state... [id=efc5d72f-1e31-4a6c-a0ff-3d40f3ce7571]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # exoscale_compute.test[0] must be replaced
-/+ resource "exoscale_compute" "test" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 20
        display_name       = "test-0"
      ~ gateway            = "185.19.28.1" -> (known after apply)
      ~ id                 = "efc5d72f-1e31-4a6c-a0ff-3d40f3ce7571" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "185.19.28.193" -> (known after apply)
      ~ key_pair           = "perso" -> "test-bug" # forces replacement
      ~ name               = "test-0" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "f5332702-1fdb-42dd-a68c-fa1008911650",
        ] -> (known after apply)
        security_groups    = [
            "default",
        ]
      ~ size               = "Tiny" -> "tiny"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Debian 10 (Buster) 64-bit" -> (known after apply)
        template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-gva-2"
    }

  # exoscale_compute.test[1] must be replaced
-/+ resource "exoscale_compute" "test" {
      ~ affinity_group_ids = [] -> (known after apply)
      ~ affinity_groups    = [] -> (known after apply)
        disk_size          = 20
        display_name       = "test-1"
      ~ gateway            = "185.19.28.1" -> (known after apply)
      ~ id                 = "64c25e45-d53d-4994-b377-3901e617967e" -> (known after apply)
        ip4                = true
        ip6                = false
      + ip6_address        = (known after apply)
      + ip6_cidr           = (known after apply)
      ~ ip_address         = "185.19.29.37" -> (known after apply)
      ~ key_pair           = "perso" -> "test-bug" # forces replacement
      ~ name               = "test-1" -> (known after apply)
      ~ password           = (sensitive value)
      ~ security_group_ids = [
          - "f5332702-1fdb-42dd-a68c-fa1008911650",
        ] -> (known after apply)
        security_groups    = [
            "default",
        ]
      ~ size               = "Tiny" -> "tiny"
      ~ state              = "Running" -> (known after apply)
      ~ tags               = {} -> (known after apply)
      ~ template           = "Linux Debian 10 (Buster) 64-bit" -> (known after apply)
        template_id        = "c00df33a-df5b-47d2-b9c8-a2eee9ef24a3"
      ~ user_data_base64   = true -> (known after apply)
      ~ username           = "root" -> (known after apply)
        zone               = "ch-gva-2"
    }

Plan: 2 to add, 0 to change, 2 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Failure to destroy on instance pool manual removal

Abstract

When creating an instance pool, then manually removing it, and running terraform destroy, the run fails with Instance pool "xxx" not found.

Steps to reproduce

  1. Create the following Terraform config, run terraform init, and terraform apply
variable "exoscale_key" {}
variable "exoscale_secret" {}

terraform {
  required_providers {
    exoscale = {
      source  = "terraform-providers/exoscale"
    }
  }
}

provider "exoscale" {
  key = var.exoscale_key
  secret = var.exoscale_secret
}

data "exoscale_compute_template" "ubuntu" {
  zone = "ch-gva-2"
  name = "Linux Ubuntu 18.04 LTS 64-bit"
}

resource "exoscale_instance_pool" "test" {
  zone = "at-vie-1"
  name = "test"
  template_id = data.exoscale_compute_template.ubuntu.id
  size = 1
  service_offering = "micro"
  disk_size = 10
  key_pair = ""
}
  2. Delete the created instance pool manually
  3. Run terraform destroy

What happens?

data.exoscale_compute_template.ubuntu: Refreshing state... [id=ea595a7f-6a76-4539-9b8e-a0ca78488900]
exoscale_instance_pool.test: Refreshing state... [id=429b4124-079b-c3c9-8616-be5c0cce6431]

Error: Instance pool "429b4124-079b-c3c9-8616-be5c0cce6431" not found

terraform version

Terraform v0.13.3
+ provider registry.terraform.io/terraform-providers/exoscale v0.19.0

exoscale_database creation error: invalid request: Cannot find ID info

When creating a new exoscale_database resource, Terraform produces the following error:

exoscale_database.prod: Creating...
╷
│ Error: Post "https://api-de-fra-1.exoscale.com/v2/dbaas-service": invalid request: Cannot find ID info
│
│   with exoscale_database.prod,
│   on main.tf line 6, in resource "exoscale_database" "prod":
│    6: resource "exoscale_database" "prod" {
│

Versions used:

Terraform: 1.0.6
Exoscale: 0.29.0

Example code

The example below follows the provider documentation (see https://registry.terraform.io/providers/exoscale/exoscale/latest/docs/resources/database#example-usage):

# example from https://registry.terraform.io/providers/exoscale/exoscale/latest/docs/resources/database#example-usage
locals {
  zone = "de-fra-1"
}

resource "exoscale_database" "prod" {
  zone = local.zone
  name = "prod"
  type = "pg"
  plan = "startup-4"

  maintenance_dow  = "sunday"
  maintenance_time = "23:00:00"

  termination_protection = true

  user_config = jsonencode({
    pg_version    = "13"
    backup_hour   = 1
    backup_minute = 0
    ip_filter     = ["194.182.161.182/32"]
    pglookout     = {
      max_failover_replication_time_lag = 60
    }
  })
}

output "database_uri" {
  value = exoscale_database.prod.uri
  sensitive = true # added to avoid terraform error
}

# config
terraform {
  required_providers {
    exoscale = {
      source = "exoscale/exoscale"
      version = ">= 0.29.0"
    }
  }
  required_version = ">= 1.0.6"
}

provider "exoscale" {
  key = "xxx"
  secret = "xxx"
}

Result

% terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # exoscale_database.prod will be created
  + resource "exoscale_database" "prod" {
      + created_at             = (known after apply)
      + disk_size              = (known after apply)
      + features               = (known after apply)
      + id                     = (known after apply)
      + maintenance_dow        = "sunday"
      + maintenance_time       = "23:00:00"
      + metadata               = (known after apply)
      + name                   = "prod"
      + node_cpus              = (known after apply)
      + node_memory            = (known after apply)
      + nodes                  = (known after apply)
      + plan                   = "startup-4"
      + state                  = (known after apply)
      + termination_protection = true
      + type                   = "pg"
      + updated_at             = (known after apply)
      + uri                    = (sensitive value)
      + user_config            = jsonencode(
            {
              + backup_hour   = 1
              + backup_minute = 0
              + ip_filter     = [
                  + "194.182.161.182/32",
                ]
              + pg_version    = "13"
              + pglookout     = {
                  + max_failover_replication_time_lag = 60
                }
            }
        )
      + zone                   = "de-fra-1"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + database_uri = (sensitive value)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

exoscale_database.prod: Creating...
╷
│ Error: Post "https://api-de-fra-1.exoscale.com/v2/dbaas-service": invalid request: Cannot find ID info
│
│   with exoscale_database.prod,
│   on main.tf line 6, in resource "exoscale_database" "prod":
│    6: resource "exoscale_database" "prod" {
│
╵

Adding SSH key to instance pool behind NLB results in error

When modifying an instance pool that is being used by an NLB, by adding a previously unset SSH key, the following error is generated:

exoscale_instance_pool.autoscaling: Destroying... [id=41e6810f-e0f9-eb33-355a-aa729b651c54]

Error: API error ErrorCode(403) 403 (ServerAPIException 9999): operation destroyInstancePool on resource 41e6810f-e0f9-eb33-355a-aa729b651c54 is locked

This happens because the SSH key change forces the instance pool to be destroyed and recreated, while the NLB service that references it is only updated in-place, leaving the old pool locked when Terraform attempts to destroy it:

# exoscale_nlb_service.autoscaling will be updated in-place
  ~ resource "exoscale_nlb_service" "autoscaling" {
        description      = "Managed by Terraform!"
        id               = "0bc0ed66-b628-451f-9778-eb1d562542bc"
      ~ instance_pool_id = "41e6810f-e0f9-eb33-355a-aa729b651c54" -> (known after apply)
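
For context, a sketch of the resource relationship involved (resource names taken from the plan excerpt above; all other attribute values are illustrative and the schema should be checked against the docs):

data "exoscale_compute_template" "ubuntu" {
  zone = "ch-gva-2"                       # illustrative
  name = "Linux Ubuntu 20.04 LTS 64-bit"  # illustrative
}

resource "exoscale_instance_pool" "autoscaling" {
  zone             = "ch-gva-2"           # illustrative
  name             = "autoscaling"
  template_id      = data.exoscale_compute_template.ubuntu.id
  size             = 2                    # illustrative
  service_offering = "medium"             # illustrative
  disk_size        = 10                   # illustrative
  key_pair         = "deploy"             # the newly added key; forces pool replacement
}

resource "exoscale_nlb" "autoscaling" {
  zone = "ch-gva-2"                       # illustrative
  name = "autoscaling"
}

resource "exoscale_nlb_service" "autoscaling" {
  zone             = "ch-gva-2"           # illustrative
  name             = "autoscaling"
  nlb_id           = exoscale_nlb.autoscaling.id
  instance_pool_id = exoscale_instance_pool.autoscaling.id  # updated in-place, keeps the old pool locked
  port             = 80                   # illustrative
  target_port      = 8080                 # illustrative

  healthcheck {
    port = 8080                           # illustrative
  }
}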

Creating instance with template_id should retrieve the username from the given template, not root

The issue

When creating an exoscale_compute resource from a custom template registered in Exoscale with a custom username (like alice in the documentation), the username we output at the end of terraform apply is root instead of the custom username alice.

  • The main.tf

    resource "exoscale_compute" "instance" {
      template_id        = "65f880d4-7b2f-11eb-9439-0242ac130002"
    }
  • The output.tf

    output "username" {
      value = exoscale_compute.instance.username
    }
    
  • The terraform apply output:

    username = "root"

The cause

In the resource_exoscale_compute.go file, at line 269, we have:

if byName {
    // Template referenced by name: look it up via the API, which also
    // returns the template details, including its default username.
    resp, err = client.GetWithContext(ctx, &egoscale.ListTemplates{
        ZoneID:         zone.ID,
        Name:           d.Get("template").(string),
        TemplateFilter: "featured",
    })
    if err != nil {
        return err
    }

    template := resp.(*egoscale.Template)
    templateID = template.ID.String()

    if name, ok := template.Details["username"]; username == "" && ok {
        username = name
    }
} else {
    // Template referenced by ID: the ID is used as-is, and the template
    // details (hence the username) are never fetched.
    templateID = d.Get("template_id").(string)
}

if username == "" {
    log.Printf("[INFO] %s: username not found in the template details, falling back to `root`", resourceComputeIDString(d))
    username = "root"
}

In the else branch (the case using the template_id value), we should call the API to retrieve the username set in the template.

$ exo vm template ls

│b3f4fb0e-6a5d-4e10-b7b5-f6260c2a663b│ Linux Ubuntu 20.10 64-bit │ 2020-12-08T08:23:55+0000 │ ch-gva-2 │ 10 GiB │

$ exo vm template show b3f4fb0e-6a5d-4e10-b7b5-f6260c2a663b
┼───────────────┼──────────────────────────────────────┼
│   TEMPLATE    │                                      │
┼───────────────┼──────────────────────────────────────┼
│ ID            │ b3f4fb0e-6a5d-4e10-b7b5-f6260c2a663b │
│ Name          │ Linux Ubuntu 20.10 64-bit            │
│ OS Type       │ Ubuntu                               │
│ Creation Date │ 2020-12-08T08:23:55+0000             │
│ Zone          │ ch-gva-2                             │
│ Disk Size     │ 10 GiB                               │
│ Username      │ ubuntu                               │
│ Password?     │ true                                 │
│ Boot Mode     │ uefi                                 │
┼───────────────┼──────────────────────────────────────┼

This would avoid the "falling back to root" behavior shown above.
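
As a possible workaround until the provider does this itself, the template's username can be looked up with the existing exoscale_compute_template data source and used alongside template_id (a sketch; the filter value for custom templates is an assumption, and the compute resource is reduced to a few attributes):

data "exoscale_compute_template" "custom" {
  zone   = "ch-gva-2"
  id     = "65f880d4-7b2f-11eb-9439-0242ac130002"
  filter = "mine" # assumption: custom templates are listed under "mine"
}

resource "exoscale_compute" "instance" {
  zone        = "ch-gva-2"
  template_id = data.exoscale_compute_template.custom.id
  size        = "Tiny"
  disk_size   = 10
}

output "username" {
  # Read the username from the template details rather than the instance.
  value = data.exoscale_compute_template.custom.username
}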

What should we use instead of exoscale_nic?

Hi, thank you for this terraform provider!

I see here that the exoscale_nic resource is deprecated.

I use this resource to map a compute instance (e.g. id=1a2b3c) to a private network IP address (e.g. 10.0.0.4). However, I can't see how to achieve this with any of the other resources. Hopefully I'm not missing something really obvious! Can you point me in the right direction, please? ☺️
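
For anyone landing here, a hedged sketch of the newer approach, assuming the current exoscale_compute_instance resource and its network_interface block (attribute values are illustrative; check the registry docs for the exact schema):

variable "template_id" {} # illustrative

resource "exoscale_private_network" "privnet" {
  zone     = "ch-gva-2"      # illustrative
  name     = "privnet"
  netmask  = "255.255.255.0" # managed privnet settings, needed for static leases
  start_ip = "10.0.0.2"
  end_ip   = "10.0.0.100"
}

resource "exoscale_compute_instance" "vm" {
  zone        = "ch-gva-2"       # illustrative
  name        = "vm"
  template_id = var.template_id  # illustrative
  type        = "standard.tiny"
  disk_size   = 10

  network_interface {
    network_id = exoscale_private_network.privnet.id
    ip_address = "10.0.0.4" # the static private IP previously set via exoscale_nic
  }
}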
