linode / terraform-provider-linode

Terraform Linode provider

Home Page: https://www.terraform.io/docs/providers/linode/

License: Mozilla Public License 2.0


terraform-provider-linode's Introduction

Terraform Provider for Linode


Maintainers

This provider plugin is maintained by Linode.

Requirements

  • Terraform 0.12.0+
  • Go 1.20.0 or higher (to build the provider plugin)

Using the provider

See the Linode Provider documentation to get started using the Linode provider. The examples included in this repository demonstrate usage of many of the Linode provider resources.

Additional documentation and examples are provided in the Linode Guide, Using Terraform to Provision Linode Environments.
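
For orientation, a minimal configuration looks something like this (a sketch only; the variable names and instance values are illustrative, not taken from the provider documentation):

provider "linode" {
  # A Linode APIv4 token; the variable name here is hypothetical.
  token = var.linode_token
}

resource "linode_instance" "example" {
  label     = "example"
  image     = "linode/ubuntu18.04"
  region    = "us-east"
  type      = "g6-nanode-1"
  root_pass = var.root_pass # hypothetical variable holding a root password
}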

Development

Building the provider

If you wish to build the provider or contribute code, you'll first need Git and Go installed on your machine (Go 1.20.0 or higher, per the requirements above).

You'll also need to correctly configure a GOPATH, as well as add $GOPATH/bin to your $PATH.

Clone this repository to $GOPATH/src/github.com/linode/terraform-provider-linode:

mkdir -p $GOPATH/src/github.com/linode
cd $GOPATH/src/github.com/linode
git clone https://github.com/linode/terraform-provider-linode.git

Enter the provider directory and run make. This builds the provider and puts the provider binary in the $GOPATH/bin directory:

cd $GOPATH/src/github.com/linode/terraform-provider-linode
make

Testing the provider

To run the full suite of Acceptance tests, run make int-test. Acceptance testing requires the LINODE_TOKEN environment variable to be populated with a Linode APIv4 token. See the Linode Provider documentation for more details.

Note: Acceptance tests create real resources, and often cost money to run.

make int-test

Use the following command template to execute a specific Acceptance test:

make ARGS="-run TestAccResourceVolume_basic" int-test

Use the following command template to execute the Acceptance tests within a specific package:

make TEST_TAGS="volume" int-test

There are a number of useful flags and variables to aid in debugging.

  • TF_LOG_PROVIDER - This instructs Terraform to emit provider logging messages at the given level.

  • TF_LOG - This instructs Terraform to emit logging messages at the given level.

  • TF_LOG_PROVIDER_LINODE_REQUESTS - This instructs terraform-provider-linode to output API request logs at the given level.

  • TF_SCHEMA_PANIC_ON_ERROR - This forces Terraform to panic if a Schema Set command fails.

These values (along with LINODE_TOKEN) can be placed in a .env file in the repository root to avoid repeating them on the command line.
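
For example, a .env file might look like the following (all values are placeholders):

# .env (illustrative values only)
LINODE_TOKEN=<your APIv4 token>
TF_LOG=debug
TF_LOG_PROVIDER_LINODE_REQUESTS=debug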

To filter down to logs relevant to the Linode provider, the following command can be used:

terraform apply 2> >(grep '@module=linode' >&2)

terraform-provider-linode's People

Contributors

0xch4z, adrianteri, akerl, amisiorek-akamai, apparentlymart, bdwyertech, bellislinode, btobolaski, cliedeman, damasosanoja, dependabot[bot], displague, ellisbenjamin, ezilber-akamai, grubernaut, jen20, jordanfelle, jriddle-linode, jskobos, lbgarber, lgarber-akamai, mitchellh, phillc, phinze, radeksimko, stack72, tombuildsstuff, yec-akamai, ykim-1, zliang-akamai


terraform-provider-linode's Issues

Can't create association of volume to instance inside module

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Terraform v0.12.16

  • provider.linode v1.9.0

Affected Resource(s)

  • linode_volume
  • linode_instance

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

./modules/main.tf

locals {
  key = var.key
}

resource "linode_sshkey" "mykey" {
  label = "default_key"
  ssh_key = chomp(file("pubkey"))
}

resource "linode_instance" "postgres_node" {
  region = var.region
  label = var.postgres_node_label
  private_ip = true
  type = var.node_type
  config {
    label = "postgres_config"
    kernel = "linode/latest-64bit"
    root_device = "/dev/sda"
    devices {
      sda {
        disk_label = "boot"
      }
      sdb {
        disk_label = "Swap Image"
      }
      sdc {
        disk_label = "data"
      }
      sdh {
        volume_id = linode_volume.postgres_volume.id
      }
    }
  }
  disk {
    label = "boot"
    size = "20000"
    image = var.image
    root_pass = var.root_pass
    authorized_keys = [ linode_sshkey.mykey.ssh_key ]
  }
  disk {
    label = "Swap Image"
    filesystem = "swap"
    size = "8192"
  }
  disk {
    label = "pgdata"
    filesystem = "ext4"
    size = "53000"
  }
  boot_config_label = "postgres_config"
}

resource "linode_volume" "postgres_volume" {
  label = var.pgvollabel
  region = var.region
  size = var.volume_size
}

./modules/variables.tf

variable "key" {
  description = "Public SSH Key's path."
}

variable "key_label" {
  description = "new SSH key label"
}

variable "image" {
  description = "Default Image"
  default = "linode/ubuntu18.04"
}

variable "postgres_node_label" {
  description = "Label for the Postgres Linode"
  default = "postgres-host"
}

variable "region" {
  description = "The region where your Linode will be located."
  default = "us-east"
}

variable "node_type" {
  description = "Node Type"
  default = "g6-standard-2"
}

variable "authorized_keys" {
  description = "SSH Keys to use for the Linode."
  type = list
}

variable "root_pass" {
  description = "Your Linode's root user's password."
}

variable "stackscript_id" {
  description = "Stackscript ID."
}

variable "stackscript_data" {
  description = "Map of required StackScript UDF data."
  type = map
  default = {}
}

variable "volume_size" {
  description = "Default size of the Data Volume"
  default = 40
}

variable "pgvollabel" {
  description = "Label for the Postgres Data Volume"
}

./modules/outputs.tf
output "linode_sshkey" {
  value = linode_sshkey.mykey.ssh_key
}

main.tf

provider "linode" {
  token = var.token
}

resource "linode_sshkey" "mikhail" {
  label = "Mikhail"
  ssh_key = chomp(file("pubkey"))
}

data "linode_profile" "mikhailveygman" {}

module "postgres_primary" {
  source = "./modules/postgres"
  key = var.ssh_key
  key_label = var.ssh_key_label
  image = var.default_image
  postgres_node_label = var.postgres_primary_label
  pgvollabel = var.postgres_primary_volume_label
  node_type = var.default_node_type
  root_pass = var.root_pass
  authorized_keys = [ module.postgres_primary.linode_sshkey ]
  stackscript_id = module.postgres_primary.stackscript
  stackscript_data = {
    "my_password" = var.stackscript_data["my_password"]
    "my_userpubkey" = var.stackscript_data["my_userpubkey"]
    "my_hostname" = var.stackscript_data["postgres_primary_host"]
    "my_username" = var.stackscript_data["my_username"]
  }
}

module "postgres_backup" {
  source = "./modules/postgres"
  key = var.ssh_key
  key_label = var.ssh_key_label
  image = var.default_image
  postgres_node_label = var.postgres_backup_label
  pgvollabel = var.postgres_backup_volume_label
  node_type = var.default_node_type
  root_pass = var.root_pass
  authorized_keys = [ module.postgres_backup.linode_sshkey ]
  stackscript_id = module.postgres_backup.stackscript
  stackscript_data = {
    "my_password" = var.stackscript_data["my_password"]
    "my_userpubkey" = var.stackscript_data["my_userpubkey"]
    "my_hostname" = var.stackscript_data["postgres_backup_host"]
    "my_username" = var.stackscript_data["my_username"]
  }
}

variables.tf


variable "token" {}
variable "root_pass" {}
variable "ssh_key" {}
variable "region" {
  default = "us-east"
}
variable "default_node_type" {}
variable "default_image" {}
variable "stackscript_data" {}
variable "nano_node" {}
variable "ssh_key_label" {
  default = "Default SSH Key Label"
  description = "Key Label for SSH Key"
}
variable "default_label" {
  description = "Default Node Label"
  default = "default_host"
}

variable "postgres_primary_label" {
  description = "Label for Primary Postgres Host"
  default = "postgres_primary"
}

variable "postgres_backup_label" {
  description = "Label for Backup Postgres Host"
  default = "postgres_backup"
}

variable "postgres_primary_volume_label" {
  description = "Default Label for the Postgres Primary Volume"
  default = "postgres_primary_volume_label"
}

variable "postgres_backup_volume_label" {
  description = "Default Label for the Postgres backup volume"
  default = "postgres_backup_volume_label"
}

variable "pg_volume_size" {
  description = "Default Volume Size for Postgres Data Volume"
  default = 40
}

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

The volume_id should be associated with the linode_instance as /dev/sdh.

Actual Behavior

Error: Error mapping disk label data to ID

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:
terraform apply

Important Factoids

References

Three character tag names are not allowed

Terraform Version

Terraform v0.11.10
+ provider.aws v1.59.0
+ provider.linode v1.5.0
+ provider.null v2.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

Terraform Configuration Files

 tags = ["dev", "appserver"]

Expected Behavior

The instance should be created with a 3-character tag. Alternatively, the error message is incorrect and 4-character tag names are the actual minimum.

Actual Behavior

Error creating a Linode Instance: [400] [tags[0]] Tag 'dev' must be between 3 and 50 characters
4-character tag names do work.

Steps to Reproduce

  1. Create a Terraform config with a 3-character tag name on an instance.
  2. terraform apply

Resizing LKE Pool in a cluster deletes the entire pool and creates a new one

Terraform Version

Terraform v0.13.5

  • provider registry.terraform.io/linode/linode v1.13.4

Affected Resource(s)

  • linode_lke_cluster

Terraform Configuration Files

resource "linode_lke_cluster" "lke-meltan-cluster" {
  label       = "lke-meltan-cluster"
  k8s_version = "1.18"
  region      = "ap-northeast"
  tags        = ["52Poké"]

  pool {
    type  = "g6-standard-2"
    count = 3
  }
}

Expected Behavior

When changing count in a pool to a larger number, existing nodes should be unaffected and only new ones created; when changing to a lower number, only the excess nodes should be deleted.

Actual Behavior

When I change count to either a larger or a smaller number, the entire pool is deleted and recreated. This behavior differs from resizing a pool in the Linode Cloud Manager.

Error: failed to create Firewall: [404] Not found

Terraform Version

Terraform v0.13.4

  • provider registry.terraform.io/hashicorp/tls v2.2.0
  • provider registry.terraform.io/linode/linode v1.13.2

Affected Resource(s)

  • linode_firewall

Terraform Configuration Files

resource "tls_private_key" "key1" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "linode_sshkey" "use_key" {
  label   = "use_key"
  ssh_key = replace(tls_private_key.key1.public_key_openssh, "\n", "")
  # ssh_key = tls_private_key.key1.public_key_pem
}

resource "linode_instance" "webserver1" {
  label           = "simple_instance"
  image           = "linode/ubuntu20.04"
  region          = "ap-west"
  type            = "g6-standard-1"
  authorized_keys = [linode_sshkey.use_key.ssh_key]
  root_pass       = "terr4form-test"

  group = "new-node"
  tags  = ["prod"]

  # swap_size = 256
  private_ip = true
}


resource "linode_firewall" "firewall_security" {
  depends_on = [linode_instance.webserver1]

  label = "firewall1"
  tags  = ["prod"]

  inbound {
    protocol  = "TCP"
    ports     = ["80"]
    addresses = ["0.0.0.0/0"]
  }

  outbound {
    protocol  = "TCP"
    ports     = ["80"]
    addresses = ["0.0.0.0/0"]
  }

  linodes = [linode_instance.webserver1.id]

}

Debug Output

tls_private_key.key1: Creating...
tls_private_key.key1: Creation complete after 2s [id=1aa3afe3b13461657774ce3afe4490936c383358]
linode_sshkey.use_key: Creating...
linode_sshkey.use_key: Creation complete after 1s [id=70081]
linode_instance.webserver1: Creating...
linode_instance.webserver1: Still creating... [10s elapsed]
linode_instance.webserver1: Still creating... [20s elapsed]
linode_instance.webserver1: Still creating... [30s elapsed]
linode_instance.webserver1: Still creating... [40s elapsed]
linode_instance.webserver1: Still creating... [50s elapsed]
linode_instance.webserver1: Creation complete after 58s [id=22351655]
linode_firewall.firewall_security: Creating...

Error: failed to create Firewall: [404] Not found

  on node_mumbai_linode.tf line 39, in resource "linode_firewall" "firewall_security":
  39: resource "linode_firewall" "firewall_security" {

Panic Output

None

Expected Behavior

Create a firewall attached to the created Instance.

Actual Behavior

Throws a 404 Not Found error.

Steps to Reproduce

Use the basic linode setup, and use the above scripts.

Linode instance downsize doesn't work

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

Terraform v0.11.8

  • provider.linode v1.3.0
  • provider.null v1.0.0
  • provider.random v2.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

Modify the instance type from "g6-standard-1" to "g6-nanode-1".

log:

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

module.newtest.linode_instance.linode-template: Modifying... (ID: 11668149)
config.0.root_device: "/dev/root" => "/dev/sda"
disk.1.size: "43008" => "17408"
type: "g6-standard-1" => "g6-nanode-1"

Error: Error applying plan:

1 error(s) occurred:

  • module.newtest.linode_instance.linode-template: 1 error(s) occurred:

  • linode_instance.linode-template: Error resizing instance 11668149: [400] Linode has allocated more disk than the new service plan allows. Delete or resize disks smaller.

Expected Behavior

The instance should be downsized.

Actual Behavior

The resize failed with the error above.
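
A workaround consistent with the error message (a sketch, not taken from this issue) is to shrink the disk in one apply and change the type in a second apply. The resource name and root_pass variable here are illustrative:

resource "linode_instance" "example" {
  type   = "g6-standard-1" # keep the old type for the first apply
  region = "us-east"

  disk {
    label     = "disk"
    image     = "linode/ubuntu18.04"
    root_pass = var.root_pass # hypothetical variable
    size      = 17408         # first apply: shrink the disk to fit the target plan
  }
}

# second apply: change type to "g6-nanode-1" once the disk fits the smaller plan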

Unauthorized error should be propagated when thrown while creating disk/configs

Target:

  • Terraform v0.12.18
  • provider.linode v1.9.1

Affected Resource:

  • linode_instance

Terraform Configuration Files

provider "linode" {
    token = var.token
}

resource "linode_instance" "cluster01-node01" {
    boot_config_label = "My Debian 10 Disk Profile"
    label = "cluster01-node01"
    private_ip = true
    region = "eu-central"
    type = "g6-standard-4"
    watchdog_enabled = true
    swap_size = null

    config {
        devices {
            sda {
                disk_label = "Debian 10 Disk"
            }
            sdb {
                disk_label = "100G"
            }
        }
        helpers {
            devtmpfs_automount = false
            distro             = false
            modules_dep        = false
            network            = true
            updatedb_disabled  = true
        }
        kernel = "linode/grub2"
        label = "My Debian 10 Disk Profile"
        memory_limit = -1
        run_level = "default"
        virt_mode = "fullvirt"
        root_device = "/dev/sda"
    }

    disk {
        label = "Debian 10 Disk"
        image = "linode/debian10"
        read_only = false
        filesystem = "ext4"
        size = 1024*1024*50
    }
    disk {
        label = "100G"
        read_only = false
        size = 1024*1024*100
        filesystem = "ext4"
    }

    timeouts {}
}

Debug Output of "terraform import linode_instance.cluster01-node01 "

{
  "version": 4,
  "terraform_version": "0.12.18",
  "serial": 1,
  "lineage": "020ed547-d80e-ed6e-7996-ce4bc81a8bc3",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "linode_instance",
      "name": "cluster01-node01",
      "provider": "provider.linode",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "alerts": [
              {
                "cpu": 360,
                "io": 10000,
                "network_in": 10,
                "network_out": 10,
                "transfer_quota": 80
              }
            ],
            "authorized_keys": null,
            "authorized_users": null,
            "backup_id": null,
            "backups": [
              {
                "enabled": null,
                "schedule": null
              }
            ],
            "backups_enabled": null,
            "boot_config_label": null,
            "config": [],
            "disk": [],
            "group": "",
            "id": "19046848",
            "image": null,
            "ip_address": "139.162.148.162",
            "ipv4": [
              "139.162.148.162",
              "192.168.164.228"
            ],
            "ipv6": "2a01:7e01::f03c:92ff:febb:73e1/64",
            "label": "cluster01-node01",
            "private_ip": true,
            "private_ip_address": "192.168.164.228",
            "region": "eu-central",
            "root_pass": null,
            "specs": [
              {
                "disk": 163840,
                "memory": 8192,
                "transfer": 5000,
                "vcpus": 4
              }
            ],
            "stackscript_data": null,
            "stackscript_id": null,
            "status": "offline",
            "swap_size": 0,
            "tags": [],
            "timeouts": {
              "create": null,
              "delete": null,
              "update": null
            },
            "type": "g6-standard-4",
            "watchdog_enabled": true
          },
          "private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6NjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjEyMDAwMDAwMDAwMDB9LCJzY2hlbWFfdmVyc2lvbiI6IjAifQ=="
        }
      ]
    }
  ]
}

Expected Behavior

The "config" and the disks should be created in Linode

Actual Behavior

The Linode configuration and disks are not created at all, the system cannot boot.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Set the "token" variable in your configuration file
  2. terraform apply

Error importing domain_record resource

Hi there,

I'm trying to import our current domain configuration into Terraform so we can manage it there. I'm facing some errors; I hope someone can help!

Terraform Version

➜  git:(master) ✗ terraform -v
Terraform v0.11.8
+ provider.consul v2.2.0
+ provider.linode v1.4.0

Affected Resource(s)

  • linode_domain
  • linode_domain_record

Terraform Configuration Files

##  Domain 
resource "linode_domain" "my_awesome_domain" {
  domain    = "myawesomedomain.com"
  soa_email = "[email protected]"
  type      = "master"
}

## Domain resource
resource "linode_domain_record" "a_wildcard" {
  domain_id   = "123456"
  name        = "*"
  record_type = "A"
  target      = "XX.XX.XX.XX"
}

Debug Output

➜  git:(master) ✗ terraform import linode_domain.my_awesome_domain 123456
linode_domain.my_awesome_domain Importing from ID "123456"...
linode_domain.my_awesome_domain: Import complete!
Imported linode_domain (ID: 123456)
linode_domain.my_awesome_domain: Refreshing state... (ID: 123456)

➜  git:(master) ✗ terraform import linode_domain_record.a_wildcard 123456
linode_domain_record.a_wildcard: Importing from ID "123456"...

Error: linode_domain_record.a_wildcard (import id: 123456): import linode_domain_record.a_wildcard (id: 123456): nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform.

Panic Output

No panic.

Expected Behavior

Imported domain record a_wildcard

Actual Behavior

Throws an unclear error.

Steps to Reproduce

  1. terraform import domain
  2. terraform import domain_resource

Important Factoids

References

Thank you very much!

terraform always report change "/dev/root" => "/dev/sda"

Terraform Version

Terraform v0.11.8

  • provider.linode v1.3.0
  • provider.null v1.0.0
  • provider.random v2.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

Terraform Configuration Files

resource "linode_instance" "bare" {
...
config {
label = "boot_config"
kernel = "linode/grub2"

devices {
  sda = {
    disk_label = "root"
  }

  sdb = {
    disk_label = "data"
  }
}

root_device = "/dev/sda"

}
}

Debug Output

terraform plan
Terraform will perform the following actions:

~ module.cluster1-master.linode_instance.bare
config.0.root_device: "/dev/root" => "/dev/sda"

~ module.cluster1-workers.linode_instance.bare[0]
config.0.root_device: "/dev/root" => "/dev/sda"

~ module.cluster1-workers.linode_instance.bare[1]
config.0.root_device: "/dev/root" => "/dev/sda"

~ module.newtest-workers.linode_instance.bare
config.0.root_device: "/dev/root" => "/dev/sda"

~ module.newtest.linode_instance.bare
config.0.root_device: "/dev/root" => "/dev/sda"

Expected Behavior

Apply the said actions.

Actual Behavior

The actions are always there, no matter how many times they are applied.
It's just annoying. Instances work as expected.

Error creating a Linode ObjectStorageBucket: [404] Not found

Terraform Version

Terraform v0.12.17

  • provider.linode v1.9.1

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_object_storage_bucket

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "linode_object_storage_bucket" "storage-a" {
  cluster = "us-east-1"
  label   = "storage-a"
}

Debug Output

https://gist.github.com/matthewbaggett/7779948f66c49bae6b38833488d7601e

Expected Behavior

To create a new storage bucket called storage-a

Actual Behavior


  on linode-bug.tf line 1, in resource "linode_object_storage_bucket" "storage-a":
   1: resource "linode_object_storage_bucket" "storage-a" {

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply using the HCL above.

Error listing object storage clusters: [404] Not found

Terraform Version

Terraform v0.12.17

  • provider.linode v1.9.1

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_object_storage_cluster

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

data "linode_region" "region" {
  id = "us-east"
}

data "linode_object_storage_cluster" "primary" {
  id = "${data.linode_region.region.id}-1"
}

resource "linode_object_storage_bucket" "storage-b" {
  cluster = data.linode_object_storage_cluster.primary.id
  label   = "storage-b"
}

Debug Output

https://gist.github.com/matthewbaggett/a7c2e4fbe01591aef62ac9426ddfb0ed

Expected Behavior

To find a storage cluster and create a new storage bucket called storage-b

Actual Behavior


  on linode-bug-2.tf line 5, in data "linode_object_storage_cluster" "primary":
   5: data "linode_object_storage_cluster" "primary" {

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply using the HCL above.

swap_size is ignored when resizing a linode

Hi there,

swap_size is ignored when resizing (enlarging) a Linode, even though the plan shows that it would be increased. This is probably related to #42 and could be mitigated at the same time. If the Linode resize operation honours swap_size, then implementing this when enlarging an instance should be trivial. (The API may disregard it, though, which may be the cause of this behaviour.)

Terraform Version

+ provider.linode v1.6.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

Expected Behavior

Increasing the swap size while also selecting a larger instance size should result in having the new swap size after the resize operation with the disk expanding to the remaining free space (i.e. what remains with the larger, new swap).

Actual Behavior

swap size remains unchanged.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

Is there anything atypical about your account that we should know? For example: never switched to metered billing? Beta or restricted privileges?

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

Include relevant links to Linode docs in the Terraform provider docs

Features: images, sshkeys, data resources

This PR represents the final PRs against my deprecated personal repository:

This makes the following config stanzas possible:

// new data resources
data "linode_region" "main" {
  id = "us-east"
}
data "linode_instance_type" "default" {
  id = "g6-nanode-1"
}
data "linode_image" "ubuntu" {
  id = "linode/ubuntu18.04"
}
data "linode_sshkey" "foo" {
  label = "foo"
}

// new resource
resource "linode_sshkey" "foo" {
  label = "foo"
  ssh_key = "${chomp(file("~/.ssh/id_rsa.pub"))}"
}

// root_pass not needed, key can reference sshkey item
resource "linode_instance" "foobaz" {
    type = "g6-nanode-1"
    region = "us-west"
    authorized_keys = [ "${linode_sshkey.foo.ssh_key}" ] 
}

// new resource
resource "linode_image" "foobar" {
    label = "foo-volume"
    description = "My new disk image"
    linode_id = "${linode_instance.foobaz.id}"
    disk_id = "${linode_instance.foobaz.disk.0.id}"
}

Instance tags are not idempotent

Terraform Version

terraform version
Terraform v0.11.10
+ provider.linode v1.2.0
+ provider.null v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

Terraform Configuration Files

resource "linode_instance" "mmserver" {
  count = "${var.mmserver_count}"
  image = "${var.image}"
  label = "${var.mmserver_name}-${count.index}"
  group = "${var.group}"
  tags =  ["${var.tags}"]
  region = "${var.region}"
  type = "${var.type}"
  swap_size = "${var.swap_size}"
  authorized_keys = ["${var.ssh_key}"]
  root_pass = "${var.root_pass}"
  private_ip = true
}

Expected Behavior

Subsequent terraform apply runs should show no changes.

Actual Behavior

Instance tags are reordered on every apply.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Build a terraform Linode instance resource that includes tags.
  2. terraform apply
  3. Run terraform apply again.
  4. Tags will be rewritten.

Support Versioning & Lifecycle Rules For Object Storage Buckets

Description

Being able to configure a bucket's versioning and lifecycle rules is extremely important for any serious production use case. See DigitalOcean's Terraform Provider (Spaces Bucket Resource) for an example of what support could look like.

New or Affected Resource(s)

  • linode_object_storage_bucket

Potential Terraform Configuration

See: https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/resources/spaces_bucket#versioning
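
For illustration only, support might look something like this (hypothetical attributes modeled on the DigitalOcean resource linked above; none of these exist in the Linode provider at the time of this issue):

resource "linode_object_storage_bucket" "example" {
  cluster = "us-east-1"
  label   = "example"

  versioning = true # hypothetical attribute

  lifecycle_rule { # hypothetical block
    id      = "expire-old-logs"
    enabled = true
    prefix  = "logs/"

    expiration {
      days = 90
    }
  }
}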

linode_domain_record always updates SRV name and target values

SRV domain records are requiring updates on every apply, even when there are no config changes.

Terraform Version

$ terraform -v
Terraform v0.12.20
+ provider.linode v1.9.1

Affected Resource(s)

  • linode_domain_record

Terraform Configuration Files

terraform {
  required_version = ">= 0.12"
}

provider "linode" {
  version = "~> 1.8"
}

variable "domain"           { default = "CHANGEME.4zbug.stj.io" }
variable "domain_soa_email" { default = "[email protected]" }

resource "linode_domain" "root" {
  type        = "master"
  domain      = var.domain
  soa_email   = var.domain_soa_email
}

resource "linode_domain_record" "a-oli" {
  domain_id   = linode_domain.root.id
  name        = "oli"
  record_type = "A"
  target      = "127.0.0.1"
}

resource "linode_domain_record" "srv-oli" {
  domain_id   = linode_domain.root.id
  name        = "oli"
  record_type = "SRV"
  target      = "oli-backend-1"
  service     = "oli"
  protocol    = "tcp"
  port        = 1001
  priority    = 10
  weight      = 0

  lifecycle {
    ignore_changes = [
      priority,
      weight,
    ]
  }
}

Debug Output

CLI output for first and second apply.
https://gist.github.com/stvnjacobs/0168c72ff1f68ad4ea19868e92f3e465

The trace log output for the first and second apply.
https://gist.github.com/stvnjacobs/267a4155fa1f44ed3e3ba3ca5cd35d86#file-01-apply-log
https://gist.github.com/stvnjacobs/267a4155fa1f44ed3e3ba3ca5cd35d86#file-02-apply-log

Expected Behavior

There should be no updates required for second apply.

Actual Behavior

Updates to SRV name and target records are required for second apply.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Set domain variable
  2. terraform apply
  3. terraform apply

Unable to pass an ssh file value as an authorized key due to multi-line key

Using the built-in Terraform file function, an operator can retrieve the value of an SSH key and pass it into a variable or resource as a string. Terraform preserves the newline (\n) at the end of the retrieved string.

The linode_instance resource does not allow multi-line SSH keys, so the additional \n at the end of the retrieved key causes resource creation to fail (a workaround sketch follows at the end of this issue).

I build AWS infrastructure using this pattern, and the AWS resources can create key pairs using Terraform's file function. The linode_instance resource should be able to do the same, as hardcoding SSH keys in code is typically a bad pattern.

Terraform Version

Terraform v0.11.10

  • provider.linode v1.2.0

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

variable "pub_key" {
  default     = "../files/id_rsa.pub"
}

resource "linode_instance" "instance" {
  image           = "linode/ubuntu18.04"
  region          = "us-central"
  type            = "g6-nanode-1"
  authorized_keys = ["${var.pub_key}"]
}

Expected Behavior

The linode_instance resource can use the ssh_key retrieved by Terraform's file function and create the Linode instance.

Actual Behavior

Resource creation fails with the following error: * linode_instance.instance: Error creating a Linode Instance: [400] [authorized_keys] SSH Key 1 must not be multi-lined.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Using the code I posted above
  2. terraform apply
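
A common workaround (a sketch, mirroring the chomp() pattern used in other configurations in this tracker) is to strip the trailing newline before passing the key:

resource "linode_instance" "instance" {
  image           = "linode/ubuntu18.04"
  region          = "us-central"
  type            = "g6-nanode-1"
  # chomp() removes the trailing newline that file() preserves
  authorized_keys = ["${chomp(file(var.pub_key))}"]
}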

cannot access the value of the object storage key

Terraform Version

Terraform v0.12.20

Affected Resource(s)

linode_object_storage_key

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

provider "linode"{
  token = var.linode_token
}

variable "linode_token" {
  default = ""
}

resource "linode_object_storage_key" "storage_keysxx" {
    label = "staging access - created by terraform"
}


output "storage_keys" {
    value = ["${linode_object_storage_key.storage_keysxx.access_key}","${linode_object_storage_key.storage_keysxx.secret_key}"]
}

Debug Output

https://gist.github.com/pilvikala/3bcb3836256677166484cded7b0039eb

Panic Output

N/A

Expected Behavior

The output prints the secret key and the secret key is in the state file. This way the secret key can be pushed to another service that can use it.

Actual Behavior

Terraform prints [REDACTED] instead of the secret key and the same value is in the state file.

Steps to Reproduce

  1. terraform apply

Important Factoids

N/A

References

N/A

build from source (make build) failing

Terraform Version

Terraform v0.12.7-dev

Expected Behavior

What should have happened?
make build should have completed without any errors

Actual Behavior

What actually happened?

==> Checking that code complies with gofmt requirements...
go install
verifying github.com/hashicorp/[email protected]: checksum mismatch
	downloaded: h1:It2vmod2dBMB4+r+aUW2Afx0HlftyUwzNsNH3I2vrJ8=
	go.sum:     h1:B5lTDWSdEbbjxrTZQ8GdsDlH7FnDrIvF42GykbhQ6tA=
GNUmakefile:10: recipe for target 'build' failed
make: *** [build] Error 1

Steps to Reproduce

  • git clone the repository
  • make build

Important Factoids

I guess this is pretty much the exact same issue that was observed in https://github.com/terraform-providers/terraform-provider-digitalocean/issues/246#issuecomment-498311720. I believe (re-)generating go.sum (rather than updating it) should fix this issue.
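
A minimal sketch of that regeneration (assuming a module-enabled checkout):

# remove the stale checksums and let Go recompute them
rm go.sum
GO111MODULE=on go mod tidy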

TestAccLinodeVolume_detached test is flaky

During tests (TestAccLinodeVolume_detached at least), the attach POST can sometimes occur after the DELETE instance command, resulting in:

------- Stdout: -------
=== RUN   TestAccLinodeVolume_detached
=== PAUSE TestAccLinodeVolume_detached
=== CONT  TestAccLinodeVolume_detached
--- FAIL: TestAccLinodeVolume_detached (34.03s)
	testing.go:538: Step 2 error: Error applying: 1 error occurred:
			* linode_volume.foobar: 1 error occurred:
			* linode_volume.foobar: Error attaching Linode Volume 18442 to Linode Instance 11775199: [400] [linode_id] Linode is not active	

FAIL

The API call order, according to the logs when run with -count 1, appears to be:

DELETE instance/id
  200
GET volume/id
  200
  (active, null linodeid, no tags)
PUT volume/id (tag)
  200
  (active, null linodeid, tag)
POST volume/id/attach (linodeid)
  400
{
   "errors": [
      {
         "field": "linode_id",
         "reason": "Linode is not active"
      }
   ]
}

Unexpected Content-Type: Expected: application/json, Received: text/html

Terraform Version

Terraform v0.12.24
+ provider.linode v1.9.1
+ provider.random v2.2.1
+ provider.template v2.1.2

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

link

Debug Output

The relevant section appears to be this:

2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: 2020/04/24 21:50:11 [DEBUG] Linode API Response Details:
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: ---[ RESPONSE ]--------------------------------------
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: HTTP/2.0 502 Bad Gateway
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: Content-Length: 166
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: Content-Type: text/html
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: Date: Fri, 24 Apr 2020 21:51:44 GMT
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: Server: nginx
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: 
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: <html>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: <head><title>502 Bad Gateway</title></head>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: <body bgcolor="white">
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: <center><h1>502 Bad Gateway</h1></center>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: <hr><center>nginx</center>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: </body>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: </html>
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: 
2020-04-24T21:50:11.647Z [DEBUG] plugin.terraform-provider-linode_v1.9.1_x4: -----------------------------------------------------

Expected Behavior

terraform apply should have completed successfully.

Actual Behavior

An error was thrown saying that Linode's API had returned an unexpected response.

Steps to Reproduce

Unfortunately, this issue is very flaky and hard to reproduce reliably. There may not be anything to do here as far as Terraform is concerned, but I wanted to raise this as an issue because it happened to me a handful of times this afternoon.

Can't set TTL of domain record to 0

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.
terraform -v
Terraform v0.11.13

  • provider.linode v1.5.0

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_domain_record

Terraform Configuration Files

An imported domain record resource:

provider "linode" {
    token = "my_token"
}

resource "linode_domain_record" "example_label" {
    domain_id = "1210739"
    name = "testterraformimport"
    record_type = "A"
    target = "23.92.18.235"
    ttl_sec = "0"
}

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

TF_LOG=debug terraform plan
2019/03/21 18:25:09 [INFO] Terraform version: 0.11.13
2019/03/21 18:25:09 [INFO] Go runtime version: go1.12
2019/03/21 18:25:09 [INFO] CLI args: []string{"/usr/local/Cellar/terraform/0.11.13/bin/terraform", "plan"}
2019/03/21 18:25:09 [DEBUG] Attempting to open CLI config file: /Users/nmelehan/.terraformrc
2019/03/21 18:25:09 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/03/21 18:25:09 [INFO] CLI command args: []string{"plan"}
2019/03/21 18:25:09 [INFO] command: empty terraform config, returning nil
2019/03/21 18:25:09 [DEBUG] command: no data state file found for backend config
2019/03/21 18:25:09 [DEBUG] New state was assigned lineage "438640f2-88ff-7d5e-bb54-1ece2f261d9d"
2019/03/21 18:25:09 [INFO] command: backend initialized: <nil>
2019/03/21 18:25:09 [DEBUG] checking for provider in "."
2019/03/21 18:25:09 [DEBUG] checking for provider in "/usr/local/Cellar/terraform/0.11.13/bin"
2019/03/21 18:25:09 [DEBUG] checking for provider in ".terraform/plugins/darwin_amd64"
2019/03/21 18:25:09 [DEBUG] found provider "terraform-provider-linode_v1.5.0_x4"
2019/03/21 18:25:09 [DEBUG] found valid plugin: "linode", "1.5.0", "/Users/nmelehan/Desktop/terraformdomainrecord/.terraform/plugins/darwin_amd64/terraform-provider-linode_v1.5.0_x4"
2019/03/21 18:25:09 [DEBUG] checking for provisioner in "."
2019/03/21 18:25:09 [DEBUG] checking for provisioner in "/usr/local/Cellar/terraform/0.11.13/bin"
2019/03/21 18:25:09 [DEBUG] checking for provisioner in ".terraform/plugins/darwin_amd64"
2019/03/21 18:25:09 [INFO] command: backend <nil> is not enhanced, wrapping in local
2019/03/21 18:25:09 [INFO] backend/local: starting Plan operation
2019/03/21 18:25:09 [INFO] terraform: building graph: GraphTypeInput
2019/03/21 18:25:09 [DEBUG] Attaching resource state to "linode_domain_record.example_label": &terraform.ResourceState{Type:"linode_domain_record", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc00033a4b0), Deposed:[]*terraform.InstanceState{}, Provider:"provider.linode", mu:sync.Mutex{state:0, sema:0x0}}
2019/03/21 18:25:09 [TRACE] Graph after step *terraform.AttachStateTransformer:

linode_domain_record.example_label - *terraform.NodeAbstractResource
2019/03/21 18:25:09 [DEBUG] resource linode_domain_record.example_label using provider provider.linode
2019/03/21 18:25:09 [TRACE] Graph after step *terraform.ProviderTransformer:

linode_domain_record.example_label - *terraform.NodeAbstractResource
  provider.linode - *terraform.NodeApplyableProvider
provider.linode - *terraform.NodeApplyableProvider
2019/03/21 18:25:09 [DEBUG] ReferenceTransformer: "linode_domain_record.example_label" references: []
2019/03/21 18:25:09 [DEBUG] ReferenceTransformer: "provider.linode" references: []
2019-03-21T18:25:09.918-0400 [DEBUG] plugin: starting plugin: path=/Users/nmelehan/Desktop/terraformdomainrecord/.terraform/plugins/darwin_amd64/terraform-provider-linode_v1.5.0_x4 args=[/Users/nmelehan/Desktop/terraformdomainrecord/.terraform/plugins/darwin_amd64/terraform-provider-linode_v1.5.0_x4]
2019-03-21T18:25:09.921-0400 [DEBUG] plugin: waiting for RPC address: path=/Users/nmelehan/Desktop/terraformdomainrecord/.terraform/plugins/darwin_amd64/terraform-provider-linode_v1.5.0_x4
2019-03-21T18:25:09.930-0400 [DEBUG] plugin.terraform-provider-linode_v1.5.0_x4: plugin address: timestamp=2019-03-21T18:25:09.930-0400 network=unix address=/var/folders/ln/49ppf5wx6433g_q4jvyv1pgm0000gp/T/plugin431929432
2019/03/21 18:25:09 [INFO] terraform: building graph: GraphTypeValidate
2019/03/21 18:25:09 [DEBUG] Attaching resource state to "linode_domain_record.example_label": &terraform.ResourceState{Type:"linode_domain_record", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc00033a4b0), Deposed:[]*terraform.InstanceState{}, Provider:"provider.linode", mu:sync.Mutex{state:0, sema:0x0}}
2019/03/21 18:25:09 [TRACE] Graph after step *terraform.AttachStateTransformer:

linode_domain_record.example_label - *terraform.NodeValidatableResource
2019/03/21 18:25:09 [DEBUG] ReferenceTransformer: "provider.linode" references: []
2019/03/21 18:25:09 [TRACE] Graph after step *terraform.ReferenceTransformer:

linode_domain_record.example_label - *terraform.NodeValidatableResource
  provider.linode - *terraform.NodeApplyableProvider
provider.linode - *terraform.NodeApplyableProvider
2019/03/21 18:25:09 [DEBUG] Starting graph walk: walkValidate
2019/03/21 18:25:09 [DEBUG] Attaching resource state to "linode_domain_record.example_label": &terraform.ResourceState{Type:"linode_domain_record", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc00033a4b0), Deposed:[]*terraform.InstanceState{}, Provider:"provider.linode", mu:sync.Mutex{state:0, sema:0x0}}
2019/03/21 18:25:09 [TRACE] Graph after step *terraform.AttachStateTransformer:

linode_domain_record.example_label - *terraform.NodeValidatableResourceInstance
2019/03/21 18:25:09 [DEBUG] ReferenceTransformer: "linode_domain_record.example_label" references: []
2019/03/21 18:25:09 [ERROR] root: eval: *terraform.EvalValidateResource, err: Warnings: []. Errors: [expected ttl_sec to be one of [300 3600 7200 14400 28800 57600 86400 172800 345600 604800 1209600 2419200], got 0]
2019/03/21 18:25:09 [ERROR] root: eval: *terraform.EvalSequence, err: Warnings: []. Errors: [expected ttl_sec to be one of [300 3600 7200 14400 28800 57600 86400 172800 345600 604800 1209600 2419200], got 0]
2019/03/21 18:25:09 [DEBUG] plugin: waiting for all plugin processes to complete...

Error: linode_domain_record.example_label: expected ttl_sec to be one of [300 3600 7200 14400 28800 57600 86400 172800 345600 604800 1209600 2419200], got 0


2019-03-21T18:25:09.941-0400 [DEBUG] plugin.terraform-provider-linode_v1.5.0_x4: 2019/03/21 18:25:09 [ERR] plugin: stream copy 'stderr' error: stream closed
2019-03-21T18:25:09.941-0400 [DEBUG] plugin.terraform-provider-linode_v1.5.0_x4: 2019/03/21 18:25:09 [ERR] plugin: plugin server: accept unix /var/folders/ln/49ppf5wx6433g_q4jvyv1pgm0000gp/T/plugin431929432: use of closed network connection
2019-03-21T18:25:09.942-0400 [DEBUG] plugin: plugin process exited: path=/Users/nmelehan/Desktop/terraformdomainrecord/.terraform/plugins/darwin_amd64/terraform-provider-linode_v1.5.0_x4

Expected Behavior

What should have happened?
The domain record should accept a value of 0 for the TTL. This is the value that is returned by the API for a record with a TTL set to Default in the Cloud Manager.

For my Terraform-imported domain record resource, the CLI verifies that the TTL is already 0, and it can set it to 0:

linode-cli domains records-view 1210739 12457311
┌──────────┬──────┬─────────────────────┬──────────────┬─────────┬──────────┬────────┐
│ id       │ type │ name                │ target       │ ttl_sec │ priority │ weight │
├──────────┼──────┼─────────────────────┼──────────────┼─────────┼──────────┼────────┤
│ 12457311 │ A    │ testterraformimport │ 23.92.18.235 │ 0       │ 0        │ 0      │
└──────────┴──────┴─────────────────────┴──────────────┴─────────┴──────────┴────────┘
linode-cli domains records-update --type A --ttl_sec 0 1210739 12457311
┌──────────┬──────┬─────────────────────┬──────────────┬─────────┬──────────┬────────┐
│ id       │ type │ name                │ target       │ ttl_sec │ priority │ weight │
├──────────┼──────┼─────────────────────┼──────────────┼─────────┼──────────┼────────┤
│ 12457311 │ A    │ testterraformimport │ 23.92.18.235 │ 0       │ 0        │ 0      │
└──────────┴──────┴─────────────────────┴──────────────┴─────────┴──────────┴────────┘
linode-cli domains records-update --type A --ttl_sec 300 1210739 12457311
┌──────────┬──────┬─────────────────────┬──────────────┬─────────┬──────────┬────────┐
│ id       │ type │ name                │ target       │ ttl_sec │ priority │ weight │
├──────────┼──────┼─────────────────────┼──────────────┼─────────┼──────────┼────────┤
│ 12457311 │ A    │ testterraformimport │ 23.92.18.235 │ 300     │ 0        │ 0      │
└──────────┴──────┴─────────────────────┴──────────────┴─────────┴──────────┴────────┘
linode-cli domains records-update --type A --ttl_sec 0 1210739 12457311
┌──────────┬──────┬─────────────────────┬──────────────┬─────────┬──────────┬────────┐
│ id       │ type │ name                │ target       │ ttl_sec │ priority │ weight │
├──────────┼──────┼─────────────────────┼──────────────┼─────────┼──────────┼────────┤
│ 12457311 │ A    │ testterraformimport │ 23.92.18.235 │ 0       │ 0        │ 0      │
└──────────┴──────┴─────────────────────┴──────────────┴─────────┴──────────┴────────┘

Actual Behavior

What actually happened?
Got an error from Terraform; the debug output is listed above.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a domain record resource with the TTL set to 0
  2. terraform plan

Linode nodebalancer documentation example bug

Hi,

The example on https://www.terraform.io/docs/providers/linode/r/nodebalancer_node.html doesn't work with Terraform 0.11.3 and Linode provider version 1.6.

Using that format I get the following error:

* linode_nodebalancer_node.web_nodes_https: At column 1, line 1: output of an HIL expression must be a string, or a single list (argument 1 is TypeList) in:

${linode_instance.web.*.private_ip_address}:80

Terraform Version

Terraform v0.11.11
+ provider.dme v0.1.0
+ provider.linode v1.6.0

Affected Resource(s)

  • linode_nodebalancer_node

Terraform Configuration Files

https://www.terraform.io/docs/providers/linode/r/nodebalancer_node.html

Important Factoids

Linode(.com) actually has a working example; see: https://www.linode.com/docs/applications/configuration-management/create-a-nodebalancer-with-terraform/.
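
Under Terraform 0.11, the splat list has to be reduced to a single string before it can be interpolated. A sketch of the element()-based form (the surrounding resource names and the count variable here are illustrative):

resource "linode_nodebalancer_node" "web_nodes_https" {
  count           = "${var.node_count}" # hypothetical count variable
  nodebalancer_id = "${linode_nodebalancer.web.id}"
  config_id       = "${linode_nodebalancer_config.web_https.id}"
  label           = "web-${count.index}"
  # element() selects one address from the splat list
  address         = "${element(linode_instance.web.*.private_ip_address, count.index)}:80"
}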

Change ipv4 to ip_address

Hi,

The examples use the ipv4 field of linode_instance resources.

This is incorrect, as ipv4 is now a set of strings. As a set, it can't be indexed either.

The fix is to use the ip_address attribute in place of the ipv4 field.
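
A minimal before/after sketch (resource name illustrative):

# before: fails, because ipv4 is a set of strings and cannot be indexed
address = "${linode_instance.web.ipv4[0]}"

# after
address = "${linode_instance.web.ip_address}"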

[PROPOSAL] Switch to Go Modules

As part of the preparation for Terraform v0.12, we would like to migrate all providers to use Go Modules. We plan to continue checking dependencies into vendor/ to remain compatible with existing tooling/CI for a period of time; however, Go Modules will be used for management. Go Modules is the official dependency solution for the Go programming language. We understand some providers might not want this change yet, but we encourage providers to begin looking towards the switch, as this is how we will be managing all Go projects in the future.

Would maintainers please react with 👍 for support, or 👎 if you wish to have this provider omitted from the first wave of pull requests. If your provider is in support, we would ask that you avoid merging any pull requests that mutate the dependencies while the Go Modules PR is open (in fact, a total code freeze would be even more helpful); otherwise we will need to close that PR and re-run go mod init. Once merged, dependencies can be added or updated as follows:

$ GO111MODULE=on go get github.com/some/module@master
$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

GO111MODULE=on might be unnecessary depending on your environment; this example will fetch a module @ master and record it in your project's go.mod and go.sum files. It's a good idea to tidy up afterward and then copy the dependencies into vendor/. To remove dependencies from your project, simply remove all usage from your codebase and run:

$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

Thank you sincerely for all your time, contributions, and cooperation!

Should there be a default_ssh_user attribute for the linode_image data source?

For the linode/containerlinux image, the default SSH user is core. It would be helpful if the linode_image data source exposed this so that it could be used for connections, e.g.:

data "linode_image" "changelog" {
  id = "linode/containerlinux"
}

# ... elided for brevity

resource "linode_volume" "db" {
  region = "${data.linode_region.changelog.id}"
  label = "db"
  size = 10
  linode_id = "${linode_instance.2019.id}"

  connection {
    user = "${linode_image.changelog.default_ssh_user}" # <--- Feature request
    host = "${linode_instance.2019.ip_address}"
  }
}

What do you think?

Terraform Disk Resize Does not Work

Terraform Version

Terraform v0.11.9

Affected Resource(s)

Please list the resources as a list, for example:

  • linode_instance

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Debug Output

https://gist.github.com/ellisbenjamin/e5f8666848d7d61ab2b7f83854130edd

Expected Behavior

Linode should be shut down, disk resized, and powered on.

Actual Behavior

Error: Error applying plan:

1 error(s) occurred:

* linode_instance.foobar: 1 error(s) occurred:

* linode_instance.foobar: Error waiting for resize of Instance 14471744 Disk 30040285: Linode 14471744 action disk_resize failed

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.


root@localhost:~/terraformold#

Steps to Reproduce

  1. Create a Linode instance by running terraform apply with the following config.
resource "linode_instance" "foobar" {
        label = "example2"
        type = "g6-nanode-1"
        region = "us-east"
        group = "tf_test"

        disk {
                label = "disk"
                image = "linode/ubuntu18.04"
                root_pass = "b4d_p4s5"
                size = 5000
        }

        config {
                label = "config"
                kernel = "linode/latest-64bit"
                devices {
                        sda {
                                disk_label = "disk"
                        }
                }
        }
}
  2. Update the config as follows:
resource "linode_instance" "foobar" {
        label = "example2"
        type = "g6-nanode-1"
        region = "us-east"
        group = "tf_test"

        disk {
                label = "disk"
                image = "linode/ubuntu18.04"
                root_pass = "b4d_p4s5"
                size = 4000
        }

        config {
                label = "config"
                kernel = "linode/latest-64bit"
                devices {
                        sda {
                                disk_label = "disk"
                        }
                }
        }
}

  3. Run terraform apply again.

not compatible with Terraform 0.12

Terraform Version

Terraform v0.11.14

  • provider.linode v1.6.0

Debug Output

terraform 0.12checklist
After analyzing this configuration and working directory, we have identified some necessary steps that we recommend you take before upgrading to Terraform v0.12:

  • Upgrade provider "linode" to a version that is compatible with Terraform 0.12.

    No compatible version is available for automatic installation at this time. If this provider is still supported (not archived) then a compatible release should be available soon. For more information, check for 0.12 compatibility tasks in the provider's issue tracker.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform 0.12checklist

=============

Terraform Version

Terraform v0.12.0

  • provider.linode v1.6.0

Debug Output

terraform init --upgrade

No available provider "linode" plugins are compatible with this Terraform version.

From time to time, new Terraform major releases can change the requirements for
plugins such that older plugins become incompatible.

Terraform checked all of the plugin versions matching the given constraint:
(any version)

Unfortunately, none of the suitable versions are compatible with this version
of Terraform. If you have recently upgraded Terraform, it may be necessary to
move to a newer major release of this provider. Alternatively, if you are
attempting to upgrade the provider to a new major version you may need to
also upgrade Terraform to support the new version.

Consult the documentation for this provider for more information on
compatibility between provider versions and Terraform versions.

Steps to Reproduce

  1. terraform init --upgrade
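
Once a 0.12-compatible release of the provider ships, pinning the provider version makes the upgrade explicit. A minimal sketch, assuming (unverified) that the v1.7.x series is the first Terraform 0.12-compatible line, and that a linode_token variable is defined:

provider "linode" {
  version = "~> 1.7"          # assumed first 0.12-compatible series
  token   = var.linode_token  # assumes a linode_token variable exists
}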

Linode instance upsize with disk expansion at same time doesn't work

Terraform Version

Terraform v0.11.8

  • provider.linode v1.3.0
  • provider.null v1.0.0
  • provider.random v2.0.0

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

I'm upsizing one instance and expanding its disk size at the same time.

Debug Output

~ module.newtest.linode_instance.bare
config.0.root_device: "/dev/root" => "/dev/sda"
disk.1.size: "17408" => "43008"
type: "g6-nanode-1" => "g6-standard-1"

module.newtest-workers.linode_instance.bare: Modifying... (ID: 11904746)
config.0.root_device: "/dev/root" => "/dev/sda"
module.cluster1-workers.linode_instance.bare[1]: Modifications complete after 1s (ID: 11763152)
module.cluster1-workers.linode_instance.bare[0]: Modifications complete after 1s (ID: 11763151)
module.cluster1-master.linode_instance.bare: Modifications complete after 1s (ID: 11763148)
module.newtest-workers.linode_instance.bare: Modifications complete after 1s (ID: 11904746)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 10s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 20s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 30s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 40s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 50s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m0s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m10s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m20s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m30s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m40s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 1m50s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m0s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m10s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m20s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m30s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m40s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 2m50s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m0s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m10s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m20s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m30s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m40s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 3m50s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 4m0s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 4m10s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 4m20s elapsed)
module.newtest.linode_instance.bare: Still modifying... (ID: 11904745, 4m30s elapsed)

Error: Error applying plan:

1 error(s) occurred:

  • module.newtest.linode_instance.bare: 1 error(s) occurred:

  • linode_instance.bare: Error resizing Disk 25076234: size exceeds disk size for Instance 11904745

Expected Behavior

The instance should be upsized first; then, based on the new plan's disk capacity, the disk expansion should be allowed.

Actual Behavior

First "terraform apply", the instance is upsized, then the apply failed.
I have to a 2nd time apply to make the disk expansion work.

Consider providing a list of Linode instance types/regions

I've been reading the docs for linode_instance, trying to write Terraform to match my existing instance so I can import it and manage it in Terraform.

My background: I've created a handful of Linode instances through the console, and I've spent a lot of time using Terraform to manage AWS resources, but I've never even looked at the Linode API.

I found the region and type parameters confusing:

region - (Required) This is the location where the Linode is deployed. Examples are "us-east", "us-west", "ap-south", etc. Changing region forces the creation of a new Linode Instance.

type - (Required) The Linode type defines the pricing, CPU, disk, and RAM specs of the instance. Examples are "g6-nanode-1", "g6-standard-2", "g6-highmem-16", etc.

The example values don't match anything I'm used to seeing in the Linode console. In my mind, the name of my Linode's plan is Linode 2048 (apparently g6-standard-1, which I guessed wrong the first time) and the region is London, UK.

Having a pointer to the bit of the API docs where these values are defined would be helpful.
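
For reference, both lists are available from the public API without authentication: GET https://api.linode.com/v4/regions and GET https://api.linode.com/v4/linode/types. The provider can also resolve them; a minimal sketch using the linode_region and linode_instance_type data sources, with IDs mapping the console names mentioned above:

data "linode_region" "london" {
  id = "eu-west"  # shown as "London, UK" in the console
}

data "linode_instance_type" "linode_2gb" {
  id = "g6-standard-1"  # the plan shown as "Linode 2048"
}

Their exported attributes (e.g. data.linode_region.london.country) can then be referenced elsewhere in the configuration.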

Cannot access linode_lke_cluster using terraform v0.13.5 kubernetes

Terraform Version

0.13.5

Affected Resource(s)

  • linode_lke_cluster

Terraform Configuration Files

terraform {

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13.3"
    }
    linode = {
      source  = "linode/linode"
      version = "~> 1.13.3"
    }
  }
}

provider "linode" {
  token = var.LINODE_TOKEN
}

resource "linode_lke_cluster" "k8" {
  label       = "test-123-abc-cluster"
  k8s_version = "1.18"
  region      = "eu-central"
  pool {
    type  = "g6-standard-1"
    count = 3
  }
}

provider "kubernetes" {
  host = linode_lke_cluster.k8.api_endpoints[0]
  token = linode_lke_cluster.k8.kubeconfig
  load_config_file = false
  insecure = true
}

Debug Output

https://gist.github.com/j-737casa/ce9f5e67ad6a13a89509a3d5ab58836e

Expected Behavior

Should have created secret

Actual Behavior

Does not create secret - returns unauthorized
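
A hedged observation rather than a confirmed diagnosis: kubeconfig is the base64-encoded kubeconfig document, not a bearer token, so passing it directly as token could plausibly produce an unauthorized error. A sketch of extracting the embedded token, assuming the decoded kubeconfig carries a single user entry with a token field:

provider "kubernetes" {
  host             = linode_lke_cluster.k8.api_endpoints[0]
  # decode the kubeconfig and pull out the user token (assumption: users[0].user.token exists)
  token            = yamldecode(base64decode(linode_lke_cluster.k8.kubeconfig)).users[0].user.token
  load_config_file = false
  insecure         = true
}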

Linode Instance resizing doesn't work

When an instance with managed disks is resized, the underlying disks aren't resized to accommodate. This means when someone upsizes via terraform, they have unallocated disk space that they cannot use without resizing via the GUI. When someone downsizes, the disk is not downsized, causing the operation to fail due to insufficient space.

With explicitly defined disks, resizes always occur before the Linode itself is resized, causing disk upsizes to fail.

This issue consolidates the concerns brought up in #13 and #17.

Empty "name" for A domain record

Terraform Version

v0.12.23
provider.linode >= 1.9

Affected Resource(s)

  • linode_domain_record

Terraform Configuration Files

resource "linode_domain" "jrecordbind" {
  type      = "master"
  domain    = "jrecordbind.org"
  soa_email = "[email protected]"
}

resource "linode_domain_record" "jrecordbind_A" {
  domain_id   = linode_domain.jrecordbind.id
  name        = ""
  record_type = "A"
  target      = var.deutche_welle_ip
}

Expected Behavior

Linode's DNS Manager allows me to create an "A" record without a "name"; I expect to be able to do the same via Terraform.

Actual Behavior

Terraform reports the error: Error: name is required for non-SRV records

Suggestions/workarounds are welcome.
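
One untested idea (an assumption, not confirmed provider behavior): supply the zone apex as the record name, on the theory that the API may treat the bare domain as equivalent to an empty name for apex records:

resource "linode_domain_record" "jrecordbind_A" {
  domain_id   = linode_domain.jrecordbind.id
  name        = linode_domain.jrecordbind.domain  # hypothetical workaround: FQDN instead of empty name
  record_type = "A"
  target      = var.deutche_welle_ip
}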

Linode Downsize and Upsize Disk Issue

When I configure a .tf file to change the type and disk size in Terraform, I get this error:

Error: failed to apply instance disk spec: Error resizing disk xxx: size exceeds disk size for Instance yyy

It seems that Terraform is not proceeding in order, which is first to resize the disk, then resize the VM in Linode. Any solutions?

I'm using the 0.13 terraform core and 1.12.4 linode provider version

tf file:

provider "linode" {
  token = "jjjj"
}

resource "linode_instance" "presm11" {
    label             = "label"
    region            = "us-southeast"
    type              = "g6-standard-4"
    config {
        kernel       = "linode/grub2"
        label        = " Disk Profile"
        root_device  = "/dev/sda"
        devices {
            sda {
                disk_label = "Ubuntu 16.04 LTS Disk"
            }
            sdb {
                disk_label = "512 MB Swap Image"
            }
        }
    }
    disk {
        label            = "Ubuntu 16.04 LTS Disk"
        size             = 160000
    }
    disk {
        label            = "512 MB Swap Image"
        size             = 512
    }
}
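
Until the ordering is fixed, one workaround (hedged, and requiring two applies rather than one) is to stage the change: first shrink the disk while keeping the current type, then change the type in a separate apply. A sketch of the first step, with illustrative sizes:

resource "linode_instance" "presm11" {
    label  = "label"
    region = "us-southeast"
    type   = "g6-standard-4"  # apply 1: keep the current type
    # config block unchanged from above, omitted here for brevity
    disk {
        label = "Ubuntu 16.04 LTS Disk"
        size  = 120000  # illustrative smaller size that fits the target plan
    }
    disk {
        label = "512 MB Swap Image"
        size  = 512
    }
}

# Apply 2 (separate run): change only the type, e.g. type = "g6-standard-2"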

terraform fails if running apply twice with missing tfstate file

Running apply again to regenerate the tfstate file fails because Terraform tries to create a new linode_instance with an existing label.

Affected Resource(s)

  • linode_instance

Expected Behavior

Terraform should have skipped the instance creation itself and created the tfstate file again.

Actual Behavior

Error: Error creating a Linode Instance: [400] [label] Label must be unique among your linodes

Steps to Reproduce

  1. Add a simple linode_instance resource;
  2. Run terraform apply
  3. Delete tfstate generated files
  4. Run apply again
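
As a recovery path, the existing Linode can be re-adopted into a fresh state file with terraform import instead of being recreated (the resource address and instance ID below are placeholders):

terraform import linode_instance.<resource_name> <instance_id>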

Hard to understand errors when enabling Network Helper for linode/containerlinux

Enabling the Network Helper for the linode/containerlinux image results in difficult-to-understand errors, e.g.:

linode_instance.2019: "authorized_users": conflicts with config
linode_instance.2019: "config": conflicts with image
linode_instance.2019: "image": conflicts with config

According to the Use CoreOS Container Linux on Linode guide, Network Helper is not compatible [with CoreOS]. Would it make sense to fail with a more helpful error?

linode_instance.2019: "config.network": incompatible with image

Terraform Version

v0.11.10

Affected Resource(s)

  • linode_instance v1.3.0

Terraform Configuration Files

data "linode_image" "changelog" {
  id = "linode/containerlinux"
}

resource "linode_instance" "2019" {
  # ... elided for brevity
  image = "${data.linode_image.changelog.id}"
  private_ip = true

  config {
    network = true
  }
}

Debug Output

2019/01/02 12:34:39 [INFO] Terraform version: 0.11.10  
2019/01/02 12:34:39 [INFO] Go runtime version: go1.11.1
2019/01/02 12:34:39 [INFO] CLI args: []string{"/usr/local/Cellar/terraform/0.11.10/bin/terraform", "plan"}
2019/01/02 12:34:39 [DEBUG] found provider "terraform-provider-linode_v1.3.0_x4"
2019/01/02 12:34:39 [DEBUG] found provider "terraform-provider-null_v1.0.0_x4"
2019/01/02 12:34:39 [DEBUG] found provider "terraform-provider-template_v1.0.0_x4"

# ... elided for brevity

linode_instance.2019 - *terraform.NodeValidatableResourceInstance
2019/01/02 12:34:39 [DEBUG] ReferenceTransformer: "linode_instance.2019" references: []
2019/01/02 12:34:39 [ERROR] root: eval: *terraform.EvalValidateResource, err: Warnings: []. Errors: ["config": conflicts with image "authorized_users": conflicts with config "image": conflicts with config]
2019/01/02 12:34:39 [ERROR] root: eval: *terraform.EvalSequence, err: Warnings: []. Errors: ["config": conflicts with image "authorized_users": conflicts with config "image": conflicts with config]
2019/01/02 12:34:39 [TRACE] [walkValidate] Exiting eval tree: linode_instance.2019
2019/01/02 12:34:39 [DEBUG] Attaching resource state to "linode_volume.db": &terraform.ResourceState{Type:"linode_volume", Dependencies:[]string{}, Primary:(*terraform.InstanceState)(0xc000411130), Deposed:[]*terraform.InstanceState{}, Provider:"provider.linode", mu:sync.Mutex{state:0, sema:0x0}}
2019/01/02 12:34:39 [TRACE] Graph after step *terraform.AttachStateTransformer:

# ... elided for brevity

linode_instance.2019: "authorized_users": conflicts with config
linode_instance.2019: "config": conflicts with image
linode_instance.2019: "image": conflicts with config
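
For reference, the conflict these errors describe is between the two mutually exclusive ways of defining an instance: the top-level image field versus explicit config/disk blocks. A minimal sketch of the conflict-free simple form, based only on the errors above (Network Helper settings are then unavailable):

data "linode_image" "changelog" {
  id = "linode/containerlinux"
}

resource "linode_instance" "2019" {
  # simple form: top-level image only, no config {} block
  image      = "${data.linode_image.changelog.id}"
  private_ip = true
}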

ssl_key gets updated every time terraform is run

Description

linode_nodebalancer_config resources get updated every time if they are set to use SSL/HTTPS, because the ssl_key is not returned by the API, so Terraform has nothing to compare it against. (On a side note: the ssl_key DOES get stored in tfstate, which may not be the wisest thing to do, though it is convenient some of the time.)

Now there is the ssl_fingerprint exported attribute, where the documentation says: "Please use the ssl_commonname and ssl_fingerprint to identify the certificate." and "The fingerprint for the SSL certification this port is serving if this port is not configured to use SSL." but it doesn't add up. First of all, there is no way (and no use) to identify the certificate, because there seems to be no way not to update it. (Or maybe it is just meant generally, not in the context of Terraform.) Second, the documentation for ssl_fingerprint and also ssl_commonname says these are returned when the port is not configured to use SSL. (I guess this is just a typo, but it adds to the confusion.)

To solve this, you could verify whether the certificate has changed (using either the fingerprint from the API or the stored certificate) and only update the key and the certificate together when the certificate has changed. (There is no point in updating just the key without the certificate anyway, and that is what is happening now.)

I don't know TF internals enough (well, at all) so I don't know how much gymnastics this involves.

New or Affected Resource(s)

linode_nodebalancer_config

Potential Terraform Configuration

I don't think the configuration has to change, the check can be done internally based on the provided values.

Adapt to new Linode Resize behavior

Resizing a Linode now automatically grows the disk/filesystem to the full size available to the plan, under specific conditions (one disk, with up to one additional swap disk).

One of the tests is failing because of this change in behavior:

https://github.com/terraform-providers/terraform-provider-linode/blob/master/linode/resource_linode_instance_test.go#L676

Users may find that a change in the Instance type to a larger plan followed by a change to a smaller plan may fail. With the new behavior, the disk has been enlarged and is too large to be reduced back into the smaller plan.

The provider can address this by first attempting to reduce the size of disks before attempting to resize the instance.

In the case of an Instance with "disks" and "configs", changing the disk size should be left to the user and would require that https://github.com/terraform-providers/terraform-provider-linode/issues/13 be resolved.

In the case of an Instance using the "simple" configuration (no "disks" or "configs") the provider should attempt to resize the first disk by the disk size difference between the source and target instance types.
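
For contrast, a minimal sketch of the "simple" configuration referred to above (no explicit disk or config blocks, so the provider manages a single implicit disk; all values are illustrative):

resource "linode_instance" "simple" {
  label     = "simple-example"
  region    = "us-east"
  type      = "g6-standard-1"
  image     = "linode/debian9"
  root_pass = "ch4ng3-m3!"  # illustrative placeholder
}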

Incorrect release tag?

The release tags have gone from 1.11.2 to 1.12.3 - was this an incorrect tag?

If so, this would be a useful thing to point out in the CHANGELOG as to make sure others are not confused :D

Private Images Do Not Allow Configuration of Multiple Disks

Terraform Version

2019/10/23 13:47:24 [INFO] Terraform version: 0.12.12
2019/10/23 13:47:24 [DEBUG] found provider "terraform-provider-linode_v1.8.0_x4"

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

provider "linode" {
  token = "${var.linode_token}"
}

resource "linode_instance" "tf_example" {
  label = "tf_example"
  region = "${var.region}"
  type = "g6-nanode-1"
  group = "test"
  tags = ["test"]

  config {
    label = "main_boot"
    kernel = "linode/latest-64bit"
    root_device = "/dev/sda"

    devices {
      sda {
        disk_label = "boot"
      }
      sdb {
        disk_label = "Swap Image"
      }
      sdc {
        disk_label = "logs"
      }
    }
  }
  disk {
    label = "boot"
    size = "3000"
    image = "linode/centos8"
    root_pass = "${var.root_pass}"
    authorized_keys = ["${var.ssh_key}"]
  }
  disk {
    label = "Swap Image"
    filesystem = "swap"
    size = "512"
  }
  disk {
    label = "logs"
    filesystem = "ext4"
    size = "10000"
  }

  boot_config_label = "main_boot"
}

variable "linode_token" {}
variable "root_pass" {}
variable "ssh_key" {}
variable "region" {
  default = "us-central"
}

Debug Output

https://gist.github.com/kellinm/5ca9ecafc58063f0b8c40f922570bdcc

Expected Behavior

Terraform should make a linode with disks as specified and the private image as the operating system.

Actual Behavior

If I change the image as noted above from a public one to a private one, it throws the error shown in the gist and creates the Linode, but with only the boot disk and not in a running state.

Steps to Reproduce

  1. Change the image field to point at a private image
  2. terraform apply

Important Factoids

None that I am aware of; but I will tag a support ticket with a link to this bug report so you can check any other logs that would be useful to you.

I filed a community forums issue earlier, but the more I look, the more I don't think it's intentional behavior.

rDNS Resource Fails to Plan when missing

Terraform Version

Terraform v0.13.5

  • provider registry.terraform.io/hashicorp/aws v2.70.0
  • provider registry.terraform.io/hashicorp/helm v1.2.4
  • provider registry.terraform.io/hashicorp/kubernetes v1.12.0
  • provider registry.terraform.io/hashicorp/local v1.4.0
  • provider registry.terraform.io/hashicorp/null v2.1.2
  • provider registry.terraform.io/hashicorp/random v2.3.0
  • provider registry.terraform.io/hashicorp/template v2.1.2
  • provider registry.terraform.io/linode/linode v1.13.4

Affected Resource(s)

  • linode_rdns

Terraform Configuration Files

Any TF config using linode_rdns.

Expected Behavior

Terraform correctly marks the rdns resource as missing (similar to how it does for instances).

Actual Behavior

Terraform fails with a 404 error.

Steps to Reproduce

  • Create a TF config with a Linode instance + rDNS.
  • Delete the instance manually in the Cloud Manager.
  • Run terraform plan on the same instance.

I believe the issue is that the resourceRead method doesn't check whether the error is a 404 and set the ID to ""; instead, it simply fails.

config helper docs on terraform.io do not render in the correct depth

The Linode Instance Config docs on terraform.io don't reflect the formatting and the extra indentation level of config.helpers.

Compare with this: https://github.com/terraform-providers/terraform-provider-linode/blob/master/website/docs/r/instance.html.md#configs

A user may be confused by this and assume:

  config {
    label = "boot_config"
    kernel = "linode/direct-disk"
    network = false
  }

Rather than:

  config {
    label = "boot_config"
    kernel = "linode/direct-disk"
    helpers {
      network = false
    }
  }

Nodebalancer config doesn't properly handle check_passive

Terraform Version

Terraform v0.11.10
+ provider.linode v1.0.0

Affected Resource(s)

  • linode_nodebalancer_config

Terraform Configuration Files

resource "linode_nodebalancer" "balancer" {
  label = "dev-loadbalancer"
  region = "us-central"
  client_conn_throttle = 20
}

resource "linode_nodebalancer_config" "balancer-http" {
  nodebalancer_id = "${linode_nodebalancer.balancer.id}"
  port = 80
  protocol = "http"
  check = "http"
  check_path = "/"
  check_interval = 30
  check_attempts = 3
  check_timeout = 5
  check_passive = false
  stickiness = "table"
  algorithm = "leastconn"
}

Debug Output

Debug seems to show that check_passive is excluded from the PUT request when set to false in TF.

https://gist.github.com/mcg/ed16c061e041207f3134b4d26ff3f6ef

Expected Behavior

linode_nodebalancer_config, check_passive: false successfully sets the check to false.

Actual Behavior

check_passive will remain true. However, if check_passive is manually set to false outside of Terraform and is configured as true in Terraform, it will correctly make that change.

To clarify...

  • Transitioning check_passive false -> true works.
  • Transitioning check_passive true -> false fails.

Steps to Reproduce

  1. Create a NodeBalancerConfig with check_passive: false
  2. On a clean/empty state, terraform apply
  3. Confirm that check_passive was not changed to false.

Linode Instance Imported with root_device of "/dev/root"

Terraform Version

Terraform v0.11.10

  • provider.linode v1.0.0

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

provider "linode" {
    token = "${var.token}"
}

resource "linode_instance" "terraform-import" {
    label = "terraform-import"
    group = "Terraform"
    region = "us-east"
    type = "g6-standard-1"
    config {
        label = "My Debian 9 Disk Profile"
        kernel = "linode/grub2"
        root_device = "/dev/sda"
        devices {
            sda = {
                disk_label = "Debian 9 Disk"
            }
            sdb = {
                disk_label = "512 MB Swap Image"
            }
        }
    }
    disk {
        label = "Debian 9 Disk"
        size = 50688
    }
    disk {
        label = "512 MB Swap Image"
        size = 512
    }
}

Expected Behavior

When importing a Linode instance, it is required to set the root_device in the config block so that the correct disk is targeted as the root device. Inclusion of this setting, and all other required settings for an import, should result in a clean terraform plan.

Actual Behavior

The Linode instance is imported with the root_device set to /dev/root regardless of the setting. This results in the following terraform plan:

  ~ linode_instance.terraform-import
      config.0.root_device: "/dev/root" => "/dev/sda"

Steps to Reproduce

  1. Create a Linode resource block based on an existing Linode.
  2. terraform import linode_instance.<instance_label> <instance_id>
  3. terraform plan

Important Factoids

This is more of an annoyance, as it does not seem to affect the Linode upon terraform apply; i.e., the correct disk would be targeted anyway.

SIGSEGV importing a linode_volume

Terraform Version

Terraform v0.12.25
+ provider.linode v1.10.0

Affected Resource(s)

  • linode_volume

Terraform Configuration Files

resource "linode_volume" "swarm_data" {
  label  = "swarm_data"
  region = var.linode_region
  size   = 100
}

Panic Output

https://gist.github.com/1player/b879ab2f9899eacfbc24c652afecc82b

Steps to Reproduce

terraform import linode_volume.swarm_data REDACTED_ID

REDACTED_ID has been extracted from linode-cli volumes list.

Of the two volumes I wanted to import, one was imported without any error; this one makes Terraform crash.

Changing Image in Disks config section of instances does not cause a resource destruction

Hi there,

I've found that when specifying an image for a Linode instance via the disks config section of an instance, changing the image parameter does not cause the resource to be recreated (indeed, it doesn't cause anything to happen).

Terraform Version

Terraform v0.11.13

  • provider.aws v2.4.0
  • provider.linode v1.5.0

Affected Resource(s)

  • linode_instance

Terraform Configuration Files

resource "linode_instance" "li_instance" {
    label               = "my-test-instance"
    region             = "us-central"
    type               = "g6-nanode-1"

    lifecycle {
        create_before_destroy = false
    }

  disk {
    label = "boot"
    size = 10000
    image  = "linode/centos7"
  }

  config {
    label = "boot_config"
    kernel = "linode/latest-64bit"
    devices {
      sda = { disk_label = "boot" },
    }
    root_device = "/dev/sda"
  }

    boot_config_label = "boot_config"
}

Expected Behavior

Terraform plan and apply should destroy the existing instance and re-launch using the new image.

Output plan:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

linode_instance.li_instance: Refreshing state... (ID: 13461897)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ linode_instance.li_instance
      config.0.root_device: "/dev/root" => "/dev/sda"


Plan: 0 to add, 1 to change, 0 to destroy.

Actual Behavior

No changes are made.

Steps to Reproduce

  1. terraform apply
  2. Change the image configuration, e.g. from linode/centos7 to linode/debian9
  3. Run terraform apply

Important Factoids

The docs state that this should work. I have a feeling that the Linode API returns NULL for the current image, so perhaps Terraform can't figure out that this has changed? The image is not stored in the state file.
