terraform-google-managed-instance-group's People

Contributors

andrewfarley, dalekurt, danielezer, danisla, morgante, petervandenabeele, pindar, prodriguezdefino, sysc0d, tpoindessous, yanson


terraform-google-managed-instance-group's Issues

Allow SSH only from specific hosts

Hello, is it possible to restrict SSH access to specific IPs via configuration in the future? Right now it's hard-coded to 0.0.0.0/0. I'd like to propose making this configurable via a variable. What do you think?
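A variable-driven version might look like the following sketch (the `ssh_source_ranges` variable name is a proposal, not an existing module input; the firewall shape mirrors the module's default-ssh rule):

```hcl
variable "ssh_source_ranges" {
  description = "CIDR ranges allowed to reach instances over SSH"
  default     = ["0.0.0.0/0"]
}

resource "google_compute_firewall" "default-ssh" {
  name    = "${var.name}-vm-ssh"
  network = "${var.network}"

  # Configurable instead of the hard-coded 0.0.0.0/0:
  source_ranges = ["${var.ssh_source_ranges}"]

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}
```

Callers who don't set the variable keep the current open-to-the-world behavior, so the change would be backward compatible.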

health check options

The current form of the terraform google mig module does not allow flexibility in the type of compute health check that is used; the current tf code only allows http. I have an enhancement that lets you pick http, https, tcp, or ssl, and I would like to submit a PR to have it added to the mainline code.
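One way such an enhancement could be wired up in Terraform 0.11 is a sketch like this (the `hc_type` variable is illustrative; each health check type gets its own resource gated by count, since block types can't be switched by a variable):

```hcl
variable "hc_type" {
  description = "Health check type: http, https, tcp, or ssl"
  default     = "http"
}

resource "google_compute_health_check" "http" {
  count = "${var.hc_type == "http" ? 1 : 0}"
  name  = "${var.name}-http"

  http_health_check {
    port = "${var.service_port}"
  }
}

resource "google_compute_health_check" "tcp" {
  count = "${var.hc_type == "tcp" ? 1 : 0}"
  name  = "${var.name}-tcp"

  tcp_health_check {
    port = "${var.service_port}"
  }
}
```

The https and ssl variants would follow the same pattern with `https_health_check` and `ssl_health_check` blocks.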

Add support for instance template updates.

I'm currently working on a project that is deploying applications via the managed instance group's rolling update feature.

After a deployment, terraform will recognize that the instance_template value has drifted from the one in the state file and attempt to update it.

I've found that applying the attached patch seems to solve the issue and allow for dynamic instance templates without compromising terraform's functionality. Just pass
gm_ignore_changes_for = ["instance_template"]

mypatch.txt

I'm happy to create the PR myself, just give me the word.
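For reference, a static equivalent of what the patch enables (a sketch, not the patch contents) is a lifecycle block on the instance group manager so Terraform stops fighting the rolling updater over the template:

```hcl
resource "google_compute_instance_group_manager" "default" {
  name               = "${var.name}"
  base_instance_name = "${var.name}"
  instance_template  = "${google_compute_instance_template.default.self_link}"
  zone               = "${var.zone}"

  # Let the MIG's rolling-update deployments own the template after
  # initial creation instead of reverting it on every apply:
  lifecycle {
    ignore_changes = ["instance_template"]
  }
}
```

In Terraform 0.11 `ignore_changes` must be a static list, which is presumably why the patch adds a module-level mechanism rather than exposing it directly.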

Module uses options that will be deprecated in terraform 2.0

terraform apply now outputs the following warnings:

Warning: module.nat.module.nat-gateway.google_compute_instance_group_manager.default: "auto_healing_policies": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.nat.module.nat-gateway.google_compute_instance_group_manager.default: "rolling_update_policy": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.
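The migration path the warnings point at is to declare the google-beta provider and attach the resources that use beta fields to it; a sketch:

```hcl
provider "google-beta" {
  region = "${var.region}"
}

resource "google_compute_instance_group_manager" "default" {
  provider           = "google-beta"
  name               = "${var.name}"
  base_instance_name = "${var.name}"
  instance_template  = "${google_compute_instance_template.default.self_link}"
  zone               = "${var.zone}"

  # Beta field that triggers the warning under the GA provider:
  auto_healing_policies {
    health_check      = "${google_compute_health_check.mig-health-check.self_link}"
    initial_delay_sec = 300
  }
}
```

Since the module owns these resources, the fix has to land in the module itself rather than in calling code.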

module is borked

So, this worked last week... now it doesn't.

Error: module.vault.module.vault-server.google_compute_instance_group_manager.default: "auto_healing_policies.0.health_check": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.vault.module.vault-server.google_compute_instance_group_manager.default: "auto_healing_policies.0.initial_delay_sec": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.vault.module.vault-server.google_compute_instance_group_manager.default: "rolling_update_policy": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.
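Until the module is migrated, a common workaround is to pin the google provider to the last 1.x series, where these beta fields still exist (a sketch, assuming nothing else in the configuration requires provider 2.x):

```hcl
provider "google" {
  # Pin below 2.0, where auto_healing_policies and
  # rolling_update_policy were removed from the GA provider:
  version = "~> 1.20"
  region  = "${var.region}"
}
```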

terraform apply error

Hi,

after doing a terraform init and terraform apply with module version = "1.1.7", I get:
Error: Error applying plan:

1 error(s) occurred:

* module.gce-lb-http.google_compute_backend_service.default: 1 error(s) occurred:

* google_compute_backend_service.default: Error creating backend service: googleapi: Error 400: Invalid value for field 'resource.backends[0].group': ''. Only Instance Group URLs are allowed., invalid

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Here is my main.tf:

/*
 * Copyright 2017 Google Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable region {
  default = "europe-west1"
}

provider google {
  region = "${var.region}"
}

module "gce-lb-http" {
  source      = "github.com/GoogleCloudPlatform/terraform-google-lb-http"
  version     = "1.0.5"
  name        = "http-lb"
  target_tags = ["${module.mig1.target_tags}", "${module.mig2.target_tags}"]

  backends = {
    "0" = [
      {
        group = "${module.mig1.instance_group}"
        # group = "https://www.googleapis.com/compute/v1/projects/courseur-staging/zones/us-central1-b/instanceGroups/gke-examplecluster-hdja738s-group"
      },
      {
        group = "${module.mig2.instance_group}"
      },
    ]
  }

  backend_params = [
    // health check path, port name, port number, timeout seconds.
    "/,http,80,10",
  ]
}

My instance group manager config:

variable group1_size {
  default = "2"
}

variable group2_size {
  default = "2"
}

variable subnetwork {
  default = "pub"
}

variable network {
  default = "nomad"
}

variable update_strategy {
  # default = "ROLLING_UPDATE"
  default = "NONE"
}

data "template_file" "group-startup-script" {
  template = "${file("${format("%s/gceme.sh.tpl", path.module)}")}"

  vars {
    PROXY_PATH = ""
  }
}

module "mig1" {
  source            = "github.com/GoogleCloudPlatform/terraform-google-managed-instance-group"
  version           = "1.1.7"
  region            = "europe-west1"
  zonal             = false
  zone              = "us-west1-b"
  network           = "${var.network}"
  subnetwork        = "${var.subnetwork}"
  name              = "traefik-group-1"
  size              = "${var.group1_size}"
  target_tags       = ["allow-traefik-group-1"]
  service_port      = 80
  service_port_name = "http"
  update_strategy   = "${var.update_strategy}"
  startup_script    = "${data.template_file.group-startup-script.rendered}"
}

module "mig2" {
  source            = "github.com/GoogleCloudPlatform/terraform-google-managed-instance-group"
  version           = "1.1.7"
  region            = "europe-west1"
  zonal             = false
  zone              = "us-east1-b"
  network           = "${var.network}"
  subnetwork        = "${var.subnetwork}"
  name              = "traefik-group-2"
  size              = "${var.group2_size}"
  target_tags       = ["allow-traefik-group-2"]
  service_port      = 80
  service_port_name = "http"
  update_strategy   = "${var.update_strategy}"
  startup_script    = "${data.template_file.group-startup-script.rendered}"
}

Add Support for Terraform 2.x

The managed instance group is the base module used by terraform-google-lb to create an instance group.

Some scripts in terraform-google-lb (not the examples directory) have been updated to Terraform 2.x, so creating a load balancer now returns errors because this base module has not been migrated.

adding second disk

Hi, is it possible to use terraform-google-managed-instance-group and have a second disk attached to the instances?
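The module doesn't currently expose this, but the underlying google_compute_instance_template resource accepts repeated disk blocks, so support could look roughly like the following sketch (the second disk block and its sizes are illustrative, not current module behavior):

```hcl
resource "google_compute_instance_template" "default" {
  machine_type = "${var.machine_type}"

  # Boot disk, as the module creates today:
  disk {
    source_image = "${var.compute_image}"
    boot         = true
    auto_delete  = true
  }

  # Hypothetical additional data disk:
  disk {
    disk_type    = "pd-ssd"
    disk_size_gb = 100
    boot         = false
    auto_delete  = true
  }

  network_interface {
    subnetwork = "${var.subnetwork}"
  }
}
```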

Module doesn't allow multiple NATs in the same AZ

Our project has the following structure:

  • 4 networks: mgmt, int, stg, prd
  • 2 subnetworks for each network: us-east1-c, us-west1-a

When trying to create NAT instances, the following error is reported from terraform:

* module.mgmt-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-ssh' already exists, alreadyExists
* module.stg-nat-us-west-1-a.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-west1-a' already exists, alreadyExists
* module.mgmt-nat-us-west-1-a.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-west1-a' already exists, alreadyExists
* module.prd-nat-us-east-1-c.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.int-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-hc' already exists, alreadyExists
* module.stg-nat-us-west-1-a.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.mgmt-nat-us-east-1-c.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.int-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-ssh' already exists, alreadyExists
* module.prd-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-ssh' already exists, alreadyExists
* module.mgmt-nat-us-west-1-a.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-west1-a' already exists, alreadyExists
* module.int-nat-us-west-1-a.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.prd-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-ssh' already exists, alreadyExists
* module.mgmt-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-hc' already exists, alreadyExists
* module.mgmt-nat-us-west-1-a.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.int-nat-us-east-1-c.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-east1-c' already exists, alreadyExists
* module.int-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-hc' already exists, alreadyExists
* module.int-nat-us-west-1-a.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-west1-a' already exists, alreadyExists
* module.prd-nat-us-west-1-a.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-west1-a' already exists, alreadyExists
* module.prd-nat-us-west-1-a.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-west1-a' already exists, alreadyExists
* module.int-nat-us-east-1-c.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.mgmt-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-hc' already exists, alreadyExists
* module.mgmt-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-ssh' already exists, alreadyExists
* module.prd-nat-us-west-1-a.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.stg-nat-us-east-1-c.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-east1-c' already exists, alreadyExists
* module.prd-nat-us-east-1-c.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-east1-c' already exists, alreadyExists
* module.stg-nat-us-east-1-c.google_compute_address.default: 1 error(s) occurred:

* google_compute_address.default: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.
* module.mgmt-nat-us-east-1-c.google_compute_firewall.nat-gateway: 1 error(s) occurred:

* google_compute_firewall.nat-gateway: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-us-east1-c' already exists, alreadyExists
* module.prd-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-hc' already exists, alreadyExists
* module.mgmt-nat-us-east-1-c.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-east1-c' already exists, alreadyExists
* module.int-nat-us-east-1-c.module.nat-gateway.google_compute_firewall.default-ssh: 1 error(s) occurred:

* google_compute_firewall.default-ssh: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-east1-c-vm-ssh' already exists, alreadyExists
* module.int-nat-us-east-1-c.module.nat-gateway.google_compute_health_check.mig-health-check: 1 error(s) occurred:

* google_compute_health_check.mig-health-check: Error creating HealthCheck: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/healthChecks/nat-gateway-us-east1-c' already exists, alreadyExists
* module.prd-nat-us-west-1-a.module.nat-gateway.google_compute_firewall.mig-health-check: 1 error(s) occurred:

* google_compute_firewall.mig-health-check: Error creating firewall: googleapi: Error 409: The resource 'projects/<<project-redacted>>/global/firewalls/nat-gateway-us-west1-a-vm-hc' already exists, alreadyExists

It seems that the zone is used in resource names in several places, making them non-unique across different networks. I'd be happy to contribute a patch.
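One possible patch direction (a sketch, not the module's current code) is to derive names from the module's per-instance name variable instead of the zone alone, so two NAT modules in the same zone no longer collide:

```hcl
resource "google_compute_health_check" "mig-health-check" {
  # var.name is unique per module instance, unlike the zone,
  # so this avoids the 409 alreadyExists conflicts above:
  name = "${var.name}-mig-hc"

  http_health_check {
    port = "${var.service_port}"
  }
}
```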

`project` parameter forcing `google_compute_instance_template.default` recreation

Issue

I have been using the module via the nat module with the configuration as such:

module "nat" {
  source     = "git@github.com:GoogleCloudPlatform/terraform-google-nat-gateway.git?ref=1.1.8"

  project         = "${module.project.PROJECT_ID}"
  region          = "${var.G_REGION}"
  zone            = "${module.cluster.K8S_ZONE}"
  tags            = ["${module.cluster.K8S_TAG}"]
  network         = "${module.network.VPC_NAME}"
  subnetwork      = "${module.network.SUB_NAME[0]}"
  ip_address_name = "${google_compute_address.nat-ip.name}"
}

Calling the module specifically with project being an output from a prior module, forces the module.nat.module.nat-gateway.google_compute_instance_template.default to be recreated.

Specifically the following two fields in terraform plan:

-/+ module.nat.module.nat-gateway.google_compute_instance_template.default (new resource required)
id:                                     "default-20180529094950180000000001" => <computed> (forces new resource)
network_interface.0.access_config.#:    "1" => <computed> (forces new resource)

After investigating between two almost identical configurations, I have found the solution to be passing the project explicitly from a variable set in my inputs.tfvars:

module "nat" {
  source     = "git@github.com:GoogleCloudPlatform/terraform-google-nat-gateway.git?ref=1.1.8"

  project         = "${var.G_PROJECT}"
  region          = "${var.G_REGION}"
  zone            = "${module.cluster.K8S_ZONE}"
  tags            = ["${module.cluster.K8S_TAG}"]
  network         = "${module.network.VPC_NAME}"
  subnetwork      = "${module.network.SUB_NAME[0]}"
  ip_address_name = "${google_compute_address.nat-ip.name}"
}

I'm not entirely sure of the specifics of how terraform calculates the resource attributes, but the actual value of the project field is identical in both cases. One is computed, one is fixed.
I think it's got something to do with this line here.

The ideal scenario would be for the resource not to be recreated when the project comes from another module's output. I use the project output as a sort of flow control through the infrastructure build. However, I suspect this may just be how terraform works.

Raised this issue in case anyone else sees it; the 'solution' above might help others.

conditional resources result in errors

I'm getting errors when using this simple config:

module "mig1" {
  source            = "GoogleCloudPlatform/managed-instance-group/google"
  region            = "europe-west2"
  zone              = "europe-west2-b"
  name              = "group1"
  size              = 2
  service_port      = 80
  service_port_name = "http"
}

Errors:

* module.mig1.output.region_depends_id: Resource 'null_resource.region_dummy_dependency' not found for variable 'null_resource.region_dummy_dependency.id'
* module.mig1.output.region_instance_group: Resource 'google_compute_region_instance_group_manager.default' not found for variable 'google_compute_region_instance_group_manager.default.instance_group'
* module.mig1.output.region_instances: Resource 'data.google_compute_instance_group.regional' not found for variable 'data.google_compute_instance_group.regional.instances'

I believe it's because of the 'zonal' variable resulting in a count of 0 for resources that are always in the output.

Terraform v0.11.0
+ provider.google v1.2.0
+ provider.null v1.0.0
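In Terraform 0.11, the usual guard for outputs that reference count-gated resources is a splat wrapped in concat, so the output tolerates count = 0 (a sketch of the pattern, not the module's actual output code):

```hcl
output "region_instance_group" {
  # Splat yields an empty list when count = 0 for the zonal case;
  # concat with a dummy element keeps element() from failing:
  value = "${element(concat(google_compute_region_instance_group_manager.default.*.instance_group, list("")), 0)}"
}
```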

compute_image forces re-creation of compute_instance_template on every terraform plan/apply

I have to specify

module "nat-zone1" {
  ...
  compute_image = "projects/debian-cloud/global/images/family/debian-9"
  ...
}

to override default debian-cloud/debian-9 compute_image, because otherwise terraform thinks the compute instance template has changed and needs to be recreated every terraform plan or apply:

-/+ module.main.module.nat-zone-1.module.nat-gateway.google_compute_instance_template.default (new resource required)
      id:                                                  "default-20180919081152445300000003" => <computed> (forces new resource)
      can_ip_forward:                                      "true" => "true"
      disk.#:                                              "1" => "1"
      disk.0.auto_delete:                                  "true" => "true"
      disk.0.boot:                                         "true" => "true"
      disk.0.device_name:                                  "persistent-disk-0" => <computed>
      disk.0.disk_size_gb:                                 "0" => "0"
      disk.0.disk_type:                                    "pd-ssd" => "pd-ssd"
      disk.0.interface:                                    "SCSI" => <computed>
      disk.0.mode:                                         "READ_WRITE" => <computed>
      disk.0.source_image:                                 "projects/debian-cloud/global/images/family/debian-9" => "debian-cloud/debian-9" (forces new resource)
      disk.0.type:                                         "PERSISTENT" => "PERSISTENT"
      machine_type:                                        "f1-micro" => "f1-micro"

Terraform version used:

terraform --version
Terraform v0.11.8
+ provider.google v1.18.0
+ provider.null v1.0.0
+ provider.random v2.0.0
+ provider.template v1.0.0

Error: Unsupported argument

terraform apply -auto-approve -target=module.lbroup-fd61bad/mai

Error: Incorrect attribute value type

on .terraform/modules/mig/GoogleCloudPlatform-terraform-google-managed-instance-group-fd61bad/main.tf line 26, in resource "google_compute_instance_template" "default":
26: tags = ["${concat(list("allow-ssh"), var.target_tags)}"]

Inappropriate value for attribute "tags": element 0: string required.

Error: Unsupported argument

on .terraform/modules/mig/GoogleCloudPlatform-terraform-google-managed-instance-group-fd61bad/main.tf line 33, in resource "google_compute_instance_template" "default":
33: access_config = ["${var.access_config}"]

An argument named "access_config" is not expected here. Did you mean to define
a block of type "access_config"?

Error: Incorrect attribute value type

on .terraform/modules/mig/GoogleCloudPlatform-terraform-google-managed-instance-group-fd61bad/main.tf line 52, in resource "google_compute_instance_template" "default":
52: scopes = ["${var.service_account_scopes}"]

Inappropriate value for attribute "scopes": element 0: string required.
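For reference, the Terraform 0.12 equivalents of the three failing 0.11 expressions would be roughly as follows (a sketch; the exact shape of `var.access_config` and its `nat_ip` key is assumed from typical usage, and the module itself predates 0.12):

```hcl
# 0.12: expressions are first-class; lists are no longer re-wrapped in ["${...}"]:
tags = concat(["allow-ssh"], var.target_tags)

# 0.12: access_config is a block type, expressed via a dynamic block
# when it is driven by a variable:
dynamic "access_config" {
  for_each = var.access_config
  content {
    nat_ip = lookup(access_config.value, "nat_ip", null)
  }
}

scopes = var.service_account_scopes
```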

Timeout when instance_group_manager is run

Hello

I git cloned this code and upgraded it to 0.12.
The upgrade removed the no-longer-supported update_strategy and rolling_update_policy arguments. When I then ran a test, creation stopped with a timeout while the instance_group_manager was being created.
After that, the timeout recurs in the refreshing state even if I run terraform apply or terraform destroy again.

module "mig1" {
  source             = "../../"
  module_enabled     = var.module_enabled
  region             = var.region
  zone               = var.zone
  zonal              = true
  name               = var.network_name
  machine_type       = "n1-standatd-1"
  size               = 3
  target_tags        = [var.network_name]
  service_port       = 80
  service_port_name  = "http"
  startup_script     = data.template_file.startup-script.rendered
  wait_for_instances = true

  network           = google_compute_subnetwork.default.name
  subnetwork        = google_compute_subnetwork.default.name
  instance_labels   = var.labels
  http_health_check = var.http_health_check

  /*
  update_strategy = "ROLLING_UPDATE"

  rolling_update_policy = [
    {
      type                  = "PROACTIVE"
      minimal_action        = "REPLACE"
      max_surge_fixed       = 4
      max_unavailable_fixed = 4
      min_ready_sec         = 50
    },
  ]
  */
}

error:

module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [4m20s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [4m30s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [4m40s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [4m50s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [5m0s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [5m10s elapsed]
module.mig1.google_compute_instance_group_manager.default[0]: Still creating... [5m20s elapsed]

Error: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)

$~/terraform/sample/terraform-google-managed-instance-group/examples/zonal$ terraform apply
var.machine_type
Enter a value: n1-standard-1

data.template_file.startup-script: Refreshing state...
data.google_compute_zones.available: Refreshing state...
module.mig1.data.google_compute_zones.available: Refreshing state...
google_compute_network.default: Refreshing state... [id=projects/iaas-demo-208601/global/networks/mig-zonal-example]
module.mig1.google_compute_health_check.mig-health-check[0]: Refreshing state... [id=projects/iaas-demo-208601/global/healthChecks/mig-zonal-example]
google_compute_subnetwork.default: Refreshing state... [id=projects/iaas-demo-208601/regions/asia-northeast3/subnetworks/mig-zonal-example]
module.mig1.google_compute_firewall.mig-health-check[0]: Refreshing state... [id=projects/iaas-demo-208601/global/firewalls/mig-zonal-example-vm-hc]
module.mig1.google_compute_firewall.default-ssh[0]: Refreshing state... [id=projects/iaas-demo-208601/global/firewalls/mig-zonal-example-vm-ssh]
module.mig1.google_compute_instance_template.default[0]: Refreshing state... [id=projects/iaas-demo-208601/global/instanceTemplates/default-20200714000854582100000001]
module.mig1.google_compute_instance_group_manager.default[0]: Refreshing state... [id=projects/iaas-demo-208601/zones/asia-northeast3-b/instanceGroupManagers/mig-zonal-example]

Error: timeout while waiting for state to become 'created' (last state: 'creating', timeout: 5m0s)

compute@developer

I am getting the following error using this module:

google_compute_instance_group_manager.default: The resource 'One of [[email protected]]' of type 'serviceAccount' was not found.

Is there a permission missing on a service account?

Module installation fails when default VPC network is a Legacy network

When using this module as described (setting source = GoogleCloudPlatform/managed-instance-group/google), terraform apply fails with the following:

* google_compute_instance_template.default: Error creating instance template: googleapi: Error 404: The resource 'projects/cloud9-dev/regions/us-central1/subnetworks/default' was not found, notFound

This is a result of the `default` VPC network in GCP being a Legacy network with no subnetwork.
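A workaround is to create (or reference) a non-legacy network that has subnetworks and pass it explicitly instead of relying on `default` (a sketch; the network and subnetwork names are illustrative):

```hcl
module "mig1" {
  source            = "GoogleCloudPlatform/managed-instance-group/google"
  region            = "us-central1"
  zone              = "us-central1-b"
  name              = "group1"
  size              = 2
  service_port      = 80
  service_port_name = "http"

  # Explicit non-legacy network with a real subnetwork:
  network           = "my-network"
  subnetwork        = "my-subnetwork"
}
```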
