
hashicorp / learn-terraform-provision-eks-cluster

Home Page: https://developer.hashicorp.com/terraform/tutorials/kubernetes/eks

License: Mozilla Public License 2.0

HCL 100.00%
terraform kubernetes aws hashicorp eks tutorial

learn-terraform-provision-eks-cluster's Introduction

Learn Terraform - Provision an EKS Cluster

This repo is a companion repo to the Provision an EKS Cluster tutorial, containing Terraform configuration files to provision an EKS cluster on AWS.

learn-terraform-provision-eks-cluster's People

Contributors

alanszlosek, brianmmcclain, burdandrei, danielcalvo, duplo83, hashicorp-copywrite[bot], im2nguyen, judithpatudith, liorrozen, ritsok, robin-norwood, topfunky


learn-terraform-provision-eks-cluster's Issues

How to specify an AWS profile/role to be used when generating the EKS token?

Background:
I want to provide administrator access to the cluster to other users besides me (the creator).
These other users have a "default" AWS User that assumes an AWS Role (with admin access to the cluster, similar to this setup) that is stored as a profile in their ~/.aws/credentials file.

They also need to be able to run terraform apply, but they are getting "Error: Unauthorized", because their default user is used to get the EKS token instead of the Role with admin access to the cluster.

Question:
With the recent change to the kubernetes.tf file, what would be the proper way to pass a profile (or role) for the aws-iam-authenticator command?

Using the optional inputs kubeconfig_aws_authenticator_env_variables or kubeconfig_aws_authenticator_additional_args to the Terraform EKS module doesn't seem to affect what profile/role is used for authentication with the EKS cluster.
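
One direction that seems to work (a sketch only; the "cluster-admin" profile name is illustrative and should be whatever profile assumes the admin role) is to pass the profile directly to the token command in the Kubernetes provider's exec block:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks", "get-token",
      "--cluster-name", data.aws_eks_cluster.cluster.name,
      "--profile", "cluster-admin", # profile whose role is mapped in aws-auth
    ]
  }
}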

Library needs updating

The security group tagging for the nodes causes an issue when you create a load balancer for a service: the service cannot pull the external IP and reports that multiple instances have been tagged. A fix landed in a recent version of the terraform-aws modules on GitHub, but this repo has not been updated since that change.

Unable to run terraform plan or apply with files from this git

I am getting the following errors when trying to run terraform plan or terraform apply. I did some digging in the Terraform documentation for the terraform-aws-modules/eks/aws module, and it looks like subnets, workers_group_defaults, and worker_groups are no longer supported options.

Errors when running plan or apply:

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 5, in module "eks":
│    5:   subnets         = module.vpc.private_subnets
│
│ An argument named "subnets" is not expected here.

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 15, in module "eks":
│   15:   workers_group_defaults = {
│
│ An argument named "workers_group_defaults" is not expected here.

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 19, in module "eks":
│   19:   worker_groups = [
│
│ An argument named "worker_groups" is not expected here.

Error: Your current user or role does not have access to Kubernetes objects on this EKS cluster

Hello,
I've used this project to launch an EKS cluster in my AWS environment. I see the following error in the AWS console:

Your current user or role does not have access to Kubernetes objects on this EKS cluster
This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.


The "Learn more" link takes me to the AWS docs, but I don't know how to include this in my Terraform config.
Can we fix it so that RBAC is configured properly as soon as I launch the EKS cluster with Terraform?
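
For what it's worth, newer releases of the EKS module (19.x assumed here) can manage the aws-auth config map for additional IAM principals; a sketch, with a hypothetical IAM user ARN, added inside the existing module "eks" block:

  # Requires the kubernetes provider to be configured for this cluster.
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::111122223333:user/console-user" # hypothetical user
      username = "console-user"
      groups   = ["system:masters"]
    }
  ]

Mapping a principal into system:masters grants full cluster access, which is why the console stops showing the RBAC warning for that user.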

pre_bootstrap_user_data in eks_managed_node_groups is unusable for BOTTLEROCKET and hence containerd cannot be used

pre_bootstrap_user_data = <<-EOT (shell-style user data) cannot be used for BOTTLEROCKET_x86_64, as it throws: Ec2LaunchTemplateInvalidConfiguration: User data was not a TOML format that could be processed.
BOTTLEROCKET needs to be used for containerd on the EKS nodes rather than the default AL2_x86_64.

Presently, none of the Bottlerocket AMI types in this repo (BOTTLEROCKET_ARM_64, BOTTLEROCKET_x86_64, BOTTLEROCKET_ARM_64_NVIDIA, BOTTLEROCKET_x86_64_NVIDIA) can be used without changes.
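
A minimal sketch of what does work for Bottlerocket, assuming module version 18.x or later (ami_type, platform, and bootstrap_extra_args are the module's node-group options; the kernel setting is only an example of TOML user data):

  eks_managed_node_groups = {
    bottlerocket = {
      ami_type = "BOTTLEROCKET_x86_64"
      platform = "bottlerocket"

      # Bottlerocket user data must be TOML, so extra settings go into
      # bootstrap_extra_args instead of shell-style pre_bootstrap_user_data.
      bootstrap_extra_args = <<-EOT
        [settings.kernel]
        lockdown = "integrity"
      EOT
    }
  }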

Error: Query Returned No Results (aws_ami)

Hello,

After performing terraform plan, I get the following error:

Error: Your query returned no results. Please change your search criteria and try again.
│ 
│   with module.eks.data.aws_ami.eks_worker[0],
│   on .terraform/modules/eks/data.tf line 20, in data "aws_ami" "eks_worker":
│   20: data "aws_ami" "eks_worker" {
│ 

Is the syntax outdated? I have installed all the prerequisites and updated modules/IAM credential access.
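
One thing worth checking (a sketch, not a confirmed fix): make sure the pinned module release and cluster_version still correspond to a published EKS-optimized worker AMI, since AMIs for old Kubernetes versions eventually stop matching the data source's filter:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.24.0" # pin explicitly so the AMI name filter is predictable
  cluster_name    = local.cluster_name
  cluster_version = "1.21"    # a release that still has EKS-optimized worker AMIs
  # ... rest of the configuration unchanged ...
}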

`terraform init` broken

Running terraform init on a fresh clone of this repo results in an error.

Steps to reproduce

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster
cd learn-terraform-provision-eks-cluster
terraform init

Actual result

❯ terraform init
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 19.5.1 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 1.1.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.19.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing Terraform Cloud...
╷
│ Error: Invalid or missing required argument
│
│   on terraform.tf line 3, in terraform:
│    3:   cloud {
│
│ "organization" must be set in the cloud configuration or as an environment variable: TF_CLOUD_ORGANIZATION.

Expected result

The command should succeed.

Workaround

Adding organization = "my-example-org" to the cloud section (replace it with your own) seems to solve the issue:

terraform {

  cloud {
    organization = "my-example-org"
    workspaces {
      name = "learn-terraform-eks"
    }
  }

  required_providers {
...

System info

OS: macOS 13.2
Terraform: 1.3.7
Repo commit: 685bb97

Unsupported argument subnets: adapt sample code to module eks version >= 18.0

The current sample code doesn't work because it is not compatible with version 18 of the eks module:

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 5, in module "eks":
│    5:   subnets         = module.vpc.private_subnets
│
│ An argument named "subnets" is not expected here.

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 15, in module "eks":
│   15:   workers_group_defaults = {
│
│ An argument named "workers_group_defaults" is not expected here.

│ Error: Unsupported argument
│
│   on eks-cluster.tf line 19, in module "eks":
│   19:   worker_groups = [
│
│ An argument named "worker_groups" is not expected here.

There is a workaround to make it work with version 17 here: #53 (comment)
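
A sketch of that version-17 workaround, pinning the module so the old argument names are still accepted (versions and instance types are illustrative):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.24" # last major line that accepts subnets / worker_groups

  cluster_name    = local.cluster_name
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name          = "worker-group-1"
      instance_type = "t2.small"
    }
  ]
}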

Why does terraform apply fail?

I'm trying to follow the tutorial, but terraform apply fails.

Steps to reproduce:

  1. Create an aws account.
  2. Create access keys (access key ID and secret access key)
  3. Run aws-cli docker image:
    docker run --rm -it --entrypoint bash amazon/aws-cli:latest
  4. In the running container, install terraform.
  5. Provide your access keys:
    aws configure
  6. Clone the repository:
    git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster
  7. Run terraform apply:
    cd learn-terraform-provision-eks-cluster && terraform apply --auto-approve

I provided my root user access keys (aws configure) and did the above. The EKS cluster is created, the autoscaling groups are created, and the EC2 instances are running, but terraform apply fails with this output:

.
.
.
module.eks.null_resource.wait_for_cluster[0]: Still creating... [50s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [40s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [40s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Creation complete after 41s [id=education-eks-Lof9Mf4j-worker-group-120210315132910683800000014]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m0s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Creation complete after 1m5s [id=9027141840267261715]
data.aws_eks_cluster_auth.cluster: Reading...
data.aws_eks_cluster.cluster: Reading...
data.aws_eks_cluster_auth.cluster: Read complete after 0s [id=education-eks-Lof9Mf4j]
data.aws_eks_cluster.cluster: Read complete after 0s [id=education-eks-Lof9Mf4j]
module.eks.kubernetes_config_map.aws_auth[0]: Creating...
module.eks.aws_autoscaling_group.workers[1]: Still creating... [1m0s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [1m10s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Creation complete after 1m12s [id=education-eks-Lof9Mf4j-worker-group-220210315132910701500000015]

Error: Unauthorized

And here is the last snapshot in logs:

-----------------------------------------------------: timestamp=2021-03-15T15:17:57.245Z
2021-03-15T15:17:57.246Z [INFO]  plugin.terraform-provider-aws_v3.25.0_x5: 2021/03/15 15:17:57 [DEBUG] [aws-sdk-go] <DescribeAutoScalingGroupsResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
  <DescribeAutoScalingGroupsResult>
    <AutoScalingGroups>
      <member>
        <HealthCheckType>EC2</HealthCheckType>
        <Instances>
          <member>
            <LaunchConfigurationName>education-eks-BuAAAvsR-worker-group-220210315151705272900000013</LaunchConfigurationName>
            <LifecycleState>InService</LifecycleState>
            <InstanceId>i-04fb7f82204a614b1</InstanceId>
            <HealthStatus>Healthy</HealthStatus>
            <InstanceType>t2.medium</InstanceType>
            <ProtectedFromScaleIn>false</ProtectedFromScaleIn>
            <AvailabilityZone>us-east-2c</AvailabilityZone>
          </member>
        </Instances>
        <TerminationPolicies>
          <member>Default</member>
        </TerminationPolicies>
        <DefaultCooldown>300</DefaultCooldown>
        <AutoScalingGroupARN>arn:aws:autoscaling:us-east-2:008082804869:autoScalingGroup:66e234e4-1b87-4bb7-aef5-eae21601a813:autoScalingGroupName/education-eks-BuAAAvsR-worker-group-220210315151716231500000014</AutoScalingGroupARN>
        <EnabledMetrics/>
        <MaxSize>3</MaxSize>
        <AvailabilityZones>
          <member>us-east-2a</member>
          <member>us-east-2b</member>
          <member>us-east-2c</member>
        </AvailabilityZones>
        <TargetGroupARNs/>
onfigurationName>education-eks-BuAAAvsR-worker-group-220210315151705272900000013</LaunchConfigurationName>
        <AutoScalingGroupName>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</AutoScalingGroupName>
        <HealthCheckGracePeriod>300</HealthCheckGracePeriod>
        <NewInstancesProtectedFromScaleIn>false</NewInstancesProtectedFromScaleIn>
        <CreatedTime>2021-03-15T15:17:16.848Z</CreatedTime>
        <MinSize>1</MinSize>
        <LoadBalancerNames/>
        <Tags>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
            <Value>training</Value>
            <Key>Environment</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
            <Value>terraform-aws-modules</Value>
            <Key>GithubOrg</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
            <Value>terraform-aws-eks</Value>
            <Key>GithubRepo</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
            <Value>education-eks-BuAAAvsR-worker-group-2-eks_asg</Value>
            <Key>Name</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
alue>owned</Value>
            <Key>k8s.io/cluster/education-eks-BuAAAvsR</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
          <member>
            <ResourceId>education-eks-BuAAAvsR-worker-group-220210315151716231500000014</ResourceId>
            <PropagateAtLaunch>true</PropagateAtLaunch>
            <Value>owned</Value>
            <Key>kubernetes.io/cluster/education-eks-BuAAAvsR</Key>
            <ResourceType>auto-scaling-group</ResourceType>
          </member>
        </Tags>
        <ServiceLinkedRoleARN>arn:aws:iam::008082804869:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling</ServiceLinkedRoleARN>
        <SuspendedProcesses>
          <member>
            <ProcessName>AZRebalance</ProcessName>
            <SuspensionReason>User suspended at 2021-03-15T15:17:56Z</SuspensionReason>
          </member>
        </SuspendedProcesses>
        <DesiredCapacity>1</DesiredCapacity>
        <VPCZoneIdentifier>subnet-07746a16929a6d25e,subnet-0fabb7770ed770def,subnet-0dd048ae9087d8308</VPCZoneIdentifier>
      </member>
    </AutoScalingGroups>
  </DescribeAutoScalingGroupsResult>
  <ResponseMetadata>
    <RequestId>d5dab1f1-a153-45ec-a6ba-990de96cc822</RequestId>
  </ResponseMetadata>
</DescribeAutoScalingGroupsResponse>: timestamp=2021-03-15T15:17:57.246Z
2021/03/15 15:17:57 [WARN] Provider "registry.terraform.io/hashicorp/aws" produced an unexpected new value for module.eks.aws_autoscaling_group.workers[0], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .placement_group: was null, but now cty.StringVal("")
      - .capacity_rebalance: was null, but now cty.False
2021/03/15 15:17:57 [WARN] Provider "registry.terraform.io/hashicorp/aws" produced an unexpected new value for module.eks.aws_autoscaling_group.workers[1], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .placement_group: was null, but now cty.StringVal("")
      - .capacity_rebalance: was null, but now cty.False
2021-03-15T15:17:57.342Z [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-15T15:17:57.349Z [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/aws/3.25.0/linux_amd64/terraform-provider-aws_v3.25.0_x5 pid=481
2021-03-15T15:17:57.349Z [DEBUG] plugin: plugin exited
2021-03-15T15:17:57.416Z [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-15T15:17:57.416Z [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-15T15:17:57.418Z [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.0.1/linux_amd64/terraform-provider-kubernetes_v2.0.1_x5 pid=507
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin exited
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin process exited: path=/usr/local/bin/terraform pid=408
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin exited
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin process exited: path=/usr/local/bin/terraform pid=359
2021-03-15T15:17:57.418Z [DEBUG] plugin: plugin exited

Any suggestions?

configmaps aws_auth connection refused

I tried running terraform apply, and got this error.

Error: Post http://localhost/api/v1/namespaces/kube-system/configmaps: dial tcp 127.0.0.1:80: connect: connection refused

  on .terraform/modules/eks/terraform-aws-eks-11.1.0/aws_auth.tf line 62, in resource "kubernetes_config_map" "aws_auth":
  62: resource "kubernetes_config_map" "aws_auth" {

What could I be doing wrong here? I originally opened terraform-aws-modules/terraform-aws-eks#853, but was informed that it could be an issue with this repo.
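
For reference, the localhost address usually means the kubernetes provider had no cluster endpoint to use; a sketch of wiring it to the EKS data sources (the token-based variant shown here is one of several options):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}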

EKS Module does not allow clean destroy (dependency violation)

It seems this EKS module implementation does not allow clean destroy.

I get the following on terraform destroy without modification of the code:


Error: error deleting EC2 Subnet (subnet-0cc3d27fc54396aea): DependencyViolation: The subnet 'subnet-0cc3d27fc54396aea' has dependencies and cannot be deleted.
        status code: 400, request id: fc2374cc-b57e-491b-8d0e-7ff5e0ea8c04



Error: error deleting EC2 Subnet (subnet-0aa0c6858a9610951): DependencyViolation: The subnet 'subnet-0aa0c6858a9610951' has dependencies and cannot be deleted.
        status code: 400, request id: 7e0adef1-f9cf-4f77-bd53-76ee3cadddb4



Error: error deleting EC2 Subnet (subnet-06c15b8c3fe729839): DependencyViolation: The subnet 'subnet-06c15b8c3fe729839' has dependencies and cannot be deleted.
        status code: 400, request id: 7abb9076-1f27-42a7-aa58-55f9d6c0684f



Error: error deleting EC2 VPC (vpc-085a1ba062c00608c): DependencyViolation: The vpc 'vpc-085a1ba062c00608c' has dependencies and cannot be deleted.
        status code: 400, request id: 86fc16a6-7486-4c13-ad8a-3ecc10d2f17
Error: error detaching EC2 Internet Gateway (igw-017db3cdb54073820) from VPC (vpc-085a1ba062c00608c): DependencyViolation: Network vpc-085a1ba062c00608c has some mapped public address(es). Please unmap those public address(es) before detaching the gateway.
        status code: 400, request id: f21a4901-278b-423a-83e6-9dec07fa8fc2


NOTE: I found the following article useful in cleanup: https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-dependency-error-delete-vpc/

Error updating cluster version from 1.19 to 1.20

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.20" ### <<< changed from 1.19
  subnets         = module.vpc.private_subnets

OUTPUT:
terraform plan

module.eks.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Your query returned no results. Please change your search criteria and try again.
│ 
│   with module.eks.data.aws_ami.eks_worker_windows,
│   on .terraform/modules/eks/data.tf line 27, in data "aws_ami" "eks_worker_windows":
│   27: data "aws_ami" "eks_worker_windows" {

secrets is forbidden

After following the whole process and setting up the Kubernetes dashboard, I got this in the notifications:

secrets is forbidden: User "system:serviceaccount:kube-system:service-controller" cannot list resource "secrets" in API group "" in the namespace "default"
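
One way around this kind of RBAC denial on a training cluster (a sketch only; granting cluster-admin this broadly is not advisable outside a sandbox, and the binding name is illustrative) is a kubernetes_cluster_role_binding for the service account in the error:

resource "kubernetes_cluster_role_binding" "service_controller_admin" {
  metadata {
    name = "service-controller-admin" # illustrative name
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "service-controller"
    namespace = "kube-system"
  }
}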

Including metadata for node groups

Hi,

This is more of a question :).

Is there any way I can label the node groups? (In this case I am assuming the worker_groups in eks-cluster.tf.)

Something like this?

  worker_groups = [
    {
      name = "worker-group"
      metadata = [{
          role                 = "worker"
          os                   = "linux"
      }]
      tags = [{
          key                 = "worker-group-tag"
          value               = "worker-group-1"
          propagate_at_launch = true
      }]
    }
  ]
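
With the older worker_groups interface used in this repo, the usual sketch is that Kubernetes labels go through kubelet_extra_args while EC2/ASG tags keep the key/value/propagate_at_launch shape (attribute names assumed from the 17.x module):

  worker_groups = [
    {
      name          = "worker-group"
      instance_type = "t2.small"

      # Kubernetes node labels are applied by the kubelet, not via EC2 metadata.
      kubelet_extra_args = "--node-labels=role=worker,os=linux"

      # EC2 / Auto Scaling group tags.
      tags = [
        {
          key                 = "worker-group-tag"
          value               = "worker-group-1"
          propagate_at_launch = true
        }
      ]
    }
  ]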

terraform destroy errors with config map

Version:
terraform 0.14.5

Platform:
Ubuntu 20.04 (within virtual box on a windows machine)

After deploying (successfully) I attempted to remove with terraform destroy. It failed once with:

Error: Unauthorized

I retried destroying (without changing anything) and it proceeded to delete everything except module.eks.kubernetes_config_map.aws_auth[0]. Even after multiple attempts, it kept giving me the same error:

Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused

I'm not sure but what I could find was another issue that has a similar error but on deploy: #12

Error: Unauthorized on .terraform/modules/eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth":

Description

Fails to create module.eks.kubernetes_config_map.aws_auth because of an unauthorized error.

Versions

  • Terraform:
    Terraform v0.14.8
  • Provider(s):
+ provider registry.terraform.io/hashicorp/aws v3.34.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.3
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Reproduction

Steps to reproduce the behavior:

Are you using workspaces? Yes.
Have you cleared the local cache (see Notice section above)? Yes.
List steps in order that led up to the issue you encountered:

terraform apply -var-file=prod.tfvars

Code Snippet to Reproduce

eks-cluster.tf

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.k8s_name
  cluster_version = "1.18"
  subnets         = data.terraform_remote_state.networking.outputs.private_subnets

  tags = {
    Terraform = "true"
    Environment = local.workspace
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }

  vpc_id = data.terraform_remote_state.networking.outputs.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [data.terraform_remote_state.networking.outputs.worker_group_mgmt_one_id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [data.terraform_remote_state.networking.outputs.worker_group_mgmt_two_id]
      asg_desired_capacity          = 1
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

kubernetes.tf

# Kubernetes provider
# https://learn.hashicorp.com/terraform/kubernetes/provision-eks-cluster#optional-configure-terraform-kubernetes-provider
# To learn how to schedule deployments and services using the provider, go here: https://learn.hashicorp.com/terraform/kubernetes/deploy-nginx-kubernetes

# The Kubernetes provider is included in this file so the EKS module can complete successfully. Otherwise, it throws an error when creating `kubernetes_config_map.aws_auth`.
# You should **not** schedule deployments and services in this workspace. This keeps workspaces modular (one for provision EKS, another for scheduling Kubernetes resources) as per best practices.

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      data.aws_eks_cluster.cluster.name
    ]
  }
}

Expected behavior

Create module.eks.kubernetes_config_map.aws_auth[0]

Actual behavior

Fails to create it.

Terminal Output Screenshot(s)

Error: Unauthorized

  on .terraform/modules/eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth":
  65: resource "kubernetes_config_map" "aws_auth" {

Additional context

The aws provider is pointing to a profile that has AdministratorAccess.

provider "aws" {
  region = var.region
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = "terraform"
}
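
Since the aws provider uses the "terraform" profile but aws eks get-token runs with whatever credentials the AWS CLI resolves by default, one hedged fix is to make the exec block use the same profile:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    env = {
      AWS_PROFILE = "terraform" # match the profile used by the aws provider
    }
  }
}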

please add examples for adding windows worker node

Hello,

With the help of these EKS Terraform examples, we were able to provision large-scale DEV, STAGING, and PRODUCTION AWS EKS environments. However, please add examples of adding Windows worker nodes to existing AWS EKS clusters.

Using "eks-worker" along with a node group configured for Linux, we are able to provision Linux-based workers. For Windows, from what we could find, the module has "eks-worker-windows" (or "eks_worker_windows") configured along with a custom node group for Windows support. We need examples of a Windows node group as well as the Windows EKS worker so that best practices can be followed.

This would be appreciated.
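
A sketch of what a Windows worker group might look like with this module (the platform key is what selects the eks_worker_windows AMI; treat the exact attribute names as assumptions to verify against your module version, and note that EKS still requires at least one Linux node group for system pods such as CoreDNS):

  worker_groups = [
    {
      name          = "worker-group-linux"
      instance_type = "t2.small"
    },
    {
      name          = "worker-group-windows"
      instance_type = "t3.large"
      platform      = "windows" # switches the AMI lookup and userdata to Windows
    }
  ]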

Problem provisioning (wget issue)

I am getting the following error, which seems to have something to do with wget.

ks.aws_launch_configuration.workers[1]: Still creating... [10s elapsed]
module.eks.aws_launch_configuration.workers[0]: Creation complete after 10s [id=training-eks-W5YjPdfh-worker-group-120200724013458477900000010]
module.eks.aws_launch_configuration.workers[1]: Creation complete after 10s [id=training-eks-W5YjPdfh-worker-group-220200724013458484400000011]
module.eks.random_pet.workers[0]: Creating...
module.eks.random_pet.workers[1]: Creating...
module.eks.random_pet.workers[0]: Creation complete after 0s [id=capable-snipe]
module.eks.random_pet.workers[1]: Creation complete after 0s [id=deep-wombat]
module.eks.aws_autoscaling_group.workers[0]: Creating...
module.eks.aws_autoscaling_group.workers[1]: Creating...
module.eks.null_resource.wait_for_cluster[0] (local-exec): /bin/sh: wget: command not found
module.eks.null_resource.wait_for_cluster[0]: Still creating... [20s elapsed]
module.eks.null_resource.wait_for_cluster[0] (local-exec): /bin/sh: wget: command not found
module.eks.aws_autoscaling_group.workers[0]: Still creating... [10s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [10s elapsed]
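
The wait_for_cluster null_resource shells out to wget by default, so either install wget locally or (a sketch, assuming a module version that still exposes this variable) switch the health check to curl:

module "eks" {
  # ... existing arguments ...

  # Replace the default wget-based readiness check with curl.
  wait_for_cluster_cmd = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"
}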

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

Hello,

During terraform apply I run into this issue:

module.eks.aws_autoscaling_group.workers[0]: Creation complete after 37s [id=training-eks-ibKAXDyK-worker-group-120200630082429115800000012]
module.eks.aws_autoscaling_group.workers[1]: Creation complete after 37s [id=training-eks-ibKAXDyK-worker-group-220200630082429109300000011]

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

on .terraform/modules/eks/aws_auth.tf line 64, in resource "kubernetes_config_map" "aws_auth":
64: resource "kubernetes_config_map" "aws_auth" {

Also, I found that the node groups weren't created.

Timeout error running on Mac

I checked out this code and ran it without any changes.
Just by checking the AWS console, it seems like the Kubernetes cluster node group hasn't been created, even though I can see the 3 VMs being spun up (two t2.small, one t2.medium). See logs below.

$ terraform apply
data.aws_availability_zones.available: Refreshing state...
module.eks.data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state...
module.eks.data.aws_ami.eks_worker: Refreshing state...
module.eks.data.aws_partition.current: Refreshing state...
module.eks.data.aws_caller_identity.current: Refreshing state...
module.eks.data.aws_ami.eks_worker_windows: Refreshing state...
module.eks.data.aws_iam_policy_document.cluster_elb_sl_role_creation[0]: Refreshing state...
module.eks.data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.aws_eks_cluster.cluster will be read during apply
  # (config refers to values not yet known)
 <= data "aws_eks_cluster" "cluster"  {
      + arn                       = (known after apply)
      + certificate_authority     = (known after apply)
      + created_at                = (known after apply)
      + enabled_cluster_log_types = (known after apply)
      + endpoint                  = (known after apply)
      + id                        = (known after apply)
      + identity                  = (known after apply)
      + name                      = (known after apply)
      + platform_version          = (known after apply)
      + role_arn                  = (known after apply)
      + status                    = (known after apply)
      + tags                      = (known after apply)
      + version                   = (known after apply)
      + vpc_config                = (known after apply)
    }

  # data.aws_eks_cluster_auth.cluster will be read during apply
  # (config refers to values not yet known)
 <= data "aws_eks_cluster_auth" "cluster"  {
      + id    = (known after apply)
      + name  = (known after apply)
      + token = (sensitive value)
    }

  # aws_security_group.all_worker_mgmt will be created
  + resource "aws_security_group" "all_worker_mgmt" {
      + arn                    = (known after apply)
      + description            = "Managed by Terraform"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "10.0.0.0/8",
                  + "172.16.0.0/12",
                  + "192.168.0.0/16",
                ]
              + description      = ""
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = (known after apply)
      + name_prefix            = "all_worker_management"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = (known after apply)
    }

  # aws_security_group.worker_group_mgmt_one will be created
  + resource "aws_security_group" "worker_group_mgmt_one" {
      + arn                    = (known after apply)
      + description            = "Managed by Terraform"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "10.0.0.0/8",
                ]
              + description      = ""
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = (known after apply)
      + name_prefix            = "worker_group_mgmt_one"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = (known after apply)
    }

  # aws_security_group.worker_group_mgmt_two will be created
  + resource "aws_security_group" "worker_group_mgmt_two" {
      + arn                    = (known after apply)
      + description            = "Managed by Terraform"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "192.168.0.0/16",
                ]
              + description      = ""
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = (known after apply)
      + name_prefix            = "worker_group_mgmt_two"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = (known after apply)
    }

  # random_string.suffix will be created
  + resource "random_string" "suffix" {
      + id          = (known after apply)
      + length      = 8
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + result      = (known after apply)
      + special     = false
      + upper       = true
    }

  # module.eks.data.template_file.userdata[0] will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "userdata"  {
      + id       = (known after apply)
      + rendered = (known after apply)
      + template = (known after apply)
      + vars     = (known after apply)
    }

  # module.eks.data.template_file.userdata[1] will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "userdata"  {
      + id       = (known after apply)
      + rendered = (known after apply)
      + template = (known after apply)
      + vars     = (known after apply)
    }

  # module.eks.aws_autoscaling_group.workers[0] will be created
  + resource "aws_autoscaling_group" "workers" {
      + arn                       = (known after apply)
      + availability_zones        = (known after apply)
      + default_cooldown          = (known after apply)
      + desired_capacity          = (known after apply)
      + enabled_metrics           = (known after apply)
      + force_delete              = (known after apply)
      + health_check_grace_period = (known after apply)
      + health_check_type         = (known after apply)
      + id                        = (known after apply)
      + launch_configuration      = (known after apply)
      + load_balancers            = (known after apply)
      + max_instance_lifetime     = (known after apply)
      + max_size                  = (known after apply)
      + metrics_granularity       = "1Minute"
      + min_size                  = (known after apply)
      + name                      = (known after apply)
      + name_prefix               = (known after apply)
      + placement_group           = (known after apply)
      + protect_from_scale_in     = (known after apply)
      + service_linked_role_arn   = (known after apply)
      + suspended_processes       = (known after apply)
      + target_group_arns         = (known after apply)
      + termination_policies      = (known after apply)
      + vpc_zone_identifier       = (known after apply)
      + wait_for_capacity_timeout = "10m"

      + tag {
          + key                 = (known after apply)
          + propagate_at_launch = (known after apply)
          + value               = (known after apply)
        }
    }

  # module.eks.aws_autoscaling_group.workers[1] will be created
  + resource "aws_autoscaling_group" "workers" {
      + arn                       = (known after apply)
      + availability_zones        = (known after apply)
      + default_cooldown          = (known after apply)
      + desired_capacity          = (known after apply)
      + enabled_metrics           = (known after apply)
      + force_delete              = (known after apply)
      + health_check_grace_period = (known after apply)
      + health_check_type         = (known after apply)
      + id                        = (known after apply)
      + launch_configuration      = (known after apply)
      + load_balancers            = (known after apply)
      + max_instance_lifetime     = (known after apply)
      + max_size                  = (known after apply)
      + metrics_granularity       = "1Minute"
      + min_size                  = (known after apply)
      + name                      = (known after apply)
      + name_prefix               = (known after apply)
      + placement_group           = (known after apply)
      + protect_from_scale_in     = (known after apply)
      + service_linked_role_arn   = (known after apply)
      + suspended_processes       = (known after apply)
      + target_group_arns         = (known after apply)
      + termination_policies      = (known after apply)
      + vpc_zone_identifier       = (known after apply)
      + wait_for_capacity_timeout = "10m"

      + tag {
          + key                 = (known after apply)
          + propagate_at_launch = (known after apply)
          + value               = (known after apply)
        }
    }

  # module.eks.aws_eks_cluster.this[0] will be created
  + resource "aws_eks_cluster" "this" {
      + arn                   = (known after apply)
      + certificate_authority = (known after apply)
      + created_at            = (known after apply)
      + endpoint              = (known after apply)
      + id                    = (known after apply)
      + identity              = (known after apply)
      + name                  = (known after apply)
      + platform_version      = (known after apply)
      + role_arn              = (known after apply)
      + status                = (known after apply)
      + tags                  = {
          + "Environment" = "training"
          + "GithubOrg"   = "terraform-aws-modules"
          + "GithubRepo"  = "terraform-aws-eks"
        }
      + version               = "1.17"

      + timeouts {
          + create = "30m"
          + delete = "15m"
        }

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = false
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "0.0.0.0/0",
            ]
          + security_group_ids        = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # module.eks.aws_iam_instance_profile.workers[0] will be created
  + resource "aws_iam_instance_profile" "workers" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = (known after apply)
      + path        = "/"
      + role        = (known after apply)
      + unique_id   = (known after apply)
    }

  # module.eks.aws_iam_instance_profile.workers[1] will be created
  + resource "aws_iam_instance_profile" "workers" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = (known after apply)
      + path        = "/"
      + role        = (known after apply)
      + unique_id   = (known after apply)
    }

  # module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
  + resource "aws_iam_policy" "cluster_elb_sl_role_creation" {
      + arn         = (known after apply)
      + description = "Permissions for EKS to create AWSServiceRoleForElasticLoadBalancing service-linked role"
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "ec2:DescribeInternetGateways",
                          + "ec2:DescribeAccountAttributes",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                      + Sid      = ""
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
    }

  # module.eks.aws_iam_role.cluster[0] will be created
  + resource "aws_iam_role" "cluster" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "eks.amazonaws.com"
                        }
                      + Sid       = "EKSClusterAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = true
      + id                    = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "Environment" = "training"
          + "GithubOrg"   = "terraform-aws-modules"
          + "GithubRepo"  = "terraform-aws-eks"
        }
      + unique_id             = (known after apply)
    }

  # module.eks.aws_iam_role.workers[0] will be created
  + resource "aws_iam_role" "workers" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                      + Sid       = "EKSWorkerAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = true
      + id                    = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "Environment" = "training"
          + "GithubOrg"   = "terraform-aws-modules"
          + "GithubRepo"  = "terraform-aws-eks"
        }
      + unique_id             = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSVPCResourceControllerPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_elb_sl_role_creation" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0] will be created
  + resource "aws_iam_role_policy_attachment" "workers_AmazonEC2ContainerRegistryReadOnly" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0] will be created
  + resource "aws_iam_role_policy_attachment" "workers_AmazonEKSWorkerNodePolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0] will be created
  + resource "aws_iam_role_policy_attachment" "workers_AmazonEKS_CNI_Policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = (known after apply)
    }

  # module.eks.aws_launch_configuration.workers[0] will be created
  + resource "aws_launch_configuration" "workers" {
      + arn                         = (known after apply)
      + associate_public_ip_address = (known after apply)
      + ebs_optimized               = (known after apply)
      + enable_monitoring           = (known after apply)
      + iam_instance_profile        = (known after apply)
      + id                          = (known after apply)
      + image_id                    = (known after apply)
      + instance_type               = (known after apply)
      + key_name                    = (known after apply)
      + name                        = (known after apply)
      + name_prefix                 = (known after apply)
      + placement_tenancy           = (known after apply)
      + security_groups             = (known after apply)
      + spot_price                  = (known after apply)
      + user_data_base64            = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = true
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # module.eks.aws_launch_configuration.workers[1] will be created
  + resource "aws_launch_configuration" "workers" {
      + arn                         = (known after apply)
      + associate_public_ip_address = (known after apply)
      + ebs_optimized               = (known after apply)
      + enable_monitoring           = (known after apply)
      + iam_instance_profile        = (known after apply)
      + id                          = (known after apply)
      + image_id                    = (known after apply)
      + instance_type               = (known after apply)
      + key_name                    = (known after apply)
      + name                        = (known after apply)
      + name_prefix                 = (known after apply)
      + placement_tenancy           = (known after apply)
      + security_groups             = (known after apply)
      + spot_price                  = (known after apply)
      + user_data_base64            = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = true
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # module.eks.aws_security_group.cluster[0] will be created
  + resource "aws_security_group" "cluster" {
      + arn                    = (known after apply)
      + description            = "EKS cluster security group."
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = (known after apply)
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group.workers[0] will be created
  + resource "aws_security_group" "workers" {
      + arn                    = (known after apply)
      + description            = "Security group for all nodes in the cluster."
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = (known after apply)
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group_rule.cluster_egress_internet[0] will be created
  + resource "aws_security_group_rule" "cluster_egress_internet" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + description              = "Allow cluster egress access to the Internet."
      + from_port                = 0
      + id                       = (known after apply)
      + protocol                 = "-1"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 0
      + type                     = "egress"
    }

  # module.eks.aws_security_group_rule.cluster_https_worker_ingress[0] will be created
  + resource "aws_security_group_rule" "cluster_https_worker_ingress" {
      + description              = "Allow pods to communicate with the EKS cluster API."
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.workers_egress_internet[0] will be created
  + resource "aws_security_group_rule" "workers_egress_internet" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + description              = "Allow nodes all egress to the Internet."
      + from_port                = 0
      + id                       = (known after apply)
      + protocol                 = "-1"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 0
      + type                     = "egress"
    }

  # module.eks.aws_security_group_rule.workers_ingress_cluster[0] will be created
  + resource "aws_security_group_rule" "workers_ingress_cluster" {
      + description              = "Allow workers pods to receive communication from the cluster control plane."
      + from_port                = 1025
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.workers_ingress_cluster_https[0] will be created
  + resource "aws_security_group_rule" "workers_ingress_cluster_https" {
      + description              = "Allow pods running extension API servers on port 443 to receive communication from cluster control plane."
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.workers_ingress_self[0] will be created
  + resource "aws_security_group_rule" "workers_ingress_self" {
      + description              = "Allow node to communicate with each other."
      + from_port                = 0
      + id                       = (known after apply)
      + protocol                 = "-1"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "ingress"
    }

  # module.eks.kubernetes_config_map.aws_auth[0] will be created
  + resource "kubernetes_config_map" "aws_auth" {
      + data = (known after apply)
      + id   = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "aws-auth"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.eks.local_file.kubeconfig[0] will be created
  + resource "local_file" "kubeconfig" {
      + content              = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0644"
      + filename             = (known after apply)
      + id                   = (known after apply)
    }

  # module.eks.null_resource.wait_for_cluster[0] will be created
  + resource "null_resource" "wait_for_cluster" {
      + id = (known after apply)
    }

  # module.eks.random_pet.workers[0] will be created
  + resource "random_pet" "workers" {
      + id        = (known after apply)
      + keepers   = (known after apply)
      + length    = 2
      + separator = "-"
    }

  # module.eks.random_pet.workers[1] will be created
  + resource "random_pet" "workers" {
      + id        = (known after apply)
      + keepers   = (known after apply)
      + length    = 2
      + separator = "-"
    }

  # module.vpc.aws_eip.nat[0] will be created
  + resource "aws_eip" "nat" {
      + allocation_id     = (known after apply)
      + association_id    = (known after apply)
      + customer_owned_ip = (known after apply)
      + domain            = (known after apply)
      + id                = (known after apply)
      + instance          = (known after apply)
      + network_interface = (known after apply)
      + private_dns       = (known after apply)
      + private_ip        = (known after apply)
      + public_dns        = (known after apply)
      + public_ip         = (known after apply)
      + public_ipv4_pool  = (known after apply)
      + tags              = (known after apply)
      + vpc               = true
    }

  # module.vpc.aws_internet_gateway.this[0] will be created
  + resource "aws_internet_gateway" "this" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = (known after apply)
      + vpc_id   = (known after apply)
    }

  # module.vpc.aws_nat_gateway.this[0] will be created
  + resource "aws_nat_gateway" "this" {
      + allocation_id        = (known after apply)
      + id                   = (known after apply)
      + network_interface_id = (known after apply)
      + private_ip           = (known after apply)
      + public_ip            = (known after apply)
      + subnet_id            = (known after apply)
      + tags                 = (known after apply)
    }

  # module.vpc.aws_route.private_nat_gateway[0] will be created
  + resource "aws_route" "private_nat_gateway" {
      + destination_cidr_block     = "0.0.0.0/0"
      + destination_prefix_list_id = (known after apply)
      + egress_only_gateway_id     = (known after apply)
      + gateway_id                 = (known after apply)
      + id                         = (known after apply)
      + instance_id                = (known after apply)
      + instance_owner_id          = (known after apply)
      + local_gateway_id           = (known after apply)
      + nat_gateway_id             = (known after apply)
      + network_interface_id       = (known after apply)
      + origin                     = (known after apply)
      + route_table_id             = (known after apply)
      + state                      = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route.public_internet_gateway[0] will be created
  + resource "aws_route" "public_internet_gateway" {
      + destination_cidr_block     = "0.0.0.0/0"
      + destination_prefix_list_id = (known after apply)
      + egress_only_gateway_id     = (known after apply)
      + gateway_id                 = (known after apply)
      + id                         = (known after apply)
      + instance_id                = (known after apply)
      + instance_owner_id          = (known after apply)
      + local_gateway_id           = (known after apply)
      + nat_gateway_id             = (known after apply)
      + network_interface_id       = (known after apply)
      + origin                     = (known after apply)
      + route_table_id             = (known after apply)
      + state                      = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route_table.private[0] will be created
  + resource "aws_route_table" "private" {
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = (known after apply)
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table.public[0] will be created
  + resource "aws_route_table" "public" {
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = (known after apply)
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[0] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[1] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[2] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[0] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[1] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[2] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_subnet.private[0] will be created
  + resource "aws_subnet" "private" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2a"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.1.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_subnet.private[1] will be created
  + resource "aws_subnet" "private" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2b"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.2.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_subnet.private[2] will be created
  + resource "aws_subnet" "private" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2c"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.3.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_subnet.public[0] will be created
  + resource "aws_subnet" "public" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2a"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.4.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = true
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_subnet.public[1] will be created
  + resource "aws_subnet" "public" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2b"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.5.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = true
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_subnet.public[2] will be created
  + resource "aws_subnet" "public" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "us-east-2c"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.6.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = true
      + owner_id                        = (known after apply)
      + tags                            = (known after apply)
      + vpc_id                          = (known after apply)
    }

  # module.vpc.aws_vpc.this[0] will be created
  + resource "aws_vpc" "this" {
      + arn                              = (known after apply)
      + assign_generated_ipv6_cidr_block = false
      + cidr_block                       = "10.0.0.0/16"
      + default_network_acl_id           = (known after apply)
      + default_route_table_id           = (known after apply)
      + default_security_group_id        = (known after apply)
      + dhcp_options_id                  = (known after apply)
      + enable_classiclink               = (known after apply)
      + enable_classiclink_dns_support   = (known after apply)
      + enable_dns_hostnames             = true
      + enable_dns_support               = true
      + id                               = (known after apply)
      + instance_tenancy                 = "default"
      + ipv6_association_id              = (known after apply)
      + ipv6_cidr_block                  = (known after apply)
      + main_route_table_id              = (known after apply)
      + owner_id                         = (known after apply)
      + tags                             = (known after apply)
    }

Plan: 54 to add, 0 to change, 0 to destroy.


Warning: Interpolation-only expressions are deprecated

  on .terraform/modules/vpc/outputs.tf line 353, in output "vpc_endpoint_sqs_id":
 353:   value       = "${element(concat(aws_vpc_endpoint.sqs.*.id, list("")), 0)}"

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and 11 more similar warnings elsewhere)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_string.suffix: Creating...
random_string.suffix: Creation complete after 0s [id=auUstq2U]
module.eks.aws_iam_role.cluster[0]: Creating...
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0]: Creating...
module.vpc.aws_vpc.this[0]: Creating...
module.vpc.aws_eip.nat[0]: Creating...
module.eks.aws_iam_role.cluster[0]: Creation complete after 1s [id=training-eks-auUstq2U20201023082434283600000001]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Creating...
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]: Creating...
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Creating...
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0]: Creation complete after 2s [id=arn:aws:iam::320707539151:policy/training-eks-auUstq2U-elb-sl-role-creation20201023082434284300000002]
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0]: Creating...
module.vpc.aws_eip.nat[0]: Creation complete after 2s [id=eipalloc-057050c23a686ae35]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]: Creation complete after 1s [id=training-eks-auUstq2U20201023082434283600000001-20201023082435800100000005]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Creation complete after 1s [id=training-eks-auUstq2U20201023082434283600000001-20201023082435792600000004]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Creation complete after 1s [id=training-eks-auUstq2U20201023082434283600000001-20201023082435786500000003]
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0]: Creation complete after 1s [id=training-eks-auUstq2U20201023082434283600000001-20201023082436142200000006]
module.vpc.aws_vpc.this[0]: Creation complete after 6s [id=vpc-00e017ab156ecf9ba]
module.vpc.aws_internet_gateway.this[0]: Creating...
module.vpc.aws_route_table.private[0]: Creating...
module.vpc.aws_subnet.public[2]: Creating...
module.vpc.aws_route_table.public[0]: Creating...
module.vpc.aws_subnet.private[0]: Creating...
module.vpc.aws_subnet.public[0]: Creating...
module.vpc.aws_subnet.private[1]: Creating...
aws_security_group.worker_group_mgmt_one: Creating...
aws_security_group.all_worker_mgmt: Creating...
aws_security_group.worker_group_mgmt_two: Creating...
module.vpc.aws_route_table.private[0]: Creation complete after 1s [id=rtb-038b34575027b15ba]
module.vpc.aws_subnet.public[1]: Creating...
module.vpc.aws_route_table.public[0]: Creation complete after 2s [id=rtb-0592297a8b0f655da]
module.vpc.aws_subnet.private[2]: Creating...
module.vpc.aws_subnet.private[0]: Creation complete after 2s [id=subnet-0460b61059b06492c]
module.eks.aws_security_group.cluster[0]: Creating...
module.vpc.aws_subnet.private[1]: Creation complete after 2s [id=subnet-0b5733c754e31662b]
module.eks.aws_security_group.workers[0]: Creating...
module.vpc.aws_subnet.public[2]: Creation complete after 2s [id=subnet-064502a2b20efacfe]
module.vpc.aws_subnet.public[0]: Creation complete after 2s [id=subnet-00ba793c73ab79e40]
module.vpc.aws_internet_gateway.this[0]: Creation complete after 2s [id=igw-028de28b22296799a]
module.vpc.aws_route.public_internet_gateway[0]: Creating...
module.vpc.aws_subnet.private[2]: Creation complete after 1s [id=subnet-012f1b5e655b9ccea]
module.vpc.aws_route_table_association.private[0]: Creating...
module.vpc.aws_route_table_association.private[2]: Creating...
module.vpc.aws_route_table_association.private[1]: Creating...
aws_security_group.worker_group_mgmt_one: Creation complete after 4s [id=sg-0c0a1dbe208750d97]
aws_security_group.all_worker_mgmt: Creation complete after 4s [id=sg-0eb6946fdd982274a]
module.vpc.aws_subnet.public[1]: Creation complete after 3s [id=subnet-071a3ebbe56a7e7e7]
module.vpc.aws_route_table_association.public[0]: Creating...
module.vpc.aws_route_table_association.public[1]: Creating...
module.vpc.aws_route_table_association.public[2]: Creating...
aws_security_group.worker_group_mgmt_two: Creation complete after 4s [id=sg-049ddd821bcd02eff]
module.vpc.aws_nat_gateway.this[0]: Creating...
module.vpc.aws_route_table_association.private[0]: Creation complete after 1s [id=rtbassoc-06117d231be287de1]
module.vpc.aws_route_table_association.private[2]: Creation complete after 1s [id=rtbassoc-034596eaf7abed0b2]
module.vpc.aws_route_table_association.private[1]: Creation complete after 1s [id=rtbassoc-067e35c391e782c0c]
module.vpc.aws_route.public_internet_gateway[0]: Creation complete after 2s [id=r-rtb-0592297a8b0f655da1080289494]
module.vpc.aws_route_table_association.public[0]: Creation complete after 0s [id=rtbassoc-0d02905f7deb3fae4]
module.vpc.aws_route_table_association.public[2]: Creation complete after 0s [id=rtbassoc-0f843945c010d2beb]
module.vpc.aws_route_table_association.public[1]: Creation complete after 0s [id=rtbassoc-0d683d65e6feea3af]
module.eks.aws_security_group.workers[0]: Creation complete after 3s [id=sg-0f8f21bc617e47c24]
module.eks.aws_security_group_rule.workers_ingress_self[0]: Creating...
module.eks.aws_security_group_rule.workers_egress_internet[0]: Creating...
module.eks.aws_security_group.cluster[0]: Creation complete after 3s [id=sg-0b23abd48f0596625]
module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Creating...
module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Creating...
module.eks.aws_security_group_rule.cluster_egress_internet[0]: Creating...
module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Creating...
module.eks.aws_security_group_rule.workers_ingress_self[0]: Creation complete after 1s [id=sgrule-2634587791]
module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Creation complete after 1s [id=sgrule-2217905385]
module.eks.aws_security_group_rule.workers_egress_internet[0]: Creation complete after 3s [id=sgrule-1787898607]
module.eks.aws_security_group_rule.cluster_egress_internet[0]: Creation complete after 3s [id=sgrule-3029531549]
module.eks.aws_eks_cluster.this[0]: Creating...
module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Creation complete after 4s [id=sgrule-3691370097]
module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Creation complete after 5s [id=sgrule-1760037517]
module.vpc.aws_nat_gateway.this[0]: Still creating... [10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [10s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [20s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [30s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [40s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [50s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m0s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m10s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m20s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m30s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m40s elapsed]
module.vpc.aws_nat_gateway.this[0]: Still creating... [1m50s elapsed]
module.vpc.aws_nat_gateway.this[0]: Creation complete after 1m51s [id=nat-06ee6136cd599207f]
module.vpc.aws_route.private_nat_gateway[0]: Creating...
module.vpc.aws_route.private_nat_gateway[0]: Creation complete after 2s [id=r-rtb-038b34575027b15ba1080289494]
module.eks.aws_eks_cluster.this[0]: Still creating... [1m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [2m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [3m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [4m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [5m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [6m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [7m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [8m50s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [9m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [9m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [9m20s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [9m30s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [9m40s elapsed]
module.eks.aws_eks_cluster.this[0]: Creation complete after 9m46s [id=training-eks-auUstq2U]
module.eks.data.template_file.userdata[1]: Reading...
module.eks.data.template_file.userdata[0]: Reading...
module.eks.null_resource.wait_for_cluster[0]: Creating...
module.eks.aws_iam_role.workers[0]: Creating...
module.eks.data.template_file.userdata[1]: Read complete after 0s [id=ef589e77497839a64827abc41c3442c6b4897aafd6c25082ccaca50a55d7da45]
module.eks.data.template_file.userdata[0]: Read complete after 0s [id=ef589e77497839a64827abc41c3442c6b4897aafd6c25082ccaca50a55d7da45]
module.eks.null_resource.wait_for_cluster[0]: Provisioning with 'local-exec'...
module.eks.null_resource.wait_for_cluster[0] (local-exec): Executing: ["/bin/sh" "-c" "for i in `seq 1 60`; do if `command -v wget > /dev/null`; then wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null && exit 0 || true; else curl -k -s $ENDPOINT/healthz >/dev/null && exit 0 || true;fi; sleep 5; done; echo TIMEOUT && exit 1"]
module.eks.local_file.kubeconfig[0]: Creating...
module.eks.local_file.kubeconfig[0]: Creation complete after 0s [id=0b8e6b92b40fb314ff7f01a74731c1233c997c67]
module.eks.aws_iam_role.workers[0]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343361120000000c]
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Creating...
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Creating...
module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Creating...
module.eks.aws_iam_instance_profile.workers[1]: Creating...
module.eks.aws_iam_instance_profile.workers[0]: Creating...
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343361120000000c-2020102308343517760000000f]
module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343361120000000c-20201023083435299600000011]
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343361120000000c-20201023083435258200000010]
module.eks.aws_iam_instance_profile.workers[0]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343453850000000e]
module.eks.aws_iam_instance_profile.workers[1]: Creation complete after 1s [id=training-eks-auUstq2U2020102308343453840000000d]
module.eks.aws_launch_configuration.workers[0]: Creating...
module.eks.aws_launch_configuration.workers[1]: Creating...
module.eks.null_resource.wait_for_cluster[0]: Still creating... [10s elapsed]
module.eks.aws_launch_configuration.workers[1]: Creation complete after 8s [id=training-eks-auUstq2U-worker-group-220201023083437061300000013]
module.eks.aws_launch_configuration.workers[0]: Still creating... [10s elapsed]
module.eks.aws_launch_configuration.workers[0]: Creation complete after 12s [id=training-eks-auUstq2U-worker-group-120201023083437050500000012]
module.eks.random_pet.workers[1]: Creating...
module.eks.random_pet.workers[0]: Creating...
module.eks.random_pet.workers[0]: Creation complete after 0s [id=set-worm]
module.eks.random_pet.workers[1]: Creation complete after 0s [id=immortal-mouse]
module.eks.aws_autoscaling_group.workers[1]: Creating...
module.eks.aws_autoscaling_group.workers[0]: Creating...
module.eks.null_resource.wait_for_cluster[0]: Still creating... [20s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [10s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [30s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [20s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [40s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [30s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [50s elapsed]
module.eks.aws_autoscaling_group.workers[1]: Still creating... [40s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [40s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Creation complete after 42s [id=training-eks-auUstq2U-worker-group-120201023083448449100000015]
module.eks.aws_autoscaling_group.workers[1]: Creation complete after 42s [id=training-eks-auUstq2U-worker-group-220201023083448448900000014]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m40s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [1m50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m40s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [2m50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m40s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [3m50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m40s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [4m50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m30s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m40s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [5m50s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [6m0s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [6m10s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [6m20s elapsed]
module.eks.null_resource.wait_for_cluster[0]: Still creating... [6m30s elapsed]
module.eks.null_resource.wait_for_cluster[0] (local-exec): TIMEOUT


Error: Error running command 'for i in `seq 1 60`; do if `command -v wget > /dev/null`; then wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null && exit 0 || true; else curl -k -s $ENDPOINT/healthz >/dev/null && exit 0 || true;fi; sleep 5; done; echo TIMEOUT && exit 1': exit status 1. Output: TIMEOUT

If I do a terraform plan after the installation, this is what I get:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

random_string.suffix: Refreshing state... [id=auUstq2U]
module.eks.data.aws_caller_identity.current: Refreshing state... [id=2020-10-23 08:23:25.324727 +0000 UTC]
module.eks.data.aws_partition.current: Refreshing state... [id=aws]
module.eks.data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state... [id=2764486067]
module.vpc.aws_vpc.this[0]: Refreshing state... [id=vpc-00e017ab156ecf9ba]
data.aws_availability_zones.available: Refreshing state... [id=us-east-2]
module.eks.data.aws_ami.eks_worker_windows: Refreshing state... [id=ami-05057ad276b1ef1d3]
module.eks.data.aws_ami.eks_worker: Refreshing state... [id=ami-0135903686f192ffe]
module.eks.data.aws_iam_policy_document.cluster_elb_sl_role_creation[0]: Refreshing state... [id=3407219844]
module.eks.aws_iam_role.cluster[0]: Refreshing state... [id=training-eks-auUstq2U20201023082434283600000001]
module.eks.data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state... [id=3778018924]
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0]: Refreshing state... [id=arn:aws:iam::320707539151:policy/training-eks-auUstq2U-elb-sl-role-creation20201023082434284300000002]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]: Refreshing state... [id=training-eks-auUstq2U20201023082434283600000001-20201023082435800100000005]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Refreshing state... [id=training-eks-auUstq2U20201023082434283600000001-20201023082435786500000003]
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Refreshing state... [id=training-eks-auUstq2U20201023082434283600000001-20201023082435792600000004]
module.vpc.aws_eip.nat[0]: Refreshing state... [id=eipalloc-057050c23a686ae35]
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0]: Refreshing state... [id=training-eks-auUstq2U20201023082434283600000001-20201023082436142200000006]
aws_security_group.all_worker_mgmt: Refreshing state... [id=sg-0eb6946fdd982274a]
aws_security_group.worker_group_mgmt_one: Refreshing state... [id=sg-0c0a1dbe208750d97]
module.eks.aws_security_group.cluster[0]: Refreshing state... [id=sg-0b23abd48f0596625]
aws_security_group.worker_group_mgmt_two: Refreshing state... [id=sg-049ddd821bcd02eff]
module.eks.aws_security_group.workers[0]: Refreshing state... [id=sg-0f8f21bc617e47c24]
module.vpc.aws_route_table.public[0]: Refreshing state... [id=rtb-0592297a8b0f655da]
module.vpc.aws_internet_gateway.this[0]: Refreshing state... [id=igw-028de28b22296799a]
module.vpc.aws_route_table.private[0]: Refreshing state... [id=rtb-038b34575027b15ba]
module.vpc.aws_subnet.public[1]: Refreshing state... [id=subnet-071a3ebbe56a7e7e7]
module.vpc.aws_subnet.public[2]: Refreshing state... [id=subnet-064502a2b20efacfe]
module.vpc.aws_subnet.public[0]: Refreshing state... [id=subnet-00ba793c73ab79e40]
module.vpc.aws_subnet.private[2]: Refreshing state... [id=subnet-012f1b5e655b9ccea]
module.vpc.aws_subnet.private[1]: Refreshing state... [id=subnet-0b5733c754e31662b]
module.vpc.aws_subnet.private[0]: Refreshing state... [id=subnet-0460b61059b06492c]
module.vpc.aws_route.public_internet_gateway[0]: Refreshing state... [id=r-rtb-0592297a8b0f655da1080289494]
module.eks.aws_security_group_rule.cluster_egress_internet[0]: Refreshing state... [id=sgrule-3029531549]
module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Refreshing state... [id=sgrule-2217905385]
module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Refreshing state... [id=sgrule-3691370097]
module.eks.aws_security_group_rule.workers_ingress_self[0]: Refreshing state... [id=sgrule-2634587791]
module.eks.aws_security_group_rule.workers_egress_internet[0]: Refreshing state... [id=sgrule-1787898607]
module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Refreshing state... [id=sgrule-1760037517]
module.vpc.aws_route_table_association.public[0]: Refreshing state... [id=rtbassoc-0d02905f7deb3fae4]
module.vpc.aws_route_table_association.public[1]: Refreshing state... [id=rtbassoc-0d683d65e6feea3af]
module.vpc.aws_route_table_association.public[2]: Refreshing state... [id=rtbassoc-0f843945c010d2beb]
module.vpc.aws_nat_gateway.this[0]: Refreshing state... [id=nat-06ee6136cd599207f]
module.vpc.aws_route_table_association.private[1]: Refreshing state... [id=rtbassoc-067e35c391e782c0c]
module.vpc.aws_route_table_association.private[2]: Refreshing state... [id=rtbassoc-034596eaf7abed0b2]
module.vpc.aws_route_table_association.private[0]: Refreshing state... [id=rtbassoc-06117d231be287de1]
module.eks.aws_eks_cluster.this[0]: Refreshing state... [id=training-eks-auUstq2U]
module.vpc.aws_route.private_nat_gateway[0]: Refreshing state... [id=r-rtb-038b34575027b15ba1080289494]
module.eks.aws_iam_role.workers[0]: Refreshing state... [id=training-eks-auUstq2U2020102308343361120000000c]
module.eks.data.template_file.userdata[0]: Refreshing state... [id=ef589e77497839a64827abc41c3442c6b4897aafd6c25082ccaca50a55d7da45]
module.eks.data.template_file.userdata[1]: Refreshing state... [id=ef589e77497839a64827abc41c3442c6b4897aafd6c25082ccaca50a55d7da45]
module.eks.null_resource.wait_for_cluster[0]: Refreshing state... [id=1605586206173714438]
module.eks.local_file.kubeconfig[0]: Refreshing state... [id=0b8e6b92b40fb314ff7f01a74731c1233c997c67]
data.aws_eks_cluster_auth.cluster: Refreshing state...
data.aws_eks_cluster.cluster: Refreshing state...
module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Refreshing state... [id=training-eks-auUstq2U2020102308343361120000000c-20201023083435299600000011]
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Refreshing state... [id=training-eks-auUstq2U2020102308343361120000000c-2020102308343517760000000f]
module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Refreshing state... [id=training-eks-auUstq2U2020102308343361120000000c-20201023083435258200000010]
module.eks.aws_iam_instance_profile.workers[0]: Refreshing state... [id=training-eks-auUstq2U2020102308343453850000000e]
module.eks.aws_iam_instance_profile.workers[1]: Refreshing state... [id=training-eks-auUstq2U2020102308343453840000000d]
module.eks.aws_launch_configuration.workers[0]: Refreshing state... [id=training-eks-auUstq2U-worker-group-120201023083437050500000012]
module.eks.aws_launch_configuration.workers[1]: Refreshing state... [id=training-eks-auUstq2U-worker-group-220201023083437061300000013]
module.eks.random_pet.workers[0]: Refreshing state... [id=set-worm]
module.eks.random_pet.workers[1]: Refreshing state... [id=immortal-mouse]
module.eks.aws_autoscaling_group.workers[1]: Refreshing state... [id=training-eks-auUstq2U-worker-group-220201023083448448900000014]
module.eks.aws_autoscaling_group.workers[0]: Refreshing state... [id=training-eks-auUstq2U-worker-group-120201023083448449100000015]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
+/- create replacement and then destroy

Terraform will perform the following actions:

  # module.eks.kubernetes_config_map.aws_auth[0] will be created
  + resource "kubernetes_config_map" "aws_auth" {
      + data = {
          + "mapAccounts" = jsonencode([])
          + "mapRoles"    = <<~EOT
                - "groups":
                  - "system:bootstrappers"
                  - "system:nodes"
                  "rolearn": "arn:aws:iam::320707539151:role/training-eks-auUstq2U2020102308343361120000000c"
                  "username": "system:node:{{EC2PrivateDNSName}}"
            EOT
          + "mapUsers"    = jsonencode([])
        }
      + id   = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "aws-auth"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.eks.null_resource.wait_for_cluster[0] is tainted, so must be replaced
+/- resource "null_resource" "wait_for_cluster" {
      ~ id = "1605586206173714438" -> (known after apply)
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Changes to Outputs:
  + config_map_aws_auth = [
      + {
          + binary_data = null
          + data        = {
              + "mapAccounts" = jsonencode([])
              + "mapRoles"    = <<~EOT
                    - "groups":
                      - "system:bootstrappers"
                      - "system:nodes"
                      "rolearn": "arn:aws:iam::320707539151:role/training-eks-auUstq2U2020102308343361120000000c"
                      "username": "system:node:{{EC2PrivateDNSName}}"
                EOT
              + "mapUsers"    = jsonencode([])
            }
          + id          = (known after apply)
          + metadata    = [
              + {
                  + annotations      = null
                  + generate_name    = null
                  + generation       = (known after apply)
                  + labels           = null
                  + name             = "aws-auth"
                  + namespace        = "kube-system"
                  + resource_version = (known after apply)
                  + self_link        = (known after apply)
                  + uid              = (known after apply)
                },
            ]
        },
    ]

Warning: Interpolation-only expressions are deprecated

  on .terraform/modules/vpc/outputs.tf line 353, in output "vpc_endpoint_sqs_id":
 353:   value       = "${element(concat(aws_vpc_endpoint.sqs.*.id, list("")), 0)}"

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and 11 more similar warnings elsewhere)


------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Any idea what's going on? This is my Terraform version:

$ terraform version
Terraform v0.13.4
+ provider registry.terraform.io/hashicorp/aws v3.12.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/random v2.3.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Interpolation-only expressions are deprecated

While running the Terraform scripts, I am getting the following warning in several places:
Warning: Interpolation-only expressions are deprecated

on .terraform/modules/vpc/outputs.tf line 353, in output "vpc_endpoint_sqs_id":
353: value = "${element(concat(aws_vpc_endpoint.sqs.*.id, list("")), 0)}"
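For illustration only (the file in question lives in the upstream VPC module, so this is not a change to make in this repo), the deprecated form and its Terraform 0.12+ equivalent look roughly like this; note that list("") would likewise become [""]:

# Deprecated 0.11-style form, as reported by the warning:
#   value = "${element(concat(aws_vpc_endpoint.sqs.*.id, list("")), 0)}"

# Terraform 0.12+ form: drop the "${ ... }" wrapper around the expression.
output "vpc_endpoint_sqs_id" {
  value = element(concat(aws_vpc_endpoint.sqs.*.id, [""]), 0)
}

Since the warning comes from module code, it only goes away once the VPC module is upgraded to a release that has made this change; it does not affect the apply itself.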

Versions:

terraform --version
Terraform v0.13.5

  • provider registry.terraform.io/hashicorp/aws v3.18.0
  • provider registry.terraform.io/hashicorp/kubernetes v1.13.3
  • provider registry.terraform.io/hashicorp/local v1.4.0
  • provider registry.terraform.io/hashicorp/null v2.1.2
  • provider registry.terraform.io/hashicorp/random v2.3.1
  • provider registry.terraform.io/hashicorp/template v2.2.0

Let me know if any other inputs are required.

Kubeconfig resource is deleted before aws_auth resource can be deleted

This guide advises us to use the community module here: https://github.com/terraform-aws-modules/terraform-aws-eks

There is a known issue that we have been running into with this module: terraform-aws-modules/terraform-aws-eks#978 (closed, but we are trying to reopen it). We only started observing these failures after upgrading from Terraform v0.13.5 to v0.14.0.

The result of this bug is that anyone who follows this guide to set up their EKS deployment with Terraform may hit the same problem: a terraform destroy can never complete successfully without the manual intervention that issue recommends (specifically, removing the configmap from the state).

We see one of the following errors when we run into this issue on a terraform destroy:

Error: Unauthorized

or

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused

We would appreciate any recommendations on how to mitigate this issue, ideally without changes to the community module.

Thanks,
Jwal + @jamespollard8
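One workaround that has been used without modifying the community module (offered here only as a hedged sketch, not as an official recommendation) is the manual step mentioned above: run terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' before the destroy. Another is to configure the Kubernetes provider with exec-based credentials so a fresh token is generated while the cluster still exists; the exact output names below depend on the EKS module version in use:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Generate a short-lived token at plan/apply/destroy time instead of relying
  # on a kubeconfig file or data source that may already be gone mid-destroy.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}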

Issue deploying the nodegroup

Hello.

After implementing this lab following the instructions (I only changed the region in vpc.tf to us-west-2), I noticed that no node groups are associated with the EKS cluster when checking the AWS Management Console. The instances are correctly listed in the EC2 console, but they are not associated with the cluster in the EKS console.
Also, when running "kubectl get nodes -A", no nodes are listed or associated with the cluster.

Finally, I tried moving ahead and deploying the metrics server on the cluster by running "kubectl apply -f metrics-server-0.3.6/deploy/1.8+/" (after downloading the package). When running "kubectl get deployment metrics-server -n kube-system", the output is
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 0/1 1 0 79s

I waited a long time, but the deployment never becomes available because there are no nodes attached to the cluster.

kubectl get nodes
No resources found

PS: I've also been trying to perform the same EKS cluster launch from the AWS Management Console, and it fails with "instances failed to join the kubernetes cluster" every time I create a node group, even when I use the CloudFormation templates given here: https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html. I mention this in case it turns out to be an AWS bug.

Understanding Bastion based on the related guide

The related tutorial mentions setting up a bastion host:

eks-cluster.tf provisions all the resources (AutoScaling Groups, etc...) required to set up an EKS cluster in the private subnets and bastion servers to access the cluster using the AWS EKS Module.

Looking through the guide and the codebase, I don't understand this. My understanding is that a bastion host would be something like an EC2 instance that I would SSH into in order to access the rest of the cluster.

Does it mean something else here?

Also, the security groups have ingress entries for port 22 from 192.168.0.0/16 and 172.16.0.0/12. What is the significance of these ranges? (They are not the CIDR blocks of my VPC, for example.)
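The two ranges, together with 10.0.0.0/8, are the RFC 1918 private address blocks, so the intent is to allow SSH only from hosts inside private networks (such as a bastion). The rule shape looks roughly like the sketch below; this is illustrative, not a verbatim copy of the repo's security-groups.tf, and the CIDRs can be tightened to an actual bastion or VPN range:

resource "aws_security_group" "worker_mgmt_example" {
  name_prefix = "worker_mgmt_example"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    # RFC 1918 private ranges: only hosts with private source addresses
    # (for example a bastion or VPN inside these blocks) can reach port 22.
    cidr_blocks = [
      "192.168.0.0/16",
      "172.16.0.0/12",
    ]
  }
}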

terraform apply error

module.eks.aws_autoscaling_group.workers[0]: Creation complete after 1m18s [id=education-eks-Im6KCYIf-worker-group-120211102041313454300000015]
╷
│ Error: error reading Route Table (rtb-09cac278f6db9426e): couldn't find resource
│
│   with module.vpc.aws_route_table.public[0],
│   on .terraform/modules/vpc/main.tf line 198, in resource "aws_route_table" "public":
│  198: resource "aws_route_table" "public" {
│
╵

init fails with "...hashicorp/aws: the previously-selected version 4.15.1 is no longer available"

Similar to #49 - with a fresh clone (as of af9294ca38741185c566b94c9b81a99241e2ba71), I'm getting a hashicorp/aws-related error with init

% terraform init
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 18.26.6 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 1.0.2 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.14.2 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/tls from the dependency lock file
- Reusing previous version of hashicorp/cloudinit from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/random v3.1.0...
- Installed hashicorp/random v3.1.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.12.1...
- Installed hashicorp/kubernetes v2.12.1 (signed by HashiCorp)
- Installing hashicorp/tls v3.4.0...
- Installed hashicorp/tls v3.4.0 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.2.0...
- Installed hashicorp/cloudinit v2.2.0 (signed by HashiCorp)
╷
│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: the previously-selected version 4.15.1 is no longer available
╵

If I try terraform init -upgrade, I get

╷
│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 3.63.0, >= 3.72.0, ~> 4.15.0
╵

I'm on terraform v1.2.7

% terraform -version
Terraform v1.2.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.15.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.12.1
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/tls v3.4.0
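The lock file pins hashicorp/aws to a release that, at least at that moment, could not be resolved together with the module constraints. A minimal sketch of one way forward, assuming the root module currently pins a patch-level version (the constraint value below is an assumption, not necessarily what this repo ships): relax the pin, then run terraform init -upgrade so a currently available 4.x release can be selected.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      # Relaxed from a patch-level pin such as "~> 4.15.0" so that any 4.x
      # release satisfying the module constraints (>= 3.72.0) can be chosen.
      version = ">= 4.15.0, < 5.0.0"
    }
  }
}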

no nodes available to schedule pods

Hello,

Nodes are not joining the k8s cluster. I have tried to re-create the cluster several times.

COMMAND: kubectl get nodes --all-namespaces
No resources found

COMMAND: kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-bd44f767b-jjlkb 0/1 Pending 0 4m58s
kube-system coredns-bd44f767b-n29b7 0/1 Pending 0 4m58s

COMMAND: kubectl describe pods coredns-bd44f767b-n29b7 -n kube-system
Name: coredns-bd44f767b-n29b7
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node:
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
pod-template-hash=bd44f767b
Annotations: eks.amazonaws.com/compute-type: ec2
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-bd44f767b
Containers:
coredns:
Image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/coredns:v1.6.6
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/tmp from tmp (rw)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-fbctf (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-fbctf:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-fbctf
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling 40s (x9 over 12m) default-scheduler no nodes available to schedule pods

Modification in Nodes - error creating EKS Cluster

If I want to change the instance type or add a new worker group, I get this error:

Error: error creating EKS Cluster (terraform-eks-dev-i8DeuS8U): ResourceInUseException: Cluster already exists with name: terraform-eks-dev-i8DeuS8U
{
RespMetadata: {
StatusCode: 409,
RequestID: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
ClusterName: "terraform-eks-dev-i8DeuS8U",
Message_: "Cluster already exists with name: terraform-eks-dev-i8DeuS8U"
}

SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance

I'm getting this error when I try to create a LoadBalancer service in the cluster that I created using the code in this repo, and I can't figure out what's wrong. This is the output of kubectl describe svc:

Warning SyncLoadBalancerFailed 54s (x4 over 91s) service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-05f3a11329a20bb93; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
Normal EnsuringLoadBalancer 14s (x5 over 94s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 13s service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-046c2cc46714af250; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
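The message generally means that more than one security group attached to the instance carries the kubernetes.io/cluster/<cluster-name> tag, while the in-tree AWS cloud provider expects exactly one tagged group per instance (as the "ensure only the k8s security group is tagged" hint says). As a hedged sketch of the tagging rule (the cluster name is taken from the error above; this is not a fix to apply verbatim), only the group meant for Kubernetes-managed load balancers should carry the tag:

resource "aws_security_group" "k8s_elb_example" {
  name_prefix = "k8s-elb-example"
  vpc_id      = module.vpc.vpc_id

  tags = {
    # Exactly one security group attached to each node may carry this tag;
    # removing it from any additional groups clears the error.
    "kubernetes.io/cluster/education-eks-ooHfNJwm" = "owned"
  }
}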

getting error after terraform apply

module.eks.aws_autoscaling_group.workers[0]: Still creating... [2m20s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [2m30s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [2m40s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [2m50s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Still creating... [3m0s elapsed]
module.eks.aws_autoscaling_group.workers[0]: Creation complete after 3m0s [id=training-eks-qyrm0cwY-worker-group-120200626153528568800000012]

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connect: connection refused

on .terraform/modules/eks/terraform-aws-eks-12.1.0/aws_auth.tf line 64, in resource "kubernetes_config_map" "aws_auth":
64: resource "kubernetes_config_map" "aws_auth" {
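The localhost address in that error usually means the Kubernetes provider had no cluster configuration when the aws_auth configmap was being created, so it fell back to its default endpoint. A minimal sketch of wiring the provider to the cluster the module creates (attribute names assume the aws and kubernetes providers of that era; on the 1.x kubernetes provider, also set load_config_file = false):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
}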

Invalid principal in policy

When I run "terraform apply", the following error message is shown:

(screenshot of the error message)

Please note that I was trying to provision the EKS cluster in China region "cn-northwest-1".
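Without the screenshot it is hard to be certain, but "Invalid principal in policy" in the China partition is commonly caused by service principals or ARNs that hard-code the standard aws partition. A hedged sketch of deriving them from the partition data source instead (the resource names here are illustrative and not taken from this repo):

data "aws_partition" "current" {}

# In cn-northwest-1, partition is "aws-cn" and dns_suffix is "amazonaws.com.cn".
data "aws_iam_policy_document" "node_assume_role_example" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.${data.aws_partition.current.dns_suffix}"]
    }
  }
}

output "worker_policy_arn_example" {
  value = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}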

t4g.small does not work on EKS

Problem

The t4g.small instance type does not work on an EKS cluster.

Output error

Error: error creating EKS Node Group (my_eks_cluster:my_eks_node_group_t4gsmall_on_demand): InvalidParameterException: [t4g.small] is not a valid instance type for requested amiType AL2_x86_64

How it was implemented

module "eks" {
  source             = "../../path_to_eks_module"
  cluster_name       = "eks_cluster"
  cluster_vpc_id     = data.aws_vpc.my_eks_vpc.id
  subnet_ids         = data.aws_subnet_ids.my_eks_subnet.ids
  private_subnet_ids = data.aws_subnet_ids.my_eks_private-subnet.ids
  cluster_role_name  = "my_eks_cluster_role"
  cluster_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "eks.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })
  cluster_policy_arn   = ["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy", "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"]
  cluster_sg_name      = "my_eks_cluster_sg"
  node_group_role_name = "my_eks_node_group_role"
  node_group_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "ec2.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })
  node_group_policy_arn = ["arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy", "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy", "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"]
  node_group_sg_name    = "my_eks_node_group_sg"
  node_group = {
    node_group_1 = {
      node_group_name = "my_eks_node_group_t4gsmall_on_demand"
      instance_types  = ["t4g.small"]
      ami_type        = "AL2_ARM_64"
      capacity_type   = "SPOT"
      desired_size    = 2
      min_size        = 1
      max_size        = 2
    }
  }
}

Additional info

Terraform provider version: 3.33.0 (also tried with 3.50.0)
OS: Linux Ubuntu 20.04 LTS
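
The error reports a requested amiType of AL2_x86_64 even though the module call above sets ami_type = "AL2_ARM_64", which suggests the wrapped module does not forward ami_type to the underlying resource. A minimal sketch of a node group resource inside such a module, assuming a node_group variable shaped like the one above and a hypothetical aws_iam_role.node_group; Graviton instance types such as t4g.small require an ARM AMI type:

resource "aws_eks_node_group" "this" {
  for_each        = var.node_group
  cluster_name    = var.cluster_name
  node_group_name = each.value.node_group_name
  node_role_arn   = aws_iam_role.node_group.arn # assumed role name inside the module
  subnet_ids      = var.private_subnet_ids

  instance_types = each.value.instance_types
  ami_type       = each.value.ami_type # AL2_ARM_64 for t4g (Graviton) instances
  capacity_type  = each.value.capacity_type

  scaling_config {
    desired_size = each.value.desired_size
    min_size     = each.value.min_size
    max_size     = each.value.max_size
  }
}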

terraform init fails due to version constraints on the AWS provider

After doing a git checkout, I get this message:

│ Error: Failed to query available provider packages

│ Could not retrieve the list of available versions for provider hashicorp/aws:
│ locked provider registry.terraform.io/hashicorp/aws 3.53.0 does not match
│ configured version constraint >= 3.15.0, >= 3.20.0, >= 3.40.0, >= 3.56.0;
│ must use terraform init -upgrade to allow selection of new versions

Constraints in this project might need to be adjusted.
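
Running terraform init -upgrade lets Terraform select a hashicorp/aws release that satisfies all of the listed constraints and refresh the dependency lock file. Alternatively, the root module's own constraint can be raised; a minimal sketch, with the version chosen to match the strictest constraint from the error rather than a tested recommendation:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.56.0"
    }
  }
}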

server could not find the requested resource (post configmaps)

I'm trying to use this example, but when running terraform apply I observe the following:

module.eks.null_resource.wait_for_cluster[0]: Creation complete after 2m11s [id=860621988250871190]
module.eks.kubernetes_config_map.aws_auth[0]: Creating...

Error: the server could not find the requested resource (post configmaps)

on .terraform/modules/eks/terraform-aws-eks-11.1.0/aws_auth.tf line 62, in resource "kubernetes_config_map" "aws_auth":
62: resource "kubernetes_config_map" "aws_auth" {

After this the process stops.

Is there a way to fix this?

The cluster is created and there are 3 EC2 instances running, though I don't know whether that is correct. The output of "terraform output" looks OK.

AWS Instances Keep Spinning Up Even After Termination


I was using this repository to spin up Kubernetes on AWS last week, but I had an issue executing the provided Terraform script: it kept timing out before everything was completed.

At the time I thought nothing of it, but it turns out it was still creating worker groups in my AWS account. Every time I terminate the instances, they spin up again a few seconds later. I searched Google and can't seem to find the root cause of my issue.

If anyone has come across this issue and knows how to stop it from happening, I would really appreciate any and all help!

node pools failing to join cluster

Using the repo as-is, without modifications, results in the node pools being unable to join the cluster. I'm totally new to AWS (I've been working with Azure/AKS), so it is certainly possible I've done something wrong.

I have the AWS CLI set up with my access credentials, and my account is an admin account. Looking in the console, all the resources, including the node pools themselves, seem to be created fine, but the nodes fail to join the cluster.

I'm not sure what information I can provide to drill into the issue.

This is the final output of an apply:

module.eks.module.eks_managed_node_group["one"].aws_eks_node_group.this[0]: Still creating... [23m0s elapsed]
module.eks.module.eks_managed_node_group["two"].aws_eks_node_group.this[0]: Still creating... [23m10s elapsed]
╷
│ Error: error waiting for EKS Node Group (education-eks-yQx173ia:node-group-1-20221207030300087000000014) to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
│       * i-074125f0baff11416, i-0a6752f739c51b1b7: NodeCreationFailure: Instances failed to join the kubernetes cluster
│ 
│ 
│ 
│   with module.eks.module.eks_managed_node_group["one"].aws_eks_node_group.this[0],
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 272, in resource "aws_eks_node_group" "this":
│  272: resource "aws_eks_node_group" "this" {
│ 
╵
╷
│ Error: error waiting for EKS Node Group (education-eks-yQx173ia:node-group-2-20221207030300087100000016) to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
│       * i-07d22c63c79231a1d: NodeCreationFailure: Instances failed to join the kubernetes cluster
│ 
│ 
│ 
│   with module.eks.module.eks_managed_node_group["two"].aws_eks_node_group.this[0],
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 272, in resource "aws_eks_node_group" "this":
│  272: resource "aws_eks_node_group" "this" {
│ 
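
NodeCreationFailure has many possible causes (IAM, user data, DNS), but a frequent one is that worker nodes in private subnets have no route out to reach the EKS endpoint. A minimal sketch of the networking shape this tutorial relies on, with illustrative names, CIDRs, and availability zones, shown here only as a checklist of what the VPC needs:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "education-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-2a", "us-east-2b", "us-east-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}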

Pods running on different nodes but in the same namespace are not able to reach each other

I cloned this repo and ran the Terraform configuration to create an EKS cluster on AWS, using exactly the code provided here. After that, I deployed two simple applications in the same namespace but scheduled them to run on different nodes (nodeA and NodeB). Both applications are exposed with a Service of type ClusterIP.

When I logged into one of the running application pods and tried to reach the other application pod with a simple curl or ping, the request never completed:

root@app-deployment-b5f48bbc6-4bms6:/# curl -v http://test-service
* Rebuilt URL to: http://test-service/
*   Trying 172.20.251.140...
* TCP_NODELAY set

If I run both applications on the same node, the request works properly.

I want to know whether some extra configuration, such as networking, security groups, or DNS, is missing from this code to support pod-to-pod connectivity when the pods run on different nodes.
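
Recent versions of the terraform-aws-modules/eks module create a node security group that does not allow arbitrary node-to-node traffic by default, which can block cross-node pod-to-pod connections. A minimal sketch of an extra rule, assuming the cluster was built with a module version (18+) that exposes a node_security_group_id output; whether this is the actual cause here is an assumption:

resource "aws_security_group_rule" "node_to_node_all" {
  description       = "Allow all traffic between worker nodes"
  type              = "ingress"
  protocol          = "-1"
  from_port         = 0
  to_port           = 0
  self              = true
  security_group_id = module.eks.node_security_group_id
}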
