cloudposse / terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers

Home Page: https://cloudposse.com/accelerate

License: Apache License 2.0

Makefile 4.60% HCL 82.09% Shell 1.21% Go 12.10%
terraform terraform-module eks aws kubernetes nodes workers cluster k8s ec2

terraform-aws-eks-workers's Issues

Deployment commands

First, thanks for your Terraform modules. They are really useful and make DevOps life easier.
It would help newcomers if each repo included the full deployment commands, for example:

$ git clone --branch <tag> https://github.com/cloudposse/terraform-aws-eks-workers.git
$ cd samples/directory
$ vi fixtures.us-east-2.tfvars
$ export AWS_ACCESS_KEY_ID="xyz"
$ export AWS_SECRET_ACCESS_KEY="abc"
$ export AWS_DEFAULT_REGION="us-west-2"
$ terraform init
$ terraform plan -var-file=fixtures.us-east-2.tfvars

Thanks

Error initializing module

Hi! I'm receiving some errors when initializing the module using terraform init:

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Attribute redefined

  on .terraform/modules/eks_workers/main.tf line 135, in data "aws_ami" "eks_worker":
 135:   most_recent = true

The argument "most_recent" was already set at
.terraform/modules/eks_workers/main.tf:127,3-14. Each argument may be set only
once.

Removing the second most_recent does indeed fix the error.
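For reference, a minimal sketch of the corrected data source with the attribute set only once (the filter and owner values here are illustrative, not copied from the module):

```hcl
data "aws_ami" "eks_worker" {
  most_recent = true # set once; a second assignment triggers "Attribute redefined"
  owners      = ["602401143452"] # Amazon's EKS-optimized AMI account

  filter {
    name   = "name"
    values = ["amazon-eks-node-*"]
  }
}
```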

Deprecated template provider should be removed from module

Describe the Bug

The template provider used in this module has long been deprecated and should be removed. It was never built for Apple Silicon (M1) Macs, so on those machines the module does not work without workarounds.

hashicorp/terraform-provider-template#85

Expected Behavior

The module should work on all platforms supported by Terraform.

Steps to Reproduce

Run terraform init on a configuration that uses this module from an Apple Silicon Mac.

Screenshots

Initializing modules...
Downloading registry.terraform.io/cloudposse/ec2-autoscale-group/aws 0.27.0 for autoscale_group...
- autoscale_group in .terraform/modules/autoscale_group
Downloading registry.terraform.io/cloudposse/label/null 0.24.1 for autoscale_group.this...
- autoscale_group.this in .terraform/modules/autoscale_group.this
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for label...
- label in .terraform/modules/label
Downloading registry.terraform.io/cloudposse/security-group/aws 0.3.3 for security_group...
- security_group in .terraform/modules/security_group
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for security_group.this...
- security_group.this in .terraform/modules/security_group.this
Downloading registry.terraform.io/cloudposse/label/null 0.25.0 for this...
- this in .terraform/modules/this

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding hashicorp/null versions matching ">= 2.0.0"...
- Finding integrations/github versions matching "~> 4.0"...
- Finding hashicorp/aws versions matching ">= 2.0.0, >= 3.63.0, < 4.0.0"...
- Finding hashicorp/template versions matching ">= 2.0.0"...
- Installing hashicorp/null v3.1.1...
- Installed hashicorp/null v3.1.1 (signed by HashiCorp)
- Using integrations/github v4.24.1 from the shared cache directory
- Using hashicorp/aws v3.75.1 from the shared cache directory
╷
│ Error: Incompatible provider version
│
│ Provider registry.terraform.io/hashicorp/template v2.2.0 does not have a
│ package available for your current platform, darwin_arm64.
│
│ Provider releases are separate from Terraform CLI releases, so not all
│ providers are available for all platforms. Other versions of this provider
│ may have different platforms supported.
╵

ERRO[0033] 1 error occurred:
	* exit status 1

Environment (please complete the following information):


  • OS: macOS, Apple Silicon M1
  • Version: Monterey 12.3.1
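The usual migration is to replace the deprecated template provider with Terraform's built-in `templatefile()` function, which needs no provider at all. A minimal sketch (the template path and variables are hypothetical, not the module's actual ones):

```hcl
locals {
  # Renders the userdata template without data "template_file"
  userdata = templatefile("${path.module}/userdata.sh.tpl", {
    cluster_name     = var.cluster_name
    cluster_endpoint = var.cluster_endpoint
  })
}
```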

Full end-to-end example

Have a question? Please check out our Slack Community or visit our Slack Archive.


Describe the Feature

Would like the ability to look at a working end-to-end example, so that I can start from a baseline of something that works as I build.

Expected Behavior

I expected that the examples in the "examples/complete" folder would include creation of an EKS cluster as well as worker nodes using this module. In actuality it creates a VPC, subnets, and the worker nodes, but no cluster.

I'm currently struggling to get this module working in my environment, and don't have a working example to refer to.

Use Case

I'm trying to stand up an eks cluster using the cloudposse module, with workers using this module. I can't use the managed node group module since I need to configure the worker nodes as dedicated tenancy.

I'm currently struggling to get it working. The cluster comes up fine, and the instances start fine, but they never show up as nodes in the cluster (e.g. kubectl get nodes returns nothing)

Describe Ideal Solution

A working example exists that I can use as a baseline to build from.

Alternatives Considered

Continue troubleshooting my setup without being able to refer to a working example

default `eks_worker_ami_name_regex` should be updated to handle 1.20

Hey, I just found a simple bug: when you use EKS with Kubernetes 1.20, the AMI is not found.

Describe the Bug

Set kubernetes_version to 1.20

╷
│ Error: "name_regex": error parsing regexp: missing argument to repetition operator: `*`
│ 
│   with module.eks_workers.data.aws_ami.eks_worker[0],
│   on .terraform/modules/eks_workers/main.tf line 139, in data "aws_ami" "eks_worker":
│  139:   name_regex  = var.eks_worker_ami_name_regex
│ 
╵

Solution

Change the default regexp to eks_worker_ami_name_regex = "^amazon-eks-node-[0-9,.]+-v[0-9]{8}$".

Workaround

As a workaround, we can pass eks_worker_ami_name_regex = "^amazon-eks-node-[0-9,.]+-v[0-9]{8}$" to the module directly.
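The proposed default can be sanity-checked with a quick script; it matches plain 1.20 AMI names but not GPU variants, since nothing in the pattern allows the extra suffix. The AMI names below are hypothetical examples, not queried from AWS:

```python
import re

# Default proposed above
pattern = re.compile(r"^amazon-eks-node-[0-9,.]+-v[0-9]{8}$")

candidates = [
    "amazon-eks-node-1.20-v20210519",
    "amazon-eks-node-1.19-v20210414",
    "amazon-eks-node-1.20-gpu-v20210519",  # GPU variant: "-gpu" breaks the match
]
matched = [name for name in candidates if pattern.match(name)]
print(matched)  # the two non-GPU names
```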

Expose metadata_options {instance_metadata_tags}

Describe the Feature

This is a follow up on cloudposse/terraform-aws-ec2-autoscale-group#87

For some time AWS has allowed exposing instance tags through instance metadata:

  metadata_options {
    instance_metadata_tags = "enabled"
  }

See: https://aws.amazon.com/about-aws/whats-new/2022/01/instance-tags-amazon-ec2-instance-metadata-service/

Expected Behavior

Ability to enable instance_metadata_tags

Use Case

Please see the original issue in cloudposse/terraform-aws-ec2-autoscale-group#87.
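In launch-template terms the request maps to the `metadata_options` block; a minimal sketch (resource name and instance values are illustrative):

```hcl
resource "aws_launch_template" "workers" {
  name_prefix   = "eks-workers-"
  instance_type = "t3.medium"

  metadata_options {
    http_endpoint          = "enabled"
    http_tokens            = "required" # IMDSv2
    instance_metadata_tags = "enabled"  # the option this issue asks to expose
  }
}
```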

Conflicting AWS Provider versions between modules

Describe the Bug

If I happen to use this module with cloudposse/terraform-aws-eks-node-group I get a conflicting provider error for aws.

When running terraform init -backend=false . on the related terraform:

Initializing provider plugins...
- Using previously-installed hashicorp/null v2.1.2
- Using previously-installed hashicorp/local v1.4.0
- Using previously-installed hashicorp/kubernetes v1.12.0
- Using previously-installed hashicorp/template v2.1.2
- Finding hashicorp/aws versions matching ">= 2.0.*, < 4.0.*, ~> 3.0, >= 2.0.*, < 4.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0"...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints >= 2.0.*, < 4.0.*, ~> 3.0,
>= 2.0.*, < 4.0.*, >= 2.0.*, < 4.0.*, ~> 2.0, ~> 2.0

FWIW, this output includes constraints contributed by a few other cloudposse modules (like terraform-aws-dynamic-subnets, terraform-aws-vpc, etc.)
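The root cause is that `~> 3.0` and `~> 2.0` have an empty intersection, so no single AWS provider release can satisfy the combined constraint. Once the module pinning `~> 2.0` is upgraded, the root configuration can pin a version inside the shared range; a sketch (version numbers illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # must fall inside every module's constraint after the upgrade
      version = "~> 3.0"
    }
  }
}
```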

Expected Behavior

I should be able to install all providers without constraint conflicts.

Steps to Reproduce:

Steps to reproduce the behavior:

  1. Create .tf with module usage of both cloudposse/terraform-aws-eks-node-group and this module using the latest versions of both as of writing this (0.10.0 and 0.15.2 respectively)
  2. Run terraform init -backend=false .
  3. See error output

Environment:

  • OS: OSX
  • Version 10.15.6

Output of terraform -version within terraform dir:

Terraform v0.13.1
+ provider registry.terraform.io/hashicorp/kubernetes v1.12.0
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/template v2.1.2

Proposition to add encryption of volumes for EKS Worker Nodes

I have code to add to this module that creates a KMS key with the needed policies and builds a new AMI with encrypted volumes. I can open a PR to add this to the module, but I'm not sure it's necessary, since it can also be done outside the module. @aknysh, is this worth adding for future use, or is it unnecessary for now?
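As a sketch of the outside-the-module alternative, assuming the `block_device_mappings` input exposed by the underlying ec2-autoscale-group module (resource names and sizes are hypothetical):

```hcl
resource "aws_kms_key" "eks_workers" {
  description = "Encrypts EKS worker EBS volumes"
}

module "eks_workers" {
  source = "cloudposse/eks-workers/aws"
  # ... other module inputs ...

  block_device_mappings = [{
    device_name = "/dev/xvda"
    ebs = {
      volume_size = 50
      encrypted   = true
      kms_key_id  = aws_kms_key.eks_workers.arn
    }
  }]
}
```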

"list" function was deprecated in Terraform v0.12 and is no longer available

Hello,

thanks for all your Terraform modules 👌

Describe the Bug

It seems Terraform changed how it handles some functions. This used to be a warning, but after upgrading to Terraform 0.15.0 it throws an error.

╷
│ Error: Error in function call
│ 
│   on .terraform/modules/eks_workers.autoscale_group/main.tf line 14, in resource "aws_launch_template" "default":
│   14:         for_each = flatten(list(lookup(block_device_mappings.value, "ebs", [])))
│     ├────────────────
│     │ block_device_mappings.value will be known only after apply
│ 
│ Call to function "list" failed: the "list" function was deprecated in Terraform v0.12 and is no longer available; use tolist([ ... ]) syntax to write a literal list.

Expected Behavior

The module should use the new Terraform function tolist instead of the legacy list function:

https://www.terraform.io/docs/language/functions/list.html

Steps to Reproduce

Steps to reproduce the behavior:

  1. Use this module in a configuration
  2. Run terraform plan with Terraform 0.15.0
  3. See the error above


Environment (please complete the following information):

  • OS: Linux Mint (ubuntu base)
  • Version terraform v0.15.0

Additional Context

Here is my fix:
for_each = flatten(tolist([lookup(block_device_mappings.value, "ebs", [])]))

Seems to work so far.

Related cloudposse/terraform-aws-ec2-autoscale-group#64

Feel free to ask me anything to help tackle this 👌

Latest image regex doesn't seem to be working

I have found that the regex match doesn't seem to work for finding the latest image; as a result, the nodes that come up run 1.13 even though I built the cluster with 1.16.

Playing around with the regex search I found this should work instead:
amazon-eks-node-[1-9]\.[1-9]{2}-v\d{8}

Has anyone else been having this issue?
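The suggested pattern can be checked quickly in isolation; note that `[1-9]` accepts 1.13 and 1.16 but would miss a minor version containing a zero, such as 1.20. The AMI names below are hypothetical examples:

```python
import re

# Pattern suggested above
pattern = re.compile(r"amazon-eks-node-[1-9]\.[1-9]{2}-v\d{8}")

names = [
    "amazon-eks-node-1.16-v20200507",
    "amazon-eks-node-1.13-v20190927",
    "amazon-eks-node-1.20-v20210519",  # [1-9] excludes the 0 in "20"
]
matched = [n for n in names if pattern.search(n)]
print(matched)  # only the 1.16 and 1.13 names
```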

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module

Terraform 0.12 warnings

I got the following warning with Terraform 0.12

Warning: Quoted references are deprecated

  on .terraform/modules/subnets/private.tf line 45, in resource "aws_subnet" "private":
  45:     ignore_changes = ["tags.kubernetes", "tags.SubnetType"]

In this context, references are expected literally rather than in quotes.
Terraform 0.11 and earlier required quotes, but quoted references are now
deprecated and will be removed in a future version of Terraform. Remove the
quotes surrounding this reference to silence this warning.

(and 3 more similar warnings elsewhere)
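The fix is simply to drop the quotes so the entries are bare references. A sketch (the warning truncates the real tag keys, so the whole `tags` attribute is ignored here for illustration):

```hcl
lifecycle {
  # Terraform 0.12+ expects bare references, not quoted strings
  ignore_changes = [tags]
}
```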

Error destroying ASG providing instance profile

Hi,
I get this error when trying to destroy the AWS ASG created with this module:

Error: Error getting instance profiles: NoSuchEntity: Instance Profile fs-test-eks-cluster-workers-profile cannot be found.
	status code: 404, request id: b5020704-2009-4d4e-97f1-099e49aa4aa6

  on .terraform/modules/ng-mix-2/main.tf line 166, in data "aws_iam_instance_profile" "default":
 166: data "aws_iam_instance_profile" "default" {

Here is my config for this module:



module "ng-mix-2" {
  source     = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.13.0"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  attributes = var.attributes

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.subnets.private_subnet_ids

  use_custom_image_id        = var.use_custom_image_id
  eks_worker_ami_name_filter = var.eks_worker_ami_name_filter
  health_check_type          = "EC2"

  cluster_name                       = module.eks.eks_cluster_id 
  cluster_endpoint                   = module.eks.eks_cluster_endpoint
  cluster_certificate_authority_data = module.eks.eks_cluster_certificate_authority_data
  cluster_security_group_id          = module.eks.eks_cluster_managed_security_group_id
  use_existing_security_group        = true
  workers_security_group_id          = module.eks.eks_cluster_managed_security_group_id

  use_existing_aws_iam_instance_profile = true
  aws_iam_instance_profile_name         = aws_iam_instance_profile.eks_worker_profile.name

  associate_public_ip_address = false
  enable_monitoring           = true


  instance_type = "t3.medium" 
  min_size      = 1
  max_size      = 5

  mixed_instances_policy = {
    override = [
      {
        instance_type     = "t3.medium"
        weighted_capacity = 2
      }
    ]

    instances_distribution = {
      on_demand_allocation_strategy            = "prioritized"
      on_demand_base_capacity                  = 1
      on_demand_percentage_above_base_capacity = 10
      spot_allocation_strategy                 = "lowest-price"
      spot_instance_pools                      = 2       
      spot_max_price                           = "0.015" 
    }
  }
}

Thank you very much in advance

Expose option for max_instance_lifetime

Describe the Feature

max_instance_lifetime has been added as an option to the dependency terraform-aws-ec2-autoscale-group, but it is not yet available in terraform-aws-eks-workers.

Expected Behavior

The variable should be exposed and passed through to the ASG options.

Use Case

Having this would be helpful as a backstop for the in-cluster node replacement options with cluster-autoscaler / node-problem-detector / Draino to ensure that a node is replaced even if all else fails.

Thank you for your consideration!
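The pass-through this asks for would look roughly like the sketch below (wiring is illustrative; the inner input name comes from terraform-aws-ec2-autoscale-group):

```hcl
variable "max_instance_lifetime" {
  type        = number
  default     = null
  description = "Maximum lifetime of an instance in seconds; null leaves it unset"
}

module "autoscale_group" {
  source = "cloudposse/ec2-autoscale-group/aws"
  # ... existing inputs ...
  max_instance_lifetime = var.max_instance_lifetime
}
```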

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Base branch does not exist - skipping

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

Branch main
terraform
main.tf
  • cloudposse/ec2-autoscale-group/aws 0.39.0
  • cloudposse/label/null 0.25.0
versions.tf
  • aws >= 5.16.0
  • hashicorp/terraform >= 1.0
Branch release/v0
terraform
main.tf
  • cloudposse/ec2-autoscale-group/aws 0.30.0
  • cloudposse/label/null 0.25.0
versions.tf
  • aws >= 2.0
  • hashicorp/terraform >= 0.13.0

  • Check this box to trigger a request for Renovate to run again on this repository

AutoScaling Group launch template fails validation

I've copied your instructions from terraform-aws-eks-cluster ("Module usage on Terraform Cloud") but when running this on Terraform Cloud I get this error:

Error: Error creating AutoScaling Group: ValidationError: You must use a valid fully-formed launch template. The requested configuration is currently not supported. Please check the documentation for supported configurations.
status code: 400, request id: 6a08240a-3095-11ea-8715-5592e51a1563

on .terraform/modules/eks_workers.autoscale_group/main.tf line 127, in resource "aws_autoscaling_group" "default":
127: resource "aws_autoscaling_group" "default" {

Here are my files:

main.tf

provider "aws" {
  region = "eu-north-1"

  #   assume_role {
  #     role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
  #   }
}

module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace  = var.namespace
  name       = var.name
  stage      = var.stage
  delimiter  = var.delimiter
  attributes = compact(concat(var.attributes, list("cluster")))
  tags       = var.tags
}

locals {
  # The usage of the specific kubernetes.io/cluster/* resource tags below are required
  # for EKS and Kubernetes to discover and manage networking resources
  # https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
  tags = merge(var.tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))

  # Unfortunately, most_recent (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141)
  # variable does not work as expected, if you are not going to use custom AMI you should
  # enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers,
  # otherwise the first version of Kubernetes supported by AWS (v1.11) for EKS workers will be used, but
  # EKS control plane will use the version specified by kubernetes_version variable.
  eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
}

module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.0"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  attributes = var.attributes
  cidr_block = "172.16.0.0/16"
  tags       = local.tags
}

module "subnets" {
  source               = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
  availability_zones   = var.availability_zones
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  attributes           = var.attributes
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = false
  nat_instance_enabled = false
  tags                 = local.tags
}

module "eks_workers" {
  source                             = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.11.0"
  namespace                          = var.namespace
  stage                              = var.stage
  name                               = var.name
  attributes                         = var.attributes
  tags                               = var.tags
  instance_type                      = var.instance_type
  eks_worker_ami_name_filter         = local.eks_worker_ami_name_filter
  vpc_id                             = module.vpc.vpc_id
  subnet_ids                         = module.subnets.public_subnet_ids
  health_check_type                  = var.health_check_type
  min_size                           = var.min_size
  max_size                           = var.max_size
  wait_for_capacity_timeout          = var.wait_for_capacity_timeout
  cluster_name                       = module.label.id
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_security_group_id          = module.eks_cluster.security_group_id

  # Auto-scaling policies and CloudWatch metric alarms
  autoscaling_policies_enabled           = var.autoscaling_policies_enabled
  cpu_utilization_high_threshold_percent = var.cpu_utilization_high_threshold_percent
  cpu_utilization_low_threshold_percent  = var.cpu_utilization_low_threshold_percent
}

module "eks_cluster" {
  source     = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.15.0"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  attributes = var.attributes
  tags       = var.tags
  region     = "eu-north-1"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.subnets.public_subnet_ids

  local_exec_interpreter = "/bin/bash"
  kubernetes_version     = "1.14"

  workers_role_arns          = [module.eks_workers.workers_role_arn]
  workers_security_group_ids = [module.eks_workers.security_group_id]

  # Terraform Cloud configurations
  kubeconfig_path                                = "~/.kube/config"
  configmap_auth_file                            = "/home/terraform/.terraform/configmap-auth.yaml"
  install_aws_cli                                = true
  install_kubectl                                = true
  external_packages_install_path                 = "~/.terraform/bin"
  aws_eks_update_kubeconfig_additional_arguments = "--verbose"
  aws_cli_assume_role_arn                        = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
  aws_cli_assume_role_session_name               = "eks_cluster_example_session"
}

eks.auto.tfvars

region = "eu-north-1"
// region = "eu-west-1"

availability_zones = ["eu-north-1a", "eu-north-1b"]

namespace = "mynamespace"

stage = "test"
// ???

name = "eks3"

instance_type = "t2.small"

health_check_type = "EC2"

wait_for_capacity_timeout = "10m"

max_size = 3

min_size = 2

autoscaling_policies_enabled = true

cpu_utilization_high_threshold_percent = 80

cpu_utilization_low_threshold_percent = 20

associate_public_ip_address = true

kubernetes_version = "1.14"

kubeconfig_path = "/.kube/config"

oidc_provider_enabled = true

variables.tf

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "availability_zones" {
  type        = list(string)
  description = "List of availability zones"
}

variable "namespace" {
  type        = string
  description = "Namespace, which could be your organization name, e.g. 'eg' or 'cp'"
}

variable "stage" {
  type        = string
  description = "Stage, e.g. 'prod', 'staging', 'dev' or 'testing'"
}

variable "name" {
  type        = string
  description = "Solution name, e.g. 'app' or 'cluster'"
}

variable "delimiter" {
  type        = string
  default     = "-"
  description = "Delimiter to be used between `name`, `namespace`, `stage`, etc."
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`)"
}

variable "instance_type" {
  type        = string
  description = "Instance type to launch"
}

variable "kubernetes_version" {
  type        = string
  default     = ""
  description = "Desired Kubernetes master version. If you do not specify a value, the latest available version is used"
}

variable "health_check_type" {
  type        = string
  description = "Controls how health checking is done. Valid values are `EC2` or `ELB`"
}

variable "associate_public_ip_address" {
  type        = bool
  description = "Associate a public IP address with an instance in a VPC"
}

variable "max_size" {
  type        = number
  description = "The maximum size of the AutoScaling Group"
}

variable "min_size" {
  type        = number
  description = "The minimum size of the AutoScaling Group"
}

variable "wait_for_capacity_timeout" {
  type        = string
  description = "A maximum duration that Terraform should wait for ASG instances to be healthy before timing out. Setting this to '0' causes Terraform to skip all Capacity Waiting behavior"
}

variable "autoscaling_policies_enabled" {
  type        = bool
  description = "Whether to create `aws_autoscaling_policy` and `aws_cloudwatch_metric_alarm` resources to control Auto Scaling"
}

variable "cpu_utilization_high_threshold_percent" {
  type        = number
  description = "Worker nodes AutoScaling Group CPU utilization high threshold percent"
}

variable "cpu_utilization_low_threshold_percent" {
  type        = number
  description = "Worker nodes AutoScaling Group CPU utilization low threshold percent"
}

variable "map_additional_aws_accounts" {
  description = "Additional AWS account numbers to add to `config-map-aws-auth` ConfigMap"
  type        = list(string)
  default     = []
}

variable "map_additional_iam_roles" {
  description = "Additional IAM roles to add to `config-map-aws-auth` ConfigMap"

  type = list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))

  default = []
}

variable "map_additional_iam_users" {
  description = "Additional IAM users to add to `config-map-aws-auth` ConfigMap"

  type = list(object({
    userarn  = string
    username = string
    groups   = list(string)
  }))

  default = []
}

variable "oidc_provider_enabled" {
  type        = bool
  default     = false
  description = "Create an IAM OIDC identity provider for the cluster, then you can create IAM roles to associate with a service account in the cluster, instead of using kiam or kube2iam. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html"
}

variable "kubeconfig_path" {
  type        = string
  description = "The path to `kubeconfig` file"
}

variable "local_exec_interpreter" {
  type        = string
  default     = "/bin/bash"
  description = "shell to use for local exec"
}

variable "configmap_auth_template_file" {
  type        = string
  default     = ""
  description = "Path to `config_auth_template_file`"
}

variable "configmap_auth_file" {
  type        = string
  default     = ""
  description = "Path to `configmap_auth_file`"
}

variable "install_aws_cli" {
  type        = bool
  default     = false
  description = "Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers"
}

variable "install_kubectl" {
  type        = bool
  default     = false
  description = "Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers"
}

variable "kubectl_version" {
  type        = string
  default     = ""
  description = "`kubectl` version to install. If not specified, the latest version will be used"
}

variable "external_packages_install_path" {
  type        = string
  default     = ""
  description = "Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers"
}

variable "aws_eks_update_kubeconfig_additional_arguments" {
  type        = string
  default     = ""
  description = "Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html"
}

variable "aws_cli_assume_role_arn" {
  type        = string
  default     = ""
  description = "IAM Role ARN for AWS CLI to assume before calling `aws eks` to update kubeconfig"
}

variable "aws_cli_assume_role_session_name" {
  type        = string
  default     = ""
  description = "An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig`"
}

variable "jq_version" {
  type        = string
  default     = "1.6"
  description = "Version of `jq` to download to extract temporary credentials after running `aws sts assume-role` if AWS CLI needs to assume role to access the cluster (if variable `aws_cli_assume_role_arn` is set)"
}

Support other AWS partitions

Have a question? Please check out our Slack Community or visit our Slack Archive.


Describe the Feature

Other partitions use different ARN designations. Hard-coding ARNs means the module only works in the AWS Commercial partition.

Expected Behavior

I'm able to use the module in AWS GovCloud

Use Case

Since the ARNs are hard coded, the module doesn't work in govcloud

Describe Ideal Solution

Change this:

resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy" {
  count      = local.enabled && var.use_existing_aws_iam_instance_profile == false ? 1 : 0
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = join("", aws_iam_role.default.*.name)
}

to this:

data "aws_partition" "current" {}

resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy" {
  count      = local.enabled && var.use_existing_aws_iam_instance_profile == false ? 1 : 0
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = join("", aws_iam_role.default.*.name)
}

(plus 2 other instances where ARNs are hard coded)

Alternatives Considered

None

Additional Context

PR incoming

kubernetes.io/cluster tag not set on autoscaling group

Found a bug? Maybe our Slack Community can help.


Describe the Bug

The ASG is not properly tagged with `"kubernetes.io/cluster/${var.cluster_name}" = "owned"` and the workers fail to join the cluster.

The bug was introduced in #52; since then the tag lists are no longer merged as they used to be.
54cffae#diff-dc46acf24afd63ef8c556b77c126ccc6e578bc87e3aa09a931f33d9bf2532fbbL179

Expected Behavior

The ASG should be tagged with `"kubernetes.io/cluster/${var.cluster_name}" = "owned"`.
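Restoring the behavior means merging the cluster ownership tag back into the tag list passed to the ASG; a sketch (local and input names are illustrative, not the module's actual ones):

```hcl
locals {
  autoscaling_group_tags = merge(
    module.label.tags,
    {
      "kubernetes.io/cluster/${var.cluster_name}" = "owned"
    }
  )
}
```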


Environment (please complete the following information):

TF 0.14.9

