## Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the `examples/*` directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running `terraform init && terraform apply` without any further changes.

If your request is for a new feature, please use the `Feature request` template.
## ⚠️ Note

Before you submit an issue, please perform the following first:

1. Remove the local `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root to pull down modules: `terraform init`
3. Re-attempt your `terraform plan` or `terraform apply` and check if the issue still persists
## Versions

- Module version [Required]: 19.16.0
- Terraform version: 1.4.5

Provider version(s), the relevant subset of `terraform providers`:
```
├── module.eks
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   ├── provider[registry.terraform.io/hashicorp/tls] ~> 3.0
│   ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
│   ├── module.self_managed_node_group
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   │   └── module.user_data
│   │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│   ├── module.eks_managed_node_group
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   │   └── module.user_data
│   │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│   ├── module.fargate_profile
│   │   └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   └── module.kms
│       └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
```
## Reproduction Code [Required]
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.16.0"
cluster_name = var.eks_cluster_name
cluster_version = var.eks_cluster_version
kms_key_administrators = var.kms_key_administrators
vpc_id = var.vpc_id
subnet_ids = local.eks_subnet_ids
cluster_endpoint_private_access = var.eks_endpoint_private_access
cluster_endpoint_public_access = var.eks_endpoint_public_access
Temp workaround for bug : double owned tag
node_security_group_tags = {
"kubernetes.io/cluster/${var.eks_cluster_name}" = null
}
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
key_name = var.aws_keypair_name
attach_cluster_primary_security_group = true
# Disabling and using externally provided security groups
create_security_group = false
vpc_security_group_ids = var.eks_vpc_security_groups
iam_role_name = "${var.eks_cluster_name}_ng"
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = var.eks_node_disk_size
volume_type = "gp3"
}
}
}
}
eks_managed_node_groups = local.node_groups
tags = merge({
Name = var.eks_cluster_name},
var.tags
)
}
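For completeness, the module block above leans on variables and locals defined elsewhere in the root module. A minimal sketch of the kind of declarations behind them (names match the reproduction, but types and values here are placeholders, not my real configuration):

```hcl
# Placeholder declarations for the references used in the module block above.
variable "eks_cluster_name" { type = string }
variable "eks_cluster_version" { type = string }
variable "kms_key_administrators" { type = list(string) }
variable "vpc_id" { type = string }
variable "eks_endpoint_private_access" { type = bool }
variable "eks_endpoint_public_access" { type = bool }
variable "aws_keypair_name" { type = string }
variable "eks_vpc_security_groups" { type = list(string) }
variable "eks_node_disk_size" { type = number }
variable "tags" { type = map(string) }
variable "subnet_ids" { type = list(string) } # hypothetical source for local.eks_subnet_ids

locals {
  eks_subnet_ids = var.subnet_ids

  # Shape of the map consumed by eks_managed_node_groups; values are stand-ins.
  node_groups = {
    default = {
      min_size       = 1
      max_size       = 3
      desired_size   = 2
      instance_types = ["m5.large"]
    }
  }
}
```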
## Steps to reproduce the behavior

- Are you using workspaces? No.
- Have you cleared the local cache? This is a problem in CI/CD (GitHub Actions), so there is no local cache.
- Steps: the cluster deploys fine from my laptop, but when I introduce another principal to deploy it, I run into this problem. I can reproduce it by assuming the CI/CD role and running an apply (see the provider configuration sketch below).
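For reference, the `kubernetes` provider is wired to the cluster with the usual exec-based auth pattern from the module docs; this is a sketch from memory rather than the exact file, so treat the details as assumptions:

```hcl
# Assumed provider wiring (standard pattern, not copied verbatim from the repo).
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Whichever principal runs terraform (laptop user or CI/CD role) is the
    # identity that authenticates against the cluster API.
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```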
## Expected behavior

After the summary of the plan, I expect terraform to return an exit code of 0 and terminate.
## Actual behavior

`terraform plan` returns an error:

```
terraform plan
...
all output is as expected
...
Plan: 25 to add, 74 to change, 19 to destroy.
Releasing state lock. This may take a few moments...
Error: Process completed with exit code 1.

Error: Unauthorized

  with module.dockyard.kubernetes_storage_class.gp3,
  on .terraform/modules/dockyard/terraform/eks-addon.tf line 37, in resource "kubernetes_storage_class" "gp3":
  37: resource "kubernetes_storage_class" "gp3" {
```
This seems to be a permissions issue caused by multiple principals deploying the EBS CSI driver. I filed a ticket with AWS support, and the IAM roles appear to be set up properly.
## Terminal Output Screenshot(s)

## Additional context
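My working theory is that only the cluster creator (my laptop identity) is mapped into the cluster's `aws-auth` config map, so the CI/CD role is unauthorized for any `kubernetes_*` resources. If that is right, mapping the CI/CD role in via the module should be the fix; sketched below as an assumption, with a placeholder role ARN:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  # ... existing arguments from the reproduction above ...

  # Hypothetical addition: let the module manage aws-auth and map the CI/CD
  # deploy role into the cluster so a second principal is also authorized.
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::111111111111:role/cicd-deploy" # placeholder ARN
      username = "cicd-deploy"
      groups   = ["system:masters"]
    }
  ]
}
```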