Comments (10)
Yeah, I resorted to doing the same in the alb module I maintain. For whatever reason, until now I hadn't seen this problem arise, even when specifying multiple worker groups... unsure which change introduced this along the way. If we need to add a top-level var of `worker_group_count`, that's not ideal, but also not the worst situation.
from terraform-aws-eks.
Hey there @laverya. Indeed, this is a strange bug 🐛. Completely bizarre that the subnet configuration has anything to do with the count of the userdata. This is a shot in the dark, but can you try specifying your worker group configuration as a local using `list(map())`, as I do in the test fixture? Perhaps the way a bare literal is passed like this gives Terraform problems.
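For reference, the `list(map())` suggestion might look something like this as a local (group names and attributes here are made up for illustration; Terraform 0.11 syntax):

```hcl
locals {
  # build the worker group list with list()/map() rather than a bare literal
  worker_groups = "${list(
    map("name", "workers-a", "instance_type", "t2.medium"),
    map("name", "workers-b", "instance_type", "t2.large")
  )}"
}

module "eks" {
  # ...
  worker_groups = "${local.worker_groups}"
}
```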
I've been debugging a bit myself and it seems to be these three locations:
- line 8 in `2814efc`
- line 33 in `2814efc`
- line 78 in `2814efc`
I don't think `length()` can be applied to computed values.
As for specifying the worker group as a local - yep, first thing I tried 😄
The solution I'm trying locally right now is to add a `worker_group_count` variable that needs to be manually specified by the user and have those three locations refer to it instead. Not pretty (and not backwards compatible), but it seems to work.
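A minimal sketch of that workaround, assuming Terraform 0.11 syntax (the resource shown is illustrative, not the module's actual code):

```hcl
variable "worker_group_count" {
  description = "Number of entries in worker_groups; must be set manually because count cannot be computed"
  default     = 1
}

resource "aws_launch_configuration" "workers" {
  # previously: count = "${length(var.worker_groups)}"
  count = "${var.worker_group_count}"
  # ...
}
```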
I think this might have been here for a while (as long as "a while" can be in a 6-week-old repo, I mean).
The only thing I can see within `worker_group` that you'd commonly use a computed value for is `subnets`, and that was only added a week ago, so it's possible no one encountered this beforehand.
Another fix for this might just be to pull `subnets` out of `worker_group` and add another top-level var of `worker_subnets` that would fill the same role. It would sacrifice the ability to put different worker groups in different subnets, though, and feels more like a hack than an actual fix. (This won't be the only reason anyone has to use computed values in `worker_group` configs.)
I'll make a PR with the `worker_group_count` var solution momentarily. I haven't changed the version numbers, though, since I'm not sure if this would become part of 1.4.0 or 2.0.0.
I ran into this when overriding the security groups with SGs that I created in-line (using `${aws_security_group.masters.id}` and `${aws_security_group.workers.id}`). I'm able to work around the issue by commenting out the module and applying only the security group changes. Once the security groups exist, I am able to apply the module properly.
Just ran into exactly the same problem as @mars64 above. If I supply security groups to the module, like this:

```hcl
cluster_security_group_id = "${aws_security_group.devel-cluster.id}"
worker_security_group_id  = "${aws_security_group.devel-node.id}"
```

where the groups themselves are defined just above the module invocation, I get:

```
module.eks.aws_security_group.cluster: aws_security_group.cluster: value of 'count' cannot be computed
```

Going to work around it in the same way and pre-create the groups.
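The pre-creation workaround can also be done without commenting anything out, using `terraform apply -target` (the resource addresses below match the example above and would differ in other configs):

```
# step 1: create only the security groups the module references
terraform apply -target=aws_security_group.devel-cluster -target=aws_security_group.devel-node

# step 2: with the group IDs now known, the module's counts can be computed
terraform apply
```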
I'm facing this issue when using the `map_roles` functionality to map role ARNs to Kubernetes groups. If I specify the ARNs explicitly it works fine, but if I use interpolation to specify the ARN of a role that is also being created by Terraform, I get the `value of 'count' cannot be computed` error.
Simplified example:
```hcl
data "aws_iam_policy_document" "kube_admin" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      identifiers = ["ec2.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_iam_role" "kube_admin" {
  name               = "kubernetes-admin"
  assume_role_policy = "${data.aws_iam_policy_document.kube_admin.json}"
}

module "eks" {
  ...
  map_roles = [
    {
      role_arn = "${aws_iam_role.kube_admin.arn}"
      username = "admin"
      group    = "system:masters"
    }
  ]
}
```
When running a plan, it gives the error:

```
* module.eks.data.template_file.map_roles: data.template_file.map_roles: value of 'count' cannot be computed
```

If I change to `role_arn = "some-static-string"`, everything works fine. This also applies to `map_users` and `map_accounts`.
This comment on the Terraform issue board indicates that this is a limitation of the current design:

> In the current version of Terraform, there is a restriction where passing a list that contains any values that are `<computed>` to a function will force the function result to always be `<computed>` itself.

Sounds like that might improve a bit when Terraform v0.12 ships, but for now we'll need a workaround to make these things work.
I see PR #75 proposes using a separate variable to get around this problem for worker groups. Is that the same solution we should use for all of these problems where `count` cannot be a computed value? If so, I can submit a PR with those changes.
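Applied to `map_roles`, the #75-style fix would look roughly like this (a sketch only; the actual variable names would be settled in the PR):

```hcl
variable "map_roles_count" {
  description = "Number of entries in map_roles; set manually so count never depends on a computed value"
  default     = 0
}

data "template_file" "map_roles" {
  # previously: count = "${length(var.map_roles)}"
  count = "${var.map_roles_count}"
  # ...
}
```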
@jeff-french you've got the right idea. I've seen this same problem in a few contexts now, and the only way I've seen it worked around is having that separate count variable. It's fragile and not great, but it could be worse. The work in #75 has been merged and released, so let me know after upgrading if you're still seeing this.
@brandoconnor I ran into it while trying to set `cluster_security_group_id` to a locally created resource.
excerpt:
```hcl
module "eks_cluster" {
  source                               = "terraform-aws-modules/eks/aws"
  cluster_name                         = "${local.full_name}"
  subnets                              = "${data.terraform_remote_state.vpc.mgmt_subnets}"
  tags                                 = "${local.tags}"
  vpc_id                               = "${data.terraform_remote_state.vpc.vpc_id}"
  create_elb_service_linked_role       = true
  worker_groups                        = "${local.worker_groups}"
  cluster_security_group_id            = "${aws_security_group.cluster_sg.id}"
  worker_additional_security_group_ids = ["${aws_security_group.worker_nodes_sg.id}"]
}

resource "aws_security_group" "cluster_sg" {
  name        = "${local.full_name}"
  description = "sg for ${local.full_name}"
  vpc_id      = "${data.terraform_remote_state.vpc.vpc_id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.mgmt_cidr_blocks}"]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "${local.full_name}"
    env  = "${var.env}"
  }
}
```
results in:

```
module.eks_cluster.aws_security_group.cluster: aws_security_group.cluster: value of 'count' cannot be computed
```
I suggest it should receive the same treatment `worker_groups_count` got: maybe add a `use_custom_cluster_sg` variable and use it in the count instead of the computed value, at least until Terraform provides a suitable workaround.
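A sketch of that suggestion (the variable name is from the comment above; the "previously" conditional is an assumption about the module's internals, not its actual code):

```hcl
variable "use_custom_cluster_sg" {
  description = "Set to true when passing cluster_security_group_id, so count never references a computed value"
  default     = false
}

resource "aws_security_group" "cluster" {
  # previously something like: count = "${var.cluster_security_group_id == "" ? 1 : 0}"
  count = "${var.use_custom_cluster_sg ? 0 : 1}"
  # ...
}
```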
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.