terraform-aws-modules / terraform-aws-efs

Terraform module to create AWS EFS resources 🇺🇦

Home Page: https://registry.terraform.io/modules/terraform-aws-modules/efs/aws

License: Apache License 2.0

HCL 100.00%
aws-efs elastic-file-system terraform terraform-aws-module terraform-module

terraform-aws-efs's People

Contributors

antonbabenko, bryantbiggs, dchien234, dev-slatto, glavk, gp-davidhardy, jeenadeepak, kartsm, kodakmoment, magreenbaum, scaldabagno, semantic-release-bot


terraform-aws-efs's Issues

Utilize existing Security Group

Is it possible to utilize an existing security group for the mount targets rather than creating a new one?
If not, it would be nice if this were possible.
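A sketch of what the requested behavior might look like (the create_security_group flag and the per-mount-target security_groups key are hypothetical input names used for illustration, not confirmed module inputs):

```hcl
module "efs" {
  source = "terraform-aws-modules/efs/aws"

  name = "example"

  # Hypothetical flag: skip creating the module-managed security group.
  create_security_group = false

  mount_targets = {
    "eu-west-1a" = {
      subnet_id = "subnet-abcde012"
      # Hypothetical key: attach an existing security group instead.
      security_groups = [aws_security_group.existing.id]
    }
  }
}
```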

Terraform apply times out when there's a change to `security_group_rules`

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • No 🛑: more like an enhancement to the existing HCL implementation

Is your request related to a problem? Please describe.

  • Prerequisite:
    • You have an existing EFS module
    • You want to update your security_group_rules (e.g., to add additional CIDR blocks)
  • Observations:
    • When you run terraform apply, it tries to destroy the existing aws_security_group_rule and aws_security_group objects, and this operation times out after 15m (or the default timeout)
    • This is caused by the dependency between aws_security_group and the aws_efs_mount_target resource: the aws_security_group cannot be destroyed while it still has dependent objects, and the aws_efs_mount_target cannot be re-pointed to the new security group because it has not been created yet

Describe the solution you'd like.

  • Solution:
    • Add a create_before_destroy life cycle behavior to the above objects to enable terraform to replace objects properly.
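A minimal sketch of the proposed fix inside the module (the resource address follows the issue's description; an illustration, not the module's actual source):

```hcl
resource "aws_security_group" "this" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Create the replacement security group first, so the aws_efs_mount_target
    # can be re-pointed before the old group is destroyed. Without this, the
    # destroy blocks on the mount-target dependency and times out.
    create_before_destroy = true
  }
}
```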

Describe alternatives you've considered.

  • N.A.

Additional context

  • N.A.

deny_nonsecure_transport grants read-write access to all principals

Description

The policy generated by deny_nonsecure_transport grants access to all AWS principals. This makes the use of IAM to control access to the filesystem impossible when this boolean is set, and is an extremely significant side-effect of the boolean (in contrast to having it set to false and using policy_statements) that is not clear in documentation.

The problematic policy was added in #21, in an attempt to fix #20 and #11. In particular, I think the policy given in #20 is the wrong fix for the ECS issue: it only works because it grants access to all principals, which is far too broad and probably not the intention.

#20 mentions that the web console generated the policy. However, I believe the policies generated by the web console are intended to be used where IAM is not used to control access to the filesystem: all of them generate a similar policy granting access to all principals, with specific denies; this is because the 'default' EFS policy is to allow access to all principals, and use firewall rules to control access.

I think this module should support using IAM to selectively control access to the EFS filesystem, instead of firewall rules alone. At the very least, it should be made more explicit that deny_nonsecure_transport precludes the use of IAM. I would suggest creating a new boolean that makes it very explicit whether an 'allow all principals' policy will be attached; it could then be set to false to facilitate the use of IAM to control access.
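A minimal sketch of the suggested boolean (the variable name attach_allow_all_principals_policy is hypothetical):

```hcl
variable "attach_allow_all_principals_policy" {
  description = "Whether to attach the broad allow-all-principals statement; set to false to control access via IAM instead"
  type        = bool
  default     = true # preserves current behavior for existing users
}
```

Defaulting to true keeps existing configurations unchanged, while IAM-based setups can opt out explicitly.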

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: v1.6.2

  • Terraform version: v1.6.3

  • Provider version(s): provider registry.terraform.io/hashicorp/aws v5.40.0

Reproduction Code [Required]

module "efs" {
  source = "terraform-aws-modules/efs/aws"

  # File system
  name           = "example"
  creation_token = "example-token"
  encrypted      = true
  kms_key_arn    = "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

  lifecycle_policy = {
    transition_to_ia = "AFTER_30_DAYS"
  }

  # File system policy
  attach_policy                      = true
  bypass_policy_lockout_safety_check = false
  deny_nonsecure_transport           = true
  policy_statements = [
    # XXX: This policy has no effect, because of the 'grant all' policy added by deny_nonsecure_transport!
    {
      sid     = "Example"
      actions = ["elasticfilesystem:ClientMount"]
      principals = [
        {
          type        = "AWS"
          identifiers = ["arn:aws:iam::111122223333:role/EfsReadOnly"]
        }
      ]
    }
  ]

  # Mount targets / security group
  mount_targets = {
    "eu-west-1a" = {
      subnet_id = "subnet-abcde012"
    }
    "eu-west-1b" = {
      subnet_id = "subnet-bcde012a"
    }
    "eu-west-1c" = {
      subnet_id = "subnet-fghi345a"
    }
  }
  security_group_description = "Example EFS security group"
  security_group_vpc_id      = "vpc-1234556abcdef"
  security_group_rules = {
    vpc = {
      # relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = ["10.99.3.0/24", "10.99.4.0/24", "10.99.5.0/24"]
    }
  }
}

Steps to reproduce the behavior:

  1. Create an EFS filesystem using the module that:
    • Has deny_nonsecure_transport set to true
    • Uses policy_statements to attempt to grant some form of access to a specific IAM principal
  2. Attempt to mount the EFS filesystem with an IAM principal that you did not grant explicit access to
  3. Notice that the EFS filesystem is mounted successfully

Expected behavior

The EFS filesystem should not be mountable by IAM principals that were not explicitly granted access when deny_nonsecure_transport is true.

Actual behavior

The EFS filesystem can be mounted by any IAM principal when deny_nonsecure_transport is true, regardless of any allows in policy_statements.

Very difficult to specify both transition_to_ia and transition_to_primary_storage_class in lifecycle policies

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 1.0.1

  • Terraform version:

Terraform v1.3.4
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.39.0
  • Provider version(s):
    (output identical to above)

Reproduction Code [Required]

provider "aws" {
  region = "us-east-1"
}

module "maybe_efs" {
  source  = "terraform-aws-modules/efs/aws"
  version = "v1.0.1"

  create = true
  name   = "lorem-ipsum"

  lifecycle_policy = {
    transition_to_ia                    = "AFTER_7_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }
}

Steps to reproduce the behavior:

  1. terraform init with above code
  2. terraform plan -out apply-${TF_WORKSPACE:-default}.tfplan
  3. terraform apply apply-${TF_WORKSPACE:-default}.tfplan

Expected behavior

Applies successfully.

Actual behavior

╷
│ Error: error creating EFS file system (fs-0659eb16fd4b7abe4) lifecycle configuration: BadRequest: One or more LifecyclePolicy objects specified are malformed.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "2c30e1a2-97a2-4597-bd8f-9c5d3e058c4b"
│   },
│   ErrorCode: "BadRequest",
│   Message_: "One or more LifecyclePolicy objects specified are malformed."
│ }
│
│   with module.maybe_efs.aws_efs_file_system.this[0],
│   on .terraform/modules/maybe_efs/main.tf line 5, in resource "aws_efs_file_system" "this":
│    5: resource "aws_efs_file_system" "this" {
│
╵

The EFS filesystem is created, but the resource is tainted.

If I remove either transition_to_ia or transition_to_primary_storage_class from my lifecycle_policy, then it applies just fine. But due to the particular for_each expression used by dynamic "lifecycle_policy", I don't know if it is possible to construct a data structure that can pass both lifecycle attributes successfully.
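One possible shape, sketched under the assumption that the module's dynamic block could iterate per map key so each attribute lands in its own lifecycle_policy block (an illustration, not the module's actual implementation):

```hcl
# Emit one lifecycle_policy block per map entry, since the provider requires
# transition_to_ia and transition_to_primary_storage_class in separate blocks.
dynamic "lifecycle_policy" {
  for_each = var.lifecycle_policy

  content {
    transition_to_ia                    = lifecycle_policy.key == "transition_to_ia" ? lifecycle_policy.value : null
    transition_to_primary_storage_class = lifecycle_policy.key == "transition_to_primary_storage_class" ? lifecycle_policy.value : null
  }
}
```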


Additional context

I will be opening a pull request with a suggested fix.

Example can cause a lot of AWS spend

Description

Maybe not really "bug"... but not a feature request either, IMO. The example should really be changed a bit given the potential cost implications of running it without scrutiny.

Problem:

The current example can quickly chew up some AWS spend...

Suggestion:

  1. Change the region to "YOUR-REGION" so resources don't end up somewhere that the user might not be monitoring. Region should always be a very conscious choice.
  2. Reduce the provisioned_throughput_in_mibps from 256 to something very small.
  3. Better yet, change the example to Elastic Throughput!
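Suggestion 3 might look like this (a sketch, assuming a module/provider version that already supports elastic throughput):

```hcl
module "efs" {
  source = "terraform-aws-modules/efs/aws"

  name = "example"

  # Elastic throughput scales automatically with the workload and avoids the
  # fixed monthly cost of a 256 MiB/s provisioned throughput setting.
  throughput_mode = "elastic"
}
```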

TF drift size_in_bytes

Description

Terraform creates 3 EFS volumes, but after they grow in size, the new value shows up as configuration drift:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.efs["fs1"].aws_efs_file_system.this[0] has changed
  ~ resource "aws_efs_file_system" "this" {
        id                              = "fs-072d761e2dca64422"
      ~ size_in_bytes                   = [
          ~ {
              ~ value             = 6144 -> 2185195520
              ~ value_in_standard = 6144 -> 2185195520
                # (1 unchanged attribute hidden)
            },
        ]
        tags                            = {
            "Name" = ""
            "name" = "test_efsmodule_1"
        }
        # (10 unchanged attributes hidden)
    }


  • [x] ✋ I have searched the open/closed issues and my issue is not listed.


Versions

  • Module version [Required]:
    "Key":"efs","Source":"registry.terraform.io/terraform-aws-modules/efs/aws","Version":"1.3.0"

  • Terraform version:

Terraform v1.6.3

  • Provider version(s):
  • provider registry.terraform.io/hashicorp/aws v5.23.1
  • provider registry.terraform.io/hashicorp/cloudinit v2.3.2
  • provider registry.terraform.io/hashicorp/helm v2.11.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.23.0
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/hashicorp/tls v4.0.4

Reproduction Code [Required]

### EFS Module Inputs ####
locals {
  efs_filesystems = {
    fs1 = {
      fs_tag           = "test_efsmodule_1"
      performance_mode = "generalPurpose"
    },
    fs2 = {
      fs_tag           = "test_efsmodule_2"
      performance_mode = "generalPurpose"
    }
    fs3 = {
      fs_tag           = "test_efsmodule_2"
      performance_mode = "generalPurpose"
    }
    #### Add more filesystems as needed
  }
}
module "efs" {
  for_each = local.efs_filesystems

  source = "terraform-aws-modules/efs/aws"

  encrypted            = false
  enable_backup_policy = false
  performance_mode     = each.value.performance_mode
  #### File system policy
  attach_policy            = true
  deny_nonsecure_transport = false
  policy_statements = [
    {
      sid     = "Example"
      actions = ["elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess"]
      principals = [
        {
          type        = "AWS"
          identifiers = ["*"]
        }
      ]
    }
  ]
  ### Mount targets / security group
  mount_targets = { for k, v in zipmap(local.azs, module.vpc.public_subnets) : k => { subnet_id = v } }

  security_group_description = "Example EFS security group"
  security_group_vpc_id      = module.vpc.vpc_id

  security_group_rules = {
    vpc = {
      #### relying on the defaults provided for EFS/NFS (2049/TCP + ingress)
      description = "NFS ingress from VPC private subnets"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  tags = {
    name = each.value.fs_tag
  }
  
}
### EFS Module Inputs ####

Steps to reproduce the behavior:

  1. Created the EFS volumes.
  2. Used the EFS filesystem in my environment (stored some application logs).
  3. terraform plan showed that a config drift happened:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.efs["fs1"].aws_efs_file_system.this[0] has changed
  ~ resource "aws_efs_file_system" "this" {
        id                              = "fs-072d761e2dca64422"
      ~ size_in_bytes                   = [
          ~ {
              ~ value             = 6144 -> 2185195520
              ~ value_in_standard = 6144 -> 2185195520
                # (1 unchanged attribute hidden)
            },
        ]
        tags                            = {
            "Name" = ""
            "name" = "test_efsmodule_1"
        }
        # (10 unchanged attributes hidden)

Expected behavior

I expect to have a way of telling terraform or the module that an increase/decrease in size is normal for volumes.

Actual behavior

I'm getting warned that a config drift happened. I saw that the values get updated in the terraform state file but I'm concerned
that I have no way of suppressing this warning.

publish latest version to Terraform registry

Description

Can you please publish v1.6.0 to the Terraform registry? It's still sitting at v1.4.0.


  • ✋ I have searched the open/closed issues and my issue is not listed.

EBS CSI driver: unauthorized (when deploying in CI/CD)

Description

  • ✋ I have searched the open/closed issues and my issue is not listed.
    There are 0 open issues listed


Versions

  • Module version [Required]: 18.26.6 and 19.16.0

  • Terraform version:

1.4.5

  • Provider version(s):

the relevant subset:

└── module.eks
    ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    ├── provider[registry.terraform.io/hashicorp/tls] ~> 3.0
    ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
    ├── module.self_managed_node_group
    │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    │   └── module.user_data
    │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
    ├── module.eks_managed_node_group
    │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    │   └── module.user_data
    │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
    ├── module.fargate_profile
    │   └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    └── module.kms
        └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0

Reproduction Code [Required]

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name    = var.eks_cluster_name
  cluster_version = var.eks_cluster_version

  kms_key_administrators = var.kms_key_administrators

  vpc_id     = var.vpc_id
  subnet_ids = local.eks_subnet_ids

  cluster_endpoint_private_access = var.eks_endpoint_private_access
  cluster_endpoint_public_access  = var.eks_endpoint_public_access

  # Temp workaround for bug: double owned tag
  # terraform-aws-modules/terraform-aws-eks#1810
  node_security_group_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = null
  }

  eks_managed_node_group_defaults = {
    ami_type                              = "AL2_x86_64"
    key_name                              = var.aws_keypair_name
    attach_cluster_primary_security_group = true
    # Disabling and using externally provided security groups
    create_security_group  = false
    vpc_security_group_ids = var.eks_vpc_security_groups
    iam_role_name          = "${var.eks_cluster_name}_ng"
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = var.eks_node_disk_size
          volume_type = "gp3"
        }
      }
    }
  }

  eks_managed_node_groups = local.node_groups

  tags = merge(
    { Name = var.eks_cluster_name },
    var.tags
  )
}

Steps to reproduce the behavior:

This is a problem in CI/CD (GitHub Actions), so there is no local cache.

Deployed via laptop it works fine, but when I introduce another principal to deploy it, I run into a problem. I can reproduce it when I assume the CI/CD role and run an apply.

Expected behavior

After the summary of the plan, I expect terraform to return an exit code of 0 and terminate.

Actual behavior

terraform plan returns an error:

terraform plan
...
all output is as expected
...
Plan: 25 to add, 74 to change, 19 to destroy.
Releasing state lock. This may take a few moments...
Error: Process completed with exit code 1.

Error: Unauthorized

with module.dockyard.kubernetes_storage_class.gp3,

on .terraform/modules/dockyard/terraform/eks-addon.tf line 37, in resource "kubernetes_storage_class" "gp3":

37: resource "kubernetes_storage_class" "gp3" {

This seems to be a permissions issue of having multiple principals deploying the EBS CSI driver. I filed a ticket with Amazon support, and the IAM roles seem to be set up properly.

AWS EFS Policy Default

Description

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.


Versions

  • Module version [Required]:

  • Terraform version: 1.3.7

  • Provider version(s): 1.1.1

Reproduction Code [Required]

Steps to reproduce the behavior:
  1. Create an EFS filesystem with the module and set deny_nonsecure_transport = true
  2. Observe that the generated policy is incomplete

Expected behavior

I expect the bug to be fixed and the incomplete policy to be replaced with the correct one.

Actual behavior

I was not able to mount the filesystem and got an "access denied" message, even though I had all the required permissions.


No more than 2 "lifecycle_policy" blocks are allowed

Description

It's not possible to set a lifecycle policy that uses all three rules made available in #24 and supported by AWS:

  lifecycle_policy = {
    transition_to_ia                    = "AFTER_30_DAYS"
    transition_to_archive               = "AFTER_90_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 1.6.2

  • Terraform version:
    1.0.11

  • Provider version(s):
    5.32.0

Reproduction Code [Required]

Steps to reproduce the behavior:
Create an EFS filesystem via the module with the following lifecycle_policy:

  lifecycle_policy = {
    transition_to_ia                    = "AFTER_30_DAYS"
    transition_to_archive               = "AFTER_90_DAYS"
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }

Expected behavior

The ability to set all three at the same time.

Actual behavior

this error is emitted on a terraform plan:
No more than 2 "lifecycle_policy" blocks are allowed


Add "transition_to_archive" to EFS lifecycle_policy

Is your request related to a new offering from AWS?


Describe the solution you'd like.

I would like to be able to set the "transition_to_archive" value of the EFS lifecycle_policy, in addition to "transition_to_ia" and "transition_to_primary_storage_class".

Provisioned throughput cost warning

Is your request related to a problem? Please describe.

I was testing EFS as a home folder on EC2 and blindly copied the example code, thinking EFS was elastic only. The first example uses a 256 MiB/s provisioned throughput mode that costs about $6 per MiB/s per month, roughly $50 each day. Obviously this is a user error, but a fair warning would be nice. I was lucky enough to check the cost explorer after a short time.

Support for ignoring changes to size_in_bytes attribute in aws_efs_file_system

Is your request related to a new offering from AWS?

No 🛑, this request is not related to a new offering from AWS, but rather to a common behavior of AWS EFS where the size_in_bytes attribute can change outside of Terraform's management, causing unnecessary noise in terraform plan output.

Is your request related to a problem? Please describe.

I'm frustrated that every time someone opens a PR and we run terraform plan, it shows changes in the EFS size_in_bytes as if they were changes in the PR, even though the actual filesystem configuration hasn't changed. This creates confusion and makes it harder to review the actual changes introduced by the PR.

Describe the solution you'd like.

I would like the terraform-aws-modules/efs/aws module to support an option to ignore changes to the size_in_bytes attribute in the aws_efs_file_system resource. This can be implemented by allowing users to specify a lifecycle block configuration within the module that would pass through to the underlying aws_efs_file_system resource.

For example, the module could expose a variable such as ignore_size_in_bytes_changes that, when set to true, would automatically add the following lifecycle configuration to the aws_efs_file_system resource:

lifecycle {
  ignore_changes = [
    size_in_bytes,
  ]
}

Describe alternatives you've considered.

As an alternative, I have considered creating a wrapper module to simulate ignoring changes to the size_in_bytes attribute using a null_resource with custom triggers. However, this approach is not ideal as it adds complexity and doesn't directly address the issue at the resource level.

Another alternative is to manually ignore these changes in the terraform plan output, but this is error-prone and not a scalable solution for larger teams or automated CI/CD pipelines.

Additional context

The ability to ignore certain attributes from the terraform plan output is crucial for teams to review and understand infrastructure changes accurately. Supporting this feature within the module would greatly enhance the usability and reduce potential confusion during code reviews.

throughput_mode: elastic not supported

Description

EFS supports 3 throughput modes: bursting, provisioned, elastic
Please refer: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system#throughput_mode

But the terraform-aws-efs module supports only the bursting and provisioned throughput modes.

When trying to use elastic throughput mode, you get the following error:

module "efs" {
  source           = "terraform-aws-modules/efs/aws"
  performance_mode = "generalPurpose"
  throughput_mode  = "elastic"
}

│ Error: expected throughput_mode to be one of [bursting provisioned], got elastic
│
│   with module.efs.aws_efs_file_system.this[0],
│   on .terraform/modules/efs-csi.efs/main.tf line 14, in resource "aws_efs_file_system" "this":
│   14:   throughput_mode                 = var.throughput_mode

Please add support for elastic throughput mode.

Policy generated when deny_nonsecure_transport = true is incomplete/outdated

Description

The policy generated when deny_nonsecure_transport = true is incomplete/outdated

Versions

  • Module version [Required]: 1.3.1
  • Terraform version: 1.6.6
  • Provider version(s): 5.21.0

Expected behavior

When enabling via the web console, we see the generated policy to be:

{
    "Version": "2012-10-17",
    "Id": "efs-policy-wizard-01983604-a016-498a-b73c-a6956f8caa13",
    "Statement": [
        {
            "Sid": "efs-statement-ba87d44a-9919-4ded-969e-b42792f6e334",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "elasticfilesystem:ClientRootAccess",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientMount"
            ],
            "Condition": {
                "Bool": {
                    "elasticfilesystem:AccessedViaMountTarget": "true"
                }
            }
        },
        {
            "Sid": "efs-statement-d04fd86d-0ea8-49dc-9d76-b6383171d3a7",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}

This policy allowed my ecs containers to properly mount the volume.

Actual behavior

When deny_nonsecure_transport = true (which is the default), this module generates an incomplete policy:

{
    "Sid": "NonSecureTransport",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "*",
    "Resource": "arn:aws:elasticfilesystem:us-east-1:12345678912:file-system/fs-0114bc825a22274e46",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}

This policy is insufficient for ecs containers to mount the volume when transit_encryption is enabled.

(It seems this issue has been reported before without a response: #11.)
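As a possible workaround until the generated policy is fixed, the missing allow statement could be re-added through policy_statements (a sketch; the exact condition field names are an assumption about the module's statement schema):

```hcl
module "efs" {
  source = "terraform-aws-modules/efs/aws"

  name = "example"

  attach_policy            = true
  deny_nonsecure_transport = true

  # Re-add the "allow when accessed via a mount target" statement that the
  # console wizard generates but the module's deny-only policy omits.
  policy_statements = [
    {
      sid    = "AllowViaMountTarget"
      effect = "Allow"
      actions = [
        "elasticfilesystem:ClientRootAccess",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientMount",
      ]
      principals = [
        {
          type        = "AWS"
          identifiers = ["*"]
        }
      ]
      # Assumed condition shape; mirrors the console-generated policy above.
      conditions = [
        {
          test     = "Bool"
          variable = "elasticfilesystem:AccessedViaMountTarget"
          values   = ["true"]
        }
      ]
    }
  ]
}
```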
