
aws-samples / aws-network-hub-for-terraform


This repository demonstrates a scalable, segregated, secured AWS network hub for multi-account organizations using Terraform.

License: MIT No Attribution

Topics: aws, terraform, network, aws-orga, ram, network-hub, centralised-networking, iac, automation, iam

aws-network-hub-for-terraform's Introduction

Network Hub Account with Terraform

This repository demonstrates a scalable, segregated, secured AWS network for multi-account organizations. Using Transit Gateway to separate production, non-production and shared services traffic, it deploys an advanced AWS networking pattern using centralized ingress and egress behind Network Firewall, centralizes private VPC endpoints to share across all VPCs, and manages IP address allocation using Amazon VPC IPAM.

  • Perfect for a central networking hub account, potentially alongside Account Factory for Terraform
  • The solution itself can be deployed into nonprod, test and production environments for safe iteration and testing.
  • Written using clean, composable modules: the solution is easily extended and customised.

Spoke VPCs for organization members can be created using the provided sister example in this repo.

The following resources will be deployed by this example:

  • VPC Transit Gateway
  • VPC Endpoints
  • AWS Network Firewall
  • Route 53 Resolver
  • Amazon VPC IP Address Manager

The resources deployed and the architectural pattern they follow are provided for demonstration and testing purposes but are based on and inspired by AWS best practice and articles.

Table of Contents

Overview

Diagrams

Solution Diagram

Transit Gateway

VPC Endpoints

Network Firewall

Route 53 Resolver

References


Prerequisites

Minimal tooling is required for this solution. However, there are hard requirements around AWS configuration.

Tooling

  • Terraform ~> 1.1
    • AWS provider >= 4.4.0
  • AWS CLI
  • Git CLI

AWS account configuration

  • AWS Organization
  • Centralised network account
  • IAM role with required permissions
  • RAM sharing enabled for the Organisation
aws ram enable-sharing-with-aws-organization


Troubleshooting tip

If you experience any issues with the RAM share, disable RAM service access and then re-enable sharing (re-run the enable-sharing command above):

aws organizations disable-aws-service-access --service-principal ram.amazonaws.com
  • IPAM delegated from the Organization management (master) account to the centralised network account
aws ec2 enable-ipam-organization-admin-account \
    --delegated-admin-account-id <Network-Account-ID>


Customisation

If you do not define a remote backend, Terraform will store the backend files, including tfstate, in the local directory. Examples of how to customise the Terraform backend are included but commented out. The usual caveats around safe storage of Terraform state apply.

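For example, a minimal sketch of an S3 backend with DynamoDB state locking; the bucket, table and key names below are placeholders for illustration, not resources created by this repository:

terraform {
  backend "s3" {
    bucket         = "example-network-hub-tfstate" # placeholder bucket name
    key            = "network-hub/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "example-terraform-locks"     # placeholder lock table
    encrypt        = true
  }
}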

Example GitLab HTTP backend for use with GitLab CI.

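A minimal sketch of such an HTTP backend, assuming GitLab-hosted Terraform state; the host, project ID and state name are placeholders, and in GitLab CI these values are usually injected at init time via -backend-config flags or TF_HTTP_* environment variables rather than hard-coded:

terraform {
  backend "http" {
    address        = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/network-hub"
    lock_address   = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/network-hub/lock"
    unlock_address = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/network-hub/lock"
    lock_method    = "POST"
    unlock_method  = "DELETE"
    username       = "<gitlab-username>"       # placeholder credential
    password       = "<gitlab-access-token>"   # placeholder credential
  }
}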

Variables

Global variables:

  • environment: Environment to deploy into. Accepted values: dev, test, preprod, prod
  • aws_region: Region to deploy in
  • vpc_endpoints: List of centralised VPC endpoints to be deployed

Environment-specific variables (per entry in env_config):

  • ipam_cidr: CIDR to be allocated to the IP Address Manager
  • tgw_route_tables: Transit Gateway route tables to create
  • root_domain: Root DNS domain to create private hosted zone and resolver rules

Input Variable - config.auto.tfvars

aws_region    = "eu-west-2"
vpc_endpoints = ["ec2", "rds", "sqs", "sns", "ssm", "logs", "ssmmessages", "ec2messages", "autoscaling", "ecs", "athena"]

env_config = {
  dev = {
    ipam_cidr        = "10.0.0.0/10"
    tgw_route_tables = ["prod", "dev", "shared"]
    root_domain      = "network-dev.internal"
  }
  test = {
    ipam_cidr        = "10.64.0.0/10"
    tgw_route_tables = ["prod", "dev", "shared"]
    root_domain      = "network-test.internal"
  }
  preprod = {
    ipam_cidr        = "10.128.0.0/10"
    tgw_route_tables = ["prod", "dev", "shared"]
    root_domain      = "network-preprod.internal"
  }
  prod = {
    ipam_cidr        = "10.192.0.0/10"
    tgw_route_tables = ["prod", "dev", "shared"]
    root_domain      = "network-prod.internal"
  }
}

Quick Start

Deploy from client machine

When deploying from your local machine, having configured the Terraform backend in the code, ensure you have read and write access to that backend. Possible backends include HTTP, Consul, Postgres, Artifactory, S3, or S3 + DynamoDB. Initialise Terraform, run validate and format, review the plan, and then apply; a complete example sequence is shown after the steps below.

  • terraform init
  • terraform validate
  • set environment for deployment
    • export TF_VAR_environment="$ENV"
    • Set-Item -Path env:TF_VAR_environment -Value "$ENV" (Possible $ENV values - dev, test, preprod, prod)
  • terraform plan
  • terraform apply or terraform apply --auto-approve
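For example, a full dev deployment from a Bash shell (the environment value here is illustrative) would run:

export TF_VAR_environment="dev"
terraform init
terraform validate
terraform plan
terraform apply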

Tagging

Tags are added to all AWS resources through use of the tag configuration of the AWS Provider.

As not all AWS resources support default tags passed from the provider (for example, the EC2 Auto Scaling Group and Launch Template), we also pass the tags as a variable (map(string)); these are defined in the root locals.tf file.

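A minimal sketch of the provider-level tag configuration, assuming the tag map is exposed as local.tags from the root locals.tf shown below:

provider "aws" {
  region = var.aws_region

  # Default tags are applied to every resource this provider creates
  default_tags {
    tags = local.tags
  }
}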

Example Tags - locals.tf

tags = {
  Product    = "Network_Automation"
  Owner      = "GitHub"
  Project_ID = "12345"
}

Clean Up

Remember to clean up after your work is complete. You can do so by running terraform destroy.

Note that this command will delete all the resources previously created by Terraform.

Terraform Docs

Terraform Deployment

Requirements

Name Version
terraform ~> 1.1
aws >= 4.4.0

Providers

Name Version
aws 4.5.0

Modules

Name Source Version
dns ./modules/dns n/a
ipam ./modules/ipam n/a
network_firewall_vpc ./modules/network_firewall_vpc n/a
tgw ./modules/tgw n/a
vpc_endpoints ./modules/vpc_endpoints n/a

Resources

Name Type
aws_iam_policy.central_network resource
aws_iam_policy_attachment.central_network resource
aws_iam_role.central_network resource
aws_iam_role.flow_logs resource
aws_iam_role_policy.flow_logs resource
aws_kms_key.log_key resource
aws_availability_zones.available data source
aws_caller_identity.current data source
aws_iam_policy_document.policy_kms_logs_document data source
aws_organizations_organization.main data source

Inputs

Name Description Type Default Required
aws_region AWS region being deployed to string n/a yes
env_config Map of objects for per environment configuration
map(object({
ipam_cidr = string
tgw_route_tables = list(string)
root_domain = string
}))
n/a yes
environment Deployment environment passed as argument or environment variable string n/a yes
tags Default tags to apply to all resources map(string) n/a yes
vpc_endpoints Which VPC endpoints to use list(string) n/a yes

Outputs

No outputs.

TGW Module

Requirements

No requirements.

Providers

Name Version
aws n/a

Modules

No modules.

Resources

Name Type
aws_ec2_transit_gateway.org_tgw resource
aws_ec2_transit_gateway_route.blackhole_route resource
aws_ec2_transit_gateway_route.default_route resource
aws_ec2_transit_gateway_route.default_route_ipv6 resource
aws_ec2_transit_gateway_route_table.org_tgw resource
aws_ram_principal_association.org resource
aws_ram_resource_association.tgw resource
aws_ram_resource_share.main resource

Inputs

Name Description Type Default Required
az_names A list of the Availability Zone names available to the account list(string) n/a yes
cidr Corporate CIDR range for use with blackholing traffic between production and development environments string n/a yes
environment Deployment environment passed as argument or environment variable string n/a yes
inspection_attachment Inspection VPC attachment for default route string n/a yes
org_arn The ARN of the AWS Organization this account belongs to string n/a yes
tgw_route_tables List of route tables to create for the transit gateway list(string) n/a yes

Outputs

Name Description
tgw TGW ID for VPC attachments
tgw_route_table Map of route tables used for association and propagation

IPAM Module

Requirements

No requirements.

Providers

Name Version
aws n/a

Modules

No modules.

Resources

Name Type
aws_ram_principal_association.org resource
aws_ram_resource_association.ipam resource
aws_ram_resource_share.main resource
aws_ssm_parameter.ipam_pool_id resource
aws_vpc_ipam.org_ipam resource
aws_vpc_ipam_pool.private_org_ipam_pool resource
aws_vpc_ipam_pool_cidr.private_org_ipam_pool resource
aws_vpc_ipam_scope.private_org_ipam_scope resource

Inputs

Name Description Type Default Required
aws_region AWS region being deployed to string n/a yes
ipam_cidr CIDR block assigned to IPAM pool string n/a yes
org_arn The ARN of the AWS Organization this account belongs to string n/a yes

Outputs

Name Description
org_ipam Org IPAM ID
org_ipam_pool Org IPAM pool ID

VPC Endpoint Module

Requirements

No requirements.

Providers

Name Version
aws n/a

Modules

No modules.

Resources

Name Type
aws_cloudwatch_log_group.flow_logs resource
aws_default_security_group.default resource
aws_ec2_transit_gateway_route_table_association.shared resource
aws_ec2_transit_gateway_route_table_propagation.org resource
aws_ec2_transit_gateway_vpc_attachment.vpc_endpoint resource
aws_flow_log.vpc resource
aws_route.default_route resource
aws_route.default_route_ipv6 resource
aws_route53_record.dev_ns resource
aws_route53_zone.interface_phz resource
aws_route_table.endpoint_vpc resource
aws_route_table_association.attachment_subnet resource
aws_route_table_association.endpoint_subnet resource
aws_security_group.allow_vpc_endpoint resource
aws_security_group_rule.org_cidr resource
aws_subnet.attachment_subnet resource
aws_subnet.endpoint_subnet resource
aws_vpc.endpoint_vpc resource
aws_vpc_dhcp_options.endpoint_vpc resource
aws_vpc_dhcp_options_association.endpoint_vpc resource
aws_vpc_endpoint.interface resource

Inputs

Name Description Type Default Required
az_names A list of the Availability Zone names available to the account list(string) n/a yes
cidr Corporate CIDR range for use with blackholing traffic between production and development environments string n/a yes
environment Deployment environment passed as argument or environment variable string n/a yes
iam_role_arn IAM role to allow VPC Flow Logs to write to CloudWatch string n/a yes
interface_endpoints Object representing the region and services to create interface endpoints for map(string) n/a yes
kms_key_id VPC Flow Logs KMS key to encrypt logs string n/a yes
org_ipam_pool IPAM pool ID to allocate CIDR space string n/a yes
tgw TGW ID for VPC attachments string n/a yes
tgw_route_tables TGW route tables for VPC association and propagation map(string) n/a yes

Outputs

No outputs.

DNS Module

Requirements

No requirements.

Providers

Name Version
aws n/a

Modules

No modules.

Resources

Name Type
aws_cloudwatch_log_group.flow_logs resource
aws_default_security_group.default resource
aws_ec2_transit_gateway_route_table_association.shared resource
aws_ec2_transit_gateway_route_table_propagation.org resource
aws_ec2_transit_gateway_vpc_attachment.vpc_dns resource
aws_flow_log.vpc resource
aws_ram_principal_association.org resource
aws_ram_resource_association.r53r resource
aws_ram_resource_share.main resource
aws_route.default_route resource
aws_route.default_route_ipv6 resource
aws_route53_resolver_endpoint.inbound resource
aws_route53_resolver_endpoint.outbound resource
aws_route53_resolver_rule.fwd resource
aws_route53_resolver_rule_association.org_dns resource
aws_route53_zone.root_private resource
aws_route_table.dns_vpc resource
aws_route_table_association.attachment resource
aws_route_table_association.privatesubnet resource
aws_security_group.allow_dns resource
aws_security_group_rule.dns_tcp resource
aws_subnet.attachment_subnet resource
aws_subnet.endpoint_subnet resource
aws_vpc.dns_vpc resource
aws_vpc_dhcp_options.dns_vpc resource
aws_vpc_dhcp_options_association.dns_vpc resource

Inputs

Name Description Type Default Required
az_names A list of the Availability Zone names available to the account list(string) n/a yes
cidr Corporate CIDR range for use with blackholing traffic between production and development environments string n/a yes
environment Deployment environment passed as argument or environment variable string n/a yes
iam_role_arn IAM role to allow VPC Flow Logs to write to CloudWatch string n/a yes
interface_endpoints Object representing the region and services to create interface endpoints for map(string) n/a yes
kms_key_id VPC Flow Logs KMS key to encrypt logs string n/a yes
org_arn The ARN of the AWS Organization this account belongs to string n/a yes
org_ipam_pool IPAM pool ID to allocate CIDR space string n/a yes
root_domain Root domain for private hosted zone delegation string n/a yes
tgw TGW ID for VPC attachments string n/a yes
tgw_route_tables TGW route tables for VPC association and propagation map(string) n/a yes

Outputs

No outputs.

Network Firewall Module

Requirements

No requirements.

Providers

Name Version
aws n/a

Modules

No modules.

Resources

Name Type
aws_cloudwatch_log_group.flow_logs resource
aws_cloudwatch_log_group.network_firewall_alert_log_group resource
aws_cloudwatch_log_group.network_firewall_flow_log_group resource
aws_default_security_group.default resource
aws_ec2_transit_gateway_route_table_association.shared resource
aws_ec2_transit_gateway_route_table_propagation.org resource
aws_ec2_transit_gateway_vpc_attachment.vpc_inspection resource
aws_egress_only_internet_gateway.eigw resource
aws_eip.internet_vpc_nat resource
aws_flow_log.vpc resource
aws_internet_gateway.igw resource
aws_nat_gateway.internet resource
aws_networkfirewall_firewall.inspection_vpc_network_firewall resource
aws_networkfirewall_firewall_policy.anfw_policy resource
aws_networkfirewall_logging_configuration.network_firewall_alert_logging_configuration resource
aws_networkfirewall_rule_group.block_domains resource
aws_route.default_route resource
aws_route.default_route_ipv6 resource
aws_route.egress_route resource
aws_route.egress_route_ipv6 resource
aws_route.ingress_route resource
aws_route.inspection_route resource
aws_route.inspection_route_ipv6 resource
aws_route.inspection_route_natgw_ipv6 resource
aws_route.internal_route resource
aws_route_table.attachment resource
aws_route_table.inspection resource
aws_route_table.internet resource
aws_route_table_association.attachment_subnet_rt_association resource
aws_route_table_association.inspection_subnet resource
aws_route_table_association.internet_subnet resource
aws_subnet.attachment_subnet resource
aws_subnet.inspection_subnet resource
aws_subnet.internet_subnet resource
aws_vpc.inspection_vpc resource
aws_vpc_dhcp_options.inspection_vpc resource
aws_vpc_dhcp_options_association.inspection_vpc resource

Inputs

Name Description Type Default Required
aws_region AWS region being deployed to string n/a yes
az_names A list of the Availability Zone names available to the account list(string) n/a yes
cidr Corporate CIDR range for use with blackholing traffic between production and development environments string n/a yes
environment Deployment environment passed as argument or environment variable string n/a yes
iam_role_arn IAM role to allow VPC Flow Logs to write to CloudWatch string n/a yes
kms_key_id VPC Flow Logs KMS key to encrypt logs string n/a yes
org_ipam_pool IPAM pool ID to allocate CIDR space string n/a yes
tgw TGW ID for VPC attachments string n/a yes
tgw_route_tables TGW route tables for VPC association and propagation map(string) n/a yes

Outputs

Name Description
eni_map Output ENI map
firewall_info Info of network firewall for routing
inspection_attachment Inspection TGW attachment ID for default route in TGW
route_table Output route tables used for NFW
rt_map Output RT map

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.

aws-network-hub-for-terraform's People

Contributors

aandsco, amazon-auto, fredericheem, jplock, luigidifraiawork, methridge


aws-network-hub-for-terraform's Issues

environment = "shared"

When deploying to the networks account, the environment variable should be set to "shared". Currently, the only options are dev, test, preprod and prod. In my opinion, none of these values are valid for the central resources created in the networks account, because these resources (TGW, IPAM, Network Firewall etc.) are used by every environment.

My suggestion is to add "shared" as an extra option to the environment variable, and to use that as the default value when deploying to the networks account:
environment = "shared" # dev, test, preprod, prod, shared

Network hub facing issue while running on us-west regions

Hi @aandsco @jplock @luigidifraiawork

Is the network_hub region specific?

I am getting the below error in us-west-2:

╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9501::/64' is invalid.
│       status code: 400, request id: eb43aee4-9884-4bd7-93e5-29ac731f4eb2
│
│   with module.network_firewall_vpc.aws_subnet.attachment_subnet["us-west-2b"],
│   on modules/network_firewall_vpc/vpc.tf line 73, in resource "aws_subnet" "attachment_subnet":
│   73: resource "aws_subnet" "attachment_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9502::/64' is invalid.
│       status code: 400, request id: 7d75038f-02bb-481c-b451-4fdc67419eaa
│
│   with module.network_firewall_vpc.aws_subnet.attachment_subnet["us-west-2c"],
│   on modules/network_firewall_vpc/vpc.tf line 73, in resource "aws_subnet" "attachment_subnet":
│   73: resource "aws_subnet" "attachment_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9500::/64' is invalid.
│       status code: 400, request id: 7b426eab-e2b2-4d09-bd96-ec7948d82e46
│
│   with module.network_firewall_vpc.aws_subnet.attachment_subnet["us-west-2a"],
│   on modules/network_firewall_vpc/vpc.tf line 73, in resource "aws_subnet" "attachment_subnet":
│   73: resource "aws_subnet" "attachment_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9504::/64' is invalid.
│       status code: 400, request id: d7000f01-dc7b-4769-87f7-6c83ece964c7
│
│   with module.network_firewall_vpc.aws_subnet.inspection_subnet["us-west-2b"],
│   on modules/network_firewall_vpc/vpc.tf line 144, in resource "aws_subnet" "inspection_subnet":
│  144: resource "aws_subnet" "inspection_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9503::/64' is invalid.
│       status code: 400, request id: 25de3fe7-ad05-4194-84ac-fbda88af9937
│
│   with module.network_firewall_vpc.aws_subnet.inspection_subnet["us-west-2a"],
│   on modules/network_firewall_vpc/vpc.tf line 144, in resource "aws_subnet" "inspection_subnet":
│  144: resource "aws_subnet" "inspection_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9505::/64' is invalid.
│       status code: 400, request id: eb463167-f415-4eb0-8a51-b970b1bd0b88
│
│   with module.network_firewall_vpc.aws_subnet.inspection_subnet["us-west-2c"],
│   on modules/network_firewall_vpc/vpc.tf line 144, in resource "aws_subnet" "inspection_subnet":
│  144: resource "aws_subnet" "inspection_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9506::/64' is invalid.
│       status code: 400, request id: d2079724-bdf4-4561-a8c2-6b05f3e2c8ff
│
│   with module.network_firewall_vpc.aws_subnet.internet_subnet["us-west-2a"],
│   on modules/network_firewall_vpc/vpc.tf line 217, in resource "aws_subnet" "internet_subnet":
│  217: resource "aws_subnet" "internet_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9508::/64' is invalid.
│       status code: 400, request id: 7a9f15fa-0c2d-4ca5-8cc7-f7f22b97561b
│
│   with module.network_firewall_vpc.aws_subnet.internet_subnet["us-west-2c"],
│   on modules/network_firewall_vpc/vpc.tf line 217, in resource "aws_subnet" "internet_subnet":
│  217: resource "aws_subnet" "internet_subnet" {
│
╵
╷
│ Error: error creating EC2 Subnet: InvalidSubnet.Range: The IPv6 CIDR '2600:1f13:bbc:9507::/64' is invalid.
│       status code: 400, request id: 3e87d330-578e-4dae-8960-f28c76c1a471
│
│   with module.network_firewall_vpc.aws_subnet.internet_subnet["us-west-2b"],
│   on modules/network_firewall_vpc/vpc.tf line 217, in resource "aws_subnet" "internet_subnet":
│  217: resource "aws_subnet" "internet_subnet" {
│

Can someone please guide and help me.

Also, a help request: do you have any sample code or existing repo for reference on deploying an EC2 instance onto this network hub?

Example_spoke_vpc code not working

Hi @aandsco @jplock @luigidifraiawork
I am re-using this code and got stuck at terraform plan for the spoke VPC.
I have successfully created the network hub in the prod environment. While trying to create the spoke VPC in the same way, I am facing the issue below.

 terraform plan
var.environment
  Deployment environment passed as argument or environment variable

  Enter a value: prod

var.network_hub_account_number
  network hub account number passed as argument or environment variable

  Enter a value: xxx

╷
│ Error: error reading EC2 Transit Gateway: no results found
│
│   with data.aws_ec2_transit_gateway.org_env,
│   on data.tf line 8, in data "aws_ec2_transit_gateway" "org_env":
│    8: data "aws_ec2_transit_gateway" "org_env" {
│
╵
╷
│ Error: no Route53 Resolver rules matched
│
│   with module.dns.data.aws_route53_resolver_rule.root_domain,
│   on modules/dns/data.tf line 6, in data "aws_route53_resolver_rule" "root_domain":
│    6: data "aws_route53_resolver_rule" "root_domain" {
│
╵
╷
│ Error: no matching EC2 VPC found
│
│   with module.dns.data.aws_vpc.selected,
│   on modules/dns/data.tf line 16, in data "aws_vpc" "selected":
│   16: data "aws_vpc" "selected" {
│
╵
╷
│ Error: no matching EC2 VPC found
│
│   with module.dns.data.aws_vpc.endpoint,
│   on modules/dns/data.tf line 24, in data "aws_vpc" "endpoint":
│   24: data "aws_vpc" "endpoint" {
│
╵
╷
│ Error: Error describing SSM parameter (/ipam/pool/id): ParameterNotFound:
│
│   with module.network.data.aws_ssm_parameter.ipam_pool,
│   on modules/network/data.tf line 4, in data "aws_ssm_parameter" "ipam_pool":
│    4: data "aws_ssm_parameter" "ipam_pool" {

Can you please suggest how to solve this issue?

[Question] custom setup with group of instances?

Hi! 👋 I'm new to Terraform, and I'm looking to create a set of EC2 instances that can see one another. I've done this on Google Cloud, where I use a common module to set up networking and some logic in the startup script that queries the Google metadata API to get the IPs of the other instances. I'm looking for a similar setup on AWS. To be specific, my questions are:

  • Is there an example using this network hub alongside creating instances that can easily see one another?
  • What are best practices to share IP addresses between the instances?

Thanks for your help!

Example_spoke_vpc code not working again due to missing parameters

Hi @aandsco @jplock @luigidifraiawork

We have enabled RAM sharing and are testing this feature, but got stuck with this error.

│ Error: error updating EC2 Transit Gateway Attachment (tgw-attach-01f89f6f73a41a5fe) Route Table () association: error associating EC2 Transit Gateway Route Table () assoc
iation (tgw-attach-01f89f6f73a41a5fe): MissingParameter: Missing required parameter in request: TransitGatewayRouteTableId.
│ status code: 400, request id: ebc18aa5-c437-4afc-b417-d81f07a3938f

│ with module.network.aws_ec2_transit_gateway_vpc_attachment.vpc_endpoint,
│ on modules/network/vpc.tf line 77, in resource "aws_ec2_transit_gateway_vpc_attachment" "vpc_endpoint":
│ 77: resource "aws_ec2_transit_gateway_vpc_attachment" "vpc_endpoint" {

  # module.network.aws_ec2_transit_gateway_vpc_attachment.vpc_endpoint will be created
  + resource "aws_ec2_transit_gateway_vpc_attachment" "vpc_endpoint" {
      + appliance_mode_support                          = "disable"
      + dns_support                                     = "enable"
      + id                                              = (known after apply)
      + ipv6_support                                    = "enable"
      + subnet_ids                                      = (known after apply)
      + tags_all                                        = {
          + "Env"        = "prod"
          + "Owner"      = "WWPS"
          + "Product"    = "Network_Automation"
          + "Project_ID" = "12345"
        }
      + transit_gateway_default_route_table_association = true
      + transit_gateway_default_route_table_propagation = true
      + transit_gateway_id                              = "tgw-xxx"
      + vpc_id                                          = (known after apply)
      + vpc_owner_id                                    = (known after apply)
    }


Can you please suggest how to solve this issue?

Question about AFT integration

Thank you for the material and networking pattern implemented in this repo.

As the root README.md file mentions:

Perfect for a central networking hub account, potentially alongside Account Factory for Terraform

I thought to ask for your thoughts on the below. It's probably a long shot but I'd appreciate the feedback, even if it were along the lines of "don't ask here, go ask there instead".

  1. Would you reckon it makes sense to implement the networking pattern directly as part of AFT account customizations?
  2. If so, any thoughts on how to access route tables from a spoke account within the AWS Organization using native AFT facilities?
  3. Are you aware of anyone working on 1. already?

Thank you.

IPv6 connectivity not working from within spoke_app subnets

This PR by @jplock introduced support for IPv6.

However, IPv6 connectivity within a spoke_app subnet, e.g. spoke_app_eu-west-2a, doesn't appear to be working. E.g. from an EC2 instance with IPv6 configuration:

$ ping6 localhost
PING localhost(localhost6) 56 data bytes
64 bytes from localhost6: icmp_seq=1 ttl=64 time=0.022 ms
^C

$ traceroute6 google.com
traceroute to google.com (2a00:1450:4009:821::200e), 30 hops max, 80 byte packets
 1  * * *
...
30  * * *

The above-mentioned PR also reads:

To reduce costs, we could have IPv6 traffic egress directly from the example spoke VPC for now.

The PR does indeed create an Egress-only IGW in each spoke VPC too but these are not used anywhere.

Routing non-local IPv6 traffic (by changing the route table of the spoke_app subnets) to the spoke VPC EIGW does make IPv6 connectivity to the Internet work:

$ traceroute6 google.com
traceroute to google.com (2a00:1450:4009:821::200e), 30 hops max, 80 byte packets
 1  * * *
...
20  lhr48s28-in-x0e.1e100.net (2a00:1450:4009:821::200e)  2.381 ms  1.592 ms  2.358 ms

If I were to raise a PR, would it be preferable to use spoke VPC EIGWs, or remove these spoke VPC EIGWs and fix IPv6 connectivity so that IPv6 traffic goes through the single EIGW in the inspection VPC?

Suggestion for var.environment for the central networking account?

I'm confused about what should be set for the central networking account's var.environment. The documentation seems to point toward this setting for spoke resources, not for the central account itself.

Also, if those environments need to be customized, what is the proper place to customize them?

Cross account name resolution failing - despite network connectivity

We have deployed a standard hub and spoke architecture using this repository, and the example-spoke-vpc code.

Everything has run smoothly, and we can communicate via ICMP from an application in our dev spoke to another EC2 instance in our test account (via the TGW).

However, name resolution is failing. For example, from the DEVELOPMENT account: ping instance.test.network.internal (a Route 53 A record in the TEST account, in the PHZ test.network.internal). The above command returns ping: instance.test.network.internal: Name or service not known.

We have a top-level domain in the networking account network.internal but we get NXDOMAIN when we try to resolve that from any of the spoke accounts.

Any idea what we might be missing? We haven't changed any of the code from this repository, apart from non-consequential variables such as tags and IPAM CIDR ranges.

Getting error while creating spoke

Error: error updating EC2 Transit Gateway Attachment (tgw-attach-0d42be5b321cc1c72) Route Table () association: error associating EC2 Transit Gateway Route Table () association (tgw-attach-0d42be5b321cc1c72): MissingParameter: Missing required parameter in request: TransitGatewayRouteTableId.

[Question] Routing Ingress Traffic to Spoke EKS Cluster

I've currently deployed this network-hub solution to a dedicated subaccount (hub-account) in my AWS Organization, and have also deployed the provided example-spoke-vpc solution to a separate subaccount (spoke-account) in the same AWS Organization.


Please correct me if I am wrong here, but...
I am currently under the impression that, out of the box, my spoke-account is only capable of egress with this setup. Specifically, that my EKS Cluster deployed into the spoke-account's spoke-vpc subnets is only capable of connecting outbound to the Internet.

And so if I wanted to be able to connect to this EKS Cluster from the Internet, that I would have to deploy an ALB into the hub-account using the inspection_internet_* public subnets [that is, the subnets which have a 0.0.0.0/0 route to an IGW]. And then from here, have the ALB forward traffic to the Private IPs of a NLB in the spoke-account.


Is my [general] understanding of the Ingress networking above correct, in that in order to enable Ingress to my spoke workload I'd have to take additional steps of deploying an ALB into the hub-account and forward it to the specific Private IPs of my workload machines?

If so, is the hub-account ALB to spoke-account NLB the general recommended solution architecture for this as well?
Or is there a better approach to this? Like sharing the hub-account internet/public subnets with the spoke-account, and deploying the ALB into the spoke-account?

Apologies for my confusion, and thanks in advance for your time.

Best,

IPv6 support?

Would you accept a pull request to add IPv6 support? I know we can't route IPv6 through the Network Firewall because the Gateway Load Balancer doesn't yet support IPv6, but we could plumb everything else through.

VPC flow logs to S3 via Kinesis Data Firehose?

Currently the inspection VPC flow logs are being sent to CloudWatch Logs. Would you be interested in a PR to send them to an S3 bucket the module creates instead or make it configurable?
