cloudposse / terraform-aws-tfstate-backend

Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.

Home Page: https://cloudposse.com/accelerate

License: Apache License 2.0

Languages: Makefile 6.76%, HCL 82.12%, Smarty 1.18%, Go 9.93%
Topics: terraform, terraform-module, aws, tfstate, dynamodb, locking, aws-dynamodb, terraform-modules, dynamodb-table, s3-bucket

terraform-aws-tfstate-backend's People

Contributors

aknysh, cloudpossebot, danjbh, dependabot[bot], dylanbannon, gowiem, jamengual, jmcgeheeiv, joepjoosten, johncblandii, korenyoni, lafarer, maartenvanderhoef, mabadillamycwt, maciejmajewski, mainbrain, max-lobur, maximmi, nitrocode, nuru, okgolove, osterman, rothandrew, schollii, shmick, smontiel, sweetops, thiagoalmeidasa, vadim-hleif, woz5999


terraform-aws-tfstate-backend's Issues

terraform destroy fails due to empty coalesce in output

Describe the Bug

Using terraform 0.12:

  1. create a main.tf that uses this module
  2. terraform init and apply
  3. terraform destroy

Eventually, the following happens:

$ terraform destroy
...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes


Error: Error in function call

  on .terraform/modules/terraform_state_backend/output.tf line 30, in output "dynamodb_table_id":
  30:     coalescelist(
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.

Expected Behavior

It should have finished properly. It looks like the coalescelist call is incorrect.

Fix

This is likely due to the new behavior of coalescelist() in terraform 0.12.

Add [""] to the coalescelist() call. I will try to submit a PR.
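A minimal sketch of the proposed fix in output.tf (the exact output expression in the module may differ):

output "dynamodb_table_id" {
  value = coalescelist(
    aws_dynamodb_table.with_server_side_encryption.*.id,
    aws_dynamodb_table.without_server_side_encryption.*.id,
    [""] # guarantees a non-empty argument once both tables are destroyed
  )[0]
}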

No compatibility with terraform v1.6

Describe the Bug

With terraform v1.6, the following error occurs in terraform plan:

╷
│ Warning: Deprecated Parameters
│ 
│   on backend.tf line 4, in terraform:
│    4:   backend "s3" {
│ 
│ The following parameters have been deprecated. Replace them as follows:
│   * role_arn -> assume_role.role_arn
│ 
╵

╷
│ Error: Cannot assume IAM Role
│ 
│ IAM Role ARN not set
╵

module

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v1.1.1"

  namespace  = "piyo-dx"
  stage      = "development-tenant"
  name       = "azuread"
  attributes = ["tfstate"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
}

Generated backend.tf

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "piyo-dx-development-tenant-azuread-tfstate"
    key            = "terraform.tfstate"
    dynamodb_table = "piyo-dx-development-tenant-azuread-tfstate-lock"
    profile        = ""
    role_arn       = ""
    encrypt        = "true"
  }
}

Expected Behavior

terraform plan succeeds.

Steps to Reproduce

With terraform v1.6 and the following configuration:

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v1.1.1"

  namespace  = "hogehoge"
  stage      = "development"
  name       = "fugafuga"
  attributes = ["tfstate"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
}

Screenshots

Screenshot 2023-10-05 8:37:41

Environment

  • OS: macOS 14
  • terraform version: v1.6.0
  • module: v1.1.1

Additional Context

The backend syntax changed in v1.6 ( https://github.com/hashicorp/terraform/releases/tag/v1.6.0 ).
It seems that role_arn needs to be nested.
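For reference, a hand-edited backend.tf that should satisfy v1.6 might look like this (my assumption: an empty role_arn is simply omitted, and a non-empty one is nested under assume_role):

terraform {
  backend "s3" {
    region         = "ap-northeast-1"
    bucket         = "piyo-dx-development-tenant-azuread-tfstate"
    key            = "terraform.tfstate"
    dynamodb_table = "piyo-dx-development-tenant-azuread-tfstate-lock"
    encrypt        = "true"

    # only when actually assuming a role:
    # assume_role {
    #   role_arn = "arn:aws:iam::123456789012:role/terraform" # placeholder
    # }
  }
}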

P.S.

Thank you as always for this module. Our work has become simpler and more beautiful.

Update null label version to fully support 0.13.X

Describe the Bug

The module does not support Terraform v0.13.X.

Expected Behavior

It supports 0.13.X according to the release notes.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Clone the repo
  2. Run terraform init

Error:

Error: Unsupported Terraform Core version

  on .terraform/modules/base_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.base_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.


Error: Unsupported Terraform Core version

  on .terraform/modules/dynamodb_table_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.dynamodb_table_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.


Error: Unsupported Terraform Core version

  on .terraform/modules/s3_bucket_label/versions.tf line 2, in terraform:
   2:   required_version = "~> 0.12.0"

Module module.s3_bucket_label (from
git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0)
does not support Terraform version 0.13.0. To proceed, either choose another
supported Terraform version or update this version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.

Environment (please complete the following information):


  • OS: Ubuntu 20.04.1
  • Version: master

Additional Context

The issue is that version 0.16.0 of the null label module does not support Terraform 0.13.0. It just needs to be updated to 0.17.0.
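Concretely, the pinned ref inside this module would be bumped for each label instance (a sketch based on the error output above; the real blocks carry more arguments):

module "base_label" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.17.0"
  # ... namespace, stage, name, attributes as before
}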

v0.16.0 -> v0.17.0 broken?


Describe the Bug

When upgrading the backend plugin from 0.16 to 0.17, you get a bunch of errors that look like this:

Error: Provider configuration not present

To work with
module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps[0]
its original provider configuration at
module.terraform_state_backend.module.dynamodb_table_label.provider.null is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps[0],
after which you can remove the provider configuration again.

Terraform version:

$ terraform -version
Terraform v0.12.24
+ provider.aws v2.62.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.template v2.1.2

I checked hashicorp/terraform#21416, but it is not obvious how to fix it; I think the fix has to be in your module (not in backend.tf).
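One generic workaround for this class of error (not specific to this module; back up your state file first) is to drop the orphaned data-source entries from state before applying, using the addresses from the error message:

terraform state list | grep null_data_source
terraform state rm 'module.terraform_state_backend.module.dynamodb_table_label.data.null_data_source.tags_as_list_of_maps[0]'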

Expected Behavior

The terraform apply after upgrading backend module to 0.17 should have worked.

Steps to Reproduce

In empty folder, create backend.tf:

provider "aws" {
  region = "us-east-2"
}

module "terraform_state_backend" {
  source        = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=0.16.0"

  environment   = "test-tf-bug"
  stage         = "oliver"
  name          = null
  # force_destroy = true
  attributes    = ["terraform-state"]
  region        = "us-east-2"
}

Then

terraform init
terraform apply

Everything works.

Now edit the backend.tf file to point to 0.17 of terraform-aws-tfstate-backend module, then repeat:

terraform init
terraform apply 

This time the apply causes a dozen identical errors (but about different elements). The error is shown in the Description above.

Unable to be used with terraform workspaces

Describe the Bug

Hi,
not sure if it's a bug or an unimplemented feature, but it looks like the module does not work properly together with terraform workspaces.

It seems the issue is that when a workspace is switched, this module tries to create the S3 bucket and DynamoDB table again, but these two resources already exist, so it fails.

Using the workspace name in the bucket and DynamoDB table names causes issues with backend.tf, because the generated file changes every time the workspace is switched.

Using enabled = false causes the first terraform workspace, which created the S3 bucket, to destroy it again. Also sounds not good :D

Expected Behavior

The module checks whether the expected bucket already exists and then skips its creation.
Maybe something like the enabled flag, but named skip_creation_if_resources_exists or similar.
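Terraform cannot easily test for an existing bucket from within a module, so one workaround sketch is to gate creation on the workspace instead (purely an illustration, not module code):

resource "aws_s3_bucket" "default" {
  # create the backend resources only from the default workspace;
  # all other workspaces reuse the same bucket and table
  count  = terraform.workspace == "default" ? 1 : 0
  bucket = module.this.id
}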

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create two workspaces
  2. Apply the first workspace and see s3 bucket and dynamo table created
  3. Switch to second workspace
  4. Apply second workspace and see errors while s3 bucket and dynamo table creation

Additional Context

Maybe it's also not possible, and for multi-workspace usage we need to create a separate terraform project that takes care of these resources, but then an example would be nice.

Thanks

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module

Bucket S3 Policy

Hi,

After the first terraform apply, the S3 bucket isn't created with the policy, so with the backend configuration in place, terraform init gives this result:

Error inspecting states in the "s3" backend:
    AccessDenied: Access Denied
	status code: 403, request id: 47E272D927CA9A6A, host id: P0C6vfjPgHasrSZ6sWGCQH4ZnFWdC0Ax7GNc2HdSX/+HWaHXqUbqgx+8I33pHS849KLTwhWyQik=

Best regards,

KMS encryption

KMS encryption as a default

From bridgecrew

     Resource: aws_s3_bucket.default | ID: BC_AWS_GENERAL_56 

server_side_encryption_configuration {
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Should be

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = var.kms_master_key_id
        sse_algorithm     = "aws:kms"
      }
    }
  }

where kms_master_key_id could be something like:

variable "kms_master_key_id" {
  default = "alias/aws/s3"
}

or simply keep kms_master_key_id = "" as the default and make apply_server_side_encryption_by_default conditional, as sketched below.
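A sketch of that conditional variant (an empty key id falls back to SSE-S3; attribute layout as in the snippet above):

variable "kms_master_key_id" {
  default = ""
}

resource "aws_s3_bucket" "default" {
  # ...

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = var.kms_master_key_id == "" ? "AES256" : "aws:kms"
        kms_master_key_id = var.kms_master_key_id == "" ? null : var.kms_master_key_id
      }
    }
  }
}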

Make s3 bucket and dynamodb table optional

Describe the Feature

The module ("tatb") should support this use case: use an existing bucket but still create the dynamodb table.

I could refactor code from the module into my own module, but I'm asking here to avoid re-inventing the wheel if someone has already done this.

Use Case

Say we have 2 stacks (say dev and staging) with the state subdivided like this:

s3://common_bucket
  stack_1 (dev)
    cluster.tfstate
    rds.tfstate (references cluster.tfstate as remote)
    app_1_deployment_1.tfstate (references rds.tfstate as remote)
    app_1_deployment_2.tfstate (references rds.tfstate as remote)
    app_2_deployment_1.tfstate (references cluster.tfstate as remote)
  stack_2 (staging)
    cluster.tfstate
    app_2_deployment_1.tfstate  (references cluster.tfstate as remote)

The above is managed by 6 terraform root modules:

  • each root module has its own backend.tf file referring to the same bucket, but a different key for each
  • all root modules that pertain to stack 1 use the same dynamodb lock table, and the same goes for stack 2

It would be nice to use terraform-aws-tfstate-backend module in each of those root modules, but this is not possible because it enforces s3 bucket creation and dynamo-db table creation.

Hence the feature request listed at the top.

Describe Ideal Solution

Basically make the bucket creation optional when using the module.

With that capability, one could have the bucket created by a separate root module that uses the tatb module, with these properties:

root module                 create bucket   create lock table
all-states bucket           y               y
cluster 1                   n               y
cluster 2                   n               y
other modules in stack 1    n               n
other modules in stack 2    n               n

namely:

  • the first row is "use the tatb module as-it-is-now";
  • the last 2 rows are "use the backend.tf directly";
  • the use-case "create bucket but not dynamodb table" does not exist IMO

so the 2 cluster rows are the only ones that really need a patch: it would be sufficient to add a new "existing_state_bucket" variable that causes use of an existing bucket while still creating the dynamodb table, as sketched below.
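A sketch of how that variable could gate bucket creation (names other than existing_state_bucket follow the module's conventions but are assumptions):

variable "existing_state_bucket" {
  type    = string
  default = "" # empty means: create the bucket as today
}

locals {
  create_bucket = var.existing_state_bucket == ""
  # coalesce treats "" as unset, so this falls back to the created bucket
  bucket_name = coalesce(var.existing_state_bucket, join("", aws_s3_bucket.default.*.id))
}

resource "aws_s3_bucket" "default" {
  count  = local.create_bucket ? 1 : 0
  bucket = module.this.id
}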

Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Describe the Bug

Getting an error Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again. when running terraform apply -auto-approve.

Also, it asks to enter a value for the S3 region even though it's already in vars. Would be nice to automate this step as well :)

Environment:

  • OS version: macOS Big Sur
  • Terraform v0.12.28
    + provider.aws v2.70.0
    + provider.local v1.4.0
    + provider.null v2.1.2
    + provider.template v2.1.2

Steps to Reproduce

terraform apply -auto-approve

var.region
  AWS Region the S3 bucket should reside in

  Enter a value: us-west-2     

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Enter a value: us-west-2

aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=terraform-state-lock]
module.terraform_state_backend.aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=eg-test-terraform-state-lock]
data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
aws_s3_bucket.default: Creating...
module.terraform_state_backend.aws_s3_bucket.default: Creating...

Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: DAE8503E57F632E7, host id: LLcTL4YZN1mIOL8mJzBL9y5d4YJKs/tt7CHh5Ks63naqarYBD/RC8Nnqzs7FQ9mRaRMsdQUhmgs=

  on main.tf line 145, in resource "aws_s3_bucket" "default":
 145: resource "aws_s3_bucket" "default" {



Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: 6DF9DDE094778C9C, host id: 2m8a4gn4qbQ4xwZpNaU1/vmCQuHFM+pV1EQA58+45JSmJ7FVxixXIoFigKhg5KXIrOCVqb7L8+4=

  on .terraform/modules/terraform_state_backend/main.tf line 124, in resource "aws_s3_bucket" "default":
 124: resource "aws_s3_bucket" "default" {

Bucket replication managed by this module

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Currently, the bucket for replicating tf state as a backup must be created manually. It would be really useful to have the module create it, since it is inherently tied to the backend setup.

Expected Behavior

When bucket replication is turned on, the module requires another region to be specified in the file, and a bucket name. It will create that bucket in the new region.

Use Case

Currently I have to create a replication bucket myself and then give its ARN to the backend module. This is extra work that could easily be encapsulated into this module.
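Until then, a second instance of this module in another region can produce the replica bucket (the same pattern appears in another issue on this page); a rough sketch with assumed names and provider aliases:

provider "aws" {
  alias  = "replica"
  region = "us-west-2" # assumed replica region
}

module "terraform_state_backend_replication" {
  source  = "cloudposse/tfstate-backend/aws"
  # version = "x.x.x"

  providers = {
    aws = aws.replica
  }

  s3_bucket_name   = "my-tfstate-replica" # assumed name
  dynamodb_enabled = false
}

# then pass module.terraform_state_backend_replication.s3_bucket_arn
# to the primary module as s3_replica_bucket_arn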

Flag to only create s3 bucket and forego dynamodb creation in order to save money


Describe the Feature

An s3 bucket is much cheaper than dynamodb. For small projects with a single developer, it would be nice to only create the s3 bucket and forego the more expensive dynamodb database.

Expected Behavior

Flag to only create s3 bucket and forego dynamodb creation in order to save money

Perhaps var.enable_dynamodb and default it to true.
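Later releases referenced elsewhere on this page do expose a dynamodb_enabled input; the gating pattern is roughly:

variable "dynamodb_enabled" {
  type    = bool
  default = true
}

resource "aws_dynamodb_table" "with_server_side_encryption" {
  count    = var.dynamodb_enabled ? 1 : 0
  name     = "${module.this.id}-lock" # assumed naming
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
  # ...
}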

Terraform 0.12 compatibility

When running terraform init with Terraform 0.12 we get the following error:

Initializing modules...
Downloading git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.base_label...
- terraform_state_backend.base_label in .terraform/modules/terraform_state_backend.base_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.7.0 for terraform_state_backend.s3_bucket_label...
- terraform_state_backend.s3_bucket_label in .terraform/modules/terraform_state_backend.s3_bucket_label

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Missing key/value separator

  on .terraform/modules/terraform_state_backend/main.tf line 35, in data "aws_iam_policy_document" "prevent_unencrypted_uploads":
  35:     principals {

Expected an equals sign ("=") to mark the beginning of the attribute value.


Error: Attribute redefined

  on .terraform/modules/terraform_state_backend/main.tf line 58, in data "aws_iam_policy_document" "prevent_unencrypted_uploads":
  58:   statement = {

The argument "statement" was already set at
.terraform/modules/terraform_state_backend/main.tf:30,3-12. Each argument may
be set only once.

I'm not sure how to properly fix that.

Question: Config output

Hi,

I started using your Terraform module a few days ago. I see that after apply, a backend configuration file is rendered. How can I convert it to .hcl.json format? Can you help me?

AWS Provider v3 support

Describe the Feature

AWS Provider v3 support

Expected Behavior

We can use aws provider v3

Use Case

The provider update is needed for security or compatibility with other modules.


Alternatives Considered

Stay on AWS provider 2.x

Additional Context

Upgrade Blocker

AWS provider 2.x and 3.x are not compatible. In terraform-aws-tfstate-backend, the following error occurs.

$ terraform plan -var-file="fixtures.us-west-1.tfvars"
Warning: Value for undeclared variable

The root module does not declare a variable named "s3_bucket_name" but a value
was found in file "fixtures.us-west-1.tfvars". To use this value, add a
"variable" block to the configuration.

Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.


Error: Computed attribute cannot be set

  on ../../main.tf line 128, in resource "aws_s3_bucket" "default":
 128:   region        = var.region

This is documented at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#region-attribute-is-now-read-only .

Therefore, I guess we cannot support both aws v3 and aws v2.
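The v3-only fix is to drop the attribute and let the provider supply the region (a sketch):

provider "aws" {
  region = var.region
}

resource "aws_s3_bucket" "default" {
  bucket = module.this.id
  # region removed: it is read-only under AWS provider v3 and the bucket
  # is created in the provider's configured region
}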

invalid or unknown key: server_side_encryption

variable "region" {
  default = "ap-southeast-1"
}

provider "aws" {
  region = "${var.region}"
}

module "terraform_state_backend" {
  source = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"
  namespace = "master"
  stage = "master"
  name = "mediapop"
  region = "${var.region}"
}

gives:

Error: module.terraform_state_backend.aws_dynamodb_table.default: : invalid or unknown key: server_side_encryption

Using this module without specifying an external context label module generates invalid resource names


Describe the Bug

When creating buckets with replication without specifying an external context label variable (note it's not mandatory on this module), like this:

data "aws_caller_identity" "current" {}

locals {
  default_tags = {
    "omd_environment" : var.environment,
    "creator_arn" : data.aws_caller_identity.current.arn,
  }
}

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v0.38.1"

  providers = {
    aws = aws.one
  }

  s3_bucket_name                = var.bucket_name
  dynamodb_table_name           = var.dynamodb_table_name
  dynamodb_enabled              = true
  enable_server_side_encryption = true
  billing_mode                  = "PAY_PER_REQUEST"

  force_destroy          = true
  s3_replication_enabled = true
  s3_replica_bucket_arn  = module.terraform_state_backend_replication.s3_bucket_arn
  tags                   = local.default_tags

}

module "terraform_state_backend_replication" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "v0.38.1"

  providers = {
    aws = aws.other
  }

  s3_bucket_name   = "${var.bucket_name}-replica"
  force_destroy    = true
  dynamodb_enabled = false
  tags             = local.default_tags

}

some resource names are being evaluated to invalid strings:

  + resource "aws_iam_role" "replication" {
      + arn                   = (known after apply)
...
      + name                  = "-replication"
...
    }
  + resource "aws_iam_policy" "replication" {
...
      + name      = "-replication"
...
    }
  dynamic "replication_configuration" {
    for_each = var.s3_replication_enabled ? toset([var.s3_replica_bucket_arn]) : []
    content {
      role = aws_iam_role.replication[0].arn

      rules {
        id     = module.this.id
        ...

Expected Behavior

Replication resource names use the same logic as the bucket name:

  bucket_name = var.s3_bucket_name != "" ? var.s3_bucket_name : module.this.id
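i.e. the replication names could fall back the same way (a sketch, not necessarily the fix the maintainers chose):

resource "aws_iam_role" "replication" {
  count = var.s3_replication_enabled ? 1 : 0
  name  = "${coalesce(var.s3_bucket_name, module.this.id)}-replication"
  # ...
}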

Add support for multiple terraform backend config files

Describe the Feature

Terraform's S3 backend allows multiple state files to be stored in the same S3 bucket with the same DynamoDB table.
I would like this module to provide a convenience feature that generates multiple terraform backend config files at once, with different values for different slices of the infrastructure.

Expected Behavior

Accept a list of options for additional backend config files, which are rendered as outputs and/or local files.

Use Case

Hashicorp recommends splitting terraform config into separate root modules to manage logically grouped slices of infrastructure independently. E.g. a slice managing infrastructure-wide concerns like networking, Vault, and Consul clusters would be separate from the infrastructure for one application, which would in turn be separate from the infrastructure for another application.

For such slices of the infrastructure it would be preferable to use the same S3 bucket and lock table. I think it makes sense to manage backends for those slices within the same module.

Describe Ideal Solution

Additional input for the module that probably looks something like this:

 terraform_backend_extra_configs = [
  {
    # required. Can uniqueness be validated between all values?
    # using context for the default key value probably better not to be supported 
    terraform_state_file = "alternate.tfstate"

    # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

    # controls local file output, creates file if path not empty
    terraform_backend_config_file_path = "../alternate-path"
    terraform_backend_config_file_name = "backend.tf"

    # omitted values should default to vars used by current "terraform_backend_config" template
    # role_arn = ""
    # profile = ""
    # namespace = ""
    # stage = ""
    # environment = ""
    # name = ""
 
    # optionally specify namespace, stage, environment and name via context.
    context = module.alternate_backend_label.context
  }
]

Alternatives Considered

My own template file resource that duplicates behavior of "terraform_backend_config" in this module could do the same.

Probably a better approach than the one I suggested would be to extract the backend config template into a submodule of this module, to allow independent backend file generation. This approach would take more effort, but it would also be better from a maintenance perspective, I think.

Additional Context

Sample HCL for how this feature could be used:

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  context = module.this.context

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false

  terraform_backend_extra_configs = [
    {
      # required. Can uniqueness be validated between all values?
      terraform_state_file = "${module.eg_app_dev_tfstate_backend_label.id}.tfstate"

      # terraform version, region, bucket, dynamodb and encrypt values are same as for "terraform_backend_config"

      # controls local file output, creates file if path not empty
      terraform_backend_config_file_path = "../app/dev"
      terraform_backend_config_file_name = "backend.tf"

      # omitted values default to vars used by current "terraform_backend_config" template
      # role_arn = ""
      # profile = ""
      # namespace = ""
      # stage = ""
      # environment = ""
      # name = ""
      role_arn = aws_iam_role.eg_app_dev_backend.arn

      # optionally specify namespace, stage, environment and name via context?
      context = module.eg_app_dev_backend_label.context
    }
  ]
}

module "eg_app_dev_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  environment = "dev"

  context = module.this.context
}

module "eg_app_dev_tfstate_backend_label" {
  source  = "cloudposse/label/null"
  # version     = "x.x.x"

  delimiter = "/"

  context = module.eg_app_dev_backend_label.context
}

resource "aws_iam_role" "eg_app_dev_backend" {
  assume_role_policy = ""
}

resource "aws_iam_policy" "eg_app_dev_backend" {
  name        = module.eg_app_dev_backend_label.id
  description = "Grants access to Terraform S3 backend store bucket and DynamoDB locking table"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:ListBucket",
        Resource = module.terraform_state_backend.s3_bucket_arn
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "${module.terraform_state_backend.s3_bucket_arn}/${module.eg_app_dev_tfstate_backend_label.id}.tfstate"
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem"
        ]
        Resource = module.terraform_state_backend.dynamodb_table_arn
      },
    ]
  })
  tags = module.eg_app_dev_backend_label.tags
}

resource "aws_iam_role_policy_attachment" "eg_app_dev_backend" {
  policy_arn = aws_iam_policy.eg_app_dev_backend.arn
  role = aws_iam_role.eg_app_dev_backend.id
}

Documentation is wrong

In your steps 1-5, you state that you need to add the backend section.

You fail to indicate that it needs to be inside a terraform block, as sketched below.
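i.e. something like this, where the values come from the module's generated config (bucket and table names here are placeholders):

terraform {
  backend "s3" {
    region         = "us-east-1"
    bucket         = "eg-test-terraform-state"
    key            = "terraform.tfstate"
    dynamodb_table = "eg-test-terraform-state-lock"
    encrypt        = "true"
  }
}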

In addition, you fail to mention that the bucket referenced must already exist. I get this error:

Error: Error inspecting states in the "s3" backend: S3 bucket does not exist

Add delete_protection to DynamoDB table

Describe the Feature

This TF module has a force_destroy variable that can prevent accidental S3 bucket deletions. The DynamoDB table also supports a similar flag deletion_protection_enabled that prevents accidental deletions.

Because the purpose is the same, I would suggest reusing the variable also for this case by adding the following into the aws_dynamodb_table:

deletion_protection_enabled = !var.force_destroy

Expected Behavior

DynamoDB deletion_protection_enabled should also be enabled by default.

Use Case

Prevent accidental deletions.

Describe Ideal Solution

deletion_protection_enabled = !var.force_destroy
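In context, the suggested change is just one line on the table resource (a sketch; resource name taken from earlier issues on this page):

resource "aws_dynamodb_table" "with_server_side_encryption" {
  # ...

  # the lock table can only be deleted once the user has explicitly
  # opted into destruction of the backend
  deletion_protection_enabled = !var.force_destroy
}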

Alternatives Considered

No response

Additional Context

No response

Question on statefile for the backend

Is there a requirement that the tfstate file for the backend resources needs to be part of the deployment that uses the backend? I don't see anything in the docs regarding this, but whenever I deploy with the s3/dynamo backend, terraform always tries to destroy the s3 state bucket and dynamodb lock table. I wouldn't think they are necessarily coupled, but maybe it's a requirement by terraform? It makes for very messy deletion, because you're deleting the backend resources at the same time as everything else.

Error releasing the state lock

Describe the Bug

When I upload state to S3 and then run 'terraform destroy --auto-approve', I get the error message: "Error releasing the state lock"

Expected Behavior

Destroy completes correctly.

Steps to Reproduce

main.tf:

provider "aws" {
  region = "ap-northeast-1"
}

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"

  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace  = "wasai"
  stage      = "test"
  name       = "terraform-example"
  attributes = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}

resource "aws_instance" "example" {
  ami           = "ami-00247e9dc9591c233" # AMI ID
  instance_type = "t2.micro"              # instance type

  tags = {
    # the original snippet set the Name key twice; only one value can apply
    Name = "test"
  }
}

  1. terraform init
  2. terraform plan
  3. terraform apply --auto-approve
  4. terraform init --force-copy
  5. terraform destroy --auto-approve

Got this error message:

Error: deleting S3 Bucket (wasai-test-terraform-example-state): operation error S3: DeleteBucket, https response error StatusCode: 409, RequestID: HYQKXS2B45JK65Z1, HostID: lB27Gd2tBiyKXf+gK2kUSdjams0k8MBkLCoWfiONz8i4rKCwUwWHp6r7HjJ4OuNOfubh1pFpIsM=, api error BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.




│ Error: Error releasing the state lock

│ Error message: failed to retrieve lock info for lock ID "1caa2e0b-3858-49d0-0032-99423fc0914e": Unable to retrieve item from DynamoDB table "wasai-test-terraform-example-state-lock":
│ operation error DynamoDB: GetItem, https response error StatusCode: 400, RequestID: IUH3UIMJ76VK7U4QKG0C2ACQN3VV4KQNSO5AEMVJF66Q9ASUAAJG, ResourceNotFoundException: Requested resource not
│ found

│ Terraform acquires a lock when accessing your state to prevent others
│ running Terraform to potentially modify the state at the same time. An
│ error occurred while releasing this lock. This could mean that the lock
│ did or did not release properly. If the lock didn't release properly,
│ Terraform may not be able to run future commands since it'll appear as if
│ the lock is held.

│ In this scenario, please call the "force-unlock" command to unlock the
│ state manually. This is a very dangerous operation since if it is done
│ erroneously it could result in two people modifying state at the same time.
│ Only call this command if you're certain that the unlock above failed and
│ that no one else is holding a lock.

Screenshots

No response

Environment

OSX

Additional Context

No response

Breaking changes ahead?

With new AWS provider major version 2.0, should we expect breaking changes soon when using source = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"?

Feature Request - Allow a parameter for the name of DynamoDB table

Describe the Feature

By default, the name of the DynamoDB table is suffixed with lock.

Use Case

It will be good to have a customized name just in case I have a different environment and I want to name my table differently. (I know we can use a common table but good to have an option)

Describe Ideal Solution

A variable like dynamodb_table_name = "terraform-lock"

Alternatives Considered

After running terraform init, I went to the file .terraform/modules/terraform_state_backend/main.tf and added name = "terraform" to this block:

module "dynamodb_table_label" {
  source     = "cloudposse/label/null"
  version    = "0.22.0"
  attributes = compact(concat(var.attributes, ["lock"]))
  context    = module.this.context
  name       = "terraform"
}

Logging bucket generates a name with a duplicate


Describe the Bug

Turning on the logging bucket creates a bucket named: nmspc-as1-root-nmspc-as1-root-tfstate-logs.

Expected Behavior

A name like: nmspc-as1-root-tfstate-logs.

Steps to Reproduce

module "tfstate_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.0"

  enable_server_side_encryption = var.enable_server_side_encryption
  force_destroy                 = var.force_destroy
  logging_bucket_enabled        = true
  prevent_unencrypted_uploads   = var.prevent_unencrypted_uploads

  context = module.this.context
}

Screenshots

N/A

Environment (please complete the following information):

All envs are impacted starting with v0.38.0.

Additional Context

Culprit: #104

Error creating S3 bucket ... the region 'us-east-1' is wrong; expecting 'eu-central-1'


Describe the Bug

I've set the AWS provider to use us-east-1 but I'm getting this error when the module tries to create the s3 bucket:

Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'

Expected Behavior

The module should create the s3 bucket in whichever region I specify.

Steps to Reproduce

  1. Create a main.tf file similar to https://gist.github.com/discentem/427f3dddf11863c2528b5859d1fec1d3 and run terraform init, terraform plan, and terraform apply -auto-approve (as per https://github.com/cloudposse/terraform-aws-tfstate-backend#usage)


Environment (please complete the following information):


  • OS: Windows 10
  • Terraform version: v0.14.7


Can't use this module with S3 bucket in different region

Why do you need to specify a provider block in the module?

I have env vars set to AWS_REGION=eu-west-1 and AWS_DEFAULT_REGION=eu-west-1, which makes it impossible for me to use this module when working with infrastructure where the S3 bucket was created in another region.

The error I got is:

Error: error reading S3 Bucket (yoman-terraform-state): BucketRegionError: incorrect region, the bucket is not in 'eu-west-1' region at endpoint ''
	status code: 301, request id: , host id:

I propose to remove this block and have it defined outside of this module (the root module is a much better place for this):

provider "aws" {
  version = "~> 2.0"
}

Additionally, it does not work for situations where the aws provider has to be configured with assume_role or other properties.

Last year I described why this is a problem in my blog post, also on slides 56 and 57.

What do you think?

Reimplement with "cloudposse/terraform-aws-s3-bucket" to standardize parameters/features

Describe the Feature

I am wondering if there is a specific reason why this https://github.com/cloudposse/terraform-aws-tfstate-backend is not implemented with https://github.com/cloudposse/terraform-aws-s3-bucket? The parameters and features of both also differ.

It would be awesome if the parameters (e.g. s3_object_ownership = "BucketOwnerEnforced" vs bucket_ownership_enforced_enabled = true) and features (e.g. lifecycle_configuration_rules) of both were standardized. Reimplementing one with the other would likely prevent further drift.

Expected Behavior

Standardized parameters/features of similar TF modules.

Use Case

/

Describe Ideal Solution

/

Alternatives Considered

No response

Additional Context

No response

Upgrading from <0.33.1 to >=0.33.1 requires state move for bucket

Describe the Bug

When upgrading from before version 0.33.1 to any version at or after 0.33.1, a count was added to the default bucket, so a plan sees that the bucket does not exist and wants to remove it from state.

Nowhere in the release notes or README is it mentioned that a state move is needed. Once you move the state to add an index, a plan will show no changes needed:
terraform state mv module.tfstate-backend.aws_s3_bucket.default module.tfstate-backend.aws_s3_bucket.default[0]

Expected Behavior

A note in the 0.33.1 release about needing to perform the state move.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Init a terraform project with terraform-aws-tfstate-backend version 0.33.0
  2. Run terraform apply
  3. Change the terraform-aws-tfstate-backend to version 0.33.1
  4. Run terraform plan
  5. See error

Screenshots

N/A

Environment (please complete the following information):

Terraform v1.0.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.56.0


Add lifecycle configuration to delete objects under a certain size to remove destroyed states

Describe the Feature

It's nice to look at the s3 state bucket to see which components and root dirs contain resources.

After destroying root components and resources, the state file object continues to exist in s3, even though it contains very few characters. It would be nice to expire these objects when they are smaller than a certain number of bytes.

Expected Behavior

Expire s3 state objects when the objects are less than N bytes

Use Case

See above

Describe Ideal Solution

Lifecycle rule

Alternatives Considered

N/A

Additional Context

DynamoDB label attributes inconsistent due to null label module

Describe the Bug

The label created for DynamoDB differs depending on how attributes were passed to this module: as a variable or in context.

Moving attributes to context results in this plan (unrelated changes snipped):

-/+ resource "aws_dynamodb_table" "with_server_side_encryption" {
      ~ id               = "eg-terraform-state-lock" -> (known after apply)
      ~ name             = "eg-terraform-state-lock" -> "eg-terraform-lock-state" # forces replacement
      ~ tags             = {
          ~ "Attributes" = "state-lock" -> "lock-state"
          ~ "Name"       = "eg-terraform-state-lock" -> "eg-terraform-lock-state"
            # (1 unchanged element hidden)
        }
      ~ tags_all         = {
          ~ "Attributes" = "state-lock" -> "lock-state"
          ~ "Name"       = "eg-terraform-state-lock" -> "eg-terraform-lock-state"
            # (1 unchanged element hidden)
        }

Seems to be related to cloudposse/terraform-null-label#114 which was released in 0.22.1 while this module pins to 0.22.0

Expected Behavior

dynamodb_table_label should have the same order regardless of how attributes were passed.

Assuming the suggested README usage produces the desirable result (concat(context, var, ["state"])), the label module can be bumped to 0.22.1 or newer.
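A sketch of the bump inside this module, reusing the block shape shown in an earlier issue on this page:

module "dynamodb_table_label" {
  source     = "cloudposse/label/null"
  version    = "0.22.1" # was 0.22.0; picks up the attribute-ordering fix from terraform-null-label#114
  attributes = compact(concat(var.attributes, ["lock"]))
  context    = module.this.context
}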

Steps to Reproduce

Steps to reproduce the behavior:

  1. Setup module as suggested in README
  2. Run plan and make note of DynamoDB table resource, particularly id, name and tags
  3. Move attributes variable to context object
  4. Run plan again and compare DynamoDB values

Environment (please complete the following information):

$ terraform version
Terraform v0.15.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.42.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

$ terraform get -update
Downloading cloudposse/tfstate-backend/aws 0.33.0 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading cloudposse/label/null 0.22.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading cloudposse/label/null 0.24.1 for terraform_state_backend.this...
- terraform_state_backend.this in .terraform/modules/terraform_state_backend.this

Additional Context

This module is likely to be used in a root module, and combined with the README instructions, this issue is unlikely to affect many.

Remove the `read_capacity` and `write_capacity` from lifecycle change ignore

The DynamoDB resource documentation recommends using lifecycle ignores for read and write capacity values when using autoscaling. However, since we're not using autoscaling here, it makes it impossible to apply DynamoDB read/write capacity changes after the table has been created.

So either remove these parameters from lifecycle/ignore_changes, or variablize them so that consumers can opt out of autoscaling and set these explicitly.
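A sketch of the first option (variable names are assumptions):

resource "aws_dynamodb_table" "default" {
  # ...
  read_capacity  = var.read_capacity
  write_capacity = var.write_capacity

  lifecycle {
    # previously: ignore_changes = [read_capacity, write_capacity], which
    # froze any later capacity edits; dropped since autoscaling is not used
    ignore_changes = []
  }
}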

terraform destroy needs explanation


Describe the Bug

The module docs should explain how to clean up.

Expected Behavior

The module docs would say something like (I'm still confirming the details, but I just don't want to lose this issue report):

Destroy

  1. comment out the "backend" block
  2. move the state storage back to local: terraform init
  3. make the state bucket deletable even if there are multiple versions of state stored: add force_destroy=true to your terraform_state_backend then terraform apply
  4. terraform destroy

Warning: while the state is local, the state in the bucket still exists, others (or CI/CD!) should not modify it.
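A minimal sketch of step 3 (module arguments other than force_destroy elided):

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # ...

  # allow deletion of the bucket even though it holds (possibly many
  # versions of) Terraform state
  force_destroy = true
}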

terraform-provider-aws v4.0 incompatibility

Describe the Bug

Upgrading to the latest hashicorp/aws v4.0.0 (https://registry.terraform.io/providers/hashicorp/aws/latest) breaks terraform-aws-tfstate-backend with the following error:

$ terraform plan
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Unsupported attribute
│
│   on .terraform/modules/terraform_state_backend.log_storage/main.tf line 30, in resource "aws_s3_bucket" "default":
│   30:         for_each = var.enable_glacier_transition ? [1] : []
│
│ This object does not have an attribute named "enable_glacier_transition".
╵
╷
│ Error: Unsupported attribute
│
│   on .terraform/modules/terraform_state_backend.log_storage/main.tf line 44, in resource "aws_s3_bucket" "default":
│   44:         for_each = var.enable_glacier_transition ? [1] : []
│
│ This object does not have an attribute named "enable_glacier_transition".

Environment (please complete the following information):


$ terraform version
Terraform v1.1.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.0.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2

Temporary solution:

Restricting the AWS provider to 3.x gets the tfstate-backend module working as expected:

$ cat versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "< 4.0"
    }
    # ...
  }
}

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

Renovate tried to run on this repository, but found these problems.

  • WARN: Base branch does not exist - skipping

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Detected dependencies

Branch main
terraform
main.tf
  • cloudposse/label/null 0.25.0
  • cloudposse/label/null 0.25.0
replication.tf
  • cloudposse/label/null 0.25.0
versions.tf
  • aws >= 4.9.0
  • local >= 2.0
  • hashicorp/terraform >= 1.1.0
Branch release/v0
terraform
main.tf
  • cloudposse/label/null 0.25.0
  • cloudposse/s3-log-storage/aws 1.3.1
versions.tf
  • aws >= 4.9.0
  • local >= 1.3
  • hashicorp/terraform >= 1.1.0

  • Check this box to trigger a request for Renovate to run again on this repository

terraform apply completes successfully with "Warning: Argument is deprecated"

Describe the Bug

terraform apply completed successfully. However, there is a warning in the log that will need attention in the future:


│ Warning: Argument is deprecated

│ with module.terraform_state_backend.module.log_storage.aws_s3_bucket.default,
│ on .terraform/modules/terraform_state_backend.log_storage/main.tf line 1, in resource "aws_s3_bucket" "default":
│ 1: resource "aws_s3_bucket" "default" {

│ Use the aws_s3_bucket_logging resource instead

│ (and 21 more similar warnings elsewhere)

Expected Behavior

No deprecated argument warning.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Add the below to my main.tf
module "terraform_state_backend" {
  source      = "cloudposse/tfstate-backend/aws"
  version     = "0.38.1"
  namespace   = "versent-digital-dev-kit"
  stage       = var.aws_region
  name        = "terraform"
  attributes  = ["state"]

  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
  2. Run 'terraform apply -auto-approve'
  3. See the warning in the console output

Screenshots

Screen Shot 2022-07-15 at 13 53 33

Error activating encryption at rest in dynamoDB

Hello,

Because encryption at rest is not yet available in the eu-west-3 region (Paris), when I try to use your module to create the S3 bucket and DynamoDB table, it fails with the following error:

Error: Error applying plan:

1 error(s) occurred:

* module.terraform_state_backend.aws_dynamodb_table.default: 1 error(s) occurred:

* aws_dynamodb_table.default: ValidationException: One or more parameter values were invalid: Unsupported input parameter SSESpecification
        status code: 400, request id: UKFDXXXXXXXXXXEMVJF66Q9ASUAAJG

From what I've seen, it's because the region doesn't support (as of yet) DynamoDB encryption at rest.

When I comment out the

resource "aws_dynamodb_table" "default" {
  # ...
  # server_side_encryption {
  #   enabled = true
  # }
}

It works fine.

Please add a variable (true by default) where we can specify whether we want encryption at rest or not; see the sketch below.

Best regards,
Nuno Fernandes
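For Terraform 0.12+, a sketch of such a toggle (the module did later gain an enable_server_side_encryption input, as seen in other issues on this page):

variable "enable_server_side_encryption" {
  type    = bool
  default = true
}

resource "aws_dynamodb_table" "default" {
  # ...

  dynamic "server_side_encryption" {
    # omit the block entirely in regions that do not support encryption at rest
    for_each = var.enable_server_side_encryption ? [1] : []
    content {
      enabled = true
    }
  }
}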

Usage example fails with 'The argument "region" is required, but was not set.'

Describe the Bug

I tried the usage example with the module from the README, i.e.

module "terraform_state_backend" {
    source        = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.14.0"
    namespace     = "eg"
    stage         = "test"
    name          = "terraform"
    attributes    = ["state"]
    region        = "us-east-1"
}

in a terraform.tf file and nothing else.

I then ran terraform init, which worked as expected:

Initializing modules...
Downloading git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=tags/0.14.0 for terraform_state_backend...
- terraform_state_backend in .terraform/modules/terraform_state_backend
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.base_label...
- terraform_state_backend.base_label in .terraform/modules/terraform_state_backend.base_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.dynamodb_table_label...
- terraform_state_backend.dynamodb_table_label in .terraform/modules/terraform_state_backend.dynamodb_table_label
Downloading git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.13.0 for terraform_state_backend.s3_bucket_label...
- terraform_state_backend.s3_bucket_label in .terraform/modules/terraform_state_backend.s3_bucket_label

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.52.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

terraform apply however fails in a rather surprising way:

Error: Missing required argument

The argument "region" is required, but was not set.

Obviously the region is set, just as in the example. Using a different region doesn't change the result.

Expected Behavior

S3 bucket and Dynamo table are created and ready for use.

Steps to Reproduce

Follow the README usage example until step 3.

Environment (please complete the following information):


  • OS: macOS
  • Version: 10.15.3

Additional Context

I saw that some other issues also had problems getting this to work, but none of the hacks described there (adding an aws = "aws" provider) helped.

darwin_arm64 still not supported

With the latest version (0.37), there is still an issue because of a leftover dependency on registry.terraform.io/hashicorp/template:

╷
│ Error: Incompatible provider version
│ 
│ Provider registry.terraform.io/hashicorp/template v2.2.0 does not have a package available for your current platform, darwin_arm64.
│ 
│ Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions of this provider may have different platforms supported.
╵

This prevents usage on e.g. apple m1, and future apple silicon macs
