
gardener / gardener-extension-provider-aws

Gardener extension controller for the AWS cloud provider (https://aws.amazon.com).

Home Page: https://gardener.cloud

License: Apache License 2.0


gardener-extension-provider-aws's Introduction


Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor-specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed to support external controllers that contain their very own vendor specifics. This way, we can keep the Gardener core clean and independent.

This controller implements Gardener's extension contract for the AWS provider.

An example for a ControllerRegistration resource that can be used to register this controller to Gardener can be found here.

Please find more information regarding the extensibility concepts and a detailed proposal here.

Supported Kubernetes versions

This extension controller supports the following Kubernetes versions:

Version | Support | Conformance test results
Kubernetes 1.30 | 1.30.0+ | Gardener v1.30 Conformance Tests
Kubernetes 1.29 | 1.29.0+ | Gardener v1.29 Conformance Tests
Kubernetes 1.28 | 1.28.0+ | Gardener v1.28 Conformance Tests
Kubernetes 1.27 | 1.27.0+ | Gardener v1.27 Conformance Tests
Kubernetes 1.26 | 1.26.0+ | Gardener v1.26 Conformance Tests
Kubernetes 1.25 | 1.25.0+ | Gardener v1.25 Conformance Tests

Please take a look here to see which versions are supported by Gardener in general.

Compatibility

The following lists known compatibility issues of this extension controller with other Gardener components.

AWS Extension | Gardener | Action | Notes
<= v1.15.0 | > v1.10.0 | Please update the provider version to > v1.15.0 or disable the feature gate MountHostCADirectories in the Gardenlet. | Applies if the feature gate MountHostCADirectories is enabled in the Gardenlet. Shoots with CSI enabled (Kubernetes version >= 1.18) lack a mount of the directory /etc/ssl in the Shoot API server, which can lead to external root CAs not being trusted when the API server makes requests via webhooks or OIDC.

How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start.

Static code checks and tests can be executed by running make verify. We are using Go modules for Golang package dependency management and Ginkgo/Gomega for testing.
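
For example, a typical local workflow might look like the following (a sketch assuming a Go toolchain and a KUBECONFIG pointing to a suitable cluster are already set up):

# Clone the repository and run the extension controller on your machine.
git clone https://github.com/gardener/gardener-extension-provider-aws.git
cd gardener-extension-provider-aws
make start     # runs the controller locally against the current kubeconfig
make verify    # static code checks plus the Ginkgo/Gomega test suites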

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

gardener-extension-provider-aws's People

Contributors

acumino, aleksandarsavchev, andreasburger, axiomsamarth, danielfoehrkn, dependabot[bot], dimitar-kostadinov, dimityrmirchev, dkistner, docktofuture, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, hebelsan, ialidzhikov, kon-angelo, kostov6, martinweindel, n-boshnakov, oliver-goetz, prashanth26, rfranzke, scheererj, shafeeqes, stoyanr, tedteng, timebertt, timuthy, vlvasilev, vpnachev


gardener-extension-provider-aws's Issues

StorageClass creation fails when dealing with multiple providers

The deployment of a provider tries to create the StorageClass with name gardener.cloud-fast.

This causes a conflict as soon as there are multiple providers on the same seed:

time="2019-06-13T10:42:29+02:00" level=info msg="Error syncing ControllerInstallation provider-aws: StorageClass.storage.k8s.io \"gardener.cloud-fast\" is invalid: [parameters: Forbidden: updates to parameters are forbidden., provisioner: Forbidden: updates to provisioner are forbidden.]"

Affected version: 0.7.0-dev

/cc @rfranzke

Install SSM agent on worker node

How to categorize this issue?

/area ops-productivity
/area usability
/kind enhancement
/priority normal
/platform aws

What would you like to be added:

Install the AWS Systems Manager Session Manager (SSM) agent on worker nodes.

Why is this needed:

With AWS SSM, there is no need to deploy a bastion machine to access internal worker nodes. This is pretty useful for diagnosing a node that fails to join the Shoot cluster.

After enabling the SSM agent on a worker node, you can simply log in to it with aws ssm start-session --target <ec2-instance-id>.
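
A sketch of such a session, assuming the agent is running and the node's instance profile grants the required ssm:* permissions (the instance ID is illustrative):

# List the instances that have registered with SSM:
aws ssm describe-instance-information
# Open an interactive shell on a worker node:
aws ssm start-session --target i-0123456789abcdef0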

aws_subnet deletion timeout is not respected

How to categorize this issue?

/kind bug
/priority normal
/platform aws

What happened:

provider-aws configures the aws_subnet deletion timeout to 5m; see:

{{ range $index, $zone := .Values.zones }}
resource "aws_subnet" "nodes_z{{ $index }}" {
  vpc_id            = {{ required "vpc.id is required" $.Values.vpc.id }}
  cidr_block        = "{{ required "zone.worker is required" $zone.worker }}"
  availability_zone = "{{ required "zone.name is required" $zone.name }}"

  timeouts {
    create = "5m"
    delete = "5m"
  }

{{ include "aws-infra.tags-with-suffix" (set $.Values "suffix" (print "nodes-z" $index)) | indent 2 }}
}

However, this timeout is not respected:

$ k -n shoot--foo--bar logs bar.infra.tf-destroy-mw2wc -f
Fetching configmap shoot--foo--bar/bar.infra.tf-config and storing data in /tf/main.tf...
Fetching configmap shoot--foo--bar/bar.infra.tf-config and storing data in /tf/variables.tf...
Fetching secret shoot--foo--bar/bar.infra.tf-vars and storing data in /tfvars/terraform.tfvars...
Fetching configmap shoot--foo--bar/bar.infra.tf-state and storing data in /tfstate/terraform.tfstate...

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.68"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
aws_vpc.vpc: Refreshing state... [id=vpc-0a31e7ea767dd1234]
aws_subnet.nodes_z0: Refreshing state... [id=subnet-0bdebaabba2912345]
aws_security_group.nodes: Refreshing state... [id=sg-0d73ff50f88b41234]
aws_subnet.nodes_z2: Refreshing state... [id=subnet-03b2d91f031234567]
aws_subnet.nodes_z1: Refreshing state... [id=subnet-02ca560f0b1e4a123]
aws_security_group.nodes: Destroying... [id=sg-0d73ff50f88b41234]
aws_subnet.nodes_z2: Destroying... [id=subnet-03b2d91f031234567]
aws_subnet.nodes_z1: Destroying... [id=subnet-02ca560f0b1e4a123]
aws_subnet.nodes_z0: Destroying... [id=subnet-0bdebaabba2912345]
aws_security_group.nodes: Still destroying... [id=sg-0d73ff50f88b41234, 10s elapsed]
aws_subnet.nodes_z2: Still destroying... [id=subnet-03b2d91f031234567, 10s elapsed]
aws_subnet.nodes_z1: Still destroying... [id=subnet-02ca560f0b1e4a123, 10s elapsed]
aws_subnet.nodes_z0: Still destroying... [id=subnet-0bdebaabba2912345, 10s elapsed]
[... identical "Still destroying..." messages, repeated every 10s per resource, elided for brevity ...]
aws_subnet.nodes_z2: Still destroying... [id=subnet-03b2d91f031234567, 14m40s elapsed]
aws_subnet.nodes_z1: Still destroying... [id=subnet-02ca560f0b1e4a123, 14m40s elapsed]
aws_security_group.nodes: Still destroying... [id=sg-0d73ff50f88b41234, 14m40s elapsed]
aws_subnet.nodes_z0: Still destroying... [id=subnet-0bdebaabba2912345, 14m40s elapsed]
Mon Nov 16 14:51:56 UTC 2020 Sending SIGTERM to terraform.sh process 6.
Mon Nov 16 14:51:56 UTC 2020 Waiting for terraform.sh process 6 to complete...
Mon Nov 16 14:51:56 UTC 2020 Sending SIGTERM to terraform process 150.
Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down...
Stopping operation...
Mon Nov 16 14:51:56 UTC 2020 Waiting for terraform process 150 to complete...
[... "Still destroying..." messages continue every 10s, well past the configured 5m timeout, elided ...]
aws_subnet.nodes_z2: Still destroying... [id=subnet-03b2d91f031234567, 17m10s elapsed]
aws_subnet.nodes_z0: Still destroying... [id=subnet-0bdebaabba2912345, 17m10s elapsed]
aws_security_group.nodes: Still destroying... [id=sg-0d73ff50f88b41234, 17m10s elapsed]
aws_subnet.nodes_z1: Still destroying... [id=subnet-02ca560f0b1e4a123, 17m10s elapsed]

What you expected to happen:
The configured deletion timeout to be respected.

  • Gardener version (if relevant): v1.16.0
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Allow reuse of ALB

What would you like to be added:
All entry points should reuse an existing ALB by adding new target groups, instead of the two ELBs per cluster that are created today.

Why is this needed:
We are going to have several clusters working together, and we need access to the API and the ingress controller. At the moment we have (2 * number of clusters) ELBs.
It would be ideal if we could define that all entry points reuse an existing ALB by adding a new target group.
We would reduce the entry points to our system to a single one, improving security, making it easier to get logs, reducing costs, and so on.

RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists

We often see the provider-aws terraformer fail during Shoot creation with the following error:

Last Error: Flow "Shoot cluster reconciliation" encountered task errors: [task "Waiting until shoot infrastructure has been reconciled" failed: failed to create infrastructure: retry failed with context deadline exceeded, last error: extension encountered error during reconciliation: Error reconciling infrastructure: Terraform execution job 'foo.infra.tf-job' could not be completed. The following issues have been found in the logs:

-> Pod 'foo.infra.tf-job-8tvwz' reported:
* Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
    status code: 400, request id: <omitted>

  on tf/main.tf line 221, in resource "aws_route" "private_utility_z0_nat":
 221: resource "aws_route" "private_utility_z0_nat" {
]

We see that the route is present in the provider, but it is missing in terraform.tfstate, which makes Terraform try to create it and fail.
We have seen this across several Terraform versions (0.12.9, the current one; 0.11.14; and probably even older ones).
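
Until the root cause is fixed, one possible manual remediation is to import the already existing route into the Terraform state so that the next apply succeeds (the route table ID below is illustrative):

# aws_route is imported with the ID format ROUTETABLEID_DESTINATION:
terraform import aws_route.private_utility_z0_nat rtb-01e20e8294a1de712_0.0.0.0/0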

NatGateway integration

Static Public IP addresses for Gardener-based Shoot Cluster Egress (Outbound) Internet Connectivity.

Allow users to bring their own public static IP addresses and/or public static IP address ranges/prefixes to be attached to the NAT gateway.
The specific IP addresses can be re-assigned in case the cluster crashes, is misconfigured, is deleted, etc.
Specific IP addresses could even be moved between clusters; the classic use case is moving IP addresses from the main cluster to a backup cluster during a disaster recovery (DR) procedure.

The feature is needed for IP address whitelisting by customers (the end users who work with the shoot cluster):

  • In development / test / validation systems, both within the enterprise network and outside, e.g. in a public regulated market cloud.
  • Also in production for specific products, such as HaaS and HANA Cloud, which have a dedicated source IP address whitelisting feature.
    In these cases the customer can whitelist the Shoot cluster egress IP addresses; a sketch of how this could look follows below.
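
A minimal sketch of how this could surface in the provider's InfrastructureConfig; the elasticIPAllocationID field is illustrative of the requested API, not a committed design:

# Hypothetical InfrastructureConfig excerpt attaching a pre-allocated EIP
# to the NAT gateway of a zone:
cat > infrastructure-config.yaml <<EOF
apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    cidr: 10.250.0.0/16
  zones:
  - name: eu-west-1a
    internal: 10.250.112.0/22
    public: 10.250.96.0/22
    workers: 10.250.0.0/19
    elasticIPAllocationID: eipalloc-0123456789abcdef0
EOF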

Support dry-run for AWS validator

What would you like to be added:
Allow kubectl dry-run and related actions for AWS validator.

Why is this needed:
Currently it's not possible to run kubectl diff because of:

admission webhook "validation.aws.provider.extensions.gardener.cloud" does not support dry run
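
For dry-run to work, the API server requires the webhook to declare that it is safe to call without side effects (sideEffects: None, or NoneOnDryRun). A sketch for checking the current declaration, assuming standard kubectl and jq (the webhook name is taken from the error message above):

# Print the declared side effects of the AWS validator webhook:
kubectl get validatingwebhookconfigurations -o json \
  | jq '.items[].webhooks[]
        | select(.name == "validation.aws.provider.extensions.gardener.cloud")
        | .sideEffects'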

Specify volumeBindingMode:WaitForFirstConsumer in default storage class

/area storage
/kind enhancement
/priority normal
/platform aws

What would you like to be added:
PVs shall be created in the zone that the pod is scheduled to.

Why is this needed:
From https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode:

By default, the Immediate mode indicates that volume binding and dynamic provisioning occurs once the PersistentVolumeClaim is created. For storage backends that are topology-constrained and not globally accessible from all Nodes in the cluster, PersistentVolumes will be bound or provisioned without knowledge of the Pod's scheduling requirements. This may result in unschedulable Pods.

We use Immediate, but should rather use WaitForFirstConsumer, wouldn't you agree?
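
A sketch of the suggested change, assuming the in-tree EBS provisioner used by the non-CSI default class (the parameters are illustrative):

# Apply a StorageClass that defers volume binding until a consuming pod is scheduled:
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gardener.cloud-fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
EOF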

Infrastructure controller should check for correct VPC attributes when existing VPC is used

How to categorize this issue?

/area ops-productivity
/kind enhancement
/priority normal
/platform aws
/topology seed
/exp beginner

What would you like to be added:
The AWS extension requires the VPC attributes enableDnsHostnames and enableDnsSupport to be set to true in order to function correctly. If these prerequisites are violated, the worker machines might not be able to join the cluster.
Unfortunately, we lack a check in our Infrastructure controller that would surface a meaningful error message to the user.

Why is this needed:
Let's please add the above-mentioned check so that users can help themselves.
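
The requested check boils down to two DescribeVpcAttribute calls; for example, from the CLI (the VPC ID is illustrative):

# Both attributes must report "Value": true, otherwise worker machines
# may be unable to resolve and register with the cluster:
aws ec2 describe-vpc-attribute --vpc-id vpc-0a31e7ea767dd1234 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0a31e7ea767dd1234 --attribute enableDnsHostnames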

Infrastructure controller can wrongly early exit on deletion

How to categorize this issue?

/area quality
/kind bug
/priority normal
/platform aws

What happened:
Currently, the infrastructure deletion can wrongly exit early when it detects an empty state and assumes that nothing was created (because of invalid credentials or another reason).

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

  1. Create an Infrastructure with a terraform config that will time out during creation.

  2. Delete the Infrastructure.

  3. Observe that the terraformer Pod from the creation can still be running while the infrastructure controller proceeds with the deletion, detecting an empty terraform state config map and removing the Infrastructure finalizer.

Generally, this can happen because the terraformer package only deletes the Pod and does not wait for the deletion to complete after the grace period. The infrastructure deletion can meanwhile wrongly complete when the state is empty but a terraformer Pod is still terminating.
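
The race can be observed manually; a sketch assuming the naming scheme from the logs above:

# A terraformer pod may still be terminating while the state config map is already empty:
kubectl -n shoot--foo--bar get pods | grep infra.tf
kubectl -n shoot--foo--bar get configmap bar.infra.tf-state \
  -o jsonpath='{.data.terraform\.tfstate}'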

Anything else we need to know?:

Environment:

  • Gardener version (if relevant): v1.5.2
  • Extension version: v1.8.2
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Duplicate resources prevent the creation of a cluster

What happened:

We are using Gardener with an Istio extension. During our integration tests, we create and delete a shoot cluster, always using the same cluster name. After a few loops we encountered the following error in the logs of a <shoot>.infra.tf-job:

Error: Error applying plan:

3 errors occurred:
        * aws_iam_role.nodes: 1 error occurred:
        * aws_iam_role.nodes: Error creating IAM Role shoot--core--integration-test-nodes: EntityAlreadyExists: Role with name shoot--core--integration-test-nodes already exists.
        status code: 409, request id: cd4798f1-9e59-11e9-9a03-27542a2f5747


        * aws_key_pair.kubernetes: 1 error occurred:
        * aws_key_pair.kubernetes: Error import KeyPair: InvalidKeyPair.Duplicate: The keypair 'shoot--core--integration-test-ssh-publickey' already exists.
        status code: 400, request id: d1b878e3-c75a-4fa6-82ab-8a50d11a2227


        * aws_iam_role.bastions: 1 error occurred:
        * aws_iam_role.bastions: Error creating IAM Role shoot--core--integration-test-bastions: EntityAlreadyExists: Role with name shoot--core--integration-test-bastions already exists.
        status code: 409, request id: cd4749f9-9e59-11e9-82f6-8f5e7da59408

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

What you expected to happen:

Duplicates should be either ignored or deleted.
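
As a manual workaround, the leftover resources (names taken from the error output above) can be deleted before the next run; note that delete-role requires any attached policies and instance profiles to be removed first:

aws ec2 delete-key-pair --key-name shoot--core--integration-test-ssh-publickey
aws iam delete-role --role-name shoot--core--integration-test-nodes
aws iam delete-role --role-name shoot--core--integration-test-bastions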

How to reproduce it (as minimally and precisely as possible):

Not sure how to reproduce this.

Anything else we need to know?:

Environment:

  • Gardener version: master
  • Kubernetes version (use kubectl version): 1.14.0
  • Cloud provider or hardware configuration: AWS
  • Others:

Terraform state can be lost after timeout exceed

How to categorize this issue?

/area quality
/area robustness
/kind bug
/priority normal
/platform aws

What happened:
Today there was a major AWS outage affecting lifecycle operations on IAM resources. Hence, the terraformer Pod was deleted and SIGKILL-ed after the terminationGracePeriodSeconds, which left it unable to store any of the created state.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Logs of an example run were:

$ k -n shoot--foo--aws-local2 logs aws-local2.infra.tf-apply-7n8m6 -f
Fetching configmap shoot--foo--aws-local2/aws-local2.infra.tf-config and storing data in /tf/main.tf...
Fetching configmap shoot--foo--aws-local2/aws-local2.infra.tf-config and storing data in /tf/variables.tf...
Fetching secret shoot--foo--aws-local2/aws-local2.infra.tf-vars and storing data in /tfvars/terraform.tfvars...
Fetching configmap shoot--foo--aws-local2/aws-local2.infra.tf-state and storing data in /tfstate/terraform.tfstate...

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.26"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
aws_key_pair.kubernetes: Creating...
aws_iam_role.bastions: Creating...
aws_iam_role.nodes: Creating...
aws_vpc.vpc: Creating...
aws_key_pair.kubernetes: Creation complete after 1s [id=shoot--foo--aws-local2-ssh-publickey]
aws_eip.eip_natgw_z0: Creating...
aws_eip.eip_natgw_z0: Creation complete after 0s [id=eipalloc-0f89a089c2dc4a7c7]
aws_vpc_dhcp_options.vpc_dhcp_options: Creating...
aws_vpc_dhcp_options.vpc_dhcp_options: Creation complete after 1s [id=dopt-06a72b28786d9f951]
aws_vpc.vpc: Creation complete after 2s [id=vpc-0c2d2334000566575]
aws_internet_gateway.igw: Creating...
aws_subnet.private_utility_z0: Creating...
aws_internet_gateway.igw: Creation complete after 0s [id=igw-0a761244e10b8a8b3]
aws_route_table.routetable_private_utility_z0: Creating...
aws_subnet.private_utility_z0: Creation complete after 0s [id=subnet-080bc8689ab3e04e8]
aws_route_table.routetable_main: Creating...
aws_route_table.routetable_private_utility_z0: Creation complete after 1s [id=rtb-01e20e8294a1de712]
aws_default_security_group.default: Creating...
aws_route_table.routetable_main: Creation complete after 0s [id=rtb-053cfba862f6c7e6d]
aws_vpc_dhcp_options_association.vpc_dhcp_options_association: Creating...
aws_vpc_dhcp_options_association.vpc_dhcp_options_association: Creation complete after 0s [id=dopt-06a72b28786d9f951-vpc-0c2d2334000566575]
aws_subnet.nodes_z0: Creating...
aws_default_security_group.default: Creation complete after 0s [id=sg-0ee964493cdd37a53]
aws_security_group.nodes: Creating...
aws_subnet.nodes_z0: Creation complete after 1s [id=subnet-04c67d0ebe1288770]
aws_subnet.public_utility_z0: Creating...
aws_subnet.public_utility_z0: Creation complete after 0s [id=subnet-08e1e002e68166861]
aws_security_group.nodes: Creation complete after 0s [id=sg-00154811f1ea697c3]
aws_route_table_association.routetable_private_utility_z0_association_private_utility_z0: Creating...
aws_route.public: Creating...
aws_route_table_association.routetable_private_utility_z0_association_private_utility_z0: Creation complete after 0s [id=rtbassoc-092bbea40db0e3fa0]
aws_route.public: Creation complete after 0s [id=r-rtb-053cfba862f6c7e6d1080289494]
aws_route_table_association.routetable_private_utility_z0_association_nodes_z0: Creating...
aws_route_table_association.routetable_main_association_public_utility_z0: Creating...
aws_route_table_association.routetable_main_association_public_utility_z0: Creation complete after 0s [id=rtbassoc-0d6b396df0e7d228c]
aws_route_table_association.routetable_private_utility_z0_association_nodes_z0: Creation complete after 0s [id=rtbassoc-01fcdde4413d30150]
aws_security_group_rule.nodes_self: Creating...
aws_nat_gateway.natgw_z0: Creating...
aws_security_group_rule.nodes_self: Creation complete after 1s [id=sgrule-398590943]
aws_security_group_rule.nodes_udp_all: Creating...
aws_security_group_rule.nodes_udp_all: Creation complete after 0s [id=sgrule-289265722]
aws_security_group_rule.nodes_tcp_all: Creating...
aws_security_group_rule.nodes_tcp_all: Creation complete after 0s [id=sgrule-1878145698]
aws_security_group_rule.nodes_egress_all: Creating...
aws_security_group_rule.nodes_egress_all: Creation complete after 1s [id=sgrule-301994842]
aws_security_group_rule.nodes_udp_public_z0: Creating...
aws_security_group_rule.nodes_udp_public_z0: Creation complete after 0s [id=sgrule-237563616]
aws_security_group_rule.nodes_udp_internal_z0: Creating...
aws_security_group_rule.nodes_udp_internal_z0: Creation complete after 0s [id=sgrule-4053445481]
aws_security_group_rule.nodes_tcp_public_z0: Creating...
aws_security_group_rule.nodes_tcp_public_z0: Creation complete after 1s [id=sgrule-3093794439]
aws_security_group_rule.nodes_tcp_internal_z0: Creating...
aws_security_group_rule.nodes_tcp_internal_z0: Creation complete after 0s [id=sgrule-586813682]
aws_iam_role.bastions: Still creating... [10s elapsed]
aws_iam_role.nodes: Still creating... [10s elapsed]
aws_nat_gateway.natgw_z0: Still creating... [10s elapsed]
aws_iam_role.bastions: Still creating... [20s elapsed]
aws_iam_role.nodes: Still creating... [20s elapsed]
aws_nat_gateway.natgw_z0: Still creating... [20s elapsed]
[... identical "Still creating" messages for aws_iam_role.bastions, aws_iam_role.nodes, and aws_nat_gateway.natgw_z0 repeated every 10s ...]
aws_nat_gateway.natgw_z0: Creation complete after 1m45s [id=nat-0c739ce5de293093a]
aws_route.private_utility_z0_nat: Creating...
aws_route.private_utility_z0_nat: Creation complete after 0s [id=r-rtb-01e20e8294a1de7121080289494]
aws_iam_role.bastions: Still creating... [1m50s elapsed]
aws_iam_role.nodes: Still creating... [1m50s elapsed]
aws_iam_role.bastions: Still creating... [2m0s elapsed]
aws_iam_role.nodes: Still creating... [2m0s elapsed]
[... identical "Still creating" messages for aws_iam_role.bastions and aws_iam_role.nodes repeated every 10s ...]
aws_iam_role.bastions: Still creating... [14m30s elapsed]
aws_iam_role.nodes: Still creating... [14m30s elapsed]
Fri Jun 12 11:19:37 UTC 2020 Sending SIGTERM to terraform.sh process 8.
Fri Jun 12 11:19:37 UTC 2020 Waiting for terraform.sh process 8 to complete...
Fri Jun 12 11:19:37 UTC 2020 Sending SIGTERM to terraform process 77.
Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down...
Stopping operation...
Fri Jun 12 11:19:37 UTC 2020 Waiting for terraform process 77 to complete...
aws_iam_role.bastions: Still creating... [14m40s elapsed]
aws_iam_role.nodes: Still creating... [14m40s elapsed]
aws_iam_role.bastions: Still creating... [14m50s elapsed]
aws_iam_role.nodes: Still creating... [14m50s elapsed]
[... identical "Still creating" messages for aws_iam_role.bastions and aws_iam_role.nodes repeated every 10s ...]
aws_iam_role.bastions: Still creating... [25m0s elapsed]
aws_iam_role.nodes: Still creating... [25m0s elapsed]
rpc error: code = Unknown desc = Error: No such container: 0560b2b623a6b70e5b6799f47e6c9c1b0aeca4c0026eff1f7c642c95f46b55db

As you can see from the logs, the signal from the Pod deletion is forwarded to the terraform process, but it keeps waiting for the roles to be created until it is killed, and is therefore unable to save its state.
The bigger issue is that the next run of the infrastructure reconciliation creates all the resources again (except the roles), so the infrastructure enters a loop where it keeps creating new resources over and over.
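
One possible mitigation, sketched below, would be to give the terraformer Pod a longer termination grace period, so that terraform has time to finish or persist its state after SIGTERM before the kubelet escalates to SIGKILL. This is only an illustrative Pod fragment; the Pod layout, image name, and the 900s value are assumptions, not the extension's actual configuration:

apiVersion: v1
kind: Pod
metadata:
  name: terraformer   # hypothetical name
spec:
  # Assumption: give terraform enough time after SIGTERM to persist
  # its state before the kubelet sends SIGKILL.
  terminationGracePeriodSeconds: 900
  containers:
  - name: terraform
    image: terraformer:example   # hypothetical image
    command: ["/terraform.sh", "apply"]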

Anything else we need to know?:

Environment:

  • Gardener version (if relevant): v1.5.2
  • Extension version: v1.8.2
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Make the EBS CSI driver configurable

How to categorize this issue?

/area usability
/kind enhancement
/priority normal

What would you like to be added: The EBS CSI driver v0.6.0 has been released recently. It contains changes that allow users to overwrite the maximum number of attachable volumes per node for the cluster.

We need to both vendor the new version of the CSI driver and enable users to configure its volume-attach-limit flag via the Shoot resource.

Note that this is part of issue gardener/gardener#2354 and is a prominent requirement for the MemoryOne integration to work at its best with Kubernetes 1.18+.

Why is this needed: The environment variable KUBE_MAX_PD_VOLS is ineffective with CSI migration, which leaves users no way to configure the maximum number of attachable volumes.
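
A hypothetical sketch of how this could surface in the Shoot's provider configuration; the csi.volumeAttachLimit field and its placement in ControlPlaneConfig are assumptions for illustration, not the implemented API:

apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
# Hypothetical field: the extension would pass this value to the
# EBS CSI node plugin as its --volume-attach-limit flag.
csi:
  volumeAttachLimit: 25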

Allow deletion of not yet reconciled zones

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform aws

What would you like to be added:

Currently, validator-aws forbids the removal of zones from a Shoot's infrastructureConfig / workers[].zones.
This is because it cannot know

  • a) whether the resources for the removed zones (subnets, security groups, etc.) have already been created
    • if they haven't been created yet, it would actually be safe to remove the zone
  • b) whether a removed zone is empty and can be deleted safely (no machines or other resources left in the subnet/zone)

We would like to lift restriction a) and allow the removal of zones that have not been created yet.
This would require that the infrastructure actuator reports back to the garden cluster which resources (e.g. subnets) have been created for a given Infrastructure object.
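
For illustration, removing a zone means dropping its entry from the InfrastructureConfig; a minimal sketch, assuming the commented-out second zone was added but never reconciled (zone names and CIDRs are made up):

apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    cidr: 10.250.0.0/16
  zones:
  - name: eu-west-1a
    internal: 10.250.112.0/22
    public: 10.250.96.0/22
    workers: 10.250.0.0/19
  # A second zone whose subnets were never actually created could be
  # removed from this list again once restriction a) is lifted.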

Why is this needed:

This will allow end users to help themselves and fix a broken Shoot spec, for example when they try to add a zone with a CIDR that conflicts with another existing subnet in the VPC. In that case, the addition of the zone succeeds, but the infrastructure reconciliation fails and the Shoot ends up in a Failed state; the zone cannot be removed, although it is "empty" and could be removed safely.

Volume snapshotting does not work

What happened:

I provisioned a Gardener AWS cluster with the CSI storage driver, deployed a sample Pod and a sample PVC. Then, I created a VolumeSnapshotClass and a VolumeSnapshot to trigger snapshotting of the existing PVC. However, snapshotting didn't work: the VolumeSnapshot's readyToUse field never becomes true, and I don't see any related disk or snapshot in the AWS account.

How to reproduce it (as minimally and precisely as possible):

Create a PVC and a Pod:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: source-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["touch /demo/data/sample-file.txt && sleep 3000"]
    volumeMounts:
    - name: source-data
      mountPath: /demo/data
  volumes:
  - name: source-data
    persistentVolumeClaim:
      claimName: source-pvc
      readOnly: false

Then, create a VolumeSnapshotClass and a VolumeSnapshot:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
  name: default-snapshot-class
driver: ebs.csi.aws.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: snapshot-source-pvc
spec:
  source:
    persistentVolumeClaimName: source-pvc

Check the VolumeSnapshot status:

kubectl get volumesnapshot snapshot-source-pvc
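
For reference, once readyToUse turns true the snapshot should be consumable as a PVC data source; a minimal restore manifest using the names from the repro above (the restored-pvc name is made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  # Restore from the VolumeSnapshot created above.
  dataSource:
    name: snapshot-source-pvc
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io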

Environment:

  • Gardener version (if relevant): v1.3.0
  • Extension version:
  • Kubernetes version (use kubectl version): 1.18.1
  • Cloud provider or hardware configuration: AWS
  • Others:

Allow specifying IAM instance profiles on workers

How to categorize this issue?

/area security
/kind api-change
/priority normal
/platform aws

What would you like to be added:

The ability to specify an IAM instance profile on the workers, e.g.:

apiVersion: extensions.gardener.cloud/v1alpha1
kind: Worker
metadata:
  name: worker
  namespace: shoot--foobar--aws
spec:
  type: aws
  region: eu-west-1
  secretRef:
    name: cloudprovider
    namespace: shoot--foobar--aws
  pools:
  - name: cpu-worker
    machineType: m4.large
    machineImage:
      name: coreos
      version: 2135.6.0
    zones:
    - eu-west-1a
  providerConfig:
    apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
    kind: WorkerConfig
    volume:
      iops: 10000
    dataVolumes:
    - name: kubelet-dir
      snapshotID: snap-13234
    iamInstanceProfile:
      name: my-profile
      # arn: my-instance-profile-arn # specify either ARN or name.

Why is this needed:

This would eventually allow (optionally) removing the extension's dependency on the IAM API, reducing the attack surface.

Trigger a rolling update when the snapshotID in a dataVolume is updated

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform aws

What would you like to be added: The extension allows users to attach multiple volumes/disks to a machine during bootstrapping, and also allows disks to be created from a snapshot via Workers.ProviderConfig.DataVolumes.SnapshotID.

An operating system like MemoryOne expects the snapshot to be backed by the secondary operating system.
Considering this, it would be nice to trigger a rolling update when the snapshotID is updated.
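
A minimal sketch of the relevant WorkerConfig fragment; under this proposal, changing the snapshotID value below would roll the worker pool (the IDs are made up):

apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: WorkerConfig
dataVolumes:
- name: kubelet-dir
  # Updating this ID should trigger a rolling update of the pool.
  snapshotID: snap-13234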

Why is this needed: For a proper roll-out of the secondary/host OS backing the snapshots.

Enable encryption on disks attached to AWS VMs

How to categorize this issue?

/area security
/kind enhancement
/priority normal
/platform aws

What would you like to be added:
Encryption should be enabled by default for all disks attached to EC2 instances.
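
As a sketch of what per-pool control could look like in a Shoot's worker spec, assuming the volume's encrypted flag maps to EBS encryption (the request here is to default it to true):

# Fragment of a Shoot's .spec.provider.workers entry.
workers:
- name: cpu-worker
  volume:
    type: gp2
    size: 50Gi
    # Assumption: maps to EBS encryption of the root volume.
    encrypted: true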

Why is this needed:
Security

Configure MCM settings from the Worker object on the MachineDeployment

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform aws

What would you like to be added: Machine-controller-manager now allows configuring certain controller settings per MachineDeployment, for example the machine drain, health-check, and creation timeouts.

Also, with PR gardener/gardener#2563, these settings can be configured via the Shoot resource as well.

We need to enhance the worker extensions to read these settings from the Worker object and set them accordingly on the MachineDeployment (see the sketch after the dependency list below).

Dependencies:

  • Vendor MCM v0.33.0.
  • gardener/gardener#2563 should be merged.
  • gardener/gardener with the #2563 change should be vendored.
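
A sketch of the worker-pool fragment such settings might take in the Shoot spec; the field names below follow gardener/gardener#2563 as far as I can tell and should be treated as assumptions:

# Fragment of a Shoot's .spec.provider.workers entry.
workers:
- name: cpu-worker
  machineControllerManagerSettings:
    # Forwarded by the worker extension onto the MachineDeployment.
    machineDrainTimeout: 10m
    machineHealthTimeout: 10m
    machineCreationTimeout: 10m
    maxEvictRetries: 10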

Why is this needed:
To allow fine-grained configuration of MCM via the Worker object.

Cannot delete infrastructure when credentials data keys are missing in secret

From gardener-attic/gardener-extensions#577

If the account secret does not contain a service account JSON, the cluster can certainly not be created.
But trying to delete such a cluster fails for the same reason:

Waiting until shoot infrastructure has been destroyed
Last Error
task "Waiting until shoot infrastructure has been destroyed" failed: Failed to delete infrastructure: Error deleting infrastructure: secret shoot--berlin--rg-kyma/cloudprovider doesn't have a service account json

The same applies to the other providers; this is not specific to GCP.
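
For AWS, for example, the deletion flow currently requires the credential data keys to be present in the cloudprovider secret; a minimal sketch (accessKeyID and secretAccessKey are the keys this extension expects, the values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: cloudprovider
  namespace: shoot--foo--bar   # hypothetical namespace
type: Opaque
data:
  accessKeyID: <base64-encoded access key ID>
  secretAccessKey: <base64-encoded secret access key>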

Fail to execute cloud-init on AWS

I got these system logs from the AWS console; something happened, but I am unable to fix it:

Booting `Debian GNU/Linux'


Loading Linux 5.4.0-4-cloud-amd64 ...

Loading initial ramdisk ...

[    0.000000] Linux version 5.4.0-4-cloud-amd64 ([email protected]) (gcc version 9.2.1 20200203 (Debian 9.2.1-28)) #1 SMP Debian 5.4.19-1 (2020-02-13)
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.4.0-4-cloud-amd64 root=LABEL=ROOT ro console=ttyS0,115200 console=tty0 earlyprintk=ttyS0,115200 consoleblank=0
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
[    0.000000] x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
[    0.000000] x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
[    0.000000] x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
[    0.000000] x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
[    0.000000] x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
[    0.000000] x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bfff9fff] usable
[    0.000000] BIOS-e820: [mem 0x00000000bfffa000-0x00000000bfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000022d3fffff] usable
[    0.000000] BIOS-e820: [mem 0x000000022d400000-0x000000023fffffff] reserved
[    0.000000] printk: bootconsole [earlyser0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Amazon EC2 m5.large/, BIOS 1.0 10/16/2017
[    0.000000] Hypervisor detected: KVM
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: cpu 0, msr 10f37d001, primary cpu clock
[    0.000000] kvm-clock: using sched offset of 9350337511 cycles
[    0.001184] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.004544] tsc: Detected 2500.000 MHz processor
[    0.006491] last_pfn = 0x22d400 max_arch_pfn = 0x400000000
[    0.007671] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Memory KASLR using RDRAND RDTSC...
[    0.009722] last_pfn = 0xbfffa max_arch_pfn = 0x400000000
[    0.010955] Using GB pages for direct mapping
[    0.012027] RAMDISK: [mem 0x36761000-0x373a7fff]
[    0.013003] ACPI: Early table checksum verification disabled
[    0.014154] ACPI: RSDP 0x00000000000F8FA0 000014 (v00 AMAZON)
[    0.015334] ACPI: RSDT 0x00000000BFFFE360 00003C (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
[    0.017093] ACPI: FACP 0x00000000BFFFFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
[    0.018846] ACPI: DSDT 0x00000000BFFFE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
[    0.020595] ACPI: FACS 0x00000000BFFFFF40 000040
[    0.021535] ACPI: SSDT 0x00000000BFFFF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
[    0.023299] ACPI: APIC 0x00000000BFFFF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
[    0.025048] ACPI: SRAT 0x00000000BFFFF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
[    0.026810] ACPI: SLIT 0x00000000BFFFF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
[    0.028543] ACPI: WAET 0x00000000BFFFF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
[    0.030371] SRAT: PXM 0 -> APIC 0x00 -> Node 0
[    0.031296] SRAT: PXM 0 -> APIC 0x01 -> Node 0
[    0.032203] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
[    0.033441] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x23fffffff]
[    0.034718] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x22d3fffff] -> [mem 0x00000000-0x22d3fffff]
[    0.036873] NODE_DATA(0) allocated [mem 0x22d3fa000-0x22d3fefff]
[    0.038126] Zone ranges:
[    0.038640]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.039890]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.041156]   Normal   [mem 0x0000000100000000-0x000000022d3fffff]
[    0.042403]   Device   empty
[    0.042999] Movable zone start for each node
[    0.043872] Early memory node ranges
[    0.044597]   node   0: [mem 0x0000000000001000-0x000000000009efff]
[    0.045864]   node   0: [mem 0x0000000000100000-0x00000000bfff9fff]
[    0.047154]   node   0: [mem 0x0000000100000000-0x000000022d3fffff]
[    0.048552] Zeroed struct page in unavailable ranges: 11368 pages
[    0.048553] Initmem setup node 0 [mem 0x0000000000001000-0x000000022d3fffff]
[    0.075226] ACPI: PM-Timer IO Port: 0xb008
[    0.076086] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.077302] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.078715] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.080063] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.081415] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.082839] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.084223] Using ACPI (MADT) for SMP configuration information
[    0.085418] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.086433] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.087743] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[    0.089047] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[    0.090363] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[    0.091707] PM: Registered nosave memory: [mem 0xbfffa000-0xbfffffff]
[    0.093051] PM: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
[    0.094364] PM: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
[    0.095673] PM: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
[    0.096965] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[    0.098290] [mem 0xc0000000-0xdfffffff] available for PCI devices
[    0.099529] Booting paravirtualized kernel on KVM
[    0.100486] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[    0.160075] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
[    0.162128] percpu: Embedded 52 pages/cpu s175896 r8192 d28904 u1048576
[    0.163657] KVM setup async PF for cpu 0
[    0.164594] kvm-stealtime: cpu 0, msr 225216ec0
[    0.165638] Built 1 zonelists, mobility grouping on.  Total pages: 1988659
[    0.167200] Policy zone: Normal
[    0.167921] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.4.0-4-cloud-amd64 root=LABEL=ROOT ro console=ttyS0,115200 console=tty0 earlyprintk=ttyS0,115200 consoleblank=0
[    0.172447] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[    0.174738] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[    0.176447] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.202799] Memory: 7838036K/8080992K available (10243K kernel code, 1077K rwdata, 3136K rodata, 1548K init, 2672K bss, 242956K reserved, 0K cma-reserved)
[    0.205794] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.207160] Kernel/User page tables isolation: enabled
[    0.208239] ftrace: allocating 29415 entries in 115 pages
[    0.218665] rcu: Hierarchical RCU implementation.
[    0.219640] rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
[    0.221028] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.222609] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.226133] NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
[    0.227534] random: crng done (trusting CPU's manufacturer)
[    0.341782] Console: colour VGA+ 80x25
[    0.342605] printk: console [tty0] enabled
[    0.344495] printk: bootconsole [earlyser0] disabled
[    0.880599] printk: console [ttyS0] enabled
[    0.884121] ACPI: Core revision 20190816
[    0.887438] APIC: Switch to symmetric I/O mode setup
[    0.891432] x2apic enabled
[    0.894647] Switched APIC routing to physical x2apic.
[    0.899708] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240939f1bb2, max_idle_ns: 440795263295 ns
[    0.907777] Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=10000000)
[    0.911774] pid_max: default: 32768 minimum: 301
[    0.911774] LSM: Security Framework initializing
[    0.911774] Yama: disabled by default; enable with sysctl kernel.yama.*
[    0.911774] AppArmor: AppArmor initialized
[    0.911774] TOMOYO Linux initialized
[    0.911774] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[    0.911774] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Poking KASLR using RDRAND RDTSC...
[    0.911774] Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
[    0.911774] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
[    0.911774] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.911774] Spectre V2 : Mitigation: Full generic retpoline
[    0.911774] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.911774] Speculative Store Bypass: Vulnerable
[    0.911774] TAA: Vulnerable: Clear CPU buffers attempted, no microcode
[    0.911774] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[    0.911774] Freeing SMP alternatives memory: 20K
[    0.911774] smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
[    0.911891] Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
[    0.915805] rcu: Hierarchical SRCU implementation.
[    0.920064] NMI watchdog: Perf NMI watchdog permanently disabled
[    0.923809] smp: Bringing up secondary CPUs ...
[    0.927744] x86: Booting SMP configuration:
[    0.927779] .... node  #0, CPUs:      #1
[    0.684281] kvm-clock: cpu 1, msr 10f37d041, secondary cpu clock
[    0.929130] KVM setup async PF for cpu 1
[    0.931774] kvm-stealtime: cpu 1, msr 225316ec0
[    0.939820] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[    0.943777] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[    0.947794] smp: Brought up 1 node, 2 CPUs
[    0.951389] smpboot: Max logical packages: 1
[    0.951777] smpboot: Total of 2 processors activated (10000.00 BogoMIPS)
[    0.956084] devtmpfs: initialized
[    0.959049] x86/mm: Memory block size: 128MB
[    0.959989] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.967784] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.971934] NET: Registered protocol family 16
[    0.975647] audit: initializing netlink subsys (disabled)
[    0.979795] audit: type=2000 audit(1594109511.279:1): state=initialized audit_enabled=0 res=1
[    1.106839] ACPI: bus type PCI registered
[    1.107778] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    1.111879] PCI: Using configuration type 1 for base access
[    1.116619] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    1.119779] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    1.235979] ACPI: Added _OSI(Module Device)
[    1.239783] ACPI: Added _OSI(Processor Device)
[    1.243384] ACPI: Added _OSI(3.0 _SCP Extensions)
[    1.243777] ACPI: Added _OSI(Processor Aggregator Device)
[    1.247757] ACPI: Added _OSI(Linux-Dell-Video)
[    1.247781] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    1.251749] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    1.252481] ACPI: 2 ACPI AML tables successfully acquired and loaded
[    1.256611] ACPI: Interpreter enabled
[    1.259785] ACPI: (supports S0 S4 S5)
[    1.263044] ACPI: Using IOAPIC for interrupt routing
[    1.263790] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    1.268148] ACPI: Enabled 16 GPEs in block 00 to 0F
[    1.273734] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    1.275781] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
[    1.279784] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    1.284049] acpiphp: Slot [3] registered
[    1.287437] acpiphp: Slot [4] registered
[    1.287794] acpiphp: Slot [5] registered
[    1.291184] acpiphp: Slot [6] registered
[    1.291791] acpiphp: Slot [7] registered
[    1.295153] acpiphp: Slot [8] registered
[    1.295794] acpiphp: Slot [9] registered
[    1.299281] acpiphp: Slot [10] registered
[    1.299794] acpiphp: Slot [11] registered
[    1.303187] acpiphp: Slot [12] registered
[    1.303792] acpiphp: Slot [13] registered
[    1.307221] acpiphp: Slot [14] registered
[    1.307794] acpiphp: Slot [15] registered
[    1.311210] acpiphp: Slot [16] registered
[    1.311794] acpiphp: Slot [17] registered
[    1.315220] acpiphp: Slot [18] registered
[    1.315793] acpiphp: Slot [19] registered
[    1.319244] acpiphp: Slot [20] registered
[    1.319795] acpiphp: Slot [21] registered
[    1.323208] acpiphp: Slot [22] registered
[    1.323792] acpiphp: Slot [23] registered
[    1.327207] acpiphp: Slot [24] registered
[    1.327792] acpiphp: Slot [25] registered
[    1.331315] acpiphp: Slot [26] registered
[    1.331794] acpiphp: Slot [27] registered
[    1.335237] acpiphp: Slot [28] registered
[    1.335791] acpiphp: Slot [29] registered
[    1.339416] acpiphp: Slot [30] registered
[    1.339792] acpiphp: Slot [31] registered
[    1.343244] PCI host bridge to bus 0000:00
[    1.343779] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    1.347777] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    1.351779] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    1.355777] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[    1.359778] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.363822] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    1.368302] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    1.373187] pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
[    1.376869] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    1.379794] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[    1.383851] pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
[    1.387794] pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
[    1.391798] pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
[    1.395798] pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
[    1.399793] pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
[    1.403796] pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
[    1.407781] pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 31250 usecs
[    1.412122] pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
[    1.416242] pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
[    1.420989] pci 0000:00:03.0: reg 0x30: [mem 0xfebd0000-0xfebdffff pref]
[    1.424134] pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
[    1.429347] pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
[    1.437284] pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
[    1.441403] pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
[    1.446667] pci 0000:00:05.0: reg 0x18: [mem 0xfe800000-0xfe8fffff pref]
[    1.450647] pci 0000:00:05.0: reg 0x20: [mem 0xfebe0000-0xfebeffff]
[    1.455732] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[    1.455886] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    1.459877] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    1.463876] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[    1.467829] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[    1.471882] iommu: Default domain type: Translated 
[    1.475801] pci 0000:00:03.0: vgaarb: setting as boot VGA device
[    1.479774] pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    1.479783] pci 0000:00:03.0: vgaarb: bridge control possible
[    1.483777] vgaarb: loaded
[    1.486740] PCI: Using ACPI for IRQ routing
[    1.488062] clocksource: Switched to clocksource kvm-clock
[    1.499969] VFS: Disk quotas dquot_6.6.0
[    1.503341] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    1.508081] AppArmor: AppArmor Filesystem Enabled
[    1.511849] pnp: PnP ACPI init
[    1.515246] pnp: PnP ACPI: found 5 devices
[    1.525400] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    1.532539] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    1.536879] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    1.541284] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    1.546014] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[    1.550795] NET: Registered protocol family 2
[    1.554547] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
[    1.561608] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
[    1.568409] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    1.575039] TCP: Hash tables configured (established 65536 bind 65536)
[    1.579646] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
[    1.584392] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
[    1.590923] NET: Registered protocol family 1
[    1.594533] NET: Registered protocol family 44
[    1.598223] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    1.602522] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    1.607070] pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    1.614117] PCI: CLS 0 bytes, default 64
[    1.617544] Trying to unpack rootfs image as initramfs...
[    1.796726] Freeing initrd memory: 12572K
[    1.800263] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    1.804683] software IO TLB: mapped [mem 0xbbffa000-0xbfffa000] (64MB)
[    1.809211] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240939f1bb2, max_idle_ns: 440795263295 ns
[    1.816760] clocksource: Switched to clocksource tsc
[    1.820786] Initialise system trusted keyrings
[    1.824757] Key type blacklist registered
[    1.828212] workingset: timestamp_bits=40 max_order=21 bucket_order=0
[    1.833649] zbud: loaded
[    1.836656] Key type asymmetric registered
[    1.840111] Asymmetric key parser 'x509' registered
[    1.843892] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.850277] io scheduler mq-deadline registered
[    1.854073] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    1.886043] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    1.892926] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    1.899815] i8042: Warning: Keylock active
[    1.904234] serio: i8042 KBD port at 0x60,0x64 irq 1
[    1.908048] serio: i8042 AUX port at 0x60,0x64 irq 12
[    1.912002] mousedev: PS/2 mouse device common for all mice
[    1.916255] rtc_cmos 00:00: RTC can wake from S4
[    1.920650] rtc_cmos 00:00: registered as rtc0
[    1.924248] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
[    1.928604] drop_monitor: Initializing network drop monitor service
[    1.933078] NET: Registered protocol family 10
[    1.944653] Segment Routing with IPv6
[    1.948027] mip6: Mobile IPv6
[    1.951030] NET: Registered protocol family 17
[    1.954689] mpls_gso: MPLS GSO support
[    1.958022] IPI shorthand broadcast: enabled
[    1.961546] sched_clock: Marking stable (1281249486, 680281423)->(2200802317, -239271408)
[    1.968279] registered taskstats version 1
[    1.971648] Loading compiled-in X.509 certificates
[    2.004818] Loaded X.509 cert 'Debian Secure Boot CA: 6ccece7e4c6c0d1f6149f3dd27dfcc5cbb419ea1'
[    2.011812] Loaded X.509 cert 'Debian Secure Boot Signer: 00a7468def'
[    2.016377] Key type ._fscrypt registered
[    2.019817] Key type .fscrypt registered
[    2.023214] AppArmor: AppArmor sha1 policy hashing enabled
[    2.027625] rtc_cmos 00:00: setting system clock to 2020-07-07T08:11:52 UTC (1594109512)
[    2.034990] Freeing unused kernel image memory: 1548K
[    2.051785] Write protecting the kernel read-only data: 16384k
[    2.056541] Freeing unused kernel image memory: 2036K
[    2.060625] Freeing unused kernel image memory: 960K
[    2.064601] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    2.069054] x86/mm: Checking user space page tables
[    2.072862] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    2.186117] Run /init as init process
[    2.247109] ena: Elastic Network Adapter (ENA) v2.1.0K
[    2.251568] ena 0000:00:05.0: Elastic Network Adapter (ENA) v2.1.0K
[    2.261756] nvme nvme0: pci function 0000:00:04.0
[    2.266497] PCI Interrupt Link [LNKD] enabled at IRQ 11
[    2.275814] ena: ena device version: 0.10
[    2.279253] ena: ena controller version: 0.0.1 implementation version 1
[    2.427815] ena 0000:00:05.0: creating 2 io queues. rx queue size: 1024 tx queue size. 1024 LLQ is ENABLED
[    2.444290] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    2.453962] ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 0e:f4:49:13:58:7d Queues 2, Placement policy: Low Latency
[    2.463977] ena 0000:00:05.0 ens5: renamed from eth0
[    2.490479] nvme nvme0: 2/0/0 default/read/poll queues
[    2.674052] GPT:Primary header thinks Alt. header is not at the end of the disk.
[    2.680655] GPT:4194303 != 104857599
[    2.684020] GPT:Alternate GPT header not at the end of the disk.
[    2.688491] GPT:4194303 != 104857599
[    2.691893] GPT: Use GNU Parted to correct GPT errors.
[    2.695925]  nvme0n1: p1 p2 p3 p4
[    4.658343]  nvme0n1: p1 p2 p3 p4
[    4.827214] EXT4-fs (nvme0n1p4): mounted filesystem with ordered data mode. Opts: (null)
[    4.985993] EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
[    5.152400] Not activating Mandatory Access Control as /sbin/tomoyo-init does not exist.
[    9.996142] systemd[1]: Inserted module 'autofs4'
[   10.197187] systemd[1]: systemd 245.5-1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
[   10.212812] systemd[1]: Detected virtualization kvm.
[   10.216651] systemd[1]: Detected architecture x86-64.
[   10.243309] systemd[1]: No hostname configured.
[   10.246944] systemd[1]: Set hostname to <localhost>.
[   10.251721] systemd[1]: Initializing machine ID from KVM UUID.
[   10.255995] systemd[1]: Installed transient /etc/machine-id file.
[   12.815867] systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
[   12.863352] systemd[1]: usr.mount: Unit is bound to inactive unit dev-nvme0n1p3.device. Stopping, too.
[   12.872118] systemd[1]: Created slice system-getty.slice.
[   12.880274] systemd[1]: Created slice system-modprobe.slice.
[   12.888606] systemd[1]: Created slice system-serial\x2dgetty.slice.
[   12.897350] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[   12.905975] systemd[1]: Created slice system-systemd\x2dgrowfs.slice.
[   12.914772] systemd[1]: Created slice User and Session Slice.
[   12.923241] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[   12.934944] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[   12.945953] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[   12.957314] systemd[1]: Reached target Local Encrypted Volumes.
[   12.965242] systemd[1]: Reached target Slices.
[   12.972266] systemd[1]: Reached target Swap.
[   12.981879] systemd[1]: Listening on RPCbind Server Activation Socket.
[   12.991826] systemd[1]: Listening on Process Core Dump Socket.
[   12.999751] systemd[1]: Listening on fsck to fsckd communication Socket.
[   13.008197] systemd[1]: Listening on initctl Compatibility Named Pipe.
[   13.016666] systemd[1]: Listening on Journal Audit Socket.
[   13.024413] systemd[1]: Listening on Journal Socket (/dev/log).
[   13.032376] systemd[1]: Listening on Journal Socket.
[   13.039743] systemd[1]: Listening on Network Service Netlink Socket.
[   13.048009] systemd[1]: Listening on udev Control Socket.
[   13.055566] systemd[1]: Listening on udev Kernel Socket.
[   13.063927] systemd[1]: Mounting Huge Pages File System...
[   13.072178] systemd[1]: Mounting POSIX Message Queue File System...
[   13.080802] systemd[1]: Mounting RPC Pipe File System...
[   13.088756] systemd[1]: Mounting Kernel Debug File System...
[   13.097431] systemd[1]: Mounting Kernel Trace File System...
[   13.106653] systemd[1]: Mounting Temporary Directory (/tmp)...
[   13.114349] systemd[1]: Condition check resulted in Kernel Module supporting RPCSEC_GSS being skipped.
[   13.122472] systemd[1]: Starting Create list of static device nodes for the current kernel...
[   13.134697] systemd[1]: Starting Load Kernel Module drm...
[   13.143438] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped.
[   13.152519] systemd[1]: Starting File System Check on Root Device...
[   13.159762] RPC: Registered named UNIX socket transport module.
[   13.159762] RPC: Registered udp transport module.
[   13.159763] RPC: Registered tcp transport module.
[   13.159763] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   13.179072] systemd[1]: Starting Journal Service...
[   13.188388] systemd[1]: Starting Load Kernel Modules...
[   13.197252] systemd[1]: Starting udev Coldplug all Devices...
[   13.207443] systemd[1]: Mounted Huge Pages File System.
[   13.215882] systemd[1]: Mounted POSIX Message Queue File System.
[   13.224844] systemd[1]: Mounted RPC Pipe File System.
[   13.233623] systemd[1]: Mounted Kernel Debug File System.
[   13.242198] systemd[1]: Mounted Kernel Trace File System.
[   13.250704] systemd[1]: Mounted Temporary Directory (/tmp).
[   13.260345] systemd[1]: Finished Create list of static device nodes for the current kernel.
[   13.273096] systemd[1]: [email protected]: Succeeded.
[   13.277888] systemd[1]: Finished Load Kernel Module drm.
[   13.289713] systemd[1]: Started File System Check Daemon to report status.
[   13.331197] systemd[1]: Finished udev Coldplug all Devices.
[   13.341832] systemd[1]: Starting Helper to synchronize boot up for ifupdown...
[   13.356885] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
[   13.361840] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
[   13.369250] IPVS: ipvs loaded.
[   13.374859] IPVS: [rr] scheduler registered.
[   13.380373] IPVS: [wrr] scheduler registered.
[   13.385437] IPVS: [sh] scheduler registered.
[   13.390740] systemd[1]: Finished Load Kernel Modules.
[   13.399450] systemd[1]: Condition check resulted in FUSE Control File System being skipped.
[   13.523597] systemd[1]: Condition check resulted in Kernel Configuration File System being skipped.
[   13.532686] systemd[1]: Starting Apply Kernel Variables...
[   13.541464] systemd[1]: Finished File System Check on Root Device.
[   13.551345] systemd[1]: Finished Helper to synchronize boot up for ifupdown.
[   13.561976] systemd[1]: Starting Remount Root and Kernel File Systems...
[   13.622497] EXT4-fs (nvme0n1p3): re-mounted. Opts: (null)
[   13.628245] EXT4-fs (nvme0n1p4): re-mounted. Opts: errors=remount-ro
[   13.637724] systemd[1]: Finished Apply Kernel Variables.
[   13.647407] systemd[1]: Finished Remount Root and Kernel File Systems.
[   13.657945] systemd[1]: Starting Initial cloud-init job (pre-networking)...
[   13.668300] systemd[1]: Starting Grow File System on /...
[   13.682757] EXT4-fs (nvme0n1p4): resizing filesystem from 257531 to 12840443 blocks
[   13.690551] systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
[   13.699646] systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped.
[   13.709451] systemd[1]: Starting Load/Save Random Seed...
[   13.728767] systemd[1]: Starting Create System Users...
[   13.887670] systemd[1]: Started Journal Service.
[   14.093777] systemd-journald[262]: Received client request to flush runtime journal.
[   14.303267] EXT4-fs (nvme0n1p4): resized filesystem to 12840443
[   14.992443] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
[   15.005986] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[   15.014053] ACPI: Power Button [PWRF]
[   15.020573] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
[   15.028045] ACPI: Sleep Button [SLPF]
[   15.191538] cryptd: max_cpu_qlen set to 1000
[   15.255027] AVX2 version of gcm_enc/dec engaged.
[   15.258932] AES CTR mode by8 optimization enabled


Debian GNU/Linux bullseye/sid ip-10-250-13-220 ttyS0

ip-10-250-13-220 login: [   53.814134] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   53.821526] Bridge firewalling registered
[   54.003246] Initializing XFRM netlink socket
[   93.821693] docker0: port 1(veth7ef775a) entered blocking state
[   93.826175] docker0: port 1(veth7ef775a) entered disabled state
[   93.830912] device veth7ef775a entered promiscuous mode
[   94.758996] eth0: renamed from veth2b773b3
[   94.778140] IPv6: ADDRCONF(NETDEV_CHANGE): veth7ef775a: link becomes ready
[   94.782965] docker0: port 1(veth7ef775a) entered blocking state
[   94.787268] docker0: port 1(veth7ef775a) entered forwarding state
[   94.791763] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[   95.168768] docker0: port 1(veth7ef775a) entered disabled state
[   95.173301] veth2b773b3: renamed from eth0
[   95.218952] docker0: port 1(veth7ef775a) entered disabled state
[   95.227072] device veth7ef775a left promiscuous mode
[   95.231066] docker0: port 1(veth7ef775a) entered disabled state
[   97.100280] docker0: port 1(vetha64b7aa) entered blocking state
[   97.105063] docker0: port 1(vetha64b7aa) entered disabled state
[   97.111917] device vetha64b7aa entered promiscuous mode
[   97.117733] docker0: port 1(vetha64b7aa) entered blocking state
[   97.122507] docker0: port 1(vetha64b7aa) entered forwarding state
[   97.127664] docker0: port 1(vetha64b7aa) entered disabled state
[   97.373993] eth0: renamed from veth0b75117
[   97.401979] IPv6: ADDRCONF(NETDEV_CHANGE): vetha64b7aa: link becomes ready
[   97.407101] docker0: port 1(vetha64b7aa) entered blocking state
[   97.411598] docker0: port 1(vetha64b7aa) entered forwarding state
[   97.774603] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
[   97.830805] docker0: port 1(vetha64b7aa) entered disabled state
[   97.835508] veth0b75117: renamed from eth0
[   97.887623] docker0: port 1(vetha64b7aa) entered disabled state
[   97.895864] device vetha64b7aa left promiscuous mode
[   97.899707] docker0: port 1(vetha64b7aa) entered disabled state
[  101.611385] docker0: port 1(veth9f88e57) entered blocking state
[  101.616302] docker0: port 1(veth9f88e57) entered disabled state
[  101.633937] device veth9f88e57 entered promiscuous mode
[  101.664282] docker0: port 2(veth085d0d4) entered blocking state
[  101.669229] docker0: port 2(veth085d0d4) entered disabled state
[  101.689404] device veth085d0d4 entered promiscuous mode
[  101.702474] docker0: port 2(veth085d0d4) entered blocking state
[  101.707295] docker0: port 2(veth085d0d4) entered forwarding state
[  101.712248] docker0: port 2(veth085d0d4) entered disabled state
[  102.321755] eth0: renamed from veth00894cf
[  102.373528] IPv6: ADDRCONF(NETDEV_CHANGE): veth085d0d4: link becomes ready
[  102.378638] docker0: port 2(veth085d0d4) entered blocking state
[  102.383421] docker0: port 2(veth085d0d4) entered forwarding state
[  102.388173] eth0: renamed from veth04f0c05
[  102.404277] IPv6: ADDRCONF(NETDEV_CHANGE): veth9f88e57: link becomes ready
[  102.409613] docker0: port 1(veth9f88e57) entered blocking state
[  102.414501] docker0: port 1(veth9f88e57) entered forwarding state
[  103.061702] docker0: port 1(veth9f88e57) entered disabled state
[  103.066784] veth04f0c05: renamed from eth0
[  103.139653] docker0: port 1(veth9f88e57) entered disabled state
[  103.146285] device veth9f88e57 left promiscuous mode
[  103.150429] docker0: port 1(veth9f88e57) entered disabled state
[  103.252831] docker0: port 2(veth085d0d4) entered disabled state
[  103.258226] veth00894cf: renamed from eth0
[  103.296815] docker0: port 2(veth085d0d4) entered disabled state
[  103.307626] device veth085d0d4 left promiscuous mode
[  103.311666] docker0: port 2(veth085d0d4) entered disabled state
[ 1218.211895] audit: type=1305 audit(1594110729.468:951): op=set audit_pid=0 old=403 auid=4294967295 ses=4294967295 subj==unconfined res=1
[ 1218.229240] audit: type=1131 audit(1594110729.488:952): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=cloud-init-local comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.246457] audit: type=1131 audit(1594110729.504:953): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-sysctl comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.263597] audit: type=1131 audit(1594110729.520:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-modules-load comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.280674] audit: type=1131 audit(1594110729.536:955): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=auditd comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.297368] audit: type=1131 audit(1594110729.556:956): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.332152] audit: type=1131 audit(1594110729.588:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-growfs@- comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.378147] audit: type=1131 audit(1594110729.636:958): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.395419] audit: type=1131 audit(1594110729.652:959): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-sysusers comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.413105] audit: type=1131 audit(1594110729.672:960): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-remount-fs comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 1218.532173] systemd-shutdown[1]: Syncing filesystems and block devices.
[ 1218.568445] systemd-shutdown[1]: Sending SIGTERM to remaining processes...
[ 1218.574150] systemd-journald[262]: Received SIGTERM from PID 1 (systemd-shutdow).
[ 1218.585287] systemd-shutdown[1]: Sending SIGKILL to remaining processes...
[ 1218.591025] systemd-shutdown[1]: Unmounting file systems.
[ 1218.595748] [20515]: Unmounting '/usr'.
[ 1218.599237] [20515]: Failed to unmount /usr: Device or resource busy
[ 1218.604019] [20516]: Remounting '/' read-only in with options 'errors=remount-ro'.
[ 1218.634002] EXT4-fs (nvme0n1p4): re-mounted. Opts: errors=remount-ro
[ 1218.650939] systemd-shutdown[1]: Not all file systems unmounted, 1 left.
[ 1218.655552] systemd-shutdown[1]: Deactivating swaps.
[ 1218.659427] systemd-shutdown[1]: All swaps deactivated.
[ 1218.663373] systemd-shutdown[1]: Detaching loop devices.
[ 1218.667486] systemd-shutdown[1]: All loop devices detached.
[ 1218.671547] systemd-shutdown[1]: Detaching DM devices.
[ 1218.675440] systemd-shutdown[1]: All DM devices detached.
[ 1218.679529] systemd-shutdown[1]: Unmounting file systems.
[ 1218.684093] [20517]: Unmounting '/usr'.
[ 1218.687430] [20517]: Failed to unmount /usr: Device or resource busy
[ 1218.691916] systemd-shutdown[1]: Not all file systems unmounted, 1 left.
[ 1218.696442] systemd-shutdown[1]: Cannot finalize remaining file systems, continuing.
[ 1218.704749] systemd-shutdown[1]: Failed to finalize  file systems, ignoring
[ 1218.709527] systemd-shutdown[1]: Syncing filesystems and block devices.
[ 1218.714123] systemd-shutdown[1]: Powering off.
[ 1218.873535] ACPI: Preparing to enter system sleep state S5
[ 1218.877690] reboot: Power down

MTU customizer unit fails if eth0 not available

What happened:

When creating a Shoot with an ubuntu image, the mtu-customizer.service (systemd) fails because it expects the eth0 network interface to exist.
The MTU customizer should set the MTU of the primary network interface to a common value (1460).

Tested on AWS with coreos and ubuntu.

What you expected to happen:
mtu-customizer service should have run successfully.
MTU should be 1460.

How to reproduce it (as minimally and precisely as possible):

  • AWS Shoot with a machine type that is capable of ENA, like m5.large
  • Machine image ubuntu (used ubuntu in version 18.4.20190617)

Execute systemctl status mtu-customizer.service and observe that it is in failed status.
Execute ifconfig | grep eth0 and see that eth0 is missing.

Anything else we need to know?:

Ubuntu uses predictable network interface names by default, so the ENA network interface is not named eth0.
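
A possible fix is to derive the primary interface from the default route instead of hardcoding eth0. A minimal sketch, assuming iproute2 is available (the actual mtu-customizer implementation may differ):

#!/bin/sh
# Hypothetical sketch: set the MTU on whatever interface holds the default
# route (e.g. ens5 on ENA instances) instead of assuming it is named eth0.
IFACE="$(ip -o route show default | awk '{print $5; exit}')"
if [ -n "$IFACE" ]; then
    ip link set dev "$IFACE" mtu 1460
fi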
Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Rework the infrastructure integration test with envtest pkg

How to categorize this issue?

/area testing
/kind enhancement
/priority normal
/platform aws

Why is this needed:
Currently the infrastructure integration test invokes the actuator's Reconcile and Delete funcs directly, which leaves some points uncovered/untested: the controller-runtime predicates and the controller-runtime inner machinery that eventually results in the Reconcile and Delete invocations of the actuator.
The https://godoc.org/sigs.k8s.io/controller-runtime/pkg/envtest package looks like a good entry point for integration tests. It allows tests against an existing cluster or against a partial "ControlPlane" (apiserver and etcd started by the pkg, which might be enough for some cases). We could obtain the environment configuration, create a manager from it, add the controller we want to test to this manager (we already have the AddToManager funcs), start the manager, and then finally start the test.
I think this will improve the tests, as they will concentrate more on contract testing (applying and deleting CRs and checking their status) rather than on the controller/actuator inner workings (for example, instantiating a chartrenderer and passing it to the Reconcile func).
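
A rough sketch of such a setup, assuming controller-runtime's envtest and a hypothetical AddToManager registration func (note that the manager.Start signature differs between controller-runtime versions):

package infrastructure_test

import (
    "context"
    "testing"

    "sigs.k8s.io/controller-runtime/pkg/envtest"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func TestInfrastructureController(t *testing.T) {
    // Start a partial control plane (kube-apiserver + etcd); the binaries are
    // located via KUBEBUILDER_ASSETS. UseExistingCluster switches to a real cluster.
    testEnv := &envtest.Environment{}
    cfg, err := testEnv.Start()
    if err != nil {
        t.Fatal(err)
    }
    defer testEnv.Stop()

    mgr, err := manager.New(cfg, manager.Options{})
    if err != nil {
        t.Fatal(err)
    }
    // Hypothetical: register the controller under test so reconciliation is
    // driven through the real controller-runtime machinery (predicates, work
    // queue) instead of calling the actuator directly:
    // if err := infrastructure.AddToManager(mgr); err != nil { t.Fatal(err) }

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    go func() { _ = mgr.Start(ctx) }() // older versions take a stop channel instead

    // ... apply an Infrastructure CR and assert on its status here ...
}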

SeedNetworkPoliciesTest fails always

From gardener-attic/gardener-extensions#293

What happened:
The test defined in SeedNetworkPoliciesTest.yaml always fails.
Most of the time, the following 3 specs fail:

2019-07-29 11:32:33	Test Suite Failed
2019-07-29 11:32:33	Ginkgo ran 1 suite in 3m20.280138435s
2x		2019-07-29 11:32:33	
2019-07-29 11:32:32	FAIL! -- 375 Passed | 3 Failed | 0 Pending | 126 Skipped
2019-07-29 11:32:32	Ran 378 of 504 Specs in 85.218 seconds
2019-07-29 11:32:32	
2019-07-29 11:32:32	> /go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods elasticsearch-logging [AfterEach] should block connection to "Garden Prometheus" prometheus-web.garden:80
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1062
2019-07-29 11:32:32	[Fail] Network Policy Testing components are selected by correct policies [AfterEach] gardener-resource-manager
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods gardener-resource-manager [AfterEach] should block connection to "External host" 8.8.8.8:53

@mvladev can you please check?

Environment:
TestMachinery on all landscapes (dev, ..., live)

Update credentials during Worker deletion

From gardener-attic/gardener-extensions#523

Steps to reproduce:

  1. Create a Shoot with valid cloud provider credentials my-secret.
  2. Ensure that the Shoot is successfully created.
  3. Invalidate the my-secret credentials.
  4. Delete the Shoot.
  5. Update my-secret credentials with valid ones.
  6. Ensure that the Shoot deletion fails while waiting for the Worker to be deleted.

Currently we do not sync the cloudprovider credentials into the <Provider>MachineClass during Worker deletion. Hence machine-controller-manager fails to delete the machines because it still uses the invalidated credentials.
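
A possible shape of the fix, sketched with the controller-runtime client (the secret name and data keys below are assumptions for illustration, not the actual MachineClass contract):

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// Hypothetical sketch: before triggering machine deletion, copy the current
// cloudprovider credentials into the secret referenced by the MachineClass so
// that machine-controller-manager deletes the machines with valid credentials.
func syncMachineClassCredentials(ctx context.Context, c client.Client, namespace string, creds *corev1.Secret) error {
    secret := &corev1.Secret{}
    key := client.ObjectKey{Namespace: namespace, Name: "machine-class-credentials"} // assumed name
    if err := c.Get(ctx, key, secret); err != nil {
        return err
    }
    if secret.Data == nil {
        secret.Data = map[string][]byte{}
    }
    // Assumed keys; the real secret layout may differ.
    secret.Data["providerAccessKeyId"] = creds.Data["accessKeyID"]
    secret.Data["providerSecretAccessKey"] = creds.Data["secretAccessKey"]
    return c.Update(ctx, secret)
}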

[AWS] Allow using existing subnets

End-users are interested in not only using an existing VPC but also existing subnets for their AWS shoot clusters. Today, the AWS infrastructure extension always creates new, dedicated subnets (even if an existing VPC is used).

We could allow the following InfrastructureConfig:

apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    id: vpc-123456 # re-use existing VPC
  zones:
  - name: eu-west-1a
    internalID: subnet-123456 # re-use existing subnet
    publicID: subnet-7890ab # re-use existing subnet
    workersID: subnet-cdefgh # re-use existing subnet

We could even allow a mixed setup:

apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    id: vpc-123456 # re-use existing VPC
  zones:
  - name: eu-west-1a
    internal: 10.250.112.0/22 # create a new, dedicated subnet for internal LBs
    publicID: subnet-7890ab # re-use existing subnet
    workersID: subnet-cdefgh # re-use existing subnet

Constraints:

  • If an existing subnet for public load balancers is used then we require that a NAT gateway + elastic IP is attached to this subnet (similar to how we require that an internet gateway is attached if an existing VPC is used)
  • We keep creating dedicated route tables between the subnets.

Add option to enable encryption to StorageClasses

How to categorize this issue?

/area security
/kind enhancement
/priority normal
/platform aws

What would you like to be added:

Add an option to configure the StorageClasses with the encrypted: "true" parameter.
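
For reference, a StorageClass with EBS encryption enabled could look like the following sketch (shown with the in-tree provisioner; the CSI driver ebs.csi.aws.com accepts the same encrypted parameter):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-encrypted
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true"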

Why is this needed:

Security

Specify additional security groups for nodes.

It should be possible to deploy 2 or more Shoots in the same network (in AWS, a VPC) and to configure the security groups on the nodes so that the pods/nodes from the different clusters are routable to each other.

This is a requirement for Istio multicluster:

The usage of an RFC1918 network, VPN, or alternative more advanced network techniques to meet the following requirements:
- Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the multicluster environment and may not overlap.
- All pod CIDRs in every cluster must be routable to each other.
- All Kubernetes control plane API servers must be routable to each other.

network.vpc.gatewayEndpoints are not created when using existing VPC

How to categorize this issue?

/area networking
/kind bug
/priority normal
/platform aws

What happened:
networks.vpc.gatewayEndpoints are currently only applied when a new VPC is requested; when an existing VPC is referenced via networks.vpc.id, the corresponding endpoints are not created.

How to reproduce it (as minimally and precisely as possible):

  1. Create an Infrastructure with networks.vpc.id and networks.vpc.gatewayEndpoints:
apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  vpc:
    id: vpc-123456
    gatewayEndpoints:
    - s3
  zones:
  - name: eu-west-1a
    internal: 10.250.112.0/22
    public: 10.250.96.0/22
    workers: 10.250.0.0/19
  2. Ensure that the corresponding VPC endpoint resources won't be created.

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version: v1.12.0
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Validate cloudprovider credentials

(recreating issue from the g/g repo: gardener/gardener#2293)

What would you like to be added:
Add validation for cloudprovider secret

Why is this needed:
Currently, when uploading secrets via the UI, all secret fields are required and validated. However, when creating those credentials via the cloudprovider secret, there is no validation. This results in errors such as the following (specific to Azure, but a similar error would be generated for AWS):

Flow "Shoot cluster reconciliation" encountered task errors: [task "Waiting until shoot infrastructure has been reconciled" failed: failed to create infrastructure: retry failed with context deadline exceeded, last error: extension encountered error during reconciliation: Error reconciling infrastructure: secret shoot--xxxx--xxxx/cloudprovider doesn't have a subscription ID] Operation will be retried.

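For the AWS case, a validation sketch could check the required data keys up front; accessKeyID and secretAccessKey are the conventional keys for the AWS cloudprovider secret, but treat the exact rules as an assumption here:

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// validateCloudProviderSecret fails early with a meaningful message instead of
// surfacing the error deep inside the infrastructure reconciliation flow.
func validateCloudProviderSecret(secret *corev1.Secret) error {
    for _, key := range []string{"accessKeyID", "secretAccessKey"} {
        if len(secret.Data[key]) == 0 {
            return fmt.Errorf("secret %s/%s is missing required field %q", secret.Namespace, secret.Name, key)
        }
    }
    return nil
}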

Disable Default ECR Access

What would you like to be added:
Either a means to switch off ECR access for the worker nodes or a way to influence the instance profile for the worker nodes.

Why is this needed:
There are use cases such as CI/CD where the cluster creates and loads images to/from a registry, but is itself offered as a multi-tenant app/service in one cluster. Running custom code is always dangerous, and even though PSPs and NetworkPolicies help to secure a cluster, as an additional security layer it would make sense to not generally allow the worker nodes to access the ECR that these apps/services utilise. However, Gardener generally enables ECR access (a historic setting), which is a problem for these use cases:

"Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

That made some sense in the early days of Gardener, since Gardener focuses on bring-your-own-account, but it is problematic for use cases such as the one above, or where multi-tenancy is implemented by means of separate clusters within one account that all use ECR but want to isolate access.

Implementation proposals:

  • We could add another "switch" to the shoot spec (deprecated from the start) and that, if set to true, enables ECR access. However, if omitted/not set, ECR access would be disabled. This way we could phase out this permission over time. Owners of use cases such as the above could then use clean imagePullSecrets to enforce the behaviour they want. This however would make it ugly for other stakeholders that also use ECR, but not in a multi-tenant way.
  • Same as above, but not deprecated. Then the default can/should maybe rather be enabled.
  • When extensibility is ready, it would be possible to extend the shoot spec with custom configuration that could be evaluated by a custom controller that would then make the necessary modifications, e.g. imprinting a custom instance profile.

Ignoring Custom Tags on TF Resources

Hi team,

are custom tags currently ignored by the TF reconciliation loop? Specifically: what happens if I add a tag to a subnet created by the AWS extension provider?

Regards,
Andreas

/kind question

Allow specifying NAT Gateways, subnets and routes when creating cluster.

How to categorize this issue?

/area cost
/kind enhancement
/priority normal
/platform aws

What would you like to be added:

Allow for the usage of pre-created AWS infrastructure, as it would reduce costs (e.g. by sharing NAT Gateways between clusters) and speed up cluster creation. This would also allow for full control over the network routing and design.

All resources must be tagged with "kubernetes.io/cluster/{{cluster-id}}" = "1" where cluster-id is shoot--{{project-name}}--{{shoot-name}} and the subnet used for internal LB should also be tagged with "kubernetes.io/role/internal-elb" = "use".

Why is this needed:

  • Reduced costs
  • Faster cluster creation
  • Network customization

Introduce `zone` label on the MachineClass.

What would you like to be added: We recently introduced the scale-to-zero feature for the cluster-autoscaler. This feature requires the autoscaler to know the zone for which the MachineClass/MachineDeployment is being created as part of its sub-functionality. Following are the related discussions/issues:

Without this, the autoscaler can't scale up from zero for StatefulSet workloads.

Also to note, we plan to rework the architecture of the scale-to-zero feature. There we may add a zone field to the AWSMachineClass's Spec, but that approach has not been completely finalized yet.

Image overwrite for images defined by _images.yaml are not overwritten

How to categorize this issue?

/area delivery
/kind bug
/priority normal
/platform aws

What happened:

Currently images that are defined by the _images.yaml are added to the component descriptor (see the script here).

But these images are not overwritten by the image vector overwrite; therefore, the open-source images are used instead of the overwritten image references.

cc @AndreasBurger @ccwienk

What you expected to happen:

The images should be overwritten as defined by the component descriptor.

Duplicate KeyPair/Roles on AWS

What happened:
Shoot gets stuck during creation with the error message that a key pair and two roles (one with "nodes" and one with "bastion") belonging to the shoot are already present on the infrastructure and thus can't be created.
(Currently trying to reproduce the problem, will add exact error message afterwards)

What you expected to happen:
The shoot is created.

How to reproduce it (as minimally and precisely as possible):
Not completely sure, but it seems that this happens when the seed needs to scale up during shoot creation.

Anything else we need to know?:
This has been observed for AWS shoots only. It seems to happen when the shoot is created while the seed is full, so that during shoot creation the auto-scaling of the seed cluster kicks in and scales up.
The error can be resolved by manually removing the duplicate key pair and roles from the infrastructure.

Environment:

  • Gardener version: 0.26.2
  • Cloud provider or hardware configuration: AWS

Missing the mutating webhook leads to broken nginx ingress controller

How to categorize this issue?
/area control-plane
/platform aws

What happened:

  1. Created a cluster with enabled nginx-ingress addon
  2. I assume the webhook gardener-extension-provider-aws was down/unreachable in that moment
  3. The nginx ConfigMap in the shoot's kube-system ns was created with use-proxy-protocol: "false" (obviously the default from https://github.com/gardener/gardener/blob/master/charts/shoot-addons/charts/nginx-ingress/values.yaml#L15)
  4. However, the LBs in AWS are configured with the proxy protocol enabled
  5. As a result, all requests via the LB to the nginx ingress respond with HTTP 400, because nginx doesn't understand the proxy protocol.

What you expected to happen:
Nginx ingress to work.

Anything else we need to know?:
I think part of the problem is the gardener-extension-provider-aws webhook's failurePolicy: Ignore.
Maybe the webhook configuration could be more selective so that failurePolicy: Fail could be used instead.
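
For context, the webhook's job here essentially comes down to flipping one ConfigMap value; a sketch of the intended result (the ConfigMap name is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: addons-nginx-ingress-controller  # assumed name
  namespace: kube-system
data:
  use-proxy-protocol: "true"  # must match the proxy protocol setting on the AWS LBs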

Infrastructure reconciliation should validate user-configured DHCP options

What would you like to be added:
If the user provides a VPC ID then the Infrastructure controller already performs certain checks. For example, it validates that the VPC has an attached internet gateway.
Let's extend these checks with a new one that validates the associated DHCP options. Particularly, we should enforce that the configured domain-name property is ec2.internal for the us-east-1 region, while it is <region-name>.compute.internal for all other regions.

Ref: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html
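
A sketch of the proposed check; the expected values follow the AWS documentation referenced above, while the wiring into the Infrastructure controller is left open:

import "fmt"

// expectedDomainName returns the domain-name the VPC's DHCP options must configure.
func expectedDomainName(region string) string {
    if region == "us-east-1" {
        return "ec2.internal"
    }
    return region + ".compute.internal"
}

// validateDHCPDomainName compares the actual DHCP options value against it.
func validateDHCPDomainName(region, actual string) error {
    if want := expectedDomainName(region); actual != want {
        return fmt.Errorf("VPC DHCP options domain-name is %q, expected %q for region %s", actual, want, region)
    }
    return nil
}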

/cc @MartinWeindel

Why is this needed:
Prevent the user from misconfiguring the shoot clusters and provide helpful/meaningful error messages instead.

Forbid replacing secret with new account for existing Shoots

What would you like to be added:
Currently we don't have a validation that would prevent a user from replacing their cloudprovider secret with credentials for another account. Basically, we only have a warning in the dashboard - ref gardener/dashboard#422.

Steps to reproduce:

  1. Get an existing Shoot.
  2. Update its secret with credentials for another account.
  3. Ensure that on new reconciliation, new infra resources will be created in the new account. The old infra resources and machines in the old account will leak.
    For me the reconciliation failed at
    lastOperation:
      description: Waiting until the Kubernetes API server can connect to the Shoot
        workers
      lastUpdateTime: "2020-02-20T14:56:43Z"
      progress: 89
      state: Processing
      type: Reconcile

with reason

$ k describe svc -n kube-system vpn-shoot
Events:
  Type     Reason                   Age                  From                Message
  ----     ------                   ----                 ----                -------
  Normal   EnsuringLoadBalancer     7m38s (x6 over 10m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed   7m37s (x6 over 10m)  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

Why is this needed:
Prevent users from harming themselves.
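
One way to implement such a validation would be to compare the AWS account behind the old and the new credentials, e.g. via STS. A sketch using aws-sdk-go (wiring it into the secret update path is left open):

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

// accountID resolves the AWS account that owns the given credentials.
func accountID(accessKeyID, secretAccessKey string) (string, error) {
    sess, err := session.NewSession(&aws.Config{
        Credentials: credentials.NewStaticCredentials(accessKeyID, secretAccessKey, ""),
    })
    if err != nil {
        return "", err
    }
    out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
    if err != nil {
        return "", err
    }
    return aws.StringValue(out.Account), nil
}

// rejectAccountChange could then guard cloudprovider secret updates.
func rejectAccountChange(oldAccount, newAccount string) error {
    if oldAccount != newAccount {
        return fmt.Errorf("cloudprovider secret must not switch AWS accounts (old %s, new %s)", oldAccount, newAccount)
    }
    return nil
}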

*-packr.go files are ignored by git

What happened:
Currently changes in charts/images.yaml are not applied in the docker image if make generate is not run.
Also, check-generate won't detect any outdated generated files.

What you expected to happen:
After updating an entry in charts/images.yaml and building a docker image after that, I would expect my change to be respected.

How to reproduce it (as minimally and precisely as possible):

  1. Update an image tag in charts/images.yaml

For example update the csi-snapshotter image tag from v2.1.0 to v2.1.1

 - name: csi-snapshotter
   sourceRepository: github.com/kubernetes-csi/external-snapshotter
   repository: quay.io/k8scsi/csi-snapshotter
-  tag: "v2.1.0"
+  tag: "v2.1.1"
  2. Build and push an image
$ docker build -t foo/gardener-extension-provider-aws:bar -f Dockerfile --target gardener-extension-provider-aws .
$ docker push foo/gardener-extension-provider-aws:bar
  3. Update the ControllerRegistration with the newly built image and ensure that the charts/images.yaml file in the new container is updated with the change from step 1.

  4. Ensure that the update from step 1 has no effect and the extension controller is still using the old tag (v2.1.0).

To fix this issue, the developer currently needs to run make generate first and only then build and push their docker image.

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:
