rancher / terraform-provider-rke

Terraform provider plugin for deploying Kubernetes clusters with RKE (Rancher Kubernetes Engine)

License: Mozilla Public License 2.0

Makefile 0.72% Go 97.39% Shell 1.67% Dockerfile 0.23%


terraform-provider-rke's Issues

Validate file paths for addons_include yaml file paths

I want to use addons_include to automatically install some YAML manifests for ingress certs, PSPs, and similar things.

resource rke_cluster "cluster" {
    # ...
    addons_include = [
      "./k8s-addons/nginx-ingress-cert.yaml"
    ]
}

This doesn't work because of a simple mistake on my side: the file was actually called nginx-ingress-cert.yml, i.e. .yml instead of .yaml. Unfortunately this doesn't result in any error, neither on terraform plan nor on terraform apply.

Having those paths validated would make this easier to troubleshoot, so that an error occurs when a specified file doesn't exist.
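As a stopgap until such validation exists, the manifest can be read on the Terraform side so that a bad path fails at plan time. A minimal workaround sketch (not the provider's own validation), assuming the rke_cluster addons attribute accepts inline YAML as used in another issue on this page:

# Workaround sketch only: file() fails during plan/apply if the path is wrong,
# unlike addons_include, which silently passes the path through to RKE.
resource rke_cluster "cluster" {
  # ...
  addons = "${file("./k8s-addons/nginx-ingress-cert.yaml")}"
}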

Terraform import

Any chance to provide import functionality?

Currently, terraform import rke_cluster.cluster cluster-id-123 returns: Error: rke_cluster.cluster (import id: stage): import rke_cluster.cluster (id: stage): resource rke_cluster doesn't support import

Thanks.

Running in CI

We are running Terraform and this provider in CI, but it's not clear to me how to get the generated kubeconfig to our devs afterwards so they can connect with kubectl and manage the cluster. What do you recommend?
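One pattern that could work, sketched here rather than prescribed: have Terraform write the generated kubeconfig to a file that the CI job publishes as an artifact (or pushes to a secret store), and optionally expose it as a sensitive output.

# Sketch: persist the kubeconfig so the CI pipeline can hand it over to devs.
resource "local_file" "kube_cluster_yaml" {
  filename = "${path.root}/kube_config_cluster.yml"
  content  = "${rke_cluster.cluster.kube_config_yaml}"
}

# Alternatively, read it later with `terraform output kube_config_yaml`.
output "kube_config_yaml" {
  value     = "${rke_cluster.cluster.kube_config_yaml}"
  sensitive = true
}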

Error should include more than "Cluster must have at least one etcd plane host" when running terraform apply

I'm using Terraform 0.11.7 and the new 0.3.0 rke provider, and when attempting to use nodes_conf it always returns an error on apply:

Error: Error applying plan:

1 error(s) occurred:

* rke_cluster.cluster: 1 error(s) occurred:

* rke_cluster.cluster: Cluster must have at least one etcd plane host

I get the error both in my own config and in the simple example config from https://github.com/yamamoto-febc/terraform-provider-rke/blob/master/examples/multiple_nodes/example.tf

terraform plan doesn't complain, and I see that the error is actually from RKE itself, so it seems that the config isn't actually making its way to RKE when specified via nodes_conf.

Removing a Node doesn't work

I built an rke cluster on fresh VMs via:

terraform apply -var node_ips='["10.22.141.17","10.22.141.18","10.22.141.19"]'

TF file:

variable "node_ips" {
  type = "list"
  description = "List of node ips to provision"
}

locals {
  count = "${length(var.node_ips)}"
}

data rke_node_parameter "nodes" {
  count   = "${local.count}"
  address = "${var.node_ips[count.index]}"
  user    = "rancher"
  role    = ["controlplane", "worker", "etcd"]
}

resource rke_cluster "cluster" {
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]
  ssh_agent_auth = true
  ....omitted....
}
resource "local_file" "kube_cluster_yaml" {
  filename = "${path.root}/kube_config_cluster.yml"
  content = "${rke_cluster.cluster.kube_config_yaml}"
}

resource "local_file" "rke_cluster_yaml" {
  filename = "${path.root}/cluster.yml"
  content = "${rke_cluster.cluster.rke_cluster_yaml}"
}

As expected a cluster with 3 Nodes gets created.
I then remove an IP from the node_ips variable and rerun terraform:

$ terraform apply -var node_ips='["10.22.141.17","10.22.141.18"]'
data.rke_node_parameter.nodes[0]: Refreshing state...
data.rke_node_parameter.nodes[1]: Refreshing state...
rke_cluster.cluster: Refreshing state... (ID: 10.22.141.17)
local_file.kube_cluster_yaml: Refreshing state... (ID: f7c055d88ecb4f086055cf9bcd6df0a10c1faaff)
local_file.rke_cluster_yaml: Refreshing state... (ID: 15cd80558fb10b085fb7f0d636f6e490b26b80b2)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.rke.rke_cluster.cluster
      nodes_conf.#: "3" => "2"
      nodes_conf.2: "{\"address\":\"10.22.141.19\",\"role\":[\"controlplane\",\"worker\",\"etcd\"],\"user\":\"rancher\"}" => ""


Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.rke.rke_cluster.cluster: Modifying... (ID: 10.22.141.17)
  nodes_conf.#: "3" => "2"
  nodes_conf.2: "{\"address\":\"10.22.141.19\",\"role\":[\"controlplane\",\"worker\",\"etcd\"],\"user\":\"rancher\"}" => ""
module.rke.rke_cluster.cluster: Still modifying... (ID: 10.22.141.17, 10s elapsed)
module.rke.rke_cluster.cluster: Still modifying... (ID: 10.22.141.17, 20s elapsed)
module.rke.rke_cluster.cluster: Still modifying... (ID: 10.22.141.17, 30s elapsed)
module.rke.rke_cluster.cluster: Still modifying... (ID: 10.22.141.17, 40s elapsed)
module.rke.rke_cluster.cluster: Still modifying... (ID: 10.22.141.17, 50s elapsed)
module.rke.rke_cluster.cluster: Modifications complete after 58s (ID: 10.22.141.17)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

However kubectl still shows 3 nodes:

$ kubectl --kubeconfig=kube_config_cluster.yml get nodes
NAME           STATUS   ROLES                      AGE     VERSION
10.22.141.17   Ready    controlplane,etcd,worker   8m31s   v1.12.4
10.22.141.18   Ready    controlplane,etcd,worker   8m31s   v1.12.4
10.22.141.19   Ready    controlplane,etcd,worker   8m31s   v1.12.4

The cluster.yml also still contains the 10.22.141.19 node. Only after running terraform apply again is the cluster.yml updated; it seems the cluster.yml is written before the cluster is updated.

Running rke up by hand with the generated cluster.yml (10.22.141.19 is missing from the file) also doesn't remove the node.
Building the 3-node cluster with terraform, removing a node from the cluster.yml by hand, and running rke up removes the node from the cluster correctly.

Versions:

$ terraform version
Terraform v0.11.11
+ provider.local v1.1.0
+ provider.rke v0.7.0

Add attr to rke_cluster for outputting RKE compatible cluster.yml

Make it possible to execute rke commands against the cluster built by Terraform,

e.g. rke etcd snapshot-save --config cluster.yml.

Use as follows:

resource rke_cluster "cluster" {
  nodes = [
    {
      address      = "192.2.0.1"
      user         = "rancher"
      role         = ["controlplane", "worker", "etcd"]
      ssh_key_path = "/home/user/.ssh/id_rsa"
    }
  ]
}

output "cluster_config" {
  value = "${rke_cluster.cluster.cluster_yml}"
}
# after "terraform apply"
$ terraform output cluster_config
nodes:
  - address: 192.2.0.1
    user: rancher
    role:
    - controlplane
    - worker
    - etcd
    ssh_key_path: /home/user/.ssh/id_rsa
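For what it's worth, other configurations in this issue list already reference an attribute along these lines; a sketch assuming the attribute is named rke_cluster_yaml, as in those examples:

# Sketch based on other examples on this page: write the RKE-compatible
# cluster.yml next to the Terraform configuration.
resource "local_file" "rke_cluster_yaml" {
  filename = "${path.root}/cluster.yml"
  content  = "${rke_cluster.cluster.rke_cluster_yaml}"
}

# afterwards, e.g.:
# $ rke etcd snapshot-save --config cluster.yml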

Provider produced inconsistent final plan

Just been hit with this:

# terraform apply

rke_cluster.cluster: Refreshing state... [id=server1]
local_file.client_cert: Refreshing state... [id=54cc9b061a432d3fee69464ef42c46aa083bf1ac]
local_file.ca_crt: Refreshing state... [id=f9a2dc701b64b84a59accb375ed2a441cc023985]
local_file.client_key: Refreshing state... [id=bf00257b95f9c5249c1fb2d302ed85da5b29e887]
local_file.kube_cluster_yaml: Refreshing state... [id=18473fc352fbffb624ab50599199a116260ab275]
local_file.client_cert: Destroying... [id=54cc9b061a432d3fee69464ef42c46aa083bf1ac]
local_file.kube_cluster_yaml: Destroying... [id=18473fc352fbffb624ab50599199a116260ab275]
local_file.client_cert: Destruction complete after 0s
local_file.kube_cluster_yaml: Destruction complete after 0s
rke_cluster.cluster: Modifying... [id=server1]
rke_cluster.cluster: Still modifying... [id=server1, 10s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 20s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 30s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 40s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 50s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 1m0s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 1m10s elapsed]
rke_cluster.cluster: Still modifying... [id=server1, 1m20s elapsed]
rke_cluster.cluster: Modifications complete after 1m23s [id=server1]
local_file.kube_cluster_yaml: Creating...
local_file.kube_cluster_yaml: Creation complete after 0s [id=8d9846133c708d432310e3212628cd7581a2adb9]

Error: Provider produced inconsistent final plan

When expanding the plan for local_file.client_cert to include new values
learned so far during apply, provider "local" produced an invalid new value
for .content: was cty.StringVal("-----BEGIN
CERTIFICATE-----\nMIIC6TCCAdGgAwIBAgIIJ8yNqq6AYRgwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE\nAxMHa3ViZS1jYTAeFw0xOTA2MTIxOTU1NTZaFw0yOTA2MDkyMDM2MzBaMC4xFzAV\nBgNVBAoTDnN5c3RlbTptYXN0ZXJzMRMwEQYDVQQDEwprdWJlLWFkbWluMIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnVBAMK+PmEpUqgxjeNx9/VRixPbL\nI91fLH91JxH589Rvjm6ItYOvg9Ce9jj7qgK19NuEGNW4CFNfGGK2NjcBWgc5eqO+\nyV81Rmv/lRshNIYVioa0NSP2tnDaTwNjCEYeeQjqDc1zUyhllJCSSlclukTyKxYo\nT9/yqZ6ORcvk591jUxCk14xR9MqVrCMB768226XVZ7ILJsTVdu7/Ht8DWq1JMK/X\niGO+wgYdjnj4Z2y2E2jJS/g28XX6uFmvTfd4jrMGHQXJlRIll6Y4jn5R3dVR0WFe\nr+keZyqS6NrZu2HqfMrlkeWxXsCBkiC895BZ1yZGOrc7EVpilWbDygC65QIDAQAB\noycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDQYJKoZI\nhvcNAQELBQADggEBABrQSUcQ/60wY3iexKJ1VjvpH7kY094YYu/9y09XfgjIf0qC\nEVzbv7kphm5+DS7DQt0XmpiizbuvuzN7a0xj/5ema6MZWQVJUBc16cKqg4oywhtM\nv1/wZ+E0iRQs0bYgTV6ELEQ+LV7jj3cvpwsyKaP+cigByO6KdqylT67Ot/+AJ8Qw\noweD/qXZuJPSU1LJdXYqNanoX4dKfOUApEWhLxCfyfFFaalODvkPGN0c2EaS03RK\nnR/OAfkZho1d8nSsCiRLRgSV1e3jU3KA58+lqd2fioNu9MV/x8WEGXEmFa2+2jR7\nKCb2VPB/4C6Y0Ya1IHx+Jxa6D4ofUExJfWXBNqc=\n-----END
CERTIFICATE-----\n"), but now cty.StringVal("-----BEGIN
CERTIFICATE-----\nMIIC6TCCAdGgAwIBAgIIeboBJH1IGbgwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE\nAxMHa3ViZS1jYTAeFw0xOTA2MTIxOTU1NTZaFw0yOTA2MTAxOTA0MzZaMC4xFzAV\nBgNVBAoTDnN5c3RlbTptYXN0ZXJzMRMwEQYDVQQDEwprdWJlLWFkbWluMIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnVBAMK+PmEpUqgxjeNx9/VRixPbL\nI91fLH91JxH589Rvjm6ItYOvg9Ce9jj7qgK19NuEGNW4CFNfGGK2NjcBWgc5eqO+\nyV81Rmv/lRshNIYVioa0NSP2tnDaTwNjCEYeeQjqDc1zUyhllJCSSlclukTyKxYo\nT9/yqZ6ORcvk591jUxCk14xR9MqVrCMB768226XVZ7ILJsTVdu7/Ht8DWq1JMK/X\niGO+wgYdjnj4Z2y2E2jJS/g28XX6uFmvTfd4jrMGHQXJlRIll6Y4jn5R3dVR0WFe\nr+keZyqS6NrZu2HqfMrlkeWxXsCBkiC895BZ1yZGOrc7EVpilWbDygC65QIDAQAB\noycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDQYJKoZI\nhvcNAQELBQADggEBAEfysIkZOJAWoNjNMlZkUSbYltwn030bDI9kFmylVo0jrquD\n3TemAmhab63jhONXKciH0DfkuKlMdCIJ5Vfc7I82OzqHJnuEDry1IIkEjLd6H8Pa\nvLuzIYwYQu10wg1+/Q+aiH+8xQMS3p/2L9LAEv6ILRxm2UU/SYY8bWAIgtpM8xff\nQq8D2tKNUXQFWA2x9RmvYNCowtm6YAzpKUUXoQ1y69J7dvHpIzpY9DigrETpe41C\nS7A11fUqcI1DqBhPVwep9YvA0ZKepXZcbbb/YiEm9aw066LwowqsqDSpX4SMIdaW\nKGi60dTD8wKK+NvqC4NbQOsmjltUUmVqLkTCq4I=\n-----END
CERTIFICATE-----\n").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

After running terraform apply again (without any changes):

local_file.ca_crt: Refreshing state... [id=f9a2dc701b64b84a59accb375ed2a441cc023985]
local_file.client_key: Refreshing state... [id=bf00257b95f9c5249c1fb2d302ed85da5b29e887]
local_file.kube_cluster_yaml: Refreshing state... [id=8d9846133c708d432310e3212628cd7581a2adb9]
local_file.client_cert: Creating...
local_file.client_cert: Creation complete after 0s [id=847884fdfb1f425a47195bca22133ce31dbb3478]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Before that, the plan was as follows:

-/+ resource "local_file" "client_cert" {
      ~ content  = <<~EOT # forces replacement
            -----BEGIN CERTIFICATE-----
          - MIIC6TCCAdGgAwIBAgIIZ5AbOm48o3UwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
          - AxMHa3ViZS1jYTAeFw0xOTA2MTIxOTU1NTZaFw0yOTA2MDkxOTU1NTZaMC4xFzAV
          + MIIC6TCCAdGgAwIBAgIIJ8yNqq6AYRgwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
          + AxMHa3ViZS1jYTAeFw0xOTA2MTIxOTU1NTZaFw0yOTA2MDkyMDM2MzBaMC4xFzAV
            BgNVBAoTDnN5c3RlbTptYXN0ZXJzMRMwEQYDVQQDEwprdWJlLWFkbWluMIIBIjAN
            BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnVBAMK+PmEpUqgxjeNx9/VRixPbL
            I91fLH91JxH589Rvjm6ItYOvg9Ce9jj7qgK19NuEGNW4CFNfGGK2NjcBWgc5eqO+
            yV81Rmv/lRshNIYVioa0NSP2tnDaTwNjCEYeeQjqDc1zUyhllJCSSlclukTyKxYo
            T9/yqZ6ORcvk591jUxCk14xR9MqVrCMB768226XVZ7ILJsTVdu7/Ht8DWq1JMK/X
            iGO+wgYdjnj4Z2y2E2jJS/g28XX6uFmvTfd4jrMGHQXJlRIll6Y4jn5R3dVR0WFe
            r+keZyqS6NrZu2HqfMrlkeWxXsCBkiC895BZ1yZGOrc7EVpilWbDygC65QIDAQAB
            oycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDQYJKoZI
          - hvcNAQELBQADggEBAHHymgpdsdFkK2GUpjoJXVdRRtrzNJbmGUxNfGbIbI1FfxTj
          - y6j1DBe99XJo+CpeLmG3TQ5YEFUNA+3a+xFEYTTh47B81dAlXh24AiT2ez+t9pE0
          - I7O2iIdxzvnZjHr184wh777noXftbxlj4gye5s9nDTKWDongZcNtRyMoyCuI3M1p
          - OH4Ptj/nNo36UuHdF4Y1V6pC0HYkyRaf8PxgRC3xFn1LML4S+5LyNKJNWZtToqaU
          - pU742Y0/An9jd9Z4yQSouRSHyba1cOwMgRwrLfAMfRs8zw4BmpAEuYBBwjDkmzHW
          - h7I/R7/GUlU7KAcEBhEgTQnek5eoP5T8qldw1yk=
          + hvcNAQELBQADggEBABrQSUcQ/60wY3iexKJ1VjvpH7kY094YYu/9y09XfgjIf0qC
          + EVzbv7kphm5+DS7DQt0XmpiizbuvuzN7a0xj/5ema6MZWQVJUBc16cKqg4oywhtM
          + v1/wZ+E0iRQs0bYgTV6ELEQ+LV7jj3cvpwsyKaP+cigByO6KdqylT67Ot/+AJ8Qw
          + oweD/qXZuJPSU1LJdXYqNanoX4dKfOUApEWhLxCfyfFFaalODvkPGN0c2EaS03RK
          + nR/OAfkZho1d8nSsCiRLRgSV1e3jU3KA58+lqd2fioNu9MV/x8WEGXEmFa2+2jR7
          + KCb2VPB/4C6Y0Ya1IHx+Jxa6D4ofUExJfWXBNqc=
            -----END CERTIFICATE-----
        EOT
        filename = "./../etc/certs/client_cert"
      ~ id       = "54cc9b061a432d3fee69464ef42c46aa083bf1ac" -> (known after apply)
    }

  # local_file.kube_cluster_yaml must be replaced
-/+ resource "local_file" "kube_cluster_yaml" {
      ~ content  = "apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    api-version: v1\n    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEU1TURZeE1qRTVOVFUxTmxvWERUSTVNRFl3T1RFNU5UVTFObG93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUtuZ3hUUE1XMnZiCjZNNjlxMjFNVGlIUU15OW9reVQvS1d1dUNNdmk4SmhGMHBzbElSK1F6TU11dE5RN0V3MGk2cjNoNXFId1kwV3kKVWNuV2VnaG9RREV6bVozcFpyUm1YMldtSWRjREVYQncxRUNlQ09LQy9vZ05DUm9LQ2t2NC9iY0orVlN0UHVpYgpxR21aYzByZ3JjUWVYaEdyV3Z0SkNReUVGN0dYSHozWHFRRkx2RnliWHJyeGxmck5zaTVOdmo3eTJySGpzbVgwCm5ZaDVGQWtwSm0zeGxYeStnczF2WnVnSEloY1I0SFk3WktxWjY1eVJRM2liQkZrUWNMMzAvakE0SUtvWGJXUFEKZzBOM252Z05FeGFISkpKaFN2bTJIL3UvT2V0MXRCUGxtQU9uRXIwbVJQVnpPZnl3RVo5M2hKU2thbXdRTkRCVQpaN29KVmtuNFZsTUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBR0wxazB1Vis2Vk5lTGRFNlJWWW1TT2Y5dDlIak9tc3VLT0gKcE5pVXhISTVoWUM5cStNT3B2cG1DVDRCVW1YeDM5bmpHUmgwU0xoa0d1aE9KSzNLZDdITVJhcldLc1lNZTl2cApsU0RDdStnNFZSY0NXS2pyYUVjOXJsZ3BKQUZ6MVUweE9zeXhacysrQVkzaExUVG9QODVWemZtazhZRFVhdVhWCnBSdzE0K2kvWkRObXV4Wlc1WHpwcnBSc2FtM1EvTUFEVzJUeHluemNTWWhqV05SRlBQS2Fja0hDakZYRXdUNVAKTE5odEFpdFBxN3d1Tmt5c2hac3B0MXFsQXNJT3dqdXBjNXg3dkRZbVh3U2pUek84MzlvR0wySTJnSDFpcXRDbQpNUzJ6a3JPUGV3VWx5SHVZRmo5YWUrUEE4NERUWUErK0plWStrVkw3NTYyYkxzeUdPVzA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n    server: \"https://server1:6443\"\n  name: \"clustername\"\ncontexts:\n- context:\n    cluster: \"clustername\"\n    user: \"kube-admin-clustername\"\n  name: \"clustername\"\ncurrent-context: \"clustername\"\nusers:\n- name: \"kube-admin-clustername\"\n  user:\n    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJSjh5TnFxNkFZUmd3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHhPVEEyTVRJeE9UVTFOVFphRncweU9UQTJNRGt5TURNMk16QmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5WQkFNSytQbUVwVXFneGplTng5L1ZSaXhQYkwKSTkxZkxIOTFKeEg1ODlSdmptNkl0WU92ZzlDZTlqajdxZ0sxOU51RUdOVzRDRk5mR0dLMk5qY0JXZ2M1ZXFPKwp5VjgxUm12L2xSc2hOSVlWaW9hME5TUDJ0bkRhVHdOakNFWWVlUWpxRGMxelV5aGxsSkNTU2xjbHVrVHlLeFlvClQ5L3lxWjZPUmN2azU5MWpVeENrMTR4UjlNcVZyQ01CNzY4MjI2WFZaN0lMSnNUVmR1Ny9IdDhEV3ExSk1LL1gKaUdPK3dnWWRqbmo0WjJ5MkUyakpTL2cyOFhYNnVGbXZUZmQ0anJNR0hRWEpsUklsbDZZNGpuNVIzZFZSMFdGZQpyK2tlWnlxUzZOclp1MkhxZk1ybGtlV3hYc0NCa2lDODk1QloxeVpHT3JjN0VWcGlsV2JEeWdDNjVRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFCclFTVWNRLzYwd1kzaWV4S0oxVmp2cEg3a1kwOTRZWXUvOXkwOVhmZ2pJZjBxQwpFVnpidjdrcGhtNStEUzdEUXQwWG1waWl6YnV2dXpON2EweGovNWVtYTZNWldRVkpVQmMxNmNLcWc0b3l3aHRNCnYxL3daK0UwaVJRczBiWWdUVjZFTEVRK0xWN2pqM2N2cHdzeUthUCtjaWdCeU82S2RxeWxUNjdPdC8rQUo4UXcKb3dlRC9xWFp1SlBTVTFMSmRYWXFOYW5vWDRkS2ZPVUFwRVdoTHhDZnlmRkZhYWxPRHZrUEdOMGMyRWFTMDNSSwpuUi9PQWZrWmhvMWQ4blNzQ2lSTFJnU1YxZTNqVTNLQTU4K2xxZDJmaW9OdTlNVi94OFdFR1hFbUZhMisyalI3CktDYjJWUEIvNEM2WTBZYTFJSHgrSnhhNkQ0b2ZVRXhKZldYQk5xYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\n    client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBblZCQU1LK1BtRXBVcWd4amVOeDkvVlJpeFBiTEk5MWZMSDkxSnhINTg5UnZqbTZJCnRZT3ZnOUNlOWpqN3FnSzE5TnVFR05XNENGTmZHR0syTmpjQldnYzVlcU8reVY4MVJtdi9sUnNoTklZVmlvYTAKTlNQMnRuRGFUd05qQ0VZZWVRanFEYzF6VXlobGxKQ1NTbGNsdWtUeUt4WW9UOS95cVo2T1Jjdms1OTFqVXhDawoxNHhSOU1xVnJDTUI3NjgyMjZYVlo3SUxKc1RWZHU3L0h0OERXcTFKTUsvWGlHTyt3Z1lkam5qNFoyeTJFMmpKClMvZzI4WFg2dUZtdlRmZDRqck1HSFFYSmxSSWxsNlk0am41UjNkVlIwV0ZlcitrZVp5cVM2TnJadTJIcWZNcmwKa2VXeFhzQ0JraUM4OTVCWjF5WkdPcmM3RVZwaWxXYkR5Z0M2NVFJREFRQUJBb0lCQUZZRkY5U0hhMEdmQTRTbwptWXZ4SllOc3JVVituYjNTd3NRV1BmMUxPeDQxUDNybXZpSmpDNHBNZlYrdDhROFp4RjFMMjRPbythU3owZ0FICm1oTXpLSzROM1VST1hYakhjdDQ3RjlwMHAwZU5PaUl4WGtEZ2xYdFZZa3BxVTdDbWh1c3dFS3ZUZUFnMHdyYm0KQnRoWHB1MmYzYnZwdGNsWGI5MklNY3ZBbmo2YVdLM1FYN1lDZXFiVEZwS1BOQlFjajRvNEZjM3dMbWd0TFhCbQpQN1p0RTNpMVlSVnhaT3EwQUpyVFF6WG40dDl5aEpaNXFBajNEM0xBNEVRWlY4MXVsRk1xN2xsbVhRVDJZRFdNCjFlRUsxd2daVzNiZ1dtSFpjV0pXQy8yaEVSTHVoM3I1MmxQd3cwUWgwNXJSWEFNbGpyWVVjcWFpRGZzZFZobFMKK1dsMzZZRUNnWUVBenhXck9XK0JMc3pjcVZZT0tUcFllbnVzQlFvdHlSRkpTaENZMXJKU20xbmpta0Vqa0tvegp4MGQrQkZVcXNWaXVFcWs3QzhHMG13RWRFVWZmUGFYbGlwNkZyVlRRdzlZcldjRi80bEgyMFhtZmNJZWR4dWY5CnZCNzZvZlpSbTMrOWZ3TzZicGFaNy9BckxmLy8zeGhlVHhvbkxaTE5XUUh5cWc0RHlUcHZrVTBDZ1lFQXduancKS3NEc2FqWTBTdUM5eWhWQ1hWZ3RlZ2JralZqT3IyUjdQempELzdpM2NKOTIxRG5SMFh1Vy8vdnZGd2RHMm5FLwp5QmdPek4vTTMrckVJM3d3NUV2SzdNdlV2TDRNY3YraDlHbUFLRVpnTThDNHNtemh5YzgwMXVQQlpzZkdhQWgvCjF0MS94RjVtK3ZUUmQxZXU5UGVOUjBTR3F1Y2VyT2lXL2x1cWcva0NnWUFCRFY0aVc1T3ZkakVFMTBBWks0ZTUKajVsUEtUOFVUM2NzM2lxNHBJMVE1c01HVEtCdW9yN0NtM1ZqZGo5U1NWNFJFRFVSbVRsZXRFRytqYnZ2cDBFawpWQ3ZmdHBlYzl5Q2ZReUZ3Ti9SbUdoVWFVRVlYOWFQUGFlVGlIOHRJVy96TmdXcFlGNEhPdTB5czNpa2hyQkVHCm05NXBGOTdkUGVwS3ZPbCtBMEwvM1FLQmdBRFpRa01OZ0hxZUxmQTl0dFpRN1c4MjJVdjFCNzVPS3VpOUNZU24KSE1QYTdJSURVQ053OVNkeTRKL1JXNlBBRm1FUnFYT1lGMGh4bVpWSWt2Nk1wakg0MnJQWjE5M1MvbjdwK3F6MApZT2pNRmROai9lcFphMHJVS3FqZGFaU25Qb2hwc1JVZzlsUEhEYS8rcllOVjBKK2xET3JJczhXL0tIVWN0cnY1CmJtOFJBb0dBTWlqSHF2SU9hWStXbno3ZHZ6TWZTZmNQUy9QNS9ka0o1NDRsazlEdi9sNms3b1RwNWdnaXM0UlgKMnFNZmZEVkZSRU43S3JVRlg0WG5lMWdCTVFWTFgxdERDRGI3UVBmRjBwVUdKY21yN0F1TWhVcnNOcFlkNnhLZwp2cjQvTWxxbFUwa2p6dVZBSnBYdnhoanptVElTbEM5SldWaTM4dHpDR2ZQQ0JKU3VWQjg9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==" -> (known after apply) # forces replacement
        filename = "../kube_config_cluster.yml"
      ~ id       = "18473fc352fbffb624ab50599199a116260ab275" -> (known after apply)
    }

  # rke_cluster.cluster will be updated in-place
  ~ resource "rke_cluster" "cluster" {
        addon_job_timeout         = 30
        api_server_url            = "https://server1:6443"
[...]
      ~ kube_config_yaml          = (sensitive value)
        kubernetes_version        = "v1.14.1-rancher1-2"
        prefix_path               = "/"
      ~ rke_cluster_yaml          = (sensitive value)
[...]
      ~ services_etcd {
            creation      = "1h"
            external_urls = []
            extra_args    = {
                "election-timeout"   = "5000"
                "heartbeat-interval" = "500"
            }
            extra_binds   = []
            extra_env     = []
            retention     = "3d"
            snapshot      = true

          + backup_config {
              + interval_hours = 2
              + retention      = 24

              + s3_backup_config {
                  + access_key  = (sensitive value)
                  + bucket_name = "masked"
                  + endpoint    = "masked"
                  + region      = "masked"
                  + secret_key  = (sensitive value)
                }
            }
        }
[...]

After kube_config_cluster.yml has been changed, here is the diff:

diff kube_config_cluster.yml kube_config_cluster.yml-old
18c18
<     client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJZWJvQkpIMUlHYmd3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHhPVEEyTVRJeE9UVTFOVFphRncweU9UQTJNVEF4T1RBME16WmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5WQkFNSytQbUVwVXFneGplTng5L1ZSaXhQYkwKSTkxZkxIOTFKeEg1ODlSdmptNkl0WU92ZzlDZTlqajdxZ0sxOU51RUdOVzRDRk5mR0dLMk5qY0JXZ2M1ZXFPKwp5VjgxUm12L2xSc2hOSVlWaW9hME5TUDJ0bkRhVHdOakNFWWVlUWpxRGMxelV5aGxsSkNTU2xjbHVrVHlLeFlvClQ5L3lxWjZPUmN2azU5MWpVeENrMTR4UjlNcVZyQ01CNzY4MjI2WFZaN0lMSnNUVmR1Ny9IdDhEV3ExSk1LL1gKaUdPK3dnWWRqbmo0WjJ5MkUyakpTL2cyOFhYNnVGbXZUZmQ0anJNR0hRWEpsUklsbDZZNGpuNVIzZFZSMFdGZQpyK2tlWnlxUzZOclp1MkhxZk1ybGtlV3hYc0NCa2lDODk1QloxeVpHT3JjN0VWcGlsV2JEeWdDNjVRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFFZnlzSWtaT0pBV29Oak5NbFprVVNiWWx0d24wMzBiREk5a0ZteWxWbzBqcnF1RAozVGVtQW1oYWI2M2poT05YS2NpSDBEZmt1S2xNZENJSjVWZmM3STgyT3pxSEpudUVEcnkxSUlrRWpMZDZIOFBhCnZMdXpJWXdZUXUxMHdnMSsvUSthaUgrOHhRTVMzcC8yTDlMQUV2NklMUnhtMlVVL1NZWThiV0FJZ3RwTTh4ZmYKUXE4RDJ0S05VWFFGV0EyeDlSbXZZTkNvd3RtNllBenBLVVVYb1ExeTY5SjdkdkhwSXpwWTlEaWdyRVRwZTQxQwpTN0ExMWZVcWNJMURxQmhQVndlcDlZdkEwWktlcFhaY2JiYi9ZaUVtOWF3MDY2THdvd3FzcURTcFg0U01JZGFXCktHaTYwZFREOHdLSytOdnFDNE5iUU9zbWpsdFVVbVZxTGtUQ3E0ST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
---
>     client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJSjh5TnFxNkFZUmd3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHhPVEEyTVRJeE9UVTFOVFphRncweU9UQTJNRGt5TURNMk16QmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5WQkFNSytQbUVwVXFneGplTng5L1ZSaXhQYkwKSTkxZkxIOTFKeEg1ODlSdmptNkl0WU92ZzlDZTlqajdxZ0sxOU51RUdOVzRDRk5mR0dLMk5qY0JXZ2M1ZXFPKwp5VjgxUm12L2xSc2hOSVlWaW9hME5TUDJ0bkRhVHdOakNFWWVlUWpxRGMxelV5aGxsSkNTU2xjbHVrVHlLeFlvClQ5L3lxWjZPUmN2azU5MWpVeENrMTR4UjlNcVZyQ01CNzY4MjI2WFZaN0lMSnNUVmR1Ny9IdDhEV3ExSk1LL1gKaUdPK3dnWWRqbmo0WjJ5MkUyakpTL2cyOFhYNnVGbXZUZmQ0anJNR0hRWEpsUklsbDZZNGpuNVIzZFZSMFdGZQpyK2tlWnlxUzZOclp1MkhxZk1ybGtlV3hYc0NCa2lDODk1QloxeVpHT3JjN0VWcGlsV2JEeWdDNjVRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFCclFTVWNRLzYwd1kzaWV4S0oxVmp2cEg3a1kwOTRZWXUvOXkwOVhmZ2pJZjBxQwpFVnpidjdrcGhtNStEUzdEUXQwWG1waWl6YnV2dXpON2EweGovNWVtYTZNWldRVkpVQmMxNmNLcWc0b3l3aHRNCnYxL3daK0UwaVJRczBiWWdUVjZFTEVRK0xWN2pqM2N2cHdzeUthUCtjaWdCeU82S2RxeWxUNjdPdC8rQUo4UXcKb3dlRC9xWFp1SlBTVTFMSmRYWXFOYW5vWDRkS2ZPVUFwRVdoTHhDZnlmRkZhYWxPRHZrUEdOMGMyRWFTMDNSSwpuUi9PQWZrWmhvMWQ4blNzQ2lSTFJnU1YxZTNqVTNLQTU4K2xxZDJmaW9OdTlNVi94OFdFR1hFbUZhMisyalI3CktDYjJWUEIvNEM2WTBZYTFJSHgrSnhhNkQ0b2ZVRXhKZldYQk5xYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

Versions:

Terraform v0.12.1
+ provider.local v1.2.2
+ provider.rke v0.12.1

This is a test cluster, so I can share the certs publicly.

expected kubernetes_version lingering

The rke provider is lagging behind the Kubernetes versions supported by the latest RKE.

$ terraform plan

Error: expected kubernetes_version to be one of [v1.11.9-rancher1-1 v1.12.7-rancher1-2 v1.13.5-rancher1-2 v1.14.1-rancher1-1], got v1.13.5-rancher1-3

  on main.tf line 2, in resource "rke_cluster" "cluster":
   2: resource "rke_cluster" "cluster" {

rke reports "supported versions are: [v1.11.9-rancher1-2 v1.12.7-rancher1-3 v1.13.5-rancher1-3 v1.14.1-rancher1-2]"

Cert rotation for existing cluster

Hi, it looks like the default certificates in RKE last 365 days. How can we extend that?
The latest RKE version has ./rke cert rotate.
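For reference, the rotate command takes the same --config flag as the snapshot command mentioned in another issue on this page; a sketch, assuming RKE 0.2+ and a cluster.yml generated for this cluster:

$ ./rke cert rotate --config cluster.yml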

Can't run terraform

Error:


[iahmad@web-prod-ijaz001 my-k8s-cluster]$ terraform init

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.local: version = "~> 1.3"
* provider.rke: version = "~> 0.12"

Terraform has been successfully initialized!


[iahmad@web-prod-ijaz001 my-k8s-cluster]$ terraform plan
Error asking for user input: 2 error(s) occurred:

* provider.local: dial unix /tmp/plugin633927044|netrpc|: connect: no such file or directory
* provider.rke: dial unix /tmp/plugin769453007|netrpc|: connect: no such file or directory


Disaster recovery

Basic Setup

We set up 3 instances on AWS in different AZs.
These 3 instances act as Rancher nodes in a private subnet with fixed IPs.
We have a bastion host to connect to these private hosts.
We also expose the Kubernetes API so we can run kubectl and helm from a local machine if we want to.
We also run the rke provider to set up the Rancher management cluster.

Additionally, we deploy cert-manager and the Rancher UI via the helm provider.

Everything is fine.

Here is the basic setup for rke:

resource rke_cluster "rancher_cluster" {

  cluster_name = "rancher-management"

  bastion_host = {
    address      = "${aws_instance.rancher_bastion_instance.public_ip}"
    user         = "ubuntu"
    ssh_key      = "${file("~/.ssh/rancher.pem")}"
    port         = 22
  }

  authentication {
    strategy = "x509"

    sans = [
      "${aws_lb.rancher_nlb.dns_name}",
    ]
  }

  nodes = [
    {
      address = "${aws_instance.rancher_node_1.private_ip}"
      user    = "ubuntu"
      //TODO: Modify this when we integrate the vault. For now it expects the rancher key to be in the .ssh folder.
      ssh_key = "${file("~/.ssh/rancher.pem")}"
      role    = ["controlplane", "worker", "etcd"]
    },
    {
      address = "${aws_instance.rancher_node_2.private_ip}"
      user    = "ubuntu"
      //TODO: Modify this when we integrate the vault. For now it expects the rancher key to be in the .ssh folder.
      ssh_key = "${file("~/.ssh/rancher.pem")}"
      role    = ["controlplane", "worker", "etcd"]
    },
    {
      address = "${aws_instance.rancher_node_3.private_ip}"
      user    = "ubuntu"
      //TODO: Modify this when we integrate the vault. For now it expects the rancher key to be in the .ssh folder.
      ssh_key = "${file("~/.ssh/rancher.pem")}"
      role    = ["controlplane", "worker", "etcd"]
    }
  ]
}

resource "local_file" "kube_cluster_yaml" {
  filename = "${path.root}/kubeconfig_rancher_cluster.yaml"
  content = "${rke_cluster.rancher_cluster.kube_config_yaml}"
}

Incident

For science and my inner chaos monkey I just terminated one of the instances via the aws console.
Rancher says: Kubelet stopped posting node status.

When we run terraform apply again, a new instance is created with the same IP it had before, but the instance never joins the rke cluster.

Question

How should we proceed when we lose a node for whatever reason?
I would think that a terraform apply should solve the issue,

since a new instance will be created and a new node for the rke cluster should then be spun up by rke.

Versioning policy

Versioning of this provider is as follows.

  • v1.x.x: terraform v0.12+(terraform API version: 5, RKE 0.2+)
  • v0.10+.x: terraform v0.11(but v0.12 syntax ready, RKE 0.2+)
  • v0.9.x: terraform v0.11 + RKE 0.1.x(bug fix only)

How can I specify a list of EC2 IP addresses for dynamic creation

Hi,

I have a multi-stage setup where I create n EC2 instances with Terraform.

After that, I want to set up an rke cluster with those instances. So, how can I pass the IPs of those instances as a list here?

I came up with something like this but I am not sure if it will work:

data rke_node_parameter "nodes" {
  count   = 2

  address = "${element(aws_instance.rke-node.*.public_ip, count.index)}"
  user    = "ubuntu"
  role    = ["controlplane", "worker", "etcd"]
  ssh_key = "${file("~/.ssh/id_rsa")}"
}
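For reference, the data source above matches the pattern used in other issues on this page; the remaining piece is feeding its JSON into the cluster via nodes_conf. A sketch, keeping the hard-coded count (with Terraform 0.11 the count has to be a known value, not something computed from the aws_instance resource):

# Sketch: wire the generated node JSON into the cluster, mirroring the
# nodes_conf usage shown in other examples on this page.
resource rke_cluster "cluster" {
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]
}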

rke_cluster leaks private sshKey on destroy

When removing a node from the cluster, terraform plan prints the following output:

  ~ rke_cluster.cluster
      kube_config_yaml:       <sensitive> => <computed> (attribute changed)
      nodes_conf.#:           "3" => "2"
      nodes_conf.2:           "{\"address\":\"0.0.0.0\",\"port\":\"22\",\"internalAddress\":\"10.44.0.3\",\"role\":[\"worker\"],\"hostnameOverride\":\"testing-3.local\",\"user\":\"core\",\"sshKey\":\"-----BEGIN RSA PRIVATE KEY-----\\n<deducted>n-----END RSA PRIVATE KEY-----\\n\"}" => ""

I think we can just mark nodes_conf as computed, as is done for other parameters, and then it should be fine.

Terraform destroy gets stuck in refreshing state

Hi,

After successfully creating the rke cluster, I tried to remove it with terraform destroy, but the process gets stuck on:

rke_cluster.cluster: Refreshing state... (ID: ec2-18-188-12-112.us-east-2.compute.amazonaws.com)

All other components' state has been refreshed successfully. Is there anything else which needs to be done to destroy the cluster?

Adding 'addons' causes cluster destroy/recreate rather than change

If you add an additional addon or change the addon configuration in certain ways, rather than resulting in a change to the cluster, a 'destroy' and subsequent recreate is issued to RKE. This results in a non-functioning cluster because of leftovers such as etcd state.

Provider attempts to re-create existing cluster when specified SSH key file is missing

We got into a situation where the SSH key file we use for creating the cluster was moved to a different path. This resulted in Terraform attempting to re-create the cluster, and when I ran terraform refresh, the cluster was removed from the state file.

I believe this is a bug: the provider should complain that the file does not exist, even though it cannot connect to the cluster nodes without the correct SSH key file.

Steps to reproduce:

  • Generate SSH key file:
    ssh-keygen -b 4096 -t rsa -f ./id_rsa -q -N ""
    
  • Create test.tf file with minimal cluster configuration:
    provider "rke" {
      version = "~> 0.8.0"
    }
    
    resource rke_cluster "cluster" {
      ssh_key_path = "${path.root}/id_rsa"
    
      nodes {
          address = "x.x.x.x"
          user    = "root"
          role    = ["controlplane", "worker", "etcd"]
      }
    }
    
  • create the cluster with terraform apply
  • verify that the cluster is there with terraform state list
  • move the SSH key file:
    mv id_rsa id_rsa_test
    
  • at this point, running terraform apply attempts to re-create the cluster, without any warnings
  • running terraform refresh removes the cluster from the state file

Unable to access rancher UI.

Hi,

I used the new provider and was able to get the k8s cluster up & running with rancher.
But I'm wondering: how do I modify the ingress rules to access the Rancher UI?
I wanted to use something like the below to access the Rancher UI:

host: mykubernetes.com

I also do not see any k8s service running for the Rancher UI.

I used master/examples/full/example.tf, modified it for my use, and installed the cluster on a single VM.

rke_cluster.cluster network changes on every terraform apply without any change

If I re-run "terraform apply" with no change, it keeps showing the network change below:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ rke_cluster.cluster
      network.0.options.%:                     "1" => "0"
      network.0.options.calico_cloud_provider: "none" => ""
      rke_cluster_yaml:                        <sensitive> => <computed> (attribute changed)

The whole rke cluster tf files as below:

locals {
  master_ip_list = "${aws_instance.k8smaster.*.private_ip}"
  worker_ip_list = "${aws_instance.k8sworker.*.private_ip}"
}

data rke_node_parameter "k8smaster" {
  count   = "${length(local.master_ip_list)}"
  address = "${local.master_ip_list[count.index]}"
  user    = "ec2-user"
  role    = ["controlplane", "etcd"]
  ssh_key = "${file("${var.key_name}.pem")}"
}
data rke_node_parameter "k8sworker" {
  count   = "${length(local.worker_ip_list)}"
  address = "${local.worker_ip_list[count.index]}"
  user    = "ec2-user"
  role    = ["worker"]
  ssh_key = "${file("${var.key_name}.pem")}"
}

resource "rke_cluster" "cluster" {

  nodes_conf = ["${data.rke_node_parameter.k8smaster.*.json}","${data.rke_node_parameter.k8sworker.*.json}"]

  # If set to true, RKE will not fail when unsupported Docker version are found
  ignore_docker_version = false

  ################################################
  # Private Registries
  ################################################
  # List of registry credentials, if you are using a Docker Hub registry,
  # you can omit the `url` or set it to `docker.io`
#   private_registries {
#     url      = "registry1.com"
#     user     = "Username"
#     password = "password1"
#   }
#   private_registries {
#     url      = "registry2.com"
#     user     = "Username"
#     password = "password1"
#   }

  ################################################
  # Versions
  ################################################
  # The kubernetes version used.
  # For now, this should match the version defined in rancher/types defaults map:
  #    https://github.com/rancher/types/blob/master/apis/management.cattle.io/v3/k8s_defaults.go#L14
  #
  # In case the kubernetes_version and kubernetes image in system_images are defined,
  # the system_images configuration will take precedence over kubernetes_version.
  kubernetes_version = "${var.k8s_version}"

  #########################################################
  # Network(CNI) - supported: flannel/calico/canal/weave
  #########################################################
  # There are several network plug-ins that work, but we default to canal
  network {
    plugin = "${var.k8s_network_plugin}"
  }

  ################################################
  # Ingress
  ################################################
  # Currently only nginx ingress provider is supported.
  # To disable ingress controller, set `provider: none`
  ingress {
    provider = "nginx"
  }

  ###############################################
  # Kubernetes services
  ###############################################
  services_etcd {
    # if external etcd used
    #path      = "/etcdcluster"
    #ca_cert   = file("ca_cert")
    #cert      = file("cert")
    #key       = file("key")

    # for etcd snapshots
    snapshot  = false
    #retention = "24h"
    #creation  = "5m0s"
  }
}

resource "local_file" "kube_cluster_yaml" {
  filename = "./kube_config_cluster.yml"
  content  = "${rke_cluster.cluster.kube_config_yaml}"
}
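A possible workaround, sketched here rather than a fix for the underlying diff, is to have Terraform ignore the network block on the cluster resource:

# Workaround sketch only: suppresses the spurious network diff, but also
# hides real changes to the network block, so use with care.
resource "rke_cluster" "cluster" {
  # ... existing configuration as above ...

  lifecycle {
    ignore_changes = ["network"]
  }
}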

Which version?

terraform-provider-rke_v0.7.0_x4
Target node: RancherOS 1.5.0

With this minimal config:

resource rke_cluster "cluster" {
  nodes = [
    {
      address = "${vsphere_virtual_machine.vm1.default_ip_address}"
      user    = "rancher"
      role    = ["controlplane", "worker", "etcd"]
    },
  ]
}

I have this error:

  • rke_cluster.cluster:
    Unsupported Docker version found [18.06.1-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x]

With "ignore_docker_version = true" I have:

  • rke_cluster.cluster:
    [controlPlane] Failed to bring up Control Plane: Failed to create [kube-apiserver] container on host [10.35.20.127]: Error: No such image: rancher/hyperkube:v1.11.6-rancher1

With "kubernetes_version = "v1.12.5-rancher1-1" given on https://github.com/rancher/types/blob/a3c9a8f4614922719729a05256e18f235e918622/apis/management.cattle.io/v3/k8s_defaults.go#L28 , I have:
expected kubernetes_version to be one of [v1.9.7-rancher2-2 v1.10.12-rancher1-1 v1.11.6-rancher1-1 v1.12.4-rancher1-1], got v1.12.5-rancher1-1

With kubernetes_version = "v1.12.4-rancher1-1" , I have:
[controlPlane] Failed to bring up Control Plane: Failed to create [kube-apiserver] container on host [10.35.20.126]: Error: No such image: rancher/hyperkube:v1.12.4-rancher1

Which version should I use?

Help with re-working dynamic nodes

Before Terraform v0.12 was released, I had a really good way of deploying RKE nodes dynamically. Now that I have begun my dive into the latest Terraform, I am unsure how to proceed with the new way of creating dynamic nodes. I have looked at the example in this repo, but I am having a hard time translating between my version and the new "way".

I tried translating my approach into the new variables block; however, that errored out with messages like:

Error: Unsupported argument

  on rke.tf line 20, in variable "nodes":
  20:   count = var.NODE_COUNT

An argument named "count" is not expected here.


Error: Variables not allowed

  on rke.tf line 27, in variable "nodes":
  27:       address = azurestack_public_ip.vmpip.*.ip_address[count.index]

Variables may not be used here.

Here is my code:

# locals to allow us to control which nodes get assigned manager roles vs. controlplane and workers
locals {
  mgr_roles    = ["controlplane","etcd"]
  worker_roles = ["worker"]
}

# This is currently the way we are dynamically provisioning nodes. 
# This is used in a CI/CD process, so we have a script and pass in the number of manager nodes
# to create and the number of total nodes for the entire cluster 
# (ex. deploy.sh --deployment-name mytest --managers 3 --nodes 5, which would create all resources using the "mytest" prefix, and it would yield a k8s cluster with 5 nodes total and 3 of those nodes being managers
# data rke_node_parameter "nodes" {
#   count    = "${var.NODE_COUNT}"
#   address = "${azurestack_public_ip.vmpip.*.ip_address[count.index]}"
#   user    = "myuser"
#   role = "${split(",", count.index < var.nbr_managers ? join(",", local.mgr_roles) : join(",", local.worker_roles))}"
#   ssh_key = "${file("path/to/sshkey")}"
# }

# How does the above ^^^^ translate into here???
variable "nodes" {
  type = list(object({
    address = string,
    user    = string,
  }))
  default = [
    {
      address = "192.2.0.1"
      user    = "ubuntu"
    },
    {
      address = "192.2.0.2"
      user    = "ubuntu"
    },
  ]
}

resource rke_cluster "cluster" {
  depends_on = ["azurestack_public_ip.vmpip", "azurestack_virtual_machine.vm"]

  # This is how I am referencing the old dynamic way...
  #nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]

  # Instead of defining a variable block, could I handle the dynamic logic in this piece here using the "azurestack_public_ip...[count.index]" syntax?
  dynamic nodes {
    for_each = var.nodes
    content {
      address = nodes.value.address
      user    = nodes.value.user
      role    = split(",", count.index < var.nbr_managers ? join(",", local.mgr_roles) : join(",", local.worker_roles))
      ssh_key = file("path/to/sshkey")
    }
  }

  ignore_docker_version = true
  cluster_name = "${var.deployment_name}-cluster"

  # Kubernetes version
  kubernetes_version = "v1.13.5-rancher1-2"
}
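One way the commented-out 0.11 approach could translate to 0.12, as an untested sketch reusing the names from the snippets above (azurestack_public_ip.vmpip, var.NODE_COUNT, var.nbr_managers): build the node objects with a for expression, then hand them to the dynamic block.

# Untested sketch: derive one object per node, picking manager roles for the
# first var.nbr_managers nodes and worker roles for the rest.
locals {
  rke_nodes = [
    for idx in range(var.NODE_COUNT) : {
      address = azurestack_public_ip.vmpip[idx].ip_address
      user    = "myuser"
      role    = idx < var.nbr_managers ? local.mgr_roles : local.worker_roles
    }
  ]
}

resource "rke_cluster" "cluster" {
  dynamic "nodes" {
    for_each = local.rke_nodes
    content {
      address = nodes.value.address
      user    = nodes.value.user
      role    = nodes.value.role
      ssh_key = file("path/to/sshkey")
    }
  }
}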

unable to remove hostname_override

When I set hostname_override = "node0" and then change it to hostname_override = "" or remove the hostname_override attribute from the node, terraform will not remove this attribute :(

Edit: I removed it manually from the cluster state in the config maps.

Support specifying ssh_key for cluster and don't leak private key

If the SSH key used for deployment is generated with Terraform, it would be more convenient to pass ssh_key to the cluster resource rather than ssh_key_path.

Currently, it's possible to specify ssh_key on the node itself, but then the private key leaks into the change log, which might not be acceptable for some environments.

Support for dind mode

RKE has merged dind-mode provisioning support:
rancher/rke#763

It seems to be intended for development and testing.

refs : rancher/rancher#14266

I currently do not have a use case to use dind mode with Terraform + RKE,
so I am not planning to support dind mode with this provider.

But, if someone has a use case, I may implement it.

If you have a use case, let me know your thoughts and feedback!

Terraform v0.11 release with latest RKE

Hi,
Would you be able to push another v0.8 release with the latest RKE, so that up-to-date (non-CVE) k8s clusters can be provisioned with the current stable Terraform release?

Destroy of servers gets rke into an unresolvable state

It appears that if you recreate your servers (in this case in OpenStack) and they get a new set of IPs, RKE attempts to use the old server IPs no matter what. This causes 'terraform refresh' to fail, and with '-refresh=false' it attempts to use the old IPs and gets stuck in a loop of 'still destroying', since the servers it is trying to reach no longer exist.

Generated nodes input variable in rke_cluster resource

I'm having a problem. I have a map with N key/value pairs, one per cluster node; the key is the node's hostname and the value is the node's IP.

Example:

{
    "node1": "192.168.176.54",
    "node2": "192.168.176.55"
}

I'm trying to convert it into a list of maps to satisfy the nodes input of the rke_cluster resource.

To something like this:

[{
    address = "192.168.176.54"
    user    = "some static string"
    role    = ["controlplane", "etcd", "worker"]
    ssh_key = "my SSH key"
},
{
    address = "192.168.176.55"
    user    = "some static string"
    role    = ["controlplane", "etcd", "worker"]
    ssh_key = "my SSH key"
}]

What I have is the following Terraform code:

data "null_data_source" rke_nodes {
  count = "${length(keys(var.nodes))}"

  inputs = {
    address = "${element(values(var.nodes), count.index)}"
    user    = "${var.ssh_username}"
    role    = "controlplane,etcd,worker"
    ssh_key = "${file(var.ssh_private_key_path)}"
  }
}

resource "rke_cluster" rke_cluster {
  nodes = ["${data.null_data_source.rke_nodes.*.outputs}"]

  ingress = {
    provider = ""

    extra_args = {
      enable-ssl-passthrough = ""
    }
  }
}

When I apply this, I get the following error from the rke_cluster resource:

+ terraform apply
Releasing state lock. This may take a few moments...

Error: module.vsphere.module.rancher.module.rancher_cluster.module.rke.rke_cluster.rke_cluster: "nodes.0.address": required field is not set

Error: module.vsphere.module.rancher.module.rancher_cluster.module.rke.rke_cluster.rke_cluster: "nodes.0.role": required field is not set

Output gives me:

rancher_rke = [
    {
        address = 192.168.176.54,
        role = controlplane,etcd,worker,
        ssh_key = CENSORED,
        user = myuser
    },
    {
        address = 192.168.176.55,
        role = controlplane,etcd,worker,
        ssh_key = CENSORED,
        user = myuser
    }
]

On a side note, I'm also having a problem specifying a list for role in the null_data_source resource.

Can you guide me?
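One direction worth trying, sketched from the rke_node_parameter / nodes_conf pattern used in other issues on this page: the null_data_source outputs are string-only (which is why role comes out as the flattened string shown above), whereas the provider's own data source accepts role as a real list.

# Sketch: build per-node JSON with the provider's data source and feed it to
# nodes_conf; variable names are taken from the snippet above.
data rke_node_parameter "rke_nodes" {
  count   = "${length(keys(var.nodes))}"
  address = "${element(values(var.nodes), count.index)}"
  user    = "${var.ssh_username}"
  role    = ["controlplane", "etcd", "worker"]
  ssh_key = "${file(var.ssh_private_key_path)}"
}

resource "rke_cluster" rke_cluster {
  nodes_conf = ["${data.rke_node_parameter.rke_nodes.*.json}"]
}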

TERRAFORM CRASH while setting up a cluster using terraform-provider-rke/releases/tag/0.6.2

Terraform version: Terraform v0.11.10
rke provider: terraform-provider-rke/releases/tag/0.6.2
OS: Ubuntu 16.04

This example is the same as https://github.com/yamamoto-febc/terraform-provider-rke/blob/master/examples/full/example.tf:
..
..
kubernetes_version = "v1.12.3-rancher1-1"
..
system_images {
  kubernetes                  = "rancher/hyperkube:v1.12.3-rancher1"
  etcd                        = "rancher/coreos-etcd:v3.2.18"
  alpine                      = "rancher/rke-tools:v0.1.15"
  nginx_proxy                 = "rancher/rke-tools:v0.1.15"
  cert_downloader             = "rancher/rke-tools:v0.1.15"
  kubernetes_services_sidecar = "rancher/rke-tools:v0.1.15"
  kube_dns                    = "rancher/k8s-dns-kube-dns-amd64:1.14.10"
  dnsmasq                     = "rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10"
  kube_dns_sidecar            = "rancher/k8s-dns-sidecar-amd64:1.14.10"
  kube_dns_autoscaler         = "rancher/cluster-proportional-autoscaler-amd64:1.0.0"
  pod_infra_container         = "rancher/pause-amd64:3.1"
}
..
..
network {
  plugin = "calico"
}
..
..

Multiple nodes throws error

I'm trying to configure multiple nodes as per the example:

variable "node_addrs" {
  type    = "list"
  default = ["192.2.0.1", "192.2.0.2"]
}

data rke_node_parameter "nodes" {
  count   = "${length(var.node_addrs)}"

  address = "${var.node_addrs[count.index]}"
  user    = "ubuntu"
  role    = ["controlplane", "worker", "etcd"]
  ssh_key = "${file("~/.ssh/id_rsa")}"
}

resource rke_cluster "cluster" {
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]
}

However, it's throwing this error:

data.rke_node_parameter.nodes: value of 'count' cannot be computed

Recurring Snapshots not working

When configuring etcd S3 snapshots, the cluster is provisioned correctly, but the snapshots never start; I have to trigger them manually.

services_etcd {
    backup_config {
      interval_hours = 1
      retention = 6
      s3_backup_config {
        access_key = aws_iam_access_key.etcd_backup.id
        secret_key = aws_iam_access_key.etcd_backup.secret
        bucket_name = aws_s3_bucket.etcd_backup.bucket
        region = aws_s3_bucket.etcd_backup.region
        endpoint = "s3.amazonaws.com"
      }
    }
  }

When I check cluster.yaml I see that backup_config has enabled: null; trying to add enabled: true in the resource just gives an error.

backup_config:
  enabled: null
  interval_hours: 1
  retention: 6
  s3backupconfig:
    access_key: <access_key>
    secret_key: <secret_key>
    bucket_name: <bucket_name>
    region: us-east-1
    endpoint: s3.amazonaws.com

`make build` attempts to `rm -Rf /bin/*`

I'm trying to build from source because I'm running on FreeBSD. I'm building such an old version because that's what the triton-kubernetes CLI tool tries to install.

The offending line is:

Makefile
21:     rm -Rf $(CURDIR)/bin/*

My assumption is that $(CURDIR) is a GNU extension. The default make on BSDs is not GNU make.

A possible solution is to use a GNUMakefile.

Custom cloud provider configuration unclear

Could you describe how to configure custom cloud providers? We're configuring CustomCloudProvider for vSphere with the following config, but always get an error.

name: ${cloud_provider_name}
customCloudProvider: |-
  [Global]
  user = "${vsphere_username}"
  password = "${vsphere_password}"
  port = "${ssh_port}"
  insecure-flag = "1"
  datacenters = "${vsphere_datacenter}"

  [VirtualCenter "${vsphere_server}"]
  user = "${vsphere_username}"
  password = "${vsphere_password}"

  [Workspace]
  server = "${vsphere_server}"
  datacenter = "${vsphere_datacenter}"
  folder = "${vsphere_folder}"
  default-datastore = "${vsphere_datastore}"

  [Disk]
  scsicontrollertype = "${vsphere_scsi_type}"

This config is rendered as a template_file string, ${var.cloud_provider_template}, which we then insert into the custom_cloud_config property. The rke_cluster resource looks like this:

resource "rke_cluster" rke_cluster {
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]

  ingress = {
    provider = "none"
  }

  cloud_provider {
    name = "custom"

    custom_cloud_config = "${var.cloud_provider_template}"
  }

  addons = "${data.template_file.rke_addons.rendered}"
}

Could you elaborate on this please?

terraform destroy not doing its thing

After creating a cluster, a simple terraform destroy does not fully remove it; containers are still visible on the worker/etcd/master nodes.

Here is the workflow. First, make sure we have a clean, prepared target server for the test.

cluster-host$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Running terraform apply to create the cluster on cluster-host:

test-tf$ ./bin/terraform apply
[...]
rke_cluster.cluster: Creation complete after 2m11s [id=192.168.1.1]
local_file.kube_cluster_yaml: Creating...
local_file.kube_cluster_yaml: Creation complete after 0s [id=cb08f485f418840751c97292f139c240b5bae03b]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Checking cluster-host for containers, everything is there:

cluster-host$ docker ps
CONTAINER ID        IMAGE                                  COMMAND                  CREATED              STATUS              PORTS               NAMES
721e4316d15a        846921f0fe0e                           "/server"                About a minute ago   Up 53 seconds                           k8s_default-http-backend_default-http-backend-78fccfc5d9-9z5zk_ingress-nginx_acce1d01-7ed2-11e9-8e39-000af7ba5220_0
0a09d7626b99        f9aed6605b81                           "/dashboard --insecu…"   About a minute ago   Up About a minute                       k8s_kubernetes-dashboard_kubernetes-dashboard-57df4db6b-r6frn_kube-system_b109bfb9-7ed2-11e9-8e39-000af7ba5220_0
7d78c4b54f7c        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kubernetes-dashboard-57df4db6b-r6frn_kube-system_b109bfb9-7ed2-11e9-8e39-000af7ba5220_0
bbad70b4b356        2b37f252629b                           "/entrypoint.sh /ngi…"   About a minute ago   Up About a minute                       k8s_nginx-ingress-controller_nginx-ingress-controller-8rpbr_ingress-nginx_accc7702-7ed2-11e9-8e39-000af7ba5220_0
4493670ca7fd        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_nginx-ingress-controller-8rpbr_ingress-nginx_accc7702-7ed2-11e9-8e39-000af7ba5220_0
85afe5246670        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_default-http-backend-78fccfc5d9-9z5zk_ingress-nginx_acce1d01-7ed2-11e9-8e39-000af7ba5220_0
a1f8a63e2eed        10ea5f40b581                           "/sidecar --v=2 --lo…"   About a minute ago   Up About a minute                       k8s_sidecar_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
6aaf0d562aa5        5427e2ee0767                           "/dnsmasq-nanny -v=2…"   About a minute ago   Up About a minute                       k8s_dnsmasq_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
84915671bf67        rancher/metrics-server-amd64           "/metrics-server --k…"   About a minute ago   Up About a minute                       k8s_metrics-server_metrics-server-58bd5dd8d7-vd4dx_kube-system_a94a8ff8-7ed2-11e9-8e39-000af7ba5220_0
c0e0bf270d3a        6e3a56d0cb18                           "/kube-dns --domain=…"   About a minute ago   Up About a minute                       k8s_kubedns_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
ef0419a360fb        e183460c484d                           "/cluster-proportion…"   About a minute ago   Up About a minute                       k8s_autoscaler_kube-dns-autoscaler-77bc5fd84-xp2vh_kube-system_a668cf25-7ed2-11e9-8e39-000af7ba5220_0
5de28226a263        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_metrics-server-58bd5dd8d7-vd4dx_kube-system_a94a8ff8-7ed2-11e9-8e39-000af7ba5220_0
ed8844559c71        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-dns-autoscaler-77bc5fd84-xp2vh_kube-system_a668cf25-7ed2-11e9-8e39-000af7ba5220_0
c6e1fa7d4f55        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
c96f2c2cf2be        f0fad859c909                           "/opt/bin/flanneld -…"   About a minute ago   Up About a minute                       k8s_kube-flannel_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
ec38b1c77cf1        a89b45f36d5e                           "start_runit"            About a minute ago   Up About a minute                       k8s_calico-node_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
3fe44dad37b0        rancher/pause:3.1                      "/pause"                 About a minute ago   Up About a minute                       k8s_POD_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
ab7d47948641        rancher/hyperkube:v1.13.5-rancher1     "/opt/rke-tools/entr…"   About a minute ago   Up About a minute                       kube-proxy
283262eb0683        rancher/hyperkube:v1.13.5-rancher1     "/opt/rke-tools/entr…"   About a minute ago   Up About a minute                       kubelet
67b65f860652        rancher/hyperkube:v1.13.5-rancher1     "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes                            kube-scheduler
098ca3d38c28        rancher/hyperkube:v1.13.5-rancher1     "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes                            kube-controller-manager
ec3fb2ab2d56        rancher/hyperkube:v1.13.5-rancher1     "/opt/rke-tools/entr…"   2 minutes ago        Up 2 minutes                            kube-apiserver
8d1fef7da1b6        rancher/rke-tools:v0.1.27              "/opt/rke-tools/rke-…"   2 minutes ago        Up 2 minutes                            etcd-rolling-snapshots
ce8e5484a1b2        rancher/coreos-etcd:v3.2.24-rancher1   "/usr/local/bin/etcd…"   2 minutes ago        Up 2 minutes                            etcd

kubectl works, life is good.

So, I want to destroy the cluster and thus issue the destroy command (shown with DEBUG enabled):

[...]
2019/05/25 11:55:36 [INFO] backend/local: apply calling Apply
2019/05/25 11:55:36 [INFO] terraform: building graph: GraphTypeApply
2019/05/25 11:55:36 [DEBUG] adding implicit provider configuration provider.local, implied first by local_file.kube_cluster_yaml (destroy)
2019/05/25 11:55:36 [DEBUG] adding implicit provider configuration provider.rke, implied first by rke_cluster.cluster
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "rke_cluster.cluster (destroy)" (*terraform.NodeDestroyResourceInstance) needs provider.rke
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "local_file.kube_cluster_yaml" (*terraform.NodeAbstractResourceInstance) needs provider.local
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "local_file.kube_cluster_yaml (destroy)" (*terraform.NodeDestroyResourceInstance) needs provider.local
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "provider.rke" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "local_file.kube_cluster_yaml" references: [rke_cluster.cluster rke_cluster.cluster]
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "local_file.kube_cluster_yaml (destroy)" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "rke_cluster.cluster" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "rke_cluster.cluster (destroy)" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "provider.local" references: []
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "local_file.kube_cluster_yaml (destroy)" (*terraform.NodeDestroyResourceInstance) needs provider.local
2019/05/25 11:55:36 [TRACE] ProviderTransformer: exact match for provider.rke serving rke_cluster.cluster (destroy)
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "rke_cluster.cluster (destroy)" (*terraform.NodeDestroyResourceInstance) needs provider.rke
2019/05/25 11:55:36 [TRACE] ProviderTransformer: exact match for provider.rke serving rke_cluster.cluster (prepare state)
2019/05/25 11:55:36 [DEBUG] ProviderTransformer: "rke_cluster.cluster (prepare state)" (*terraform.NodeApplyableResource) needs provider.rke
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "rke_cluster.cluster (destroy)" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "provider.rke" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "provider.local" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "rke_cluster.cluster (prepare state)" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "local_file.kube_cluster_yaml (prepare state)" references: []
2019/05/25 11:55:36 [DEBUG] ReferenceTransformer: "local_file.kube_cluster_yaml (destroy)" references: []
former
2019/05/25 11:55:36 [DEBUG] Starting graph walk: walkDestroy
2019-05-25T11:55:36.609+0200 [INFO]  plugin: configuring client automatic mTLS
2019-05-25T11:55:36.641+0200 [DEBUG] plugin: starting plugin: path=/home/test/test-tf/.terraform/plugins/linux_amd64/terraform-provider-local_v1.2.2_x4 args=[/home/test/test-tf/.terraform/plugins/linux_amd64/terraform-provider-local_v1.2.
2019-05-25T11:55:36.644+0200 [DEBUG] plugin: plugin started: path=/home/test/test-tf/.terraform/plugins/linux_amd64/terraform-provider-local_v1.2.2_x4 pid=17080
2019-05-25T11:55:36.644+0200 [DEBUG] plugin: waiting for RPC address: path=/home/test/test-tf/.terraform/plugins/linux_amd64/terraform-provider-local_v1.2.2_x4
2019-05-25T11:55:36.693+0200 [INFO]  plugin.terraform-provider-local_v1.2.2_x4: configuring server automatic mTLS: timestamp=2019-05-25T11:55:36.693+0200
2019-05-25T11:55:36.737+0200 [DEBUG] plugin.terraform-provider-local_v1.2.2_x4: plugin address: address=/tmp/plugin452432771 network=unix timestamp=2019-05-25T11:55:36.737+0200
2019-05-25T11:55:36.737+0200 [DEBUG] plugin: using plugin: version=5
2019-05-25T11:55:36.737+0200 [INFO]  plugin: configuring client automatic mTLS
2019-05-25T11:55:36.777+0200 [DEBUG] plugin: starting plugin: path=/home/test/.terraform.d/plugins/terraform-provider-rke_v0.11.1 args=[/home/test/.terraform.d/plugins/terraform-provider-rke_v0.11.1]
2019-05-25T11:55:36.778+0200 [DEBUG] plugin: plugin started: path=/home/test/.terraform.d/plugins/terraform-provider-rke_v0.11.1 pid=17095
2019-05-25T11:55:36.778+0200 [DEBUG] plugin: waiting for RPC address: path=/home/test/.terraform.d/plugins/terraform-provider-rke_v0.11.1
2019-05-25T11:55:36.802+0200 [INFO]  plugin.terraform-provider-rke_v0.11.1: configuring server automatic mTLS: timestamp=2019-05-25T11:55:36.802+0200
2019-05-25T11:55:36.836+0200 [DEBUG] plugin.terraform-provider-rke_v0.11.1: plugin address: address=/tmp/plugin085325074 network=unix timestamp=2019-05-25T11:55:36.836+0200
2019-05-25T11:55:36.836+0200 [DEBUG] plugin: using plugin: version=5
local_file.kube_cluster_yaml: Destroying... [id=cb08f485f418840751c97292f139c240b5bae03b]
local_file.kube_cluster_yaml: Destruction complete after 0s
2019-05-25T11:55:36.887+0200 [DEBUG] plugin: plugin process exited: path=/home/test/test-tf/.terraform/plugins/linux_amd64/terraform-provider-local_v1.2.2_x4 pid=17080
2019-05-25T11:55:36.887+0200 [DEBUG] plugin: plugin exited
2019/05/25 11:55:36 [DEBUG] rke_cluster.cluster: applying the planned Delete change
2019/05/25 11:55:36 [TRACE] GRPCProvider: ApplyResourceChange
rke_cluster.cluster: Destroying... [id=192.168.1.1]
rke_cluster.cluster: Destruction complete after 8s
2019-05-25T11:55:45.392+0200 [DEBUG] plugin: plugin process exited: path=/home/test/.terraform.d/plugins/terraform-provider-rke_v0.11.1 pid=17095
2019-05-25T11:55:45.392+0200 [DEBUG] plugin: plugin exited
2019/05/25 11:55:45 [TRACE] [walkDestroy] Exiting eval tree: provider.rke (close)
2019/05/25 11:55:45 [TRACE] vertex "provider.rke (close)": visit complete

Destroy complete! Resources: 2 destroyed.

The destroy is reported as successful, and indeed port 6443 is no longer reachable, but when I check the Docker container list on the remote server, it is still quite full:

cluster-host$ docker ps
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
721e4316d15a        846921f0fe0e                   "/server"                7 minutes ago       Up 6 minutes                            k8s_default-http-backend_default-http-backend-78fccfc5d9-9z5zk_ingress-nginx_acce1d01-7ed2-11e9-8e39-000af7ba5220_0
0a09d7626b99        f9aed6605b81                   "/dashboard --insecu…"   7 minutes ago       Up 7 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-57df4db6b-r6frn_kube-system_b109bfb9-7ed2-11e9-8e39-000af7ba5220_0
7d78c4b54f7c        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-57df4db6b-r6frn_kube-system_b109bfb9-7ed2-11e9-8e39-000af7ba5220_0
bbad70b4b356        2b37f252629b                   "/entrypoint.sh /ngi…"   7 minutes ago       Up 7 minutes                            k8s_nginx-ingress-controller_nginx-ingress-controller-8rpbr_ingress-nginx_accc7702-7ed2-11e9-8e39-000af7ba5220_0
4493670ca7fd        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_nginx-ingress-controller-8rpbr_ingress-nginx_accc7702-7ed2-11e9-8e39-000af7ba5220_0
85afe5246670        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_default-http-backend-78fccfc5d9-9z5zk_ingress-nginx_acce1d01-7ed2-11e9-8e39-000af7ba5220_0
a1f8a63e2eed        10ea5f40b581                   "/sidecar --v=2 --lo…"   7 minutes ago       Up 7 minutes                            k8s_sidecar_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
6aaf0d562aa5        5427e2ee0767                   "/dnsmasq-nanny -v=2…"   7 minutes ago       Up 7 minutes                            k8s_dnsmasq_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
84915671bf67        rancher/metrics-server-amd64   "/metrics-server --k…"   7 minutes ago       Up 7 minutes                            k8s_metrics-server_metrics-server-58bd5dd8d7-vd4dx_kube-system_a94a8ff8-7ed2-11e9-8e39-000af7ba5220_0
c0e0bf270d3a        6e3a56d0cb18                   "/kube-dns --domain=…"   7 minutes ago       Up 7 minutes                            k8s_kubedns_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
ef0419a360fb        e183460c484d                   "/cluster-proportion…"   7 minutes ago       Up 7 minutes                            k8s_autoscaler_kube-dns-autoscaler-77bc5fd84-xp2vh_kube-system_a668cf25-7ed2-11e9-8e39-000af7ba5220_0
5de28226a263        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_metrics-server-58bd5dd8d7-vd4dx_kube-system_a94a8ff8-7ed2-11e9-8e39-000af7ba5220_0
ed8844559c71        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-autoscaler-77bc5fd84-xp2vh_kube-system_a668cf25-7ed2-11e9-8e39-000af7ba5220_0
c6e1fa7d4f55        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-58bd5b8dd7-bjqqm_kube-system_a5d6e90a-7ed2-11e9-8e39-000af7ba5220_0
c96f2c2cf2be        f0fad859c909                   "/opt/bin/flanneld -…"   7 minutes ago       Up 7 minutes                            k8s_kube-flannel_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
ec38b1c77cf1        a89b45f36d5e                   "start_runit"            7 minutes ago       Up 7 minutes                            k8s_calico-node_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
3fe44dad37b0        rancher/pause:3.1              "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_canal-nvtb7_kube-system_a2dc6302-7ed2-11e9-8e39-000af7ba5220_0
8d1fef7da1b6        rancher/rke-tools:v0.1.27      "/opt/rke-tools/rke-…"   8 minutes ago       Up 8 minutes                            etcd-rolling-snapshots

My understanding is that destroy should remove everything from the cluster.

Below is my main.tf, nothing fancy:

resource "rke_cluster" "cluster" {
  nodes {
    address = "192.168.1.1"
    user    = "platform"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = "${file("~/.ssh/id_ed25519")}"
  }
}

###############################################################################
# If you need kubeconfig.yml for using kubectl, please uncomment follows.
###############################################################################
resource "local_file" "kube_cluster_yaml" {
  filename = format("%s/%s", path.root, "kube_config_cluster.yml")
  content = rke_cluster.cluster.kube_config_yaml
}

Terraform and provider versions:

$ ./bin/terraform -v
Terraform v0.12.0
+ provider.local v1.2.2
+ provider.rke v0.11.1

Docker version on cluster-host: 18.09.6, build 481bc77156.

Thanks!
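
In the meantime, the leftover containers can be removed by hand. Below is a minimal sketch of a destroy-time cleanup, assuming Terraform 0.12, the null provider, and the same connection details as the node block above; this is a workaround, not something the provider does itself:

resource "null_resource" "node_cleanup" {
  triggers = {
    node = "192.168.1.1"
  }

  connection {
    host        = "192.168.1.1"
    user        = "platform"
    private_key = file("~/.ssh/id_ed25519")
  }

  # Runs on terraform destroy and force-removes whatever Kubernetes containers
  # and volumes are still present on the node.
  provisioner "remote-exec" {
    when = destroy

    inline = [
      "docker rm -f $(docker ps -qa) || true",
      "docker volume rm $(docker volume ls -q) || true",
    ]
  }
}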

Network always changes when applying

Every time I run apply, I get this change:

      cloud_provider.0.vsphere_cloud_config.0.network.#: "1" => "0"
      network.0.options.%:                               "1" => "0"
      network.0.options.canal_flannel_backend_type:      "vxlan" => ""

Do you know why and if it's possible to turn it off?
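
A possible workaround, not a confirmed fix: declare the drifting attributes explicitly so the configuration matches what the provider keeps in state. The attribute names below come from the plan output above; the plugin name and block layout are assumptions about the rest of the configuration:

resource "rke_cluster" "cluster" {
  # ... nodes, cloud_provider, etc. unchanged ...

  network {
    plugin = "canal"

    options = {
      canal_flannel_backend_type = "vxlan"
    }
  }
}

If that does not help, a lifecycle block with ignore_changes on the affected attributes silences the diff without addressing the cause.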

Dynamic node table

Hi,

great work with the provider =D.

Did you find a way to have a dynamic number of nodes for this field?
nodes = [
  {
    address      = "1.1.1.1"
    user         = "ubuntu"
    role         = ["controlplane", "etcd"]
    ssh_key_path = "~/.ssh/id_rsa"
    port         = 2222
  }
]
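
One approach, borrowed from the bastion_host report further down in this tracker: generate the node definitions with count and feed them to nodes_conf through the rke_node_parameter data source instead of a hard-coded nodes list. A sketch, assuming Terraform 0.11 and that the data source mirrors the node attributes:

variable "node_ips" {
  type = "list"
}

data "rke_node_parameter" "nodes" {
  count = "${length(var.node_ips)}"

  address      = "${element(var.node_ips, count.index)}"
  user         = "ubuntu"
  role         = ["controlplane", "etcd"]
  ssh_key_path = "~/.ssh/id_rsa"
}

resource "rke_cluster" "cluster" {
  # One JSON document per node, scaling with the length of var.node_ips.
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]
}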

bastion_host looks for ~/.ssh/id_rsa with ssh_agent_auth = true

I'm using bastion_host with ssh_agent_auth = true but the provider still looks for ~/.ssh/id_rsa. Log:

data.rke_node_parameter.nodes[2]: Refreshing state...
data.rke_node_parameter.nodes[1]: Refreshing state...
data.rke_node_parameter.nodes[0]: Refreshing state...
rke_cluster.cluster: Refreshing state... (ID: 10.22.141.17)

Error: Error refreshing state: 1 error(s) occurred:

* module.rancher_cluster.rke_cluster.cluster: 1 error(s) occurred:

* module.rancher_cluster.rke_cluster.cluster: rke_cluster.cluster: Error while reading SSH key file: open /home/yvespp/.ssh/id_rsa: no such file or directory

When I touch ~/.ssh/id_rsa it works, but subsequent runs of terraform apply show changes in bastion_host:

Terraform will perform the following actions:
  ~ module.rancher_cluster.rke_cluster.cluster
      bastion_host.0.port:         "22" => "0"
      bastion_host.0.ssh_key_path: "~/.ssh/id_rsa" => ""

Config:

resource rke_cluster "cluster" {
  nodes_conf = ["${data.rke_node_parameter.nodes.*.json}"]
  ssh_agent_auth = true
  cluster_name = "${var.cluster_name}"
  kubernetes_version = "${var.kubernetes_version}"
  bastion_host = {
    address        = "${vsphere_virtual_machine.vm.*.default_ip_address[0]}"
    ssh_agent_auth = true
    user           = "rancher"
  }

  private_registries = {
    is_default = true
    password = "pw"
    url = "docker-registry.mycorp.com"
    user = "yvespp"
  }
  services_kube_api = {
    service_cluster_ip_range = "172.23.0.0/16"
  }
  services_kube_controller = {
    cluster_cidr = "172.28.0.0/14"
    service_cluster_ip_range = "172.23.0.0/16"
  }
  services_kubelet = {
    cluster_dns_server = "172.23.0.10"
  }
}
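
Until the drift is fixed in the provider, a possible stop-gap is to ignore the affected attributes; this assumes the noise is purely cosmetic, and it will also hide real changes to bastion_host:

resource rke_cluster "cluster" {
  # ... configuration as above ...

  lifecycle {
    ignore_changes = ["bastion_host"]
  }
}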

private_registries not working

What am I doing wrong?

  private_registries = {
    url      = "xxx"
    user     = "xxx"
    password = "xxx"
  }

Terraform 0.11.7, provider rke v0.4
Rancher 2.0.7

(screenshot attached)
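
For comparison, the registry block used in the bastion_host report above also sets is_default and sits inside the rke_cluster resource; whether is_default is needed on provider v0.4 is an assumption:

resource rke_cluster "cluster" {
  # ...

  private_registries = {
    url        = "xxx"
    user       = "xxx"
    password   = "xxx"
    is_default = true
  }
}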

Critical Kubernetes CVE-2018-1002105 vulnerability

This vulnerability allows specially crafted requests to establish a connection through the Kubernetes API server to backend servers (such as aggregated API servers and kubelets), then send arbitrary requests over the same connection directly to the backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection. kubernetes/kubernetes#71411

Fixed with new hyperkube images in RKE 0.1.13-rc1.

https://discuss.kubernetes.io/t/kubernetes-security-announcement-v1-10-11-v1-11-5-v1-12-3-released-to-address-cve-2018-1002105/3700

https://github.com/rancher/rke/blob/v0.1.13-rc1/vendor/github.com/rancher/types/apis/management.cattle.io/v3/k8s_defaults.go
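
A minimal upgrade sketch, not an official procedure: once a provider build based on RKE >= 0.1.13 is available, pinning kubernetes_version to one of the patched releases named in the announcement (v1.10.11 / v1.11.5 / v1.12.3) pulls in the fixed hyperkube images. The exact tag string is an assumption; check the k8s_defaults.go file linked above:

resource rke_cluster "cluster" {
  # Set to the patched tag listed in k8s_defaults.go for your RKE version.
  kubernetes_version = "${var.kubernetes_version}"

  # ... nodes and other settings unchanged ...
}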
