
terraform-azurerm-computegroup

Deploys a group of Virtual Machines exposed to a public IP via a Load Balancer


This Terraform module deploys a Virtual Machine Scale Set in Azure, opens the specified ports on the load balancer for external access, and returns the ID of the VM scale set deployed.

This module requires a network and a load balancer to be provided separately. You can provision them with the "Azure/network/azurerm" and "Azure/loadbalancer/azurerm" modules.

Usage

Using the vm_os_simple:

provider "azurerm" {
  version = "~> 1.0"
}

variable "resource_group_name" {
    default = "terraform-test"
}

module "network" {
    source              = "Azure/network/azurerm"
    location            = "westus"
    resource_group_name = "${var.resource_group_name}"
  }

module "loadbalancer" {
  source              = "Azure/loadbalancer/azurerm"
  resource_group_name = "${var.resource_group_name}"
  location            = "westus"
  prefix              = "terraform-test"
  lb_port             = {
                          http  = ["80", "Tcp", "80"]
                          https = ["443", "Tcp", "443"]
                          #ssh   = ["22", "Tcp", "22"]
                        }
}

module "computegroup" {
    source              = "Azure/computegroup/azurerm"
    resource_group_name = "${var.resource_group_name}"
    location            = "westus"
    vm_size             = "Standard_A0"
    admin_username      = "azureuser"
    admin_password      = "ComplexPassword"
    ssh_key             = "~/.ssh/id_rsa.pub"
    nb_instance         = 2
    vm_os_simple        = "UbuntuServer"
    vnet_subnet_id      = "${module.network.vnet_subnets[0]}"
    load_balancer_backend_address_pool_ids = "${module.loadbalancer.azurerm_lb_backend_address_pool_id}"
    cmd_extension       = "sudo apt-get -y install nginx"
    tags                = {
                            environment = "dev"
                            costcenter  = "it"
                          }
}

output "vmss_id"{
  value = "${module.computegroup.vmss_id}"
}

Using the vm_os_publisher, vm_os_offer and vm_os_sku:

provider "azurerm" {
  version = "~> 1.0"
}

variable "resource_group_name" {
    default = "terraform-test"
}

module "network" {
    source              = "Azure/network/azurerm"
    location            = "westus"
    resource_group_name = "${var.resource_group_name}"
  }

module "loadbalancer" {
  source              = "Azure/loadbalancer/azurerm"
  resource_group_name = "${var.resource_group_name}"
  location            = "westus"
  prefix              = "terraform-test"
  lb_port             = {
                          http  = ["80", "Tcp", "80"]
                          https = ["443", "Tcp", "443"]
                          #ssh   = ["22", "Tcp", "22"]
                        }
}

module "computegroup" {
    source              = "Azure/computegroup/azurerm"
    resource_group_name = "${var.resource_group_name}"
    location            = "westus"
    vm_size             = "Standard_A0"
    admin_username      = "azureuser"
    admin_password      = "ComplexPassword"
    ssh_key             = "~/.ssh/id_rsa.pub"
    nb_instance         = 2
    vm_os_publisher     = "Canonical"
    vm_os_offer         = "UbuntuServer"
    vm_os_sku           = "14.04.2-LTS"
    vnet_subnet_id      = "${module.network.vnet_subnets[0]}"
    load_balancer_backend_address_pool_ids = "${module.loadbalancer.azurerm_lb_backend_address_pool_id}"
    cmd_extension       = "sudo apt-get -y install nginx"
    tags                = {
                            environment = "dev"
                            costcenter  = "it"
                          }
}

output "vmss_id"{
  value = "${module.computegroup.vmss_id}"
}

The module does not expose direct access to each node of the VM scale set, for security reasons. The following example shows how to use the compute group module together with a jumpbox machine.

provider "azurerm" {
  version = "~> 1.0"
}

variable "resource_group_name" {
    default = "jumpbox-test"
}

variable "location" {
    default = "westus"
}

module "network" {
    source = "Azure/network/azurerm"
    location = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
  }

module "loadbalancer" {
  source = "Azure/loadbalancer/azurerm"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  prefix              = "terraform-test"
  lb_port             = {
                          http  = ["80", "Tcp", "80"]
                          https = ["443", "Tcp", "443"]
                        }
}

module "computegroup" {
    source              = "Azure/computegroup/azurerm"
    resource_group_name = "${var.resource_group_name}"
    location            = "${var.location}"
    vm_size             = "Standard_DS1_v2"
    admin_username      = "azureuser"
    admin_password      = "ComplexPassword"
    ssh_key             = "~/.ssh/id_rsa.pub"
    nb_instance         = 2
    vm_os_publisher     = "Canonical"
    vm_os_offer         = "UbuntuServer"
    vm_os_sku           = "16.04-LTS"
    vnet_subnet_id      = "${module.network.vnet_subnets[0]}"
    load_balancer_backend_address_pool_ids = "${module.loadbalancer.azurerm_lb_backend_address_pool_id}"
    cmd_extension       = "sudo apt-get -y install nginx"
    tags                = {
                            environment = "codelab"
                          }
}

resource "azurerm_public_ip" "jumpbox" {
  name                         = "jumpbox-public-ip"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  domain_name_label            = "${var.resource_group_name}-ssh"
  depends_on                   = ["module.network"]
  tags {
    environment = "codelab"
  }
}

resource "azurerm_network_interface" "jumpbox" {
  name                = "jumpbox-nic"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  ip_configuration {
    name                          = "IPConfiguration"
    subnet_id                     = "${module.network.vnet_subnets[0]}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.jumpbox.id}"
  }

  tags {
    environment = "codelab"
  }
}

resource "azurerm_virtual_machine" "jumpbox" {
  name                  = "jumpbox"
  location              = "${var.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = ["${azurerm_network_interface.jumpbox.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "jumpbox-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "jumpbox"
    admin_username = "azureuser"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      path     = "/home/azureuser/.ssh/authorized_keys"
      key_data = "${file("~/.ssh/id_rsa.pub")}"
    }
  }

  tags {
    environment = "codelab"
  }
}

Test

Configurations

We provide two ways to build, run, and test the module on a local development machine: native (Mac/Linux) or Docker.

Native (Mac/Linux)

Prerequisites

Environment setup

We provide a simple script to quickly set up the module development environment:

$ curl -sSL https://raw.githubusercontent.com/Azure/terramodtest/master/tool/env_setup.sh | sudo bash

Run test

Then run the following in a local shell:

$ cd $GOPATH/src/{directory_name}/
$ bundle install
$ rake build
$ rake e2e

Docker

We provide a Dockerfile to build a new image based on the microsoft/terraform-test Docker Hub image, adding tools and packages specific to this module (see the Custom Image section below). Alternatively, you can use the microsoft/terraform-test Docker Hub image directly by following these instructions.

Prerequisites

Custom Image

This builds the custom image:

$ docker build --build-arg BUILD_ARM_SUBSCRIPTION_ID=$ARM_SUBSCRIPTION_ID --build-arg BUILD_ARM_CLIENT_ID=$ARM_CLIENT_ID --build-arg BUILD_ARM_CLIENT_SECRET=$ARM_CLIENT_SECRET --build-arg BUILD_ARM_TENANT_ID=$ARM_TENANT_ID -t azure-computegroup .

This runs the build and unit tests:

$ docker run --rm azure-computegroup /bin/bash -c "bundle install && rake build"

This runs the end-to-end tests:

$ docker run --rm azure-computegroup /bin/bash -c "bundle install && rake e2e"

This runs the full tests:

$ docker run --rm azure-computegroup /bin/bash -c "bundle install && rake full"

Authors

Originally created by Damien Caro

License

MIT

terraform-azurerm-computegroup's People

Contributors

dcaro, dtzar, foreverxzc, msftgits, vaijanathb


terraform-azurerm-computegroup's Issues

Replace Travis AZURE Secrets

Currently, the rake e2e Travis job is set up to use my personal MSFT credentials (in secure ENV variables); this should be replaced with the engineering team's Azure credentials.

Rake e2e tests fail

When you run rake e2e, the tests fail with:

$$$$$$ Running command `terraform destroy -force -lock=true -lock-timeout=0s -input=false -no-color -parallelism=1 -refresh=true  -var-file=/tf-test/module/test/integration/fixtures/testing.tfvars /tf-test/module`    
       Error: Required variable not set: load_balancer_backend_address_pool_ids    
       Error: Required variable not set: vnet_subnet_id

These cannot be supplied directly, since they are outputs from other modules. The module code itself works flawlessly, so the test framework / code needs to be updated to accommodate this scenario.
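One possible direction is a fixture that provisions the network and load balancer itself and wires their outputs straight into the module under test, so neither ID has to come from a tfvars file. This is only a minimal sketch, not the repository's actual test layout; the fixture path, relative module source, and resource group name are assumptions.

# test/fixtures/computegroup/main.tf (hypothetical fixture)
module "network" {
  source              = "Azure/network/azurerm"
  location            = "westus"
  resource_group_name = "terraform-ci-test"
}

module "loadbalancer" {
  source              = "Azure/loadbalancer/azurerm"
  location            = "westus"
  resource_group_name = "terraform-ci-test"
  prefix              = "terraform-ci-test"
}

module "computegroup" {
  source              = "../../.."   # module under test
  resource_group_name = "terraform-ci-test"
  location            = "westus"
  vm_size             = "Standard_DS1_v2"
  admin_username      = "azureuser"
  admin_password      = "ComplexPassword"
  ssh_key             = "~/.ssh/id_rsa.pub"
  nb_instance         = 2
  vm_os_simple        = "UbuntuServer"
  vnet_subnet_id      = "${module.network.vnet_subnets[0]}"
  load_balancer_backend_address_pool_ids = "${module.loadbalancer.azurerm_lb_backend_address_pool_id}"
  cmd_extension       = "echo hello"
}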

ip_configuration requires primary field

The latest version of the virtual machine scale set resource requires the primary setting within the ip_configuration. The current main.tf has this:
network_profile {
  name    = "${var.network_profile}"
  primary = true

  ip_configuration {
    name                                   = "IPConfiguration"
    subnet_id                              = "${var.vnet_subnet_id}"
    load_balancer_backend_address_pool_ids = ["${var.load_balancer_backend_address_pool_ids}"]
  }
}

And that fails on plan with the error:
Error: module.computegroup.azurerm_virtual_machine_scale_set.vm-linux: "network_profile.0.ip_configuration.0.primary": required field is not set

If I change the following to this:
network_profile {
  name    = "${var.network_profile}"
  primary = true

  ip_configuration {
    name                                   = "IPConfiguration"
    subnet_id                              = "${var.vnet_subnet_id}"
    primary                                = true
    load_balancer_backend_address_pool_ids = ["${var.load_balancer_backend_address_pool_ids}"]
  }
}

The plan runs without error.

Loadbalancer module remote port not working

When you use the official azurerm loadbalancer module with a remote port, i.e.

"remote_port" {
    ssh = ["Tcp", "22"]
  }

This creates a NAT rule from 50001 --> 22 for SSH, but the rule does not actually enable SSH access to the VM (the connection times out). When you look at the configuration in the portal, you can see that the rule is in place, but it doesn't have a target:

[Screenshot: the inbound NAT rule shown in the Azure portal with no target instance.]

Unsure whether this is a problem with the loadbalancer module, VMSS, or something else, but filing it here.

Using LBs created with https://github.com/Azure/terraform-azurerm-loadbalancer error...

I'm a new-ish TFE user with a problem that seems to point to either some validation inside the computegroup module being wrong, or possibly the azurerm-loadbalancer output sending the wrong data. Sorry if this is in the wrong place. Here's my error:

azurerm_virtual_machine_scale_set.sf_ss1: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidRequestFormat" Message="Cannot parse the request." Details=[{"code":"InvalidJsonReferenceWrongType","message":"Reference Id /subscriptions/[redacted]/resourceGroups/terraformdeploy-rg/providers/Microsoft.Network/loadBalancers/test_lb1-lb/inboundNatRules/VM-0 is referencing resource of a wrong type. The Id is expected to reference resources of type loadBalancers/backendAddressPools. Path Properties.UpdateGroups[0].NetworkProfile.networkInterfaceConfigurations[0].properties.ipConfigurations[0].properties.loadBalancerBackendAddressPools[0]."},{"code":"InvalidJsonReferenceWrongType","message":"Reference Id /subscriptions/[redacted]/resourceGroups/terraformdeploy-rg/providers/Microsoft.Network/loadBalancers/test_lb1-lb/inboundNatRules/VM-1 is referencing resource of a wrong type. The Id is expected to reference resources of type loadBalancers/backendAddressPools. Path Properties.UpdateGroups[0].NetworkProfile.networkInterfaceConfigurations[0].properties.ipConfigurations[0].properties.loadBalancerBackendAddressPools[1]."},{"code":"InvalidJsonReferenceWrongType","message":"Reference Id /subscriptions/[redacted]/resourceGroups/terraformdeploy-rg/providers/Microsoft.Network/loadBalancers/test_lb1-lb/backendAddressPools/BackEndAddressPool is referencing resource of a wrong type. The Id is expected to reference resources of type loadBalancers/inboundNatPools. Path Properties.UpdateGroups[0].NetworkProfile.networkInterfaceConfigurations[0].properties.ipConfigurations[0].properties.loadBalancerInboundNatPools[0]."}]

You can see where the provided inboundNatRules is definitely not of type backendAddressPools. Here's the bit of code from my resource in question, referencing the output from the lb module:

load_balancer_backend_address_pool_ids = ["${module.sf_lb1.azurerm_lb_backend_address_pool_id}"]
load_balancer_inbound_nat_rules_ids    = ["${module.sf_lb1.azurerm_lb_nat_rule_ids}"]

I'm at a loss where to go from here - any pointers / help would be appreciated.
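The error message says the IDs passed into the scale set are expected to reference loadBalancers/inboundNatPools, while the loadbalancer module emits per-VM inbound NAT rule IDs, which a scale set cannot consume. One possible workaround is sketched below; it is only an illustration, and the resource name, ports, frontend configuration name, and the azurerm_lb_id output are assumptions rather than something this repository provides.

# Hypothetical NAT pool defined next to the loadbalancer module; a VM scale set
# expects inbound NAT pool IDs, not per-VM inbound NAT rule IDs.
resource "azurerm_lb_nat_pool" "ssh" {
  name                           = "ssh-nat-pool"
  resource_group_name            = "${var.resource_group_name}"
  loadbalancer_id                = "${module.sf_lb1.azurerm_lb_id}"  # assumes the module exposes the LB id
  protocol                       = "Tcp"
  frontend_port_start            = 50000
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = "PublicIPAddress"  # must match the LB's frontend IP configuration name
}

# Then pass the pool instead of the NAT rule IDs:
#   load_balancer_inbound_nat_rules_ids = ["${azurerm_lb_nat_pool.ssh.id}"]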

Error getting plugins: module root: module computegroup: cmd_extension is not a valid parameter

Hey,

$ terraform -v
Terraform v0.11.0
+ provider.azurerm v0.3.3

When using your examples from the README, terraform init fails with "cmd_extension is not a valid parameter".
After removing cmd_extension from module "computegroup", init finishes successfully,
but it still fails due to the network module (not related to your module).

This is the issue for the network module: Azure/terraform-azurerm-network#4

CustomScriptExtension fails for Windows VM

Set commandToExecute as follows:

settings = <<SETTINGS
    {
        "commandToExecute": "powershell Add-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features"
    }
    SETTINGS

This fails with the following message:

* azurerm_virtual_machine_scale_set.vm-windows: compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="VMExtensionProvisioningError" Message="VM has reported a failure when processing extension 'vmssextension'. Error message: "Invalid handler configuration. Exiting. Error Message: Expecting state 'Element'.. Encountered 'Text' with name '', namespace ''. "."

Leaving default script extension blank fails deployment

If you leave the cmd_extension value as a blank string, as it is by default, and do a deployment (tested at least with a CentOS custom image), then the deployment fails with:

 "VM has reported a failure when processing extension 'vmssextension'. Error message: "Malformed status file [ExtensionError] Invalid status/status: failed"."

Specifying a simple value such as echo hello for this variable makes the deployment succeed.

Recommendation: either split the code into entirely separate blocks that do and do not use the script extension, or make the script extension a required input for this module.
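Until one of those changes lands, a minimal workaround sketch (reusing the other inputs from the README examples, which are omitted here) is to always pass a harmless command:

module "computegroup" {
  source = "Azure/computegroup/azurerm"
  # ... other inputs as in the README examples ...

  # Work around the failure: never leave cmd_extension as an empty string.
  cmd_extension = "echo hello"
}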

Connect via SSH

Hi,

I'm trying to use this Terraform module.

I have a question: how do I access the VM using SSH?
I tried to connect using the public IP address but it doesn't work. Any idea?

Thanks.
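As noted above, the module intentionally does not expose the instances directly; access goes through a jumpbox (see the jumpbox example in the Usage section) and from there to each instance's private IP inside the subnet. A small sketch of a convenience output for the jumpbox endpoint, assuming the jumpbox resources from that example:

# Hypothetical output: connect with `ssh azureuser@<jumpbox_fqdn>`, then hop from
# the jumpbox to each scale set instance's private IP inside the subnet.
output "jumpbox_fqdn" {
  value = "${azurerm_public_ip.jumpbox.fqdn}"
}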
