
badgerati / fogg

PowerShell tool to aid and simplify the creation, deployment and provisioning of infrastructure in Azure

License: MIT License

PowerShell 100.00%
powershell azure automation deployment provision infrastructure azure-resource-manager devops arm virtual-machine

fogg's People

Contributors: badgerati

fogg's Issues

Ability to create/update a VM with additional data drives

This issue is to implement a new feature for VMs, which is to allow the template to define multiple additional data drives that can be attached to a VM.

Note: this will only create or update a VM to have these drives. It will not delete them.

example:

"template": [
    {
        "type": "vm",
        "drives": [
            {
                "type": "data",
                "name": "Logs",
                "letter": "F",
                "size": 512
            }
        ]
    }
]

The "size" is in GB, and the "letter" can only be between F and Y. If the drive letter already exists on the VM, deployment will fail.

The "name" will be the name of the drive on the VM. (ie: Logs (F:))

If we had to install the CustomScriptExtension, ensure it's uninstalled post-provision

There's a bug in the CustomScriptExtension where, if you have to restart a VM that has it installed, it will attempt to run the extension again and again.

This could lead to duplicate items, failures because software already installed, or other unexpected behaviour.

The fix here is to uninstall the CustomScriptExtension once provisioning of a VM is complete - but only if we had to originally install the extension.

Useful link: http://www.gi-architects.co.uk/2016/07/custom-script-extension-for-arm-vms-in-azure/

PowerShell to remove extension:

Remove-AzureRmVMCustomScriptExtension -ResourceGroupName $RGName -VMName $VMName -Name $ExtName -Force

Ability to use an array in a Foggfile for multiple builds at once

Using Fogg on the CLI or via a Foggfile, you can currently only deploy one Resource Group and its containing infrastructure at a time. This means that for a more complex infrastructure, you have to run Fogg once for each group you need creating.

This issue is to allow the Foggfile to contain an array of parameters, so that it can contain and deploy multiple groups at once.

A simple scenario could be one group for a VPN gateway and jump servers, and another group for your core web VMs.

When passing arguments to a custom script, the validator fails to realise a provisioner key exists

Say you have a custom script that accepts arguments, and you pass these arguments in on the provisioner list for a VM.

So you define the provisioner as:

"provisioners": {
    "example": "custom: .\\script.ps1"
}

and then you call the provisioner with:

"templates": [
    {
        "provisioner": [
            "example: argument1 | argument2"
        ]
    }
]

Then the check on the VM template to see if "example" exists will fail - probably because it's trying to find "example: arg..." rather than just "example".
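A likely fix is to split the provisioner reference on the first colon before doing the lookup; a minimal sketch, assuming $provisioners is the hashtable of defined provisioners (names are illustrative):

# "example: argument1 | argument2" -> check only the "example" key
$reference = 'example: argument1 | argument2'
$key = ($reference -split ':', 2)[0].Trim()

if (!$provisioners.ContainsKey($key)) {
    throw "Provisioner key '$key' does not exist"
}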

VPN certificate paths need to be fully resolved

When a certificate path for a VPN is supplied as a relative path, it needs to be fully resolved (Resolve-Path); otherwise the call to Azure to create the VPN will fail because it can't find the cert.
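A minimal sketch of the resolution step:

# resolve a possibly-relative certificate path to a full path before passing it to Azure
$certPath = (Resolve-Path -Path $certPath).Path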

When deploying VMs and VPNs, report the time taken for each

At the end of a full deployment, the total time taken is reported. For small templates this is fine, but if a template consists of a VPN and 10 VMs, having an approximate idea of the ETA for each resource would be ideal.

So for this issue, it would be useful to output the total time taken to deploy a VM or a VPN.
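A minimal sketch of the timing, using a Stopwatch around the existing deploy logic (the deploy step itself is a placeholder):

$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

# ... existing VM/VPN deployment logic ...

$stopwatch.Stop()
Write-Host "Deployment of '$vmName' took $($stopwatch.Elapsed)"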

Add in ability to configure WinRM on a VM

Add in the ability for Fogg to be able to configure WinRM on a VM. This will include:

  • Inbuilt custom provisioning script to enable PSRemoting, open port 5986, and create a self-signed cert (see the sketch after this list)
  • Inbuilt firewall rule for inbound traffic on 5986
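A rough sketch of what the inbuilt WinRM provisioning script could do on the guest (Fogg's actual script may differ):

# enable PowerShell remoting
Enable-PSRemoting -Force -SkipNetworkProfileCheck

# create a self-signed cert and an HTTPS WinRM listener on 5986
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My
New-WSManInstance -ResourceURI 'winrm/config/Listener' `
    -SelectorSet @{ Address = '*'; Transport = 'HTTPS' } `
    -ValueSet @{ Hostname = $env:COMPUTERNAME; CertificateThumbprint = $cert.Thumbprint }

# open the local firewall for inbound WinRM over HTTPS
New-NetFirewallRule -DisplayName 'WinRM HTTPS' -Direction Inbound -Protocol TCP -LocalPort 5986 -Action Allow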

Inbuilt firewall rule should allow for both ways, rather than just inbound

The default firewall rules at the moment only allow inbound traffic. This should be changed so that the default is inbound, but it can be specified to be in/out/both.

Maybe something like:

"firewall": {
    "https|out": true
}

which will allow traffic out to 443. Also: "https|in" and "https|both". If the pipe is not passed, in is assumed.

  • Inbound traffic will be source *:* and destination <vm_subnet>:<port>
  • Outbound traffic will be source <vm_subnet>:* and destination *:<port>

Virtual Machine names should ideally end with "-vm"

When creating subnets, vnets, NSGs, etc., they all have "-snet", "-vnet" or "-nsg" in their name; the same goes for availability sets, load balancers and IPs.

This issue is to make it so that VMs get the same treatment, meaning instead of getting a VM called "core-web1" you'll get "core-web-vm1".

If possible, throw in backwards compatibility.

Ability to have appending VM creation, rather than just fixed

When we create a template for a VM with a count of 1, it will ALWAYS create just 1 VM. If that VM exists, it updates the existing one and leaves it at that; i.e. there is always only 1 VM.

It would be cool to have an "append": true property on a VM template that creates an additional X VMs on each run, where X is the count. So with a "count" of 1, after 3 runs you would have 3 VMs rather than just 1.

This would be useful for regression/test environments where you can constantly spin up new VMs on each commit or deploy of a release - deleting the VMs when testing is done.

Of course, this dynamic creation will only apply to VMs, the RG, NICs, NSGs and VNETs will all still be treated like normal.

Also, when naming the VMs, the fixed way just appends the VM index to the name, like "test-vm1" and "test-vm2". Here that won't work, so we'll need to get the number of VMs with the base "test-vm" name first to work out the appended value.

A real bonus would be to make it figure out in-between values. So if we create vm1 then vm2, then delete vm1; instead of creating a conflicting vm2 or even a vm3, create a new vm1. Then, if 1 and 2 are still up, create a 3, and so on.
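A minimal sketch of working out the next free index for a base VM name, filling gaps first (names here are illustrative):

$baseName = 'test-vm'

# collect the numeric suffixes of existing VMs with this base name
$indexes = Get-AzureRmVM -ResourceGroupName $rgName |
    ForEach-Object { [regex]::Match($_.Name, "^$baseName(\d+)$") } |
    Where-Object { $_.Success } |
    ForEach-Object { [int]$_.Groups[1].Value }

# pick the first index not in use, so a deleted vm1 is recreated before a vm3 is added
$next = 1
while ($indexes -contains $next) { $next++ }
$newName = "$baseName$next"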

If VM cores exceed max, but we're only updating, then we need a -Force argument

The check for VM core usage exceeding the max limit stops the user from deploying VMs. However, if you re-run the script after a successful deploy, with the cores already exceeding the limit, it will fail - even though we aren't deploying anything more, just updating existing VMs.

This issue is to add a -Force flag which will ignore the limit and press on.

When setting up NSG port rules, remove ones not present in the template

When creating an NSG and configuring the port rules, rules are added but never removed. So if you have a template that configures 5 rules, then drops down to 4, then goes back up to 6 where the new last 2 differ from the previous 5th, the new 5th rule will fail because a rule with that priority already exists.

It would be ideal if when the rules were dropped down to 4, the older 5th one was removed from the NSG.

This could be achieved by only inspecting rules with a priority of 4095 or less (as this is the highest priority Azure lets you set up to, and the ones pre-configured by Azure are 65,000+).
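A rough sketch of pruning stale rules, assuming $templateRuleNames holds the rule names the template currently defines (illustrative, not Fogg's actual code):

$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Name $nsgName

# only consider user-configurable rules, leaving Azure's pre-configured 65,000+ ones alone
$stale = @($nsg.SecurityRules | Where-Object {
    $_.Priority -le 4095 -and ($templateRuleNames -notcontains $_.Name)
})

foreach ($rule in $stale) {
    $nsg = Remove-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $rule.Name
}

Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null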

Add in additional naming convention length checks

When creating IaaS in Azure, certain resources have naming length limits (for example, passwords must be 12-123 chars).

Fogg doesn't check naming conventions during validation, so this issue is to add in support for the most common ones:

  • Resource Group names (1-90)
  • VM names (1-15)
  • VM username (1-20)
  • VM password (12-123)
  • Storage Account names (3-24)

(for the username/password, if possible, ask 3 times if the requirements aren't met, then fail)

More can be added in later for things like availability sets.
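A minimal sketch of the kind of length checks validation could run, using the limits listed above (function and variable names are illustrative):

function Test-FoggNameLength {
    param([string]$Value, [int]$Min, [int]$Max, [string]$What)

    if ($Value.Length -lt $Min -or $Value.Length -gt $Max) {
        throw "$What must be between $Min and $Max characters (got $($Value.Length))"
    }
}

Test-FoggNameLength -Value $rgName -Min 1 -Max 90 -What 'Resource Group name'
Test-FoggNameLength -Value $vmName -Min 1 -Max 15 -What 'VM name'
Test-FoggNameLength -Value $saName -Min 3 -Max 24 -What 'Storage Account name'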

New inbuilt custom script for altering file/folder permissions

It would be useful to have an inbuilt custom provisioning script that would help to alter permissions on files and folders.

The script should accept arguments of path, user, access-level and allow/deny. This way the script can be called multiple times on a single VM template, just with different parameters.
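A rough sketch of what such a script could look like, using Windows ACLs (the parameter names are illustrative, not the final ones):

param(
    [string]$Path,
    [string]$User,
    [string]$Access = 'Modify',   # e.g. Read, Modify, FullControl
    [string]$Type = 'Allow'       # Allow or Deny
)

# build and apply an access rule for the given user on the path
# (the inheritance flags assume a folder; for a single file use 'None')
$acl = Get-Acl -Path $Path
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule($User, $Access, 'ContainerInherit,ObjectInherit', 'None', $Type)
$acl.SetAccessRule($rule)
Set-Acl -Path $Path -AclObject $acl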

When finished, Fogg should return an object with information of what was just deployed

Right now when Fogg finishes deploying your IaaS, the only "output" you get is a quick list of the Public IP Addresses for each VM that you may have deployed.

Fogg needs to return a more detailed object containing basic information of what was just deployed for each VM/VPN in the template. Things like:

  • Resource Group name
  • Virtual Network name and Address mask
  • Storage Account name
  • Details for each VM template object, and for each one of that type that was deployed
    • Such as name and private/public IP

An example could be:

$ex = @{
    'Error' = @{
        'Code' = 0;
        'Reason' = '';
    };
    'ResourceGroup' = @{
        'Name' = 'example-rg';
    };
    'VirtualNetwork' = @{
        'Name' = 'example-vnet';
        'AddressPrefix' = '10.1.0.0/16';
    };
    'StorageAccount' = @{
        'Name' = 'examplestdsa'
    };
    'VirtualMachineInfo' = @{
        'web' = @{
            'Subnet' = '10.1.0.0/24';
            'AvailabilitySet' = 'example-web-as';
            'LoadBalancer' = @{
                'Name' = 'example-web-lb';
                'PublicIP' = '52.101.205.32';
                'Port' = 443;
            };
            'VirtualMachines' = @(
                @{
                    'Name' = 'example-web1';
                    'State' = 'On';
                    'PrivateIP' = '10.1.0.1';
                    'PublicIP' = '52.139.128.96';
                };
                @{
                    'Name' = 'example-web2';
                    'State' = 'On';
                    'PrivateIP' = '10.1.0.2';
                    'PublicIP' = '52.112.134.97';
                };
            );
        };
    };
}

Ability to encrypt the Storage Account created

This is for the inclusion of a flag to enable the encryption of the Storage Account created by Fogg. By default the encryption will be disabled.

{
    "encrypt": true,
    "templates": [ ... ]
}

If the encrypt flag is true, the Storage Account is encrypted; otherwise it won't be.
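A rough sketch of the deployment side, assuming the "encrypt" flag has been read into a template object; the AzureRM call shown is one way to enable blob-service encryption, and Fogg's actual implementation may differ:

if ($template.encrypt -eq $true) {
    # enable encryption for the blob service on the Fogg storage account
    Set-AzureRmStorageAccount -ResourceGroupName $rgName -Name $saName -EnableEncryptionService Blob | Out-Null
}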

New Chocolatey provisioner that takes a list of software names to install

Create a new type of provisioner for installing software via Chocolatey. In principle this will just be a Custom Script, but instead of initialising it with a script path you pass it a comma-separated list of software names:

"provisioners": {
    "general-software": "choco: 7zip.install, visualstudiocode(1.12.2), nodejs.install"
}

When the general-software provisioner is used on a VM, it will install 7-Zip, VS Code v1.12.2 and Node.js on that VM via Chocolatey.
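A rough sketch of turning that list into Chocolatey install commands on the VM (the "name(version)" parsing is illustrative):

$packages = '7zip.install, visualstudiocode(1.12.2), nodejs.install' -split ',' | ForEach-Object { $_.Trim() }

foreach ($pkg in $packages) {
    if ($pkg -match '^(.+)\((.+)\)$') {
        # versioned package, e.g. visualstudiocode(1.12.2)
        choco install $Matches[1] --version $Matches[2] -y
    }
    else {
        choco install $pkg -y
    }
}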

Have inbuilt Inbound firewall rules to shrink template size

Some inbound firewall rules are very common, such as RDP, HTTP(S), and SQL (+ Mirroring).

To help reduce the template size, allow inbuilt firewall flags that, when true, open the relevant ports on the current VM's subnet.

example:

"template": [
    {
        "tag": "vm",
        "type": "vm",
        "count": 1,
        ...
        "firewall": {
            "https": true
        }
    }
]

This will auto allow the HTTPS port (443) as an inbound rule.

These inbuilt rules will need fixed priorities, so a range of about 3500-4000 should suffice.

Extend DSC provisioner to include Custom Scripts

The only provisioner right now is DSC; the aim of this issue is to extend the DSC section into a Provisioners section which includes both DSC and Custom Scripts. Something like:

"provisioners": {
    "remoting": "dsc: .\\Remoting.ps1",
    "web": "custom: .\\WebServer.ps1"
}

Then in the VM section, instead of having a DSC array of keys, it's a Provisioners array of keys:

"vms": [
    {
        ...,
        "provisioners": [
            "remoting"
        ],
        ...
    }
]

The rest will be taken care of by Fogg: publishing DSC scripts, and creating context/containers for custom scripts.

Ability for Fogg to have two Storage Accounts: one for disks and one for data

This issue covers a relatively large change: to have two storage accounts.

At the moment, Fogg creates one storage account for everything. This issue will split that, and set Fogg to create two storage accounts: one for blob/disk data (VM VHDs, etc), and one for general file data (DSC and Custom scripts, etc).

(this will probably pave the way for a new template type: "sa", for more storage accounts to be created)

In templates, have option to override pre-tag name instead of using Resource Group name

When VMs, NIC/VNETs, etc. are all created, the default naming scheme is:

<resource_group_name>-<tag>[-nic/-vnet]

This is useful, but if the Resource Group name is long, or would look cluttered in the above format, then it's hardly ideal.

This issue is to introduce a parameter into the template files of "pretag", which if present will override the usage of the Resource Group name being used in all resource creation:

<pretag>-<tag>[-nic/-vnet]

This means that if your group name were "test-web-rg" and you wanted a VM with tag "web", you'd get "test-web-web1", which looks a little silly. Whereas if you set the pretag to "test", you'd have "test-web1", which looks a little better!

Inbuilt DSC/Custom scripts for easier provisioning

This could be for things like remoting, web server/.net installs etc.

For now, I'll just put in place the backbone and some simple DSC scripts; so that more can be easily added later on.

General format when being used will be like:

"provisioners": {
    "remoting": "dsc: @{Remoting}",
    "web": "dsc: @{WebServer}"
}

The rest is normal provisioner usage logic.

In VM templates, have a new "useAvailabilitySet" property

In VM templates at the moment, if you have a count of more than 1, it will automatically create a Load Balancer. But, you can set the "useLoadBalancer": false property to not create a Load Balancer.

However, it will still place those VMs within an Availability Set. This issue is to have a new "useAvailabilitySet" property, which when false will not create and place the VMs into an Availability Set. It will also not create a Load Balancer - because they only function when an Availability Set has been created.

During validation, ensure the VM sizes core count don't exceed the Max Core limit

During validation, ensure the core count of the requested VM sizes doesn't exceed the Max Core limit.

To get the current core usage and the max core limit:

(Get-AzureRmVMUsage -Location 'westeurope' | ? { $_.Name.Value -ieq 'cores' }).CurrentValue
(Get-AzureRmVMUsage -Location 'westeurope' | ? { $_.Name.Value -ieq 'cores' }).Limit

To get the number of cores a VM size type requires:

(Get-AzureRmVMSize -Location 'westeurope' | ? { $_.Name -ieq 'Standard_DS5_v2' }).NumberOfCores

If what is about to be deployed would exceed the limit, fail during initial validation, so we don't have to deploy everything and wait hours before it fails.
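A minimal sketch of the check, assuming the VM templates expose a size and count (property and variable names are illustrative):

$usage = Get-AzureRmVMUsage -Location 'westeurope' | Where-Object { $_.Name.Value -ieq 'cores' }
$sizes = Get-AzureRmVMSize -Location 'westeurope'

# total up the cores the template is asking for
$requested = 0
foreach ($vm in $vmTemplates) {
    $cores = ($sizes | Where-Object { $_.Name -ieq $vm.size }).NumberOfCores
    $requested += ($cores * $vm.count)
}

if (($usage.CurrentValue + $requested) -gt $usage.Limit) {
    throw "Deployment needs $requested more cores, but only $($usage.Limit - $usage.CurrentValue) are available"
}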

Public IP Address should end with "-pip"

When creating Public IP Addresses, the resource name ends with "-ip". To meet naming conventions, change it so that they instead end with "-pip", and if possible have backwards compatibility for the old way.
