
Packer-cloudstack

This adds a plugin to Packer that talks to Apache CloudStack. It supports both bootstrapping an OS installation onto an empty block device and extending existing templates.

Install the plugin

First of all, you should already have Packer installed and added to your PATH.

Docker way

If you have Docker running, you can build and run the image without installing anything:

# From the root of the repository
docker build -t packer-cs-build .
cd $PACKER_BIN_PATH/
docker run packer-cs-build > ./packer-cs.tgz
tar xzf ./packer-cs.tgz && rm -f ./packer-cs.tgz

Manual way

To install this plugin you will need Go installed, as well as the version control tools needed for the dependencies. The commands below assume a Red Hat derivative; adjust for your native OS package manager (e.g. apt-get or brew).

export GOPATH=$HOME/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
sudo yum install -y mercurial git bzr
go get -u github.com/mitchellh/gox
go get -u github.com/schubergphilis/packer-cloudstack
make -C $GOPATH/src/github.com/schubergphilis/packer-cloudstack updatedeps dev
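Once the build finishes, the plugin binary needs to land somewhere Packer will look for it. A minimal sketch, assuming the build dropped a `packer-builder-cloudstack` binary into `$GOPATH/bin` (the exact binary name and output path depend on your build setup):

```shell
# Copy the built plugin into Packer's per-user plugin directory; Packer
# resolves the "cloudstack" builder type from the packer-builder-cloudstack
# binary name.
PLUGIN_DIR="${HOME}/.packer.d/plugins"
BIN="${GOPATH:-$HOME/go}/bin/packer-builder-cloudstack"
mkdir -p "$PLUGIN_DIR"
if [ -f "$BIN" ]; then
    cp "$BIN" "$PLUGIN_DIR/"
fi
echo "plugin dir: $PLUGIN_DIR"
```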

How it works

The diagram below shows how to perform a full OS installation (Red Hat derivative) via PXE chainloading onto an empty block device. (Diagram: Cloudstack automation.) The special chain-boot iPXE ISO needs to be built with an embedded script; see "Embedding script in iPXE" in the iPXE documentation. The following snippet should be enough to generate the chainloader ISO from scratch:

sudo yum install -y genisoimage
git clone git://git.ipxe.org/ipxe.git
cd ipxe/src
wget http://ftp.sunet.se/pub/os/Linux/distributions/centos/6/os/x86_64/isolinux/isolinux.bin
cat << EOF > chainload.ipxe
#!ipxe
dhcp
sleep 10
chain http://\${dhcp-server}/latest/user-data
EOF
make ISOLINUX_BIN=isolinux.bin EMBED=chainload.ipxe

The resulting bin/ipxe.iso file needs to be uploaded to your CloudStack instance. Specify Other (32-bit) as the OS type for the ISO. Also note the resulting UUID, as you will need to use it inside the Packer JSON configuration files.

Packer configuration example

The JSON payload below uses the special iPXE ISO and spins up a local web server on the Packer build workstation. This web server then serves the files necessary to perform the full OS installation. This assumes we have exported some environment variables for the API endpoint and the API and secret keys, which avoids hard-coding these values inside JSON files that are normally stored under version control (e.g. git).

export CLOUDSTACK_API_URL="https://cloudstack.local:443/client/api"
export CLOUDSTACK_API_KEY="AAAAAAAAAAAAAAAAAA"
export CLOUDSTACK_SECRET_KEY="AAAAAAAAAAAAAAAAAA"
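The exported keys are used to sign every CloudStack API request: the query parameters are sorted, lower-cased, HMAC-SHA1-signed with the secret key, and the base64 signature is URL-encoded and appended as the `signature` parameter. A minimal sketch of the signing step using openssl (the `listVirtualMachines` command and the placeholder keys are just illustrations):

```shell
API_KEY="AAAAAAAAAAAAAAAAAA"
SECRET_KEY="AAAAAAAAAAAAAAAAAA"

# Canonical string: parameters sorted alphabetically, then lower-cased.
QUERY="apikey=${API_KEY}&command=listVirtualMachines&response=json"
CANONICAL=$(printf '%s' "$QUERY" | tr '[:upper:]' '[:lower:]')

# HMAC-SHA1 with the secret key, base64-encoded. The result still needs
# URL-encoding before being appended to the request as signature=...
SIGNATURE=$(printf '%s' "$CANONICAL" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary \
  | openssl base64)
echo "signature: $SIGNATURE"
```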

Currently there is no support for using the display names of service offerings, zones, etc., so you need to use the UUIDs here. Also note that the hypervisor type needs to be specified, so update this accordingly. This builder has been verified to work with XenServer and VMware.

{
  "provisioners": [
    {
      "type": "shell",
      "scripts": [
        "scripts/base.sh",
        "scripts/motd.sh",
        "scripts/version.sh",
        "scripts/chef-client11.sh",
        "scripts/setupvm.sh",
        "scripts/tuned.sh",
        "scripts/tuneio.sh",
        "scripts/xs-tools.sh",
        "scripts/vmtweaks.sh",
        "scripts/cleanup.sh",
        "scripts/zerodisk.sh"
      ]
    }
  ],
  "builders": [
    {
      "type": "cloudstack",
      "hypervisor": "xenserver",
      "service_offering_id" : "4ccec2a3-0b53-4db0-aebc-6735019581b2",
      "template_id" : "b34f2d7b-2bec-497e-a18e-06d0de94526e",
      "zone_id" : "489e5147-85ba-4f28-a78d-226bf03db47c",
      "disk_offering_id" :"ef781d7f-f8e8-4f73-985c-e0b0a8ef8d48",
      "network_ids" : ["9ab9719e-1f03-40d1-bfbe-b5dbf598e27f"],
      "ssh_username": "root",
      "ssh_key_path": "data/vagrant_insecure_private_key",
      "ssh_timeout": "15m",
      "state_timeout": "30m",
      "template_name": "centos-6.5-20gb-chef11",
      "template_display_text": "CentOS 6.5 20GB chef11",
      "template_os_id": "144",
      "http_directory": "web",
      "user_data": "#!ipxe\nkernel http://{{.HTTPIP}}:{{.HTTPPort}}/vmlinuz ks=http://{{.HTTPIP}}:{{.HTTPPort}}/ks.cfg\ninitrd http://{{.HTTPIP}}:{{.HTTPPort}}/initrd.img\nboot"
    }
  ]
}

The vmlinuz, initrd, and kickstart files are all served from the web server Packer spins up on the local workstation, the same machine that performs the API calls to CloudStack.
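For reference, with the template above and a workstation at, say, 10.0.0.5 serving on port 8080 (both values illustrative; Packer substitutes {{.HTTPIP}} and {{.HTTPPort}} at run time), the user_data delivered to the iPXE chainloader would render roughly as:

```
#!ipxe
kernel http://10.0.0.5:8080/vmlinuz ks=http://10.0.0.5:8080/ks.cfg
initrd http://10.0.0.5:8080/initrd.img
boot
```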

To continue provisioning with Packer we need to add the user and/or key we defined in the JSON configuration file. An example of how to do this using a CentOS kickstart file is shown below. In this example we use the well-known Vagrant SSH key pair, which of course needs to be removed after provisioning has completed.

install
url --url http://ftp.sunet.se/pub/os/Linux/distributions/centos/6/os/x86_64/Packages/
lang en_US.UTF-8
keyboard sv-latin1
network --bootproto=dhcp --noipv6 --onboot=yes
authconfig --enableshadow --passalgo=sha512
rootpw --iscrypted $6$BbYMtjYH1Xm6$JwsqvNUpqyBiedELVG5aXeTyZXwWhdJ6gTFzrsgA9bykApjz/GrdKqadgvPV38fSM/R8ci3ju5RNm7RB1uQsr.
firewall --disabled
selinux --disabled
timezone --utc Europe/Stockholm
bootloader --location=mbr --append="notsc clocksource=hpet"

text
skipx
zerombr

clearpart --all --initlabel
part /boot --fstype=ext4 --asprimary --recommended --size=100 --fsoptions="defaults,noatime"
part / --fstype=ext4 --grow --asprimary --size=100 --fsoptions="defaults,noatime,data=writeback,barrier=0,nobh,commit=15"
part swap --recommended

firstboot --disabled
reboot

%packages --ignoremissing
@base
@development
kernel-devel
kernel-headers
tuned
%end

%post
mkdir -p /root/.ssh/
cat > /root/.ssh/authorized_keys <<'END_OF_KEY'
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
END_OF_KEY
chmod 0700 /root/.ssh/
chmod 0600 /root/.ssh/authorized_keys
%end
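The permissions set in %post matter: sshd refuses keys stored in a world-readable directory, which would break Packer's SSH connection. The same steps can be exercised outside kickstart; a sandboxed sketch using a scratch directory in place of /root (the key content is abbreviated):

```shell
# Recreate the %post key-injection steps in a scratch directory and
# verify the resulting modes match what sshd requires.
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/.ssh"
printf 'ssh-rsa AAAA... vagrant insecure public key\n' \
  > "$SANDBOX/.ssh/authorized_keys"
chmod 0700 "$SANDBOX/.ssh"
chmod 0600 "$SANDBOX/.ssh/authorized_keys"
ls -ld "$SANDBOX/.ssh" "$SANDBOX/.ssh/authorized_keys"
```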

packer-cloudstack's People

Contributors

mindjiver, abayer, filirom1, dduportal


packer-cloudstack's Issues

Plugin fails to compile during go get

The only info I really have available is the following:

Ubuntu 14.04
Packer 0.7.5
$ go get -u github.com/schubergphilis/packer-cloudstack
# github.com/schubergphilis/packer-cloudstack
go/src/github.com/schubergphilis/packer-cloudstack/builder.go:77: undefined: packer.ConfigTemplate

Transferring the project

hi @mindjiver

As we discussed in MissionCriticalCloud/vagrant-cloudstack#62, I'm ready to move over the git tree from this repo to our organisation and then continue the development from there.
However, our open-source policy requires an Apache 2.0 license. Would it be possible for you to change the license file on the repo?
It would be enough to replace the contents of the current license file with this: http://www.apache.org/licenses/LICENSE-2.0.txt

In addition, so as not to lose the history of issues, PRs, etc., would it be possible for you to transfer the repo to me?
There is some documentation on how to do that here: https://help.github.com/articles/transferring-a-repository/

Build 'cloudstack' errored

Hi

I succeeded in compiling packer-cloudstack, but I'm struggling to get packer build to work.

I copied the plugin into the right directory:

cp ~/go/bin/packer-builder-cloudstack ~/.packer.d/plugins
cp ~/go/bin/builder-cloudstack ~/.packer.d/plugins

packer validate works, but packer build fails:

Without debug:

cloudstack output will be in this color.

Build 'cloudstack' errored: Build was halted.

==> Some builds didn't complete successfully and had errors:
--> cloudstack: Build was halted.

==> Builds finished but no artifacts were created.

With debug:

2014/11/26 16:31:44 Executing command: build
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Reading template: packer-cloudstack.json
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Creating build: cloudstack
2014/11/26 16:31:44 Loading builder: cloudstack
2014/11/26 16:31:44 Creating plugin client for path: /root/.packer.d/plugins/packer-builder-cloudstack
2014/11/26 16:31:44 Starting plugin: /root/.packer.d/plugins/packer-builder-cloudstack []string{"/root/.packer.d/plugins/packer-builder-cloudstack"}
2014/11/26 16:31:44 Waiting for RPC address for: /root/.packer.d/plugins/packer-builder-cloudstack
2014/11/26 16:31:44 packer-builder-cloudstack: 2014/11/26 16:31:44 Plugin minimum port: 10000
2014/11/26 16:31:44 packer-builder-cloudstack: 2014/11/26 16:31:44 Plugin maximum port: 25000
2014/11/26 16:31:44 packer-builder-cloudstack: 2014/11/26 16:31:44 Plugin address: unix /tmp/packer-plugin368690657
2014/11/26 16:31:44 packer-builder-cloudstack: 2014/11/26 16:31:44 Waiting for connection...
2014/11/26 16:31:44 packer-builder-cloudstack: 2014/11/26 16:31:44 Serving a plugin connection...
2014/11/26 16:31:44 Loading provisioner: shell
2014/11/26 16:31:44 Creating plugin client for path: /root/packer/packer-provisioner-shell
2014/11/26 16:31:44 Starting plugin: /root/packer/packer-provisioner-shell []string{"/root/packer/packer-provisioner-shell"}
2014/11/26 16:31:44 Waiting for RPC address for: /root/packer/packer-provisioner-shell
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Plugin build against Packer ''
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Plugin minimum port: 10000
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Plugin maximum port: 25000
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Plugin address: unix /tmp/packer-plugin830361451
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Waiting for connection...
2014/11/26 16:31:44 packer-provisioner-shell: 2014/11/26 16:31:44 Serving a plugin connection...
cloudstack output will be in this color.
2014/11/26 16:31:44 ui: cloudstack output will be in this color.
2014/11/26 16:31:44 ui:

2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Build debug mode: false
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Force build: false
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Preparing build: cloudstack
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Waiting on builds to complete...
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Starting build run: cloudstack
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Running builder: cloudstack
Build 'cloudstack' errored: Build was halted.
2014/11/26 16:31:44 ui error: Build 'cloudstack' errored: Build was halted.
2014/11/26 16:31:44 packer-command-build: 2014/11/26 16:31:44 Builds completed. Waiting on interrupt barrier...
2014/11/26 16:31:44 machine readable: error-count []string{"1"}

==> Some builds didn't complete successfully and had errors:
2014/11/26 16:31:44 ui error:
==> Some builds didn't complete successfully and had errors:
2014/11/26 16:31:44 machine readable: cloudstack,error []string{"Build was halted."}
2014/11/26 16:31:44 ui error: --> cloudstack: Build was halted.
--> cloudstack: Build was halted.

==> Builds finished but no artifacts were created.
2014/11/26 16:31:44 ui:
==> Builds finished but no artifacts were created.
2014/11/26 16:31:44 waiting for all plugin processes to complete...
2014/11/26 16:31:44 /root/packer/packer-provisioner-shell: plugin process exited
2014/11/26 16:31:44 /root/packer/packer-command-build: plugin process exited
2014/11/26 16:31:44 /root/.packer.d/plugins/packer-builder-cloudstack: plugin process exited

installing CentOS from ISO

Hi Team,

I tried to install CentOS from an ISO using Packer. Here is my code:

source "cloudstack" "centos" {
  api_key                   = ""
  api_url                   = ""
  zone                      = "27efd513-9343-43be-a1a7-d1a854d85148"
  disk_offering             = "0789fa34-0d88-4ff9-a43f-ca6c9e2364f4"
  hypervisor                = "XenServer"
  network                   = "25591049-15f7-425f-9ade-3b1593c2e41d"
  secret_key                = "c_uqZ8VOub7aJpaS05UUH3IDdAyCV6RiE8MWcI4cWW7r_9smIzyrfdSoN2fD0JgwjEp7wEI_SsKmkUl5fIthwA"
  service_offering          = "5a6fe325-74c1-413f-8582-3e333ce63005"
  source_iso                = "7d7dfa0b-4e2e-44be-9e50-3cb63a52d00c"
  ssh_username              = "root"
  ssh_private_key_file      = "Loges-key.key"
  template_display_text     = "Centos7-x86_64 KVM Packer"
  template_featured         = true
  template_name             = "Centos7-x86_64-KVM-Packer"
  template_os               = "feb90e0c-7756-434f-8a9b-908ac9b4b711"
  template_password_enabled = true
  template_scalable         = true
}

build {
  sources = ["source.cloudstack.centos"]
}

When I run the packer build command, the VM was created and prompted to continue installation. I prepared a kickstart file (ks.cfg) and placed it under the templates\config folder. I'm not sure how to reference it in the script.

Can someone help me with this?

Post-processor for vagrant box for cloudstack

Hi !

As we are trying to provide some "dev boxes" that our users can run in VirtualBox as well as in CloudStack (depending on the power of your local machine), I want to investigate this use case further:

  • A unique packer.json file with these sections:
    • Builders:
      • "cloudstack" builder
      • "virtualbox" builder
    • Provisioners: a list of shell scripts, common to all my images (like install that thing, download that stuff, write that piece of file)
    • Post-processors:
      • A "vagrant" one bound to my virtualbox artefact: the .ova and .mf file for vbox, plus a box-level Vagrantfile, generating my "metadata.json" saying that box uses virtualbox.
      • Another "vagrant" one for creating a lightweight cloudstack-bound base box - see https://github.com/klarna/vagrant-cloudstack/tree/master/example_box for more information.
    • Push: as packer supports a "push" operation once your template has been built - https://packer.io/docs/templates/push.html - we can enhance our use cases:
      • A push for cloudstack: for each cloudstack builder I use, I want to copy the "tmp" template already generated and created from the API by the packer-cloudstack plugin to a "real named" template (e.g. packer build creates "my-box-tmp", and the push ensures "my-box-tmp" becomes "my-box-final").
      • A push for my other artefacts.

There are two packer-cloudstack related things here:

  • First is the functionality to bind a post-processor to the build: generate a vagrant box for the vagrant-cloudstack plugin from the template just generated.
  • Second is the functionality to re-use the packer push step, in order to move our artifact (remote template) from a tmp state to a final state.

I've just quickly written up this issue; I think @Filirom1 and I can contribute and write these, but we have to validate the concept before rushing in to write them.

Is that clear enough? Do you need more context? Do you agree or disagree?

packer-cloudstack & 0.10.x ?

Hej,

I know ... patches welcome ;), but I might ask anyway: are there any plans to bump the code so it compiles against the latest Packer (and to update e.g. the broken references to code.google.com)?

Martin

Instructions for building ipxe ISO

Hey, just wanted to let you know there are a couple of typos in the embed script:

!pxe --> #!ipxe

...userdata --> ...user-data

(the user-data one I think recently changed in 4.3+)

I also had to include a sleep before starting DHCP, as it would time out before actually getting an IP address. This could be specific to my instance.

For uploading the ISO to CloudStack, what OS type do you select? I'm having trouble where this will only boot on an HVM instance but not a PV one. If you install the OS on an HVM instance and then, after converting to a template, set the OS to a PV profile, does that actually result in a PV VM?

Thanks for your help!
