Building cloud images with Ansible

Introduction

I work for Eucalyptus and spend time in both the private and public cloud.  When working with customers and users, the first roadblock to using the cloud is usually getting an image with some custom bits created, registered and running.  More often than not this means starting entirely from scratch: in the case of a fresh Eucalyptus install there are no images registered.  The user can either build their first image from scratch or download something from eustore, an online catalog of starter images (which can then be edited and bundled).

Fairly often I need to quickly generate a cloud image which contains some custom bits of software and other bits n’ bobs.  This got me thinking: what about using Ansible to perform an image build? The beautiful thing about Ansible is that it’s so easy to pick up and that the YAML-formatted playbooks (sets of tasks) are so easy to maintain, particularly amongst a group of folk from different technical backgrounds (e.g. a team!). I know for a fact that some of the tools like BoxGrinder, Kiwi etc. put users off due to the learning curve.  Wouldn’t it be cool if we could walk through image creation in a playbook and make use of particular modules to help us build an image? Maybe write a module for the disk image creation (size, type etc.), use a module to format the image, use another to install GRUB perhaps? Ultimately we could end up with a very nice framework of tasks interlocked with functional modules to perform the tricky or distro-specific bits.  This could end up being really quite elegant.  Combine this with modules to upload to cloud providers and you have a fully-fledged image orchestration engine.  Wouldn’t that be neat…

Anyhow, I decided to first write a very simple playbook which is essentially script-like in the nature of its tasks.  It isn’t idempotent (at this point) but it does make good use of the Ansible chroot connection plugin, which allows a user to perform actions within a chroot environment without having to rely on some shell and command funkiness.  I focused on RHEL to start with, specifically building an image in a format suitable for Eucalyptus clouds.

Image building (RHEL-based)

Below is the resultant playbook; in short, it performs the following steps:

  1. Creates a sparse image file
  2. Gives the image a disk label
  3. Creates an ext3 filesystem on the image
  4. Loopback attaches the image file and mounts it
  5. Installs the base OS (CentOS 6) into this mount point
  6. Sets up some required mountpoints (proc, sys, dev)
  7. Switches to use the chroot plugin
  8. Installs additional packages and configures the rest of the environment as appropriate (add extra stuff here)

This is how it looks.  It could do with a sprinkling of with_items, some idempotency and other module usage.

- hosts: local
  connection: local
  tasks:

  - name: Create a disk image
    command: dd if=/dev/zero of=/tmp/myimage.img count=2000000

  - name: Create disk label
    command: /sbin/parted /tmp/myimage.img mklabel msdos

  - name: Create filesystem
    command: /sbin/mkfs.ext3 -F /tmp/myimage.img -L rootdisk

  - name: Find loopback
    shell: losetup -f
    register: loopback

  - name: Loopback attach
    command: losetup ${loopback.stdout} /tmp/myimage.img

  - name: Mount
    command: mount ${loopback.stdout} /mnt

  - name: Install the release RPM
    command: rpm -i --root=/mnt http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-release-6-4.el6.centos.10.x86_64.rpm

  - name: Install packages
    command: yum -y --installroot=/mnt/ groupinstall Base

  - name: Install some extras
    command: yum -y --installroot=/mnt/ install vim openssh-server dhclient curl ntp

  - name: Create mountpoints
    shell: mkdir -p /mnt/{proc,etc,dev,var}/{cache,log,lock/rpm} # shell (rather than command) so the brace expansion actually happens

  - name: Mount proc
    command: mount -t proc none /mnt/proc

  - name: Mount dev
    command: mount -o bind /dev /mnt/dev

- hosts: local-chroot
  user: root
  tasks:

  - name: Change some service states
    service: name={{ item }} enabled=no
    with_items:
    - abrt-ccpp
    - abrt-oops
    - abrtd
    - ip6tables
    - iptables
    - kdump
    - lvm2-monitor
    - ntpd
    - sshd

  - name: Set up network and turn off zeroconf
    template: src=templates/network.j2 dest=/etc/sysconfig/network owner=root group=root

  - name: Template network configuration file
    template: src=templates/ifcfg.j2 dest=/etc/sysconfig/network-scripts/ifcfg-eth0 owner=root group=root

  - name: Template fstab
    template: src=templates/fstab.j2 dest=/etc/fstab owner=root group=root

  - name: Copy EPEL release RPM
    copy: src=files/epel-release.rpm dest=/tmp/epel-release.rpm

  - name: Install EPEL release RPM
    command: yum -y install /tmp/epel-release.rpm

  - name: Install rc.local
    copy: src=files/rc.local dest=/etc/rc.d/rc.local owner=root group=root

  - name: Set permissions
    file: path=/etc/rc.d/rc.local owner=root group=root mode=0755

Notice the use of the chroot plugin: the second play targets this chroot environment. It requires that the mount point be specified in the inventory file, like so:

[local-chroot]
/mnt ansible_connection=chroot

The result is a working image, as you can see here:

[root@emea-demo-01 ~]# euca-describe-instances i-E174437B
 RESERVATION r-10EE3F89 427616426802 default
 INSTANCE i-E174437B emi-376C3CC9 X.X.X.X euca-172-30-53-79.eucalyptus.internal running admin 0 m1.medium 2013-08-02T15:10:08.713Z cluster01 eki-DE1A36B6 eri-BB0C3904 monitoring-disabled X.X.X.X 172.30.53.79 instance-store
 TAG instance i-E174437B euca:node 192.168.250.11

[root@emea-demo-01 ~]# ssh -i creds/eucalyptus/admin/admin.key root@X.X.X.X
 Last login: Fri Aug 2 08:12:03 2013 from Y.Y.Y.Y
 -bash-4.1# hostname
 euca-172-30-53-79.eucalyptus.internal

Anyone can add to this and it’s easy to wield; it should also be easy to add steps to actually bundle and upload the image to AWS/Eucalyptus for registration. I'm hoping that over the coming months I'll get to look into this approach more closely by extending some modules or writing some supporting modules for building images. Time permitting of course 🙂
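
As a rough sketch, the bundle/upload/register steps for an instance-store image could be appended to the first play as further command tasks. This is untested and assumes euca2ools is installed and your cloud credentials are sourced; the bucket name and paths are placeholders:

  - name: Bundle the image
    command: euca-bundle-image -i /tmp/myimage.img -d /tmp/bundle

  - name: Upload the bundle
    command: euca-upload-bundle -b my-images -m /tmp/bundle/myimage.img.manifest.xml

  - name: Register the image
    command: euca-register my-images/myimage.img.manifest.xml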

You can find this image building playbook here: https://github.com/lwade/ansible-playbooks

EC2 AMI Module

Since we're on the topic of images, it's worth mentioning this module.  New for Ansible 1.3, we have an ec2_ami module contributed by Evan Duffield and iAquire.  This module chiefly deals with bundling an EBS-backed instance into an EBS-backed AMI and registering it.  This is analogous to ec2-create-image. You can use it like so:

 
- hosts: local
  tasks:

  - name: provision instance
    local_action: ec2_ami instance_id=i-8431a7c9 wait=yes name=newbundle region=eu-west-1

Here is the resultant image:

IMAGE ami-65435a11 048212016277/newbundle 048212016277 available private [marketplace: 7w73f3vx0zywcfq1izrshkpjl] x86_64 machine aki-71665e05 ebs paravirtual xen
 BLOCKDEVICEMAPPING EBS /dev/sda snap-c17ff5ec 8 false standard 

What’s new in Ansible 1.1 for AWS and Eucalyptus users?

I thought the Ansible 1.0 development cycle was busy but 1.1 is crammed full of orchestration goodness.  On Tuesday, 1.1 was released and you can read more about it here: http://blog.ansibleworks.com/2013/04/02/ansible-1-1-released/

For those working on AWS and Eucalyptus, 1.1 brings some nice module improvements as well as a new cloudformation and s3 module.  It’s great to see the AWS-related modules becoming so popular so quickly.  Here are some more details about the changes but you can find info in the changelog here: https://github.com/ansible/ansible/blob/devel/CHANGELOG.md

Security group ID support

It’s now possible to specify the security group by its ID.  This is quite typical behaviour in EC2, and Eucalyptus will support it with the pending 3.3 release.  The parameter is optional.

VPC subnet ID

VPC users can now specify a subnet ID associated with their instance.

Instance state wait timeout

In 1.0 there was no way to specify how long to wait for instances to move from pending to running.  This release adds a wait timeout parameter which allows the user to specify how long to wait for the instances to reach running before failing.  Useful to stop a playbook blocking for longer than desired.  Thanks to milan for this one 🙂
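
Taken together with the security group ID and VPC subnet ID options above, these parameters might be used something like this (a minimal sketch; the key pair, IDs and AMI are placeholders):

  - name: Launch an instance into a VPC subnet using a security group ID
    local_action: ec2 keypair=mykey group_id=sg-12345678 vpc_subnet_id=subnet-12345678 instance_type=m1.small image=ami-12345678 wait=true wait_timeout=300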

EC2 idempotency via client token

EC2 implements idempotency for run-instance requests via a client token parameter.  This client token should be a unique ASCII string, as detailed here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html. It ensures that a subsequent launch request carrying the same client token is treated as a repeat of the original request rather than launching additional instances.  Rob Parrott added this feature.
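
In the ec2 module the client token is supplied via the id parameter; a minimal sketch (the token value, key pair and image are placeholders):

  - name: Launch instance idempotently
    local_action: ec2 keypair=mykey group=default instance_type=m1.small image=ami-12345678 id=web-tier-launch-001 wait=true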

Additional response elements in EC2 returns

The EC2 module returns some response elements which are attributes of the instance(s) you may have launched.  Harold Spencer added all of the attributes, so these can be accessed in plays for later use.
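
For example, once the output of an ec2 task has been registered (register: ec2), these attributes can be picked out of the returned instance list; a small sketch:

  - name: Show some of the returned instance attributes
    local_action: debug msg="${item.id} ${item.public_ip} ${item.private_ip}"
    with_items: ${ec2.instances}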

CloudFormation module

James Martin added a CloudFormation module which takes a JSON template to deploy a collection of resources in AWS.
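
A minimal sketch of calling it (the stack name, region and template path here are just placeholders; the module also accepts stack parameters):

  - name: Create a CloudFormation stack
    local_action: cloudformation stack_name=mystack state=present region=eu-west-1 template=files/mystack.json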

S3 module

This basic S3 module allows a user to put an object into a bucket and returns a download URL.  Useful for stashing particular system info into S3 for archiving, or as a different way to transfer files between hosts and groups.  This module aims to be idempotent via a state parameter.
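
As a rough sketch (I'm using the parameter names from later versions of the module here, so the exact names in the 1.1 release may differ; the bucket and paths are placeholders):

  - name: Stash a file in S3
    local_action: s3 bucket=my-archive object=/reports/sysinfo.txt src=/tmp/sysinfo.txt mode=put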

EC2 EBS module (ec2_vol)

This isn’t in the changelog but is a new module for volume creation and attachment to instances.  Useful for supplementing the EC2 module in provisioning tasks but very basic.  It’s not idempotent (yet) and takes the instance ID, volume size, availability zone and device name as parameters.
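
A minimal sketch of its use (the instance ID, zone and device here are placeholders):

  - name: Create and attach a 10GB volume
    local_action: ec2_vol instance=i-12345678 volume_size=10 zone=eu-west-1a device_name=/dev/sdh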

A big thanks goes out to all the other contributors to the AWS modules who I’ve not explicitly mentioned.

Deploying Eucalyptus via Ansible playbook(s)

The first cut of the Ansible playbook for deploying Eucalyptus private clouds is ready.  I’ve merged the first “release” into the master branch here: https://github.com/lwade/eucalyptus-playbook. Feedback and contributions are very welcome; please file issues against the project.

This playbook allows a user to deploy a single front-end cloud (i.e. all components on a single system) and as many NCs as they want, although admittedly I’ve only tested with one so far.  I’ve followed, to a certain degree, the best practices described here:  http://ansible.cc/docs/bestpractices.html

Overall I’m pretty happy with it; there are some areas which I’d like to re-write and improve but the groundwork is there.  It was all very fluid to start with and doing something multi-tier has been enjoyable. I’ve also learnt what it’s like to automate a deployment of Eucalyptus, and there are probably a number of things we should improve to make it easier in this regard, like getting rid of the euca_conf requirement to register systems; doing it via a config file would be better 🙂

For those familiar with Ansible, you will (hopefully) see that I’ve started to split common tasks out to encourage their reuse.  I’m certainly not finished here, but I think what I have lays the groundwork for task re-use to enable different topologies.

Fundamentally, to deploy a cloud with a specific topology, the master playbook is called, which then references a particular collection of tasks to achieve the desired results.  After all, a playbook is just a list of tasks, right?  So currently there is only one playbook, for the single front-end topology: cloud-deploy-sfe.yml.  By looking at this, you’ll be able to see what tasks are referenced to build the cloud platform.  The next topology I plan to create is a non-HA split front-end topology (where all Eucalyptus cloud and cluster tier components are on separate hosts).  After that, I’ll look to address a couple of HA topologies. These are the kinds of topologies folks are putting into production use.

The directory hierarchy looks like this:

|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|   |-- handlers
|   |   `-- cluster.yml
|   `-- tasks
|       |-- node-key-fetch.yml
|       `-- node-registration.yml
|-- common
|   |-- files
|   |-- handlers
|   |   |-- initialization.yml
|   |   `-- services.yml
|   |-- tasks
|   |   |-- component-registration.yml
|   |   |-- creds.yml
|   |   |-- initialization.yml
|   |   |-- packages.yml
|   |   |-- preconfig.yml
|   |   |-- refresh-facts.yml
|   |   |-- storage.yml
|   |   `-- template-configuration.yml
|   `-- templates
|       `-- eucalyptus.conf.j2
|-- group_vars
|   |-- all
|   |-- clustercontroller
|   |-- nodecontroller
|   `-- storagecontroller
|-- hosts
|-- host_vars
|-- node
|   |-- handlers
|   |   `-- node-preconfig.yml
|   |-- tasks
|   |   |-- node-key-copy.yml
|   |   `-- node-preconfig.yml
|   `-- templates
|       |-- ifconfig_bridge_interface.j2
|       `-- ifconfig_bridge.j2
|-- readme
`-- vars
    |-- cloudcontroller.yml
    |-- clustercontroller.yml
    |-- nodecontroller-net.yml
    |-- nodecontroller.yml
    |-- storagecontroller.yml
    `-- walrus.yml

It’s probably worth explaining the structure (I’ll be updating the readme soon) ….

|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|-- common
|-- group_vars
|-- hosts
|-- host_vars
|-- node
|-- readme
`-- vars

Note: structure may change, there are already some tasks which should really be moved into their tiered component hierarchies.

  • cloud – this directory holds any specific task includes, handlers, files and templates for the cloud controller tier (CLC + Walrus)
  • cloud-deploy* files – these are the top-level playbooks which pull in the tasks for the platform tiers.  From a user’s perspective, these are the playbooks they choose to run to deploy a certain topology and configuration.
  • cluster – this directory holds any specific task includes, handlers, files and templates for the cluster layer (CC + SC)
  • common – all common tasks, handlers, files and templates are found in this directory.  Tasks which are shared across all tiers and components should end up in here.
  • group_vars – these are variables which apply to host groups.
  • hosts – the inventory file, put your hosts into here, based on group (role).
  • host_vars – host specific variables, of which there are none (at the moment).
  • node – this directory holds any specific task includes, handlers, files and templates for the node controllers in a cluster.
  • readme – the readme which needs expanding 😉
  • vars – this holds variable include files which cannot be used across groups or that need to be set statically for whatever reason.

Please take it for a spin and file issues against the project in GitHub.  If we get enough traction, I’m likely to split this out into a separate repo entirely.

Writing my first Ansible module: ec2_vol

As is probably quite evident, I’ve recently been using Ansible to deploy workloads into EC2 and Eucalyptus.  One of the ideas behind this is the convenience of being able to leverage the common API to achieve a hybrid deployment scenario.  Thanks to various folk (names mentioned in previous posts) we have a solid boto-based ec2 Python module for instance launching.   One thing I wanted to do when spinning up instances and configuring a workload was to add some persistent storage for the application.  To do this I had to create and attach a volume as a manual step or run a local_action against euca2ools.  I figured I could try to write my own module to practice a bit more Python (specifically boto).  The result is something terribly (perhaps in more ways than one?) simple, but I think this is a testament to just how easy it is to write modules for Ansible (p.s. it doesn’t need to be Python) 😉

The resulting doc bits are here: http://ansible.cc/docs/modules.html#ec2-vol whilst the code lives here: https://github.com/ansible/ansible/blob/devel/library/ec2_vol

In a future post I’ll write a little bit about how it works, hopefully this can inspire other folks to try writing some additional EC2 modules for Ansible 🙂

It’s really very rudimentary but it does what it says on the tin.  Having spoken to Seth (and others) about this, some improvements for future versions would be:

– Make it idempotent.  In practical terms, make it possible to re-run the module without it making changes to the system.  This would be possible via a state parameter (see other modules): present/absent.  The code could then check for a volume attached to a particular instance which is tagged with the specified tag.  If it doesn’t exist, add it.  If it does, pass over it.  I’d call this the ec2 volume passover module 😉

– Detect existing device mappings via instance.block_device_mapping and then adjust the attachment device appropriately.
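
If the idempotency idea above lands, usage might look something like this (purely hypothetical; the state parameter doesn’t exist in the module yet and the IDs are placeholders):

  - name: Ensure a 10GB volume is attached
    local_action: ec2_vol instance=i-12345678 volume_size=10 device_name=/dev/sdh state=present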

I’ll probably attempt this in a couple of months. This would work fine under EC2 but tagging and *full* block device mapping support won’t be in Eucalyptus until version 3.3 (due end of Q2 this year).  See here for more details: https://eucalyptus.atlassian.net/browse/EUCA-4786 and https://eucalyptus.atlassian.net/browse/EUCA-2117.

As we continue to bring better feature compatibility with AWS, it makes these kinds of hybrid things much easier 🙂  We’re also going to need to address some of the other AWS services with Ansible modules I think, like ELB, CloudWatch etc.

Ansible: workload example – new and improved with FREE Load Balancing

Following on from my post on how to deploy multiple instances of the Eucalyptus User Console (as just a sample workload) I figured I’d make it more useful and add an HAProxy load balancer in front of the user consoles.  With the playbook found here, you should be able to deploy as many consoles as you want and add a single load balancer in front of them.

There are some changes which are worth explaining in this example.  Firstly, the funky templating that Ansible and the Jinja2 templating language allow you to perform.  Here’s a snippet where I launch my instances:

  tasks:    
  - name: Launch instance      
    local_action: ec2 keypair=$keypair group=$security_group instance_type=$instance_type image=$image wait=true count=3      
    register: ec2

Note the register directive. What if I’ve launched 3, 5, maybe 10 instances? How can I know exactly how many to reference in my configuration template?  Here’s the nice bit: in my template file for HAProxy I use the Jinja2 templating language to build the backend configuration dynamically.  This for loop picks all of my instances from the list the ec2 module returns and inserts them into the config:

backend console
    {% for instance in ec2.instances %}        
              server {{ instance.id }} {{ instance.public_ip }}:8888 check    
    {% endfor %}

Here’s the resulting config (haproxy.cfg):

backend console            
        server i-18FA3D7F 10.104.7.10:8888 check            
        server i-517B3F77 10.104.7.12:8888 check            
        server i-9E1C413F 10.104.7.11:8888 check

The next minor change, made following mpdeehan’s feedback, was to add notify actions and a handler snippet for when the eucalyptus-console configuration changes.

A handler is a common task which runs only when called by a notify directive.  This is particularly useful for restarting a service when the configuration file of a currently running service changes (to reload the config).  Here is the handler section:

  handlers:
    - name: restart console
      action: service name=eucalyptus-console state=restarted

This defines the action the handler will take (restart the console service). Then, here is the notify directive along with the task.  This will ensure that on change the handler is notified (and the service restarted):

    - name: Configure User Console Endpoint
      action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^clchost line="clchost: ${clchost}"
      notify:
      - restart console

There we have it.  Some minor changes which hopefully demo a little bit more of the functionality you can achieve with Ansible.

Ansible: playbook to deploy a workload using the ec2 module

My previous post talked a little bit about new functionality from (new and updated) ec2-related modules in the Ansible 1.0 release.  In this post I’ll go through the practical example of launching multiple instances with the ec2 module and then configuring them with a workload.  In this case the workload will be the Eucalyptus User Console 🙂

For those who are unfamiliar with Ansible, check out the online documentation here. The documentation itself is in a pretty good order for newcomers, so make sure to take it step by step to get the most out of it.  Once you’ve finished you’ll come to realise how flexible it is, but hopefully also how *easy* it is.

To launch my instances and then configure them, I’m going to use a playbook.  For want of a better explanation, a playbook is a YAML-formatted file containing a series of tasks which orchestrate a configuration or deployment process. The playbook can contain multiple plays; these are separate sections of the playbook (perhaps split for logical reasons) which target specific hosts or groups of hosts.

Getting Started

First up, get hold of Ansible!  We’ll clone from GitHub in this example; see here for alternatives.  You need to make sure you have Ansible release 1.0 or the latest devel snapshot.

$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup

Next, we need to create a hosts file. This hosts file can reside anywhere and be specified with the “-i” option when running Ansible. By default Ansible will look in /etc/ansible/hosts, so let’s create this file.

$ touch /etc/ansible/hosts

Our playbook will depend on a single entry for localhost, so add the following to this file:

[local]
localhost

This assumes you have a localhost entry in your system’s /etc/hosts file.

Try it out

Now we’re ready to really get started. Before we create our playbook, let’s verify that the environment is set up and working correctly. Choose a system on which you have a login and SSH access, and let’s try to retrieve some facts by using Ansible in task execution mode.

In this example I’m using host “prc6” onto which I’ve copied my public SSH key. This host has sudo configured and I have the appropriate permissions for root in the sudoers file. Ansible can take this into account with the appropriate switches:

# ansible prc6 -m setup -s -K

Let’s break that down:

ansible prc6 # run against this host, this could also be a host group found in /etc/ansible/hosts
-m setup # run the setup module on the remote host
-s -K # tell ansible I'm using sudo and ask ansible to prompt me for my sudo password

Running this command returns a load of in-built facts from the setup module about the remote host:

prc6 | success >> {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"10.104.1.130"
],
"ansible_all_ipv6_addresses": [
"fe80::ea9a:8fff:fe74:13c2"
],
"ansible_architecture": "x86_64",
"ansible_bios_date": "06/22/2012",
"ansible_bios_version": "2.0.3",
"ansible_cmdline": {
"KEYBOARDTYPE": "pc",
"KEYTABLE": "us",
"LANG": "en_US.UTF-8",
"SYSFONT": "latarcyrheb-sun16",
"crashkernel": "129M@0M",
"quiet": true,
"rd_LVM_LV": "vg01/lv_root",
"rd_MD_UUID": "1b27ae4a:6c4a8a77:6f718721:d335bf17",
"rd_NO_DM": true,
"rd_NO_LUKS": true,
"rhgb": true,
"ro": true,
"root": "/dev/mapper/vg01-lv_root"
}, [...cont]

If we were using a playbook, we could use these gathered facts in our configuration jobs. Neat huh? An example might be some kind of conditional execution, in pseudocode:

if bios_version <= 2.0.3 then update firmware
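
As a rough playbook sketch (note: this uses the when keyword from later Ansible releases, the version check is a naive string comparison purely for illustration, and the firmware update command is a made-up placeholder):

  - name: Update the firmware if the BIOS is old
    command: /usr/local/bin/update-firmware
    when: ansible_bios_version <= "2.0.3"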

This gives you just an extremely small idea of how Ansible can be used as a configuration management engine.  Better still, modules are highly modular (weird, that?!) and can be written in pretty much any language, so you could write your own module to gather whatever facts you might want. We’re planning a Eucalyptus configuration module in the future 🙂

Anyhow, I digress …..

Playbook

With Ansible working nicely, let’s start writing our playbook. With our next example we want to achieve the following aim:

  • Launch 3 instances in ec2/Eucalyptus using Ansible

Firstly, we need to address dependencies. With 1.0, the ec2 module was ported to boto, so install the python-boto and m2crypto packages on your local system:

$ yum install python-boto m2crypto

Next up, get the credentials for your EC2 or Eucalyptus cloud. If you’re using Eucalyptus, source the eucarc, as the ec2 module will take the values for EC2_ACCESS_KEY, EC2_SECRET_KEY and EC2_URL from your environment. If you’re wanting to use EC2, just export your access and secret key; boto knows the endpoints:

# export EC2_SECRET_KEY=XXXXXX
# export EC2_ACCESS_KEY=XXXXXX

At this point you might want to try using euca2ools or ec2-tools to interact with your cloud, just to check your credentials 🙂

Onto the action! Below is a playbook broken up into sections; you can see the whole thing here as eucalyptus-user-console-ec2.yml. Save this as euca-demo.yml:

- name: Stage instance # we name our playbook
  hosts: local # we want to target the “local” host group we defined earlier in this post
  connection: local # we want to run this action locally on this host
  user: root # we want to run as this user
  gather_facts: false # since we're running locally, we don't want to gather system facts (remember the setup module we tested?)

  vars: # here we define some play variables
      keypair: lwade # this is our keypair in ec2/euca
      instance_type: m1.small # this is the instance type we want
      security_group: default # our security group
      image: emi-048B3A37 # our image
      count: 3 # how many we want to launch

  tasks: # here we begin the tasks section of the playbook, which is fairly self explanatory :)
    - name: Launch instance # name our tasks
      local_action: ec2 keypair=$keypair group=$security_group instance_type=$instance_type image=$image count=$count wait=true # run the ec2 module locally, with the parameters we want
      register: ec2 # register (save) the output for later use
    - name: Itemised host group addition
      local_action: add_host hostname=${item.public_ip} groupname=launched # here we add the public_ip var from the list of ec2.instances to a host group called "launched", ready for the next play
      with_items: ${ec2.instances}

You could try running this now.  Run with:

# ansible-playbook euca-demo.yml

Once complete, take a look at your running instances:

# euca-describe-instances
RESERVATION    r-48D14305    230788055250    default
INSTANCE    i-127F4992    emi-048B3A37    10.104.7.12    172.25.26.172    running    admin    2        m1.small    2013-02-05T14:59:36.095Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.12    172.25.26.172            instance-store                                    
INSTANCE    i-244B42C9    emi-048B3A37    10.104.7.10    172.25.26.173    running    admin    0        m1.small    2013-02-05T14:59:36.044Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.10    172.25.26.173            instance-store                                    
INSTANCE    i-FADD43E0    emi-048B3A37    10.104.7.11    172.25.26.169    running    admin    1        m1.small    2013-02-05T14:59:36.076Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.11    172.25.26.169            instance-store

Now we want a playbook (play) section to deal with the configuration of the instance.  We define the next play in the same file.  Remember this entire file is available here.

- name: Configure instance  
  hosts: launched  # Here we use the hostgroup from the previous play; all of our instances
  user: root  
  gather_facts: True  # Since these aren't local actions, I've left fact gathering on.

  vars_files:  # Here we demonstrate an external yaml file containing variables    
      - vars/euca-console.yml  

  tasks:   # Begin the tasks
      - name: Ensure NTP is up and running  # Using the "service" module to check state of ntpd    
        action: service name=ntpd state=started     

      - name: Downloads the repo RPMs # download the list of repo RPM's we need
        action: get_url url=$item dest=/tmp/ thirsty=yes
        with_items:
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/eucalyptus-release-${euca_version}.noarch.rpm
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/epel-release-6.noarch.rpm
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/elrepo-release-6.noarch.rpm
        tags:
        - geturl

      - name: Install the repo RPMs # Install the RPM's for the repos
        action: command rpm -Uvh --force /tmp/$item
        with_items:
        - eucalyptus-release-${euca_version}.noarch.rpm
        - epel-release-6.noarch.rpm
        - elrepo-release-6.noarch.rpm

      - name: Install Eucalyptus User Console # Use the yum module to install the eucalyptus-console package     
        action: yum pkg=eucalyptus-console state=latest    

      - name: Configure User Console Endpoint # Here we use the lineinfile module to make a substitution based on a regexp in the configuration file
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^clchost line="clchost: ${clchost}"

      - name: Configure User Console Port
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^uiport line="uiport: ${port}"

      - name: Configure User Console Language
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^language line="language: ${lang}"

      - name: Restart Eucalyptus User Console # With the config changed, use the service module to restart eucalyptus-console   
        action: service name=eucalyptus-console state=restarted

There we have it.  The playbook with two distinct plays is complete: one play to launch the instances and another to configure them.  Let’s run our playbook and observe the results.

To run a playbook, you use the ansible-playbook command with much the same options as with ansible (task execution mode).  Since our instances we launch will need a private key, we specify this as part of the command:

# ansible-playbook euca-demo.yml --private-key=/home/lwade/.euca/mykey.priv -vvv

The -vvv switch gives extra verbose output.

As Ansible goes off and configures the systems in parallel, you’ll see this sort of output, indicating whether a task has been successful:

TASK: [Ensure NTP is up and running] ********************* 
<10.104.7.10> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.10
<10.104.7.11> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.11
<10.104.7.12> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.12
<10.104.7.10> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.27-8382619657571 && echo $HOME/.ansible/tmp/ansible-1360077064.27-8382619657571'
<10.104.7.11> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.27-166172056795312 && echo $HOME/.ansible/tmp/ansible-1360077064.27-166172056795312'
<10.104.7.12> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.3-30210459423135 && echo $HOME/.ansible/tmp/ansible-1360077064.3-30210459423135'
<10.104.7.10> REMOTE_MODULE service name=ntpd state=started
<10.104.7.11> REMOTE_MODULE service name=ntpd state=started
<10.104.7.10> PUT /tmp/tmptDgeMW TO /root/.ansible/tmp/ansible-1360077064.27-8382619657571/service
<10.104.7.11> PUT /tmp/tmpBF7ekS TO /root/.ansible/tmp/ansible-1360077064.27-166172056795312/service
<10.104.7.12> REMOTE_MODULE service name=ntpd state=started
<10.104.7.12> PUT /tmp/tmpA49v9F TO /root/.ansible/tmp/ansible-1360077064.3-30210459423135/service
<10.104.7.10> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.27-8382619657571/service; rm -rf /root/.ansible/tmp/ansible-1360077064.27-8382619657571/ >/dev/null 2>&1'
<10.104.7.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.27-166172056795312/service; rm -rf /root/.ansible/tmp/ansible-1360077064.27-166172056795312/ >/dev/null 2>&1'
<10.104.7.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.3-30210459423135/service; rm -rf /root/.ansible/tmp/ansible-1360077064.3-30210459423135/ >/dev/null 2>&1'
ok: [10.104.7.10] => {"changed": false, "name": "ntpd", "state": "started"}
ok: [10.104.7.12] => {"changed": false, "name": "ntpd", "state": "started"}
ok: [10.104.7.11] => {"changed": false, "name": "ntpd", "state": "started"}

We see that the task to ensure ntpd is running has been completed successfully. At the end of all the plays we get a recap:

PLAY RECAP ********************* 
10.104.7.10                    : ok=9    changed=5    unreachable=0    failed=0    
10.104.7.11                    : ok=9    changed=5    unreachable=0    failed=0    
10.104.7.12                    : ok=9    changed=5    unreachable=0    failed=0    
localhost                      : ok=2    changed=2    unreachable=0    failed=0

This demonstrates tasks completed successfully and those which changed something on the system.

Let’s see if it worked, point your web browser to https://public_ip:8888 of one of your instances (e.g. https://10.104.7.12:8888):

[Screenshot: the Eucalyptus User Console, taken 2013-02-05 15:31:50]

Job done!  Maybe next you could try putting a load balancer in front of these three instances? 🙂

Hopefully this was a good taste of deploying applications into the cloud with the help of Ansible.  I’ll continue to write with further examples of workloads over the next couple of months and will put this information into the Eucalyptus GitHub wiki.

EDIT – If people see value in a playbook repository for generic workloads and other deployment examples on EC2/Eucalyptus, let me know.  I’m sure we can get a common repo setup 🙂

Ansible 1.0 – Awesome.

You may be forgiven for thinking that a version 1.0 software release indicates some sort of significant milestone for the lifecycle of a project.  Perhaps in many cases it does but with Ansible, not so much.  Michael DeHaan articulates it much better than I could in this post to the project mailing list.  My personal experience from using Ansible since v0.8 is that each release delivers consistency, quality and increased flexibility.  It’s great to see fast releases delivering something which is incrementally more useful and enjoyable to use.  There’s always new handy stuff in each release.

For those using AWS and Eucalyptus, Ansible 1.0 is a perfect example of incremental AWSomeness (geddit?!).  Along with a host of other improvements it delivers the following for AWS/Eucalyptus users:

  • An updated ec2 module, ported to boto by our very own tgerla and now with the capability to launch multiple instances.  It works a treat: now you can deploy many instances and configure them all with simple plays. This brings direct dependencies on boto and m2crypto but loses the dependency on euca2ools.  Huge thanks to skvidal here for his counsel.
  • A new ec2_facts module written by silviud; this pulls the same information that facter would from an instance (basically all the metadata). The advantage is re-use of the facts in a playbook and, of course, not requiring facter and its Ruby dependencies to be installed in the instance.
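
A rough sketch of the tasks you might run inside an instance with it (assuming the returned facts follow the ansible_ec2_* naming convention):

    - name: Gather instance metadata facts
      action: ec2_facts

    - name: Use one of the returned facts
      action: debug msg="This instance is ${ansible_ec2_instance_id}"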

Better still, ec2-related stuff in Ansible is becoming even more popular.  Just recently the module was again updated to support tagging (although this won’t be out-of-the-box until 1.1)!

Read Michael’s blog post here for more information on the release.

I’ll cover some ec2/euca deployment and configuration examples in future blog posts, so stay tuned!  This information will also be going into the GitHub Eucalyptus wiki 🙂

Deploying Eucalyptus components and workloads with Ansible

About two months ago I started playing with Ansible in my “spare” work time.  Ansible is an orchestration engine which uses SSH, easy syntax and powerful modules to provide an alternative to the likes of Puppet and Chef.  A huge benefit to this approach is that it doesn’t require an agent to be installed on the target system.  This is perfect for cloud environments or the orchestration of a large number of disparate systems.

I began by creating a test playbook (my first) which will install a Eucalyptus node controller.  I needed to add about 5 NCs to a deployment and figured it would be nice to automate this.  The resulting playbook is here (check branches, it’s a work in progress). It’s not particularly neat but it works and does the job.  The next step here is to re-write it and build out a playbook for installing the other Eucalyptus components.

More recently, I wanted to deploy the Eucalyptus User Console and Data Warehouse into EC2 and Eucalyptus for testing. To do this I created two new playbooks which deploy these components respectively.  They use the Ansible EC2 module, which makes local calls via euca2ools to start an instance; the playbook then configures the resulting VM with the Eucalyptus software.  Try them out!

Disabling linklocal / zeroconf in OpenSUSE 12.2

For Eucalyptus images, having a zeroconf (169.254.0.0) route in the VM’s routing table is BAD.  It usually results in the system being unable to talk to the metadata service and so failing to retrieve SSH keys, hostname, etc.

It took me a little while to find it, but disabling zeroconf in OpenSUSE was not as obvious as in RHEL (NOZEROCONF=yes in the ifcfg-eth* script).  Hopefully this will be useful to others. The place to disable this in OpenSUSE (verified with 12.2) is /etc/sysconfig/network/config.  Look for the following section:

## Type:        string
## Default:     "eth*[0-9]|tr*[0-9]|wlan[0-9]|ath[0-9]"
#
# Automatically add a linklocal route to the matching interfaces.
# This string is used in a bash "case" statement, so it may contain
# '*', '[', ']'  and '|' meta-characters.
#
LINKLOCAL_INTERFACES="eth*[0-9]|tr*[0-9]|wlan[0-9]|ath[0-9]"

Just remove the relevant interfaces from the LINKLOCAL_INTERFACES line above (or comment the line out) and save the file.  Restart the network service (ensuring the routing table is scrubbed after it goes down) and there will be no more linklocal route!

EBS Architecture (EDBP)

So, the second post in this series and now a look at Eucalyptus Elastic Block Storage (EBS) and the Eucalyptus Storage Controller (SC) component which handles this.

What does it do?

The Storage Controller sits at the same layer as the Cluster Controller (CC), each Eucalyptus Availability Zone (AZ) or Cluster will have its own CC and SC.  Within that AZ, the SC will provide EBS (Elastic Block Store) functionality via iSCSI (AoE is no longer used in 3.0 onwards) using the Linux Target Framework (tgt).  If you’re a subscription-paying customer you can also use the SAN adapter to have the Eucalyptus SC talk directly to your NetApp, Dell (and EMC with 3.2) array.  In turn the Node Controllers (NC) will also talk directly to your storage array.

EBS is core to EC2; it’s a feature which the vast majority of compute users will use. It provides the capability for users to store persistent data (and snapshot that data to Walrus/S3 for long-term storage).  With Eucalyptus 3.0, users can now utilise EBS-backed instances, which are essentially boot-from-iSCSI virtual machines.  These virtual machines use an EBS volume for their root filesystem.

This post is a more in-depth look at best practices around storage controller configuration.

How does it work?

This is pretty well explained on the wiki page here but I’ll summarise in short for the benefit of readers.

An EBS volume is created in a number of steps, starting with a request sent from the client to the CLC (Cloud Controller).  To then create the EBS volume, the SC performs the following steps:

  1. SC creates a volume file in /var/lib/eucalyptus/volumes named per the volume ID
  2. This file is then attached to a free loopback device
  3. A logical volume is created on top of this loopback
  4. An iSCSI target is created along with a new LUN with the backing store of this logical volume

Then, when a user wishes to attach this EBS volume to a running instance, the NC on which the instance resides will attempt to log in to this iSCSI target and pass the block device from the LUN through to the virtual machine, based on an XML definition file, with a “virsh attach-device”.

The SC also facilitates point-in-time snapshots of EBS volumes.  This involves the SC copying the EBS volume to S3 for long-term persistence. From this snapshot, users can register boot-from-EBS  (bfEBS) images and create new volumes.

During the snapshot process the SC does the following:

  1. Creates a new raw disk image file in /var/lib/eucalyptus/volumes
  2. Adds this as a physical volume to the volume group of the EBS volume
  3. Extends the volume group over this new physical volume
  4. Creates a new Logical Volume and dd's the contents of the volume into this LV

After the copy is complete, the SC will then transfer the contents of the EBS volume up to S3/Walrus.

How should I architect EBS storage?

It’s quite common that the storage controller (without the SAN adapter) is the primary factor in deciding whether to scale out with multi-cluster deployments.  In part this is down to the usage profile of EBS and the architectural design most users follow.  When designing the storage aspect of the cloud there are a number of key areas on which to focus.

Usage Profile

What is the usage profile for the cloud?  Are EBS volumes used at all?  Are EBS volumes a key infrastructure component of workloads? How many users are on the cloud? How many concurrent volume attachments can you expect?

These are all valid questions when designing the storage architecture.  Concurrent and heavy use of EBS volumes may dictate very different backend requirements to a cloud where EBS volumes are only lightly used.  Make sure you test at the planned or envisaged scale.

Component Topology

Always keep the SC on a separate host from the Cluster Controller (CC) or any other Eucalyptus component if you can.  This has the potential to dramatically improve performance for even the smallest deployments.

Disk Subsystem & Filesystem

With the overlay storage backend, volumes and snapshots are stored in a flat directory structure in /var/lib/eucalyptus/volumes.

You need to make sure you choose a fast disk subsystem for the storage controller.  If you’re using local disks, consider RAID levels with some form of striping, such as RAID 1+0 across as many spindles as possible.  If you’re backing /var/lib/eucalyptus/volumes with some form of networked storage, avoid NFS and CIFS; choose iSCSI or Fibre Channel where possible for the best performance under high utilisation across large numbers of active EBS volumes.  For ultimate performance, consider SSDs and SSD arrays.  If you’re planning on using a remote backing store, like iSCSI, consider optimising with jumbo frames and iSCSI TCP offload on the NIC, if supported.

With 3.2 the DASManager storage backend is now open sourced.  Unlike the typical overlay backend, the DASManager directly carves up the presented block storage with LVM, circumventing the requirement for loopbacks and thus removing the limit of 256 volumes+snapshots*.  When not using the SAN adapter for NetApp, Dell or EMC arrays, the DASManager should be the preferred choice for performance and additional scalability.

As noted in the wiki referred to previously the SC has been tested primarily with ext4, although any POSIX compliant filesystem should be fine.

* In Eucalyptus 3.0 and 3.1 the SC would loopback mount all inactive volumes and thus the limit of 256 volumes would be imposed (256 loopbacks in Linux).  With 3.2+ the overlay storage backend ensures that only active volumes are loopback mounted, so now users can have up to 256 in-use volumes or snapshots.

Network

Don’t cross the streams!

The best possible scenario is to move all EBS traffic onto its own network segment, adding additional interfaces to both your NCs and the SC and then registering the SC on the interface you wish to use for the storage traffic.  This will ensure that storage and data traffic are segregated.  This should be considered a necessity if you really must have the CC and SC sharing a host.

Host System Tuning

The box on which the SC is running should have as much memory as possible, plenty for pagecache usage (write-caching).  If the inbound I/O from initiators cannot be written to disk fast enough, the pagecache is going to be important. Monitor the virtual memory subsystem at all times, using something like Ganglia, Nagios, collectl or collectd.

For RHEL hosts, use tuned profiles to apply some generic tweaks.  For the SC, enterprise-storage is probably the most effective; it adjusts vm.dirty_ratio upwards (the point at which processes generating dirty data must start writing it out to disk themselves), sets the deadline I/O scheduler and enables transparent hugepage support.

Consider the cache layers in the chain from initiator to the SC.  These can give misleading results during testing.  For example, writes from the instance will (by default, unless cache=none) hit the host pagecache, followed by the tgt cache on the SC as well as the SC’s own pagecache, followed by any cache layer backing /var/lib/eucalyptus/volumes. So the instance itself may see very misleading performance figures, particularly for disk writes.  Test the chain from initiators to SC under stress conditions.

iSCSI Tuning

By default iscsid may be considered quite aggressive in its timeouts.  On a congested network the last thing a user wants is the initiator logging out of a session.  If bfEBS is being used, it’s probably a good idea to back off on some of the timeouts; consider changing the following:

node.conn[0].timeo.noop_out_interval = 0 <- this stops the "ping" between initiator and target
node.conn[0].timeo.noop_out_timeout = 0 <- this disables the action of timing out an operation as a result of the above "ping"
node.session.timeo.replacement_timeout = 7200 <- this sets the connection replacement timeout high; if the entire root of the OS is running on the iSCSI volume (bfEBS), be lazy about giving up on the session

Expectations & Remediation

Maybe this section should come first but expectations are key here. If you are going to install Eucalyptus on low-end server hardware, with slow disks and network, then don’t expect miracles in terms of concurrent EBS and bfEBS usage.  Performance may suck, YMMV.

On the other hand, perhaps you have optimised as best you can but short of getting new hardware and a new network, there is nothing more you can do to improve EBS performance in your current architecture.  At this point, consider using the following cloud properties to limit EBS volume sizes:

<partition_name>.storage.maxtotalvolumesizeingb <- sets the maximum total EBS volume size in the partition / cluster
<partition_name>.storage.maxvolumesizeingb <- sets the max per-volume size in the partition / cluster

Furthermore, utilise EIAM (Eucalyptus Identity and Access Management) quotas to mask weak points in your platform architecture by placing restrictions on volume sizes and the number of volumes users can have.  This is an easy way to limit “abuse” of the storage platform.  You can find some sample quotas here.

Wrap-up

Following on from my first post on design, the key here is to nail the requirements and usage profile before designing the EBS architecture and always, always monitor.

If you have any comments or requests, please reply to this post and I’ll try to address them 🙂