Deploying Eucalyptus via Ansible playbook(s)

The first cut of the Ansible playbook for deploying Eucalyptus private clouds is ready.  I’ve merged the first “release” into the master branch here: https://github.com/lwade/eucalyptus-playbook. Feedback and contributions are very welcome; please file issues against the project.

This playbook allows a user to deploy a single front-end cloud (i.e. all components on a single system) and as many NCs as they want, although admittedly I’ve only tested with one so far.  I’ve followed, to a certain degree, the best practices described here:  http://ansible.cc/docs/bestpractices.html

Overall I’m pretty happy with it; there are some areas I’d like to rewrite and improve, but the groundwork is there.  It was all very fluid to start with, and doing something multi-tier has been enjoyable. I’ve also learnt what it’s like to automate a deployment of Eucalyptus, and there are probably a number of things we should improve to make that easier, like getting rid of the euca_conf requirement to register systems; doing it via a config file would be better 🙂

Those familiar with Ansible will (hopefully) see that I’ve started to split common tasks out to encourage their reuse.  I’m certainly not finished here, but I think what I have lays the groundwork for task reuse, which in turn enables different topologies.

Fundamentally, to deploy a cloud with a specific topology you call the master playbook, which then references a particular collection of tasks to achieve the desired result.  After all, a playbook is just a list of tasks, right?  Currently there is only one playbook, for the single front-end topology: cloud-deploy-sfe.yml.  By looking at this, you’ll be able to see which tasks are referenced to build the cloud platform.  The next topology I plan to create is a non-HA split front-end topology (where all Eucalyptus cloud and cluster tier components are on separate hosts).  After that, I’ll look to address a couple of HA topologies. These are the kinds of topologies folks are putting into production use.
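
To make that concrete, here’s a rough sketch of what a topology playbook along those lines might look like.  The host group name and the exact set of includes are assumptions for illustration; the real cloud-deploy-sfe.yml in the repo is the authoritative version.

- name: Deploy a single front-end cloud            # sketch only, not the real cloud-deploy-sfe.yml
  hosts: cloudcontroller                           # host group name is an assumption
  user: root
  tasks:
    # pull in shared task files from the common directory (file names taken from the tree below)
    - include: common/tasks/packages.yml
    - include: common/tasks/preconfig.yml
    - include: common/tasks/initialization.yml
    - include: common/tasks/component-registration.yml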

The directory hierarchy looks like this:

|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|   |-- handlers
|   |   `-- cluster.yml
|   `-- tasks
|       |-- node-key-fetch.yml
|       `-- node-registration.yml
|-- common
|   |-- files
|   |-- handlers
|   |   |-- initialization.yml
|   |   `-- services.yml
|   |-- tasks
|   |   |-- component-registration.yml
|   |   |-- creds.yml
|   |   |-- initialization.yml
|   |   |-- packages.yml
|   |   |-- preconfig.yml
|   |   |-- refresh-facts.yml
|   |   |-- storage.yml
|   |   `-- template-configuration.yml
|   `-- templates
|       `-- eucalyptus.conf.j2
|-- group_vars
|   |-- all
|   |-- clustercontroller
|   |-- nodecontroller
|   `-- storagecontroller
|-- hosts
|-- host_vars
|-- node
|   |-- handlers
|   |   `-- node-preconfig.yml
|   |-- tasks
|   |   |-- node-key-copy.yml
|   |   `-- node-preconfig.yml
|   `-- templates
|       |-- ifconfig_bridge_interface.j2
|       `-- ifconfig_bridge.j2
|-- readme
`-- vars
    |-- cloudcontroller.yml
    |-- clustercontroller.yml
    |-- nodecontroller-net.yml
    |-- nodecontroller.yml
    |-- storagecontroller.yml
    `-- walrus.yml

It’s probably worth explaining the structure (I’ll be updating the readme soon):

|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|-- common
|-- group_vars
|-- hosts
|-- host_vars
|-- node
|-- readme
`-- vars

Note: structure may change, there are already some tasks which should really be moved into their tiered component hierarchies.

  • cloud – this directory holds any specific task includes, handlers, files and templates for the cloud controller tier (CLC + Walrus)
  • cloud-deploy* files – these are the top-level playbooks which pull in the tasks for the platform tiers.  From a user’s perspective, these will be the playbooks they choose to run to deploy a certain topology and configuration.
  • cluster – this directory holds specific task includes, handlers, files and templates for the cluster layer (CC + SC)
  • common – all common tasks, handlers, files and templates are found in this directory.  Tasks which are shared across all tiers and components should end up in here.
  • group_vars – these are variables which apply to host groups.
  • hosts – the inventory file; put your hosts in here, grouped by role (see the example inventory after this list).
  • host_vars – host specific variables, of which there are none (at the moment).
  • node – this directory holds specific task includes, handlers, files and templates for the node controllers in a cluster.
  • readme – the readme which needs expanding 😉
  • vars – this holds variable include files which cannot be used across groups or that need to be set statically for whatever reason.
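
As a quick illustration of the hosts file, here’s a minimal sketch of an inventory grouped by role.  The group names follow the group_vars directories above; the hostnames are placeholders.

[clustercontroller]
cc01.example.com

[storagecontroller]
sc01.example.com

[nodecontroller]
nc01.example.com
nc02.example.com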

Please take it for a spin and file issues against the project on GitHub.  If we get enough traction, I’m likely to split this out into a separate repo entirely.

Writing my first Ansible module: ec2_vol

As is probably quite evident, I’ve recently been using Ansible to deploy workloads into EC2 and Eucalyptus.  One of the ideas behind this is the convenience of being able to leverage the common API to achieve a hybrid deployment scenario.  Thanks to various folks (names mentioned in previous posts) we have a solid ec2 boto-based Python module for instance launching.  One thing I wanted to do when spinning up instances and configuring a workload was to add some persistent storage for the application.  To do this I had to create and attach a volume as a manual step or run a local_action against euca2ools.  I figured I could try to write my own module to practice a bit more Python (specifically boto).  The result is something terribly (perhaps in more ways than one?) simple, but I think that’s a testament to just how easy it is to write modules for Ansible (P.S. it doesn’t need to be Python) 😉

The resulting doc bits are here: http://ansible.cc/docs/modules.html#ec2-vol whilst the code lives here: https://github.com/ansible/ansible/blob/devel/library/ec2_vol
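
To give an idea of how it slots into a play, here’s a hedged sketch of calling it as a local_action after launching instances with the ec2 module.  The parameter names are from memory, so check them against the module docs linked above.

    - name: Create and attach a volume to each launched instance   # sketch only
      local_action: ec2_vol instance=${item.id} volume_size=5      # parameter names from memory; see the docs link above
      with_items: ${ec2.instances}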

In a future post I’ll write a little bit about how it works, hopefully this can inspire other folks to try writing some additional EC2 modules for Ansible 🙂

It’s really very rudimentary but it does what it says on the tin.  Having spoken to Seth (and others) about this, some improvements for future versions would be:

– Make it idempotent.  In practical terms, make it possible to run the module repeatedly without making further changes to the system.  This would be possible via a state parameter (see other modules): present/absent.  The code could then check for a volume attached to a particular instance which is tagged with the specified tag.  If it doesn’t exist, add it.  If it does, pass over it.  I’d call this the ec2 volume passover module 😉 (See the hypothetical sketch after this list.)

– Detect existing device mappings via instance.block_device_mapping and then adjust the attachment device appropriately.
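
To make the first point concrete, here’s a purely hypothetical invocation showing what an idempotent call could look like if a state parameter were added.  None of this exists in the current module.

    - name: Ensure a data volume is attached                                  # hypothetical future usage, not current behaviour
      local_action: ec2_vol instance=${item.id} volume_size=5 state=present   # state=present is the proposed parameter, not implemented yet
      with_items: ${ec2.instances}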

I’ll probably attempt this in a couple of months.  This would work fine under EC2, but tagging and *full* block device mapping support won’t be in Eucalyptus until version 3.3 (due at the end of Q2 this year).  See here for more details: https://eucalyptus.atlassian.net/browse/EUCA-4786 and https://eucalyptus.atlassian.net/browse/EUCA-2117.

As we continue to bring better feature compatibility with AWS, these kinds of hybrid things get much easier 🙂  We’re also going to need to address some of the other AWS services with Ansible modules, I think: ELB, CloudWatch, etc.

Ansible: workload example – new and improved with FREE Load Balancing

Following on from my post on how to deploy multiple instances of the Eucalyptus User Console (as just a sample workload) I figured I’d make it more useful and add an HAProxy load balancer in front of the user consoles.  With the playbook found here, you should be able to deploy as many consoles as you want and add a single load balancer in front of them.

There are some changes which are worth explaining in this example.  Firstly, the funky templating that Ansible and the Jinja2 templating language allow you to perform.  Here’s a snippet where I launch my instances:

  tasks:
  - name: Launch instance
    local_action: ec2 keypair=$keypair group=$security_group instance_type=$instance_type image=$image wait=true count=3
    register: ec2

Note the register directive. What if I’ve launched 3, 5, maybe 10? How can I know exactly how many to reference in my configuration template?  Here’s the nice bit.  In my template file for HAProxy I then use the templating language to build the backend configuration dynamically.  This for loop picks all of my instances from the list the ec2 module returns and inserts them into the config:

backend console
    {% for instance in ec2.instances %}
    server {{ instance.id }} {{ instance.public_ip }}:8888 check
    {% endfor %}

Here’s the resulting config (haproxy.cfg):

backend console            
        server i-18FA3D7F 10.104.7.10:8888 check            
        server i-517B3F77 10.104.7.12:8888 check            
        server i-9E1C413F 10.104.7.11:8888 check
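
For completeness, the template above gets rendered onto the load balancer host with the template module.  Here’s a hedged sketch; the src/dest paths are assumptions, and the playbook linked above has the real values.

    - name: Render the HAProxy configuration        # sketch; file names are assumptions
      action: template src=templates/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg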

The next minor change, made after mpdeehan’s feedback, was to add notify actions and a handler snippet for when the eucalyptus-console configuration changes.

A handler is a common task which only runs when a notify directive calls it.  This is particularly useful for restarting a currently running service when its configuration file changes (so it reloads the config).  Here is the handler section:

  handlers:
    - name: restart console
      action: service name=eucalyptus-console state=restarted

This defines the action the handler will take (restart the console service). Then, here is the notify directive along with the task.  This ensures that, on change, the handler is notified (and the service restarted):

    - name: Configure User Console Endpoint
      action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^clchost line=clchost:" ${clchost}"
      notify:
      - restart console

There we have it.  Some minor changes which hopefully demo a little bit more of the functionality you can achieve with Ansible.

Ansible: playbook to deploy a workload using the ec2 module

My previous post talked a little bit about new functionality from (new and updated) ec2-related modules in the Ansible 1.0 release.  In this post I’ll go through the practical example of launching multiple instances with the ec2 module and then configuring them with a workload.  In this case the workload will be the Eucalyptus User Console 🙂

For those who are unfamiliar with Ansible, check out the online documentation here. The documentation is laid out in a pretty good order for newcomers, so take it step by step to get the most out of it.  Once you’ve finished, you’ll come to realise how flexible it is, but hopefully also how *easy* it is.

To launch my instances and then configure them, I’m going to use a playbook.  For want of a better explanation, a playbook is a YAML-formatted file containing a series of tasks which orchestrate a configuration or deployment process. A playbook can contain multiple plays: separate sections of the playbook (split out for logical reasons, perhaps) which target specific hosts or groups of hosts.
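
As a bare skeleton (purely illustrative; the host group names here are placeholders), a playbook with two plays looks something like this:

- name: First play                        # a play is just hosts + tasks
  hosts: webservers                       # placeholder group name
  user: root
  tasks:
    - name: Make sure ntpd is running
      action: service name=ntpd state=started

- name: Second play                       # a second play in the same file, targeting another group
  hosts: dbservers                        # placeholder group name
  user: root
  tasks:
    - name: Say hello
      action: command echo hello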

Getting Started

First up, get hold of Ansible!  We’ll clone from GitHub in this example; see here for alternatives.  Make sure you have Ansible release 1.0 or the latest devel snapshot.

$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup

Next, we need to create a hosts file. This hosts file can reside anywhere and be specified with the “-i” option when running Ansible. By default Ansible will look in /etc/ansible/hosts, so let’s create this file.

$ touch /etc/ansible/hosts

Our playbook will depend on a single entry for localhost, so add the following to this file:

[local]
localhost

This assumes you have a localhost entry in your system’s /etc/hosts file.

Try it out

Now we’re ready to really get started. Before we create our playbook, let’s verify that the environment is set up and working correctly. Choose a system on which you have a login and SSH access, and let’s try to retrieve some facts by using Ansible in task execution mode.

In this example I’m using host “prc6” onto which I’ve copied my public SSH key. This host has sudo configured and I have the appropriate permissions for root in the sudoers file. Ansible can take this into account with the appropriate switches:

# ansible prc6 -m setup -s -K

Let’s break that down:

ansible prc6 # run against this host, this could also be a host group found in /etc/ansible/hosts
-m setup # run the setup module on the remote host
-s -K # tell ansible I'm using sudo and ask ansible to prompt me for my sudo password

Running this command returns a load of in-built facts from the setup module about the remote host:

prc6 | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.104.1.130"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::ea9a:8fff:fe74:13c2"
        ],
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "06/22/2012",
        "ansible_bios_version": "2.0.3",
        "ansible_cmdline": {
            "KEYBOARDTYPE": "pc",
            "KEYTABLE": "us",
            "LANG": "en_US.UTF-8",
            "SYSFONT": "latarcyrheb-sun16",
            "crashkernel": "129M@0M",
            "quiet": true,
            "rd_LVM_LV": "vg01/lv_root",
            "rd_MD_UUID": "1b27ae4a:6c4a8a77:6f718721:d335bf17",
            "rd_NO_DM": true,
            "rd_NO_LUKS": true,
            "rhgb": true,
            "ro": true,
            "root": "/dev/mapper/vg01-lv_root"
        }, [...cont]

If we were using a playbook, we could use these gathered facts in our configuration jobs. Neat, huh? An example might be some kind of conditional execution, in pseudocode:

if bios_version <= 2.0.3 then update firmware
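
In the Ansible syntax of the time, that might look roughly like the task below, using the only_if conditional.  The firmware command is a placeholder and the string comparison is a stand-in for a proper version check.

    - name: Update system firmware                                 # hypothetical task
      action: command /usr/local/sbin/update-firmware              # placeholder command, not a real tool
      only_if: "'$ansible_bios_version' <= '2.0.3'"                # string comparison as a stand-in for a real version compare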

This gives you just an extremely small idea of how Ansible can be used as a configuration management engine.  Better still, modules are highly modular (weird, that?!) and can be written in pretty much any language, so you could write your own module to gather whatever facts you might want. We’re planning a Eucalyptus configuration module in the future 🙂

Anyhow, I digress…

Playbook

With Ansible working nicely, let’s start writing our playbook. With our next example we want to achieve the following aim:

  • Launch 3 instances in ec2/Eucalyptus using Ansible

Firstly, we need to address dependencies. With 1.0, the ec2 module was ported to boto, so install the python-boto and m2crypto packages on your local system:

$ yum install python-boto m2crypto

Next up, get the credentials for your EC2 or Eucalyptus cloud. If you’re using Eucalyptus, source the eucarc, as the ec2 module will take the values for EC2_ACCESS_KEY, EC2_SECRET_KEY and EC2_URL from your environment. If you want to use EC2, just export your access and secret key; boto knows the endpoints:

# export EC2_SECRET_KEY=XXXXXX
# export EC2_ACCESS_KEY=XXXXXX

At this point you might want to try using euca2ools or ec2-tools to interact with your cloud, just to check your credentials 🙂
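
For example, with a sourced eucarc a quick sanity check could be as simple as:

$ euca-describe-availability-zones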

Onto the action! Below is the playbook broken up into sections; you can see the whole thing here as eucalyptus-user-console-ec2.yml. Save this as euca-demo.yml:

- name: Stage instance # we name our playbook
  hosts: local # we want to target the “local” host group we defined earlier in this post
  connection: local # we want to run this action locally on this host
  user: root # we want to run as this user
  gather_facts: false # since we're running locally, we don't want to gather system facts (remember the setup module we tested?)

  vars: # here we define some play variables
      keypair: lwade # this is our keypair in ec2/euca
      instance_type: m1.small # this is the instance type we want
      security_group: default # our security group
      image: emi-048B3A37 # our image
      count: 3 # how many we want to launch

  tasks: # here we begin the tasks section of the playbook, which is fairly self explanatory :)
    - name: Launch instance # name our tasks
      local_action: ec2 keypair=$keypair group=$security_group instance_type=$instance_type image=$image count=$count wait=true # run the ec2 module locally, with the parameters we want
      register: ec2 # register (save) the output for later use
    - name: Itemised host group addition
      local_action: add_host hostname=${item.public_ip} groupname=launched # here we add the public_ip var from the list of ec2.instances to a host group called "launched", ready for the next play
      with_items: ${ec2.instances}

You could try running this now.  Run with:

# ansible-playbook euca-demo.yml

Once complete, take a look at your running instances:

# euca-describe-instances
RESERVATION    r-48D14305    230788055250    default
INSTANCE    i-127F4992    emi-048B3A37    10.104.7.12    172.25.26.172    running    admin    2        m1.small    2013-02-05T14:59:36.095Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.12    172.25.26.172            instance-store                                    
INSTANCE    i-244B42C9    emi-048B3A37    10.104.7.10    172.25.26.173    running    admin    0        m1.small    2013-02-05T14:59:36.044Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.10    172.25.26.173            instance-store                                    
INSTANCE    i-FADD43E0    emi-048B3A37    10.104.7.11    172.25.26.169    running    admin    1        m1.small    2013-02-05T14:59:36.076Z    uno-cluster    eki-38B93991    eri-EC703A1C        monitoring-disabled    10.104.7.11    172.25.26.169            instance-store

Now we want a playbook (play) section to deal with the configuration of the instances.  We define the next play in the same file.  Remember this entire file is available here.

- name: Configure instance  
  hosts: launched  # Here we use the hostgroup from the previous play; all of our instances
  user: root  
  gather_facts: True  # Since these aren't local actions, I've left fact gathering on.

  vars_files:  # Here we demonstrate an external yaml file containing variables    
      - vars/euca-console.yml  

  tasks:   # Begin the tasks
      - name: Ensure NTP is up and running  # Using the "service" module to check state of ntpd    
        action: service name=ntpd state=started     

      - name: Download the repo RPMs # download the list of repo RPMs we need
        action: get_url url=$item dest=/tmp/ thirsty=yes
        with_items:
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/eucalyptus-release-${euca_version}.noarch.rpm
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/epel-release-6.noarch.rpm
        - http://downloads.eucalyptus.com/software/eucalyptus/${euca_version}/rhel/6/x86_64/elrepo-release-6.noarch.rpm
        tags:
        - geturl

      - name: Install the repo RPMs # Install the RPMs for the repos
        action: command rpm -Uvh --force /tmp/$item
        with_items:
        - eucalyptus-release-${euca_version}.noarch.rpm
        - epel-release-6.noarch.rpm
        - elrepo-release-6.noarch.rpm

      - name: Install Eucalyptus User Console # Use the yum module to install the eucalyptus-console package     
        action: yum pkg=eucalyptus-console state=latest    

      - name: Configure User Console Endpoint # Here we use the lineinfile module to make a substitutions based on a regexp in the configuration file     
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^clchost line=clchost:" ${clchost}"    

      - name: Configure User Console Port      
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^uiport line=uiport:" ${port}"

      - name: Configure User Console Language      
        action: lineinfile dest=/etc/eucalyptus-console/console.ini state=present regexp=^language line=language:" ${lang}"    

      - name: Restart Eucalyptus User Console # With the config changed, use the service module to restart eucalyptus-console   
        action: service name=eucalyptus-console state=restarted

There we have it.  The playbook with two distinct plays is complete: one play to launch the instances and another to configure them.  Let’s run our playbook and observe the results.

To run a playbook, you use the ansible-playbook command with much the same options as ansible (task execution mode).  Since the instances we launch will need a private key, we specify it as part of the command:

# ansible-playbook euca-demo.yml --private-key=/home/lwade/.euca/mykey.priv -vvv

The -vvv switch gives extra verbose output.

As Ansible goes off and configures the systems in parallel, you’ll see this sort of output, indicating whether a task has been successful:

TASK: [Ensure NTP is up and running] ********************* 
<10.104.7.10> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.10
<10.104.7.11> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.11
<10.104.7.12> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 10.104.7.12
<10.104.7.10> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.27-8382619657571 && echo $HOME/.ansible/tmp/ansible-1360077064.27-8382619657571'
<10.104.7.11> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.27-166172056795312 && echo $HOME/.ansible/tmp/ansible-1360077064.27-166172056795312'
<10.104.7.12> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1360077064.3-30210459423135 && echo $HOME/.ansible/tmp/ansible-1360077064.3-30210459423135'
<10.104.7.10> REMOTE_MODULE service name=ntpd state=started
<10.104.7.11> REMOTE_MODULE service name=ntpd state=started
<10.104.7.10> PUT /tmp/tmptDgeMW TO /root/.ansible/tmp/ansible-1360077064.27-8382619657571/service
<10.104.7.11> PUT /tmp/tmpBF7ekS TO /root/.ansible/tmp/ansible-1360077064.27-166172056795312/service
<10.104.7.12> REMOTE_MODULE service name=ntpd state=started
<10.104.7.12> PUT /tmp/tmpA49v9F TO /root/.ansible/tmp/ansible-1360077064.3-30210459423135/service
<10.104.7.10> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.27-8382619657571/service; rm -rf /root/.ansible/tmp/ansible-1360077064.27-8382619657571/ >/dev/null 2>&1'
<10.104.7.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.27-166172056795312/service; rm -rf /root/.ansible/tmp/ansible-1360077064.27-166172056795312/ >/dev/null 2>&1'
<10.104.7.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1360077064.3-30210459423135/service; rm -rf /root/.ansible/tmp/ansible-1360077064.3-30210459423135/ >/dev/null 2>&1'
ok: [10.104.7.10] => {"changed": false, "name": "ntpd", "state": "started"}
ok: [10.104.7.12] => {"changed": false, "name": "ntpd", "state": "started"}
ok: [10.104.7.11] => {"changed": false, "name": "ntpd", "state": "started"}

We see that the task to ensure ntpd is running has been completed successfully. At the end of all the plays we get a recap:

PLAY RECAP ********************* 
10.104.7.10                    : ok=9    changed=5    unreachable=0    failed=0    
10.104.7.11                    : ok=9    changed=5    unreachable=0    failed=0    
10.104.7.12                    : ok=9    changed=5    unreachable=0    failed=0    
localhost                      : ok=2    changed=2    unreachable=0    failed=0

This shows which tasks completed successfully and which changed something on the system.

Let’s see if it worked: point your web browser at https://public_ip:8888 for one of your instances (e.g. https://10.104.7.12:8888):

[Screenshot: the Eucalyptus User Console served from one of the instances]

Job done!  Maybe next you could try putting a load balancer in front of these three instances? 🙂

Hopefully this was a good taste of deploying applications into the cloud with the help of Ansible.  I’ll continue to write up further workload examples over the next couple of months and will put this information into the Eucalyptus GitHub wiki.

EDIT – If people see value in a playbook repository for generic workloads and other deployment examples on EC2/Eucalyptus, let me know.  I’m sure we can get a common repo set up 🙂

Ansible 1.0 – Awesome.

You may be forgiven for thinking that a version 1.0 software release indicates some sort of significant milestone in the lifecycle of a project.  Perhaps in many cases it does, but with Ansible, not so much.  Michael DeHaan articulates it much better than I could in this post to the project mailing list.  My personal experience from using Ansible since v0.8 is that each release delivers consistency, quality and increased flexibility.  It’s great to see fast releases delivering something which is incrementally more useful and enjoyable to use.  There’s always new handy stuff in each release.

For those using AWS and Eucalyptus, Ansible 1.0 is a perfect example of incremental AWSomeness (geddit?!).  Along with a host of other improvements it delivers the following for AWS/Eucalyptus users:

  • An updated ec2 module, ported to boto by our very own tgerla and now with the capability to launch multiple instances.  It works a treat, and now you can deploy many instances and configure them all with simple plays. This brings direct dependencies on boto and m2crypto but loses the dependency on euca2ools.  Huge thanks to skvidal here for his counsel.
  • A new ec2_facts module, written by silviud, which pulls the same information that facter would from an instance (basically all the metadata). The advantage is reuse of the facts in a playbook and, of course, not requiring facter and its Ruby dependencies to be installed in the instance.  See the sketch after this list.
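
To illustrate that second point, using ec2_facts from a play is as simple as running the module on the instance and then referring to the gathered facts.  A quick sketch; the exact fact name is how I’d expect the metadata key to surface, so double-check against the module docs.

    - name: Gather EC2/Eucalyptus instance metadata
      action: ec2_facts
    - name: Show the instance ID                                     # debug prints the gathered fact
      action: debug msg="This is instance ${ansible_ec2_instance_id}"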

Better still, ec2-related stuff in Ansible is becoming even more popular.  Just recently the module was again updated to support tagging (although this won’t be out-of-the-box until 1.1)!

Read Michael’s blog post here for more information on the release.

I’ll cover some ec2/euca deployment and configuration examples in future blog posts, so stay tuned!  This information will also be going into the GitHub Eucalyptus wiki 🙂

Deploying Eucalyptus components and workloads with Ansible

About two months ago I started playing with Ansible in my “spare” work time.  Ansible is an orchestration engine which uses SSH, easy syntax and powerful modules to provide an alternative to the likes of Puppet and Chef.  A huge benefit of this approach is that it doesn’t require an agent to be installed on the target system.  This is perfect for cloud environments or the orchestration of a large number of disparate systems.

I began by creating a test playbook (my first) which installs a Eucalyptus node controller.  I needed to add about 5 NCs to a deployment and figured it would be nice to automate this.  The resulting playbook is here (check the branches; it’s a work in progress). It’s not particularly neat, but it works and does the job.  The next step is to rewrite it and build out playbooks for installing the other Eucalyptus components.

More recently, I wanted to deploy the Eucalyptus User Console and Data Warehouse into EC2 and Eucalyptus for testing. To do this I created two new playbooks which deploy these components respectively.  They use the Ansible ec2 module, which makes local calls via euca2ools to start an instance; each playbook then configures the resulting VM with the Eucalyptus software.  Try them out!