The first cut of the Ansible playbook for deploying Eucalyptus private clouds is ready. I’ve merged the first “release” into the master branch here: https://github.com/lwade/eucalyptus-playbook. Feedback and contributions are very welcome; please file issues against the project.
This playbook allows a user to deploy a single front-end cloud (i.e. all components on a single system) and as many NCs as they want, although admittedly I’ve only tested with one so far. I’ve followed, to a certain degree, the best practices described here: http://ansible.cc/docs/bestpractices.html
Overall I’m pretty happy with it. There are some areas I’d like to rewrite and improve, but the groundwork is there. It was all very fluid to start with, and doing something multi-tier has been enjoyable. I’ve also learnt what it’s like to automate a deployment of Eucalyptus, and there are probably a number of things we should improve to make it easier in this regard, like getting rid of the euca_conf requirement to register systems; doing it via a config file would be better.
For those familiar with Ansible, you will (hopefully) see that I’ve started to split common tasks out to encourage their reuse. I’m certainly not finished here, but I think what I have lays the groundwork for task reuse to then enable different topologies.
Fundamentally, to deploy a cloud with a specific topology the master playbook is called which then references a particular collection of tasks to achieve the desired results. After all, a playbook is just a list of tasks, right? So currently there is only one playbook for the single front-end topology: cloud-deploy-sfe.yml. By looking at this, you’ll be able to see what tasks are referenced to build the cloud platform. The next topology I plan to create is one for a non-HA split front-end topology (where all Eucalyptus cloud and cluster tier components are on separate hosts). After that, I’ll look to address a couple of HA topologies. These are the kind of topologies folks are putting into production use.
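To make the “playbook is just a list of tasks” idea concrete, here’s a rough sketch of what a top-level playbook like cloud-deploy-sfe.yml might look like — this is illustrative only (the host group name and the exact set of includes are assumptions; see the actual file in the repo), but it shows the pattern of a master playbook pulling in shared task files:

```yaml
---
# Hypothetical sketch of a single front-end topology playbook.
# The real cloud-deploy-sfe.yml in the repo is the authoritative version.
- hosts: cloudcontroller
  user: root
  tasks:
    # Shared task includes from the common/ directory
    - include: common/tasks/packages.yml
    - include: common/tasks/preconfig.yml
    - include: common/tasks/template-configuration.yml
    - include: common/tasks/initialization.yml
    - include: common/tasks/component-registration.yml
  handlers:
    - include: common/handlers/services.yml
```

A different topology’s playbook would reference a different collection of the same task files against different host groups, which is the whole point of splitting the tasks out.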
The directory hierarchy looks like this:
|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|   |-- handlers
|   |   `-- cluster.yml
|   `-- tasks
|       |-- node-key-fetch.yml
|       `-- node-registration.yml
|-- common
|   |-- files
|   |-- handlers
|   |   |-- initialization.yml
|   |   `-- services.yml
|   |-- tasks
|   |   |-- component-registration.yml
|   |   |-- creds.yml
|   |   |-- initialization.yml
|   |   |-- packages.yml
|   |   |-- preconfig.yml
|   |   |-- refresh-facts.yml
|   |   |-- storage.yml
|   |   `-- template-configuration.yml
|   `-- templates
|       `-- eucalyptus.conf.j2
|-- group_vars
|   |-- all
|   |-- clustercontroller
|   |-- nodecontroller
|   `-- storagecontroller
|-- hosts
|-- host_vars
|-- node
|   |-- handlers
|   |   `-- node-preconfig.yml
|   |-- tasks
|   |   |-- node-key-copy.yml
|   |   `-- node-preconfig.yml
|   `-- templates
|       |-- ifconfig_bridge_interface.j2
|       `-- ifconfig_bridge.j2
|-- readme
`-- vars
    |-- cloudcontroller.yml
    |-- clustercontroller.yml
    |-- nodecontroller-net.yml
    |-- nodecontroller.yml
    |-- storagecontroller.yml
    `-- walrus.yml
It’s probably worth explaining the structure (I’ll be updating the readme soon)…
|-- cloud
|-- cloud-deploy-distrib-nonha.yml
|-- cloud-deploy-sfe.yml
|-- cluster
|-- common
|-- group_vars
|-- hosts
|-- host_vars
|-- node
|-- readme
`-- vars
Note: structure may change, there are already some tasks which should really be moved into their tiered component hierarchies.
- cloud – this directory holds any specific task includes, handlers, files and templates for the cloud controller tier (CLC + Walrus)
- cloud-deploy* files – these are the top-level playbooks which pull in the tasks for the platform tiers. From a user’s perspective, these are the playbooks they choose to run to deploy a certain topology and configuration.
- cluster – this directory holds specific task includes, handlers, files and templates for the cluster layer (CC + SC)
- common – all common tasks, handlers, files and templates are found in this directory. Tasks which are shared across all tiers and components should end up in here.
- group_vars – these are variables which apply to host groups.
- hosts – the inventory file, put your hosts into here, based on group (role).
- host_vars – host specific variables, of which there are none (at the moment).
- node – this directory holds specific tasks includes, handlers, files and templates for the node controllers in a cluster.
- readme – the readme, which needs expanding
- vars – this holds variable include files which cannot be used across groups or that need to be set statically for whatever reason.
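As a quick illustration of how the hosts inventory ties into the group_vars above, an inventory for a single front-end deployment with a couple of NCs might look something like this (the IP addresses are placeholders, and the exact group names used by the playbook are best checked against the repo’s hosts file):

```ini
; Hypothetical inventory layout; group names mirror the group_vars files.
[cloudcontroller]
192.168.1.10

[clustercontroller]
192.168.1.10

[storagecontroller]
192.168.1.10

[nodecontroller]
192.168.1.20
192.168.1.21
```

With the inventory in place, the deployment is then just a matter of running the topology playbook against it, e.g. `ansible-playbook -i hosts cloud-deploy-sfe.yml`.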
Please take it for a spin and file issues against the project in GitHub. If we get enough traction, I’m likely to split this out into a separate repo entirely.