This blog post was originally posted on the ShapeBlue website: http://www.shapeblue.com/networking-kvm-for-cloudstack/
KVM hypervisor networking for CloudStack can sometimes be a challenge, since KVM doesn’t have the mature guest networking model found in the likes of VMware vSphere and Citrix XenServer. In this blog post we look at the options for networking KVM hosts using bridges and VLANs, and dive a bit deeper into the configuration of these options. Read More
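To give a taste of the bridge-plus-VLAN approach the full post covers, a KVM guest bridge on a tagged VLAN can be built up roughly like this. This is a sketch only – the interface name eth0, VLAN ID 100 and bridge name cloudbr100 are illustrative assumptions, the commands need root, and in a real deployment CloudStack creates much of this on demand:

```shell
# Assumed names: physical NIC eth0, VLAN ID 100, bridge cloudbr100
ip link add link eth0 name eth0.100 type vlan id 100   # tagged sub-interface on eth0
ip link add name cloudbr100 type bridge                # bridge carrying guest traffic
ip link set eth0.100 master cloudbr100                 # enslave the VLAN sub-interface
ip link set eth0.100 up
ip link set cloudbr100 up
```

Guest VM vNICs are then attached to cloudbr100, so their traffic leaves the host tagged with VLAN 100.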
This blog post was originally posted on the ShapeBlue website: http://www.shapeblue.com/recovery-of-vms-to-new-cloudstack-instance
We recently came across a very unusual issue where a client had a major security breach on their network. Among lots of other damage, their CloudStack infrastructure was maliciously damaged beyond recovery. Luckily the hackers hadn’t managed to damage the backend XenServer hypervisors, so these were quite happily still running user VMs and Virtual Routers, just no longer under CloudStack control. Read More
I recently did a talk on CloudStack / CloudPlatform, zero touch VMware ESXi / Citrix XenServer builds and Ansible automation at the Glasgow Virtualisation User Group meeting. Great day, lots of useful information from end users and vendors, as well as some interesting cloud and virtualisation discussions. Thanks to Mike, Brendon and Gavin for the invite – it was good to catch up.
Slide deck up on Slideshare:
Cloudmonkey is distributed with Apache CloudStack, and allows for command line configuration of CloudStack resources – i.e. configuration of zones, networks, pods, clusters as well as adding hypervisors, primary and secondary storage.
Using an Ansible playbook to run CloudMonkey isn’t necessarily a good idea – a proper shell script with its own variable input allows for much more dynamic configuration, since Ansible doesn’t offer real scripting capabilities.
Anyway, the following playbook will configure a CloudStack zone, adding pod, cluster, hypervisors and storage.
Pre-reqs as follows:
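To give a flavour of what such a playbook drives under the hood, the CloudMonkey calls for a basic zone and pod look roughly like this. This is a sketch only – the zone name, IP addresses and parameter values are all made up for illustration, and a running CloudStack management server is assumed:

```shell
# Illustrative only: names and IPs below are invented for this sketch
cloudmonkey create zone name=Zone1 networktype=Advanced \
    dns1=8.8.8.8 internaldns1=10.0.0.10
cloudmonkey create pod name=Pod1 zoneid=<zone id from previous call> \
    gateway=10.0.0.1 netmask=255.255.255.0 \
    startip=10.0.0.100 endip=10.0.0.200
```

The playbook wraps calls of this shape for clusters, hosts and storage as well, capturing the IDs returned by each call for use in the next.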
Since Fusion doesn’t have the team functionality that Workstation has, I knocked up the following script to suspend the currently running Fusion VMs and write a new startup script for the same VMs.
Suspend current running VMs with:
# sh suspendvms.sh
This creates the file lastlabstart.sh, run this to start the same VMs:
# sh lastlabstart.sh
#!/bin/bash
# suspendvms.sh - suspend all running Fusion VMs and record how to restart them
echo "#!/bin/bash" > lastlabstart.sh
echo "# VMs suspended `date`" >> lastlabstart.sh
vmrun list | grep vmx | while read strVM; do
    echo "Suspending $strVM"
    vmrun suspend "$strVM"
    echo "vmrun start \"$strVM\" nogui" >> lastlabstart.sh
done
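To illustrate what the loop produces, here is the same generation logic run against a mocked-up `vmrun list` output (the VM paths are made up for this example; in real use `vmrun list` supplies them):

```shell
#!/bin/bash
# Mock of "vmrun list" output: a header line followed by .vmx paths
vmrun_list_mock() {
    printf 'Total running VMs: 2\n'
    printf '/labs/vm one.vmx\n/labs/vm2.vmx\n'
}
echo "#!/bin/bash" > lastlabstart.sh
vmrun_list_mock | grep vmx | while read strVM; do
    # each running VM gets a restart line in lastlabstart.sh
    echo "vmrun start \"$strVM\" nogui" >> lastlabstart.sh
done
cat lastlabstart.sh
```

Because each path is quoted in the generated `vmrun start` line, VM paths containing spaces (like `/labs/vm one.vmx` above) restart correctly.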
As with any Ansible playbook, the CloudStack playbook is fairly self-explanatory and self-documenting. In short, the following will install Apache CloudStack version 4.3 or 4.4 with all required components, as well as CloudMonkey for later configuration.
The playbook is written for CentOS base OS for all roles, with CloudStack using XenServer hypervisors and NFS storage.
The playbook relies on tags to separate the various tasks and roles; these are as follows:
Running the vCloud Director appliance in a lab environment requires a full vSphere / vCenter environment due to the prerequisite for a full OVF environment. Since the VCD appliance shouldn’t run in the provider data centre, this means spinning up a separate ESXi / vCenter purely for VCD (assuming VCNS etc. are running natively in your VMware Fusion / Workstation environment), which is a pain when you have limited lab resources at hand.
The workaround is to create the OVF answer file for the appliance. The following (this is for VCD 5.5) sets the network settings as well as using the internal Oracle database:
<?xml version="1.0" encoding="UTF-8"?>
<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:ve="http://www.vmware.com/schema/ovfenv"
             oe:id="">
  <PropertySection>
    <Property oe:key="guest-password" oe:value=""/>
    <Property oe:key="root-password" oe:value=""/>
    <Property oe:key="vami.DNS.vCloud_Director" oe:value="DNS_IP"/>
    <Property oe:key="vami.gateway.vCloud_Director" oe:value="DEFAULT_GATEWAY_IP"/>
    <Property oe:key="vami.ip0.vCloud_Director" oe:value="IP_ADDRESS_1"/>
    <Property oe:key="vami.ip1.vCloud_Director" oe:value="IP_ADDRESS_2"/>
    <Property oe:key="vami.netmask0.vCloud_Director" oe:value="IP1_NETMASK"/>
    <Property oe:key="vami.netmask1.vCloud_Director" oe:value="IP2_NETMASK"/>
    <Property oe:key="vcd.db.addr" oe:value=""/>
    <Property oe:key="vcd.db.mssql.instance" oe:value=""/>
    <Property oe:key="vcd.db.mssql.name" oe:value="MSSQLSERVER"/>
    <Property oe:key="vcd.db.oracle.sid" oe:value="orcl"/>
    <Property oe:key="vcd.db.password" oe:value=""/>
    <Property oe:key="vcd.db.port" oe:value=""/>
    <Property oe:key="vcd.db.type" oe:value="internal"/>
    <Property oe:key="vcd.db.user" oe:value=""/>
    <Property oe:key="vm.vmname" oe:value="vCloud_Director"/>
  </PropertySection>
  <ve:EthernetAdapterSection>
    <ve:Adapter ve:mac="00:50:56:aa:bb:cc" ve:network="Portgroup1" ve:unitNumber="7"/>
    <ve:Adapter ve:mac="00:50:56:aa:bb:dd" ve:network="Portgroup1" ve:unitNumber="8"/>
  </ve:EthernetAdapterSection>
</Environment>
Save the file as “ovf-env.xml” in an empty folder, then package the file in an ISO file. On OS X this can be done with:
hdiutil makehybrid -iso -joliet -o ovf-env.iso vcdovfxml/
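On Linux an equivalent ISO can be made with genisoimage, assuming the tool is installed (the folder name here matches the OS X example above):

```shell
# -J adds Joliet extensions, -r adds Rock Ridge; output matches the hdiutil result
genisoimage -J -r -o ovf-env.iso vcdovfxml/
```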
Import the VCD appliance OVF file as a new Fusion/Workstation VM, but add a CD-ROM device pointing to the ISO file. The VM will now boot without the “OVF environment not supported. Please deploy on compatible vCenter server” error.
The appliance can now be configured through the normal http://<IP of appliance> interface, where administrator password etc. is set.
The network setup interface on http://<IP of appliance>:5480 requires one more step. The appliance comes online with a *blank* root password (note that the root/Default0 password doesn’t work). The web interface will not accept a blank password, so from the console use the login function, log in with root / <blank>, and change the password with the usual Linux “passwd” command. Once this is done the web interface will accept the same password.
Error: “Polling for listener instance XE. Status ready.”
This error seems to happen if the appliance has limited CPU/RAM resources, i.e. you’ve cut it back to avoid overloading the lab environment. Letting the appliance boot once with its original settings allows it to boot OK.
Note: having tested this it seems 2.5GB RAM is the minimum for the VCD appliance, any less and the services simply won’t start up.