CloudStack

KVM networking

This blog post was originally posted on the ShapeBlue website: http://www.shapeblue.com/networking-kvm-for-cloudstack/

Introduction

KVM hypervisor networking for CloudStack can sometimes be a challenge, considering KVM doesn't quite have the mature guest networking model found in the likes of VMware vSphere and Citrix XenServer. In this blog post we look at the options for networking KVM hosts using bridges and VLANs, and dive a bit deeper into the configuration for these options.

Read More
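By way of illustration (this is not taken from the post itself), a VLAN-tagged guest bridge on a KVM host can be built along these lines; the interface name, VLAN ID and bridge name are placeholders, not values from the post:

# Example only: eth0, VLAN 100 and cloudbr100 are placeholder names
ip link add link eth0 name eth0.100 type vlan id 100   # tag VLAN 100 on the physical NIC
ip link add name cloudbr100 type bridge                # bridge that guest NICs will attach to
ip link set eth0.100 master cloudbr100                 # enslave the VLAN interface to the bridge
ip link set eth0.100 up
ip link set cloudbr100 up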

CloudStack, XenServer

Recovery of VMs to new CloudStack instance

This blog post was originally posted on the ShapeBlue website: http://www.shapeblue.com/recovery-of-vms-to-new-cloudstack-instance

We recently came across a very unusual issue where a client had a major security breach on their network. Amongst lots of other damage, their CloudStack infrastructure was maliciously damaged beyond recovery. Luckily the hackers hadn't managed to damage the backend XenServer hypervisors, so they were quite happily still running user VMs and Virtual Routers, just not under CloudStack control.

Read More
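As a rough illustration of the starting point for that kind of recovery (not taken from the post itself), the surviving VMs can be inventoried directly on each XenServer host, for example:

# List VMs still running on a XenServer host outside CloudStack control
# (the params selection is just an example)
xe vm-list power-state=running params=name-label,uuid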

CloudStack, DevOps and Automation, XenServer

Virtualisation user group talk 26/Feb – CloudStack, automated builds and Ansible

I recently did a talk on CloudStack / CloudPlatform, zero-touch VMware ESXi / Citrix XenServer builds and Ansible automation at the Glasgow Virtualisation User Group meeting. Great day, lots of useful information from end users and vendors, as well as some interesting cloud and virtualisation discussions. Thanks to Mike, Brendon and Gavin for the invite, it was good to catch up.

The slide deck is up on Slideshare.

CloudStack, DevOps and Automation, XenServer

CloudMonkey Ansible playbook

CloudMonkey is distributed with Apache CloudStack and allows for command-line configuration of CloudStack resources, i.e. configuration of zones, networks, pods and clusters, as well as adding hypervisors and primary and secondary storage.
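For a flavour of what that configuration looks like, the calls below simply follow the CloudStack API (createZone, createPod, addCluster, addHost, createStoragePool, addSecondaryStorage). Treat it as a rough sketch: every name, UUID and IP address is a placeholder rather than a value from the playbook.

# Placeholder values throughout; the UUIDs come back from the preceding calls
cloudmonkey create zone name=Zone1 networktype=Basic dns1=8.8.8.8 internaldns1=192.168.1.1
cloudmonkey create pod name=Pod1 zoneid=<zone-uuid> gateway=192.168.1.1 netmask=255.255.255.0 startip=192.168.1.50 endip=192.168.1.100
cloudmonkey add cluster clustername=Cluster1 clustertype=CloudManaged hypervisor=XenServer zoneid=<zone-uuid> podid=<pod-uuid>
cloudmonkey add host url=http://192.168.1.10 username=root password=secret hypervisor=XenServer zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid>
cloudmonkey create storagepool name=Primary1 url=nfs://192.168.1.20/export/primary zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid>
cloudmonkey add secondarystorage url=nfs://192.168.1.20/export/secondary zoneid=<zone-uuid>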

Using an Ansible playbook to run CloudMonkey isn't necessarily a good idea; writing a proper shell script with its own variable input will allow for much more dynamic configuration, as Ansible doesn't offer proper scripting capabilities after all.

Anyway, the following playbook will configure a CloudStack zone, adding pod, cluster, hypervisors and storage.

Pre-reqs as follows:
Read More

VMware

VMware Fusion lab suspend/resume script

Since Fusion doesn't have the team functionality that Workstation has, I knocked up the following script to suspend the currently running Fusion VMs and write a new startup script for the same VMs.

Suspend the currently running VMs with:

# sh suspendvms.sh

This creates the file lastlabstart.sh; run this to start the same VMs:

# sh lastlabstart.sh

suspendvms.sh

#!/bin/bash
# Suspend all running Fusion VMs and write lastlabstart.sh to start the same set again
strOrigIFS=$IFS
IFS=$'\n'	# split vmrun output on newlines only, as .vmx paths contain spaces
echo "#!/bin/bash" > lastlabstart.sh
echo "# VMs suspended $(date)" >> lastlabstart.sh
for strVM in $(vmrun list | grep vmx)
do
	echo "Suspending $strVM"
	vmrun suspend "$strVM"
	echo "vmrun start \"$strVM\" nogui" >> lastlabstart.sh
done
IFS=$strOrigIFS

 

CloudStack, DevOps and Automation, XenServer

Apache CloudStack Ansible playbook

Basics

As with any Ansible playbook, the CloudStack playbook is fairly self-explanatory and self-documenting. In short, the following will install Apache CloudStack version 4.3 or 4.4 with all required components, as well as CloudMonkey for later configuration.

The playbook is written for a CentOS base OS for all roles, with CloudStack using XenServer hypervisors and NFS storage.

The playbook relies on tags to separate the various tasks and roles; these are as follows:
Read More
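Whatever the actual tags turn out to be, running a tagged playbook generally looks like the following; the inventory, playbook and tag names here are purely illustrative, not the playbook's own:

# Hypothetical file and tag names, shown only to illustrate tag-based runs
ansible-playbook -i hosts cloudstack.yml --tags "management_server"
ansible-playbook -i hosts cloudstack.yml --skip-tags "cloudmonkey"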

VMware

Running vCloud Director 5.5 appliance in VMware Fusion

Running the vCloud Director appliance in a lab environment requires a full vSphere / vCenter environment, since the appliance expects a populated OVF environment at deployment. Considering VCD shouldn't run in the provider data centre, this means spinning up a separate ESXi/vCenter purely for VCD (assuming VCNS etc. are running natively in your VMware Fusion / Workstation environment), which is a pain when you have limited lab resources at hand.

The workaround is to create the OVF answer file for the appliance yourself. The following (for VCD 5.5) sets the network settings and uses the internal Oracle database:

<?xml version="1.0" encoding="UTF-8"?>
<Environment
     xmlns="http://schemas.dmtf.org/ovf/environment/1"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
     xmlns:ve="http://www.vmware.com/schema/ovfenv"
     oe:id=""
     ve:vCenterId="vm-100">
   <PlatformSection>
      <Kind>VMware ESXi</Kind>
      <Version>5.5.0</Version>
      <Vendor>VMware, Inc.</Vendor>
      <Locale>en</Locale>
   </PlatformSection>
   <PropertySection>
         <Property oe:key="guest-password" oe:value=""/>
         <Property oe:key="root-password" oe:value=""/>
         <Property oe:key="vami.DNS.vCloud_Director" oe:value="DNS_IP"/>
         <Property oe:key="vami.gateway.vCloud_Director" oe:value="DEFAULT_GATEWAY_IP"/>
         <Property oe:key="vami.ip0.vCloud_Director" oe:value="IP_ADDRESS_1"/>
         <Property oe:key="vami.ip1.vCloud_Director" oe:value="IP_ADDRESS_2"/>
         <Property oe:key="vami.netmask0.vCloud_Director" oe:value="IP1_NETMASK"/>
         <Property oe:key="vami.netmask1.vCloud_Director" oe:value="IP2_NETMASK"/>
         <Property oe:key="vcd.db.addr" oe:value=""/>
         <Property oe:key="vcd.db.mssql.instance" oe:value=""/>
         <Property oe:key="vcd.db.mssql.name" oe:value="MSSQLSERVER"/>
         <Property oe:key="vcd.db.oracle.sid" oe:value="orcl"/>
         <Property oe:key="vcd.db.password" oe:value=""/>
         <Property oe:key="vcd.db.port" oe:value=""/>
         <Property oe:key="vcd.db.type" oe:value="internal"/>
         <Property oe:key="vcd.db.user" oe:value=""/>
         <Property oe:key="vm.vmname" oe:value="vCloud_Director"/>
   </PropertySection>
   <ve:EthernetAdapterSection>
      <ve:Adapter ve:mac="00:50:56:aa:bb:cc" ve:network="Portgroup1" ve:unitNumber="7"/>
      <ve:Adapter ve:mac="00:50:56:aa:bb:dd" ve:network="Portgroup1" ve:unitNumber="8"/>
   </ve:EthernetAdapterSection>
</Environment>

Save the file as "ovf-env.xml" in an empty folder, then package that folder into an ISO file. On OS X this can be done with:

hdiutil makehybrid -iso -joliet -o ovf-env.iso vcdovfxml/

Import the VCD appliance OVF file as a new Fusion/Workstation VM, but add a CD-ROM device pointing to the ISO file. The VM will now boot without the "OVF environment not supported. Please deploy on compatible vCenter server" error.

Configuration

The appliance can now be configured through the normal http://<IP of appliance> interface, where the administrator password etc. is set.

The network setup interface on http://<IP of appliance>:5480 requires one more step. The appliance will come online with a *blank* root password (note that the root/Default0 password doesn't work). The web interface will not accept a blank password, so from the console use the login function, then log in with root / <blank> and change the password with the usual Linux "passwd" command. Once this is done the web interface will accept the same password.
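In other words, the console step boils down to the following (the exact console menu wording may differ slightly):

# On the appliance console: choose the login option, log in as root with an empty
# password, then set a real password so the :5480 web UI will accept it
passwd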

Error: “Polling for listener instance XE. Status ready.”

This error seems to happen if the appliance has limited CPU/RAM resources, i.e. you've cut it back so as not to overload the lab environment. Allowing the appliance to boot once with its original settings lets it come up OK.

Note: having tested this, it seems 2.5GB RAM is the minimum for the VCD appliance; any less and the services simply won't start up.