Category: DevOps and Automation

Categories: CloudStack, DevOps and Automation, VMware

Automating sVS to dVS migration on single NIC VMware ESXi hosts

We recently came across a situation where we needed to migrate VMware ESXi hosts from standard vSwitches (sVS) to distributed vSwitches (dVS) at build time, before putting the hosts into production. This is straightforward from vCenter using the normal migration wizards, but proved a little trickier from the command line:

  • On a single-NIC host the uplink migration has to be done at the same time as the port groups and service consoles / VMkernels.
  • This cannot be done from the ESXi side (happy to be proven wrong on this point) but has to be managed from vCenter.
  • In addition, this had to be handled by the Ansible automated ESXi / vCenter build playbooks, preferably without involving any intermediary build hosts.

The conclusion was that the migration can be handled with PowerCLI under PowerShell. Microsoft has recently released PowerShell for Linux, which means it can be installed on the Ansible build hosts; PowerCLI can then simply be added to the same host:
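As a sketch of the pre-requisite install on a CentOS/RHEL build host (the repo URL and package names are illustrative; check the Microsoft and VMware documentation for your distribution and current versions):

```shell
# Add the Microsoft package repository and install PowerShell
# (CentOS/RHEL 7 shown - adjust the repo URL for your distribution)
curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
sudo yum install -y powershell

# Install PowerCLI from the PowerShell Gallery
pwsh -Command "Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force"
```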

Once the pre-requisites are in place the following script will handle the migration. Input parameters are named:

  • vchost: IP address of VC host
  • vcuser: VC user account
  • vcpass: VC password
  • esxihosts: comma delimited list of ESXi hosts to migrate
  • dvswitchname: name of distributed vSwitch. The script assumes this has already been created through other means, e.g. the Ansible VMware modules.
param(
    [String] $vchost,
    [String] $vcuser,
    [String] $vcpass,
    [String] $esxihosts,
    [String] $dvswitchname
)

# Opt out of CEIP to stop the participation prompts
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false -Confirm:$false

# Ignore invalid certificates
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false

# Connect to vCenter
Write-Host "Connecting to VC host" $vchost
Connect-VIServer -Server $vchost -User $vcuser -Password $vcpass

# Split the comma-delimited host list into an array
$esxihostarray = $esxihosts -split ','
$dvswitch = Get-VDSwitch $dvswitchname
foreach ($esxihost in $esxihostarray) {
    # Add the ESXi host to the dvSwitch
    Write-Host "Adding" $esxihost "to" $dvswitchname
    Add-VDSwitchVMHost -VMHost $esxihost -VDSwitch $dvswitch
    $management_vmkernel = Get-VMHostNetworkAdapter -VMHost $esxihost -Name "vmk0"
    $management_vmkernel_portgroup = Get-VDPortgroup -Name "Management Network" -VDSwitch $dvswitch

    # Migrate the ESXi host uplink, VMkernel and port group to the dvSwitch in a single step
    Write-Host "Adding vmnic0 to" $dvswitchname
    $esxihostnic = Get-VMHost $esxihost | Get-VMHostNetworkAdapter -Physical -Name vmnic0
    Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $esxihostnic -DistributedSwitch $dvswitch -VMHostVirtualNic $management_vmkernel -VirtualNicPortgroup $management_vmkernel_portgroup -Confirm:$false
}
Disconnect-VIServer -Server $global:DefaultVIServers -Force -Confirm:$false


The script can be run either directly, or via a “local_action: shell” task from Ansible:

/usr/bin/pwsh esxi-dvs-mgmt.ps1 -vchost [VC host IP] -vcuser [VC user] -vcpass [VC password] -esxihosts "[ESXihost1,ESXihost2]" -dvswitchname [dvSwitch name]
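Wrapped in Ansible, this becomes something like the following task (the script path and variable names are placeholders for whatever your playbook uses):

```yaml
# Hypothetical task - adjust the path and variable names to your playbook
- name: Migrate ESXi hosts from sVS to dVS
  local_action: >
    shell /usr/bin/pwsh esxi-dvs-mgmt.ps1
    -vchost {{ vc_host }}
    -vcuser {{ vc_user }}
    -vcpass {{ vc_pass }}
    -esxihosts "{{ esxi_hosts | join(',') }}"
    -dvswitchname {{ dvswitch_name }}
```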
Categories: CloudStack, DevOps and Automation

Virtualisation / TechUG talk 22/Feb/17 – Configuration Management best practices

In February I was invited by Mike, Brendon and Gavin to do another talk at the TechUG / Virtualisation user group in Glasgow. Having previously done a talk about one of my favourite topics – Ansible – I decided to do more of a getting-started and best-practices talk this time, as well as showing what we do with our Trillian framework at ShapeBlue.

Slide deck is up on SlideShare:

Categories: CloudStack, DevOps and Automation, XenServer

Virtualisation user group talk 26/Feb – CloudStack, automated builds and Ansible

I recently did a talk on CloudStack / CloudPlatform, zero-touch VMware ESXi / Citrix XenServer builds and Ansible automation at the Glasgow Virtualisation User Group meeting. It was a great day, with lots of useful information from end users and vendors, as well as some interesting cloud and virtualisation discussions. Thanks to Mike, Brendon and Gavin for the invite – it was good to catch up.

Slide deck is up on SlideShare:

Categories: CloudStack, DevOps and Automation, XenServer

Cloudmonkey Ansible playbook

CloudMonkey is distributed with Apache CloudStack and allows for command-line configuration of CloudStack resources – i.e. configuration of zones, networks, pods and clusters, as well as adding hypervisors and primary and secondary storage.

Using an Ansible playbook to run CloudMonkey isn’t necessarily a good idea – writing a proper shell script with its own variable input allows for much more dynamic configuration, as Ansible doesn’t offer proper scripting capabilities.

Anyway, the following playbook will configure a CloudStack zone, adding pod, cluster, hypervisors and storage.

Pre-reqs as follows:
Read More
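To give a flavour of what the playbook wraps, the underlying CloudMonkey calls look something like this (all names, addresses and IDs below are illustrative placeholders, not a working configuration):

```shell
# Illustrative CloudMonkey usage - values are placeholders
cloudmonkey set display default
cloudmonkey create zone name=Zone1 networktype=Basic dns1=8.8.8.8 internaldns1=10.1.1.10
cloudmonkey create pod name=Pod1 zoneid=ZONE_ID gateway=10.1.1.1 netmask=255.255.255.0 startip=10.1.1.100 endip=10.1.1.200
```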

Categories: CloudStack, DevOps and Automation, XenServer

Apache CloudStack Ansible playbook

Basics

As with any Ansible playbook, the CloudStack playbook is fairly self-explanatory and self-documenting. In short, the following will install Apache CloudStack version 4.3 or 4.4 with all required components, as well as CloudMonkey for later configuration.

The playbook is written for a CentOS base OS for all roles, with CloudStack using XenServer hypervisors and NFS storage.

The playbook relies on tags to separate the various tasks and roles; these are as follows:
Read More
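Tag-driven runs then look something like this (the playbook filename, inventory and tag name below are placeholders – check the playbook for the actual tags):

```shell
# Run only the tasks carrying a single tag - names are illustrative
ansible-playbook -i hosts cloudstack.yml --tags "cloudstackmanager"
```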