Category: VMware

Scripts, notes, builds and tweaks.


Automating sVS to dVS migration on single NIC VMware ESXi hosts

We recently came across an issue where we needed to migrate VMware ESXi hosts from standard vSwitches to distributed vSwitches at build time, before putting the hosts into production. This is straightforward from vCenter using the normal migration wizards, but proved a little trickier from the command line:

  • On a single NIC host the uplink migration has to be done at the same time as port groups and service consoles / VMkernels.
  • This cannot be done from the ESXi side (happy to be proven wrong on this point) but has to be managed from VC.
  • In addition this had to be handled by Ansible automated ESXi / VC build playbooks, preferably without involving any other intermediary build hosts.

The conclusion was that the migration can be handled using PowerCLI under PowerShell. Microsoft have this year released PowerShell for Linux – which means it can be installed on the Ansible build hosts. PowerCLI can then be added to the same host:
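With pwsh in place on the build host, one way to add PowerCLI is from the PowerShell Gallery (this assumes the host can reach PSGallery; proxy or offline setups will need extra steps):

```powershell
# Install the PowerCLI module for the current user from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
```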

Once the pre-requisites are in place the following script will handle the migration. Input parameters are named:

  • vchost: IP address of VC host
  • vcuser: VC user account
  • vcpass: VC password
  • esxihosts: comma delimited list of ESXi hosts to migrate
  • dvswitchname: name of distributed vSwitch. The script assumes this has already been created through other means, e.g. the Ansible VMware modules.
param(
    [String] $vchost,
    [String] $vcuser,
    [String] $vcpass,
    [String] $esxihosts,
    [String] $dvswitchname
)

# Stop spam
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false

# Ignore certificates
Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

# VC connectivity
Write-Host "Connecting to VC host " $vchost
Connect-VIServer -Server $vchost -User $vcuser -Pass $vcpass

# Array of esxi hosts
$esxihostarray = $esxihosts -split ','
foreach ($esxihost in $esxihostarray) {
    $dvswitch = Get-VDSwitch $dvswitchname

    # Add ESXi host to dvSwitch
    Write-Host "Adding" $esxihost "to" $dvswitchname
    Add-VDSwitchVMHost -VMHost $esxihost -VDSwitch $dvswitch
    $management_vmkernel = Get-VMHostNetworkAdapter -VMHost $esxihost -Name "vmk0"
    $management_vmkernel_portgroup = Get-VDPortgroup -Name "Management Network" -VDSwitch $dvswitchname

    # Migrate ESXi host networking (uplink and VMkernel together) to the dvSwitch
    Write-Host "Adding vmnic0 to" $dvswitchname
    $esxihostnic = Get-VMHost $esxihost | Get-VMHostNetworkAdapter -Physical -Name vmnic0
    Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $esxihostnic -DistributedSwitch $dvswitch -VMHostVirtualNic $management_vmkernel -VirtualNicPortgroup $management_vmkernel_portgroup -Confirm:$false
}

Disconnect-VIServer -Server $global:DefaultVIServers -Force -Confirm:$false


The script can be run either directly, or via a “local_action shell” task from Ansible:

/usr/bin/pwsh esxi-dvs-mgmt.ps1 -vchost [VC host IP] -vcuser [VC user] -vcpass [VC password] -esxihosts "[ESXihost1,ESXihost2]" -dvswitchname [dvSwitch name]
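Wrapped as an Ansible task this might look like the following (the variable names are illustrative, not from the original playbooks):

```yaml
# Hypothetical task running the PowerCLI script on the Ansible build host itself
- name: Migrate ESXi hosts from sVS to dVS
  local_action: shell /usr/bin/pwsh esxi-dvs-mgmt.ps1
    -vchost "{{ vc_host }}" -vcuser "{{ vc_user }}" -vcpass "{{ vc_pass }}"
    -esxihosts "{{ esxi_hosts | join(',') }}" -dvswitchname "{{ dvs_name }}"
```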

VMware Fusion lab suspend/resume script

Since Fusion doesn’t have the team functionality that Workstation has, I knocked up the following to suspend the currently running Fusion VMs and write a new startup script for the same VMs.

Suspend the currently running VMs with:

# sh

This creates the startup file; run it to start the same VMs again:

# sh

IFS=$(echo -en "\n\b")
# Output startup script (the filename here is an assumption)
OUTFILE=startvms.sh
echo "#!/bin/bash" > "$OUTFILE"
echo "# VMs shut down $(date)" >> "$OUTFILE"
for strVM in $(vmrun list | grep vmx); do
	echo "Shutting down $strVM"
	vmrun suspend "$strVM"
	echo "vmrun start \"$strVM\" nogui" >> "$OUTFILE"
done
chmod +x "$OUTFILE"



Running vCloud Director 5.5 appliance in VMware Fusion

Running the vCloud Director appliance in a lab environment requires a full vSphere / vCenter environment due to the prerequisite for a full OVF environment. Considering VCD shouldn’t run in the provider data centre, this means spinning up a separate ESXi/vCenter purely for VCD (assuming VCNS etc. are running natively in your VMware Fusion / Workstation environment), which is a pain when you have limited lab resources at hand.

The workaround is to create the OVF answer file for the appliance. The following (this is for VCD 5.5) sets the network settings as well as using the internal Oracle database:

<?xml version="1.0" encoding="UTF-8"?>
<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:ve="http://www.vmware.com/schema/ovfenv"
             oe:id="">
   <PlatformSection>
      <Kind>VMware ESXi</Kind>
      <Vendor>VMware, Inc.</Vendor>
   </PlatformSection>
   <PropertySection>
      <Property oe:key="guest-password" oe:value=""/>
      <Property oe:key="root-password" oe:value=""/>
      <Property oe:key="vami.DNS.vCloud_Director" oe:value="DNS_IP"/>
      <Property oe:key="vami.gateway.vCloud_Director" oe:value="DEFAULT_GATEWAY_IP"/>
      <Property oe:key="vami.ip0.vCloud_Director" oe:value="IP_ADDRESS_1"/>
      <Property oe:key="vami.ip1.vCloud_Director" oe:value="IP_ADDRESS_2"/>
      <Property oe:key="vami.netmask0.vCloud_Director" oe:value="IP1_NETMASK"/>
      <Property oe:key="vami.netmask1.vCloud_Director" oe:value="IP2_NETMASK"/>
      <Property oe:key="vcd.db.addr" oe:value=""/>
      <Property oe:key="vcd.db.mssql.instance" oe:value=""/>
      <Property oe:key="" oe:value="MSSQLSERVER"/>
      <Property oe:key="" oe:value="orcl"/>
      <Property oe:key="vcd.db.password" oe:value=""/>
      <Property oe:key="vcd.db.port" oe:value=""/>
      <Property oe:key="vcd.db.type" oe:value="internal"/>
      <Property oe:key="vcd.db.user" oe:value=""/>
      <Property oe:key="vm.vmname" oe:value="vCloud_Director"/>
   </PropertySection>
   <ve:EthernetAdapterSection>
      <ve:Adapter ve:mac="00:50:56:aa:bb:cc" ve:network="Portgroup1" ve:unitNumber="7"/>
      <ve:Adapter ve:mac="00:50:56:aa:bb:dd" ve:network="Portgroup1" ve:unitNumber="8"/>
   </ve:EthernetAdapterSection>
</Environment>

Save the file as “ovf-env.xml” in an empty folder, then package the file in an ISO file. On OS X this can be done with:

hdiutil makehybrid -iso -joliet -o ovf-env.iso vcdovfxml/

Import the VCD appliance OVF file as a new Fusion/Workstation VM, but add a CD-ROM device pointing to the ISO file. The VM will now boot without the “OVF environment not supported. Please deploy on compatible vCenter server” error.


The appliance can now be configured through the normal http://<IP of appliance> interface, where administrator password etc. is set.

The network setup interface on http://<IP of appliance>:5480 requires one more step. The appliance will come online with a *blank* root password (note that the root/Default0 password doesn’t work). The web interface will not accept a blank password, so from the console use the login function, log in with root / <blank> and change the password with the usual Linux “passwd” command. Once this is done the web interface will accept the same password.

Error: “Polling for listener instance XE. Status ready.”

This error seems to happen if the appliance has limited CPU/RAM resources, i.e. you’ve cut it back to avoid overloading the lab environment. Letting the appliance boot once with its original settings resolves the issue on subsequent boots.

Note: having tested this it seems 2.5GB RAM is the minimum for the VCD appliance, any less and the services simply won’t start up. 


VMware ESXi 5.5 zero touch builds – part 3 – storage

A note on storage

I won’t spend any time on NFS storage – it’s relatively straightforward, using a VMkernel interface with appropriate uplinks and failover.

As hinted at in the previous post (interface vmnic override), iSCSI storage does however require additional configuration as per the ESXi storage guide (page 84). In short this boils down to a VMkernel interface having to be attached to a single vmnic in a one-to-one configuration to be compatible with iSCSI port binding.

This can be achieved in two different ways:

  • Two separate vSwitches with a single VMkernel interface each and a single vmnic each. In this scenario the two VMkernel interfaces need to be configured on separate subnets to ensure no connectivity issues. As most storage platforms are presented on the same subnet this is not normally a viable option.
  • A single vSwitch uplinked with two vmnics and configured with two VMkernel interfaces. The vmnic adapter binding is overridden for each VMkernel interface with one active and one unused vmnic. This is more easily achieved and is in most situations the natural choice.
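The second option can be sketched with esxcli as follows (the portgroup, vmk and vmhba names are assumptions; `RUN=echo` keeps this a dry run outside an ESXi shell, set `RUN=""` on a real host):

```shell
#!/bin/sh
# Dry-run wrapper: prints commands unless RUN is cleared on an ESXi host
RUN=${RUN:-echo}

# Override uplinks per VMkernel portgroup: one active vmnic each,
# leaving the other vmnic unused for that portgroup
$RUN esxcli network vswitch standard portgroup policy failover set -p "iSCSI-1" -a vmnic0
$RUN esxcli network vswitch standard portgroup policy failover set -p "iSCSI-2" -a vmnic1

# Bind both VMkernel ports to the software iSCSI adapter
$RUN esxcli iscsi networkportal add -A vmhba33 -n vmk1
$RUN esxcli iscsi networkportal add -A vmhba33 -n vmk2
```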

VMware ESXi 5.5 zero touch builds – part 2 – network configuration

Some notes on dynamic variables

Creating pointers / variable variables is required to loop through any number of vSwitches listed in the host configuration file. The easiest way is normally the "${!ReferenceVariable}" syntax, or arrays/associative arrays, but as mentioned in part 1 these simply don't work in the Busybox ash shell. The only way I've managed to get this working is "$(eval echo "\${ReferenceVariable}")" – this seems to work OK, and I'll stick with it until I come across something better.
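As a sketch of that idiom (the variable names here are stand-ins for the real host config file entries), looping over numbered variables in a plain POSIX/ash shell looks like:

```shell
#!/bin/sh
# Hypothetical per-host config variables, as the config file would define them
VSWITCH1_NAME="vSwitch0"
VSWITCH2_NAME="vSwitch1"

i=1
while [ "$i" -le 2 ]; do
    # Indirect reference: resolve $VSWITCH<i>_NAME without arrays or ${!var}
    name=$(eval echo "\${VSWITCH${i}_NAME}")
    echo "Configuring $name"
    i=$((i + 1))
done
```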


All fairly straightforward – loop through the vSwitch variables set in the host configuration file and configure accordingly. I’ve set these up as standard vSwitches with

  • Uplinks
  • Failover / failback
  • Switch notify
  • Loadbalancing
  • Number of ports
  • MTU
  • etc

but this can be expanded to include any config required.
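The per-vSwitch loop body boils down to a handful of esxcli calls; the following is a sketch with hard-coded values standing in for the config file variables (`RUN=echo` keeps it a dry run outside an ESXi shell):

```shell
#!/bin/sh
# Dry-run wrapper: prints commands unless RUN is cleared on an ESXi host
RUN=${RUN:-echo}

# Values that would come from the host config file (assumptions here)
VSWITCH="vSwitch1"
UPLINKS="vmnic2 vmnic3"
PORTS=256
MTU=9000

# Create the vSwitch with the configured port count
$RUN esxcli network vswitch standard add -v "$VSWITCH" -P "$PORTS"

# Attach each uplink
for nic in $UPLINKS; do
    $RUN esxcli network vswitch standard uplink add -v "$VSWITCH" -u "$nic"
done

# MTU
$RUN esxcli network vswitch standard set -v "$VSWITCH" -m "$MTU"

# Failover/failback, switch notify and load balancing in one call
$RUN esxcli network vswitch standard policy failover set -v "$VSWITCH" \
    -a vmnic2 -s vmnic3 -b true -n true -l portid
```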



VMware ESXi 5.5 zero touch builds – part 1 – basics


I won’t go into too much detail about the ESXi automated build process – VMware do a good job of that in the ESXi 5.5 installation guide, on top of this the default kickstart file /etc/vmware/weasel/ks.cfg is a good starting point.

Creating dynamic zero touch builds is possibly overkill for most SME environments, but can be useful in large-scale estates like service provider environments. Ideally these should tie into proper CMDB / inventory databases, source control, continuous integration environments, etc., but this all depends on how much time/manpower/money you have available.


All files – including apache license – can be found on

Host specific PXE boot process

The standard VMware build PXE menu configuration is roughly as follows:
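A pxelinux menu entry for a scripted ESXi install takes roughly this shape (the paths, labels and kickstart URL below are assumptions, not the original configuration):

```
DEFAULT menu.c32
MENU TITLE ESXi build menu

LABEL esxi55
  MENU LABEL ESXi 5.5 scripted install
  KERNEL esxi55/mboot.c32
  APPEND -c esxi55/boot.cfg ks=http://<build server>/configs/<hostname>.cfg
```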



Host config file export VBA script

Quick follow-up on the previous build script posts. In short, the following will write host-specific config files based on spreadsheet data, named after the hostname in column 1 – in the following example, “xs62cn1.cfg”.

So, in short, a spreadsheet with the following entries:


will result in a new config file xs62cn1.cfg with the following entries:
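As an illustration (these headers and values are hypothetical, not the original spreadsheet), each row produces one file of HEADER="value"; lines:

```
HOSTNAME="xs62cn1.lab.local";
VSWITCH1_NAME="vSwitch0";
VSWITCH1_UPLINKS="vmnic0 vmnic1";
```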


VBA script:

Sub CreateHostConfigFiles()
'Exports spreadsheet data to host config files

Dim intCol, intRow, arrHeaders(), intMaxcols, strOutputbuffer, objFSO, strFilename, objFile
    intRow = 2
    intCol = 1
    Set objFSO = CreateObject("Scripting.FileSystemObject")
    'Find headings in row 1
    ReDim arrHeaders(0)
    While Cells(1, intCol) <> ""
        ReDim Preserve arrHeaders(UBound(arrHeaders) + 1)
        arrHeaders(intCol - 1) = Cells(1, intCol)
        intCol = intCol + 1
    Wend
    intCol = intCol - 1
    intMaxcols = intCol
    'Write one config file per host row, named after the hostname in column 1
    While Cells(intRow, 1) <> ""
        strFilename = Left(Cells(intRow, 1), InStr(Cells(intRow, 1), ".") - 1) & ".cfg"
        Set objFile = objFSO.CreateTextFile(strFilename, True)
        For intCol = 1 To intMaxcols
            strOutputbuffer = strOutputbuffer & arrHeaders(intCol - 1) & "=" & Chr(34) & Cells(intRow, intCol) & Chr(34) & ";" & Chr(10)
        Next intCol
        objFile.Write strOutputbuffer
        objFile.Close
        strOutputbuffer = ""
        intRow = intRow + 1
    Wend
    MsgBox "All done"
End Sub