VMware I/O Analyzer, available from the VMware Flings website, is a tool that launches orchestrated tests against a storage solution. It can be used as a single appliance in which both the worker process and the analytics run. Additional appliances can be deployed to act as Worker VMs. The Analyzer VM launches IOmeter tests on the Worker VMs and collects the data after test completion. All configuration is done from a web interface on the Analyzer VM.
This post describes how I deployed VMware I/O Analyzer and how I worked up to a test with maximized I/O. The purpose was to establish a baseline for the maximum IOPS of a vSAN datastore. The first tests, launched with a single IOmeter inside a virtual machine on the vSAN datastore, generated roughly 300 IOPS. In the end, 18 Worker VMs with 8 disks each on a 6-host vSAN cluster generated 340K+ IOPS.
Hardware used
6 hosts
1 disk group per host: 1 x 800 GB SSD, 5 x 1.2 TB 10K SAS drives
vSphere 5.5 U3
General
The VM OS disks should not be placed on the vSAN datastore you want to test; otherwise the IOPS they generate will become part of your report. To keep the Analyzer VM's IOPS out of the performance graphs, put it on a different datastore.
Deploy one Analyzer VM, then deploy a Worker VM per ESXi host. You should end up with as many Worker VMs as you have hosts in your cluster.
I changed the IP of all VMs to static because no DHCP server was available in the subnet; this also meant that no DNS entries were required. You will at least want to give the Analyzer VM a static IP, since you manage the solution from a web browser. The Worker VMs can be left as-is if a DHCP server is available, but then you will need DNS entries and will have to adapt the configuration used here. Because you will be doing a lot of work on the Worker VMs, set them on static IPs or create DNS aliases. I prefer static IPs because they add no complexity due to name resolving.
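A minimal sketch of the static IP configuration itself, assuming the appliance is SUSE-based and the interface is eth0 (the address is an example, not from the original post):

Shell
vi /etc/sysconfig/network/ifcfg-eth0

with contents along these lines:

Shell
BOOTPROTO='static'
IPADDR='192.168.1.21/24'
STARTMODE='auto'

The default gateway goes in the routes file: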
Shell
vi /etc/sysconfig/network/routes
(The file will be created if it doesn't exist.)
Add / Change the following line:
Shell
default 192.168.1.1 - -
(default, a space, the gateway IP, a space, a hyphen, a space, a hyphen)
Save and close the file (:wq)
Restart the network service:
Shell
service network restart
Check if the VM is reachable.
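For example, from another machine on the subnet, using the address you just assigned:

Shell
ping -c 3 192.168.1.21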
Now shut down the VM.
Deploying the Worker VMs:
Clone the Analyzer VM.
Add a hard disk of 1 GB.
Choose advanced and put the 1 GB disk on the vSAN datastore.
I needed to configure static IPs on the Worker VMs, so I had to start each VM and change its IP address. After changing the network settings, shut down the VM and create the next clone. If you don't change the IPs, you will end up with duplicate IPs.
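If you prefer to script the cloning, a minimal PowerCLI sketch (the cluster, source VM, and datastore names below are placeholders, not names from the original setup):

PowerShell
# Clone one Worker VM per ESXi host in the cluster
$hosts = Get-Cluster "vSAN-Cluster" | Get-VMHost
$i = 1
ForEach ($esx in $hosts) {
    New-VM -Name ("Worker0" + $i) -VM "IOAnalyzer-Worker-Source" -VMHost $esx -Datastore "OS-Datastore"
    $i++
}

You would still boot each clone afterwards to assign its static IP, as described above.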
Ease of access configuration
Two ease-of-access configurations were applied: the first enables easy copying from the Analyzer VM to the Worker VMs; the second is needed because all appliances must be logged on for the VMware I/O Analyzer solution to work. All commands are executed on the Analyzer VM and then copied to the Worker VMs.
Set up SSH key-based authentication
Generate a key pair
Shell
ssh-keygen
(with an empty passphrase)
ssh-copy-id copies your public key to the target machine:
Shell
ssh-copy-id -i id_rsa.pub root@192.168.1.21
ssh-copy-id -i id_rsa.pub root@192.168.1.22
ssh-copy-id -i id_rsa.pub root@192.168.1.23
ssh-copy-id -i id_rsa.pub root@192.168.1.24
ssh-copy-id -i id_rsa.pub root@192.168.1.25
ssh-copy-id -i id_rsa.pub root@192.168.1.26
You will be prompted for the destination's root password for each of the lines above.
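To verify that key-based login works, a quick check (this should return the Worker VM's hostname without a password prompt):

Shell
ssh root@192.168.1.21 hostname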
BE AWARE: this has a security downside. If the root account on the Analyzer VM is compromised, all Worker VMs should be considered compromised too.
Autologon
Change autologon="" to autologon="root" in the display manager file (/etc/sysconfig/displaymanager).
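A minimal sketch with sed, assuming the key in the appliance's file is literally autologon (on stock SLES the key is DISPLAYMANAGER_AUTOLOGIN, so adjust as needed):

Shell
sed -i 's/autologon=""/autologon="root"/' /etc/sysconfig/displaymanager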
TIP: Create affinity rules in vCenter to keep the Worker VMs on dedicated hosts; otherwise the configuration on the VMware I/O Analyzer dashboard will quickly become outdated. The consequence is that certain Worker VMs will not launch their IOmeter profiles and the reports will therefore be incorrect.
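One way to script this is a VM-VM anti-affinity rule, so DRS keeps the Worker VMs on separate hosts (the cluster and VM names below are placeholders):

PowerShell
# Anti-affinity rule: DRS will keep these VMs apart from each other
New-DrsRule -Cluster (Get-Cluster "vSAN-Cluster") -Name "Separate-IOA-Workers" -KeepTogether:$false -VM (Get-VM "Worker*")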
Configuration
Prerequisites
Enable the SSH service on the ESXi hosts via the vSphere (Web) Client or through PowerShell.
The PowerShell way (be sure to filter your hosts if needed): there is a dedicated post about starting and stopping ESXi services through PowerShell here.
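A PowerCLI sketch (filter the Get-VMHost call if you only want to target the cluster's hosts):

PowerShell
# Start the SSH (TSM-SSH) service on every connected ESXi host
Get-VMHost | ForEach-Object {
    Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService
}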
I found that watching the console of the Worker VMs is useful for troubleshooting: you can see the IOmeter tests being launched. This was very helpful while creating the IOmeter profile, because you don't need to wait until a test has finished to see that it has failed. Stopping IOmeter tests from the console also gives you the opportunity to look at, edit, and save the launched profile.
In a vSAN project the VMware Compatibility Guide mentioned a different driver version for the RAID controller than the one that was installed, so I tried to install a driver update for the RAID controller through the CLI. This did not work out as expected because the /altbootbank was in a corrupted state. There were two ways to go: either reinstall from scratch or try to rebuild the /altbootbank from the /bootbank contents. This was not a production server, so I had the freedom to take a more experimental approach, and therefore I chose the unsupported, not recommended route of rebuilding the /altbootbank from the /bootbank contents.
I ran an esxcli command to install the driver.
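A typical install of a driver offline bundle looks like this (the bundle path below is a placeholder, not the actual bundle used at the time):

Shell
esxcli software vib install -d /vmfs/volumes/datastore1/raid-driver-offline-bundle.zip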
The VMware KB walks through the steps to solve this, but in this case they didn't work. The better solution is to repair or reinstall, but that is a time-consuming task. Since the steps in the KB didn't solve it, I tried to delete the corrupt state directory with:
Shell
rm /altbootbank/state.5824665/

and then recursively:

Shell
rm -rf /altbootbank/state.5824665/
The ghost file/directory would not delete: the first command returned 'This is not a file', the second 'This is not a directory'. I repeated the same commands after a reboot, with the same results. As the server was still booting fine, I knew the /bootbank was still OK, so I wanted to replace the /altbootbank with the contents of the /bootbank partition.
THE FOLLOWING IS NEITHER RECOMMENDED NOR SUPPORTED! DO NOT EXECUTE IT IN A PRODUCTION ENVIRONMENT!
Identify the naaID and partition number of the /altbootbank:
Shell
vmkfstools -Ph /altbootbank
Scratch the partition by recreating the file system:
Shell
vmkfstools -C vfat /dev/disks/naaID:partitionNumber
Remove the /altbootbank folder:
Shell
rm -rf /altbootbank
Create a symlink from /altbootbank to the newly created vFAT volume:
Shell
ln -s /vmfs/volumes/volumeGUID /altbootbank
Copy all the contents from /bootbank to /altbootbank:
Shell
cp /bootbank/* /altbootbank
Set bootstate=3 in /altbootbank/boot.cfg:
Shell
vi /altbootbank/boot.cfg
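Alternatively, a sed one-liner should do the same edit (assuming ESXi's busybox sed, which supports in-place edits):

Shell
sed -i 's/^bootstate=.*/bootstate=3/' /altbootbank/boot.cfg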
Run the /sbin/autobackup.sh script to persist the changes:
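Shell
/sbin/autobackup.sh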
These are PowerCLI goodies I use on a regular basis, collected here so I can find them easily. Some I wrote myself, some are copied from other sites; if I didn't reference the source, I no longer know where I found it.
While executing the NetApp MetroCluster test plan, the syslog service stopped logging to the presented syslog datastore. To restart the logging, reload the syslog service on all impacted hosts. The following command reloads the syslog service on all hosts in the connected vCenters; check $global:DefaultVIServers to see which vCenters are connected.
PowerShell
$server_list = Get-VMHost
ForEach ($srv in $server_list)
{
    $esxcli = Get-EsxCli -VMHost $srv
    $esxcli.system.syslog.reload()
}
Speed-up the initialization of PowerCLI
This needs to be done for each registered version of PowerCLI; this one worked for me on Windows Server 2012 R2.
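A common way to achieve this is to precompile the PowerCLI assemblies with ngen.exe; a minimal sketch, assuming an elevated PowerShell session with PowerCLI already loaded:

PowerShell
# Point an alias at the ngen.exe that matches the current .NET runtime
Set-Alias ngen (Join-Path ([System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()) "ngen.exe")
# Precompile every assembly loaded in this session (skip in-memory/dynamic ones)
[AppDomain]::CurrentDomain.GetAssemblies() |
    Where-Object { $_.Location } |
    ForEach-Object { ngen install $_.Location }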
Apparently there were some errors (curly brackets missing or in the wrong place) in the previously posted power management policy code. It also ran several times per host, because there was a Get-VMHost call inside the ForEach iteration: with three hosts it would run three times per host. The updated and optimized code:
PowerShell
$VMHosts = Get-VMHost
# For each host, show the power management policy setting
ForEach ($entry in $VMHosts) {
    # List the power management policy of this ESXi host; the loop body is a
    # sketch that assumes the vSphere API path Config.PowerSystemInfo.CurrentPolicy
    $entry | Select-Object Name,
        @{N = "PowerPolicy"; E = { $_.ExtensionData.Config.PowerSystemInfo.CurrentPolicy.ShortName }}
}