Cross vCenter vMotion Utility

VMware

Whilst upgrading the homelab I also decided to rebuild from scratch. There were some challenges to overcome because I have running VMs that I don’t want to shut down while migrating.

My current homelab setup and the setup I’m going to are documented here (work in progress). Basically it comes down to:

  • Original setup: three hosts backed with iSCSI storage for running the VMs
  • Temporary setup:
    • New vCenter with two of the three hosts configured for vSAN with connection to the iSCSI datastores
    • Old vCenter with one remaining host running all of the VMs
  • Destination setup: new vCenter with vSAN datastore

To migrate the virtual machines from the old environment (from the last remaining host to the two new hosts) I decided to take a look at the ‘Cross vCenter vMotion Utility’. There is not a lot of documentation available at first sight, but it is straightforward to set up and configure, although I did find some things that are worth noting.

Step 1: Running the jar

To start the Cross vCenter vMotion Utility one must run a jar file: ‘java -jar xvm-2.6.jar’.

I am running Linux (Pop!_OS 18.04) as my OS, with Java 8 and 11 installed and version 11 as the default. Version 11 is not listed on the Fling site as supported (Java Runtime Environment 1.8-10: see requirements). Running with version 11 (sudo java -jar xvm-2.6.jar) starts the local website on port 8080 (http://localhost:8080) but does not report back on the CLI.
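
Since only JRE 8-10 is listed as supported, one option on an Ubuntu-based system such as Pop!_OS is to start the utility with the Java 8 binary explicitly instead of the default Java 11, and to capture the output in a log. This is just a sketch; the exact Java 8 path depends on what update-alternatives reports on your machine.

# list the installed java binaries and note the Java 8 path
update-alternatives --list java
# start the utility with the Java 8 binary explicitly and keep a log of the output
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -jar xvm-2.6.jar > xvm.log 2>&1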

Under the assumption that the Java application had started and failed right away, I decided to run it on my Windows box, which has Java Runtime Environment 8 installed. The last line of feedback, ‘Initialized controller with empty state’, was the same as on my Linux machine. Navigating to localhost:8080 showed the Cross vCenter vMotion Utility web interface. I could now configure the application and run migrations.

Only later, when I closed the running instance on my Linux box and restarted it, did it show output on the CLI indicating that the application had started successfully.

ps -df | grep -i java
kill -HUP 9159

Output after restart:

Step 2: Configuration

  • Register connections
    1. Source vCenter
    2. Destination vCenter

Step 3: Migration

  • Add migrations
    1. Source Site: source vCenter
    2. Target Site: destination vCenter
    3. Source Datacenter
    4. Virtual Machine(s): Select one or more virtual machines
    5. Placement Target: Cluster or Host
    6. Target Datastore
    7. Network Mapping(s): the utility will detect the source networks for all selected virtual machines and display a selection field for the target network

Issues

Storage vMotion?

Storage vMotion does not seem to be supported. I tried to svMotion my machines from their iSCSI-based datastores to the newly created vSAN datastore but it failed.

Target Datastore: Shared datastore (same as source)

Choosing ‘Shared datastore (same as source)’ as Target Datastore fails and throws the following error:

I added the destination host and tried again but it also failed with several issues:

  • not all destination networks were listed, only a subset, although all of them were added to the distributed vSwitch
  • a matching datastore was not found on the destination host

I could migrate to the new environment but had to select a destination datastore. This was not much of a problem in my environment because the end goal was to get the virtual machines onto the vSAN datastore anyway.

Now, after migrating most of the virtual machines, only two types of virtual machines were left to migrate: the vCenter VMs and the firewall VMs. Keeping them on the old setup a little longer felt like I could still take a step back if needed. The old vCenter is not needed anymore; the new vCenter and the firewall VMs are, and once those are migrated I can break down the last part of the old setup. The last host will be reset to default settings via the DCUI, after which it can be added to the vSAN cluster and I can complete the vSAN cluster setup. A tmp_vSAN_policy with no redundancy is not the way you (or I) want to run an environment, even if it is a lab environment.

Conclusion

I could not migrate from the old environment to the new environment while also doing a Storage vMotion; I needed to do it in steps.

Nevertheless, I’m happy to have used the Cross vCenter vMotion Utility. It saved me a lot of work and required little setup and configuration. I didn’t need to change anything in the setup of my old or my new environment.

Reconfigure diagnostic partition using Get-EsxCli -V2

VMware

The following PowerShell snippet unconfigures the diagnostic coredump partition using the esxcli version 2 cmdlet (Get-EsxCli -V2). The second part reconfigures the diagnostic partition with the ‘smart’ option so that an accessible partition is chosen.

If you want to configure a new diagnostic partition, you will find the necessary information in the following VMware knowledge base article: Configuring a diagnostic coredump partition on an ESXi 5.x/6.x host (2004299). There are additional steps to supply the partition ID.
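
For reference, supplying a specific partition is sketched below with the native esxcli commands from that KB, run directly on the host; the device name is just a placeholder, so pick one from the list output. The PowerShell equivalent would pass the same values through the CreateArgs() object.

# list partitions that are usable as a diagnostic partition
esxcli system coredump partition list
# point the coredump at a specific partition (placeholder device name) and enable it
esxcli system coredump partition set --partition="naa.xxxxxxxxxxxxxxxx:7"
esxcli system coredump partition set --enable true

The PowerShell snippet for the unconfigure/reconfigure flow with the ‘smart’ option follows: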

$srv = Get-VMHost ESXiHost
$esxcli = Get-EsxCli -VMHost $srv -V2
$arg = $esxcli.system.coredump.partition.set.CreateArgs()
$arg.unconfigure = "true"
$esxcli.system.coredump.partition.set.Invoke($arg)
$arg = $esxcli.system.coredump.partition.set.CreateArgs()
$arg.unconfigure = "false"
$arg.enable = "true"
$arg.smart = "true"
$esxcli.system.coredump.partition.set.Invoke($arg)

First we get the ESXi host object and store it in the variable $srv:

$srv = Get-VMHost ESXiHost

Then we create an esxcli object, $esxcli, for that host using the variable $srv we created previously:

$esxcli = Get-EsxCli -VMHost $srv -V2

Now we create a variable $arg to store the arguments we will provide later:

$arg = $esxcli.system.coredump.partition.set.CreateArgs()

Setting the $arg property ‘unconfigure’ to true will deactivate the diagnostic partition:

$arg.unconfigure = "true"

The Invoke method executes the command on the ESXi host. After execution the diagnostic partition is deactivated:

$esxcli.system.coredump.partition.set.Invoke($arg)

The second part starts with creating a new set of arguments:

$arg = $esxcli.system.coredump.partition.set.CreateArgs()

Set ‘unconfigure’ back to false, since we deactivated the partition before:

$arg.unconfigure = "false"

Enable the coredump partition:

$arg.enable = "true"

The ‘smart’ property will try to use an accessible partition:

$arg.smart = "true"

The final Invoke call configures the diagnostic partition using the supplied arguments:

$esxcli.system.coredump.partition.set.Invoke($arg)
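
To double-check the result, the native esxcli equivalents documented in KB 2004299 can be run in an SSH session on the host; the final get shows the active and configured diagnostic partition. This is just the CLI view of what the V2 cmdlet does above.

# deactivate the diagnostic partition
esxcli system coredump partition set --unconfigure
# reconfigure it, letting the host pick an accessible partition
esxcli system coredump partition set --enable true --smart
# verify the active and configured coredump partition
esxcli system coredump partition get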

Configuring Tesla M60 cards for NVIDIA GRID vGPU

VMware

There are a couple of steps which need to be taken to configure the Tesla M60 cards with NVIDIA GRID VGPU in a vSphere / Horizon environment. I have listed them here quick and dirty. They are an extract of the NVIDIA Virtual GPU Software User Guide.

  • On the host(s):
    • Install the vib
      • esxcli software vib install -v directory/NVIDIA-vGPUVMware_ESXi_6.0_Host_Driver_390.72-1OEM.600.0.0.2159203.vib
    • Reboot the host(s)
    • Check if the module is loaded
      • vmkload_mod -l | grep nvidia
    • Run the nvidia-smi command to verify the correct communication with the device
    • Configuring Suspend and Resume for VMware vSphere
      • esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RMEnableVgpuMigration=1"
    • Reboot the host
    • Confirm that suspend and resume is configured
      • dmesg | grep NVRM
    • Check that the default graphics type is set to Shared Direct (a CLI check is sketched after this list)
    • If the graphics type was not set to Shared Direct, execute the following commands to stop and start the xorg and nv-hostengine services
      • /etc/init.d/xorg stop
      • nv-hostengine -t
      • nv-hostengine -d
      • /etc/init.d/xorg start
  • On the VM / Parent VM:
    • Configure the VM. Beware that once the vGPU is configured, the console of the VM will not be visible/accessible through the vSphere Client, so an alternate access method should already be in place
    • Edit the VM configuration to add a shared pci device, verify that NVIDIA GRID vGPU is selected
    • Choose the vGPU profile
      more info on the profiles can be found here under section ‘1.4.1 Virtual GPU Types’: https://docs.nvidia.com/grid/6.0/grid-vgpu-user-guide/index.html
    • Reserve all guest memory
  • On the Horizon pool:
    • Configure the pool to use the NVIDIA GRID vGPU as 3D Renderer
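
As referenced in the list above, a quick way to verify the configuration from the ESXi shell is sketched below. The esxcli graphics namespace is only available on ESXi 6.5 and later, and Shared Direct shows up there as SharedPassthru, so treat this as a rough check rather than the official procedure.

# confirm the vGPU migration parameter is set on the nvidia module
esxcli system module parameters list -m nvidia | grep -i RegistryDwords
# show the host graphics configuration (default graphics type should be SharedPassthru)
esxcli graphics host get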

Unsupported upgrade of VCSA 6.5 U2 to 6.7

VMware

We will upgrade the vCenter Server Appliance from 6.5 U2 to 6.7 even though it is not supported. As this is not supported, you will NOT want to go ahead with this in a production environment. Maybe I will have regrets later on too … but this is my lab environment and the alternative is to redeploy a new VCSA.

I have applied the following knowledge base articles on the source VCSA:

The first KB was applied because the installer was failing due to a lack of disk space on the source appliance. The installer gives the opportunity to supply a location on the source VCSA to export the necessary files that facilitate the upgrade.

The second KB was applied because the VMware Directory Service failed during the firstboot phase after the upgrade.

I downloaded the sources for VCSA 6.7.0 but had to go back and download the sources for VCSA 6.7.0a instead: the VCSA 6.7.0 upgrade stalled at 5% on the VMware Identity Management Service.

I also disabled the root password expiration and set the administrator@vsphere.local account password to contain only alphabetic characters.
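
The root password expiration can be changed through the VAMI; if you have shell access on the source appliance, the standard Linux chage command should achieve the same, roughly as sketched here.

# show the current password ageing settings for root
chage -l root
# disable password expiration for root (maximum age set to -1)
chage -M -1 root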

The installer will also fail after the first phase if the VAMI port is not reachable, although the first phase itself will finish successfully. I had forgotten to add an exception to my firewall. You can then continue the installer by going to the VAMI interface on port 5480.
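
A quick way to confirm the VAMI port is reachable from the machine running the installer before continuing with the second phase (the FQDN is a placeholder for the appliance):

curl -kI https://vcsa.lab.local:5480/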

Setting up the lab in Ravello – Part 1: the jumphost

This entry is part 1 of 1 in the series Ravello Cloud Lab 1.0

VMware

In this series we will create a lab with multiple components: a jumphost, VCSA, ESXi hosts, a vSAN-enabled cluster, NSX and maybe more. The aim of the series is to learn about deploying all components onto the Ravello cloud.

Part 1: Creating the Jumphost

Part one of the series will be about creating the jumphost. I’m going with a Linux system as we do not need any license to run it and it is already available in Ravello.

Creating the Ravello Application

The first step is to create an application. We will create a 0.1 version of the LAB:

Creating the Jumphost VM in the Application

Drag a ‘Xubuntu Desktop 14.04.1 with qemu-kvm pre-installed’ VM onto the Canvas. Once the VM has been dragged onto the Canvas, there will be an error: ‘Key pair must be supplied’.

You can see that the error has its source on the General tab. To correct this a Key Pair must be created.

On the General tab – Cloud Init Configuration – Key Pair

Select the Option: Create a Key Pair

In the following screenshot you can see that I already created a Key Pair

Once created, the private key will be available for download. To be able to use the private key with an SSH session from PuTTY, you will need to convert key.pem to key.ppk: open PuTTYgen, load the key.pem file and save it as key.ppk.
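
If you prefer doing the conversion on the command line instead of the PuTTYgen GUI, the Unix puttygen from the putty-tools package can do the same; a small sketch, assuming the key was downloaded as key.pem:

# install the command-line PuTTY tools and convert the PEM key to PuTTY format
sudo apt-get install putty-tools
puttygen key.pem -O private -o key.ppk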

Now that we have created our key pair we can save the VM and the error should disappear.

On the System tab, change the # CPU to 2 and the memory to 3 GB.

On the Disks and NICs tab we leave everything as is.

On the Services tab, choose ‘Add Supplied Service’. We will use this service to connect to the VM via RDP.

A second service will be added. I changed the name to RDP and chose protocol RDP, which sets the port to 3389.

We are ready to publish the application:

Change the ‘Schedule application to stop in:’ countdown timer to ‘04:00hr’. This will give us time to update the VM and adapt it to our needs.

Publishing will power on the VM. When powered on, we will have access to the console. Powering on the VM takes a couple of minutes.

Customizing the Jumphost VM

Upgrades

The Console will open in a new tab. The initial password for this VM is ‘ravelloCloud’.

The first thing we will do is upgrade the VM to the latest release available. Open the ‘Byobu Terminal’.

Run the command ‘sudo apt-get update && sudo apt-get upgrade’ and confirm you want to upgrade all proposed packages. I tried do-release-upgrade first, which failed because of an apt dependency.

sudo apt-get update && sudo apt-get upgrade

Now we are ready to upgrade to the latest release. When prompted, accept the new versions of the configuration files from the package maintainer. In the end all obsolete packages can be removed and the VM rebooted, as shown after the dist-upgrade step below.

Run the command ‘sudo apt-get dist-upgrade’ and confirm you want to upgrade all proposed packages. Now your system will be fully up-to-date.
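
One way to clean up the obsolete packages and reboot when the dist-upgrade has finished:

# remove packages that are no longer needed and reboot the VM
sudo apt-get autoremove
sudo reboot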

XRDP 0.9.x

Install xrdp 0.9.x so that we can connect via RDP. This will be a more pleasant way of working.

We will add a PPA (Personal Package Archive) to add the package source location to the apt sources (a file under /etc/apt/sources.list.d). This will enable updates through the apt update process. We will install the latest version of xrdp from this location. At the time of writing the version in the Ubuntu sources is 0.6.x; the latest stable version has quite some enhancements, like shared clipboard support.

sudo add-apt-repository ppa:hermlnx/xrdp
sudo apt-get update
sudo apt-get install xrdp
xrdp -v

The version installed at the time of writing is 0.9.4.

Create an .xsession file with the contents ‘xfce4-session’. The latest xrdp version should detect the desktop environment by default, but in my case it didn’t and wouldn’t work without the following xsession file.

cd $HOME
echo xfce4-session > ~/.xsession

Generate new certificate and key

openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365

Update XRDP to use the new certificates

cd /etc/xrdp
sudo vi xrdp.ini

Change the following lines to use the certificate and key generated

certificate=/home/ubuntu/cert.pem
key_file=/home/ubuntu/key.pem

Next, edit the X wrapper configuration:

cd /etc/X11/
sudo vi wrapper.config

Change the following line

allowed_users=anybody

Reboot the VM. Now you can access the VM through RDP. You will need to accept the self-signed certificate as it has not been signed by a trusted root CA.

Powershell Core

Import the public repository GPG keys

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

Register the Microsoft Ubuntu repository

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

Update the list of products

sudo apt-get update

Install PowerShell

sudo apt-get install -y powershell

Start PowerShell

pwsh

PowerCLI 10

Install the PowerCLI module from the PowerShell Gallery

Install-Module -Name VMware.PowerCLI -scope CurrentUser

Verify PowerCLI version

Get-PowerCLIVersion

OPTIONAL: Opt-out from the Customer Experience Improvement Program (CEIP)

Set-PowerCLIConfiguration -Scope User -ParticipateInCeip $false

OPTIONAL: Do not display the warning about using self-signed certificates

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

OPTIONAL: Visual Studio Code

Installing Microsoft Visual Studio Code can be useful for creating scripts that will or could be used within the environment.

curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'
sudo apt-get update
sudo apt-get install code # or code-insiders

The next part will be setting up the ESXi machines and VCSA.

Many thanks to: