VMware Horizon 7.8 and CART 5 – What’s New

I’m excited to announce the latest release of VMware Horizon 7 version 7.8 and Client Agent Release Train (CART) 5.0. You can see the highlights of this release in the What’s New video. This includes new features and enhancements to the Connection Server, Horizon Agent, Horizon Client, GPO Bundle, Horizon 7 Cloud Connector, VMware […]


VMware Social Media Advocacy

Horizon Client Installer Failed

The Horizon Client installer for Linux complained about missing GStreamer 0.10 libraries. Adding the following symlinks, which point the expected 0.10 names at the installed GStreamer 1.0 libraries, made the failure message go away. I do wonder whether Multimedia Redirection (MMR) will break once these packages get updated in the repositories. I guess I’ll notice some day.

sudo ln -s /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0.1401.0 /usr/lib/x86_64-linux-gnu/libgstbase-0.10.so.0
sudo ln -s /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1401.0 /usr/lib/x86_64-linux-gnu/libgstreamer-0.10.so.0
sudo ln -s /usr/lib/x86_64-linux-gnu/libgstapp-1.0.so.0 /usr/lib/x86_64-linux-gnu/libgstapp-0.10.so.0

Reconfigure diagnostic partition using Get-EsxCli -V2

The PowerShell snippet below unconfigures the diagnostic coredump partition using the esxcli version 2 cmdlet. The second part then reconfigures the diagnostic partition with the ‘smart’ option so that an accessible partition is chosen.

If you want to configure a new diagnostic partition, you will find the necessary information in the following VMware knowledge base article: Configuring a diagnostic coredump partition on an ESXi 5.x/6.x host (2004299). There are additional steps to supply the partition id, as sketched below.
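
For example, pointing the coredump at a specific partition would look something like this sketch; it reuses the $esxcli object created in the main snippet below, and the device id is a placeholder for the value from your environment:

$arg = $esxcli.system.coredump.partition.set.CreateArgs()
# Placeholder device id; supply the naa id and partition number of your diagnostic partition
$arg.partition = "naa.0123456789abcdef0123456789abcdef:7"
$esxcli.system.coredump.partition.set.Invoke($arg)
$arg = $esxcli.system.coredump.partition.set.CreateArgs()
$arg.enable = "true"
$esxcli.system.coredump.partition.set.Invoke($arg)

The complete snippet for the unconfigure and ‘smart’ reconfigure scenario follows: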

$srv = Get-VMHost ESXiHost
$esxcli = Get-EsxCli -VMHost $srv -V2
$arg = $esxcli.system.coredump.partition.set.CreateArgs()
$arg.unconfigure = "true"
$esxcli.system.coredump.partition.set.Invoke($arg)
$arg = $esxcli.system.coredump.partition.set.CreateArgs()
$arg.unconfigure = "false"
$arg.enable = "true"
$arg.smart = "true"
$esxcli.system.coredump.partition.set.Invoke($arg)

First we connect to the ESXi host directly and store the host object in the variable $srv:

$srv = Get-VMHost ESXiHost

Then we create an esxcli object $esxcli using the variable $srv we created previously:

$esxcli = Get-EsxCli -VMHost $srv -V2

Now we create a variable $arg to store the arguments we will provide later:

$arg = $esxcli.system.coredump.partition.set.CreateArgs()

Setting the $arg property ‘unconfigure’ to true will deactivate the diagnostic partition:

$arg.unconfigure = "true"

The Invoke method runs the command remotely on the ESXi host. After execution the diagnostic partition is deactivated:

$esxcli.system.coredump.partition.set.Invoke($arg)

The second part starts with creating a new set of arguments:

$arg = $esxcli.system.coredump.partition.set.CreateArgs()

Reactivate the coredump partition, because we deactivated it before:

$arg.unconfigure = "false"

Enable the coredump partition:

$arg.enable = "true"

The ‘smart’ property will try to use an accessible partition:

$arg.smart = "true"

The last Invoke call configures the diagnostic partition using the supplied arguments:

$esxcli.system.coredump.partition.set.Invoke($arg)
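
To confirm the result, the read-only get counterpart of the set command reports the active and configured coredump partition:

$esxcli.system.coredump.partition.get.Invoke()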

Configuring Tesla M60 cards for NVIDIA GRID vGPU

There are a couple of steps which need to be taken to configure Tesla M60 cards with NVIDIA GRID vGPU in a vSphere / Horizon environment. I have listed them here, quick and dirty. They are an extract of the NVIDIA Virtual GPU Software User Guide.

  • On the host(s):
    • Install the vib
      • esxcli software vib install -v directory/NVIDIA-vGPUVMware_ESXi_6.0_Host_Driver_390.72-1OEM.600.0.0.2159203.vib
    • Reboot the host(s)
    • Check if the module is loaded
      • vmkload_mod -l | grep nvidia
    • Run the nvidia-smi command to verify the correct communication with the device
    • Configure suspend and resume for VMware vSphere
      • esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RMEnableVgpuMigration=1"
    • Reboot the host
    • Confirm that suspend and resume is configured
      • dmesg | grep NVRM
    • Check that the default graphics type is set to Shared Direct (see the PowerCLI sketch after this list)
    • If the graphics type was not set to Shared Direct, execute the following commands to stop and start the Xorg and nv-hostengine services
      • /etc/init.d/xorg stop
      • nv-hostengine -t
      • nv-hostengine -d
      • /etc/init.d/xorg start
  • On the VM / Parent VM:
    • Configure the VM; beware that once the vGPU is configured, the console of the VM will no longer be visible/accessible through the vSphere Client. An alternative access method should already be in place
    • Edit the VM configuration to add a shared PCI device and verify that NVIDIA GRID vGPU is selected
    • Choose the vGPU profile
      more info on the profiles can be found here under section ‘1.4.1 Virtual GPU Types’: https://docs.nvidia.com/grid/6.0/grid-vgpu-user-guide/index.html
    • Reserve all guest memory
  • On the Horizon pool:
    • Configure the pool to use the NVIDIA GRID vGPU as 3D Renderer
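
To do the Shared Direct check from PowerCLI rather than the vSphere Client, something like the following sketch should work; it assumes ESXi 6.5 or later, where the esxcli graphics namespace is available, and that the V2 argument name mirrors the esxcli --default-type parameter. ‘Shared Direct’ in the client corresponds to SharedPassthru in esxcli, and ESXiHost is a placeholder:

$esxcli = Get-EsxCli -VMHost (Get-VMHost ESXiHost) -V2
# Show the current host graphics configuration; DefaultGraphicsType should read SharedPassthru
$esxcli.graphics.host.get.Invoke()
# If it does not, set the default graphics type (assumed argument name) and
# restart the Xorg and nv-hostengine services as listed above
$arg = $esxcli.graphics.host.set.CreateArgs()
$arg.defaulttype = "SharedPassthru"
$esxcli.graphics.host.set.Invoke($arg)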

Unsupported upgrade of VCSA 6.5 U2 to 6.7

We will upgrade the vCenter Server Appliance from 6.5 U2 to 6.7 even though it is not supported. As this is not supported, you will NOT want to go ahead with this in a production environment. Maybe I will have regrets later on too … but this is my lab environment, so the alternative is to redeploy a new VCSA.

I have applied two knowledge base articles on the source VCSA.

The first KB was applied because the installer failed due to a lack of disk space on the source appliance. The installer gives you the opportunity to supply a location on the source VCSA to export the files that facilitate the upgrade.

The second KB was applied because the VMware Directory service failed during the firstboot phase after the upgrade succeeded.

I downloaded the sources for VCSA 6.7.0 but had to go back and download the sources for VCSA 6.7.0a, because the VCSA 6.7.0 upgrade stalled at 5% on the VMware Identity Management Service.

I also set the root password expiration to ‘no’ and changed the administrator@vsphere.local account password to contain only alphabetic characters.

The installer will also fail after the first phase if the VAMI port is not reachable, although the first phase itself finishes successfully; I had forgotten to add an exception to my firewall. You can then continue the installer by going to the VAMI interface on port 5480.
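
To rule that out up front, a quick reachability check of the VAMI port before starting the second phase can help; for example, from a Windows machine with PowerShell, where the appliance name is a placeholder:

# Check that the VAMI port (5480) on the appliance is reachable through the firewall
Test-NetConnection -ComputerName vcsa.lab.local -Port 5480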