While going through the latest lab upgrade round, I ran into an error when upgrading NSX. The NSX Edge Transport Nodes (ETN) upgrade successfully; however, the NSX Host Transport Nodes (HTN) portion fails.
Not that the solution is so special, but it had me running around a bit, so I wanted to share.
The upgrade returns the following error:
A general system error occurred: Image is not valid. Component NSX LCP Bundle(NSX LCP Bundle(4.1.0.2.0-8.0.21761693)) has unmet dependency nsx-python-greenlet-esxio because providing component(s) NSX LCP Bundle(NSX LCP Bundle(4.1.0.2.0-8.0.21761693)) are obsoleted.
At the same time, the same error is listed in vCenter:
While not mentioning this exact solution, it got me thinking the fix could be similar. The procedure instructs you to download and work with the JSON export of the vLCM configuration:
{
  "add_on": {
    "name": "DEL-ESXi",
    "version": "802.22380479-A04"
  },
  "alternative_images": null,
  "base_image": {
    "version": "8.0.2-0.40.23825572"
  },
  "components": {
    "Intel-i40en": "2.5.11.0-1OEM.700.1.0.15843807",
    "Synology-syno-nfs-vaai-plugin": "2.0-1109",
    "nsx-lcp-bundle": "4.1.0.2.0-8.0.21761693"
  },
  "hardware_support": null,
  "removed_components": null,
  "solutions": null
}
I removed the nsx-lcp-bundle line, saved the file, and imported the JSON back into vLCM. After that I retried the NSX upgrade, and it progressed!
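If you prefer to script the edit instead of removing the line by hand, here is a minimal sketch using jq. Both the use of jq and the filenames are my own choice, not part of the official procedure:

```shell
# Remove the stale nsx-lcp-bundle entry from the exported vLCM image spec.
# jq's 'del' drops that key from the "components" object; everything else
# in the JSON is passed through untouched.
jq 'del(.components["nsx-lcp-bundle"])' image.json > image-fixed.json
```

Then import image-fixed.json back into vLCM and retry the NSX upgrade.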
To be honest, I have been complaining a bit over the last year or so about the NSX Advanced Load Balancer documentation. Mostly that it was not easy to find, and one had to fall back on the avinetworks.com site, which was not great either.
On docs.vmware.com the navigation links did not exist. However, if you knew the page titles, you could find them through search engines. That showed that a lot of those documentation pages were in fact there, just not reachable through the navigation.
However, as of a couple of weeks ago, there is a banner on the avinetworks.com site stating that 22.1.4 is the last release documented on avinetworks.com.
NSX ALB documentation deprecation on avinetworks.com
This means that the single source of truth will be on the NSX Advanced Load Balancer page on docs.vmware.com (the link does redirect you to that location 😀).
Quick tip: if you want to search within a single site from your browser (e.g. Chrome), you can use the site: search operator.
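For example, to restrict a search to the VMware documentation site (the query terms are just an illustration):

```
site:docs.vmware.com "NSX Advanced Load Balancer"
```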
Are you setting up TrueSSO? Are you looking to use signed certificates to secure the communication between the Connection Server and the Enrollment Server?
Try to find the documentation on using signed certificates to secure that communication. I challenge you: you will not find it easily.
What and why?
You are allowing access to the Unified Access Gateway from the internet, so you will want those services to have signed certificates to secure the communication, which also turns that icon in the Horizon client green. To enable end-to-end signed communication, you need to make sure you have certificates all the way through; in the end, you are creating tunnels to backend services.
On top of that, you want to add TrueSSO to the equation, as you want a seamless sign-on experience. This means more certificates. You follow the guides (and all the blog posts built on that information), so you are almost there.
However, one step is exporting the ‘vdm.ec’ certificate from the Connection Server and importing it on the Enrollment Server. That is exactly where the information is missing, or at least hard to find. None of the guides actually talk about CA-signed certificates for this step. You are going through all this effort to get those components signed by a (Microsoft) CA; don’t you think you should use signed certificates here as well? I think so!
Where can I find the documentation?
Here is the documentation on the VMware websites on setting up TrueSSO:
You connect to your ESXi host and launch esxtop, but the output is not displayed as it should be. Instead, it looks like the screenshot below:
Your esxtop output will be displayed correctly if you are using a terminal emulator that defaults to xterm as the TERM environment variable. Some terminal emulators use another value by default, e.g. xterm-256color. ESXi does not map xterm-256color to one of the values it knows, so it does not know how to display the output.
There is a KB article that explains how to resolve this:
The value of the environment variable TERM is used by the server to control how input is recognized by the system, and what capabilities exist for output.
Let us first have a look at what the TERM variable is in my case:
echo $TERM
I am receiving the following output:
My terminal emulator tries to connect to the endpoint (ESXi) with xterm-256color. Now let’s take a look at what values this endpoint does support:
So all of the above values can be assigned to TERM. The value my terminal emulator uses is not among the supported terminfo types, so the ESXi host cannot map it to any of the known types and thus does not know how to display the esxtop info correctly.
When we update the TERM environment variable to xterm and try to run esxtop again, the output will show nicely formatted.
TERM=xterm
echo $TERM
Let’s check esxtop again to make sure the outcome is as expected:
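Instead of changing TERM after every login, you can also override it just for the SSH session: ssh passes the client side's TERM value to the remote pty. A small sketch (the hostname is a placeholder for your own ESXi host):

```shell
# Start the session with TERM already set to xterm; sshd applies the
# client's TERM value to the remote pty, so esxtop formats correctly
# right away. (esxi01.lab.local is a placeholder hostname.)
TERM=xterm ssh root@esxi01.lab.local
```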
Sometimes you want or need to use iPerf to test the NIC speed between two ESXi hosts. I did, because I was seeing a NIC with low throughput in my lab.
How can we test raw speeds between the two hosts? iPerf comes to the rescue. I was looking for how to do this on an ESXi host, and it doesn’t come as a surprise that I found the solution at William Lam’s virtuallyghetto.com. Apparently iperf has been included with ESXi since 6.5 U2. You used to have to copy iperf to iperf.copy; in ESXi 7.0 that has been done for you, although you will need to look for /usr/lib/vmware/vsan/bin/iperf3.copy
ESXi host 1 (iperf server)
Disable the firewall:
esxcli network firewall set --enabled false
Change to the directory containing the iperf binary
cd /usr/lib/vmware/vsan/bin/
Execute iPerf as server
./iperf3.copy -s -B 10.11.6.171
Overview of the parameters used:
-s
starts iperf in server mode
-B
defines the IP address the iperf server will listen on
ESXi host 2 (iperf client)
Disable the firewall:
esxcli network firewall set --enabled false
Change to the directory containing the iperf binary
cd /usr/lib/vmware/vsan/bin/
Execute iPerf as client
./iperf3.copy -i 1 -t 10 -c 10.11.6.171 -f m
Overview of the parameters used:
-i
the interval, in seconds, at which iperf reports back
-t
the time, in seconds, iperf will run
-c
the server IP to connect to as a client; this also forces the usage of the correct vmkernel interface
-f m
the output format: it defaults to Kbit/s, adding m will use Mbit/s
Don’t forget to re-enable the firewall on both systems.
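Re-enabling simply mirrors the disable command used earlier:

```shell
# Turn the ESXi firewall back on (run this on both hosts)
esxcli network firewall set --enabled true
```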