Lately, I have been doing quite a bit of work on VMware vSphere with Tanzu. A prerequisite for configuring vSphere with Tanzu is a load balancer of some sort. Currently, the following are supported: HAProxy, the NSX-T integrated load balancer, and the NSX Advanced Load Balancer (ALB). (Support for the NSX ALB was added with the release of vSphere 7 U1.)
The end goal of the setup is to host several websites in combination with a Horizon environment on a single IP. Because not all systems can handle Let’s Encrypt requests, e.g. UAG, I want one system that handles the certificate request and does the SSL offloading for the endpoints. So I was looking for a load balancer solution with Let’s Encrypt capability. The NSX Advanced Load Balancer (ALB) adds the ability to request Let’s Encrypt certificates through ControlScripts.
I have already learned a lot about the NSX ALB, and having some experience with other brands of load balancers certainly helped me get up to speed quickly.
The goal of this post is to set up a standard Virtual Service (VS) and request a Let’s Encrypt certificate for that VS. You will see that it is quite easy.
Prerequisites
I will not cover some necessary configuration settings here. They are, however, required to successfully execute the steps below, so I will assume the following prerequisites are in place.
The following post shows how to deploy the NSX Advanced Load Balancer and how to configure a ‘VMware vCenter/vSphere ESX’ cloud.
The NSX ALB registered with a cloud. (I use a ‘VMware vCenter/vSphere ESX’ cloud.)
A public DNS entry for the Virtual Service. (Let’s Encrypt needs to be able to check your Virtual Service)
Some way to reach the virtual service from the internet. (I have set up a NAT rule on my firewall for this.)
Server Pool. (Needed to create the Virtual Service; obviously, the Virtual Service needs an endpoint to send the requests to.)
Network config for VIP and SE. (Once you configure a ‘VMware vCenter/vSphere ESX’ cloud, you’ll have access to the networks known to vCenter. You will need to configure ‘Subnets’ and ‘IP Address Pools’ for the NSX ALB to use for the VSs.)
IPAM/DNS Profile. (You need to add the Domain Names for the Virtual Services here.)
I will cover these in a later post but for now I added them as a prerequisite.
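Two of these prerequisites, the public DNS entry and the NAT rule, can be sanity-checked from any machine on the internet. A minimal sketch, assuming `dig` and `curl` are available and using this post’s example FQDN:

```shell
# Sanity-check helpers; 'vpn.vconsultants.be' is this post's example FQDN.

check_dns() {
  # Print the first A record the public resolvers return for the FQDN.
  dig +short "$1" | head -n 1
}

check_http() {
  # Return 0 when something answers on port 80 within 5 seconds
  # (a 404 is fine; Let's Encrypt only needs to reach the host).
  curl -s -o /dev/null --connect-timeout 5 "http://$1/"
}

# Example:
#   [ -n "$(check_dns vpn.vconsultants.be)" ] || echo "no public DNS entry"
#   check_http vpn.vconsultants.be || echo "port 80 not reachable"
```

A missing A record or a timeout on port 80 means Let’s Encrypt will not be able to validate the Virtual Service.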
What does the ControlScript do?
The ControlScript generates a challenge token for the Let’s Encrypt servers to check the service. Secondly, it searches for a Virtual Service whose FQDN matches the Common Name supplied in the certificate request. Once it finds that Virtual Service, it checks whether it is listening on port 80. If not, it configures the Virtual Service to handle the request on port 80. Then it adds the challenge token to the Virtual Service. Finally, after a successful certificate request, the changes are cleared.
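The check Let’s Encrypt performs is the standard ACME HTTP-01 challenge: it fetches a token from a well-known URL on port 80, which is what the ControlScript has to serve. A small sketch of that URL layout (the token value itself is generated per request):

```shell
# Build the well-known URL that Let's Encrypt fetches during an
# HTTP-01 challenge; the ControlScript must serve the token there.
acme_challenge_url() {
  # $1 = FQDN, $2 = challenge token
  echo "http://$1/.well-known/acme-challenge/$2"
}

# Example (hypothetical token):
#   curl -fs "$(acme_challenge_url vpn.vconsultants.be sometoken)"
```

This is also a handy manual test while a request is in flight.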
Add the Let’s Encrypt ControlScript to the NSX Advanced Load Balancer
Navigate to Templates > Scripts > ControlScripts and click CREATE
Supply the script name, e.g. ControlScript_LetsEncrypt_VS, and choose either ‘Enter Text’ or ‘Upload File’. Here we will choose the ‘Enter Text’ option and paste the contents of the Python script on GitHub.
Create a Certificate Management profile
Navigate to Templates > Security > Certificate Management and click CREATE
Enter the Name ‘CertMgmt_LetsEncrypt_VS’ and select the Control Script ‘ControlScript_LetsEncrypt_VS’
Click ‘Enable Custom Parameters’ and add the following:
Name: user | Value: admin
Name: password | Value: <enter your NSX ALB controller password for the admin user> (toggle Sensitive)
Name: tenant | Value: admin (this is important; otherwise the script has no clue which tenant it should be applied to)
Add the Custom Parameter ‘tenant’ even if you only have one tenant, the default tenant (admin). I struggled a lot with the script failing without having a clue why. Ultimately, after a long search and monitoring the log through tail, there was something in the logs that pointed me in this direction.
There is also the possibility to add a fourth parameter, ‘dryrun’, with value true or false. This toggles the script to use the Let’s Encrypt staging server.
Create the Virtual Service
Navigate to Applications > Virtual Services > CREATE VIRTUAL SERVICE and click ‘Advanced Setup’
Create the VS with the SNI; in this example I will create ‘vpn.vconsultants.be’. Configure the Settings page and leave the other tabs at their default settings.
Supply the VS name (I use the fqdn/SNI just for manageability)
Leave the checkbox ‘Virtual Hosting VS’ unchecked (default). (We will set up a standard VS.)
Leave the checkbox ‘Auto Allocate’ checked (default). (It takes an IP from the Network pool.)
Change the ‘Application Profile’ to ‘System-Secure-HTTP’.
Supply a ‘Floating IPv4’. (I use a static one so that I’m able to set up NAT to this IP on my firewall.)
Select a ‘Network for VIP Address Allocation’. (The SE will create the VIP in this network.)
Select an ‘IPv4 Subnet’. (Only the ones created in the Network config for VIP and SE will be available.)
Change the ‘Application Domain Name’ so that it matches the fqdn of the SNI. (This will fill automatically based on the VS Name.)
Check SSL and verify that the port changes to 443
Select the correct Pool
Change the ‘SSL Profile’ to ‘System-Standard’
Note: Item 7 is a bit awkward. Hovering over the question mark for help, it states that it is only applicable if the Virtual Service belongs to an OpenStack or AWS cloud, yet when you don’t set this option, you cannot go forward. This confuses me somewhat, as I only use a vSphere cloud.
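For completeness: the same Virtual Service can in principle be created through the controller’s REST API instead of the UI. The sketch below is an untested assumption on my part; the endpoint ‘/api/virtualservice’, the controller hostname, and the pool name ‘pool-vpn’ are all placeholders, so check the API guide for your version before using it.

```shell
# Assumptions: the controller API endpoint /api/virtualservice, basic
# auth, and an existing pool named 'pool-vpn' are all hypothetical here.
CONTROLLER="alb-controller.local"   # placeholder hostname

vs_payload() {
  # $1 = VS name (I use the FQDN, as in the UI walkthrough)
  cat <<EOF
{
  "name": "$1",
  "services": [ { "port": 443, "enable_ssl": true } ],
  "application_profile_ref": "/api/applicationprofile?name=System-Secure-HTTP",
  "pool_ref": "/api/pool?name=pool-vpn"
}
EOF
}

# Example (not run here):
#   curl -k -u admin -H "Content-Type: application/json" \
#     -X POST "https://$CONTROLLER/api/virtualservice" \
#     -d "$(vs_payload vpn.vconsultants.be)"
```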
Request a Let’s Encrypt certificate for the NSX ALB Virtual Service
Navigate to Templates > Security > SSL/TLS Certificates > CREATE and click Application Certificate
Fill in the details for the Certificate Signing Request (CSR) with the SNI for the certificate you want to request. The script will run when the SAVE button is clicked.
Supply the Certificate name (I use the fqdn/SNI just for manageability)
Select ‘Type’ ‘CSR’.
Supply the certificate ‘Common Name’. This is where you supply the actual name of the certificate you want to request, in this case vpn.vconsultants.be.
Supply the certificate ‘Common Name’ as ‘Subject Alternative Name’.
I have started to use ‘EC’ as the certificate ‘Algorithm’ over ‘RSA’.
Select a ‘Key Size’. Be aware that when choosing ‘EC’ as ‘Algorithm’, ‘SECP384R1’ is the latest that Let’s Encrypt supports for now.
Check ‘Enable OCSP Stapling’; this will speed up the certificate validation process.
Now watch the magic.
Note: I added the root and intermediate certificates to the NSX ALB controller to validate the certificate. That is why the color of the circle is green.
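Whether the new certificate and the OCSP staple are actually served can be verified from any client with OpenSSL. A minimal sketch, again using this post’s example FQDN:

```shell
check_tls() {
  # Print the certificate issuer and the stapled OCSP status for $1.
  # The leading 'echo' closes the TLS session right after the handshake.
  echo | openssl s_client -connect "$1:443" -servername "$1" -status 2>/dev/null \
    | grep -E 'OCSP Response Status|issuer='
}

# Example: check_tls vpn.vconsultants.be
```

An issuer line mentioning Let’s Encrypt and an ‘OCSP Response Status: successful’ line confirm both the certificate and the stapling.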
Add the Let’s Encrypt certificate to the NSX ALB Virtual Service
A final step to do in this setup is to apply the certificate on the VS.
In the end, you will have an NSX Advanced Load Balancer (ALB) Virtual Service configured with a Let’s Encrypt certificate.
Next POST
In the next post, I’ll show the customized script that enables Let’s Encrypt Certificate Management for Enhanced Virtual Hosting (EVH), where the certificate will be requested for an EVH child Virtual Service.
Today I was playing around with vSphere with Tanzu. I want to consume vSphere with Tanzu, so I tried to deploy an app from the Bitnami repository. This should be pretty easy to do. Well, I’m still in the learning phase, so bear with me if this is something obvious …
These are the steps I’m taking:
Install helm
Add bitnami repo
Install app from the bitnami repo
Deploy an app from the bitnami repo on a Tanzu Kubernetes Grid (TKG) cluster (deployed on vSphere with Tanzu)
So I tried to deploy redis to the TKG cluster. It needs a Persistent Volume (PV), so at deploy time a Persistent Volume Claim (PVC) would be issued and a PV should be assigned. When I saw it took a while to get my redis app deployed, I looked at the namespace > Monitor > Events > Kubernetes and saw that there was an error: ‘no persistent volumes available for this claim and no storage class is set’.
In my case, the issue was that I did not specify the defaultClass when I created the TKG cluster. I used the following yaml file to create the TKG cluster. The ‘settings.storage.defaultClass’ lines were not in the yaml file when I created the TKG cluster; they specify which storage class should be used by default.
TKG cluster yaml (k8s-01.yaml):

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: k8s-01
  namespace: demo
spec:
  topology:
    controlPlane:
      count: 1
      class: guaranteed-small
      storageClass: storage-policy-tanzu
    workers:
      count: 3
      class: guaranteed-small
      storageClass: storage-policy-tanzu
  settings:
    storage:
      defaultClass: storage-policy-tanzu
  distribution:
    version: v1.18
```
So I executed (the k8s-01.yaml file has the above content):

```shell
kubectl apply -f k8s-01.yaml
```
and received the following error:
As I was still in the TKG cluster context, I could not change the TKG cluster spec. So I needed to change the context to the namespace ‘demo’ (where I deployed my TKG cluster):
```shell
kubectl config use-context demo
```
I reapplied the yaml file, changed the context again to the TKG cluster and issued the command:
TKG cluster yaml
YAML
1
kubectldescribestorageclass
Now we see that there is a default storage class for this cluster:
And when I launch the deploy again:
```shell
kubectl run redis --image=bitnami/redis
```
I see that the deploy is succeeding. Woohoo
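To double-check that the default storage class really satisfied the claim, the PVC phase can be inspected. A small sketch, assuming the current context is the TKG cluster:

```shell
pvc_phases() {
  # Print the phase of every PVC in the current namespace,
  # e.g. "Bound" or "Pending".
  kubectl get pvc -o jsonpath='{.items[*].status.phase}'
}

# Example: [ "$(pvc_phases)" = "Bound" ] || echo "PVC not bound yet"
```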
UPDATE: @anthonyspiteri has come to the same conclusion in later blog posts
Today I deployed a new VCSA 7 U1 and as U2 has GA’d recently I wanted to update the environment first. So I headed to the VAMI interface > Available Updates page. Immediately there was an error:
I found some blogs that suggested deleting the upgrade status file ‘software_update_state.conf’ at /etc/applmgmt/appliance. I tested by renaming this file to .old, but this did not resolve the error.
The file was recreated but held the same info, which is in JSON format with the following content:

```json
{
  "state": "UP_TO_DATE"
}
```
“UP_TO_DATE” it clearly is not. So I found this KB article, which is also where I got the solution for my install. I compared the URL from the KB article with the one that is included by default on the update settings page.
In my case, when I removed the ‘.latest’ from that URL, updates are detected and I can proceed.
So, as you can see in the screenshot below (well, not entirely, but you will need to take my word for it), I selected ‘Specified’ and supplied the following URL:
Due to a power failure of the storage where the vCenter Server Appliance resides, the VCSA does not boot. Connecting to the console shows the following output:
When you see this screen, none of the services are started, as the appliance does not fully start. This implies that there is no means of connecting to the H5 client or the VAMI interface on port 5480.
Why does the VCSA not boot and where do I start troubleshooting?
There are two important things to mention on the screenshot above; this is where we start:
Failed to start File System Check on /dev/log_vg/log
journalctl -xb
First, we take a look at ‘journalctl -xb’. To do this, we need to supply the root password and launch a BASH shell:
Now that we have shell access, we can take a look at ‘journalctl -xb’:
```shell
journalctl -xb
```
Type ‘G’ (uppercase) to go to the bottom of the log file.
Work upwards; the most relevant logs will be at the bottom. For the sake of this blog post, I typed ‘-S’, which toggles word wrap; in this case, I turned word wrap on.
File System Check
Going up a little I find these entries:
There is a problem with a certain inode and File System Check (fsck) should be run. Let’s see how we can do that. Is it as simple as running:
```shell
fsck /dev/mapper/log_vg-log
```
It seems like it. Running the above command finds some errors and suggests repairing them. I confirmed.
Other volumes
Let’s check the other logical volumes (LVs). First we will run ‘lsblk’ to take a look at the drive layout:
```shell
lsblk
```
Remark: when we take a look at the type column, we see the disks, e.g. sda, sdb, etc. The difference between sda and the rest is that sda is partitioned with standard partitions, while on the rest of the disks an LVM physical volume has been created.
I checked all other volumes and found none of them were having issues.
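Checking the volumes one by one can also be scripted. A minimal sketch that runs fsck in report-only mode against whatever device nodes you pass in (the /dev/mapper names follow the lsblk output; your layout may differ):

```shell
check_lvs() {
  # Run a read-only file system check (-n: report, change nothing)
  # on every device node passed in.
  for dev in "$@"; do
    echo "checking $dev"
    fsck -n "$dev" || echo "errors found on $dev"
  done
}

# Example: check_lvs /dev/mapper/*-*
```

Any volume flagged here can then be repaired individually, as was done for /dev/mapper/log_vg-log above.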
Reboot
To reboot while you are in maintenance boot:
```shell
reboot --force
```
After the reboot, I could connect to the H5 client and clear the relevant errors.
Remark
This blog post is very similar to this one here. Although they are very much alike, the issues in the older blog post were on a standard partition on a VCSA 6.5 whereas the issues described and addressed in this post are on a VCSA 7.0 LVM physical volume.
When you connect to your ESXi host and launch esxtop, the output may not display as it should. Instead, it displays like in the below screenshot:
Your esxtop output will be displayed correctly if you are using a terminal emulator that defaults to xterm as the TERM environment variable. Some terminal emulators use another TERM value by default, e.g. xterm-256color. ESXi does not map xterm-256color to one of the values it knows, so it doesn’t know how to display the output.
There is a KB article that explains how to resolve:
The value of the environment variable TERM is used by the server to control how input is recognized by the system, and what capabilities exist for output.
Let’s first have a look at what the TERM variable is in my case:
```shell
echo $TERM
```
I am receiving the following output:
My terminal emulator tries to connect to the endpoint (ESXi) with xterm-256color. Now let’s take a look at what values this endpoint does support:
So all of the above values can be assigned to TERM. The value my terminal emulator uses is not among the supported terminfo types, so the ESXi host cannot map it to any of the known types and thus does not know how to display the esxtop info correctly.
When we update the TERM environment variable to xterm and try to run esxtop again, the output will show nicely formatted.
```shell
TERM=xterm
echo $TERM
```
Let’s check esxtop again to make sure the outcome is as expected:
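To avoid having to fix TERM by hand every session, the assignment can be baked into a small client-side wrapper. A sketch, with ‘esx01’ as a placeholder hostname (ssh sends the client’s TERM value along when it requests a pty):

```shell
# Connect to an ESXi host with a TERM value it knows how to map.
sshesx() {
  TERM=xterm ssh "root@$1"
}

# Usage: sshesx esx01
```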