April 1, 2023

CloudBuilder fails to deploy vCenter during initial deployment

VCF 4.5 should be deployable in an air-gapped environment with no access to the internet. In such cases you will need to bring updates and other content into the environment manually, but it's still a supported scenario.

When running the initial bring-up process, deployment will fail with the message "vCenter installation failed. Check logs for more details." The vcf-bringup.log file will tell you that the vCenter appliance was deployed and started, but that there was a problem with the time on the appliance.

The NTP parameters you specified in your deployment spreadsheet have been populated correctly in /etc/ntp.conf on the Cloud Builder appliance, but the logs show that it's trying to connect to Google's NTP servers.


The only solutions we've found so far are to either impersonate Google's NTP entries in DNS or to open the firewall and let Cloud Builder communicate with these external servers. Cloud Builder is only used during bring-up, so these workarounds can be reverted once the environment is up and running.
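As a sketch of the DNS workaround (the hostname and address below are placeholders; use the names that actually appear in your vcf-bringup.log): if Cloud Builder looks up Google's NTP names, you can answer those lookups with your internal NTP server's address. With dnsmasq on the DNS server Cloud Builder uses, that could look like:

```
# /etc/dnsmasq.d/cloudbuilder-ntp.conf (example)
# Answer Google's NTP hostname with an internal NTP server (10.0.0.10 is a placeholder)
address=/time.google.com/10.0.0.10
```

Remember to revert this once bring-up has completed, as the entry will otherwise shadow the real records.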

March 1, 2023

LPe12000 and other old Emulex cards are unsupported if you patch ESXi 7.0U3


In January 2020 Broadcom announced that a series of Emulex cards would soon go End of Life. They have, however, worked fine in VMware ESXi until recently, including in 7.0U3d.


If you patch your ESXi 7 host with the latest patches, the lpfc driver will be replaced by one that doesn't support these old cards, and you will no longer see your FC LUNs (VMFS datastores and RDM disks). The driver will be upgraded to 14.0.543.0. We've also found that ESXi 7.0U3j ships with a non-working driver.


Using supported hardware is always recommended. Swapping these old cards with newer ones would be optimal.


It is possible to install an older driver (right click this link, Save As) that still supports this hardware, after which you will see your LUNs again.
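To see which lpfc driver version a host is currently running (for example before and after patching, or after installing the old driver), you can check from the ESXi shell; these are standard esxcli commands:

```shell
# Show the installed lpfc driver VIB and its version
esxcli software vib list | grep -i lpfc
# List storage adapters and the driver each one is using
esxcli storage core adapter list
```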


In order to identify where this problem will occur before patching, I used the following PowerCLI script:

$vmhosts = Get-VMHost | Sort-Object

foreach ($vmhost in $vmhosts) {

  $devices = Get-VMHostHba -VMHost $vmhost.Name | Where-Object { $_.Model -match "3530C|LPe1605|LPe12004|LPe12000|LPe12002|SN1000E" }

  foreach ($device in $devices) {

    Write-Output "$vmhost - $($device.Model) device with WWN $($device.PortWorldWideName)"

  }

}

This script checks the HBAs of all of your ESXi hosts and prints a line for each affected adapter with its model and WWN.

It's highly unusual for a device to become unsupported while patching a version of ESXi. As far as I can recall, we have only seen devices being discontinued between major or minor versions of ESXi, not when installing non-critical patches.

May 4, 2022

Horizon Client 8.5 crashing on Linux


After upgrading from version 8.4, the Horizon Client was unable to launch correctly. Launching it from the command line showed a segmentation fault.

I'm using Ubuntu 20.04 LTS, but other related distros may also be affected.


It turned out that Reddit user Zixyar had already found that you could solve this problem by editing the file /etc/pam.d/lightdm and uncommenting the line:

#session required pam_loginuid.so

After rebooting I was able to use the Horizon client 8.5 (2203-8.5.0-19586897) without problems.
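If you prefer to script the fix, uncommenting that line can be done with sed; this is a sketch (back up the file first, and note that the path assumes lightdm is your display manager):

```shell
# Back up lightdm's PAM config, then uncomment the pam_loginuid.so session line
sudo cp /etc/pam.d/lightdm /etc/pam.d/lightdm.bak
sudo sed -i 's/^#\s*\(session\s\+required\s\+pam_loginuid\.so\)/\1/' /etc/pam.d/lightdm
```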

May 17, 2021

Priority tagging of vSAN traffic


According to Cisco, "Class of Service (CoS) or Quality of Service (QoS) is a way to manage multiple traffic profiles over a network by giving certain types of traffic priority over others."

Note that there's also a similar technology called DSCP that can be used in more or less the same way.

When using a vSphere Distributed Switch it's possible to configure this and create fairly granular rules per Port Group. It's not at all limited to vSAN traffic even though that was our use case.


I was asked by the networking guys if we could enable this functionality for vSAN traffic by setting COS=3.


Identify the port group associated with the VMkernel adapter used by vSAN, choose Edit Settings / Advanced, and enable Traffic filtering and marking.

Then, at the Configure level for the port group, create a rule that tags the vSAN traffic with the desired CoS value.

Once it was turned on it was instantly visible to the networking guys, as they started seeing traffic within UC3 (Priority Group 3).
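If you want to verify the tagging from the ESXi side before asking the networking guys, pktcap-uw can capture the outgoing traffic at the uplink; the vmnic name below is an example, so substitute the uplink that carries your vSAN traffic:

```shell
# Capture outbound traffic on the uplink carrying vSAN traffic (vmnic2 is an example)
pktcap-uw --uplink vmnic2 --dir 1 -o /tmp/vsan-cos.pcap
# Open the capture in Wireshark and check the 802.1Q header's PCP field for the value 3
```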

April 19, 2021

Autoinstall physical NSX Edge with custom passwords


Setting up NSX Edge in an automated way with a custom password is a good idea, because by default you get a default password that must be changed at first login. If you're planning on using an extra strong password, setting it through iDRAC (or similar) can be a bit awkward. If you're using a non-English keyboard layout (like me), hitting the correct special characters can be even trickier.


1. We had a problem getting the physical Dell R640 server with Mellanox 25GbE nics to boot from PXE. It would say "Booting from PXE Device 1: Integrated NIC 1 Port 1 Partition 1 Downloading NBP file... NBP File downloaded successfully. Boot: Failed PXE Device 1: Integrated NIC 1 Port 1 Partition 1 No boot device available or Operating system detected. Please ensure a compatible bootable media is available."

2. VMware has provided us with a nice 19 step document that guides us through the needed steps for setting up everything we need. The optional step 16 of setting a non-default password is however a bit misleading (probably referring to an older version of NSX?) and doesn't quite work.


1. In order to get the physical server to PXE boot we had to change the boot mode from UEFI to BIOS.

2. I had a case open for months without a resolution. In the end I started studying the Debian manuals (the NSX Edge installer is based on Debian) and eventually found a working solution. It turned out that adding the following commands to preseed.cfg, right after the "d-i passwd/root..." line, gave a working config:

d-i preseed/late_command       string \
        in-target usermod --password 'insert non escaped password hash here' root;\
        in-target usermod --password 'non escaped password hash' admin
You will need to create the password hash using mkpasswd -m sha-512, as described in the original 19-step document.
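As a sketch, generating the hash looks like this (mkpasswd comes from the whois package on Debian/Ubuntu):

```shell
# Generate a SHA-512 password hash for preseed.cfg (prompts for the password)
mkpasswd -m sha-512
# The output has the form $6$<salt>$<hash>, which goes into the usermod --password lines
```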

April 15, 2021

vSAN critical alert regarding a potential data inconsistency and maintenance mode problems after upgrade to 7.0U1


Versions involved: 

VMware ESXi, 7.0.1, 17325551,  DEL-ESXi-701_17325551-A01

vCenter 7.0U1 Build 17491160

vCenter and the ESXi hosts were upgraded from 6.7U3 to 7.0U1c, and the vSAN disk format was upgraded to version 13.


After upgrading many clusters from 6.7U3 to 7.0U1c and upgrading the vSAN format to 13 we experienced a health warning after the upgrade.

The error message in Skyline Health was "vSAN critical alert regarding a potential data inconsistency"

For almost all clusters this error would fix itself within 60 minutes after the upgrade (typically in a much shorter time).

For one of our clusters this error did however stick and we were unable to put any hosts within this cluster in maintenance mode.

Trying to put a host in maintenance mode would fail after one hour. Before failing it would stall at a high percentage, between 80% and even 100%, with the message "Objects Evacuated xxx of yyy. Data Evacuated xxx MB of yyy MB".

It's worth mentioning that this cluster had an active Horizon environment running during the upgrade, and we suspect that its constant creation and removal of VMs contributed to this problem.


We found a KB article with a similar error message, even though we hadn't changed the storage policy of any VMs for a long time (though Horizon might have done something like that behind the scenes): https://kb.vmware.com/s/article/82383

The article states that this is a rare issue, but we found a Korean page referring to the same problem. The VMware KB article contains a Python script that you need to run on each affected host. After running it we were able to put hosts in maintenance mode and do 7.x single image patching.

We asked VMware support if it was a good idea that we had changed this setting and their response was "Yes, if you want the DeltaComponent functionality going forward then please change it back to 1. The delta component makes a temporary component when there are maintenance mode issues."

Because of this we decided to change the value back, and wrote a PowerShell script instead of running the Python script on each host:

param (

    [string]$clustername = $( Read-Host "Enter cluster name:" )

)

Get-Cluster $clustername | Get-VMHost | Get-AdvancedSetting -Name "VSAN.DeltaComponent" | Set-AdvancedSetting -Value 1 -Confirm:$false
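If you'd rather set this per host from the ESXi shell, the same advanced option should be reachable via esxcli; note that the /VSAN/DeltaComponent option path here is my assumption based on the PowerCLI setting name VSAN.DeltaComponent:

```shell
# Check the current value, then set it back to 1 (run on each ESXi host)
esxcli system settings advanced list -o /VSAN/DeltaComponent
esxcli system settings advanced set -o /VSAN/DeltaComponent -i 1
```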

As we've only found a single article on this issue (in Korean), I guess it is indeed quite rare, but if it happens again we now know what to do.

December 2, 2020

How to check BIOS Power management settings of ESXi hosts

The performance of your workload can be greatly affected by the power saving settings of your hosts. There are power saving settings in the vSphere client¹, in the BIOS of the hosts², and inside the VMs³. This can cause much confusion, and there are a number of articles related to this issue:
Select a CPU Power Management Policy
Performance Best Practices for VMware vSphere 6.7
Virtual machine application runs slower than expected in ESXi (1018206)

The root cause of the problem is that servers are normally shipped with a BIOS setting of Balanced power saving. This means that C-states are enabled so the CPUs can sleep whenever they are idle.

You can use the vSphere client to check the settings of your BIOS (ESXi host / Configure / Hardware - Power Management), and you can also configure how ESXi should treat power savings.

Note that in vSphere 7 this option has moved to ESXi host / Configure / Hardware - Overview - Power Management.
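The ESXi-side power policy (not the BIOS setting) can also be read from the host's shell via the /Power/CpuPolicy advanced option; this is a sketch, and if you want to change the value with `set -s`, check the accepted strings on your build first:

```shell
# Show the current ESXi CPU power policy (e.g. Balanced or High Performance)
esxcli system settings advanced list -o /Power/CpuPolicy
```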

From the example above we can see that P-states are also enabled on this system. P-states make turbo mode work when something requires extra performance but doesn't need all cores. Many systems, however, tend to come with only C-states enabled. The information shown in the vSphere client does not reveal which C-state levels are configured. C-states do not always have a severe impact, but since every system I have seen so far comes with all C-states enabled, performance will normally be affected if you see them in the vSphere client.

Both the Performance Best Practices document and the SQL Server Best Practices document agree that disabling C-states is the way to go (the OS Control mode setting in BIOS typically disables C-states and enables P-states). Earlier versions of these documents suggested disabling power saving functionality completely.

The document Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere Virtual Machines also suggests setting "Power Management Mode to Maximum Performance" in BIOS, disabling power management completely.

In the good old .NET-based vSphere client it was only possible to change these settings if either C- or P-states were available. In the HTML5 client you can set the options even if they are disabled, which doesn't really make much sense.

Many new servers now also come with a virtualization adapted predefined power scheme that you can choose in BIOS.

When you buy servers today it's also possible to specify this setting as the factory default, so all the servers arrive correctly preconfigured.

Even if all your servers at one point had their BIOS set to Full Performance, you may later find that not all servers perform equally well. I have seen that replacing a motherboard will normally result in a Balanced power saving setting (and degraded performance).

With Powershell you can easily identify servers that has C or P states enabled.
Get-VMHost | Sort | Select Name,
    @{ N='HW Support';

When you run it agains your clusters it will tell you if the ESXi hosts has any of these power states enabled in the BIOS:

C:\> get-biospowersettings.ps1
Name                          HW Support
----                          ----------
esxa001.mydomain.com          ACPI C-states
esxb001.mydomain.com          ACPI P-states
esxb002.mydomain.com          ACPI P-states
esxc044.mydomain.com          ACPI C-states
esxc047.mydomain.com          ACPI C-states

As we can see from this output, one of the hosts shows no value in the HW Support column. This means that it has power saving disabled in the BIOS. The ones with C-states probably have a default setting of Balanced (which gives poor performance), and the ones with P-states have probably been manually configured to take advantage of CPU turbo modes. For most workloads (and lowest latency) you will probably want to disable power saving in the BIOS and get a blank result here.