Intro
With the recent release of NSX-T 3.0 it's time to upgrade my lab.
In this article I'll be upgrading my existing lab setup to version 3.0. In a later article I will likely cover rebuilding the lab to make use of NSX on vDS, the new feature that is likely to become the standard configuration for vSphere-based deployments.
The Steps
Not surprisingly the steps are very similar to previous releases, so a lot of this post will be a duplicate of the 2.4 – 2.5 upgrade post, with the addition of a new step to add a secondary disk to the NSX Manager.
- Download the upgrade bundle
- Provision a secondary disk of exactly 100 GB capacity on all NSX Manager appliances.
- Upload the bundle to the NSX Manager
- Upgrade the Upgrade Coordinator
- Run Pre Checks
- Upgrade the NSX Edge Cluster
- Upgrade the hosts
- Upgrade the Management Plane
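Before working through these steps it's worth noting down the version you are starting from. In my lab that's a quick check from the NSX Manager CLI (NSXTMan01 is my manager's hostname; output omitted):
NSXTMan01> get version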
The Build
Download the Upgrade Bundle
The first step is to download the upgrade bundle.
This is not the full OVA file download, although you should probably get that as well. What we need is the .mub file, as shown below.
For production deployments make sure you check the operational impact of the upgrade before proceeding; the details are listed on the VMware website HERE.
There are obvious potential outages: upgrading the NSX Edge cluster will disrupt traffic through it, the hosts will need to go in and out of maintenance mode, and the management plane will be down during the upgrade of the managers.
There are also a few limitations on what is supported for an in-place upgrade; the following configurations are not supported:
- More than one N-VDS switch is configured on the host.
- More than 100 vNICs are configured on the host N-VDS switch.
- ENS is configured on the host N-VDS switch.
- CPU use for the hostd, nsxa, or the config-agent service is high.
- vSAN (with LACP) is configured on the host N-VDS switch.
- Layer 7 firewall rules or Identity Firewall rules are enabled.
- VMkernel interface is configured on the overlay network.
- Service Insertion has been configured to redirect north-south traffic or east-west traffic.
- A VProbe-based packet capture is in progress.
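Some of these conditions are easy to spot-check from the host itself. As a hedged example, for the VMkernel-on-overlay check you can list the vmk interfaces and see which switch or portset each one is attached to (comp01 is one of my lab hosts; this is a standard ESXi shell command):
# Show each vmk interface and the portset/switch it is attached to;
# a vmk bound to the N-VDS suggests a VMkernel interface on the overlay.
[root@comp01:~] esxcli network ip interface list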
Before proceeding make sure the system is healthy by checking the Dashboards from the NSX Manager home page.
Also ensure you have a valid backup. I've not configured backups on my lab yet, but for a production system make sure you have one before you proceed.
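Alongside the dashboards, the NSX Manager CLI gives a quick health read-out of the management cluster; get cluster status is a standard NSX CLI command (output omitted):
NSXTMan01> get cluster status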
Provision A Secondary Disk
This is a new step compared to the 2.4 – 2.5 upgrade: the NSX Manager now needs an additional 100 GB disk.
Log in to the vCenter where the NSX Manager VM is deployed, right-click the VM and select Edit Settings.
Select ADD NEW DEVICE and then Hard Disk.
Change the size to 100 GB and optionally change the Disk Provisioning configuration; for my lab I set everything to Thin Provision to save space.
Click OK.
At this point we can, if we choose, reboot the appliance to ensure that the new disk is detected by the Upgrade Coordinator when we get to that stage, or we can just wait and see if it is picked up later on. For this demo I am not going to reboot.
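If you'd rather script this step than click through the UI, a rough sketch with govc (the govmomi vSphere CLI) would look like the following. This assumes govc is installed with GOVC_URL and credentials already exported, and NSXTMan01 is my manager VM's name; treat it as an untested lab sketch rather than a recipe:
# Add a 100 GB secondary disk to the NSX Manager VM
# (govc creates thin-provisioned disks by default)
govc vm.disk.create -vm NSXTMan01 -name NSXTMan01/upgrade-disk -size 100G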
Upload the Upgrade Bundle
When you are ready to upgrade, log in to the NSX Manager web console, go to System, Upgrade, select Upload MUB file and then click Browse.
Browse to and select the upgrade file, then click UPLOAD.
While the file is being processed, log in to the NSX Manager console and run get service install-upgrade to check that the upgrade service is running.
NSXTMan01> get service install-upgrade
Service name: install-upgrade
Service state: running
Enabled on: 192.168.10.50
You can also run get service to check that all the other services are running correctly.
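The same check can be done over the NSX REST API if you prefer. The endpoint below is from the node API as I remember it, so treat the path as an assumption and verify it against the API docs (192.168.10.50 is my lab manager's IP):
# Query the install-upgrade service state over the API
# (-k skips certificate verification, fine for a lab; curl will prompt for the admin password)
curl -k -u admin https://192.168.10.50/api/v1/node/services/install-upgrade/status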
Upgrade the Upgrade Coordinator
Once the file is uploaded and extracted (which may take a while), click on BEGIN UPGRADE.
Accept the license agreement and then click Continue to start the upgrade of the Upgrade Coordinator.
Once the Upgrade Coordinator has been upgraded you will be returned to this screen. You could just click NEXT and carry on, but you should really run the pre checks first; clicking NEXT will give you a warning and the option to run them anyway, but for simplicity we can just click RUN PRE CHECKS from the upgrade screen.
Once the checks finish you can see the results in the three sections: Edges, Hosts and Management Nodes. If there are issues, click on the link for more details. As you can see, I have alerts for the Hosts and the Management Nodes.
From the PreCheck Issues screen I can click on the warning, change to the Edges and Hosts tabs to view any issues they may have, and then go ahead and resolve them. The only issue I have is simply telling me that I have not backed up the NSX Manager recently, so I will ignore it.
If you didn't add the extra 100 GB disk, or if the NSX Manager didn't detect it and needs a reboot, you will see the error message below.
Click OK to close the PreCheck window and return to the upgrade screen, then click on NEXT in the bottom right corner.
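If you want to keep an eye on the overall state outside the UI, the upgrade coordinator also exposes a consolidated status endpoint; again this is from the API docs as I recall them, so verify the path before relying on it:
# Consolidated upgrade status for all components (Edges, hosts, management nodes)
curl -k -u admin https://192.168.10.50/api/v1/upgrade/status-summary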
Upgrade Edge Clusters
The Edge upgrades are done in groups; each "Edge Group" consists of the Edge Nodes that are part of an Edge Cluster. The obvious reason why you have to upgrade the whole group is so that all Edge Nodes in the cluster end up running the same version. If you have multiple Edge Clusters each one forms a group, and the order in which the groups are upgraded can be changed. Since I only have a single Edge Cluster I don't have the option to reorder it 😛
There are a couple of settings that can be changed.
Upgrade order across groups
| Option | Description |
|---|---|
| Serial | Upgrade all the Edge upgrade unit groups consecutively. This selection is applied to the overall upgrade sequence. |
| Parallel | Upgrade all the Edge upgrade unit groups simultaneously. For example, if the overall upgrade is set to the parallel order, the Edge upgrade unit groups are upgraded together and the NSX Edge nodes are upgraded one at a time. This menu item is selected by default. |
The Pause upgrade condition has changed from the previous release, and there is now only one option that I can select.
| Option | Description |
|---|---|
| After each group completes | Select to pause the upgrade process after each Edge upgrade unit group finishes upgrading. |
I'm going to leave these at their default values and click START.
The Edge upgrade will start and can be paused if needed, although that's not recommended. Each Edge Node within the group will be upgraded one at a time.
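While it runs you can also follow progress from the NSX Manager CLI rather than the UI; get upgrade progress-status is the command documented for this (output omitted, it's fairly verbose):
NSXTMan01> get upgrade progress-status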
Once the upgrade is complete the status will show successful (hopefully).
The post check status will state that no checks have been performed, so we can click on RUN POST CHECKS.
We can see that there are no issues, so our next step is to click NEXT and proceed to the host upgrade.
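Before moving on you can also confirm each Edge node directly from its own CLI; both commands below are standard on Edge nodes (edge01 stands in for whatever your node is called):
edge01> get version
edge01> get managers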
Upgrade Hosts
The upgrade options for hosts are the same as for the Edges, with the exception of the Pause Upgrade Condition, which works as it did in previous releases and has two options.
Pause upgrade condition:

| Option | Description |
|---|---|
| When an upgrade unit fails to upgrade | Selected by default so that you can fix an error on the host and continue the upgrade. You cannot deselect this setting. |
| After each group completes | Select to pause the upgrade process after each host upgrade unit group finishes upgrading. |
We can also edit the settings for each Host Group by selecting it and clicking EDIT
With the edit group function we can change the order within the group to either Serial or Parallel, and change the upgrade mode to control whether or not the hosts are put into maintenance mode.
I won’t change the default settings here so I just click START from the Host upgrade screen.
With these settings the hosts will be upgraded one at a time, with each host placed into maintenance mode and any running VMs moved to the other hosts.
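You can watch this happen in vCenter, or spot-check a host from its shell; esxcli system maintenanceMode get simply prints Enabled or Disabled:
[root@comp01:~] esxcli system maintenanceMode get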
Once complete we can again run the Post upgrade checks.
We can also verify that the software on the hosts has been upgraded by logging in to the console and running the following command:
esxcli software vib list | grep nsx
[root@comp01:~] esxcli software vib list | grep nsx
nsx-adf              3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-cfgagent         3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-context-mux      3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-cpp-libs         3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-esx-datapath     3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-exporter         3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-host             3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-idps             3.0.0.0.0-6.7.15928665  VMware  VMwareCertified  2020-05-11
nsx-monitoring       3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-mpa              3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-nestdb           3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-netopa           3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-opsagent         3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-platform-client  3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-proto2-libs      3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-proxy            3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-python-gevent    1.1.0-15366959          VMware  VMwareCertified  2020-05-11
nsx-python-greenlet  0.4.14-15670904         VMware  VMwareCertified  2020-05-11
nsx-python-logging   3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-python-protobuf  2.6.1-15670901          VMware  VMwareCertified  2020-05-11
nsx-python-utils     3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-sfhc             3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-shared-libs      3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsx-vdpi             3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
nsxcli               3.0.0.0.0-6.7.15945993  VMware  VMwareCertified  2020-05-11
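If you only want to confirm a single component rather than scanning the whole list, esxcli can also query one VIB by name (nsx-host used here as an example):
[root@comp01:~] esxcli software vib get -n nsx-host | grep -E "Name|Version"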
Upgrade Management Plane
Next we will upgrade the NSX Manager. There are no configuration options here as I am only running a single manager in my lab environment; if this was production there are still no options, but it would show the three NSX Managers and upgrade them all. Now simply click START.
The NSX Manager console will be unavailable until the upgrade is complete.
During the upgrade you will receive the status message below as the NSX Manager reboots.
Once the NSX Manager has finished upgrading you can log back in and choose whether or not to join the VMware Customer Experience Improvement Program.
The upgrade page now shows the Management Nodes Upgrade Status as 'Successful'. You can also log in to the NSX Manager console again and run 'get service' to confirm the services are started. If you are running a cluster of NSX Managers, which you should be in production, then also run 'get cluster status'.
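For reference, this is what I run from the manager CLI at this point, adding 'get version' to confirm the new build (output omitted):
NSXTMan01> get version
NSXTMan01> get service
NSXTMan01> get cluster status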
If I go back to the System, upgrade screen I now see the upgrade as Complete.
New in NSX-T 3.0 is the User Interface Mode Toggle; I'll cover what this is in another blog post.
Also something to note is the lack of the Advanced Networking & Security page. VMware had stated that this would be removed in a future build, which is why they were advising not to use it to configure things such as Segments.
Summary
For NSX-T 3.0 the old NSX for vSphere license is no longer valid and cannot be used, so you will need an NSX-T 3.0 license. Luckily the product comes with an Eval license, so you can at least use it and get the hang of it without having one.
Unlike the 2.4 to 2.5 upgrade, the vCPU and RAM allocations were not increased as part of the upgrade. The manager does take a long time to come online due to the CPU constraints I have, but for the lab it's not an issue.