Welcome to Part 17 of the NSX-T Lab Series.
This is somewhat of a bonus post, as I'd already completed the NSX-T Lab build series, but since I now need to upgrade to 2.5 it makes sense to add it as part of the original series.
This is an in-place upgrade from 2.4 to 2.5.
The upgrade process can be broken down into several steps.
- Download the upgrade bundle
- Upload the bundle to the NSX Manager
- Upgrade the Upgrade Coordinator
- Run Pre Checks
- Upgrade the NSX Edge Cluster
- Upgrade the hosts
- Upgrade the Management Plane
- Upgrade the Policy Manager
Download the Upgrade Bundle
The first step is to download the upgrade bundle.
This is not the full OVA file download, although you should probably grab that as well. What we need is the MUB file as shown below.
For production deployments, make sure you check the operational impact of the upgrade before proceeding; the details are listed on the VMware website HERE.
The first of the two biggest changes is that the NSX messaging channel port from all transport nodes to the NSX Manager has changed from port 5671 to port 1234. Since my lab has no physical firewall this is not an issue.
The other big change is that the NSX Manager now requires more CPU and memory, so I'll have to tweak the settings for my lab to get it to run without eating all my precious resources.
| NSX-T Data Center 2.3 Appliance | Memory | vCPU | NSX-T Data Center 2.5 Appliance | Memory | vCPU |
|---|---|---|---|---|---|
| N/A | N/A | N/A | NSX Manager Extra Small VM | 8 GB | 2 |
| NSX Manager Small VM | 8 GB | 2 | NSX Manager Small VM | 16 GB | 4 |
| NSX Manager Medium VM | 16 GB | 4 | NSX Manager Medium VM | 24 GB | 6 |
| NSX Manager Large VM | 32 GB | 8 | NSX Manager Large VM | 48 GB | 12 |
Before proceeding make sure the system is healthy by checking the Dashboards from the NSX Manager home page.
Also ensure you have a valid backup. I've not configured backups on my lab yet, but for a production system make sure you have one before you proceed.
Upload the Upgrade Bundle
When you are ready to upgrade go to 'System' > 'Upgrade' and then 'Browse' to the MUB file.
Select the MUB file and then hit 'UPLOAD'.
While the file is being processed, log in to the NSX Manager console and run 'get service'. Check the listed services to ensure that all services that should be running are in fact running.
```
Last login: Tue Dec 3 12:42:55 2019
NSX CLI (Manager, Policy, Controller 18.104.22.168.0.13716579). Press ? for command list or enter: help
NSXTMan01> get service
Service name: cluster_manager
Service state: running

Service name: cm-inventory
Service state: running

Service name: controller
Service state: running
Listen address:

Service name: datastore
Service state: running

Service name: http
Service state: running
Session timeout: 1800
Connection timeout: 30
Redirect host: (not configured)
Client API rate limit: 100 requests/sec
Client API concurrency limit: 40
Global API concurrency limit: 199

Service name: install-upgrade
Service state: running
Enabled on: 192.168.10.50

Service name: liagent
Service state: stopped

Service name: manager
Service state: running
Logging level: info

Service name: mgmt-plane-bus
Service state: running

Service name: migration-coordinator
Service state: stopped

Service name: node-mgmt
Service state: running

Service name: node-stats
Service state: running

Service name: nsx-message-bus
Service state: running

Service name: nsx-upgrade-agent
Service state: running

Service name: ntp
Service state: running

Service name: policy
Service state: running
Logging level: info

Service name: search
Service state: running

Service name: snmp
Service state: stopped
Start on boot: False

Service name: ssh
Service state: running
Start on boot: True

Service name: syslog
Service state: running

Service name: telemetry
Service state: running

Service name: ui-service
Service state: running
```
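With that many services listed it's easy to miss one, so rather than eyeballing the output you can filter it with a short pipeline. This is just a sketch: it assumes you've pasted the 'get service' output into a local file, and the filename and sample content below are made up for illustration.

```shell
# Hypothetical sample: paste your own 'get service' output into this file.
cat > get_service.txt <<'EOF'
Service name: http
Service state: running
Service name: install-upgrade
Service state: running
Service name: liagent
Service state: stopped
EOF

# Print every service whose reported state is not "running".
awk '/^Service name:/ {name=$3}
     /^Service state:/ {if ($3 != "running") print name ": " $3}' get_service.txt
# -> liagent: stopped
```

Keep in mind that some services (liagent, snmp, migration-coordinator) are stopped by design, so not every hit is a problem.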
Upgrade the Upgrade Coordinator
Once the file is uploaded and extracted (which may take a while), click 'BEGIN UPGRADE'.
Accept the license agreement and then click 'Continue' to start the upgrade of the Upgrade Coordinator.
Run Pre Checks
Once the Upgrade Coordinator has been upgraded you will be returned to this screen. You could simply click 'Next' and carry on, but you should really run the pre checks first. They will also be offered if you click 'Next' (you get a warning and the option to run them), but for simplicity we can just click 'RUN PRE CHECKS' from the upgrade screen.
Click ‘RUN PRE CHECKS’
Once the checks finish you can see the results in the three sections: 'Edges', 'Hosts' and 'Management Nodes'. If there are issues, click on the link for more details. As you can see I have an alert for the Management Nodes.
From the PreCheck Issues screen I can click on the warning, and also switch to the Edges and Hosts tabs to view any issues they may have, then go ahead and resolve them. The issue I have with the Management Node is not really an issue; it's a warning that the communication port has changed to port 1234, so if I had a physical firewall I would need to ensure that the port is open. Since I have no firewall in my lab I can ignore this warning. Click 'OK' to close the PreCheck window and return to the upgrade screen, then click 'Next' in the bottom right corner.
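Since the pre check only warns about the port change, it can be worth confirming reachability yourself from a transport node. A minimal sketch using bash's built-in /dev/tcp redirection; the manager hostname here is hypothetical, so substitute your own:

```shell
# Sketch: check whether the NSX Manager is reachable on the new
# messaging port (1234). The hostname below is made up for illustration.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} blocked or closed"
  fi
}

check_port nsxtman01.lab.local 1234
```

Note that /dev/tcp is a bash feature, so this won't work under plain sh.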
Upgrade Edge Clusters
The Edge upgrades are done in groups; each "Edge Group" consists of the Edge Nodes that are part of an Edge Cluster. The obvious reason you have to upgrade as a group is so that all Edge Nodes in the cluster end up running the same version. If you have multiple Edge Clusters, each one forms a group, and the order in which the groups are upgraded can be changed. Since I only have a single Edge Cluster I don't have the option to reorder 😛
There are a couple of settings that can be changed.
Upgrade order across groups
| Option | Description |
|---|---|
| Serial | Upgrade all the Edge upgrade unit groups consecutively. This is selected by default and applies to the overall upgrade sequence. |
| Parallel | Upgrade all the Edge upgrade unit groups simultaneously. For example, if the overall upgrade is set to the parallel order, the Edge upgrade unit groups are upgraded together but the NSX Edge nodes within each group are upgraded one at a time. |
Pause upgrade condition
| Option | Description |
|---|---|
| When an upgrade unit fails to upgrade | Selected by default so that you can fix an error on the Edge node and continue the upgrade. You cannot deselect this setting. |
| After each group completes | Select to pause the upgrade process after each Edge upgrade unit group finishes upgrading. |
I’m going to leave these at their default values, click ‘START’
The Edge upgrade will start; it can be paused if needed, though that's not recommended. Each Edge Node within the group will be upgraded one at a time.
Once the upgrade is complete the status will show successful (hopefully).
The post check status will state that no checks have been performed, so we can click 'RUN POST CHECKS'.
We can see that there are no issues so our next step is to click ‘Next’ and proceed to the hosts upgrade.
The upgrade options for hosts are the same as for Edges, however we can also edit the settings for each Host Group by selecting it and clicking ‘Edit’
With the edit group function we can change the order within the group to either Serial or Parallel and change the upgrade mode to put the hosts into Maintenance or not.
I won’t change the default settings here so I just click ‘Start’ from the Host upgrade screen.
The default settings will upgrade one host at a time, placing each into maintenance mode and moving any running VMs to the other hosts.
Once complete we can again run the Post upgrade checks.
We can also verify that the NSX software on the hosts has been upgraded by logging in to the host console and running 'esxcli software vib list | grep nsx'

```
[root@Comp01:~] esxcli software vib list | grep nsx
nsx-adf              22.214.171.124.0-6.7.14664072    VMware  VMwareCertified  2019-12-03
nsx-aggservice       126.96.36.199.0-6.7.14664087     VMware  VMwareCertified  2019-12-03
nsx-cli-libs         188.8.131.52.0-6.7.14664172      VMware  VMwareCertified  2019-12-03
nsx-common-libs      184.108.40.206.0-6.7.14664172    VMware  VMwareCertified  2019-12-03
nsx-context-mux      220.127.116.11.0esx67-14664127   VMware  VMwareCertified  2019-12-03
nsx-esx-datapath     18.104.22.168.0-6.7.14663999     VMware  VMwareCertified  2019-12-03
nsx-exporter         22.214.171.124.0-6.7.14664087    VMware  VMwareCertified  2019-12-03
nsx-host             126.96.36.199.0-6.7.14663975     VMware  VMwareCertified  2019-12-03
nsx-metrics-libs     188.8.131.52.0-6.7.14664172      VMware  VMwareCertified  2019-12-03
nsx-mpa              184.108.40.206.0-6.7.14664087    VMware  VMwareCertified  2019-12-03
nsx-nestdb-libs      220.127.116.11.0-6.7.14664172    VMware  VMwareCertified  2019-12-03
nsx-nestdb           18.104.22.168.0-6.7.14664057     VMware  VMwareCertified  2019-12-03
nsx-netcpa           22.214.171.124.0-6.7.14664120    VMware  VMwareCertified  2019-12-03
nsx-netopa           126.96.36.199.0-6.7.14664047     VMware  VMwareCertified  2019-12-03
nsx-opsagent         188.8.131.52.0-6.7.14664087      VMware  VMwareCertified  2019-12-03
nsx-platform-client  184.108.40.206.0-6.7.14664087    VMware  VMwareCertified  2019-12-03
nsx-profiling-libs   220.127.116.11.0-6.7.14664172    VMware  VMwareCertified  2019-12-03
nsx-proxy            18.104.22.168.0-6.7.14664108     VMware  VMwareCertified  2019-12-03
nsx-python-gevent    1.1.0-9273114                    VMware  VMwareCertified  2019-06-25
nsx-python-greenlet  0.4.9-12819723                   VMware  VMwareCertified  2019-12-03
nsx-python-logging   22.214.171.124.0-6.7.14664072    VMware  VMwareCertified  2019-12-03
nsx-python-protobuf  2.6.1-12818951                   VMware  VMwareCertified  2019-12-03
nsx-rpc-libs         126.96.36.199.0-6.7.14664172     VMware  VMwareCertified  2019-12-03
nsx-sfhc             188.8.131.52.0-6.7.14664087      VMware  VMwareCertified  2019-12-03
nsx-shared-libs      184.108.40.206.0-6.7.14100719    VMware  VMwareCertified  2019-12-03
nsx-upm-libs         220.127.116.11.0-6.7.14664172    VMware  VMwareCertified  2019-12-03
nsx-vdpi             18.104.22.168.0-6.7.14664097     VMware  VMwareCertified  2019-12-03
nsxcli               22.214.171.124.0-6.7.14663983    VMware  VMwareCertified  2019-12-03
[root@Comp01:~]
```
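Rather than reading every version column by eye, you can filter for anything still on an old build. A sketch, using a made-up sample of the esxcli output and illustrative version strings, so adjust the release pattern to your target build:

```shell
# Hypothetical sample of 'esxcli software vib list | grep nsx' output;
# the version strings here are made up for illustration.
cat > vibs.txt <<'EOF'
nsx-adf    2.5.0.0-6.7.14664072   VMware  VMwareCertified  2019-12-03
nsx-host   2.5.0.0-6.7.14663975   VMware  VMwareCertified  2019-12-03
nsx-proxy  2.4.1.0-6.7.13716579   VMware  VMwareCertified  2019-08-01
EOF

# Print any nsx VIB whose version column does not start with the target release.
awk '$1 ~ /^nsx/ && $2 !~ /^2\.5\./ {print "stale:", $1, $2}' vibs.txt
# -> stale: nsx-proxy 2.4.1.0-6.7.13716579
```

On a real host you would pipe the esxcli output straight into the awk instead of using a saved file.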
Upgrade Management Plane
Next we will upgrade the NSX Manager. There are no configuration options here, as I am only running a single manager in the lab environment; if this were production there would still be no options, but the screen would show the three NSX Managers and upgrade them all. Now simply click 'Start'.
The NSX Manager console will be unavailable until the upgrade is complete.
During the upgrade you will receive the status message below as the NSX Manager reboots.
Once the NSX Manager has finished upgrading you can log back in and choose whether or not to join the VMware Customer Experience Improvement Program.
The upgrade page now shows the Management Nodes Upgrade Status as 'Successful'. You can also log in to the NSX Manager console again and run 'get service' to confirm the services have started. If you are running a cluster of NSX Managers, which you should be in production, then also run 'get cluster status'.
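For a production cluster, the same filtering trick used for 'get service' works on the cluster status output. The exact layout of 'get cluster status' varies between versions, so treat this as a sketch against a made-up paste of the output:

```shell
# Hypothetical paste of 'get cluster status' output (format varies by version).
cat > cluster_status.txt <<'EOF'
Group Type: MANAGER
Group Status: STABLE
Group Type: POLICY
Group Status: DEGRADED
EOF

# Flag any group whose status is not STABLE.
awk '/^Group Type:/ {g=$3}
     /^Group Status:/ {if ($3 != "STABLE") print g ": " $3}' cluster_status.txt
# -> POLICY: DEGRADED
```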
Upgrade the Policy Manager
This is the easiest step of them all, as it's not required.
As we were running NSX-T 2.4, the Policy Manager is part of the NSX Manager and so has already been upgraded.
In my lab, for some reason, my NSX license key was lost as part of the upgrade; to resolve this I simply added the key back in again.
The upgrade increased the appliance to 4 vCPU and 12 GB of RAM, so I will again need to reduce these to an acceptable level for my lab; the upgrade did not, however, add the reservations back in.
I’ll start with 2 vCPU and 10 GB of RAM and see how it behaves.
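If you rebuild the lab often, the resize can be scripted rather than clicked through in the vSphere UI. A dry-run sketch using govc (the open-source vSphere CLI): the VM name and sizes are my lab values, -c is the vCPU count, -m is memory in MB, and the echo prefixes mean nothing is actually changed until you remove them.

```shell
# Dry run: prints the govc commands instead of executing them.
# VM name and sizes are my lab values - adjust for yours, and power the
# manager off first, as CPU/RAM cannot be hot-reduced.
VM="NSXTMan01"; CPUS=2; MEM_GB=10
MEM_MB=$((MEM_GB * 1024))   # govc vm.change takes memory in MB
echo govc vm.power -off "$VM"
echo govc vm.change -vm "$VM" -c "$CPUS" -m "$MEM_MB"
echo govc vm.power -on "$VM"
```

Reservations would still need to be cleared separately if the upgrade re-adds them in your environment.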