Welcome to Part 12 of the NSX-T Lab Series. In the previous post, we deployed our first edge node via the NSX Manager web UI.
This post should really be part 11b, since it covers the same process, just in a different way: this time via an OVA deployment.
So why do it this way?
The VMware documentation says that if you prefer a more interactive deployment, you can deploy via the OVA.
This is certainly true, since you have more control over the allocation of NICs. Well, not more control exactly; it's just clearer which NICs you assign to which network. I also experienced an issue with an earlier release of NSX-T when I tried to deploy onto a collapsed node that was running as both compute and Edge with limited NICs, as that's often a common customer configuration. As I recall, the deployment via the web UI couldn't see the management port group, although it's likely I was doing something wrong, as it was the first time I'd played with NSX-T. Anyway, I'll revisit that particular configuration in another blog post, but for now let's get on and deploy our second Edge node.
Deployment of OVA files can be done directly from the host in the case of a standalone host, or via vCenter as we'll be doing here.
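If you'd rather script the deployment than click through the wizard, VMware's ovftool can drive the same process. The sketch below only builds and prints the command (a dry run); the VM name, port groups, datastore and vCenter path are my lab values, and the `--prop:` keys are assumptions based on the Edge OVA's OVF properties, so probe the OVA first (run `ovftool` against the OVA file with no target) to confirm them before running it for real.

```python
# Dry-run sketch: assemble an ovftool command mirroring the wizard steps.
# Names and paths are lab-specific examples; the --prop: keys are assumed
# OVF property names, so probe the OVA with ovftool first to verify them.
cmd = [
    "ovftool",
    "--name=NSXT-ESG02",
    "--deploymentOption=small",       # matches the 'small' size picked in the wizard
    "--datastore=Lab-Datastore",
    "--diskMode=thin",
    "--net:Network 0=Edge-Mgmt",      # management, the bottom interface in the wizard
    "--net:Network 1=Edge-Overlay",   # overlay VLAN
    "--net:Network 2=Uplink1-VLAN-160",
    "--net:Network 3=Uplink2-VLAN-170",
    "--prop:nsx_hostname=nsxt-esg02.lab.local",  # assumed property name
    "--prop:nsx_isSSHEnabled=True",              # assumed property name
    "nsx-unified-appliance-edge.ova",
    "vi://administrator@vsphere.local@vcenter.lab.local/Lab-DC/host/Edge-Cluster",
]
print(" ".join(cmd))  # dry run: print the command instead of subprocess.run(cmd)
```

Swap the `print` for `subprocess.run(cmd, check=True)` once you've confirmed the property names and paths against your own environment.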
Give the VM a name; as this is going to be the second Edge node in my cluster, it's NSXT-ESG02. Select the location and continue.
Set the destination compute resource
Review the details before the advanced configuration.
Select the size; since this is a lab and the other Edge node is small, I'll use small again.
Select the datastore and disk format
Now we get to the interesting part: we need to set the network mappings, and here it's far easier to see and control what is assigned to which interface. A point to note is that, by default, NIC 0 is at the bottom and the highest-numbered NIC is at the top. My management is the bottom interface, Network 0, mapped to Edge-Mgmt; then the overlay VLAN on Network 1; then the two uplink VLANs.
Next set a complex password or two.
CLI admin and audit usernames can be changed here if desired.
The rest of this section can be left blank. You can, if you like, add the NSX Manager IP; however, the Edge node will not join the management plane automatically. We'll need to log in to the console after deployment and add it manually, but if you want to add the IP here it won't hurt.
Configure the network settings and hostname
Add the DNS and NTP settings. As always, DNS is very important; we'll need to resolve the hostname/IP of the NSX Manager when we join the Edge node to the management plane later.
At this point you can enable SSH and root SSH logins. As this is a lab, let's turn all this good stuff on.
Review and finish the template deployment.
As this is a lab build, before I power on the VM I remove the CPU and memory reservations. As I've said before, do not do this in a production deployment. Now go ahead and power this sucker on.
So we have our second Edge node VM deployed, but the NSX manager knows nothing about it and cannot manage it so we need to add it to the management plane.
In order to do that, we need the NSX Manager certificate API thumbprint, and there are two ways to get it. We can simply connect to the command line of the NSX Manager and run the command 'get certificate api thumbprint'. This will give you an output similar to the one below. Now copy that character string; a tool such as PuTTY or Terminus will allow you to copy text, but if you connect via the VM web console then get typing 🙂
NSXTMan01> get certificate api thumbprint
629bcd13d1a555ef1f5b03d567307cb38d924df5e2e5f6de5849780336d0b188
Another way to get the thumbprint is from the NSX Manager web UI itself: simply go to the System Overview screen and click on the little box highlighted below. This displays the certificate thumbprint, allowing you to simply copy it.
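For reference, that 64-character string is the SHA-256 hash of the Manager's API certificate, so you can also compute it yourself if you've exported the certificate. A minimal sketch, assuming you have the certificate in DER form (the filename below is hypothetical):

```python
import hashlib

def api_thumbprint(der_bytes: bytes) -> str:
    """SHA-256 hex digest of the certificate's DER bytes, matching the
    format that 'get certificate api thumbprint' prints on the NSX CLI."""
    return hashlib.sha256(der_bytes).hexdigest()

# Example with placeholder bytes; in practice read the exported certificate:
#   der = open("nsx-manager-api.der", "rb").read()
print(api_thumbprint(b"placeholder-cert-bytes"))  # prints 64 hex characters
```

If you only have the certificate in PEM form, strip the header/footer lines and base64-decode the body to get the DER bytes before hashing.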
OK, so you have the thumbprint; now what?
Connect to the Edge node command line using your tool of choice and run the command 'join management-plane NSX-Manager.FQDN username admin password yourpassword thumbprint yourthumbprint', changing the italic text to your respective settings.
Once complete, you'll get the 'Node successfully registered as Fabric Node' message.
NSXT-ESG02> join management-plane NSXTMan01.lab.local username admin password xxxxxxxxxxxx thumbprint 629bcd13d1a555ef1f5b03d567307cb38d924df5e2e5f6de5849780336d0b188
Node successfully registered as Fabric Node: 28d8b60a-9743-11e9-b05d-005056a7a48e
If we now browse to 'System', 'Fabric', 'Nodes', 'Edge Transport Nodes', we can see our second Edge node, but we still need to configure it and add it to the transport zones. Click 'Configure NSX' under the Configuration State column.
First off, we need to add the Edge node to the transport zones, so select the TZ-Overlay, TZ-VLANA and TZ-VLANB zones and move them to the right, then go to the N-VDS tab to configure the switches.
The process here is basically the same as it is with the UI deployment method from this stage onwards and the diagram below shows what we need to configure for this node.
We need to configure the N-VDS settings for the N-VDS-Overlay switch, so select it from the dropdown, then select the nsx-edge-single-nic-uplink-profile. Next, set the IP Assignment to Use IP Pool and then pick the TEP IP pool. As the Edge node resides on a dedicated Edge host, we can use the same TEP pool as the compute hosts.
For the uplink we only have the one we can pick, but this is where it differs from the UI deployment: the options available are different. We do not see the Edge-Overlay-VLAN; instead we see fp-eth0, fp-eth1 and fp-eth2.
But there are 4 interfaces on the Edge node I hear you say.
Correct, there are, but remember the first interface (eth0) has already been assigned to the management network and given an IP, so it doesn't appear here.
Also, the next three steps will all show a single uplink, Uplink-1. The configuration will not place all the N-VDSs on a single uplink; each one is different. It's just that the uplink profile we are using only has a single Uplink-1 uplink, so it looks like we are configuring the same uplink each time, when in reality we are not.
Click '+ ADD N-VDS' and this time select N-VDS-VLANA as the switch name. The IP assignment will be greyed out, as the system recognises that this is a VLAN-backed switch, so there is no IP pool option to pick. All we need to do is map Uplink-1 to the fp-eth1 interface, which we have already mapped to the Uplink1-VLAN-160 port group on the vDS.
Click 'Add N-VDS' again to configure the VLANB interface and set Uplink-1 to fp-eth2, which we already mapped to the Uplink2-VLAN-170 port group, then click Save.
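If you'd rather script this step, the same N-VDS layout can be pushed through the NSX-T management API as part of a transport-node configuration. The sketch below only assembles and prints the JSON body; the switch and profile names are my lab values, the profile and pool IDs are placeholders, and the exact schema should be checked against the API guide for your NSX-T version.

```python
import json

# Sketch of a host_switch_spec matching the three UI steps above.
# Profile/pool IDs are placeholders; verify the schema against your
# NSX-T version's API guide before PUTting this anywhere.
host_switches = [
    {   # overlay switch: TEP IP from the pool, single uplink on fp-eth0
        "host_switch_name": "N-VDS-Overlay",
        "host_switch_profile_ids": [
            {"key": "UplinkHostSwitchProfile",
             "value": "nsx-edge-single-nic-uplink-profile-id"}  # placeholder ID
        ],
        "ip_assignment_spec": {
            "resource_type": "StaticIpPoolSpec",
            "ip_pool_id": "TEP-IP-Pool-id"                      # placeholder ID
        },
        "pnics": [{"device_name": "fp-eth0", "uplink_name": "uplink-1"}],
    },
    {   # VLAN-backed switches: no IP assignment, one uplink each
        "host_switch_name": "N-VDS-VLANA",
        "pnics": [{"device_name": "fp-eth1", "uplink_name": "uplink-1"}],
    },
    {
        "host_switch_name": "N-VDS-VLANB",
        "pnics": [{"device_name": "fp-eth2", "uplink_name": "uplink-1"}],
    },
]
body = {"host_switch_spec": {"resource_type": "StandardHostSwitchSpec",
                             "host_switches": host_switches}}
print(json.dumps(body, indent=2))  # body for the transport-node API call
```

Note how the two VLAN-backed switches carry no `ip_assignment_spec`, which mirrors the greyed-out IP assignment field in the UI.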
We've now deployed our second Edge node. We can see that it has been added to the three transport zones (Overlay, VLANA and VLANB), so it can now process Geneve traffic and connect north-south to the physical network.
The final step is to disable vMotion for the Edge nodes as it is not supported.
Select the cluster that hosts the Edge Nodes then go to ‘Configure’ and select ‘vSphere DRS’ then hit the ‘+ Add’ button.
Select the Edge Nodes and hit ‘NEXT’
Click 'Override' next to the DRS automation level, change it to 'Disabled', then click 'Finish'.
The next step is to configure our NSX Edge cluster.
NSX-T Lab Part:13 NSX-T Edge Cluster