Welcome to Part 9 of the NSX-T Lab Series. In the previous post, we configured a Host Transport Node Profile ready for use in configuring NSX-T on our compute cluster.
In this post we’ll use that profile to make our lives easier, but I’ll also cover configuring a host as a standalone.
Lab guide stage comparison.
So after 9 steps we are finally ready to install NSX-T; compare that to the NSX-V installation, which took 7 steps to reach this point.
But the NSX-V guide has an extra step covering adding a license, which we’ve not covered for NSX-T?
Correct. That’s because without a license you can’t do anything with NSX-V. With NSX-T that’s changed: it now starts with an evaluation license, which is a big help for anyone trying to learn NSX-T, as now anyone can run it in their lab. I will cover adding a license later on. Right, let’s get started.
I’ll start off by showing you how to configure a single host, and then we’ll cover configuring all the hosts in a cluster, where you’ll quickly see why we took the time to create a Host Transport Node Profile!
Single host build.
One of the big differences between NSX-V and NSX-T is that NSX-T gives us the ability to deploy it to just a single host. Care should be taken with this, however: remember that a VM attached to a logical switch/segment can only operate on a host where NSX-T is installed, so if that single host dies, the VM cannot be recovered onto a non-NSX-T-enabled host.
There are two places where we can configure a single host.
The first is under ‘System’ > ‘Fabric’ > ‘Nodes’. Note that the ‘Managed by’ selection shows ‘None: Standalone Hosts’ by default, and from here we can click ‘Add’.
Enter the host name and IP, select the correct OS from the drop-down list, then enter the username and password for the host and hit ‘Next’.
We get the usual thumbprint message.
Now we need to configure the Transport Zone that the host will belong to; for me that’s the TZ-Overlay Transport Zone. Select the N-VDS from the drop-down, then select the NIOC Profile and the Uplink Profile (the one we created earlier with two uplinks).
The LLDP Profile can be enabled or disabled; I’m setting mine to enabled.
Now we need to configure the TEP assignment, so select the IP pool we created in step 5. If you don’t have a pool, you can create one from here.
Enter the first free host NIC that will be assigned to the N-VDS (for me that’s vmnic1), then click ‘Add PNIC’ and enter the next free NIC.
Then hit ‘Finish’.
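As a side note, once the wizard completes, the host should also appear as a transport node in the NSX-T REST API, which is handy if you later want to script checks. The manager hostname and username below are lab placeholders (my assumptions, not values from this guide); /api/v1/transport-nodes is the NSX-T Manager API path for listing transport nodes. The sketch only prints the command, so you can paste it into a shell that can actually reach your manager:

```shell
#!/bin/sh
# Lab placeholders - substitute your own NSX Manager address and user.
NSX_MANAGER="nsx-manager.lab.local"
NSX_USER="admin"

# GET /api/v1/transport-nodes returns all transport nodes known to the manager.
# -k skips certificate validation (acceptable in a lab, not in production).
CMD="curl -k -u ${NSX_USER} https://${NSX_MANAGER}/api/v1/transport-nodes"

# Print the command rather than executing it, so the sketch works offline.
echo "$CMD"
```

Piping the JSON response through something like python -m json.tool makes the node list much easier to read.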
As my system is configured as part of a cluster managed by vCenter, the standalone host won’t appear on the Standalone Hosts page even if I configure it as such. To see the host deployment, I need to change ‘Managed by’ to my vCenter.
Here we can see the single host deployed with NSX-T.
From this vCenter screen we could select each host individually and configure them one at a time, and there may be circumstances where that’s needed, but a much more efficient way is to use the Transport Node Profile we created in the last step, so let’s do that now.
This time we select the cluster rather than a host and click ‘Configure NSX’.
Now select the Transport Node Profile we previously created and save.
That’s it, we are done!
I told you it was worth taking the time to set up the profile: we have now configured all the hosts in the cluster with the correct, matching settings in just one click, which can be repeated on any other clusters that will be in the same transport zone.
We can click on a host to see the status of the deployment.
That’s a lot of VMKs and NICs; I’ll cover what they are outside of this guide.
We can also get a visual representation of the N-VDS; at the moment it’s empty, with just the uplinks showing, since we haven’t configured any networking just yet.
From vCenter, all we can see at the moment is on the Physical Adapters page, which shows us the host NICs attached to our N-VDS, the N-VDS-Overlay.
If we SSH to the host we can run esxcfg-vswitch -l, which will show us the configured vSwitches. My host has three: vSwitch0 used for management, vSwitch1 used for iSCSI, and the N-VDS-Overlay, which is the NSX-T switch.
[root@Comp01:~] esxcfg-vswitch -l
Switch Name      Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0         2560       4           128               1600  vmnic0

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  Management Network  0        1           vmnic0

Switch Name      Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch1         2560       4           1024              9000  vmnic3

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  iSCSI               0        1           vmnic3

Switch Name      Num Ports  Used Ports  Uplinks
N-VDS-Overlay    2560       9           vmnic2,vmnic1
We can check the VMkernel interfaces by running esxcfg-vmknic -l.
What we are looking for here is the vxlan netstack. Yes, that’s right, vxlan, not Geneve, but don’t worry, we are still using Geneve here; the netstack simply kept the old name.
Note the IPs assigned to the vxlan VMKs are from the IP pool we created.
[root@Comp01:~] esxcfg-vmknic -l
Interface  Port Group/DVPort/Opaque Network      IP Family  IP Address     Netmask        Broadcast        MAC Address        MTU   TSO MSS  Enabled  Type    NetStack
vmk0       Management Network                    IPv4       192.168.10.24  255.255.255.0  192.168.10.255   00:50:56:a7:de:66  1600  65535    true     STATIC  defaultTcpipStack
vmk1       iSCSI                                 IPv4       192.168.30.24  255.255.255.0  192.168.30.255   00:50:56:63:f1:80  9000  65535    true     STATIC  defaultTcpipStack
vmk10      10                                    IPv4       10.150.1.19    255.255.255.0  10.150.1.255     00:50:56:64:57:eb  1600  65535    true     STATIC  vxlan
vmk11      11                                    IPv4       10.150.1.20    255.255.255.0  10.150.1.255     00:50:56:66:de:0c  1600  65535    true     STATIC  vxlan
vmk50      d573746d-ac79-4d98-924d-15cf16885d36  IPv4       169.254.1.1    255.255.0.0    169.254.255.255  00:50:56:61:e4:25  1500  65535    true     STATIC  hyperbus
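If you prefer esxcli over the older esxcfg tools, the same information can be pulled filtered by netstack, and you can also confirm the NSX kernel modules (VIBs) actually landed on the host. Both commands are standard ESXi CLI; since they only run on an ESXi host, this sketch just prints them for reference:

```shell
#!/bin/sh
# These commands only work on an ESXi host, so we print them for reference.

# List just the VMkernel interfaces on the vxlan netstack (i.e. the TEPs):
echo "esxcli network ip interface list -N vxlan"

# Confirm the NSX-T VIBs are installed (exact VIB names vary by version):
echo "esxcli software vib list | grep nsx"
```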
So we have confirmed the hosts have TEP IPs; now let’s check that they can actually use them to communicate across the overlay VLAN.
For this we’ll use the vmkping command with ++netstack=vxlan, -d to prevent fragmentation, and -s 1572 to push the packet size above the standard 1500-byte MTU.
Success, we can ping from one host to another.
[root@Comp01:~] vmkping ++netstack=vxlan 10.150.1.11 -d -s 1572
PING 10.150.1.11 (10.150.1.11): 1572 data bytes
1580 bytes from 10.150.1.11: icmp_seq=0 ttl=64 time=4.293 ms
1580 bytes from 10.150.1.11: icmp_seq=0 ttl=64 time=4.333 ms (DUP!)
1580 bytes from 10.150.1.11: icmp_seq=1 ttl=64 time=0.694 ms
1580 bytes from 10.150.1.11: icmp_seq=2 ttl=64 time=0.430 ms

--- 10.150.1.11 ping statistics ---
3 packets transmitted, 3 packets received, +1 duplicates, 0% packet loss
round-trip min/avg/max = 0.430/3.250/4.333 ms
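Why -s 1572? With -d set the packet can’t fragment, so the ICMP payload plus the 20-byte IPv4 header and 8-byte ICMP header must fit inside the 1600-byte MTU configured for the overlay. A quick sanity check of the arithmetic (the header sizes here are the IPv4/ICMP standards, not anything NSX-specific):

```shell
#!/bin/sh
# With -d (don't fragment), the largest ICMP payload that fits an MTU is:
#   MTU - IPv4 header (20 bytes) - ICMP header (8 bytes)
MTU=1600
IP_HDR=20
ICMP_HDR=8

MAX_PAYLOAD=$((MTU - IP_HDR - ICMP_HDR))
echo "Largest unfragmented ICMP payload for MTU ${MTU}: ${MAX_PAYLOAD} bytes"
```

That works out to 1572 bytes, exactly the -s value used above, and the 1580-byte replies in the output are that payload plus the 8-byte ICMP header.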
Now that we have NSX-T configured on our compute nodes, we are ready to create some Segments/Logical Switches for our VMs.
NSX-T Lab Part:10 NSX-T Segments