Welcome to Part 7 of the NSX-T Lab Series. In the previous post we created our Transport Zones, adding two VLAN TZs and one Overlay TZ.
It’s a bit of a cart-before-the-horse situation: I already knew how I was going to configure the Uplink Profile and the Edge node connectivity, so I knew I needed two VLAN TZs. They can be added at any time though, so if you are still figuring it out don’t worry; just go back to the previous step if you need more TZs.
In this post we’ll cover creating a custom Uplink Profile.
What’s the uplink profile for?
Good question, but first let’s make something very clear: an uplink is NOT a physical NIC.
A physical NIC can be single or bundled into a LAG (Link Aggregation Group) and is on the physical host.
Uplinks are logical interfaces on an N-VDS. I’ll cover this in more detail outside of this series, as there are a lot of configuration options available.
An N-VDS uplink will map to an individual physical NIC or a LAG.
An uplink profile allows you to configure a consistent set of capabilities and settings across transport nodes.
The uplink profile includes the settings that you want your network adapters to have, including the following:
- Teaming Policy
- Active and Standby NICs
- Transport VLAN ID
There are built-in, pre-defined Uplink Profiles already on the system, and I will use one of them for the Edge nodes. However, I will create a custom profile for the Compute transport nodes to enable me to have two TEPs per node, which will provide better throughput as both uplinks will be used. This is the configuration that 95% of my customers use, so it makes sense to configure it for this lab build.
So let’s create a custom Uplink Profile.
From the NSX console, under ‘System’ > ‘Fabric’ > ‘Profiles’, select ‘Uplink Profiles’ and click the +.
Give it a name and under Teaming Policy change it to ‘Load balance source’.
In the ‘Active Uplinks’ field we need to add two uplink names, separated by a comma. These can be anything you like as they are just labels; when we configure the transport nodes later we will pair them with the physical NICs.
Finally, set the Transport VLAN; in this case it’s the Overlay VLAN.
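For those who prefer automating this, the same profile can also be created through the NSX-T REST API with a POST to /api/v1/host-switch-profiles. Here is a minimal sketch of the request body; the display name, uplink labels and VLAN ID are assumptions for this lab, not values from the post.

```python
import json

# Sketch of an UplinkHostSwitchProfile request body for
# POST https://<nsx-manager>/api/v1/host-switch-profiles
# The display name, uplink labels and VLAN ID are lab assumptions.
payload = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "compute-uplink-profile",   # assumed profile name
    "teaming": {
        "policy": "LOADBALANCE_SRCID",          # 'Load balance source' in the UI
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 150,                      # assumed overlay VLAN ID
}

print(json.dumps(payload, indent=2))
```

The two entries in `active_list` correspond to the two uplink names entered in the UI, and the teaming policy string maps to the ‘Load balance source’ option selected above.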
- Failover Order:
An active uplink is specified along with an optional list of standby uplinks.
If the active uplink fails, the next uplink in the standby list replaces the active uplink.
No load balancing is performed with this option.
This will assign a single TEP.
Can be used on Edge, ESXi, and KVM nodes.
- Load Balanced Source:
A list of active uplinks is specified, and each interface on the transport node is pinned to one active uplink based on Source Port ID.
This configuration allows use of several active uplinks at the same time.
This will assign multiple TEPs; normally two uplinks are set, giving two TEPs.
Can be used on Edge and ESXi nodes.
- Load Balanced Source Mac:
This option determines the uplink based on the source VM’s MAC address.
This will assign multiple TEPs.
Can be used on ESXi nodes.
For named teaming policy on an Edge node, only Failover Order policy is supported.
- Link aggregation groups (LAGs) using Link Aggregation Control Protocol (LACP) for the transport network.
For LACP, multiple LAGs are not supported on KVM hosts.
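To make the ‘Load Balanced Source’ behaviour concrete, here is a toy sketch of pinning each virtual switch port to one active uplink by its port ID. This is not VMware’s actual N-VDS implementation (that logic is internal to the host switch); a simple modulo stands in for the real pinning, just to show that each port lands deterministically on one uplink and both uplinks end up carrying traffic.

```python
# Toy illustration of source-port-ID pinning on an N-VDS.
# The uplink labels match the two names defined in the uplink profile.
ACTIVE_UPLINKS = ["uplink-1", "uplink-2"]

def pin_uplink(port_id: int) -> str:
    """Pin a switch port to one active uplink (simple modulo sketch)."""
    return ACTIVE_UPLINKS[port_id % len(ACTIVE_UPLINKS)]

# Four VM ports end up spread across both uplinks, each pinned to one.
pinning = {port: pin_uplink(port) for port in range(4)}
print(pinning)
# {0: 'uplink-1', 1: 'uplink-2', 2: 'uplink-1', 3: 'uplink-2'}
```

This also shows why two TEPs appear with this policy: each active uplink terminates its own tunnel endpoint, so traffic pinned to either uplink has a TEP to use.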
If you don’t set the MTU here, the default of 1600 will be used. This is fine for my lab, but a lot of customer deployments use an MTU of 9000 on the network, so this should be set to ensure a consistent end-to-end MTU.
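The 1600-byte minimum comes from the overhead that Geneve encapsulation adds around a standard 1500-byte inner frame. A quick back-of-the-envelope check using the base header sizes (Geneve options can add more, which is why 1600 leaves headroom):

```python
# Why NSX-T wants at least 1600 MTU on the transport network:
# the Geneve outer headers wrap the entire inner Ethernet frame.
INNER_PAYLOAD  = 1500  # standard VM-facing MTU
INNER_ETHERNET = 14    # the inner frame's Ethernet header is carried too
GENEVE_BASE    = 8     # Geneve base header, before any options
OUTER_UDP      = 8     # Geneve runs over UDP
OUTER_IPV4     = 20    # outer IPv4 header

overhead = INNER_ETHERNET + GENEVE_BASE + OUTER_UDP + OUTER_IPV4
required = INNER_PAYLOAD + overhead
print(overhead, required)  # 50 1550; 1600 leaves headroom for Geneve options
```

So 1550 bytes is the bare minimum for an un-optioned Geneve packet, and the 1600 default gives a comfortable margin; on a jumbo-frame (9000) underlay the same 50+ bytes of overhead is simply absorbed.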
That’s it, our compute node uplink profile has been added.
For the Edge node uplink profile I’m going to use the pre-defined one called nsx-edge-single-nic-uplink-profile. This gives me a single active uplink using Failover Order. It will be set on each of the uplinks on the Edge, of which there will be two, each going to a different VLAN to allow ECMP to the physical network. But I’m getting ahead of myself; before we get there we need some transport nodes.
NSX-T Lab Part:8 NSX-T Transport Node Profile (Optional but recommended)
NSX-T Lab Part:9 NSX-T Host Transport Nodes