NSX-T 3.0 Lab Single N-VDS Edge Nodes

Intro

First off, I apologize for the huge blog post. When I started writing this I didn’t think it would end up so long, and splitting it into multiple parts just doesn’t seem right.
When I deployed my Edge Nodes during my 2.4 lab build the recommended design approach was to deploy three N-VDS switches on the Edge.
This changed in 2.5 to a recommended single N-VDS setup.
While the three N-VDS configuration is still valid and works, I wanted to change my lab to the single N-VDS design since that’s what I’ll be deploying for customers in my day-to-day job.

Lab Setup Overview

Before I get started with the build, I thought it would be a good idea to explain how my lab is configured, since it needs to simulate a physical environment and I want the BGP routing to fail when an uplink or TOR fails.
As my lab uses a nested configuration, I need to adjust how the Distributed Port Groups are configured to ensure the routing behaves as it would in a physical deployment.

The diagram below shows a basic overview of the nested and physical host configuration.

Referencing the diagram above, from the top down I have my lab L3 switch connected to the physical ESXi hosts’ pNICs, which in turn are mapped to the Lab VDS. There are two port groups configured.
Both are used by the Edge and Compute hosts, with one each for the virtual lab routers, configured as follows.

Nested-LANA – Connected to Lab-RouterXG and to vmnic2 of the Compute and Edge hosts

  • VLANs 0-157 – 150 being the Overlay VLAN
  • VLAN 160 – DC1 Uplink VLANA
  • VLAN 165 – DC2 Uplink VLANA (Future Use)

Nested-LANB – Connected to Lab-RouterXG02 and to vmnic3 of the Compute and Edge hosts

  • VLANs 0-157 – 150 being the Overlay VLAN
  • VLAN 170 – DC1 Uplink VLANB
  • VLAN 175 – DC2 Uplink VLANB (Future Use)

The reason for the different VLANs (160/165 and 170/175) between the two trunk port groups is how the Edge Nodes will map to them.
The N-VDS maps to the Edge VDS, which has Active/Standby NICs. If the port groups trunked all VLANs, as they would in a physical build, then when we simulate a TOR or uplink failure the BGP traffic for VLAN A, for example, would simply switch over to the standby uplink and still reach RouterA, as both port groups are part of the same VDS and would still pass the traffic. By limiting the allowed VLANs we prevent this behaviour: each connected NIC of the Edge maps to a port group that only allows the VLAN for the relevant BGP peer. So if NIC1 on an Edge host fails, traffic reverts to the standby NIC of the Edge VDS, which is connected to NIC2, which maps to the Nested-LANB port group; that port group doesn’t allow VLAN 160, so the traffic is dropped.

If we take a look at the nested Edge host VM we can see that there are two vNICs mapped to the Nested-LAN A and B port groups.

The nested Edge hosts have a VDS, DC1-Edge-VDS, with three port groups: Edge-Mgmt (VLAN 10), plus DC1-Trunk-A and DC1-Trunk-B with all VLANs trunked. The DC1-Trunks are configured as follows, with a scripted example after the list.

  • DC1-Trunk-A – Uplink 1 Active, Uplink 2 Standby
  • DC1-Trunk-B – Uplink 2 Active, Uplink 1 Standby
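
For anyone who prefers to script this, the trunk range and the Active/Standby order can be set on a distributed port group with pyVmomi. This is a rough sketch, not exactly what I ran; the vCenter address, credentials and uplink names are placeholders:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter details for illustration only.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_portgroup(name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    try:
        return next(pg for pg in view.view if pg.name == name)
    finally:
        view.Destroy()

pg = find_portgroup("DC1-Trunk-A")

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()

# Trunk all VLANs on the nested Edge VDS; on the physical Lab VDS the
# equivalent spec would restrict the ranges instead (e.g. 0-157 plus 160
# and 165 only on Nested-LANA) to force the BGP failure behaviour above.
setting.vlan = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
    vlanId=[vim.NumericRange(start=0, end=4094)], inherited=False)

# Uplink 1 active, Uplink 2 standby; reverse the lists for DC1-Trunk-B.
order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
    activeUplinkPort=["Uplink 1"], standbyUplinkPort=["Uplink 2"],
    inherited=False)
setting.uplinkTeamingPolicy = \
    vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        uplinkPortOrder=order, inherited=False)

spec.defaultPortConfig = setting
pg.ReconfigureDVPortgroup_Task(spec)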

Edge Deployment Overview

The diagram below shows what we are going to set up.
The Tier-0 Gateway will be configured with Uplink-1 on VLAN 160, which connects through to the simulated TOR-LEFT, and Uplink-2 on VLAN 170, which connects through to the simulated TOR-RIGHT.
It will have two TEP IPs which will use both uplinks for the Overlay traffic.
You can see on the DC1-Edge-VDS how the Active/Standby uplinks are configured. So, with the physical host VDS trunk configuration, if TOR-LEFT dies then the BGP adjacency for VLAN 160 will drop and routing will only occur via VLAN 170, as it would in a physical deployment.

The Build

Edge Uplink Profile

The first step is to create a new Uplink Profile for our Edge nodes; if you already have one that’s fine, you can just edit it.
Navigate to System, Fabric, Profiles, then click +ADD

Give it a name, then under Teamings change the Default Teaming to ‘Load Balance Source’ and enter names for your uplinks.
Next we will add two Named Teamings. Click + ADD, give it a name and make sure the ‘Teaming Policy’ is set to Failover Order; for the first one, set the first uplink as Active with no Standby Uplinks.
Repeat for the second one, this time using the second uplink.
Set the Transport VLAN, then click SAVE.
Make a note of the names you gave the Teamings as you will need them for the next step.

If we click the new profile we can view the configuration.
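
If you’d rather script this step, the same profile can be created through the NSX-T Manager API. Here’s a minimal Python sketch, assuming a manager at nsx.lab.local and the uplink and teaming names used above (adjust credentials and the transport VLAN to suit your lab):

import requests

NSX = "https://nsx.lab.local"           # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Edge-Uplink-Profile",
    # Default teaming: Load Balance Source across both uplinks.
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    # Two named teamings, each pinning traffic to a single uplink.
    "named_teamings": [
        {"name": "Uplink1-Only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
        {"name": "Uplink2-Only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    ],
    "transport_vlan": 150,  # the Overlay VLAN in this lab
}

r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False)
r.raise_for_status()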

VLAN Transport Zone

You can use the default VLAN Transport Zone or create a new one. To create one, navigate to System, Fabric, Transport Zones, then click + ADD

Enter a ‘Name’ and set the Switch Name if you want to, otherwise the system will generate one for you. Set the ‘Traffic Type’ to VLAN. Now we need to specify our Uplink Teaming Policy Names, so enter the two teaming policies you created in the last step, then click ADD
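
The API equivalent is another small POST; a sketch, assuming the same placeholder manager and the teaming names from the uplink profile above:

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

tz = {
    "display_name": "Edge-VLAN-TZ",
    "transport_type": "VLAN",
    "host_switch_name": "DC1-NVDS",  # optional; generated if omitted
    # The named teamings from the uplink profile created above.
    "uplink_teaming_policy_names": ["Uplink1-Only", "Uplink2-Only"],
}

r = requests.post(f"{NSX}/api/v1/transport-zones",
                  json=tz, auth=AUTH, verify=False)
r.raise_for_status()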

Edge Node

The next step is to deploy and configure our Edge nodes. Navigate to System, Fabric, Nodes, Edge Transport Nodes, then click + ADD EDGE VM

Enter a Name and the FQDN, then select the Form Factor; for my lab, Small is fine. You can also change the CPU reservation. Click NEXT

Enter the CLI and root passwords, optionally enable SSH and SSH root logins, then click NEXT

Select the ‘vCenter’ and ‘Cluster’, optionally the Resource Pool and host, select the ‘Datastore’ and click NEXT

Change the IP assignment to ‘Static’ and set the ‘Management IP’ and ‘Default Gateway’. Click to select the Management Interface, then enter the ‘Search Domain Names’, ‘DNS Servers’ and ‘NTP Servers’. Click NEXT

Set the ‘Edge Switch Name’, then click the drop-down menu and select both the Overlay and the VLAN transport zones.
Click the ‘Uplink Profile’ drop-down and select the profile we created earlier. Set the ‘IP Assignment’ to ‘Use IP Pool’, pick the pool from the ‘IP Pool’ drop-down, then scroll down.

Click the ‘Select Interface’ link for the first Uplink

Select the first trunk PortGroup for the first uplink.

Repeat for the second uplink this time picking the second trunk PortGroup.
Click FINISH

Repeat the process for additional Edge Nodes.

On your vCenter create an Anti-Affinity rule to keep the Edges on separate hosts.
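
A minimal pyVmomi sketch for that rule; the vSphere cluster name and the second Edge VM name here are assumptions for my lab:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find(vim.ClusterComputeResource, "DC1-Edge")       # placeholder name
edges = [find(vim.VirtualMachine, n)
         for n in ("DC1-NSXT-ESG01", "DC1-NSXT-ESG02")]      # assumed VM names

# Keep the two Edge VMs on separate hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="Separate-Edge-Nodes",
                                        enabled=True, vm=edges)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)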

Edge Cluster

Now create an Edge Cluster. Navigate to System, Fabric, Nodes, Edge Clusters and click + ADD

Enter a name, then select the Edge nodes and click the right arrow to add them. Click SAVE
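
Scripted, this is a single POST; a sketch assuming you have the transport node UUIDs of the two Edges (look them up with GET /api/v1/transport-nodes):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

cluster = {
    "display_name": "DC1-Edge-Cluster",
    "members": [
        {"transport_node_id": "<edge-node-1-uuid>"},  # from /api/v1/transport-nodes
        {"transport_node_id": "<edge-node-2-uuid>"},
    ],
}

r = requests.post(f"{NSX}/api/v1/edge-clusters",
                  json=cluster, auth=AUTH, verify=False)
r.raise_for_status()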

Edge Uplink Segments

Now we will configure the Uplink Segments for our Edge nodes.
Navigate to Networking, Segments and click ADD SEGMENT

Enter a ‘Segment Name’, select the VLAN Transport Zone from the drop-down, enter the VLAN ID for the TOR-LEFT uplink VLAN, then select the ‘Uplink Teaming Policy’ for the first uplink.
Click SAVE

Click NO when asked if you want to continue configuring the Segment.

Repeat for the second uplink, this time using the second uplink VLAN ID (used for the TOR-RIGHT uplink) and selecting the ‘Uplink Teaming Policy’ for the second uplink.
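
Via the Policy API both uplink segments are a couple of PATCH calls; a sketch using this lab’s VLANs and teaming names (the segment IDs are placeholders, and the transport zone path needs your VLAN TZ’s UUID):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")
TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
           "transport-zones/<vlan-tz-uuid>")

# One segment per uplink VLAN, each pinned to a named teaming policy.
segments = {
    "DC1-T0-Uplink-A": ("160", "Uplink1-Only"),
    "DC1-T0-Uplink-B": ("170", "Uplink2-Only"),
}

for seg_id, (vlan, teaming) in segments.items():
    body = {
        "display_name": seg_id,
        "transport_zone_path": TZ_PATH,
        "vlan_ids": [vlan],
        "advanced_config": {"uplink_teaming_policy_name": teaming},
    }
    r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{seg_id}",
                       json=body, auth=AUTH, verify=False)
    r.raise_for_status()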

Tier-0 Gateway

If you don’t already have a Tier-0 Gateway then add one by navigating to Networking, Tier-0 Gateways and clicking ADD GATEWAY.
Enter a name and select the HA Mode, Active-Active for me as this is an ECMP build, then select the ‘Edge Cluster’ from the drop-down.
Click SAVE.

Expand the ‘Interfaces’ section and click Set

Enter a ‘Name’; the Type is External. Enter the IP address in CIDR format. Set the ‘Connected To Segment’ to the first Edge uplink segment, set the ‘Edge Node’ to the first Edge Node VM, set the MTU, and set the ‘URPF Mode’ to None for ECMP.
Click SAVE.

Configure a second Interface for Uplink 2 of Edge Node 1.

Repeat the process for the other Edge nodes.
Click CLOSE
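
Behind the scenes each interface is a small Policy API object. Here’s a sketch for Edge Node 1’s first uplink interface, assuming a Tier-0 ID of Tier-0-GW01 and a locale-services ID of default (the UI may generate its own locale-services ID, so check with a GET on /policy/api/v1/infra/tier-0s/Tier-0-GW01/locale-services first):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")
BASE = f"{NSX}/policy/api/v1/infra/tier-0s/Tier-0-GW01/locale-services/default"

interface = {
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/DC1-T0-Uplink-A",  # first uplink segment
    "subnets": [{"ip_addresses": ["10.160.1.11"], "prefix_len": 24}],
    # edge_path points at the Edge node by its index within the cluster.
    "edge_path": ("/infra/sites/default/enforcement-points/default/"
                  "edge-clusters/<edge-cluster-uuid>/edge-nodes/0"),
    "mtu": 1500,
    "urpf_mode": "NONE",  # required for this ECMP design
}

r = requests.patch(f"{BASE}/interfaces/EN1-Uplink1",
                   json=interface, auth=AUTH, verify=False)
r.raise_for_status()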

Configure the BGP settings.

Click the ‘BGP Neighbors’ link in the bottom right to add your TOR-LEFT and TOR-RIGHT BGP neighbors.
Click ADD BGP NEIGHBOR and configure the IP Address and Remote AS Number. For me that’s the VLAN 160 and 170 IPs of the TORs.
Click CLOSE
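
The two neighbors map to two small Policy API PATCH calls; a sketch with this lab’s TOR addresses and AS number (gateway and neighbor IDs are placeholders):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")
BGP = (f"{NSX}/policy/api/v1/infra/tier-0s/Tier-0-GW01/"
       "locale-services/default/bgp")

# TOR-LEFT on VLAN 160 and TOR-RIGHT on VLAN 170, both in AS 65000.
for nid, addr in (("TOR-LEFT", "10.160.1.1"), ("TOR-RIGHT", "10.170.1.1")):
    body = {"neighbor_address": addr, "remote_as_num": "65000"}
    r = requests.patch(f"{BGP}/neighbors/{nid}", json=body,
                       auth=AUTH, verify=False)
    r.raise_for_status()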

Open the ‘ROUTE RE-DISTRIBUTION’ section and click Set

Click ADD ROUTE RE-DISTRIBUTION, enter a ‘Name’ and click Set

Tick the settings you want and click APPLY

Click ADD then APPLY.
Then on the Tier-0 screen click SAVE and CLOSE EDITING.
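
In the Policy API the named redistribution rules live on the Tier-0’s locale-services; a sketch, assuming the same placeholder IDs as before (the redistribution types shown are examples, pick whatever you ticked in the UI):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

body = {
    "route_redistribution_config": {
        "redistribution_rules": [
            {"name": "t0-t1-routes",
             # Example types only; match these to your environment.
             "route_redistribution_types": [
                 "TIER0_CONNECTED", "TIER1_CONNECTED", "TIER1_NAT"]},
        ]
    }
}

r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/Tier-0-GW01/"
                   "locale-services/default",
                   json=body, auth=AUTH, verify=False)
r.raise_for_status()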

Tier-1 Gateway

Edit or add a Tier-1 Gateway. Navigate to Networking, Tier-1 Gateways and click ADD TIER-1 GATEWAY. Enter a ‘Name’, set the ‘Linked Tier-0 Gateway’ to our Tier-0 and select the ‘Edge Cluster’ from the drop-down.
Now expand ‘Route Advertisement’ and select the desired settings, then click SAVE.
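
As a sketch, the same Tier-1 can be declared with one Policy API PATCH (the gateway IDs and the advertisement type here are illustrative, and the Edge cluster attachment lives on the Tier-1’s own locale-services, which I’ve omitted):

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

t1 = {
    "display_name": "Tier-1-GW01",
    "tier0_path": "/infra/tier-0s/Tier-0-GW01",
    # Advertise connected segments so the Tier-0 can redistribute them into BGP.
    "route_advertisement_types": ["TIER1_CONNECTED"],
}

r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/Tier-1-GW01",
                   json=t1, auth=AUTH, verify=False)
r.raise_for_status()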

VM Overlay Segments

I won’t cover this again as this post is already long enough, but I need to configure my test app segments; for a guide on this, follow my NSX-T Lab: Segments post.

Testing the BGP

I first need to check that my virtual routers are peered and are seeing the routes from the Tier-1.
Lab-router-XG on VLAN 160 is all good and shows two paths, one via 10.160.1.11 and the other via 10.160.1.12.

Lab-router-XG02 on VLAN 170 is also good and shows two paths, one via 10.170.1.11 and the other via 10.170.1.12.

From the Edge node perspective, I connect to Edge Node 1 and run the command ‘get logical-router’.

DC1-NSXT-ESG01> get logical-router
Logical Router
UUID VRF LR-ID Name Type Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666 0 0 TUNNEL 4
4a9917a9-4226-4d51-94b5-a2ae6df67a35 5 3075 SR-Tier-0-GW01 SERVICE_ROUTER_TIER0 7
efe84351-3724-4ad8-8475-d0e55ede853c 7 3077 SR-Tier-1-GW01 SERVICE_ROUTER_TIER1 5
184b4aa1-2409-4fc9-8466-857b7a9b84c6 8 1025 DR-Tier-1-GW01 DISTRIBUTED_ROUTER_TIER1 6
bbce87be-f962-456d-8b46-49b51e75960d 9 4 DR-Tier-0-GW01 DISTRIBUTED_ROUTER_TIER0 4

For the Tier-0 I type ‘vrf 5’, then ‘get bgp neighbor summary’.
This shows me two neighbours, 10.160.1.1 and 10.170.1.1, so all good so far.

DC1-NSXT-ESG01> vrf 5
DC1-NSXT-ESG01(tier0_sr)> get bgp neighbor summary
BFD States: NC - Not configured, AC - Activating,DC - Disconnected
AD - Admin down, DW - Down, IN - Init,UP - Up
BGP summary information for VRF default for address-family: ipv4Unicast
Router ID: 10.160.1.11 Local AS: 65100
Neighbor AS State Up/DownTime BFD InMsgs OutMsgs InPfx OutPfx
10.160.1.1 65000 Estab 01:26:18 NC 106 106 1 7
10.170.1.1 65000 Estab 01:26:45 NC 104 103 1 5
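
The same neighbour state can also be pulled over the Policy API instead of the Edge CLI; a quick sketch, using the same placeholder manager and gateway IDs as earlier:

import requests

NSX = "https://nsx.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

url = (f"{NSX}/policy/api/v1/infra/tier-0s/Tier-0-GW01/"
       "locale-services/default/bgp/neighbors/status")
status = requests.get(url, auth=AUTH, verify=False).json()

# Print each peer and its BGP connection state (e.g. ESTABLISHED).
for n in status.get("results", []):
    print(n.get("neighbor_address"), n.get("connection_state"))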

To show the BGP routes I type ‘get route bgp’.
Here I can see the Inter-SR (isr) routes, which are the connected segments on the Tier-1 and the internal connection from the Tier-1 to the Tier-0. I can also see two BGP routes to 192.168.88.0/24, which is my home LAN.

DC1-NSXT-ESG01(tier0_sr)> get route bgp
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP,
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, 
> selected route, * - FIB route
Total number of routes: 6
isr> * 10.0.1.0/24 [20/0] via 100.64.80.1, downlink-335, 00:14:28
isr> * 10.0.2.0/24 [20/0] via 100.64.80.1, downlink-335, 00:14:28
isr> * 10.0.3.0/24 [20/0] via 100.64.80.1, downlink-335, 00:14:28
isr> * 10.160.1.12/32 [200/0] via 169.254.0.131, inter-sr-299, 01:19:24
isr> * 10.170.1.12/32 [200/0] via 169.254.0.131, inter-sr-299, 01:19:01
b > * 192.168.88.0/24 [20/0] via 10.170.1.1, uplink-306, 01:26:26
b > * 192.168.88.0/24 [20/0] via 10.160.1.1, uplink-303, 01:26:26

Now I want to test what happens when I fail a TOR or uplink.
To do this I’ll disconnect an uplink on my Edge Host VMs.

Fail Edge Host 01 Uplink-1

Let me first fail a single host uplink. Since the Edge Nodes have an anti-affinity rule keeping them on separate hosts, this should only affect a single Edge node. I edit the nested VM and disconnect the Nested-LANA NIC.

After the BGP timers have run I can check the routing. Lab-router-XG02 on VLAN 170 shows the same routes, but Lab-router-XG on VLAN 160 now only shows routes via 10.160.1.12, which is the Edge Node 2 uplink IP. This is as I expected 🙂

If I look at the Tier-0 BGP Neighbors I see a status of Success, however if I click on the highlighted ‘i’

I can select each Edge Node from the drop-down at the top.
The name shown is the system name, so check the Source Address; here it is 10.160.1.12, which is Edge Node 2 on Edge Host 2, and the Connection State is Established as expected.

Change the Edge node and it’s a different story.
The Connection State is Active as it can’t establish a peering because we disconnected the NIC, as expected. This is why I changed the trunking on the Lab VDS to only allow the specific uplink VLANs, so it’s working as designed.

The final step is to disconnect the same NIC on Edge Host 2.
I could also turn off Lab-Router-XG, but remember the whole point is to make sure the Edge Node can’t get to the router via the standby interface on the Lab VDS.

No change, as expected, on Lab-Router-XG02 VLAN 170, but on Lab-Router-XG VLAN 160 we have now lost our BGP Neighbor 10.160.1.11.

We also have no BGP routes 🙂

From NSX we can see that we now have a ‘Down Status’ for 10.160.1.1

From the Tier-0 SR we can also see that 10.160.1.1 is in the Active state.

DC1-NSXT-ESG01(tier0_sr)> get bgp neighbor summary
BFD States: NC - Not configured, AC - Activating,DC - Disconnected
AD - Admin down, DW - Down, IN - Init,UP - Up
BGP summary information for VRF default for address-family: ipv4Unicast
Router ID: 10.160.1.11 Local AS: 65100
Neighbor AS State Up/DownTime BFD InMsgs OutMsgs InPfx OutPfx
10.160.1.1 65000 Activ 00:36:32 NC 116 121 0 0
10.170.1.1 65000 Estab 02:16:00 NC 153 153 1 5

The routes don’t look much different, except we have lost 192.168.88.0/24 via 10.160.1.1.

DC1-NSXT-ESG01(tier0_sr)> get route bgp
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP,
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, > selected route, * - FIB route
Total number of routes: 6
isr> * 10.0.1.0/24 [20/0] via 100.64.80.1, downlink-335, 01:04:50
isr> * 10.0.2.0/24 [20/0] via 100.64.80.1, downlink-335, 01:04:50
isr> * 10.0.3.0/24 [20/0] via 100.64.80.1, downlink-335, 01:04:50
isr> * 10.160.1.12/32 [200/0] via 169.254.0.131, inter-sr-299, 02:09:46
isr> * 10.170.1.12/32 [200/0] via 169.254.0.131, inter-sr-299, 02:09:23
b > * 192.168.88.0/24 [20/0] via 10.170.1.1, uplink-306, 00:37:48

After reconnecting the Edge hosts’ NICs, routing is established again.

So the lab worked as expected, and we now have a single N-VDS Edge configuration simulating a physical environment’s BGP failure behaviour!
