NSX-T Layer 2 Bridging

Intro

NSX-T Layer 2 bridging allows us to connect, or ‘bridge’, a VLAN-backed port group or physical device to an NSX-T Overlay network.

Why would we want to do this? Well, it’s a common use case for migrating workloads into NSX-T Overlay networks, or for providing connectivity between physical servers and Overlay-backed VMs while having them all live on the same layer 2 network.

Let’s focus on the migration use case. There are several ways to migrate workloads from VLAN-backed port groups into NSX-T Overlay networks.

  • Option 1: Simply re-IP the VMs and place them on a new Overlay network.
  • Option 2: Deploy new workloads to the new Overlay network and age out the old ones.
  • Option 3: Do a big bang migration and move all VM’s to the Overlay network at the same time.
  • Option 4: Create a VLAN bridge, migrate at your leisure, and then either move the Gateway or not.

Option 1 is often not possible as it requires a lot of manual changes to potentially hundreds of VMs, and there are still applications with hard-coded IPs that make this option impossible.

Option 2 again requires a lot of work, as well as time, which is not always desirable.

Option 3 is the preferred approach since it doesn’t require any bridging, however it does require that all VMs are moved at the same time. This option is also not possible if any physical workloads are to remain on the same Layer 2 network as the VMs we are moving; in that case a bridge needs to be created and left in place permanently to allow the connectivity.

Option 4 is what we are going to look at in this blog. Bridges are often used temporarily during the migration, however they can also be configured and left running permanently if needed.

While the bridge is in place the default gateway will stay on the physical network. It can be moved to NSX before completing the migration, but it’s often left to the final cutover stage. Moving the gateway is not a requirement if the bridge is to be left in place.

Pre-requisites

In order to set up the bridge there are a few things we need to have in place; the first is an Edge cluster.
The bridge can be deployed using an existing Edge cluster, and this can be the same one used by your N/S T0, which is what I’ll be using in this blog. However, you can also use a dedicated Edge cluster just for bridging, which may be desirable if the load on the bridge is expected to be high.
By using dedicated Edge nodes we guarantee resources for the bridged traffic. For customer designs this is often the approach I take, and if the bridge is to remain in place long term I’d also use a dedicated Edge cluster.

We also need to have a T0 configured with routing set up to the physical network so that we can switch the N/S traffic to go via NSX-T at the end of the migration.

Finally, we need some Overlay Segments to configure for bridging, and these need to be connected to a Gateway, either a T0 or a T1 (a T1 in my case).
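
If you like to double check things from the command line, the quick sketch below uses curl against the NSX Manager APIs to confirm the Edge cluster, T0 and Segments exist. The manager hostname nsx.lab.local and the admin user are placeholders for my lab, so adjust as needed; the UI works just as well.

# Sketch only: confirm the prerequisite objects exist via the NSX Manager APIs.
# nsx.lab.local and the admin user are placeholders - adjust for your environment.
NSX="https://nsx.lab.local"

# Edge clusters (management plane API)
curl -k -u admin "$NSX/api/v1/edge-clusters"

# Tier-0 gateways (Policy API)
curl -k -u admin "$NSX/policy/api/v1/infra/tier-0s"

# Overlay Segments we plan to bridge (Policy API)
curl -k -u admin "$NSX/policy/api/v1/infra/segments"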

The Starting Point

My starting point is a three-tier app consisting of the following:

  • Four Web servers on the 10.0.1.0/24 network VLAN 110
  • Four Application servers on the 10.0.2.0/24 network VLAN 120
  • Two database servers on the 10.0.3.0/24 network VLAN 130

Half the servers in each tier are on a VLAN port group; the other half are on NSX-T Overlay segments.

The default gateways for each network are on a physical switch and the T1 interface for each is configured with a .5 IP.

The 3 tier apps are deployed as vApps but this makes no difference to the bridging.
Shown below are the VLAN backed vSphere vDS port groups.

Here are the Overlay NSX Segments as shown in the vApp in vCenter.

The goal is to migrate the VMs that live on the VLAN-backed port groups onto the NSX Segments and then move the gateways from the physical network to the NSX-T T1 gateway.

The Build

Trunk Port Group

OK, let’s get started. First off, we need a trunk port group on our vCenter vDS. This trunk will carry all the VLANs we need to bridge and will be connected to our Edge nodes on the fp-eth2 interface, which is currently unused.

My lab is set up with a compute cluster and an Edge cluster, each with its own vDS; however, the setup will also work with a collapsed cluster and a shared vDS. Just make sure you create this trunk port group on the vDS where your Edges are connected.

I’ve created a new port group called Bridge-Trunk, configured the VLAN type as VLAN trunking and then added in the VLANs needed.

On the Teaming and failover tab I’ve configured Uplink 2 as standby

Next we need to enable Promiscuous Mode and Forged Transmits.

Lastly, on the ESXi hosts where the NSX-T Edge node virtual machines are running we must enable the reverse path filter check for promiscuous mode by issuing the following esxcli command:

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

After running the above command we need to disable and re-enable Promiscuous mode on the Bridge-Trunk port group so that the reverse filter becomes active.

To do this, edit the port group, go to Security and change Promiscuous mode to Reject, click save, then repeat the edit and change it back to Accept.
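
To confirm the setting has actually taken effect on each host, you can read the value back with the matching list command:

# Read the value back on each ESXi host running an Edge VM (Int Value should show 1)
esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc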

Bridge TZ

Next we need to create a new Transport Zone for our Bridge. Navigate to System, Fabric, Transport Zones and click ADD ZONE.

Give it a Name, set the Traffic Type to VLAN and click ADD
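
If you prefer the API route, something like the sketch below should create the same VLAN transport zone. I’m assuming the NSX-T 3.x Policy API path and the VLAN_BACKED tz_type here, so check it against your version’s API guide before relying on it.

# Sketch: create a VLAN-backed transport zone called Bridge-TZ via the Policy API.
# Path and tz_type value assume the NSX-T 3.x Policy API - verify for your version.
curl -k -u admin -X PUT -H "Content-Type: application/json" \
  -d '{"display_name": "Bridge-TZ", "tz_type": "VLAN_BACKED"}' \
  "https://nsx.lab.local/policy/api/v1/infra/sites/default/enforcement-points/default/transport-zones/Bridge-TZ"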

Reconfigure the Edge Nodes

We can now reconfigure our Edge nodes. Navigate to System, Fabric, Nodes, Edge Transport Nodes, select the first Edge node and click EDIT

Click ADD SWITCH

Give it a Name or just use the default, then select the Bridge Transport Zone. For the Uplink Profile select the nsx-edge-single-nic-uplink-profile. Finally, in the Teaming Policy Uplink Mapping section click Select Interface

Select the Bridge-Trunk port group we created earlier and click SAVE

Click SAVE. Now repeat for the remaining Edge node/s.

Edge Bridge Profile

An Edge bridge profile makes an NSX Edge cluster capable of providing layer 2 bridging to a segment. 

Navigate to Networking, Segments, Profiles, Edge Bridge Profiles and click ADD EDGE BRIDGE PROFILE.

Give it a Name and select the Edge Cluster, then select the Primary Node; this is the Edge node that will own and run the bridge. Optionally, and recommended, select the Backup Node. Finally, select the Fail Over mode, Preemptive or Non-Preemptive, and click SAVE.

If you set the failover mode to be preemptive and a failover occurs, the standby node becomes the active node. After the failed node recovers, it becomes the active node again.

If you set the failover mode to be non-preemptive and a failover occurs, the standby node becomes the active node. After the failed node recovers, it becomes the standby node.

Why would you use these settings? Well, let’s say you want to bridge two VLANs and balance the load across two different Edge nodes, so:

VLAN A uses EN1 as Primary and EN2 as Backup.

VLAN B uses EN2 as Primary and EN1 as Backup.

To do this, just create two bridge profiles with opposite node settings and configure the bridge segments to use each of the profiles. This will balance the load evenly; however, if you have configured the bridge profiles to be non-preemptive, then after a failure of an Edge node that node will not become primary again. At that stage you have lost the balance, as you are now using the same Edge node to bridge both VLANs, so the Preemptive setting ensures an even load before and after failover recovery.

It does mean a small outage while the bridge moves back to the original node, but it gives you the balance. If I am bridging only one VLAN then I use non-preemptive, since there is no benefit to moving the bridge back to the original node after a failure.
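
As a rough sketch of that two-profile layout, the calls below create a pair of Edge bridge profiles with the primary and backup nodes swapped. The edge node paths are placeholders, and the field names (edge_paths ordered primary-then-backup, failover_mode) are my reading of the NSX-T 3.x Policy API, so treat this as illustrative rather than definitive.

# Sketch: two Edge bridge profiles with swapped primary/backup nodes (assumed Policy API schema).
# EN1/EN2 paths are placeholders - substitute your real edge transport node policy paths.
EN1="/infra/sites/default/enforcement-points/default/edge-clusters/EC-01/edge-nodes/EN1"
EN2="/infra/sites/default/enforcement-points/default/edge-clusters/EC-01/edge-nodes/EN2"

# Profile A: EN1 primary, EN2 backup - used by the segment bridging VLAN A
curl -k -u admin -X PUT -H "Content-Type: application/json" \
  -d "{\"display_name\": \"Bridge-Profile-A\", \"edge_paths\": [\"$EN1\", \"$EN2\"], \"failover_mode\": \"PREEMPTIVE\"}" \
  "https://nsx.lab.local/policy/api/v1/infra/sites/default/enforcement-points/default/edge-bridge-profiles/Bridge-Profile-A"

# Profile B: EN2 primary, EN1 backup - used by the segment bridging VLAN B
curl -k -u admin -X PUT -H "Content-Type: application/json" \
  -d "{\"display_name\": \"Bridge-Profile-B\", \"edge_paths\": [\"$EN2\", \"$EN1\"], \"failover_mode\": \"PREEMPTIVE\"}" \
  "https://nsx.lab.local/policy/api/v1/infra/sites/default/enforcement-points/default/edge-bridge-profiles/Bridge-Profile-B"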

Layer 2 Bridge-Backed Segment

We now need to configure the bridges on our segments; shown below are the Overlay segments I’ll be using.
Click the Ellipsis for each one in turn and select Edit

Expand Additional Settings and next to Edge Bridges click Set

Click ADD EDGE BRIDGE

Select the Edge Bridge Profile and the Transport Zone (‘Bridge-TZ’ in my case), enter the VLAN to bridge in the VLAN field, then click ADD

Save the Segment changes by clicking SAVE

Repeat for the other Segments; shown below is the App segment with VLAN 120.
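
For anyone automating this, the same bridge attachment can be done by patching the Segment via the API. The sketch below assumes the bridge_profiles schema from the NSX-T 3.x Policy API and my object names (Web-Segment, Bridge-Profile, Bridge-TZ), so adjust the IDs to your environment.

# Sketch: attach an Edge bridge for VLAN 110 to the Web segment via the Policy API.
# Object IDs and the bridge_profiles field layout are assumptions - verify for your version.
curl -k -u admin -X PATCH -H "Content-Type: application/json" \
  -d '{"bridge_profiles": [{
        "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/Bridge-Profile",
        "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/Bridge-TZ",
        "vlan_ids": ["110"]
      }]}' \
  "https://nsx.lab.local/policy/api/v1/infra/segments/Web-Segment"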

Testing

The bridges are now set up, so it’s time for a quick test. Shown below are pings from the physical switch to the following.

  • 10.0.1.5 – T1 interface for the Web VLAN 110
  • 10.0.2.5 – T1 interface for the App VLAN 120
  • 10.0.3.5 – T1 interface for the DB VLAN 130

Here’s a trace to the Web T1 interface from the switch; as expected it’s one hop, as the switch and the T1 are on the same L2 network.

Now let’s ping from a Web server on the physical 110 VLAN to the following.

  • 10.0.1.21 – Web server VM on the Web 110 VLAN Overlay Segment

And a trace to the following; an equivalent set of checks run from a Linux host is sketched after this list.

  • 10.0.1.1 – Physical gateway for the 110 VLAN
  • 10.0.1.5 – T1 interface for the Web VLAN 110
  • 10.0.1.21 – Web server VM on the Web 110 VLAN Overlay Segment
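
If you’d rather run the same checks from one of the Linux web servers instead of the switch, a minimal equivalent looks like this (IPs as per my lab):

# Quick connectivity checks from a web server on the physical VLAN 110 side
ping -c 4 10.0.1.21     # web server VM on the bridged Overlay segment
traceroute 10.0.1.1     # physical gateway - one hop, same L2 network
traceroute 10.0.1.5     # T1 interface - one hop, reached across the bridge
traceroute 10.0.1.21    # VM on the Overlay segment - one hop, same L2 network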

OK, all is working, so we can now move on to the migration.

Migration

With our bridges in place and working, let’s now move the Web VMs to the Overlay network. There are several ways to do this, from simply editing each VM individually, which can be time consuming, to migrating all VMs connected to the VLAN port group in one go, which is what we’ll do here. To do this from the vDS, in vCenter select the Networking tab, select the vDS and then the Networks tab, then right click the port group and select Migrate VMs to Another Network.

The same process can be started from within a vApp.

Click BROWSE

Select the Overlay Network and click OK

Click NEXT

Select the VMs to move, then click NEXT

Click Finish and the VMs will be reconfigured and connected to the Overlay network. There should be no drop in traffic, or at most a single ping drop, but I saw none during testing.

Here we can see the Web servers are now all on the NSX Port Group.

Repeat for the remaining networks.

Move the Gateway

The final step in our migration is to move the gateway from the physical switch over to the NSX T1. Remember, if you are doing this in a production system there will be a slight outage while the physical interface is shut down, the T1 is changed and the routes are updated, so plan accordingly!

Shown below, I shut down the VLAN interface on the physical switch.

I now need to edit the IP of the Segment connected to the T1 gateway. Remember, my Segments are currently using a .5 IP but the gateway configured on the VMs is .1, so I need to change each T1 Segment to use .1.

Click the Ellipsis for the segment and select Edit

Now change the IP to .1 and click SAVE

Shown below are the ping drops from the steps above. Once the T1 interface is changed the VMs can again reach the gateway .1 IP.

Repeat for the remaining Segments.
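
For completeness, the same gateway change can also be made by patching the Segment’s subnet via the API; again, the segment ID (Web-Segment) and the subnets/gateway_address fields are my assumptions based on the NSX-T 3.x Policy API.

# Sketch: move the Web segment's gateway address from 10.0.1.5/24 to 10.0.1.1/24.
# Segment ID and field names are placeholders/assumptions - adjust for your environment.
curl -k -u admin -X PATCH -H "Content-Type: application/json" \
  -d '{"subnets": [{"gateway_address": "10.0.1.1/24"}]}' \
  "https://nsx.lab.local/policy/api/v1/infra/segments/Web-Segment"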

Some quick tests to check that the VMs can talk to each other and their respective gateways.

With that, the migrations are complete. If the Bridges are no longer needed they can now be removed from the Segments; optionally, remove the Bridge profile, remove the extra switch from the Edge nodes and delete the Transport Zone.
The NSX-T system will now be back to the configuration it was in before we started, but the VMs are now all running on T1-attached Overlay networks.

One thought to “NSX-T Layer 2 Bridging”

  1. Hi Graham,

    as always – great config guide! Many thanks for this.
    I was wondering if you know of a way to bridge traffic from physical networks to NSX-T Federation stretched segments. As far as I know, to date it’s only possible to do this on the LM site, right?

    If it’s not possible at all, do you know if this feature is somewhere on VMware’s roadmap?

    best regards
    jimslim
