Welcome to Part 5 of the NSX-T 3.0 Lab Federation Series.
In my last post I set up the RTEPs on the Edge clusters, ready for the Federation networking configuration.
In this post we will be deploying our shared Tier-0 Gateway.
In our lab setup diagram we are going to configure the orange section and peer with our Lab Routers.
From the Tier-0's perspective we are setting up Active/Active, with DCA as Primary and DCB as Secondary.
We could deploy the Tier-0 first, jump out to set up the uplink interfaces, and then jump back in to attach them, but we may as well set them up first so we don't have to go back and forth.
Default Overlay Transport Zone
Before we set up the segments I need to change the default transport zone, as I am using my own zones rather than the system-created ones.
If you are using the built-in zones you can skip this step.
On each Local Manager navigate to System, Fabric, Transport Zones. On the 'Transport Zones' tab, select the TZ you want to make the default and click ACTIONS, Set as Default Transport Zone.
Be aware that local segments will also be added to this TZ if you don't select a different one. You can't choose the overlay TZ from the Global Manager; it uses whichever is set as the default on your Local Manager.
There is no default for the VLAN transport zone, so be sure to pick the correct one when deploying the VLAN segments.
For production builds I will probably stick to the two system-created zones, but this is a lab built for testing, which is why I am using my own.
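If you prefer the API, the default flag lives on the transport zone object itself. A rough sketch against the Manager API (the UUID is a placeholder, and the exact field set may differ on your build, so treat this as illustrative, not authoritative) would be a `PUT /api/v1/transport-zones/<tz-uuid>` with a body along these lines:

```json
{
  "display_name": "DCA-Overlay-TZ",
  "transport_type": "OVERLAY",
  "is_default": true,
  "_revision": 0
}
```

Note that a PUT expects the full current object plus the correct `_revision`, so GET the zone first and flip `is_default` rather than sending a minimal body like this one.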
Uplink Segments
The uplink interfaces must be configured from the Global Manager: go to Networking, Segments, SEGMENTS and click ADD SEGMENT.
Give the segment a 'Name' and select 'VLAN' as the traffic type. Select the 'Location' (DCA for me) and the correct VLAN transport zone, set the 'VLAN ID', and pick the 'Uplink Teaming Policy' for the first uplink.
Repeat for Uplink 2 with the same location but a different VLAN ID and the second uplink teaming policy.
Then do the same for the second location, this time picking DCB with its transport zone, uplink profile and VLAN.
I now have four uplink segments. To check them, click the Check Status link in the 'Status' column.
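For reference, each uplink segment is a small Policy object on the Global Manager. This is a hedged sketch of what a `PATCH` to `/global-manager/api/v1/global-infra/segments/<segment-id>` might carry; the VLAN ID, transport zone path and teaming policy name are placeholders for your own environment, and you should verify the paths against your version's API guide:

```json
{
  "display_name": "DCA-Uplink1",
  "vlan_ids": ["<uplink1-vlan-id>"],
  "transport_zone_path": "/global-infra/sites/DCA/enforcement-points/default/transport-zones/<vlan-tz-uuid>",
  "advanced_config": {
    "uplink_teaming_policy_name": "<uplink1-teaming-policy>"
  }
}
```

The teaming policy named here is what pins the segment's traffic to a single uplink, which is why each location needs one segment per uplink.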
Tier-0 Gateway
We can now deploy and configure our Tier-0 Gateway. On the Global Manager go to Networking, Tier-0 Gateways, click ADD GATEWAY and select Tier-0.
Give it a 'Name'. The HA mode can only be 'Active Active'; NSX-T 3.0 Federation cannot do Active/Standby, that will come in a later release.
I want traffic to egress out of DCA, so I uncheck the 'Mark all locations as Primary' box.
Under location select 'DCA', select the 'DCA Edge Cluster', and leave the mode at 'Primary'.
Click ADD LOCATION.
Then select 'DCB', select the 'DCB Edge Cluster', and leave the mode at 'Secondary'.
If you have other locations you can click ADD LOCATION again.
We can now continue to configure the Tier-0, so click YES.
But what just happened?
If I open a web console to a Local Manager and look at the Tier-0 Gateways, I can see that a new Tier-0 has been created on both sites, marked with 'GM' to indicate that it is deployed and managed from the Global Manager.
As such, you cannot edit it from the Local Manager; this must be done on the Global Manager.
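Under the covers the wizard is just creating a stretched Tier-0 object on the Global Manager. As a rough sketch (object shape from the 3.0 Policy API as I understand it, with placeholder names), the gateway itself looks something like:

```json
{
  "display_name": "Global-T0",
  "ha_mode": "ACTIVE_ACTIVE",
  "intersite_config": {
    "primary_site_path": "/global-infra/sites/DCA"
  }
}
```

Each location you add then becomes a locale-services child under the Tier-0 whose `edge_cluster_path` points at that site's Edge cluster, and this is the object the Local Managers render as the 'GM' Tier-0.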
Back on our Global Manager, we continue the configuration.
Expand 'INTERFACES' and click Set.
Enter a 'Name', select the first DC (DCA for me) as the location from the drop-down list, set the type to 'External', and enter the 'IP Address' in CIDR format.
Set 'Connected To Segment' to the first location's first Edge uplink segment, set the 'Edge Node' to the first Edge node VM, set the MTU, and set the 'URPF Mode' to None for ECMP.
Repeat the same for Uplink 2; obviously the 'Name', 'IP' and connected segment will be those for Uplink 2.
Repeat the process, this time picking Edge node 2.
Once you have set up the DCA Edge node interfaces, repeat the same for DCB.
I now have eight interfaces: two per Edge node, four in DCA and four in DCB.
Click Check Status to make sure all show as Success.
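Each of those eight interfaces is an external interface object under the location's locale-services on the Tier-0. A hedged sketch of one of them (the paths, IP and MTU value are placeholders; field names are per the 3.0 Policy API as I recall them, so check your API guide):

```json
{
  "display_name": "DCA-EN1-Uplink1",
  "type": "EXTERNAL",
  "subnets": [
    { "ip_addresses": ["<uplink1-ip>"], "prefix_len": 24 }
  ],
  "segment_path": "/global-infra/segments/<dca-uplink1-segment>",
  "edge_path": "/global-infra/sites/DCA/enforcement-points/default/edge-clusters/<cluster-uuid>/edge-nodes/<node-uuid>",
  "mtu": 1500,
  "urpf_mode": "NONE"
}
```

Setting `urpf_mode` to `NONE` is the API equivalent of the URPF drop-down above; with ECMP, return traffic can arrive on either uplink, so strict URPF could drop legitimate traffic.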
BGP Peering
The next step is to set up our BGP peers. Expand 'BGP' and set the Local AS.
I'm turning off Graceful Restart, so I set it to 'Disable' and click SAVE. Then under 'BGP Neighbors' click Set.
Click ADD BGP NEIGHBOR. I'm keeping the settings basic for simplicity: I set the 'IP Address' to that of DCA Router A, set the 'Location' to DCA, set the 'Remote AS Number' to Router A's AS, and set 'Graceful Restart' to Disable. Click SAVE.
Under the 'Status' column click Check Status; it should go to 'Success'. Click the 'i' to see each Edge node's connection status; you can change the Edge node from the drop-down at the top.
Both my Edge nodes have peered with Router A in DCA.
Quick look at Router A confirms the two neighbors.
BGP router identifier 10.160.1.1, local AS number 65100
RIB entries 0, using 0 bytes of memory
Peers 2, using 4968 bytes of memory

Neighbor     V    AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down  State/PfxRcd
10.160.1.11  4 65000      12      30      0   0    0 00:11:29            0
10.160.1.12  4 65000      12      30      0   0    0 00:11:28            0

Total number of neighbors 2
I repeat the process to connect to Router B in DCA.
I now do the same thing, connecting to Router A in DCB; remember to set the location and the DCB router's AS.
Now connect Router B in DCB.
I end up with four neighbors, two in each site.
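For completeness, each neighbor is a small object under the Tier-0's BGP configuration. A hedged sketch of the DCA Router A peer, reusing the AS number and address visible in the router output above (field names per the 3.0 Policy API, so verify against your API guide):

```json
{
  "display_name": "DCA-RouterA",
  "neighbor_address": "10.160.1.1",
  "remote_as_num": "65100",
  "graceful_restart_mode": "DISABLE"
}
```

The location comes from which locale-services the neighbor is created under, which is the API-side view of the 'Location' drop-down in the UI.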
Tier-0 Route Redistribution
The process here is the same as a normal Tier-0 except we have to configure the settings for both sites.
Expand 'ROUTE REDISTRIBUTION' and click Set on DCA.
Click ADD ROUTE REDISTRIBUTION, give it a 'Name' and click Set.
Select what you want to redistribute and click APPLY.
Click ADD, then APPLY.
Repeat for DCB by clicking the DCB Set; once done, click SAVE then CLOSE EDITING.
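The redistribution settings also live per location, on each locale-services object. A hedged sketch of the DCA side (the rule name and type list are placeholders for whatever you chose above; field names per the 3.0 Policy API):

```json
{
  "route_redistribution_config": {
    "bgp_enabled": true,
    "redistribution_rules": [
      {
        "name": "<rule-name>",
        "route_redistribution_types": ["TIER0_CONNECTED", "TIER1_CONNECTED"]
      }
    ]
  }
}
```

Repeating the block on the DCB locale-services is the API equivalent of clicking the second Set.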
Our Tier-0 is now configured. In the next post I'll set up the Tier-1 gateway and our application segments.