NSX-T 3.0 Lab Federation: Lab Setup

Intro

Welcome to Part 1 of the NSX-T 3.0 Lab Federation Series.
In the previous post, we covered the introduction to the NSX-T 3.0 Lab Federation series.
In this post I’ll cover the basics of how my lab is set up and the overall configuration we will end up with.

Physical Environment

Hosts

I have three physical hosts. These are Dell PowerEdge T20 servers, about three years old, each with a single four-core Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz. Each host has 32 GB of RAM, a single onboard 1Gb NIC and an additional 4-port 1Gb NIC card.
So the hosts are not particularly new and not particularly beefy, and as a result they are really struggling with this Federation build, which is why I have a new server going into the cluster with the equivalent resources of four of my current hosts!
The physical adapters on the hosts are configured as per the screenshot below.
I’ve kept management and storage on local vSwitches for simplicity: Management on vSwitch0 and Storage on vSwitch1, while vmnics 2 and 3 are connected to the Lab-VDS.
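If you want to sanity-check that layout programmatically rather than from screenshots, a minimal pyvmomi sketch like the one below lists each host's standard vSwitches and the vmnics behind them. The vCenter FQDN and credentials are placeholders for my lab, so adjust them to yours.

# Minimal pyvmomi sketch - vCenter address and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: self-signed certificates
si = SmartConnect(host="vcenter1.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True).view
for host in hosts:
    print(host.name)
    for vsw in host.config.network.vswitch:          # standard vSwitches (vSwitch0/vSwitch1)
        nics = [p.split("-")[-1] for p in vsw.pnic]  # pnic keys end in the vmnic name
        print(f"  {vsw.name}: {nics}")
Disconnect(si)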

Switch

I currently have a single physical switch, a 24-port Cisco 3750G, with the following VLANs configured.
The VLANs I’ll be using for the NSX-T Federation lab are:

  • VLAN 10 – DCA management
  • VLAN 15 – DCB management
  • VLAN 30 – storage network
  • VLAN 150 – DCA overlay network
  • VLAN 155 – DCB overlay network
  • VLAN 152 – DCA Edge RTEP
  • VLAN 157 – DCB Edge RTEP
Uplink VLANs 160, 165, 170 and 175 for the Edge nodes will only be on virtual routers.

interface Vlan10
 ip address 192.168.10.1 255.255.255.0
!
interface Vlan15
 ip address 192.168.15.1 255.255.255.0
!
interface Vlan30
 ip address 192.168.30.1 255.255.255.0
!
interface Vlan150
 ip address 10.150.1.1 255.255.255.0
!
interface Vlan152
 ip address 10.152.1.1 255.255.255.0
!
interface Vlan155
 ip address 10.155.1.1 255.255.255.0
!
interface Vlan157
 ip address 10.157.1.1 255.255.255.0

Each host port is trunked with the following configuration.

interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,10,15,30,150-152,155-157,160,165,170,175
 switchport mode trunk
 spanning-tree portfast trunk
 spanning-tree bpduguard enable

Storage

For storage I’m using a Synology NAS with spinning disks over the 1Gb network, with two datastores configured for the lab. Again, it’s not the fastest in the world, but since I don’t really run many client VMs and have no intention of running Horizon, it does the job OK.

Virtual Environment

Hosts

I have four nested ESXi 7.0 hosts for each DC (A and B) running on the physical hosts: DCA-Comp01, DCA-Comp02, DCA-Edge01 and DCA-Edge02, plus DCB-Comp01, DCB-Comp02, DCB-Edge01 and DCB-Edge02.
Each host has 4 CPUs and 10 GB of RAM, though the RAM will increase when the new host is added.
Each host has 4 network adapters. NIC 1 is connected to Management-vDS, which is for management; NIC 2 is connected to the Nested-LANA port group trunked with VLANs 0-157, 160 and 165; NIC 3 is connected to the Nested-LANB port group trunked with VLANs 0-157, 170 and 175; and NIC 4 is connected to VM_iSCSI for storage.
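As a quick check that each nested host VM’s four NICs really land on those port groups, here is a rough pyvmomi sketch. The VM name is just an example and 'si' is a connection made the same way as in the earlier snippet; note that distributed port groups only expose their key on the vNIC backing, so you would resolve the key to a name separately if needed.

from pyVmomi import vim

def nic_portgroups(si, vm_name):
    # Find the nested ESXi VM by name and print each vNIC's backing.
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(content.rootFolder,
                                                  [vim.VirtualMachine], True).view
    vm = next(v for v in vms if v.name == vm_name)
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualEthernetCard):
            continue
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo):
            print(dev.deviceInfo.label, "-> dvPortgroup key", backing.port.portgroupKey)
        else:
            print(dev.deviceInfo.label, "->", backing.deviceName)  # standard port group name

# nic_portgroups(si, "DCA-Comp01")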

VMs

On the physical hosts I am running the following VMs.
LabAD01 – the Active Directory server.
vCenter1 – the DCA vCenter.
vCenter2 – the DCB vCenter.
Lab-router-XG and Lab-router-XG02 – the two DCA virtual routers with which the DCA Tier-0 will peer via BGP.
Lab-router-XG03 and Lab-router-XG04 – the two DCB virtual routers with which the DCB Tier-0 will peer via BGP.
For the NSX Managers we have NSXTMan01 (the DCA NSX Manager) and NSXTMan02 (the DCB NSX Manager), and we will be deploying G-NSXTMan01 as the Global NSX Manager.

vCenter1

Clusters

vCenter1 runs three clusters.
Lab – houses our three physical hosts.
DCA-Comp – houses our nested DCA compute hosts.
DCA-Edge – houses our nested DCA Edge hosts.

vCenter2

Clusters

vCenter2 runs two clusters.
DCB-Comp – houses our nested DCB compute hosts.
DCB-Edge – houses our nested DCB Edge hosts.

Network

On the Lab cluster I run a VDS called Lab-VDS, which has the Management-vDS, SiteB-MGMT, Nested-LANA and Nested-LANB port groups.

Settings for the Nested-LANA port group, set to VLAN trunking 0-157, 160, 165.


Settings for the Nested-LANB port group, set to VLAN trunking 0-157, 170, 175.
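To confirm the trunk ranges on both port groups match, a small pyvmomi helper like the rough sketch below reads the VLAN trunk spec straight off a distributed port group ('si' again being a connection as in the first snippet; the port group names are this lab's).

from pyVmomi import vim

def trunk_ranges(si, pg_name):
    # Return the trunked VLAN ranges configured on a distributed port group.
    content = si.RetrieveContent()
    pgs = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True).view
    pg = next(p for p in pgs if p.name == pg_name)
    vlan = pg.config.defaultPortConfig.vlan
    if isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec):
        return [(r.start, r.end) for r in vlan.vlanId]
    return vlan.vlanId  # plain VlanIdSpec: a single VLAN ID

# trunk_ranges(si, "Nested-LANA")   e.g. [(0, 157), (160, 160), (165, 165)]
# trunk_ranges(si, "Nested-LANB")   e.g. [(0, 157), (170, 170), (175, 175)]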

I also have a VDS configured on each Edge cluster.
The two nested Edge hosts are added to this VDS, which runs the port groups Edge-Mgmt (VLAN 10) and DCA/B-Trunk-A and -B, which trunk all VLANs.

Nested Environment

The nested environment has our Edge nodes running on the Edge clusters and a test app running on the compute clusters.
The compute clusters are configured for NSX on the VDS.
Edge hosts are not prepared for NSX.
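An easy way to confirm that state is to list the transport nodes on each Local Manager: only the prepared compute hosts and the Edge node VMs should appear. The sketch below uses the /api/v1/transport-nodes endpoint with basic auth; the manager FQDN and credentials are placeholders for my lab.

import requests

def list_transport_nodes(manager, user, password):
    # List everything NSX considers a transport node on this Local Manager.
    resp = requests.get(f"https://{manager}/api/v1/transport-nodes",
                        auth=(user, password), verify=False)  # lab only: self-signed cert
    resp.raise_for_status()
    for node in resp.json()["results"]:
        info = node.get("node_deployment_info", {})
        print(node["display_name"], info.get("resource_type"))  # HostNode or EdgeNode

# list_transport_nodes("nsxtman01.lab.local", "admin", "VMware1!VMware1!")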

NSX Configuration

The intention of this series is to detail the configuration of NSX Federation, so I won’t be showing the full NSX deployment; that’s already been covered on my blog. Here is a brief summary of the starting state before the Federation build.

Transport Zones

In my original lab setup blog I created two new Transport Zones.
When building Gateways and Segments in Federation, the system will use whichever Overlay Transport Zone is set as the default, and by default that is the system-created nsx-overlay-transportzone.
In order to use my own, I need to change the default to my TZ-Overlay; I’ll cover that when I get to deploying the Federation components.
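As a rough illustration, the default flag can be checked against each Local Manager before and after that change, assuming the management-plane /api/v1/transport-zones endpoint and its is_default field (manager address and credentials are again lab placeholders).

import requests

def default_overlay_tz(manager, user, password):
    # Return the display name of whichever overlay transport zone is flagged as default.
    resp = requests.get(f"https://{manager}/api/v1/transport-zones",
                        auth=(user, password), verify=False)  # lab only
    resp.raise_for_status()
    for tz in resp.json()["results"]:
        if tz["transport_type"] == "OVERLAY" and tz.get("is_default"):
            return tz["display_name"]

# default_overlay_tz("nsxtman01.lab.local", "admin", "VMware1!VMware1!")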

In DCA there are two transport zones

  • TZ-Overlay
  • DCA-TZ-VLAN

In DCB there are two transport zones

  • TZ-Overlay
  • DCB-TZ-VLAN

DCA-TZ-VLAN has two Uplink Teaming Policies configured; these are used on Tier-0 Segments – see NSX-T 3.0 Lab Single N-VDS Edge Nodes for how they are used.

DCB has the same setup with DCB-Uplink1 and 2.
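If you want to see those named teaming policies attached to the VLAN transport zone, they show up in the uplink_teaming_policy_names field on the transport-zone object. A quick hedged sketch, with the same placeholder manager details:

import requests

def tz_teaming_policies(manager, user, password, tz_name):
    # Return the named uplink teaming policies advertised by a transport zone.
    resp = requests.get(f"https://{manager}/api/v1/transport-zones",
                        auth=(user, password), verify=False)  # lab only
    resp.raise_for_status()
    tz = next(t for t in resp.json()["results"] if t["display_name"] == tz_name)
    return tz.get("uplink_teaming_policy_names", [])

# tz_teaming_policies("nsxtman01.lab.local", "admin", "VMware1!VMware1!", "DCA-TZ-VLAN")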

Uplink Profiles

There are two uplink profiles per site.
The ESXi host profiles contain the respective overlay VLAN (150 for DCA hosts, 155 for DCB) and are configured with Load Balance Source ID teaming, with Uplink-1 and Uplink-2 set to Active.

The Edge node profiles contain the respective overlay VLAN (150 for DCA, 155 for DCB) and are configured with Load Balance Source ID teaming, with E-UP-01 and E-UP-02 set to Active.
There are also two named teaming policies:
DC(A/B)-Uplink1 – Failover Order with E-UP-01 Active and no standby.
DC(A/B)-Uplink2 – Failover Order with E-UP-02 Active and no standby.
These are used in the Transport Zone as shown above.
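The same policies can be read back from the uplink profile itself via /api/v1/host-switch-profiles. The sketch below is illustrative only, and the profile name passed in is an assumption (use whatever you called your Edge uplink profile).

import requests

def uplink_profile_teamings(manager, user, password, profile_name):
    # Print the default teaming and any named teamings on an uplink profile.
    resp = requests.get(f"https://{manager}/api/v1/host-switch-profiles",
                        auth=(user, password), verify=False)  # lab only
    resp.raise_for_status()
    prof = next(p for p in resp.json()["results"]
                if p["display_name"] == profile_name)
    print("default teaming:", prof["teaming"]["policy"])
    for t in prof.get("named_teamings", []):
        active = [u["uplink_name"] for u in t["active_list"]]
        print(t["name"], t["policy"], active)

# uplink_profile_teamings("nsxtman01.lab.local", "admin", "VMware1!VMware1!", "DCA-Edge-Uplink-Profile")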

Transport Node Profile

There is a Transport Node Profile on each site set to use VDS with the TZ-Overlay Transport Zone and the ESXi uplink profile.

There is a TEP Pool configured for each site used by the Compute hosts and the Edge Nodes.
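For completeness, both objects are easy to list from the management-plane API. A rough sketch using /api/v1/transport-node-profiles and /api/v1/pools/ip-pools, with the usual placeholder manager details:

import requests

def summarise_profiles_and_pools(manager, user, password):
    # List the transport node profiles and the IP pools (including the TEP pool) on a site.
    auth = (user, password)
    base = f"https://{manager}"
    profiles = requests.get(f"{base}/api/v1/transport-node-profiles",
                            auth=auth, verify=False).json()["results"]  # lab only
    pools = requests.get(f"{base}/api/v1/pools/ip-pools",
                         auth=auth, verify=False).json()["results"]
    for p in profiles:
        print("transport node profile:", p["display_name"])
    for pool in pools:
        print("ip pool:", pool["display_name"])

# summarise_profiles_and_pools("nsxtman01.lab.local", "admin", "VMware1!VMware1!")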

Edge Nodes

I’ve deployed two Edge nodes on each site and added them to an Edge cluster.
Each node is configured as part of the TZ-Overlay and DCA/B-TZ-VLAN Transport Zones and uses the Edge Uplink Profile and the TEP pool.
The Edge interfaces are mapped to the DC Trunk port groups on the Edge VDS.
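A quick way to verify the cluster membership per site is to cross-reference /api/v1/edge-clusters with /api/v1/transport-nodes, as in this rough sketch (manager details are placeholders):

import requests

def edge_cluster_members(manager, user, password):
    # Print each Edge cluster with the display names of its member Edge nodes.
    auth = (user, password)
    base = f"https://{manager}"
    nodes = {n["id"]: n["display_name"]
             for n in requests.get(f"{base}/api/v1/transport-nodes",
                                   auth=auth, verify=False).json()["results"]}  # lab only
    clusters = requests.get(f"{base}/api/v1/edge-clusters",
                            auth=auth, verify=False).json()["results"]
    for ec in clusters:
        members = [nodes.get(m["transport_node_id"], m["transport_node_id"])
                   for m in ec.get("members", [])]
        print(ec["display_name"], "->", members)

# edge_cluster_members("nsxtman01.lab.local", "admin", "VMware1!VMware1!")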

Federation

The diagram below shows an illustration of what we will be setting up during the lab series from a networking perspective.

I have a 3-tier app running on the compute clusters at DCA and DCB. Each VM is connected to a Global stretched Segment; these will be connected to a stretched Tier-1 Gateway, which in turn connects to a stretched Tier-0 Gateway that will peer with the two pairs of virtual routers in each DC. These then connect to my home router to give me external connectivity.

If I break it down further, you can see how that maps to the Edge nodes.
The stretched Tier-0 runs as active/active on the Edge nodes, which in turn connect via two uplinks to the lab routers for ECMP.
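Once that Tier-0 exists, its HA mode and per-site BGP settings can be read back from each Local Manager's policy API. The sketch below is only an outline of that check; the gateway ID "Global-T0" plus the manager details are assumptions for what this series will end up building.

import requests

def tier0_summary(manager, user, password, t0_id):
    # Show the Tier-0 HA mode and the BGP config under each locale-service.
    auth = (user, password)
    base = f"https://{manager}/policy/api/v1/infra"
    t0 = requests.get(f"{base}/tier-0s/{t0_id}", auth=auth, verify=False).json()  # lab only
    print("ha_mode:", t0.get("ha_mode"))
    locale_services = requests.get(f"{base}/tier-0s/{t0_id}/locale-services",
                                   auth=auth, verify=False).json()["results"]
    for ls in locale_services:
        bgp = requests.get(f"{base}/tier-0s/{t0_id}/locale-services/{ls['id']}/bgp",
                           auth=auth, verify=False).json()
        print(ls["id"], "local AS:", bgp.get("local_as_num"), "ECMP:", bgp.get("ecmp"))

# tier0_summary("nsxtman01.lab.local", "admin", "VMware1!VMware1!", "Global-T0")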

I’m not showing how the Edge nodes map to the VDS and those uplinks as this is just an overall routing diagram.

That’s about it for the lab setup. Now let’s start our build.
Part 2 NSX-T 3.0 Lab: Global Manager OVA Deployment-Federation
