Welcome to Part 11 of the NSX-T Lab Series. In the previous post, we set up our network segments and attached our VMs to the networks.
We successfully pinged within each Layer 2 network.
We could now go ahead and deploy a Tier-1 logical router and connect those segments to it to get Layer 3 connectivity, and I'll probably do a separate post on that particular configuration option. However, this lab build follows a typical production environment, and we wouldn't normally have a Tier-1 router that isn't connected to an Edge node, as it won't be able to talk North/South, and that's generally needed. So we need to deploy some Edge nodes.
There are three ways to do this: via the web UI, via an OVA deployment, and using the OVF Tool. I'll cover the first two in this series since they are the most common methods.
This post is the web UI method and we’ll be deploying one of our two Edge Nodes.
What is an Edge Node?
As this is a lab build guide I won't go into too much detail here; I'll save that for a separate blog post, and it will be a pretty big one! But I will give a brief overview.
NSX Edge nodes are service appliances with pools of capacity, dedicated to running network services that cannot be distributed to the hypervisors.
The NSX-T Edge appliance provides routing services and connectivity to networks that are external to the NSX-T environment.
An NSX Edge is required if you want to deploy a tier-0 router or a tier-1 router with stateful services such as NAT, DHCP Server, Edge Firewall etc.
Edge nodes can be viewed as empty containers when they are first deployed.
NSX Edges interact with the physical network in the same way as physical routing devices and have the same logical and physical properties as physical routers. You use an NSX-T Edge to establish external connectivity from the NSX-T domain via a Tier-0 router using BGP or static routing.
Edge nodes are available in two form factors: VM and bare metal. Both leverage the Data Plane Development Kit (DPDK) for faster packet processing and high performance. For obvious reasons I won't be touching bare-metal Edges here.
That's a very brief explanation; like I said, it needs a separate blog post to cover all the detail, so I'll get to doing that sometime soon.
From ‘System’ > ‘Fabric’ > ‘Nodes’, select ‘Edge Transport Nodes’ and click ‘+ Add Edge VM’.
Give it a name and the FQDN, and select the desired size. For my lab I'll use Small, and I'll later reduce the allocated resources. Again, don't do that in production; refer to the VMware documentation for the required form factor based on your requirements.
Set up the login credentials (these need to be complex passwords) and choose whether to allow SSH and root logins. For production builds these are normally disabled until there is a need to troubleshoot, but I generally turn them on initially so we can check everything's working. For the lab we'll turn them on.
Select the compute manager and the cluster where the Edge node will be deployed. Since this lab build covers deploying to a dedicated Edge cluster, that's the cluster I pick. I will separately cover deploying Edge nodes in a collapsed configuration, whereby the Edge and compute workloads are on the same host.
There are various design options on how to do that based on the host configuration but thats outside the scope of this guide so will be covered separately.
If desired, pick the resource pool and the host, then the datastore where the VM will be deployed.
I'll configure a static IP and use the Edge management VLAN on my vDS. This is the nested management VLAN for the Edge cluster and maps to VLAN 10 on the physical network, which is the same VLAN as my vCenter, NSX Manager, and the hosts' vmk0 interfaces.
Configure the Domain name, DNS and NTP Servers.
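For reference, everything the wizard has collected so far can also be driven through the NSX-T Manager REST API (POST /api/v1/transport-nodes). Below is a minimal, hedged sketch of the payload the UI builds from the settings above; the field names follow the API schema as I understand it, but every value (names, IDs, addresses, passwords) is a made-up lab placeholder, and the angle-bracket IDs would be real UUIDs/MoRefs in practice:

```python
# Hypothetical sketch of the "Add Edge VM" payload (POST /api/v1/transport-nodes).
# All values are illustrative lab placeholders, not real environment data.
edge_node_payload = {
    "display_name": "en01",
    "node_deployment_info": {
        "resource_type": "EdgeNode",
        "deployment_type": "VIRTUAL_MACHINE",
        "deployment_config": {
            "form_factor": "SMALL",  # per the wizard's size selection
            "node_user_settings": {
                "cli_username": "admin",
                # Complex passwords are enforced by the appliance.
                "cli_password": "VMware1!VMware1!",
                "root_password": "VMware1!VMware1!",
            },
            "vm_deployment_config": {
                "placement_type": "VsphereDeploymentConfig",
                "vc_id": "<compute-manager-uuid>",      # the registered vCenter
                "compute_id": "<edge-cluster-moref>",   # dedicated Edge cluster
                "storage_id": "<datastore-moref>",
                "management_network_id": "<edge-mgmt-portgroup-moref>",
                "management_port_subnets": [
                    {"ip_addresses": ["192.168.10.21"], "prefix_length": 24}
                ],
                "default_gateway_addresses": ["192.168.10.1"],
            },
        },
        "node_settings": {
            "hostname": "en01.lab.local",
            "dns_servers": ["192.168.10.5"],
            "ntp_servers": ["192.168.10.5"],
            "enable_ssh": True,            # lab only; normally off in production
            "allow_ssh_root_login": True,  # lab only
        },
    },
}
```

The same object also carries the transport zone and N-VDS configuration we set up in the next steps.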
We now have to set up the transport zones and the N-VDS switches that will be added to the Edge node. The N-VDS is internal to the Edge VM, so the physical host never sees it; the Edge VM's interfaces map to the N-VDS switches, and the N-VDS uplinks in turn map to the physical uplinks on the host.
Below is a diagram of what we need to configure.
As we are going to set up ECMP routing, we need to map two of the Edge node uplinks each to a specific host vmnic, which will in turn peer with one side of the physical network. In a production environment this would be each host vmnic mapping to one of the top-of-rack switches, so VLANA is only on the left-hand switch and VLANB is only on the right-hand switch. Once we configure the routing, this gives us two equal-cost routes out of the NSX environment.
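The mapping just described can be written down as a small table. Here is a sketch with hypothetical vmnic and VLAN values for my lab, together with a check of the property ECMP depends on (each uplink VLAN reaching the physical network through its own host NIC and ToR switch):

```python
# Hypothetical lab mapping: Edge uplink -> (host vmnic, uplink VLAN, ToR side).
uplink_map = {
    "Uplink1-VLAN-160": {"vmnic": "vmnic2", "vlan": 160, "tor": "left"},
    "Uplink2-VLAN-170": {"vmnic": "vmnic3", "vlan": 170, "tor": "right"},
}

def ecmp_paths_are_distinct(mapping):
    """True when every uplink uses its own vmnic and ToR switch,
    which is what yields two equal-cost paths out of NSX."""
    vmnics = {m["vmnic"] for m in mapping.values()}
    tors = {m["tor"] for m in mapping.values()}
    return len(vmnics) == len(mapping) and len(tors) == len(mapping)

print(ecmp_paths_are_distinct(uplink_map))  # → True
```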
Next, click the Transport Zone dropdown and add the three zones we created earlier: TZ-Overlay, TZ-VLANA and TZ-VLANB.
Now we need to configure the N-VDS settings. Select the N-VDS-Overlay switch and the nsx-edge-single-nic-uplink-profile. Next, set the IP Assignment to Use IP Pool and pick the TEP IP pool. As the Edge node resides on a dedicated Edge host, we can use the same TEP pool as the compute hosts.
For the uplink we only have the one we can pick, and we set it to the Edge-Overlay-VLAN. Don't get confused by the next three steps: all of them will show a single uplink, Uplink-1, but the configuration will not place all the N-VDS switches on a single uplink. Each one is different; it's just that the uplink profile we are using only has a single Uplink-1 uplink, so it looks like we are configuring the same uplink each time when in reality we are not.
We now need to configure the next N-VDS, this one for the VLANA uplink.
I've highlighted the TZ-VLANA transport zone at the top to show that we are configuring the N-VDS for that zone, but you don't need to click it. What you do need to click is ‘+ Add N-VDS’.
This time pick the N-VDS-VLANA switch and use the same uplink profile we used previously. As the system recognizes that this is a VLAN-backed switch, there is no IP pool option to pick; all we need to do is map Uplink-1 to the Uplink1-VLAN-160 port group on the vDS.
Click ‘+ Add N-VDS’ again to configure the VLANB interface and set Uplink-1 to the Uplink2-VLAN-170 port group, then click Finish.
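Taken together, the three ‘+ Add N-VDS’ steps correspond to the host_switch_spec portion of the transport-node object in the Manager API. Below is a hedged sketch of what that looks like for this build; the structure follows the StandardHostSwitchSpec schema as I understand it, but the angle-bracket IDs are placeholders (the real API wants UUIDs, not names):

```python
# Sketch of the three N-VDS entries configured above. IDs are illustrative
# placeholders; in the real API these are UUIDs resolved from the names.
host_switch_spec = {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [
        {
            "host_switch_name": "N-VDS-Overlay",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<nsx-edge-single-nic-uplink-profile-id>"}
            ],
            # Only the overlay switch needs a TEP address, hence the IP pool.
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<TEP-IP-Pool-id>",
            },
            "pnics": [{"device_name": "fp-eth0", "uplink_name": "Uplink-1"}],
        },
        {
            "host_switch_name": "N-VDS-VLANA",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<nsx-edge-single-nic-uplink-profile-id>"}
            ],
            # VLAN-backed: no IP assignment, just the uplink mapping.
            "pnics": [{"device_name": "fp-eth1", "uplink_name": "Uplink-1"}],
        },
        {
            "host_switch_name": "N-VDS-VLANB",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<nsx-edge-single-nic-uplink-profile-id>"}
            ],
            "pnics": [{"device_name": "fp-eth2", "uplink_name": "Uplink-1"}],
        },
    ],
}

# Every entry names Uplink-1, but each N-VDS maps it to a different interface,
# which is the point made earlier about not placing everything on one uplink.
devices = [hs["pnics"][0]["device_name"] for hs in host_switch_spec["host_switches"]]
print(devices)  # → ['fp-eth0', 'fp-eth1', 'fp-eth2']
```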
We've now deployed our first Edge node. We can see that it has been added to the three transport zones (Overlay, VLANA and VLANB), so it can now process Geneve traffic and connect North/South to the physical network.
If we click on the Edge node name we can expand the detail pane to get more information on the node.
On the Monitor tab we can see the interfaces.
The four we are interested in at the moment are eth0, fp-eth0, fp-eth1 and fp-eth2.
If we take a look at the Edge node VM, we can get the MAC addresses for each of the network adapters.
For eth0 we can see that it matches the MAC of Network adapter 1, and we can see the IP that we assigned; this is the management interface.
For fp-eth0 the MAC matches Network adapter 2; this is the overlay network, meaning the other two are the uplinks.
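Matching interfaces to adapters by eye means comparing MAC strings that the two UIs can format differently (upper/lower case, colons vs dashes). A small sketch of that comparison, using made-up MAC addresses rather than real ones from my lab:

```python
def norm_mac(mac):
    """Normalize a MAC address to lowercase colon-separated form."""
    digits = mac.lower().replace("-", "").replace(":", "").replace(".", "")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

# Hypothetical MACs: left, as shown on the Edge node's Monitor tab;
# right, as shown on the VM's network adapters in vCenter.
edge_ifaces = {"eth0": "00:50:56:8A:11:01", "fp-eth0": "00:50:56:8A:11:02"}
vm_adapters = {"Network adapter 1": "00-50-56-8a-11-01",
               "Network adapter 2": "00-50-56-8a-11-02"}

matches = {
    iface: adapter
    for iface, imac in edge_ifaces.items()
    for adapter, amac in vm_adapters.items()
    if norm_mac(imac) == norm_mac(amac)
}
print(matches)  # → {'eth0': 'Network adapter 1', 'fp-eth0': 'Network adapter 2'}
```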
As this is a lab, I want to remove the CPU and memory reservations, so I set these to 0. Once again, don't do this in production.
I now have my first Edge node, but I still need another one, so in the next post I'll deploy a second one, this time via the OVA deployment method.
If you are deploying both Edge nodes with this method, make sure you disable DRS for them; the details on how to do this are at the bottom of the next post HERE.
NSX-T Lab Part:12 NSX-T Edge Node OVA Deployment