NSX-V Lab: Configure VXLAN

Intro

Welcome to Part 8 of the NSX-V Lab Series. In the previous post, we prepared our hosts for NSX. In this post we will configure the VXLAN parameters and deploy our host VTEPs.

What is VXLAN

Virtual Extensible LAN (VXLAN) is a network overlay technology designed to address the scalability limits of traditional VLAN-based networks. Essentially it takes multiple logical Layer 2 networks and encapsulates them so they can be carried over a single transport VLAN.
Because the VXLAN segments are encapsulated and traverse a single transport VLAN, the usual limit on the number of available VLANs (4094) is removed. The VXLAN header carries a 24-bit segment ID, so we can now have up to 16 million logical networks all running within a single transport VLAN.

The VXLAN network is used for Layer 2 logical switching across hosts, potentially spanning multiple underlying Layer 3 domains.
You configure VXLAN on a per-cluster basis, where you map each cluster that is to participate in NSX to a vSphere distributed switch (VDS).
When you map a cluster to a distributed switch, each host in that cluster is enabled for logical switches.
When you configure VXLAN networking, you must have a vSphere Distributed Switch, a VLAN ID, an MTU size of at least 1600, an IP addressing mechanism (DHCP or IP pool), and a NIC teaming policy.

The MTU actually required for VXLAN is 1550, as the encapsulation adds additional header bytes to the frame, however to keep things easy we use 1600 as the minimum.
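To see where the 1550 figure comes from, here’s the rough per-frame overhead for an untagged inner frame (add 4 bytes if the inner frame carries a VLAN tag):

  1500 bytes  inner payload (standard VM MTU)
+   14 bytes  inner Ethernet header
+    8 bytes  VXLAN header
+    8 bytes  UDP header
+   20 bytes  outer IPv4 header
= 1550 bytes  required on the transport network (the outer Ethernet header doesn’t count towards the MTU)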

VTEP

The VTEPs are VXLAN Tunnel Endpoints; these are the VMkernel interfaces where VXLAN encapsulation and decapsulation is done. When a VM on a logical switch communicates, the packet is encapsulated into a VXLAN frame and sent to the VTEP on the remote host where the target VM resides; there it is de-encapsulated and delivered to the VM.
VTEPs can be either VMkernel ports or hardware VTEPs. As I’m not made of money we’ll use VMkernel VTEPs in my lab, which is also what you’ll see used in most production systems.

The Build

The first thing we need to do is create our VTEP IP pool. This can be done as part of the host VXLAN configuration or it can be done ahead of time; I tend to do it first rather than during the VTEP install, but either way is fine.
From the NSX vSphere interface browse to ‘Groups and Tags’ then onto the ‘IP Pools’ tab.
We can see our Controller pool already there so we need to now add our VTEP pool.
You can also assign VTEP IPs via DHCP, however this is normally only done when deploying a metro cluster, where it is a requirement. For a normal NSX or cross-site NSX deployment we usually use a static pool, as this removes the dependency on the customer providing a DHCP server.
Click ‘+ ADD’

Give the pool a name, set the Gateway IP and prefix length.
Adding a DNS server and DNS suffix are optional.
Finally add the IP pool range. This should cover the number of VTEPs you are planning for the existing hosts, plus any hosts you plan to add during the lifetime of the deployment.
Since we will be configuring our lab with two VTEPs per host and we have six hosts per site, we need a minimum of 12 VTEP IPs. When we deploy our second site it will use a VTEP pool configured locally to it, so this pool only needs to cover Site A hosts.
Once you’ve configured everything click ‘ADD’
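If you’d rather script the pool creation than click through the UI, the same object can be created via the NSX Manager REST API. This is just a sketch: the manager hostname, pool name, gateway and address range below are made-up lab values, so adjust them to suit your environment.

cat > vtep-pool.xml <<'EOF'
<ipamAddressPool>
  <name>VTEP-Pool-SiteA</name>
  <prefixLength>24</prefixLength>
  <gateway>10.50.1.1</gateway>
  <ipRanges>
    <ipRangeDto>
      <startAddress>10.50.1.11</startAddress>
      <endAddress>10.50.1.30</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>
EOF

curl -k -u admin -X POST -H "Content-Type: application/xml" \
  -d @vtep-pool.xml \
  https://nsxmanager.lab.local/api/2.0/services/ipam/pools/scope/globalroot-0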

Our pool now shows 0/20 used IPs.

We are now ready to configure our hosts, so go to ‘Installation and Upgrade’, select your compute cluster, click ‘Actions’ and then ‘Configure VXLAN’.

Ensure the correct switch is selected; I only have a single vDS configured on the compute cluster so it’s the only one shown. Set the VLAN ID, for me that’s VLAN 50, which is our transport VLAN. Configure the MTU; remember 1600 is the minimum, but you can use a larger MTU if you wish, just ensure the same MTU is configured across the environment from the vDS to the physical network.
The vmkNIC IP Addressing section shows that we can select DHCP or IP Pool, you can also create a New IP Pool from here which will open up the New IP Pool screen we used earlier.
As we already have an IP pool just select it from the dropdown, be sure to pick the VTEP pool and not the controller pool.
Finally we need to decide on a teaming policy. This is largely dependent on how your host NICs are configured and how many you have; one, two, or a LAG being the normal configurations. I would not advise three or four NICs unless they are in a LAG, as that adds more than two VTEPs, which complicates the build and troubleshooting. For all my customer designs we have only ever had one or two VTEPs.
I’ll be using two VTEPs in my lab, which is the most common deployment method, and I will be using Load Balance – Source ID (SRCID).

Selecting this teaming policy sets the VTEP count to 2.
Now click ‘SAVE’ and the compute hosts will be configured for VXLAN.

Once the configuration is complete, if we pick a host and click ‘VIEW DETAILS’ we can see the VMkernels that have been installed.
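You can see the same thing from the host shell. The VTEP VMkernels live in a dedicated vxlan TCP/IP stack, so the following commands, run on one of the prepared hosts, should list the new interfaces and their addresses:

[root@vcomp01:~] esxcli network ip interface list --netstack=vxlan
[root@vcomp01:~] esxcli network ip interface ipv4 get --netstack=vxlan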

We can also view the configuration for the cluster.
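NSX-prepared hosts also expose a vxlan namespace under esxcli that summarises the vDS-level VXLAN configuration (VLAN, MTU, gateway and so on). The exact output varies between NSX versions, but the command itself is simply:

[root@vcomp01:~] esxcli network vswitch dvs vmware vxlan list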

While we are here we may as well set the IP detection type.
IP detection is a mechanism that NSX uses to associate IPs with VMs for use with the distributed firewall.
There are three mechanisms: VMware Tools, ARP snooping and DHCP snooping.
VMware Tools is used by default, but we also want to enable ARP snooping, so from the ‘Actions’ menu select ‘Change IP Detection Type’.

Tick ARP Snooping and click ‘SAVE’

We now need to repeat the process for the Edge cluster.

If we go to our vDS and look for the vmknicPg port group we can view all the VTEP VMkernel ports for our cluster hosts.

As a test to ensure our overlay is working we can ping from one host to another using the vxlan netstack. Setting the -s switch to 1572 means the packet (1572 bytes of data plus 8 bytes of ICMP header and 20 bytes of IP header) fills the full 1600-byte MTU, and the -d switch prevents fragmentation, so this tests the MTU as well as basic connectivity.

[root@vcomp01:~] vmkping ++netstack=vxlan 10.50.1.15 -d -s 1572
PING 10.50.1.15 (10.50.1.15): 1572 data bytes
1580 bytes from 10.50.1.15: icmp_seq=0 ttl=64 time=0.848 ms
1580 bytes from 10.50.1.15: icmp_seq=0 ttl=64 time=0.885 ms (DUP!)
1580 bytes from 10.50.1.15: icmp_seq=0 ttl=64 time=0.993 ms (DUP!)
1580 bytes from 10.50.1.15: icmp_seq=0 ttl=64 time=1.007 ms (DUP!)
1580 bytes from 10.50.1.15: icmp_seq=1 ttl=64 time=0.577 ms
1580 bytes from 10.50.1.15: icmp_seq=2 ttl=64 time=0.560 ms

--- 10.50.1.15 ping statistics ---
3 packets transmitted, 3 packets received, +3 duplicates, 0% packet loss
round-trip min/avg/max = 0.560/1.623/1.007 ms
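Since each host has two VTEPs it’s worth repeating the test from each of them. Assuming vmk3 and vmk4 are your VTEP VMkernels (check the interface list from earlier, your numbering may differ), the -I switch forces the ping out of a specific interface:

[root@vcomp01:~] vmkping ++netstack=vxlan -I vmk3 -d -s 1572 10.50.1.15
[root@vcomp01:~] vmkping ++netstack=vxlan -I vmk4 -d -s 1572 10.50.1.15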

That’s it, our hosts are now ready for further configuration.
NSX-V Lab Part 9: NSX-V Segment ID
