Intro
NSX-T 3.1 is now GA. It's a pretty big update, bringing the much-anticipated standby Global Manager and, with it, production-ready support for Federation, so I expect to be busy designing and building more customer solutions using Federation in the upcoming months.
For the full release notes go to https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-31-Release-Notes.html
What’s New
NSX-T Data Center 3.1 provides a variety of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas:
- Cloud-scale Networking: Federation enhancements, Enhanced Multicast capabilities.
- Move to Next Gen SDN: Simplified migration from NSX-V to NSX-T.
- Intrinsic Security: Distributed IPS, FQDN-based Enhancements
- Lifecycle and monitoring: NSX-T support with vSphere Lifecycle Manager (vLCM), simplified installation, enhanced monitoring, search and filtering.
- Inclusive terminology: In NSX-T 3.1, as part of a company-wide effort to remove instances of non-inclusive language in our products, the NSX team has made changes to some of the terms used in the product UI and documentation. APIs, CLIs, and logs still use legacy terms.
In addition to these enhancements, many other capabilities are added in every part of the product. More details on NSX-T 3.1 new features and enhancements are available in the NSX-T Data Center 3.1.0 release.
Federation
- Support for standby Global Manager Cluster
- Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.
- With the support of Federation upgrade and Standby GM, Federation is now considered production ready.
L2 Networking
Change the display name for TCP/IP stack: The netstack keys remain “vxlan” and “hyperbus” but the display name in the UI is now “nsx-overlay” and “nsx-hyperbus”.
- The display name will change in both the list of Netstacks and list of VMKNICs
- This change will be visible with vCenter 6.7
Improvements in L2 Bridge Monitoring and Troubleshooting
- Consistent terminology across documentation, UI and CLI
- Addition of new CLI commands to get summary and detailed information on L2 Bridge profiles and stats
- Log messages to identify the bridge profile, the reason for the state change, as well as the logical switch(es) impacted
Support TEPs in different subnets to fully leverage different physical uplinks
A Transport Node can have multiple host switches attaching to several Overlay Transport Zones. However, previously the TEPs for all those host switches had to have an IP address in the same subnet. This restriction has been lifted, allowing you to pin different host switches to different physical uplinks that belong to different L2 domains.
Improvements in IP Discovery and NS Groups: IP Discovery profiles can now be applied to NS Groups simplifying usage for Firewall Admins.
L3 Networking
Policy API enhancements
- Ability to configure BFD peers on gateways and forwarding up timer per VRF through policy API.
- Ability to retrieve a gateway's proxy ARP entries through the policy API.
Multicast
NSX-T 3.1 is a major release for Multicast, which extends its feature set and confirms its status as enterprise ready for deployment.
- Support for Multicast Replication on the Tier-1 gateway. This allows you to turn on multicast for a Tier-1 with a Tier-1 Service Router (a mandatory requirement) and attach multicast receivers and sources to it.
- Support for IGMPv2 on all downlinks and uplinks from Tier-1
- Support for PIM-SM on all uplinks (config max supported) between each Tier-0 and all TORs (protection against TOR failure)
- Ability to run Multicast in A/S and Unicast ECMP in A/A from Tier-1 → Tier-0 → TOR
- Please note that Unicast ECMP will not be supported from ESXi host → T1 when it is attached to a T1 which also has Multicast enabled.
- Support for static RP programming and learning through BSR (PIM Bootstrap Router), and support for multiple static RPs
- Distributed Firewall support for Multicast Traffic
- Improved Troubleshooting: This adds the ability to configure IGMP Local Groups on the uplinks so that the Edge can act as a receiver. This will greatly help in triaging multicast issues by being able to attract multicast traffic of a particular group to Edge.
Edge Platform and Services
- Inter-TEP communication within the same host: the Edge TEP IP can be on the same subnet as the local hypervisor TEP.
- Support for redeployment of Edge nodes: a defunct Edge node, VM or physical server, can be replaced with a new one without requiring it to be deleted.
- NAT connection limit per Gateway: the maximum number of NAT sessions can be configured per Gateway.
Firewall
- Improvements in FQDN-based Firewall: You can define FQDNs that can be applied to a Distributed Firewall. You can either add individual FQDNs or import a set of FQDNs from CSV files.
Firewall Usability Features
- Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.
- Enhanced Search and Filtering: Improved search indexing and filtering options for firewall rules based on IP ranges.
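As a rough sketch of what post-processing an exported rule CSV might look like, the snippet below filters allow rules out of an export. The column names here are hypothetical illustrations; the actual export format is defined by NSX.

```python
import csv
import io

# Hypothetical excerpt of an exported rule set; the real column names
# come from the NSX firewall export, not from this example.
exported = """Name,Source,Destination,Service,Action
allow-web,any,10.0.0.0/24,HTTPS,ALLOW
drop-all,any,any,any,DROP
"""

# DictReader keys each row by the header line, so rules become dicts.
rules = list(csv.DictReader(io.StringIO(exported)))
allow_rules = [r["Name"] for r in rules if r["Action"] == "ALLOW"]
print(allow_rules)  # ['allow-web']
```

The same round trip works in reverse: edit the rows offline and re-import the CSV through the UI.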
Distributed Intrusion Detection/Prevention System (D-IDPS)
Distributed IPS
- NSX-T now has a Distributed Intrusion Prevention System. You can block threats based on signatures configured for inspection.
- Enhanced dashboard to provide details on threats detected and blocked.
- IDS/IPS profile creation is enhanced with Attack Types, Attack Targets, and CVSS scores to create more targeted detection.
Load Balancing
- HTTP server-side keep-alive: an option to keep a one-to-one mapping between the client-side connection and the server-side connection; the backend connection is kept open until the frontend connection is closed.
- HTTP cookie security compliance: Support for “httponly” and “secure” options for HTTP cookie.
- A new diagnostic CLI command: The single command captures various troubleshooting outputs relevant to Load Balancer.
VPN
- TCP MSS Clamping for L2 VPN: The TCP MSS Clamping feature allows L2 VPN session to pass traffic when there is MTU mismatch.
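To illustrate why MSS clamping resolves an MTU mismatch, here is a minimal sketch. The header sizes assume IPv4 and TCP without options, and the 100-byte tunnel overhead is an illustrative assumption, not a figure from NSX.

```python
IP_HEADER = 20   # bytes, IPv4 header without options
TCP_HEADER = 20  # bytes, TCP header without options

def clamped_mss(path_mtu):
    """Largest TCP payload per segment that fits the path MTU
    without fragmentation."""
    return path_mtu - IP_HEADER - TCP_HEADER

# A plain 1500-byte Ethernet path carries a 1460-byte MSS...
print(clamped_mss(1500))  # 1460

# ...but if a tunnel adds (say) 100 bytes of encapsulation overhead,
# the effective path MTU shrinks, so the advertised MSS must be
# clamped lower for traffic to pass without fragmentation.
print(clamped_mss(1500 - 100))  # 1360
```

MSS clamping rewrites the MSS option in the TCP handshake so endpoints never send segments larger than the tunnel can carry.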
Automation, OpenStack and API
- NSX-T Terraform Provider support for Federation: The NSX-T Terraform Provider extends its support to NSX-T Federation. This allows you to create complex logical configurations with networking, security (segment, gateways, firewall etc.) and services in an infra-as-code model. For more details, see the NSX-T Terraform Provider release notes.
- Conversion to NSX-T Policy Neutron Plugin for OpenStack environment consuming Management API: Allows you to move an OpenStack with NSX-T environment from the Management API to the Policy API. This gives you the ability to move an environment deployed before NSX-T 2.5 to the latest NSX-T Neutron Plugin and take advantage of the latest platform features.
- Ability to change the order of NAT and firewall on an OpenStack Neutron Router: this gives you the choice, for your deployment, of the order of operations between NAT and the firewall. At the OpenStack Neutron Router level (mapped to a Tier-1 in NSX-T), the order of operations can be defined as either NAT then firewall, or firewall then NAT. This is a global setting for a given OpenStack platform.
- NSX Policy API Enhancements: Ability to filter and retrieve all objects within a subtree of the NSX Policy API hierarchy. In previous versions, filtering was only possible from the root of the tree (policy/api/v1/infra?filter=Type-); you can now retrieve all objects from sub-trees instead. For example, a network admin can look at all Tier-0 configurations with /policy/api/v1/infra/tier-0s?filter=Type- instead of specifying all the Tier-0 related objects from the root.
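As a rough illustration of the subtree filtering described above, here is a hypothetical Python sketch that builds such a request. The manager hostname, credentials, and the exact filter value ("Tier0") are assumptions for illustration; check the NSX Policy API reference for the real filter syntax.

```python
from urllib.parse import quote, urlencode
from urllib.request import Request

def build_subtree_filter_url(manager, subtree, type_filter):
    """Build a Policy API GET URL that filters objects under a subtree
    instead of walking the whole tree from /infra."""
    path = "/policy/api/v1/infra/" + quote(subtree)
    query = urlencode({"filter": "Type-" + type_filter})
    return "https://" + manager + path + "?" + query

# Example: fetch Tier-0 related objects from the tier-0s subtree.
# "nsx-mgr.example.com" and the "Tier0" filter value are assumptions.
url = build_subtree_filter_url("nsx-mgr.example.com", "tier-0s", "Tier0")
req = Request(url, headers={"Authorization": "Basic <base64-credentials>"})
# urllib.request.urlopen(req) would execute the call against a live manager.
```

The point of the enhancement is that the same filter expression now works rooted at any subtree, not only at /infra.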
Operations
- NSX-T support with vSphere Lifecycle Manager (vLCM): starting with vSphere 7.0 Update 1, VMware NSX-T Data Center can be supported on a cluster that is managed with a single vSphere Lifecycle Manager (vLCM) image. As a result, NSX Manager can be used to install, upgrade, or remove NSX components on the ESXi hosts in a cluster that is managed with a single image.
- Hosts can be added and removed from a cluster that is managed with a single vSphere Lifecycle Manager and enabled with VMware NSX-T Data Center.
- Both VMware NSX-T Data Center and ESXi can be upgraded in a single vSphere Lifecycle Manager remediation task. The workflow is supported only if you upgrade from VMware NSX-T Data Center version 3.1.
- Compliance checks, remediation pre-check reports, and cluster remediation are all supported on a cluster that is managed with a single vSphere Lifecycle Manager image and enabled with VMware NSX-T Data Center.
- Simplification of host/cluster installation with NSX-T: through the “Getting Started” button in the VMware NSX-T Data Center user interface, simply select the cluster of hosts on which NSX should be installed, and the UI will prompt you with a network configuration recommended by NSX based on your underlying host configuration. This configuration can then be applied to the cluster, completing the entire installation in a single click after selecting the clusters. The recommended host network configuration is shown in the wizard with a rich UI, and any changes to the desired network configuration before NSX installation are dynamically updated, so users can refer to it as needed.
- Enhancements to in-place upgrades: several enhancements have been made to the VMware NSX-T Data Center in-place host upgrade process, like increasing the maximum number of virtual NICs supported per host, removing previous limitations, and reducing the data path downtime during in-place upgrades. Refer to the VMware NSX-T Data Center Upgrade Guide for more details.
- Reduction of VIB size in NSX-T: VMware NSX-T Data Center 3.1.0 has a smaller VIB footprint in all NSX host installations, so that you can install ESXi and other third-party VIBs alongside NSX on your hypervisors.
- Enhancements to Physical Server installation of NSX-T: to simplify the workflow of installing VMware NSX-T Data Center on physical servers, the entire end-to-end physical server installation process is now driven through the NSX Manager. Running Ansible scripts to configure host network connectivity is no longer required.
- ERSPAN support on a dedicated network stack with ENS: ERSPAN can now be configured on a dedicated network stack i.e., vmk stack and supported with the enhanced NSX network switch i.e., ENS, thereby resulting in higher performance and throughput for ERSPAN Port Mirroring.
- Singleton Manager with vSphere HA: NSX now supports the deployment of a single NSX Manager in production deployments. This can be used in conjunction with vSphere HA to recover a failed NSX Manager. Please note that the recovery time for a single NSX Manager using backup/restore or vSphere HA may be much longer than the availability provided by a cluster of NSX Managers.
- Log consistency across NSX components: Consistent logging format and documentation across different components of NSX so that logs can be easily parsed for automation and you can efficiently consume the logs for monitoring and troubleshooting.
- Support for Rich Common Filters: rich common filters are now supported for operations features like packet capture, port mirroring, IPFIX, and latency measurements, making these features more efficient to use. Previously, these features had either very simple filters, which were not always helpful, or no filters at all.
- CLI Enhancements: several CLI-related enhancements have been made in this release:
- CLI “get” commands will be accompanied with timestamps now to help with debugging
- GET / SET / RESET the Virtual IP (VIP) of the NSX Management cluster through CLI
- While debugging through the central CLI, run ping commands directly on the local machines, eliminating the extra steps of logging in to each machine to do the same
- View the list of core files on any NSX component through CLI
- Use the “*” operator now in CLI
- Commands for debugging L2Bridge through CLI have also been introduced in this release
- Distributed Load Balancer Traceflow: Traceflow now supports Distributed Load Balancer for troubleshooting communication failures from endpoints deployed in vSphere with Tanzu to a service endpoint via the Distributed Load Balancer.
Monitoring
- Events and Alarms
- Capacity Dashboard: Maximum Capacity, Maximum Capacity Threshold, Minimum Capacity Threshold
- Edge Health: Standby move to different edge node, Datapath thread deadlocked, NSX-T Edge core file has been generated, Logical Router failover event, Edge process failed, Storage Latency High, Storage Error
- IDS/IPS: NSX-IDPS Engine Up/Down, NSX-IDPS Engine CPU Usage exceeded 75%, NSX-IDPS Engine CPU Usage exceeded 85%, NSX-IDPS Engine CPU Usage exceeded 95%, Max events reached, NSX-IDPS Engine Memory Usage exceeded 75%, NSX-IDPS Engine Memory Usage exceeded 85%, NSX-IDPS Engine Memory Usage exceeded 95%
- IDFW: Connectivity to AD server, Errors during Delta Sync
- Federation: GM to GM Split Brain
- Communication: Control Channel to Transport Node Down, Control Channel to Transport Node Down for too Long, Control Channel to Manager Node Down, Control Channel to Manager Node Down for too Long, Management Channel to Transport Node Down, Management Channel to Transport Node Down for too Long, Manager FQDN Lookup Failure, Manager FQDN Reverse Lookup Failure
- ERSPAN for ENS fast path: Support port mirroring for ENS fast path.
- System Health Plugin Enhancements: System Health plugin enhancements and status monitoring of processes running on different nodes, ensuring the system is running properly through on-time detection of errors.
- Live Traffic Analysis & Tracing: a live traffic analysis tool to support bi-directional traceflow between on-prem and VMC data centers.
- Latency Statistics and Measurement for UA Nodes: Latency measurements between NSX Manager nodes per NSX Manager cluster and between NSX Manager clusters across different sites.
- Performance Characterization for Network Monitoring using Service Insertion: To provide performance metrics for network monitoring using Service Insertion.
Usability and User Interface
- Graphical Visualization of VPN: the Network Topology map now visualizes the VPN tunnels and sessions that are configured. This helps you quickly visualize and troubleshoot VPN configuration and settings.
- Dark Mode: NSX UI now supports dark mode. You can toggle between light and dark mode.
- Reducing Number of Clicks: With this UI enhancement, NSX-T now offers a convenient and easy way to edit Network objects.
Licensing
- Multiple license keys: NSX can now accept multiple license keys of the same edition and metric. This functionality allows you to maintain all your license keys without having to combine them.
- License Enforcement: NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users will be able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition will be restricted to only viewing the objects; create and edit will be disallowed.
- New VMware NSX Data Center Licenses: Adds support for new VMware NSX Firewall and NSX Firewall with Advanced Threat Prevention license introduced in October 2020, and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.
AAA and Platform Security
- Security Enhancements for Use of Certificates and Key Store Management: with this architectural enhancement, NSX-T offers a convenient and secure way to store and manage the multitude of certificates that are essential for platform operations, and to comply with industry and government guidelines. This enhancement also simplifies API use for installing and managing certificates.
- Alerts for Audit Log Failures: Audit logs play a critical role in managing cybersecurity risks within an organization and are often the basis of forensic analysis, security analysis and criminal prosecution, in addition to aiding with diagnosis of system performance issues. Complying with NIST-800-53 and industry-benchmark compliance directives, NSX offers alert notification via alarms in the event of failure to generate or process audit data.
- Custom Role Based Access Control: users want the ability to configure roles and permissions customized to their specific operating environment. The custom RBAC feature allows granular feature-based privilege customization, giving NSX customers the flexibility to enforce authorization based on least-privilege principles. This benefits users in fulfilling specific operational requirements or meeting compliance guidelines. Please note that in NSX-T 3.1, only policy-based features are available for role customization.
- FIPS – Interoperability with vSphere 7.x: cryptographic modules in use with NSX-T have been FIPS 140-2 validated since NSX-T 2.5. This change extends formal certification to incorporate module upgrades and interoperability with vSphere 7.0.
NSX Data Center for vSphere to NSX-T Data Center Migration
- Migration of NSX for vSphere Environment with vRealize Automation: The Migration Coordinator now interacts with vRealize Automation (vRA) in order to migrate environments where vRealize Automation provides automation capabilities. This will offer a first set of topologies which can be migrated in an environment with vRealize Automation and NSX-T Data Center. Note: This will require support on vRealize Automation.
- Modular Distributed Firewall Config Migration: the Migration Coordinator is now able to migrate firewall configuration and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This functionality allows a customer to migrate virtual machines (using vMotion) from one environment to the other while keeping their firewall rules and state.
- Migration of Multiple VTEP: The NSX Migration Coordinator now has the ability to migrate environments deployed with multiple VTEPs.
- Increase Scale in Migration Coordinator to 256 Hosts: The Migration Coordinator can now migrate up to 256 hypervisor hosts from NSX Data Center for vSphere to NSX-T Data Center.
- Migration Coordinator coverage of Service Insertion and Guest Introspection: The Migration Coordinator can migrate environments with Service Insertion and Guest Introspection. This will allow partners to offer a solution for migration integrated with complete migrator workflow.