vRealize Network Insight 6.1

vRealize Network Insight 6.1 | 14 Jan 2021 | Build 1610450081

vRealize Network Insight helps you build an optimized, highly available and secure network infrastructure across hybrid and multi-cloud environments. It provides network visibility and analytics to accelerate micro-segmentation security, minimize risk during application migration, optimize network performance and confidently manage and scale NSX, SD-WAN Velocloud, and Kubernetes deployments.

What’s New

vRealize Network Insight 6.1 and vRealize Network Insight Cloud (SaaS) deliver the latest in application and network visibility and analytics.

  • Customization: Pinboards to customize persistent dashboards
  • Multi-Cloud: Enhanced visibility with VMware Cloud on AWS, including traffic and dropped-packet metrics for the gateway's AWS Direct Connect, cross-VPC, and public interfaces
  • Assurance and Verification: Enhancements with in-context device configuration snapshots viewable from the topology map for easier troubleshooting
  • NSX-T: Data from NSX Intelligence can now be integrated for more application-centric network operations and troubleshooting visibility
  • VMware SD-WAN: New capabilities with analytics intent for better service-level agreement (SLA) monitoring and visibility with SD-WAN link utilization and metering

My Highlights

  • Integration with NSX-T Intelligence – I think vRNI does a fantastic job correlating all the data it ingests and now adding Intelligence as a data source will improve the visibility even more. When integration with NSX Intelligence is enabled, the flow record in vRealize Network Insight will now also display the Layer 7 service.
  • Intent-based improvements continue to show more and more value to users – I will definitely be testing the new Edge Uplink Utilization visibility.

Network Assurance and Verification

Layer 7 Service Information from NSX Intelligence

NSX-T Monitoring and Troubleshooting

VMware Cloud on AWS (VMC on AWS) 

  • Support for VMC T0 Router Interface Statistics which includes Rx Total Bytes, Rx Total Packets, Rx Dropped Packets, Tx Total Bytes, Tx Total Packets and Tx Dropped Packets for Public, Cross-VPC, and Direct Connect interfaces.

Physical Device Monitoring and Troubleshooting

  • Provides metric charts for better visualization and interaction with the metrics.


  • Provides alternative search suggestions when a search fails to show results.


  • Preserves filter state when pinning a widget to a pinboard
  • Ability to pin a search with no results to a pinboard
  • Ability to see other users’ pinboards in the Auditor role.


  • Shows different alert definitions in separate tabs for easy classification and better management of alerts. An alert definition (known as an event in earlier releases) refers to a problem or a change condition that the system detects.
  • Introduces the term alert to indicate an instance when the system detects a problem, a change, or violation of an intent.

vRealize Network Insight Platform

  • Supports web proxies for data sources (SD-WAN, AWS, Azure, ServiceNow, and VMC NSX Manager)
  • Shows information related to Platform and Collector VMs, such as IP address (name), last activity, status, and so on, on a single page
  • Introduces the following new pages and capabilities:
    • Adding or updating data sources 
    • Web proxies listing page 
    • Web proxies usage visibility 
    • Infrastructure and support page.


Source: vRealize Network Insight 6.1 Release Notes

VMware NSX-T Data Center 3.1.1

VMware NSX-T Data Center 3.1.1   |  27 January 2021  |  Build 17483185

What’s New

NSX-T Data Center 3.1.1 provides a variety of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas.

My Highlights
  • The introduction of OSPFv2 as a North/South routing protocol – customers have been asking for this for some time now. I think many customers have settled on eBGP by this point, but nonetheless I am sure this will still be handy for many customers.
  • NSX for vSphere is reaching end of support in January 2022 and many customers are scrambling to migrate to NSX-T. The improvements in the latest version of the Migration Coordinator built into NSX-T now include more supported current deployments, e.g. NSX for vSphere Cross-vCenter deployments, vRA-deployed blueprints, and modular options for selecting specific hosts and Distributed Firewall policies.
  • NSX Advanced Load Balancer (AKA Avi Networks)
  • NSX-T Cloud – Option to deploy the NSX management plane and control plane fully in Azure.
  • NVDS to Converged VDS – Now introducing a UI-based migration option for Transport Nodes from N-VDS to VDS with NSX-T. ***Note: only supported with VDS 7.0***
L3 Networking
  • OSPFv2 Support on Tier-0 Gateways
    • NSX-T Data Center now supports OSPF version 2 as a dynamic routing protocol between Tier-0 gateways and physical routers. OSPF can be enabled only on external interfaces, which can all be in the same OSPF area (standard area or NSSA), even across multiple Edge Nodes. This simplifies migration from existing NSX for vSphere deployments already using OSPF to NSX-T Data Center.
NSX Data Center for vSphere to NSX-T Data Center Migration
  • Support of Universal Objects Migration for a Single Site
    • You can migrate your NSX Data Center for vSphere environment deployed with a single NSX Manager in Primary mode (not secondary). As this is a single NSX deployment, the objects (local and universal) are migrated to local objects on a local NSX-T.  This feature does not support cross-vCenter environments with Primary and Secondary NSX Managers.
  • Migration of NSX-V Environment with vRealize Automation – Phase 2
    • The Migration Coordinator interacts with vRealize Automation (vRA) to migrate environments where vRealize Automation provides automation capabilities. This release adds additional topologies and use cases to those already supported in NSX-T 3.1.0.
  • Modular Migration for Hosts and Distributed Firewall
    • The NSX-T Migration Coordinator adds a new mode to migrate only the distributed firewall configuration and the hosts, leaving the logical topology (L3 topology, services) for you to complete. You can benefit from the in-place migration offered by the Migration Coordinator (hosts moved from NSX-V to NSX-T while going through maintenance mode, firewall states and memberships maintained, layer 2 extended between NSX for vSphere and NSX-T during migration) that lets you (or third-party automation) deploy the Tier-0/Tier-1 gateways and related services, giving greater flexibility in terms of topologies. This feature is available from UI and API.
  • Modular Migration for Distributed Firewall available from UI
    •  The NSX-T user interface now exposes the Modular Migration of firewall rules. This feature was introduced in 3.1.0 (API only) and allows the migration of firewall configurations, memberships and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This feature simplifies lift-and-shift migration where you vMotion VMs between an environment with hosts with NSX for vSphere and another environment with hosts with NSX-T by migrating firewall rules and keeping states and memberships (hence maintaining security between VMs in the old environment and the new one).
  • Fully Validated Scenario for Lift and Shift Leveraging vMotion, Distributed Firewall Migration and L2 Extension with Bridging
    • This feature supports the complete scenario for migration between two parallel environments (lift and shift) leveraging NSX-T bridge to extend L2 between NSX for vSphere and NSX-T, the Modular Distributed Firewall.
Identity Firewall
Advanced Load Balancer Integration
  •  Support Policy API for Avi Configuration
  • Service Insertion Phase 2
    • This feature supports the Transparent LB in NSX-T advanced load balancer (Avi). Avi sends the load balanced traffic to the servers with the client’s IP as the source IP. This feature leverages service insertion to redirect the return traffic back to the service engine to provide transparent load balancing without requiring any server side modification.
Edge Platform and Services
  • DHCPv4 Relay on Service Interface
    • Tier-0 and Tier-1 Gateways support DHCPv4 Relay on Service Interfaces, enabling a third-party DHCP server to be located on a physical network.
AAA and Platform Security
  • Guest Users – Local User Accounts: NSX customers integrate their existing corporate identity store to onboard users for normal operations of NSX-T. However, a limited set of local users remains essential for identity and access management in several scenarios: (1) bootstrapping and operating NSX during the early stages of deployment, before identity sources are configured; (2) when communication with or access to the corporate identity repository fails; and (3) managing NSX in a specific compliant state to meet industry or federal regulations. In such cases, local users are effective in bringing NSX-T back to normal operational status. To enable these use cases and ease of operations, two guest local users have been introduced in 3.1.1, in addition to the existing admin and audit local users. With this feature, the NSX admin has extended privileges to manage the lifecycle of these users (for example, password rotation), including the ability to customize and assign appropriate RBAC permissions. Note that the local user capability is available via API and UI on both NSX-T Local Managers (LM) and Global Managers (GM), but is unavailable on edge nodes in 3.1.1. The guest users are disabled by default, must be explicitly activated for use, and can be disabled at any time.
  • FIPS Compliant Bouncy Castle Upgrade: NSX-T 3.1.1 contains an updated version of FIPS compliant Bouncy Castle (v1.0.2.1). Bouncy Castle module is a collection of Java based cryptographic libraries, functions, and APIs. Bouncy Castle module is used extensively on NSX-T Manager. The upgraded version resolves critical security bugs and facilitates compliant and secure operations of NSX-T. 
NSX Cloud
  • NSX Marketplace Appliance in Azure: Starting with NSX-T 3.1.1, you have the option to deploy the NSX management plane and control plane fully in the public cloud (Azure only for NSX-T 3.1.1; AWS will be supported in a future release). The NSX management/control plane components and the NSX Cloud Public Cloud Gateway (PCG) are packaged as VHDs and made available in the Azure Marketplace. For a greenfield deployment in the public cloud, you also have the option to use a 'one-click' Terraform script to perform the complete installation of NSX in Azure.
  • NSX Cloud Service Manager HA: In the event that you deploy NSX management/control plane in the public cloud, NSX Cloud Service Manager (CSM) also has HA. PCG is already deployed in Active-Standby mode thereby enabling HA. 
  • NSX-Cloud for Horizon Cloud VDI enhancements: Starting with NSX-T 3.1.1, when using NSX Cloud to protect Horizon VDIs in Azure, you can install the NSX agent as part of the Horizon Agent installation in the VDIs. This feature also addresses one of the challenges with having multiple components ( VDIs, PCG, etc.) and their respective OS versions. Any version of the PCG can work with any version of the agent on the VM. In the event that there is an incompatibility, the incompatibility is displayed in the NSX Cloud Service Manager (CSM), leveraging the existing framework. 
  • UI-based Upgrade Readiness Tool for migration from NVDS to VDS with NSX-T Data Center
    • To migrate Transport Nodes from NVDS to VDS with NSX-T, you can use the Upgrade Readiness Tool present in the Getting Started wizard in the NSX Manager user interface. Use the tool to get recommended VDS with NSX configurations, create or edit the recommended VDS with NSX, and then automatically migrate the switch from NVDS to VDS with NSX while upgrading the ESX hosts to vSphere Hypervisor (ESXi) 7.0 U2.
  • Enable VDS in all vSphere Editions for NSX-T Data Center Users: Starting with NSX-T 3.1.1, you can utilize VDS in all versions of vSphere. You are entitled to use an equivalent number of CPU licenses to use VDS. This feature ensures that you can instantiate VDS.
Container Networking and Security
  • This release supports a maximum scale of 50 Clusters (ESXi clusters) per vCenter enabled with vLCM, on clusters enabled for vSphere with Tanzu as documented at configmax.vmware.com
Compatibility and System Requirements

For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

Duplicate certificate extensions not allowed: Starting with NSX-T 3.1.1, NSX-T will reject x509 certificates with duplicate extensions (or fields) following RFC guidelines and industry best practices for secure certificate management. Please note this will not impact certificates that are already in use prior to upgrading to 3.1.1. Otherwise, checks will be enforced when NSX administrators attempt to replace existing certificates or install new certificates after NSX-T 3.1.1 has been deployed.

Sourced from VMware NSX-T Data Center 3.1.1 Release Notes

VMware HCX 4.0

VMware HCX 4.0.0 | 23 FEB 2021 | Build 17667890 (Connector), Build 17667891 (Cloud)

What is VMware HCX

VMware HCX delivers secure and seamless app mobility and infrastructure hybridity across vSphere 6.0 and later versions, both on-premises and in the cloud. HCX abstracts the distinct private or public vSphere resources and presents a Service Mesh as an end-to-end entity. The HCX Interconnect can then provide high-performance, secure, and optimized multi-site connectivity to achieve infrastructure hybridity and present multiple options for bi-directional virtual machine mobility with technologies that facilitate the modernization of legacy data centers.

For more information, see the VMware HCX User Guide in the VMware Documentation Center.

First Change in HCX 4.0

Starting with the HCX 4.0 release, software versioning adheres to X.Y.Z Semantic Versioning, where X is the major version, Y is the minor version, and Z is the maintenance version. For more information about HCX software support, lifecycles, and version skew policies, see VMware HCX Software Support and Version Skew Policy. This document includes HCX Support Policy for Vacating Legacy vSphere Environments.
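The X.Y.Z scheme described above can be illustrated with a short sketch (Python here purely for illustration; `parse_semver` is my own hypothetical helper, not part of HCX):

```python
# Sketch of X.Y.Z semantic-version handling; parse_semver is a hypothetical
# helper, not part of HCX tooling.

def parse_semver(version: str):
    """Split an X.Y.Z string into (major, minor, maintenance) integers."""
    major, minor, maintenance = (int(part) for part in version.split("."))
    return (major, minor, maintenance)

# Tuples compare element by element, so newer releases sort higher.
assert parse_semver("4.0.0") > parse_semver("3.5.3")
assert parse_semver("4.0.1")[2] == 1  # Z is the maintenance version
```

The tuple comparison mirrors semantic-versioning precedence: major first, then minor, then maintenance.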

For information regarding HCX interoperability, see VMware Product Interoperability Matrix.

Upgrading to 4.0 from HCX 3.5.3 R-Releases

Prior to Semantic Versioning, HCX 3.5.3 releases were identified as “R” releases, such as R147. Review the following information prior to upgrading from R-based releases to HCX 4.0: 

  • Upgrading to HCX 4.0 is supported only from the following HCX Service Updates: R147 and R146.
  • HCX Service Update R146 is the oldest version suitable for upgrade. Customers are required to move to R146 or later to support upgrading to HCX 4.0. 
  • Assistance with upgrading from out-of-support releases will be on a best-effort basis.
  • VMware sends administrative messages to HCX systems that are running out of support and require upgrade. 

Note: 3.5.3/R146-R147 HCX Manager systems will display release code R148 as a prefix to the 4.0.0 version. This is a known condition during the transition. 4.0.x HCX Managers will not display the R release codes.

So What’s New in HCX 4.0

VMware HCX 4.0 is a major release that introduces new functionality and enhancements.

Here are some of my highlights, followed by all the details from the release notes:

  • NSX Security Tag Migration – HCX can now migrate NSX security tags associated with a virtual machine living in NSX for vSphere or NSX-T environments. This release does not migrate the security policies, only the security tags associated with the VM.
  • Usability – In-Service upgrade for Network Extension appliances, New event logs, estimated migration time metrics, Network Extension Metrics.
  • Improvements in OS assisted Migrations

Migration Enhancements

  • Mobility Migration Events  – The HCX Migration interface displays detailed event information with time lapse of events from the start of the migration operation. This information can help with understanding the migration process and diagnosing migration issues. See Viewing HCX Migration Event Details.
  • NSX Security Tag Migration – Transfers any NSX Security tags associated with the source virtual machine when selected as an Extended Option for vSphere to vSphere migrations. See Additional Migration Settings
  • Real-time Estimation of Bulk Migration – HCX analyzes migration metrics and provides an estimate of the time required to complete the transfer phase for every configured Bulk migration. The estimate is shown in the progress bar displayed on the Migration Tracking and Migration Management pages for each virtual machine migration while the transfer is underway. For more information, see Monitoring Migration Progress for Mobility Groups.
  • OS Assisted Migration Scaling – HCX now supports 200 concurrent VM disk migrations across a four Service Mesh scale out deployment. A single Service Mesh deploys one Sentinel Gateway (SGW) and its peer Sentinel Data Receiver (SDR), and continues to support up to 50 active replica disks each. In this Service Mesh scale out model for OSAM, the HCX Sentinel download operation is presented per Service Mesh. See OS Assisted Migration in Linux and Windows Environments.
  • Migrate Custom Attributes for vMotion  –  The option Migrate Custom Attributes is added to the Extended Options selections for vMotion migrations. 
  • Additional Disk Formats for Virtual Machines – For Bulk, vMotion, and RAV migration types, HCX now supports these additional disk formats: Thick Provisioned Eager Zeroed, Thick Provisioned Lazy Zeroed. 
  • Force Power-off for In-Progress Bulk Migrations – HCX now includes the option to Force Power-off in-progress Bulk migrations, including the later stages of migration. 

Network Extension Enhancements

  • In-Service Upgrade – The Network Extension appliance is a critical component of many HCX deployments, not only during migration but also after migration in a hybrid environment. In-Service upgrade is available for Network Extension upgrade or redeploy operations, and helps to minimize service downtime and disruptions to on-going L2 traffic. See In-Service Upgrade for Network Extension Appliances

    Note: This feature is currently available for Early Adoption (EA). The In-Service mode works to minimize traffic disruptions from the Network Extension upgrade or redeploy operation to only a few seconds or less. The actual time it takes to return to forwarding traffic depends on the overall deployment environment.
  • Network Extension Details – HCX provides connection statistics for each extended network associated with a specific Network Extension appliance. Statistics include bytes and packets received and transferred, bit rate and packet rate, and attached virtual machine MAC addresses for each extended network. See Viewing Network Extension Details.

Service Mesh Configuration Enhancements

  • HCX Traffic Type Selection in Network Profile – When setting up HCX Network Profiles, administrators can tag networks for a suggested HCX traffic type: Management, HCX Uplink, vSphere Replication, vMotion, or Sentinel Guest Network. These selections then appear in the Compute Profile wizard as suggestions of which networks to use in the configuration. See Creating a Network Profile.

Usability Enhancements

  • HCX now supports scheduling of migrations in DRAFT state directly from the Migration Management interface. (PR/2459044)
  • All widgets in the HCX Dashboard can be maximized to fit the browser window. (PR/2609007)
  • The topology diagram shown in the Compute Profile now reflects when a folder is selected as the HCX Deployment Resource. (PR/2518674)
  • In the Create/Edit Network Profile wizard, the IP Pool/Range entries are visually grouped for readability. (PR/2456501)

Source VMware HCX 4.0 Release Notes

NSX-T Time-Based Firewall Policy

VMware NSX-T Distributed Firewall (DFW) offers L2 to L7 stateful firewall capabilities. In my previous blog, I covered the capability to create policies matching FQDNs/URLs. This blog further expands on the NSX-T DFW capabilities, focusing on time-based firewall policies.

With time-based firewall policies, security administrators can restrict traffic from a source to a destination for a configured time period – for example, to restrict access to certain resources during specific hours.

Time windows apply to a firewall policy section, and all the rules in it. Each firewall policy section can have one time window. The same time window can be applied to more than one policy section. If you want the same rule applied on different days or different times for different sites, you must create more than one policy section. Time-based rules are available for distributed and gateway firewalls on both ESXi and KVM hosts.
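As a rough illustration of these constraints (a toy model of my own, not the NSX-T object model or API), each policy section holds at most one time window, while a single window may be shared by several sections:

```python
from dataclasses import dataclass
from typing import Optional

# Toy model (not the NSX-T API) of the constraints above: each policy
# section holds at most one time window, but one window may be shared.

@dataclass(frozen=True)
class TimeWindow:
    name: str
    days: tuple
    start: str  # "HH:MM"
    end: str    # "HH:MM"

@dataclass
class PolicySection:
    name: str
    time_window: Optional[TimeWindow] = None  # at most one per section

weekday_morning = TimeWindow("Weekday-AM", ("Mon", "Tue", "Wed"), "08:30", "09:00")

# The same window applied to two different policy sections.
section_a = PolicySection("Time-Based-Policy", weekday_morning)
section_b = PolicySection("Another-Section", weekday_morning)
assert section_a.time_window is section_b.time_window
```

If you need the same rules at different times, the model makes clear why a second policy section (with its own window) is required.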

This demonstration and blog are based on NSX-T 3.1.1.


As per the VMware documentation, the following needs to be in place:

Network Time Protocol (NTP) is an Internet protocol used for clock synchronization between computer clients and servers. NTP service must be running on each transport node when using time-based rule publishing.

If the time zone is changed on the edge transport node after the node is deployed, reload the edge node or restart the data plane for the time-based gateway firewall policy to take effect.

It is highly recommended to enable NTP on the network and security appliances in your environment; it is really helpful for troubleshooting and monitoring purposes.

At the time of writing, time-based policies are supported only on NSX-T appliances configured with the time zone set to UTC.

Getting Started

I will start with configuring NTP on my NSX-T setup to confirm that the prerequisites are in place and then cover enabling Time-Based policies.

Configuring NTP on Appliance and Transport Nodes

Configure NTP on an appliance

Some system configuration tasks must be done using the command line or API. We will do this from the NSX-T CLI. The following commands should be run on the NSX-T Manager appliance and the NSX-T Edge appliances:

  • Set the system time zone: set timezone <timezone>
  • Set the NTP server: set ntp-server <ntp-server>
  • Set a DNS server: set name-servers <dns-server>
  • Set the DNS search domain: set search-domains <domain>

Open a terminal and SSH as admin to the management IP/Virtual IP of the NSX-T Manager and/or the NSX-T Edge devices. In my lab the NTP and DNS service is running on

  • nsx-dc-01> set timezone UTC
  • nsx-dc-01> set ntp-server
  • nsx-dc-01> set name-servers
  • nsx-dc-01> set search-domains vmwdxb.com

After applying this configuration on the NSX-T Manager and NSX-T Edge node(s), the time should match the time of day on the NTP source. The NSX-T Edge appliance uses its management IP address as the source for NTP/DNS lookups, so confirm that you have network connectivity from these interfaces to the NTP/DNS servers and open any firewalls if needed. NTP uses UDP port 123.
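Since NTP reachability over UDP 123 is a common stumbling block, here is a minimal sketch of a reachability check (assuming Python is available on a jump host; the server IP in the usage example is hypothetical). It sends a 48-byte SNTP client request and waits briefly for a reply:

```python
import socket

# Minimal SNTP (RFC 4330) reachability sketch: send a 48-byte client
# request over UDP port 123 and wait briefly for a reply.

def build_sntp_request() -> bytes:
    # First byte: LI=0, VN=3, Mode=3 (client) -> 0x1b, padded to 48 bytes.
    return b"\x1b" + 47 * b"\x00"

def ntp_reachable(server: str, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(build_sntp_request(), (server, 123))
            reply, _ = sock.recvfrom(512)
            # A valid SNTP server response is at least 48 bytes.
            return len(reply) >= 48
        except OSError:
            return False

# Example (hypothetical NTP server IP): ntp_reachable("192.168.10.5")
```

A `False` result usually means a firewall is blocking UDP 123 or the NTP service is not running on the target.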

***If you are changing the timezone on the edge devices, you will need to restart the dataplane service (restart service dataplane) or reload the edge appliance.***

Alternatively, the NSX-T Manager UI allows NTP configuration to be applied to all nodes using the Node Profile under the Fabric configurations.

NTP Configuration using Node Profile

To configure NTP for an ESXi host, see the topic Synchronize ESXi Clocks with a Network Time Server in vSphere Security, or configure it through vCenter if your hosts are managed by vCenter.

Confirm that the ESXi hosts are synchronised with NTP using ntpq -pn from the CLI – confirm that there is a * next to the NTP server IP.

Output from ESXI CLI
Configuring Time-Based Security Policy

Create a firewall policy or edit an existing one. I pre-configured a policy section with one DFW rule allowing two desktops access to anything. In the steps below, I will enable the time-based firewall on the Time-Based-Policy DFW section.

  • (Step 1) Click the clock icon on the firewall policy you want to have a time window.
Time Icon is to the right of the policy

A time window appears

NSX-T Time Window
  • (Step 2) Click Add New Time Window and enter a name
  • (Step 3) Select a time zone: UTC (Coordinated Universal Time) or the local time of the transport node. The distributed firewall supports only UTC with the NTP service enabled; changing the time zone configuration is not supported.
  • (Step 4) Select the frequency of the time window – Weekly or One time.
  • (Step 5) Select the days of the week that the time window takes effect.
    • NSX-T Data Center supports configuring weekly UTC time windows for the local time zone when the entire time window for the local time zone falls within the same day as the UTC time zone. For example, you cannot configure a time window in UTC for 7am-7pm PDT, because it maps to 2pm UTC through 2am UTC of the next day.
  • (Step 6) Select the beginning and ending dates for the time window, and the times the window will be in effect.
Time Based Configuration
  • (Step 7) Click Save
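The Step 5 constraint about UTC windows can be sketched as a quick check (my own helper, not an NSX-T API) for whether a local-time window stays within a single UTC day:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Sketch (my own helper, not an NSX-T API): does a local-time window fall
# within a single UTC day? The documented 7am-7pm PDT example does not.

def utc_window_same_day(start_local: datetime, end_local: datetime) -> bool:
    start_utc = start_local.astimezone(ZoneInfo("UTC"))
    end_utc = end_local.astimezone(ZoneInfo("UTC"))
    return start_utc.date() == end_utc.date()

pacific = ZoneInfo("America/Los_Angeles")
start = datetime(2021, 3, 15, 7, 0, tzinfo=pacific)   # 7am PDT -> 14:00 UTC
end = datetime(2021, 3, 15, 19, 0, tzinfo=pacific)    # 7pm PDT -> 02:00 UTC, next day
assert not utc_window_same_day(start, end)
```

Windows whose UTC equivalent crosses midnight cannot be expressed as a single weekly window.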

In my demonstration I have set the policy to be active daily, starting from 1 March 2021 until 1 April 2021, and applied only between 08:30 and 09:00 in the morning.

***Take note: the minimum configurable time range is 30 minutes, and ranges are supported only in 30-minute blocks.***
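That constraint can be expressed as a small validation sketch (my own helper, not part of NSX-T): the window must be at least 30 minutes long and a whole multiple of 30 minutes.

```python
from datetime import time

# Sketch of the note above (my own helper, not part of NSX-T): the window
# must be at least 30 minutes and a whole multiple of 30 minutes.

def valid_window(start: time, end: time) -> bool:
    duration = (end.hour * 60 + end.minute) - (start.hour * 60 + start.minute)
    return duration >= 30 and duration % 30 == 0

assert valid_window(time(8, 30), time(9, 0))       # the 08:30-09:00 demo window
assert not valid_window(time(8, 30), time(8, 45))  # only 15 minutes
```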

(Step 8) Click the check box next to the policy section you want to have a time window. Then click the clock icon.

Click the radio button on the top left next to Time Window configured
  • (Step 9) Select the time window you want to apply, and click Apply.
  • (Step 10) Click Publish. The clock icon for the section turns green.
Published Policy

When you publish the policies, NSX-T Manager checks and confirms that NTP is correctly configured and running on the transport nodes before the time window is applied. The clock icon turns green when the policy is successfully applied. If you click Success in the firewall section, you can see the transport nodes where this policy has been pushed successfully.

Time-Based DFW Section applied to the ESXi Transport Nodes

For the first publication of a time-based rule, rule enforcement begins in less than 2 minutes. After the rules are deployed, enforcement per the time window is instantaneous.

Outside of the configured time window, NSX-T prevented access to the Internet for my demo desktop.

Outside of the permitted time window configured

As the clock ticked into the open window, I refreshed the browser and the page loaded successfully.

Access to http://www.vmware.com

From LogInsight you can see that it is indeed my DFW rule number 3054 allowing the access.

LogInsight Output
Some side notes

When I published the DFW policy changes the first time, an error occurred and NSX-T Manager reported that my transport nodes (ESXi hosts) did not have a valid NTP service running.

NSX-T fails to publish the Time Based Policy

Although I had NTP configured on the ESXi hosts pointing to a Win2012 server in my lab, NTP was not actually synchronising with the Win2012 server. So I switched all my ESXi nodes and NSX-T nodes to point to an Ubuntu virtual machine that I had also configured as an NTP server, which resolved the error above.

NTP Not Synchronised on ESXi host with Win2012

You can see from the screenshot above, there is no * next to the IP

Working NTP Synchronisation

Thanks for taking the time to read this blog; hopefully you found it useful. It definitely helped me get a better understanding of this feature.

NSX-T Filtering Specific Domains (FQDN/URLs)

VMware NSX-T Distributed Firewall (DFW) offers L2 to L7 stateful firewall capabilities. Most NSX-T operators are fairly comfortable creating L4 policies in the quest to achieve the “zero-trust” model.

In this blog I wanted to take this one step further and explore the capabilities of using the DFW to enforce policies matching L7 FQDNs/URLs. I will demonstrate how to use some built-in FQDNs, and how to add custom FQDNs, to create security policies restricting outbound access from virtual workloads attached to NSX-T overlay networks.

A fully qualified domain name (FQDN) is the complete domain name for a specific host on the Internet. FQDNs are used in firewall rules to allow or reject traffic going to specific domains.

NSX-T Data Center supports custom FQDNs that are defined by an administrator on top of an existing pre-defined list of FQDNs.

This demonstration and blog are based on NSX-T 3.1.1.


As per the VMware documentation, the following needs to be in place:

You must set up a DNS rule first, and then the FQDN allowlist or denylist rule below it. This is because NSX-T Data Center uses DNS Snooping to obtain a mapping between the IP address and the FQDN. SpoofGuard should be enabled across the switch on all logical ports to protect against the risk of DNS spoofing attacks. A DNS spoofing attack is when a malicious VM can inject spoofed DNS responses to redirect traffic to malicious endpoints or bypass the firewall.
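Why the DNS rule must sit above the FQDN rule can be illustrated with a toy model of DNS snooping (this is an illustration of the concept only, not NSX-T internals; the IPs are from documentation ranges):

```python
# Toy illustration (not NSX-T internals) of why the DNS rule must sit above
# the FQDN rule: the firewall learns IP-to-FQDN mappings by snooping allowed
# DNS responses, then matches later connections against that cache.

dns_cache = {}

def snoop_dns_response(fqdn: str, ip: str) -> None:
    """Record the mapping learned from a permitted DNS answer."""
    dns_cache[ip] = fqdn

def fqdn_rule_matches(dest_ip: str, blocked_fqdn: str) -> bool:
    """An FQDN rule can only match IPs already seen in DNS responses."""
    return dns_cache.get(dest_ip) == blocked_fqdn

# Before any DNS traffic is snooped, the FQDN rule cannot match.
assert not fqdn_rule_matches("203.0.113.10", "itunes.apple.com")

# After the (allowed) DNS lookup is observed, the mapping exists.
snoop_dns_response("itunes.apple.com", "203.0.113.10")
assert fqdn_rule_matches("203.0.113.10", "itunes.apple.com")
```

If the DNS rule were below the FQDN rule (or DNS were blocked), the cache would never be populated and FQDN matching could not work.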

FQDN-based rules are retained during vMotion for ESXi hosts.

NOTE: ESXi and KVM hosts are supported. ESXi supports the denylisting (drop) action for URL rules. KVM supports the allowlisting feature.

Custom FQDN supports the following

  • Full FQDN names such as maps.google.com or myapp.corp.com
  • Partial REGEX with * at the beginning only such as *eng.northpole.com or *yahoo.com
  • FQDN name length up to 64 characters
  • FQDN names should end with the registered Top Level Domain (TLD) such as .com, .org, or .net
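These constraints can be captured in a small validation sketch (my own helper; the TLD list here is a tiny illustrative sample, not the full registry):

```python
import re

# Sketch of the custom-FQDN constraints listed above: full names, wildcard
# only at the very start, 64-character limit, and a trailing TLD. The TLD
# tuple is a small illustrative sample.

KNOWN_TLDS = (".com", ".org", ".net")

def valid_custom_fqdn(fqdn: str) -> bool:
    if len(fqdn) > 64:
        return False
    if not fqdn.endswith(KNOWN_TLDS):
        return False
    # A wildcard is allowed only as the very first character.
    if "*" in fqdn[1:]:
        return False
    body = fqdn.lstrip("*")
    return re.fullmatch(r"[A-Za-z0-9.-]+", body) is not None

assert valid_custom_fqdn("maps.google.com")
assert valid_custom_fqdn("*eng.northpole.com")
assert not valid_custom_fqdn("maps.*.google.com")  # wildcard not at the start
assert not valid_custom_fqdn("a" * 70 + ".com")    # over 64 characters
```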

Getting Started

  • (Step 1) From your browser, log in with admin privileges to an NSX Manager at https://
NSX-T Manager
  • (Step 2) Navigate to Security -> Distributed Firewall ->
NSX-T Distributed Firewall UI
  • (Step 3) Add a firewall policy section by following the steps in Add a Distributed Firewall. An existing firewall policy section can also be used.
Added Policy Section “FQDN Demo Policy”
  • (Step 4) Select the new or existing firewall policy section
  • (Step 5) Click Add Rule to create the DNS firewall rule first – I will create the firewall rule in the policy section created in step 3
    • Provide a Name for the rule – My example is using DNS Rule
    • Select the Services (DNS/UDP-DNS) – I included both for the purpose of the demo
    • Select the L7 Context Profile (DNS) – This is a system-generated context profile and is available in your deployment by default.
    • Applied To – Select the group as required, I left it default at this stage.
    • Action – Select Allow
DNS Firewall Rule
  • (Step 6) Click Add Rule again to set up the FQDN allowlisting or denylisting rule
  • (Step 7) Name the rule appropriately, such as, FQDN/URL Allowlist. Drag the rule under the DNS rule under this policy section.
Second Firewall rule created “FQDN-RULE-01”

***Take note: drag the newly created FQDN rule below the DNS rule.***

  • (Step 8) Provide the following details
    • Services: Click the edit icon and select the services you want to associate with this rule, for example, HTTP. [I selected HTTP & HTTPS]
    • Context Profiles: Click the edit icon, click Add Context Profile, and name the profile. In the Attributes column, select Set > Add Attribute > Domain (FQDN) Name. Select the Attribute Name/Values from the predefined list, or create a custom FQDN.
      • My custom Context Profile is named FQDN-Profile and I added the following built-in FQDN attributes to start my testing with:
Three FQDN attributes selected
  • Applied To: Select DFW or a group as required.
    • Action: Select Allow, Drop, or Reject.
      • I am going to set the action to Drop to prevent access to the three selected FQDNs.
  • (Step 9) Publish
FQDN Firewall
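The two rules built in the UI above can be sketched as NSX-T Policy API request bodies. This is a minimal illustration under stated assumptions — the rule IDs and the exact service/context-profile paths are examples to verify against your deployment, not values taken from the post:

```python
# Sketch of the two DFW rules as Policy API bodies, e.g. for
# PUT /policy/api/v1/infra/domains/default/security-policies/<policy>/rules/<rule>.
# Service and context-profile paths below are illustrative assumptions.
def dfw_rule(rule_id, services, profiles, action):
    return {
        "id": rule_id,
        "source_groups": ["ANY"],
        "destination_groups": ["ANY"],
        "services": services,
        "profiles": profiles,          # L7 context-profile paths
        "scope": ["ANY"],              # the "Applied To" field
        "action": action,
    }

# Step 5: allow DNS traffic with the system DNS context profile.
dns_rule = dfw_rule(
    "DNS-Rule",
    ["/infra/services/DNS", "/infra/services/DNS-UDP"],
    ["/infra/context-profiles/DNS"],
    "ALLOW",
)

# Steps 6-9: drop HTTP/HTTPS matching the FQDN profile; this rule must be
# ordered below the DNS rule in the policy section.
fqdn_rule = dfw_rule(
    "FQDN-RULE-01",
    ["/infra/services/HTTP", "/infra/services/HTTPS"],
    ["/infra/context-profiles/FQDN-Profile"],
    "DROP",
)
```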


I am using a virtual desktop connected to my NSX-T environment and will use the Chrome browser to access some websites to confirm that the policy I created prevents access to the selected FQDNs. I am going to start off by setting my FQDN policy to Allow so that I can confirm that connectivity to itunes.apple.com does indeed work.

Next I am going to access itunes.apple.com to confirm connectivity

itunes.apple.com is re-written to apple.com/itunes/

Now I will set my security policy back to Drop and test this page again.

Deny DFW Rule

Now with the FQDN rule set to Drop, my access to itunes.apple.com has been restricted by the NSX-T DFW.

Testing http://itunes.apple.com

Just to make sure that the NSX-T DFW is actually dropping the traffic, I logged in to my LogInsight server and filtered for dropped-traffic logs.

LogInsight Dropping the FQDN Data

Custom FQDN Demo

In the demonstration above I used some of the built-in FQDNs that ship with NSX-T. Now I will create a custom FQDN of my choice and perform the testing again.

I am going to add another DFW rule named FQDN-Rule-02 to my policy section – follow from step 6, but when you get to step 8, instead of selecting one of the out-of-the-box FQDNs in the context profile, click the three dots on the right-hand side.

Context Profile Attributes

Once you click the three dots and hit ADD FQDN, the pop-up shown below opens; enter your custom FQDN/URL and click Save.

Take Note: Enter the domain name in the form *[hostname].[domain]

Custom FQDN Created

Now that you have selected your custom FQDN attribute, click ADD, then Apply, and save the Context Profile.

Then click Apply

Custom Context Profile
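The custom context profile created above can also be expressed as a Policy API body. A sketch, assuming the `DOMAIN_NAME` attribute key from the PolicyContextProfile schema; the profile ID is a made-up example:

```python
# Sketch of a custom FQDN context profile, e.g. for
# PUT /policy/api/v1/infra/context-profiles/<profile-id>.
def fqdn_context_profile(profile_id, fqdns):
    return {
        "id": profile_id,
        "display_name": profile_id,
        "attributes": [{
            "key": "DOMAIN_NAME",   # attribute key NSX-T uses for FQDN matching
            "datatype": "STRING",
            "value": fqdns,         # e.g. ["*yogovirtual.com"]
        }],
    }

profile = fqdn_context_profile("FQDN-Custom-Profile", ["*yogovirtual.com"])
```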

Once completed, you will see the newly created DFW rule. I have set my rule to Allow to confirm connectivity to yogovirtual.com; then I am going to change the action to Drop.

DFW Section with both FQDN DFW Rules
Demonstrating accessing https://yogovirtual.com

I changed the policy to Drop in the NSX-T DFW, closed the browser, opened the page again, and confirmed that access to the custom FQDN is blocked.

Access to Custom FQDN being blocked

I checked my LogInsight to confirm the drop action took place.

LogInsight NSX-T DFW Logs

This concludes the blog post; I hope it helps you understand how to enable FQDN-based filtering using the NSX-T DFW.

NSX-T 3.1 – Deploying Distributed IDS/IPS

In NSX-T 3.0, VMware introduced distributed IDS, and in NSX-T 3.1 this has been expanded to include distributed IPS. In this blog I will highlight the steps to enable and configure distributed IDS/IPS and end with a demonstration.


Distributed Intrusion Detection and Prevention Service (IDS/IPS) monitors network traffic on the host for suspicious activity. Signatures can be enabled based on severity. A higher severity score indicates an increased risk associated with the intrusion event. Severity is determined based on the following:

  • Severity specified in the signature itself
  • CVSS (Common Vulnerability Scoring System) score specified in the signature
  • Type-rating associated with the classification type

IDS detects intrusion attempts based on already-known malicious instruction sequences. The patterns detected by the IDS are known as signatures. You can set specific signature alert/drop/reject actions globally, or per profile.


  • Alert – An alert is generated and no automatic preventative action is taken.
  • Drop – An alert is generated and the offending packets are dropped.
  • Reject – An alert is generated and the offending packets are dropped. For TCP flows, a TCP reset packet is generated by IDS and sent to the source and destination of the connection. For other protocols, an ICMP error packet is sent to the source and destination of the connection.

***Do not enable Distributed Intrusion Detection and Prevention Service (IDS/IPS) in an environment that is using Distributed Load Balancer. NSX-T Data Center does not support using IDS/IPS with a Distributed Load Balancer.***

***Distributed IDS/IPS is a licensed feature that is not included in the traditional NSX per-CPU licenses. You will need to apply the add-on NSX Advanced Threat Prevention license in your NSX-T Manager to enable these capabilities***

NSX-T Firewall with Advanced Threat Prevention License applied

Distributed IDS/IPS Configuration

I will be using the topology highlighted below for this demonstration, so I have done some pre-work in NSX-T that the subsequent steps will consume (segments, T1/T0, demo virtual machines, groups).

Demo Topology


  • Create 2 Tags for the workloads: Home -> Inventory -> Tags -> Add Tag
    • Name: Production
    • Assign: WEB-VM-01 and APP-VM-01
Production Tag
  • Name: Development
    • Assign: WEB-VM-02 and APP-VM-02
Development Tag

  • Create 2 Groups for the Workloads: Home -> Inventory -> Groups
    • Name: Production Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Production
Production Applications Group
  • Name: Development Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Development
Development Application Group
  • Confirm that the previously deployed VMs became members of the appropriate groups via their applied tags: click View Members for the two groups you created and confirm
Production Group and the associated members
Development Group and the associated members
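The two tag-based groups can be sketched as Policy API bodies. The `"scope|tag"` encoding in the Condition value (empty scope here) is my reading of the Group schema — verify it against your release before using:

```python
# Sketch of the two tag-based groups, e.g. for
# PUT /policy/api/v1/infra/domains/default/groups/<group-id>.
def tag_group(group_id, tag_value):
    return {
        "id": group_id,
        "display_name": group_id,
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",   # Virtual Machine Tag Equals ...
            "key": "Tag",
            "operator": "EQUALS",
            "value": tag_value,                # "scope|tag"; scope is empty here
        }],
    }

production = tag_group("Production-Applications", "|Production")
development = tag_group("Development-Applications", "|Development")
```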

1. Enable IDS/IPS on hosts, download latest signature set, and configure signature settings.

Home -> Security -> Distributed IDS/IPS -> Settings

NSX-T IDS/IPS can automatically apply signatures to your hosts and update intrusion detection signatures by checking VMware's cloud-based service.

IDS/IPS Settings Menu
  • Intrusion Detection and Prevention Signatures = Enable Auto Updates. The NSX-T Manager requires Internet access for Auto Updates.
  • Enable Intrusion Detection and Prevention for Cluster(s) = DC-02-Compute. Select the cluster where your workloads are and select Enable. When prompted "Are you sure you want to Enable Intrusion Detection and Prevention for selected clusters?" click YES
IDS/IPS Auto Updates Enabled and IDS/IPS enabled on my DC-02-Compute Cluster

NSX can automatically update its IDS signatures by checking the cloud-based service. By default, NSX Manager checks once per day, and VMware publishes new signature update versions every two weeks (with additional non-scheduled 0-day updates). NSX can also be configured to automatically apply newly updated signatures to all hosts that have IDS enabled.

2. Create IDS/IPS profiles

Home -> Security -> Distributed IDS/IPS -> Profiles

IDS/IPS profiles are used to group signatures, which can then be applied to select applications. You can create up to 24 custom profiles in addition to the default profile.

Default IDS Profile

We will create two new IDS/IPS profiles, one for Production and one for Development.

Home -> Security -> Distributed IDS/IPS -> Profiles -> ADD PROFILE

  • Name: Production
  • Signatures to Include: Critical, High, Medium
Production IDS Profile
  • Name: Development
  • Signatures to include: Critical & High
Development IDS Profile
Newly created IDS Profiles
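The two profiles can be sketched as API bodies. The endpoint and the severity field name are assumptions paraphrasing the 3.1 IdsProfile schema — confirm them against the Policy API reference for your release:

```python
# Sketch of the IDS profile bodies, assumed endpoint:
# PUT /policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles/<id>
def ids_profile(profile_id, severities):
    return {
        "id": profile_id,
        "display_name": profile_id,
        # Field name is an assumption -- check the IdsProfile schema.
        "profile_severity": [s.upper() for s in severities],
    }

production = ids_profile("Production", ["critical", "high", "medium"])
development = ids_profile("Development", ["critical", "high"])
```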

3. Create IDS/IPS rules

IDS/IPS rules are used to apply a previously created profile to select applications and traffic. I am going to create one rule for the Production VMs and a second rule for the Development VMs.

Home -> Security -> Distributed IDS/IPS -> Rules -> Add Policy

  • Click Add a New Policy – I renamed the default name to NSX Demo

Now let's create the Production Policy Rule

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Production Policy
  • IDS Profile: Production
  • Applied to Group: Production Applications
  • The rest is left default

Next we create the Development Policy Rule

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Development Policy
  • IDS Profile: Development
  • Applied to Group: Development Applications
  • The rest is left default
IDS Policy and Rules created

The last step is to publish the policy – click Publish at the top left.

The mode setting determines whether we are doing IDS only or IDS/IPS.

  • Detect Only – Detects signatures and does not take action.
  • Detect and Prevent – Detects signatures and applies the profile or global action of drop or reject.

There are some other optional settings when you click on the gear at the end of the rule:

  • Logging
  • Direction
  • IP Protocol
  • Log Label
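The two rules above can be sketched as request bodies. The endpoint path and the `DETECT`/`DETECT_PREVENT` action values are my assumptions from the 3.1 Policy API — confirm them against the IdsRule schema before relying on this:

```python
# Sketch of the IDS/IPS rule bodies, assumed endpoint:
# PUT /policy/api/v1/infra/domains/default/intrusion-service-policies/NSX-Demo/rules/<id>
IDS_PROFILES = "/infra/settings/firewall/security/intrusion-services/profiles"

def ids_rule(rule_id, profile_id, group_id, mode="DETECT"):
    return {
        "id": rule_id,
        "ids_profiles": [f"{IDS_PROFILES}/{profile_id}"],
        "scope": [f"/infra/domains/default/groups/{group_id}"],  # "Applied To"
        "action": mode,  # "DETECT" = detect only, "DETECT_PREVENT" = IDS/IPS
    }

prod_rule = ids_rule("Production-Policy", "Production", "Production-Applications")
dev_rule = ids_rule("Development-Policy", "Development", "Development-Applications")
```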

4. Verify IDS/IPS status on hosts

NSX-T provides a command-line interface (CLI) on its appliances and on prepared transport nodes; you need SSH access to the node to use it.

  1. Open an SSH session to one of the ESXi hosts
  2. Enter the nsxcli command to open the NSX-T Data Center CLI
  3. To confirm that IDS is enabled on this host, run the command get ids status
get ids status
  4. To confirm both of the IDS profiles have been applied to this host, run the command get ids profile
get ids profile
  5. To review IDS profile (engine) statistics, including the number of packets processed and alerts generated, run the command get ids engine profilestats <tab_to_select_profile_ID>
get ids engine profilestats

5. Distributed IDS/IPS Events

I have set up a basic demonstration using Metasploit to launch a simple exploit against the Drupal service running on WEB-VM-01 and confirm that the NSX Distributed IDS/IPS was able to detect this exploit attempt.

Basic Attack Demo

In this demonstration, the IDS/IPS engine is set to Detect Only.

IDS Engine configuration

When I trigger the exploit from the Hacker to WEB-VM-01, I am able to do a reverse shell and gather system information on the victim.

Exploited Victim

Now when I go over to the IDS/IPS dashboard in NSX-T, I can see the event and expand it to see the details, showing this as a detect-only event.

IDS/IPS Dashboard

Thank you for taking the time to read this blog. If you found it useful or have any feedback, feel free to ping me or leave a comment.

NSX-T 3.1 – Configuring DHCP Server

As I build out various demonstrations in my lab, I wanted to reduce the number of static IP allocations on my demo workloads so that I can move them between network segments for different demonstrations. Enabling a DHCP server in my NSX-T deployment makes sense for this.

So in this post I will cover the steps to enable DHCP on my local NSX-T Manager.


DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server.

As per VMware documentation, NSX-T Data Center supports three types of DHCP on a segment:

  • DHCP local server
  • Gateway DHCP
  • DHCP relay.

DHCP Local Server
As the name suggests, it is a DHCP server that is local to the segment and not available to the other segments in the network. A local DHCP server provides a dynamic IP assignment service only to the VMs that are attached to the segment. The IP address of a local DHCP server must be in the subnet that is configured on the segment.
Gateway DHCP
It is analogous to a central DHCP service that dynamically assigns IP and other network configuration to the VMs on all the segments that are connected to the gateway and using Gateway DHCP. Depending on the type of DHCP profile you attach to the gateway, you can configure a Gateway DHCP server or a Gateway DHCP relay on the segment. By default, segments that are connected to a tier-1 or tier-0 gateway use Gateway DHCP. The IP address of a Gateway DHCP server can be different from the subnets that are configured in the segments.
DHCP Relay
It is a DHCP relay service that is local to the segment and not available to the other segments in the network. The DHCP relay service relays the DHCP requests of the VMs that are attached to the segment to the remote DHCP servers. The remote DHCP servers can be in any subnet, outside the SDDC, or in the physical network.

You can configure DHCP on each segment regardless of whether the segment is connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.

For a gateway-connected segment, all the three DHCP types are supported. However, Gateway DHCP is supported only in the IPv4 subnet of a segment.

For a standalone segment that is not connected to a gateway, only local DHCP server is supported.


My base networking has already been configured and deployed. I will enable the DHCP service on one segment to which I want to allocate private IPs, and translate these with source-based NAT to provide Internet access for the segment.


For the purpose of this demonstration, I will configure a DHCP local server on my segment; the network range and my T1 default gateway are shown in the screenshots below.

Step 1 – Create a DHCP Profile

You can create DHCP servers to service DHCP requests from VMs that are connected to logical switches.

Select Networking > DHCP > ADD DHCP Profile > Add.

Adding a DHCP Profile
  • Enter a Profile name = DHCP-LM-DC-01
  • Profile Type (DHCP Server / DHCP Relay) = DHCP Server
  • Enter the IP address of the DHCP server and its subnet mask in CIDR format:
  • Lease Time (must be between 60 and 4294967295): the default is 86400
  • Select the Edge Cluster: LM-EDGE-Cluster-DC-01
  • Select the Edge(s): edge-dc-03 for my lab
Selecting edge-dc-03 from my Edge Cluster

Once all the fields are populated, hit Save.

DHCP Profile populated data
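The profile can also be created via the Policy API as a DhcpServerConfig. A sketch under stated assumptions — the server CIDR below is a made-up value, since the post's real addressing is not shown, and the edge-cluster path should be taken from your own inventory:

```python
# Sketch of the DHCP profile body, e.g. for
# PUT /policy/api/v1/infra/dhcp-server-configs/<id>.
def dhcp_server_config(config_id, server_cidr, edge_cluster_path, lease=86400):
    return {
        "id": config_id,
        "server_address": server_cidr,        # DHCP server IP in CIDR form
        "lease_time": lease,                  # 86400 is the default
        "edge_cluster_path": edge_cluster_path,
    }

profile = dhcp_server_config(
    "DHCP-LM-DC-01",
    "10.10.10.2/24",  # hypothetical address
    "/infra/sites/default/enforcement-points/default/edge-clusters/LM-EDGE-Cluster-DC-01",
)
```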

Step 2 – Attach a DHCP Server to a Segment

Networking > Segments > Select Segment

Configure a new segment or select the one you want to edit; in my case I will edit DC-01-NAT-Segment.

Edit Segment to Set DHCP Config


Blank DHCP Settings

Now we need to populate the details and select the options matching our requirements.

  • DHCP Profile: DHCP-LM-DC-01
  • IP Server Settings:
    • DHCP Config: Enable
    • DHCP Server IP Address:
  • DHCP Ranges:
  • Lease Time (seconds): 3600
  • DNS Servers:
DHCP Configurations Populated in the UI
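The segment settings above can be sketched as a Policy API segment body. The field names follow the segment schema as I understand it (`dhcp_config_path`, `SegmentDhcpV4Config`), and all addresses are made-up examples since the post's values are not shown:

```python
# Sketch of attaching the DHCP profile to a segment, e.g. for
# PATCH /policy/api/v1/infra/segments/<segment-id>.
def segment_with_dhcp(segment_id, gateway_cidr, server_cidr, pool, dns, lease=3600):
    return {
        "id": segment_id,
        "dhcp_config_path": "/infra/dhcp-server-configs/DHCP-LM-DC-01",
        "subnets": [{
            "gateway_address": gateway_cidr,   # e.g. "10.10.10.1/24"
            "dhcp_ranges": [pool],             # e.g. "10.10.10.100-10.10.10.200"
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": server_cidr,
                "lease_time": lease,
                "dns_servers": dns,
            },
        }],
    }

seg = segment_with_dhcp(
    "DC-01-NAT-Segment", "10.10.10.1/24", "10.10.10.2/24",
    "10.10.10.100-10.10.10.200", ["10.10.10.53"],
)
```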


Step 3 – Attach Virtual Machine to the Network and set networking settings to DHCP

Setting my VM to obtain an IP via DHCP

The virtual machine has successfully obtained a dynamic IP address from the DHCP server created in NSX-T.

Dynamic IP allocated via DHCP seen in vCenter
DHCP IP allocated to the Virtual Machine from
Final Topology with DHCP and NAT enabled

Product Offerings for VMware NSX Security 3.1.x

New VMware NSX Security editions became available to order on October 29th, 2020. The tiers of NSX Security licenses are as follows:

  • NSX Firewall for Baremetal Hosts: For organizations needing an agent-based network segmentation solution.
  • NSX Firewall Edition: For organizations needing network security and network segmentation.
  • NSX Firewall with Advanced Threat Prevention Edition: For organizations needing firewall and advanced threat prevention features.

To review the table outlining specific functions available by edition, visit the VMware KB.

NSX Security is available as a single download image with license keys required to enable specific functionality.

Configuring NSX-T VRF Lite Networking

VMware introduced VRF capabilities in NSX-T 3.0; this post will guide you through the steps to configure and enable them.

A virtual routing and forwarding (VRF) gateway makes it possible for multiple instances of a routing table to exist within the same gateway at the same time. VRFs are the layer 3 equivalent of a VLAN. A VRF gateway must be linked to a tier-0 gateway. From the tier-0 gateway, the VRF gateway inherits the failover mode, Edge cluster, internal transit subnet, T0-T1 transit subnets, and BGP routing configuration.

If you are using Federation, you can use the Global Manager to create a VRF gateway on a tier-0 gateway if the tier-0 spans only one location. VRF gateways are not supported on stretched tier-0 gateways in Federation.


Some base work is needed before configuring and enabling VRF:

Parent T0 created with Trunk Uplinks
  • Deploy at least one Edge VM or Bare Metal appliance
  • Create an Edge Cluster and add the Edge VM or BM appliance to the cluster
  • Create a T0 in the networking section
  • A Trunk Segment is required as the uplink interface on the Edge VM as each VRF created will consume a VLAN on the trunk between the T0 and the TOR
  • The VLANs used on the uplink interfaces of the parent T0 should not overlap with any of the networks within the VRFs – make sure to use a unique VLAN not included in the trunk segment used by the VRF interfaces.
Creating the T0 assigned to my pre-created edge cluster
Parent T0 External Uplink Interfaces
VLAN Backed Segments using VLAN 1120 and VLAN 1230 for Parent T0 Uplinks
Trunk Segment created for VRF Interfaces – VLAN1110 – 1119 and VLAN 1220-1229

I will be using VLANs 1110-1119 for VRF uplink interfaces mapped to EDGE-03 and VLANs 1220-1229 for VRF uplink interfaces mapped to EDGE-04.

Once we have completed the configuration, our desired topology will have two VRFs configured, as shown below.

Desired Topology

Let's get started

Select Networking > Tier-0 Gateways. I am using T0-LM-DC01 in my setup.

Click Add Gateway > VRF, then name the new VRF T0 Gateway and assign it to the T0 pre-created (T0-LM-DC01 in my lab).

Creating the new VRF-A T0

Repeat this step for VRF B; you should now have two new T0s named VRF-A and VRF-B.

VRF-A and VRF-B T0’s
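Creating a VRF gateway boils down to a tier-0 body whose `vrf_config` points at the parent. A sketch against the Policy API (the Tier0VrfConfig link is the documented mechanism; IDs match my lab names from the post):

```python
# Sketch of a VRF tier-0 body, e.g. for
# PUT /policy/api/v1/infra/tier-0s/<id>. The VRF gateway inherits failover
# mode, Edge cluster, transit subnets, and BGP config from the parent T0.
def vrf_tier0(vrf_id, parent_t0_path):
    return {
        "id": vrf_id,
        "display_name": vrf_id,
        "vrf_config": {"tier0_path": parent_t0_path},
    }

vrf_a = vrf_tier0("VRF-A", "/infra/tier-0s/T0-LM-DC01")
vrf_b = vrf_tier0("VRF-B", "/infra/tier-0s/T0-LM-DC01")
```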

Next we will configure the uplink interfaces to TOR for each VRF created.

Select the VRF and click on Interfaces – Set

My Edge cluster has two Edge VMs, so I will need to create two interfaces per VRF T0, and these will need to be mapped to the correct access VLAN configured on the TOR.

EDGE-03-DC01 VRF A –, access VLAN 1111

EDGE-04-DC01 VRF A –, access VLAN 1112
VRF A Uplink Interfaces
Testing connectivity to newly created interfaces from the TOR

EDGE-03-DC01 VRF B –, access VLAN 1221

EDGE-04-DC01 VRF B –, access VLAN 1222
VRF B Uplink Interfaces
Testing connectivity to newly created interfaces from the TOR

The next step is enabling BGP from the VRF T0s to the TOR.

The VRF T0s will use the same BGP AS number configured on the parent T0; in my case I have reconfigured this to 65111 at the parent.

BGP AS 65111 Configured at the Parent T0

The next step is to enable BGP on each VRF T0.

Enabling BGP at the VRF T0

Now go ahead and click Set to add your BGP neighbours.

VRF A Edge-03 TOR BGP Neighbour
VRF A Edge-04 TOR BGP Neighbour
VRF B Edge-03 TOR BGP Neighbour
VRF B Edge-04 TOR BGP Neighbour

After enabling BGP to the TOR, once the BGP neighbour relationship is established you should see a Success status in the dashboard.

Neighbour Status Success
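Each neighbour entry is a small body on the VRF T0's BGP endpoint. A sketch only — the neighbour address and AS number below are hypothetical, as the post's TOR addressing is not shown:

```python
# Sketch of a BGP neighbour body, e.g. for
# PUT /policy/api/v1/infra/tier-0s/VRF-A/locale-services/default/bgp/neighbors/<id>
def bgp_neighbor(neighbor_id, address, remote_as):
    return {
        "id": neighbor_id,
        "neighbor_address": address,      # TOR interface IP on the access VLAN
        "remote_as_num": str(remote_as),  # TOR AS number (hypothetical)
    }

tor_a = bgp_neighbor("VRF-A-EDGE-03-TOR", "192.0.2.1", 65001)
```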

This can be confirmed from the TOR, where the BGP neighbour status should show Established.

BGP Established from TOR to VRF A T0
Desired Topology

Adding T1s to the VRF-enabled T0s

Next we will create two T1s, one for VRF A and the other for VRF B, and attach them to the T0s we created. This is where we will attach our VRF workload networks.

Creating VRF-A-T1 and attaching it to T0 VRF A
Creating VRF-B-T1 and attaching it to T0 VRF B

Enabling BGP on the T0

Once the T1s are created, we can edit the routing attributes to automatically advertise the networks attached to the T1. This allows BGP to automatically advertise the newly created networks to the TOR switches. Since this is just a lab setup, I have enabled all the options – I can edit these as I enable stateful services if I want to control how I advertise my connected interfaces.

Enabling Route Advertisement on VRF-A T1
Enabling Route Advertisement on VRF-B T1

***We will need to make sure route redistribution is enabled on the VRF T0s towards the TOR too***

Adding Segments to the T1

Now we will add a segment to each VRF T1 so that we can connect our workloads and confirm we have connectivity to the outside world.

VRF-A Segment Creation

We create the segment and attach it to the correct T1 gateway, either VRF A or VRF B; I chose to deploy this as an overlay segment.

VRF-A Segment Creation

After creating the segments, I can see them in the NSX-T dashboard and confirm that they have been attached to the correct T1’s and review the network topology.

Segment View from NSX-T Dashboard

From the NSX-T topology view I can see all my uplink IPs and the segment I attached to the T1.

VRF A Topology view

VRF-B Topology View

Lastly, I want to connect to the TOR and confirm that these newly created IP subnets are being advertised from NSX-T to the TOR via BGP.

Segment routes are learnt via BGP on the TOR

IP Connectivity to the newly created subnets

IP connectivity to the segments from the TOR

Now let's use the NSX-T Traceflow troubleshooting tool to confirm we have end-to-end connectivity by doing a trace from a VM connected to VRF-A to another VM connected to VRF-B.

Successful trace taken from NSX-T Traceflow

I hope you found this useful. Thanks

Deploying NSX-T Data Center Federation with 3.1.0

Deployment Topology

VMware recently announced the general availability of NSX-T 3.1.0 bringing a host of new features and functionality. One of the key features which is now production ready is the Multi-Site solution, Federation.

  • Support for standby Global Manager Cluster
    • Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.
  • With the support of Federation upgrade and Standby GM, Federation is now considered production ready.

With Federation, you can manage multiple NSX-T Data Center environments with a single pane of glass view, create gateways and segments that span one or more locations, and configure and enforce firewall rules consistently across locations.

Once you have installed the Global Manager and have added locations, you can configure networking and security from the Global Manager.

In this post I will cover the step-by-step process to connect my two Local Managers (LMs) to a Global Manager (GM). I have pre-deployed both local sites and the Active/Standby GM appliances.

For a better understanding on the features which are supported or not supported when using NSX-T Data Center Federation, see VMware’s guide.

Let's get started

Pre Work Check List

  • NSX-T Local Managers deployed and hosts prepared for NSX
  • NSX-T Local Managers backup configured and executed
  • NSX-T Edge(s) deployed in Local Manager and Edge Clusters created
  • Local Manager requires Enterprise Plus License
NSX-T Manager DC-01
NSX-T Manager DC-02
NSX-T Edge Virtual Machines DC-01
NSX-T Edge Virtual Machine DC-02
NSX-T Backup Configuration DC-01
NSX-T Backup Configuration DC-02
NSX-T Global Manager Dashboard

Now let's start adding the first Local Manager, nsx-dc-01, under the Location Manager tab. Populate the required details for the Local Manager, check compatibility, and hit Save. The Local Manager should be on the same OS level as the Global Manager.

Adding nsx-dc-01 LM to the Global Manager

The next step is to add the next site's Local Manager – nsx-dc-02 in my setup.

Adding nsx-dc-02 LM to the Global Manager

Once both Local Managers have been added and the Global Manager dashboard is refreshed, you will see both Local Managers in the dashboard.

Global Manager view showing the recently added Local Managers

After adding the two Local Managers, my Global Manager highlights that it has found objects and policies in these newly added locations and asks whether you want to import them. ***This is a one-time option and cannot be done after proceeding***

Confirm Local Manager successfully registered to GM

Log in to the two Local Managers to see the status from their side; you will notice that the LMs establish connections to the Active and Standby GMs, and a connection is also established to the neighbour LMs.

Local Manager 01 view – successfully connected to the Active and Standby GM’s
Local Manager 02 view – successfully connected to the Active and Standby GM’s

Now I will go ahead and import the objects that the GM discovered in the LM from DC-01. At this point the GM checks whether a configuration backup of the LM has been taken before it will proceed with the import. If a backup has not been taken, this step cannot proceed.

Importing objects from the LM
Objects discovered by the GM

You can add a Prefix or Suffix to the imported objects – this is optional

Once this was completed for the first LM, I refreshed the dashboard and the GM requested the same for my second LM.

I just followed the same steps and imported the discovered objects.

Global Manager successfully imported Objects from both LM’s

Networking in Federation

Federation enables you to create virtual network topologies spanning multiple locations across L3 connected networks.

Take Note of the following

Tier-0 gateways, tier-1 gateways, and segments can span one or more locations in the Federation environment. When you plan your network topology, keep these requirements in mind:

  • Tier-0 and tier-1 gateways can have a span of one or more locations.
  • The span of a tier-1 gateway must be equal to, or a subset of, the span of the tier-0 gateway it is attached to.
  • A segment has the same span as the tier-0 or tier-1 gateway it is attached to. Isolated segments are not realized until they are connected to a gateway.

You can create different topologies to achieve different goals.

  • You can create segments and gateways that are specific to a given location. Each site has its own configuration, but you can manage everything from the Global Manager interface.
  • You can create segments and gateways that span locations. These stretched networks provide consistent networking across sites.
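The span requirements listed above amount to a simple subset rule, which can be expressed as a check (illustrative only — location names are my lab's):

```python
# A tier-1's span must be equal to, or a subset of, the span of the tier-0
# it is attached to; a segment simply inherits the span of its gateway.
def valid_tier1_span(tier0_span, tier1_span):
    return set(tier1_span) <= set(tier0_span)
```

So a tier-1 spanning only `nsx-dc-01` may attach to a tier-0 spanning both sites, but not the other way around.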

Configuring Remote Tunnel Endpoint on the Edge

If you want to create gateways and segments that span more than one location, you must configure a remote tunnel endpoint (RTEP) on Edge nodes in each location to handle the cross-location traffic.

NSX-T Federation introduces a remote tunnel endpoint (RTEP) interface on the NSX Edge, which is used to transport encapsulated data between the Local Manager locations. This reduces the number of inter-site tunnel endpoints that would otherwise be built between individual transport nodes.

All Edge nodes in the cluster must have an RTEP configured. You do not need to configure all Edge clusters with RTEP. RTEPs are required only if the Edge cluster is used to configure a gateway that spans more than one location.

You can also configure RTEPs from each Local Manager. Select System > Get Started > Configure Remote Tunnel Endpoint.

You can edit RTEPs on an Edge node. Log into the Local Manager and select System > Fabric > Nodes > Edge Transport Nodes. Select an Edge node, and click Tunnels. If an RTEP is configured, it is displayed in the Remote Tunnel Endpoint section. Click Edit to modify the RTEP configuration.

Remote Tunnel Endpoint Configuration

I am using nvds-02, which is mapped to my vlan-transport zone; my interface is mapped to VLAN 2 on my switch in DC-01 and VLAN 3 for DC-02. I have just used static IP assignment. This task is repeated for each Edge appliance that we will be using for the Edge cluster in our global networking setup.

Edge-01 deployed in nsx-dc-01
Edge-01 deployed in nsx-dc-02

Configure the MTU for RTEP on each Local Manager. The default is 1700; set the RTEP MTU as high as your physical network supports. On each Local Manager, select System > Fabric > Settings and click Edit next to Remote Tunnel Endpoint. The RTEP can work with an MTU of 1500, but this will cause fragmentation.

Tier-0 Gateway Configuration in Federation

Federation offers multiple deployment options when considering T0 deployment and configuration – Active/Active or Active/Standby with some variations of each. As with any Active/Active deployment in NSX-T at the T0, stateful services are not supported.

I have opted for the following deployment in my setup.

Stretched Active-Standby Tier-0 Gateway with Primary and Secondary Locations

In an active-standby tier-0 gateway with primary and secondary locations, the following applies:

  • Only one Edge node is active at a time, therefore the tier-0 can run stateful services.
  • All traffic enters and leaves through the active Edge node in the primary location.

For Active Standby tier-0 gateways, the following services are supported:

  • Network Address Translation (NAT)
  • Gateway Firewall
  • DNS
  • DHCP

Refer to Tier-0 Gateway Configurations in Federation for all the deployment options and considerations.

Global Manager Network Dashboard

Let's go ahead and create our T0 in the Global Manager.

Creating our T0 in the Global Manager

I am creating an Active/Standby T0 and I have selected nsx-dc-01 with my Edge Cluster named Federation-Cluster and I am selecting edge-01-dc01 as the Primary Edge Node. Then I selected nsx-dc-02 as the secondary location and the Federation-Cluster with edge-01-dc02 as the Secondary appliance.

Once configured you can check if the T0 has been successfully configured across all locations as intended

Global T0 is created in each LM and indicated by the GM tag in the name (nsx-dc-02)

Next we will configure the uplink interfaces on the T0 for each Edge VM – this will be used for the North/South traffic and the BGP Peering to the TOR. The interfaces are configured from the GM.

Before we proceed with adding the interfaces in the T0, I need to configure the segments which I will be using for the uplink interfaces. In my lab I will be using VLAN1331 and VLAN1332 in DC-01 and VLAN 2331 and VLAN 2332 in DC-02 as the uplink segments to my TOR.

Repeat this process for each Edge appliance uplink interface and remember to select the correct location for the segment and VLAN ID for this VLAN segment

Now I can proceed to map these VLAN-backed segments as uplink interfaces on the GM T0.

Configuring Uplink Interface for Global T0

Remember to select the correct Location, Segment and Edge appliance for the IP address configured. Then repeat this for the required uplink interfaces on the T0 across both sites.

After configuring the first interface, I did a quick ping test from the directly connected TOR to confirm my connectivity is working as expected.

After configuring an external uplink interface for each of the Edge appliances in my setup, I can see my Global T0 has 4 interfaces.

Connectivity test from my TOR to the newly created uplink interfaces

Next will be to configure BGP Routing between the newly created Global T0 and the TOR.

I started by changing the default BGP Local AS number to 65333 and then went ahead to configure the BGP neighbours at the bottom right.

Adding the TOR BGP Neighbours

After adding the first BGP Neighbour in the Global T0, I confirmed that the BGP relationship has been established. Then repeated adding the rest of my BGP neighbours.

BGP Neighbour established

After all the BGP neighbours have been configured, you should see the status green and Successful. This can be confirmed from the TOR too.

Route Redistribution in the T0

NSX-T provides extensive flexibility when deciding which networks are redistributed upstream to the TOR. Since this is just a lab setup, I will enable all networks created in my topology to be redistributed and advertised to the TOR.

First, start by creating a redistribution list for each location.

NSX-T Redistribution options
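The redistribution list per location maps to the locale-service's route redistribution config in the Policy API. A minimal sketch, assuming the rule name is arbitrary and using a subset of the redistribution types shown in the UI (for the lab, everything connected on the T0 and T1 plus their segments):

```python
def redistribution_rules(name: str, types: list) -> dict:
    """route_redistribution_config body for a T0 locale-service."""
    return {
        "bgp_enabled": True,  # advertise the matched routes into BGP
        "redistribution_rules": [
            {"name": name, "route_redistribution_types": types}
        ],
    }

# Lab setting: advertise everything created in the topology to the TOR.
cfg = redistribution_rules("redistribute-all", [
    "TIER0_CONNECTED", "TIER0_SEGMENT", "TIER1_CONNECTED", "TIER1_SEGMENT",
])
```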

The redistribution list can be applied to a Route Map, where extended BGP attributes can be added:

Route-Map Options

Adding a Global T1 Router

Now that we have North/South connectivity and routing up and running, we will go ahead and create a T1 which will provide the default gateway for our networks, and add some segments which will be stretched across the two NSX locations.

Adding Global T1 and attaching it to the Global T0
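The T1-to-T0 attachment is a single field in the Policy API body. A sketch, assuming the gateway names "Global-T1" and "Global-T0" used here are placeholders for whatever you named them in the GM:

```python
def global_tier1(name: str, tier0_path: str) -> dict:
    """Policy API body for a Global T1 attached to the Global T0."""
    return {
        "display_name": name,
        "tier0_path": tier0_path,  # attaches the T1 to the Global T0
        # Advertise connected segments upward so T0 redistribution can pick them up.
        "route_advertisement_types": ["TIER1_CONNECTED"],
    }

t1 = global_tier1("Global-T1", "/global-infra/tier-0s/Global-T0")
```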

Adding Global Segments

The next step is creating Global Segments. These will be created as overlay segments and attached to the Global-T1 created previously. I will create three Global Segments for my global 3 Tier Application.

Creating the Global Web Segment
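The three overlay segments follow the same pattern, with the gateway address living on the Global T1. A sketch only: the segment names, subnets, and the transport-zone and T1 paths below are placeholder values for illustration.

```python
def overlay_segment(name: str, gateway_cidr: str,
                    t1_path: str, tz_path: str) -> dict:
    """Policy API body for a stretched overlay segment gatewayed on the T1."""
    return {
        "display_name": name,
        "connectivity_path": t1_path,  # the Global T1 created previously
        "subnets": [{"gateway_address": gateway_cidr}],  # e.g. "172.16.10.1/24"
        "transport_zone_path": tz_path,
    }

tz = "/infra/sites/dc-01/enforcement-points/default/transport-zones/<overlay-tz-id>"
tiers = {  # placeholder addressing for the 3 Tier app
    "Global-Web": "172.16.10.1/24",
    "Global-App": "172.16.20.1/24",
    "Global-DB":  "172.16.30.1/24",
}
segments = [overlay_segment(n, gw, "/global-infra/tier-1s/Global-T1", tz)
            for n, gw in tiers.items()]
```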

As before, after configuring the first segment I want to confirm that BGP is dynamically updating my TOR with the newly created Segment.

TOR shows newly created network learnt from BGP

After completing the configurations for the three segments, I can see them successfully created in the GM dashboard.

Segments for the 3 Tier App in the GM

The newly created segments can all be seen in the vCenters at each location and can now be used to connect VMs.

Once the VMs have been connected to the newly created segments, we can test our connectivity.

Network Topology Overview

NSX-T 3.0 introduced a network topology view, which is only available in the Local Managers.

Network Topology View from NSX-DC-01

From the topology view you can see the T0, T1, uplink interfaces, and segments created, with their IP addresses and the VMs connected to these segments.

Network Topology View from NSX-DC-02

A quick ping test from the WEB-01 VM deployed in DC-01 to the WEB-02 VM deployed in DC-02 shows our cross-site connectivity is working as expected.

In the next blog I will focus on adding Global Security Policies and enabling micro-segmentation policies across the two sites, centrally configured and managed in the Global Manager.