NSX-T 3.1 – Deploying Distributed IDS/IPS

In NSX-T 3.0 VMware introduced distributed IDS, and in NSX-T 3.1 this has been expanded to include distributed IPS. In this blog I will highlight the steps to enable and configure distributed IDS/IPS and end with a demonstration.

Overview

Distributed Intrusion Detection and Prevention Service (IDS/IPS) monitors network traffic on the host for suspicious activity. Signatures can be enabled based on severity. A higher severity score indicates an increased risk associated with the intrusion event. Severity is determined based on the following:

  • Severity specified in the signature itself
  • CVSS (Common Vulnerability Scoring System) score specified in the signature
  • Type-rating associated with the classification type

IDS detects intrusion attempts based on already known malicious instruction sequences. The detected patterns in the IDS are known as signatures. You can set alert/drop/reject actions for specific signatures globally, or per profile.

Actions

  • Alert – An alert is generated and no automatic preventative action is taken.
  • Drop – An alert is generated and the offending packets are dropped.
  • Reject – An alert is generated and the offending packets are dropped. For TCP flows, a TCP reset packet is generated by IDS and sent to the source and destination of the connection. For other protocols, an ICMP error packet is sent to the source and destination of the connection.

***Do not enable Distributed Intrusion Detection and Prevention Service (IDS/IPS) in an environment that is using Distributed Load Balancer. NSX-T Data Center does not support using IDS/IPS with a Distributed Load Balancer.***

***Distributed IDS/IPS is a licensed feature which is not included in the traditional NSX per-CPU licenses. You will need to apply the Add-On NSX Advanced Threat Prevention license in your NSX-T Manager to enable these capabilities.***

NSX-T Firewall with Advanced Threat Prevention License applied

Distributed IDS/IPS Configuration

I will be using the topology shown below for this demonstration, so I have done some pre-work configuration in NSX-T (segments, T1/T0 gateways, demo virtual machines, and groups) that will be consumed in the subsequent steps.

Demo Topology

Pre-Work

  • Create 2 Tags for the workloads: Home -> Inventory -> Tags -> Add Tag
    • Name: Production
    • Assign: WEB-VM-01 and APP-VM-01
Production Tag
  • Name: Development
    • Assign: WEB-VM-02 and APP-VM-02
Development Tag

  • Create 2 Groups for the Workloads: Home -> Inventory -> Groups
    • Name: Production Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Production
Production Applications Group
  • Name: Development Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Development
Development Application Group
  • Confirm that the previously deployed VMs became members of the appropriate groups through the applied tags. Click View Members for the two groups you created and confirm the membership (an API alternative for creating these groups is sketched below)
Production Group and the associated members
Development Group and the associated members
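For those who prefer to script this pre-work, a tag-based group can also be defined through the NSX-T Policy API. The sketch below is a minimal Python example; the manager FQDN, credentials, group ID, and the exact API path and payload are assumptions from my lab and the API guide, so verify them against your own environment before use.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")       # lab credentials

# Group whose membership criterion is: Virtual Machine Tag Equals "Production".
# Assumed Policy API path and payload shape -- verify against the API guide for your version.
group = {
    "display_name": "Production Applications",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": "Production",
        }
    ],
}

r = requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/groups/Production-Applications",
    auth=AUTH,
    json=group,
    verify=False,  # lab only; use a trusted certificate in production
)
r.raise_for_status()
```

Repeat the same call with the Development tag value and group ID for the second group.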

1. Enable IDS/IPS on hosts, download the latest signature set, and configure signature settings.

Home -> Security -> Distributed IDS/IPS -> Settings

NSX-T IDS/IPS can automatically apply signatures to your hosts and update intrusion detection signatures by checking VMware's cloud-based service.

IDS/IPS Settings Menu
  • Intrusion Detection and Prevention Signatures = Enable Auto Updates. The NSX-T Manager requires Internet access for auto updates.
  • Enable Intrusion Detection and Prevention for Cluster(s) = DC-02-Compute. Select the cluster where your workloads are and select enable. When prompted “Are you sure you want to Enable Intrusion Detection and Prevention for selected clusters?” click YES.
IDS/IPS Auto Updates Enabled and IDS/IPS enabled on my DC-02-Compute Cluster

NSX can automatically update its IDS signatures by checking the cloud-based service. By default, NSX Manager checks once per day, and VMware publishes new signature update versions every two weeks (with additional non-scheduled 0-day updates). NSX can also be configured to automatically apply newly updated signatures to all hosts that have IDS enabled.
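The same settings can also be driven through the Policy API. The following is a rough sketch only; the paths and field names (auto_update, ids_enabled, the cluster-configs ID) are assumptions based on my lab and the API documentation, so check them against the API guide for your version.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# 1. Turn on automatic signature updates (assumed path and field name).
requests.patch(
    f"{NSX}/policy/api/v1/infra/settings/firewall/security/intrusion-services",
    auth=AUTH,
    json={"auto_update": True},
    verify=False,  # lab only
).raise_for_status()

# 2. Enable IDS/IPS on a compute cluster (assumed path; the cluster ID can be
#    looked up with GET /api/v1/fabric/compute-collections in your environment).
requests.patch(
    f"{NSX}/policy/api/v1/infra/settings/firewall/security/intrusion-services/cluster-configs/DC-02-Compute",
    auth=AUTH,
    json={"ids_enabled": True},
    verify=False,
).raise_for_status()
```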

2. Create IDS/IPS profiles

Home -> Security -> Distributed IDS/IPS -> Profiles

IDS/IPS Profiles are used to group signatures, which can then be applied to selected applications. You can create 24 custom profiles in addition to the default profile.

Default IDS Profile

We will create two new IDS/IPS profiles, one for Production and one for Development.

Home -> Security -> Distributed IDS/IPS -> Profiles -> ADD PROFILE

  • Name: Production
  • Signatures to Include: Critical, High, Medium
Production IDS Profile
  • Name: Development
  • Signatures to include: Critical & High
Development IDS Profile
Newly created IDS Profiles
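As a reference, both profiles could also be created via the Policy API. This is a hedged sketch; the profile path and the severities field name are assumptions taken from my lab and the API documentation, so double-check them for your release.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed Policy API path and field names for IDS/IPS profiles.
for name, severities in {
    "Production":  ["CRITICAL", "HIGH", "MEDIUM"],
    "Development": ["CRITICAL", "HIGH"],
}.items():
    r = requests.patch(
        f"{NSX}/policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles/{name}",
        auth=AUTH,
        json={"display_name": name, "severities": severities},
        verify=False,  # lab only
    )
    r.raise_for_status()
```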

3. Create IDS/IPS rules

IDS/IPS rules are used to apply a previously created profile to selected applications and traffic. I am going to create one rule for the Production VMs and a second rule for the Development VMs (an equivalent API sketch follows at the end of this step).

Home -> Security -> Distributed IDS/IPS -> Rules -> Add Policy

  • Click Add a New Policy – I renamed the default name to NSX Demo

Now let's create the Production Policy Rule.

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Production Policy
  • IDS Profile: Production
  • Applied to Group: Production Applications
  • The rest is left default

Next we create the Development Policy Rule

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Development Policy
  • IDS Profile: Development
  • Applied to Group: Development Applications
  • The rest is left default
IDS Policy and Rules created

Last step is to publish the Policy – Click Publish on the top left.

The mode setting determines whether we are doing IDS only or IDS/IPS.

  • Detect Only – Detects signatures and does not take action.
  • Detect and Prevent – Detects signatures and applies the profile or global action of drop or reject.

There are some other optional settings when you click on the gear at the end of the rule:

  • Logging
  • Direction
  • IP Protocol
  • Log Label
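For completeness, here is a sketch of the equivalent policy and rules via the Policy API. The paths, the ids_profiles/scope references, and the action value are assumptions from my lab and the API documentation; verify them before reuse.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")
BASE = f"{NSX}/policy/api/v1/infra/domains/default/intrusion-service-policies"

# Assumed Policy API paths and payloads. Create the policy section first.
requests.patch(f"{BASE}/NSX-Demo", auth=AUTH,
               json={"display_name": "NSX Demo"},
               verify=False).raise_for_status()  # lab only

# Then one rule per environment, applying its IDS profile to its group.
for env in ("Production", "Development"):
    rule = {
        "display_name": f"{env} Policy",
        "ids_profiles": [f"/infra/settings/firewall/security/intrusion-services/profiles/{env}"],
        "scope": [f"/infra/domains/default/groups/{env}-Applications"],
        "source_groups": ["ANY"],
        "destination_groups": ["ANY"],
        "services": ["ANY"],
        "action": "DETECT",   # assumed value; "Detect Only" in the UI
    }
    requests.patch(f"{BASE}/NSX-Demo/rules/{env}-Policy", auth=AUTH,
                   json=rule, verify=False).raise_for_status()
```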

4. Verify IDS/IPS status on hosts

To verify the status, you need SSH access to the ESXi hosts, each of which provides the NSX-T Data Center command-line interface (nsxcli).

  1. Open an SSH session to one of the ESXi hosts.
  2. Enter the nsxcli command to open the NSX-T Data Center CLI.
  3. To confirm that IDS is enabled on this host, run the command get ids status
get ids status
  4. To confirm that both IDS profiles have been applied to this host, run the command get ids profile
get ids profile
  5. To review IDS profile (engine) statistics, including the number of packets processed and alerts generated, run the command get ids engine profilestats <tab_to_select_profile_ID>
get ids engine profilestats

5. Distributed IDS/IPS Events

I have set up a basic demonstration using Metasploit to launch a simple exploit against the Drupal service running on WEB-VM-01 and confirmed that the NSX Distributed IDS/IPS was able to detect this exploit attempt.

Basic Attack Demo

In this demonstration, the IDS/IPS engine is set to Detect Only.

IDS Engine configuration

When I trigger the exploit from the Hacker to WEB-VM-01, I am able to get a reverse shell and gather system information on the victim.

Exploited Victim

Now when I go over to the IDS/IPS dashboard in NSX-T, I can see the event and expand it to see the details, which show this as a detect-only event.


IDS/IPS Dashboard

Thank you for taking the time to read this blog. If you found it useful or have any feedback, feel free to ping me or leave a comment.

NSX-T 3.1 – Configuring DHCP Server

As I build out various demonstrations in my lab, I want to reduce the number of static IP allocations on my demo workloads so that I can move them between network segments for different demonstrations. With that in mind, enabling a DHCP server in my NSX-T deployment makes sense.

In this post I will cover the steps to enable DHCP on my Local NSX-T Manager.

Overview

DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server.

As per VMware documentation, NSX-T Data Center supports three types of DHCP on a segment:

  • DHCP local server
  • Gateway DHCP
  • DHCP relay

DHCP Local Server
As the name suggests, it is a DHCP server that is local to the segment and not available to the other segments in the network. A local DHCP server provides a dynamic IP assignment service only to the VMs that are attached to the segment. The IP address of a local DHCP server must be in the subnet that is configured on the segment.
Gateway DHCP
It is analogous to a central DHCP service that dynamically assigns IP and other network configuration to the VMs on all the segments that are connected to the gateway and using Gateway DHCP. Depending on the type of DHCP profile you attach to the gateway, you can configure a Gateway DHCP server or a Gateway DHCP relay on the segment. By default, segments that are connected to a tier-1 or tier-0 gateway use Gateway DHCP. The IP address of a Gateway DHCP server can be different from the subnets that are configured in the segments.
DHCP Relay
It is a DHCP relay service that is local to the segment and not available to the other segments in the network. The DHCP relay service relays the DHCP requests of the VMs that are attached to the segment to the remote DHCP servers. The remote DHCP servers can be in any subnet, outside the SDDC, or in the physical network.


You can configure DHCP on each segment regardless of whether the segment is connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.

For a gateway-connected segment, all the three DHCP types are supported. However, Gateway DHCP is supported only in the IPv4 subnet of a segment.

For a standalone segment that is not connected to a gateway, only a local DHCP server is supported.

Assumptions

My base networking has already been configured and deployed. I will enable the DHCP service on one segment to which I want to allocate private IPs, and translate these with source-based NAT to provide Internet access for that segment.

Configuration

For the purposes of this demonstration, I will configure a DHCP local server on my segment with network range 10.10.10.0/24. My T1 default gateway is 10.10.10.1.

Step 1 – Create a DHCP Profile

You can create DHCP servers to service DHCP requests from VMs that are connected to logical switches.

Select Networking > DHCP > ADD DHCP Profile > Add.

Adding a DHCP Profile
  • Enter a Profile name = DHCP-LM-DC-01
  • Profile Type (DHCP Server / DHCP Relay) = DHCP Server
  • Enter the IP address of the DHCP server and its subnet mask in CIDR format: 192.168.10.240/24
  • Lease Time (must be between 60 and 4294967295 seconds): the default is 86400
  • Select the Edge Cluster: LM-EDGE-Cluster-DC-01
  • Select the Edge(s): edge-dc-03 for my lab
Selecting edge-dc-03 from my Edge Cluster

Once all the fields are populated, hit Save.

DHCP Profile populated data
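If you would rather script Step 1, a DHCP profile can also be created through the Policy API. This is a minimal sketch; the dhcp-server-configs path and the edge cluster path format are assumptions, and the edge cluster path in particular typically references the cluster ID rather than its display name, so look yours up first.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed Policy API path and payload for a DHCP server profile.
dhcp_profile = {
    "display_name": "DHCP-LM-DC-01",
    "server_addresses": ["192.168.10.240/24"],
    "lease_time": 86400,
    # Illustrative edge cluster path -- in practice this usually embeds the cluster UUID.
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/LM-EDGE-Cluster-DC-01",
}
r = requests.patch(
    f"{NSX}/policy/api/v1/infra/dhcp-server-configs/DHCP-LM-DC-01",
    auth=AUTH, json=dhcp_profile, verify=False,  # lab only
)
r.raise_for_status()
```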

Step 2 – Attach a DHCP Server to a Segment

Networking > Segments > Select Segment

Configure a new segment or select the one you want to edit; in my case I will edit DC-01-NAT-Segment.

Edit Segment to Set DHCP Config

SET DHCP CONFIG

Blank DHCP Settings

Now we need to populate the details and select the options matching our requirements.

  • DHCP TYPE: LOCAL DHCP Server
  • DHCP Profile: DHCP-LM-DC-01
  • IP Server Settings:
    • DHCP Config: Enable
    • DHCP Server IP Address: 10.10.10.254/24
  • DHCP Ranges: 10.10.10.10-10.10.10.100
  • Lease Time (seconds): 3600
  • DNS Servers: 192.168.10.5
DHCP Configurations Populated in the UI

APPLY and SAVE.
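Step 2 can also be expressed as a single Policy API call against the segment. The sketch below assumes the segment ID DC-01-NAT-Segment and the SegmentDhcpV4Config payload shape from the API documentation; verify both against your environment.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed payload shape; segment ID, addresses, and ranges are from my lab.
segment = {
    "dhcp_config_path": "/infra/dhcp-server-configs/DHCP-LM-DC-01",
    "subnets": [
        {
            "gateway_address": "10.10.10.1/24",
            "dhcp_ranges": ["10.10.10.10-10.10.10.100"],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": "10.10.10.254/24",
                "lease_time": 3600,
                "dns_servers": ["192.168.10.5"],
            },
        }
    ],
}
r = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/DC-01-NAT-Segment",
    auth=AUTH, json=segment, verify=False,  # lab only
)
r.raise_for_status()
```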

Step 3 – Attach Virtual Machine to the Network and set networking settings to DHCP

Setting my VM to obtain an IP via DHCP

The virtual machine has successfully obtained a dynamic IP address from the DHCP server created in NSX-T.

Dynamic IP 10.10.10.10 allocated via DHCP seen in vCenter
DHCP IP allocated to the Virtual Machine from 10.10.10.254
Final Topology with DHCP and NAT enabled

Product Offerings for VMware NSX Security 3.1.x

New VMware NSX Security editions became available to order on October 29th, 2020. The tiers of NSX Security licenses are as follows:

  • NSX Firewall for Baremetal Hosts: For organizations needing an agent-based network segmentation solution.
  • NSX Firewall Edition: For organizations needing network security and network segmentation.
  • NSX Firewall with Advanced Threat Prevention Edition: For organizations needing firewall and advanced threat prevention features.

To review the table outlining specific functions available by edition, visit the VMware KB.

NSX Security is available as a single download image with license keys required to enable specific functionality.

Configuring NSX-T VRF Lite Networking

VMware introduced VRF capabilities in NSX-T 3.0; this post will guide you through the steps to configure and enable them.

A virtual routing and forwarding (VRF) gateway makes it possible for multiple instances of a routing table to exist within the same gateway at the same time. VRFs are the layer 3 equivalent of a VLAN. A VRF gateway must be linked to a tier-0 gateway. From the tier-0 gateway, the VRF gateway inherits the failover mode, Edge cluster, internal transit subnet, T0-T1 transit subnets, and BGP routing configuration.

If you are using Federation, you can use the Global Manager to create a VRF gateway on a tier-0 gateway if the tier-0 spans only one location. VRF gateways are not supported on stretched tier-0 gateways in Federation.

Prerequisites

We are going to need some base work done before configuring and enabling VRF:

Parent T0 created with Trunk Uplinks
  • Deploy at least one Edge VM or Bare Metal appliance
  • Create an Edge Cluster and add the Edge VM or BM appliance to the cluster
  • Create a T0 in the networking section
  • A trunk segment is required as the uplink interface on the Edge VM, as each VRF created will consume a VLAN on the trunk between the T0 and the TOR
  • The VLANs used on the uplink interfaces of the parent T0 should not overlap with any of the networks within the VRFs – make sure to use a unique VLAN not included in the trunk segment used by the VRF interfaces
Creating the T0 assigned to my pre-created edge cluster
Parent T0 External Uplink Interfaces
VLAN Backed Segments using VLAN 1120 and VLAN 11230 for Parent T0 Uplinks
Trunk Segment created for VRF Interfaces – VLAN1110 – 1119 and VLAN 1220-1229

I will be using VLANs 1110-1119 for the VRF uplink interfaces mapped to EDGE-03 and VLANs 1220-1229 for the VRF uplink interfaces mapped to EDGE-04.

Once we have completed the configuration, the desired topology will have two VRFs configured, as shown below.

Desired Topology

Let's get started

Select Networking > Tier-0 Gateways. I am using T0-LM-DC01 in my setup.

Click Add Gateway > VRF, then name the new VRF T0 gateway and assign it to the pre-created parent T0 (T0-LM-DC01 in my lab).

Creating the new VRF-A T0

Repeat this step for VRF B; you should now have two new T0s named VRF-A and VRF-B.

VRF-A and VRF-B T0’s
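For reference, the same VRF gateways can be created through the Policy API by pointing the new tier-0's vrf_config at the parent tier-0. This is a sketch under the assumption that the parent T0 ID is T0-LM-DC01 and that the vrf_config/tier0_path field names match your NSX-T version.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed Policy API shape for a VRF gateway: a tier-0 whose vrf_config
# references the parent tier-0's policy path.
for vrf in ("VRF-A", "VRF-B"):
    body = {
        "display_name": vrf,
        "vrf_config": {"tier0_path": "/infra/tier-0s/T0-LM-DC01"},
    }
    r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/{vrf}",
                       auth=AUTH, json=body, verify=False)  # lab only
    r.raise_for_status()
```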

Next we will configure the uplink interfaces to TOR for each VRF created.

Select the VRF and click on Interfaces – Set

My Edge Cluster has two Edge VMs, so I will need to create two interfaces per VRF T0, and these will need to be mapped to the correct access VLAN configured on the TOR.

EDGE-03-DC01 VRF A – 192.168.254.0/30, access VLAN 1111


EDGE-04-DC01 VRF A – 192.168.254.4/30, access VLAN 1112

VRF A Uplink Interfaces
Testing connectivity to newly created interfaces from the TOR

EDGE-03-DC01 VRF B – 192.168.254.0/30, access VLAN 1221


EDGE-04-DC01 VRF B – 192.168.254.4/30, access VLAN 1222

VRF B Uplink Interfaces
Testing connectivity to newly created interfaces from the TOR
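An uplink interface like the ones above can also be created via the Policy API. The sketch below shows one interface for VRF-A on EDGE-03; the locale-services ID, trunk segment path, edge path, and the .1 host address on the /30 are assumptions from my lab, so substitute your own values.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed paths: locale-services ID, trunk segment, and edge node path are
# illustrative -- look up the real policy paths in your environment first.
interface = {
    "display_name": "VRF-A-EDGE-03-UPLINK",
    "segment_path": "/infra/segments/TRUNK-VRF-UPLINKS",
    "subnets": [{"ip_addresses": ["192.168.254.1"], "prefix_len": 30}],
    "access_vlan_id": 1111,
    "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/LM-EDGE-Cluster-DC-01/edge-nodes/EDGE-03-DC01",
}
r = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/VRF-A/locale-services/default/interfaces/VRF-A-EDGE-03-UPLINK",
    auth=AUTH, json=interface, verify=False,  # lab only
)
r.raise_for_status()
```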

The next step will be enabling BGP between the VRF T0s and the TOR.

The VRF T0s will use the same BGP AS number configured on the parent T0; in my case I have reconfigured this to 65111 at the parent.

BGP AS 65111 configured at the Parent T0

The next step is to enable BGP on each VRF T0.

Enabling BGP at the VRF T0

Now go ahead and click Set to add your BGP neighbours.

VRF A Edge-03 TOR BGP Neighbour
VRF A Edge-04 TOR BGP Neighbour
VRF B Edge-03 TOR BGP Neighbour
VRF B Edge-04 TOR BGP Neighbour

After BGP is enabled towards the TOR and the BGP neighbour relationships are established, you should see a Success status in the dashboard.

Neighbour Status Success
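A BGP neighbour on a VRF T0 can likewise be defined via the Policy API. The following is a sketch; the neighbour address and the TOR AS number are placeholders (the TOR side of my /30 and a hypothetical AS), and the path assumes the default locale-services ID.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed Policy API path; neighbour address and remote AS are placeholders.
neighbour = {
    "display_name": "TOR-A",
    "neighbor_address": "192.168.254.2",   # TOR side of the 192.168.254.0/30 link
    "remote_as_num": "65000",              # hypothetical TOR AS number
}
r = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/VRF-A/locale-services/default/bgp/neighbors/TOR-A",
    auth=AUTH, json=neighbour, verify=False,  # lab only
)
r.raise_for_status()
```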

This can be confirmed from the TOR, where the BGP neighbour status should show Established.

BGP Established from TOR to VRF A T0
Desired Topology

Adding T1’s to the VRF enabled T0

Next we will create two T1s, one for VRF A and the other for VRF B, and attach them to the T0s we created. This is where we will attach our VRF workload networks.

Creating VRF-A-T1 and attaching it to T0 VRF A
Creating VRF-B-T1 and attaching it to T0 VRF B

Enabling Route Advertisement

Once the T1s are created, we can edit the routing attributes to automatically advertise the networks attached to the T1. This allows BGP to automatically advertise the newly created networks to the TOR switches. Since this is just a lab setup, I have enabled all the options – I can edit these as I enable stateful services if I want to control how I advertise my connected interfaces.

Enabling Route Advertisement on VRF-A T1
Enabling Route Advertisement on VRF-B T1

***We will also need to make sure route advertisement is enabled on the VRF T0s towards the TOR***
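The T1 attachment and route advertisement settings can also be pushed in one Policy API call. This sketch assumes the route_advertisement_types values from the API documentation and simply turns everything on, mirroring the lab-style configuration above.

```python
import requests

NSX = "https://nsx-lm.example.com"   # hypothetical Local Manager FQDN
AUTH = ("admin", "REPLACE_ME")

# Assumed field names and advertisement type values -- check the API guide for your version.
tier1 = {
    "display_name": "VRF-A-T1",
    "tier0_path": "/infra/tier-0s/VRF-A",
    "route_advertisement_types": [
        "TIER1_CONNECTED",
        "TIER1_STATIC_ROUTES",
        "TIER1_NAT",
        "TIER1_LB_VIP",
        "TIER1_LB_SNAT",
        "TIER1_DNS_FORWARDER_IP",
        "TIER1_IPSEC_LOCAL_ENDPOINT",
    ],
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/VRF-A-T1",
                   auth=AUTH, json=tier1, verify=False)  # lab only
r.raise_for_status()
```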

Adding Segments to the T1

Now we will add a segment to each VRF T1 so that we can connect our workloads to and confirm we have connectivity to the outside world.

VRF-A Segment Creation

We create the segment and attach it to the correct T1 gateway, either VRF A or VRF B, and I selected to deploy this as an overlay segment.

VRF-A Segment Creation

After creating the segments, I can see them in the NSX-T dashboard, confirm that they have been attached to the correct T1s, and review the network topology.

Segment View from NSX-T Dashboard

From the NSX-T topology view I can see all my uplink IPs and the segment which I attached to the T1.

VRF A Topology view

VRF-B Topology View

Lastly I want to connect to the TOR and confirm that these newly created IP subnets are being advertised from NSX-T to the TOR via BGP.

Segment routes are learnt via BGP on the TOR

IP Connectivity to the newly created subnets

IP connectivity to the segments from the TOR

Now let's use the NSX-T Traceflow troubleshooting tool to confirm we have end-to-end connectivity by doing a trace from a VM connected to VRF-A to another VM connected to VRF-B.

Successful trace taken from NSX-T Traceflow

I hope you found this useful. Thanks

Deploying NSX-T Data Center Federation with 3.1.0

Deployment Topology

VMware recently announced the general availability of NSX-T 3.1.0 bringing a host of new features and functionality. One of the key features which is now production ready is the Multi-Site solution, Federation.

  • Support for standby Global Manager Cluster
    • Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.
  • With the support of Federation upgrade and Standby GM, Federation is now considered production ready.

With Federation, you can manage multiple NSX-T Data Center environments with a single pane of glass view, create gateways and segments that span one or more locations, and configure and enforce firewall rules consistently across locations.

Once you have installed the Global Manager and have added locations, you can configure networking and security from the Global Manager.

In this post I will cover the step-by-step process to connect my two Local Managers (LMs) to a Global Manager (GM). I have pre-deployed both local sites and the Active/Standby GM appliances.

For a better understanding of the features which are supported or not supported when using NSX-T Data Center Federation, see VMware's guide.

Let's get started

Pre Work Check List

  • NSX-T Local Managers deployed and hosts prepared for NSX
  • NSX-T Local Managers backup configured and executed
  • NSX-T Edge(s) deployed in Local Manager and Edge Clusters created
  • Local Manager requires Enterprise Plus License
NSX-T Manager DC-01
NSX-T Manager DC-02
NSX-T Edge Virtual Machines DC-01
NSX-T Edge Virtual Machine DC-02
NSX-T Backup Configuration DC-01
NSX-T Backup Configuration DC-02
NSX-T Global Manager Dashboard

Now let's start adding the first Local Manager, nsx-dc-01, under the Location Manager tab. Populate the required details for the Local Manager, check compatibility, and hit Save. The Local Manager should be on the same release as the Global Manager.

Adding nsx-dc-01 LM to the Global Manager

The next step is to add the next site's Local Manager – nsx-dc-02 in my setup.

Adding nsx-dc-02 LM to the Global Manager

Once both Local Managers have been added and the Global Manager dashboard is refreshed you will see both Local Managers in the Global Manager dashboard.

Global Manager view showing the recently added Local Managers

After adding the two Local Managers, my Global Manager highlights that it has found objects and policies in these newly added locations and asks whether you want to import them. ***This is a one-time option and cannot be done after proceeding***

Confirm Local Manager successfully registered to GM

Log in to the two Local Managers to see the status and view from their side; you will notice that the LMs establish connections to the Active and Standby GMs, and a connection is also established to the neighbouring LMs.

Local Manager 01 view – successfully connected to the Active and Standby GM’s
Local Manager 02 view – successfully connected to the Active and Standby GM’s

Now I will go ahead and import the objects which the GM discovered in the LM from DC-01. At this point the GM will check whether a configuration backup of the LM has been taken before it proceeds with the import. If a backup has not been taken, this step cannot proceed.

Importing objects from the LM
Objects discovered by the GM

You can add a prefix or suffix to the imported objects – this is optional.

Once this is completed for the first LM, I refreshed the dashboard and the GM requested the same for my second LM.

I just followed the same steps and imported the discovered objects.

Global Manager successfully imported Objects from both LM’s

Networking in Federation

Federation enables you to create virtual network topologies spanning multiple locations across L3 connected networks.

Take Note of the following

Tier-0 gateways, tier-1 gateways, and segments can span one or more locations in the Federation environment. When you plan your network topology, keep these requirements in mind:

  • Tier-0 and tier-1 gateways can have a span of one or more locations.
  • The span of a tier-1 gateway must be equal to, or a subset of, the span of the tier-0 gateway it is attached to.
  • A segment has the same span as the tier-0 or tier-1 gateway it is attached to. Isolated segments are not realized until they are connected to a gateway.

You can create different topologies to achieve different goals.

  • You can create segments and gateways that are specific to a given location. Each site has its own configuration, but you can manage everything from the Global Manager interface.
  • You can create segments and gateways that span locations. These stretched networks provide consistent networking across sites.

Configuring Remote Tunnel Endpoint on the Edge

If you want to create gateways and segments that span more than one location, you must configure a remote tunnel endpoint (RTEP) on Edge nodes in each location to handle the cross-location traffic.

NSX-T Federation introduces a Remote Tunnel Endpoint (RTEP) interface on the NSX Edge which is used to transport encapsulated data between the Local Manager locations. This reduces the number of inter-site tunnels that would otherwise be built between the TEPs of individual transport nodes.

All Edge nodes in the cluster must have an RTEP configured. You do not need to configure all Edge clusters with RTEP. RTEPs are required only if the Edge cluster is used to configure a gateway that spans more than one location.

You can also configure RTEPs from each Local Manager. Select System > Get Started > Configure Remote Tunnel Endpoint.

You can edit RTEPs on an Edge node. Log into the Local Manager and select System > Fabric > Nodes > Edge Transport Nodes. Select an Edge node, and click Tunnels. If an RTEP is configured, it is displayed in the Remote Tunnel Endpoint section. Click Edit to modify the RTEP configuration.

Remote Tunnel Endpoint Configuration

I am using nvds-02, which is mapped to my VLAN transport zone, and my interface is mapped to VLAN 2 on my switch in DC-01 and VLAN 3 for DC-02. I have just used static IP assignment. This task is repeated for each Edge appliance which we will be using in the Edge Cluster for our Global networking setup.

Edge-01 deployed in nsx-dc-01
Edge-01 deployed in nsx-dc-02

Configure the MTU for RTEP on each Local Manager. The default is 1700. Set the RTEP MTU as high as your physical network supports. On each Local Manager, select System > Fabric > Settings and click Edit next to Remote Tunnel Endpoint. The RTEP can work with an MTU of 1500, but this will cause fragmentation.

Tier-0 Gateway Configuration in Federation

Federation offers multiple deployment options when considering T0 deployment and configuration – Active/Active or Active/Standby with some variations of each. As with any Active/Active deployment in NSX-T at the T0, stateful services are not supported.

I have opted for the following deployment in my setup.

Stretched Active-Standby Tier-0 Gateway with Primary and Secondary Locations

In an active-standby tier-0 gateway with primary and secondary locations, the following applies:

  • Only one Edge node is active at a time, therefore the tier-0 can run stateful services.
  • All traffic enters and leaves through the active Edge node in the primary location.

For Active Standby tier-0 gateways, the following services are supported:

  • Network Address Translation (NAT)
  • Gateway Firewall
  • DNS
  • DHCP

Refer to Tier-0 Gateway Configurations in Federation for all the deployment options and considerations.

Global Manager Network Dashboard

Let's go ahead and create our T0 in the Global Manager.

Creating our T0 in the Global Manager

I am creating an Active/Standby T0 and I have selected nsx-dc-01 with my Edge Cluster named Federation-Cluster and I am selecting edge-01-dc01 as the Primary Edge Node. Then I selected nsx-dc-02 as the secondary location and the Federation-Cluster with edge-01-dc02 as the Secondary appliance.

Once configured, you can check whether the T0 has been successfully created across all locations as intended.

Global T0 is created in each LM and indicated by the GM tag in the name (nsx-dc-02)

Next we will configure the uplink interfaces on the T0 for each Edge VM – this will be used for the North/South traffic and the BGP Peering to the TOR. The interfaces are configured from the GM.

Before we proceed with adding the interfaces in the T0, I need to configure the segments which I will be using for the uplink interfaces. In my lab I will be using VLAN1331 and VLAN1332 in DC-01 and VLAN 2331 and VLAN 2332 in DC-02 as the uplink segments to my TOR.

Repeat this process for each Edge appliance uplink interface and remember to select the correct location for the segment and VLAN ID for this VLAN segment

Now I can proceed to map these VLAN-backed segments as uplink interfaces on the GM T0.

Configuring Uplink Interface for Global T0

Remember to select the correct Location, Segment and Edge appliance for the IP address configured. Then repeat this for the required uplink interfaces on the T0 across both sites.

After configuring the first interface, I did a quick ping test from the directly connected TOR to confirm my connectivity is working as expected.

After configuring an external uplink interface for each of the Edge appliances in my setup, I can see that my Global T0 has four interfaces.

Connectivity test from my TOR to the newly created uplink interfaces

Next we will configure BGP routing between the newly created Global T0 and the TOR.

I started by changing the default BGP Local AS number to 65333 and then went ahead to configure the BGP neighbours at the bottom right.

Adding the TOR BGP Neighbours

After adding the first BGP neighbour in the Global T0, I confirmed that the BGP relationship had been established, then repeated the process for the rest of my BGP neighbours.

BGP Neighbour established

After all the BGP neighbours have been configured, you should see the status as green and Successful. This can be confirmed from the TOR too.

Route Redistribution in the T0

NSX-T provides extensive flexibility when deciding which networks should be redistributed upstream to the TOR. Since this is just a lab setup, I will enable all networks created in my topology to be redistributed and advertised to the TOR.

First, start by creating a redistribution list for each location.

NSX-T Redistribution options

The redistribution list can be applied to a route map where extended BGP attributes can be added:

Route-Map Options

Adding a Global T1 Router

Now that we have North/South connectivity and routing up and running, we will go ahead and create a T1 which will provide the default gateway connectivity for our networks, and add some segments which will be stretched across the two NSX locations.

Adding Global T1 and attaching it to the Global T0

Adding Global Segments

The next step is creating Global Segments; these will be created as overlay segments and attached to the previously created Global-T1. I will create three Global Segments for my global 3-tier application.

Creating the Global Web Segment

As before, after configuring the first segment I want to confirm that BGP is now dynamically updating my TOR with the newly created segment.

TOR shows newly created network learnt from BGP

After completing the configurations for the three segments, I can see them successfully created in the GM dashboard.

Segments for the 3 Tier App in the GM

The newly created segments can all be seen in the vCenters at each location and can now be used to connect VMs.

Once the VMs have been connected to the newly created segments, we can test our connectivity.

Network Topology Overview

NSX-T 3.0 introduced a network topology view which is only available in the Local Managers.

Network Topology View from NSX-DC-01

From the topology view you can see the T0, T1, uplink interfaces, and segments created, with their IP addresses and the VMs connected to these segments.

Network Topology View from NSX-DC-02

A quick ping test from WEB-01 deployed in DC-01 with IP 172.123.10.11 to the WEB-02 VM deployed in DC-02 with IP 172.123.10.12 shows our cross-site connectivity is working as expected.

In the next blog I will focus on adding Global Security Policies and enabling micro-segmentation policies across the two sites, centrally configured and managed in the Global Manager.

NSX-T 3.1 – Federation Global Manager Redundancy

A quick post to set up and configure Redundancy for NSX-T Federation Global Managers across two locations.

My primary Global Manager (GM) has been deployed and configured as Active. I have only deployed a single GM appliance at each location; in a production deployment it is highly recommended to deploy a 3-node cluster. Since the release of NSX-T 3.1, VMware supports deployments with a single NSX-T Manager, relying on vSphere and configuration backup and restore options.

Configuring Primary Global Manager as the Active Manager
NSX-T Global Manager Primary Location

I have deployed the standby GM (nsx-global-02) and will now add it as the Standby GM in the primary GM dashboard.

Populate Standby GM details

Populate all the required details, check compatibility, and save. To get the thumbprint requested here, SSH to the GM and run: get certificate api thumbprint

Compatibility check is successful

After successfully adding the Standby GM, you will now see both Active and Standby GM’s in the dashboard.

NSX-T 3.0 URL Analysis

URL Analysis Dashboard in NSX-T

VMware recently introduced URL Analysis capabilities on the NSX L7 Edge Firewall.

“The Layer 7 Edge Firewall is now further enhanced in NSX-T 3.0 with the implementation of URL Analysis for URL Classification and Reputation. The Edge Firewall detects access from outside the datacenter for granular detection and categorization of in-bound and outbound URLs.”

URL analysis allows administrators to gain insight into the type of websites accessed within the organization, and understand the reputation and risk of the accessed websites.

This blog will take you through the step-by-step procedure to enable URL Analysis in your NSX-T 3.0 environment – note that this feature requires the NSX Data Center Enterprise Plus license. Unlike some other features, URL Analysis is license-enforced and will not work if you do not have at least the Enterprise Plus license.

Let’s get started with enabling URL Analysis

This blog assumes you have already deployed the required Edge appliances and configured segments, T1s, and T0s as needed to match the demo topology below.

NSX-T has 80+ pre-defined categories, and domains can belong to multiple categories. Scores are computed from 0 to 100 and mapped into 5 risk levels. The distribution is shown as a percentage of each risk level along with the total number of flows per risk level. NOTE: URL Analysis is only available on the gateway firewall.

NSX-T Demo Topology taken from the NSX-T Dashboard

Step 1 – Ensure that DNS is configured on an edge node. See Create an NSX Edge Transport Node in the NSX-T Data Center Installation Guide. ***When deploying the Edge Appliance, you need to make sure you have followed the deployment guide correctly and enabled DNS***

It is important to know that the management interface(s) of the Edge node(s) hosting the Services Router (SR) of your NSX-T T1 gateway must have access to the Internet to download the database. DNS is required on the Edge nodes to resolve the cloud server domain name hosting the URL database. If the management interface(s) do not have Internet access or cannot resolve DNS, this functionality will not work.

Step 2 – Enable URL Analysis

NSX-T Security Overview Dashboard

On the NSX-T Manager dashboard, click on Security at the top and you will see the dashboard above. At the moment we have not enabled URL Analysis and therefore have no data available here yet. Let's click on Get Started with URL Analysis, then hit GET STARTED on the pop-up shown below.

Pop-up Highlighting high-level steps to follow

Next we need to select the Edge Cluster where we are going to enable the N/S URL Analysis functionality – in my lab I will enable it on the cluster named edge-cluster-url-analysis.

URL Analysis Settings Dashboard

Toggle Enable on the right-hand side for the cluster you will be using in your setup, and then click YES on the pop-up shown below.

URL Analysis is now enabled on my edge cluster

Now that we have enabled it on the cluster(s) as shown above, we have an optional task to create a context profile with a URL Category attribute. For the sake of this blog we will go ahead and perform this task.

Step 3 – (Optional) Create a context profile, with a URL Category attribute

Click Set on the Cluster

Click on Set under Profiles for the cluster you have selected to enable this on; it will open the screen shown below.

Creating a Context Profile

Click ADD CONTEXT PROFILE on the top left and provide a name for the context profile; you can also add a description.

Once you have configured the name, click on Set and the window below opens. Go ahead and click on ADD ATTRIBUTE.

Now select URL Category

This takes you to the next dashboard showing a list of attributes, which are our categories; just click on the names to add them to your list.

I added all the available options to my profile, click ADD when done.

Next Hit Apply

Now hit Save, followed by APPLY on the bottom right.

Once you apply the context profile, the system will contact the URL database server on the Internet and perform the database download. You should then see a URL Data Version and a Last Synced date and time.

Step 4 – In this last step we need to configure a Layer 7 gateway firewall rule for DNS traffic so that URL Analysis can analyze domain information. ***NOTE*** This is a gateway firewall rule and not a distributed firewall rule. Navigate to Gateway Firewall in the left-hand menu.

Now we will create an L7 DNS firewall rule under All Shared Rules. Click ADD POLICY and you will see a new policy section added.

Let's rename the default name to something useful; I renamed mine L7DNS-Policy. Click on the name “New Policy” and you will be able to edit it there.

Now click on the three blue dots next to the policy name and select ADD RULE.

Adding a new rule

Go ahead and name the new rule – I named mine L7DNSRule – and leave the source and destination as Any.

Now move the cursor over the Services field in this new rule and click on the pencil so that we can edit the services. Here we search for DNS, add DNS and DNS-UDP, and click Apply. This configures L4 DNS inspection.

Next, move the cursor over the Profiles field where it says “None” and click the pencil to edit this field.

Here we select DNS; this is where we enable the L7 DNS inspection capabilities – click Apply.

Type DNS in the box at the top and then select DNS

The last step is to apply this L7 DNS policy to the T1 to which our end-system segments are connected – you can only apply this to a T1. Again, hover the cursor over the Applied To field, click the pencil, and select the T1 you will use – I am using T1-URL-Analysis here. Then click Apply.

Select the correct T1

Once you hit Apply you will see the policy defined, and now we just need to publish this policy so that the NSX Manager pushes it as needed – so go ahead and click Publish in the top right.

You can click the refresh button on the bottom left-hand side and watch until the policy status is green and shows Success.

L7 DNS Policy successfully applied to the T1
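The whole of Step 4 can also be expressed against the Policy API as a gateway policy plus one rule. Treat the sketch below as an illustration only; the gateway-policies path, the SharedPreRules category, the built-in DNS context profile path, and the service paths are assumptions from my lab and the API documentation, and should be verified for your version.

```python
import requests

NSX = "https://nsx-manager.example.com"   # hypothetical NSX-T Manager FQDN
AUTH = ("admin", "REPLACE_ME")
BASE = f"{NSX}/policy/api/v1/infra/domains/default/gateway-policies"

# Assumed Policy API paths: create the policy section, then the L7 DNS rule
# applied to the T1.
requests.patch(f"{BASE}/L7DNS-Policy", auth=AUTH,
               json={"display_name": "L7DNS-Policy", "category": "SharedPreRules"},
               verify=False).raise_for_status()  # lab only

rule = {
    "display_name": "L7DNSRule",
    "source_groups": ["ANY"],
    "destination_groups": ["ANY"],
    "services": ["/infra/services/DNS", "/infra/services/DNS-UDP"],
    "profiles": ["/infra/context-profiles/DNS"],   # assumed built-in DNS context profile path
    "scope": ["/infra/tier-1s/T1-URL-Analysis"],
    "action": "ALLOW",
}
requests.patch(f"{BASE}/L7DNS-Policy/rules/L7DNSRule", auth=AUTH,
               json=rule, verify=False).raise_for_status()
```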

Final Step – Now that all the configuration work is done, we can go back to URL Analysis and see whether our URLs are being classified; you might see the “URL Analysis in Progress. Please check back.” screen. Generate some DNS requests in your environment and return to the dashboard – I have a virtual Windows desktop connected to one of the segments, and I just opened a browser and visited various websites to create some traffic.

After a short while, access the Security Overview dashboard and you should see data populated under URL ANALYSIS SUMMARY – N-S TRAFFIC.

Security Overview Dashboard

For a detailed view, click on the URL Analysis tab on the right-hand side and you will see the output below. Note the reputation scores, categories, and other details.

Congratulations, you now have a working NSX-T 3.0 URL Analysis environment

Deploying the NSX-T Cloud Service Manager (CSM)

NSX Cloud enables you to manage and secure your public cloud inventory using NSX-T Data Center. The Cloud Service Manager (CSM) provides a single pane of glass management endpoint for your public cloud inventory.

NSX Cloud delivers consistent networking and security for your applications running natively in public clouds. No more infrastructure silos to drive up complexity and operational expense – instead, enjoy intrinsic security policies globally and precise control across virtual networks, regions, and clouds. NSX Cloud currently supports Microsoft Azure and Amazon AWS, including Azure Government and AWS GovCloud (US) regions. For full details, see NSX Cloud.

This blog focuses on deploying the NSX-T CSM appliance and connecting it to my on-premises NSX-T Manager; following the base setup we will go ahead and connect the CSM to my AWS account.

There is no specific appliance binary for the NSX-T CSM; it uses the same NSX-T Manager software package. So we will just reuse the NSX-T Manager appliance OVA which you downloaded from the VMware download page when you deployed your NSX-T Managers.

You will need an additional IP address, which will be the management IP for this new appliance. Since this appliance needs to access the Internet, it either needs direct Internet access or the proxy details for your environment.

You are going to need the admin credentials for your on-premises NSX-T Manager when we register the CSM with it.

Let’s Get Started

Step 1 – Log in to your vCenter and navigate to the cluster where we will deploy the NSX-T Manager OVA – I am using NSX-T 3.0.

Step 2 – Provide a Virtual Machine Name for this deployment, I am just using NSX-CSM

Step 3 – Select the vCenter Cluster or host where you are planning to deploy the NSX-CSM appliance – I will deploy mine in my management cluster

Step 4 – Review the details below and click Next

Step 5 – This is where we decide whether to deploy this as a traditional NSX-T Manager or a CSM. By selecting ExtraSmall you will see the side note that this configuration is only supported for the NSX-T Cloud Services Manager, along with the resources required for this deployment. If you accidentally go ahead with the default selection of Medium, you will need to restart from Step 1.

Step 6 – Next we select the storage to which the appliance will be deployed. In my case I have a vSAN datastore in my management cluster and I will deploy it there.

Step 7 – Select the correct port group to which the CSM will be attached – in my lab I will use Management_DXB; this is the same port group to which my on-premises NSX-T Manager is connected.

Step 8 – In this step we will configure the management interface and password details.

Please follow the password complexity rules below:
– minimum of 12 characters in length
– >=1 uppercase character
– >=1 lowercase character
– >=1 numeric character
– >=1 special character
– >=5 unique characters

You must configure the System Root User Password and the CLI “admin” User Password. The others are optional and I just left them blank.

Take note: the NSX-T default password expiry time is 90 days; this can be changed via the CLI.

Now we need to configure the hostname, and an important step is to select the role for this appliance – since we are deploying this as a CSM, I select nsx-cloud-service-manager from the drop-down menu.

Make sure to select nsx-cloud-service-manager
Populate the correct IP address details

Finally, before deploying the CSM, make sure to use the correct DNS and NTP servers for your environment.

Step 9 – After populating all the needed details, we are now ready to deploy the NSX-T CSM appliance. Once the deployment has completed, go ahead and power it up.

Final step reviewing all the details before we hit Finish

Quick Tip: How to get the NSX-T Manager’s Thumbprint

For various reasons you might face a requirement that needs the NSX-T Manager's thumbprint… This could be when you deploy a standalone NSX-T Edge or the NSX-T Cloud Services Manager (CSM). My use case was deploying the NSX-T CSM.


Step 1: Open an SSH session to the NSX-T Manager with the admin credentials

Step 2: On the NSX-T Manager terminal run the following command: get certificate api thumbprint

Now you can copy the output to wherever you need it.
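If you would rather not SSH to the appliance, you can also compute the SHA-256 digest of the certificate presented on the API port yourself. This small Python sketch assumes the hypothetical FQDN below and that the certificate on port 443 is the API certificate; the printed value should correspond to the thumbprint returned by get certificate api thumbprint.

```python
import hashlib
import socket
import ssl

host, port = "nsx-manager.example.com", 443   # hypothetical NSX-T Manager FQDN

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # we only want the certificate bytes (lab use)

# Connect, grab the DER-encoded certificate, and hash it.
with socket.create_connection((host, port)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der_cert = tls.getpeercert(binary_form=True)

print(hashlib.sha256(der_cert).hexdigest())
```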

Upgrading NSX-T Federation environment to NSX-T 3.0.1

VMware announced the availability of NSX-T 3.0.1 on 23 June 2020. This post shows the steps I followed to upgrade my lab environment from NSX-T 3.0 to NSX-T 3.0.1.

NSX-T Data Center 3.0.1 is a maintenance release which includes new features and bug fixes – I am upgrading my lab to stay on the latest release, as I use this setup for demos, but also to fix one or two bugs.

As with any upgrade always check the compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

You are going to need to download the upgrade bundle from the download page. This requires an active support contract.

NSX 3.0.1 Upgrade Bundle

Let’s Get Started with the upgrade

My NSX Lab Layout

My lab is hosted in a nested ESXi environment with two simulated sites (DC-01 and DC-02). Each site has its own local vCenter and local NSX-T Manager, and all management components are hosted in a separate management cluster where my Global NSX-T Manager is also deployed. Both the DC-01 and DC-02 Local NSX-T Managers are registered with the Global Manager, shown below.

System Overview from the NSX-T Global Manager

Since this is only a lab environment, I am using a single NSX-T Manager appliance for each of the NSX Managers (Local and Global). *At the time Federation was released in NSX-T 3.0, only one Global Manager virtual appliance was supported.

I am going to use the built-in NSX-T upgrade coordinator under the System tab for the upgrade process. The upgrade coordinator runs in the NSX Manager. It is a self-contained web application that orchestrates the upgrade process of hosts, NSX Edge clusters, the NSX Controller cluster, and the management plane.

The upgrade coordinator is upgraded first, then the Global Manager management plane is updated, followed by the Local Managers.

Let me start by upgrading the upgrade coordinator by clicking the blue UPGRADE notice on the Global Manager. Next I will need to upload the upgrade bundle package file, so browse to where you have saved the package image and hit Upload.

Once you hit Upload, the NSX-T Manager starts uploading the image and you will see the upload progress meter as shown below – depending on your setup and the bandwidth available between the NSX Manager and where the file is copied from, this could take a few minutes – it's an 8.6 GB image.

NSX-T Manager uploading image

Once the upload is completed, NSX-T Manager will start extracting the upgrade bundle and perform a compatibility matrix check – this can take some time too, 10-20 minutes.

So now that all the checks have been done, we are ready to start the upgrade process on the coordinator. Let’s hit the upgrade button.

Read and accept the EULA terms, hit continue

Confirm if you are sure and want to continue… Hit Yes, Continue

At this point it seems that nothing is happening, but the upgrade coordinator is being upgraded; it should take a couple of minutes.