VMware NSX Application Platform Deployment

This post is intended for administrators who must deploy or manage the NSX Application Platform and activate the NSX applications hosted on the platform. It covers deployment and activation starting from the NSX-T UI, and it assumes the required Kubernetes platform has already been prepared (control plane and worker nodes created to meet the requirements documented by VMware).

Overview

The NSX Application Platform is a modern microservices platform that hosts the following NSX features that collect, ingest, and correlate network traffic data in your NSX-T environment.

  • VMware NSX® Intelligence™
  • VMware NSX® Network Detection and Response™
  • VMware NSX® Malware Prevention
  • VMware NSX® Metrics

As network traffic data is produced, captured, and analyzed, the NSX Application Platform provides the platform that can be scaled out to meet the needs of these data-intensive features and the core services that support them.

Following is a list of some of the core services utilized by these NSX features. These services can be scaled out as the need arises.

  • Messaging
  • Analytics
  • Data Storage
  • Metrics

The NSX Application Platform is available beginning with NSX-T Data Center 3.2. After you meet the minimum system prerequisites and prepare any existing analytics data that you want migrated from a previous NSX Intelligence installation, you can deploy the platform using the NSX Manager user interface.

Refer to the posts below from the VMware documentation covering deployment prerequisites, licensing requirements, and system requirements. I am assuming you have all of these covered before activating and deploying the NSX Application Platform from the NSX-T UI.

NSX Application Platform Deployment Prerequisites
To install the NSX Application Platform successfully and to activate the NSX features that it hosts, you must prepare the deployment environment so that it meets the minimum required resources. [Read more]

License Requirement for NSX Application Platform Deployment
To deploy the NSX Application Platform, your NSX Manager session must be using a valid license during the deployment. [Read more]

NSX Application Platform System Requirements
The following table lists the form factors that the NSX Application Platform supports, along with the minimum resources required for each. The form factor you select determines which NSX features you can activate or install on the platform. [Read more]

NSX Application Platform Deployment Checklist

Use the checklist to track your progress with the NSX Application Platform deployment workflow and the activation of the NSX features that the platform hosts.

Deploying the NSX Application Platform

My deployment leverages a Tanzu Kubernetes Cluster and the NSX-T native load balancer, both of which were enabled and deployed prior to starting this deployment.

Let’s get started by clicking System -> NSX Application Platform.

Step 1 – Prepare to Deploy

Start off by clicking “Deploy NSX Application Platform”.

Prepare to Deploy

Helm Repository – The repository from which you can obtain the packaged Helm chart for NSX Application Platform.

https://projects.registry.vmware.com/chartrepo/nsx_application_platform

Docker Registry – The registry URL from which you can obtain the Docker images for the NSX Application Platform. Take note, there is no https:// in this URL.

projects.registry.vmware.com/nsx_application_platform/clustering

These packages can be hosted on a private container registry, or you can point to the VMware public repository. I am keeping it simple and pointing my deployment to the VMware public repository, which means the NSX-T Manager must have Internet reachability.
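Before saving the URLs in the UI, you can sanity-check reachability from the command line. This is a rough sketch of my own (the hostnames come from the URLs above; the `strip_scheme` helper and `check_endpoints` function are illustrative, not a VMware tool). Run it from a host with the same outbound access as the NSX Manager.

```shell
HELM_REPO="https://projects.registry.vmware.com/chartrepo/nsx_application_platform"
REGISTRY_HOST="projects.registry.vmware.com"

# The Docker Registry field must be entered without a scheme; this strips one if present
strip_scheme() { printf '%s\n' "${1#*://}"; }

check_endpoints() {
  # The Helm repo index should return HTTP 200 if reachable
  curl -fsSL -o /dev/null -w 'helm repo: %{http_code}\n' "${HELM_REPO}/index.yaml"
  # The registry speaks the Docker Registry v2 API; 200 or 401 both mean it is up
  curl -sSL -o /dev/null -w 'registry: %{http_code}\n' "https://${REGISTRY_HOST}/v2/"
}

strip_scheme "https://projects.registry.vmware.com/nsx_application_platform/clustering"
# -> projects.registry.vmware.com/nsx_application_platform/clustering
```

Call `check_endpoints` to run the actual connectivity test.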

If you are using a VMware Tanzu Kubernetes Cluster (TKC), do not use its embedded Harbor container registry for hosting the NSX Application Platform Helm charts and Docker images. Your infrastructure administrator must set up a separate Harbor container registry.

URL Populated

After populating the URLs, click SAVE URL.

If your NSX-T Manager can reach these URLs, it will list the Platform Target Version and Chart Name as shown below.

Platform Target Version

Click NEXT in the bottom right-hand corner.

Step 2 – Configuration details

Configuration

Kubernetes Configuration – Upload File

You need to create a kubeconfig file – all the steps are nicely documented here.
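The documented steps boil down to creating a service account with sufficient rights, extracting its token, and wrapping it in a kubeconfig. The sketch below shows the general shape only: the names (`napp-svc-acct`, `tkc-napp`), the API server address, and the `insecure-skip-tls-verify` shortcut are all my own placeholders — the VMware-documented procedure embeds the cluster CA certificate instead, which you should do outside a lab.

```shell
# Build a token-based kubeconfig for the NSX Manager to upload.
# Assumes a service account token has already been obtained, e.g. roughly:
#   kubectl create serviceaccount napp-svc-acct
#   kubectl create clusterrolebinding napp-svc-acct \
#     --clusterrole=cluster-admin --serviceaccount=default:napp-svc-acct
make_kubeconfig() {
  local cluster="$1" server="$2" token="$3"
  cat <<EOF
apiVersion: v1
kind: Config
clusters:
- name: ${cluster}
  cluster:
    server: ${server}
    insecure-skip-tls-verify: true   # lab shortcut; use certificate-authority-data in production
contexts:
- name: ${cluster}-context
  context:
    cluster: ${cluster}
    user: napp-svc-acct
current-context: ${cluster}-context
users:
- name: napp-svc-acct
  user:
    token: ${token}
EOF
}

make_kubeconfig tkc-napp https://10.0.0.10:6443 \
  "$(cat token.txt 2>/dev/null || echo TOKEN)" > kubeconfig.yaml
```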

Click Select, browse to the file on your local machine, and upload it to the NSX-T Manager.

Sample of my token file

If you see the error message “Server version and client version are incompatible,” upload a compatible version of the Kubernetes Tools bundle to resolve the error.

You can use the Kubernetes Tools bundle provided in the VMware Product Download site at https://customerconnect.vmware.com/downloads/details?downloadGroup=NSX-T-3201&productId=982&rPId=84354#product_downloads. When you download the file, the default name is kubernetes-tools-buildversion.tar.gz. For example, kubernetes-tools-1.20.11-00_3.5.4-1.tar.gz. Do not rename the file when you download it. The file is signed with a VMware private key.

  1. Either select Upload Local File or Upload Remote File.
  2. If you selected Upload Local File, click Select and navigate to the location of the Kubernetes Tools file.
  3. If you selected Upload Remote File, enter the URL from which the system can obtain the compatible Kubernetes Tools file. For example, enter the URL of the kubernetes-tools-buildversion.tar.gz file that you downloaded.
  4. Click Upload.
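The incompatibility error above is driven by kubectl's client/server version skew (the client is generally supported within one minor version of the API server). A tiny helper of my own to reason about it, assuming plain integer minor versions:

```shell
# Returns success if the client/server minor versions are within +/-1 of each other.
# Note: some distributions report minors like "21+"; strip non-digits first.
minor_skew_ok() {
  local client_minor="$1" server_minor="$2"
  local diff=$(( client_minor - server_minor ))
  [ "${diff#-}" -le 1 ]   # ${diff#-} drops a leading minus, i.e. absolute value
}

# With cluster access, the two minors can be pulled with something like:
#   kubectl version -o json | jq -r '.clientVersion.minor, .serverVersion.minor'
minor_skew_ok 20 21 && echo compatible || echo "upload a matching Kubernetes Tools bundle"
# -> compatible
```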

Storage Class – Storage Class values are provided by the kubeconfig file. To change available choices, please modify and resubmit the kubeconfig file.

Cluster Type – Standard is the only supported option today

Service Name – Enter a valid fully qualified domain name (FQDN) value for the Service Name text box.

The Service Name is used as the HTTPS endpoint to connect to the NSX Application Platform. The Service selector defines an abstract reference to multiple Kubernetes nodes. To change the available choices, please modify and resubmit the kubeconfig file.

This requires an FQDN created in DNS and reachable by the NSX-T Manager. The IP it resolves to will be configured as the ingress on the load balancer.
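Before clicking Next, it is worth confirming the service name both looks like an FQDN and actually resolves. The validation helper below is my own loose sketch (the `napp.corp.local` name is a made-up example); the real test is the commented resolution check.

```shell
# Loose FQDN shape check: one or more dot-separated labels ending in an
# alphabetic top-level label of at least two characters.
is_fqdn() { printf '%s' "$1" | grep -Eq '^([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$'; }

is_fqdn napp.corp.local && echo ok
# -> ok

# Then confirm the record resolves from the NSX Manager's DNS (requires DNS):
#   nslookup napp.corp.local
```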

Form Factor – Lastly, on this page you need to select the form factor to be deployed. If you are planning to enable all the features hosted on the NSX Application Platform, you will need to select the Advanced form factor.

Standard supports – NSX Network Detection and Response, NSX Malware Prevention, and NSX Metrics.

Advanced supports – NSX Network Detection and Response, NSX Malware Prevention, NSX Metrics, and NSX Intelligence.

Configuration parameters populated

Once all the parameters are populated, click Next on the bottom right.

Step 3 – Pre Check the Platform

The system checks the configuration information that has been obtained before proceeding with the NSX Application Platform deployment.

Pre Checks

Click Run Prechecks; the system will run the listed pre-checks, which should take a minute or so.

Pre Checks Completed

All pre-checks completed successfully with one warning: “Kubernetes cluster and NSX time should be in sync.” This is just a note, not an error.
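The warning is about clock drift between the NSX Manager and the Kubernetes nodes. A quick manual comparison is to run `date -u` on both via SSH (assuming you have SSH access); the tiny helper below, my own addition, just quantifies the difference between two epoch timestamps:

```shell
# Absolute difference between two epoch-second timestamps
drift_seconds() {
  local a="$1" b="$2"
  echo $(( a > b ? a - b : b - a ))
}

# Example: compare a local timestamp with one collected from a node,
# e.g. via:  ssh worker-node date +%s   (hypothetical node name)
drift_seconds "$(date +%s)" "$(date +%s)"
```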

If the system highlighted any other issues, you can view their details and address them as needed; otherwise, proceed and click Next on the bottom right.

Step 4 – Deploy NSX Application Platform

Deploy NSX Application Platform

Review all the settings shown and, if everything looks correct, proceed by clicking Deploy at the bottom right.

Deployment Progress Monitor

This will take some time depending on your environment, but as the deployment takes place you can see the progress meter moving. I have come across a number of deployments hanging at this point:

Installing Certificate Manager… In Progress 10%

At this stage I have not managed to figure out the root cause; I ended up recreating everything, and it worked. If I find any updated troubleshooting information, I will share it here.

40% Done
Registering Platform – 70% Done
Installing Metrics – 80% Done

Results

Once the system has successfully deployed the NSX Application Platform, the UI is updated with details about the platform.

Successful Deployment
Core Services View

Once the NSX Application Platform has successfully deployed, you can continue with enabling the features listed at the bottom of the page – NSX Metrics is enabled as part of the deployment.
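You can also confirm pod health from the CLI before enabling features. The filter below is my own sketch; it assumes the default platform namespace (`nsxi-platform` in my deployment — verify yours) and simply lists pods whose status is neither Running nor Completed.

```shell
# Print the names of pods that are not in a healthy terminal/steady state.
# Reads `kubectl get pods` tabular output on stdin (NAME READY STATUS ...).
not_running() { awk 'NR>1 && $3 != "Running" && $3 != "Completed" {print $1}'; }

# With cluster access:
#   kubectl --kubeconfig kubeconfig.yaml -n nsxi-platform get pods | not_running

# Offline demonstration with fake output:
printf 'NAME READY STATUS\npod-a 1/1 Running\npod-b 0/1 CrashLoopBackOff\n' | not_running
# -> pod-b
```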

Features running on NAPP

I will be doing follow-up posts enabling and consuming the various features.

Tanzu Deployment Screenshot for reference

Just some screenshots of my Tanzu deployment in vCenter.

My Namespace -napp-ns
Kubernetes Events
General
Workload Networking

VMware NSX-T Data Center 3.2.0.x

VMware NSX-T Data Center 3.2.0   |  16 December 2021  |  Build 19067070

***VMware removed 3.2.0 around a week or two after the release and recommends users upgrade to 3.2.0.1 instead***

NSX Application Platform

What’s New

NSX-T Data Center 3.2.0 is a major release offering many new features in all the verticals of NSX-T: networking, security, services and onboarding. Here are some of the major enhancements.

  • Switch agnostic distributed security: Ability to extend micro-segmentation to workloads deployed on vSphere networks.
  • Gateway Security: Enhanced L7 App IDs, Malware Detection and Sandboxing, URL filtering, User-ID firewall, TLS inspection (Tech Preview) and Intrusion Detection and Prevention Service (IDS/IPS).
  • Enhanced Distributed Security: Malware detection and Prevention, Behavioral IDS/IPS, enhanced application identities for L7 firewall.
  • Improved integration with NSX Advanced Load Balancer (formerly Avi): Install and configure NSX ALB (Avi) from NSX-T UI; Migrate NSX for vSphere LB to NSX ALB (Avi).
  • NSX for vSphere to NSX-T Migration: Major enhancements to the Migration Coordinator to extend coverage of supported NSX for vSphere topologies and provide flexibility on the target NSX-T topologies.
  • Improved protection against Log4j vulnerability: Updated Apache Log4j to version 2.16 to resolve CVE-2021-44228 and CVE-2021-45046. For more information on these vulnerabilities and their impact on VMware products, please see VMSA-2021-0028.

VMware introduces new capabilities on the Gateway, and with this a new licensing model focused on security capabilities has been introduced:

  • VMware NSX Gateway
  • VMware NSX Advanced Threat Prevention Add-on to NSX Gateway Firewall

Refer to Product Offerings for NSX-T 3.2 Security for all the features included in these new licenses.

My Highlights
  • The introduction of Advanced Threat Prevention capabilities with Malware Detection and Prevention and Network Detection and Response fully integrated into the NSX-T UI. This brings Sandboxing capabilities native to the platform.
  • Now customers can leverage their native VDS switch to take advantage of all these distributed security capabilities without having to recreate segments via the NSX-T UI and migrate workloads between port groups.
NSX-T 3.2 introduces the NSX Application Platform, which replaces the traditional NSX Intelligence appliance. It is a new container-based solution that provides a highly available, resilient, scale-out architecture delivering a set of core platform services that enable several new NSX features.
  • Some of the new features are currently available as Tech Preview – Gateway Intrusion Detection/Prevention and TLS decryption/encryption.
Gateway Firewall
  • User Identity-based Access Control – Gateway Firewall introduces the following additional User Identity Firewall capabilities: For deployments where Active Directory is used as the user authentication system, NSX leverages Active Directory logs.
  • For all other authentication systems, NSX can now leverage vRealize Log Insight based logs to identify User Identity to IP address mapping.
  • Enhanced set of L7 AppIDs – Gateway Firewall capabilities are enhanced to identify a more comprehensive number of Layer-7 applications.
  • TLS Inspection for both inbound and outbound traffic (🔎Tech Preview; not for production deployments) – More and more traffic is getting encrypted on the network. With the TLS inspection feature, you can now leverage NSX Gateway Firewall to do deep-packet inspection and threat detection and prevention services for encrypted traffic as well.
  • URL Filtering (includes categorization and reputation of URLs) – You can now control internet bound traffic based on the new URL Filtering feature. This feature allows you to control internet access based on the URL categories and as well as the reputation of the URLs. URL repository, including the categorization and reputation data, is updated on an ongoing basis for updated protection.
  • Malware Analysis and Sandboxing support – NSX Gateway Firewall now provides malware detection from known as well as zero-day malware using advanced machine learning techniques and sandboxing capabilities. The known malware data is updated on an ongoing basis. (Please see known issue 2888658 before deploying in live production deployments.)
  • Intrusion Detection and Prevention (🔎Tech Preview; not for production deployments) – For NSX Gateway Firewall, Intrusion Detection and Prevention capabilities (IPS) are introduced in a “Tech Preview” mode. You can try the feature set in non-production deployments.
New NSX Application Platform
  • NSX Application Platform – VMware NSX Application Platform is a new container based solution introduced in NSX-T 3.2.0 that provides a highly available, resilient, scale out architecture to deliver a set of core platform services which enables several new NSX features such as:
    • NSX Intelligence
    • NSX Metrics
    • NSX Network Detection and Response
    • NSX Malware Prevention

The NSX Application Platform deployment process is fully orchestrated through the NSX UI and requires a supported Kubernetes environment. Refer to the Deploying and Managing the VMware NSX Application Platform guide for more information on the infrastructure prerequisites and requirements for installation.

Network Detection and Response
  • VMware Network Detection and Response correlates IDPS, Malware and Anomaly events into intrusion campaigns that help identify threats and malicious activities on the network.
  • Correlation into threat campaigns rather than events, which allows SOC operators to focus on triaging only a small set of actionable threats.
  • Network Detection and Response collects IDPS events from Distributed IDPS, Malware events (malicious files only) from Gateway, and Network Anomaly events from NSX Intelligence. Gateway IDPS (Tech Preview) events are not collected by NSX Network Detection and Response in NSX-T 3.2.
  • Network Detection and Response functionality runs in the cloud and is available in two cloud regions: US and EU.
Licensing
  • License Enforcement – NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users are able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition are restricted to only viewing the objects; create and edit will be disallowed.
  • New Licenses – Added support for new VMware NSX Gateway Firewall and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.
Tech Preview Features

NSX-T Data Center 3.2 offers several features for your technical preview. Technical preview features are not supported by VMware for production use. They are not fully tested and some functionality might not work as expected. However, these previews help VMware improve current NSX-T functionality and develop future enhancements.

For details about these technical preview features, see the available documentation provided in the NSX-T Data Center 3.2 Administration Guide. Links are provided in the following list that briefly describes these technical preview features. The topics will have Technical Preview in their titles.

Summary

So many new capabilities in NSX-T 3.2.0 will definitely keep me busy with future blog posts, which will hopefully help others get some of these features enabled.

VMware NSX Security

It has been a while since I last posted something here and so many new features have been added to VMware NSX since the 3.2 release in December 2021. With a major focus on Security in this release I thought it would make sense to create a few blog posts which would help others getting the Advanced Threat Prevention capabilities up and running.

I will be doing a few blog posts covering the deployment of

  • VMware NSX Application Platform
  • VMware NSX® Intelligence™
  • VMware NSX® Network Detection and Response™
  • VMware NSX® Malware Prevention
  • VMware NSX® Metrics

Stay virtual!

vRealize Network Insight 6.1

vRealize Network Insight 6.1 | 14 Jan 2021| Build 1610450081

vRealize Network Insight helps you build an optimized, highly available and secure network infrastructure across hybrid and multi-cloud environments. It provides network visibility and analytics to accelerate micro-segmentation security, minimize risk during application migration, optimize network performance and confidently manage and scale NSX, SD-WAN Velocloud, and Kubernetes deployments.

What’s New

vRealize Network Insight 6.1 and vRealize Network Insight Cloud (SaaS) deliver the latest in application and network visibility and analytics.

  • Customization: Pinboards to customize persistent dashboards
  • Multi-Cloud: Enhancements to visibility with VMware Cloud on AWS for traffic and dropped packets across gateway AWS Direct Connect, cross VPC, and public interface metrics
  • Assurance and Verification: Enhancements with in-context device configuration snapshots viewable from the topology map for easier troubleshooting
  • NSX-T: Data integration from NSX Intelligence can now be integrated for more application-centric network operations and troubleshooting visibility
  • VMware SD-WAN: New capabilities with analytics intent for better service-level agreement (SLA) monitoring and visibility with SD-WAN link utilization and metering

My Highlights

  • Integration with NSX-T Intelligence – I think vRNI does a fantastic job correlating all the data it ingests and now adding Intelligence as a data source will improve the visibility even more. When integration with NSX Intelligence is enabled, the flow record in vRealize Network Insight will now also display the Layer 7 service.
  • Intent Based improvements continue to show more and more value to users – I will definitely be testing the new Edge Uplink Utilization visibility.

Network Assurance and Verification

Layer 7 Service Information from NSX Intelligence

NSX-T Monitoring and Troubleshooting

VMware Cloud on AWS (VMC on AWS) 

  • Support for VMC T0 Router Interface Statistics which includes Rx Total Bytes, Rx Total Packets, Rx Dropped Packets, Tx Total Bytes, Tx Total Packets and Tx Dropped Packets for Public, Cross-VPC, and Direct Connect interfaces.

Physical Device Monitoring and Troubleshooting

  • Provides metric charts for better visualization and interaction with the metrics.

Search

  • Provides alternative search suggestions when a search fails to show results.

Pinboard

  • Preserves filter state when pinning a widget to a pinboard
  • Ability to pin the no results search to a pinboard
  • Ability to see other users’ pinboards in the Auditor role.

Alerts

  • Shows different alert definitions in separate tabs for easy classification and better management of alerts. Alert Definition (known as events in earlier releases) refers to a problem or a change condition that the system detects.
  • Introduces the term alert to indicate an instance when the system detects a problem, a change, or violation of an intent.

vRealize Network Insight Platform

  • Supports web proxies for data sources (SD-WAN, AWS, Azure, ServiceNow, and VMC NSX Manager)
  • Shows information related to Platform and Collector VMs such as IP address (name), last activity, status, and so on in one single page
  • Introduces the following new pages and capabilities:
    • Adding or updating data sources 
    • Web proxies listing page 
    • Web proxies usage visibility 
    • Infrastructure and support page.

Others

Source vRealize Network Insight 6.1 Release Notes

VMware NSX-T Data Center 3.1.1

VMware NSX-T Data Center 3.1.1   |  27 January 2021  |  Build 17483185

What’s New

NSX-T Data Center 3.1.1 provides a variety of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas.

My Highlights
  • The introduction of OSPFv2 as the North/South routing protocol – customers have been asking for this for some time now, though I think many have settled on eBGP by this point; nonetheless, I am sure this will still be handy for many customers.
  • NSX for vSphere reaches end of support in January 2022, and many customers are scrambling to migrate to NSX-T. The improvements in the latest version of the Migration Coordinator built into NSX-T now include more supported deployments, e.g. NSX for vSphere cross-vCenter deployments, vRA-deployed blueprints, and modular options for selecting specific hosts and distributed firewall policies.
  • NSX-T Advanced Server Load Balancer (AKA AVI Networks)
  • NSX-T Cloud – Option to deploy the NSX management plane and control plane fully in Azure.
  • NVDS to Converged VDS – Introduces a UI-based migration option for Transport Nodes from N-VDS to VDS with NSX-T. ***Note: only supported with VDS 7.0***
L3 Networking
  • OSPFv2 Support on Tier-0 Gateways
    • NSX-T Data Center now supports OSPF version 2 as a dynamic routing protocol between Tier-0 gateways and physical routers. OSPF can be enabled only on external interfaces and can all be in the same OSPF area (standard area or NSSA), even across multiple Edge Nodes. This simplifies migration from the existing NSX for vSphere deployment already using OSPF to NSX-T Data Center.
NSX Data Center for vSphere to NSX-T Data Center Migration
  • Support of Universal Objects Migration for a Single Site
    • You can migrate your NSX Data Center for vSphere environment deployed with a single NSX Manager in Primary mode (not secondary). As this is a single NSX deployment, the objects (local and universal) are migrated to local objects on a local NSX-T.  This feature does not support cross-vCenter environments with Primary and Secondary NSX Managers.
  • Migration of NSX-V Environment with vRealize Automation – Phase 2
    • The Migration Coordinator interacts with vRealize Automation (vRA) to migrate environments where vRealize Automation provides automation capabilities. This release adds additional topologies and use cases to those already supported in NSX-T 3.1.0.
  • Modular Migration for Hosts and Distributed Firewall
    • The NSX-T Migration Coordinator adds a new mode to migrate only the distributed firewall configuration and the hosts, leaving the logical topology(L3 topology, services) for you to complete. You can benefit from the in-place migration offered by the Migration Coordinator (hosts moved from NSX-V to NSX-T while going through maintenance mode, firewall states and memberships maintained, layer 2 extended between NSX for vSphere and NSX-T during migration) that lets you (or a third party automation) deploy the Tier-0/Tier-1 gateways and relative services, hence giving greater flexibility in terms of topologies. This feature is available from UI and API.
  • Modular Migration for Distributed Firewall available from UI
    •  The NSX-T user interface now exposes the Modular Migration of firewall rules. This feature was introduced in 3.1.0 (API only) and allows the migration of firewall configurations, memberships and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This feature simplifies lift-and-shift migration where you vMotion VMs between an environment with hosts with NSX for vSphere and another environment with hosts with NSX-T by migrating firewall rules and keeping states and memberships (hence maintaining security between VMs in the old environment and the new one).
  • Fully Validated Scenario for Lift and Shift Leveraging vMotion, Distributed Firewall Migration and L2 Extension with Bridging
    • This feature supports the complete scenario for migration between two parallel environments (lift and shift) leveraging NSX-T bridge to extend L2 between NSX for vSphere and NSX-T, the Modular Distributed Firewall.
Identity Firewall
Advanced Load Balancer Integration
  •  Support Policy API for Avi Configuration
  • Service Insertion Phase 2
    • This feature supports the Transparent LB in NSX-T advanced load balancer (Avi). Avi sends the load balanced traffic to the servers with the client’s IP as the source IP. This feature leverages service insertion to redirect the return traffic back to the service engine to provide transparent load balancing without requiring any server side modification.
Edge Platform and Services
  • DHCPv4 Relay on Service Interface
    • Tier-0 and Tier-1 Gateways support DHCPv4 Relay on Service Interfaces, enabling a third-party DHCP server to be located on a physical network.
AAA and Platform Security
  • Guest Users – Local User accounts: NSX customers integrate their existing corporate identity store to onboard users for normal operations of NSX-T. However, there is an essential need for a limited set of local users — to aid identity and access management in many scenarios. Scenarios such as (1) the ability to bootstrap and operate NSX during early stages of deployment before identity sources are configured in non-administrative mode or (2) when there is failure of communication/access to corporate identity repository. In such cases, local users are effective in bringing NSX-T to normal operational status. Additionally, in certain scenarios such as (3) being able to manage NSX in a specific compliant-state catering to industry or federal regulations, use of local guest users are beneficial. To enable these use-cases and ease-of-operations, two guest local-users have been introduced in 3.1.1, in addition to existing admin and audit local users. With this feature, the NSX admin has extended privileges to manage the lifecycle of the users (e.g., Password rotation, etc.) including the ability to customize and assign appropriate RBAC permissions. Please note that the local user capability is available on both NSX-T Local Managers (LM) and Global Managers (GM) but is unavailable on edge nodes in 3.1.1 via API and UI. The guest users are disabled by default and have to be explicitly activated for consumption and can be disabled at any time. 
     
  • FIPS Compliant Bouncy Castle Upgrade: NSX-T 3.1.1 contains an updated version of FIPS compliant Bouncy Castle (v1.0.2.1). Bouncy Castle module is a collection of Java based cryptographic libraries, functions, and APIs. Bouncy Castle module is used extensively on NSX-T Manager. The upgraded version resolves critical security bugs and facilitates compliant and secure operations of NSX-T. 
NSX Cloud
  • NSX Marketplace Appliance in Azure: Starting with NSX-T 3.1.1, you have the option to deploy the NSX management plane and control plane fully in Public Cloud (Azure only, for NSX-T 3.1.1. AWS will be supported in a future release). The NSX management/control plane components and NSX Cloud Public Cloud Gateway (PCG) are packaged as VHDs and made available in the Azure Marketplace. For a greenfield deployment in the public cloud, you also have the option to use a ‘one-click’ terraform script to perform the complete installation of NSX in Azure. 
  • NSX Cloud Service Manager HA: In the event that you deploy NSX management/control plane in the public cloud, NSX Cloud Service Manager (CSM) also has HA. PCG is already deployed in Active-Standby mode thereby enabling HA. 
  • NSX-Cloud for Horizon Cloud VDI enhancements: Starting with NSX-T 3.1.1, when using NSX Cloud to protect Horizon VDIs in Azure, you can install the NSX agent as part of the Horizon Agent installation in the VDIs. This feature also addresses one of the challenges with having multiple components ( VDIs, PCG, etc.) and their respective OS versions. Any version of the PCG can work with any version of the agent on the VM. In the event that there is an incompatibility, the incompatibility is displayed in the NSX Cloud Service Manager (CSM), leveraging the existing framework. 
Operations
  • UI-based Upgrade Readiness Tool for migration from NVDS to VDS with NSX-T Data Center
    • To migrate Transport Nodes from NVDS to VDS with NSX-T, you can use the Upgrade Readiness Tool present in the Getting Started wizard in the NSX Manager user interface. Use the tool to get recommended VDS with NSX configurations, create or edit the recommended VDS with NSX, and then automatically migrate the switch from NVDS to VDS with NSX while upgrading the ESX hosts to vSphere Hypervisor (ESXi) 7.0 U2.
Licensing
  • Enable VDS in all vSphere Editions for NSX-T Data Center Users: Starting with NSX-T 3.1.1, you can utilize VDS in all versions of vSphere. You are entitled to use an equivalent number of CPU licenses to use VDS. This feature ensures that you can instantiate VDS.
Container Networking and Security
  • This release supports a maximum scale of 50 Clusters (ESXi clusters) per vCenter enabled with vLCM, on clusters enabled for vSphere with Tanzu as documented at configmax.vmware.com
Federation
Compatibility and System Requirements

For compatibility and system requirements information, see the NSX-T Data Center Installation Guide.

API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

Duplicate certificate extensions not allowed: Starting with NSX-T 3.1.1, NSX-T will reject x509 certificates with duplicate extensions (or fields) following RFC guidelines and industry best practices for secure certificate management. Please note this will not impact certificates that are already in use prior to upgrading to 3.1.1. Otherwise, checks will be enforced when NSX administrators attempt to replace existing certificates or install new certificates after NSX-T 3.1.1 has been deployed.

Sourced from VMware NSX-T Data Center 3.1.1 Release Notes

VMware HCX 4.0

VMware HCX 4.0.0 | 23 FEB 2021 | Build 17667890 (Connector), Build 17667891 (Cloud)

What is VMware HCX

VMware HCX delivers secure and seamless app mobility and infrastructure hybridity across vSphere 6.0 and later versions, both on-premises and in the cloud. HCX abstracts the distinct private or public vSphere resources and presents a Service Mesh as an end-to-end entity. The HCX Interconnect can then provide high-performance, secure, and optimized multi-site connectivity to achieve infrastructure hybridity and present multiple options for bi-directional virtual machine mobility with technologies that facilitate the modernization of legacy data centers.

For more information, see the VMware HCX User Guide in the VMware Documentation Center.

First Change in HCX 4.0

Starting with the HCX 4.0 release, software versioning adheres to X.Y.Z Semantic Versioning, where X is the major version, Y is the minor version, and Z is the maintenance version. For more information about HCX software support, lifecycles, and version skew policies, see VMware HCX Software Support and Version Skew Policy. This document includes HCX Support Policy for Vacating Legacy vSphere Environments.
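Semantic versions compare numerically per component rather than as strings (for example, 4.10.0 is newer than 4.9.0). A minimal Python sketch of that comparison (my own helper, not an HCX tool):

```python
def parse_semver(version: str) -> tuple:
    """Split an X.Y.Z version string into (major, minor, maintenance) integers."""
    major, minor, maintenance = (int(part) for part in version.split("."))
    return (major, minor, maintenance)

def is_newer(a: str, b: str) -> bool:
    """True if version a is newer than version b, compared component-wise."""
    return parse_semver(a) > parse_semver(b)
```

For example, is_newer("4.10.0", "4.9.0") is True, which a plain string comparison would get wrong.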

For information regarding HCX interoperability, see VMware Product Interoperability Matrix.

Upgrading to 4.0 from HCX 3.5.3 R-Releases

Prior to Semantic Versioning, HCX 3.5.3 releases were identified as “R” releases, such as R147. Review the following information prior to upgrading from R-based releases to HCX 4.0: 

  • Upgrading to HCX 4.0 is supported only from the following HCX Service Updates: R146 and R147.
  • HCX Service Update R146 is the oldest version suitable for upgrade. Customers are required to move to R146 or later to support upgrading to HCX 4.0. 
  • Assistance with upgrading from out-of-support releases will be on a best-effort basis.
  • VMware sends administrative messages to HCX systems that are running out of support and require upgrade. 

Note: 3.5.3/R146-R147 HCX Manager systems will display release code R148 as a prefix to the 4.0.0 version. This is a known condition during the transition. 4.0.x HCX Managers will not display the R release codes.

So What’s New in HCX 4.0

VMware HCX 4.0 is a major release that introduces new functionality and enhancements.

Here are some of my highlights, followed by the full details from the release notes.

  • NSX Security Tag Migration – HCX can now migrate NSX security tags associated with a virtual machine in NSX for vSphere or NSX-T environments. This release does not migrate the security policies, only the security tags associated with the VM.
  • Usability – In-Service upgrade for Network Extension appliances, New event logs, estimated migration time metrics, Network Extension Metrics.
  • Improvements in OS assisted Migrations

Migration Enhancements

  • Mobility Migration Events  – The HCX Migration interface displays detailed event information with time lapse of events from the start of the migration operation. This information can help with understanding the migration process and diagnosing migration issues. See Viewing HCX Migration Event Details.
  • NSX Security Tag Migration – Transfers any NSX Security tags associated with the source virtual machine when selected as an Extended Option for vSphere to vSphere migrations. See Additional Migration Settings
  • Real-time Estimation of Bulk Migration – HCX analyzes migration metrics and provides an estimate of the time required to complete the transfer phase for every configured Bulk migration. The estimate is shown in the progress bar displayed on the Migration Tracking and Migration Management pages for each virtual machine migration while the transfer is underway. For more information, see Monitoring Migration Progress for Mobility Groups.
     
  • OS Assisted Migration Scaling – HCX now supports 200 concurrent VM disk migrations across a four Service Mesh scale out deployment. A single Service Mesh deploys one Sentinel Gateway (SGW) and its peer Sentinel Data Receiver (SDR), and continues to support up to 50 active replica disks each. In this Service Mesh scale out model for OSAM, the HCX Sentinel download operation is presented per Service Mesh. See OS Assisted Migration in Linux and Windows Environments.
     
  • Migrate Custom Attributes for vMotion  –  The option Migrate Custom Attributes is added to the Extended Options selections for vMotion migrations. 
     
  • Additional Disk Formats for Virtual Machines – For Bulk, vMotion, and RAV migration types, HCX now supports these additional disk formats: Thick Provisioned Eager Zeroed, Thick Provisioned Lazy Zeroed. 
     
  • Force Power-off for In-Progress Bulk Migrations – HCX now includes the option to Force Power-off in-progress Bulk migrations, including the later stages of migration. 

Network Extension Enhancements

  • In-Service Upgrade – The Network Extension appliance is a critical component of many HCX deployments, not only during migration but also after migration in a hybrid environment. In-Service upgrade is available for Network Extension upgrade or redeploy operations, and helps to minimize service downtime and disruptions to ongoing L2 traffic. See In-Service Upgrade for Network Extension Appliances.

    Note: This feature is currently available for Early Adoption (EA). The In-Service mode works to minimize traffic disruptions from the Network Extension upgrade or redeploy operation to only a few seconds or less. The actual time it takes to return to forwarding traffic depends on the overall deployment environment.
  • Network Extension Details – HCX provides connection statistics for each extended network associated with a specific Network Extension appliance. Statistics include bytes and packets received and transferred, bit rate and packet rate, and attached virtual machine MAC addresses for each extended network. See Viewing Network Extension Details.

Service Mesh Configuration Enhancements

  • HCX Traffic Type Selection in Network Profile – When setting up HCX Network Profiles, administrators can tag networks for a suggested HCX traffic type: Management, HCX Uplink, vSphere Replication, vMotion, or Sentinel Guest Network. These selections then appear in the Compute Profile wizard as suggestions of which networks to use in the configuration. See Creating a Network Profile.

Usability Enhancements

  • HCX now supports scheduling of migrations in DRAFT state directly from the Migration Management interface. (PR/2459044)
  • All widgets in the HCX Dashboard can be maximized to fit the browser window. (PR/2609007)
  • The topology diagram shown in the Compute Profile now reflects when a folder is selected as the HCX Deployment Resource. (PR/2518674)
     
  • In the Create/Edit Network Profile wizard, the IP Pool/Range entries are visually grouped for readability. (PR/2456501)

Source VMware HCX 4.0 Release Notes

NSX-T Time-Based Firewall Policy

VMware NSX-T Distributed Firewall (DFW) offers L2 to L7 stateful firewall capabilities. In my previous blog I covered the capability to create policies matching FQDNs/URLs. This blog will further expand on the NSX-T DFW capabilities and focus on time-based firewall policies.

With time-based firewall policies, security administrators can restrict traffic from a source to a destination for a configured time period, for example to restrict access to certain resources during specific hours.

Time windows apply to a firewall policy section, and all the rules in it. Each firewall policy section can have one time window. The same time window can be applied to more than one policy section. If you want the same rule applied on different days or different times for different sites, you must create more than one policy section. Time-based rules are available for distributed and gateway firewalls on both ESXi and KVM hosts.

This demonstration and blog are based on NSX-T 3.1.1.

Prerequisites

As per the VMware documentation, the following needs to be in place:

Network Time Protocol (NTP) is an Internet protocol used for clock synchronization between computer clients and servers. NTP service must be running on each transport node when using time-based rule publishing.

If the time zone is changed on an edge transport node after the node is deployed, reload the edge node or restart the data plane for the time-based gateway firewall policy to take effect.

It is highly recommended to enable NTP on the network and security appliances in your environment; it is really helpful for troubleshooting and monitoring purposes.

At the time of writing, time-based policies are only supported on NSX-T appliances configured with the time zone set to UTC.

Getting Started

I will start with configuring NTP on my NSX-T setup to confirm that the prerequisites are in place and then cover enabling Time-Based policies.

Configuring NTP on Appliance and Transport Nodes

Configure NTP on an appliance

Some system configuration tasks must be done using the command line or API. We will do this from the NSX-T CLI. The following commands should be run on the NSX-T Manager appliance and the NSX-T Edge appliances:

  • Set system timezone: set timezone <timezone>
  • Set NTP server: set ntp-server <ntp-server>
  • Set a DNS server: set name-servers <dns-server>
  • Set DNS search domain: set search-domains <domain>

Open a terminal and SSH to the management IP/virtual IP of the NSX-T Manager and/or the NSX-T Edge devices as admin. In my lab the NTP and DNS services are running on 192.168.10.9.

  • nsx-dc-01> set timezone UTC
  • nsx-dc-01> set ntp-server 192.168.10.9
  • nsx-dc-01> set name-servers 192.168.10.9
  • nsx-dc-01> set search-domains vmwdxb.com

After applying these configurations on the NSX-T Manager and NSX-T Edge nodes, the time should match the time of day on the NTP source. The NSX-T Edge appliance uses its management IP address as the source of NTP/DNS lookups, so confirm you have network connectivity from these interfaces to the NTP/DNS servers and open any firewalls if needed. NTP uses UDP port 123.

***If you are changing the timezone on the edge devices, you will need to restart the dataplane service (restart service dataplane) or reload the edge appliance.***

Alternatively, the NSX-T Manager UI allows NTP configuration to be applied to all nodes using a Node Profile under the Fabric configurations.

NTP Configuration using Node Profile

To configure NTP for an ESXi host, see the topic Synchronize ESXi Clocks with a Network Time Server in the vSphere Security documentation, or configure it through vCenter if your hosts are managed there.

Confirm that the ESXi hosts are synchronised with NTP using ntpq -pn from the CLI – confirm that there is an asterisk (*) next to the NTP server IP.
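The asterisk is ntpd's tally code for the peer it is currently synchronised to. The check can be sketched in Python (the sample output below is illustrative, not captured from my lab):

```python
def selected_ntp_peer(ntpq_output: str):
    """Return the IP of the peer marked with '*' in `ntpq -pn` output, or None."""
    for line in ntpq_output.splitlines():
        if line.startswith("*"):
            # The first column is the remote peer, prefixed by the '*' tally code
            return line.split()[0].lstrip("*")
    return None

sample = (
    "     remote           refid      st t when poll reach   delay   offset  jitter\n"
    "==============================================================================\n"
    "*192.168.10.9    .GPS.            1 u   32   64  377    0.210    0.050   0.012\n"
)
```

If no line carries the asterisk, the host has not selected a sync peer yet, which is exactly the condition that makes time-based policy publishing fail later on.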

Output from ESXi CLI
Configuring Time-Based Security Policy

Create a firewall policy or edit an existing policy. I pre-configured a policy section with one DFW rule allowing two desktops access to anything. In the steps below I will enable the time-based firewall on the Time-Based-Policy DFW section.

  • (Step 1) Click the clock icon on the firewall policy you want to have a time window.
Time Icon is to the right of the policy

A time window appears

NSX-T Time Window
  • (Step 2) Click Add New Time Window and enter a name
  • (Step 3) Select a time zone: UTC (Coordinated Universal Time) or the local time of the transport node. The distributed firewall supports only UTC with the NTP service enabled; changing the time zone configuration is not supported.
  • (Step 4) Select the frequency of the time window – Weekly or One time.
  • (Step 5) Select the days of the week that the time window takes effect.
    • NSX-T Data Center supports configuring weekly UTC time windows for the local time zone only when the entire time window for the local time zone falls within the same day as the UTC time zone. For example, you cannot configure a time window in UTC for 7 AM-7 PM PDT, because that maps to 2 PM UTC through 2 AM UTC of the next day.
  • (Step 6) Select the beginning and ending dates for the time window, and the times the window will be in effect.
Time Based Configuration
  • (Step 7) Click Save

In my demonstration I have configured the policy to be active daily from the 1st of March 2021 until the 1st of April 2021, and only applied between 08:30 and 09:00 in the morning.

***Take note: the minimum configurable time range is 30 minutes, and time ranges are only supported in 30-minute blocks.***
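These constraints are easy to encode; a hedged sketch (my own validation helper, not an NSX API) that accepts 08:30-09:00 but rejects windows off the 30-minute grid or wrapping past midnight:

```python
from datetime import time

def valid_time_window(start: time, end: time) -> bool:
    """Window must start/end on 30-minute boundaries, last at least
    30 minutes, and not wrap past midnight (end must follow start)."""
    start_min = start.hour * 60 + start.minute
    end_min = end.hour * 60 + end.minute
    on_boundary = start_min % 30 == 0 and end_min % 30 == 0
    return on_boundary and (end_min - start_min) >= 30
```

valid_time_window(time(8, 30), time(9, 0)) passes; time(8, 30) to time(8, 45) fails the 30-minute grid, and a 19:00-07:00 window fails because it wraps into the next day.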

(Step 8) Click the check box next to the policy section you want to have a time window. Then click the clock icon.

Click the radio button on the top left next to Time Window configured
  • (Step 9) Select the time window you want to apply, and click Apply.
  • (Step 10) Click Publish. The clock icon for the section turns green.
Published Policy

When publishing the policies, NSX-T Manager checks and confirms that NTP is correctly configured and running on the transport nodes before the time window is successfully applied. The clock turns green when successfully applied. If you click on Success in the firewall section, you can see the transport hosts to which this policy has been pushed successfully.

Time-Based DFW Section applied to the ESXi Transport Nodes

For the first publication of a time-based rule, enforcement begins in less than 2 minutes. After the rules are deployed, enforcement per the time window is instantaneous.

Outside of the configured time window, NSX-T prevented Internet access for my demo desktop.

Outside of the permitted time window configured

As the clock moved into the open window, I refreshed the page and it loaded successfully.

Access to http://www.vmware.com

From LogInsight you can see that it is indeed my DFW rule number 3054 allowing the access.

LogInsight Output
Some side notes

The first time I published the DFW policy changes, an error occurred and NSX-T Manager reported that my transport nodes (ESXi hosts) did not have a valid NTP service running.

NSX-T fails to publish the Time Based Policy

Although I had NTP configured on the ESXi hosts pointing to a Win2012 server in my lab, NTP was not actually synchronising with the Win2012 server. So I switched all my ESXi nodes and NSX-T nodes to point to an Ubuntu virtual machine which I had also configured as an NTP server, and this resolved the error above.

NTP Not Synchronised on ESXi host with Win2012

You can see from the screenshot above, there is no * next to the IP 192.168.10.5

Working NTP Synchronisation

Thanks for taking the time to read this blog; hopefully you found it useful. It definitely helped me get a better understanding of this feature.

NSX-T Filtering Specific Domains (FQDN/URLs)

VMware NSX-T Distributed Firewall (DFW) offers L2 to L7 stateful firewall capabilities. Most NSX-T operators are fairly comfortable creating L4 policies in the quest to achieve the “zero-trust” model.

In this blog I want to take this one step further and explore the capabilities of using the DFW to enforce policy matching L7 FQDNs/URLs. I will demonstrate how to use some built-in FQDNs and add custom FQDNs to create security policies restricting outbound access from virtual workloads attached to NSX-T overlay networks.

A fully qualified domain name (FQDN) is the complete domain name for a specific host on the Internet. FQDNs are used in firewall rules to allow or reject traffic going to specific domains.

NSX-T Data Center supports custom FQDNs that are defined by an administrator on top of an existing pre-defined list of FQDNs.

This demonstration and blog are based on NSX-T 3.1.1.

Prerequisites

As per the VMware documentation, the following needs to be in place:

You must set up a DNS rule first, and then the FQDN allowlist or denylist rule below it. This is because NSX-T Data Center uses DNS Snooping to obtain a mapping between the IP address and the FQDN. SpoofGuard should be enabled across the switch on all logical ports to protect against the risk of DNS spoofing attacks. A DNS spoofing attack is when a malicious VM can inject spoofed DNS responses to redirect traffic to malicious endpoints or bypass the firewall.
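The ordering requirement can be expressed as a simple check over a policy section's rule list; a sketch with hypothetical rule fields (l7_profile and fqdn_attributes are my own names, not the NSX schema):

```python
def dns_rule_precedes_fqdn(rules: list) -> bool:
    """Verify that a rule with the DNS L7 context profile appears
    before the first FQDN-based rule in an ordered rule list."""
    dns_seen = False
    for rule in rules:
        if rule.get("l7_profile") == "DNS":
            dns_seen = True
        elif rule.get("fqdn_attributes") and not dns_seen:
            return False
    return True
```

With the DNS rule on top, DNS Snooping has a chance to learn the FQDN-to-IP mapping before the FQDN rule is evaluated.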

FQDN-based rules are retained during vMotion for ESXi hosts.

NOTE: ESXi and KVM hosts are supported. ESXi supports denylisting (drop/reject) actions for URL rules. KVM supports the allowlisting feature only.

Custom FQDN supports the following

  • Full FQDN names such as maps.google.com or myapp.corp.com
  • Partial REGEX with * at the beginning only such as *eng.northpole.com or *yahoo.com
  • FQDN name length up to 64 characters
  • FQDN names should end with the registered Top Level Domain (TLD) such as .com, .org, or .net
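These rules can be captured in a small validator; a sketch (the TLD list here is an illustrative subset, NSX checks against the full set of registered TLDs):

```python
import re

KNOWN_TLDS = (".com", ".org", ".net")  # illustrative subset of registered TLDs

FQDN_PATTERN = re.compile(r"^\*?[A-Za-z0-9.-]+$")  # optional leading wildcard

def valid_custom_fqdn(fqdn: str) -> bool:
    """Apply the custom-FQDN rules: '*' only at the beginning,
    at most 64 characters, and a registered TLD at the end."""
    if len(fqdn) > 64:
        return False
    if "*" in fqdn[1:]:  # wildcard allowed at the beginning only
        return False
    if not fqdn.endswith(KNOWN_TLDS):
        return False
    return bool(FQDN_PATTERN.match(fqdn))
```

maps.google.com and *yahoo.com pass; maps.*.com fails the wildcard-position rule.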

Getting Started

  • (Step 1) From your browser, log in with admin privileges to an NSX Manager at https://
NSX-T Manager
  • (Step 2) Navigate to Security -> Distributed Firewall ->
NSX-T Distributed Firewall UI
  • (Step 3) Add a firewall policy section by following the steps in Add a Distributed Firewall. An existing firewall policy section can also be used.
Added Policy Section “FQDN Demo Policy”
  • (Step 4) Select the new or existing firewall policy section
  • (Step 5) Click Add Rule to create the DNS firewall rule first – I will create the rule in the policy section created in step 3
    • Provide a Name for the rule – My example is using DNS Rule
    • Select the Services (DNS/UDP-DNS) – I included both for the purpose of the demo
    • Select the L7 Context Profile (DNS) – This is a system-generated context profile and is available in your deployment by default.
    • Applied To – Select the group as required, I left it default at this stage.
    • Action – Select Allow
DNS Firewall Rule
  • (Step 6) Click Add Rule again to set up the FQDN allowlisting or denylisting rule
  • (Step 7) Name the rule appropriately, such as FQDN/URL Allowlist. Drag the rule under the DNS rule in this policy section.
Second Firewall rule created “FQDN-RULE-01”

***Take note: drag the newly created FQDN rule below the DNS rule.***

  • (Step 8) Provide the following details
    • Services: Click the edit icon and select the services you want to associate with this rule, for example, HTTP [I selected HTTP & HTTPS]
    • Context Profiles: Click the edit icon, then Add Context Profile and name the profile. In the Attributes column, select Set > Add Attribute > Domain (FQDN) Name. Select the Attribute Name/Values from the predefined list, or create a custom FQDN
      • My custom Context Profile is named FQDN-Profile and I added the following built-in FQDN attributes to start my testing with:
Three FQDN attributes selected
  • Applied To: Select DFW or a group as required.
    • Action: Select Allow, Drop, or Reject.
      • I am going to set the action to drop to prevent access to three selected FQDN’s
  • (Step 9) Publish
FQDN Firewall

Demonstration

I am using a virtual desktop connected to my NSX-T environment and will use the Chrome browser to access some websites to confirm that the policy I created prevents access to the selected FQDNs. I am going to start off by setting my FQDN policy to allow so that I can confirm that connectivity to itunes.apple.com does indeed work.

Next I am going to access itunes.apple.com to confirm connectivity

itunes.apple.com is re-written to apple.com/itunes/

Now I will set my security policy back to block and test this page again.

Deny DFW Rule

Now with the FQDN rule set to drop, my access to itunes.apple.com has been restricted by the NSX-T DFW

Testing http://itunes.apple.com

Just to make sure that the NSX-T DFW is actually dropping the traffic, I logged into my LogInsight server and filtered on all dropped logs.

LogInsight Dropping the FQDN Data

Custom FQDN Demo

In the demonstration above I used some of the built-in FQDNs provided by NSX-T out of the box; now I will create a custom FQDN of my choice and perform the testing again.

I am going to add another DFW rule to my policy section, named FQDN-Rule-02 – follow from step 6, but when you get to step 8, instead of selecting one of the out-of-the-box FQDNs in the context profile, click the three dots on the right-hand side.

Context Profile Attributes

Once you click those three dots and hit ADD FQDN, the pop-up shown below opens; enter your custom FQDN/URL and click Save.

Take Note: Enter the domain name in the form *[hostname].[domain]

Custom FQDN Created

Now that you have selected your custom FQDN attribute, click ADD, then Apply, and save the Context Profile.

Then click Apply

Custom Context Profile

Once completed you will see the newly created DFW rule. I have set my rule to allow to confirm connectivity to yogovirtual.com; then I will change the action to drop.

DFW Section with both FQDN DFW Rules
Demonstrating accessing https://yogovirtual.com

I changed the policy to drop in the NSX-T DFW, closed the browser, opened the page again, and confirmed that access to the custom FQDN is blocked.

Access to Custom FQDN being blocked

I accessed my LogInsight to confirm the drop action taking place

LogInsight NSX-T DFW Logs

This concludes this blog post. I hope it helps you understand how to enable FQDN-based filtering using the NSX-T DFW.

NSX-T 3.1 – Deploying Distributed IDS/IPS

In NSX-T 3.0 VMware introduced distributed IDS, and now in NSX-T 3.1 this has been expanded to include distributed IPS. In this blog I will highlight the steps to enable and configure distributed IDS/IPS and end with a demonstration.

Overview

Distributed Intrusion Detection and Prevention Service (IDS/IPS) monitors network traffic on the host for suspicious activity. Signatures can be enabled based on severity. A higher severity score indicates an increased risk associated with the intrusion event. Severity is determined based on the following:

  • Severity specified in the signature itself
  • CVSS (Common Vulnerability Scoring System) score specified in the signature
  • Type-rating associated with the classification type

IDS detects intrusion attempts based on already known malicious instruction sequences. The detected patterns in the IDS are known as signatures. You can set a specific signature alert/drop/reject actions globally, or by profile.

Actions

  • Alert – An alert is generated and no automatic preventative action is taken.
  • Drop – An alert is generated and the offending packets are dropped.
  • Reject – An alert is generated and the offending packets are dropped. For TCP flows, a TCP reset packet is generated by IDS and sent to the source and destination of the connection. For other protocols, an ICMP error packet is sent to the source and destination of the connection.
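The three actions differ only in what happens after the alert; here is the behaviour described above written as a small lookup (my own summary, not an NSX API):

```python
def ids_action_effects(action: str) -> dict:
    """What each IDS/IPS action does: is an alert raised, are the
    offending packets dropped, are the endpoints notified (reset/ICMP)."""
    effects = {
        "alert":  {"alert": True, "drop": False, "notify_endpoints": False},
        "drop":   {"alert": True, "drop": True,  "notify_endpoints": False},
        # Reject also sends a TCP reset (or ICMP error) to source and destination
        "reject": {"alert": True, "drop": True,  "notify_endpoints": True},
    }
    return effects[action.lower()]
```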

***Do not enable Distributed Intrusion Detection and Prevention Service (IDS/IPS) in an environment that is using Distributed Load Balancer. NSX-T Data Center does not support using IDS/IPS with a Distributed Load Balancer.***

***Distributed IDS/IPS is a licensed feature which is not included in the traditional NSX per CPU licenses. You will need to apply the Add-On NSX Advanced Threat Prevention license in your NSX-T Manager to enable these capabilities***

NSX-T Firewall with Advanced Threat Prevention License applied

Distributed IDS/IPS Configuration

I will be using the topology highlighted below for this demonstration setup so I will do some pre-work configurations in NSX-T so that I can consume these in the subsequent steps (Creating Segments, T1/T0, Demo Virtual Machines, Groups)

Demo Topology

Pre-Work

  • Create 2 Tags for the workloads: Home -> Inventory -> Tags ->Add Tag
    • Name: Production
    • Assign: WEB-VM-01 and APP-VM-01
Production Tag
  • Name: Development
    • Assign: WEB-VM-02 and APP-VM-02
Development Tag

  • Create 2 Groups for the Workloads: Home -> Inventory -> Groups
    • Name: Production Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Production
Production Applications Group
  • Name: Development Applications
    • Compute Members: Membership Criteria: Virtual Machine Tag Equals: Development
Development Application Group
  • Confirm the previously deployed VMs became members of the appropriate groups due to the applied tags. Click View Members for the 2 groups you created and confirm
Production Group and the associated members
Development Group and the associated members

1. Enable IDS/IPS on hosts, download latest signature set, and configure signature settings.

Home -> Security -> Distributed IDS/IPS -> Settings

NSX-T IDS/IPS can automatically apply signatures to your hosts and update intrusion detection signatures by checking the cloud-based service.

IDS/IPS Settings Menu
  • Intrusion Detection and Prevention Signatures = Enable Auto Updates. The NSX-T Manager requires Internet access for Auto Updates.
  • Enable Intrusion Detection and Prevention for Cluster(s) = DC-02-Compute. Select the cluster where your workloads are and select Enable. When prompted “Are you sure you want to Enable Intrusion Detection and Prevention for selected clusters?” click YES
IDS/IPS Auto Updates Enabled and IDS/IPS enabled on my DC-02-Compute Cluster

NSX can automatically update its IDS signatures by checking the cloud-based service. By default, NSX Manager checks once per day, and VMware publishes new signature update versions every two weeks (with additional non-scheduled 0-day updates). NSX can also be configured to automatically apply newly updated signatures to all hosts that have IDS enabled.

2. Create IDS/IPS profiles

Home -> Security -> Distributed IDS/IPS -> Profiles

IDS/IPS profiles are used to group signatures, which can then be applied to selected applications. You can create up to 24 custom profiles in addition to the default profile.

Default IDS Profile

We will create two new IDS/IPS profiles, one for Production and one for Development.

Home -> Security -> Distributed IDS/IPS -> Profiles -> ADD PROFILE

  • Name: Production
  • Signatures to Include: Critical, High, Medium
Production IDS Profile
  • Name: Development
  • Signatures to include: Critical & High
Development IDS Profile
Newly created IDS Profiles
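The two profiles differ only in which severities they include; modelled as plain data (the field names are my own, not the NSX Policy API schema):

```python
# Hypothetical in-memory model of the two profiles created above
PROFILES = {
    "Production":  {"severities": {"CRITICAL", "HIGH", "MEDIUM"}},
    "Development": {"severities": {"CRITICAL", "HIGH"}},
}

def signature_included(profile: str, severity: str) -> bool:
    """Would a signature of the given severity be enabled by this profile?"""
    return severity.upper() in PROFILES[profile]["severities"]
```

A MEDIUM-severity signature would fire for Production workloads but not for Development ones.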

3. Create IDS/IPS rules

IDS/IPS rules are used to apply a previously created profile to selected applications and traffic. I am going to create one rule for the Production VMs and a second rule for the Development VMs.

Home -> Security -> Distributed IDS/IPS -> Rules -> Add Policy

  • Click Add a New Policy – I renamed the default name to NSX Demo

Now let's create the Production Policy rule.

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Production Policy
  • IDS Profile: Production
  • Applied to Group: Production Applications
  • The rest is left default

Next we create the Development Policy Rule

  • Add a Rule to the Policy – Click ADD RULE
  • Rule Name: Development Policy
  • IDS Profile: Development
  • Applied to Group: Development Applications
  • The rest is left default
IDS Policy and Rules created

Last step is to publish the Policy – Click Publish on the top left.

The mode setting will determine if we are doing IDS or IDS/IPS.

  • Detect Only – Detects signatures and does not take action.
  • Detect and Prevent – Detects signatures and applies the configured profile or global action of drop or reject.

There are some other optional settings when you click on the gear at the end of the rule:

  • Logging
  • Direction
  • IP Protocol
  • Log Label

4. Verify IDS/IPS status on hosts

To verify the status from the CLI, you must have SSH access to the transport nodes. NSX-T provides a command-line interface (nsxcli) on appliances and transport nodes.

  1. Open an SSH session to one of the ESXi hosts
  2. Enter nsxcli command to open the NSX-T Data Center CLI.
  3. To confirm that IDS is enabled on this host, run the command get ids status
get ids status
  4. To confirm both of the IDS profiles have been applied to this host, run the command get ids profile
get ids profile
  5. To review IDS profile (engine) statistics, including the number of packets processed and alerts generated, run the command get ids engine profilestats <tab_to_select_profile_ID>
get ids engine profilestats

5. Distributed IDS/IPS Events

I have set up a basic demonstration using Metasploit to launch a simple exploit against the Drupal service running on WEB-VM-01 and confirm that the NSX Distributed IDS/IPS is able to detect this exploit attempt.

Basic Attack Demo

In this demonstration, the IDS/IPS engine is set to detect only

IDS Engine configuration

When I trigger the exploit from the Hacker to WEB-VM-01, I am able to do a reverse shell and gather system information on the victim.

Exploited Victim

Now when I go over to the IDS/IPS dashboard in NSX-T, I can see the event and expand it to see the details, showing this as a detect-only event.


IDS/IPS Dashboard

Thank you for taking the time to read this blog. If you found it useful or have any feedback, feel free to ping me or leave a comment.

NSX-T 3.1 – Configuring DHCP Server

As I build out various demonstrations in my lab, I wanted to reduce the number of static IP allocations on my demo workloads so that I can move them between network segments for different demonstrations; with this in mind, enabling a DHCP server in my NSX-T deployment makes sense.

In this post I will cover the steps to enable DHCP on my Local NSX-T Manager.

Overview

DHCP (Dynamic Host Configuration Protocol) allows clients to automatically obtain network configuration, such as IP address, subnet mask, default gateway, and DNS configuration, from a DHCP server.

As per VMware documentation, NSX-T Data Center supports three types of DHCP on a segment:

  • DHCP local server
  • Gateway DHCP
  • DHCP relay

DHCP Local Server
As the name suggests, it is a DHCP server that is local to the segment and not available to the other segments in the network. A local DHCP server provides a dynamic IP assignment service only to the VMs that are attached to the segment. The IP address of a local DHCP server must be in the subnet that is configured on the segment.
Gateway DHCP
It is analogous to a central DHCP service that dynamically assigns IP and other network configuration to the VMs on all the segments that are connected to the gateway and using Gateway DHCP. Depending on the type of DHCP profile you attach to the gateway, you can configure a Gateway DHCP server or a Gateway DHCP relay on the segment. By default, segments that are connected to a tier-1 or tier-0 gateway use Gateway DHCP. The IP address of a Gateway DHCP server can be different from the subnets that are configured in the segments.
DHCP Relay
It is a DHCP relay service that is local to the segment and not available to the other segments in the network. The DHCP relay service relays the DHCP requests of the VMs that are attached to the segment to the remote DHCP servers. The remote DHCP servers can be in any subnet, outside the SDDC, or in the physical network.


You can configure DHCP on each segment regardless of whether the segment is connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.

For a gateway-connected segment, all three DHCP types are supported. However, Gateway DHCP is supported only in the IPv4 subnet of a segment.

For a standalone segment that is not connected to a gateway, only a local DHCP server is supported.

Assumptions

My base networking has already been configured and deployed. I will enable the DHCP service on one segment, to which I want to allocate private IPs and translate them with source-based NAT to provide Internet access to this segment.

Configuration

For the purpose of this demonstration, I will configure a DHCP local server on my segment with network range 10.10.10.0/24. My T1 default gateway is 10.10.10.1.

Step 1 – Create a DHCP Profile

You can create DHCP servers to service DHCP requests from VMs that are connected to logical switches.

Select Networking > DHCP > ADD DHCP PROFILE.

Adding a DHCP Profile
  • Enter a Profile name = DHCP-LM-DC-01
  • Profile Type (DHCP Server / DHCP Relay) = DHCP Server
  • Enter the IP address of the DHCP server and its subnet mask in CIDR format: 192.168.10.240/24
  • Lease Time (should be between 60 and 4294967295 seconds): Default is 86400
  • Select the Edge Cluster: LM-EDGE-Cluster-DC-01
  • Select the Edge(s): edge-dc-03 for my lab
Selecting edge-dc-03 from my Edge Cluster

Once all the fields are populated, hit Save.

DHCP Profile populated data

Step 2 – Attach a DHCP Server to a Segment

Networking > Segments > Select Segment

Configure a new segment or select the one you want to edit; in my case I will edit DC-01-NAT-Segment.

Edit Segment to Set DHCP Config

SET DHCP CONFIG

Blank DHCP Settings

Now we need to populate the details and select the options matching our requirements.

  • DHCP TYPE: LOCAL DHCP Server
  • DHCP Profile: DHCP-LM-DC-01
  • IP Server Settings:
    • DHCP Config: Enable
    • DHCP Server IP Address: 10.10.10.254/24
  • DHCP Ranges: 10.10.10.10-10.10.10.100
  • Lease Time (seconds): 3600
  • DNS Servers: 192.168.10.5
DHCP Configurations Populated in the UI
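Before saving, it is worth sanity-checking that the server IP and ranges sit inside the segment's subnet, since a local DHCP server must live in the segment's own network. A sketch using Python's ipaddress module (my own helper, not an NSX validation):

```python
import ipaddress

def dhcp_config_consistent(server_cidr: str, range_start: str,
                           range_end: str, gateway_cidr: str) -> bool:
    """Check the DHCP server IP and range fall inside the segment
    subnet (derived from the gateway CIDR) and the range is ordered."""
    subnet = ipaddress.ip_interface(gateway_cidr).network
    server = ipaddress.ip_interface(server_cidr).ip
    start = ipaddress.ip_address(range_start)
    end = ipaddress.ip_address(range_end)
    return server in subnet and start in subnet and end in subnet and start <= end
```

For the values above, dhcp_config_consistent("10.10.10.254/24", "10.10.10.10", "10.10.10.100", "10.10.10.1/24") returns True.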

APPLY and SAVE.

Step 3 – Attach Virtual Machine to the Network and set networking settings to DHCP

Setting my VM to obtain an IP via DHCP

The virtual machine has successfully obtained a dynamic IP address from the DHCP server created in NSX-T.

Dynamic IP 10.10.10.10 allocated via DHCP seen in vCenter
DHCP IP allocated to the Virtual Machine from 10.10.10.254
Final Topology with DHCP and NAT enabled