├── ADS-Examples
│   ├── 001-Little-Snitch-Discovery-Behavior.md
│   ├── 002-Modified-Boot-Configuration-Data.md
│   ├── 003-osquery-MacOS-Malware-Detection.md
│   ├── 004-Unusual-Powershell-Host-Process.md
│   ├── 005-Active-Directory-Privileged-Group-Modification.md
│   └── 006-Cilium-Blocked-DNS-Resolution.md
├── ADS-Framework.md
├── LICENSE
└── README.md
/ADS-Examples/001-Little-Snitch-Discovery-Behavior.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | Detect attempts by potentially malicious software to discover the presence of Little Snitch on a host by looking for process and command line artifacts.
3 |
4 | # Categorization
5 | These attempts are categorized as [Discovery / Security Software Discovery](https://attack.mitre.org/wiki/Technique/T1063).
6 |
7 | # Strategy Abstract
8 | The strategy will function as follows:
9 |
10 | * Record process and process command line information for MacOS hosts using endpoint detection tooling.
11 | * Look for any explicit process or command line references to Little Snitch.
12 | * Suppress known-good processes and command line arguments
13 | * Little Snitch Updater
14 | * Little Snitch Installer
15 | * Health checks for Little Snitch
16 | * Fire alert on any other process or command line activity.
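The following is a minimal sketch of the matching and suppression logic described above, assuming process events arrive as dictionaries with `process_name` and `cmdline` fields; the field names and the allowlist patterns are illustrative rather than production values:

```
import re

# Case-insensitive match on any explicit Little Snitch reference in a process event.
LITTLE_SNITCH = re.compile(r"little[\s\\]*snitch", re.IGNORECASE)

# Known-good activity (updater, installer, health checks) is suppressed.
ALLOWLIST = [
    re.compile(r"little snitch (updater|installer)", re.IGNORECASE),
    re.compile(r"littlesnitch[-_ ]?healthcheck", re.IGNORECASE),  # hypothetical health-check name
]

def should_alert(event):
    """Return True if a process event references Little Snitch and is not known-good."""
    haystack = "{} {}".format(event.get("process_name", ""), event.get("cmdline", ""))
    if not LITTLE_SNITCH.search(haystack):
        return False
    return not any(pattern.search(haystack) for pattern in ALLOWLIST)

# Example: the EmPyre-style discovery command should alert.
print(should_alert({"process_name": "/bin/sh",
                    "cmdline": "ps -ef | grep Little\\ Snitch | grep -v grep"}))  # True
```

In practice this logic lives in the SIEM query rather than in standalone code, but the shape of the check is the same.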
17 |
18 | # Technical Context
19 | [Little Snitch](https://www.obdev.at/products/littlesnitch/index.html) is an application firewall for MacOS that allows users to generate rulesets around how applications can communicate on the network.
20 |
21 | In the most paranoid mode, Little Snitch will launch a pop-up notifying the user that an application has deviated from a ruleset. For instance, the following events could trip an interactive alert:
22 |
23 | * A new process is observed attempting to communicate on the network.
24 | * A process is communicating with a new IP address or port which differs from a ruleset.
26 |
27 | Due to the intrusive nature of Little Snitch popups, [several MacOS implants](https://blog.malwarebytes.com/cybercrime/2016/07/new-mac-backdoor-malware-eleanor/) will perform explicit checks for processes, kexts, and other components. This usually manifests through explicit calls to the process (ps) or directory listing (ls) commands with sub-filtering for Little Snitch.
28 |
29 | For instance, an implant could look for the following components:
30 |
31 | * Running Little Snitch processes
32 | * Little Snitch Kexts
33 | * Little Snitch Plists
34 | * Little Snitch Rules
35 |
36 | The following command is run by the Empire (EmPyre) agent as soon as it executes on a MacOS system:
37 | ```
38 | /bin/sh -c ps -ef | grep Little\\ Snitch | grep -v grep
39 | ```
40 | In endpoint detection tooling, this same command appears as part of the implant's process execution chain.
41 |
42 | The [source code for Empire](https://github.com/EmpireProject/Empire/blob/8f3570b390d6f91d940881c8baa11e2b2586081a/lib/listeners/http.py) reveals the explicit check using the ps and grep commands:
43 | ```
44 | try:
45 | if safeChecks.lower() == 'true':
46 | launcherBase += "import re, subprocess;"
47 | launcherBase += "cmd = \"ps -ef | grep Little\ Snitch | grep -v grep\"\n"
48 | launcherBase += "ps = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\n"
49 | launcherBase += "out = ps.stdout.read()\n"
50 | launcherBase += "ps.stdout.close()\n"
51 | launcherBase += "if re.search(\"Little Snitch\", out):\n"
52 | launcherBase += " sys.exit()\n"
53 | except Exception as e:
54 | p = "[!] Error setting LittleSnitch in stager: " + str(e)
55 | print helpers.color(p, color='red')
56 | ```
57 |
58 | # Blind Spots and Assumptions
59 |
60 | This strategy relies on the following assumptions:
61 | * Endpoint detection tooling is running and functioning correctly on the system.
62 | * Process execution events are being recorded.
63 | * Logs from endpoint detection tooling are reported to the server.
64 | * Endpoint detection tooling is correctly forwarding logs to SIEM.
65 | * SIEM is successfully indexing endpoint detection tooling logs.
66 | * Attacker toolkits will perform searches to identify if Little Snitch is installed or running.
67 |
68 | A blind spot will occur if any of the assumptions are violated. For instance, the following would not trip the alert:
69 | * Endpoint detection tooling is tampered with or disabled.
70 | * The attacker implant does not perform searches for Little Snitch in a manner that generates a child process.
71 | * Obfuscation occurs in the search for Little Snitch which defeats our regex.
72 |
73 | # False Positives
74 | There are several instances where false positives for this ADS could occur:
75 |
76 | * Users explicitly performing interrogation of the Little Snitch installation
77 | * Grepping for a process, searching for files.
78 | * Little Snitch performing an update, installation, or uninstallation.
79 | * We miss whitelisting a known-good process.
80 | * Management tools performing actions on Little Snitch.
81 | * We miss whitelisting a known-good process.
82 |
83 | Known false positives include:
84 | * Little Snitch Software Updater
85 |
86 | Most false positives can be attributed to scripts or user behavior looking at the current state of Little Snitch. These are either trusted binaries (e.g. our management tools) or are definitively benign user behavior (e.g. the processes performing interrogation are child processes of a user shell process).
87 |
88 | # Priority
89 | The priority is set to medium under all conditions.
90 |
91 | # Validation
92 | Validation can occur for this ADS by performing the following execution on a MacOS host:
93 | ```
94 | /bin/sh -c ps -ef | grep Little\\ Snitch | grep -v grep
95 | ```
96 |
97 | # Response
98 | In the event that this alert fires, the following response procedures are recommended:
99 |
100 | * Look at management tooling to identify if Little Snitch is installed on the host.
101 | * If Little Snitch is not installed on the Host, this may be more suspicious.
102 | * Look at the process that triggered this alert. Walk the process chain.
103 | * What process triggered this alert?
104 | * What was the user the process ran as?
105 | * What was the parent process?
106 | * Are there any unusual discrepancies in this chain?
107 | * Look at the process that triggered this alert. Inspect the binary.
108 | * Is this a shell process?
109 | * Is the process digitally signed?
110 | * Is the parent process digitally signed?
111 | * How prevalent is this binary?
112 | * Does this appear to be user-generated in nature?
113 | * Is this running in a long-running shell?
114 | * Are there other indicators this was manually typed by a user?
115 | * If the activity may have been user-generated, reach out to the user via our chat client and ask them to clarify their behavior.
116 | * If the user is unaware of this behavior, escalate to a security incident.
117 | * If the process behavior seems unusual, or if Little Snitch is not installed, escalate to a security incident.
118 |
119 | # Additional Resources
120 | * [Eleanor Mac Malware (Representative Sample)](https://blog.malwarebytes.com/cybercrime/2016/07/new-mac-backdoor-malware-eleanor/)
121 |
122 |
--------------------------------------------------------------------------------
/ADS-Examples/002-Modified-Boot-Configuration-Data.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | Detect when the boot configuration data (BCD) of a Windows device has been modified in an unusual and potentially malicious way.
3 |
4 | # Categorization
5 | These attempts are categorized as [Defense Evasion / Disabling Security Tools](https://attack.mitre.org/wiki/Technique/T1089).
6 |
7 | # Strategy Abstract
8 | The strategy will function as follows:
9 |
10 | * Record BCD for all boot events in Windows using Windows Event Logs.
11 | * Compare reported BCD to a known-good profile.
12 | * Alert on any discrepancies between desired and current states.
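As a minimal sketch of the comparison step, assume the security-relevant fields of event ID 4826 have already been parsed into a flat dictionary keyed by the entries listed in the table below; the key names and parsing are illustrative:

```
# Desired state for the security-relevant BCD entries reported in event ID 4826.
DESIRED_STATE = {
    "SecurityID": "SYSTEM",
    "KernelDebugging": "Disabled",
    "HypervisorDebugging": "Disabled",
    "TestSigning": "Disabled",
    "FlightSigning": "Disabled",
    "IntegrityChecks": "Enabled",
}

def bcd_discrepancies(event):
    """Return the entries in a parsed 4826 event that deviate from the desired state."""
    return {
        entry: {"expected": expected, "observed": event.get(entry)}
        for entry, expected in DESIRED_STATE.items()
        if event.get(entry) != expected
    }

# Example: test signing was enabled on this boot.
sample = dict(DESIRED_STATE, TestSigning="Enabled")
print(bcd_discrepancies(sample))  # {'TestSigning': {'expected': 'Disabled', 'observed': 'Enabled'}}
```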
13 |
14 | # Technical Context
15 | [Boot Configuration Data](https://msdn.microsoft.com/en-us/library/windows/hardware/dn653287(v=vs.85).aspx) is the replacement for legacy file-based boot information.
16 |
17 | BCD provides a firmware-independent mechanism for manipulating boot environment data for any type of Windows system. Windows Vista and later versions of Windows will use it to load the operating system or to run boot applications such as memory diagnostics. Some key characteristics include:
18 |
19 | * BCD abstracts the underlying firmware. BCD currently supports both PC/AT BIOS and EFI systems. BCD interfaces perform all necessary interaction with firmware. For example, on EFI systems, BCD creates and maintains EFI NVRAM entries.
20 | * BCD provides clean and intuitive structured storage for boot settings.
21 | * BCD interfaces abstract the underlying data store.
22 | * BCD is available at run time and during the boot process.
23 | * BCD manipulation requires elevated permissions.
24 | * BCD is designed to handle systems with multiple versions and configurations of Windows, including versions earlier than Windows Vista. It can also handle non-Windows operating systems.
25 | * BCD is the only boot data store that is required for Windows Vista and later versions of Windows. BCD can describe NTLDR and the boot process for loading of earlier versions of Windows, but these operating systems are ultimately loaded by Ntldr and must still store their boot options in a boot.ini file.
26 |
27 | BCD is relevant for security purposes as it is responsible for:
28 |
29 | * Enforcing driver code signing requirements.
30 | * Enforcing DEP and other anti-exploit requirements.
31 | * Controlling kernel/hypervisor debugging settings.
32 |
33 | BCD can be modified using multiple methods, most notably via WMIC or the bcdedit.exe binary.
34 |
35 | At the start of the Windows boot process, a [Windows event ID (4826)](https://docs.microsoft.com/en-us/windows/device-security/auditing/event-4826) is recorded in the event log with the details of the BCD data loaded.
36 |
37 | There are several critical BCD entries present in this log that should be inspected for changes:
38 |
39 | |BCD Entries|Description|Default State|Desired State|Security Impact|
40 | |-----------|-----------|-------------|-------------|---------------|
41 | SecurityID|The security ID responsible for the BCD load event.|SYSTEM|SYSTEM|Indicates an anomalous BCD loading event.|
42 | Kernel Debugging|Describes whether or not kernel debugging is enabled.|Disabled|Disabled|Allows subversion of the operating system, security tooling, and controls.|
43 | Hypervisor Debugging|Describes whether or not hypervisor debugging is enabled.|Disabled|Disabled|Allows subversion of the hypervisor and any running guests.|
44 | Test Signing|Describes whether or not test signing is enabled.|Disabled|Disabled|Allows loading of unsigned kernel modules and drivers.|
45 | Flight Signing|Describes whether or not flight signing is enabled.|Disabled|Disabled|Allows loading of flight-signed (Microsoft development code signing certificate) drivers.|
46 | Integrity Checks|Describes whether or not integrity checks are performed.|Enabled|Enabled|Disables all integrity checks on the BCD.|
47 |
48 | # Blind Spots and Assumptions
49 | This strategy relies on the following assumptions:
50 | * BCD reporting is valid and trustworthy.
51 | * Windows event logs are being successfully generated on Windows hosts.
52 | * Windows event logs are successfully forwarded to WEF servers.
53 | * SIEM is successfully indexing Windows event logs.
54 |
55 | A blind spot will occur if any of the assumptions are violated. For instance, the following would not trip the alert:
56 | * Windows event forwarding or auditing is disabled on the host.
57 | * BCD is modified without generating a log event (e.g. exploit, implant).
58 |
59 | # False Positives
60 | There are several instances where false positives will occur:
61 | * Users enrolling in Windows Insider Preview (WIP) builds will enable Flight Signing.
62 | * Users manually enabling debugging or driver test features for the purposes of development.
63 |
64 | System configuration should prevent enrollment in WIP, but enterprising users may work around these restrictions.
65 |
66 | System debugging (e.g. Kernel, Hypervisor) should only take place in a sanctioned development environment, and should not be present on a production host.
67 |
68 | # Priority
69 | The priority is set to high under the following conditions:
70 | * Integrity checks are disabled.
71 | * Kernel debugging is enabled.
72 | * Hypervisor debugging is enabled.
73 | * Test signing is enabled.
74 |
75 | The priority is set to medium under the following conditions:
76 | * Flight signing is enabled.
77 |
78 | # Validation
79 | Validation can occur for this ADS by performing the following execution on a Windows host, followed by a reboot:
80 | ```
81 | BCDEDIT /set nointegritychecks ON
82 | ```
83 |
84 | # Response
85 | In the event that this alert fires, the following response procedures are recommended:
86 | * Identify the BCD properties that were modified.
87 | * If only Flight Signing were modified, it is likely the user enrolled in WIP.
88 | * Check the current build of their machine and compare against public WIP builds.
89 | * If this is a true positive, work with the user to roll back to a stable build.
90 | * If integrity checks or test signing are modified, treat as a high priority alert.
91 | * Investigate any processes which have executed since the last reboot.
92 | * Identify any new loaded kernel modules or drivers.
93 | * If the user is unaware of this behavior, escalate to a security incident.
94 | * If debugging settings are modified, treat as a high priority alert.
95 | * Identify if any debuggers were used by the user.
96 | * If the user is unaware of this behavior, escalate to a security incident.
97 |
98 | # Additional Resources
99 | * [Boot Configuration Data Documentation](https://msdn.microsoft.com/en-us/library/windows/hardware/dn653287(v=vs.85).aspx)
100 | * [About Kernel Debugging](https://msdn.microsoft.com/en-us/library/windows/hardware/ff542191(v=vs.85).aspx)
101 | * [About Hypervisor Debugging](https://msdn.microsoft.com/en-us/library/windows/hardware/ff538138(v=vs.85).aspx)
102 | * [About Test Signing](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/the-testsigning-boot-configuration-option)
103 |
--------------------------------------------------------------------------------
/ADS-Examples/003-osquery-MacOS-Malware-Detection.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | Detect when a query from the osquery osx-attacks query pack detects a positive hit on malware/adware on MacOS endpoints.
3 |
4 | # Categorization
5 | These attempts are categorized as one of several MITRE ATT&CK categories:
6 |
7 | * [Persistence / Launch Agent](https://attack.mitre.org/wiki/Technique/T1159)
8 | * [Persistence / Launch Daemon](https://attack.mitre.org/wiki/Technique/T1160)
9 | * [Persistence / Login Item](https://attack.mitre.org/wiki/Technique/T1162)
10 | * [Persistence / Logon Scripts](https://attack.mitre.org/wiki/Technique/T1037)
11 | * [Persistence / Startup Items](https://attack.mitre.org/wiki/Technique/T1165)
12 |
13 | # Strategy Abstract
14 | The strategy will function as follows:
15 |
16 | * Periodically run the osquery osx-attacks query pack on all MacOS endpoints.
17 | * Alert on any hits on the osx-attacks query pack.
18 |
19 | # Technical Context
20 | The [osquery OSS project](https://osquery.io/) maintains a series of queries called "packs". One such pack is called the [osx-attacks pack](https://github.com/facebook/osquery/blob/master/packs/osx-attacks.conf) and it contains simple signatures to detect the presence of known adware and malware on OSX systems. It does not use any heuristics; it simply looks for known indicators (usually plists) that indicate a positive infection.
21 |
22 | A sample osquery event looks like the following:
23 | ```
24 | {
25 | action: added
26 | calendarTime: Mon Sep 4 18:14:39 2017 UTC
27 | columns: {
28 | disabled:
29 | groupname:
30 | inetd_compatibility:
31 | keep_alive: 1
32 | label: com.spigot.ApplicationManager
33 | name: com.spigot.ApplicationManager.plist
34 | on_demand:
35 | path: /Users/tester/Library/LaunchAgents/com.spigot.ApplicationManager.plist
36 | process_type:
37 | program:
38 | program_arguments: /Users/tester/Library/Application Support/Spigot/ApplicationManager --protect
39 | queue_directories:
40 | root_directory:
41 | run_at_load: 1
42 | start_interval:
43 | start_on_mount:
44 | stderr_path:
45 | stdout_path:
46 | username:
47 | watch_paths:
48 | working_directory:
49 | }
50 | decorations: { [+]
51 | }
52 | hostIdentifier: testermac
53 | name: pack/osx-attacks/Spigot
54 | unixTime: 1504548879
55 | }
56 | ```
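As a minimal sketch of the alerting step, the sample event above can be matched on its `name` field; this assumes results are shipped as newline-delimited JSON (as in osquery's results log), and the log path shown is illustrative:

```
import json

def osx_attacks_hits(results_log_path="/var/log/osquery/osqueryd.results.log"):
    """Yield results from the osx-attacks pack that indicate a new hit."""
    with open(results_log_path) as handle:
        for line in handle:
            try:
                event = json.loads(line)
            except ValueError:
                continue  # skip malformed lines
            # Pack results are named "pack/<pack name>/<query name>".
            if event.get("name", "").startswith("pack/osx-attacks/") and event.get("action") == "added":
                yield {
                    "host": event.get("hostIdentifier"),
                    "query": event.get("name"),
                    "path": event.get("columns", {}).get("path"),
                }

# Example usage (assumes the default osqueryd results log location exists):
# for hit in osx_attacks_hits():
#     print(hit)
```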
57 |
58 | # Blind Spots and Assumptions
59 | This strategy relies on the following assumptions:
60 | * Osquery is running on hosts.
61 | * Osquery has the correct query packs.
62 | * Osquery is successfully reporting data to the Kolide endpoint.
63 |
64 | A blind spot will occur if any of the assumptions are violated. For instance, the following would not trip the alert:
65 | * Osquery stops running or is tampered with.
66 | * The malware sample does not match an entry in the osx-attacks query pack.
67 |
68 | Note: This detection method is only able to detect known malware using static indicators. Malware variants may not be picked up by osquery.
69 |
70 | # False Positives
71 | There are very limited instances where false positives will occur:
72 | * A legitimate file uses the same filename or filepath as a malicious sample.
73 | * A legitimate file is accidentally added to the osx-attacks query pack.
74 |
75 | Note: No false positives were detected during staging or production. It is extremely unlikely that a false positive will occur on this ADS.
76 |
77 | # Priority
78 | The priority is set to high under all conditions.
79 |
80 | # Validation
81 | Validation can occur for this ADS by performing the following execution on a MacOS host:
82 |
83 | ```
84 | echo '<?xml version="1.0" encoding="UTF-8"?>
85 | > <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
86 | > <plist version="1.0">
87 | > <dict>
88 | > <key>KeepAlive</key>
89 | > <dict>
90 | > <key>SuccessfulExit</key>
91 | > <false/>
92 | > </dict>
93 | > <key>Label</key>
94 | > <string>com.apple.xpcd.plist</string>
95 | > <key>LimitLoadToSessionType</key>
96 | > <string>Aqua</string>
97 | > <key>ProgramArguments</key>
98 | > <array>
99 | > <string>id</string>
100 | > </array>
101 | > <key>RunAtLoad</key>
102 | > <true/>
103 | > </dict>
104 | > </plist>' > ~/Library/LaunchAgents/com.apple.xpcd.plist
105 |
106 | launchctl load ~/Library/LaunchAgents/com.apple.xpcd.plist
107 | ```
108 | This validation scenario will create a PLIST file which should trigger a detection for OSX_Proton.
109 |
110 | # Response
111 | In the event that this alert fires, the following response procedures are recommended:
112 | * Identify the signature that was used to create the alert. This should fall into one of two buckets: commodity or sophisticated.
113 | * Commodity includes adware, PUPs, and other unwanted software.
114 | * Sophisticated includes any implant which was used in a CNE campaign.
115 | * If the signature relates to a commodity implant, perform the following:
116 | * Follow the MacOS anti-adware playbook and ensure implant removal.
117 | * Perform blocking on any C2 infrastructure.
118 | * If the signature relates to a sophisticated implant, perform the following:
119 | * Escalate to a security incident.
120 |
121 | # Additional Resources
122 | * [Osquery osx-attacks pack](https://github.com/facebook/osquery/blob/master/packs/osx-attacks.conf)
--------------------------------------------------------------------------------
/ADS-Examples/004-Unusual-Powershell-Host-Process.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | Detect when powershell (system.management.automation.dll) is loaded into an unusual powershell host process. This may be indicative of an attempt to load powershell functionality without relying on traditional powershell hosts (e.g. powershell.exe).
3 |
4 | # Categorization
5 | These attempts are categorized as [Execution / Powershell](https://attack.mitre.org/wiki/Technique/T1086).
6 |
7 | # Strategy Abstract
8 | The strategy will function as follows:
9 |
10 | * Monitor module loads via endpoint tooling on Windows systems.
11 | * Look for any process that loads the powershell DLL (system.management.automation.dll OR system.management.automation.ni.dll)
12 | * Suppress any known-good powershell host processes by path and process name.
13 | * Alert on any unusual powershell host processes.
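A minimal sketch of the suppression logic, assuming module load events expose the host process path and the loaded DLL path (as Sysmon Event ID 7 does); the field names and the allowlisted host paths are illustrative and not a complete production list:

```
POWERSHELL_DLLS = ("system.management.automation.dll", "system.management.automation.ni.dll")

# Known-good powershell hosts, suppressed by full path (illustrative, not a complete list).
KNOWN_HOSTS = {
    r"c:\windows\system32\windowspowershell\v1.0\powershell.exe",
    r"c:\windows\system32\windowspowershell\v1.0\powershell_ise.exe",
    r"c:\windows\syswow64\windowspowershell\v1.0\powershell.exe",
}

def is_unusual_powershell_host(event):
    """Return True when a process outside the allowlist loads the powershell DLL."""
    loaded = event.get("image_loaded", "").lower()
    host = event.get("image", "").lower()
    if not loaded.endswith(POWERSHELL_DLLS):
        return False
    return host not in KNOWN_HOSTS

# Example: a renamed copy of powershell.exe (as in the Validation section) is not allowlisted.
print(is_unusual_powershell_host({
    "image": r"C:\windows\temp\unusual-powershell-host-process-test.exe",
    "image_loaded": r"C:\path\to\system.management.automation.ni.dll",  # path shortened for illustration
}))  # True
```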
14 |
15 | # Technical Context
16 | Built on the .NET framework, powershell is a command-line shell and scripting language for performing system management and automation. While normally exposed through the process powershell.exe, powershell is actually a DLL entitled system.management.automation.dll. It may also exist in a native image format as system.management.automation.ni.dll.
17 |
18 | The powershell DLL may be loaded into several processes which are known as powershell hosts. These may range from common hosts like powershell.exe or the powershell integrated scripting environment (powershell_ise.exe) to more esoteric binaries like Exchange and Azure Active Directory Sync processes. Generally, powershell hosts are rather predictable and are usually signed binaries distributed by Microsoft.
19 |
20 | Attackers love to leverage powershell as it provides a high-level interface to interact with the operating system without requiring development of functionality in C, C#, or .NET. While many attackers leverage native powershell hosts, more sophisticated adversaries may opt for the more OPSEC-friendly method of injecting powershell into non-native hosts. This is described as [unmanaged powershell](https://github.com/leechristensen/UnmanagedPowerShell) (POC: [powerpick](https://github.com/PowerShellEmpire/PowerTools/tree/master/PowerPick)), a method of loading the powershell DLL into an arbitrary process without relying on a powershell host.
21 |
22 | An important caveat is how unmanaged powershell interacts with powershell logging. As noted in the powershell knowledge base page, powershell v5 includes substantial improvements to telemetry collection through module, script block, operational, and transcript logs. Older versions, however, do not have the same logging hooks available. On systems with powershell v2 installed, the .NET v2 CLR may be loaded, which will provide a logging bypass. Removing powershell v2 and installing powershell >= v5 is essential to maintaining reliable logging pipelines.
23 |
24 | Unmanaged powershell is [explained in greater detail on Lee Christensen's blog](https://silentbreaksecurity.com/powershell-jobs-without-powershell-exe/), but is summarized as follows:
25 |
26 | * The .NET common language runtime (CLR) is loaded into the current process.
27 | * Attacker tools specify the version of the CLR loaded, but will oftentimes rely on loading v2 if available.
28 | * Foreign processes require a method of code injection.
29 | * The injected code loads the CLR.
30 | * The CLR loads a custom C# assembly (effectively a powershell runner) into an AppDomain.
31 | * Commands or script blocks are loaded into the C# assembly and the .NET execution method is called.
32 |
33 | Additional information on unmanaged powershell can be found on [Justin Warner's blog](https://www.sixdub.net/?p=367).
34 |
35 | # Blind Spots and Assumptions
36 | This strategy relies on the following assumptions:
37 |
38 | * Endpoint tooling is running and functioning correctly on the system.
39 | * Module loads in Windows are being recorded.
40 | * Logs from endpoint tooling are reported to the server.
41 | * Endpoint tooling is correctly forwarding logs to SIEM.
42 | * SIEM is successfully indexing endpoint tooling logs.
43 |
44 | A blind spot will occur if any of the assumptions are violated. For instance, the following would not trip the alert:
45 | * A legitimate powershell host is abused (e.g. powershell.exe).
46 | * A whitelisted powershell host is abused.
47 | * Endpoint tooling is modified to not collect module load events or report to the server.
48 |
49 | # False Positives
50 | There are several instances where false positives will occur:
51 |
52 | * A legitimate powershell host is used and is not suppressed via the whitelist.
53 |
54 | Legitimate powershell hosts typically look like the following:
55 |
56 | * They are digitally signed by Microsoft, or a valid 3rd party application which may need to make direct powershell calls.
57 | * The powershell host loads the native powershell library into memory using a standard method (e.g. LoadLibrary).
58 | * This is a binary which we generally trust.
59 |
60 | # Priority
61 | The priority is set to medium under all conditions.
62 |
63 | # Validation
64 | Validation can occur for this ADS by performing the following execution on a Windows host:
65 |
66 | ```
67 | Copy-Item C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Destination C:\windows\temp\unusual-powershell-host-process-test.exe -Force
68 |
69 | Start-Process C:\windows\temp\unusual-powershell-host-process-test.exe -ArgumentList '-NoProfile','-NonInteractive','-Windowstyle Hidden','-Command {Get-Date}'
70 |
71 | Remove-Item 'C:\windows\temp\unusual-powershell-host-process-test.exe' -Force -ErrorAction SilentlyContinue
72 | ```
73 |
74 | # Response
75 | In the event that this alert fires, the following response procedures are recommended:
76 |
77 | * Compare the suspect powershell host against entries on the whitelist.
78 | * Note if there are minor issues due to path or drive letter differences.
79 | * Check the digital signature of the binary.
80 | * Use either tooling or powershell to identify if the binary is digitally signed.
81 | * Make a trust determination on the signer and binary.
82 | * Identify if the binary corresponds to an installed application.
83 | * Look at osquery to find installed packages that might match the binary.
84 | * Look at the execution behavior of the binary.
85 | * Has it made any unusual network connections?
86 | * Has it spawned any child processes?
87 | * Has it made any suspicious file modifications?
88 | * If the binary is not trustworthy, or cannot be traced to a legitimate installed application, treat it as a potential compromise and escalate to a security incident.
89 |
90 | # Additional Resources
91 | * [Unmanaged powershell](https://github.com/leechristensen/UnmanagedPowerShell)
92 | * [Powershell without powershell](https://silentbreaksecurity.com/powershell-jobs-without-powershell-exe)
93 | * [Bypassing AppLocker Policies](https://www.sixdub.net/?p=367)
--------------------------------------------------------------------------------
/ADS-Examples/005-Active-Directory-Privileged-Group-Modification.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | Detect changes to privileged groups in Active Directory that could indicate malicious or unexpected administrative activity.
3 |
4 | # Categorization
5 | These attempts are categorized as [Credential Access / Account Manipulation](https://attack.mitre.org/wiki/Technique/T1098).
6 |
7 | # Strategy Abstract
8 | The strategy will function as follows:
9 |
10 | * Collect Windows Event Logs related to AD group changes.
11 | * Compare AD group changes against a list of privileged groups.
12 | * Alert on any unusual changes to privileged groups.
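A minimal sketch of the matching step, assuming the group change events have already been parsed into dictionaries with `event_id`, `group_name`, `member`, and `subject` fields (illustrative names for the TargetUserName, MemberName, and SubjectUserName fields carried by the Windows events):

```
# Event IDs for membership additions and removals (see the event table in the Technical Context section).
ADDITION_EVENTS = {4728, 4756}   # member added to a global / universal group
REMOVAL_EVENTS = {4729, 4757}    # member removed from a global / universal group

MONITORED_GROUPS = {
    "Administrators", "Domain Admins", "Enterprise Admins",
    "Schema Admins", "Account Operators", "Backup Operators",
}

def triage(event):
    """Return an alert dictionary for membership changes to monitored groups, else None."""
    if event.get("group_name") not in MONITORED_GROUPS:
        return None
    if event["event_id"] in ADDITION_EVENTS:
        priority = "high"
    elif event["event_id"] in REMOVAL_EVENTS:
        priority = "low"  # removals are lower priority per this ADS
    else:
        return None
    return {"priority": priority, **event}

# Example with illustrative account names:
print(triage({"event_id": 4728, "group_name": "Domain Admins",
              "member": "CORP\\newadmin", "subject": "CORP\\helpdesk01"}))
```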
13 |
14 | # Technical Context
15 | Privileged Groups are a list of abstract high-value targets in AD that provide privileged access or can be misused to perform privilege escalation. These include [builtin AD groups (e.g. Account Operators, Domain Admins, Enterprise Admins)](https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-b--privileged-accounts-and-groups-in-active-directory) as well as custom groups which have been delegated sensitive permissions.
16 |
17 | When configured correctly, AD Domain Controllers will record Event IDs for group modifications. The following event IDs are of interest for this ADS:
18 |
19 | |Event Code|Description|
20 | |----------|-----------|
21 | 4727|A security-enabled global group was created.|
22 | 4728|A member was added to a security-enabled global group.|
23 | 4729|A member was removed from a security-enabled global group.|
24 | 4730|A security-enabled global group was deleted.|
25 | 4754|A security-enabled universal group was created.|
26 | 4756|A member was added to a security-enabled universal group.|
27 | 4757|A member was removed from a security-enabled universal group.|
28 | 4758|A security-enabled universal group was deleted.|
29 | 4764|A group's type was changed.|
30 |
31 | The following AD builtin groups are monitored for changes:
32 |
33 | |Group Name|Description|
34 | |----------|-----------|
35 | Administrators|Builtin administrators group for the domain|
36 | Domain Admins|Builtin administrators group for the domain|
37 | Enterprise Admins|Builtin administrators group for the forest|
38 | Schema Admins|Highly privileged builtin group|
39 | Account Operators|Highly privileged builtin group|
40 | Backup Operators|Highly privileged builtin group|
41 |
42 | # Blind Spots and Assumptions
43 | This strategy relies on the following assumptions:
44 | * Group change event auditing is enabled by GPO.
45 | * Group change events are written to the Windows Event Log.
46 | * The DCs are correctly forwarding the group change events to WEF servers.
47 | * WEF servers are correctly forwarding events to the SIEM.
48 | * SIEM is successfully indexing group change events.
49 |
50 | A blind spot will occur if any of the assumptions are violated. For instance, the following would not trip the alert:
51 | * Windows event logging breaks.
52 | * A group is modified in a manner which does not generate an event log.
53 | * A legitimate account in a sensitive group is hijacked.
54 | * A sensitive group is not correctly added to the monitoring list.
55 |
56 | # False Positives
57 | There are several instances where false positives for this ADS could occur:
58 | * Legitimate changes to the group are made as part of sanctioned systems administration activities.
59 | * Automation scripts remove leavers from privileged groups.
60 |
61 | # Priority
62 | The priority is set to high under the following conditions:
63 | * A new user is added to a builtin Windows group.
64 | * A new user is added to a Tier-0 administration group.
65 |
66 | The priority is set to medium under the following conditions:
67 | * A new user is added to a Tier-1 administration group.
68 |
69 | The priority is set to low under the following conditions:
70 | * The group modification event is a removal.
71 |
72 | # Validation
73 | Validation can occur for this ADS by performing the following execution on a Windows host with RSAT installed:
74 |
75 | ```
76 | Import-Module ActiveDirectory
77 | Add-ADGroupMember -Identity "Account Operators" -Members <TestUser>    # <TestUser> is a placeholder for a dedicated test account
78 | Remove-ADGroupMember -Identity "Account Operators" -Members <TestUser>
79 | ```
80 |
81 | # Response
82 | In the event that this alert fires, the following response procedures are recommended:
83 | * Validate the group that was modified, the user that was added, and the user who made the change.
84 | * If the user making the change is not an administrator at the appropriate permissions level, escalate to a security incident.
85 | * If the user added to the group is not a member of an administratively relevant team, escalate to a security incident.
86 | * If the user added to the group is a new account, escalate to a security incident.
87 | * Validate there is a change management ticket or announcement for the change.
88 | * If there is no change management ticket or announcement, contact the user who made the change.
89 | * If the user is unaware of the activity, escalate to a security incident.
90 |
91 | # Additional Resources
92 | * [Privileged Groups in AD](https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-b--privileged-accounts-and-groups-in-active-directory)
93 | * [Securing PAM](https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)
--------------------------------------------------------------------------------
/ADS-Examples/006-Cilium-Blocked-DNS-Resolution.md:
--------------------------------------------------------------------------------
1 | # Cilium Blocked DNS Resolution
2 |
3 | ## Goal
4 |
5 | The purpose of this ADS is to detect when Cilium blocks DNS requests originating from pods in the Rubix environment.
6 |
7 | ## Categorization
8 |
9 | These attempts are categorized as [Command and Control](https://attack.mitre.org/tactics/TA0011/) / [Application Layer Protocol: DNS](https://attack.mitre.org/techniques/T1071/004/).
10 |
11 | ## Environments
12 |
13 | * Palantir Rubix
14 |
15 | ## Platforms
16 |
17 | * Self-Hosted Applications
18 | * SaaS Applications
19 | * Kubernetes
20 |
21 | ## Tooling
22 |
23 | * Cilium
24 |
25 | ## Technical Context
26 |
27 | ### Rubix
28 | Rubix is a cloud platform that runs alongside the Palantir Cloud to securely run Foundry workers and executors in containers on Kubernetes; Rubix is a more secure and scalable solution that Palantir developed to replace Palantir Cloud Jails.
29 |
30 | Some of Rubix's most notable differences from the existing Palantir Cloud Jails include:
31 |
32 | * Spark and other RCE workloads run in a Palantir-secured Kubernetes cluster, which provides container sandboxing from the underlying hosts and network isolation between individual Spark modules.
33 | * RCE workloads are further isolated by running in a separate VPC with only front door access back to the Palantir Cloud. This significantly reduces the risk and blast radius of a malicious Foundry RCE workload.
34 | * SSH onto Rubix hosts is only available for break glass circumstances. It is not available to administrators in steady state. Rubix intentionally limits the use and availability of SSH in order to minimize the attack surface for each cluster.
35 | * Each host in a Rubix environment is destroyed and rebuilt every 40-72 hours. The primary benefit of doing this is that it greatly reduces the risk of persistent threats, as an attacker will need to re-compromise a host every time it is rebuilt. This additionally introduces a baseline amount of entropy into the environment that will allow us to be more confident in our platform’s ability to survive isolated failure.
36 |
37 | Rubix provides the ability to dynamically scale the resources available for Foundry workloads, up to the maximum configured by a deployment for that instance group. For example, an instance group used by Spark jobs can dynamically scale from 10 nodes under low utilization up to 20 or 50 or 100+ nodes when demand is at its peak.
38 |
39 | ### Cilium & Hubble
40 | [Cilium](https://cilium.io/) is open-source software used in the Rubix environment for securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.
41 |
42 | Cilium supports DNS-based network controls that only allow for the resolution of domains specified in the DNS egress policy. Rubix stacks use DNS-based security policies to disallow traffic to any domains not specified in that stack's egress configuration policy.
43 |
44 | [Hubble](https://docs.cilium.io/en/v1.9/intro/#what-is-hubble) is the observability / logging platform built on top of Cilium; it uses [eBPF](https://ebpf.io/) to achieve visibility into the operations occurring on an endpoint, and in turn generates discrete log entries for a variety of events.
45 |
46 | **Palantir's SIEM currently ingests Cilium / Hubble's network flow events and process events; the following log entry shows an example of a network flow event in which Cilium drops traffic to a destination whose domain isn't on the allow list:**
47 |
48 | ```
49 | { [-]
50 | flow: { [-]
51 | IP: { [-]
52 | destination: 10.0.X.X
53 | ipVersion: IPv4
54 | source: 10.0.X.X
55 | }
56 | Summary: DNS Query XXXXXXXX.windows.net. AAAA
57 | Type: L7
58 | destination: { [-]
59 | identity: 2
60 | labels: [ [-]
61 | reserved:world
62 | ]
63 | }
64 | event_type: { [-]
65 | type: 129
66 | }
67 | l4: { [-]
68 | UDP: { [-]
69 | destination_port: 8053
70 | source_port: 43086
71 | }
72 | }
73 | l7: { [-]
74 | dns: { [-]
75 | observation_source: proxy
76 | qtypes: [ [-]
77 | AAAA
78 | ]
79 | query: XXXXXXXX.windows.net.
80 | }
81 | type: REQUEST
82 | }
83 | node_name: XXXXXXXX
84 | source: { [-]
85 | ID: 2321
86 | identity: 29085
87 | labels: [ [-]
88 | k8s:com.palantir.deployability.ingress-manager.pod/service=spark-module-0e49dd1f90564bda844af9131dcc6e048772
89 | k8s:io.cilium.k8s.namespace.labels.name=smm-0e49dd1f90564bda844af9131dcc6e048772
90 | k8s:io.cilium.k8s.namespace.labels.spark-backend-id=0e49dd1f-9056-4bda-844a-f9131dcc6e04-8772
91 | k8s:io.cilium.k8s.namespace.labels.spark-module-id=0e49dd1f-9056-4bda-844a-f9131dcc6e04
92 | k8s:io.cilium.k8s.namespace.labels.spark-module-type=python-1
93 | k8s:io.cilium.k8s.policy.cluster=default
94 | k8s:io.cilium.k8s.policy.serviceaccount=default
95 | k8s:io.kubernetes.pod.namespace=smm-0e49dd1f90564bda844af9131dcc6e048772
96 | k8s:is-driver-pod=true
97 | k8s:spark-app-id=0e49dd1f-9056-4bda-844a-f9131dcc6e04
98 | k8s:spark-app-selector=spark-b39e6245b12d493f9bc4375508ef1a29
99 | k8s:spark-module-id=0e49dd1f-9056-4bda-844a-f9131dcc6e04
100 | k8s:spark-role=driver
101 | ]
102 | namespace: smm-0e49dd1f90564bda844af9131dcc6e048772
103 | pod_name: python1-0e49dd1f-9056-4bda-844a-f9131dcc6e04-1619652123324-driver
104 | }
105 | time: 2021-04-28T23:25:57.893023190Z
106 | verdict: DROPPED
107 | }
108 | node_name: XXXXXXXX
109 | time: 2021-04-28T23:25:57.893023190Z
110 | }
111 | ```
112 | ### Rubix Egress Configurations
113 | Each Rubix stack maintains a `security.yml` file that contains the allowed egress IP addresses, URLs, and domains. In addition to the stack-specific allowed egress, there are globally-allowed egress IPs, URLs, and domains that are applied to every stack. The globally-allowed egress values are for cloud service infrastructure, and internal Palantir infrastructure. Cilium collects the egress details from the stack's `security.yml` file and generates corresponding rules in real-time.
114 |
115 | ## Strategy Abstract
116 |
117 | This alerting & detection strategy will function as follows:
118 |
119 | * Hubble logs will be ingested into our SIEM for all Rubix stacks.
120 | * A scheduled Splunk query will identify blocked DNS requests by searching for `cilium:v2:flow_dns` events with `flow.verdict=DROPPED`.
121 | * Blocked domains will be evaluated against an allowed-list.
122 | * Events for blocked domains in the allowed-list will be suppressed.
123 | * Blocked domains that are not in the allowed-list will generate an alert.
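A minimal sketch of the evaluation logic, assuming the `flow` objects shown in the Technical Context section have been retrieved from the SIEM and parsed into dictionaries; the allow-list entries are illustrative:

```
# Blocked queries to these domains (and their subdomains) are expected noise (illustrative entries).
ALLOWED_BLOCKED_DOMAINS = {"telemetry.example.com", "updates.example.net"}

def domain_is_allowed(query, allowed=ALLOWED_BLOCKED_DOMAINS):
    """Check a DNS query name against the allow-list, ignoring the trailing dot."""
    domain = query.rstrip(".").lower()
    return any(domain == entry or domain.endswith("." + entry) for entry in allowed)

def alerts(flow_events):
    """Yield dropped DNS flows whose queried domain is not on the allow-list."""
    for event in flow_events:
        flow = event.get("flow", {})
        dns = flow.get("l7", {}).get("dns", {})
        if flow.get("verdict") != "DROPPED" or not dns.get("query"):
            continue
        if domain_is_allowed(dns["query"]):
            continue
        yield {
            "query": dns["query"],
            "namespace": flow.get("source", {}).get("namespace"),
            "pod": flow.get("source", {}).get("pod_name"),
        }
```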
124 |
125 | ## Blind Spots and Assumptions
126 |
127 | ### Blind Spots:
128 |
129 | A blind spot may occur under the following circumstances:
130 |
131 | * Cilium / Hubble telemetry is not correctly logged and ingested into our SIEM.
132 | * A previously approved domain name is utilized for Command & Control.
133 | * An adversary is able to leverage advanced capabilities to bypass DNS security policies, such as [Domain Fronting](https://attack.mitre.org/techniques/T1090/004) using an approved domain.
134 |
135 | ### Assumptions:
136 |
137 | This strategy relies on the following assumptions:
138 |
139 | * Cilium / Hubble telemetry is correctly logged and ingested into our SIEM.
140 | * A known-good domain name has not been repurposed for malicious command & control communication.
141 |
142 | ## False Positives
143 |
144 | The following events will result in a false positive:
145 |
146 | * If a known good domain hasn't yet been added to a stack's `security.yml` file, this alert may fire when Rubix resources attempt to resolve that domain.
147 |
148 | ## Validation
149 |
150 | To validate this ADS:
151 |
152 | * Request that a member of the Rubix team use break-glass access and authenticate to a Rubix host.
153 | * The break-glass activity above will trigger its own separate InfoSec alert; please let the rest of the team know why the activity is occurring.
154 | * Have the Rubix personnel perform a domain name query for a domain not included in the allowed list using nslookup, dig, or an equivalent tool.
155 | * Run the ADS's search query against the timeframe of the activity.
156 | * Validate that the activity resulted in a true-positive event for the ADS.
157 |
158 | ## Alert Priority
159 |
160 | This alert is set to **Medium** priority under all circumstances.
161 |
162 | ## Response
163 |
164 | In the event that this alert fires, the following response procedures are recommended:
165 |
166 | * Identify the domain name indicated in the failed DNS request.
167 | * Use the following questions to determine the context of the event:
168 | * What does open-source intelligence suggest about the domain name?
169 | * Has it been used to distribute malware in the past?
170 | * How long has it been in use?
171 | * What are the domain registration details?
172 | * Identify historical traffic to the suspect domain name; is this infrastructure historically known to us?
173 | * It's possible that the domain is allowed in other network security controls, but just hasn't been added to the Rubix stack's configurations yet.
174 | * The Cilium / Hubble events used in this alert contain a label `k8s:io.cilium.k8s.namespace.labels.spark-module-id` that records the Spark module ID responsible for the alert. Using the Spark module ID, we can attribute the activity to a specific Spark module and Foundry user.
175 | * Ex. `k8s:io.cilium.k8s.namespace.labels.spark-module-id=87b0ccfc-f89c-4319-935e-56be1a3d6b56`
176 | * If the initial triage steps listed above don't yield answers that explain the alert, escalate to an investigation.
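As a small illustration of the attribution step above, the Spark module ID can be extracted from the source labels of the matching flow event (label format as shown in the Technical Context section):

```
def spark_module_id(flow):
    """Return the Spark module ID recorded in the flow's source labels, if present."""
    prefix = "k8s:io.cilium.k8s.namespace.labels.spark-module-id="
    for label in flow.get("source", {}).get("labels", []):
        if label.startswith(prefix):
            return label[len(prefix):]
    return None

# Example, using the label format from the sample event:
print(spark_module_id({"source": {"labels": [
    "k8s:io.cilium.k8s.namespace.labels.spark-module-id=87b0ccfc-f89c-4319-935e-56be1a3d6b56",
]}}))  # 87b0ccfc-f89c-4319-935e-56be1a3d6b56
```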
177 |
178 | ## Additional Resources
179 |
180 | * [Cilium Documentation](https://docs.cilium.io/)
181 | * [Kubernetes Documentation](https://kubernetes.io/docs/home/)
182 | * [Introducing Rubix](https://blog.palantir.com/introducing-rubix-kubernetes-at-palantir-ab0ce16ea42e)
183 |
--------------------------------------------------------------------------------
/ADS-Framework.md:
--------------------------------------------------------------------------------
1 | # Goal
2 | The goal is the intended purpose of the alert. It is a simple, plaintext description of the type of behavior you're attempting to detect in your ADS.
3 |
4 | # Categorization
5 | The categorization is a mapping of the ADS to the relevant entry in the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) Framework. ATT&CK provides a language for various post-exploitation techniques and strategies that adversaries might use.
6 |
7 | Mapping to the ATT&CK framework allows for further investigation into the technique, provides a reference to the areas of the killchain where the ADS will be used, and can further drive insight and metrics into alerting gaps. In our environment, we have a knowledge base which maps all of our ADS to individual components of the MITRE ATT&CK framework. When generating a hypothesis for a new alert, an engineer can simply review where we are strongest — or weakest — according to individual ATT&CK techniques.
8 |
9 | When selecting a MITRE ATT&CK category, please select both the parent and child category (e.g. Credential Access / Brute Force).
10 |
11 | # Strategy Abstract
12 | The strategy abstract is a high-level walkthrough of how the ADS functions. This describes what the alert is looking for, what technical data sources are used, any enrichment that occurs, and any false positive minimization steps.
13 |
14 | # Technical Context
15 | Technical Context provides detailed information and background needed for a responder to understand all components of the alert. This should appropriately link to any platform or tooling knowledge and should include information about the direct aspects of the alert. The goal of the Technical Context section is to provide a self-contained reference for a responder to make a judgement call on any potential alert, even if they do not have direct subject matter expertise on the ADS itself.
16 |
17 | # Blind Spots and Assumptions
18 | Blind Spots and Assumptions are the recognized issues, assumptions, and areas where an ADS may not fire. No ADS is perfect and identifying assumptions and blind spots can help other engineers understand how an ADS may fail to fire or be defeated by an adversary.
19 |
20 | # False Positives
21 | False Positives are the known instances of an ADS misfiring due to a misconfiguration, idiosyncrasy in the environment, or other non-malicious scenario. The False Positives section notes uniqueness to your own environment, and should include the defining characteristics of any activity that could generate a false positive alert. These false positive alerts should be suppressed within the SIEM to prevent alert generation when a known false positive event occurs.
22 |
23 | Each alert / detection strategy needs to be tested and refined to remove as many false positives as possible before it is put into production.
24 |
25 | False positive minimization relies on looking at several principles of the strategy and making adjustments, such as:
26 |
27 | * Add an additional component to the rule to maximize true positives.
28 | * Remove common false positives through patterns.
29 | * Back-end filtering to store indices of expected false positives.
30 |
31 | Ideally, you want your strategy to have the fewest false positives possible while maintaining the spirit of your rule. If a low false positive rate cannot be reached, the alert may need to be broken down, refactored, or entirely discarded.
32 |
33 | # Validation
34 | Validation are the steps required to generate a representative true positive event which triggers this alert. This is similar to a unit test and describes how an engineer can cause the ADS to fire. This can be a walkthrough of steps used to generate an alert, a script to trigger the ADS (such as Red Canary's Atomic Red Team Tests), or a scenario used in an alert testing and orchestration platform.
35 |
36 | Each alert / detection strategy must have true positive validation. This is a testing process designed to prove the true positives are detected.
37 |
38 | True positive validation relies on generating a scenario in which the detection strategy is tested, and then validating the result in the tool.
39 |
40 | To perform positive validation:
41 |
42 | * Generate a scenario where a true positive would be generated.
43 | * Document the process of your testing scenario.
44 | * From a testing device, generate a true positive alert.
45 | * Validate the true positive alert was detected by the strategy.
46 |
47 | If you are unable to generate a true positive alert, the alert may need to be broken down, refactored, or entirely discarded.
48 |
49 | # Priority
50 | Priority describes the various alerting levels that an ADS may be tagged with. While the alert itself should reflect the priority when it is fired through configuration in your SIEM (e.g. High, Medium, Low), this section details the criteria for the specific priorities.
51 |
52 | # Response
53 | These are the general response steps in the event that this alert fires. These steps instruct the next responder on the process of triaging and investigating an alert.
54 |
55 | # Additional Resources
56 | Additional Resources are any other internal, external, or technical references that may be useful for understanding the ADS.
57 |
58 | The title for this alerting strategy should be informative but succinct, and should target a singular event, e.g. "Non-SA Bastion Logon", rather than referencing all events of this type ("Bastion Logons").
59 |
60 | The strategy should be stored under the Draft Alerting and Detection Strategies page while you're working on it, peer-reviewed, and a Like attached to the page when approved by a peer to move into production.
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Palantir Technologies
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Alerting and Detection Strategies Framework
2 |
3 | ## About This Repository
4 | This is a public version of the [Alerting and Detection Strategy (ADS) framework we use on the Incident Response Team at Palantir](https://www.medium.com/@palantir).
5 |
6 | This GitHub project provides the building blocks necessary for organizations looking to adopt this framework and improve the efficacy of their detection strategies. While there are operational security considerations around publicly acknowledging and documenting internal alerts, we hope these examples spur greater sharing and collaboration, inspire detection enhancements for other defenders, and ultimately increase the operational cost for attackers.
7 |
8 | ## ADS Framework
9 | Prior to the development and adoption of the ADS framework, we faced major challenges with development of alerting strategies. There was a lack of rigor around the creation, development, and implementation of an alert, which led to sub-optimal alerts going to production without documentation or peer-review. Over time, some of these alerts gained a reputation of being low-quality, which led to fatigue, alerting apathy, or additional engineering time and resources.
10 |
11 | To combat the issues and deficiencies previously noted, we developed an ADS Framework which is used for all alerting development. This is a natural language template which helps frame hypothesis generation, testing, and management of new ADS.
12 |
13 | The ADS Framework has the following sections, each which must be completed prior to production implementation:
14 |
15 | * Goal
16 | * Categorization
17 | * Strategy Abstract
18 | * Technical Context
19 | * Blind Spots and Assumptions
20 | * False Positives
21 | * Validation
22 | * Priority
23 | * Response
24 |
25 | Each section is required to successfully deploy a new ADS, and guarantees that any given alert will have sufficient documentation, will be validated for durability, and will be reviewed prior to production deployment.
26 |
27 | Each production or draft alert based on the ADS framework is stored in a durable, version-controlled, and centralized location (e.g. Wiki, GitHub entry, etc.).
28 |
29 | ## Repository Layout
30 | This repository is organized as follows:
31 | * [**ADS-Framework**](./ADS-Framework.md): The core ADS framework which is used internally at Palantir.
32 | * [**ADS-Examples**](./ADS-Examples/): ADS examples which have been generated in accordance with this framework. These represent human-readable alerting strategies which may be deployed to detect malicious or anomalous activity.
33 |
34 | ### Using This Repository
35 | **Note**: We recommend that you spin up a lab environment before deploying any of these configurations, scripts, or subscriptions to a production environment.
36 |
37 | 1. Download the repository and review the contents.
38 | 2. Run an ADS hack week and try converting or generating several new alerts.
39 | 3. Perform peer review of each new ADS and provide critical feedback.
40 | 4. Start the process of converting legacy alerts into the ADS format.
41 |
42 | ## Contributing
43 | Contributions, fixes, and improvements can be submitted directly against this project as a GitHub issue or pull request.
44 |
45 | ## License
46 | MIT License
47 |
48 | Copyright (c) 2017 Palantir Technologies Inc.
49 |
50 | Permission is hereby granted, free of charge, to any person obtaining a copy
51 | of this software and associated documentation files (the "Software"), to deal
52 | in the Software without restriction, including without limitation the rights
53 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
54 | copies of the Software, and to permit persons to whom the Software is
55 | furnished to do so, subject to the following conditions:
56 |
57 | The above copyright notice and this permission notice shall be included in all
58 | copies or substantial portions of the Software.
59 |
60 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
61 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
62 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
63 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
64 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
65 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
66 | SOFTWARE.
67 |
68 | ## Further Reading and Acknowledgements
69 |
70 | We would like to extend thanks to following for their contributions to the InfoSec community, or for assisting in the development of the ADS Framework:
71 |
72 | * [MITRE ATT&CK Framework](https://attack.mitre.org/wiki/Main_Page)
73 | * [Red Canary Atomic Red Team Testing Framework](https://github.com/redcanaryco/atomic-red-team)
74 |
--------------------------------------------------------------------------------