├── LICENSE
├── README.md
├── chapter1
│   ├── SSH-connection-with-paramiko.py
│   ├── parsing-HTML-with-BeautifulSoap.py
│   ├── scapy-ICMP.py
│   ├── sending-http-request.py
│   └── simple-TCP-client.py
├── chapter2
│   ├── ARP-scan.py
│   ├── port-scan.py
│   ├── portscanner-with-decorators-and-generator
│   │   ├── README.md
│   │   ├── portscanner
│   │   │   ├── __init__.py
│   │   │   └── portscanner.py
│   │   └── setup.py
│   └── portscanner
│       ├── README.md
│       ├── portscanner
│       │   ├── __init__.py
│       │   └── portscanner.py
│       └── setup.py
├── chapter3
│   ├── HTML-analysis.py
│   ├── JS-analysis.py
│   ├── http-headers-analysis.py
│   ├── security-headers-check.py
│   └── web-technology-fingerprinting-with-wappalyzer.py
├── chapter4
│   ├── SQLmap-with-MITM-proxy.py
│   ├── detect-potential-SQLInjection.py
│   ├── flask-application-with-IDOR.py
│   ├── parameterized-queries-in-SQLite3.py
│   ├── scrape-with-PlayWright-advanced-with-crawler.py
│   ├── scrape-with-PlayWright-advanced.py
│   ├── scrape-with-PlayWright.py
│   ├── scrape-with-Request-and-BeautifulSoup.py
│   ├── test-XXS.py
│   └── test-stored-XXS.py
├── chapter5
│   ├── AWS-S3-with-boto.py
│   ├── Azure-SDK-blob-storage.py
│   ├── HardCoded-Cred-with-LLM.py
│   ├── encryption-within-serverless.py
│   ├── enumerate-AWS-resources.py
│   ├── enumerate-EC2-instances-AWS.py
│   ├── get-AWS-secrets-with-boto.py
│   ├── get-Cloud-Watch-logs.py
│   ├── get-permissions-of-lambda-function.py
│   ├── subprocess-for-Terraform.py
│   ├── update-with-webhook.py
│   └── validate-cloud-formation.py
├── chapter6
│   ├── Error-handling.py
│   ├── Jenkinsfile
│   ├── ZAP-automation.py
│   ├── beagle-security-automation.py
│   └── logger-implementation.py
├── chapter7
│   ├── AWS-compliance-audit.py
│   ├── pandas-for-intrusion-detection.py
│   ├── retriave-threat-intelligence.py
│   └── scikit-learn-automation
├── chapter8
│   ├── asymmetric-encryption-example.py
│   ├── bcrypt-example.py
│   ├── hashlib-example.py
│   └── symmetric-encryption-example.py
└── chapter9
    ├── automating-log-analysis.py
    ├── automating-notification-and-reporting.py
    ├── automating-quarantine-and-isolation.py
    ├── automating-threat-hunting-task.py
    ├── automating-threat-intelligence-integration.py
    ├── data-collection-from-API.py
    ├── detect-communications-with-known-IPs.py
    ├── generating-incident-report.py
    ├── incident-response-workflow.py
    ├── logging-and-reporting.py
    ├── preditct-anomalies.py
    ├── process-Apache-log.py
    ├── visualize-anomalies.py
    └── visualize-log-data.py
/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Packt 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Offensive Security Using Python 2 | 3 | 4 | 5 | This is the code repository for [Offensive Security Using Python](https://www.packtpub.com/en-us/product/offensive-security-using-python-9781835468166), published by Packt.
6 | 7 | **A hands-on guide to offensive tactics and threat mitigation using practical strategies** 8 | 9 | ## What is this book about? 10 | Offensive Security Using Python is your go-to manual for mastering the fast-paced field of offensive security. This book is packed with valuable insights, real-world examples, and hands-on activities to help you leverage Python to navigate the complicated world of web security, exploit vulnerabilities, and automate challenging security tasks. 11 | From detecting vulnerabilities to exploiting them with cutting-edge Python techniques, you’ll gain practical insights into web security, along with guidance on how to use automation to improve the accuracy and effectiveness of your security activities. You’ll also learn how to design personalized security automation tools. 12 | 13 | This book covers the following exciting features: 14 | * Familiarize yourself with advanced Python techniques tailored to security professionals’ needs 15 | * Understand how to exploit web vulnerabilities using Python 16 | * Build automated security pipelines using Python and third-party tools 17 | * Develop custom security automation tools to streamline your workflow 18 | * Implement secure coding practices with Python to harden your applications 19 | * Discover Python-based threat detection and incident response techniques 20 | 21 | If you feel this book is for you, get your [copy](https://www.amazon.com/Offensive-Security-Using-Python-Handbook-ebook/dp/B0CV82CWGQ/) today! 22 | 23 | https://www.packtpub.com/ 24 | 25 | ## Instructions and Navigations 26 | All of the code is organized into folders. For example, chapter3. 27 | 28 | The commands will look like the following: 29 | ``` 30 | pip install wapiti3 31 | wapiti -h 32 | ``` 33 | 34 | **Following is what you need for this book:** 35 | This book is for a diverse audience interested in cybersecurity and offensive security.
Whether you're an experienced Python developer looking to enhance your offensive security skills, an ethical hacker, a penetration tester eager to learn advanced Python techniques, or a cybersecurity enthusiast exploring Python's potential in vulnerability analysis, you'll find valuable insights. If you have a solid foundation in the Python programming language and are eager to understand cybersecurity intricacies, this book will help you get started on the right foot. 36 | 37 | With the following software and hardware list you can run all code files present in the book (Chapters 1-9). 38 | 39 | ### Software and Hardware List 40 | 41 | | Chapter | Software required | OS required | 42 | | -------- | ---------------------------------| ----------------------------------| 43 | | 1-9 | Python3, Visual Studio Code | Windows, Mac OS X, and Linux (Any)| 44 | 45 | 46 | ### Related products 47 | 48 | * Zabbix 7 IT Infrastructure Monitoring Cookbook [[Packt]](https://www.packtpub.com/en-in/product/zabbix-7-it-infrastructure-monitoring-cookbook-9781801078320) [[Amazon]](https://www.amazon.in/Zabbix-Infrastructure-Monitoring-Cookbook-maintaining-ebook/dp/B0C53V9XPG) 49 | 50 | * Automating Security Detection Engineering [[Packt]](https://www.packtpub.com/en-in/product/automating-security-detection-engineering-9781837636419) [[Amazon]](https://www.amazon.com/Automating-Security-Detection-Engineering-hands-ebook/dp/B0D343MGWD) 51 | 52 | ## Get to Know the Authors 53 | **Rejah Rehim** 54 | A visionary in cybersecurity, he serves as CEO and co-founder of Beagle Security. With a 15-year track record, he is a driving force in the field, renowned for *Python Penetration Testing Cookbook* and *Effective Python Penetration Testing*. 55 | 56 | **Manindar Mohan** 57 | A cybersecurity architect with 8 years of expertise, he is a vital Elite Team member at Kerala Police Cyberdome. He is an ISMS Lead Auditor and an OWASP Kerala Chapter board contributor.
Despite having a career in aircraft engineering, he became a cybersecurity architect because of his passion for cyberspace. 58 | -------------------------------------------------------------------------------- /chapter1/SSH-connection-with-paramiko.py: -------------------------------------------------------------------------------- 1 | # SSH Connection with Paramiko 2 | import paramiko 3 | # Create an SSH client 4 | ssh_client = paramiko.SSHClient() 5 | # Automatically add the server's host key 6 | ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) 7 | # Connect to the SSH server 8 | ssh_client.connect("example.com", username="user", 9 | password="password") # Update the credentials here 10 | 11 | # Execute a command 12 | stdin, stdout, stderr = ssh_client.exec_command("ls -l") 13 | 14 | # Print the command output 15 | print(stdout.read().decode("utf-8")) 16 | 17 | # Close the SSH connection 18 | ssh_client.close() 19 | -------------------------------------------------------------------------------- /chapter1/parsing-HTML-with-BeautifulSoap.py: -------------------------------------------------------------------------------- 1 | # Parsing HTML with BeautifulSoup 2 | from bs4 import BeautifulSoup 3 | 4 | html = """ 5 | <html> 6 | <head> 7 | <title>Sample Page</title> 8 | </head> 9 | <body> 10 | <p>This is a sample paragraph.</p> 11 | </body> 12 | </html> 13 | """ 14 | 15 | # Parse the HTML 16 | soup = BeautifulSoup(html, "html.parser") 17 | # Extract the text from the paragraph 18 | paragraph = soup.find("p") 19 | print(paragraph.text) -------------------------------------------------------------------------------- /chapter1/scapy-ICMP.py: -------------------------------------------------------------------------------- 1 | # Creating a Basic ICMP Ping Packet 2 | # Import the IP,ICMP and sr1 from Scapy module 3 | from scapy.all import IP, ICMP, sr1 4 | 5 | # Create an ICMP packet 6 | packet = IP(dst="192.168.1.1") / ICMP() 7 | 8 | # Send the packet and receive a response 9 | response = sr1(packet) 10 | -------------------------------------------------------------------------------- /chapter1/sending-http-request.py: -------------------------------------------------------------------------------- 1 | # Sending an HTTP GET Request 2 | import requests 3 | 4 | url = "https://examplecode.com" #update to your valid url 5 | 6 | response = requests.get(url) 7 | 8 | # Print the response content 9 | print(response.text) -------------------------------------------------------------------------------- /chapter1/simple-TCP-client.py: -------------------------------------------------------------------------------- 1 | # Creating a Simple TCP Client 2 | import socket 3 | target_host = "example.com" 4 | target_port = 80 5 | # Create a socket object 6 | client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 7 | # Connect to the server 8 | client.connect((target_host, target_port)) 9 | # Send data 10 | client.send(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n") 11 | # Receive data 12 | response = client.recv(4096) 13 | # Print the response 14 | print(response) -------------------------------------------------------------------------------- /chapter2/ARP-scan.py: -------------------------------------------------------------------------------- 1 | # Import the necessary modules from Scapy 2 | from scapy.all import ARP,
Ether, srp 3 | 4 | # Function to perform ARP scan 5 | def arp_scan(target_ip): 6 | # Create an ARP request packet 7 | arp_request = ARP(pdst=target_ip) 8 | # Create an Ethernet frame to encapsulate the ARP request 9 | ether_frame = Ether(dst="ff:ff:ff:ff:ff:ff") #Broadcasting to all devices in the network 10 | 11 | # Combine the Ethernet frame and ARP request packet 12 | arp_request_packet = ether_frame / arp_request 13 | 14 | # Send the packet and receive the response 15 | result = srp(arp_request_packet, timeout=3, verbose=False)[0] 16 | # List to store the discovered devices 17 | devices_list = [] 18 | 19 | # Parse the response and extract IP and MAC addresses 20 | for sent, received in result: 21 | devices_list.append({'ip': received.psrc,'mac': received.hwsrc}) 22 | 23 | return devices_list 24 | 25 | # Function to print scan results 26 | def print_scan_results(devices_list): 27 | print("IP Address\t\tMAC Address") 28 | print("-----------------------------------------") 29 | for device in devices_list: 30 | print(f"{device['ip']}\t\t{device['mac']}") 31 | 32 | # Main function to perform the scan 33 | def main(target_ip): 34 | print(f"Scanning {target_ip}...") 35 | devices_list = arp_scan(target_ip) 36 | print_scan_results(devices_list) 37 | 38 | # Entry point of the script 39 | if __name__ == "__main__": 40 | # Define the target IP range (e.g., "192.168.1.1/24") 41 | target_ip = input("Enter the target IP range (e.g., 192.168.1.1/24): ") 42 | main(target_ip) -------------------------------------------------------------------------------- /chapter2/port-scan.py: -------------------------------------------------------------------------------- 1 | from portscanner.portscanner import PortScanner 2 | 3 | scanner = PortScanner("192.168.1.1", 200, 202) # update the values 4 | open_ports = scanner.scan_ports() 5 | print("Open ports: ", open_ports) --------------------------------------------------------------------------------
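The port-scan.py script above drives the PortScanner module one port at a time, so every closed or filtered port can cost up to the full socket timeout. The sketch below shows how the same connect()-based check could be parallelized with a thread pool; it is illustrative only — the helper names and the `concurrent.futures` approach are not from the chapter:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect() to (host, port) succeeds."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
            return True
    except OSError:  # covers timeouts and connection-refused errors
        return False

def scan_ports_concurrent(host: str, start_port: int, end_port: int,
                          workers: int = 50) -> list:
    """Scan an inclusive port range using a pool of worker threads."""
    ports = list(range(start_port, end_port + 1))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker runs the same blocking check; slow ports overlap
        # instead of adding up sequentially.
        results = pool.map(lambda p: is_port_open(host, p), ports)
    return [p for p, is_open in zip(ports, results) if is_open]
```

Because the per-port work is almost entirely network waiting, threads overlap well here; tune `workers` and `timeout` to the network you are scanning.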
/chapter2/portscanner-with-decorators-and-generator/README.md: -------------------------------------------------------------------------------- 1 | Port Scan Module with Decorators and Generator -------------------------------------------------------------------------------- /chapter2/portscanner-with-decorators-and-generator/portscanner/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Offensive-Security-Using-Python/ac1147771080e62dace8cd08e9a916c3040e6d83/chapter2/portscanner-with-decorators-and-generator/portscanner/__init__.py -------------------------------------------------------------------------------- /chapter2/portscanner-with-decorators-and-generator/portscanner/portscanner.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import time 3 | 4 | #Class Definition 5 | class PortScanner: 6 | def __init__(self, target_host, start_port,end_port): 7 | self.target_host = target_host 8 | self.start_port = start_port 9 | self.end_port = end_port 10 | self.open_ports = [] 11 | #timing_decorator Decorator Method 12 | def timing_decorator(func): 13 | def wrapper(*args, **kwargs): 14 | start_time = time.time() 15 | result = func(*args, **kwargs) 16 | end_time = time.time() 17 | print(f"Scanning took {end_time - start_time:.2f} seconds.") 18 | return result 19 | return wrapper 20 | #is_port_open Method 21 | def is_port_open(self, port): 22 | try: 23 | with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as s: 24 | s.settimeout(1) 25 | s.connect((self.target_host, port)) 26 | return True 27 | except (socket.timeout,ConnectionRefusedError): 28 | return False 29 | #scan_ports Method 30 | @timing_decorator 31 | def scan_ports(self): 32 | open_ports = [port for port in range(self.start_port, self.end_port + 1) if self.is_port_open(port)] 33 | return open_ports 34 | #scan_ports_generator Method 35 | @timing_decorator 36 | def 
scan_ports_generator(self): 37 | for port in range(self.start_port,self.end_port + 1): 38 | if self.is_port_open(port): 39 | yield port 40 | 41 | def main(): # type: ignore 42 | target_host = input("Enter target host: ") 43 | start_port = int(input("Enter starting port: ")) 44 | end_port = int(input("Enter ending port: ")) 45 | 46 | scanner = PortScanner(target_host, start_port,end_port) 47 | 48 | open_ports = scanner.scan_ports() 49 | print("Open ports: ", open_ports) 50 | 51 | open_ports_generator = scanner.scan_ports_generator() 52 | print("Open ports (using generator):", list(open_ports_generator)) 53 | 54 | if __name__ == "__main__": 55 | main() -------------------------------------------------------------------------------- /chapter2/portscanner-with-decorators-and-generator/setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | setup( 4 | name='portscanner', 5 | version='0.1', 6 | packages=['portscanner'], 7 | install_requires=[], 8 | entry_points={ 9 | 'console_scripts': [ 10 | 'portscanner = portscanner.portscanner:main' 11 | ] 12 | } 13 | ) -------------------------------------------------------------------------------- /chapter2/portscanner/README.md: -------------------------------------------------------------------------------- 1 | Port Scan Module -------------------------------------------------------------------------------- /chapter2/portscanner/portscanner/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Offensive-Security-Using-Python/ac1147771080e62dace8cd08e9a916c3040e6d83/chapter2/portscanner/portscanner/__init__.py -------------------------------------------------------------------------------- /chapter2/portscanner/portscanner/portscanner.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import threading 3 | import 
time 4 | 5 | #Class Definition 6 | class PortScanner: 7 | def __init__(self, target_host, start_port,end_port): 8 | self.target_host = target_host 9 | self.start_port = start_port 10 | self.end_port = end_port 11 | self.open_ports = [] 12 | #is_port_open Method 13 | def is_port_open(self, port): 14 | try: 15 | with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as s: 16 | s.settimeout(1) 17 | s.connect((self.target_host, port)) 18 | return True 19 | except (socket.timeout,ConnectionRefusedError): 20 | return False 21 | #scan_ports Method 22 | def scan_ports(self): 23 | open_ports = [port for port in range(self.start_port, self.end_port + 1) if self.is_port_open(port)] 24 | return open_ports 25 | def main(): # type: ignore 26 | target_host = input("Enter target host: ") 27 | start_port = int(input("Enter starting port: ")) 28 | end_port = int(input("Enter ending port: ")) 29 | 30 | scanner = PortScanner(target_host, start_port,end_port) 31 | 32 | open_ports = scanner.scan_ports() 33 | print("Open ports: ", open_ports) 34 | 35 | if __name__ == "__main__": 36 | main() -------------------------------------------------------------------------------- /chapter2/portscanner/setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | setup( 4 | name='portscanner', 5 | version='0.1', 6 | packages=['portscanner'], 7 | install_requires=[], 8 | entry_points={ 9 | 'console_scripts': [ 10 | 'portscanner = portscanner.portscanner:main' 11 | ] 12 | } 13 | ) -------------------------------------------------------------------------------- /chapter3/HTML-analysis.py: -------------------------------------------------------------------------------- 1 | from bs4 import BeautifulSoup 2 | import requests 3 | url = 'https://example.com' 4 | response = requests.get(url) 5 | soup = BeautifulSoup(response.content, 'html.parser') 6 | # Extract script tags to find JavaScript libraries 7 | script_tags = 
soup.find_all('script') 8 | for script in script_tags: 9 | print(script.get('src')) 10 | 11 | # Extract CSS links to find CSS frameworks 12 | css_links = soup.find_all('link', {'rel': 'stylesheet'}) 13 | for link in css_links: 14 | print(link.get('href')) -------------------------------------------------------------------------------- /chapter3/JS-analysis.py: -------------------------------------------------------------------------------- 1 | import re 2 | import requests 3 | url = 'https://example.com' 4 | response = requests.get(url) 5 | javascript_code = response.text 6 | # Search for specific JavaScript libraries/frameworks 7 | libraries = re.findall(r'someLibraryName', 8 | javascript_code) 9 | if libraries: 10 | print('SomeLibraryName is used.') -------------------------------------------------------------------------------- /chapter3/http-headers-analysis.py: -------------------------------------------------------------------------------- 1 | import requests 2 | url = 'https://example.com' 3 | response = requests.get(url) 4 | headers = response.headers 5 | # Extract and analyze headers 6 | server = headers.get('Server') 7 | print(f'Server: {server}') -------------------------------------------------------------------------------- /chapter3/security-headers-check.py: -------------------------------------------------------------------------------- 1 | import requests 2 | def check_security_headers(url): 3 | response = requests.get(url) 4 | headers = response.headers 5 | security_headers = { 6 | 'Content-Security-Policy': 'Content Security Policy (CSP) header is missing!', 7 | 'Strict-Transport-Security': 'Strict Transport Security (HSTS) header is missing!', 8 | 'X-Content-Type-Options': 'X-Content-Type-Options header is missing!', 9 | 'X-Frame-Options': 'X-Frame-Options header is missing!', 10 | 'Referrer-Policy': 'Referrer Policy header is missing!' 
11 | } 12 | for header, message in security_headers.items(): 13 | if header not in headers: 14 | print(message) 15 | else: 16 | print(f'{header}: {headers[header]}') 17 | 18 | # Example usage 19 | if __name__ == "__main__": 20 | website_url = input("Enter the URL to check security headers: ") 21 | check_security_headers(website_url) -------------------------------------------------------------------------------- /chapter3/web-technology-fingerprinting-with-wappalyzer.py: -------------------------------------------------------------------------------- 1 | from Wappalyzer import Wappalyzer, WebPage # provided by the python-Wappalyzer package 2 | url = 'https://example.com' 3 | webpage = WebPage.new_from_url(url) 4 | wappalyzer = Wappalyzer.latest() 5 | # Analyze the webpage 6 | technologies = wappalyzer.analyze(webpage) 7 | for technology in technologies: 8 | print(f'Technology: {technology}') -------------------------------------------------------------------------------- /chapter4/SQLmap-with-MITM-proxy.py: -------------------------------------------------------------------------------- 1 | import subprocess, threading, time # threading and time are used below 2 | from mitmproxy import proxy, options 3 | from mitmproxy.tools.dump import DumpMaster 4 | 5 | # Function to automate SQLMap with captured HTTP requests from mitmproxy 6 | 7 | def automate_sqlmap_with_mitmproxy(): 8 | # SQLMap command template 9 | sqlmap_command = ["sqlmap", "-r", "-", "--batch", "--level=5", "--risk=3"] 10 | 11 | try: 12 | # Start mitmproxy to capture HTTP traffic 13 | mitmproxy_opts = options.Options(listen_host='127.0.0.1', listen_port=8080) 14 | m = DumpMaster(options=mitmproxy_opts) 15 | config = proxy.config.ProxyConfig(mitmproxy_opts) 16 | m.server = proxy.server.ProxyServer(config) 17 | m.addons.add(DumpMaster) 18 | 19 | # Start mitmproxy in a separate thread 20 | t = threading.Thread(target=m.run) 21 | t.start() 22 | 23 | # Process captured requests in real-time 24 | while True: 25 | # Assuming mitmproxy captures and saves requests to 'captured_request.txt' 26 | with
open('captured_request.txt', 'r') as file: 27 | request_data = file.read() 28 | # Run SQLMap using subprocess 29 | process = subprocess.Popen(sqlmap_command, stdin=subprocess.PIPE,stdout=subprocess.PIPE, stderr=subprocess.PIPE) 30 | stdout, stderr = process.communicate(input=request_data.encode()) 31 | 32 | # Print SQLMap output 33 | print("SQLMap output:") 34 | print(stdout.decode()) 35 | 36 | if stderr: 37 | print("Error occurred:") 38 | print(stderr.decode()) 39 | 40 | # Sleep for a while before checking for new requests 41 | time.sleep(5) 42 | 43 | except Exception as e: 44 | print("An error occurred:", e) 45 | 46 | finally: 47 | # Stop mitmproxy 48 | m.shutdown() 49 | t.join() 50 | 51 | # Start the automation process 52 | automate_sqlmap_with_mitmproxy() -------------------------------------------------------------------------------- /chapter4/detect-potential-SQLInjection.py: -------------------------------------------------------------------------------- 1 | import requests 2 | def check_sql_injection(url): 3 | payloads = ["'", '"', "';--", "')", "'OR 1=1--", "'OR '1'='1", "'='", "1'1"] 4 | for payload in payloads: 5 | test_url = f"{url}{payload}" 6 | response = requests.get(test_url) 7 | # Check for potential signs of SQL injection in the response 8 | if "error" in response.text.lower() or "exception" in response.text.lower(): 9 | print(f"Potential SQL Injection Vulnerability found at: {test_url}") 10 | return 11 | 12 | print("No SQL Injection Vulnerabilities detected.") 13 | 14 | # Example usage: 15 | if __name__ == '__main__': 16 | target_url = "http://example.com/login?id=" 17 | check_sql_injection(target_url) -------------------------------------------------------------------------------- /chapter4/flask-application-with-IDOR.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, request, jsonify 2 | 3 | app = Flask(__name__) 4 | users = { 5 | '123': {'username': 'alice', 
'email':'alice@example.com'}, 6 | '124': {'username': 'bob', 'email':'bob@example.com'} 7 | } 8 | 9 | @app.route('/user', methods=['GET']) 10 | def get_user(): 11 | user_id = request.args.get('id') 12 | user_data = users.get(user_id) 13 | return jsonify(user_data) 14 | 15 | if __name__ == '__main__': 16 | app.run(debug=True) -------------------------------------------------------------------------------- /chapter4/parameterized-queries-in-SQLite3.py: -------------------------------------------------------------------------------- 1 | import sqlite3 2 | username = input("Enter username: ") 3 | password = input("Enter password: ") 4 | # Establish a database connection 5 | conn = sqlite3.connect('example.db') 6 | cursor = conn.cursor() 7 | 8 | # Use a parameterized query to prevent SQL injection 9 | cursor.execute("SELECT * FROM users WHERE username = ? AND password = ?", (username, password)) 10 | 11 | # Fetch the result 12 | result = cursor.fetchone() 13 | 14 | # Validate the login 15 | if result: 16 | print("Login successful!") 17 | else: 18 | print("Invalid credentials.") 19 | 20 | # Close the connection 21 | conn.close() -------------------------------------------------------------------------------- /chapter4/scrape-with-PlayWright-advanced-with-crawler.py: -------------------------------------------------------------------------------- 1 | from playwright.sync_api import sync_playwright 2 | 3 | def scrape_data(): 4 | with sync_playwright() as p: 5 | browser = p.chromium.launch() 6 | context = browser.new_context() 7 | 8 | # Open a new page 9 | page = context.new_page() 10 | 11 | # Navigate to the website 12 | page.goto('https://example.com') 13 | 14 | # Example: Log in (replace these with your actual login logic) 15 | page.fill('input[name="username"]', 'your_username') 16 | page.fill('input[name="password"]', 'your_password') 17 | page.click('button[type="submit"]') 18 | 19 | # Wait for navigation to dashboard or relevant page after login 20 | 
page.wait_for_load_state('load') 21 | 22 | # Start crawling and scraping 23 | scraped_data = [] 24 | 25 | while True: 26 | # Scraping data on the current page 27 | data_elements = page.query_selector_all('.data-element-selector') 28 | scraped_data.extend([element.text_content() for element in data_elements]) 29 | 30 | # Look for the 'next page' button or link 31 | next_page_button = page.query_selector('.next-page-button-selector') 32 | 33 | if not next_page_button: 34 | # If no next page is found, stop crawling 35 | break 36 | 37 | # Click on the 'next page' button 38 | next_page_button.click() 39 | # Wait for the new page to load 40 | page.wait_for_load_state('load') 41 | 42 | # Print or process scraped data from all pages 43 | for data in scraped_data: 44 | print(data) 45 | 46 | # Close the browser 47 | context.close() 48 | 49 | if __name__ == "__main__": 50 | scrape_data() -------------------------------------------------------------------------------- /chapter4/scrape-with-PlayWright-advanced.py: -------------------------------------------------------------------------------- 1 | from playwright.sync_api import sync_playwright 2 | 3 | def scrape_data(): 4 | with sync_playwright() as p: 5 | browser = p.chromium.launch() 6 | context = browser.new_context() 7 | 8 | # Open a new page 9 | page = context.new_page() 10 | 11 | # Navigate to the website 12 | page.goto('https://example.com') 13 | 14 | # Example: Log in (replace these with your actual login logic) 15 | page.fill('input[name="username"]', 'your_username') 16 | page.fill('input[name="password"]', 'your_password') 17 | page.click('button[type="submit"]') 18 | 19 | # Wait for navigation to dashboard or relevant page after login 20 | page.wait_for_load_state('load') 21 | 22 | # Scraping data 23 | data_elements = page.query_selector_all('.data-element-selector') 24 | scraped_data = [element.text_content() for element in data_elements] 25 | 26 | # Print or process scraped data 27 | for data in scraped_data: 28 
| print(data) 29 | 30 | # Close the browser 31 | context.close() 32 | 33 | if __name__ == "__main__": 34 | scrape_data() -------------------------------------------------------------------------------- /chapter4/scrape-with-PlayWright.py: -------------------------------------------------------------------------------- 1 | from playwright.sync_api import sync_playwright 2 | 3 | def scrape_website(url): 4 | with sync_playwright() as p: 5 | browser = p.chromium.launch() 6 | context = browser.new_context() 7 | page = context.new_page() 8 | 9 | page.goto(url) 10 | # Replace 'your_selector' with the actual CSS selector for the element you want to scrape 11 | elements = page.query_selector_all('your_selector') 12 | 13 | # Extracting information from the elements 14 | for element in elements: 15 | text = element.text_content() 16 | print(text) # Change this to process or save the scraped data 17 | 18 | browser.close() 19 | 20 | if __name__ == "__main__": 21 | # Replace 'https://example.com' with the URL you want to scrape 22 | scrape_website('https://example.com') -------------------------------------------------------------------------------- /chapter4/scrape-with-Request-and-BeautifulSoup.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from bs4 import BeautifulSoup 3 | 4 | # Send a GET request to the website 5 | url = 'https://example.com' 6 | response = requests.get(url) 7 | 8 | # Parse HTML content using Beautiful Soup 9 | soup = BeautifulSoup(response.text, 'html.parser') 10 | 11 | # Extract specific data 12 | title = soup.find('title').text 13 | print(f"Website title: {title}") 14 | 15 | # Find all links on the page 16 | links = soup.find_all('a') 17 | for link in links: 18 | print(link.get('href')) -------------------------------------------------------------------------------- /chapter4/test-XXS.py: -------------------------------------------------------------------------------- 1 | import requests 2 | 
from urllib.parse import quote 3 | 4 | # Target URL to test for XSS vulnerability 5 | target_url = "https://example.com/page?id=" 6 | 7 | # Payloads for testing, modify as needed 8 | xss_payloads = [ "<script>alert('XSS')</script>", "<img src=x onerror=alert('XSS')>", "<svg onload=alert('XSS')>" ] 9 | 10 | def test_xss_vulnerability(url, payload): 11 | # Encode the payload for URL inclusion 12 | encoded_payload = quote(payload) 13 | 14 | # Craft the complete URL with the encoded payload 15 | test_url = f"{url}{encoded_payload}" 16 | 17 | try: 18 | # Send a GET request to the target URL with the payload 19 | response = requests.get(test_url) 20 | 21 | # Check the response for indications of successful exploitation 22 | if payload in response.text: 23 | print(f"XSS vulnerability found! Payload: {payload}") 24 | else: 25 | print(f"No XSS vulnerability with payload: {payload}") 26 | 27 | except requests.RequestException as e: 28 | print(f"Request failed: {e}") 29 | 30 | if __name__ == "__main__": 31 | # Test each payload against the target URL for XSS vulnerability 32 | for payload in xss_payloads: 33 | test_xss_vulnerability(target_url, payload) -------------------------------------------------------------------------------- /chapter4/test-stored-XXS.py: -------------------------------------------------------------------------------- 1 | import requests 2 | 3 | # Target URL to test for stored XSS vulnerability 4 | target_url = "https://example.com/comment" 5 | 6 | # Malicious payload to be stored 7 | xss_payload = "<script>alert('stored XSS')</script>" 8 | 9 | def inject_payload(url, payload): 10 | try: 11 | # Craft a POST request to inject the payload into the vulnerable endpoint 12 | response = requests.post(url, data={"comment": payload}) 13 | # Check if the payload was successfully injected 14 | if response.status_code == 200: 15 | print("Payload injected successfully for stored XSS!") 16 | except requests.RequestException as e: 17 | print(f"Request failed: {e}") 18 | 19 | def retrieve_payload(url): 20 | try: 21 | # Send a GET request to retrieve the stored data 22 | response =
requests.get(url) 23 | 24 | # Check if the payload is present in the retrieved content 25 | if xss_payload in response.text: 26 | print(f"Stored XSS vulnerability found! Payload: {xss_payload}") 27 | else: 28 | print("No stored XSS vulnerability detected.") 29 | 30 | except requests.RequestException as e: 31 | print(f"Request failed: {e}") 32 | 33 | if __name__ == "__main__": 34 | # Inject the malicious payload 35 | inject_payload(target_url, xss_payload) 36 | 37 | # Retrieve the page content to check if the payload is stored and executed 38 | retrieve_payload(target_url) -------------------------------------------------------------------------------- /chapter5/AWS-S3-with-boto.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | s3client = boto3.client( 4 | service_name='s3', 5 | region_name='us-east-1', 6 | aws_access_key_id='ACCESS_KEY', #Update the AWS Access key here 7 | aws_secret_access_key='SECRET_KEY' #Update the AWS Secret key here 8 | ) 9 | response = s3client.list_buckets() 10 | for bucket in response['Buckets']: 11 | print(f'Bucket Name: {bucket["Name"]}') -------------------------------------------------------------------------------- /chapter5/Azure-SDK-blob-storage.py: -------------------------------------------------------------------------------- 1 | from azure.storage.blob import BlobServiceClient 2 | # Connect to the Azure Blob service 3 | connection_string = "" 4 | blob_service_client = BlobServiceClient.from_connection_string(connection_string) 5 | # List containers in the storage account 6 | containers = blob_service_client.list_containers() 7 | for container in containers: 8 | print(f'Container Name: {container.name}') -------------------------------------------------------------------------------- /chapter5/HardCoded-Cred-with-LLM.py: -------------------------------------------------------------------------------- 1 | import openai 2 | import argparse 3 | # Function to check for AWS or
Azure keys in the provided text 4 | def check_for_keys(text): 5 | # Use the OpenAI GPT-3 API to analyze the content 6 | response = openai.Completion.create( 7 | engine="davinci-codex", 8 | prompt=text, 9 | max_tokens=100 10 | ) 11 | generated_text = response['choices'][0]['text'] 12 | # Check the generated text for AWS or Azure keys 13 | if 'AWS_ACCESS_KEY_ID' in generated_text and 'AWS_SECRET_ACCESS_KEY' in generated_text: 14 | print("Potential AWS keys found.") 15 | elif 'AZURE_CLIENT_ID' in generated_text and 'AZURE_CLIENT_SECRET' in generated_text: 16 | print("Potential Azure keys found.") 17 | else: 18 | print("No potential AWS or Azure keys found.") 19 | 20 | # Create argument parser 21 | parser = argparse.ArgumentParser(description='Check for AWS or Azure keys in a JavaScript file.') 22 | parser.add_argument('file_path', type=str, help='Path to the JavaScript file') 23 | 24 | # Parse command line arguments 25 | args = parser.parse_args() 26 | 27 | # Read the JavaScript file content 28 | file_path = args.file_path 29 | try: 30 | with open(file_path, 'r') as file: 31 | javascript_content = file.read() 32 | check_for_keys(javascript_content) 33 | except FileNotFoundError: 34 | print(f"File '{file_path}' not found.") -------------------------------------------------------------------------------- /chapter5/encryption-within-serverless.py: -------------------------------------------------------------------------------- 1 | from cryptography.fernet import Fernet 2 | # Encrypt data in a Lambda function using Fernet encryption 3 | def encrypt_data(data): 4 | key = Fernet.generate_key() 5 | cipher_suite = Fernet(key) 6 | encrypted_data = cipher_suite.encrypt(data.encode()) 7 | return encrypted_data -------------------------------------------------------------------------------- /chapter5/enumerate-AWS-resources.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | # Initialize an AWS session 3 | session = 
boto3.Session(region_name='us-west-1') #Replace with your desired region 4 | 5 | # Create clients for different AWS services 6 | ec2_client = session.client('ec2') 7 | s3_client = session.client('s3') 8 | iam_client = session.client('iam') 9 | 10 | # Enumerate EC2 instances 11 | response = ec2_client.describe_instances() 12 | for reservation in response['Reservations']: 13 | for instance in reservation['Instances']: 14 | print(f"EC2 Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}") 15 | 16 | # Enumerate S3 buckets 17 | buckets = s3_client.list_buckets()
for bucket in buckets['Buckets']: 18 | print(f"S3 Bucket Name: {bucket['Name']}") 19 | 20 | # Enumerate IAM users 21 | users = iam_client.list_users() 22 | for user in users['Users']: 23 | print(f"IAM User Name: {user['UserName']}") -------------------------------------------------------------------------------- /chapter5/enumerate-EC2-instances-AWS.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | # Initialize an AWS session 3 | session = boto3.Session(region_name='us-west-1') #Replace with your desired region 4 | # Create an EC2 client 5 | ec2_client = session.client('ec2') 6 | 7 | # Enumerate EC2 instances 8 | response = ec2_client.describe_instances() 9 | 10 | # Process response to extract instance details 11 | for reservation in response['Reservations']: 12 | for instance in reservation['Instances']: 13 | instance_id = instance['InstanceId'] 14 | instance_state = instance['State']['Name'] 15 | instance_type = instance['InstanceType'] 16 | public_ip = instance.get('PublicIpAddress', 'N/A') 17 | 18 | # Retrieves Public IP if available 19 | print(f"EC2 Instance ID: {instance_id}") 20 | print(f"Instance State: {instance_state}") 21 | print(f"Instance Type: {instance_type}") 22 | print(f"Public IP: {public_ip}") 23 | print("-" * 30) # Separator for better readability
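The two enumeration scripts above both walk the nested Reservations → Instances structure that `describe_instances()` returns. That traversal can be factored into a small helper and exercised without AWS credentials; this is a sketch, and the sample dict below is fabricated solely to mirror the shape of the API response:

```python
def running_instances(response):
    """Yield (InstanceId, PublicIpAddress) pairs for instances in the
    'running' state, given a dict shaped like describe_instances() output."""
    for reservation in response.get('Reservations', []):
        for instance in reservation.get('Instances', []):
            if instance.get('State', {}).get('Name') == 'running':
                yield instance['InstanceId'], instance.get('PublicIpAddress', 'N/A')

# Fabricated sample mirroring the API response shape (illustration only)
sample = {'Reservations': [{'Instances': [
    {'InstanceId': 'i-0abc', 'State': {'Name': 'running'}, 'PublicIpAddress': '203.0.113.5'},
    {'InstanceId': 'i-0def', 'State': {'Name': 'stopped'}},
]}]}
print(list(running_instances(sample)))  # [('i-0abc', '203.0.113.5')]
```

Separating traversal from the API call also makes pagination easy to add later, since each page of a paginated response has the same shape.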
-------------------------------------------------------------------------------- /chapter5/get-AWS-secrets-with-boto.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | # Access AWS Secrets Manager to retrieve secrets 3 | def retrieve_secret(secret_name): 4 | client = boto3.client('secretsmanager') 5 | response = client.get_secret_value(SecretId=secret_name) 6 | secret = response['SecretString'] 7 | return secret -------------------------------------------------------------------------------- /chapter5/get-Cloud-Watch-logs.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | # Get CloudWatch logs for a Lambda function 3 | def get_lambda_logs(lambda_name): 4 | client = boto3.client('logs') 5 | response =client.describe_log_streams(logGroupName=f'/aws/lambda/{lambda_name}') 6 | log_stream_name = response['logStreams'][0]['logStreamName'] 7 | logs = client.get_log_events(logGroupName=f'/aws/lambda/{lambda_name}', logStreamName=log_stream_name) 8 | return logs['events'] -------------------------------------------------------------------------------- /chapter5/get-permissions-of-lambda-function.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | # Check Lambda function's permissions 3 | def check_lambda_permissions(lambda_name): 4 | client = boto3.client('lambda') 5 | response = client.get_policy(FunctionName=lambda_name) 6 | permissions = response['Policy'] 7 | return permissions 8 | # Analyze permissions and enforce least privilege 9 | # Example: Validate permissions against predefined access levels 10 | # Implement corrective actions -------------------------------------------------------------------------------- /chapter5/subprocess-for-Terraform.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | # Use Terraform to apply consistent configurations 3 
| def apply_terraform(): 4 | subprocess.run(["terraform", "init"]) 5 | subprocess.run(["terraform", "apply"]) 6 | # Ensure consistent configurations across environments -------------------------------------------------------------------------------- /chapter5/update-with-webhook.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import json 3 | import requests 4 | 5 | def send_to_webhook(finding): 6 | webhook_url = "YOUR_WEBHOOK_URL_HERE" # Replace this with your actual webhook URL 7 | headers = { 8 | "Content-Type": "application/json" 9 | } 10 | payload = { 11 | "finding_id": finding["FindingUniqueId"], 12 | "severity": finding["Severity"], 13 | "description": finding["Description"], 14 | # Include any other relevant data from the finding 15 | } 16 | try: 17 | response = requests.post(webhook_url, json=payload, headers=headers) 18 | response.raise_for_status() 19 | print(f"Webhook sent for finding: {finding['FindingUniqueId']}") 20 | except requests.RequestException as e: 21 | print(f"Failed to send webhook for finding {finding['FindingUniqueId']}: {e}") 22 | 23 | if __name__ == "__main__": 24 | if len(sys.argv) != 2: 25 | print("Usage: python script.py <path_to_json_file>") 26 | sys.exit(1) 27 | 28 | json_file_path = sys.argv[1] 29 | try: 30 | with open(json_file_path, "r") as file: 31 | data = json.load(file) 32 | except FileNotFoundError: 33 | print(f"File not found: {json_file_path}") 34 | sys.exit(1) 35 | except json.JSONDecodeError as e: 36 | print(f"Error loading JSON: {e}") 37 | sys.exit(1) 38 | 39 | # Send data to webhook for critical findings 40 | for finding in data: 41 | if finding.get("Severity", "").lower() == "critical": 42 | send_to_webhook(finding) -------------------------------------------------------------------------------- /chapter5/validate-cloud-formation.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | # Use CloudFormation validate-template to
check for misconfigurations 3 | def validate_cf_template(template_file): 4 | subprocess.run(["aws", "cloudformation", "validate-template", "--template-body", 5 | f"file://{template_file}"]) 6 | # Validate CloudFormation template for misconfigurations -------------------------------------------------------------------------------- /chapter6/Error-handling.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import sys 3 | import time 4 | 5 | # Define global variables 6 | BEAGLE_API_BASE_URL = "https://api.beaglesecurity.com/rest/v2" 7 | ACCESS_TOKEN = "YOUR_ACCESS_TOKEN" 8 | 9 | # Define maximum retry attempts 10 | MAX_RETRIES = 3 11 | 12 | 13 | def get_projects(): 14 | # Retrieve projects from Beagle Security 15 | url = f"{BEAGLE_API_BASE_URL}/projects" 16 | headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"} 17 | 18 | # Implement retry logic for network issues 19 | retries = 0 20 | while retries < MAX_RETRIES: 21 | try: 22 | response = requests.get(url, headers=headers) 23 | response.raise_for_status() # Raise an exception for HTTP errors 24 | return response.json() 25 | except requests.exceptions.RequestException as e: 26 | print(f"Error fetching projects: {e}") 27 | retries += 1 28 | if retries < MAX_RETRIES: 29 | print("Retrying...") 30 | time.sleep(5) # Wait for 5 seconds before retrying 31 | else: 32 | print("Max retries reached.
Exiting...") 33 | sys.exit(1) 34 | 35 | 36 | def create_project(name): 37 | # Create a new project if it doesn't exist 38 | url = f"{BEAGLE_API_BASE_URL}/projects" 39 | headers = { 40 | "Content-Type": "application/json", 41 | "Authorization": f"Bearer {ACCESS_TOKEN}", 42 | } 43 | data = {"name": name} 44 | 45 | # Implement error handling for API responses 46 | try: 47 | response = requests.post(url, json=data, headers=headers) 48 | response.raise_for_status() 49 | return response.json() 50 | except requests.exceptions.RequestException as e: 51 | print(f"Error creating project: {e}") 52 | sys.exit(1) 53 | 54 | 55 | # Similarly, implement error handling for other functions: create_application, verify_domain, start_test,send_results_to_webhook 56 | -------------------------------------------------------------------------------- /chapter6/Jenkinsfile: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | 4 | stages { 5 | stage('Initialize') { 6 | steps { 7 | // Checkout source code from repository if needed 8 | // For example: git 'https://github.com/your/repository.git' 9 | } 10 | } 11 | 12 | stage('OWASP ZAP Scan') { 13 | steps { 14 | sh ''' 15 | python3 -m venv venv 16 | source venv/bin/activate 17 | pip install python-owasp-zap-v2 requests 18 | python owasp_zap_scan.py 19 | ''' 20 | } 21 | } 22 | } 23 | } -------------------------------------------------------------------------------- /chapter6/ZAP-automation.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from zapv2 import ZAPv2 3 | 4 | def send_webhook_notification(report): 5 | webhook_url = 'https://your.webhook.endpoint' #Replace this with your actual webhook URL 6 | headers = {'Content-Type': 'application/json'} 7 | data = {'report': report} 8 | 9 | try: 10 | response = requests.post(webhook_url,json=data, headers=headers) 11 | response.raise_for_status() 12 | print("Webhook notification 
sent successfully.") 13 | except requests.exceptions.RequestException as e: 14 | print(f"Failed to send webhook notification: {e}") 15 | 16 | def main(): 17 | import time  # used below to poll scan status
# Step 2: Initialize OWASP ZAP Session 18 | zap = ZAPv2() 19 | 20 | # Step 3: Configure Target URLs 21 | target_url = 'http://example.com' 22 | 23 | # Step 4: Perform Spider and Active Scan 24 | scan_id = zap.spider.scan(target_url) 25 | while int(zap.spider.status(scan_id)) < 100:  # poll until the spider finishes
        time.sleep(2) 26 | scan_id = zap.ascan.scan(target_url) 27 | while int(zap.ascan.status(scan_id)) < 100:  # poll until the active scan finishes
        time.sleep(2) 28 | 29 | # Step 5: Get Scan Results 30 | alerts = zap.core.alerts() 31 | for alert in alerts: 32 | print('Alert: {}'.format(alert)) 33 | 34 | # Step 6: Generate Report 35 | report = zap.core.htmlreport() 36 | 37 | # Step 7: Send Webhook Notification 38 | send_webhook_notification(report) 39 | 40 | with open('report.html', 'w') as f: 41 | f.write(report) 42 | 43 | if __name__ == "__main__": 44 | main() 45 | -------------------------------------------------------------------------------- /chapter6/beagle-security-automation.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import sys 3 | 4 | # Define global variables 5 | BEAGLE_API_BASE_URL = "https://api.beaglesecurity.com/rest/v2" 6 | ACCESS_TOKEN = "YOUR_ACCESS_TOKEN" 7 | 8 | def get_projects(): 9 | # Retrieve projects from Beagle Security 10 | url = f"{BEAGLE_API_BASE_URL}/projects" 11 | headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"} 12 | response = requests.get(url, headers=headers) 13 | return response.json() 14 | 15 | def create_project(name): 16 | # Create a new project if it doesn't exist 17 | url = f"{BEAGLE_API_BASE_URL}/projects" 18 | headers = { 19 | "Content-Type": "application/json", 20 | "Authorization": f"Bearer {ACCESS_TOKEN}", 21 | } 22 | data = {"name": name} 23 | response = requests.post(url, json=data, headers=headers) 24 | return response.json() 25 | 26 | def create_application(project_id, name, url): 27 | # Create a new application
under the specified project 28 | endpoint = f"{BEAGLE_API_BASE_URL}/applications" 29 | headers = { 30 | "Content-Type": "application/json", 31 | "Authorization": f"Bearer {ACCESS_TOKEN}", 32 | } 33 | data = {"projectId": project_id, "name": name, "url": url} 34 | response = requests.post(endpoint, json=data, headers=headers) 35 | return response.json() 36 | 37 | def verify_domain(application_token): 38 | # Verify domain ownership for the application 39 | url = f"{BEAGLE_API_BASE_URL}/applications/signature?application_token={application_token}" 40 | headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"} 41 | response = requests.get(url, headers=headers) 42 | return response.json() 43 | 44 | def start_test(application_token): 45 | # Start a security test for the specified application 46 | url = f"{BEAGLE_API_BASE_URL}/test/start" 47 | headers = { 48 | "Content-Type": "application/json", 49 | "Authorization": f"Bearer {ACCESS_TOKEN}", 50 | } 51 | data = {"applicationToken": application_token} 52 | response = requests.post(url, json=data, headers=headers) 53 | return response.json() 54 | 55 | def send_results_to_webhook(application_token, result_token, webhook_url): 56 | # Get test result 57 | url = f"{BEAGLE_API_BASE_URL}/test/result?application_token={application_token}&result_token={result_token}" 58 | headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"} 59 | response = requests.get(url, headers=headers) 60 | test_result = response.json() 61 | 62 | # Send result to webhook 63 | webhook_data = { 64 | "application_token": application_token, 65 | "result_token": result_token, 66 | "result": test_result, 67 | } 68 | webhook_response = requests.post(webhook_url, json=webhook_data) 69 | return webhook_response.status_code 70 | 71 | def main(): 72 | # Check if project name argument is provided 73 | if len(sys.argv) < 2: 74 | print("Usage: python script.py <project_name>") 75 | sys.exit(1) 76 | 77 | # Extract project name from command-line arguments 78 | project_name = sys.argv[1] 79 | # Example usage
80 | application_name = "Your Application" 81 | application_url = "https://your-application-url.com" 82 | webhook_url = "https://your-webhook-url.com" 83 | 84 | # Retrieve projects or create a new one 85 | projects = get_projects() 86 | project_id = projects.get(project_name) 87 | if not project_id: 88 | new_project = create_project(project_name) 89 | project_id = new_project["id"] 90 | 91 | # Create a new application under the project 92 | new_application = create_application(project_id,application_name, application_url) 93 | application_token = new_application["applicationToken"] 94 | 95 | # Verify domain ownership 96 | domain_verification_signature = verify_domain(application_token) 97 | 98 | # Start a security test 99 | test_start_response = start_test(application_token) 100 | result_token = test_start_response["resultToken"] 101 | 102 | # Send results to webhook 103 | webhook_status_code = send_results_to_webhook(application_token, result_token, webhook_url) 104 | print(f"Webhook status code: {webhook_status_code}") 105 | 106 | if __name__ == "__main__": 107 | main() 108 | -------------------------------------------------------------------------------- /chapter6/logger-implementation.py: -------------------------------------------------------------------------------- 1 | # Import necessary libraries 2 | import logging 3 | 4 | # Configure logging 5 | logging.basicConfig(filename='automation.log',level=logging.INFO) 6 | 7 | 8 | def main(): 9 | # Configure logging 10 | logger = logging.getLogger(__name__) 11 | 12 | # Example usage 13 | project_name = "Your Project" 14 | application_name = "Your Application" 15 | application_url = "https://your-application-url.com" 16 | webhook_url = "https://your-webhook-url.com" 17 | 18 | try: 19 | # Retrieve projects or create a new one 20 | projects = get_projects() 21 | project_id = projects.get(project_name) 22 | if not project_id: 23 | new_project = create_project(project_name) 24 | project_id = new_project["id"] 25 | 26 | 
# Create a new application under the project 27 | new_application = create_application(project_id, application_name, application_url) 28 | application_token = new_application["applicationToken"] 29 | 30 | # Verify domain ownership 31 | domain_verification_signature = verify_domain(application_token) 32 | 33 | # Start a security test 34 | test_start_response = start_test(application_token) 35 | result_token = test_start_response["resultToken"] 36 | 37 | # Send results to webhook 38 | webhook_status_code = send_results_to_webhook(application_token, result_token, webhook_url) 39 | logger.info(f"Webhook status code: {webhook_status_code}") 40 | except Exception as e: 41 | logger.error(f"An error occurred: {e}", exc_info=True) 42 | 43 | 44 | if __name__ == "__main__": 45 | main() -------------------------------------------------------------------------------- /chapter7/AWS-compliance-audit.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import requests 3 | import json 4 | 5 | class ComplianceAutomationTool: 6 | def __init__(self, iam_client): 7 | self.iam_client = iam_client 8 | 9 | def conduct_compliance_audit(self): 10 | # Retrieve user access permissions from IAM system 11 | 12 | users = self.iam_client.list_users()['Users'] 13 | # Implement compliance checks 14 | excessive_permissions_users = self.check_excessive_permissions(users) 15 | return excessive_permissions_users 16 | 17 | def check_excessive_permissions(self, users): 18 | # Check for users with excessive permissions 19 | excessive_permissions_users = [user['UserName'] for user in users if self.has_excessive_permissions(user)] 20 | return excessive_permissions_users 21 | 22 | def send_results_to_webhook(self, excessive_permissions_users, webhook_url): 23 | # Prepare payload with audit results 24 | payload = { 25 | 'excessive_permissions_users': excessive_permissions_users, 26 | } 27 | 28 | # Send POST request to webhook URL 29 | response =
requests.post(webhook_url, json=payload) 30 | 31 | # Check if request was successful 32 | if response.status_code == 200: 33 | print("Audit results sent to webhook successfully.") 34 | else: 35 | print("Failed to send audit results to webhook. Status code:", response.status_code) 36 | 37 | # Usage example 38 | def main(): 39 | # Initialize IAM client 40 | iam_client = boto3.client('iam') 41 | 42 | # Instantiate ComplianceAutomationTool with IAM client 43 | compliance_automation_tool = ComplianceAutomationTool(iam_client) 44 | 45 | # Conduct compliance audit 46 | excessive_permissions_users = compliance_automation_tool.conduct_compliance_audit() 47 | 48 | # Define webhook URL 49 | webhook_url = 'https://example.com/webhook' # Replace with actual webhook URL 50 | 51 | # Send audit results to webhook 52 | compliance_automation_tool.send_results_to_webhook(excessive_permissions_users, webhook_url) 53 | 54 | 55 | if __name__ == "__main__": 56 | main() -------------------------------------------------------------------------------- /chapter7/pandas-for-intrusion-detection.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | 3 | # Read security incident data from CSV file into a DataFrame 4 | df = pd.read_csv('security_incidents.csv') 5 | # Perform data analysis and exploration 6 | 7 | # Example: Calculate the total number of incidents by severity 8 | 9 | incident_count_by_severity = df['Severity'].value_counts() 10 | # Example: Filter incidents with high severity 11 | high_severity_incidents = df[df['Severity'] == 'High'] 12 | # Example: Generate summary statistics for incidents by category
incident_summary_by_category = df.groupby('Category').agg({'Severity': 'count', 'Duration': 'mean'}) 13 | # Output analysis results 14 | 15 | print("Incident Count by Severity:") 16 | print(incident_count_by_severity) 17 | print("\nHigh Severity Incidents:") 18 | print(high_severity_incidents) 19 | print("\nIncident Summary
by Category:") 20 | print(incident_summary_by_category) 21 | -------------------------------------------------------------------------------- /chapter7/retriave-threat-intelligence.py: -------------------------------------------------------------------------------- 1 | import requests 2 | 3 | class ThreatIntelligenceIntegration: 4 | def __init__(self, api_key): 5 | self.api_key = api_key 6 | self.base_url = 'https://api.threatintelligenceplatform.com' 7 | 8 | def fetch_threat_data(self, ip_address): 9 | # Construct API request URL 10 | url = f"{self.base_url}/threats?ip={ip_address}&apikey={self.api_key}" 11 | 12 | # Send GET request to API endpoint 13 | response = requests.get(url) 14 | 15 | # Parse response and extract threat data 16 | if response.status_code == 200: 17 | threat_data = response.json() 18 | return threat_data 19 | else: 20 | print("Failed to fetch threat data from API.") 21 | return None 22 | 23 | # Usage example 24 | def main(): 25 | # Initialize ThreatIntelligenceIntegration with API key 26 | api_key = 'your_api_key' 27 | threat_intel_integration = ThreatIntelligenceIntegration(api_key) 28 | 29 | # Example IP address for demonstration (documentation range) 30 | ip_address = '203.0.113.10' 31 | 32 | # Fetch threat data for the IP address 33 | threat_data = threat_intel_integration.fetch_threat_data(ip_address) 34 | 35 | # Process threat data and incorporate it into compliance audit 36 | if threat_data: 37 | # Process threat data (e.g., extract threat categories, severity) 38 | # Incorporate threat data into compliance audit logic 39 | print("Threat data fetched successfully:", threat_data) 40 | else: 41 | print("No threat data available for the specified IP address.") 42 | 43 | if __name__ == "__main__": 44 | main() -------------------------------------------------------------------------------- /chapter7/scikit-learn-automation: -------------------------------------------------------------------------------- 1 | from sklearn.ensemble import IsolationForest 2 | import
numpy as np 3 | 4 | # Generate sample network traffic data (replace with actual data) 5 | data = np.random.randn(1000, 2) 6 | 7 | # Train Isolation Forest model for anomaly detection 8 | model = IsolationForest() 9 | model.fit(data) 10 | 11 | # Predict anomalies in the data 12 | anomaly_predictions = model.predict(data) 13 | 14 | # Output anomaly predictions 15 | print("Anomaly Predictions:") 16 | print(anomaly_predictions) 17 | -------------------------------------------------------------------------------- /chapter8/asymmetric-encryption-example.py: -------------------------------------------------------------------------------- 1 | from cryptography.hazmat.primitives.asymmetric import rsa 2 | from cryptography.hazmat.primitives import serialization 3 | from cryptography.hazmat.primitives.asymmetric import padding 4 | from cryptography.hazmat.primitives import hashes 5 | 6 | # Generate a private key 7 | private_key = rsa.generate_private_key( 8 | public_exponent=65537, 9 | key_size=2048, 10 | ) 11 | 12 | # Generate the corresponding public key 13 | public_key = private_key.public_key() 14 | 15 | # Serialize the private key 16 | pem = private_key.private_bytes( 17 | encoding=serialization.Encoding.PEM, 18 | format=serialization.PrivateFormat.TraditionalOpenSSL, 19 | encryption_algorithm=serialization.BestAvailableEncryption(b'mypassword') 20 | 21 | ) 22 | 23 | # Serialize the public key 24 | public_pem = public_key.public_bytes( 25 | encoding=serialization.Encoding.PEM, 26 | format=serialization.PublicFormat.SubjectPublicKeyInfo 27 | ) 28 | 29 | # Encrypt a message using the public key 30 | message = b"Secret message" 31 | cipher_text = public_key.encrypt( 32 | message, 33 | padding.OAEP( 34 | mgf=padding.MGF1(algorithm=hashes.SHA256()), 35 | algorithm=hashes.SHA256(), 36 | label=None 37 | ) 38 | ) 39 | print(f"Cipher Text: {cipher_text}") 40 | 41 | # Decrypt the message using the private key 42 | plain_text = private_key.decrypt( 43 | 44 | cipher_text, 45 | 
padding.OAEP( 46 | mgf=padding.MGF1(algorithm=hashes.SHA256()), 47 | algorithm=hashes.SHA256(), 48 | label=None 49 | ) 50 | ) 51 | print(f"Plain Text: {plain_text.decode()}") -------------------------------------------------------------------------------- /chapter8/bcrypt-example.py: -------------------------------------------------------------------------------- 1 | import bcrypt 2 | 3 | def hash_password(password): 4 | salt = bcrypt.gensalt() 5 | return bcrypt.hashpw(password.encode(), salt) 6 | 7 | 8 | def check_password(password, hashed): 9 | return bcrypt.checkpw(password.encode(), hashed) 10 | 11 | # Example usage: 12 | password = "securepassword" 13 | hashed_password = hash_password(password) 14 | print(f"Hashed Password: {hashed_password}") 15 | 16 | # Verify the password 17 | is_valid = check_password("securepassword", hashed_password) 18 | print(f"Password is valid: {is_valid}") -------------------------------------------------------------------------------- /chapter8/hashlib-example.py: -------------------------------------------------------------------------------- 1 | import hashlib 2 | 3 | def hash_password(password): 4 | return hashlib.sha256(password.encode()).hexdigest() 5 | 6 | # Example usage: 7 | password = "securepassword" 8 | hashed_password = hash_password(password) 9 | print(f"Hashed Password: {hashed_password}") -------------------------------------------------------------------------------- /chapter8/symmetric-encryption-example.py: -------------------------------------------------------------------------------- 1 | from cryptography.fernet import Fernet 2 | 3 | 4 | # Generate a key 5 | key = Fernet.generate_key() 6 | cipher_suite = Fernet(key) 7 | 8 | # Encrypt a message 9 | cipher_text = cipher_suite.encrypt(b"Secret message") 10 | print(f"Cipher Text: {cipher_text}") 11 | 12 | # Decrypt the message 13 | plain_text = cipher_suite.decrypt(cipher_text) 14 | print(f"Plain Text: {plain_text.decode()}") 
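One caveat for the symmetric example above: `Fernet.generate_key()` produces a fresh random key on every run, so ciphertext from one run cannot be decrypted in the next unless the key is stored. A common alternative is deriving the key from a password. The sketch below uses only the standard library (PBKDF2-HMAC-SHA256) to produce the 32-byte urlsafe-base64 key format Fernet expects; the password, salt, and iteration count are illustrative values:

```python
import base64
import hashlib

def derive_fernet_key(password: str, salt: bytes, iterations: int = 480_000) -> bytes:
    """Derive a urlsafe-base64-encoded 32-byte key (the format Fernet
    expects) from a password, using stdlib PBKDF2-HMAC-SHA256."""
    raw = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return base64.urlsafe_b64encode(raw)

# The same password and salt always yield the same key, so ciphertext
# survives across runs; store a random per-secret salt alongside the
# ciphertext so the key can be re-derived later.
key = derive_fernet_key("correct horse battery staple", b"demo-salt")
print(len(key))  # 44: a 32-byte key base64-encodes to 44 characters
```

If the `cryptography` package is available, `Fernet(key)` accepts this derived key just like one from `generate_key()`.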
-------------------------------------------------------------------------------- /chapter9/automating-log-analysis.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pandas as pd 3 | 4 | def analyze_logs(log_directory): 5 | for log_file in os.listdir(log_directory): 6 | if log_file.endswith('.log'): 7 | logs = pd.read_csv(os.path.join(log_directory, log_file), delimiter=' ', header=None) 8 | # Define column names (assumes Apache log format) 9 | logs.columns = ['ip', 'identifier', 'user', 10 | 'time', 'request', 'status', 'size', 'referrer', 'user_agent'] 11 | 12 | # Detect failed login attempts (status code 401) 13 | failed_logins = logs[logs['status'].astype(str) == '401'] 14 | 15 | if not failed_logins.empty: 16 | send_alert(f"Failed login attempts detected in {log_file}") 17 | 18 | def send_alert(message): 19 | # Send email alert 20 | import smtplib 21 | from email.mime.text import MIMEText 22 | msg = MIMEText(message) 23 | msg['Subject'] = 'Security Alert' 24 | msg['From'] = 'alert@example.com' 25 | msg['To'] = 'admin@example.com' 26 | s = smtplib.SMTP('localhost') 27 | s.send_message(msg) 28 | s.quit() 29 | 30 | analyze_logs('/var/log/apache2') -------------------------------------------------------------------------------- /chapter9/automating-notification-and-reporting.py: -------------------------------------------------------------------------------- 1 | import pdfkit 2 | import pandas as pd
import os 3 | 4 | def generate_report(logs, filename): 5 | html = logs.to_html() 6 | pdfkit.from_string(html, filename) 7 | 8 | def analyze_logs(log_directory): 9 | for log_file in os.listdir(log_directory): 10 | if log_file.endswith('.log'): 11 | logs = pd.read_csv(os.path.join(log_directory, 12 | log_file), delimiter=' ', header=None) 13 | logs.columns = ['ip', 'identifier', 'user', 'time', 'request', 'status', 'size', 'referrer', 'user_agent'] 14 | generate_report(logs, 15 | f'report_{log_file}.pdf') 16 | send_alert(f"Report
generated for {log_file}") 17 | 18 | def send_alert(message): 19 | import smtplib 20 | from email.mime.text import MIMEText 21 | msg = MIMEText(message) 22 | msg['Subject'] = 'Incident Report' 23 | msg['From'] = 'alert@example.com' 24 | msg['To'] = 'admin@example.com' 25 | s = smtplib.SMTP('localhost') 26 | s.send_message(msg) 27 | s.quit() 28 | 29 | analyze_logs('/var/log/apache2') -------------------------------------------------------------------------------- /chapter9/automating-quarantine-and-isolation.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import pandas as pd
import os
import requests 3 | 4 | def isolate_ip(ip_address): 5 | subprocess.run(['iptables', '-A', 'INPUT', '-s', 6 | ip_address, '-j', 'DROP']) 7 | 8 | def analyze_logs(log_directory): 9 | for log_file in os.listdir(log_directory): 10 | if log_file.endswith('.log'): 11 | logs = pd.read_csv(os.path.join(log_directory, log_file), delimiter=' ', header=None) 12 | logs.columns = ['ip', 'identifier', 'user', 'time', 'request', 'status', 'size', 'referrer', 'user_agent'] 13 | 14 | for ip in logs['ip'].unique(): 15 | threat_info = enrich_with_threat_intelligence(ip) 16 | if threat_info.get('malicious'): 17 | isolate_ip(ip) 18 | send_alert(f"Isolated malicious IP: {ip}") 19 | 20 | def send_alert(message): 21 | import smtplib 22 | from email.mime.text import MIMEText 23 | msg = MIMEText(message) 24 | msg['Subject'] = 'Security Alert' 25 | msg['From'] = 'alert@example.com' 26 | msg['To'] = 'admin@example.com' 27 | s = smtplib.SMTP('localhost') 28 | s.send_message(msg) 29 | s.quit() 30 | 31 | def enrich_with_threat_intelligence(ip_address): 32 | response = requests.get(f"https://api.threatintelligence.com/{ip_address}") 33 | return response.json() 34 | 35 | analyze_logs('/var/log/apache2') -------------------------------------------------------------------------------- /chapter9/automating-threat-hunting-task.py:
--------------------------------------------------------------------------------
import pandas as pd
import requests

def collect_data(api_url):
    response = requests.get(api_url)
    return response.json()

def parse_logs(log_file):
    logs = pd.read_csv(log_file, delimiter=' ', header=None)
    logs.columns = ['ip', 'identifier', 'user', 'time', 'request',
                    'status', 'size', 'referrer', 'user_agent']
    return logs

def extract_iocs(threat_feed):
    iocs = []
    for entry in threat_feed:
        iocs.extend(entry['indicators'])
    return iocs

def search_iocs(logs, iocs):
    for ioc in iocs:
        # regex=False: match the indicator literally, not as a regular expression
        matches = logs[logs['request'].str.contains(ioc, regex=False)]
        if not matches.empty:
            print(f"IOC detected: {ioc}")

threat_feed = collect_data('https://api.threatintelligence.com/feed')
iocs = extract_iocs(threat_feed)
logs = parse_logs('access.log')

search_iocs(logs, iocs)
--------------------------------------------------------------------------------
/chapter9/automating-threat-intelligence-integration.py:
--------------------------------------------------------------------------------
import os

import requests
import pandas as pd

def enrich_with_threat_intelligence(ip_address):
    response = requests.get(f"https://api.threatintelligence.com/{ip_address}")
    return response.json()

def analyze_logs(log_directory):
    for log_file in os.listdir(log_directory):
        if log_file.endswith('.log'):
            logs = pd.read_csv(os.path.join(log_directory, log_file),
                               delimiter=' ', header=None)
            logs.columns = ['ip', 'identifier', 'user', 'time', 'request',
                            'status', 'size', 'referrer', 'user_agent']
            for ip in logs['ip'].unique():
                threat_info = enrich_with_threat_intelligence(ip)
                if threat_info.get('malicious'):
                    send_alert(f"Malicious IP detected: {ip}")

def send_alert(message):
    import smtplib
    from email.mime.text import MIMEText
    msg = MIMEText(message)
    msg['Subject'] = 'Security Alert'
    msg['From'] = 'alert@example.com'
    msg['To'] = 'admin@example.com'
    s = smtplib.SMTP('localhost')
    s.send_message(msg)
    s.quit()

analyze_logs('/var/log/apache2')
--------------------------------------------------------------------------------
/chapter9/data-collection-from-API.py:
--------------------------------------------------------------------------------
import requests

def collect_data(api_url):
    response = requests.get(api_url)
    return response.json()

data = collect_data('https://api.example.com/logs')
--------------------------------------------------------------------------------
/chapter9/detect-communications-with-known-IPs.py:
--------------------------------------------------------------------------------
from scapy.all import sniff, IP

def analyze_packet(packet):
    if IP in packet:
        ip_src = packet[IP].src
        ip_dst = packet[IP].dst
        # Example: detecting communication with a known malicious IP
        if ip_dst in malicious_ips:
            print(f"Suspicious communication detected: {ip_src} -> {ip_dst}")

malicious_ips = ['192.168.1.1', '10.0.0.1']
sniff(filter="ip", prn=analyze_packet)
--------------------------------------------------------------------------------
/chapter9/generating-incident-report.py:
--------------------------------------------------------------------------------
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def generate_report():
    c = canvas.Canvas("incident_report.pdf", pagesize=letter)
    c.drawString(100, 750, "Incident Report")
    c.drawString(100, 730, "Threat Detected: Yes")
    c.drawString(100, 710, "Response Actions Taken:")
    c.drawString(120, 690, "1. System Isolated")
    c.drawString(120, 670, "2. Threat Eradicated")
    c.drawString(120, 650, "3. Systems Recovered")
    c.save()

# Generate the report
generate_report()
--------------------------------------------------------------------------------
/chapter9/incident-response-workflow.py:
--------------------------------------------------------------------------------
import subprocess

# Define the incident response workflow
def incident_response_workflow():
    # Step 1: Detect threat
    threat_detected = detect_threat()
    if threat_detected:
        # Step 2: Analyze threat
        analyze_threat()
        # Step 3: Contain threat
        contain_threat()
        # Step 4: Eradicate threat
        eradicate_threat()
        # Step 5: Recover systems
        recover_systems()

def detect_threat():
    # Example threat detection logic
    # This could involve checking logs, alerts, or SIEM notifications
    return True

def analyze_threat():
    # Example threat analysis logic
    # This could involve deeper inspection of logs, network traffic analysis, or malware analysis
    print("Analyzing threat...")

def contain_threat():
    # Example threat containment logic
    # This could involve isolating the affected machine from the network
    subprocess.run(["ifconfig", "eth0", "down"])
    print("Threat contained.")

def eradicate_threat():
    # Example threat eradication logic
    # This could involve removing malware, closing vulnerabilities, or patching systems
    print("Eradicating threat...")

def recover_systems():
    # Example system recovery logic
    # This could involve restoring systems from backups, validating system integrity, and bringing systems back online
    print("Recovering systems...")

# Execute the workflow
incident_response_workflow()
--------------------------------------------------------------------------------
/chapter9/logging-and-reporting.py:
--------------------------------------------------------------------------------
import logging
import time

# Configure logging
logging.basicConfig(filename='incident_response.log', level=logging.INFO)

def log_action(action):
    logging.info(f"{action} performed at {time.strftime('%Y-%m-%d %H:%M:%S')}")

# Example logging actions
log_action("Threat detected")
log_action("System isolated")
log_action("Threat eradicated")
log_action("Systems recovered")
--------------------------------------------------------------------------------
/chapter9/preditct-anomalies.py:
--------------------------------------------------------------------------------
from sklearn.ensemble import IsolationForest

# Assumes `logs` has been loaded as in process-Apache-log.py.
# 'request' is a string column, so fit on the numeric features only
# (or derive numeric features from it first).
features = logs[['status', 'size']]

# Train Isolation Forest model
model = IsolationForest(contamination=0.01)
model.fit(features)

# Predict anomalies
logs['anomaly'] = model.predict(features)
logs['anomaly'] = logs['anomaly'].map({1: 'normal', -1: 'anomaly'})
--------------------------------------------------------------------------------
/chapter9/process-Apache-log.py:
--------------------------------------------------------------------------------
import pandas as pd

# Load Apache log file
log_file = 'access.log'
logs = pd.read_csv(log_file, delimiter=' ', header=None)

# Define column names. With a space delimiter, the bracketed timestamp
# ("[10/Oct/2000:13:55:36 -0700]") splits into two fields, so the UTC
# offset lands in its own column.
logs.columns = ['ip', 'identifier', 'user', 'time', 'tz',
                'request', 'status', 'size', 'referrer', 'user_agent']

# Recombine the two fields and convert to datetime
logs['time'] = pd.to_datetime(logs['time'] + ' ' + logs['tz'],
                              format='[%d/%b/%Y:%H:%M:%S %z]')
--------------------------------------------------------------------------------
/chapter9/visualize-anomalies.py:
--------------------------------------------------------------------------------
import matplotlib.pyplot as plt
import seaborn as sns

# Assumes `logs` carries the 'anomaly' labels from preditct-anomalies.py.
# Plotting anomalies
sns.scatterplot(x='time', y='size', hue='anomaly', data=logs)
plt.title('Log Anomalies Over Time')
plt.xlabel('Time')
plt.ylabel('Request Size')
plt.show()
--------------------------------------------------------------------------------
/chapter9/visualize-log-data.py:
--------------------------------------------------------------------------------
import pandas as pd
import matplotlib.pyplot as plt

def parse_logs(log_file):
    logs = pd.read_csv(log_file, delimiter=' ', header=None)
    logs.columns = ['ip', 'identifier', 'user', 'time', 'request',
                    'status', 'size', 'referrer', 'user_agent']
    return logs

def visualize_logs(logs):
    plt.hist(logs['status'], bins=range(100, 600, 100), edgecolor='black')
    plt.title('HTTP Status Codes')
    plt.xlabel('Status Code')
    plt.ylabel('Frequency')
    plt.show()

logs = parse_logs('access.log')
visualize_logs(logs)
--------------------------------------------------------------------------------
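A note on log parsing: the chapter9 scripts split access-log lines on single spaces, which mis-handles the quoted request, referrer, and user-agent fields of the Apache Combined Log Format (they contain spaces). A hedged sketch of a regex-based alternative; the sample line is made up and the field names simply mirror the column names used above:

```python
import re

# Combined Log Format: ip identd user [time] "request" status size "referrer" "user_agent"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) (?P<identifier>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
    r'(?: "(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)")?'
)

def parse_line(line):
    """Parse one access-log line into a dict, or None if it does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

line = ('203.0.113.7 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
        '"http://www.example.com/start.html" "Mozilla/4.08"')
entry = parse_line(line)
```

Feeding a list of such dicts to `pd.DataFrame` yields the same columns the scripts expect, without the quoted fields being split apart.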
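On preditct-anomalies.py: scikit-learn models need numeric input, so the string `request` column cannot be fitted on directly, but it can still contribute a derived numeric feature. A minimal sketch with made-up toy data and an illustrative `request_len` feature (not part of the original scripts):

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy stand-in for the parsed Apache logs: nine ordinary requests plus one outlier
logs = pd.DataFrame({
    'request': ['GET /index.html HTTP/1.1'] * 9 + ['GET /' + 'A' * 500 + ' HTTP/1.1'],
    'status': [200] * 9 + [500],
    'size': [1024] * 9 + [900000],
})

# Derive a numeric feature from the string column instead of fitting on it
logs['request_len'] = logs['request'].str.len()
features = logs[['request_len', 'status', 'size']]

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(features)

logs['anomaly'] = model.predict(features)
logs['anomaly'] = logs['anomaly'].map({1: 'normal', -1: 'anomaly'})
```

With these toy values the oversized request stands out on all three features and is the one row flagged as an anomaly.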
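On the IOC search in automating-threat-hunting-task.py: indicators of compromise often contain regex metacharacters (`.`, `?`, `+`), so matching them against the request column calls for literal substring matching. A small runnable illustration with made-up data and a hypothetical indicator:

```python
import pandas as pd

logs = pd.DataFrame({'request': [
    'GET /index.html HTTP/1.1',
    'GET /evil.com/payload?x=1 HTTP/1.1',
]})

# Hypothetical indicator containing regex metacharacters ('.', '?', '=')
iocs = ['evil.com/payload?x=1']

def search_iocs(logs, iocs):
    hits = []
    for ioc in iocs:
        # regex=False makes '?' and '.' plain characters instead of metacharacters
        matches = logs[logs['request'].str.contains(ioc, regex=False)]
        if not matches.empty:
            hits.append(ioc)
    return hits

found = search_iocs(logs, iocs)
```

Interpreted as a regex, the same indicator would make the `?` a quantifier and the dots wildcards, so the search could silently miss (or falsely match) log lines.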
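Several of the scripts above repeat the same `send_alert` body. One way to reduce the duplication is to separate message construction from delivery; a sketch using the placeholder addresses from the scripts, with the actual `smtplib.SMTP('localhost')` delivery left to the caller since it needs a live mail server:

```python
from email.mime.text import MIMEText

def build_alert(message, subject='Security Alert',
                sender='alert@example.com', recipient='admin@example.com'):
    """Build the alert email; delivery via smtplib is left to the caller."""
    msg = MIMEText(message)
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    return msg

msg = build_alert("Failed login attempts detected in access.log")
```

Each script can then keep a one-line `send_alert` that builds the message and hands it to a shared SMTP helper, and the construction logic stays testable without a mail server.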