├── LICENSE
├── README.md
├── chapter1
│   ├── SSH-connection-with-paramiko.py
│   ├── parsing-HTML-with-BeautifulSoap.py
│   ├── scapy-ICMP.py
│   ├── sending-http-request.py
│   └── simple-TCP-client.py
├── chapter2
│   ├── ARP-scan.py
│   ├── port-scan.py
│   ├── portscanner-with-decorators-and-generator
│   │   ├── README.md
│   │   ├── portscanner
│   │   │   ├── __init__.py
│   │   │   └── portscanner.py
│   │   └── setup.py
│   └── portscanner
│       ├── README.md
│       ├── portscanner
│       │   ├── __init__.py
│       │   └── portscanner.py
│       └── setup.py
├── chapter3
│   ├── HTML-analysis.py
│   ├── JS-analysis.py
│   ├── http-headers-analysis.py
│   ├── security-headers-check.py
│   └── web-technology-fingerprinting-with-wappalyzer.py
├── chapter4
│   ├── SQLmap-with-MITM-proxy.py
│   ├── detect-potential-SQLInjection.py
│   ├── flask-application-with-IDOR.py
│   ├── parameterized-queries-in-SQLite3.py
│   ├── scrape-with-PlayWright-advanced-with-crawler.py
│   ├── scrape-with-PlayWright-advanced.py
│   ├── scrape-with-PlayWright.py
│   ├── scrape-with-Request-and-BeautifulSoup.py
│   ├── test-XXS.py
│   └── test-stored-XXS.py
├── chapter5
│   ├── AWS-S3-with-boto.py
│   ├── Azure-SDK-blob-storage.py
│   ├── HardCoded-Cred-with-LLM.py
│   ├── encryption-within-serverless.py
│   ├── enumerate-AWS-resources.py
│   ├── enumerate-EC2-instances-AWS.py
│   ├── get-AWS-secrets-with-boto.py
│   ├── get-Cloud-Watch-logs.py
│   ├── get-permissions-of-lambda-function.py
│   ├── subprocess-for-Terraform.py
│   ├── update-with-webhook.py
│   └── validate-cloud-formation.py
├── chapter6
│   ├── Error-handling.py
│   ├── Jenkinsfile
│   ├── ZAP-automation.py
│   ├── beagle-security-automation.py
│   └── logger-implementation.py
├── chapter7
│   ├── AWS-compliance-audit.py
│   ├── pandas-for-intrusion-detection.py
│   ├── retriave-threat-intelligence.py
│   └── scikit-learn-automation
├── chapter8
│   ├── asymmetric-encryption-example.py
│   ├── bcrypt-example.py
│   ├── hashlib-example.py
│   └── symmetric-encryption-example.py
└── chapter9
    ├── automating-log-analysis.py
    ├── automating-notification-and-reporting.py
    ├── automating-quarantine-and-isolation.py
    ├── automating-threat-hunting-task.py
    ├── automating-threat-intelligence-integration.py
    ├── data-collection-from-API.py
    ├── detect-communications-with-known-IPs.py
    ├── generating-incident-report.py
    ├── incident-response-workflow.py
    ├── logging-and-reporting.py
    ├── preditct-anomalies.py
    ├── process-Apache-log.py
    ├── visualize-anomalies.py
    └── visualize-log-data.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Packt
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Offensive Security Using Python
2 |
3 |
4 |
5 | This is the code repository for [Offensive Security Using Python](https://www.packtpub.com/en-us/product/offensive-security-using-python-9781835468166), published by Packt.
6 |
7 | **A hands-on guide to offensive tactics and threat mitigation using practical strategies**
8 |
9 | ## What is this book about?
10 | Offensive Security Using Python is your go-to manual for mastering the fast-paced field of offensive security. This book is packed with valuable insights, real-world examples, and hands-on activities to help you leverage Python to navigate the complicated world of web security, exploit vulnerabilities, and automate challenging security tasks.
11 | From detecting vulnerabilities to exploiting them with cutting-edge Python techniques, you’ll gain practical insights into web security, along with guidance on how to use automation to improve the accuracy and effectiveness of your security activities. You’ll also learn how to design personalized security automation tools.
12 |
13 | This book covers the following exciting features:
14 | * Familiarize yourself with advanced Python techniques tailored to security professionals’ needs
15 | * Understand how to exploit web vulnerabilities using Python
16 | * Build automated security pipelines using Python and third-party tools
17 | * Develop custom security automation tools to streamline your workflow
18 | * Implement secure coding practices with Python to boost your applications
19 | * Discover Python-based threat detection and incident response techniques
20 |
21 | If you feel this book is for you, get your [copy](https://www.amazon.com/Offensive-Security-Using-Python-Handbook-ebook/dp/B0CV82CWGQ/) today!
22 |
23 |
24 |
25 | ## Instructions and Navigations
26 | All of the code is organized into folders. For example, chapter3.
27 |
28 | The commands will look like the following:
29 | ```
30 | pip install wapiti3
31 | wapiti -h
32 | ```
33 |
34 | **Following is what you need for this book:**
35 | This book is for a diverse audience interested in cybersecurity and offensive security. Whether you're an experienced Python developer looking to enhance your offensive security skills, an ethical hacker, a penetration tester eager to learn advanced Python techniques, or a cybersecurity enthusiast exploring Python's potential in vulnerability analysis, you'll find valuable insights. If you have a solid foundation in the Python programming language and are eager to understand cybersecurity intricacies, this book will help you get started on the right foot.
36 |
37 | With the following software and hardware list, you can run all the code files present in the book (Chapters 1-9).
38 |
39 | ### Software and Hardware List
40 |
41 | | Chapter | Software required | OS required |
42 | | -------- | ---------------------------------| ----------------------------------|
43 | | 1-9      | Python 3, Visual Studio Code     | Windows, macOS, and Linux (any)    |
44 |
45 |
46 | ### Related products
47 |
48 | * Zabbix 7 IT Infrastructure Monitoring Cookbook [[Packt]](https://www.packtpub.com/en-in/product/zabbix-7-it-infrastructure-monitoring-cookbook-9781801078320) [[Amazon]](https://www.amazon.in/Zabbix-Infrastructure-Monitoring-Cookbook-maintaining-ebook/dp/B0C53V9XPG)
49 |
50 | * Automating Security Detection Engineering [[Packt]](https://www.packtpub.com/en-in/product/automating-security-detection-engineering-9781837636419) [[Amazon]](https://www.amazon.com/Automating-Security-Detection-Engineering-hands-ebook/dp/B0D343MGWD)
51 |
52 | ## Get to Know the Author
53 | **Rejah Rehim**
54 | He is a visionary in cybersecurity who serves as CEO and cofounder of Beagle Security. With a 15-year track record, he is a driving force in the field, renowned for Python Penetration Testing Cookbook and Effective Python Penetration Testing.
55 |
56 | **Manindar Mohan**
57 | He is a cybersecurity architect with 8 years of expertise and a vital member of the Elite Team at Kerala Police Cyberdome. He is an ISMS Lead Auditor and an OWASP Kerala Chapter board contributor. Although he began his career in aircraft engineering, his passion for cyberspace led him to become a cybersecurity architect.
58 |
--------------------------------------------------------------------------------
/chapter1/SSH-connection-with-paramiko.py:
--------------------------------------------------------------------------------
1 | # SSH Connection with Paramiko
2 | import paramiko
3 | # Create an SSH client
4 | ssh_client = paramiko.SSHClient()
5 | # Automatically add the server's host key
6 | ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
7 | # Connect to the SSH server
8 | ssh_client.connect("example.com", username="user",
9 | password="password") # Update the credentials here
10 |
11 | # Execute a command
12 | stdin, stdout, stderr = ssh_client.exec_command("ls -l")
13 |
14 | # Print the command output
15 | print(stdout.read().decode("utf-8"))
16 |
17 | # Close the SSH connection
18 | ssh_client.close()
19 |
--------------------------------------------------------------------------------
/chapter1/parsing-HTML-with-BeautifulSoap.py:
--------------------------------------------------------------------------------
1 | # Parsing HTML with BeautifulSoup
2 | from bs4 import BeautifulSoup
3 |
4 | html = """
5 |
6 |
7 | Sample Page
8 |
9 |
10 |
This is a sample paragraph.
11 |
12 |
13 | """
14 |
15 | # Parse the HTML
16 | soup = BeautifulSoup(html, "html.parser")
17 | # Extract the text from the paragraph
18 | paragraph = soup.find("p")
19 | print(paragraph.text)
--------------------------------------------------------------------------------
/chapter1/scapy-ICMP.py:
--------------------------------------------------------------------------------
1 | # Creating a Basic ICMP Ping Packet
2 | # Import the IP,ICMP and sr1 from Scapy module
3 | from scapy.all import IP, ICMP, sr1
4 |
5 | # Create an ICMP packet
6 | packet = IP(dst="192.168.1.1") / ICMP()
7 |
8 | # Send the packet and receive a response
9 | response = sr1(packet)
10 |
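11 | # A small addition (not in the original code): summarize the reply, if any
12 | if response:
13 |     response.show()
14 | else:
15 |     print("No reply received (host may be down or blocking ICMP)")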
--------------------------------------------------------------------------------
/chapter1/sending-http-request.py:
--------------------------------------------------------------------------------
1 | # Sending an HTTP GET Request
2 | import requests
3 |
4 | url = "https://examplecode.com"  # Update to your valid URL
5 |
6 | response = requests.get(url)
7 |
8 | # Print the response content
9 | print(response.text)
--------------------------------------------------------------------------------
/chapter1/simple-TCP-client.py:
--------------------------------------------------------------------------------
1 | # Creating a Simple TCP Client
2 | import socket
3 | target_host = "example.com"
4 | target_port = 80
5 | # Create a socket object
6 | client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
7 | # Connect to the server
8 | client.connect((target_host, target_port))
9 | # Send data
10 | client.send(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
11 | # Receive data
12 | response = client.recv(4096)
13 | # Print the response
14 | print(response)
15 | 
16 | # Close the connection
17 | client.close()
--------------------------------------------------------------------------------
/chapter2/ARP-scan.py:
--------------------------------------------------------------------------------
1 | # Import the necessary modules from Scapy
2 | from scapy.all import ARP, Ether, srp
3 |
4 | # Function to perform ARP scan
5 | def arp_scan(target_ip):
6 | # Create an ARP request packet
7 | arp_request = ARP(pdst=target_ip)
8 | # Create an Ethernet frame to encapsulate the ARP request
9 | ether_frame = Ether(dst="ff:ff:ff:ff:ff:ff") #Broadcasting to all devices in the network
10 |
11 | # Combine the Ethernet frame and ARP request packet
12 | arp_request_packet = ether_frame / arp_request
13 |
14 | # Send the packet and receive the response
15 | result = srp(arp_request_packet, timeout=3, verbose=False)[0]
16 | # List to store the discovered devices
17 | devices_list = []
18 |
19 | # Parse the response and extract IP and MAC addresses
20 | for sent, received in result:
21 | devices_list.append({'ip': received.psrc,'mac': received.hwsrc})
22 |
23 | return devices_list
24 |
25 | # Function to print scan results
26 | def print_scan_results(devices_list):
27 | print("IP Address\t\tMAC Address")
28 | print("-----------------------------------------")
29 | for device in devices_list:
30 | print(f"{device['ip']}\t\t{device['mac']}")
31 |
32 | # Main function to perform the scan
33 | def main(target_ip):
34 | print(f"Scanning {target_ip}...")
35 | devices_list = arp_scan(target_ip)
36 | print_scan_results(devices_list)
37 |
38 | # Entry point of the script
39 | if __name__ == "__main__":
40 | # Define the target IP range (e.g., "192.168.1.1/24")
41 | target_ip = input("Enter the target IP range (e.g., 192.168.1.1/24): ")
42 | main(target_ip)
--------------------------------------------------------------------------------
/chapter2/port-scan.py:
--------------------------------------------------------------------------------
1 | from portscanner.portscanner import PortScanner
2 |
3 | scanner = PortScanner("192.168.1.1", 200, 202) # update the values
4 | open_ports = scanner.scan_ports()
5 | print("Open ports: ", open_ports)
--------------------------------------------------------------------------------
/chapter2/portscanner-with-decorators-and-generator/README.md:
--------------------------------------------------------------------------------
1 | Port Scan Module with Decorators and Generator
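2 | 
3 | A minimal usage sketch (hypothetical, not part of the original module; assumes the package is installed from this directory with `pip install .`):
4 | 
5 | ```python
6 | from portscanner.portscanner import PortScanner
7 | 
8 | # Placeholder host and port range
9 | scanner = PortScanner("192.168.1.1", 20, 1024)
10 | 
11 | # scan_ports_generator yields open ports lazily as they are found
12 | for port in scanner.scan_ports_generator():
13 |     print(f"Port {port} is open")
14 | ```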
--------------------------------------------------------------------------------
/chapter2/portscanner-with-decorators-and-generator/portscanner/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Offensive-Security-Using-Python/ac1147771080e62dace8cd08e9a916c3040e6d83/chapter2/portscanner-with-decorators-and-generator/portscanner/__init__.py
--------------------------------------------------------------------------------
/chapter2/portscanner-with-decorators-and-generator/portscanner/portscanner.py:
--------------------------------------------------------------------------------
1 | import socket
2 | import time
3 |
4 | #Class Definition
5 | class PortScanner:
6 | def __init__(self, target_host, start_port,end_port):
7 | self.target_host = target_host
8 | self.start_port = start_port
9 | self.end_port = end_port
10 | self.open_ports = []
11 | #timing_decorator Decorator Method
12 | def timing_decorator(func):
13 | def wrapper(*args, **kwargs):
14 | start_time = time.time()
15 | result = func(*args, **kwargs)
16 | end_time = time.time()
17 | print(f"Scanning took {end_time - start_time:.2f} seconds.")
18 | return result
19 | return wrapper
20 | #is_port_open Method
21 | def is_port_open(self, port):
22 | try:
23 | with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as s:
24 | s.settimeout(1)
25 | s.connect((self.target_host, port))
26 | return True
27 | except (socket.timeout,ConnectionRefusedError):
28 | return False
29 | #scan_ports Method
30 | @timing_decorator
31 | def scan_ports(self):
32 | open_ports = [port for port in range(self.start_port, self.end_port + 1) if self.is_port_open(port)]
33 | return open_ports
34 | #scan_ports_generator Method
35 | @timing_decorator
36 | def scan_ports_generator(self):
37 | for port in range(self.start_port,self.end_port + 1):
38 | if self.is_port_open(port):
39 | yield port
40 |
41 | def main(): # type: ignore
42 | target_host = input("Enter target host: ")
43 | start_port = int(input("Enter starting port: "))
44 | end_port = int(input("Enter ending port: "))
45 |
46 | scanner = PortScanner(target_host, start_port,end_port)
47 |
48 | open_ports = scanner.scan_ports()
49 | print("Open ports: ", open_ports)
50 |
51 | open_ports_generator = scanner.scan_ports_generator()
52 | print("Open ports (using generator):", list(open_ports_generator))
53 |
54 | if __name__ == "__main__":
55 | main()
--------------------------------------------------------------------------------
/chapter2/portscanner-with-decorators-and-generator/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | setup(
4 | name='portscanner',
5 | version='0.1',
6 | packages=['portscanner'],
7 | install_requires=[],
8 | entry_points={
9 | 'console_scripts': [
10 | 'portscanner = portscanner.portscanner:main'
11 | ]
12 | }
13 | )
--------------------------------------------------------------------------------
/chapter2/portscanner/README.md:
--------------------------------------------------------------------------------
1 | Port Scan Module
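2 | 
3 | A minimal usage sketch (hypothetical, not part of the original module; after `pip install .`, setup.py also exposes a `portscanner` console command that prompts for the same values):
4 | 
5 | ```python
6 | from portscanner.portscanner import PortScanner
7 | 
8 | # Placeholder host and port range
9 | scanner = PortScanner("192.168.1.1", 20, 80)
10 | print("Open ports:", scanner.scan_ports())
11 | ```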
--------------------------------------------------------------------------------
/chapter2/portscanner/portscanner/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Offensive-Security-Using-Python/ac1147771080e62dace8cd08e9a916c3040e6d83/chapter2/portscanner/portscanner/__init__.py
--------------------------------------------------------------------------------
/chapter2/portscanner/portscanner/portscanner.py:
--------------------------------------------------------------------------------
1 | import socket
2 | import threading
3 | import time
4 |
5 | #Class Definition
6 | class PortScanner:
7 | def __init__(self, target_host, start_port,end_port):
8 | self.target_host = target_host
9 | self.start_port = start_port
10 | self.end_port = end_port
11 | self.open_ports = []
12 | #is_port_open Method
13 | def is_port_open(self, port):
14 | try:
15 | with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as s:
16 | s.settimeout(1)
17 | s.connect((self.target_host, port))
18 | return True
19 | except (socket.timeout,ConnectionRefusedError):
20 | return False
21 | #scan_ports Method
22 | def scan_ports(self):
23 | open_ports = [port for port in range(self.start_port, self.end_port + 1) if self.is_port_open(port)]
24 | return open_ports
25 | def main(): # type: ignore
26 | target_host = input("Enter target host: ")
27 | start_port = int(input("Enter starting port: "))
28 | end_port = int(input("Enter ending port: "))
29 |
30 | scanner = PortScanner(target_host, start_port,end_port)
31 |
32 | open_ports = scanner.scan_ports()
33 | print("Open ports: ", open_ports)
34 |
35 | if __name__ == "__main__":
36 | main()
--------------------------------------------------------------------------------
/chapter2/portscanner/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | setup(
4 | name='portscanner',
5 | version='0.1',
6 | packages=['portscanner'],
7 | install_requires=[],
8 | entry_points={
9 | 'console_scripts': [
10 | 'portscanner = portscanner.portscanner:main'
11 | ]
12 | }
13 | )
--------------------------------------------------------------------------------
/chapter3/HTML-analysis.py:
--------------------------------------------------------------------------------
1 | from bs4 import BeautifulSoup
2 | import requests
3 | url = 'https://example.com'
4 | response = requests.get(url)
5 | soup = BeautifulSoup(response.content, 'html.parser')
6 | # Extract script tags to find JavaScript libraries
7 | script_tags = soup.find_all('script')
8 | for script in script_tags:
9 | print(script.get('src'))
10 |
11 | # Extract CSS links to find CSS frameworks
12 | css_links = soup.find_all('link', {'rel': 'stylesheet'})
13 | for link in css_links:
14 | print(link.get('href'))
--------------------------------------------------------------------------------
/chapter3/JS-analysis.py:
--------------------------------------------------------------------------------
1 | import re
2 | import requests
3 | url = 'https://example.com'
4 | response = requests.get(url)
5 | javascript_code = response.text
6 | # Search for specific JavaScript libraries/frameworks
7 | libraries = re.findall(r'someLibraryName',
8 | javascript_code)
9 | if libraries:
10 | print('SomeLibraryName is used.')
--------------------------------------------------------------------------------
/chapter3/http-headers-analysis.py:
--------------------------------------------------------------------------------
1 | import requests
2 | url = 'https://example.com'
3 | response = requests.get(url)
4 | headers = response.headers
5 | # Extract and analyze headers
6 | server = headers.get('Server')
7 | print(f'Server: {server}')
--------------------------------------------------------------------------------
/chapter3/security-headers-check.py:
--------------------------------------------------------------------------------
1 | import requests
2 | def check_security_headers(url):
3 | response = requests.get(url)
4 | headers = response.headers
5 | security_headers = {
6 | 'Content-Security-Policy': 'Content Security Policy (CSP) header is missing!',
7 | 'Strict-Transport-Security': 'Strict Transport Security (HSTS) header is missing!',
8 | 'X-Content-Type-Options': 'X-Content-Type-Options header is missing!',
9 | 'X-Frame-Options': 'X-Frame-Options header is missing!',
10 | 'Referrer-Policy': 'Referrer Policy header is missing!'
11 | }
12 | for header, message in security_headers.items():
13 | if header not in headers:
14 | print(message)
15 | else:
16 | print(f'{header}: {headers[header]}')
17 |
18 | # Example usage
19 | if __name__ == "__main__":
20 | website_url = input("Enter the URL to check security headers: ")
21 | check_security_headers(website_url)
--------------------------------------------------------------------------------
/chapter3/web-technology-fingerprinting-with-wappalyzer.py:
--------------------------------------------------------------------------------
1 | from Wappalyzer import Wappalyzer, WebPage  # provided by the python-Wappalyzer package
2 | url = 'https://example.com'
3 | webpage = WebPage.new_from_url(url)
4 | wappalyzer = Wappalyzer.latest()
5 | # Analyze the webpage
6 | technologies = wappalyzer.analyze(webpage)
7 | for technology in technologies:
8 | print(f'Technology: {technology}')
--------------------------------------------------------------------------------
/chapter4/SQLmap-with-MITM-proxy.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | import threading
3 | import time
4 | 
5 | from mitmproxy import proxy, options
6 | from mitmproxy.tools.dump import DumpMaster
7 | 
8 | # Function to automate SQLMap with captured HTTP requests from mitmproxy
9 | 
10 | def automate_sqlmap_with_mitmproxy():
11 |     # SQLMap command template; -r points at the saved request file,
12 |     # since SQLMap cannot read a raw request from stdin
13 |     sqlmap_command = ["sqlmap", "-r", "captured_request.txt", "--batch", "--level=5", "--risk=3"]
14 | 
15 |     try:
16 |         # Start mitmproxy to capture HTTP traffic (mitmproxy 4.x API)
17 |         mitmproxy_opts = options.Options(listen_host='127.0.0.1', listen_port=8080)
18 |         m = DumpMaster(options=mitmproxy_opts)
19 |         config = proxy.config.ProxyConfig(mitmproxy_opts)
20 |         m.server = proxy.server.ProxyServer(config)
21 | 
22 |         # Start mitmproxy in a separate thread
23 |         t = threading.Thread(target=m.run)
24 |         t.start()
25 | 
26 |         # Process captured requests in real time
27 |         while True:
28 |             # Assuming mitmproxy captures and saves requests to 'captured_request.txt',
29 |             # run SQLMap against that file using subprocess
30 |             process = subprocess.Popen(sqlmap_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
31 |             stdout, stderr = process.communicate()
32 | 
33 |             # Print SQLMap output
34 |             print("SQLMap output:")
35 |             print(stdout.decode())
36 | 
37 |             if stderr:
38 |                 print("Error occurred:")
39 |                 print(stderr.decode())
40 | 
41 |             # Sleep for a while before checking for new requests
42 |             time.sleep(5)
43 | 
44 |     except Exception as e:
45 |         print("An error occurred:", e)
46 | 
47 |     finally:
48 |         # Stop mitmproxy
49 |         m.shutdown()
50 |         t.join()
51 | 
52 | # Start the automation process
53 | automate_sqlmap_with_mitmproxy()
--------------------------------------------------------------------------------
/chapter4/detect-potential-SQLInjection.py:
--------------------------------------------------------------------------------
1 | import requests
2 | def check_sql_injection(url):
3 | payloads = ["'", '"', "';--", "')", "'OR 1=1--", "'OR '1'='1", "'='", "1'1"]
4 | for payload in payloads:
5 | test_url = f"{url}{payload}"
6 | response = requests.get(test_url)
7 | # Check for potential signs of SQL injection in the response
8 | if "error" in response.text.lower() or "exception" in response.text.lower():
9 | print(f"Potential SQL Injection Vulnerability found at: {test_url}")
10 | return
11 |
12 | print("No SQL Injection Vulnerabilities detected.")
13 |
14 | # Example usage:
15 | if __name__ == '__main__':
16 | target_url = "http://example.com/login?id="
17 | check_sql_injection(target_url)
--------------------------------------------------------------------------------
/chapter4/flask-application-with-IDOR.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, request, jsonify
2 |
3 | app = Flask(__name__)
4 | users = {
5 | '123': {'username': 'alice', 'email':'alice@example.com'},
6 | '124': {'username': 'bob', 'email':'bob@example.com'}
7 | }
8 |
9 | @app.route('/user', methods=['GET'])
10 | def get_user():
11 | user_id = request.args.get('id')
12 | user_data = users.get(user_id)
13 | return jsonify(user_data)
14 |
15 | if __name__ == '__main__':
16 | app.run(debug=True)
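17 | 
18 | # Demonstration note (not in the original code): get_user never verifies that the
19 | # requester owns the requested record, so any client can read another user's data
20 | # simply by changing the id parameter, e.g.:
21 | #   curl "http://127.0.0.1:5000/user?id=124"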
--------------------------------------------------------------------------------
/chapter4/parameterized-queries-in-SQLite3.py:
--------------------------------------------------------------------------------
1 | import sqlite3
2 | username = input("Enter username: ")
3 | password = input("Enter password: ")
4 | # Establish a database connection
5 | conn = sqlite3.connect('example.db')
6 | cursor = conn.cursor()
7 |
8 | # Use a parameterized query to prevent SQL injection
9 | cursor.execute("SELECT * FROM users WHERE username = ? AND password = ?", (username, password))
10 |
11 | # Fetch the result
12 | result = cursor.fetchone()
13 |
14 | # Validate the login
15 | if result:
16 | print("Login successful!")
17 | else:
18 | print("Invalid credentials.")
19 |
20 | # Close the connection
21 | conn.close()
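22 | 
23 | # For contrast (hypothetical, not in the original): the injectable pattern that this
24 | # script avoids would interpolate user input directly into the SQL string, e.g.
25 | #   cursor.execute(f"SELECT * FROM users WHERE username = '{username}'")
26 | # Parameterized queries let the driver treat user input strictly as data, not SQL.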
--------------------------------------------------------------------------------
/chapter4/scrape-with-PlayWright-advanced-with-crawler.py:
--------------------------------------------------------------------------------
1 | from playwright.sync_api import sync_playwright
2 |
3 | def scrape_data():
4 | with sync_playwright() as p:
5 | browser = p.chromium.launch()
6 | context = browser.new_context()
7 |
8 | # Open a new page
9 | page = context.new_page()
10 |
11 | # Navigate to the website
12 | page.goto('https://example.com')
13 |
14 | # Example: Log in (replace these with your actual login logic)
15 | page.fill('input[name="username"]', 'your_username')
16 | page.fill('input[name="password"]', 'your_password')
17 | page.click('button[type="submit"]')
18 |
19 | # Wait for navigation to dashboard or relevant page after login
20 | page.wait_for_load_state('load')
21 |
22 | # Start crawling and scraping
23 | scraped_data = []
24 |
25 | while True:
26 | # Scraping data on the current page
27 | data_elements = page.query_selector_all('.data-element-selector')
28 | scraped_data.extend([element.text_content() for element in data_elements])
29 |
30 | # Look for the 'next page' button or link
31 | next_page_button = page.query_selector('.next-page-button-selector')
32 |
33 | if not next_page_button:
34 | # If no next page is found, stop crawling
35 | break
36 |
37 | # Click on the 'next page' button
38 | next_page_button.click()
39 | # Wait for the new page to load
40 | page.wait_for_load_state('load')
41 |
42 | # Print or process scraped data from all pages
43 | for data in scraped_data:
44 | print(data)
45 |
46 | # Close the browser
47 | context.close()
48 |
49 | if __name__ == "__main__":
50 | scrape_data()
--------------------------------------------------------------------------------
/chapter4/scrape-with-PlayWright-advanced.py:
--------------------------------------------------------------------------------
1 | from playwright.sync_api import sync_playwright
2 |
3 | def scrape_data():
4 | with sync_playwright() as p:
5 | browser = p.chromium.launch()
6 | context = browser.new_context()
7 |
8 | # Open a new page
9 | page = context.new_page()
10 |
11 | # Navigate to the website
12 | page.goto('https://example.com')
13 |
14 | # Example: Log in (replace these with your actual login logic)
15 | page.fill('input[name="username"]', 'your_username')
16 | page.fill('input[name="password"]', 'your_password')
17 | page.click('button[type="submit"]')
18 |
19 | # Wait for navigation to dashboard or relevant page after login
20 | page.wait_for_load_state('load')
21 |
22 | # Scraping data
23 | data_elements = page.query_selector_all('.data-element-selector')
24 | scraped_data = [element.text_content() for element in data_elements]
25 |
26 | # Print or process scraped data
27 | for data in scraped_data:
28 | print(data)
29 |
30 | # Close the browser
31 | context.close()
32 |
33 | if __name__ == "__main__":
34 | scrape_data()
--------------------------------------------------------------------------------
/chapter4/scrape-with-PlayWright.py:
--------------------------------------------------------------------------------
1 | from playwright.sync_api import sync_playwright
2 |
3 | def scrape_website(url):
4 | with sync_playwright() as p:
5 | browser = p.chromium.launch()
6 | context = browser.new_context()
7 | page = context.new_page()
8 |
9 | page.goto(url)
10 | # Replace 'your_selector' with the actual CSS selector for the element you want to scrape
11 | elements = page.query_selector_all('your_selector')
12 |
13 | # Extracting information from the elements
14 | for element in elements:
15 | text = element.text_content()
16 | print(text) # Change this to process or save the scraped data
17 |
18 | browser.close()
19 |
20 | if __name__ == "__main__":
21 | # Replace 'https://example.com' with the URL you want to scrape
22 | scrape_website('https://example.com')
--------------------------------------------------------------------------------
/chapter4/scrape-with-Request-and-BeautifulSoup.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from bs4 import BeautifulSoup
3 |
4 | # Send a GET request to the website
5 | url = 'https://example.com'
6 | response = requests.get(url)
7 |
8 | # Parse HTML content using Beautiful Soup
9 | soup = BeautifulSoup(response.text, 'html.parser')
10 |
11 | # Extract specific data
12 | title = soup.find('title').text
13 | print(f"Website title: {title}")
14 |
15 | # Find all links on the page
16 | links = soup.find_all('a')
17 | for link in links:
18 | print(link.get('href'))
--------------------------------------------------------------------------------
/chapter4/test-XXS.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from urllib.parse import quote
3 |
4 | # Target URL to test for XSS vulnerability
5 | target_url = "https://example.com/page?id="
6 |
7 | # Payloads for testing, modify as needed
8 | xss_payloads = [ "", "", "