├── LICENSE
├── README.md
├── cache-me-outside
│   ├── cache-me-outside-man.md
│   └── cache-me-outside.py
├── domain-intelligence-tool
│   ├── config.json
│   ├── domain-intelligence-tool-man-page.txt
│   ├── domain-intelligence-tool-man.md
│   ├── domain-intelligence-tool.py
│   └── requirements.txt
├── mastodon-user-search
│   ├── env-example
│   ├── masto-servers-list.txt
│   ├── mastodon-user-search-man.md
│   └── mastodon-user-search.py
├── nostr-user-search
│   ├── nostr-relay-list.txt
│   ├── nostr-user-search-man.md
│   └── nostr-user-search.py
└── tweet-cache-search
    ├── tweet-cache-search-man.md
    └── tweet-cache-search.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Inforensics
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Inforensics - OSINT Discovery Tools
2 | 
3 | OSINT Discovery is a set of Python scripts for finding users and URLs across social media platforms and caching services. It currently supports searching for users on the Nostr and Mastodon networks, searching for cached tweets across various archiving services, and searching for cached versions of any URL. It also includes a report generator for any domain.
4 | Created by [inforensics.ai](https://inforensics.ai)
5 |
6 | ## Scripts
7 |
8 | ### 1. Nostr User Search
9 | This script searches for a Nostr user across multiple relays.
10 | #### Features:
11 | - Search by public key or NIP-05 identifier
12 | - Use default relays or specify custom ones
13 | - Read relay list from a file
14 | - Verbose mode for detailed output
15 | #### Usage:
16 | ```
17 | python nostr-user-search.py [-h] [-r RELAYS [RELAYS ...]] [-f FILE] [-v] identifier
18 | ```
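A NIP-05 identifier such as `alice@example.com` is resolved by fetching a well-known JSON document from the identifier's domain. A minimal sketch of that mapping (an illustrative helper, not the script's internals):

```python
def nip05_wellknown_url(identifier: str) -> str:
    """Map a NIP-05 identifier ('name@domain') to its lookup URL."""
    name, _, domain = identifier.partition("@")
    if not domain:
        raise ValueError("expected an identifier of the form name@domain")
    # Per NIP-05, the pubkey mapping is published at /.well-known/nostr.json
    return f"https://{domain}/.well-known/nostr.json?name={name}"
```

The returned document maps names to hex public keys, which can then be searched across relays.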
19 |
20 | ### 2. Mastodon User Search
21 | This script searches for a Mastodon user across multiple instances.
22 | #### Features:
23 | - Fetch instances from the instances.social API
24 | - Specify custom instances or read from a file
25 | - Control minimum instance size and status
26 | - Verbose mode for detailed output
27 | #### Usage:
28 | ```
29 | python mastodon-user-search.py [-h] [-c COUNT] [-m MIN_USERS] [--include-down] [--include-closed] [-v] [-i INSTANCES [INSTANCES ...]] [-f FILE] username
30 | ```
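Checking whether a username exists on a given instance typically reduces to one API call per server; a sketch of building that request (hypothetical helper name, and it assumes the instance exposes the standard `/api/v1/accounts/lookup` endpoint):

```python
from urllib.parse import quote

def account_lookup_url(instance: str, username: str) -> str:
    """Build the per-instance endpoint used to check whether an account exists."""
    return f"https://{instance}/api/v1/accounts/lookup?acct={quote(username)}"
```

A 200 response generally means the account exists on that instance; a 404 means it does not.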
31 |
32 | ### 3. Tweet Cache Search
33 | This script searches for cached tweets of a specified Twitter username across multiple archiving and caching services.
34 | #### Features:
35 | - Search across multiple caching services (Wayback Machine, Google Cache, etc.)
36 | - Option to open results in the default web browser
37 | - Command-line interface with optional arguments
38 | #### Usage:
39 | ```
40 | python tweet-cache-search.py [-h] [-u USERNAME] [-o]
41 | ```
42 |
43 | ### 4. Cache-Me-Outside
44 | This script searches for cached versions of any URL across various caching and archiving services.
45 | #### Features:
46 | - Search across multiple services (Wayback Machine, Google Cache, Bing, Yandex, etc.)
47 | - Option to open results in the default web browser
48 | - JSON output option for easy parsing
49 | - Automatic installation of required libraries
50 | #### Usage:
51 | ```
52 | python cache-me-outside.py [-h] [-u URL] [-o] [-j]
53 | ```
54 |
55 | ## Installation
56 | 1. Clone the repository:
57 | ```
58 | git clone https://github.com/inforensics-ai/osint-user-discovery.git
59 | ```
60 | 2. Navigate to the project directory:
61 | ```
62 | cd osint-user-discovery
63 | ```
64 | 3. Each script will attempt to install its required dependencies when run. However, you can also install all dependencies manually:
65 | ```
66 | pip install -r requirements.txt
67 | ```
68 |
69 | ### 5. Domain Intelligence Tool
70 | This script performs comprehensive intelligence gathering on a specified domain.
71 | #### Features:
72 | - DNS record retrieval (A, AAAA, CNAME, MX, NS, TXT, SOA, SRV)
73 | - SSL/TLS certificate analysis
74 | - WHOIS information retrieval
75 | - Web technology detection
76 | - Subdomain enumeration
77 | - SSL/TLS vulnerability checks
78 | - HTTP header analysis
79 | - Email security configuration (DMARC, SPF)
80 | - CAA and TLSA record checks
81 | - Reverse DNS lookups
82 | - Domain age calculation
83 | - SSL certificate chain analysis
84 | - Security header checks
85 | - Web server version detection
86 | - DNSSEC implementation check
87 | - IP geolocation
88 | - SSL/TLS protocol support check
89 | - Domain reputation check
90 | - Robots.txt and sitemap.xml retrieval
91 | - DNS propagation check
92 | - HSTS preload status check
93 | - Generation of common domain variations
94 | - DNS zone transfer attempt
95 | #### Usage:
96 | ```
97 | python domain-intelligence-tool.py [-h] [--json] [--markdown] [--config CONFIG] domain
98 | ```
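Most checks in the feature list wrap an existing library or service, but the "common domain variations" feature is plain string manipulation; a minimal sketch of the idea (a hypothetical generator, not the tool's actual one):

```python
def domain_variations(domain: str) -> list[str]:
    """Generate a few typosquatting-style look-alikes of a domain."""
    name, _, tld = domain.partition(".")
    candidates = {
        f"{name}{name[-1]}.{tld}",          # doubled final letter
        f"{name.replace('o', '0')}.{tld}",  # common homoglyph swap
        f"{name}-login.{tld}",              # suspicious suffix
    }
    candidates.discard(domain)              # drop no-op variations
    return sorted(candidates)
```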
99 |
100 | #### Installation
101 | 1. Clone the repository:
102 | ```
103 | git clone https://github.com/inforensics-ai/osint-user-discovery.git
104 | ```
105 | 2. Navigate to the project directory:
106 | ```
107 | cd osint-user-discovery/domain-intelligence-tool
108 | ```
109 | 3. Each script will attempt to install its required dependencies when run. However, you can also install all dependencies manually:
110 | ```
111 | pip install -r requirements.txt
112 | ```
113 | 4. For the Domain Intelligence Tool:
114 | - Create a `config.json` file in the same directory as the script with the following structure:
115 | ```json
116 | {
117 | "api_keys": {
118 | "geoip2": "your_geoip2_api_key_here"
119 | },
120 | "markdown_output_path": "/path/to/output/directory",
121 | "geolite2_db_path": "/path/to/GeoLite2-City.mmdb"
122 | }
123 | ```
124 | - Download the GeoLite2-City.mmdb database and place it in the location specified in your config.json file.
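Since a missing API key or database file only surfaces mid-scan, a quick pre-flight check of `config.json` can save a run; a sketch (the `check_config` helper is hypothetical, not part of the tool):

```python
import json
from pathlib import Path

def check_config(path: str = "config.json") -> list[str]:
    """Return a list of likely setup problems found in the config file."""
    problems = []
    cfg = json.loads(Path(path).read_text())
    if not cfg.get("api_keys", {}).get("geoip2"):
        problems.append("geoip2 API key is empty")
    db = cfg.get("geolite2_db_path", "")
    if not db or not Path(db).exists():
        problems.append(f"GeoLite2 database not found at {db!r}")
    return problems
```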
125 |
126 | ## Contributing
127 | Contributions are welcome! Please feel free to submit a Pull Request.
128 |
129 | ## License
130 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
131 |
132 | ## Disclaimer
133 | These tools are for educational and research purposes only. Always respect privacy and adhere to the terms of service of the platforms you're querying. Ensure you have permission before performing any scans or intelligence gathering on domains you do not own.
134 |
135 | ## Contact
136 | For bug reports and feature requests, please open an issue on this repository.
137 |
--------------------------------------------------------------------------------
/cache-me-outside/cache-me-outside-man.md:
--------------------------------------------------------------------------------
1 | # NAME
2 |
3 | cache-me-outside.py - Search for cached versions of any URL across various caching and archiving services
4 |
5 | # SYNOPSIS
6 |
7 | `cache-me-outside.py [-h] [-u URL] [-o] [-j]`
8 |
9 | # DESCRIPTION
10 |
11 | `cache-me-outside.py` is a Python script that searches for cached versions of a specified URL across multiple caching and archiving services. It provides links to search results or cached pages from these services and optionally opens the results in your default web browser.
12 |
13 | Created by inforensics.ai
14 |
15 | The script searches the following services:
16 |
17 | - Wayback Machine
18 | - Google Cache
19 | - Bing
20 | - Yandex
21 | - Baidu
22 | - Internet Archive
23 | - Archive.today
24 |
25 | # OPTIONS
26 |
27 | `-h, --help`
28 | Show the help message and exit.
29 |
30 | `-u URL, --url URL`
31 | Specify the URL to search for. If not provided, the script will prompt for input.
32 |
33 | `-o, --open`
34 | Open successful results in the default web browser.
35 |
36 | `-j, --json`
37 | Output results in JSON format.
38 |
39 | # USAGE
40 |
41 | 1. Ensure you have Python 3 installed on your system.
42 | 2. Install the required library:
43 | ```
44 | pip install requests
45 | ```
46 | 3. Run the script with desired options:
47 | ```
48 | ./cache-me-outside.py [-u URL] [-o] [-j]
49 | ```
50 |
51 | # OUTPUT
52 |
53 | The script will display results for each service, showing either a link to the search results/cached page or a message indicating that no results were found. If the `-o` option is used, it will also open successful results in your default web browser.
54 |
55 | # EXAMPLES
56 |
57 | Search for a specific URL:
58 | ```
59 | $ ./cache-me-outside.py -u https://example.com
60 | ```
61 |
62 | Search for a URL and open results in the browser:
63 | ```
64 | $ ./cache-me-outside.py -u https://example.com -o
65 | ```
66 |
67 | Output results in JSON format:
68 | ```
69 | $ ./cache-me-outside.py -u https://example.com -j
70 | ```
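Based on the result dictionaries the script builds, the JSON output is a list of per-service entries of roughly this shape (URLs and error messages will vary):

```json
[
  {
    "service": "Wayback Machine",
    "status": "success",
    "url": "https://web.archive.org/web/*/http://example.com"
  },
  {
    "service": "Google Cache",
    "status": "error",
    "message": "Error accessing https://webcache.googleusercontent.com/..."
  }
]
```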
71 |
72 | # NOTES
73 |
74 | - This script provides links to search results or cached pages. It does not scrape or display the actual content of the pages.
75 | - Some services might have restrictions on automated access. Use this script responsibly and in accordance with each service's terms of use.
76 | - The script's effectiveness depends on the availability and indexing of content by the searched services.
77 | - A small delay is added between requests to be respectful to the services being queried.
78 |
79 | # LICENSE
80 |
81 | MIT License
82 |
83 | Copyright (c) 2024 inforensics.ai
84 |
85 | Permission is hereby granted, free of charge, to any person obtaining a copy
86 | of this software and associated documentation files (the "Software"), to deal
87 | in the Software without restriction, including without limitation the rights
88 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
89 | copies of the Software, and to permit persons to whom the Software is
90 | furnished to do so, subject to the following conditions:
91 |
92 | The above copyright notice and this permission notice shall be included in all
93 | copies or substantial portions of the Software.
94 |
95 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
96 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
97 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
98 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
99 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
100 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
101 | SOFTWARE.
102 |
103 | # DISCLAIMER
104 |
105 | This tool is for educational purposes only. Always respect privacy and adhere to the terms of service of the platforms you're querying. The authors and contributors are not responsible for any misuse or damage caused by this program.
106 |
107 | # SEE ALSO
108 |
109 | - Python Requests library documentation: https://docs.python-requests.org/
110 | - Internet Archive: https://archive.org/
111 | - Wayback Machine: https://web.archive.org/
112 |
113 | # AUTHOR
114 |
115 | This script and man page were created by inforensics.ai.
116 |
117 | # BUGS
118 |
119 | Please report any bugs or issues to the script maintainer at inforensics.ai.
120 |
--------------------------------------------------------------------------------
/cache-me-outside/cache-me-outside.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | """
4 | cache-me-outside.py - Search for cached versions of any URL across various services.
5 | Made by inforensics.ai
6 | """
7 |
8 | import sys
9 | import subprocess
10 |
11 | def install_requirements():
12 | required = {'requests': '2.31.0'}
13 | for package, version in required.items():
14 | try:
15 | __import__(package)
16 | except ImportError:
17 | print(f"{package} is not installed. Attempting to install...")
18 | subprocess.check_call([sys.executable, "-m", "pip", "install", f"{package}=={version}"])
19 | print(f"{package} has been installed.")
20 |
21 | install_requirements()
22 |
23 | import requests
24 | from urllib.parse import quote_plus, urlparse
25 | import argparse
26 | import webbrowser
27 | import json
28 | import re
29 | import time
30 |
31 | def safe_request(url):
32 | try:
33 | response = requests.get(url, timeout=10)
34 | response.raise_for_status()
35 | return response
36 | except requests.RequestException as e:
37 | return f"Error accessing {url}: {str(e)}"
38 |
39 | def search_wayback_machine(url):
40 | wb_url = f"https://web.archive.org/web/*/{url}"
41 | result = safe_request(wb_url)
42 | if isinstance(result, requests.Response) and result.status_code == 200:
43 | return {"service": "Wayback Machine", "status": "success", "url": wb_url}
44 | return {"service": "Wayback Machine", "status": "error", "message": str(result)}
45 |
46 | def search_google_cache(url):
47 | cache_url = f"https://webcache.googleusercontent.com/search?q=cache:{url}"
48 | result = safe_request(cache_url)
49 | if isinstance(result, requests.Response) and result.status_code == 200:
50 | return {"service": "Google Cache", "status": "success", "url": cache_url}
51 | return {"service": "Google Cache", "status": "error", "message": str(result)}
52 |
53 | def search_bing(url):
54 | bing_url = f"https://www.bing.com/search?q=url:{quote_plus(url)}"
55 | result = safe_request(bing_url)
56 | if isinstance(result, requests.Response) and result.status_code == 200:
57 |         cache_pattern = r'<a href="([^"]*)"[^>]*>Cached<'
58 | match = re.search(cache_pattern, result.text)
59 | if match:
60 | cache_url = match.group(1)
61 | return {"service": "Bing", "status": "success", "url": cache_url, "search_url": bing_url}
62 | else:
63 | return {"service": "Bing", "status": "success", "url": bing_url, "note": "No cached version link found"}
64 | return {"service": "Bing", "status": "error", "message": str(result)}
65 |
66 | def search_yandex(url):
67 | yandex_url = f"https://yandex.com/search/?text=url:{quote_plus(url)}"
68 | result = safe_request(yandex_url)
69 | if isinstance(result, requests.Response) and result.status_code == 200:
70 |         cache_pattern = r'<a href="([^"]*)"[^>]*>Cached<'
71 | match = re.search(cache_pattern, result.text)
72 | if match:
73 | cache_url = match.group(1)
74 | return {"service": "Yandex", "status": "success", "url": cache_url, "search_url": yandex_url}
75 | else:
76 | return {"service": "Yandex", "status": "success", "url": yandex_url, "note": "No cached version link found"}
77 | return {"service": "Yandex", "status": "error", "message": str(result)}
78 |
79 | def search_baidu(url):
80 | baidu_url = f"https://www.baidu.com/s?wd=url:{quote_plus(url)}"
81 | result = safe_request(baidu_url)
82 | if isinstance(result, requests.Response) and result.status_code == 200:
83 | return {"service": "Baidu", "status": "success", "url": baidu_url}
84 | return {"service": "Baidu", "status": "error", "message": str(result)}
85 |
86 | def search_internet_archive(url):
87 | ia_url = f"https://archive.org/search.php?query={quote_plus(url)}"
88 | result = safe_request(ia_url)
89 | if isinstance(result, requests.Response) and result.status_code == 200:
90 | return {"service": "Internet Archive", "status": "success", "url": ia_url}
91 | return {"service": "Internet Archive", "status": "error", "message": str(result)}
92 |
93 | def search_archive_today(url):
94 | at_url = f"https://archive.today/{url}"
95 | result = safe_request(at_url)
96 | if isinstance(result, requests.Response) and result.status_code == 200:
97 | return {"service": "Archive.today", "status": "success", "url": at_url}
98 | return {"service": "Archive.today", "status": "error", "message": str(result)}
99 |
100 | def main():
101 | parser = argparse.ArgumentParser(description="Search for cached versions of any URL across various services.")
102 | parser.add_argument("-u", "--url", help="URL to search for")
103 | parser.add_argument("-o", "--open", action="store_true", help="Open successful results in default web browser")
104 | parser.add_argument("-j", "--json", action="store_true", help="Output results in JSON format")
105 | args = parser.parse_args()
106 |
107 | if args.url:
108 | url = args.url
109 | else:
110 | url = input("Enter the URL to search for: ")
111 |
112 | # Ensure the URL has a scheme
113 | if not urlparse(url).scheme:
114 | url = "http://" + url
115 |
116 | services = [
117 | search_wayback_machine,
118 | search_google_cache,
119 | search_bing,
120 | search_yandex,
121 | search_baidu,
122 | search_internet_archive,
123 | search_archive_today
124 | ]
125 |
126 | results = []
127 | for service in services:
128 | result = service(url)
129 | results.append(result)
130 | if not args.json:
131 | if result["status"] == "success":
132 | print(f"{result['service']} results:")
133 | print(f" URL: {result['url']}")
134 | if "search_url" in result:
135 | print(f" Search URL: {result['search_url']}")
136 | if "note" in result:
137 | print(f" Note: {result['note']}")
138 |
139 | if args.open:
140 | webbrowser.open(result["url"])
141 | else:
142 | print(f"No results found on {result['service']}: {result['message']}")
143 | print("-" * 50)
144 |
145 | # Add a small delay between requests to be respectful to the services
146 | time.sleep(1)
147 |
148 | if args.json:
149 | print(json.dumps(results, indent=2))
150 |
151 | if __name__ == "__main__":
152 | main()
153 |
--------------------------------------------------------------------------------
/domain-intelligence-tool/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "api_keys": {
3 | "geoip2": "your_geoip2_api_key_here"
4 | },
5 | "markdown_output_path": "",
6 | "geolite2_db_path": "../GeoLite2-City.mmdb"
7 | }
8 |
--------------------------------------------------------------------------------
/domain-intelligence-tool/domain-intelligence-tool-man-page.txt:
--------------------------------------------------------------------------------
1 | INFORENSICS-DOMAIN-INTELLIGENCE(1) User Commands INFORENSICS-DOMAIN-INTELLIGENCE(1)
2 |
3 | NAME
4 | inforensics-domain-intelligence - Comprehensive domain intelligence gathering tool
5 |
6 | SYNOPSIS
7 | inforensics-domain-intelligence [OPTIONS] DOMAIN
8 |
9 | DESCRIPTION
10 | inforensics-domain-intelligence is a Python script that performs comprehensive
11 | intelligence gathering on a specified domain. It collects and analyzes
12 | various aspects of domain information, including DNS records, SSL/TLS
13 | configuration, WHOIS data, and more.
14 |
15 | OPTIONS
16 | DOMAIN
17 | The target domain to analyze.
18 |
19 | --json
20 | Output the results in JSON format.
21 |
22 | --markdown
23 | Output the results in Markdown format.
24 |
25 | --config FILE
26 | Specify a custom configuration file (default: config.json).
27 |
28 | FEATURES
29 | The tool performs the following checks and analyses:
30 |
31 | • DNS record retrieval (A, AAAA, CNAME, MX, NS, TXT, SOA, SRV)
32 | • SSL/TLS certificate analysis
33 | • WHOIS information retrieval
34 | • Web technology detection
35 | • Subdomain enumeration
36 | • SSL/TLS vulnerability checks
37 | • HTTP header analysis
38 | • Email security configuration (DMARC, SPF)
39 | • CAA and TLSA record checks
40 | • Reverse DNS lookups
41 | • Domain age calculation
42 | • SSL certificate chain analysis
43 | • Security header checks
44 | • Web server version detection
45 | • DNSSEC implementation check
46 | • IP geolocation
47 | • SSL/TLS protocol support check
48 | • Domain reputation check (against common blacklists)
49 | • Robots.txt and sitemap.xml retrieval
50 | • DNS propagation check
51 | • HSTS preload status check
52 | • Generation of common domain variations
53 | • DNS zone transfer attempt
54 |
55 | CONFIGURATION
56 | The tool uses a configuration file (default: config.json) to set various
57 | options, including API keys and output paths. The configuration file
58 | should be in JSON format.
59 |
60 | EXIT STATUS
61 | 0 Success
62 | 1 Various errors (e.g., network issues, missing dependencies)
63 |
64 | FILES
65 | config.json
66 | Default configuration file.
67 |
68 | GeoLite2-City.mmdb
69 | Required for IP geolocation. Path specified in config.json.
70 |
71 | NOTES
72 | This tool performs active reconnaissance on the specified domain. Ensure
73 | you have permission to scan the target domain before use.
74 |
75 | Some features (like subdomain enumeration) may be intrusive and should
76 | be used with caution.
77 |
78 | BUGS
79 | Report bugs to: https://github.com/inforensics/domain-intelligence-tool/issues
80 |
81 | AUTHOR
82 |        Written by the Inforensics team.
83 |
84 | COPYRIGHT
85 | Copyright © 2024 Inforensics. MIT License
86 |
87 | SEE ALSO
88 | whois(1), dig(1), nmap(1)
89 |
90 | Inforensics July 2024 INFORENSICS-DOMAIN-INTELLIGENCE(1)
91 |
--------------------------------------------------------------------------------
/domain-intelligence-tool/domain-intelligence-tool-man.md:
--------------------------------------------------------------------------------
1 | # Inforensics Domain Intelligence Tool
2 |
3 | ## Overview
4 |
5 | The Inforensics Domain Intelligence Tool is a comprehensive Python script designed for gathering and analyzing various aspects of domain information. It provides a wide range of checks and analyses, making it a valuable tool for cybersecurity professionals, system administrators, and researchers.
6 |
7 | ## Features
8 |
9 | - DNS record retrieval (A, AAAA, CNAME, MX, NS, TXT, SOA, SRV)
10 | - SSL/TLS certificate analysis
11 | - WHOIS information retrieval
12 | - Web technology detection
13 | - Subdomain enumeration
14 | - SSL/TLS vulnerability checks
15 | - HTTP header analysis
16 | - Email security configuration (DMARC, SPF)
17 | - CAA and TLSA record checks
18 | - Reverse DNS lookups
19 | - Domain age calculation
20 | - SSL certificate chain analysis
21 | - Security header checks
22 | - Web server version detection
23 | - DNSSEC implementation check
24 | - IP geolocation
25 | - SSL/TLS protocol support check
26 | - Domain reputation check (against common blacklists)
27 | - Robots.txt and sitemap.xml retrieval
28 | - DNS propagation check
29 | - HSTS preload status check
30 | - Generation of common domain variations
31 | - DNS zone transfer attempt
32 |
33 | ## Installation
34 |
35 | 1. Clone the repository:
36 | ```
37 | git clone https://github.com/inforensics/domain-intelligence-tool.git
38 | cd domain-intelligence-tool
39 | ```
40 |
41 | 2. Install the required dependencies:
42 | ```
43 | pip install -r requirements.txt
44 | ```
45 |
46 | 3. Download the GeoLite2-City.mmdb database and place it in the location specified in your config.json file.
47 |
48 | 4. Create a `config.json` file in the same directory as the script (see Configuration section below).
49 |
50 | ## Usage
51 |
 52 | Basic usage:
 53 | ```
 54 | python domain-intelligence-tool.py example.com
 55 | ```
 56 |
 57 | Output in JSON format:
 58 | ```
 59 | python domain-intelligence-tool.py example.com --json
 60 | ```
 61 |
 62 | Output in Markdown format:
 63 | ```
 64 | python domain-intelligence-tool.py example.com --markdown
 65 | ```
 66 |
 67 | Use a custom configuration file:
 68 | ```
 69 | python domain-intelligence-tool.py example.com --config custom_config.json
 70 | ```
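Several of the reported fields are simple derivations from the raw data; domain age, for example, is date arithmetic on the WHOIS creation date. A sketch mirroring the script's own logic:

```python
from datetime import datetime

def domain_age(creation_date) -> str:
    """WHOIS creation date (datetime, or list of datetimes) -> human-readable age."""
    if not creation_date:
        return "Unknown"
    if isinstance(creation_date, list):  # python-whois can return several dates
        creation_date = creation_date[0]
    return f"{(datetime.now() - creation_date).days} days"
```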
71 |
72 | ## Configuration
73 |
74 | Create a `config.json` file with the following structure:
75 |
76 | ```json
77 | {
78 | "api_keys": {
79 | "geoip2": "your_geoip2_api_key_here"
80 | },
81 | "markdown_output_path": "/path/to/output/directory",
82 | "geolite2_db_path": "/path/to/GeoLite2-City.mmdb"
83 | }
84 | ```
85 |
86 | ## Output
87 |
88 | The tool provides output in three formats:
89 | 1. Console output (default)
90 | 2. JSON format (use `--json` flag)
91 | 3. Markdown format (use `--markdown` flag)
92 |
93 | ## Caution
94 |
95 | This tool performs active reconnaissance on the specified domain. Ensure you have permission to scan the target domain before use. Some features (like subdomain enumeration) may be intrusive and should be used with caution.
96 |
97 | ## Contributing
98 |
99 | Contributions to the Inforensics Domain Intelligence Tool are welcome! Please feel free to submit pull requests, create issues, or spread the word.
100 |
101 | ## License
102 |
103 | This project is licensed under the MIT License - see the [LICENSE](../LICENSE) file for details.
104 |
105 | ## Disclaimer
106 |
107 | This tool is for educational and ethical use only. Always obtain proper authorization before scanning any domains you do not own or have explicit permission to test.
108 |
109 | ## Contact
110 |
111 | For bugs, questions, and discussions please use the [GitHub Issues](https://github.com/inforensics/domain-intelligence-tool/issues).
112 |
113 | ## Acknowledgments
114 |
115 | - Thanks to all the open-source projects that made this tool possible.
116 | - Special thanks to the Inforensics team for their continuous support and contributions.
117 |
118 |
--------------------------------------------------------------------------------
/domain-intelligence-tool/domain-intelligence-tool.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | import sys
4 | import json
5 | import argparse
6 | import os
7 | from pathlib import Path
8 | import dns.resolver
9 | import socket
10 | import ssl
11 | from datetime import datetime
12 | from cryptography import x509
13 | from cryptography.hazmat.backends import default_backend
14 | import whois
15 | from ipwhois import IPWhois
16 | import requests
17 | from requests.exceptions import RequestException
18 | from bs4 import BeautifulSoup
19 | import subprocess
20 | import re
21 | from tqdm import tqdm
22 | import dns.zone
23 | import geoip2.database
24 | import OpenSSL
25 | import idna
26 |
27 | ASCII_BANNER = '''
28 | ██╗███╗ ██╗███████╗ ██████╗ ██████╗ ███████╗███╗ ██╗███████╗██╗ ██████╗███████╗
29 | ██║████╗ ██║██╔════╝██╔═══██╗██╔══██╗██╔════╝████╗ ██║██╔════╝██║██╔════╝██╔════╝
30 | ██║██╔██╗ ██║█████╗ ██║ ██║██████╔╝█████╗ ██╔██╗ ██║███████╗██║██║ ███████╗
31 | ██║██║╚██╗██║██╔══╝ ██║ ██║██╔══██╗██╔══╝ ██║╚██╗██║╚════██║██║██║ ╚════██║
32 | ██║██║ ╚████║██║ ╚██████╔╝██║ ██║███████╗██║ ╚████║███████║██║╚██████╗███████║
33 | ╚═╝╚═╝ ╚═══╝╚═╝ ╚═════╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═══╝╚══════╝╚═╝ ╚═════╝╚══════╝
34 |
35 | Domain Intelligence Tool
36 | '''
37 |
38 | # Load configuration
39 | def load_config(config_path):
40 | default_config = {
41 | "api_keys": {
42 | "geoip2": ""
43 | },
44 | "markdown_output_path": "",
45 | "geolite2_db_path": "GeoLite2-City.mmdb"
46 | }
47 |
48 | if os.path.exists(config_path):
49 | with open(config_path, 'r') as f:
50 | config = json.load(f)
51 | # Update default config with loaded config
52 | default_config.update(config)
53 | else:
54 | print(f"Config file not found at {config_path}. Using default configuration.")
55 |
56 | return default_config
57 |
58 | # Global configuration variable
59 | CONFIG = load_config('config.json')
60 |
61 | def is_website_live(domain):
62 | try:
63 | response = requests.get(f"http://{domain}", timeout=10)
64 | response.raise_for_status()
65 | return True
66 | except RequestException:
67 | try:
68 | response = requests.get(f"https://{domain}", timeout=10)
69 | response.raise_for_status()
70 | return True
71 | except RequestException:
72 | return False
73 |
74 | def get_dns_records(domain):
75 | dns_info = {}
76 | record_types = ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'TXT', 'SOA', 'SRV']
77 |
78 | for record_type in record_types:
79 | try:
80 | answers = dns.resolver.resolve(domain, record_type)
81 | dns_info[record_type] = [str(rdata) for rdata in answers]
82 | except dns.resolver.NoAnswer:
83 | dns_info[record_type] = []
84 | except dns.resolver.NXDOMAIN:
85 | dns_info[record_type] = "Domain does not exist"
86 | except Exception as e:
87 | dns_info[record_type] = f"Error: {str(e)}"
88 |
89 | return dns_info
90 |
91 | def get_ssl_info(domain):
92 | try:
93 | context = ssl.create_default_context()
94 | with socket.create_connection((domain, 443)) as sock:
95 | with context.wrap_socket(sock, server_hostname=domain) as secure_sock:
96 | der_cert = secure_sock.getpeercert(binary_form=True)
97 | cert = x509.load_der_x509_certificate(der_cert, default_backend())
 98 |             try:  # get_extension_for_oid raises ExtensionNotFound when SAN is absent
 99 |                 san = [str(n.value) for n in cert.extensions.get_extension_for_oid(x509.oid.ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value]
100 |             except x509.ExtensionNotFound:
101 |                 san = []
102 |             cert_info = {
103 |                 "subject": ", ".join([f"{attr.oid._name}={attr.value}" for attr in cert.subject]),
104 |                 "issuer": ", ".join([f"{attr.oid._name}={attr.value}" for attr in cert.issuer]),
105 |                 "version": cert.version.name,
106 |                 "serialNumber": cert.serial_number,
107 |                 "notBefore": cert.not_valid_before_utc,
108 |                 "notAfter": cert.not_valid_after_utc,
109 |                 "subjectAltName": san,
110 |             }
111 |             return cert_info
112 | except ssl.SSLError as e:
113 | return f"SSL Error: {str(e)}"
114 | except Exception as e:
115 | return f"Error: {str(e)}"
116 |
117 | def get_whois_info(domain):
118 | try:
119 | w = whois.whois(domain)
120 | return {
121 | "registrar": w.registrar,
122 | "creation_date": w.creation_date,
123 | "expiration_date": w.expiration_date,
124 | "name_servers": w.name_servers
125 | }
126 | except Exception as e:
127 | return f"WHOIS Error: {str(e)}"
128 |
129 | def get_ip_info(ip):
130 | try:
131 | obj = IPWhois(ip)
132 | results = obj.lookup_rdap()
133 | return {
134 | "ASN": results.get('asn'),
135 | "ASN_Country": results.get('asn_country_code'),
136 | "ASN_Description": results.get('asn_description')
137 | }
138 | except Exception as e:
139 | return f"IP WHOIS Error: {str(e)}"
140 |
141 | def detect_web_technologies(domain):
142 | try:
143 | response = requests.get(f"https://{domain}", timeout=5)
144 | soup = BeautifulSoup(response.text, 'html.parser')
145 |
146 | technologies = []
147 | if 'wordpress' in response.text.lower():
148 | technologies.append('WordPress')
149 | if 'joomla' in response.text.lower():
150 | technologies.append('Joomla')
151 | if 'drupal' in response.text.lower():
152 | technologies.append('Drupal')
153 |
154 | server = response.headers.get('Server')
155 | if server:
156 | technologies.append(f"Web Server: {server}")
157 |
158 | return technologies
159 | except Exception as e:
160 | return f"Web Technology Detection Error: {str(e)}"
161 |
162 | def enumerate_subdomains(domain):
163 | try:
164 |         subprocess.check_output(['sublist3r', '-d', domain, '-o', 'subdomains.txt'], stderr=subprocess.STDOUT)
165 |         with open('subdomains.txt', 'r') as f:
166 |             subdomains = f.read().splitlines()
167 |         os.remove('subdomains.txt')  # portable cleanup instead of shelling out to rm
168 | return subdomains
169 | except Exception as e:
170 | return f"Subdomain Enumeration Error: {str(e)}"
171 |
172 | def check_ssl_vulnerabilities(domain):
173 | try:
174 | context = ssl.create_default_context()
175 | with socket.create_connection((domain, 443)) as sock:
176 | with context.wrap_socket(sock, server_hostname=domain) as secure_sock:
177 | cipher = secure_sock.cipher()
178 | version = secure_sock.version()
179 |
180 | return {
181 | "SSL Version": version,
182 | "Cipher Suite": cipher[0],
183 | "Bit Strength": cipher[2],
184 | }
185 | except ssl.SSLError as e:
186 | return f"SSL Vulnerability Check Error: {str(e)}"
187 | except Exception as e:
188 | return f"Error: {str(e)}"
189 |
190 | def analyze_http_headers(domain):
191 | try:
192 | response = requests.get(f"https://{domain}", timeout=5)
193 | return dict(response.headers)
194 | except Exception as e:
195 | return f"HTTP Headers Analysis Error: {str(e)}"
196 |
197 | def check_email_security(domain):
198 |     try:
199 |         dmarc = dns.resolver.resolve(f"_dmarc.{domain}", 'TXT')
200 |         spf = [str(r) for r in dns.resolver.resolve(domain, 'TXT') if 'v=spf1' in str(r)]
201 |         return {
202 |             "DMARC": str(dmarc[0]),
203 |             "SPF": spf[0] if spf else "No SPF record found"
204 |         }
205 | except dns.resolver.NXDOMAIN:
206 | return {
207 | "DMARC": "Not found",
208 | "SPF": "Not found"
209 | }
210 | except Exception as e:
211 | return {
212 | "Error": f"Email Security Check Error: {str(e)}"
213 | }
214 |
215 | def get_caa_records(domain):
216 | try:
217 | answers = dns.resolver.resolve(domain, 'CAA')
218 | return [str(rdata) for rdata in answers]
219 | except Exception as e:
220 | return f"CAA Record Error: {str(e)}"
221 |
222 | def get_tlsa_records(domain):
223 | try:
224 | answers = dns.resolver.resolve(f"_443._tcp.{domain}", 'TLSA')
225 | return [str(rdata) for rdata in answers]
226 | except Exception as e:
227 | return f"TLSA Record Error: {str(e)}"
228 |
229 | def get_reverse_dns(ip):
230 | try:
231 | return socket.gethostbyaddr(ip)[0]
232 | except Exception as e:
233 | return f"Reverse DNS Error: {str(e)}"
234 |
235 | def get_domain_age(creation_date):
236 |     if creation_date:
237 |         if isinstance(creation_date, list):
238 |             creation_date = creation_date[0]
239 |         age = datetime.now(creation_date.tzinfo) - creation_date  # works for naive and tz-aware WHOIS dates
240 |         return f"{age.days} days"
241 |     return "Unknown"
242 |
243 | def get_ssl_cert_chain(domain):
244 | try:
245 | context = ssl.create_default_context()
246 | with socket.create_connection((domain, 443)) as sock:
247 | with context.wrap_socket(sock, server_hostname=domain) as secure_sock:
248 | der_cert = secure_sock.getpeercert(binary_form=True)
249 | cert = x509.load_der_x509_certificate(der_cert, default_backend())
250 |
251 | chain = []
252 | current_cert = cert
253 | while current_cert:
254 | cert_info = {
255 | "subject": ", ".join([f"{attr.oid._name}={attr.value}" for attr in current_cert.subject]),
256 | "issuer": ", ".join([f"{attr.oid._name}={attr.value}" for attr in current_cert.issuer]),
257 | "not_before": current_cert.not_valid_before_utc,
258 | "not_after": current_cert.not_valid_after_utc,
259 | }
260 | chain.append(cert_info)
261 |
262 | # If the issuer is the same as the subject, we've reached the root certificate
263 | if current_cert.subject == current_cert.issuer:
264 | break
265 |
266 | # Try to fetch the next certificate in the chain
267 | try:
268 | issuer_cert = fetch_issuer_cert(current_cert)
269 | if issuer_cert:
270 | current_cert = issuer_cert
271 | else:
272 | break
273 | except Exception:
274 | break
275 |
276 | return chain
277 | except ssl.SSLError as e:
278 | return f"SSL Error: {str(e)}"
279 | except Exception as e:
280 | return f"Error: {str(e)}"
281 |
282 | def fetch_issuer_cert(cert):
283 | # This is a placeholder function. In a real-world scenario, you would implement
284 | # logic to fetch the issuer's certificate, possibly from a certificate store or online.
285 | # For now, we'll just return None to indicate we can't fetch further certificates.
286 | return None
287 |
288 | def get_security_headers(domain):
289 | try:
290 | response = requests.get(f"https://{domain}", timeout=5)
291 | security_headers = {
292 | 'Strict-Transport-Security': response.headers.get('Strict-Transport-Security', 'Not Set'),
293 | 'Content-Security-Policy': response.headers.get('Content-Security-Policy', 'Not Set'),
294 | 'X-Frame-Options': response.headers.get('X-Frame-Options', 'Not Set'),
295 | 'X-XSS-Protection': response.headers.get('X-XSS-Protection', 'Not Set'),
296 | 'X-Content-Type-Options': response.headers.get('X-Content-Type-Options', 'Not Set'),
297 | 'Referrer-Policy': response.headers.get('Referrer-Policy', 'Not Set'),
298 |             'Permissions-Policy': response.headers.get('Permissions-Policy', 'Not Set'),
299 | }
300 | return security_headers
301 | except Exception as e:
302 | return f"Security Headers Error: {str(e)}"
303 |
304 | def get_web_server_version(domain):
305 | try:
306 | response = requests.get(f"https://{domain}", timeout=5)
307 | server = response.headers.get('Server', 'Not Disclosed')
308 | return server
309 | except Exception as e:
310 | return f"Web Server Version Error: {str(e)}"
311 |
312 | def check_dnssec(domain):
313 |     try:
314 |         dns.resolver.resolve(domain, 'DNSKEY')
315 |         return "DNSSEC is implemented (DNSKEY records published)"
316 |     except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
317 |         return "DNSSEC is not implemented"
318 |     except Exception as e:
319 |         return f"DNSSEC Check Error: {str(e)}"
320 |
321 | def get_ip_geolocation(ip):
322 | try:
323 | with geoip2.database.Reader(CONFIG['geolite2_db_path']) as reader:
324 | response = reader.city(ip)
325 | return {
326 | 'country': response.country.name,
327 | 'city': response.city.name,
328 | 'latitude': response.location.latitude,
329 | 'longitude': response.location.longitude,
330 | }
331 | except Exception as e:
332 | return f"IP Geolocation Error: {str(e)}"
333 |
334 | def check_ssl_tls_protocols(domain):
335 |     protocols = {'TLSv1': ssl.TLSVersion.TLSv1, 'TLSv1.1': ssl.TLSVersion.TLSv1_1, 'TLSv1.2': ssl.TLSVersion.TLSv1_2, 'TLSv1.3': ssl.TLSVersion.TLSv1_3}
336 |     supported = {}  # SSLv2/SSLv3 cannot be probed with the stdlib ssl module
337 |     for name, version in protocols.items():
338 |         try:
339 |             context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
340 |             context.check_hostname, context.verify_mode = False, ssl.CERT_NONE
341 |             context.minimum_version = context.maximum_version = version
342 |             with socket.create_connection((domain, 443), timeout=5) as sock:
343 |                 with context.wrap_socket(sock, server_hostname=domain):
344 |                     supported[name] = True
345 |         except Exception:
346 |             supported[name] = False
347 |     return supported
348 |
349 | def check_domain_reputation(domain):
350 |     # Domain-name blacklists (DNSBLs that are queried by domain, not by IP)
351 |     blacklists = [
352 |         'dbl.spamhaus.org',
353 |         'multi.surbl.org',
354 |         'multi.uribl.com',
355 |     ]
356 |     results = {}
357 |     for bl in blacklists:
358 |         try:
359 |             socket.gethostbyname(f"{domain}.{bl}")
360 |             results[bl] = "Listed"
361 |         except OSError:
362 |             results[bl] = "Not Listed"
363 |     return results
364 |
365 | def get_robots_txt(domain):
366 | try:
367 | response = requests.get(f"https://{domain}/robots.txt", timeout=5)
368 | if response.status_code == 200:
369 | return response.text
370 | else:
371 | return f"No robots.txt found (Status code: {response.status_code})"
372 | except Exception as e:
373 | return f"Robots.txt Error: {str(e)}"
374 |
375 | def get_sitemap(domain):
376 | try:
377 | response = requests.get(f"https://{domain}/sitemap.xml", timeout=5)
378 | if response.status_code == 200:
379 | return "Sitemap found"
380 | else:
381 | return f"No sitemap.xml found (Status code: {response.status_code})"
382 | except Exception as e:
383 | return f"Sitemap Error: {str(e)}"
384 |
385 | def check_dns_propagation(domain):
386 | nameservers = [
387 | '8.8.8.8', '1.1.1.1', '9.9.9.9', '208.67.222.222',
388 | '8.8.4.4', '1.0.0.1', '149.112.112.112', '208.67.220.220'
389 | ]
390 | results = {}
391 | for ns in nameservers:
392 | try:
393 | resolver = dns.resolver.Resolver()
394 | resolver.nameservers = [ns]
395 | answers = resolver.resolve(domain, 'A')
396 | results[ns] = [str(rdata) for rdata in answers]
397 | except Exception as e:
398 | results[ns] = f"Error: {str(e)}"
399 | return results
400 |
401 | def check_hsts_preload(domain):
402 | try:
403 | response = requests.get(f"https://hstspreload.org/api/v2/status/{domain}", timeout=5)
404 |
405 | if response.status_code == 404:
406 | return "Domain not found in HSTS preload list"
407 |
408 | response.raise_for_status()
409 | data = response.json()
410 | return data.get('status', 'Status not found in response')
411 |
412 | except requests.RequestException as e:
413 | return f"HSTS Preload Check Error: {str(e)}"
414 | except json.JSONDecodeError as json_err:
415 | return f"JSON Parsing Error: {str(json_err)}. Raw response: {response.text[:100]}..."
416 |
417 | def generate_domain_variations(domain):
418 | variations = set() # Using a set to avoid duplicates
419 | parts = domain.split('.')
420 | name = parts[0]
421 | tld = '.'.join(parts[1:])
422 |
423 | # Common TLD variations
424 | variations.add(f"{name}.co")
425 | variations.add(f"{name}.org")
426 | variations.add(f"{name}.net")
427 |
428 |     # Hyphen variation (dot-to-hyphen squat, e.g. example-com.com)
429 |     variations.add(f"{name}-{tld.replace('.', '-')}.{parts[-1]}")
430 |
431 | # Number substitutions
432 | variations.add(name.replace('i', '1') + '.' + tld)
433 | variations.add(name.replace('l', '1') + '.' + tld)
434 | variations.add(name.replace('o', '0') + '.' + tld)
435 | variations.add(name + '.' + tld.replace('o', '0'))
436 |
437 | # Character swaps
438 | for i in range(len(name) - 1):
439 | swapped = list(name)
440 | swapped[i], swapped[i+1] = swapped[i+1], swapped[i]
441 | variations.add(''.join(swapped) + '.' + tld)
442 |
443 | # Remove the original domain if it's in the set
444 | variations.discard(domain)
445 |
446 | return list(variations)
447 |
448 | def attempt_zone_transfer(domain):
449 |     try:
450 |         answers = dns.resolver.resolve(domain, 'NS')
451 |         nameservers = [str(rdata).rstrip('.') for rdata in answers]
452 |         for ns in nameservers:
453 |             try:
454 |                 # dns.query.xfr needs an IP address, so resolve each NS name first
455 |                 z = dns.zone.from_xfr(dns.query.xfr(socket.gethostbyname(ns), domain, timeout=5))
456 |                 return {str(name): z[name].to_text(name) for name in z.nodes.keys()}
457 |             except Exception:
458 |                 pass
459 |         return "Zone transfer not allowed"
460 |     except Exception as e:
461 |         return f"Zone Transfer Error: {str(e)}"
462 |
463 | def main(domain, json_output=False, markdown_output=False):
464 | print(ASCII_BANNER)
465 | print(f"Analyzing domain: {domain}\n")
466 |
467 | result = {
468 | "domain": domain,
469 | "query_time": datetime.now().isoformat(),
470 | }
471 |
472 | # Check if the website is live
473 | if not is_website_live(domain):
474 | result["status"] = "Domain does not have a live website"
475 | result["error"] = "Unable to connect to the website. The domain might not be hosted or could be blocking our requests."
476 |
477 | if json_output:
478 | print(json.dumps(result, indent=2, default=str))
479 | elif markdown_output:
480 | output_path = CONFIG['markdown_output_path'] or os.path.dirname(os.path.abspath(__file__))
481 | filename = os.path.join(output_path, f"{domain}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.md")
482 | with open(filename, 'w') as f:
483 | f.write(f"# Inforensics Domain Intelligence Report for {domain}\n\n")
484 | f.write(f"Query Time: {result['query_time']}\n\n")
485 | f.write(f"## Status\n\n{result['status']}\n\n")
486 | f.write(f"## Error\n\n{result['error']}\n\n")
487 | f.write("\n---\n")
488 | f.write("Generated by Inforensics Domain Intelligence Tool\n")
489 | f.write("Created by [Inforensics](https://inforensics.ai)\n")
490 |         print(f"Markdown report saved as {filename}")
491 | else:
492 | print(f"\nStatus: {result['status']}")
493 | print(f"Error: {result['error']}")
494 |
495 | return
496 |
497 | tasks = [
498 | ("DNS Records", get_dns_records),
499 | ("SSL Certificate", get_ssl_info),
500 | ("WHOIS Information", get_whois_info),
501 | ("Web Technologies", detect_web_technologies),
502 | ("Subdomains", enumerate_subdomains),
503 | ("SSL Vulnerabilities", check_ssl_vulnerabilities),
504 | ("HTTP Headers", analyze_http_headers),
505 | ("Email Security", check_email_security),
506 | ("CAA Records", get_caa_records),
507 | ("TLSA Records", get_tlsa_records),
508 | ("SSL Certificate Chain", get_ssl_cert_chain),
509 | ("Security Headers", get_security_headers),
510 | ("Web Server Version", get_web_server_version),
511 | ("DNSSEC", check_dnssec),
512 | ("SSL/TLS Protocols", check_ssl_tls_protocols),
513 | ("Domain Reputation", check_domain_reputation),
514 | ("Robots.txt", get_robots_txt),
515 | ("Sitemap", get_sitemap),
516 | ("DNS Propagation", check_dns_propagation),
517 | ("HSTS Preload Status", check_hsts_preload),
518 | ("Domain Variations", generate_domain_variations),
519 | ("Zone Transfer", attempt_zone_transfer),
520 | ]
521 |
522 | with tqdm(total=len(tasks), desc="Progress", unit="task") as pbar:
523 | for task_name, task_func in tasks:
524 | result[task_name] = task_func(domain)
525 | pbar.update(1)
526 |
527 | # Get IP info and Reverse DNS for each A record
528 | result["IP Info"] = {}
529 | result["Reverse DNS"] = {}
530 |     a_records = result.get("DNS Records", {}).get('A', []) if isinstance(result.get("DNS Records"), dict) else []
531 | with tqdm(total=len(a_records), desc="IP Info", unit="ip") as pbar:
532 | for ip in a_records:
533 | result['IP Info'][ip] = get_ip_info(ip)
534 | result['Reverse DNS'][ip] = get_reverse_dns(ip)
535 | pbar.update(1)
536 |
537 | # Get IP Geolocation
538 | result["IP Geolocation"] = {}
539 | with tqdm(total=len(a_records), desc="IP Geolocation", unit="ip") as pbar:
540 | for ip in a_records:
541 | result['IP Geolocation'][ip] = get_ip_geolocation(ip)
542 | pbar.update(1)
543 |
544 | # Calculate Domain Age
545 | if isinstance(result.get('WHOIS Information'), dict) and 'creation_date' in result['WHOIS Information']:
546 | result['Domain Age'] = get_domain_age(result['WHOIS Information']['creation_date'])
547 | else:
548 | result['Domain Age'] = "Unable to calculate (WHOIS information not available)"
549 |
550 | if json_output:
551 | print(json.dumps(result, indent=2, default=str))
552 | elif markdown_output:
553 | output_path = CONFIG['markdown_output_path'] or os.path.dirname(os.path.abspath(__file__))
554 | filename = os.path.join(output_path, f"{domain}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.md")
555 | with open(filename, 'w') as f:
556 | f.write(f"# Inforensics Domain Intelligence Report for {domain}\n\n")
557 | f.write(f"Query Time: {result['query_time']}\n\n")
558 |
559 | for key, value in result.items():
560 | if key not in ['domain', 'query_time']:
561 | f.write(f"## {key}\n\n")
562 | f.write(f"```\n{json.dumps(value, indent=2, default=str)}\n```\n\n")
563 |
564 | f.write("\n---\n")
565 | f.write("Generated by Inforensics Domain Intelligence Tool\n")
566 | f.write("Created by [Inforensics](https://inforensics.ai)\n")
567 |         print(f"Markdown report saved as {filename}")
568 | else:
569 | print(f"\nInforensics Domain Intelligence Report for {domain}")
570 | print(f"Query Time: {result['query_time']}")
571 | for key, value in result.items():
572 | if key not in ['domain', 'query_time']:
573 | print(f"\n{key}:")
574 | if isinstance(value, dict):
575 | for subkey, subvalue in value.items():
576 | if isinstance(subvalue, list):
577 | print(f" {subkey}:")
578 | for item in subvalue:
579 | print(f" {item}")
580 | else:
581 | print(f" {subkey}: {subvalue}")
582 | elif isinstance(value, list):
583 | for item in value:
584 | print(f" {item}")
585 | else:
586 | print(f" {value}")
587 |
588 | if __name__ == "__main__":
589 | parser = argparse.ArgumentParser(description="Inforensics Domain Intelligence Tool")
590 | parser.add_argument("domain", help="The domain to query")
591 | parser.add_argument("--json", action="store_true", help="Output in JSON format")
592 | parser.add_argument("--markdown", action="store_true", help="Output in Markdown format")
593 | parser.add_argument("--config", default="config.json", help="Path to configuration file")
594 | args = parser.parse_args()
595 |
596 | CONFIG = load_config(args.config)
597 | main(args.domain, args.json, args.markdown)
598 |
--------------------------------------------------------------------------------
/domain-intelligence-tool/requirements.txt:
--------------------------------------------------------------------------------
1 | dnspython==2.3.0
2 | cryptography>=42.0  # not_valid_before_utc/not_valid_after_utc require 42.0+
3 | python-whois==0.8.0
4 | ipwhois==1.2.0
5 | requests==2.30.0
6 | beautifulsoup4==4.12.2
7 | tqdm==4.65.0
8 | geoip2==4.7.0
9 | pyOpenSSL==23.1.1
10 | idna==3.4
11 |
--------------------------------------------------------------------------------
/mastodon-user-search/env-example:
--------------------------------------------------------------------------------
1 | INSTANCES_API_KEY=instances_api_key
2 |
--------------------------------------------------------------------------------
/mastodon-user-search/mastodon-user-search-man.md:
--------------------------------------------------------------------------------
1 | # MASTODON-USER-SEARCH(1)
2 |
3 | ## NAME
4 |
5 | mastodon-user-search - Search for a Mastodon user across multiple instances
6 |
7 | ## SYNOPSIS
8 |
9 | `mastodon-user-search` [OPTIONS] USERNAME
10 |
11 | ## DESCRIPTION
12 |
13 | The mastodon-user-search script searches for a specified Mastodon user across multiple Mastodon instances. It can use the instances.social API to fetch a list of instances to search, or allow the user to specify their own list of instances.
14 |
15 | ## OPTIONS
16 |
17 | `USERNAME`
18 | The Mastodon username to search for (required).
19 |
20 | `-c`, `--count` COUNT
21 | Number of instances to search when using the API (default: 100).
22 |
23 | `-m`, `--min-users` MIN_USERS
24 | Minimum number of users an instance should have when using the API (default: 1000).
25 |
26 | `--include-down`
27 | Include down instances in the search when using the API.
28 |
29 | `--include-closed`
30 | Include instances with closed registrations when using the API.
31 |
32 | `-v`, `--verbose`
33 | Enable verbose output.
34 |
35 | `-i`, `--instances` INSTANCE [INSTANCE ...]
36 | List of Mastodon instances to search. If provided, the API will not be used.
37 |
38 | `-f`, `--file` FILE
39 | File containing a list of Mastodon instances to search. If provided, the API will not be used.
40 |
41 | ## EXAMPLES
42 |
43 | Search for user 'johndoe' using the default API settings:
44 |
45 | mastodon-user-search johndoe
46 |
47 | Search for user 'janedoe' on specific instances:
48 |
49 |     mastodon-user-search janedoe -i mastodon.social mstdn.social
50 |
51 | Search for user 'alexsmith' using instances from a file:
52 |
53 | mastodon-user-search -f instances.txt alexsmith
54 |
55 | Search for user 'sarahbrown' with verbose output:
56 |
57 | mastodon-user-search -v sarahbrown
58 |
59 | ## ENVIRONMENT
60 |
61 | `INSTANCES_API_KEY`
62 | API key for instances.social. Required if using the API to fetch instances.
63 |
64 | ## FILES
65 |
66 | If using the `-f` option, the specified file should contain one Mastodon instance domain per line.
67 |
68 | ## EXIT STATUS
69 |
70 | `0`
71 | Success
72 | `1`
73 | Failure (e.g., no instances available, API error)
74 |
75 | ## BUGS
76 |
77 | Report bugs to jascha@inforensics.ai
78 |
79 | ## AUTHOR
80 |
81 | Created by inforensics.ai
82 |
83 | ## SEE ALSO
84 |
85 | Mastodon API documentation: https://docs.joinmastodon.org/api/
86 | instances.social API: https://instances.social/api/doc/
87 |
--------------------------------------------------------------------------------
/mastodon-user-search/mastodon-user-search.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Mastodon User Search Script
4 | Created by inforensics.ai
5 |
6 | This script searches for a Mastodon user across multiple instances.
7 | It uses the instances.social API to fetch a list of instances to search,
8 | or allows the user to specify their own list of instances.
9 | """
10 |
11 | import requests
12 | import concurrent.futures
13 | import os
14 | import argparse
15 | from dotenv import load_dotenv
16 |
17 | load_dotenv()
18 |
19 | INSTANCES_API_KEY = os.getenv('INSTANCES_API_KEY')
20 |
21 | def get_instances_from_api(count=100, min_users=1000, include_down=False, include_closed=False):
22 | url = "https://instances.social/api/v1/instances/list"
23 | params = {
24 | "count": count,
25 | "min_users": min_users,
26 | "include_down": str(include_down).lower(),
27 | "include_closed": str(include_closed).lower(),
28 | "sort_by": "active_users",
29 | "sort_order": "desc"
30 | }
31 | headers = {"Authorization": f"Bearer {INSTANCES_API_KEY}"}
32 |
33 | try:
34 | response = requests.get(url, params=params, headers=headers)
35 | response.raise_for_status()
36 | data = response.json()
37 | return [instance['name'] for instance in data['instances']]
38 | except requests.RequestException as e:
39 | print(f"Error fetching instances: {e}")
40 | return []
41 |
42 | def get_instances_from_file(file_path):
43 | try:
44 | with open(file_path, 'r') as f:
45 | return [line.strip() for line in f if line.strip()]
46 | except IOError as e:
47 | print(f"Error reading file: {e}")
48 | return []
49 |
50 | def search_user(instance, username, verbose=False):
51 | if verbose:
52 | print(f"Searching {instance}...")
53 | try:
54 |         url = f"https://{instance}/api/v1/accounts/search"
55 |         response = requests.get(url, params={"q": username, "limit": 1}, timeout=5)
56 | if response.status_code == 200:
57 | results = response.json()
58 | if results and results[0]['username'].lower() == username.lower():
59 | return {
60 | 'instance': instance,
61 | 'account': results[0]
62 | }
63 | if verbose:
64 | print(f"User not found on {instance}")
65 | except requests.RequestException as e:
66 | if verbose:
67 | print(f"Error searching {instance}: {str(e)}")
68 | return None
69 |
70 | def search_mastodon_users(username, instances, verbose=False):
71 | with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
72 | future_to_instance = {executor.submit(search_user, instance, username, verbose): instance for instance in instances}
73 | for future in concurrent.futures.as_completed(future_to_instance):
74 | result = future.result()
75 | if result:
76 | return result
77 | return None
78 |
79 | def main():
80 | description = "Search for a Mastodon user across multiple instances."
81 | epilog = ("Created by inforensics.ai\n"
82 | "Report bugs to jascha@inforensics.ai")
83 |
84 | parser = argparse.ArgumentParser(description=description, epilog=epilog,
85 | formatter_class=argparse.RawDescriptionHelpFormatter)
86 | parser.add_argument("username", help="The Mastodon username to search for")
87 | parser.add_argument("-c", "--count", type=int, default=100, help="Number of instances to search (default: 100)")
88 | parser.add_argument("-m", "--min-users", type=int, default=1000, help="Minimum number of users an instance should have (default: 1000)")
89 | parser.add_argument("--include-down", action="store_true", help="Include down instances in the search")
90 | parser.add_argument("--include-closed", action="store_true", help="Include instances with closed registrations")
91 | parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
92 | parser.add_argument("-i", "--instances", nargs='+', help="List of Mastodon instances to search")
93 | parser.add_argument("-f", "--file", help="File containing a list of Mastodon instances to search")
94 |
95 | args = parser.parse_args()
96 |
97 | print("Mastodon User Search Script")
98 | print("Created by inforensics.ai")
99 | print()
100 |
101 | if args.verbose:
102 | print("Verbose mode enabled")
103 |
104 | if args.instances:
105 | instances = args.instances
106 | print(f"Using {len(instances)} instances provided via command line.")
107 | elif args.file:
108 | instances = get_instances_from_file(args.file)
109 | print(f"Using {len(instances)} instances from file: {args.file}")
110 | else:
111 | print("Fetching list of instances from API...")
112 | instances = get_instances_from_api(count=args.count, min_users=args.min_users,
113 | include_down=args.include_down, include_closed=args.include_closed)
114 |
115 | if not instances:
116 | print("No instances available to search. Please check your input or API key.")
117 | return
118 |
119 | print(f"Searching for user @{args.username} across {len(instances)} instances...")
120 | result = search_mastodon_users(args.username, instances, args.verbose)
121 |
122 | if result:
123 | account = result['account']
124 | print(f"\nUser found on {result['instance']}:")
125 | print(f"Username: @{account['username']}")
126 | print(f"Display name: {account['display_name']}")
127 | print(f"Account URL: {account['url']}")
128 | else:
129 | print(f"\nUser @{args.username} not found on any of the searched instances.")
130 |
131 | if __name__ == "__main__":
132 | main()
133 |
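134 | # Example invocation (a sketch; assumes INSTANCES_API_KEY is set in .env when
135 | # no instance list is supplied):
136 | #   ./mastodon-user-search.py -v -c 50 johndoe
137 | #   ./mastodon-user-search.py johndoe -f masto-servers-list.txt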
--------------------------------------------------------------------------------
/nostr-user-search/nostr-relay-list.txt:
--------------------------------------------------------------------------------
1 | anthro.cc/relay
2 | lunchbox.sandwich.farm
3 | relay.n057r.club
4 | nostr.stakey.net
5 | user.kindpag.es
6 | nostr.sidnlabs.nl
7 | nostr.at
8 | relay.nostr.net
9 | nostr.dbtc.link
10 | nostr.sebastix.dev
11 | purplepag.es
12 | dvms.f7z.io
13 | nostr01.counterclockwise.io
14 | nostr.fractalized.net
15 | relay.varke.eu
16 | 030939.xyz
17 | soloco.nl
18 | uk.purplerelay.com
19 | 140.f7z.io
20 | eu.purplerelay.com
21 | fl.purplerelay.com
22 | relay.highlighter.com
23 | ir.purplerelay.com
24 | relay.wellorder.net
25 | nostr-pub.wellorder.net
26 | nostr.myshosholoza.co.za
27 | nostr.naut.social
28 | nostr.sebastix.social
29 | relay.exit.pub
30 | relay.lnpay.me
31 | relay.nostr.lighting
32 | bitcoinmaximalists.online
33 | nostr.lu.ke
34 | nostr.hashrelay.com
35 | relay.nquiz.io
36 | replicatr.fractalized.net
37 | relay.nsecbunker.com
38 | rnostr.fractalized.net
39 | nostr.hashbang.nl
40 | a.nos.lol
41 | relay2.nostrchat.io
42 | nostr.1f52b.xyz
43 | nostr-relay.app
44 | nostr.swiss-enigma.ch
45 | nostr.cercatrova.me
46 | nostr.jfischer.org
47 | nostr.ingwie.me
48 | nr.rosano.ca
49 | nostr.tavux.tech
50 | private.red.gb.net
51 | relay.nostrocket.org
52 | nostr.polonkai.eu
53 | paid.nostrified.org
54 | nostrrelay.com
55 | relay.nostrati.com
56 | relay.kisiel.net.pl
57 | nostr.flamingo-mail.com
58 | relay.nostrview.com
59 | relay.nostromo.social
60 | ch.purplerelay.com
61 | relay.magnifiq.tech
62 | nostr.manasiwibi.com
63 | nerostr.xmr.rocks
64 | purplerelay.com
65 | de.purplerelay.com
66 | me.purplerelay.com
67 | relay.nosto.re
68 | relay.hakua.xyz
69 | nostr.sathoarder.com
70 | knostr.neutrine.com
71 | nostr2.sanhauf.com
72 | nostr.xmr.rocks
73 | obiurgator.thewhall.com
74 | nostr.skitso.business
75 | nostr.dodge.me.uk
76 | relay.snort.social
77 | ae.purplerelay.com
78 | njump.me
79 | nostr.dumango.com
80 | nostr1.daedaluslabs.io
81 | nostr.daedaluslabs.io
82 | in.purplerelay.com
83 | nostr.oxtr.dev
84 | relay.wikifreedia.xyz
85 | adult.18plus.social
86 | relay.ingwie.me
87 | nostr.madco.me
88 | nostr.yuv.al
89 | klepak.online
90 | e.nos.lol
91 | pyramid.fiatjaf.com
92 | relay.urbanzap.space
93 | relay.nostrified.org
94 | relay.primal.net
95 | relay.nostr.hach.re
96 | relay.martien.io
97 | relay.camelus.app
98 | nsrelay.assilvestrar.club
99 | relay.stoner.com
100 | nostr.lopp.social
101 | relay.minibits.cash
102 | nostr.self-determined.de
103 | nostr.blockpower.capital
104 | nostrasia.mom
105 | nostr.hubmaker.io
106 | notes.miguelalmodo.com
107 | nostr.noones.com
108 | nosflare.royalgarter.workers.dev
109 | nostr.ussenterprise.xyz
110 | relay.nostr.jabber.ch
111 | nostr.t-rg.ws
112 | nostr.cizmar.net
113 | nostr.koning-degraaf.nl
114 | relay.layer.systems
115 | nostr.256k1.dev
116 | relay.nostr.nu
117 | nostr.cahlen.org
118 | relay.guggero.org
119 | nostr3.daedaluslabs.io
120 | nostr2.daedaluslabs.io
121 | nostr.mom
122 | pl.czas.xyz
123 | relay.lumina.rocks
124 | nostr.kosmos.org
125 | relay.moinsen.com
126 | nostr.data.haus
127 | relay.vengeful.eu
128 | relay.nostrplebs.com
129 | nos.lol
130 | at.nostrworks.com
131 | nostr.sats.li
132 | nostr.heavyrubberslave.com
133 | nostr-verif.slothy.win
134 | relay.nostr.bg
135 | countries.fiatjaf.com
136 | nostr.mutinywallet.com
137 | ftp.halifax.rwth-aachen.de/nostr
138 | relay.angor.io
139 | relay.nip05.social
140 | relay.dolu.dev
141 | relay.roli.social
142 | relay2.angor.io
143 | relay.nostrasia.net
144 | relay.oh-happy-day.xyz
145 | nostr.l00p.org
146 | relay.nostrhub.fr
147 | echo.websocket.org
148 | nostr.hifish.org
149 | nostr.codingarena.top
150 | nostr.se7enz.com
151 | privateisland.club
152 | relay.nostr.directory
153 | nostr.slothy.win
154 | nostr.nodeofsven.com
155 | ivy.hnslogin.world
156 | nostr.metamadeenah.com
157 | nostr.0x7e.xyz
158 | nostr.bitcoinist.org
159 | nostr.heliodex.cf
160 | relay.zap.store
161 | relay.nsec.app
162 | greensoul.space
163 | relay.test.nquiz.io
164 | relay.sharegap.net
165 | nostr.massmux.com
166 | relay2.nostrasia.net
167 | nostr.schorsch.fans
168 | nostr.petrkr.net/strfry
169 | nostr.kolbers.de
170 | cfrelay.puhcho.workers.dev
171 | nostr.javi.space
172 | fiatjaf.com
173 | custom.fiatjaf.com
174 | nostr.btc-library.com
175 | ditto.puhcho.me/relay
176 | nostr.vulpem.com
177 | relay1.nostrchat.io
178 | relay.piazza.today
179 | nostr.openhoofd.nl
180 | relay.nostrich.cc
181 | nostr.cheeserobot.org
182 | nostr.thurk.org
183 | relay.hamnet.io
184 | relay.twixy.app
185 | nostr.donky.social
186 | nostr.carroarmato0.be
187 | relay.zone667.com
188 | relay.getalby.com/v1
189 | nostria.space
190 | oregonbitcoiners.com/relay
191 | nostr.gleeze.com
192 | global-relay.cesc.trade
193 | freelay.sovbit.host
194 | relay.azzamo.net
195 | relay.bitcoinbarcelona.xyz
196 | xmr.ithurtswhenip.ee
197 | nostr.neilalexander.dev
198 | brb.io
199 | groups.fiatjaf.com
200 | nostr.hexhex.online
201 | nostr.spaceshell.xyz
202 | nostr.searx.is
203 | ithurtswhenip.ee
204 | relay.minibolt.info
205 | relay.nostrcheck.me
206 | cache2.primal.net/v1
207 | strfry.openhoofd.nl
208 | relay.nostr.vet
209 | nostr.rubberdoll.cc
210 | nostr.satstralia.com
211 | nostr.sovbit.host
212 | nostr.wine
213 | relay.nostr.pt
214 | relay.nostr.wf
215 | relay.rootservers.eu
216 | relay.sepiropht.me
217 | nostr.portemonero.com
218 | freedomhub1.reelnetwork.eu
219 | nproxy.kristapsk.lv
220 | nostr.lifeonbtc.xyz
221 | relay.f7z.io
222 | xxmmrr.shogatsu.ovh
223 | relay.proxymana.net
224 | feeds.nostr.band/nostrhispano
225 | nostr.bitcoinplebs.de
226 | nostr.plantroon.com
227 | relay.nostr.band
228 | node.coincreek.com/nostrclient/api/v1/relay
229 | xmr.usenostr.org
230 | bouncer.minibolt.info
231 | thenewregi.me/nostr
232 | adre.su
233 | unostr.site
234 | relay.gasteazi.net
235 | nostr.jcloud.es
236 | btc.klendazu.com
237 | relay.s-w.art
238 | cache0.primal.net/cache17
239 | nostr-relay.sn-media.com
240 | relay.agorist.space
241 | relay.orangepill.ovh
242 | nostr.inosta.cc
243 | relaypag.es
244 | nosflare.plebes.fans
245 | cache1.primal.net/v1
246 | nostr.huszonegy.world
247 | eosla.com
248 | nosdrive.app/relay
249 | lnbits.michaelantonfischer.com/nostrrelay/michaelantonf
250 | relay.noswhere.com
251 | mastodon.cloud/api/v1/streaming
252 | strfry.iris.to
253 | polnostr.xyz
254 | nostr.thesamecat.io
255 | profiles.nostr1.com
256 | nostrtalk.nostr1.com
257 | fiatjaf.nostr1.com
258 | cryptowolf.nostr1.com
259 | mleku.nostr1.com
260 | nostr.primz.org
261 | tigs.nostr1.com
262 | christpill.nostr1.com
263 | rkgk.moe
264 | bevo.nostr1.com
265 | theforest.nostr1.com
266 | zap.nostr1.com
267 | avb.nostr1.com
268 | frens.nostr1.com
269 | gardn.nostr1.com
270 | vitor.nostr1.com
271 | moonboi.nostr1.com
272 | bostr.bitcointxoko.com
273 | galaxy13.nostr1.com
274 | 21ideas.nostr1.com
275 | shawn.nostr1.com
276 | dreamofthe90s.nostr1.com
277 | island.nostr1.com
278 | studio314.nostr1.com
279 | wbc.nostr1.com
280 | ren.nostr1.com
281 | support.nostr1.com
282 | blowater.nostr1.com
283 | hayloo88.nostr1.com
284 | riray.nostr1.com
285 | feedstr.nostr1.com
286 | nostrdevs.nostr1.com
287 | marmot.nostr1.com
288 | nostr.8777.ch
289 | chefstr.nostr1.com
290 | frjosh.nostr1.com
291 | milwaukie.nostr1.com
292 | prism.nostr1.com
293 | nostr21.com
294 | relay.jerseyplebs.com
295 | pater.nostr1.com
296 | relay.usefusion.ai
297 | pleb.cloud
298 | datagrave.wild-vibes.ts.net/nostr
299 | lnbits.btconsulting.nl/nostrrelay/nostr
300 | nostrua.com
301 | slick.mjex.me
302 | relay.satlantis.io
303 | svjgceikshv.sylonet.io
304 | relay.neuance.net
305 | nostr.dakukitsune.ca
306 | relay.bitcoinpark.com
307 | nostr.roundrockbitcoiners.com
308 | relay.mypear.xyz
309 | nostr.polyserv.xyz
310 | relay.hrf.org
311 | relay.oke.minds.io/nostr/v1/ws
312 | kmc-nostr.amiunderwater.com
313 | relay-testnet.k8s.summary.news
314 | nostrue.com
315 | nostr.coincrowd.fund
316 | nostr.netbros.com
317 | relay.deezy.io
318 | dev.nostrplayground.com
319 | relay.nostrosity.com
320 | relay.toastr.space
321 | relay.nos.social
322 | bouncer.nostree.me
323 | khatru-test.puhcho.me
324 | relay.gnostr.cloud
325 | relay.kamp.site
326 | slime.church/relay
327 | nostr.cloud.vinney.xyz
328 | nostr.cybercan.click
329 | relay.orange-crush.com
330 | relay.artx.market
331 | relay.stemstr.app
332 | relay.alex71btc.com
333 | relay.geyser.fund
334 | nostr-dev.zbd.gg
335 | relay.livefreebtc.com
336 | nostr.drafted.pro
337 | relay.staging.geyser.fund
338 | nostr.zbd.gg
339 | rly.bopln.com
340 | nostr.kungfu-g.rip
341 | rly.nostrkid.com
342 | relay.nostr.sc
343 | nostr.novacisko.cz
344 | relay.satoshidnc.com
345 | relay.devstr.org
346 | nostr.topeth.info
347 | nostr.bitcoiner.social
348 | relay.fanfares.io
349 | relay.staging.flashapp.me
350 | atl.purplerelay.com
351 | relay.wavlake.com
352 | nostr.uthark.com
353 | relay.s3x.social
354 | relay.thoreau.fyi
355 | relay.shitforce.one
356 | relay.mattybs.lol
357 | relay.mostr.pub
358 | nostr.fbxl.net
359 | lnbits.eldamar.icu/nostrrelay/relay
360 | relay.tribly.co
361 | us.nostr.wine
362 | dev-relay.kube.b-n.space
363 | hotrightnow.nostr1.com
364 | relay.tagayasu.xyz
365 | relay.famstr.xyz
366 | relay.mostro.network
367 | ca.purplerelay.com
368 | bitcoiner.social
369 | us.purplerelay.com
370 | relay.casualcrypto.date
371 | nostr.lorentz.is
372 | nostr.mining.sc
373 | relay.earthly.land
374 | nostr.randomdevelopment.biz
375 | relay.minds.com/nostr/v1/ws
376 | relay.strfront.com
377 | strfry.chatbett.de
378 | thecitadel.nostr1.com
379 | nostr.gerbils.online
380 | relay.magiccity.live
381 | hodlbod.coracle.tools
382 | paid.nostr.lc
383 | nostr.liberty.fans
384 | relay.farscapian.com
385 | relay.illuminodes.com
386 | relay.oldcity-bitcoiners.info
387 | auth.nostr1.com
388 | bitstack.app
389 | nostr.psychoet.nexus
390 | monitorlizard.nostr1.com
391 | nostr-relay.philipcristiano.com
392 | relay5.bitransfer.org
393 | nostr.happytavern.co
394 | nostr.notribe.net
395 | brisceaux.com
396 | relay.arrakis.lat
397 | relay.corpum.com
398 | relay.roygbiv.guide
399 | nostr.malin.onl
400 | relay.notmandatory.org
401 | nostr.overmind.lol
402 | nostr.ser1.net
403 | nostr.fmt.wiz.biz
404 | nostrasia.casa
405 | junxingwang.org
406 | merrcurrup.railway.app
407 | offchain.pub
408 | nostr.pailakapo.com
409 | nostr.n-ti.me
410 | nostr.dvdt.dev
411 | relay1.xfire.to:
412 | nostr.ch3n2k.com
413 | nostr.screaminglife.io
414 | strfry-test-external.summary.news
415 | relay.mutinywallet.com
416 | chorus.pjv.me
417 | relay.causes.com
418 | nostr.fort-btc.club
419 | relay.cosmicbolt.net
420 | strfry.corebreach.com
421 | anon.computer
422 | nostr.pjv.me
423 | relay.sincensura.org
424 | nostr.lnwallet.app
425 | nostr.goldfigure.ai
426 | cellar.nostr.wine
427 | nostr.corebreach.com
428 | relay.satsdays.com
429 | relay.unknown.cloud
430 | relay.poster.place
431 | nostr-relay.algotech.io
432 | carlos-cdb.top
433 | relay.westernbtc.com
434 | nostril.cam
435 | rss.nos.social
436 | nostr.animeomake.com
437 | beta.nostril.cam
438 | relay.siamstr.com
439 | nostr.itsnebula.net
440 | nostr-relay.psfoundation.info
441 | nostr.sagaciousd.com
442 | onlynotes.lol
443 | jingle.carlos-cdb.top
444 | reactions.v0l.io
445 | nostr-02.dorafactory.org
446 | za.purplerelay.com
447 | af.purplerelay.com
448 | bucket.coracle.social
449 | nostr.semisol.dev
450 | satellite.hzrd149.com
451 | nostrrelay.win
452 | nostr.rocketnode.space
453 | yestr.me
454 | nostr.tools.global.id
455 | nostr-tbd.website
456 | nostr-02.yakihonne.com
457 | nostr.sectiontwo.org
458 | nostr.hekster.org
459 | nostr.intrepid18.com
460 | remnant.cloud
461 | relay.gems.xyz
462 | relay.benthecarman.com
463 | nostrpub.yeghro.site
464 | n.wingu.se
465 | nostr.sovrgn.co.za
466 | relay.nostrainsley.coracle.tools
467 | relay.phantom-power.coracle.tools
468 | nostr.lecturify.net
469 | us.nostr.land
470 | relay.bitmapstr.io
471 | nostr.extrabits.io
472 | sync.ucdavis.edu
473 | libretechsystems.nostr1.com
474 | nostrelay.yeghro.site
475 | r314y.0xd43m0n.xyz
476 | relay.sovereign-stack.org
477 | multiplexer.huszonegy.world
478 | nostr-relay.bitcoin.ninja
479 | relay.notoshi.win
480 | relay.chontit.win
481 | relap.orzv.workers.dev
482 | nostr.reckless.dev
483 | nostr-usa.ka1gbeoa21bnm.us-west-2.cs.amazonlightsail.com
484 | bostr.cx.ms
485 | nostr.bch.ninja
486 | nostr.easydns.ca
487 | nostrrelay.stewlab.win
488 | novoa.nagoya
489 | nostr.jaonoctus.dev
490 | tmp-relay.cesc.trade
491 | relay.johnnyasantos.com
492 | nostr.millonair.us
493 | loli.church
494 | br.purplerelay.com
495 | nostr.babyshark.win
496 | relay.hodl.ar
497 | nostr-relay.schnitzel.world
498 | relay.bitcreekwallet.org
499 | nostr.palandi.cloud
500 | ragnar-relay.com
501 | cl.purplerelay.com
502 | nostr.girino.org
503 | relay.beta.fogtype.com
504 | yabu.me
505 | nostr.cxplay.org
506 | relay.momostr.pink
507 | nostr.kloudcover.com
508 | nfrelay.app
509 | relay.lawallet.ar
510 | rsslay.ch3n2k.com
511 | followed-nine-reasonable-rocket.trycloudflare.com
512 | bostr.nokotaro.work
513 | testnet.plebnet.dev/nostrrelay/1
514 | creatr.nostr.wine
515 | powrelay.xyz
516 | bostr.nokotaro.com
517 | nostr-01.yakihonne.com
518 | bits.lnbc.sk/nostrclient/api/v1/relay
519 | nostr-03.dorafactory.org
520 | staging.yabu.me
521 | directory.yabu.me
522 | relay.nostrassets.com
523 | bostr.online
524 | nrelay.c-stellar.net
525 | nostrja-world-relays-test.heguro.com
526 | dev-relay.nostrassets.com
527 | relay.nostr.cymru
528 | srtrelay.c-stellar.net
529 | nostr.tbai.me:592
530 | relay.bostr.online
531 | test1.erbie.io
532 | nostr-relay.texashedge.xyz
533 | nostr.yuhr.org
534 | th2.nostr.earnkrub.xyz
535 | hk.purplerelay.com
536 | relay.damus.io
537 | relay.0v0.social
538 | nostr.atitlan.io
539 | nostr.1sat.org
540 | nostrvista.aaroniumii.com
541 | nostrja-kari.heguro.com
542 | relay.ohbe.me
543 | misskey.io
544 | kr.purplerelay.com
545 | tw.purplerelay.com
546 | n.ok0.org
547 | au.purplerelay.com
548 | nostr.uneu.net
549 | relay-jp.haniwar.com
550 | search.nos.today
551 | rebelbase.social/relay
552 | nostr.lbdev.fun
553 | relay.nostr.wirednet.jp
554 | jp.purplerelay.com
555 | relay.cxplay.org
556 | nostr.zoel.network
557 | relay.honk.pw
558 | primal-cache.mutinywallet.com/v1
559 | nostr.15b.blue
560 | nostr.brackrat.com
561 | nostr.stubby.dev
562 | relays.diggoo.com
563 | welcome.nostr.wine
564 | relay.nostpy.lol
565 | r.hostr.cc
566 | kiwibuilders.nostr21.net
567 | nostr.dmgd.monster
568 | wc1.current.ninja
569 | relay.verified-nostr.com
570 | relay.nostr.com.au
571 | history.nostr.watch
572 | nostr-relay.cbrx.io
573 | lightningrelay.com
574 | cfrelay.haorendashu.workers.dev
575 | kadargo.zw.is
576 | core.btcmap.org/nostrrelay/relay
577 | nostr.tegila.com.br
578 | nostr.faust.duckdns.org
579 | relay.lax1dude.net
580 | unhostedwallet.com
581 | inbox.nostr.wine
582 | nostr.cx.ms
583 | nostr.btczh.tw
584 | relay.mom
585 | relay.bitdevs.tw
586 | nostrich.adagio.tw
587 | nostr.filmweb.pl
588 | orangepiller.org
589 | mm.suzuqi.com
590 | nostr.decentony.com
591 | relay.refinery.coracle.tools
592 | nostr.danvergara.com
593 | relay.nostrid.com
594 | multiplextr.coracle.social
595 | bostr.erechorse.com
596 | relay.rebelbase.site
597 | lnbits.aruku.kro.kr/nostrrelay/private
598 | nostr.sudocarlos.com
599 | nostr.einundzwanzig.space
600 | relay.keychat.io
601 | lightning.benchodroff.com/nostrclient/api/v1/relay
602 | premis.one
603 | clnbits.diynodes.com/nostrclient/api/v1/relay
604 | freerelay.xyz
605 | relay.braydon.com
606 | submarin.online
607 | misskey.takehi.to
608 | nostrja-kari-nip50.heguro.com
609 | misskey.04.si
610 | social.camph.net
611 | nostr.thank.eu
612 | nostr.hashi.sbs
613 | relay.fountain.fm
614 | misskey.design
615 | eostagram.com
616 | misskey.art
617 | sushi.ski
618 | nostr.strits.dk
619 | relay.hawties.xyz
620 | nostr-relay.puhcho.me
621 | ditto.slothy.win/relay
622 | relay.nimo.cash
623 | problematic.network
624 | relay.nostar.org
625 | nostr.a2x.pub
626 | nostr-dev.wellorder.net
627 | nostr-verified.wellorder.net
628 | relay.0xchat.com
629 | relay.nostr.youlot.org
630 | nostr.globals.fans
631 | nostr.stupleb.cc
632 | gnost.faust.duckdns.org
633 | relay.nostrology.org
634 | nr.yay.so
635 | relay.nostrcn.com
636 | nostr.community.networks.deavmi.assigned.network
637 | nostr-1.afarazit.eu
638 | nostr.zoomout.chat
639 | nostr-relay.lnmarkets.com
640 | relay.nostrzoo.com
641 | nostr.1729.cloud
642 | nostr1.starbackr.me
643 | nostr.libertasprimordium.com
644 | nostr.datamagik.com
645 | nostr.mwmdev.com
646 | nostr.lnorb.com
647 | nostr.21m.fr
648 | africa.nostr.joburg
649 | nostr.lightning.contact
650 | relay.orangepill.dev
651 | expensive-relay.fiatjaf.com
652 | nostr.lukeacl.com
653 | nostr-alpha.gruntwerk.org
654 | relay.darker.to
655 | relay.nostrmoto.xyz
656 | nostr.rocks
657 | nostr.hackerman.pro
658 | relay.nostrprotocol.net
659 | astral.ninja
660 | nostrsatva.net
661 | nostr.h4x0r.host
662 | nostr.supremestack.xyz
663 | nostr.zerofeerouting.com
664 | nrelay.paypirate.xyz
665 | nostr.21crypto.ch
666 | relay.nostry.eu
667 | nostr.satoshi.fun
668 | nostr.misskey.cf
669 | relay.nostr.info
670 | nostr.jiashanlu.synology.me
671 | nostr.crypticthreadz.com
672 | nostr.lordkno.ws
673 | nostring.deno.dev
674 | nostr.21sats.net
675 | nostr.0x50.tech
676 | relay.punkhub.me
677 | relay.badgr.space
678 | nostr.up.railway.app
679 | noster.online
680 | nostr-relay.untethr.me
681 | nostr.bitcoin.sex
682 | nostr.bostonbtc.com
683 | rsslay.nostr.moe
684 | nostr.itas.li
685 | nostr.itssilvestre.com
686 | nostr.cr0.bar
687 | nostr.sg
688 | nostr.cro.social
689 | nostr.aozing.com
690 | nostr.lnprivate.network
691 | nostr3.actn.io
692 | nostr.coollamer.com
693 | nostrafrica.pcdkd.fyi
694 | relay.leafbodhi.com
695 | nostr.middling.mydns.jp
696 | nostr.rewardsbunny.com
697 | nostr-relay.alekberg.net
698 | nostr01.opencult.com
699 | relay.cubanoticias.info
700 | th1.nostream.earnkrub.xyz
701 | relay.nostr.xyz
702 | strfry.nostr-x.com
703 | nostr.arguflow.gg
704 | nostr.com.de
705 | relay.nostrich.land
706 | nostr.d11n.net
707 | nostr.indexafrica.io
708 | nostr.1661.io
709 | nostr.zerofiat.world
710 | no-str.org
711 | nostrical.com
712 | foolay.nostr.moe
713 | nostr-relay.freedomnode.com
714 | relay.nostr.pro
715 | jiggytom.ddns.net
716 | nostr.8e23.net
717 | nostr.jolt.run
718 | nostr.chaker.net
719 | nostr.developer.li
720 | relay.nostr.au
721 | lv01.tater.ninja
722 | nostr.fractalized.ovh
723 | nostr.xpedite-tech.com
724 | nostr-rs-relay.phamthanh.me
725 | nostr.digitalreformation.info
726 | nostr.demovement.net
727 | nostr01.vida.dev
728 | nostr.openchain.fr
729 | nostr.600.wtf
730 | relay.codl.co
731 | nostr-relay.smoove.net
732 | relay.sendstr.com
733 | merrcurr.up.railway.app
734 | nostr.fluidtrack.in
735 | relay.nostrfiles.dev
736 | nostr.pobblelabs.org
737 | student.chadpolytechnic.com
738 | relay.21spirits.io
739 | nostr.onsats.org
740 | nostr.ddns.net
741 | relay.taxi
742 | nostr.app.runonflux.io
743 | nostr.2b9t.xyz
744 | relay.plebstr.com
745 | nostr-relay.gkbrk.com
746 | nostream.denizenid.com
747 | nostr.bitcoin-21.org
748 | relay.valireum.net
749 | nostr.orba.ca
750 | nostr.nymsrelay.com
751 | relay.futohq.com
752 | nostr.island.network
753 | relay.nyx.ma
754 | nostr.dogdogback.com
755 | sound-money-relay.denizenid.com
756 | relay1.gems.xyz
757 | nostr.xmr.sh
758 | relay.valera.co
759 | nostr.dehein.org
760 | global.relay.red
761 | nostr.ahaspharos.de
762 | relay.nostrify.io
763 | rsr.uyky.net:30443
764 | relays.world/nostr
765 | relay.intify.io
766 | nostr.bingtech.tk
767 | blg.nostr.sx
768 | nostr.hodl.haus
769 | nostr.dncn.xyz
770 | nostr.whoop.ph
771 | btc-italia.online
772 | nostr2.actn.io
773 | nostr.zhongwen.world
774 | bitcoinforthe.lol
775 | relay.ryzizub.com
776 | nostr.maximacitadel.org
777 | relay.dev.bdw.to
778 | nostr.rocketstyle.com.au
779 | nostr.gruntwerk.org
780 | nostr-01.dorafactory.org
781 | nostr.beta3.dev
782 | relay.koreus.social
783 | nostr.barf.bz
784 | nostr.mrbits.it
785 | nostr.thegrungies.com
786 | relay.bleskop.com
787 | nostr.argdx.net
788 | nostr.retroware.run.place
789 | relay.vtuber.directory
790 | ca.orangepill.dev
791 | nostrich.friendship.tw
792 | nostr.fly.dev
793 | relay.mynostr.id
794 | relay.nostr.express
795 | nostr.f44.dev
796 | nostr-pub1.southflorida.ninja
797 | nostr.howtobitcoin.shop
798 | relay.nostr.ai
799 | nostr.handyjunky.com
800 | nostream.gromeul.eu
801 | free.nostr.lc
802 | nostr.bitcoin-basel.ch
803 | nostr.snblago.com
804 | nostr.hyperlingo.com
805 | the.hodl.haus
806 | relay.lexingtonbitcoin.org
807 | nostr.chainofimmortals.net
808 | nostr.blockchaincaffe.it
809 | relay.hllo.live
810 | nostr.cloudversia.com
811 | nostr.privoxy.io
812 | nostr.radixrat.com
813 | nostr2.namek.link
814 | nostr-relay.australiaeast.cloudapp.azure.com
815 | nostr-3.orba.ca
816 | relay.thes.ai
817 | public.nostr.swissrouting.com
818 | nostr.ivmanto.dev
819 | nostr-relay.usebitcoin.space
820 | relay.grunch.dev
821 | test.theglobalpersian.com
822 | nostr.0nyx.eu
823 | nostr.debancariser.com
824 | nostr.ethtozero.fr
825 | nostr.ono.re
826 | nostr.delo.software
827 | nostrex.fly.dev
828 | nostr-relay.trustbtc.org
829 | nostr.nordlysln.net
830 | nostr.coinos.io
831 | nostr-relay.derekross.me
832 | paid.spore.ws
833 | relay.haths.cc
834 | universe.nostrich.land
835 | nostream.nostrly.io
836 | longhorn.bgp.rodeo
837 | relay.nosphr.com
838 | nostro.cc
839 | nostr.unknown.place
840 | sg.unfiltered.zone
841 | satstacker.cloud
842 | nostr.simplex.icu
843 | nostr.actn.io
844 | relay-pub.deschooling.us
845 | nostr-relay.digitalmob.ro
846 | nostr.milou.lol
847 | nostr.soscary.net
848 | relay.nostrgraph.net
849 | relay.21baiwan.com
850 | spore.ws
851 | booger.pro
852 | pow32.nostr.land
853 | lbrygen.xyz
854 | nostr.sixteensixtyone.com
855 | nostr.lapalomilla.mx
856 | carnivore-diet-relay.denizenid.com
857 | nostr.formigator.eu
858 | relay.r3d.red
859 | nostr.reelnetwork.eu
860 | nostr.koutakou.tech
861 | relay.nostr-x.com
862 | relay.cryptocculture.com
863 | nostr.robotechy.com
864 | nostr.thibautrey.fr
865 | nostr.zenon.info
866 | no.str.watch
867 | nostr.hugo.md
868 | relay.nostr-latam.link
869 | relay.2nodez.com
870 | nostr-bg01.ciph.rs
871 | relayable.org
872 | nostr.xpersona.net
873 | relay.nostr.ae
874 | nostr-1.nbo.angani.co
875 | relay.nostr.ro
876 | relay.minds.io/nostr/v1/ws
877 | nostr.drss.io
878 | rsslay.nostr.net
879 | freedom-relay.herokuapp.com/ws
880 | nostr.azte.co
881 | no.contry.xyz
882 | nostr.herci.one
883 | nostr.gromeul.eu
884 | relay.nostr.smoak.haus
885 | nostr.mustardnodes.com
886 | swiss.nostr.lc
887 | nostr.uselessshit.co
888 | nostr.zkid.social
889 | th1.nostr.earnkrub.xyz
890 | 1.noztr.com
891 | nostr.rocket-tech.net
892 | nostre.cc
893 | puravida.nostr.land
894 | hist.nostr.land
895 | paid.no.str.cr
896 | nostr-relay.j3s7m4n.com
897 | relay.bitransfer.org
898 | relay.nostr.africa
899 | nostr.spleenrider.one
900 | nostr.jimc.me
901 | nostr.zebedee.cloud
902 | nostr.klabo.blog
903 | nostr.bongbong.com
904 | relay.nostr.net.in
905 | relay.boring.surf
906 | nostr.sandwich.farm
907 | relay.nostr.ch
908 | nostr.ncsa.illinois.edu
909 | nostr.sovbit.com
910 | nostr-sg.com
911 | nostr-2.zebedee.cloud
912 | deschooling.us
913 | relay.pineapple.pizza
914 | denostr.paiya.app
915 | relay.nostr.or.jp
916 | nostr.satsophone.tk
917 | nostream.kinchie.snowinning.com
918 | nostr.kollider.xyz
919 | relay.nostr.lu
920 | relay.zeh.app
921 | nostr.walletofsatoshi.com
922 | nostr.rdfriedl.com
923 | nostr-2.orba.ca
924 | relay.kronkltd.net
925 | relay.nostr.scot
926 | nostr.relayer.se
927 | nostr.utxo.lol
928 | nostr-relay.freeberty.net
929 | relay.nostr.moe
930 | nostr-2.afarazit.eu
931 | nostr.shmueli.org
932 | rs.nostr-x.com
933 | nostr.plebchain.org
934 | nostr.leximaster.com
935 | n.s.nyc
936 | freespeech.casa
937 | relay.nostrss.re
938 | deconomy-netser.ddns.net:2121
939 | nostr.ownscale.org
940 | nostrrelay.geforcy.com
941 | nostr.nikolaj.online
942 | nostr.w3ird.tech
943 | relay.dev.kronkltd.net
944 | wizards.wormrobot.org
945 | test.nostr.lc
946 | relay.nostr.hu
947 | private-nostr.v0l.io
948 | nostr.zaprite.io
949 | relay.nostr-relay.org
950 | nostr.namek.link
951 | node01.nostress.cc
952 | nostr-relay.nonce.academy
953 | nostr-relay.wolfandcrow.tech
954 | relay.bitid.nz
955 | nos.nth.io
956 | pow.nostrati.com
957 | nostr.oooxxx.ml
958 | nostr.pleb.network
959 | nostr-01.bolt.observer
960 | relay.vanderwarker.family
961 | zur.nostr.sx
962 | relay2.nostr.vet
963 | relay.current.fyi
964 | nostr.simatime.com
965 | nostr-pub.semisol.dev
966 | no.str.cr
967 | nostr-mv.ashiroid.com
968 | eden.nostr.land
969 | relay.nostriches.org
970 | nostr.shawnyeager.net
971 | relay.bitransfermedia.com
972 | nostr.coinsamba.com.br
973 | nostr.papanode.com
974 | spleenrider.herokuapp.com
975 | nostr.yael.at
976 | nostr.mikedilger.com
977 | relay.nostrich.de
978 | nostr.rikmeijer.nl
979 | relay.cynsar.foundation
980 | relay.nostrdocs.com
981 | mule.platanito.org
982 | nostr.planz.io
983 | relay.nostr.vision
984 | nostr.chainbits.co.uk
985 | nostr.localhost.re
986 | nostr.nicfab.eu
987 | nostr.blocs.fr
988 | nostr.mouton.dev
989 | nostr-relay.pcdkd.fyi
990 | nostr.coinfundit.com
--------------------------------------------------------------------------------
/nostr-user-search/nostr-user-search-man.md:
--------------------------------------------------------------------------------
1 | # NOSTR-USER-SEARCH(1)
2 |
3 | ## NAME
4 |
5 | nostr-user-search - Search for a Nostr user across multiple relays
6 |
7 | ## SYNOPSIS
8 |
9 | `nostr-user-search` [OPTIONS] IDENTIFIER
10 |
11 | ## DESCRIPTION
12 |
13 | The nostr-user-search script searches for a specified Nostr user across multiple Nostr relays. It uses a built-in default relay list unless you supply your own with the `-r` or `-f` options.
14 |
15 | ## OPTIONS
16 |
17 | `IDENTIFIER`
18 | The Nostr user identifier to search for (required). This can be either a public key (npub1... or hex) or a NIP-05 identifier (user@example.com).
19 |
20 | `-r`, `--relays` RELAY [RELAY ...]
21 | List of Nostr relays to search. If not provided, a default list of relays will be used.
22 |
23 | `-f`, `--file` FILE
24 | File containing a list of Nostr relays to search. If provided, this overrides the default relays and the `-r` option.
25 |
26 | `-v`, `--verbose`
27 | Enable verbose output.
28 |
29 | ## EXAMPLES
30 |
31 | Search for a user by public key using the default relays:
32 |
33 |     nostr-user-search npub1s...
34 |
35 | Search for a user by NIP-05 identifier using specific relays:
36 |
37 |     nostr-user-search -r wss://relay1.com wss://relay2.com user@example.com
38 |
39 | Search for a user using relays from a file:
40 |
41 |     nostr-user-search -f relays.txt npub1s...
42 |
43 | Search for a user with verbose output:
44 |
45 |     nostr-user-search -v npub1s...
46 |
47 | ## FILES
48 |
49 | If using the `-f` option, the specified file should contain one Nostr relay URL per line, each starting with `wss://`.
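
For example, a minimal relay file (drawn from the script's default relay list) might look like:

    wss://relay.damus.io
    wss://relay.nostr.band
    wss://nos.lol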
50 |
51 | ## EXIT STATUS
52 |
53 | - `0`
54 |   Success
55 | - `1`
56 |   Failure (e.g., no relays available, connection error)
57 |
58 | ## BUGS
59 |
60 | Report bugs to jascha@inforensics.ai
61 |
62 | ## AUTHOR
63 |
64 | Created by inforensics.ai
65 |
66 | ## SEE ALSO
67 |
68 | - Nostr Protocol: https://github.com/nostr-protocol/nostr
69 | - NIP-01 (Basic protocol flow description): https://github.com/nostr-protocol/nips/blob/master/01.md
70 | - NIP-05 (Mapping Nostr keys to DNS-based internet identifiers): https://github.com/nostr-protocol/nips/blob/master/05.md
71 |
--------------------------------------------------------------------------------
/nostr-user-search/nostr-user-search.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Nostr User Search Script
4 | Created by inforensics.ai
5 |
6 | This script searches for a Nostr user across multiple relays.
7 | It allows users to specify their own list of relays to search.
8 | """
9 |
10 | import asyncio
11 | import argparse
12 | import json
14 | import ssl
15 | import certifi
16 | import websockets
17 |
18 | DEFAULT_RELAYS = [
19 | "wss://relay.damus.io",
20 | "wss://relay.nostr.bg",
21 | "wss://nostr.zebedee.cloud",
22 | "wss://relay.nostr.band",
23 | "wss://nos.lol",
24 | ]
25 |
26 | async def search_user(relay_url, identifier, verbose=False):
27 | if verbose:
28 | print(f"Searching {relay_url}...")
29 |
30 | ssl_context = ssl.create_default_context(cafile=certifi.where())
31 |
32 | try:
33 | async with websockets.connect(relay_url, ssl=ssl_context) as websocket:
34 | # Check if the identifier is a public key (hex string) or a NIP-05 identifier
35 | if len(identifier) == 64 and all(c in '0123456789abcdef' for c in identifier.lower()):
36 | # It's likely a public key
37 | query = {
38 | "kinds": [0], # Metadata event
39 | "authors": [identifier],
40 | }
41 | else:
42 |                 # Treat anything else (npub1... strings are not decoded here) as a NIP-50 "search" term; relays without NIP-50 simply return EOSE
43 | query = {
44 | "kinds": [0], # Metadata event
45 | "search": identifier,
46 | }
47 |
48 | request = json.dumps(["REQ", "search", query])
49 | await websocket.send(request)
50 |
51 | while True:
52 | response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
53 | data = json.loads(response)
54 |                 if data[0] == "EVENT" and len(data) > 2 and data[2].get("kind") == 0:
55 | content = json.loads(data[2]["content"])
56 | return {
57 | "relay": relay_url,
58 | "pubkey": data[2]["pubkey"],
59 | "name": content.get("name", "Unknown"),
60 | "display_name": content.get("display_name", "Unknown"),
61 | "nip05": content.get("nip05", "Unknown"),
62 | }
63 | elif data[0] == "EOSE":
64 | break
65 |
66 |     except (websockets.exceptions.WebSocketException, asyncio.TimeoutError, OSError, json.JSONDecodeError) as e:
67 | if verbose:
68 | print(f"Error searching {relay_url}: {str(e)}")
69 | return None
70 |
71 | async def search_nostr_users(identifier, relays, verbose=False):
72 | tasks = [search_user(relay, identifier, verbose) for relay in relays]
73 | results = await asyncio.gather(*tasks)
74 | return next((result for result in results if result), None)
75 |
76 | def main():
77 | description = "Search for a Nostr user across multiple relays."
78 | epilog = ("Created by inforensics.ai\n"
79 | "Report bugs to jascha@inforensics.ai")
80 |
81 | parser = argparse.ArgumentParser(description=description, epilog=epilog,
82 | formatter_class=argparse.RawDescriptionHelpFormatter)
83 | parser.add_argument("identifier", help="The Nostr user identifier (public key or NIP-05) to search for")
84 | parser.add_argument("-r", "--relays", nargs='+', default=DEFAULT_RELAYS,
85 | help="List of Nostr relays to search (default: use a predefined list)")
86 | parser.add_argument("-f", "--file", help="File containing a list of Nostr relays to search")
87 | parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose output")
88 |
89 | args = parser.parse_args()
90 |
91 | print("Nostr User Search Script")
92 | print("Created by inforensics.ai")
93 | print()
94 |
95 | if args.verbose:
96 | print("Verbose mode enabled")
97 |
98 | if args.file:
99 | with open(args.file, 'r') as f:
100 |             relays = [s if s.startswith("wss://") else f"wss://{s}" for s in (line.strip() for line in f) if s]  # tolerate bare hostnames, as in the bundled relay list
101 | print(f"Using {len(relays)} relays from file: {args.file}")
102 | else:
103 | relays = args.relays
104 | print(f"Using {len(relays)} provided relays.")
105 |
106 | if not relays:
107 | print("No relays available to search. Please check your input.")
108 |         raise SystemExit(1)  # documented failure exit status
109 |
110 | print(f"Searching for user {args.identifier} across {len(relays)} relays...")
111 | result = asyncio.run(search_nostr_users(args.identifier, relays, args.verbose))
112 |
113 | if result:
114 | print(f"\nUser found on {result['relay']}:")
115 | print(f"Public Key: {result['pubkey']}")
116 | print(f"Name: {result['name']}")
117 | print(f"Display Name: {result['display_name']}")
118 | print(f"NIP-05: {result['nip05']}")
119 | else:
120 | print(f"\nUser {args.identifier} not found on any of the searched relays.")
121 |
122 | if __name__ == "__main__":
123 | main()
124 |
--------------------------------------------------------------------------------
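
The man page above accepts `npub1...` identifiers, but nostr-user-search.py only recognizes 64-character hex keys and otherwise falls back to a NIP-50 search. A minimal sketch of bech32 decoding (per NIP-19 / BIP-173, checksum verification omitted for brevity) that could convert an npub to the hex form the relay query expects; `npub_to_hex` is a hypothetical helper, not part of the script:

```python
# Sketch only: convert a bech32 npub1... key to 64-char hex (NIP-19).
# Assumes well-formed input; the 6-char bech32 checksum is dropped unverified.
BECH32_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def npub_to_hex(npub: str) -> str:
    hrp, sep, data_part = npub.lower().rpartition("1")
    if hrp != "npub" or not sep:
        raise ValueError("not an npub identifier")
    # Map characters to 5-bit values and drop the 6 checksum characters.
    values = [BECH32_CHARSET.index(c) for c in data_part][:-6]
    # Regroup the 5-bit values into 8-bit bytes.
    acc, bits, out = 0, 0, bytearray()
    for v in values:
        acc = (acc << 5) | v
        bits += 5
        if bits >= 8:
            bits -= 8
            out.append((acc >> bits) & 0xFF)
    return out.hex()
```

With this in place, `search_user` could branch on `identifier.startswith("npub1")` and decode before building the `authors` filter.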
/tweet-cache-search/tweet-cache-search-man.md:
--------------------------------------------------------------------------------
1 | # NAME
2 |
3 | tweet-cache-search.py - Search for cached tweets across various archiving and caching services
4 |
5 | # SYNOPSIS
6 |
7 | `tweet-cache-search.py [-h] [-u USERNAME] [-o]`
8 |
9 | # DESCRIPTION
10 |
11 | `tweet-cache-search.py` is a Python script that searches for cached tweets of a specified Twitter username across multiple archiving and caching services. It provides links to search results or cached pages from these services and optionally opens the results in your default web browser.
12 |
13 | The script searches the following services:
14 |
15 | - Wayback Machine
16 | - Google Cache
17 | - Ghost Archive
18 | - Bing
19 | - Yandex
20 | - Baidu
21 | - Internet Archive
22 | - WebCite
23 |
24 | # OPTIONS
25 |
26 | `-h, --help`
27 | Show the help message and exit.
28 |
29 | `-u USERNAME, --username USERNAME`
30 | Specify the Twitter username to search for. If not provided, the script will prompt for input.
31 |
32 | `-o, --open`
33 | Open the search results in the default web browser.
34 |
35 | # USAGE
36 |
37 | 1. Ensure you have Python 3 installed on your system.
38 | 2. Install the required library:
39 | ```
40 | pip install requests
41 | ```
42 | 3. Run the script with desired options:
43 | ```
44 | ./tweet-cache-search.py [-u USERNAME] [-o]
45 | ```
46 |
47 | # OUTPUT
48 |
49 | The script will display results for each service, showing either a link to the search results/cached page or a message indicating that no results were found. If the `-o` option is used, it will also open successful results in your default web browser.
50 |
51 | # EXAMPLES
52 |
53 | Search for a specific username:
54 | ```
55 | $ ./tweet-cache-search.py -u example_user
56 | ```
57 |
58 | Search for a username and open results in the browser:
59 | ```
60 | $ ./tweet-cache-search.py -u example_user -o
61 | ```
62 |
63 | Run the script interactively:
64 | ```
65 | $ ./tweet-cache-search.py
66 | Enter the Twitter username to search for: example_user
67 | ```
68 |
69 | # NOTES
70 |
71 | - This script provides links to search results or cached pages. It does not scrape or display the actual tweets.
72 | - Some services might have restrictions on automated access. Use this script responsibly and in accordance with each service's terms of use.
73 | - The script's effectiveness depends on the availability and indexing of content by the searched services.
74 | - Opening results in the browser (`-o` option) will attempt to open a new tab or window for each successful result.
75 |
76 | # SEE ALSO
77 |
78 | - Python Requests library documentation: https://docs.python-requests.org/
79 | - Python argparse module: https://docs.python.org/3/library/argparse.html
80 | - Python webbrowser module: https://docs.python.org/3/library/webbrowser.html
81 |
82 | # AUTHOR
83 |
84 | This script and man page were created with the assistance of an [Inforensics](https://inforensics.ai) AI language model.
85 |
86 | # BUGS
87 |
88 | Please report any bugs or issues via GitHub Issues.
89 |
90 | # COPYRIGHT
91 |
92 | This is free software: you are free to change and redistribute it under the terms of the MIT [LICENSE](../LICENSE).
93 |
--------------------------------------------------------------------------------
/tweet-cache-search/tweet-cache-search.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | import requests
4 | from urllib.parse import quote_plus
5 | import argparse
6 | import webbrowser
7 |
8 | def safe_request(url):
9 | try:
10 | response = requests.get(url, timeout=10)
11 | response.raise_for_status()
12 | return response
13 | except requests.RequestException as e:
14 | return f"Error accessing {url}: {str(e)}"
15 |
16 | def search_wayback_machine(username):
17 | url = f"https://web.archive.org/web/*/https://twitter.com/{username}"
18 | result = safe_request(url)
19 | if isinstance(result, requests.Response) and result.status_code == 200:
20 | return f"Wayback Machine results: {url}"
21 | return f"No results found on Wayback Machine: {result}"
22 |
23 | def search_google_cache(username):
24 | url = f"https://webcache.googleusercontent.com/search?q=cache:https://twitter.com/{username}"
25 | result = safe_request(url)
26 | if isinstance(result, requests.Response) and result.status_code == 200:
27 | return f"Google Cache results: {url}"
28 | return f"No results found on Google Cache: {result}"
29 |
30 | def search_ghost_archive(username):
31 | url = f"https://ghostarchive.org/search?term={username}"
32 | result = safe_request(url)
33 | if isinstance(result, requests.Response) and result.status_code == 200:
34 | return f"Ghost Archive results: {url}"
35 | return f"No results found on Ghost Archive: {result}"
36 |
37 | def search_bing(username):
38 | url = f"https://www.bing.com/search?q=site:twitter.com+{quote_plus(username)}"
39 | result = safe_request(url)
40 | if isinstance(result, requests.Response) and result.status_code == 200:
41 | return f"Bing search results: {url}"
42 | return f"No results found on Bing: {result}"
43 |
44 | def search_yandex(username):
45 | url = f"https://yandex.com/search/?text=site:twitter.com+{quote_plus(username)}"
46 | result = safe_request(url)
47 | if isinstance(result, requests.Response) and result.status_code == 200:
48 | return f"Yandex search results: {url}"
49 | return f"No results found on Yandex: {result}"
50 |
51 | def search_baidu(username):
52 | url = f"https://www.baidu.com/s?wd=site:twitter.com+{quote_plus(username)}"
53 | result = safe_request(url)
54 | if isinstance(result, requests.Response) and result.status_code == 200:
55 | return f"Baidu search results: {url}"
56 | return f"No results found on Baidu: {result}"
57 |
58 | def search_internet_archive(username):
59 | url = f"https://archive.org/search.php?query=twitter.com%2F{username}"
60 | result = safe_request(url)
61 | if isinstance(result, requests.Response) and result.status_code == 200:
62 | return f"Internet Archive search results: {url}"
63 | return f"No results found on Internet Archive: {result}"
64 |
65 | def search_webcite(username):
66 | url = f"http://webcitation.org/query?url=https://twitter.com/{username}"
67 | result = safe_request(url)
68 | if isinstance(result, requests.Response) and result.status_code == 200:
69 | return f"WebCite results: {url}"
70 | return f"No results found on WebCite: {result}"
71 |
72 | def main():
73 | parser = argparse.ArgumentParser(description="Search for cached tweets across various services.")
74 | parser.add_argument("-u", "--username", help="Twitter username to search for")
75 | parser.add_argument("-o", "--open", action="store_true", help="Open results in default web browser")
76 | args = parser.parse_args()
77 |
78 | if args.username:
79 | username = args.username
80 | else:
81 | username = input("Enter the Twitter username to search for: ")
82 |
83 | services = [
84 | search_wayback_machine,
85 | search_google_cache,
86 | search_ghost_archive,
87 | search_bing,
88 | search_yandex,
89 | search_baidu,
90 | search_internet_archive,
91 | search_webcite
92 | ]
93 |
94 | for service in services:
95 | result = service(username)
96 | print(result)
97 | if args.open and "results:" in result:
98 |             url = result.split(": ", 1)[1]
99 | webbrowser.open(url)
100 | print("-" * 50)
101 |
102 | if __name__ == "__main__":
103 | main()
104 |
--------------------------------------------------------------------------------
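
One limitation of the checks above: a 200 response only proves that a search page loads, not that any captures of the profile exist. For the Wayback Machine specifically, the CDX API reports actual captures. A sketch under the assumption that the API's JSON output uses its first row as a column header; `wayback_has_snapshots` and `cdx_has_captures` are hypothetical helpers, not part of the script:

```python
import json
import urllib.request

def cdx_has_captures(body: bytes) -> bool:
    # The CDX JSON output is a list of rows; row 0 is the field header,
    # so any additional row means at least one capture exists.
    rows = json.loads(body or b"[]")
    return len(rows) > 1

def wayback_has_snapshots(username: str) -> bool:
    # Ask the Wayback Machine CDX API for at most one capture of the profile URL.
    url = ("https://web.archive.org/cdx/search/cdx"
           f"?url=twitter.com/{username}&output=json&limit=1")
    with urllib.request.urlopen(url, timeout=10) as resp:
        return cdx_has_captures(resp.read())
```

This could replace the bare status-code check in `search_wayback_machine`, turning "the search page loaded" into "at least one capture exists".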