├── .gitignore
├── LICENSE
├── README.md
├── jsondetective
│   ├── __init__.py
│   ├── analyzer.py
│   ├── cli.py
│   └── dataclass_gen.py
├── pyproject.toml
└── setup.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
/venv

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2024 Tim Farrelly

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# JSONDetective 🔍

A powerful tool for analyzing JSON data and inferring its schema. Built to handle large, complex JSON files by automatically detecting and abstracting patterns in your data.

Key features:
- Automatically recognizes and normalizes date formats in both keys and values
- Detects optional fields by analyzing multiple instances
- Abstracts repeated patterns into clean, readable schemas

## Quick Start

```bash
# Install
pip install jsondetective

# Use
jsondetective data.json
```

## Pattern Recognition Example

Given JSON with repeated date-keyed entries like:
```json
{
  "2021-08-24": {"views": 100, "likes": 20},
  "2021-08-25": {"views": 150, "likes": 30},
  "2021-08-26": {"views": 200, "likes": 40}
}
```

JSONDetective recognizes the pattern and abstracts it as:
```json
{
  "yyyy-mm-dd_1": {
    "type": "object",
    "properties": {
      "views": {"type": "integer"},
      "likes": {"type": "integer"}
    }
  }
}
```
Note: The `_1` suffix indicates the nesting level in the JSON structure.
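
## Optional Field Example

Fields that appear in only some of the analyzed records are flagged as optional. For instance, this (illustrative) list of two records:
```json
[
  {"name": "Ana", "theme": "dark"},
  {"name": "Ben"}
]
```

yields a schema along these lines (sample values and their order may vary):
```json
{
  "name": {
    "type": "string",
    "examples": ["Ana", "Ben"]
  },
  "theme": {
    "type": "string",
    "examples": ["dark"],
    "optional": true
  }
}
```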

## Complex Structure Example

It also handles nested structures with various data types and patterns:

```json
{
  "users": [
    {
      "id": "123",
      "joined_date": "2024-01-15",
      "last_active": "2024-03-20T15:30:00Z",
      "activity": {
        "2024-03-19": {"posts": 5},
        "2024-03-20": {"posts": 3}
      },
      "preferences": {
        "theme": "dark",
        "notifications": true
      }
    }
  ],
  // many more users...
}
```

Produces this clean schema:
```json
{
  "users": {
    "type": "array",
    "items": {
      "id": {
        "type": "string",
        "examples": ["123"]
      },
      "joined_date": {
        "type": "string",
        "format": "yyyy-mm-dd"
      },
      "last_active": {
        "type": "string",
        "format": "datetime"
      },
      "activity": {
        "type": "object",
        "properties": {
          "yyyy-mm-dd_2": {
            "type": "object",
            "properties": {
              "posts": {"type": "integer"}
            }
          }
        }
      },
      "preferences": {
        "type": "object",
        "properties": {
          "theme": {
            "type": "string",
            "optional": true
          },
          "notifications": {
            "type": "boolean"
          }
        }
      }
    }
  }
}
```

## Features

- **Intelligent Pattern Detection**:
  - Recognizes date formats in both keys and values
  - Abstracts repeated structures
  - Identifies optional fields
- **Schema Intelligence**:
  - Detects data types
  - Identifies nested structures
  - Provides example values
- **Experimental**: Python dataclass generation (beta feature)

## Advanced Usage

### Experimental Python Dataclass Generation

```bash
# Print dataclass to console
jsondetective data.json -d

# Save to file
jsondetective data.json -d -o my_dataclasses.py

# Custom class name
jsondetective data.json -d -c MyDataClass
```

### CLI Options

```bash
jsondetective [JSON_FILE] [OPTIONS]

Options:
  -d, --create-dataclass  Generate Python dataclass code
  -o, --output-path PATH  Save dataclass to file
  -c, --class-name TEXT   Name for the root dataclass (default: Root)
  --help                  Show this message and exit
```
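
### Python API

The same analysis is available as a library. A minimal sketch using the `JSONSchemaAnalyzer` class exported by the package (assuming a local `data.json`):

```python
from jsondetective import JSONSchemaAnalyzer

analyzer = JSONSchemaAnalyzer()
analyzer.analyze_file("data.json")  # loads the file and infers its schema
analyzer.print_schema()             # pretty-prints the schema to the console
```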

## Why Use JSONDetective?

- **Pattern Recognition**: Automatically detects and abstracts repeated patterns
- **Date Handling**: Intelligent date format recognition and normalization
- **Large Files**: Efficiently processes and summarizes large JSON structures
- **Clear Output**: Clean, readable schema representation
- **Time Saving**: No manual inspection of large JSON files needed

--------------------------------------------------------------------------------
/jsondetective/__init__.py:
--------------------------------------------------------------------------------
from .analyzer import JSONSchemaAnalyzer
from .dataclass_gen import schema_to_dataclass_file, generate_dataclass_code

__version__ = "1.0.2"
__all__ = ["JSONSchemaAnalyzer", "schema_to_dataclass_file", "generate_dataclass_code"]

--------------------------------------------------------------------------------
/jsondetective/analyzer.py:
--------------------------------------------------------------------------------
import json
from collections import defaultdict
from datetime import datetime
from typing import Dict, Any, Union, Tuple, TextIO

from rich.console import Console
from rich.syntax import Syntax


class JSONSchemaAnalyzer:
    """Analyzes JSON structures to infer schema and generate dataclasses."""

    def __init__(self):
        self.schema_structure = {}
        self.max_unique_samples = 5
        self.date_counters = defaultdict(int)
        self.field_occurrence = defaultdict(int)
        self.total_objects_analyzed = 0

    @staticmethod
    def load_json(file_or_path: Union[str, TextIO]) -> Any:
        """Load JSON from a file path or file object."""
        try:
            if isinstance(file_or_path, str):
                with open(file_or_path, 'r', encoding='utf-8') as f:
                    return json.load(f)
            return json.load(file_or_path)
        except UnicodeDecodeError:
            # Retrying with alternative encodings is only possible for paths;
            # an already-open file object has a fixed encoding.
            if not isinstance(file_or_path, str):
                raise ValueError("Unable to decode JSON from the given file object")
            for encoding in ('utf-8-sig', 'latin-1'):
                try:
                    with open(file_or_path, 'r', encoding=encoding) as f:
                        return json.load(f)
                except (UnicodeDecodeError, json.JSONDecodeError):
                    continue
            raise ValueError("Unable to decode JSON file with supported encodings")

    @staticmethod
    def _get_type_name(value: Any) -> str:
        if value is None:
            return 'null'
        elif isinstance(value, bool):
            return 'boolean'
        elif isinstance(value, int):
            return 'integer'
        elif isinstance(value, float):
            return 'float'
        elif isinstance(value, str):
            return 'string'
        elif isinstance(value, list):
            return 'array'
        elif isinstance(value, dict):
            return 'object'
        return str(type(value).__name__)

    def _detect_date_pattern(self, key: str) -> Tuple[bool, str]:
        """Check if a string is a date and return its pattern format."""
        date_patterns = {
            '%Y-%m-%d': 'yyyy-mm-dd',        # 2021-11-08
            '%d-%m-%Y': 'dd-mm-yyyy',        # 08-11-2021
            '%Y/%m/%d': 'yyyy/mm/dd',        # 2021/11/08
            '%d/%m/%Y': 'dd/mm/yyyy',        # 08/11/2021
            '%Y%m%d': 'yyyymmdd',            # 20211108
            '%d%m%Y': 'ddmmyyyy',            # 08112021
            '%B %d, %Y': 'month dd, yyyy',   # November 08, 2021
            '%d %B %Y': 'dd month yyyy',     # 08 November 2021
            '%Y-%m': 'yyyy-mm',              # 2021-11
            '%m-%Y': 'mm-yyyy',              # 11-2021
        }

        for strftime_pattern, readable_pattern in date_patterns.items():
            try:
                datetime.strptime(key, strftime_pattern)
                return True, readable_pattern
            except ValueError:
                continue
        return False, ''

    def _normalize_path(self, path: str) -> str:
        """Convert date-based keys to a normalized format with pattern."""
        if not path:
            return path

        parts = path.split('.')
        normalized_parts = []

        for part in parts:
            is_date, pattern = self._detect_date_pattern(part)
            if is_date:
                level = len(normalized_parts)
                date_key = f"{pattern}_{level}"
                normalized_parts.append(date_key)
            else:
                normalized_parts.append(part)

        return '.'.join(normalized_parts)
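
    # Illustrative behaviour of the two helpers above (not executed):
    #   _detect_date_pattern("2021-11-08") -> (True, 'yyyy-mm-dd')
    #   _detect_date_pattern("user_id")    -> (False, '')
    #   _normalize_path("activity.2024-03-19.posts") -> "activity.yyyy-mm-dd_1.posts"
    #   (the numeric suffix records the nesting level of the date key)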

    def _analyze_value(self, path: str, value: Any) -> None:
        """Recursively analyze a value and update statistics."""
        normalized_path = self._normalize_path(path)
        value_type = self._get_type_name(value)

        # Increment occurrence counter for this path
        self.field_occurrence[normalized_path] += 1

        if normalized_path not in self.schema_structure:
            self.schema_structure[normalized_path] = {
                'type': value_type,
                'samples': set()
            }

        if not isinstance(value, (dict, list)) and len(
                self.schema_structure[normalized_path]['samples']) < self.max_unique_samples:
            self.schema_structure[normalized_path]['samples'].add(str(value))

        if isinstance(value, dict):
            for key, val in value.items():
                new_path = f"{path}.{key}" if path else key
                self._analyze_value(new_path, val)
        elif isinstance(value, list) and value:
            # Only the first element of each array is sampled
            self._analyze_value(f"{normalized_path}[]", value[0])

    def analyze_json(self, json_data: Union[Dict, list]) -> None:
        """Analyze multiple objects from the JSON data."""
        if isinstance(json_data, list):
            # Analyze up to the first 3 objects
            for record in json_data[:3]:
                self._analyze_value("", record)
                self.total_objects_analyzed += 1
        else:
            # If it's a dictionary, analyze up to the first 3 values
            for _, record in list(json_data.items())[:3]:
                self._analyze_value("", record)
                self.total_objects_analyzed += 1
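
    # Illustrative: analyzing [{"a": 1}, {"a": 2, "b": 3}] records field "a"
    # twice and "b" once; because "b" occurs in fewer records than
    # total_objects_analyzed, it is later marked optional by _build_nested_schema.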

    def _merge_schema_objects(self, obj1: Dict, obj2: Dict) -> Dict:
        """Merge two schema objects, properly handling arrays and nested objects."""
        if not isinstance(obj1, dict) or not isinstance(obj2, dict):
            return obj1

        result = obj1.copy()

        # Special handling for 'items' in arrays
        if result.get('type') == 'array' and 'items' in result and 'items' in obj2:
            if isinstance(result['items'], dict) and isinstance(obj2['items'], dict):
                result['items'] = self._merge_schema_objects(result['items'], obj2['items'])
            return result

        # Special handling for 'properties' in objects
        if 'properties' in result and 'properties' in obj2:
            result['properties'] = self._merge_schema_objects(result['properties'], obj2['properties'])
            return result

        # Merge other keys
        for key, value in obj2.items():
            if key not in result:
                result[key] = value
            elif isinstance(value, dict) and isinstance(result[key], dict):
                result[key] = self._merge_schema_objects(result[key], value)

        return result

    def _build_nested_schema(self) -> Dict:
        """Convert flat schema structure to nested dictionary."""

        def create_nested_dict(path_parts: list, value: Dict, full_path: str) -> Dict:
            if not path_parts:
                result = {'type': value['type']}
                if value['samples']:
                    result['examples'] = list(value['samples'])
                # Add required/optional status based on field occurrence
                if self.field_occurrence[full_path] < self.total_objects_analyzed:
                    result['optional'] = True
                return result

            current_part = path_parts[0]
            remaining_parts = path_parts[1:]

            if current_part.endswith('[]'):
                current_part = current_part[:-2]
                return {
                    current_part: {
                        'type': 'array',
                        'items': create_nested_dict(remaining_parts, value, full_path)
                    }
                }
            else:
                nested = create_nested_dict(remaining_parts, value, full_path)
                return {
                    current_part: nested if not remaining_parts else {
                        'type': 'object',
                        'properties': nested
                    }
                }

        result = {}

        # Sort paths to ensure parent objects are processed before their children
        sorted_paths = sorted(self.schema_structure.items(), key=lambda x: len(x[0].split('.')))

        for path, info in sorted_paths:
            if not path:  # root level
                continue

            path_parts = path.split('.')
            current_dict = create_nested_dict(path_parts, info, path)

            for key, value in current_dict.items():
                if key not in result:
                    result[key] = value
                else:
                    result[key] = self._merge_schema_objects(result[key], value)

        return result
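
    # Illustrative: the flat entries {"users": array, "users[].id": string}
    # nest into {"users": {"type": "array", "items": {"id": {"type": "string", ...}}}}.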

    def print_schema(self) -> None:
        """Print the clean schema as formatted JSON."""
        nested_schema = self._build_nested_schema()
        schema_json = json.dumps(nested_schema, indent=2)

        console = Console()
        syntax = Syntax(schema_json, "json", theme="monokai", line_numbers=True)
        console.print(syntax)

    def analyze_file(self, file_path: str) -> None:
        """Convenience method to analyze a JSON file directly."""
        data = self.load_json(file_path)
        self.analyze_json(data)

--------------------------------------------------------------------------------
/jsondetective/cli.py:
--------------------------------------------------------------------------------
import json
from pathlib import Path
from typing import Any, Optional

import click
from rich.console import Console

from .analyzer import JSONSchemaAnalyzer
from .dataclass_gen import schema_to_dataclass_file, generate_dataclass_code

console = Console()


def try_load_json(file_path: Path, encoding: str) -> Optional[Any]:
    """
    Attempt to load a JSON file with the specified encoding.
    Returns the parsed JSON data on success, None on failure.
    """
    try:
        with file_path.open('r', encoding=encoding) as f:
            return json.load(f)
    except (UnicodeDecodeError, UnicodeError):
        return None
    except json.JSONDecodeError as e:
        # If it's a BOM error, return None so the caller tries the next encoding
        if "BOM" in str(e):
            return None
        # Other JSON errors indicate malformed JSON and should propagate
        raise


def load_json_with_fallback(file_path: Path) -> Any:
    """
    Try to load a JSON file with multiple encodings.
    Raises click.ClickException if all attempts fail.
    """
    # Encodings to try, in order of preference
    encodings = ['utf-8', 'utf-8-sig', 'latin-1', 'cp1252']

    for encoding in encodings:
        data = try_load_json(file_path, encoding)
        if data is not None:
            console.print(f"[dim]Successfully loaded JSON with {encoding} encoding[/dim]")
            return data

    # If we get here, none of the encodings worked
    raise click.ClickException(
        "Failed to load JSON file. Tried the following encodings: " +
        ", ".join(encodings)
    )


@click.command()
@click.argument(
    'json_path',
    type=click.Path(exists=True, path_type=Path),
)
@click.option(
    "--create-dataclass", "-d",
    is_flag=True,
    help="Generate Python dataclass code",
)
@click.option(
    "--output-path", "-o",
    type=click.Path(path_type=Path),
    help="Path to save the generated dataclass file (optional)",
)
@click.option(
    "--class-name", "-c",
    default="Root",
    help="Name of the root dataclass (default: Root)",
)
def main(json_path: Path, create_dataclass: bool, output_path: Optional[Path], class_name: str) -> None:
    """
    Analyze JSON files and optionally generate Python dataclasses.

    JSON_PATH: Path to the JSON file to analyze

    Examples:
        jsondetective data.json
        jsondetective data.json --create-dataclass
        jsondetective data.json -d -o my_classes.py
        jsondetective data.json -d -c MyDataClass
    """
    try:
        analyzer = JSONSchemaAnalyzer()

        # Load with encoding fallback, then analyze
        data = load_json_with_fallback(json_path)
        analyzer.analyze_json(data)

        console.print("\n[bold blue]JSON Schema:[/bold blue]")
        analyzer.print_schema()

        if create_dataclass:
            schema = analyzer._build_nested_schema()

            if output_path:
                schema_to_dataclass_file(schema, output_path, class_name)
                console.print(f"\n[bold green]Dataclass code saved to {output_path}[/bold green]")
            else:
                code = generate_dataclass_code(schema, class_name)
                console.print("\n[bold blue]Generated Dataclass Code:[/bold blue]")
                console.print(code)

    except click.ClickException:
        raise
    except Exception as e:
        console.print(f"[bold red]Error:[/bold red] {str(e)}")
        raise click.Abort()


if __name__ == "__main__":
    main()

--------------------------------------------------------------------------------
/jsondetective/dataclass_gen.py:
--------------------------------------------------------------------------------
from typing import Dict, Any


def generate_dataclass_code(schema: Dict[str, Any], class_name: str = "Root") -> str:
    """Generate Python code for dataclasses based on the JSON schema."""
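
    # Illustrative root class emitted for {"id": {"type": "string"}} with the
    # default class_name (sketch, not executed):
    #
    #   @dataclass
    #   class Root:
    #       id: str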

    # Track all classes we need to generate
    classes = []

    def type_mapping(type_info: Dict[str, Any], field_name: str) -> str:
        """Map JSON schema types to Python types."""
        type_name = type_info.get('type', 'any')

        if type_name == 'array':
            item_type = 'Any'
            if 'items' in type_info:
                item_class_name = f"{class_name}{field_name.title()}"
                item_type = generate_nested_class(type_info['items'], item_class_name)
            return f"List[{item_type}]"
        elif type_name == 'object':
            nested_class_name = f"{class_name}{field_name.title()}"
            return generate_nested_class(type_info, nested_class_name)
        elif type_name == 'string':
            # Check examples for datetime strings
            examples = type_info.get('examples', [])
            if examples and any('GMT' in str(ex) for ex in examples):
                return 'datetime'
            return 'str'
        elif type_name == 'integer':
            return 'int'
        elif type_name == 'float':
            return 'float'
        elif type_name == 'boolean':
            return 'bool'
        elif type_name == 'null':
            return 'None'
        return 'Any'

    def generate_nested_class(schema_part: Dict[str, Any], nested_class_name: str) -> str:
        """Generate a nested dataclass for complex objects."""
        if 'properties' not in schema_part and schema_part.get('type') != 'object':
            return type_mapping(schema_part, '')

        properties = schema_part.get('properties', {})
        if not properties:
            return 'Dict[str, Any]'

        class_code = [f"@dataclass\nclass {nested_class_name}:"]

        # Required fields must precede fields with defaults in a dataclass,
        # so emit optional fields last
        ordered = sorted(properties.items(), key=lambda kv: bool(kv[1].get('optional', False)))
        for prop_name, prop_info in ordered:
            is_optional = prop_info.get('optional', False)
            python_type = type_mapping(prop_info, prop_name)

            if is_optional:
                python_type = f"Optional[{python_type}]"
                default_value = " = None"
            else:
                default_value = ""

            # Convert hyphenated keys to valid Python identifiers
            valid_name = prop_name.replace('-', '_')

            # Add field with type annotation
            class_code.append(f"    {valid_name}: {python_type}{default_value}")

        classes.append('\n'.join(class_code))
        return nested_class_name

    # Generate the root class
    generate_nested_class({"type": "object", "properties": schema}, class_name)

    # Combine all classes with the required imports
    imports = [
        "from dataclasses import dataclass",
        "from typing import List, Optional, Dict, Any",
        "from datetime import datetime",
    ]

    return '\n'.join(imports) + '\n\n\n' + '\n\n\n'.join(classes)


def schema_to_dataclass_file(schema: Dict[str, Any], output_file: str, class_name: str = "Root") -> None:
    """Generate a .py file containing the dataclass definitions."""
    code = generate_dataclass_code(schema, class_name)

    with open(output_file, 'w', encoding='utf-8') as f:
        f.write(code)

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
[build-system]
requires = ["setuptools>=45", "wheel"]
build-backend = "setuptools.build_meta"

[tool.black]
line-length = 88
target-version = ['py37']
include = '\.pyi?$'

[tool.isort]
profile = "black"
multi_line_output = 3

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
from setuptools import setup, find_packages


def read_readme():
    """Read README.md and handle a potential UTF-8 BOM."""
    try:
        # First try UTF-8
        with open("README.md", "r", encoding="utf-8") as f:
            return f.read()
    except UnicodeDecodeError:
        # If that fails, try UTF-8-SIG (UTF-8 with BOM)
        with open("README.md", "r", encoding="utf-8-sig") as f:
            return f.read()


setup(
    name="jsondetective",
    version="1.0.2",
    packages=find_packages(),
    install_requires=[
        "rich>=13.0.0",
        "click>=7.1.2",
    ],
    entry_points={
        "console_scripts": [
            "jsondetective=jsondetective.cli:main",
        ],
    },
    author="Tim Farrelly",
    author_email="timf34@gmail.com",
    description="Instantly understand JSON structure through automatic schema inference",
    long_description=read_readme(),
    long_description_content_type="text/markdown",
    url="https://github.com/timf34/jsondetective",
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires=">=3.7",
)

--------------------------------------------------------------------------------