├── .env.example
├── .gitignore
├── README.md
├── main.py
└── requirements.txt

/.env.example:
--------------------------------------------------------------------------------
1 | OPENROUTER_API_KEY=""
2 |
--------------------------------------------------------------------------------

/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 |
27 | # Unit test / coverage reports
28 | htmlcov/
29 | .tox/
30 | .nox/
31 | .coverage
32 | .coverage.*
33 | .cache
34 | nosetests.xml
35 | coverage.xml
36 | *.cover
37 | .hypothesis/
38 | .pytest_cache/
39 |
40 | # Environments
41 | .env
42 | .venv
43 | env/
44 | venv/
45 | ENV/
46 | env.bak/
47 | venv.bak/
48 |
49 | # IDEs and editors
50 | .idea/
51 | .vscode/
52 | *.swp
53 | *.swo
54 | *~
55 | .project
56 | .pydevproject
57 | .settings/
58 |
59 | # OS generated files
60 | .DS_Store
61 | .DS_Store?
62 | ._*
63 | .Spotlight-V100
64 | .Trashes
65 | ehthumbs.db
66 | Thumbs.db
67 |
68 | # Jupyter Notebook
69 | .ipynb_checkpoints
70 |
71 | # Logs
72 | *.log
73 | logs/
74 |
75 | # Local configuration
76 | *.local.py
77 | instance/
78 | config.local.py
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
1 | # 🧠 Omni Engineer: An AI-Powered Developer Console
2 |
3 | An intelligent assistant designed to enhance your development workflow with advanced AI capabilities.
4 |
5 | ## ✨ NEW
6 | O1 support: simply use
7 | `openai/o1-preview` or `openai/o1-mini` as the model.
8 |
9 | ## 🔍 Overview
10 |
11 | Omni Engineer is a console-based tool that integrates AI capabilities into your development process. It offers smart responses to coding queries, file management, web search, and image processing, with enhanced features for a more robust development experience.
12 |
13 | Omni Engineer is a spiritual successor to [Claude Engineer](https://github.com/Doriandarko/claude-engineer), shaped by extensive use of hand-made AI tools, trial and error, and user feedback. This new script keeps the tooling deliberately simple to give you more control, while introducing powerful new features like multi-file editing and chat session management.
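As a quick taste, switching the active model from inside the console looks like this (a hypothetical session; the prompt text mirrors `change_model()` in main.py):

```
You: /change_model
Enter the new model name: openai/o1-mini
Model changed to: openai/o1-mini
```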
14 |
15 | ## 🌟 Features
16 |
17 | - AI-Powered Responses with Streaming Output
18 | - Advanced File Management (Add, Edit, Create, Show Content)
19 | - Multi-File Editing Support
20 | - Web Searching with DuckDuckGo Integration
21 | - Image Processing (Local Files and URLs)
22 | - Undo Functionality for File Edits
23 | - Conversation Save & Load
24 | - Syntax Highlighting for Code
25 | - Diff Display for File Changes
26 | - AI Model Selection and Switching
27 |
28 | ## 🖥️ Commands
29 |
30 | - `/add <filepaths>`: Add files to AI context
31 | - `/edit <filepaths>`: Edit existing files
32 | - `/new <filepaths>`: Create new files
33 | - `/search`: Perform web searches
34 | - `/image <filepaths/urls>`: Add images to context
35 | - `/clear`: Clear AI memory
36 | - `/reset`: Reset the session
37 | - `/diff`: Toggle diff display
38 | - `/history`: View chat history
39 | - `/save`: Save current chat
40 | - `/load`: Load a previous chat
41 | - `/undo <filepath>`: Undo last file edit
42 | - `/help`: Display available commands
43 | - `/model`: Show current AI model
44 | - `/change_model`: Change the AI model
45 | - `/show <filepath>`: Display content of a file
46 |
47 | ## 🚀 Installation
48 |
49 | 1. Clone the repository:
50 | ```
51 | git clone https://github.com/doriandarko/omni-engineer.git
52 | cd omni-engineer
53 | ```
54 | 2. Install the required packages:
55 | ```
56 | pip install -r requirements.txt
57 | ```
58 | 3. Rename `.env.example` to `.env` and add your API key:
59 | ```
60 | OPENROUTER_API_KEY="Your key"
61 | ```
62 | 4. Run the main script:
63 | ```
64 | python main.py
65 | ```
66 |
67 | ## 📚 Usage
68 |
69 | After launching the console, enter commands or questions as needed. The AI will respond accordingly, assisting with various development tasks. Use the `/help` command to see a list of available commands and their descriptions.
70 |
71 | ## 🤖 AI Models
72 |
73 | Omni Engineer uses OpenRouter to access a variety of AI models. The default model is "anthropic/claude-3.7-sonnet:thinking" for general assistance, and "google/gemini-2.0-flash-001" handles code editing (see `DEFAULT_MODEL` and `EDITOR_MODEL` in main.py). You can view the current model with `/model` and change it using `/change_model`. For detailed information on available models and their capabilities, refer to [OpenRouter's documentation](https://openrouter.ai/models). A minimal connection sketch appears near the end of this README.
74 |
75 | ## 🔧 Advanced Features
76 |
77 | - **Multi-File Editing**: Edit multiple files in a single session.
78 | - **Real-time Diff Display**: See changes as they're made with the diff feature (an illustrative diff appears near the end of this README).
79 | - **Syntax Highlighting**: Improved code readability with syntax highlighting.
80 | - **Image Context**: Add both local and URL-based images to your AI context.
81 | - **Flexible Model Selection**: Switch between different AI models for various tasks.
82 |
83 | ## 🐛 Issue Reporting
84 |
85 | Please use the issue tracker only for reporting actual bugs in the code. This helps keep the issue tracker focused on improving the project's stability and functionality.
86 |
87 | ## 🤝 Contributing
88 |
89 | Contributions to Omni Engineer are welcome! Please feel free to submit pull requests, create issues for bugs, or suggest new features.
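## 🔌 OpenRouter Connection Sketch

For reference, here is a minimal sketch of how main.py talks to OpenRouter through the OpenAI SDK. The client setup is taken directly from main.py; the example prompt is illustrative, and any OpenRouter model ID can be passed as `model`:

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENROUTER_API_KEY from .env

# Point the OpenAI SDK at OpenRouter's OpenAI-compatible endpoint
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)

response = client.chat.completions.create(
    model="anthropic/claude-3.7-sonnet:thinking",  # any OpenRouter model ID works here
    messages=[{"role": "user", "content": "Summarize PEP 8 in one sentence."}],
)
print(response.choices[0].message.content)
```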
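## 📊 Diff Display Example

When diffs are toggled on with `/diff`, each edit is summarized as a unified diff (via `difflib.unified_diff` in `display_diff()`), with additions printed in green and removals in red. A hypothetical edit might render like this (file headers abridged):

```
--- original
+++ edited
@@ -1,2 +1,2 @@
-def greet(name):
-    print("Hello " + name)
+def greet(name: str) -> None:
+    print(f"Hello {name}")
```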
90 |
91 | omni-engineer Copyright (c) 2024, Pietro Schirano
92 |
93 | ## ⭐️ Star History
94 |
95 | [![Star History Chart](https://api.star-history.com/svg?repos=Doriandarko/omni-engineer&type=Date)](https://star-history.com/#Doriandarko/omni-engineer&Date)
--------------------------------------------------------------------------------

/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | from openai import OpenAI
3 | import sys
4 | from dotenv import load_dotenv
5 | from colorama import init, Fore, Back, Style
6 | import difflib
7 | import asyncio
8 | from duckduckgo_search import DDGS
9 | import json
10 | from pygments import highlight
11 | from pygments.lexers import get_lexer_by_name
12 | from pygments.formatters import TerminalFormatter
13 | from rich.console import Console
14 | from rich.table import Table
15 | import base64
16 | from urllib.parse import urlparse
17 | import requests
18 | from PIL import Image
19 | from io import BytesIO
20 | from prompt_toolkit import PromptSession
21 | from prompt_toolkit.history import FileHistory
22 | from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
23 | from prompt_toolkit.formatted_text import HTML
24 | from prompt_toolkit.completion import WordCompleter
25 | from prompt_toolkit.application.current import get_app
26 |
27 | is_diff_on = True
28 |
29 | init(autoreset=True)
30 | load_dotenv()
31 | client = OpenAI(
32 |     base_url="https://openrouter.ai/api/v1",
33 |     api_key=os.getenv("OPENROUTER_API_KEY"),
34 | )
35 |
36 | DEFAULT_MODEL = "anthropic/claude-3.7-sonnet:thinking"
37 | EDITOR_MODEL = "google/gemini-2.0-flash-001"
38 | # Other common models:
39 | # "openai/gpt-4o-2024-08-06"
40 | # "meta-llama/llama-3.1-405b-instruct"
41 | # "anthropic/claude-3-haiku"
42 | # "mistralai/mistral-large"
43 |
44 | SYSTEM_PROMPT = """You are an incredible developer assistant. You have the following traits:
45 | - You write clean, efficient code
46 | - You explain concepts with clarity
47 | - You think through problems step-by-step
48 | - You're passionate about helping developers improve
49 |
50 | When given an /edit instruction:
51 | - First, after reviewing the code, construct a plan for the change
52 | - Then provide specific edit instructions
53 | - Format your response as edit instructions
54 | - Do NOT execute changes yourself"""
55 |
56 | EDITOR_PROMPT = """You are a code-editing AI. Your mission:
57 |
58 | ULTRA IMPORTANT:
59 | - YOU NEVER!!! add the type of file at the beginning of the file like ```python etc.
60 | - YOU NEVER!!! add ``` at the start or end of the file, meaning you never add anything that is not code at the start or end of the file.
61 |
62 | - Execute line-by-line edit instructions safely
63 | - If a line doesn't need to be changed, output the line as is.
64 | - NEVER add or delete lines, unless explicitly instructed
65 | - YOU ONLY OUTPUT THE CODE.
66 | - NEVER!!! add the type of file at the beginning of the file like ```python etc.
67 | - ULTRA IMPORTANT: you NEVER!!! add ``` at the start or end of the file, meaning you never add anything that is not code at the start or end of the file.
68 | - Never change imports or function definitions unless explicitly instructed
69 | - If you spot potential issues in the instructions, fix them!"""
70 |
71 | added_files = []
72 | stored_searches = {}
73 | file_templates = {
74 |     "python": "def main():\n    pass\n\nif __name__ == \"__main__\":\n    main()",
75 |     "html": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>Document</title>\n</head>\n<body>\n    \n</body>\n</html>",  # reconstructed: the HTML tags were stripped in transit; this matches the leftover "Document" skeleton
76 |     "javascript": "// Your JavaScript code here"
77 | }
78 | undo_history = {}
79 | stored_images = {}
80 | command_history = FileHistory('.aiconsole_history.txt')
81 | commands = WordCompleter(['/add', '/edit', '/new', '/search', '/image', '/clear', '/reset', '/diff', '/history', '/save', '/load', '/undo', '/help', '/model', '/change_model', '/show', 'exit'], ignore_case=True)
82 | session = PromptSession(history=command_history)
83 |
84 | async def get_input_async(message):
85 |     # Reuse the module-level PromptSession so prompt history and completion persist
86 |     result = await session.prompt_async(HTML(f"{message} "),
87 |         auto_suggest=AutoSuggestFromHistory(),
88 |         completer=commands,
89 |         refresh_interval=0.5)
90 |     return result.strip()
91 |
92 | def encode_image(image_path):
93 |     """Turn a local image into base64."""
94 |     try:
95 |         with open(image_path, "rb") as image_file:
96 |             return base64.b64encode(image_file.read()).decode('utf-8')
97 |     except FileNotFoundError:
98 |         return None
99 |     except IOError:
100 |         return None
101 |
102 | def validate_image_url(url, timeout=10):
103 |     try:
104 |         response = requests.get(
105 |             url,
106 |             stream=True,
107 |             timeout=timeout,
108 |             headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.28 Safari/537.36'}
109 |         )
110 |         response.raise_for_status()
111 |
112 |         # Trust an image/* Content-Type without downloading the full body
113 |         content_type = response.headers.get('Content-Type', '').lower()
114 |         if content_type.startswith(('image/', 'application/octet-stream')):
115 |             return True
116 |
117 |         # Otherwise force-load the body as an image to verify it
118 |         image = Image.open(BytesIO(response.content))
119 |         image.verify()
120 |
121 |         return True
122 |
123 |     except requests.exceptions.RequestException as e:
124 |         print_colored(f"Network error: {e}", Fore.RED)
125 |         return False
126 |     except Image.UnidentifiedImageError:
127 |         print_colored("The URL doesn't point to a valid image.", Fore.RED)
128 |         return False
129 |     except Exception as e:
130 |         print_colored(f"Unexpected error: {e}", Fore.RED)
131 |         return False
132 |
133 | def is_url(string):
134 |     """Check if a string is a valid URL."""
135 |     try:
136 |         result = urlparse(string)
137 |         return all([result.scheme, result.netloc])
138 |     except ValueError:
139 |         return False
140 |
141 | async def handle_image_command(filepaths_or_urls, default_chat_history):
142 |     """Add local images & URLs to memory and chat history"""
143 |     if not filepaths_or_urls:
144 |         print_colored("❌ No images or URLs provided.", Fore.RED)
145 |         return default_chat_history
146 |
147 |     processed_images = 0
148 |     success_images = 0
149 |
150 |     for idx, image_path in enumerate(filepaths_or_urls, 1):
151 |         try:
152 |             if is_url(image_path):  # URL-based
153 |                 if validate_image_url(image_path):
154 |                     stored_images[f"image_{len(stored_images) + 1}"] = {
155 |                         "type": "image",
156 |                         "source": "url",
157 |                         "content": image_path
158 |                     }
159 |                     default_chat_history.append({
160 |                         "role": "user",
161 |                         "content": [{"type": "image_url", "image_url": {"url": image_path}}]
162 |                     })
163 |                     print_colored(f"✅ URL-based image {idx} added successfully!", Fore.GREEN)
164 |                     success_images += 1
165 |                 else:
166 |                     print_colored(f"❌ {image_path} isn't a valid
image URL. Skipping.", Fore.RED) 167 | 168 | else: # Local filepath 169 | image_content = encode_image(image_path) 170 | if image_content: 171 | try: 172 | # Detect if it's actually an image 173 | with Image.open(image_path) as img: 174 | img_format = img.format.lower() if img.format else None 175 | if img_format not in ['jpeg', 'jpg', 'png', 'webp', 'gif']: 176 | raise ValueError(f"Unsupported image format: {img_format}") 177 | 178 | data_uri = f"data:image/{img_format};base64,{image_content}" 179 | 180 | stored_images[f"image_{len(stored_images) + 1}"] = { 181 | "type": "image", 182 | "source": "local", 183 | "content": data_uri 184 | } 185 | default_chat_history.append({ 186 | "role": "user", 187 | "content": [{ 188 | "type": "image_url", 189 | "image_url": {"url": data_uri} 190 | }] 191 | }) 192 | print_colored(f"✅ Local image {idx} added successfully!", Fore.GREEN) 193 | success_images += 1 194 | except (IOError, ValueError) as e: 195 | print_colored(f"❌ {image_path} isn't a valid image. Error: {e}. Skipping.", Fore.RED) 196 | else: 197 | print_colored(f"❌ Failed loading: {image_path}", Fore.RED) 198 | 199 | except Exception as e: 200 | print_colored(f"❌ Unexpected error processing {image_path}: {e}. Skipping.", Fore.RED) 201 | 202 | processed_images += 1 # Always increment, even if we skip 203 | 204 | print_colored(f"🖼️ {processed_images} images processed. {success_images} added successfully. {len(stored_images)} total images in memory!", Fore.CYAN) 205 | 206 | return default_chat_history 207 | 208 | async def aget_results(word): 209 | results = await DDGS(proxy=None).atext(word, max_results=100) 210 | return results 211 | 212 | def clear_console(): 213 | os.system('cls' if os.name == 'nt' else 'clear') 214 | 215 | def print_colored(text, color=Fore.WHITE, style=Style.NORMAL, end='\n'): 216 | print(f"{style}{color}{text}{Style.RESET_ALL}", end=end) 217 | 218 | def get_streaming_response(messages, model): 219 | try: 220 | url = "https://openrouter.ai/api/v1/chat/completions" 221 | headers = { 222 | "Authorization": f"Bearer {os.getenv('OPENROUTER_API_KEY')}", 223 | "Content-Type": "application/json" 224 | } 225 | 226 | # Configure model-specific reasoning settings 227 | reasoning_config = None 228 | if "anthropic" in model: 229 | reasoning_config = { 230 | "max_tokens": 8000 # For Anthropic models 231 | } 232 | elif "openai" in model: 233 | reasoning_config = { 234 | "effort": "high" # For OpenAI models 235 | } 236 | 237 | payload = { 238 | "model": model, 239 | "messages": messages, 240 | "max_tokens": 10000, 241 | "stream": True 242 | } 243 | 244 | # Only add reasoning if we have a configuration 245 | if reasoning_config: 246 | payload["reasoning"] = reasoning_config 247 | 248 | # Make the direct API call 249 | response = requests.post(url, json=payload, headers=headers, stream=True) 250 | response.raise_for_status() 251 | 252 | full_response = "" 253 | full_reasoning = "" 254 | current_mode = None # Track if we're in reasoning or content mode 255 | 256 | # Process the SSE stream 257 | for line in response.iter_lines(): 258 | if line: 259 | line = line.decode('utf-8') 260 | if line.startswith('data: '): 261 | if line == 'data: [DONE]': 262 | break 263 | 264 | line = line[6:] # Remove 'data: ' prefix 265 | try: 266 | chunk = json.loads(line) 267 | delta = chunk.get('choices', [{}])[0].get('delta', {}) 268 | 269 | # Check for reasoning tokens 270 | if 'reasoning' in delta and delta['reasoning'] is not None: 271 | # Check if we need to transition from content to reasoning 272 | if 
current_mode != "reasoning": 273 | current_mode = "reasoning" 274 | 275 | # Print reasoning in cyan, no prefix 276 | print_colored(delta['reasoning'], Fore.CYAN, end="") 277 | full_reasoning += delta['reasoning'] 278 | 279 | # Check for content tokens 280 | elif 'content' in delta and delta['content'] is not None: 281 | # Check if we need to transition from reasoning to content 282 | if current_mode == "reasoning": 283 | print("\n\n") # Add spacing between reasoning and content 284 | current_mode = "content" 285 | elif current_mode is None: 286 | current_mode = "content" 287 | 288 | # Print content in white, no prefix 289 | print_colored(delta['content'], Fore.WHITE, end="") 290 | full_response += delta['content'] 291 | 292 | except json.JSONDecodeError: 293 | continue 294 | except Exception as e: 295 | # Just continue on parse errors 296 | continue 297 | 298 | # Ensure newline at end 299 | print() 300 | 301 | return { 302 | "content": full_response, 303 | "reasoning": full_reasoning 304 | } 305 | except Exception as e: 306 | print_colored(f"Error in streaming response: {e}", Fore.RED) 307 | return {"content": "", "reasoning": ""} 308 | 309 | def read_file_content(filepath): 310 | try: 311 | with open(filepath, 'r', encoding='utf-8') as file: 312 | return file.read() 313 | except FileNotFoundError: 314 | return f"❌ Error: File not found: {filepath}" 315 | except IOError as e: 316 | return f"❌ Error reading {filepath}: {e}" 317 | 318 | def write_file_content(filepath, content): 319 | try: 320 | with open(filepath, 'w', encoding='utf-8') as f: 321 | f.write(content) 322 | return True 323 | except IOError: 324 | return False 325 | 326 | def is_text_file(file_path, sample_size=8192, text_characters=set(bytes(range(32,127)) + b'\n\r\t\b')): 327 | """Determine whether a file is text or binary.""" 328 | try: 329 | with open(file_path, 'rb') as f: 330 | chunk = f.read(sample_size) 331 | 332 | if not chunk: # Empty files are considered text 333 | return True 334 | 335 | if b'\x00' in chunk: # Null bytes usually indicate binary 336 | return False 337 | 338 | # If >30% of chars are non-text, probably binary 339 | text_chars = sum(byte in text_characters for byte in chunk) 340 | return text_chars / len(chunk) > 0.7 341 | 342 | except IOError: 343 | return False 344 | 345 | async def handle_add_command(chat_history, *paths): 346 | global added_files 347 | contents = [] 348 | new_context = "" 349 | 350 | for path in paths: 351 | if os.path.isfile(path): # File handling 352 | content = read_file_content(path) 353 | if not content.startswith("❌"): 354 | contents.append((path, content)) 355 | added_files.append(path) 356 | 357 | elif os.path.isdir(path): # Directory handling 358 | print_colored(f"📁 Processing folder: {path}", Fore.CYAN) 359 | for item in os.listdir(path): 360 | item_path = os.path.join(path, item) 361 | if os.path.isfile(item_path) and is_text_file(item_path): 362 | content = read_file_content(item_path) 363 | if not content.startswith("❌"): 364 | contents.append((item_path, content)) 365 | added_files.append(item_path) 366 | 367 | else: 368 | print_colored(f"❌ '{path}' is neither a valid file nor folder.", Fore.RED) 369 | 370 | if contents: 371 | for fp, content in contents: 372 | new_context += f"""The following file has been added: {fp}: 373 | \n{content}\n\n""" 374 | 375 | chat_history.append({"role": "user", "content": new_context}) 376 | print_colored(f"✅ Added {len(contents)} files to knowledge!", Fore.GREEN) 377 | else: 378 | print_colored("❌ No valid files were added to 
knowledge.", Fore.YELLOW)
379 |
380 |     return chat_history
381 |
382 | async def handle_edit_command(default_chat_history, editor_chat_history, filepaths):
383 |     all_contents = [read_file_content(fp) for fp in filepaths]
384 |     valid_files, valid_contents = [], []
385 |
386 |     for filepath, content in zip(filepaths, all_contents):
387 |         if content.startswith("❌"):
388 |             print_colored(content, Fore.RED)
389 |         else:
390 |             valid_files.append(filepath)
391 |             valid_contents.append(content)
392 |
393 |     if not valid_files:
394 |         print_colored("❌ No valid files to edit.", Fore.YELLOW)
395 |         return default_chat_history, editor_chat_history
396 |
397 |     user_request = await get_input_async(f"What would you like to change in {', '.join(valid_files)}?")
398 |
399 |     instructions_prompt = "For these files:\n"
400 |     instructions_prompt += "\n".join([f"File: {fp}\n```\n{content}\n```\n" for fp, content in zip(valid_files, valid_contents)])
401 |     instructions_prompt += f"User wants: {user_request}\nProvide LINE-BY-LINE edit instructions for ALL files. Number each instruction and specify which file it applies to.\n"
402 |
403 |     default_chat_history.append({"role": "user", "content": instructions_prompt})
404 |     default_instructions = get_streaming_response(default_chat_history, DEFAULT_MODEL)["content"]  # the helper returns {"content", "reasoning"}; keep the text only
405 |     default_chat_history.append({"role": "assistant", "content": default_instructions})
406 |
407 |     print_colored("\n" + "=" * 50, Fore.MAGENTA)
408 |
409 |     for idx, (filepath, content) in enumerate(zip(valid_files, valid_contents), 1):
410 |         try:
411 |             print_colored(f"📝 EDITING {filepath} ({idx}/{len(valid_files)}):", Fore.BLUE)
412 |
413 |             edit_message = f"""
414 | Original code:
415 |
416 | {content}
417 |
418 | Instructions: {default_instructions}
419 |
420 | Follow only instructions applicable to {filepath}. Output ONLY the new code. No explanations. DO NOT ADD ANYTHING ELSE. No file-type tag at the beginning of the file like ```python etc. No ``` at the end of the file.
421 | """
422 |
423 |             editor_chat_history.append({"role": "user", "content": edit_message})
424 |
425 |             current_content = read_file_content(filepath)  # Read fresh
426 |             if current_content.startswith("❌"):
427 |                 continue  # Skip this file but keep editing the rest
428 |
429 |             lines = current_content.split('\n')
430 |             buffer = ""
431 |             edited_lines = lines.copy()  # Create a copy to store edited lines
432 |             line_index = 0
433 |
434 |             for chunk in client.chat.completions.create(
435 |                 model=EDITOR_MODEL,
436 |                 messages=editor_chat_history,
437 |                 stream=True,
438 |             ):
439 |                 if chunk.choices[0].delta.content:
440 |                     delta_text = chunk.choices[0].delta.content  # renamed so the file-content loop variable isn't shadowed
441 |                     print_colored(delta_text, end="")
442 |                     buffer += delta_text
443 |
444 |                     while '\n' in buffer:
445 |                         line, buffer = buffer.split('\n', 1)
446 |                         if line_index < len(edited_lines):
447 |                             edited_lines[line_index] = line
448 |                             print_colored(f"✏️ Updated Line {line_index+1}: {line[:50]}...", Fore.CYAN)
449 |                             line_index += 1
450 |                         else:
451 |                             edited_lines.append(line)
452 |                             print_colored(f"➕ NEW Line {line_index+1}: {line[:50]}...", Fore.YELLOW)
453 |                             line_index += 1
454 |             if buffer: edited_lines[line_index:line_index + 1] = [buffer]  # flush a final line the stream ended without a trailing newline
455 |             result = '\n'.join(edited_lines)
456 |             undo_history[filepath] = current_content  # Store undo
457 |             editor_chat_history.append({"role": "assistant", "content": result})
458 |
459 |             if is_diff_on:
460 |                 display_diff(current_content, result)  # Show final diff if it's on
461 |
462 |             # Write the changes to the file only after the entire editing process
463 |             if write_file_content(filepath, result):
464 |                 print_colored(f"✅ {filepath} successfully edited and saved!", Fore.GREEN)
465 |             else:
466 |                 print_colored(f"❌ Failed to save changes to {filepath}", Fore.RED)
467 |
468 |             print_colored("=" * 50, Fore.MAGENTA)
469 |         except Exception as e:
470 |             print_colored(f"❌ Error editing {filepath}: {e}", Fore.RED)
471 |
472 |     return default_chat_history, editor_chat_history
473 |
474 | async def handle_new_command(default_chat_history, editor_chat_history, filepaths):
475 |     if not filepaths:
476 |         print_colored("❌ No file paths provided.", Fore.RED)
477 |         return default_chat_history, editor_chat_history
478 |
479 |     print_colored(f"🆕 Creating new files: {', '.join(filepaths)}", Fore.BLUE)
480 |     created_files = []
481 |     for filepath in filepaths:
482 |         file_ext = os.path.splitext(filepath)[1][1:]
483 |         template = file_templates.get(file_ext, "")
484 |         try:
485 |             with open(filepath, 'x') as f:
486 |                 f.write(template)
487 |             print_colored(f"✅ Created {filepath} with template", Fore.GREEN)
488 |             created_files.append(filepath)
489 |         except FileExistsError:
490 |             print_colored(f"⚠️ {filepath} already exists. It will be edited, not overwritten.", Fore.YELLOW)
491 |             created_files.append(filepath)
492 |         except IOError as e:
493 |             print_colored(f"❌ Could not create {filepath}: {e}", Fore.RED)
494 |
495 |     if created_files:
496 |         user_input = (await get_input_async("Do you want to edit the newly created files? (y/n):")).lower()
497 |         if user_input == 'y':
498 |             default_chat_history, editor_chat_history = await handle_edit_command(
499 |                 default_chat_history, editor_chat_history, created_files
500 |             )
501 |
502 |     return default_chat_history, editor_chat_history
503 |
504 | async def handle_clear_command():
505 |     global added_files, stored_searches, stored_images
506 |     cleared_something = False
507 |
508 |     if added_files:
509 |         added_files.clear()
510 |         cleared_something = True
511 |         print_colored("✅ Cleared memory of added files.", Fore.GREEN)
512 |
513 |     if stored_searches:
514 |         stored_searches.clear()
515 |         cleared_something = True
516 |         print_colored("✅ Cleared stored searches.", Fore.GREEN)
517 |
518 |     if stored_images:
519 |         image_count = len(stored_images)
520 |         stored_images.clear()
521 |         cleared_something = True
522 |         print_colored(f"✅ Cleared {image_count} images from memory.", Fore.GREEN)
523 |
524 |     if not cleared_something:
525 |         print_colored("ℹ️ No files, searches, or images in memory to clear.", Fore.YELLOW)
526 |
527 | async def handle_reset_command(default_chat_history, editor_chat_history):
528 |     """Clears all chat history and added files memory."""
529 |     global added_files, stored_searches, stored_images
530 |     default_chat_history.clear()
531 |     editor_chat_history.clear()
532 |     added_files.clear()
533 |     stored_searches.clear()
534 |     stored_images.clear()
535 |
536 |     # Re-initialize:
537 |     default_chat_history = [{"role": "system", "content": SYSTEM_PROMPT}]
538 |     editor_chat_history = [{"role": "system", "content": EDITOR_PROMPT}]
539 |
540 |     print_colored(
541 |         "✅ All chat history, memory of added files, stored searches, and images have been reset.",
542 |         Fore.GREEN,
543 |     )
544 |
545 |     return default_chat_history, editor_chat_history  # Return the reset histories
546 |
547 | def toggle_diff():
548 |     global is_diff_on
549 |     is_diff_on = not is_diff_on
550 |     status = "on" if is_diff_on else "off"
551 |     print_colored(
552 |         f"Diff is now {status} 🚀" if is_diff_on else f"Diff is now {status} 🚫",
553 |         Fore.YELLOW,
554 |     )
555 |
556 | def handle_history_command(chat_history):
557 |     print_colored("\n📜 Chat History:", Fore.BLUE)
558 |     for idx, message in enumerate(chat_history[1:], 1):  # Skip system message
559 |         role = message['role'].capitalize()
560 |         content = str(message['content'])  # Stringify first: image entries store list content
561 |         print_colored(f"{idx}. {role}: {content[:100] + '...' if len(content) > 100 else content}", Fore.CYAN)
562 |
563 | async def handle_save_command(chat_history):
564 |     filename = await get_input_async("Enter filename to save chat history:")
565 |     try:
566 |         with open(filename, 'w') as f:
567 |             json.dump(chat_history, f)
568 |         print_colored(f"✅ Chat history saved to {filename}", Fore.GREEN)
569 |     except IOError as e:
570 |         print_colored(f"❌ Error saving chat history: {e}", Fore.RED)
571 |
572 | async def handle_load_command():
573 |     filename = await get_input_async("Enter filename to load chat history:")
574 |     try:
575 |         with open(filename, 'r') as f:
576 |             loaded_history = json.load(f)
577 |         print_colored(f"✅ Chat history loaded from {filename}", Fore.GREEN)
578 |         return loaded_history
579 |     except (IOError, json.JSONDecodeError) as e:  # catch malformed JSON as well as I/O failures
580 |         print_colored(f"❌ Error loading chat history: {e}", Fore.RED)
581 |         return None
582 |
583 | async def handle_undo_command(filepath):
584 |     if not filepath:
585 |         print_colored("❌ No filepath provided for undo operation.", Fore.RED)
586 |         return
587 |     if filepath in undo_history:
588 |         content = undo_history[filepath]
589 |         if write_file_content(filepath, content):
590 |             print_colored(f"✅ Undid last edit for {filepath}", Fore.GREEN)
591 |             del undo_history[filepath]  # Remove the used undo history
592 |         else:
593 |             print_colored(f"❌ Failed to undo edit for {filepath}", Fore.RED)
594 |     else:
595 |         print_colored(f"❌ No undo history for {filepath}", Fore.RED)
596 |
597 | def syntax_highlight(code, language):
598 |     lexer = get_lexer_by_name(language)
599 |     return highlight(code, lexer, TerminalFormatter())
600 |
601 | def print_welcome_message():
602 |     print_colored(
603 |         "🔮 Welcome to the Assistant Developer Console! 🔮", Fore.MAGENTA, Style.BRIGHT
604 |     )
605 |
606 |     console = Console()
607 |     table = Table()
608 |
609 |     table.add_column("Command", style="cyan", no_wrap=True)
610 |     table.add_column("Description")
611 |
612 |     table.add_row("/add", "Add files to AI's knowledge base")
613 |     table.add_row("/edit", "Edit existing files")
614 |     table.add_row("/new", "Create new files")
615 |     table.add_row("/search", "Perform a DuckDuckGo search")
616 |     table.add_row("/image", "Add image(s) to AI's knowledge base")
617 |     table.add_row("/clear", "Clear added files, searches, and images from AI's memory")
618 |     table.add_row("/reset", "Reset entire chat and file memory")
619 |     table.add_row("/diff", "Toggle display of diffs")
620 |     table.add_row("/history", "View chat history")
621 |     table.add_row("/save", "Save chat history to a file")
622 |     table.add_row("/load", "Load chat history from a file")
623 |     table.add_row("/undo", "Undo last edit for a specific file")
624 |     table.add_row("/help", "Show this help message")
625 |     table.add_row("/model", "Show current AI model")
626 |     table.add_row("/change_model", "Change the AI model")
627 |     table.add_row("/show", "Show content of a file")
628 |     table.add_row("exit", "Exit the application")
629 |
630 |     console.print(table)
631 |
632 |     print_colored(
633 |         "For any other input, the AI will respond to your query or command.",
634 |         Fore.YELLOW,
635 |     )
636 |     print_colored(
637 |         "Type '/help' at any time to show this table again.",
638 |         Fore.YELLOW,
639 |     )
640 |
641 | def print_files_and_searches_in_memory():
642 |     if added_files:
643 |         file_list = ', '.join(added_files)
644 |         print_colored(
645 |             f"📂 Files currently in memory: {file_list}", Fore.CYAN, Style.BRIGHT
646 |         )
647 |     if stored_searches:
648 |         search_list = ', '.join(stored_searches.keys())
649 |         print_colored(
650 |             f"🔍 Searches currently in memory: {search_list}", Fore.CYAN, Style.BRIGHT
651 |         )
652 |
653 | def display_diff(original, edited):
654 |     diff = difflib.unified_diff(
655 |         original.splitlines(), edited.splitlines(), lineterm='', n=0
656 |     )
657 |     for line in diff:
658 |         if line.startswith('+'):
659 |             print_colored(line, Fore.GREEN)
660 |         elif line.startswith('-'):
661 |             print_colored(line, Fore.RED)
662 |         else:
663 |             print_colored(line, Fore.BLUE)
664 |
665 | async def handle_search_command(default_chat_history):
666 |     search_query = await get_input_async("What would you like to search?")
667 |     if not search_query.strip():
668 |         print_colored("❌ Empty search query. Please provide a search term.", Fore.RED)
669 |         return default_chat_history
670 |
671 |     print_colored(f"\n🔍 Searching for: {search_query}", Fore.BLUE)
672 |
673 |     try:
674 |         results = await aget_results(search_query)
675 |         search_name = search_query[:10].strip()  # Truncate to first 10 characters
676 |         stored_searches[search_name] = results
677 |         print_colored(f"✅ Search results for '{search_name}' stored in memory.", Fore.GREEN)
678 |
679 |         # Add search results to chat history
680 |         search_content = f"Search results for '{search_query}':\n"
681 |         for idx, result in enumerate(results[:8], 1):  # Limit to the first 8 results for brevity
682 |             search_content += f"{idx}. {result['title']}: {result['body'][:100]}...\n"
683 |         default_chat_history.append({"role": "user", "content": search_content})
684 |
685 |     except Exception as e:
686 |         print_colored(f"❌ Error performing search: {e}", Fore.RED)
687 |
688 |     return default_chat_history
689 |
690 | async def handle_help_command():
691 |     print_welcome_message()
692 |
693 | def show_current_model():
694 |     print_colored(f"Current model: {DEFAULT_MODEL}", Fore.CYAN)
695 |
696 | async def change_model():
697 |     global DEFAULT_MODEL
698 |     new_model = await get_input_async("Enter the new model name: ")
699 |     DEFAULT_MODEL = new_model
700 |     print_colored(f"Model changed to: {DEFAULT_MODEL}", Fore.GREEN)
701 |
702 | async def show_file_content(filepath):
703 |     content = read_file_content(filepath)
704 |     if content.startswith("❌"):
705 |         print_colored(content, Fore.RED)
706 |     else:
707 |         print_colored(f"Content of {filepath}:", Fore.CYAN)
708 |         print(content)
709 |
710 | async def main():
711 |     default_chat_history = [{"role": "system", "content": SYSTEM_PROMPT}]
712 |     editor_chat_history = [{"role": "system", "content": EDITOR_PROMPT}]
713 |     clear_console()
714 |     print_welcome_message()
715 |     print_files_and_searches_in_memory()
716 |
717 |     while True:
718 |         try:
719 |             prompt = await get_input_async("\n\nYou:")
720 |
721 |             print_files_and_searches_in_memory()
722 |
723 |             if prompt.lower() == "exit":
724 |                 print_colored(
725 |                     "Thank you for using the Assistant Developer Console.
Goodbye!", Fore.MAGENTA 726 | ) 727 | break 728 | 729 | if prompt.startswith("/add "): 730 | filepaths = prompt.split("/add ", 1)[1].strip().split() 731 | default_chat_history = await handle_add_command(default_chat_history, *filepaths) 732 | continue 733 | 734 | if prompt.startswith("/edit "): 735 | filepaths = prompt.split("/edit ", 1)[1].strip().split() 736 | default_chat_history, editor_chat_history = await handle_edit_command( 737 | default_chat_history, editor_chat_history, filepaths 738 | ) 739 | continue 740 | 741 | if prompt.startswith("/new "): 742 | filepaths = prompt.split("/new ", 1)[1].strip().split() 743 | default_chat_history, editor_chat_history = await handle_new_command( 744 | default_chat_history, editor_chat_history, filepaths 745 | ) 746 | continue 747 | 748 | if prompt.startswith("/search"): 749 | default_chat_history = await handle_search_command(default_chat_history) 750 | continue 751 | 752 | if prompt.startswith("/clear"): 753 | await handle_clear_command() 754 | continue 755 | 756 | if prompt.startswith("/reset"): 757 | default_chat_history, editor_chat_history = await handle_reset_command( 758 | default_chat_history, editor_chat_history 759 | ) 760 | continue 761 | 762 | if prompt.startswith("/diff"): 763 | toggle_diff() 764 | continue 765 | 766 | if prompt.startswith("/history"): 767 | handle_history_command(default_chat_history) 768 | continue 769 | 770 | if prompt.startswith("/save"): 771 | await handle_save_command(default_chat_history) 772 | continue 773 | 774 | if prompt.startswith("/image "): 775 | image_paths = prompt.split("/image ", 1)[1].strip().split() 776 | default_chat_history = await handle_image_command(image_paths, default_chat_history) 777 | continue 778 | 779 | if prompt.startswith("/load"): 780 | loaded_history = await handle_load_command() 781 | if loaded_history: 782 | default_chat_history = loaded_history 783 | continue 784 | 785 | if prompt.startswith("/undo "): 786 | filepath = prompt.split("/undo ", 1)[1].strip() 787 | await handle_undo_command(filepath) 788 | continue 789 | 790 | if prompt.startswith("/help"): 791 | await handle_help_command() 792 | continue 793 | 794 | if prompt.startswith("/model"): 795 | show_current_model() 796 | continue 797 | 798 | if prompt.startswith("/change_model"): 799 | await change_model() 800 | continue 801 | 802 | if prompt.startswith("/show "): 803 | filepath = prompt.split("/show ", 1)[1].strip() 804 | await show_file_content(filepath) 805 | continue 806 | 807 | print_colored("\n🤖 Assistant:", Fore.BLUE) 808 | try: 809 | default_chat_history.append({"role": "user", "content": prompt}) 810 | response = get_streaming_response(default_chat_history, DEFAULT_MODEL) 811 | 812 | # Store both content and reasoning in chat history 813 | default_chat_history.append({ 814 | "role": "assistant", 815 | "content": response["content"], 816 | "reasoning": response["reasoning"] if response["reasoning"] else None 817 | }) 818 | except Exception as e: 819 | print_colored(f"Error: {e}. 
Please try again.", Fore.RED) 820 | 821 | except Exception as e: 822 | print_colored(f"An error occurred: {e}", Fore.RED) 823 | continue 824 | 825 | if __name__ == "__main__": 826 | asyncio.run(main()) 827 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | openai>=1.12.0 2 | python-dotenv>=1.0.0 3 | colorama>=0.4.6 4 | duckduckgo-search>=4.1.1 5 | pygments>=2.17.2 6 | rich>=13.7.0 7 | Pillow>=10.2.0 8 | prompt_toolkit>=3.0.43 9 | requests>=2.31.0 --------------------------------------------------------------------------------