├── .gitignore ├── LICENSE ├── MANIFEST.in ├── README.md ├── pyproject.toml ├── requirements.txt └── src ├── redsage.egg-info ├── PKG-INFO ├── SOURCES.txt ├── dependency_links.txt ├── entry_points.txt ├── requires.txt └── top_level.txt └── redsage ├── __init__.py ├── cli.py ├── core ├── agent.py ├── context.py └── watcher.py ├── llm ├── base.py ├── claude.py └── openai.py └── utils ├── config.py └── git.py /.gitignore: -------------------------------------------------------------------------------- 1 | ~/.pypirc 2 | redsage.yaml 3 | dist 4 | __pycache__/ 5 | *.pyc 6 | .env 7 | dist/ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Warren Chisasa 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include README.md 2 | include LICENSE 3 | include .env 4 | recursive-include src/redsage * 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # RedSage: Minimal Terminal Pair Programmer 2 | 3 | **Author:** Warren Chisasa 4 | 5 | RedSage is a lightweight, terminal-based coding assistant that connects to LLM APIs (e.g., Claude, OpenAI) to provide real-time pair programming capabilities. Designed for developers seeking a simple yet powerful coding companion, RedSage focuses on **simplicity**, **ease of setup**, and **intuitive functionality**. 6 | 7 | --- 8 | 9 | ## 🚀 Overview 10 | 11 | RedSage empowers developers to write, analyze, and improve code directly from the terminal. Whether you're debugging, writing new functions, or collaborating with AI, RedSage is your go-to lightweight coding assistant. 12 | 13 | --- 14 | 15 | ## ✨ Features 16 | 17 | - 🤖 **Multi-LLM Support**: Integrates with Claude or OpenAI APIs. 18 | - 📂 **Intelligent File Watching**: Tracks changes in your codebase in real-time. 19 | - 🔗 **Seamless Git Integration**: Easily manage branches and commits. 20 | - 🖥️ **Minimal Configuration Requirements**: Simple YAML-based setup. 21 | - 💬 **Intuitive Command-Line Interface**: Easy-to-use CLI with rich features. 
22 | - 🌐 **Multi-Language Programming Support**: Write and analyze code in various languages. 23 | 24 | --- 25 | 26 | ## 🛠️ Installation 27 | 28 | ### Prerequisites 29 | Ensure the following are installed on your system: 30 | - Python 3.8+ 31 | - `pip` (Python Package Manager) 32 | - `git` 33 | 34 | ### Install via pip 35 | ```bash 36 | pip install redsage 37 | ``` 38 | 39 | --- 40 | 41 | ## ⚡ Quick Setup 42 | 43 | ### 1. Initialize Configuration 44 | Run the following command to initialize Redsage: 45 | ```bash 46 | redsage init 47 | ``` 48 | 49 | ### 2. Set API Key 50 | Export your API key securely using environment variables: 51 | ```bash 52 | export REDSAGE_API_KEY=your_api_key 53 | ``` 54 | 55 | Or update the `redsage.yaml` file with your API key: 56 | ```yaml 57 | llm: 58 | provider: "openai" 59 | api_key: "your_api_key_here" 60 | ``` 61 | 62 | --- 63 | 64 | ## 📖 Usage 65 | 66 | ### Start RedSage 67 | ```bash 68 | redsage start 69 | ``` 70 | 71 | ### Available Commands 72 | - `/help` - Show available commands. 73 | - `/context` - Display conversation context. 74 | - `/suggest` - Get code improvement suggestions. 75 | - `/explain` - Explain selected code. 76 | - `/diff` - Show current changes. 77 | - `/save` - Save changes to a git branch. 78 | - `/undo` - Revert the last change. 79 | - `/switch` - Switch LLM provider. 80 | - `/quit` - Exit RedSage. 81 | - `/paste` - Paste code for further queries. 82 | - `/ask` - Ask questions about pasted content. 83 | 84 | 85 | --- 86 | 87 | ## ⚙️ Configuration 88 | 89 | Create a `redsage.yaml` file in your project root for fine-tuned settings: 90 | ```yaml 91 | llm: 92 | provider: "openai" # or "claude" 93 | api_key: "${REDSAGE_API_KEY}" 94 | 95 | watch: 96 | paths: ["./src"] 97 | ignore: ["*.pyc", "__pycache__"] 98 | 99 | git: 100 | enabled: true 101 | branch_prefix: "redsage/" 102 | ``` 103 | 104 | --- 105 | 106 | ## 🔒 Security Considerations 107 | 108 | - **API Keys**: Stored securely in environment variables or YAML files. 109 | - **Local File Access Only**: Redsage doesn't transmit local code to external servers unless specified by the user. 110 | - **Git Confirmation**: Git operations require user confirmation. 111 | - **Sanitized Input Handling**: Redsage validates all inputs to prevent injection attacks. 112 | 113 | --- 114 | 115 | ## 🧩 Dependencies 116 | 117 | - `watchdog` 118 | - `click` 119 | - `anthropic` 120 | - `openai` 121 | - `pyyaml` 122 | - `gitpython` 123 | - `prompt_toolkit` 124 | 125 | --- 126 | 127 | ## 🤝 Contributing 128 | 129 | We welcome contributions! Follow these steps: 130 | 1. Fork the repository. 131 | 2. Create your feature branch: 132 | ```bash 133 | git checkout -b feature/AmazingFeature 134 | ``` 135 | 3. Commit your changes: 136 | ```bash 137 | git commit -m 'Add some AmazingFeature' 138 | ``` 139 | 4. Push to the branch: 140 | ```bash 141 | git push origin feature/AmazingFeature 142 | ``` 143 | 5. Open a Pull Request. 144 | 145 | --- 146 | 147 | ## 🗺️ Roadmap 148 | 149 | - 🧠 Enhanced context management. 150 | - 🌍 Support for more LLM providers. 151 | - 📊 Advanced code analysis capabilities. 152 | - ⚡ Performance optimizations. 153 | 154 | --- 155 | 156 | ## 📜 License 157 | 158 | Distributed under the MIT License. See `LICENSE` for more information. 
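---

## 🧪 Example Session

The commands below simply string together the Installation, Quick Setup, and Usage steps above into one illustrative run; exact prompts and output will vary with your environment.

```bash
# Install RedSage and expose the API key to the CLI
pip install redsage
export REDSAGE_API_KEY=your_api_key

# Initialize the configuration, then start an interactive session
redsage init
redsage start
```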
159 | 160 | --- 161 | 162 | ## 📬 Contact 163 | 164 | Warren Chisasa 165 | 📧 Email: [warrenchisasa@gmail.com](mailto:warrenchisasa@gmail.com) 166 | 🔗 Project Link: [GitHub Repository](https://github.com/chisasaw/redsage) -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["setuptools>=61.0", "wheel"] 3 | build-backend = "setuptools.build_meta" 4 | 5 | [project] 6 | name = "redsage" 7 | version = "0.1.1" 8 | description = "A minimal terminal pair programmer" 9 | readme = "README.md" 10 | requires-python = ">=3.8" 11 | license = {file = "LICENSE"} 12 | authors = [ 13 | {name = "Warren Chisasa", email = "warrenchisasa@gmail.com"} 14 | ] 15 | keywords = ["ai", "pair-programming", "llm", "coding-assistant"] 16 | classifiers = [ 17 | "Development Status :: 3 - Alpha", 18 | "Intended Audience :: Developers", 19 | "License :: OSI Approved :: MIT License", 20 | "Programming Language :: Python :: 3.8", 21 | "Programming Language :: Python :: 3.9", 22 | "Programming Language :: Python :: 3.10", 23 | ] 24 | 25 | dependencies = [ 26 | "watchdog>=2.1.6", 27 | "click>=8.0.0", 28 | "anthropic>=0.3.0", 29 | "openai>=1.0.0", 30 | "pyyaml>=5.4.1", 31 | "gitpython>=3.1.0", 32 | "prompt_toolkit>=3.0.0" 33 | ] 34 | 35 | [project.scripts] 36 | redsage = "redsage.cli:main" 37 | 38 | [project.urls] 39 | Homepage = "https://github.com/chisasaw/redsage" 40 | Repository = "https://github.com/chisasaw/redsage" -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | anthropic>=0.3.0 2 | openai>=1.0.0 3 | pyyaml>=5.4.1 4 | gitpython>=3.1.0 5 | prompt_toolkit>=3.0.0 6 | watchdog>=2.1.6 7 | python-dotenv >= 1.0.1 -------------------------------------------------------------------------------- /src/redsage.egg-info/PKG-INFO: -------------------------------------------------------------------------------- 1 | Metadata-Version: 2.1 2 | Name: redsage 3 | Version: 0.1.1 4 | Summary: A minimal terminal pair programmer 5 | Author-email: Warren Chisasa 6 | License: MIT License 7 | 8 | Copyright (c) 2024 Warren Chisasa 9 | 10 | Permission is hereby granted, free of charge, to any person obtaining a copy 11 | of this software and associated documentation files (the "Software"), to deal 12 | in the Software without restriction, including without limitation the rights 13 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 14 | copies of the Software, and to permit persons to whom the Software is 15 | furnished to do so, subject to the following conditions: 16 | 17 | The above copyright notice and this permission notice shall be included in all 18 | copies or substantial portions of the Software. 19 | 20 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 21 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 22 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 23 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 24 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 25 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 26 | SOFTWARE. 
27 | 28 | Project-URL: Homepage, https://github.com/chisasaw/redsage 29 | Project-URL: Repository, https://github.com/chisasaw/redsage 30 | Keywords: ai,pair-programming,llm,coding-assistant 31 | Classifier: Development Status :: 3 - Alpha 32 | Classifier: Intended Audience :: Developers 33 | Classifier: License :: OSI Approved :: MIT License 34 | Classifier: Programming Language :: Python :: 3.8 35 | Classifier: Programming Language :: Python :: 3.9 36 | Classifier: Programming Language :: Python :: 3.10 37 | Requires-Python: >=3.8 38 | Description-Content-Type: text/markdown 39 | License-File: LICENSE 40 | Requires-Dist: watchdog>=2.1.6 41 | Requires-Dist: click>=8.0.0 42 | Requires-Dist: anthropic>=0.3.0 43 | Requires-Dist: openai>=1.0.0 44 | Requires-Dist: pyyaml>=5.4.1 45 | Requires-Dist: gitpython>=3.1.0 46 | Requires-Dist: prompt_toolkit>=3.0.0 47 | 48 | # RedSage: Minimal Terminal Pair Programmer 49 | 50 | **Author:** Warren Chisasa 51 | 52 | RedSage is a lightweight, terminal-based coding assistant that connects to LLM APIs (e.g., Claude, OpenAI) to provide real-time pair programming capabilities. Designed for developers seeking a simple yet powerful coding companion, RedSage focuses on **simplicity**, **ease of setup**, and **intuitive functionality**. 53 | 54 | --- 55 | 56 | ## 🚀 Overview 57 | 58 | RedSage empowers developers to write, analyze, and improve code directly from the terminal. Whether you're debugging, writing new functions, or collaborating with AI, RedSage is your go-to lightweight coding assistant. 59 | 60 | --- 61 | 62 | ## ✨ Features 63 | 64 | - 🤖 **Multi-LLM Support**: Integrates with Claude or OpenAI APIs. 65 | - 📂 **Intelligent File Watching**: Tracks changes in your codebase in real-time. 66 | - 🔗 **Seamless Git Integration**: Easily manage branches and commits. 67 | - 🖥️ **Minimal Configuration Requirements**: Simple YAML-based setup. 68 | - 💬 **Intuitive Command-Line Interface**: Easy-to-use CLI with rich features. 69 | - 🌐 **Multi-Language Programming Support**: Write and analyze code in various languages. 70 | 71 | --- 72 | 73 | ## 🛠️ Installation 74 | 75 | ### Prerequisites 76 | Ensure the following are installed on your system: 77 | - Python 3.8+ 78 | - `pip` (Python Package Manager) 79 | - `git` 80 | 81 | ### Install via pip 82 | ```bash 83 | pip install redsage 84 | ``` 85 | 86 | --- 87 | 88 | ## ⚡ Quick Setup 89 | 90 | ### 1. Initialize Configuration 91 | Run the following command to initialize Redsage: 92 | ```bash 93 | redsage init 94 | ``` 95 | 96 | ### 2. Set API Key 97 | Export your API key securely using environment variables: 98 | ```bash 99 | export REDSAGE_API_KEY=your_api_key 100 | ``` 101 | 102 | Or update the `redsage.yaml` file with your API key: 103 | ```yaml 104 | llm: 105 | provider: "openai" 106 | api_key: "your_api_key_here" 107 | ``` 108 | 109 | --- 110 | 111 | ## 📖 Usage 112 | 113 | ### Start RedSage 114 | ```bash 115 | redsage start 116 | ``` 117 | 118 | ### Available Commands 119 | - `/help` - Show available commands. 120 | - `/context` - Display conversation context. 121 | - `/suggest` - Get code improvement suggestions. 122 | - `/explain` - Explain selected code. 123 | - `/diff` - Show current changes. 124 | - `/save` - Save changes to a git branch. 125 | - `/undo` - Revert the last change. 126 | - `/switch` - Switch LLM provider. 127 | - `/quit` - Exit RedSage. 128 | - `/paste` - Paste code for further queries. 129 | - `/ask` - Ask questions about pasted content. 
130 | 131 | 132 | --- 133 | 134 | ## ⚙️ Configuration 135 | 136 | Create a `redsage.yaml` file in your project root for fine-tuned settings: 137 | ```yaml 138 | llm: 139 | provider: "openai" # or "claude" 140 | api_key: "${REDSAGE_API_KEY}" 141 | 142 | watch: 143 | paths: ["./src"] 144 | ignore: ["*.pyc", "__pycache__"] 145 | 146 | git: 147 | enabled: true 148 | branch_prefix: "redsage/" 149 | ``` 150 | 151 | --- 152 | 153 | ## 🔒 Security Considerations 154 | 155 | - **API Keys**: Stored securely in environment variables or YAML files. 156 | - **Local File Access Only**: Redsage doesn't transmit local code to external servers unless specified by the user. 157 | - **Git Confirmation**: Git operations require user confirmation. 158 | - **Sanitized Input Handling**: Redsage validates all inputs to prevent injection attacks. 159 | 160 | --- 161 | 162 | ## 🧩 Dependencies 163 | 164 | - `watchdog` 165 | - `click` 166 | - `anthropic` 167 | - `openai` 168 | - `pyyaml` 169 | - `gitpython` 170 | - `prompt_toolkit` 171 | 172 | --- 173 | 174 | ## 🤝 Contributing 175 | 176 | We welcome contributions! Follow these steps: 177 | 1. Fork the repository. 178 | 2. Create your feature branch: 179 | ```bash 180 | git checkout -b feature/AmazingFeature 181 | ``` 182 | 3. Commit your changes: 183 | ```bash 184 | git commit -m 'Add some AmazingFeature' 185 | ``` 186 | 4. Push to the branch: 187 | ```bash 188 | git push origin feature/AmazingFeature 189 | ``` 190 | 5. Open a Pull Request. 191 | 192 | --- 193 | 194 | ## 🗺️ Roadmap 195 | 196 | - 🧠 Enhanced context management. 197 | - 🌍 Support for more LLM providers. 198 | - 📊 Advanced code analysis capabilities. 199 | - ⚡ Performance optimizations. 200 | 201 | --- 202 | 203 | ## 📜 License 204 | 205 | Distributed under the MIT License. See `LICENSE` for more information. 
206 | 207 | --- 208 | 209 | ## 📬 Contact 210 | 211 | Warren Chisasa 212 | 📧 Email: [warrenchisasa@gmail.com](mailto:warrenchisasa@gmail.com) 213 | 🔗 Project Link: [GitHub Repository](https://github.com/chisasaw/redsage) 214 | -------------------------------------------------------------------------------- /src/redsage.egg-info/SOURCES.txt: -------------------------------------------------------------------------------- 1 | .env 2 | LICENSE 3 | MANIFEST.in 4 | README.md 5 | pyproject.toml 6 | src/redsage/__init__.py 7 | src/redsage/cli.py 8 | src/redsage.egg-info/PKG-INFO 9 | src/redsage.egg-info/SOURCES.txt 10 | src/redsage.egg-info/dependency_links.txt 11 | src/redsage.egg-info/entry_points.txt 12 | src/redsage.egg-info/requires.txt 13 | src/redsage.egg-info/top_level.txt 14 | src/redsage/__pycache__/__init__.cpython-312.pyc 15 | src/redsage/__pycache__/cli.cpython-312.pyc 16 | src/redsage/core/agent.py 17 | src/redsage/core/context.py 18 | src/redsage/core/watcher.py 19 | src/redsage/core/__pycache__/agent.cpython-312.pyc 20 | src/redsage/core/__pycache__/watcher.cpython-312.pyc 21 | src/redsage/llm/base.py 22 | src/redsage/llm/claude.py 23 | src/redsage/llm/openai.py 24 | src/redsage/llm/__pycache__/base.cpython-312.pyc 25 | src/redsage/llm/__pycache__/claude.cpython-312.pyc 26 | src/redsage/llm/__pycache__/openai.cpython-312.pyc 27 | src/redsage/utils/config.py 28 | src/redsage/utils/git.py 29 | src/redsage/utils/__pycache__/config.cpython-312.pyc 30 | src/redsage/utils/__pycache__/git.cpython-312.pyc -------------------------------------------------------------------------------- /src/redsage.egg-info/dependency_links.txt: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /src/redsage.egg-info/entry_points.txt: -------------------------------------------------------------------------------- 1 | [console_scripts] 2 | redsage = redsage.cli:main 3 | -------------------------------------------------------------------------------- /src/redsage.egg-info/requires.txt: -------------------------------------------------------------------------------- 1 | watchdog>=2.1.6 2 | click>=8.0.0 3 | anthropic>=0.3.0 4 | openai>=1.0.0 5 | pyyaml>=5.4.1 6 | gitpython>=3.1.0 7 | prompt_toolkit>=3.0.0 8 | -------------------------------------------------------------------------------- /src/redsage.egg-info/top_level.txt: -------------------------------------------------------------------------------- 1 | redsage 2 | -------------------------------------------------------------------------------- /src/redsage/__init__.py: -------------------------------------------------------------------------------- 1 | # RedSage: Minimal Terminal Pair Programmer 2 | # Package initialization and version management 3 | 4 | # Version information 5 | __version__ = "0.1.0" 6 | __author__ = "RedSage Team" 7 | __email__ = "warrenchisasa@gmail.com" 8 | 9 | # Core package imports for easy access 10 | from redsage.cli import RedSageCLI, main 11 | from redsage.core.agent import RedSageAgent 12 | from redsage.core.watcher import FileWatcher 13 | from redsage.llm.base import BaseLLMProvider 14 | from redsage.utils.config import Config 15 | from redsage.utils.git import GitManager 16 | 17 | # Package-level configuration 18 | __all__ = [ 19 | 'RedSageCLI', 20 | 'main', 21 | 'RedSageAgent', 22 | 'FileWatcher', 23 | 'BaseLLMProvider', 24 | 'Config', 25 | 'GitManager' 26 | ] 27 | 28 | # Package-level docstring 29 | __doc__ = 
""" 30 | RedSage: A lightweight terminal-based pair programming assistant. 31 | 32 | Features: 33 | - Multi-LLM support (Claude, OpenAI) 34 | - Real-time file watching 35 | - Git integration 36 | - Simple command-line interface 37 | """ -------------------------------------------------------------------------------- /src/redsage/cli.py: -------------------------------------------------------------------------------- 1 | import os 2 | import click 3 | import platform 4 | import yaml 5 | from typing import Optional 6 | from prompt_toolkit import PromptSession 7 | from prompt_toolkit.completion import WordCompleter 8 | from dotenv import load_dotenv, set_key 9 | from redsage.core.agent import RedSageAgent 10 | from redsage.core.watcher import FileWatcher 11 | from redsage.utils.config import Config 12 | from redsage.utils.git import GitManager 13 | import subprocess 14 | 15 | # Load the .env file 16 | load_dotenv() 17 | 18 | class RedSageCLI: 19 | def __init__(self, config_path: Optional[str] = None): 20 | """ 21 | Initialize RedSage CLI with optional configuration path. 22 | """ 23 | # Load configuration 24 | self.config = Config(config_path) 25 | 26 | # Ensure the directory is a valid Git repository 27 | project_root = os.getcwd() # Assuming the project root is the current working directory 28 | 29 | if not os.path.isdir(os.path.join(project_root, '.git')): 30 | click.echo(f"No Git repository found at {project_root}.") 31 | create_git_repo = click.prompt("Would you like to initialize a Git repository?", type=bool, default=True) 32 | if create_git_repo: 33 | try: 34 | # Initialize Git repository 35 | subprocess.check_call(['git', 'init'], cwd=project_root) 36 | click.echo(f"Initialized a new Git repository at {project_root}.") 37 | except subprocess.CalledProcessError: 38 | raise ValueError(f"Failed to initialize Git repository at {project_root}. Please check your Git setup.") 39 | else: 40 | raise ValueError(f"Project is not a Git repository and Git initialization was declined.") 41 | 42 | # Initialize GitManager with the correct path (project root or Git root) 43 | self.git_ops = GitManager(self.config) 44 | 45 | # Initialize LLM agent 46 | self.agent = RedSageAgent(config=self.config) 47 | 48 | # Initialize file watcher 49 | watch_paths = self.config.get('watch.paths', ['.']) 50 | self.watcher = FileWatcher(paths=watch_paths) 51 | 52 | # Setup prompt session with command completions 53 | self.session = PromptSession( 54 | completer=WordCompleter([ 55 | '/help', '/context', '/suggest', 56 | '/explain', '/diff', '/save', 57 | '/undo', '/switch', '/quit', '/paste', '/ask' 58 | ]), 59 | # Disable styles to prevent ANSI escape codes in the prompt 60 | style=None 61 | ) 62 | 63 | # To store pasted content 64 | self.pasted_content = None 65 | 66 | def start_interactive_mode(self): 67 | """ 68 | Start the interactive RedSage CLI session. 
69 | """ 70 | click.echo(click.style("Welcome to RedSage Pair Programmer!", fg="magenta")) 71 | click.echo(click.style("Type /help for available commands.", fg="magenta")) 72 | 73 | # Start file watcher in the background 74 | self.watcher.start_watching() 75 | 76 | while True: 77 | try: 78 | # Prompt user input without ANSI escape codes 79 | user_input = self.session.prompt('redsage> ') 80 | 81 | # Process command 82 | if user_input.startswith('/'): 83 | self._handle_command(user_input) 84 | else: 85 | # Process natural language query 86 | self._process_query(user_input) 87 | 88 | except KeyboardInterrupt: 89 | continue 90 | except EOFError: 91 | break 92 | 93 | def _handle_command(self, command: str): 94 | """ 95 | Handle user-entered commands. 96 | """ 97 | command_handlers = { 98 | '/help': self._show_help, 99 | '/context': self._show_context, 100 | '/suggest': self._get_suggestions, 101 | '/explain': self._explain_code, 102 | '/diff': self._show_changes, 103 | '/save': self._save_changes, 104 | '/undo': self._undo_last_change, 105 | '/switch': self._switch_provider, 106 | '/quit': self._quit, 107 | '/paste': self._paste_content, 108 | '/ask': self._ask_question 109 | } 110 | 111 | cmd_parts = command.split(maxsplit=1) 112 | base_cmd = cmd_parts[0] 113 | 114 | handler = command_handlers.get(base_cmd) 115 | if handler: 116 | handler(cmd_parts[1] if len(cmd_parts) > 1 else None) 117 | else: 118 | click.echo(f"Unknown command: {base_cmd}. Type /help for available commands.") 119 | 120 | def _show_help(self, _: Optional[str] = None): 121 | """Show the available commands.""" 122 | click.echo(click.style(""" 123 | Available commands: 124 | /help - Show available commands 125 | /context - Display current conversation context 126 | /suggest - Get code improvement suggestions 127 | /explain - Explain recent code context 128 | /diff - Show recent file changes 129 | /save - Save current changes to a git branch 130 | /undo - Undo the most recent change 131 | /switch - Switch LLM provider 132 | /quit - Quit the program 133 | /paste - Paste code for further queries 134 | /ask - Ask questions about pasted content 135 | """, fg="magenta")) 136 | 137 | def _paste_content(self, _: Optional[str] = None): 138 | """Allow user to paste content (multiline).""" 139 | click.echo(click.style("Please paste your content and press Enter when done.", fg="magenta")) 140 | content = "" 141 | while True: 142 | try: 143 | line = self.session.prompt('paste> ') 144 | if not line: 145 | break 146 | content += line + "\n" 147 | except EOFError: 148 | break 149 | self.pasted_content = content 150 | click.echo(click.style("Content successfully pasted. You can now ask questions about it using /ask.", fg="magenta")) 151 | 152 | def _ask_question(self, question: Optional[str] = None): 153 | """Allow user to ask questions about the pasted content.""" 154 | if self.pasted_content is None: 155 | click.echo("No content pasted. 
Use /paste to provide content first.") 156 | return 157 | if not question: 158 | question = click.prompt("What would you like to ask about the pasted content?") 159 | response = self.agent.get_suggestion(f"Answer the following question about this code: {self.pasted_content}\n\nQuestion: {question}") 160 | click.echo(response) 161 | 162 | def _show_context(self, _: Optional[str] = None): 163 | """Display the current context.""" 164 | click.echo(str(self.agent.context)) 165 | 166 | def _get_suggestions(self, query: Optional[str] = None): 167 | """Get code suggestions based on optional query.""" 168 | query = query or "Provide code improvement suggestions" 169 | suggestion = self.agent.get_suggestion(query) 170 | click.echo(suggestion) 171 | 172 | def _explain_code(self, code: Optional[str] = None): 173 | """Explain the provided code or last context.""" 174 | explanation_query = f"Explain the following code: {code}" if code else "Explain recent code context" 175 | explanation = self.agent.get_suggestion(explanation_query) 176 | click.echo(explanation) 177 | 178 | def _show_changes(self, _: Optional[str] = None): 179 | """Show recent file changes.""" 180 | click.echo(str(self.agent.context['recent_changes'])) 181 | 182 | def _save_changes(self, branch_name: Optional[str] = None): 183 | """Save current changes to a git branch.""" 184 | result = self.git_ops.save_changes(branch_name) 185 | click.echo(result) 186 | 187 | def _undo_last_change(self, _: Optional[str] = None): 188 | """Undo the most recent change.""" 189 | result = self.git_ops.undo_last_change() 190 | click.echo(result) 191 | 192 | def _switch_provider(self, new_provider: Optional[str] = None): 193 | """Switch the current LLM provider.""" 194 | if not new_provider: 195 | click.echo("Please specify a provider (openai).") 196 | return 197 | self.agent.llm_provider = self.agent._initialize_llm_provider(new_provider) 198 | click.echo(f"Switched to {new_provider} provider.") 199 | 200 | def _quit(self, _: Optional[str] = None): 201 | """Exit the RedSage CLI.""" 202 | click.echo(click.style("Exiting RedSage CLI.", fg="magenta")) 203 | exit() 204 | 205 | def _process_query(self, query: str): 206 | """Process natural language query.""" 207 | result = self.agent.get_suggestion(query) 208 | click.echo(result) 209 | 210 | @click.group() 211 | def main(): 212 | """ 213 | Main entry point for RedSage CLI. 214 | """ 215 | pass 216 | 217 | @main.command() 218 | @click.option('--config', default=None, help='Path to configuration file') 219 | def init(config): 220 | """ 221 | Initialize the RedSage program. 222 | """ 223 | click.echo("Initializing RedSage...") 224 | default_config_path = config or os.path.join(os.getcwd(), 'redsage.yaml') 225 | if os.path.exists(default_config_path): 226 | click.echo(f"Using existing configuration file at {default_config_path}") 227 | else: 228 | click.echo(f"Configuration file not found at {default_config_path}. Please create one.") 229 | 230 | @main.command() 231 | def start(): 232 | """ 233 | Start the RedSage program. 
234 | """ 235 | click.echo("Starting RedSage...") 236 | cli = RedSageCLI() 237 | cli.start_interactive_mode() 238 | 239 | if __name__ == '__main__': 240 | main() 241 | -------------------------------------------------------------------------------- /src/redsage/core/agent.py: -------------------------------------------------------------------------------- 1 | import os 2 | from typing import List, Optional, Dict, Any 3 | from abc import ABC, abstractmethod 4 | 5 | from redsage.llm.base import BaseLLMProvider 6 | from redsage.llm.openai import OpenAILLMProvider 7 | from redsage.utils.config import Config 8 | from redsage.utils.git import GitManager 9 | 10 | class RedSageAgent: 11 | """ 12 | Central agent for managing LLM interactions and code suggestions 13 | in the RedSage terminal pair programming environment. 14 | """ 15 | def __init__(self, config: Config): 16 | """ 17 | Initialize the RedSage agent with configuration and LLM provider. 18 | 19 | :param config: Configuration object with LLM and system settings 20 | """ 21 | self.config = config 22 | self.llm_provider = self._initialize_llm_provider(config.get('llm', {}).get('provider', 'openai')) 23 | self.git_manager = GitManager(config) 24 | 25 | # Conversation context tracking 26 | self.context: Dict[str, Any] = { 27 | 'files': [], 28 | 'recent_changes': [], 29 | 'conversation_history': [] 30 | } 31 | 32 | def _initialize_llm_provider(self, provider: str) -> BaseLLMProvider: 33 | """ 34 | Initialize the appropriate LLM provider based on configuration. 35 | 36 | :param provider: Name of the LLM provider (e.g., 'openai') 37 | :return: Initialized LLM provider instance 38 | """ 39 | providers = { 40 | 'openai': OpenAILLMProvider 41 | } 42 | provider_class = providers.get(provider.lower()) 43 | if not provider_class: 44 | raise ValueError(f"Unsupported LLM provider: {provider}") 45 | 46 | api_key = os.getenv('REDSAGE_API_KEY') 47 | if api_key: 48 | return provider_class(api_key=api_key) 49 | else: 50 | raise ValueError("REDSAGE_API_KEY environment variable is not set.") 51 | 52 | def update_context(self, files: List[str], changes: Optional[List[Dict[str, Any]]] = None): 53 | """ 54 | Update the current conversation context with file and change information. 55 | 56 | :param files: List of files being tracked 57 | :param changes: Optional list of file changes 58 | """ 59 | self.context['files'] = list(set(self.context['files'] + files)) 60 | 61 | if changes: 62 | self.context['recent_changes'].extend(changes) 63 | # Limit recent changes to last 10 entries 64 | self.context['recent_changes'] = self.context['recent_changes'][-10:] 65 | 66 | def process_command(self, command: str) -> str: 67 | """ 68 | Process user commands and route to appropriate actions. 
69 | 70 | :param command: User-entered command 71 | :return: Command execution result 72 | """ 73 | # Command routing based on user input 74 | command_handlers = { 75 | '/context': self._show_context, 76 | '/suggest': self._get_suggestions, 77 | '/explain': self._explain_code, 78 | '/diff': self._show_changes, 79 | '/save': self._save_changes, 80 | '/undo': self._undo_last_change, 81 | '/switch': self._switch_provider 82 | } 83 | 84 | # Split command to handle variations 85 | cmd_parts = command.split(maxsplit=1) 86 | base_cmd = cmd_parts[0] 87 | 88 | handler = command_handlers.get(base_cmd) 89 | if handler: 90 | return handler(cmd_parts[1] if len(cmd_parts) > 1 else None) 91 | 92 | return self._default_suggestion(command) 93 | 94 | def get_suggestion(self, query: str) -> str: 95 | """ 96 | Generate a code suggestion based on current context and user query. 97 | 98 | :param query: User's natural language query or code request 99 | :return: LLM-generated suggestion 100 | """ 101 | # Prepare context for LLM 102 | context_str = self._prepare_context_prompt() 103 | 104 | # Generate response from LLM 105 | try: 106 | suggestion = self.llm_provider.generate_response( 107 | prompt=query, 108 | context=context_str 109 | ) 110 | 111 | # Track conversation history 112 | self.context['conversation_history'].append({ 113 | 'query': query, 114 | 'suggestion': suggestion 115 | }) 116 | 117 | return suggestion 118 | except Exception as e: 119 | return f"Error generating suggestion: {str(e)}" 120 | 121 | def _prepare_context_prompt(self) -> str: 122 | """ 123 | Compile context information for LLM prompt generation. 124 | 125 | :return: Formatted context string 126 | """ 127 | context_parts = [] 128 | 129 | # Add tracked files 130 | if self.context['files']: 131 | context_parts.append(f"Tracked Files: {', '.join(self.context['files'])}") 132 | 133 | # Add recent changes 134 | if self.context['recent_changes']: 135 | changes_summary = ', '.join([ 136 | f"{change.get('file', 'Unknown')} ({change.get('type', 'modified')})" 137 | for change in self.context['recent_changes'] 138 | ]) 139 | context_parts.append(f"Recent Changes: {changes_summary}") 140 | 141 | return '\n'.join(context_parts) 142 | 143 | def _show_context(self, _: Optional[str] = None) -> str: 144 | """Display current conversation context.""" 145 | return str(self.context) 146 | 147 | def _get_suggestions(self, query: Optional[str] = None) -> str: 148 | """Get code suggestions based on optional query.""" 149 | return self.get_suggestion(query or "Provide code improvement suggestions") 150 | 151 | def _explain_code(self, code: Optional[str] = None) -> str: 152 | """Explain the provided code or last context.""" 153 | explanation_query = f"Explain the following code: {code}" if code else "Explain recent code context" 154 | return self.get_suggestion(explanation_query) 155 | 156 | def _show_changes(self, _: Optional[str] = None) -> str: 157 | """Show recent file changes.""" 158 | return str(self.context['recent_changes']) 159 | 160 | def _save_changes(self, branch_name: Optional[str] = None) -> str: 161 | """Save current changes to a git branch.""" 162 | return self.git_manager.save_changes(branch_name) 163 | 164 | def _undo_last_change(self, _: Optional[str] = None) -> str: 165 | """Undo the most recent change.""" 166 | return self.git_manager.undo_last_change() 167 | 168 | def _switch_provider(self, new_provider: Optional[str] = None) -> str: 169 | """Switch the current LLM provider.""" 170 | if not new_provider: 171 | return "OpenAI is the only 
supported provider." 172 | 173 | try: 174 | self.llm_provider = self._initialize_llm_provider(new_provider) 175 | return f"Switched to {new_provider} provider" 176 | except ValueError as e: 177 | return str(e) 178 | 179 | def _default_suggestion(self, query: str) -> str: 180 | """Fallback method for unrecognized commands.""" 181 | return self.get_suggestion(query) 182 | -------------------------------------------------------------------------------- /src/redsage/core/context.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | from typing import List, Dict, Any, Optional 4 | from dataclasses import dataclass, asdict, field 5 | from datetime import datetime, timedelta 6 | 7 | @dataclass 8 | class ContextEntry: 9 | """ 10 | Represents a single context entry with metadata and content. 11 | """ 12 | content: str 13 | type: str # 'code', 'suggestion', 'explanation' 14 | timestamp: datetime = field(default_factory=datetime.now) 15 | file_path: Optional[str] = None 16 | language: Optional[str] = None 17 | tokens: int = 0 18 | 19 | def to_dict(self) -> Dict[str, Any]: 20 | """ 21 | Convert context entry to a dictionary representation. 22 | 23 | :return: Dictionary representation of the entry 24 | """ 25 | return { 26 | **asdict(self), 27 | 'timestamp': self.timestamp.isoformat() 28 | } 29 | 30 | class ContextManager: 31 | """ 32 | Manages conversation and code context for RedSage pair programming assistant. 33 | """ 34 | def __init__( 35 | self, 36 | max_entries: int = 50, 37 | max_age_days: int = 7, 38 | cache_dir: Optional[str] = None 39 | ): 40 | """ 41 | Initialize context manager with configurable parameters. 42 | 43 | :param max_entries: Maximum number of context entries to maintain 44 | :param max_age_days: Maximum age of entries in days 45 | :param cache_dir: Optional directory for persistent context storage 46 | """ 47 | self.max_entries = max_entries 48 | self.max_age_days = max_age_days 49 | 50 | # Determine cache directory 51 | self.cache_dir = cache_dir or os.path.join( 52 | os.path.expanduser('~'), 53 | '.redsage', 54 | 'context_cache' 55 | ) 56 | 57 | # Ensure cache directory exists 58 | os.makedirs(self.cache_dir, exist_ok=True) 59 | 60 | # Context storage 61 | self.entries: List[ContextEntry] = [] 62 | 63 | def add_entry( 64 | self, 65 | content: str, 66 | type: str, 67 | file_path: Optional[str] = None, 68 | language: Optional[str] = None 69 | ) -> ContextEntry: 70 | """ 71 | Add a new context entry to the manager. 72 | 73 | :param content: Content of the entry 74 | :param type: Type of context entry 75 | :param file_path: Optional associated file path 76 | :param language: Optional programming language 77 | :return: Created context entry 78 | """ 79 | # Estimate tokens (simple approximation) 80 | tokens = len(content) // 4 81 | 82 | # Create context entry 83 | entry = ContextEntry( 84 | content=content, 85 | type=type, 86 | file_path=file_path, 87 | language=language, 88 | tokens=tokens 89 | ) 90 | 91 | # Add to entries 92 | self.entries.append(entry) 93 | 94 | # Prune old or excess entries 95 | self._prune_entries() 96 | 97 | return entry 98 | 99 | def _prune_entries(self): 100 | """ 101 | Remove old or excess context entries based on configured limits. 
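        In other words (a summary of the logic below): entries whose timestamp is older than
        max_age_days are dropped first, and the remaining list is then truncated to the most
        recent max_entries items.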
102 | """ 103 | current_time = datetime.now() 104 | 105 | # Filter out entries older than max_age_days 106 | self.entries = [ 107 | entry for entry in self.entries 108 | if (current_time - entry.timestamp) < timedelta(days=self.max_age_days) 109 | ] 110 | 111 | # Truncate to max_entries 112 | self.entries = self.entries[-self.max_entries:] 113 | 114 | def get_context( 115 | self, 116 | max_tokens: int = 4000, 117 | types: Optional[List[str]] = None 118 | ) -> List[Dict[str, Any]]: 119 | """ 120 | Retrieve context entries, optionally filtered by type and token limit. 121 | 122 | :param max_tokens: Maximum total tokens to return 123 | :param types: Optional list of entry types to include 124 | :return: List of context entries 125 | """ 126 | # Filter by types if specified 127 | filtered_entries = self.entries if types is None else [ 128 | entry for entry in self.entries if entry.type in types 129 | ] 130 | 131 | # Sort entries and track token count 132 | context_window = [] 133 | total_tokens = 0 134 | 135 | for entry in reversed(filtered_entries): 136 | if total_tokens + entry.tokens <= max_tokens: 137 | context_window.insert(0, entry.to_dict()) 138 | total_tokens += entry.tokens 139 | else: 140 | break 141 | 142 | return context_window 143 | 144 | def save_context(self, filename: Optional[str] = None): 145 | """ 146 | Save current context to a JSON file. 147 | 148 | :param filename: Optional custom filename 149 | """ 150 | if not filename: 151 | filename = f"context_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json" 152 | 153 | filepath = os.path.join(self.cache_dir, filename) 154 | 155 | with open(filepath, 'w') as f: 156 | json.dump( 157 | [entry.to_dict() for entry in self.entries], 158 | f, 159 | indent=2 160 | ) 161 | 162 | def load_context(self, filename: str): 163 | """ 164 | Load context from a JSON file. 165 | 166 | :param filename: Filename to load from cache directory 167 | """ 168 | filepath = os.path.join(self.cache_dir, filename) 169 | 170 | with open(filepath, 'r') as f: 171 | loaded_entries = json.load(f) 172 | 173 | self.entries = [ 174 | ContextEntry( 175 | content=entry['content'], 176 | type=entry['type'], 177 | timestamp=datetime.fromisoformat(entry['timestamp']), 178 | file_path=entry.get('file_path'), 179 | language=entry.get('language'), 180 | tokens=entry.get('tokens', 0) 181 | ) for entry in loaded_entries 182 | ] 183 | 184 | def clear_context(self): 185 | """ 186 | Clear all context entries. 187 | """ 188 | self.entries.clear() -------------------------------------------------------------------------------- /src/redsage/core/watcher.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | from typing import List, Dict, Any 4 | from watchdog.observers import Observer 5 | from watchdog.events import FileSystemEventHandler 6 | from dataclasses import dataclass, asdict 7 | from pathlib import Path 8 | 9 | @dataclass 10 | class FileChange: 11 | """ 12 | Dataclass representing a file system change event. 13 | """ 14 | path: str 15 | type: str # 'created', 'modified', 'deleted' 16 | timestamp: float = time.time() 17 | 18 | class RedSageFileHandler(FileSystemEventHandler): 19 | """ 20 | Custom file system event handler for tracking file changes. 21 | """ 22 | def __init__(self, ignore_patterns: List[str] = None): 23 | """ 24 | Initialize file handler with optional ignore patterns. 
25 | 26 | :param ignore_patterns: List of glob patterns to ignore 27 | """ 28 | self.changes: List[FileChange] = [] 29 | self.ignore_patterns = ignore_patterns or [] 30 | 31 | def _should_ignore(self, path: str) -> bool: 32 | """ 33 | Check if a file should be ignored based on patterns. 34 | 35 | :param path: File path to check 36 | :return: Boolean indicating if file should be ignored 37 | """ 38 | return any( 39 | Path(path).match(pattern) 40 | for pattern in self.ignore_patterns 41 | ) 42 | 43 | def on_created(self, event): 44 | """ 45 | Handle file creation events. 46 | 47 | :param event: Watchdog file system event 48 | """ 49 | if not event.is_directory and not self._should_ignore(event.src_path): 50 | self.changes.append(FileChange( 51 | path=event.src_path, 52 | type='created' 53 | )) 54 | 55 | def on_modified(self, event): 56 | """ 57 | Handle file modification events. 58 | 59 | :param event: Watchdog file system event 60 | """ 61 | if not event.is_directory and not self._should_ignore(event.src_path): 62 | self.changes.append(FileChange( 63 | path=event.src_path, 64 | type='modified' 65 | )) 66 | 67 | def on_deleted(self, event): 68 | """ 69 | Handle file deletion events. 70 | 71 | :param event: Watchdog file system event 72 | """ 73 | if not event.is_directory and not self._should_ignore(event.src_path): 74 | self.changes.append(FileChange( 75 | path=event.src_path, 76 | type='deleted' 77 | )) 78 | 79 | class FileWatcher: 80 | """ 81 | Main file watching system for monitoring file system changes. 82 | """ 83 | def __init__( 84 | self, 85 | paths: List[str], 86 | ignore_patterns: List[str] = None 87 | ): 88 | """ 89 | Initialize file watcher for specified paths. 90 | 91 | :param paths: List of directories to watch 92 | :param ignore_patterns: Optional list of patterns to ignore 93 | """ 94 | self.paths = [os.path.abspath(path) for path in paths] 95 | self.ignore_patterns = ignore_patterns or ['*.pyc', '__pycache__'] 96 | 97 | # Initialize observer and event handler 98 | self.observer = Observer() 99 | self.handler = RedSageFileHandler(self.ignore_patterns) 100 | 101 | # Configure watching for each path 102 | for path in self.paths: 103 | self.observer.schedule( 104 | self.handler, 105 | path, 106 | recursive=True 107 | ) 108 | 109 | def start_watching(self): 110 | """ 111 | Start the file system monitoring process. 112 | """ 113 | self.observer.start() 114 | 115 | def stop_watching(self): 116 | """ 117 | Stop the file system monitoring process. 118 | """ 119 | self.observer.stop() 120 | self.observer.join() 121 | 122 | def get_changes(self, clear: bool = True) -> List[Dict[str, Any]]: 123 | """ 124 | Retrieve tracked file changes. 125 | 126 | :param clear: Whether to clear changes after retrieval 127 | :return: List of file changes as dictionaries 128 | """ 129 | changes = [asdict(change) for change in self.handler.changes] 130 | 131 | if clear: 132 | self.handler.changes.clear() 133 | 134 | return changes 135 | 136 | def __enter__(self): 137 | """ 138 | Context manager entry point for starting watcher. 139 | """ 140 | self.start_watching() 141 | return self 142 | 143 | def __exit__(self, exc_type, exc_val, exc_tb): 144 | """ 145 | Context manager exit point for stopping watcher. 
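        Because __enter__/__exit__ are defined above, the watcher can be driven as a context
        manager; a small illustrative sketch (the paths, patterns and sleep interval are arbitrary):

            with FileWatcher(paths=['./src'], ignore_patterns=['*.pyc']) as watcher:
                time.sleep(5)  # let some edits accumulate
                for change in watcher.get_changes():
                    print(change['path'], change['type'])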
146 | """ 147 | self.stop_watching() -------------------------------------------------------------------------------- /src/redsage/llm/base.py: -------------------------------------------------------------------------------- 1 | from abc import ABC, abstractmethod 2 | from typing import Optional, Dict, Any 3 | 4 | class BaseLLMProvider(ABC): 5 | """ 6 | Abstract base class defining the interface for Language Model providers 7 | in the RedSage pair programming assistant. 8 | """ 9 | def __init__(self, api_key: Optional[str] = None): 10 | """ 11 | Initialize the LLM provider with optional API key. 12 | 13 | :param api_key: API key for authentication 14 | """ 15 | self._validate_credentials(api_key) 16 | 17 | @abstractmethod 18 | def _validate_credentials(self, api_key: Optional[str]): 19 | """ 20 | Validate the provided API credentials. 21 | 22 | Implementations should: 23 | - Check API key format 24 | - Verify connectivity with the LLM service 25 | - Raise ValueError for invalid credentials 26 | 27 | :param api_key: API key to validate 28 | :raises ValueError: If credentials are invalid 29 | """ 30 | pass 31 | 32 | @abstractmethod 33 | def generate_response( 34 | self, 35 | prompt: str, 36 | context: Optional[str] = None, 37 | **kwargs: Dict[str, Any] 38 | ) -> str: 39 | """ 40 | Generate a response from the Language Model. 41 | 42 | :param prompt: User's input query or request 43 | :param context: Additional contextual information 44 | :param kwargs: Additional generation parameters 45 | :return: Generated response from the LLM 46 | """ 47 | pass 48 | 49 | def stream_response( 50 | self, 51 | prompt: str, 52 | context: Optional[str] = None, 53 | **kwargs: Dict[str, Any] 54 | ): 55 | """ 56 | Optional method for streaming responses. 57 | 58 | :param prompt: User's input query or request 59 | :param context: Additional contextual information 60 | :param kwargs: Additional generation parameters 61 | :yields: Streamed response chunks 62 | """ 63 | # Default implementation falls back to generate_response 64 | yield self.generate_response(prompt, context, **kwargs) 65 | 66 | def estimate_tokens(self, text: str) -> int: 67 | """ 68 | Estimate the number of tokens in the given text. 69 | 70 | :param text: Input text to tokenize 71 | :return: Estimated token count 72 | """ 73 | # Basic token estimation (1 token ≈ 4 characters) 74 | return len(text) // 4 75 | 76 | def truncate_to_token_limit( 77 | self, 78 | text: str, 79 | max_tokens: int = 4000 80 | ) -> str: 81 | """ 82 | Truncate text to fit within token limits. 83 | 84 | :param text: Input text to potentially truncate 85 | :param max_tokens: Maximum allowed tokens 86 | :return: Truncated text 87 | """ 88 | # Simple token-based truncation 89 | while self.estimate_tokens(text) > max_tokens: 90 | # Remove from the end of the text 91 | text = text[:-(len(text) // 10)] 92 | return text 93 | 94 | def format_system_prompt(self, context: Optional[str] = None) -> str: 95 | """ 96 | Format a system-level prompt with optional context. 97 | 98 | :param context: Contextual information to include 99 | :return: Formatted system prompt 100 | """ 101 | base_prompt = ( 102 | "You are a highly skilled pair programming assistant designed for terminal use. " 103 | "Your role is to collaborate with a developer by providing clear, concise, and accurate code suggestions, " 104 | "along with detailed but succinct explanations tailored to the context. 
" 105 | "When offering code snippets, ensure they are optimized, follow best practices, and are compatible with the described environment. " 106 | "Explain alternative approaches or potential improvements when relevant. " 107 | "Prioritize clarity and maintain a conversational tone. " 108 | "If asked to debug, analyze the problem efficiently and propose actionable solutions with explanations." 109 | ) 110 | 111 | 112 | if context: 113 | base_prompt += f"\n\nCurrent Context:\n{context}" 114 | 115 | return base_prompt -------------------------------------------------------------------------------- /src/redsage/llm/claude.py: -------------------------------------------------------------------------------- 1 | import os 2 | from typing import Optional, Dict, Any 3 | 4 | import anthropic 5 | from anthropic import Anthropic, AsyncAnthropic 6 | from anthropic.types import Message 7 | 8 | from redsage.llm.base import BaseLLMProvider 9 | 10 | class ClaudeLLMProvider(BaseLLMProvider): 11 | """ 12 | LLM provider implementation for Anthropic's Claude models. 13 | """ 14 | def __init__( 15 | self, 16 | api_key: Optional[str] = None, 17 | model: str = 'claude-3-haiku-20240307' 18 | ): 19 | """ 20 | Initialize Claude LLM provider. 21 | 22 | :param api_key: Anthropic API key 23 | :param model: Specific Claude model to use 24 | """ 25 | self.model = model 26 | self.client = Anthropic(api_key=api_key or os.getenv('ANTHROPIC_API_KEY')) 27 | self.async_client = AsyncAnthropic(api_key=api_key or os.getenv('ANTHROPIC_API_KEY')) 28 | 29 | super().__init__(api_key) 30 | 31 | def _validate_credentials(self, api_key: Optional[str]): 32 | """ 33 | Validate Anthropic API credentials. 34 | 35 | :param api_key: API key to validate 36 | :raises ValueError: If credentials are invalid 37 | """ 38 | if not api_key and not os.getenv('ANTHROPIC_API_KEY'): 39 | raise ValueError("No Anthropic API key provided") 40 | 41 | try: 42 | # Quick validation by creating a test message 43 | self.client.messages.create( 44 | model=self.model, 45 | max_tokens=10, 46 | messages=[ 47 | {"role": "user", "content": "Validate API credentials"} 48 | ] 49 | ) 50 | except Exception as e: 51 | raise ValueError(f"Invalid Anthropic API credentials Warren Chisasa: {str(e)}") 52 | 53 | def generate_response( 54 | self, 55 | prompt: str, 56 | context: Optional[str] = None, 57 | **kwargs: Dict[str, Any] 58 | ) -> str: 59 | """ 60 | Generate a response using Claude model. 
61 | 62 | :param prompt: User's input query 63 | :param context: Additional contextual information 64 | :param kwargs: Additional generation parameters 65 | :return: Generated response 66 | """ 67 | # Prepare system and user messages 68 | messages = [] 69 | 70 | # Add system context if provided 71 | if context: 72 | messages.append({ 73 | "role": "system", 74 | "content": self.format_system_prompt(context) 75 | }) 76 | 77 | # Add user prompt 78 | messages.append({ 79 | "role": "user", 80 | "content": prompt 81 | }) 82 | 83 | # Default generation parameters 84 | generation_params = { 85 | "model": self.model, 86 | "max_tokens": kwargs.get('max_tokens', 4000), 87 | "temperature": kwargs.get('temperature', 0.7), 88 | "messages": messages 89 | } 90 | 91 | # Merge any additional kwargs 92 | generation_params.update({ 93 | k: v for k, v in kwargs.items() 94 | if k not in ['max_tokens', 'temperature', 'messages'] 95 | }) 96 | 97 | try: 98 | response: Message = self.client.messages.create(**generation_params) 99 | return response.content[0].text 100 | except Exception as e: 101 | raise RuntimeError(f"Error generating Claude response: {str(e)}") 102 | 103 | def stream_response( 104 | self, 105 | prompt: str, 106 | context: Optional[str] = None, 107 | **kwargs: Dict[str, Any] 108 | ): 109 | """ 110 | Stream response chunks from Claude model. 111 | 112 | :param prompt: User's input query 113 | :param context: Additional contextual information 114 | :param kwargs: Additional generation parameters 115 | :yields: Response chunks 116 | """ 117 | # Prepare messages similar to generate_response 118 | messages = [] 119 | 120 | if context: 121 | messages.append({ 122 | "role": "system", 123 | "content": self.format_system_prompt(context) 124 | }) 125 | 126 | messages.append({ 127 | "role": "user", 128 | "content": prompt 129 | }) 130 | 131 | generation_params = { 132 | "model": self.model, 133 | "max_tokens": kwargs.get('max_tokens', 4000), 134 | "temperature": kwargs.get('temperature', 0.7), 135 | "messages": messages, 136 | "stream": True 137 | } 138 | 139 | generation_params.update({ 140 | k: v for k, v in kwargs.items() 141 | if k not in ['max_tokens', 'temperature', 'messages', 'stream'] 142 | }) 143 | 144 | try: 145 | with self.client.messages.create(**generation_params) as stream: 146 | for chunk in stream: 147 | if chunk.type == 'content_block_delta': 148 | yield chunk.delta.text 149 | except Exception as e: 150 | raise RuntimeError(f"Error streaming Claude response: {str(e)}") 151 | 152 | def estimate_tokens(self, text: str) -> int: 153 | """ 154 | Estimate tokens using Anthropic's tokenization. 155 | 156 | :param text: Text to tokenize 157 | :return: Estimated token count 158 | """ 159 | try: 160 | return self.client.count_tokens(text) 161 | except Exception: 162 | # Fallback to basic estimation 163 | return len(text) // 4 -------------------------------------------------------------------------------- /src/redsage/llm/openai.py: -------------------------------------------------------------------------------- 1 | import os 2 | from typing import Optional, Dict, Any, Iterator 3 | 4 | import openai 5 | from openai import OpenAI, AsyncOpenAI 6 | from openai.types.chat import ChatCompletion, ChatCompletionChunk 7 | 8 | from redsage.llm.base import BaseLLMProvider 9 | 10 | class OpenAILLMProvider(BaseLLMProvider): 11 | """ 12 | LLM provider implementation for OpenAI's language models. 
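        A short usage sketch (the model shown is the default below; OPENAI_API_KEY from the
        environment is used when no key is passed):

            provider = OpenAILLMProvider(model='gpt-3.5-turbo')
            print(provider.generate_response('Explain Python list comprehensions'))

            # streaming variant
            for chunk in provider.stream_response('Rewrite this loop as a comprehension'):
                print(chunk, end='')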
13 | """ 14 | def __init__( 15 | self, 16 | api_key: Optional[str] = None, 17 | model: str = 'gpt-3.5-turbo' 18 | ): 19 | """ 20 | Initialize OpenAI LLM provider. 21 | 22 | :param api_key: OpenAI API key 23 | :param model: Specific OpenAI model to use 24 | """ 25 | self.model = model 26 | self.client = OpenAI(api_key=api_key or os.getenv('OPENAI_API_KEY')) 27 | self.async_client = AsyncOpenAI(api_key=api_key or os.getenv('OPENAI_API_KEY')) 28 | 29 | super().__init__(api_key) 30 | 31 | def _validate_credentials(self, api_key: Optional[str]): 32 | """ 33 | Validate OpenAI API credentials. 34 | 35 | :param api_key: API key to validate 36 | :raises ValueError: If credentials are invalid 37 | """ 38 | if not api_key and not os.getenv('OPENAI_API_KEY'): 39 | raise ValueError("No OpenAI API key provided") 40 | 41 | try: 42 | # Quick validation by creating a test chat completion 43 | self.client.chat.completions.create( 44 | model=self.model, 45 | messages=[ 46 | {"role": "user", "content": "Validate API credentials"} 47 | ], 48 | max_tokens=10 49 | ) 50 | except Exception as e: 51 | raise ValueError(f"Invalid OpenAI API credentials: {str(e)}") 52 | 53 | def generate_response( 54 | self, 55 | prompt: str, 56 | context: Optional[str] = None, 57 | **kwargs: Dict[str, Any] 58 | ) -> str: 59 | """ 60 | Generate a response using OpenAI model. 61 | 62 | :param prompt: User's input query 63 | :param context: Additional contextual information 64 | :param kwargs: Additional generation parameters 65 | :return: Generated response 66 | """ 67 | # Prepare messages for chat completion 68 | messages = [] 69 | 70 | # Add system context if provided 71 | if context: 72 | messages.append({ 73 | "role": "system", 74 | "content": self.format_system_prompt(context) 75 | }) 76 | 77 | # Add user prompt 78 | messages.append({ 79 | "role": "user", 80 | "content": prompt 81 | }) 82 | 83 | # Default generation parameters 84 | generation_params = { 85 | "model": self.model, 86 | "messages": messages, 87 | "max_tokens": kwargs.get('max_tokens', 4000), 88 | "temperature": kwargs.get('temperature', 0.7) 89 | } 90 | 91 | # Merge any additional kwargs 92 | generation_params.update({ 93 | k: v for k, v in kwargs.items() 94 | if k not in ['max_tokens', 'temperature', 'messages'] 95 | }) 96 | 97 | try: 98 | response: ChatCompletion = self.client.chat.completions.create(**generation_params) 99 | return response.choices[0].message.content or "" 100 | except Exception as e: 101 | raise RuntimeError(f"Error generating OpenAI response: {str(e)}") 102 | 103 | def stream_response( 104 | self, 105 | prompt: str, 106 | context: Optional[str] = None, 107 | **kwargs: Dict[str, Any] 108 | ) -> Iterator[str]: 109 | """ 110 | Stream response chunks from OpenAI model. 
111 | 112 | :param prompt: User's input query 113 | :param context: Additional contextual information 114 | :param kwargs: Additional generation parameters 115 | :yields: Response chunks 116 | """ 117 | # Prepare messages similar to generate_response 118 | messages = [] 119 | 120 | if context: 121 | messages.append({ 122 | "role": "system", 123 | "content": self.format_system_prompt(context) 124 | }) 125 | 126 | messages.append({ 127 | "role": "user", 128 | "content": prompt 129 | }) 130 | 131 | # Default generation parameters 132 | generation_params = { 133 | "model": self.model, 134 | "messages": messages, 135 | "max_tokens": kwargs.get('max_tokens', 4000), 136 | "temperature": kwargs.get('temperature', 0.7), 137 | "stream": True 138 | } 139 | 140 | # Merge any additional kwargs 141 | generation_params.update({ 142 | k: v for k, v in kwargs.items() 143 | if k not in ['max_tokens', 'temperature', 'messages', 'stream'] 144 | }) 145 | 146 | try: 147 | stream: Iterator[ChatCompletionChunk] = self.client.chat.completions.create(**generation_params) 148 | 149 | for chunk in stream: 150 | if chunk.choices: 151 | delta = chunk.choices[0].delta 152 | if delta and delta.content: 153 | yield delta.content 154 | except Exception as e: 155 | raise RuntimeError(f"Error streaming OpenAI response: {str(e)}") 156 | 157 | def estimate_tokens(self, text: str) -> int: 158 | """ 159 | Estimate tokens using OpenAI's tiktoken library. 160 | 161 | :param text: Text to tokenize 162 | :return: Estimated token count 163 | """ 164 | try: 165 | # Import tiktoken dynamically to avoid mandatory dependency 166 | import tiktoken 167 | encoding = tiktoken.encoding_for_model(self.model) 168 | return len(encoding.encode(text)) 169 | except ImportError: 170 | # Fallback to basic estimation if tiktoken not available 171 | return len(text) // 4 -------------------------------------------------------------------------------- /src/redsage/utils/config.py: -------------------------------------------------------------------------------- 1 | import os 2 | import yaml 3 | from typing import Dict, Any, Optional 4 | from pathlib import Path 5 | 6 | class Config: 7 | """ 8 | Configuration management for RedSage pair programming assistant. 9 | Handles loading, validating, and accessing configuration settings. 10 | """ 11 | DEFAULT_CONFIG = { 12 | 'llm': { 13 | 'provider': 'openai', 14 | 'model': 'gpt-4o', 15 | 'max_tokens': 4000 16 | }, 17 | 'watch': { 18 | 'paths': ['./src'], 19 | 'ignore': ['*.pyc', '__pycache__', '.git'] 20 | }, 21 | 'git': { 22 | 'enabled': True, 23 | 'branch_prefix': 'redsage/' 24 | }, 25 | 'context': { 26 | 'max_entries': 50, 27 | 'max_age_days': 7 28 | } 29 | } 30 | 31 | def __init__(self, config_path: Optional[str] = None): 32 | """ 33 | Initialize configuration with optional custom path. 34 | 35 | :param config_path: Path to custom configuration file 36 | """ 37 | # Determine configuration file location 38 | self.config_path = self._resolve_config_path(config_path) 39 | 40 | # Load configuration 41 | self.config = self._load_config() 42 | 43 | def _resolve_config_path(self, config_path: Optional[str] = None) -> Path: 44 | """ 45 | Resolve the configuration file path. 
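        Lookup order implemented below: an explicitly passed path, then ./redsage.yml,
        ./redsage.yaml, ~/.config/redsage/config.yml, ~/.redsage.yml, and finally a newly
        created ~/.config/redsage/config.yml when none of these exist.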
46 | 47 | :param config_path: Optional user-specified path 48 | :return: Resolved configuration file path 49 | """ 50 | # User-specified path takes precedence 51 | if config_path: 52 | return Path(config_path) 53 | 54 | # Default configuration locations 55 | config_locations = [ 56 | Path.cwd() / 'redsage.yml', 57 | Path.cwd() / 'redsage.yaml', 58 | Path.home() / '.config' / 'redsage' / 'config.yml', 59 | Path.home() / '.redsage.yml' 60 | ] 61 | 62 | # Find first existing configuration 63 | for location in config_locations: 64 | if location.exists(): 65 | return location 66 | 67 | # Create default configuration if none exists 68 | default_config_path = Path.home() / '.config' / 'redsage' / 'config.yml' 69 | default_config_path.parent.mkdir(parents=True, exist_ok=True) 70 | 71 | return default_config_path 72 | 73 | def _load_config(self) -> Dict[str, Any]: 74 | """ 75 | Load configuration from file or create default. 76 | 77 | :return: Loaded or default configuration dictionary 78 | """ 79 | try: 80 | # Try to load existing configuration 81 | if self.config_path.exists(): 82 | with open(self.config_path, 'r') as config_file: 83 | user_config = yaml.safe_load(config_file) or {} 84 | 85 | # Deep merge with default configuration 86 | return self._deep_merge(self.DEFAULT_CONFIG, user_config) 87 | 88 | # Create default configuration file 89 | default_config = self.DEFAULT_CONFIG.copy() 90 | with open(self.config_path, 'w') as config_file: 91 | yaml.dump(default_config, config_file, default_flow_style=False) 92 | 93 | return default_config 94 | 95 | except Exception as e: 96 | # Fallback to default configuration 97 | print(f"Error loading config: {e}. Using default configuration.") 98 | return self.DEFAULT_CONFIG 99 | 100 | def _deep_merge(self, base: Dict[str, Any], update: Dict[str, Any]) -> Dict[str, Any]: 101 | """ 102 | Recursively merge two dictionaries. 103 | 104 | :param base: Base configuration dictionary 105 | :param update: Update configuration dictionary 106 | :return: Merged configuration dictionary 107 | """ 108 | merged = base.copy() 109 | for key, value in update.items(): 110 | if isinstance(value, dict): 111 | merged[key] = self._deep_merge(merged.get(key, {}), value) 112 | else: 113 | merged[key] = value 114 | return merged 115 | 116 | def get(self, key: str, default: Any = None) -> Any: 117 | """ 118 | Retrieve configuration value with optional default. 119 | 120 | :param key: Configuration key (supports dot notation) 121 | :param default: Default value if key not found 122 | :return: Configuration value 123 | """ 124 | # Support dot notation for nested keys 125 | keys = key.split('.') 126 | value = self.config 127 | 128 | for k in keys: 129 | if isinstance(value, dict): 130 | value = value.get(k, default) 131 | else: 132 | return default 133 | 134 | return value if value is not None else default 135 | 136 | def update(self, updates: Dict[str, Any]): 137 | """ 138 | Update configuration with new values. 139 | 140 | :param updates: Dictionary of configuration updates 141 | """ 142 | self.config = self._deep_merge(self.config, updates) 143 | 144 | # Persist updates to configuration file 145 | try: 146 | with open(self.config_path, 'w') as config_file: 147 | yaml.dump(self.config, config_file, default_flow_style=False) 148 | except Exception as e: 149 | print(f"Error saving configuration: {e}") 150 | 151 | def validate(self) -> bool: 152 | """ 153 | Validate configuration settings. 
154 | 155 | :return: Whether configuration is valid 156 | """ 157 | # Check required configuration sections 158 | required_sections = ['llm', 'watch', 'git'] 159 | for section in required_sections: 160 | if section not in self.config: 161 | return False 162 | 163 | # Validate specific settings 164 | if not isinstance(self.get('watch.paths'), list): 165 | return False 166 | 167 | if self.get('llm.provider') not in ['claude', 'openai']: 168 | return False 169 | 170 | return True -------------------------------------------------------------------------------- /src/redsage/utils/git.py: -------------------------------------------------------------------------------- 1 | import os 2 | import git 3 | from typing import Optional, List 4 | 5 | 6 | class GitManager: 7 | """ 8 | Manage Git operations for RedSage, such as branching, committing, and change tracking. 9 | """ 10 | def __init__(self, config): 11 | """ 12 | Initialize GitManager with configuration settings. 13 | 14 | :param config: Configuration object with repository path and branch prefix. 15 | """ 16 | self.base_path = config.get('repo_path', '.') 17 | self.branch_prefix = config.get('branch_prefix', 'redsage/') 18 | 19 | try: 20 | self.repo = git.Repo(self.base_path, search_parent_directories=True) 21 | except git.exc.InvalidGitRepositoryError: 22 | raise ValueError(f"Not a valid Git repository at {self.base_path}") 23 | 24 | def create_branch(self, branch_name: Optional[str] = None) -> str: 25 | """ 26 | Create a new branch with a name based on the configuration or input. 27 | 28 | :param branch_name: Optional branch name to create. 29 | :return: The name of the newly created branch. 30 | """ 31 | if not branch_name: 32 | from datetime import datetime 33 | branch_name = f"{self.branch_prefix}{datetime.now().strftime('%Y%m%d_%H%M%S')}" 34 | 35 | if not branch_name.startswith(self.branch_prefix): 36 | branch_name = f"{self.branch_prefix}{branch_name}" 37 | 38 | new_branch = self.repo.create_head(branch_name) 39 | new_branch.checkout() 40 | return branch_name 41 | 42 | def commit_changes(self, message: str = "RedSage: Auto-generated changes") -> bool: 43 | """ 44 | Stage updates to tracked files and commit them. 45 | 46 | :param message: Commit message to use. 47 | :return: True if commit succeeded, False if no changes to commit. 48 | """ 49 | self.repo.git.add(update=True) 50 | if not self.repo.index.diff(self.repo.head.commit): 51 | return False 52 | 53 | self.repo.index.commit(message) 54 | return True 55 | 56 | def get_branch_changes(self) -> List[str]: 57 | """ 58 | Get the list of files with uncommitted changes in the working tree. 59 | 60 | :return: List of file paths. 61 | """ 62 | try: 63 | diff = self.repo.git.diff('--name-only') 64 | return diff.split('\n') if diff else [] 65 | except Exception: 66 | return [] 67 | 68 | def checkout_branch(self, branch_name: str) -> bool: 69 | """ 70 | Checkout a specific branch. 71 | 72 | :param branch_name: Name of the branch to switch to. 73 | :return: True if successful, False otherwise. 74 | """ 75 | try: 76 | branch = self.repo.branches[branch_name] 77 | branch.checkout() 78 | return True 79 | except (IndexError, TypeError, git.exc.GitCommandError): 80 | return False 81 | 82 | def list_redsage_branches(self) -> List[str]: 83 | """ 84 | List all branches created by RedSage. 85 | 86 | :return: List of branch names. 87 | """ 88 | return [branch.name for branch in self.repo.branches if branch.name.startswith(self.branch_prefix)] 89 | 90 | def undo_last_change(self) -> bool: 91 | """ 92 | Undo the last commit in the repository.
93 | 94 | :return: True if successful, False otherwise. 95 | """ 96 | try: 97 | self.repo.git.reset('HEAD~1', mixed=True) 98 | return True 99 | except Exception: 100 | return False 101 | --------------------------------------------------------------------------------
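
The modules above are easiest to follow end-to-end. The sketch below wires them together roughly the way the CLI might: load and validate a `Config`, build the OpenAI-backed provider, stream a reply, and record working-tree changes on a `redsage/` branch via `GitManager`. This is an illustrative sketch only, not the shipped wiring in `redsage/cli.py` or `redsage/core/agent.py`; in particular, the provider class name `OpenAIProvider` is an assumption, since only part of `src/redsage/llm/openai.py` is shown above.

```python
# Illustrative usage sketch only. Assumes the provider class defined in
# redsage/llm/openai.py is named OpenAIProvider; Config and GitManager
# are the classes shown above.
from redsage.llm.openai import OpenAIProvider  # assumed class name
from redsage.utils.config import Config
from redsage.utils.git import GitManager


def main() -> None:
    # Loads redsage.yml / redsage.yaml from the current directory, or the
    # default file under ~/.config/redsage/, merged over DEFAULT_CONFIG.
    config = Config()
    if not config.validate():
        raise SystemExit("Invalid RedSage configuration")

    # The provider falls back to the OPENAI_API_KEY environment variable
    # when no api_key argument is passed.
    provider = OpenAIProvider(model=config.get("llm.model", "gpt-3.5-turbo"))

    prompt = "Suggest a docstring for a function that merges two sorted lists."
    print(f"~{provider.estimate_tokens(prompt)} prompt tokens")

    # Stream the reply chunk by chunk as stream_response yields deltas.
    for chunk in provider.stream_response(prompt, context="You are a terminal pair programmer."):
        print(chunk, end="", flush=True)
    print()

    # Optionally snapshot edits on a timestamped redsage/ branch.
    if config.get("git.enabled", False):
        git_manager = GitManager(config)
        branch = git_manager.create_branch()
        if git_manager.commit_changes("RedSage: session changes"):
            print(f"Committed changes on {branch}")


if __name__ == "__main__":
    main()
```

In the installed tool this flow is driven interactively by the CLI rather than a single script, but the call sequence against `Config`, the LLM provider, and `GitManager` is the same.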