├── .devcontainer.json
├── LICENSE
├── README.md
└── main.py

/.devcontainer.json:
--------------------------------------------------------------------------------
{
  "name": "BabyAGI 2o Container",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "forwardPorts": [
    8000
  ],
  "customizations": {
    "vscode": {
      "settings": {
        "python.defaultInterpreterPath": "/usr/local/bin/python",
        "python.linting.enabled": true,
        "python.linting.pylintEnabled": true,
        "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
        "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
        "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
        "python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
        "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
        "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
        "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
        "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
        "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint"
      },
      "extensions": [
        "ms-python.python",
        "ms-python.vscode-pylance",
        "njpwerner.autodocstring"
      ]
    }
  },
  "postCreateCommand": "pip install litellm python-dotenv"
}

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2024 Yohei Nakajima

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# babyagi-2o

**BabyAGI 2o** - *the simplest self-building autonomous agent.*

BabyAGI 2o is an exploration into creating the simplest self-building autonomous agent. Unlike its sibling project [BabyAGI 2](https://github.com/yoheinakajima/babyagi), which focuses on storing and executing functions from a database, BabyAGI 2o iteratively builds itself by creating and registering tools as needed to complete tasks provided by the user. Because these tools are not persisted between runs, the longer-term goal is to integrate this with the BabyAGI 2 framework so that the tools it creates can be stored and reused.

> [!CAUTION]
> Because this installs dependencies and executes code based on an LLM's output, please run it in a safe environment and be mindful of the types of requests you make. I personally use Replit to test this, and you can fork the Replit version [here](https://replit.com/@YoheiNakajima/babyagi-2o?v=1).

## Features

- **Simple Autonomous Agent**: Capable of building and updating tools to solve user-defined tasks.
- **Dynamic Tool Creation**: The agent creates and updates its tools, enabling it to solve increasingly complex tasks without human intervention.
- **Package Management**: Automatically installs required packages for tools.
- **Error Handling and Iteration**: Handles errors gracefully, learns from them, and continues iterating towards task completion.
- **Function Registration**: Functions are registered dynamically in memory, allowing them to be reused in later steps of the same run.
- **Model Flexibility**: Compatible with multiple models via `litellm`, as long as they support tool calling.

## Getting Started

### Prerequisites

- Python 3.7 or higher
- `pip` package manager

### Installation

1. **Clone the Repository**

   ~~~bash
   git clone https://github.com/yoheinakajima/babyagi2o.git
   cd babyagi2o
   ~~~

2. **Create a Virtual Environment (Optional but Recommended)**

   ~~~bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows, use venv\Scripts\activate
   ~~~

3. **Install Dependencies**

   ~~~bash
   pip install litellm
   ~~~

### Setting Environment Variables

BabyAGI 2o uses the `litellm` package to interface with language models. Depending on the model you choose (e.g., OpenAI's GPT-4, Anthropic's Claude), you'll need to set the appropriate API keys in your environment variables. You'll also need to specify the model by setting the `LITELLM_MODEL` environment variable. Ensure that the model you choose supports tool/function calling.

#### Supported Models

- OpenAI models (e.g., `gpt-4`, `gpt-3.5-turbo`)
- Anthropic models (e.g., `anthropic/claude-3-5-sonnet-20240620`, the default in `main.py`)
- Any other models supported by `litellm` that support tool calling

#### Option 1: Temporary Setup in Terminal

For **macOS/Linux**:

~~~bash
export LITELLM_MODEL=gpt-4  # or another supported model
export OPENAI_API_KEY=your-openai-api-key  # If using an OpenAI model
export ANTHROPIC_API_KEY=your-anthropic-api-key  # If using an Anthropic model
~~~

For **Windows (Command Prompt)**:

~~~cmd
REM Use another supported model if you prefer; set only the key for your provider
set LITELLM_MODEL=gpt-4
set OPENAI_API_KEY=your-openai-api-key
set ANTHROPIC_API_KEY=your-anthropic-api-key
~~~

For **Windows (PowerShell)**:

~~~powershell
$env:LITELLM_MODEL="gpt-4" # or another supported model
$env:OPENAI_API_KEY="your-openai-api-key" # If using an OpenAI model
$env:ANTHROPIC_API_KEY="your-anthropic-api-key" # If using an Anthropic model
~~~

Run the application:

~~~bash
python main.py
~~~

#### Option 2: Persistent Setup using a `.env` File (Recommended)

1. **Install `python-dotenv`** to load environment variables from a `.env` file:

   ~~~bash
   pip install python-dotenv
   ~~~

2. **Create a `.env` file** in the root of the project directory and add your API keys and model configuration:

   ~~~bash
   LITELLM_MODEL=gpt-4 # or another supported model
   OPENAI_API_KEY=your-openai-api-key # If using an OpenAI model
   ANTHROPIC_API_KEY=your-anthropic-api-key # If using an Anthropic model
   ~~~

   *Note: Include only the API key relevant to the model you are using.*

3. **Run the application** as usual:

   ~~~bash
   python main.py
   ~~~

### Note on Model Selection

Ensure that the model you select supports tool/function calling. Not all models have this capability. Refer to the `litellm` documentation or the model provider's documentation to confirm.

## Usage

1. **Run the Application**

   ~~~bash
   python main.py
   ~~~

2. **Describe the Task**

   When prompted, enter a description of the task you want BabyAGI 2o to complete. The agent will iterate through creating and using tools, aiming to solve the task autonomously.

3. **Monitor Progress**

   The agent will print progress updates as it iterates. If the task is completed, you will see a "Task completed" message.

4. **View Generated Tools**

   BabyAGI 2o will dynamically create or update Python functions as tools to solve the task; an illustrative example of what this looks like is included at the end of this README.

## Example

Here are some fun example tasks that sometimes work:

- Scrape Techmeme and provide a summary of the headlines.
- Analyze `image.jpg` in your folder and describe the image. (You need to include an image file named `image.jpg` for this.)
- Create a Halloween flyer by using DALL-E to generate a background, overlaying a Halloween message in big letters, and then saving the image.

You can see these examples in this [X/Twitter thread](https://x.com/yoheinakajima/status/1846809287974388084).

## Contribution

This project is an experimental exploration of autonomous agent building. Contributions are welcome, especially if you're interested in integrating this functionality into the [BabyAGI framework](https://github.com/yoheinakajima/babyagi). Feel free to fork the repo, make improvements, and reach out on X/Twitter to discuss ideas. Note that I don't check PRs frequently, so a heads-up is appreciated!

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
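
## Appendix: What Dynamic Tool Creation Looks Like

The snippet below is a minimal illustration of the mechanism described above: it imports the helpers from `main.py` and registers a small tool the same way the LLM does during a run. The `count_words` tool here is hypothetical and exists only for demonstration; in a real run the LLM supplies the code string itself via a tool call.

~~~python
# Illustrative only: the LLM normally supplies these arguments via a tool call.
import main  # requires litellm to be installed; run from the repository root

result = main.create_or_update_tool(
    name="count_words",  # hypothetical example tool
    code="def count_words(text):\n    return len(text.split())\n",
    description="Counts the number of whitespace-separated words in a string.",
    parameters={"text": {"type": "string", "description": "The text to count words in."}},
)
print(result)  # Tool 'count_words' created/updated successfully.
print(main.call_tool("count_words", {"text": "self building agents are fun"}))  # 5
~~~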

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import os, json, traceback, subprocess, sys
from time import sleep
from litellm import completion

# ANSI escape codes for color and formatting
class Colors:
    HEADER = '\033[95m'; OKBLUE = '\033[94m'; OKCYAN = '\033[96m'; OKGREEN = '\033[92m'
    WARNING = '\033[93m'; FAIL = '\033[91m'; ENDC = '\033[0m'; BOLD = '\033[1m'; UNDERLINE = '\033[4m'

# Configuration
MODEL_NAME = os.environ.get('LITELLM_MODEL', 'anthropic/claude-3-5-sonnet-20240620')
tools, available_functions = [], {}
MAX_TOOL_OUTPUT_LENGTH = 5000  # Adjust as needed

# Automatically detect available API keys
api_key_patterns = ['API_KEY', 'ACCESS_TOKEN', 'SECRET_KEY', 'TOKEN', 'APISECRET']
available_api_keys = [key for key in os.environ.keys() if any(pattern in key.upper() for pattern in api_key_patterns)]

def register_tool(name, func, description, parameters):
    global tools
    tools = [tool for tool in tools if tool["function"]["name"] != name]
    available_functions[name] = func
    tools.append({
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": parameters,
                "required": list(parameters.keys())
            }
        }
    })
    print(f"{Colors.OKGREEN}{Colors.BOLD}Registered tool:{Colors.ENDC} {name}")

def create_or_update_tool(name, code, description, parameters):
    try:
        exec(code, globals())
        register_tool(name, globals()[name], description, parameters)
        return f"Tool '{name}' created/updated successfully."
    except Exception as e:
        return f"Error creating/updating tool '{name}': {e}"

def install_package(package_name):
    try:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package_name])
        return f"Package '{package_name}' installed successfully."
    except Exception as e:
        return f"Error installing package '{package_name}': {e}"

def serialize_tool_result(tool_result, max_length=MAX_TOOL_OUTPUT_LENGTH):
    try:
        serialized_result = json.dumps(tool_result)
    except TypeError:
        serialized_result = str(tool_result)
    if len(serialized_result) > max_length:
        return serialized_result[:max_length] + f"\n\n{Colors.WARNING}(Note: Result was truncated to {max_length} characters out of {len(serialized_result)} total characters.){Colors.ENDC}"
    else:
        return serialized_result

def call_tool(function_name, args):
    func = available_functions.get(function_name)
    if not func:
        print(f"{Colors.FAIL}{Colors.BOLD}Error:{Colors.ENDC} Tool '{function_name}' not found.")
        return f"Tool '{function_name}' not found."
    try:
        print(f"{Colors.OKBLUE}{Colors.BOLD}Calling tool:{Colors.ENDC} {function_name} with args: {args}")
        result = func(**args)
        print(f"{Colors.OKCYAN}{Colors.BOLD}Result of {function_name}:{Colors.ENDC} {result}")
        return result
    except Exception as e:
        print(f"{Colors.FAIL}{Colors.BOLD}Error:{Colors.ENDC} Error executing '{function_name}': {e}")
        return f"Error executing '{function_name}': {e}"

def task_completed():
    return "Task marked as completed."
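
# Each register_tool() call below appends an OpenAI-style function schema to `tools`,
# which is passed to litellm's completion() on every iteration. For example, the
# install_package registration produces an entry equivalent to:
#
#     {
#         "type": "function",
#         "function": {
#             "name": "install_package",
#             "description": "Installs a Python package using pip.",
#             "parameters": {
#                 "type": "object",
#                 "properties": {
#                     "package_name": {"type": "string", "description": "The name of the package to install."}
#                 },
#                 "required": ["package_name"]
#             }
#         }
#     }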

# Initialize basic tools
register_tool("create_or_update_tool", create_or_update_tool, "Creates or updates a tool with the specified name, code, description, and parameters.", {
    "name": {"type": "string", "description": "The tool name."},
    "code": {"type": "string", "description": "The Python code for the tool."},
    "description": {"type": "string", "description": "A description of the tool."},
    "parameters": {
        "type": "object",
        "description": "A dictionary defining the parameters for the tool.",
        "additionalProperties": {
            "type": "object",
            "properties": {
                "type": {"type": "string", "description": "Data type of the parameter."},
                "description": {"type": "string", "description": "Description of the parameter."}
            },
            "required": ["type", "description"]
        }
    }
})

register_tool("install_package", install_package, "Installs a Python package using pip.", {
    "package_name": {"type": "string", "description": "The name of the package to install."}
})

register_tool("task_completed", task_completed, "Marks the current task as completed.", {})

# Main loop to handle user input and LLM interaction
def run_main_loop(user_input):
    # Include available API keys in the system prompt
    if available_api_keys:
        api_keys_info = "Available API keys:\n" + "\n".join(f"- {key}" for key in available_api_keys) + "\n\n"
    else:
        api_keys_info = "No API keys are available.\n\n"

    messages = [{
        "role": "system",
        "content": (
            "You are an AI assistant designed to iteratively build and execute Python functions using tools provided to you. "
            "Your task is to complete the requested task by creating and using tools in a loop until the task is fully done. "
            "Do not ask for user input until you find it absolutely necessary. If you need required information that is likely available online, create the required tools to find this information. "
            "You have the following tools available to start with:\n\n"
            "1. **create_or_update_tool**: This tool allows you to create new functions or update existing ones. "
            "You must provide the function name, code, description, and parameters. "
            "**All four arguments are required**. The 'parameters' argument should be a dictionary defining the parameters the function accepts, following JSON schema format.\n"
            "Example of 'parameters': {\n"
            '  "param1": {"type": "string", "description": "Description of param1."},\n'
            '  "param2": {"type": "integer", "description": "Description of param2."}\n'
            "}\n"
            "2. **install_package**: Installs a Python package using pip. Provide the 'package_name' as the parameter.\n"
            "3. **task_completed**: This tool should be used to signal when you believe the requested task is fully completed.\n\n"
            f"Here are API keys you have access to: {api_keys_info}"
            "If you do not know how to use an API, look up the documentation and find examples.\n\n"
            "Your workflow should include:\n"
            "- Creating or updating tools with all required arguments.\n"
            "- Using 'install_package' when a required library is missing.\n"
            "- Using created tools to progress towards completing the task.\n"
            "- When creating or updating tools, provide the complete code as it will be used without any edits.\n"
            "- Handling any errors by adjusting your tools or arguments as necessary.\n"
            "- **Being token-efficient**: avoid returning excessively long outputs. If a tool returns a large amount of data, consider summarizing it or returning only relevant parts.\n"
            "- Prioritize using tools that you have access to via the available API keys.\n"
            "- Signaling task completion with 'task_completed()' when done.\n"
            "\nPlease ensure that all function calls include all required parameters, and be mindful of token limits when handling tool outputs."
        )
    }, {"role": "user", "content": user_input}]
    iteration, max_iterations = 0, 50
    while iteration < max_iterations:
        print(f"{Colors.HEADER}{Colors.BOLD}Iteration {iteration + 1} running...{Colors.ENDC}")
        try:
            response = completion(model=MODEL_NAME, messages=messages, tools=tools, tool_choice="auto")
            response_message = response.choices[0].message
            if response_message.content:
                print(f"{Colors.OKCYAN}{Colors.BOLD}LLM Response:{Colors.ENDC}\n{response_message.content}\n")
            messages.append(response_message)
            if response_message.tool_calls:
                for tool_call in response_message.tool_calls:
                    function_name = tool_call.function.name
                    args = json.loads(tool_call.function.arguments)
                    tool_result = call_tool(function_name, args)
                    serialized_tool_result = serialize_tool_result(tool_result)
                    messages.append({
                        "role": "tool",
                        "name": function_name,
                        "tool_call_id": tool_call.id,
                        "content": serialized_tool_result
                    })
                if 'task_completed' in [tc.function.name for tc in response_message.tool_calls]:
                    print(f"{Colors.OKGREEN}{Colors.BOLD}Task completed.{Colors.ENDC}")
                    break
        except Exception as e:
            print(f"{Colors.FAIL}{Colors.BOLD}Error:{Colors.ENDC} Error in main loop: {e}")
            traceback.print_exc()
        iteration += 1
        sleep(2)
    print(f"{Colors.WARNING}{Colors.BOLD}Max iterations reached or task completed.{Colors.ENDC}")

if __name__ == "__main__":
    run_main_loop(input(f"{Colors.BOLD}Describe the task you want to complete: {Colors.ENDC}"))

--------------------------------------------------------------------------------