├── .github └── workflows │ └── publish.yml ├── .gitignore ├── LICENSE ├── MANIFEST.in ├── README.md ├── agentserve ├── __init__.py ├── agent_registry.py ├── agent_server.py ├── cli.py ├── config.py ├── logging_config.py └── queues │ ├── calery_task_queue.py │ ├── local_task_queue.py │ ├── redis_task_queue.py │ └── task_queue.py ├── async_example.py ├── example.py ├── examples ├── langchain │ └── agent.py └── openai │ └── agent.py └── setup.py /.github/workflows/publish.yml: -------------------------------------------------------------------------------- 1 | name: Upload Python Package to PYPI 2 | 3 | on: 4 | push: 5 | tags: 6 | - 'v*.*.*' 7 | 8 | permissions: 9 | contents: read 10 | 11 | jobs: 12 | release: 13 | 14 | runs-on: ubuntu-latest 15 | 16 | steps: 17 | - uses: actions/checkout@v3 18 | with: 19 | fetch-depth: 0 20 | 21 | - name: Set up Python 22 | uses: actions/setup-python@v4 23 | with: 24 | python-version: '3.x' 25 | 26 | - name: Install dependencies 27 | run: | 28 | pip install --upgrade pip 29 | pip install setuptools setuptools_scm wheel twine 30 | 31 | - name: Build 32 | run: python setup.py sdist bdist_wheel 33 | 34 | - name: Publish to PyPI 35 | env: 36 | TWINE_USERNAME: __token__ 37 | TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }} 38 | run: twine upload dist/* -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | build/ 3 | dist/ 4 | agentserve.egg-info/ 5 | __pycache__/ 6 | *__pycache__/ 7 | *.py[cod] -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 PropsAI 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | recursive-include agentserve/templates * -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ``` 2 | ___ __ ____ 3 | / _ |___ ____ ___ / /_/ __/__ _____ _____ 4 | / __ / _ `/ -_) _ \/ __/\ \/ -_) __/ |/ / -_) 5 | /_/ |_\_, /\__/_//_/\__/___/\__/_/ |___/\__/ 6 | /___/ 7 | ``` 8 | 9 | # AgentServe 10 | 11 | [![Discord](https://img.shields.io/badge/Discord-Join_Discord-blue?style=flat&logo=Discord)](https://discord.gg/JkPrCnExSf) 12 | [![GitHub](https://img.shields.io/badge/GitHub-View_on_GitHub-blue?style=flat&logo=GitHub)](https://github.com/PropsAI/agentserve) 13 | ![License](https://img.shields.io/badge/License-MIT-blue.svg) 14 | ![PyPI Version](https://img.shields.io/pypi/v/agentserve.svg) 15 | ![GitHub Stars](https://img.shields.io/github/stars/PropsAI/agentserve?style=social) 16 | ![Beta](https://img.shields.io/badge/Status-Beta-yellow) 17 | 18 | AgentServe is a lightweight framework for hosting and scaling AI agents. It is designed to be easy to use and to integrate with existing projects and agent/LLM frameworks. It wraps your agent in a REST API and supports optional task queuing for scalability. 19 | 20 | Join the [Discord](https://discord.gg/JkPrCnExSf) for support and discussion. 21 | 22 | ## Goals and Objectives 23 | 24 | The goal of AgentServe is to provide the easiest way to take a local agent to production and standardize the communication layer between multiple agents, humans, and other systems. 25 | 26 | ## Features 27 | 28 | - **Standardized:** AgentServe provides a standardized way to communicate with AI agents via a REST API. 29 | - **Framework Agnostic:** AgentServe supports multiple agent frameworks (OpenAI, LangChain, LlamaIndex, and Blank). 30 | - **Task Queuing:** AgentServe supports optional task queuing for scalability. Choose between local, Redis, or Celery task queues based on your needs. 31 | - **Configurable:** AgentServe is designed to be configurable via an `agentserve.yaml` file and overridable with environment variables. 32 | - **Easy to Use:** AgentServe aims to be easy to use and integrate with existing projects and make deployment as simple as possible. 33 | 34 | ## Requirements 35 | 36 | - Python 3.9+ 37 | 38 | ## Installation 39 | 40 | To install AgentServe, you can use pip: 41 | 42 | ```bash 43 | pip install -U agentserve 44 | ``` 45 | 46 | ## Getting Started 47 | 48 | AgentServe allows you to easily wrap your agent code in a FastAPI application and expose it via REST endpoints. Below are the steps to integrate AgentServe into your project. 49 | 50 | ### 1. Install AgentServe 51 | 52 | First, install the `agentserve` package using pip: 53 | 54 | ```bash 55 | pip install -U agentserve 56 | ``` 57 | 58 | Make sure your virtual environment is activated if you're using one. 59 | 60 | ### 2. Create or Update Your Agent 61 | 62 | Within your entry point file (e.g. `main.py`), we will import `agentserve` and create an app instance, then decorate an agent function with `@app.agent`. Finally, we will call `app.run()` to start the server. 63 | 64 | The agent function should take a single argument, `task_data`, which will be a dictionary of the data required by your agent.
65 | 66 | **Example:** 67 | 68 | ```python 69 | # main.py 70 | import agentserve 71 | from openai import OpenAI 72 | 73 | app = agentserve.app() 74 | 75 | @app.agent 76 | def my_agent(task_data): 77 | # Your agent logic goes here 78 | client = OpenAI() 79 | response = client.chat.completions.create( 80 | model="gpt-4o-mini", 81 | messages=[{"role": "user", "content": task_data["prompt"]}] 82 | ) 83 | return response.choices[0].message.content 84 | 85 | if __name__ == "__main__": 86 | app.run() 87 | ``` 88 | 89 | In this example: 90 | 91 | - We import `agentserve` and create an app instance using `agentserve.app()`. 92 | - We define our agent function `my_agent` and decorate it with `@app.agent`. 93 | - Within the agent function, we implement our agent's logic. 94 | - We call `app.run()` to start the server. 95 | 96 | ### 3. Run the Agent Server 97 | 98 | To run the agent server, use the following command: 99 | 100 | ```bash 101 | python main.py 102 | ``` 103 | 104 | ### 4. Configure Task Queue (Optional) 105 | 106 | By default, AgentServe uses a local task queue, which is suitable for development and testing. If you need more robust queue management for production, you can configure AgentServe to use Redis or Celery. 107 | 108 | **Using a Configuration File** 109 | Create a file named `agentserve.yaml` in your project directory: 110 | 111 | ```yaml 112 | # agentserve.yaml 113 | 114 | task_queue: celery # Options: 'local', 'redis', 'celery' 115 | 116 | celery: 117 | broker_url: pyamqp://guest@localhost// 118 | ``` 119 | 120 | **Using Environment Variables** 121 | 122 | Alternatively, you can set configuration options using environment variables: 123 | 124 | ```bash 125 | export AGENTSERVE_TASK_QUEUE=celery 126 | export AGENTSERVE_CELERY_BROKER_URL=pyamqp://guest@localhost// 127 | ``` 128 | 129 | ### 5. Start the Worker (if using Celery or Redis) 130 | 131 | To start the worker, use the following command: 132 | 133 | ```bash 134 | agentserve startworker 135 | ``` 136 | 137 | ### 6. Test the Agent 138 | 139 | With the server and worker (if needed) running, you can test your agent using the available endpoints. 140 | 141 | **Process a task synchronously** 142 | 143 | `POST /task/sync` 144 | 145 | ```bash 146 | curl -X POST http://localhost:8000/task/sync \ 147 | -H "Content-Type: application/json" \ 148 | -d '{"prompt": "Test input"}' 149 | ``` 150 | 151 | **Process a task asynchronously** 152 | 153 | `POST /task/async` 154 | 155 | ```bash 156 | curl -X POST http://localhost:8000/task/async \ 157 | -H "Content-Type: application/json" \ 158 | -d '{"prompt": "Test input"}' 159 | ``` 160 | 161 | **Get the status of a task** 162 | 163 | `GET /task/status/:task_id` 164 | 165 | ```bash 166 | curl http://localhost:8000/task/status/1234567890 167 | ``` 168 | 169 | **Get the result of a task** 170 | 171 | `GET /task/result/:task_id` 172 | 173 | ```bash 174 | curl http://localhost:8000/task/result/1234567890 175 | ``` 176 | 177 | ## Configuration Options 178 | 179 | AgentServe allows you to configure various aspects of the application using a configuration file or environment variables. 180 | 181 | #### Using agentserve.yaml 182 | 183 | Place an `agentserve.yaml` file in your project directory with the desired configurations.
184 | 185 | **Example:** 186 | 187 | ```yaml 188 | # agentserve.yaml 189 | 190 | task_queue: celery # Options: 'local', 'redis', 'celery' 191 | 192 | celery: 193 | broker_url: pyamqp://guest@localhost// 194 | 195 | redis: 196 | host: localhost 197 | port: 6379 198 | 199 | server: 200 | host: 0.0.0.0 201 | port: 8000 202 | 203 | queue: # if using local task queue 204 | max_workers: 10 # default 205 | ``` 206 | 207 | 208 | #### Using Environment Variables 209 | 210 | You can override configurations using environment variables without modifying the configuration file. 211 | 212 | The following variables are supported: 213 | 214 | - `AGENTSERVE_TASK_QUEUE` 215 | - `AGENTSERVE_CELERY_BROKER_URL` 216 | - `AGENTSERVE_REDIS_HOST` 217 | - `AGENTSERVE_REDIS_PORT` 218 | - `AGENTSERVE_SERVER_HOST` 219 | - `AGENTSERVE_SERVER_PORT` 220 | - `AGENTSERVE_QUEUE_MAX_WORKERS` 221 | 222 | **Example:** 223 | 224 | ```bash 225 | export AGENTSERVE_TASK_QUEUE=redis 226 | export AGENTSERVE_REDIS_HOST=redis-server-host 227 | export AGENTSERVE_REDIS_PORT=6379 228 | ``` 229 | 230 | ### FastAPI Configuration 231 | 232 | You can specify FastAPI settings, including CORS configuration, using the `fastapi` key in your `agentserve.yaml` configuration file. 233 | 234 | **Example:** 235 | 236 | ```yaml 237 | # agentserve.yaml 238 | 239 | fastapi: 240 | cors: 241 | allow_origins: 242 | - "http://localhost:3000" 243 | - "https://yourdomain.com" 244 | allow_credentials: true 245 | allow_methods: 246 | - "*" 247 | allow_headers: 248 | - "*" 249 | ``` 250 | #### Using Environment Variables 251 | 252 | Alternatively, you can set the desired configuration options using environment variables. 253 | 254 | **Example:** 255 | 256 | ```bash 257 | export AGENTSERVE_CORS_ORIGINS="http://localhost:3000,https://yourdomain.com" 258 | export AGENTSERVE_CORS_ALLOW_CREDENTIALS="true" 259 | export AGENTSERVE_CORS_ALLOW_METHODS="GET,POST" 260 | export AGENTSERVE_CORS_ALLOW_HEADERS="Content-Type,Authorization" 261 | ``` 262 | 263 | ## Advanced Usage 264 | 265 | ### Integrating with Existing Projects 266 | 267 | You can integrate AgentServe into your existing projects by importing `agentserve` and defining your agent function. 268 | 269 | **Example:** 270 | 271 | ```python 272 | # main.py 273 | import agentserve 274 | 275 | app = agentserve.app() 276 | 277 | @app.agent 278 | def my_custom_agent(task_data): 279 | # Your custom agent logic (e.g. using LangChain, LlamaIndex, etc.) 280 | result = perform_complex_computation(task_data) 281 | return {"result": result} 282 | 283 | if __name__ == "__main__": 284 | app.run() 285 | ``` 286 | 287 | ### Input Validation 288 | 289 | AgentServe can validate the input to your agent function using Pydantic. Pass a Pydantic model as the `input_schema` argument of the `@app.agent` decorator, and your function will receive the validated model instance. 290 | 291 | **Example:** 292 | 293 | ```python 294 | # main.py 295 | import agentserve 296 | from pydantic import BaseModel 297 | app = agentserve.app() 298 | class MyInputSchema(BaseModel): 299 | prompt: str 300 | 301 | @app.agent(input_schema=MyInputSchema) 302 | def my_custom_agent(task_data): 303 | # Your custom agent logic 304 | return {"result": "Hello, world!"} 305 | 306 | if __name__ == "__main__": 307 | app.run() 308 | ``` 309 | 310 | ## Hosting 311 | 312 | INSTRUCTIONS COMING SOON 313 | 314 | ## ROADMAP 315 | 316 | - [ ] Add support for streaming responses 317 | - [ ] Add easy instructions for more hosting options (GCP, Azure, AWS, etc.)
318 | - [ ] Add support for external storage for task results 319 | - [ ] Add support for multi model agents 320 | - [ ] Add support for more agent frameworks 321 | 322 | ## License 323 | 324 | This project is licensed under the MIT License. 325 | 326 | ## Contact 327 | 328 | Join the [Discord](https://discord.gg/JkPrCnExSf) for support and discussion. 329 | 330 | For any questions or issues, please contact Peter at peter@getprops.ai. 331 | -------------------------------------------------------------------------------- /agentserve/__init__.py: -------------------------------------------------------------------------------- 1 | # agentserve/__init__.py 2 | from .agent_server import AgentServer as app 3 | from .logging_config import setup_logger 4 | 5 | logger = setup_logger() -------------------------------------------------------------------------------- /agentserve/agent_registry.py: -------------------------------------------------------------------------------- 1 | # agentserve/agent_registry.py 2 | from typing import Callable, Optional, Type 3 | from pydantic import BaseModel 4 | from .logging_config import setup_logger 5 | import asyncio 6 | 7 | class AgentRegistry: 8 | def __init__(self): 9 | self.agent_function = None 10 | self.input_schema: Optional[Type[BaseModel]] = None 11 | self.logger = setup_logger("agentserve.agent_registry") 12 | 13 | def register_agent(self, func: Optional[Callable] = None, *, input_schema: Optional[Type[BaseModel]] = None): 14 | if func is None: 15 | def wrapper(func: Callable): 16 | return self.register_agent(func, input_schema=input_schema) 17 | return wrapper 18 | 19 | self.input_schema = input_schema 20 | is_async = asyncio.iscoroutinefunction(func) 21 | self.logger.info(f"Registering {'async' if is_async else 'sync'} function") 22 | 23 | async def async_validated_func(task_data): 24 | if self.input_schema is not None: 25 | validated_data = self.input_schema(**task_data) 26 | return await func(validated_data) 27 | return await func(task_data) 28 | 29 | def sync_validated_func(task_data): 30 | if self.input_schema is not None: 31 | validated_data = self.input_schema(**task_data) 32 | return func(validated_data) 33 | return func(task_data) 34 | 35 | if is_async: 36 | self.agent_function = async_validated_func 37 | setattr(self.agent_function, '_is_async', True) 38 | else: 39 | self.agent_function = sync_validated_func 40 | setattr(self.agent_function, '_is_async', False) 41 | 42 | return self.agent_function 43 | 44 | def get_agent(self): 45 | if self.agent_function is None: 46 | raise ValueError("No agent has been registered.") 47 | return self.agent_function -------------------------------------------------------------------------------- /agentserve/agent_server.py: -------------------------------------------------------------------------------- 1 | # agentserve/agent_server.py 2 | 3 | from fastapi import FastAPI, HTTPException 4 | from fastapi.middleware.cors import CORSMiddleware 5 | from pydantic import ValidationError 6 | from .queues.task_queue import TaskQueue 7 | from .agent_registry import AgentRegistry 8 | from typing import Dict, Any, Optional 9 | from .config import Config 10 | from .logging_config import setup_logger 11 | import uuid 12 | 13 | class AgentServer: 14 | def __init__(self, config: Optional[Config] = None): 15 | self.logger = setup_logger("agentserve.server") 16 | self.config = config or Config() 17 | self.app = FastAPI() 18 | 19 | # Add CORS middleware with custom origins 20 | cors_config = self.config.get_nested('fastapi', 
'cors', default={}) 21 | self.app.add_middleware( 22 | CORSMiddleware, 23 | allow_origins=cors_config.get('allow_origins', ["*"]), 24 | allow_credentials=cors_config.get('allow_credentials', True), 25 | allow_methods=cors_config.get('allow_methods', ["*"]), 26 | allow_headers=cors_config.get('allow_headers', ["*"]), 27 | ) 28 | 29 | self.agent_registry = AgentRegistry() 30 | self.task_queue = self._initialize_task_queue() 31 | self.agent = self.agent_registry.register_agent 32 | self._setup_routes() 33 | self.logger.info("AgentServer initialized") 34 | 35 | def _initialize_task_queue(self): 36 | task_queue_type = self.config.get('task_queue', 'local').lower() 37 | self.logger.info(f"Initializing {task_queue_type} task queue") 38 | 39 | try: 40 | if task_queue_type == 'celery': 41 | from .queues.celery_task_queue import CeleryTaskQueue 42 | return CeleryTaskQueue(self.config) 43 | elif task_queue_type == 'redis': 44 | from .queues.redis_task_queue import RedisTaskQueue 45 | return RedisTaskQueue(self.config) 46 | else: 47 | from .queues.local_task_queue import LocalTaskQueue 48 | return LocalTaskQueue(self.config) 49 | except Exception as e: 50 | self.logger.error(f"Failed to initialize task queue: {str(e)}") 51 | raise 52 | 53 | def _setup_routes(self): 54 | @self.app.post("/task/sync") 55 | async def sync_task(task_data: Dict[str, Any]): 56 | self.logger.debug(f"sync_task called with data: {task_data}") 57 | try: 58 | agent_function = self.agent_registry.get_agent() 59 | if getattr(agent_function, '_is_async', False): 60 | self.logger.info("Function is async, running in event loop") 61 | result = await agent_function(task_data) 62 | else: 63 | self.logger.info("Function is sync, running directly") 64 | result = agent_function(task_data) 65 | return {"result": result} 66 | except ValidationError as ve: 67 | if hasattr(ve, 'errors'): 68 | raise HTTPException( 69 | status_code=400, 70 | detail={ 71 | "message": "Validation error", 72 | "errors": ve.errors() 73 | } 74 | ) 75 | raise HTTPException(status_code=400, detail=str(ve)) 76 | except Exception as e: 77 | raise HTTPException(status_code=500, detail=str(e)) 78 | 79 | @self.app.post("/task/async") 80 | async def async_task(task_data: Dict[str, Any]): 81 | task_id = str(uuid.uuid4()) 82 | agent_function = self.agent_registry.get_agent() 83 | self.task_queue.enqueue(agent_function, task_data, task_id) 84 | return {"task_id": task_id} 85 | 86 | @self.app.get("/task/status/{task_id}") 87 | async def get_status(task_id: str): 88 | status = self.task_queue.get_status(task_id) 89 | if status == 'not_found': 90 | raise HTTPException(status_code=404, detail="Task not found") 91 | return {"status": status} 92 | 93 | @self.app.get("/task/result/{task_id}") 94 | async def get_result(task_id: str): 95 | try: 96 | result = self.task_queue.get_result(task_id) 97 | if result is not None: 98 | return {"result": result} 99 | else: 100 | status = self.task_queue.get_status(task_id) 101 | return {"status": status} 102 | except Exception as e: 103 | raise HTTPException(status_code=500, detail=str(e)) 104 | 105 | def run(self, host=None, port=None): 106 | import uvicorn 107 | # Fall back to the configured server host/port when not passed explicitly 108 | server_config = self.config.get('server', {}) 109 | host = host or server_config.get('host', '0.0.0.0') 110 | port = port or server_config.get('port', 8000) 111 | uvicorn.run(self.app, host=host, port=port) -------------------------------------------------------------------------------- /agentserve/cli.py: -------------------------------------------------------------------------------- 1 | # agentserve/cli.py 2 | 3 | import click 4 | from .config import Config 5 | 6 | @click.group() 7 | def main(): 8 | """CLI tool for managing AI agents.""" 9
| click.echo(click.style("\nWelcome to AgentServe CLI\n\n", fg='green', bold=True)) 10 | click.echo("Go to https://github.com/PropsAI/agentserve for more information.\n\n\n") 11 | 12 | @main.command() 13 | def startworker(): 14 | """Starts the AgentServe worker (if required).""" 15 | config = Config() 16 | task_queue_type = config.get('task_queue', 'local').lower() 17 | 18 | if task_queue_type == 'celery': 19 | from .queues.celery_task_queue import CeleryTaskQueue 20 | task_queue = CeleryTaskQueue(config) 21 | # Start the Celery worker 22 | argv = [ 23 | 'worker', 24 | '--loglevel=info', 25 | '--pool=solo', # Use 'solo' pool to avoid issues on some platforms 26 | ] 27 | task_queue.celery_app.worker_main(argv) 28 | elif task_queue_type == 'redis': 29 | # For Redis (RQ), start a worker process 30 | from rq import Worker, Connection 31 | from .queues.redis_task_queue import RedisTaskQueue 32 | task_queue = RedisTaskQueue(config) 33 | with Connection(task_queue.redis_conn): 34 | worker = Worker([task_queue.task_queue]) 35 | worker.work(logging_level='INFO') 36 | else: 37 | click.echo("No worker required for the 'local' task queue.") 38 | -------------------------------------------------------------------------------- /agentserve/config.py: -------------------------------------------------------------------------------- 1 | # agentserve/config.py 2 | 3 | import os 4 | import yaml 5 | 6 | class Config: 7 | def __init__(self): 8 | self.config = self._load_config() 9 | 10 | def _load_config(self): 11 | # Load from 'agentserve.yaml' if exists 12 | config_path = 'agentserve.yaml' 13 | config = {} 14 | if os.path.exists(config_path): 15 | with open(config_path, 'r') as file: 16 | config = yaml.safe_load(file) or {} 17 | 18 | # Override with environment variables 19 | config['task_queue'] = os.getenv('AGENTSERVE_TASK_QUEUE', config.get('task_queue', 'local')) 20 | 21 | # FastAPI configuration 22 | fastapi_config = config.setdefault('fastapi', {}) 23 | # CORS configuration within FastAPI config 24 | cors_config = fastapi_config.setdefault('cors', {}) 25 | cors_origins_env = os.getenv('AGENTSERVE_CORS_ORIGINS') 26 | if cors_origins_env: 27 | cors_config['allow_origins'] = [origin.strip() for origin in cors_origins_env.split(',')] 28 | else: 29 | cors_config.setdefault('allow_origins', ["*"]) 30 | 31 | cors_config['allow_credentials'] = self._get_bool_env( 32 | 'AGENTSERVE_CORS_ALLOW_CREDENTIALS', 33 | cors_config.get('allow_credentials', True) 34 | ) 35 | cors_methods_env = os.getenv('AGENTSERVE_CORS_ALLOW_METHODS') 36 | if cors_methods_env: 37 | cors_config['allow_methods'] = [method.strip() for method in cors_methods_env.split(',')] 38 | else: 39 | cors_config.setdefault('allow_methods', ["*"]) 40 | 41 | cors_headers_env = os.getenv('AGENTSERVE_CORS_ALLOW_HEADERS') 42 | if cors_headers_env: 43 | cors_config['allow_headers'] = [header.strip() for header in cors_headers_env.split(',')] 44 | else: 45 | cors_config.setdefault('allow_headers', ["*"]) 46 | 47 | # Celery configuration 48 | celery_broker_url = os.getenv('AGENTSERVE_CELERY_BROKER_URL') 49 | if celery_broker_url: 50 | config.setdefault('celery', {})['broker_url'] = celery_broker_url 51 | 52 | # Redis configuration 53 | redis_host = os.getenv('AGENTSERVE_REDIS_HOST') 54 | redis_port = os.getenv('AGENTSERVE_REDIS_PORT') 55 | if redis_host or redis_port: 56 | redis_config = config.setdefault('redis', {}) 57 | if redis_host: 58 | redis_config['host'] = redis_host 59 | if redis_port: 60 | redis_config['port'] = int(redis_port) 61 | 62 | # Server
configuration 63 | server_host = os.getenv('AGENTSERVE_SERVER_HOST') 64 | server_port = os.getenv('AGENTSERVE_SERVER_PORT') 65 | if server_host or server_port: 66 | server_config = config.setdefault('server', {}) 67 | if server_host: 68 | server_config['host'] = server_host 69 | if server_port: 70 | server_config['port'] = int(server_port) 71 | 72 | queue_config = config.setdefault('queue', {}) 73 | queue_config['max_workers'] = int(os.getenv('AGENTSERVE_QUEUE_MAX_WORKERS', queue_config.get('max_workers', 10))) 74 | 75 | return config 76 | 77 | def get(self, key, default=None): 78 | return self.config.get(key, default) 79 | 80 | def get_nested(self, *keys, default=None): 81 | value = self.config 82 | for key in keys: 83 | if not isinstance(value, dict): 84 | return default 85 | value = value.get(key) 86 | if value is None: 87 | return default 88 | return value 89 | 90 | def _get_bool_env(self, env_var, default): 91 | val = os.getenv(env_var) 92 | if val is not None: 93 | return val.lower() in ('true', '1', 'yes') 94 | return default -------------------------------------------------------------------------------- /agentserve/logging_config.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import sys 3 | from typing import Optional 4 | 5 | _loggers = {} 6 | 7 | def setup_logger(name: str = "agentserve", level: Optional[str] = None) -> logging.Logger: 8 | if name in _loggers: 9 | return _loggers[name] 10 | 11 | logger = logging.getLogger(name) 12 | 13 | if not logger.handlers: 14 | handler = logging.StreamHandler(sys.stdout) 15 | formatter = logging.Formatter( 16 | '%(asctime)s - %(name)s - %(levelname)s - %(message)s' 17 | ) 18 | handler.setFormatter(formatter) 19 | logger.addHandler(handler) 20 | logger.propagate = False # Prevent duplicate logging 21 | 22 | log_level = getattr(logging, (level or "DEBUG").upper()) 23 | logger.setLevel(log_level) 24 | 25 | _loggers[name] = logger 26 | return logger -------------------------------------------------------------------------------- /agentserve/queues/calery_task_queue.py: -------------------------------------------------------------------------------- 1 | # agentserve/celery_task_queue.py 2 | 3 | import asyncio 4 | from typing import Any, Dict 5 | from .task_queue import TaskQueue 6 | from ..config import Config 7 | from ..logging_config import setup_logger 8 | 9 | class CeleryTaskQueue(TaskQueue): 10 | def __init__(self, config: Config): 11 | try: 12 | from celery import Celery 13 | except ImportError: 14 | raise ImportError("CeleryTaskQueue requires the 'celery' package. 
Please install it.") 15 | 16 | self.logger = setup_logger("agentserve.queue.celery") 17 | broker_url = config.get('celery', {}).get('broker_url', 'pyamqp://guest@localhost//') 18 | self.celery_app = Celery('agent_server', broker=broker_url) 19 | self.loop = asyncio.new_event_loop() 20 | self._register_tasks() 21 | self.logger.info("CeleryTaskQueue initialized") 22 | 23 | def _register_tasks(self): 24 | @self.celery_app.task(name='agent_task') 25 | def agent_task(task_data, is_async=False): 26 | from ..agent_registry import AgentRegistry 27 | agent_registry = AgentRegistry() 28 | agent_function = agent_registry.get_agent() 29 | 30 | if is_async: 31 | asyncio.set_event_loop(self.loop) 32 | return self.loop.run_until_complete(agent_function(task_data)) 33 | return agent_function(task_data) 34 | 35 | def enqueue(self, agent_function, task_data: Dict[str, Any], task_id: str): 36 | self.logger.debug(f"Enqueueing task {task_id}") 37 | is_async = getattr(agent_function, '_is_async', False) 38 | self.celery_app.send_task('agent_task', 39 | args=[task_data], 40 | kwargs={'is_async': is_async}, 41 | task_id=task_id) 42 | 43 | def get_status(self, task_id: str) -> str: 44 | result = self.celery_app.AsyncResult(task_id) 45 | return result.status 46 | 47 | def get_result(self, task_id: str) -> Any: 48 | result = self.celery_app.AsyncResult(task_id) 49 | if result.state == 'SUCCESS': 50 | return result.result 51 | if result.state == 'FAILURE': 52 | raise Exception(str(result.result)) 53 | return None -------------------------------------------------------------------------------- /agentserve/queues/local_task_queue.py: -------------------------------------------------------------------------------- 1 | # agentserve/local_task_queue.py 2 | 3 | import asyncio 4 | from typing import Any, Dict, Optional 5 | from .task_queue import TaskQueue 6 | import threading 7 | from ..logging_config import setup_logger 8 | import concurrent.futures 9 | 10 | class LocalTaskQueue(TaskQueue): 11 | def __init__(self, config: Optional[Dict[str, Any]] = None): 12 | self.logger = setup_logger("agentserve.queue.local") 13 | self.results = {} 14 | self.statuses = {} 15 | max_workers = 10 # default 16 | if config: 17 | max_workers = config.get('queue', {}).get('max_workers', 10) 18 | self.thread_pool = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) 19 | self.lock = threading.Lock() 20 | self.logger.info("LocalTaskQueue initialized") 21 | 22 | def enqueue(self, agent_function, task_data: Dict[str, Any], task_id: str): 23 | self.logger.debug(f"Enqueueing task {task_id}") 24 | with self.lock: 25 | self.statuses[task_id] = 'queued' 26 | self.thread_pool.submit(self._run_task, agent_function, task_data, task_id) 27 | 28 | def _run_task(self, agent_function, task_data: Dict[str, Any], task_id: str): 29 | self.logger.debug(f"Starting task {task_id}") 30 | with self.lock: 31 | self.statuses[task_id] = 'in_progress' 32 | 33 | try: 34 | if getattr(agent_function, '_is_async', False): 35 | loop = asyncio.new_event_loop() 36 | asyncio.set_event_loop(loop) 37 | try: 38 | result = loop.run_until_complete(agent_function(task_data)) 39 | finally: 40 | loop.close() 41 | else: 42 | result = agent_function(task_data) 43 | 44 | with self.lock: 45 | self.results[task_id] = result 46 | self.statuses[task_id] = 'completed' 47 | self.logger.info(f"Task {task_id} completed successfully") 48 | 49 | except Exception as e: 50 | self.logger.error(f"Task {task_id} failed: {str(e)}") 51 | with self.lock: 52 | self.results[task_id] = e 53 | 
self.statuses[task_id] = 'failed' 54 | 55 | def get_status(self, task_id: str) -> str: 56 | with self.lock: 57 | return self.statuses.get(task_id, 'not_found') 58 | 59 | def get_result(self, task_id: str) -> Any: 60 | with self.lock: 61 | if task_id not in self.results: 62 | return None 63 | result = self.results[task_id] 64 | if isinstance(result, Exception): 65 | raise result 66 | return result -------------------------------------------------------------------------------- /agentserve/queues/redis_task_queue.py: -------------------------------------------------------------------------------- 1 | # agentserve/redis_task_queue.py 2 | 3 | import asyncio 4 | from typing import Any, Dict 5 | from .task_queue import TaskQueue 6 | from ..logging_config import setup_logger 7 | from ..config import Config 8 | class RedisTaskQueue(TaskQueue): 9 | def __init__(self, config: Config): 10 | try: 11 | from redis import Redis 12 | from rq import Queue 13 | except ImportError: 14 | raise ImportError("RedisTaskQueue requires 'redis' and 'rq' packages. Please install them.") 15 | 16 | self.logger = setup_logger("agentserve.queue.redis") 17 | redis_config = config.get('redis', {}) 18 | redis_host = redis_config.get('host', 'localhost') 19 | redis_port = redis_config.get('port', 6379) 20 | self.redis_conn = Redis(host=redis_host, port=redis_port) 21 | self.task_queue = Queue(connection=self.redis_conn) 22 | self.loop = asyncio.new_event_loop() 23 | self.logger.info("RedisTaskQueue initialized") 24 | 25 | def enqueue(self, agent_function, task_data: Dict[str, Any], task_id: str): 26 | self.logger.debug(f"Enqueueing task {task_id}") 27 | if getattr(agent_function, '_is_async', False): 28 | wrapped_func = self._wrap_async_function(agent_function) 29 | self.task_queue.enqueue_call(func=wrapped_func, args=(task_data,), job_id=task_id) 30 | else: 31 | self.task_queue.enqueue_call(func=agent_function, args=(task_data,), job_id=task_id) 32 | 33 | def _wrap_async_function(self, func): 34 | def wrapper(task_data): 35 | asyncio.set_event_loop(self.loop) 36 | return self.loop.run_until_complete(func(task_data)) 37 | return wrapper 38 | 39 | def get_status(self, task_id: str) -> str: 40 | job = self.task_queue.fetch_job(task_id) 41 | return job.get_status() if job else 'not_found' 42 | 43 | def get_result(self, task_id: str) -> Any: 44 | job = self.task_queue.fetch_job(task_id) 45 | if not job: 46 | return None 47 | if job.is_finished: 48 | return job.result 49 | if job.is_failed: 50 | raise Exception(job.exc_info) 51 | return None -------------------------------------------------------------------------------- /agentserve/queues/task_queue.py: -------------------------------------------------------------------------------- 1 | # agentserve/task_queue.py 2 | 3 | from abc import ABC, abstractmethod 4 | from typing import Any, Dict 5 | 6 | class TaskQueue(ABC): 7 | @abstractmethod 8 | def enqueue(self, agent_function, task_data: Dict[str, Any], task_id: str): 9 | pass 10 | 11 | @abstractmethod 12 | def get_status(self, task_id: str) -> str: 13 | pass 14 | 15 | @abstractmethod 16 | def get_result(self, task_id: str) -> Any: 17 | pass -------------------------------------------------------------------------------- /async_example.py: -------------------------------------------------------------------------------- 1 | import agentserve 2 | from pydantic import BaseModel 3 | import asyncio 4 | 5 | # Configure logging level 6 | agentserve.setup_logger(level="DEBUG") # or "INFO", "WARNING", "ERROR" 7 | 8 | app = agentserve.app() 9 | 10 | class
MyInputSchema(BaseModel): 11 | prompt: str 12 | 13 | @app.agent(input_schema=MyInputSchema) 14 | async def my_agent(task_data): 15 | await asyncio.sleep(20) 16 | return task_data 17 | 18 | app.run() -------------------------------------------------------------------------------- /example.py: -------------------------------------------------------------------------------- 1 | import agentserve 2 | from pydantic import BaseModel 3 | 4 | # Configure logging level 5 | agentserve.setup_logger(level="INFO") # or "DEBUG", "WARNING", "ERROR" 6 | 7 | app = agentserve.app() 8 | 9 | 10 | class MyInputSchema(BaseModel): 11 | prompt: str 12 | 13 | @app.agent(input_schema=MyInputSchema) 14 | def my_agent(task_data): 15 | return task_data 16 | 17 | app.run() -------------------------------------------------------------------------------- /examples/langchain/agent.py: -------------------------------------------------------------------------------- 1 | import agentserve 2 | from langchain_openai import ChatOpenAI 3 | 4 | app = agentserve.app() 5 | 6 | @app.agent 7 | def my_custom_agent(task_data): 8 | client = ChatOpenAI(model="gpt-4o-mini") 9 | # LangChain chat models are called with invoke() rather than the 10 | # OpenAI client's chat.completions.create() interface 11 | response = client.invoke(task_data["prompt"]) 12 | return response.content 13 | 14 | 15 | if __name__ == "__main__": 16 | app.run() -------------------------------------------------------------------------------- /examples/openai/agent.py: -------------------------------------------------------------------------------- 1 | import agentserve 2 | from openai import OpenAI 3 | 4 | app = agentserve.app() 5 | 6 | @app.agent 7 | def my_custom_agent(task_data): 8 | client = OpenAI() 9 | response = client.chat.completions.create( 10 | model="gpt-4o-mini", 11 | messages=[{"role": "user", "content": task_data["prompt"]}] 12 | ) 13 | return response.choices[0].message.content 14 | 15 | if __name__ == "__main__": 16 | app.run() -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # setup.py 2 | 3 | from setuptools import setup, find_packages 4 | 5 | setup( 6 | name='agentserve', 7 | use_scm_version=True, 8 | setup_requires=['setuptools_scm'], 9 | packages=find_packages(), 10 | include_package_data=True, 11 | install_requires=[ 12 | 'fastapi', 13 | 'uvicorn', 14 | 'rq', 15 | 'redis', 16 | 'celery', 17 | 'click', 18 | 'pydantic', 19 | 'pyyaml' 20 | ], 21 | entry_points={ 22 | 'console_scripts': [ 23 | 'agentserve=agentserve.cli:main', 24 | ], 25 | }, 26 | author='Peter', 27 | author_email='peter@getprops.ai', 28 | description='A framework for hosting and scaling AI agents.', 29 | url='https://github.com/PropsAI/agentserve', 30 | long_description=open('README.md').read(), 31 | long_description_content_type='text/markdown', 32 | classifiers=[ 33 | 'Programming Language :: Python :: 3', 34 | 'License :: OSI Approved :: MIT License' 35 | ], 36 | ) --------------------------------------------------------------------------------