├── LICENSE
├── README.md
├── agents
│   ├── api_server.py
│   ├── memory.py
│   ├── roles.py
│   ├── swarm.py
│   └── telegram_bot.py
├── config
│   └── settings.py
├── images
│   └── banner.jpg
├── main.py
└── requirements.txt

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) Quantum Swarm (QUARM)
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ![Quantum Swarm Banner](images/banner.jpg)
2 | # Quantum Swarm 🤖
3 | 
4 | Quantum Swarm (QUARM) is a multi-agent system that processes queries through a coordinated swarm of specialized AI agents. Each agent plays a distinct role in analyzing and responding to user queries, producing comprehensive, well-reasoned answers.
5 | 
6 | ## Features ✨
7 | 
8 | - **Multi-Agent Processing**: Complex queries are processed by multiple specialized agents:
9 |   - Query Triage: Determines query complexity
10 |   - Query Interpreter: Breaks down and analyzes queries
11 |   - Research Specialist: Identifies key areas for investigation
12 |   - Critical Analyzer: Evaluates information and identifies gaps
13 |   - Creative Explorer: Generates novel perspectives
14 |   - Information Synthesizer: Combines insights into coherent responses
15 | 
16 | - **Multiple Interfaces**:
17 |   - CLI mode for direct interaction
18 |   - Telegram bot integration
19 |   - RESTful API with streaming support
20 |   - Web interface support
21 | 
22 | - **Advanced Capabilities**:
23 |   - Real-time streaming responses
24 |   - Conversation memory with automatic cleanup
25 |   - Customizable agent parameters
26 |   - Support for multiple LLM providers (OpenAI, Groq, Heurist)
27 |   - CORS support for web integration
28 | 
29 | ## Installation 🚀
30 | 
31 | 1. Clone the repository:
32 | ```bash
33 | git clone https://github.com/QuarmFW/Quarm.git
34 | cd quarm
35 | ```
36 | 
37 | 2. Install dependencies:
38 | ```bash
39 | pip install -r requirements.txt
40 | ```
41 | 
42 | 3. Configure settings:
43 |    - Open `config/settings.py` and set your API keys and preferences
44 | 
45 | We have also created a webpage that generates a roles file in the proper format: [click here to generate your roles file](https://quarm.io/generator.html). A sample role entry is shown below.
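46 | 
47 | Each entry in `agents/roles.py` has the same shape: a display name, a system prompt, and an optional `parameters` block whose 0-100 values are appended to the agent's system prompt at query time. As a minimal sketch (the "historian" role and its parameter values are invented for illustration), a custom role entry could look like this:
48 | 
49 | ```python
50 | # Hypothetical example entry, in the AGENT_ROLES format used by agents/roles.py
51 | "historian": {
52 |     "name": "Historical Analyst",
53 |     "system": "You are a historian. Place the topic in its historical context.",
54 |     "parameters": {
55 |         "context_weight": {
56 |             "default": 60,
57 |             "min": 0,
58 |             "max": 100,
59 |             "description": "How strongly to weight historical background"
60 |         }
61 |     }
62 | }
63 | ```
64 | 
65 | Note that the pipeline in `agents/swarm.py` invokes agents by their role key, so a new role also has to be wired into the processing sequence there.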
66 | 
67 | ## Usage 💡
68 | 
69 | ### CLI Mode
70 | ```bash
71 | python main.py
72 | ```
73 | 
74 | ### Telegram Bot
75 | 1. Get a bot token from [@BotFather](https://t.me/botfather)
76 | 2. Add the token to `config/settings.py`
77 | 3. Run:
78 | ```bash
79 | python main.py
80 | ```
81 | 
82 | ### API Server
83 | 1. Set `USE_API = True` and adjust the API settings in `config/settings.py`
84 | 2. Run:
85 | ```bash
86 | python main.py
87 | ```
88 | 
89 | The API will be available at `http://localhost:8000` by default.
90 | 
91 | ## API Endpoints 🌐
92 | 
93 | - `GET /health`: Health check endpoint
94 | - `GET /agent-parameters`: Get available agent parameters and their defaults
95 | - `POST /query`: Process a query through the agent swarm
96 | 
97 | ### Query Example
98 | ```json
99 | {
100 |   "text": "Your query here",
101 |   "user_id": "optional_user_id",
102 |   "parameters": {
103 |     "interpreter": {
104 |       "depth_of_analysis": 80
105 |     },
106 |     "researcher": {
107 |       "technical_depth": 90
108 |     }
109 |   }
110 | }
111 | ```
112 | A runnable request example is shown in the Example Request section at the end of this README.
113 | 
114 | ## Configuration ⚙️
115 | 
116 | Key settings in `config/settings.py`:
117 | 
118 | - LLM Provider Configuration:
119 |   - `OPENAI_API_KEY`: OpenAI API key
120 |   - `GROQ_API_KEY`: Groq API key
121 |   - `HEURIST_API_KEY`: Heurist API key
122 | 
123 | - Interface Settings:
124 |   - `USE_API`: Enable/disable the API server
125 |   - `USE_MEMORY`: Enable/disable conversation memory
126 |   - `TELEGRAM_BOT_TOKEN`: Telegram bot token
127 | 
128 | - Server Configuration:
129 |   - `API_HOST`: API server host
130 |   - `API_PORT`: API server port
131 |   - `SSL_ENABLED`: Enable/disable SSL
132 | 
133 | ## Contributing 🤝
134 | 
135 | Contributions are welcome; please feel free to submit a pull request. Note that this project is still under development and the codebase is not yet fully stable. It currently processes queries through a fixed pipeline of agents, and integrations with frameworks such as ElizaOS are in the works.
136 | 
137 | ## License 📄
138 | 
139 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
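140 | 
141 | ## Example Request 🧪
142 | 
143 | This is a rough sketch using `aiohttp` (already in `requirements.txt`); the query text and user id are placeholders, and it assumes the server is running locally on the default port with `USE_STREAMING = False` (with streaming enabled, `/query` returns Server-Sent Events rather than a single JSON body):
144 | 
145 | ```python
146 | import aiohttp
147 | import asyncio
148 | 
149 | async def main():
150 |     # Assumes a local server started with USE_API = True and USE_STREAMING = False;
151 |     # with streaming enabled, /query streams Server-Sent Events instead.
152 |     payload = {
153 |         "text": "Compare solar and wind power for home use",  # placeholder query
154 |         "user_id": "demo",  # placeholder user id
155 |         "parameters": {"interpreter": {"depth_of_analysis": 80}}
156 |     }
157 |     async with aiohttp.ClientSession() as session:
158 |         async with session.post("http://localhost:8000/query", json=payload) as resp:
159 |             print(await resp.json())
160 | 
161 | asyncio.run(main())
162 | ```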
--------------------------------------------------------------------------------
/agents/api_server.py:
--------------------------------------------------------------------------------
1 | from fastapi import FastAPI, HTTPException
2 | from fastapi.middleware.cors import CORSMiddleware
3 | from fastapi.responses import StreamingResponse
4 | from pydantic import BaseModel
5 | from typing import Optional, Dict, Any, AsyncGenerator
6 | from agents.swarm import AgentSwarm
7 | from config.settings import (
8 |     SSL_ENABLED,
9 |     SEND_FULL_SWARM_RESPONSE,
10 |     USE_STREAMING
11 | )
12 | import json
13 | import uvicorn
14 | 
15 | class AgentParameters(BaseModel):
16 |     interpreter: Optional[Dict[str, int]] = None
17 |     researcher: Optional[Dict[str, int]] = None
18 |     critic: Optional[Dict[str, int]] = None
19 |     creative: Optional[Dict[str, int]] = None
20 |     synthesizer: Optional[Dict[str, int]] = None
21 | 
22 | class Query(BaseModel):
23 |     text: str
24 |     user_id: Optional[str] = "default"
25 |     parameters: Optional[AgentParameters] = None
26 | 
27 | class APIServer:
28 |     def __init__(self, host: str = "0.0.0.0", port: int = 8000):
29 |         self.app = FastAPI(
30 |             title="AI Agent Swarm API",
31 |             description="API interface for the AI Agent Swarm",
32 |             version="1.0.0"
33 |         )
34 |         self.host = host
35 |         self.port = port
36 |         self.swarm = AgentSwarm()
37 | 
38 |         self.app.add_middleware(
39 |             CORSMiddleware,
40 |             allow_origins=["https://quarm.io"],
41 |             allow_credentials=True,
42 |             allow_methods=["GET", "POST"],
43 |             allow_headers=["*"],
44 |             expose_headers=["Content-Type", "Authorization"],
45 |             max_age=3600
46 |         )
47 | 
48 |         self.setup_routes()
49 | 
50 |     async def stream_to_sse(self, generator: AsyncGenerator) -> AsyncGenerator[str, None]:
51 |         """Convert generator output to SSE format"""
52 |         try:
53 |             async for chunk in generator:
54 |                 if chunk:
55 |                     # Format as SSE data
56 |                     yield f"data: {json.dumps(chunk)}\n\n"
57 |         except Exception as e:
58 |             error_msg = f"data: {json.dumps({'error': str(e)})}\n\n"
59 |             yield error_msg
60 | 
61 |     def setup_routes(self):
62 |         @self.app.get("/agent-parameters")
63 |         async def get_agent_parameters():
64 |             """Get available agent parameters and their defaults"""
65 |             from agents.roles import AGENT_ROLES
66 |             parameters = {}
67 |             for role, config in AGENT_ROLES.items():
68 |                 if "parameters" in config:
69 |                     parameters[role] = config["parameters"]
70 |             return parameters
71 | 
72 |         @self.app.post("/query")
73 |         async def process_query(query: Query):
74 |             try:
75 |                 if USE_STREAMING:
76 |                     generator = self.swarm.process_query_streaming(
77 |                         query.text,
78 |                         user_id=query.user_id,
79 |                         parameters=query.parameters.model_dump() if query.parameters else None
80 |                     )
81 |                     return StreamingResponse(
82 |                         self.stream_to_sse(generator),
83 |                         media_type='text/event-stream'
84 |                     )
85 |                 else:
86 |                     if SEND_FULL_SWARM_RESPONSE:
87 |                         responses = await self.swarm.process_query_with_details(
88 |                             query.text,
89 |                             user_id=query.user_id,
90 |                             parameters=query.parameters.model_dump() if query.parameters else None
91 |                         )
92 |                         return responses
93 |                     else:
94 |                         response = await self.swarm.process_query(
95 |                             query.text,
96 |                             user_id=query.user_id,
97 |                             parameters=query.parameters.model_dump() if query.parameters else None
98 |                         )
99 |                         return {"response": response}
100 |             except Exception as e:
101 |                 raise HTTPException(status_code=500, detail=str(e))
102 | 
103 |         @self.app.get("/health")
104 |         async def health_check():
105 |             return {"status": "healthy"}
106 | 
107 |     def run(self):
108 |         """Start the API server"""
109 |         print(f"🚀 
Starting AI Agent Swarm API server on port {self.port}...") 110 | uvicorn.run(self.app, host=self.host, port=self.port) -------------------------------------------------------------------------------- /agents/memory.py: -------------------------------------------------------------------------------- 1 | from typing import Dict, List, Optional 2 | from datetime import datetime, timedelta 3 | import asyncio 4 | from collections import deque 5 | 6 | class ConversationMemory: 7 | def __init__(self, max_history: int = 10, max_age_hours: int = 24): 8 | # Store conversations per user 9 | self._conversations: Dict[str, deque] = {} 10 | self.max_history = max_history 11 | self.max_age = timedelta(hours=max_age_hours) 12 | self._cleanup_task = None 13 | 14 | def start_cleanup(self): 15 | """Start the cleanup task if it's not already running""" 16 | if self._cleanup_task is None: 17 | loop = asyncio.get_event_loop() 18 | self._cleanup_task = loop.create_task(self._periodic_cleanup()) 19 | 20 | def add_exchange(self, user_id: str, query: str, response: str): 21 | """Add a query-response pair to the user's conversation history""" 22 | if user_id not in self._conversations: 23 | self._conversations[user_id] = deque(maxlen=self.max_history) 24 | 25 | self._conversations[user_id].append({ 26 | 'timestamp': datetime.now(), 27 | 'query': query, 28 | 'response': response 29 | }) 30 | 31 | def get_context(self, user_id: str, max_items: int = 3) -> str: 32 | """Get recent conversation context for a user""" 33 | if user_id not in self._conversations: 34 | return "" 35 | 36 | recent = list(self._conversations[user_id])[-max_items:] 37 | 38 | context_parts = [] 39 | for exchange in recent: 40 | context_parts.append(f"User: {exchange['query']}") 41 | context_parts.append(f"Assistant: {exchange['response']}\n") 42 | 43 | return "\n".join(context_parts) 44 | 45 | async def _periodic_cleanup(self): 46 | """Periodically remove old conversations""" 47 | while True: 48 | now = datetime.now() 49 | for user_id in list(self._conversations.keys()): 50 | while self._conversations[user_id]: 51 | oldest = self._conversations[user_id][0] 52 | if now - oldest['timestamp'] > self.max_age: 53 | self._conversations[user_id].popleft() 54 | else: 55 | break 56 | 57 | if not self._conversations[user_id]: 58 | del self._conversations[user_id] 59 | 60 | await asyncio.sleep(3600) # Cleanup every hour -------------------------------------------------------------------------------- /agents/roles.py: -------------------------------------------------------------------------------- 1 | AGENT_ROLES = { 2 | "triage": { 3 | "name": "Query Triage", 4 | "system": """You are a friendly AI assistant that responds to simple queries. If a query is complex, you hand it off to a swarm of agents to analyze. Do as follows: 5 | For simple queries (greetings, basic facts, simple definitions), respond with format: 6 | SIMPLE: 7 | 8 | For complex queries that need deeper analysis, ONLY respond with: 9 | COMPLEX 10 | 11 | Examples of SIMPLE queries: 12 | - Greetings/farewells 13 | - Basic facts ("What's the capital of France?") 14 | - Simple definitions ("What is photosynthesis?") 15 | 16 | Examples of COMPLEX queries: 17 | - Multi-faceted questions 18 | - Analytical requests 19 | - Comparisons 20 | - Strategic/planning questions 21 | - Technical explanations""" 22 | }, 23 | "interpreter": { 24 | "name": "Query Interpreter", 25 | "system": "You are an expert at understanding user queries. 
Break down the user's question into its core components and identify the main objectives.", 26 | "parameters": { 27 | "depth_of_analysis": { 28 | "default": 70, 29 | "min": 0, 30 | "max": 100, 31 | "description": "How deeply to analyze the query" 32 | }, 33 | "context_awareness": { 34 | "default": 80, 35 | "min": 0, 36 | "max": 100, 37 | "description": "How much to consider contextual information" 38 | } 39 | } 40 | }, 41 | "researcher": { 42 | "name": "Research Specialist", 43 | "system": "You are a research specialist. Given a topic, identify key areas that need investigation and formulate specific research questions.", 44 | "parameters": { 45 | "research_breadth": { 46 | "default": 75, 47 | "min": 0, 48 | "max": 100, 49 | "description": "How broad the research scope should be" 50 | }, 51 | "technical_depth": { 52 | "default": 65, 53 | "min": 0, 54 | "max": 100, 55 | "description": "Level of technical detail in research" 56 | }, 57 | "source_diversity": { 58 | "default": 80, 59 | "min": 0, 60 | "max": 100, 61 | "description": "Diversity of perspectives to consider" 62 | } 63 | } 64 | }, 65 | "critic": { 66 | "name": "Critical Analyzer", 67 | "system": "You are a critical thinker. Analyze the information provided and identify potential gaps, biases, or areas of concern.", 68 | "parameters": { 69 | "skepticism_level": { 70 | "default": 70, 71 | "min": 0, 72 | "max": 100, 73 | "description": "How skeptical to be of information" 74 | }, 75 | "bias_detection": { 76 | "default": 85, 77 | "min": 0, 78 | "max": 100, 79 | "description": "Focus on detecting biases" 80 | }, 81 | "rigor_level": { 82 | "default": 75, 83 | "min": 0, 84 | "max": 100, 85 | "description": "Thoroughness of critical analysis" 86 | } 87 | } 88 | }, 89 | "creative": { 90 | "name": "Creative Explorer", 91 | "system": "You are a creative thinker. Generate novel perspectives and alternative viewpoints on the topic.", 92 | "parameters": { 93 | "creativity_level": { 94 | "default": 85, 95 | "min": 0, 96 | "max": 100, 97 | "description": "Level of creative thinking" 98 | }, 99 | "unconventional_thinking": { 100 | "default": 75, 101 | "min": 0, 102 | "max": 100, 103 | "description": "Willingness to explore unconventional ideas" 104 | }, 105 | "practicality_balance": { 106 | "default": 60, 107 | "min": 0, 108 | "max": 100, 109 | "description": "Balance between creativity and practicality" 110 | } 111 | } 112 | }, 113 | "synthesizer": { 114 | "name": "Information Synthesizer", 115 | "system": "You are an expert at combining different perspectives. Integrate the various viewpoints into a coherent and comprehensive response without sounding like a robot. 
Keep responses as short as possible, preferably a couple of sentences, with an absolute maximum of 4.", 116 | "parameters": { 117 | "conciseness": { 118 | "default": 70, 119 | "min": 0, 120 | "max": 100, 121 | "description": "How concise the final response should be" 122 | }, 123 | "integration_level": { 124 | "default": 85, 125 | "min": 0, 126 | "max": 100, 127 | "description": "How thoroughly to integrate different viewpoints" 128 | }, 129 | "clarity_focus": { 130 | "default": 80, 131 | "min": 0, 132 | "max": 100, 133 | "description": "Focus on clarity vs complexity" 134 | } 135 | } 136 | } 137 | } -------------------------------------------------------------------------------- /agents/swarm.py: -------------------------------------------------------------------------------- 1 | from typing import Dict, Optional, AsyncGenerator 2 | import asyncio 3 | from groq import AsyncGroq 4 | from openai import AsyncOpenAI 5 | from config.settings import ( 6 | GROQ_API_KEY, 7 | OPENAI_API_KEY, 8 | HEURIST_API_KEY, 9 | GROQ_MODEL, 10 | OPENAI_MODEL, 11 | HEURIST_MODEL, 12 | OPENAI_BASE_URL, 13 | HEURIST_BASE_URL, 14 | MAX_RETRIES, 15 | RETRY_DELAY, 16 | USE_MEMORY, 17 | MAX_MEMORY_ITEMS, 18 | MEMORY_MAX_AGE_HOURS 19 | ) 20 | from agents.roles import AGENT_ROLES 21 | from agents.memory import ConversationMemory 22 | 23 | class AgentSwarm: 24 | def __init__(self): 25 | # Select provider based on available API key 26 | if OPENAI_API_KEY: 27 | self.client = AsyncOpenAI( 28 | api_key=OPENAI_API_KEY, 29 | base_url=OPENAI_BASE_URL 30 | ) 31 | self.model = OPENAI_MODEL 32 | self.provider = "openai" 33 | elif GROQ_API_KEY: 34 | self.client = AsyncGroq(api_key=GROQ_API_KEY) 35 | self.model = GROQ_MODEL 36 | self.provider = "groq" 37 | elif HEURIST_API_KEY: 38 | self.client = AsyncOpenAI( 39 | api_key=HEURIST_API_KEY, 40 | base_url=HEURIST_BASE_URL 41 | ) 42 | self.model = HEURIST_MODEL 43 | self.provider = "heurist" 44 | else: 45 | raise ValueError("No API keys configured. 
Please add one in settings.py") 46 | self.memory = ConversationMemory( 47 | max_history=MAX_MEMORY_ITEMS, 48 | max_age_hours=MEMORY_MAX_AGE_HOURS 49 | ) if USE_MEMORY else None 50 | 51 | async def handle_streaming_response(self, stream) -> str: 52 | """Handle streaming response from Heurist""" 53 | full_response = [] 54 | async for chunk in stream: 55 | if hasattr(chunk.choices[0].delta, "content"): 56 | content = chunk.choices[0].delta.content 57 | if content is not None: 58 | full_response.append(content) 59 | return "".join(full_response) 60 | 61 | async def query_agent(self, role: Dict, context: str, parameters: Optional[Dict] = None) -> str: 62 | """Query a single agent with retry logic and custom parameters""" 63 | for attempt in range(MAX_RETRIES): 64 | try: 65 | # Get role parameters 66 | role_params = {} 67 | if parameters and role["name"].lower().replace(" ", "_") in parameters: 68 | role_params = parameters[role["name"].lower().replace(" ", "_")] 69 | 70 | # Modify system prompt based on parameters 71 | system_prompt = role["system"] 72 | if role_params: 73 | param_context = "\nParameters:\n" 74 | for param, value in role_params.items(): 75 | param_context += f"- {param}: {value}%\n" 76 | system_prompt = f"{system_prompt}\n{param_context}" 77 | 78 | messages = [ 79 | { 80 | "role": "system", 81 | "content": system_prompt 82 | }, 83 | { 84 | "role": "user", 85 | "content": context 86 | } 87 | ] 88 | 89 | # Handle provider-specific parameters 90 | params = { 91 | "messages": messages, 92 | "model": self.model, 93 | } 94 | 95 | if self.provider == "openai": 96 | params["temperature"] = 0.7 97 | elif self.provider == "heurist": 98 | params["temperature"] = 0.7 99 | params["max_tokens"] = 64 100 | params["stream"] = True 101 | 102 | completion = await self.client.chat.completions.create(**params) 103 | 104 | # Handle streaming response for Heurist 105 | if self.provider == "heurist": 106 | return await self.handle_streaming_response(completion) 107 | 108 | # Handle regular response for other providers 109 | return completion.choices[0].message.content 110 | except Exception as e: 111 | if attempt == MAX_RETRIES - 1: 112 | raise e 113 | await asyncio.sleep(RETRY_DELAY) 114 | 115 | async def handle_simple_query(self, triage_response: str) -> str: 116 | """Extract and format the response from the triage agent""" 117 | # Remove the "SIMPLE: " prefix and any extra whitespace 118 | return triage_response.replace("SIMPLE:", "").strip() 119 | 120 | async def process_query(self, user_query: str, telegram_mode: bool = False, user_id: str = "default") -> str: 121 | """Process a user query through the agent swarm""" 122 | print(f"\n🤔 Processing query: '{user_query}'\n") 123 | 124 | # Get conversation context if memory is enabled 125 | context_info = "" 126 | if self.memory: 127 | context = self.memory.get_context(user_id) 128 | if context: 129 | context_info = f"\nPrevious conversation:\n{context}" 130 | 131 | # Step 0: Triage the query 132 | triage_response = await self.query_agent( 133 | AGENT_ROLES["triage"], 134 | f"Evaluate this query: '{user_query}'{context_info}" 135 | ) 136 | print(f"🔄 {AGENT_ROLES['triage']['name']}:") 137 | print(triage_response + "\n") 138 | 139 | # If it's a simple query, handle and store response 140 | if triage_response.startswith("SIMPLE:"): 141 | response = await self.handle_simple_query(triage_response) 142 | if self.memory: 143 | self.memory.add_exchange(user_id, user_query, response) 144 | return response 145 | 146 | # For complex queries, proceed with full 
swarm analysis 147 | print("⚡ Activating full agent swarm for complex query...\n") 148 | 149 | # Update interpreter prompt to include context 150 | interpreter_response = await self.query_agent( 151 | AGENT_ROLES["interpreter"], 152 | f"Analyze this query considering the conversation context:\n{context_info}\nQuery: '{user_query}'" 153 | ) 154 | print(f"🔍 {AGENT_ROLES['interpreter']['name']}:") 155 | print(interpreter_response + "\n") 156 | 157 | # Step 2: Research Planning 158 | researcher_response = await self.query_agent( 159 | AGENT_ROLES["researcher"], 160 | f"Based on this interpretation:\n{interpreter_response}\nWhat specific aspects need investigation?" 161 | ) 162 | print(f"📚 {AGENT_ROLES['researcher']['name']}:") 163 | print(researcher_response + "\n") 164 | 165 | # Step 3: Critical Analysis 166 | critic_response = await self.query_agent( 167 | AGENT_ROLES["critic"], 168 | f"Critically analyze these research points:\n{researcher_response}" 169 | ) 170 | print(f"⚖️ {AGENT_ROLES['critic']['name']}:") 171 | print(critic_response + "\n") 172 | 173 | # Step 4: Creative Exploration 174 | creative_response = await self.query_agent( 175 | AGENT_ROLES["creative"], 176 | f"Given this analysis:\n{critic_response}\nExplore creative perspectives and alternatives." 177 | ) 178 | print(f"💡 {AGENT_ROLES['creative']['name']}:") 179 | print(creative_response + "\n") 180 | 181 | # Step 5: Final Synthesis 182 | synthesis_context = f""" 183 | Original Query: {user_query} 184 | 185 | Interpretation: {interpreter_response} 186 | 187 | Research Points: {researcher_response} 188 | 189 | Critical Analysis: {critic_response} 190 | 191 | Creative Perspectives: {creative_response} 192 | 193 | Please synthesize all this information into a comprehensive response. 194 | """ 195 | 196 | final_response = await self.query_agent( 197 | AGENT_ROLES["synthesizer"], 198 | synthesis_context 199 | ) 200 | print(f"🎯 {AGENT_ROLES['synthesizer']['name']} - Final Response:") 201 | print(final_response + "\n") 202 | 203 | # Store the final response if memory is enabled 204 | if self.memory: 205 | self.memory.add_exchange(user_id, user_query, final_response) 206 | return final_response 207 | 208 | async def process_query_with_details(self, user_query: str, user_id: str = "default") -> dict: 209 | """Process a query and return all agent responses""" 210 | print(f"\n🤔 Processing query: '{user_query}'\n") 211 | 212 | # Get conversation context if memory is enabled 213 | context_info = "" 214 | if self.memory: 215 | context = self.memory.get_context(user_id) 216 | if context: 217 | context_info = f"\nPrevious conversation:\n{context}" 218 | 219 | # Initialize response structure 220 | response = { 221 | "is_simple_query": False, 222 | "synthesizer": {"name": "Information Synthesizer", "response": ""} 223 | } 224 | 225 | # Step 0: Triage the query 226 | triage_response = await self.query_agent( 227 | AGENT_ROLES["triage"], 228 | f"Evaluate this query: '{user_query}'{context_info}" 229 | ) 230 | print(f"🔄 {AGENT_ROLES['triage']['name']}:") 231 | print(triage_response + "\n") 232 | 233 | response["triage"] = {"name": "Query Triage", "response": triage_response} 234 | 235 | # If it's a simple query, handle and return 236 | if triage_response.startswith("SIMPLE:"): 237 | simple_response = await self.handle_simple_query(triage_response) 238 | if self.memory: 239 | self.memory.add_exchange(user_id, user_query, simple_response) 240 | response["is_simple_query"] = True 241 | response["synthesizer"]["response"] = simple_response 242 | 
return response 243 | 244 | # For complex queries, proceed with full swarm analysis 245 | print("⚡ Activating full agent swarm for complex query...\n") 246 | 247 | # Step 1: Interpretation 248 | interpreter_response = await self.query_agent( 249 | AGENT_ROLES["interpreter"], 250 | f"Analyze this query considering the conversation context:\n{context_info}\nQuery: '{user_query}'" 251 | ) 252 | print(f"🔍 {AGENT_ROLES['interpreter']['name']}:") 253 | print(interpreter_response + "\n") 254 | response["interpreter"] = {"name": "Query Interpreter", "response": interpreter_response} 255 | 256 | # Step 2: Research 257 | researcher_response = await self.query_agent( 258 | AGENT_ROLES["researcher"], 259 | f"Based on this interpretation:\n{interpreter_response}\nWhat specific aspects need investigation?" 260 | ) 261 | print(f"📚 {AGENT_ROLES['researcher']['name']}:") 262 | print(researcher_response + "\n") 263 | response["researcher"] = {"name": "Research Specialist", "response": researcher_response} 264 | 265 | # Step 3: Critical Analysis 266 | critic_response = await self.query_agent( 267 | AGENT_ROLES["critic"], 268 | f"Critically analyze these research points:\n{researcher_response}" 269 | ) 270 | print(f"⚖️ {AGENT_ROLES['critic']['name']}:") 271 | print(critic_response + "\n") 272 | response["critic"] = {"name": "Critical Analyzer", "response": critic_response} 273 | 274 | # Step 4: Creative Exploration 275 | creative_response = await self.query_agent( 276 | AGENT_ROLES["creative"], 277 | f"Given this analysis:\n{critic_response}\nExplore creative perspectives and alternatives." 278 | ) 279 | print(f"💡 {AGENT_ROLES['creative']['name']}:") 280 | print(creative_response + "\n") 281 | response["creative"] = {"name": "Creative Explorer", "response": creative_response} 282 | 283 | # Step 5: Final Synthesis 284 | synthesis_context = f""" 285 | Original Query: {user_query} 286 | 287 | Interpretation: {interpreter_response} 288 | 289 | Research Points: {researcher_response} 290 | 291 | Critical Analysis: {critic_response} 292 | 293 | Creative Perspectives: {creative_response} 294 | 295 | Please synthesize all this information into a comprehensive response. 
296 | """ 297 | 298 | final_response = await self.query_agent( 299 | AGENT_ROLES["synthesizer"], 300 | synthesis_context 301 | ) 302 | print(f"🎯 {AGENT_ROLES['synthesizer']['name']} - Final Response:") 303 | print(final_response + "\n") 304 | 305 | response["synthesizer"]["response"] = final_response 306 | 307 | # Store the final response if memory is enabled 308 | if self.memory: 309 | self.memory.add_exchange(user_id, user_query, final_response) 310 | 311 | return response 312 | 313 | async def process_query_streaming(self, user_query: str, user_id: str = "default", parameters: Optional[Dict] = None) -> AsyncGenerator[Dict, None]: 314 | """Process a query and stream the response in real-time""" 315 | print(f"\n🤔 Processing query: '{user_query}'\n") 316 | 317 | # Get conversation context if memory is enabled 318 | context_info = "" 319 | if self.memory: 320 | context = self.memory.get_context(user_id) 321 | if context: 322 | context_info = f"\nPrevious conversation:\n{context}" 323 | 324 | # Step 0: Triage 325 | print(f"🔄 {AGENT_ROLES['triage']['name']}:") 326 | triage_text = "" 327 | async for chunk in self.query_agent_stream( 328 | AGENT_ROLES["triage"], 329 | f"Evaluate this query: '{user_query}'{context_info}" 330 | ): 331 | triage_text += chunk 332 | yield { 333 | "role": "triage", 334 | "name": AGENT_ROLES["triage"]["name"], 335 | "content": chunk 336 | } 337 | print("\n") 338 | 339 | # If simple query, stream direct response 340 | if triage_text.startswith("SIMPLE:"): 341 | simple_response = triage_text.replace("SIMPLE:", "").strip() 342 | if self.memory: 343 | self.memory.add_exchange(user_id, user_query, simple_response) 344 | yield { 345 | "role": "synthesizer", 346 | "name": "Simple Response", 347 | "content": simple_response 348 | } 349 | return 350 | 351 | # For complex queries, proceed with full swarm analysis 352 | print("⚡ Activating full agent swarm for complex query...\n") 353 | 354 | # Step 1: Interpretation 355 | print(f"🔍 {AGENT_ROLES['interpreter']['name']}:") 356 | interpreter_text = "" 357 | async for chunk in self.query_agent_stream( 358 | AGENT_ROLES["interpreter"], 359 | f"Analyze this query considering the conversation context:\n{context_info}\nQuery: '{user_query}'", 360 | parameters 361 | ): 362 | interpreter_text += chunk 363 | yield { 364 | "role": "interpreter", 365 | "name": AGENT_ROLES["interpreter"]["name"], 366 | "content": chunk 367 | } 368 | print("\n") 369 | 370 | # Step 2: Research 371 | print(f"📚 {AGENT_ROLES['researcher']['name']}:") 372 | researcher_text = "" 373 | async for chunk in self.query_agent_stream( 374 | AGENT_ROLES["researcher"], 375 | f"Based on this interpretation:\n{interpreter_text}\nWhat specific aspects need investigation?", 376 | parameters 377 | ): 378 | researcher_text += chunk 379 | yield { 380 | "role": "researcher", 381 | "name": AGENT_ROLES["researcher"]["name"], 382 | "content": chunk 383 | } 384 | print("\n") 385 | 386 | # Step 3: Critical Analysis 387 | print(f"⚖️ {AGENT_ROLES['critic']['name']}:") 388 | critic_text = "" 389 | async for chunk in self.query_agent_stream( 390 | AGENT_ROLES["critic"], 391 | f"Critically analyze these research points:\n{researcher_text}", 392 | parameters 393 | ): 394 | critic_text += chunk 395 | yield { 396 | "role": "critic", 397 | "name": AGENT_ROLES["critic"]["name"], 398 | "content": chunk 399 | } 400 | print("\n") 401 | 402 | # Step 4: Creative Exploration 403 | print(f"💡 {AGENT_ROLES['creative']['name']}:") 404 | creative_text = "" 405 | async for chunk in self.query_agent_stream( 
406 |             AGENT_ROLES["creative"],
407 |             f"Given this analysis:\n{critic_text}\nExplore creative perspectives and alternatives.",
408 |             parameters
409 |         ):
410 |             creative_text += chunk
411 |             yield {
412 |                 "role": "creative",
413 |                 "name": AGENT_ROLES["creative"]["name"],
414 |                 "content": chunk
415 |             }
416 |         print("\n")
417 | 
418 |         # Step 5: Final Synthesis
419 |         synthesis_context = f"""
420 |         Original Query: {user_query}
421 | 
422 |         Interpretation: {interpreter_text}
423 | 
424 |         Research Points: {researcher_text}
425 | 
426 |         Critical Analysis: {critic_text}
427 | 
428 |         Creative Perspectives: {creative_text}
429 | 
430 |         Please synthesize all this information into a comprehensive response.
431 |         """
432 | 
433 |         print(f"🎯 {AGENT_ROLES['synthesizer']['name']} - Final Response:")
434 |         final_response = ""
435 |         async for chunk in self.query_agent_stream(
436 |             AGENT_ROLES["synthesizer"],
437 |             synthesis_context,
438 |             parameters
439 |         ):
440 |             final_response += chunk
441 |             yield {
442 |                 "role": "synthesizer",
443 |                 "name": AGENT_ROLES["synthesizer"]["name"],
444 |                 "content": chunk
445 |             }
446 |         print("\n")
447 | 
448 |         # Store the final response if memory is enabled
449 |         if self.memory:
450 |             self.memory.add_exchange(user_id, user_query, final_response)
451 | 
452 |     async def query_agent_stream(self, role: Dict, context: str, parameters: Optional[Dict] = None) -> AsyncGenerator[str, None]:
453 |         """Query a single agent with streaming"""
454 |         try:
455 |             # Parameter overrides are keyed by the AGENT_ROLES key
456 |             # (e.g. "interpreter"), matching the API's AgentParameters model
457 |             role_key = next((k for k, v in AGENT_ROLES.items() if v is role), "")
458 |             role_params = (parameters or {}).get(role_key) or {}
459 | 
460 |             # Modify system prompt based on parameters
461 |             system_prompt = role["system"]
462 |             if role_params:
463 |                 param_context = "\nParameters:\n"
464 |                 for param, value in role_params.items():
465 |                     param_context += f"- {param}: {value}%\n"
466 |                 system_prompt = f"{system_prompt}\n{param_context}"
467 | 
468 |             messages = [
469 |                 {
470 |                     "role": "system",
471 |                     "content": system_prompt
472 |                 },
473 |                 {
474 |                     "role": "user",
475 |                     "content": context
476 |                 }
477 |             ]
478 | 
479 |             params = {
480 |                 "messages": messages,
481 |                 "model": self.model,
482 |                 "stream": True
483 |             }
484 | 
485 |             if self.provider == "openai":
486 |                 params["temperature"] = 0.7
487 |             elif self.provider == "heurist":
488 |                 params["temperature"] = 0.7
489 |                 params["max_tokens"] = 64
490 | 
491 |             stream = await self.client.chat.completions.create(**params)
492 | 
493 |             async for chunk in stream:
494 |                 if chunk.choices[0].delta.content:
495 |                     yield chunk.choices[0].delta.content
496 | 
497 |         except Exception as e:
498 |             print(f"Streaming error: {str(e)}")
499 |             # Fallback to non-streaming if streaming fails
500 |             response = await self.query_agent(role, context, parameters)
501 |             yield response
--------------------------------------------------------------------------------
/agents/telegram_bot.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | from telegram import Update
3 | from telegram.ext import Application, CommandHandler, MessageHandler, filters, ContextTypes
4 | from config.settings import TELEGRAM_BOT_TOKEN, USE_MEMORY
5 | from agents.swarm import AgentSwarm
6 | 
7 | class TelegramBot:
8 |     def __init__(self):
9 |         self.swarm = AgentSwarm()
10 |         self.app = Application.builder().token(TELEGRAM_BOT_TOKEN).build()
11 | 
12 |         # Add handlers
13 |         self.app.add_handler(CommandHandler("start", self.start_command))
14 | 
self.app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, self.handle_message)) 15 | 16 | async def start_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE): 17 | """Send a message when the command /start is issued.""" 18 | await update.message.reply_text( 19 | "👋 Welcome to the AI Agent Swarm!\n\n" 20 | "I'm here to help answer your questions using a swarm of specialized AI agents.\n" 21 | "Just send me your query and I'll process it.\n\n" 22 | "Example: 'Explain the concept of quantum computing'" 23 | ) 24 | 25 | async def handle_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE): 26 | """Process user messages.""" 27 | try: 28 | await update.message.chat.send_action("typing") 29 | 30 | # Pass the Telegram user ID to maintain separate conversation histories 31 | response = await self.swarm.process_query( 32 | update.message.text, 33 | telegram_mode=True, 34 | user_id=str(update.effective_user.id) 35 | ) 36 | 37 | if response: 38 | if len(response) > 4096: 39 | for i in range(0, len(response), 4096): 40 | await update.message.reply_text(response[i:i+4096]) 41 | else: 42 | await update.message.reply_text(response) 43 | else: 44 | await update.message.reply_text("I couldn't generate a response. Please try again.") 45 | 46 | except Exception as e: 47 | await update.message.reply_text( 48 | f"❌ Sorry, an error occurred: {str(e)}\n" 49 | "Please try again later." 50 | ) 51 | 52 | def run(self): 53 | """Start the bot.""" 54 | print("Starting Telegram bot...") 55 | # Start the memory cleanup task if memory is enabled 56 | if USE_MEMORY: 57 | self.swarm.memory.start_cleanup() 58 | self.app.run_polling() -------------------------------------------------------------------------------- /config/settings.py: -------------------------------------------------------------------------------- 1 | # Used LLM configuration, provide an API key for the LLM you want to use, leave others empty 2 | OPENAI_API_KEY = "" 3 | OPENAI_MODEL = "gpt-4-turbo-preview" # or "gpt-3.5-turbo" for a cheaper option 4 | OPENAI_BASE_URL = "https://api.openai.com/v1" # Default OpenAI URL 5 | 6 | GROQ_API_KEY = "" 7 | GROQ_MODEL = "llama3-8b-8192" 8 | 9 | HEURIST_API_KEY = "" 10 | HEURIST_MODEL = "mistralai/mixtral-8x7b-instruct" # Example Heurist model 11 | HEURIST_BASE_URL = "https://llm-gateway.heurist.xyz" 12 | 13 | # Memory configuration 14 | USE_MEMORY = False # Set to False to disable conversation memory 15 | MAX_MEMORY_ITEMS = 3 # Number of previous exchanges to remember 16 | MEMORY_MAX_AGE_HOURS = 24 # How long to keep conversations in memory 17 | 18 | # Telegram configuration 19 | TELEGRAM_BOT_TOKEN = "" # Your Telegram bot token from @BotFather 20 | 21 | # API configuration 22 | USE_API = False # Set to True to enable API server 23 | API_HOST = "127.0.0.1" # Use "0.0.0.0" to allow external connections 24 | API_PORT = 8000 25 | SEND_FULL_SWARM_RESPONSE = True # Set to True to send all agent responses via API 26 | USE_STREAMING = True # Set to True to enable streaming responses 27 | 28 | # Swarm configuration 29 | MAX_RETRIES = 3 30 | RETRY_DELAY = 1 # seconds 31 | 32 | # SSL configuration 33 | SSL_ENABLED = False # SSL will be handled by Nginx instead -------------------------------------------------------------------------------- /images/banner.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Quarm-AI/Quarm/7c674e0df72fb4e5398c3e462b9523edac3769e3/images/banner.jpg 
-------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import threading 3 | from agents.swarm import AgentSwarm 4 | from agents.telegram_bot import TelegramBot 5 | from agents.api_server import APIServer 6 | from config.settings import ( 7 | TELEGRAM_BOT_TOKEN, 8 | USE_MEMORY, 9 | USE_API, 10 | API_HOST, 11 | API_PORT 12 | ) 13 | 14 | async def cli_mode(): 15 | """Run in CLI mode""" 16 | swarm = AgentSwarm() 17 | if USE_MEMORY: 18 | swarm.memory.start_cleanup() 19 | 20 | print("Welcome to the AI Agent Swarm!") 21 | print("Please enter your query (or 'quit' to exit):") 22 | 23 | while True: 24 | try: 25 | user_query = input("\n❓ Your query: ").strip() 26 | if user_query.lower() == 'quit': 27 | break 28 | if not user_query: 29 | continue 30 | 31 | await swarm.process_query(user_query) 32 | print("\n-----------------------------------") 33 | except Exception as e: 34 | print(f"\n❌ Error: {str(e)}") 35 | print("Please try again or type 'quit' to exit.") 36 | 37 | def telegram_mode(): 38 | """Run in Telegram mode""" 39 | print("🤖 Starting AI Agent Swarm in Telegram mode...") 40 | bot = TelegramBot() 41 | bot.run() 42 | 43 | def start_api_server(): 44 | """Start the API server in a separate thread""" 45 | server = APIServer(host=API_HOST, port=API_PORT) 46 | server.run() 47 | 48 | if __name__ == "__main__": 49 | # Start API server if enabled 50 | if USE_API: 51 | print(f"🚀 Starting API server on {API_HOST}:{API_PORT}") 52 | api_thread = threading.Thread(target=start_api_server, daemon=True) 53 | api_thread.start() 54 | 55 | # Run main interface (CLI or Telegram) 56 | if TELEGRAM_BOT_TOKEN: 57 | telegram_mode() 58 | else: 59 | print("ℹ️ No Telegram token found in settings.py, running in CLI mode...") 60 | asyncio.run(cli_mode()) -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | fastapi>=0.104.0 2 | uvicorn>=0.24.0 3 | python-telegram-bot>=20.7 4 | openai>=1.3.0 5 | groq>=0.4.0 6 | pydantic>=2.5.0 7 | python-dotenv>=1.0.0 8 | aiohttp>=3.9.0 --------------------------------------------------------------------------------