├── LICENSE ├── README.md └── Camel-Coder.py /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, June 23, 2023 3 | 4 | Copyright 2023, Alexander Rafael Izquierdo 5 | 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The GNU General Public License is a free, copyleft license for 12 | software and other kinds of works. 13 | 14 | The licenses for most software and other practical works are designed 15 | to take away your freedom to share and change the works. By contrast, 16 | the GNU General Public License is intended to guarantee your freedom to 17 | share and change all versions of a program--to make sure it remains free 18 | software for all its users. 19 | 20 | We, the authors of this license, urge everyone to help improve the GNU GPL 21 | by making modifications to its provisions as needed to address specific 22 | concerns related to theft protection and copyright infringement. 23 | 24 | License Agreement 25 | 26 | TERMS AND CONDITIONS 27 | 28 | 0. Definitions. 29 | 30 | "This License" refers to version 3 of the GNU General Public License. 31 | 32 | "Copyright" also means copyright-like laws that apply to other kinds of 33 | works, such as semiconductor masks. 34 | 35 | "The Program" refers to any copyrightable work licensed under this License. 36 | Each licensee is addressed as "you". "Licensees" and "recipients" may be 37 | individuals or organizations. 38 | 39 | 1. Theft Protection. 40 | 41 | Unauthorized copying, distribution, or use of the software, in whole or in part, 42 | without proper authorization or a valid license is strictly prohibited. Anyone found 43 | to be engaged in theft of the software will be subject to legal action and may be held 44 | liable for damages. 45 | 46 | 2. Copyright Infringement.
47 | 48 | The software and its source code, as licensed under this License, are protected by 49 | copyright law. Any unauthorized reproduction, distribution, or modification of the 50 | software, including the creation of derivative works, is a violation of copyright law 51 | and will be pursued to the fullest extent permitted by law. 52 | 53 | 3. Your Rights. 54 | 55 | Each licensee is granted the right to copy, distribute, and modify the Program 56 | subject to the terms and conditions of this License. 57 | 58 | 4. Attribution Requirement. 59 | 60 | If you use the software, in whole or in part, you must provide proper attribution to 61 | the original author, Alexander Rafael Izquierdo, in a prominent manner. This includes 62 | crediting the author's name and including a reference to the original work and license. 63 | 64 | 5. Permission for Commercial or Research Use. 65 | 66 | For commercial or research use of the software, you must obtain explicit permission from 67 | the original author, Alexander Rafael Izquierdo. Please contact the author to request 68 | permission and discuss terms of use. 69 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Camel-Coder 2 | 3 | Camel-Coder is an advanced Python script powered by OpenAI's GPT-3.5-turbo model, designed to enable robust task-oriented chatbot conversations. It leverages the role-playing conversation capabilities of the GPT model to assign specific roles to different chat agents, guiding them towards achieving a specific task collaboratively. 4 | 5 | ## Features 6 | 7 | - **Multiple Agents:** Camel-Coder provides support for multi-agent interactions. Each agent is assigned a unique role with specific responsibilities, adding depth and versatility to the chatbot capabilities. 
8 | 9 | - **Task-Oriented Conversation:** The chatbot is designed to steer the conversation towards the completion of a specific task, facilitating goal-oriented dialogue. 10 | 11 | - **Role-Based Prompts:** Camel-Coder introduces role-specific prompt templates for each agent, offering instructions and guidelines that are tailored to each agent's function in the task completion process. 12 | 13 | - **Intervention Mechanism:** The script incorporates an intervention mechanism: a special monitor agent can intervene in the conversation to ensure it remains within the bounds of the set objective. 14 | 15 | - **Thoughtful Agent:** The script also features a thoughtful agent that proactively provides suggestions and guidance to drive the conversation in the right direction. 16 | 17 | - **Coding Agent:** This agent generates functional prototypes based on the discussion in the conversation, providing a practical outcome from the task-oriented dialogue. 18 | 19 | - **Conversation Saving:** The complete conversation is automatically saved to a text file, making it easy to review, analyze, or audit the conversation at any later stage. 20 | 21 | - **Token Counting and Cost Estimation:** To ensure transparency and cost-effectiveness, the script counts the tokens used in a conversation and provides a corresponding cost estimate. 22 | 23 | ## Setup 24 | 25 | 1. **Dependencies:** Install the required dependencies using pip: 26 | 27 | ```pip install openai langchain``` 28 | 29 | 30 | 31 | 2. **Environment Variables:** Set up the required environment variables. This primarily includes the OpenAI API key. 32 | 33 | 3. **Role-Specific Prompts:** Define role-specific prompts for each agent by tweaking the `assistant_role_name`, `user_role_name`, and `task` variables in the Camel-Coder.py script. 34 | 35 | 4. **Execution:** Run the Camel-Coder.py script to start the chatbot.
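Assuming a Unix-like shell, a typical first run might look like the following sketch (the key value is a placeholder, and note that the script as shipped also contains hardcoded `your-api-key` assignments near its top that you should update):

```shell
# Install the required dependencies
pip install openai langchain

# Make the API key available for this session ("sk-your-key-here" is a placeholder)
export OPENAI_API_KEY="sk-your-key-here"

# Start the multi-agent conversation
python Camel-Coder.py
```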
36 | 37 | ## File Structure and Workspace 38 | 39 | Camel-Coder establishes a directory structure within the workspace to categorize the generated code and related files. 40 | 41 | ## Conversation Saving 42 | 43 | To configure the automatic conversation-saving feature, follow the steps below: 44 | 45 | 1. Open the Camel-Coder.py script. 46 | 2. Find the `conversation_directory` variable. 47 | 3. Replace the current path with the desired directory where you wish to save the conversation file: 48 | 49 | `conversation_directory = "/your/desired/directory"` 50 | 51 | 52 | 53 | Replace `/your/desired/directory` with the actual path where you want the conversation file to be saved. 54 | 55 | ## Token Counting and Cost Estimation 56 | 57 | After executing the Camel-Coder script, the console output will provide information about the token count and cost estimation. 58 | 59 | ## License 60 | 61 | This project is licensed under the terms of a Custom GNU License. 62 | -------------------------------------------------------------------------------- /Camel-Coder.py: -------------------------------------------------------------------------------- 1 | import os 2 | import datetime 3 | import time 4 | from typing import List 5 | from langchain.callbacks import get_openai_callback 6 | from langchain.chat_models import ChatOpenAI 7 | from langchain.prompts.chat import ( 8 | SystemMessagePromptTemplate, 9 | HumanMessagePromptTemplate, 10 | ) 11 | from langchain.schema import AIMessage, HumanMessage, SystemMessage, BaseMessage 12 | import openai 13 | 14 | os.environ["OPENAI_API_KEY"] = 'your-api-key' # replace 'your-api-key' with your actual API key 15 | openai.api_key = "your-api-key" # replace 'your-api-key' with your actual API key 16 | 17 | conversation_directory = "path/to/your/workspace/" # Change to your desired path 18 | 19 | 20 | class CAMELAgent: 21 | def __init__( 22 | self, 23 | system_message: SystemMessage, 24 | model: ChatOpenAI, 25 | ) -> None: 26 | self.system_message
= system_message 27 | self.model = model 28 | self.init_messages() 29 | 30 | def reset(self) -> None: 31 | self.init_messages() 32 | return self.stored_messages 33 | 34 | def init_messages(self) -> None: 35 | self.stored_messages = [self.system_message] 36 | 37 | def update_messages(self, message: BaseMessage) -> List[BaseMessage]: 38 | self.stored_messages.append(message) 39 | return self.stored_messages 40 | 41 | def step( 42 | self, 43 | input_message: HumanMessage, 44 | ) -> AIMessage: 45 | messages = self.update_messages(input_message) 46 | 47 | output_message = self.model(messages) 48 | self.update_messages(output_message) 49 | 50 | return output_message 51 | 52 | class CodingAgent(CAMELAgent): 53 | def __init__( 54 | self, 55 | system_message: SystemMessage, 56 | model: ChatOpenAI, 57 | inception_prompt: BaseMessage, 58 | ) -> None: 59 | super().__init__(system_message, model) 60 | self.inception_prompt = inception_prompt 61 | 62 | def update_messages(self, message: BaseMessage) -> List[BaseMessage]: 63 | self.stored_messages.append(message) 64 | # prepend the inception prompt once, guarding against duplicating it on every call 65 | if not self.stored_messages or self.stored_messages[0] is not self.inception_prompt: self.stored_messages = [self.inception_prompt] + self.stored_messages 66 | return self.stored_messages 67 | 68 | 69 | assistant_role_name = "AI Expert" 70 | user_role_name = "Project Lead" 71 | task = "create a website that will be able to create openai templates based on user requests for easily deployable scripts" 72 | 73 | TOKEN_LIMIT = 14000 74 | 75 | word_limit = 50 # word limit for task brainstorming 76 | 77 | 78 | # Hardcoded agents 79 | thoughtful_role_name = "Thoughtful Agent" 80 | monitor_role_name = "Monitor Agent" 81 | coding_role_name = "Python Coding Expert" 82 | 83 | class MonitorAgent: 84 | def __init__(self, task, model_name="gpt-3.5-turbo-16k", monitor_role_name="MonitorAgent", user_role_name="User", api_key=openai.api_key): 85 | self.task_keywords = ["initialize", "configure", "diagnostics", "verify"] 86 | self.model_name = model_name 87 | self.task =
task # Store the task in the instance 88 | self.intervene = False # a flag that tells whether to intervene or not 89 | self.intervention_message = "" # Stores the intervention message 90 | self.monitor_role_name = monitor_role_name 91 | self.user_role_name = user_role_name 92 | self.api_key = api_key if api_key else os.getenv("OPENAI_API_KEY") 93 | self.set_api_key(self.api_key) 94 | self.monitor_inception_prompt = ( 95 | f"Never forget you are a {self.monitor_role_name} and I am a {self.user_role_name}. Never flip roles!" 96 | " We share a common interest in collaborating to successfully complete a task." 97 | f" Your role is to monitor the conversation and ensure goal conformance. Here is the task: {self.task}. Never forget our task!" 98 | " Your responsibility is to observe the conversation and provide feedback or intervene when necessary." 99 | " You should assess whether the conversation is conforming to the task goal." 100 | " If you notice any deviations or need to provide guidance, feel free to do so." 101 | " Your intervention: " 102 | ) 103 | 104 | def set_api_key(self, api_key): 105 | openai.api_key = api_key 106 | 107 | def generate_suggestions(self, conversation): 108 | # Use the configured chat model to understand context and generate suggestions 109 | prompt = f"The task is: '{self.task}'. The conversation so far is: '{conversation}'. Based on this, what are some topics related to the task that should be discussed?" 110 | response = openai.ChatCompletion.create( 111 | model=self.model_name, 112 | messages=[ 113 | {"role": "system", "content": self.monitor_inception_prompt}, 114 | {"role": "user", "content": prompt} 115 | ], 116 | max_tokens=100 117 | ) 118 | suggestions = response.choices[0].message['content'].strip().split(',') 119 | return suggestions 120 | 121 | def should_intervene(self, conversation): 122 | # Use the configured chat model to understand context and decide if an intervention is necessary 123 | prompt = f"The task is: '{self.task}'.
The conversation so far is: '{conversation}'. Based on this, should there be an intervention to guide the discussion? Answer with a single word: 'yes' or 'no'." 124 | response = openai.ChatCompletion.create( 125 | model=self.model_name, 126 | messages=[ 127 | {"role": "system", "content": self.monitor_inception_prompt}, 128 | {"role": "user", "content": prompt} 129 | ], 130 | max_tokens=20 131 | ) 132 | decision = response.choices[0].message['content'].strip().lower() 133 | return decision.startswith('yes') 134 | 135 | def stage_intervention(self, conversation): 136 | # Use the configured chat model to generate an intervention message 137 | prompt = f"The task is: '{self.task}'. The conversation so far is: '{conversation}'. Based on this, generate an intervention message to guide the discussion." 138 | response = openai.ChatCompletion.create( 139 | model=self.model_name, 140 | messages=[ 141 | {"role": "system", "content": self.monitor_inception_prompt}, 142 | {"role": "user", "content": prompt} 143 | ], 144 | max_tokens=200 145 | ) 146 | intervention_message = response.choices[0].message['content'].strip() 147 | return intervention_message 148 | 149 | def step(self, conversation): 150 | # Check the intervene flag 151 | if self.intervene: 152 | # Reset the intervene flag and return the intervention message 153 | self.intervene = False 154 | return self.intervention_message 155 | else: 156 | suggestions = self.generate_suggestions(conversation) 157 | return "\n".join(suggestions) if suggestions else None 158 | 159 | def update_status(self, conversation): 160 | # Use the configured chat model to generate a status update based on the current conversation 161 | prompt = f"The conversation so far is: '{conversation}'. Based on this, what is the status and what do you think?"
162 | response = openai.ChatCompletion.create( 163 | model=self.model_name, 164 | messages=[ 165 | {"role": "system", "content": self.monitor_inception_prompt}, 166 | {"role": "user", "content": prompt} 167 | ], 168 | max_tokens=100 169 | ) 170 | status_update = response.choices[0].message['content'].strip() 171 | return status_update 172 | 173 | coding_inception_prompt = ( 174 | f"As the {coding_role_name}, your primary objective is to directly translate the ongoing discussion, ideas, and defined objectives into real, executable code. Your role is crucial in transforming the conversation into a functioning coding project.\n\n" 175 | f"With your advanced programming skills, you're expected to craft a robust, maintainable, and scalable piece of software or application that aligns with the established requirements and expectations. Your final output must be functional, well-structured code demonstrating a keen understanding of the task at hand and a strong problem-solving ability.\n\n" 176 | f"Based on the ongoing conversation, your task is twofold:\n\n" 177 | f"1. Generate a hypothetical file structure for the coding project: Create a directory structure that reflects the discussed components, modules, and their dependencies. Organize the structure in a logical manner, capturing the relationships between the components. Each component/module should be represented as a directory, and the dependencies should be reflected in the structure.\n\n" 178 | f"2. Provide placeholder code: Implement key functionalities discussed in the conversation by providing relevant code snippets, class definitions, function definitions, or any other code representation that reflects the intended behavior of the coding project. 
The placeholder code should serve as a starting point for the actual implementation.\n\n" 179 | f"Keep in mind that the code and file structure should adhere to best practices, such as proper naming conventions, modularity, and code reusability.\n\n" 180 | f"To complete your task, please provide the following:\n\n" 181 | f"File Structure:\n\n\n" 182 | f"Placeholder Code:\n" 183 | ) # NOTE: this string is shadowed by the template-based coding_inception_prompt defined later in this file 184 | 185 | 186 | 187 | 188 | 189 | task_specifier_sys_msg = SystemMessage(content="You can make a task more specific.") 190 | task_specifier_prompt = ( 191 | "Here is a task that involves a discussion among the following agents:\n\n" 192 | 193 | "1. {assistant_role_name}: Your role is to provide guidance and assistance throughout the task.\n" 194 | "2. {user_role_name}: Your role is to provide instructions and collaborate with other agents.\n" 195 | "3. {monitor_role_name}: Your role is to observe the conversation and ensure goal conformance.\n" 196 | "4. {coding_role_name}: Your role is to develop a large and complex prototype based on the ongoing discussion.\n" 197 | "5. {thoughtful_role_name}: Your role is to provide thoughtful suggestions to guide the conversation.\n\n" 198 | 199 | "The task to be discussed is as follows: {task}.\n" 200 | "Please make the task more specific, be creative and imaginative.\n" 201 | "Reply with the full task in {word_limit} words or less. Do not add anything else.\n" 202 | ) 203 | 204 | assistant_inception_prompt = ( 205 | """Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! 206 | We share a common interest in collaborating to successfully complete a task. 207 | You must help me complete the task. 208 | Here is the task: {task}. Never forget our task! 209 | I, as the {user_role_name}, will instruct you based on your expertise and my needs to complete the task. 210 | 211 | I will give you one question at a time. 212 | You must write a specific answer that appropriately completes the requested question.
213 | Do not add anything else other than your answer to my instruction. 214 | You must adhere to the instructions provided by the {user_role_name} at all times. 215 | 216 | Unless I say the task is completed, you should always start your response with the following format: 217 | 218 | 219 | Production difficulty: 220 | Current State of Production: 221 | 222 | 223 | 224 | Your answer must be specific and descriptive, directly addressing the requested question. 225 | Do not provide general information or additional explanations beyond what is required. You must be honest and say when you cannot directly create products outside of your capabilities. 226 | 227 | Remember to end with: Next question. 228 | 229 | As we proceed, please also keep in mind the roles of other agents: 230 | - {assistant_role_name}: Assist the user in completing the task by providing relevant information and following their instructions. 231 | - {user_role_name}: Provide instructions and collaborate with other agents. 232 | - {thoughtful_role_name}: Provide thoughtful suggestions to guide the conversation and contribute to the overall progress. 233 | - {coding_role_name}: Develop a large and complex prototype based on the ongoing discussion. 234 | - {monitor_role_name}: Observe the conversation and ensure that all agents are adhering to the task goal. Intervene when necessary. 235 | 236 | Let's collaborate effectively to accomplish our task! 237 | We are a group of collective agents, not humans. DO NOT CREATE DEADLINES, WE WORK STEP BY STEP! Our goal is to strive towards completing the given task: {task}. Refrain from being "chatty" and continually improve the product in collaboration with the other agents.""" 238 | ) 239 | 240 | user_inception_prompt = ( 241 | """As {user_role_name}, your task is to guide {assistant_role_name} to complete the task: '{task}'. 242 | Do not repeat your own instructions, and consider the responses from the {assistant_role_name} and {thoughtful_role_name} when formulating your next step.
243 | IMPORTANT: Remember, you are not assuming the roles of {monitor_role_name}, {coding_role_name}, {assistant_role_name}, or {thoughtful_role_name}. 244 | Use the following format when providing guidance: 245 | Production difficulty: 246 | Current State of Production: 247 | My Instructions: 248 | Your role is to direct the process through specific questions, requests, or instructions to the {assistant_role_name}. 249 | Let's collaborate effectively to accomplish our task! 250 | We are a group of collective agents, not humans. DO NOT CREATE DEADLINES, WE WORK STEP BY STEP! Once you receive a response, go to the next step. Our goal is to strive towards completing the given task: {task}. Refrain from being "chatty" and continually improve the product in collaboration with the other agents.""" 251 | ) 252 | 253 | 254 | 255 | thoughtful_inception_prompt = ( 256 | """Never forget you are a {thoughtful_role_name} and I am a {user_role_name}. Never flip roles! 257 | We share a common interest in collaborating to successfully complete a task. 258 | Your role is to provide thoughtful suggestions to guide the conversation. 259 | Here is the task: {task}. Never forget our task! 260 | 261 | You, as the {thoughtful_role_name}, should help guide the conversation by providing thoughtful suggestions, clarifications, and insights. 262 | Your goal is to help the {user_role_name} and the {assistant_role_name} achieve their objectives effectively and efficiently. 263 | Always format your response as follows: 264 | 265 | Current State of Production: 266 | 267 | 268 | Always end the format with "Next Question" 269 | 270 | You should focus on the ongoing conversation and provide suggestions that contribute to the overall progress. 271 | Please avoid intervening excessively or attempting to control the conversation.
272 | 273 | Keep in mind the roles of other agents as well: 274 | - {assistant_role_name}: Assist the {user_role_name} in completing the task by providing relevant information and following their instructions. 275 | - {user_role_name}: Provide instructions and collaborate with other agents. 276 | - {coding_role_name}: Develop a large and complex prototype based on the ongoing discussion. 277 | - {monitor_role_name}: Observe the conversation and ensure that all agents are adhering to the task goal. Intervene when necessary. 278 | 279 | Let's collaborate effectively to accomplish our task! 280 | We are a group of collective agents, not humans. DO NOT CREATE DEADLINES, WE WORK STEP BY STEP! Our goal is to strive towards completing the given task: {task}. Refrain from being "chatty" and continually improve the product in collaboration with the other agents. 281 | """ 282 | ) 283 | 284 | monitor_inception_prompt = ( 285 | """Never forget you are a {monitor_role_name} and I am a {user_role_name}. Never flip roles! 286 | We share a common interest in collaborating to successfully complete a task. 287 | Your role is to monitor the conversation and ensure goal conformance. 288 | Here is the task: {task}. Never forget our task! 289 | 290 | As the {monitor_role_name}, you should closely observe the conversation among the agents. 291 | Your goal is to ensure that all agents are adhering to the task goal and following the guidelines set by the prompt. 292 | 293 | Intervene when necessary to steer the conversation back on track or address any potential issues related to the task or collaboration. 294 | 295 | Keep in mind the roles of other agents as well: 296 | - {assistant_role_name}: Assist the {user_role_name} in completing the task by providing relevant information and following their instructions. 297 | - {user_role_name}: Provide instructions and collaborate with other agents.
298 | - {thoughtful_role_name}: Provide thoughtful suggestions to guide the conversation and contribute to the overall progress. 299 | - {coding_role_name}: Develop a large and complex prototype based on the ongoing discussion. 300 | 301 | Let's collaborate effectively to accomplish our task!""" 302 | ) 303 | 304 | coding_inception_prompt = ( 305 | """Never forget you are a {coding_role_name} and I am a {user_role_name}. Never flip roles! 306 | We share a common interest in collaborating to successfully complete a task. 307 | Your role is to develop a large and complex prototype based on the ongoing discussion. 308 | Here is the task: {task}. Never forget our task! 309 | 310 | As the {coding_role_name}, you should actively follow the conversation and develop a large and complex prototype based on the ongoing discussion. 311 | Your goal is to create a prototype that aligns with the requirements and objectives discussed by the agents. 312 | 313 | Please ensure that you consider all relevant information provided during the conversation and incorporate it into the prototype. 314 | 315 | Keep in mind the roles of other agents as well: 316 | - {assistant_role_name}: Assist the {user_role_name} in completing the task by providing relevant information and following their instructions. 317 | - {user_role_name}: Provide instructions and collaborate with other agents. 318 | - {thoughtful_role_name}: Provide thoughtful suggestions to guide the conversation and contribute to the overall progress. 319 | - {monitor_role_name}: Observe the conversation and ensure that all agents are adhering to the task goal. Intervene when necessary. 
320 | 321 | Let's collaborate effectively to accomplish our task!""" 322 | ) 323 | 324 | 325 | def write_conversation_to_file(conversation, filename): 326 | def timestamp(): 327 | now = datetime.datetime.now() 328 | timestamp = now.strftime("%H%M%d%m%Y") 329 | return timestamp 330 | 331 | def append_timestamp_to_filename(filename): 332 | base, extension = os.path.splitext(filename) 333 | new_filename = f"{base}-{timestamp()}{extension}" 334 | return new_filename 335 | 336 | filename = os.path.join(conversation_directory, append_timestamp_to_filename(filename)) 337 | 338 | try: 339 | with open(filename, 'w') as f: 340 | for turn in conversation: 341 | speaker, statement = turn 342 | f.write(f"{speaker}: {statement}\n\n") 343 | print(f"Conversation successfully written to {filename}") 344 | except Exception as e: 345 | print(f"Failed to write conversation to file: {e}") 346 | 347 | 348 | 349 | 350 | 351 | 352 | 353 | 354 | 355 | def get_sys_msgs(assistant_role_name, user_role_name, task, coding_role_name, thoughtful_role_name, monitor_role_name): 356 | assistant_sys_template = SystemMessagePromptTemplate.from_template(template=assistant_inception_prompt) 357 | assistant_sys_msg = assistant_sys_template.format_messages( 358 | assistant_role_name=assistant_role_name, 359 | user_role_name=user_role_name, 360 | task=task, 361 | coding_role_name=coding_role_name, 362 | thoughtful_role_name=thoughtful_role_name, 363 | monitor_role_name=monitor_role_name 364 | )[0] 365 | 366 | user_sys_template = SystemMessagePromptTemplate.from_template(template=user_inception_prompt) 367 | user_sys_msg = user_sys_template.format_messages( 368 | assistant_role_name=assistant_role_name, 369 | user_role_name=user_role_name, 370 | task=task, 371 | coding_role_name=coding_role_name, 372 |
thoughtful_role_name=thoughtful_role_name, 373 | monitor_role_name=monitor_role_name 374 | )[0] 375 | 376 | thoughtful_sys_template = SystemMessagePromptTemplate.from_template(template=thoughtful_inception_prompt) 377 | thoughtful_sys_msg = thoughtful_sys_template.format_messages( 378 | assistant_role_name=assistant_role_name, 379 | user_role_name=user_role_name, 380 | task=task, 381 | coding_role_name=coding_role_name, 382 | thoughtful_role_name=thoughtful_role_name, 383 | monitor_role_name=monitor_role_name 384 | )[0] 385 | 386 | monitor_sys_template = SystemMessagePromptTemplate.from_template(template=monitor_inception_prompt) 387 | monitor_sys_msg = monitor_sys_template.format_messages( 388 | assistant_role_name=assistant_role_name, 389 | user_role_name=user_role_name, 390 | task=task, 391 | coding_role_name=coding_role_name, 392 | thoughtful_role_name=thoughtful_role_name, 393 | monitor_role_name=monitor_role_name 394 | )[0] 395 | 396 | coding_sys_template = SystemMessagePromptTemplate.from_template(template=coding_inception_prompt) 397 | coding_sys_msg = coding_sys_template.format_messages( 398 | assistant_role_name=assistant_role_name, 399 | user_role_name=user_role_name, 400 | task=task, 401 | coding_role_name=coding_role_name, 402 | thoughtful_role_name=thoughtful_role_name, 403 | monitor_role_name=monitor_role_name 404 | )[0] 405 | 406 | return assistant_sys_msg, user_sys_msg, thoughtful_sys_msg, monitor_sys_msg, coding_sys_msg 407 | 408 | def initialize_chats( 409 | assistant_role_name, user_role_name, task, coding_role_name, thoughtful_role_name, monitor_role_name 410 | ): 411 | assistant_sys_msg, user_sys_msg, thoughtful_sys_msg, monitor_sys_msg, coding_sys_msg = get_sys_msgs( 412 | assistant_role_name, user_role_name, task, coding_role_name, thoughtful_role_name, monitor_role_name 413 | ) 414 | 415 | # Initialize your coding agent 416 | assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 417 | 
user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 418 | thoughtful_agent = CAMELAgent(thoughtful_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 419 | monitor_agent = CAMELAgent(monitor_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 420 | coding_agent = CAMELAgent(coding_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 421 | 422 | return assistant_agent, user_agent, thoughtful_agent, monitor_agent, coding_agent, coding_sys_msg 423 | 424 | # Build the initial system messages for each agent (initialize_chats itself is not called here) 425 | assistant_sys_msg, user_sys_msg, thoughtful_sys_msg, monitor_sys_msg, coding_sys_msg = get_sys_msgs( 426 | assistant_role_name, user_role_name, task, coding_role_name, thoughtful_role_name, monitor_role_name 427 | ) 428 | 429 | 430 | 431 | 432 | task_specifier_template = HumanMessagePromptTemplate.from_template(template=task_specifier_prompt) 433 | task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=0.7)) 434 | task_specifier_msg = task_specifier_template.format_messages( 435 | assistant_role_name=assistant_role_name, 436 | user_role_name=user_role_name, 437 | task=task, 438 | coding_role_name=coding_role_name, 439 | thoughtful_role_name=thoughtful_role_name, 440 | monitor_role_name=monitor_role_name, 441 | word_limit=word_limit 442 | )[0] 443 | specified_task_msg = task_specify_agent.step(task_specifier_msg) 444 | specified_task = specified_task_msg.content 445 | 446 | print(f"Specified task: {specified_task}") 447 | 448 | assistant_sys_msg, user_sys_msg, thoughtful_sys_msg, monitor_sys_msg, coding_sys_msg = get_sys_msgs( # also refresh the coding message so the coding agent sees the specified task 449 | assistant_role_name, user_role_name, specified_task, coding_role_name, thoughtful_role_name, monitor_role_name 450 | ) 451 | 452 | # Reinitialize other agents with updated system messages 453 | assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 454 | user_agent = CAMELAgent(user_sys_msg,
ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 455 | thoughtful_agent = CAMELAgent(thoughtful_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 456 | monitor_agent = CAMELAgent(monitor_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 457 | coding_agent = CAMELAgent(coding_sys_msg, ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.2)) 458 | 459 | assistant_msg = HumanMessage( 460 | content=( 461 | f"{user_sys_msg.content}. " 462 | "Now start giving me instructions one by one. " 463 | "Only reply with Instruction and Input." 464 | ) 465 | ) 466 | 467 | from typing import List 468 | import re 469 | import os 470 | import glob 471 | 472 | # Check if the directory exists 473 | if not os.path.exists(conversation_directory): 474 | # If not, create the directory 475 | os.makedirs(conversation_directory) 476 | 477 | # Then you can use it as your workspace 478 | os.chdir(conversation_directory) 479 | 480 | 481 | # Function to create directories recursively if they don't already exist 482 | def create_directory(directory_path): 483 | try: 484 | os.makedirs(directory_path, exist_ok=True) 485 | print(f"Created directory: {directory_path}") 486 | except Exception as e: 487 | print(f"Error while creating directory: {directory_path}. 
Error: {str(e)}") 488 | 489 | # Function to recursively generate the file structure and scripts 490 | def generate_file_structure_and_scripts(file_structure_content, coding_agent, project_directory="workspace"): 491 | os.makedirs(project_directory, exist_ok=True) 492 | 493 | lines = file_structure_content.split("\n") 494 | current_directory = project_directory 495 | indentation_levels = [0] 496 | 497 | for line in lines: 498 | stripped_line = line.lstrip() 499 | indentation = len(line) - len(stripped_line) 500 | 501 | if stripped_line.endswith(':'): 502 | # This is a directory 503 | directory_name = stripped_line[:-1] # removing the colon at the end 504 | if directory_name.startswith("```"): # ignore lines enclosed in triple backticks 505 | continue 506 | current_directory = os.path.join(current_directory, directory_name) 507 | os.makedirs(current_directory, exist_ok=True) 508 | indentation_levels.append(indentation) 509 | 510 | elif stripped_line and not stripped_line.startswith("```"): 511 | # This is a file 512 | while indentation < indentation_levels[-1]: # Moving up in the directory tree 513 | current_directory = os.path.dirname(current_directory) 514 | indentation_levels.pop() 515 | 516 | file_name = stripped_line.strip('/') 517 | 518 | if file_name: # This ignores empty lines 519 | file_path = os.path.join(current_directory, file_name) 520 | 521 | if not os.path.exists(file_path): 522 | if stripped_line.endswith('/'): # if the name ends with '/' treat it as a directory 523 | os.makedirs(file_path, exist_ok=True) 524 | print(f"Created directory: {file_path}") 525 | else: 526 | # Ensure parent directory exists 527 | parent_directory = os.path.dirname(file_path) 528 | os.makedirs(parent_directory, exist_ok=True) 529 | 530 | code_prompt = f"As the {coding_role_name}, provide code for the file with little to no placeholder code; this is meant to be a functional prototype: {file_name}" 531 | code_ai_msg = coding_agent.step(HumanMessage(content=code_prompt)) 532 | 533 |
if "```" in code_ai_msg.content: 534 | code_content = "\n".join(code_ai_msg.content.split("```")[1].split("\n")[1:-1]) # Updated code extraction 535 | # Remove placeholder end points 536 | code_content = code_content.replace('...', '') 537 | else: 538 | print(f"Warning: AI response does not contain expected code block for file: {file_name}") 539 | code_content = "" 540 | 541 | with open(file_path, 'w') as f: 542 | f.write(code_content) 543 | print(f"Created file: {file_path}") 544 | 545 | if stripped_line.endswith('/'): # Update current directory for the next file or directory 546 | current_directory = file_path 547 | indentation_levels.append(indentation) 548 | 549 | 550 | 551 | # Check if we need to go up in directory tree 552 | if indentation < indentation_levels[-1]: 553 | while indentation < indentation_levels[-1]: 554 | current_directory = os.path.dirname(current_directory) 555 | indentation_levels.pop() 556 | 557 | # Now we prompt the Coding Agent to refine the created code 558 | for file_path in get_all_files_in_directory(project_directory): 559 | # Read the original code from the file 560 | with open(file_path, 'r') as file: 561 | original_code = file.read() 562 | 563 | # Ask the coding agent to refine the code 564 | refinement_prompt = f"As the {coding_agent}, please fill in all and any placeholder logic in the following code while expanding fuctionality when you can: \n\n{original_code}" 565 | refinement_ai_msg = coding_agent.step(AIMessage(content=refinement_prompt)) 566 | 567 | # Extract the refined code from the AI response 568 | refined_code = refinement_ai_msg.content.split("```")[1].strip() # Extract the code content only 569 | 570 | # Write the refined code back to the file 571 | with open(file_path, 'w') as file: 572 | file.write(refined_code) 573 | 574 | print(f"Refined file: {file_path}") 575 | 576 | 577 | 578 | # Function to write code to a file 579 | def write_code_to_file(file_path, code_content): 580 | # Check if the file path is a 
directory, if so, print message and return 581 | if os.path.isdir(file_path): 582 | print(f"Skipping directory: {file_path}") 583 | return 584 | 585 | # Check if the parent directory of the file path is a directory, if not, print message and return 586 | if not os.path.isdir(os.path.dirname(file_path)): 587 | print(f"Parent directory does not exist: {os.path.dirname(file_path)}") 588 | return 589 | 590 | # Check if the file already exists, if not, create and write to it 591 | if not os.path.exists(file_path): 592 | with open(file_path, "w") as file: 593 | file.write(code_content) 594 | print(f"Generated code for file: {file_path}") 595 | else: 596 | print(f"Skipping existing file: {file_path}") 597 | 598 | 599 | # Function to extract files from the file structure 600 | def extract_files_from_file_structure(file_structure_content): 601 | files = [] 602 | lines = file_structure_content.content.split("\n") 603 | current_directory = "" 604 | 605 | for line in lines: 606 | if line.startswith(" "): 607 | # File or subdirectory 608 | file_match = re.match(r"^\s+([├└──]+) (.+)", line) 609 | if file_match: 610 | indentation = file_match.group(1) 611 | file_name = file_match.group(2) 612 | path = os.path.join(current_directory, file_name) if current_directory else file_name 613 | files.append(path) 614 | else: 615 | # Directory 616 | directory_match = re.match(r"^([├└──]+) (.+)/$", line) 617 | if directory_match: 618 | indentation = directory_match.group(1) 619 | directory_name = directory_match.group(2) 620 | current_directory = os.path.join(current_directory, directory_name) 621 | 622 | return files 623 | 624 | # Function to get all files in a directory, including nested directories 625 | def get_all_files_in_directory(directory): 626 | return [f for f in glob.glob(directory + "**/*", recursive=True) if os.path.isfile(f)] 627 | 628 | # Truncate the conversation text to a specific number of tokens 629 | def truncate_text(text, max_tokens): 630 | tokens = text.split() 631 | 
if len(tokens) > max_tokens: 632 | tokens = tokens[:max_tokens] 633 | return " ".join(tokens) 634 | 635 | 636 | conversation = [] 637 | total_tokens = 0 638 | preserve_last_complete_message = True 639 | 640 | assistant_inception_msg = SystemMessage(content=assistant_inception_prompt) 641 | thoughtful_inception_msg = SystemMessage(content=thoughtful_inception_prompt) 642 | coding_inception_msg = SystemMessage(content=coding_inception_prompt) 643 | user_inception_msg = SystemMessage(content=user_inception_prompt) 644 | user_agent.init_messages() 645 | user_agent.update_messages(user_inception_msg) 646 | 647 | # Initialize the MonitorAgent 648 | monitor_agent = MonitorAgent(monitor_inception_prompt, "gpt-3.5-turbo-16k") 649 | 650 | # Add agents to the list 651 | agents = [ 652 | (user_role_name, user_agent, HumanMessage, user_inception_msg), 653 | (assistant_role_name, assistant_agent, HumanMessage, assistant_inception_msg), 654 | (thoughtful_role_name, thoughtful_agent, AIMessage, thoughtful_inception_msg), 655 | (coding_role_name, coding_agent, AIMessage, coding_inception_msg), 656 | ] 657 | 658 | # Set the number of loops for user, assistant, and thoughtful agents 659 | loop_count = 4 660 | 661 | # Set the number of main loops before running the coding agent and monitor agent intervention 662 | main_loops_before_coding = 4 663 | main_loops_before_monitor_intervention = 6 664 | 665 | # Main conversation loop 666 | with get_openai_callback() as cb: 667 | chat_turn_limit = 50 668 | main_loop_count = 0 669 | 670 | for n in range(chat_turn_limit): 671 | separator_line = "\n" + "=" * 60 + "\n" 672 | 673 | # User, Assistant, Thoughtful loop 674 | for _ in range(loop_count): 675 | for i, (role_name, agent, MessageClass, inception_msg) in enumerate(agents[:-1]): 676 | if n == 1 and role_name == user_role_name: 677 | ai_msg = agent.step(inception_msg) 678 | else: 679 | # Gather previous agent messages excluding the current agent's own responses 680 | prev_agent_responses = 
[msg[1] for msg in conversation if msg[0] != role_name] 681 | 682 | # Filter out messages that are not strings or AIMessage objects 683 | prev_agent_responses = [msg for msg in prev_agent_responses if isinstance(msg, str) or isinstance(msg, AIMessage)] 684 | 685 | # Extract the content from each message and join them with a newline 686 | message_content = "\n".join([msg.content if isinstance(msg, AIMessage) else msg for msg in prev_agent_responses[-2:]]) 687 | ai_msg = agent.step(AIMessage(content=message_content)) 688 | 689 | msg = MessageClass(content=ai_msg.content) 690 | conversation.append((role_name, msg.content)) 691 | total_tokens += len(msg.content.split()) 692 | print(separator_line) 693 | print(f"\n{'-' * 50}\n{role_name}:\n{'-' * 50}\n{msg.content}\n") 694 | print(separator_line) 695 | 696 | if total_tokens > TOKEN_LIMIT: 697 | print("Token limit exceeded. Truncating conversation.") 698 | if preserve_last_complete_message: 699 | last_complete_message = "\n".join([msg.content for _, _, msg, _ in agents[i-1:i-2]]) 700 | 701 | # Increment the main_loop_count after one full loop 702 | main_loop_count += 1 703 | 704 | 705 | # Coding agent loop after main_loops_before_coding full main loops 706 | if main_loop_count % main_loops_before_coding == 0: 707 | role_name, coding_agent, MessageClass, coding_inception_msg = agents[-1] 708 | 709 | # Find the previous main loop and refinement response by the coding agent 710 | prev_main_loop = None 711 | prev_refinement_response = None 712 | for agent_name, msg in reversed(conversation): 713 | if agent_name == coding_role_name: 714 | if isinstance(msg, AIMessage): 715 | prev_main_loop = msg.content 716 | elif prev_main_loop is not None and prev_refinement_response is None: 717 | if isinstance(msg, AIMessage): 718 | prev_refinement_response = msg.content 719 | break 720 | 721 | # Generate the file structure and scripts based on the file structure content 722 | file_structure_prompt = ( 723 | f"As the {coding_role_name}, 
based on the previous main loop and refinement, please generate a hypothetical file structure " 724 | f"that would be suitable for this coding project.\n\n" 725 | f"{prev_main_loop}" 726 | ) 727 | file_structure_ai_msg = coding_agent.step(MessageClass(content=file_structure_prompt)) 728 | file_structure_msg = MessageClass(content=file_structure_ai_msg.content) 729 | conversation.append((role_name, file_structure_msg)) 730 | total_tokens += len(file_structure_msg.content.split()) 731 | print(separator_line) 732 | print(f"\n{'-' * 50}\n{role_name}:\n{'-' * 50}\n{file_structure_msg.content}\n") 733 | print(separator_line) 734 | 735 | # After you've received the response from the Python Coding Expert 736 | response = file_structure_msg.content # Replace with actual response content 737 | 738 | # Extract the file structure content from the response 739 | file_structure_content = response.split('```')[1].strip() 740 | 741 | # Generate file structure 742 | generate_file_structure_and_scripts(file_structure_content, coding_agent, conversation_directory) 743 | 744 | # Print message 745 | print(separator_line) 746 | print(f"\n{'-' * 50}\n{role_name}:\n{'-' * 50}\n{file_structure_msg.content}\n") 747 | print(separator_line) 748 | 749 | print(f"Total Successful Requests: {cb.successful_requests}") 750 | print(f"Total Tokens Used: {cb.total_tokens}") 751 | print(f"Prompt Tokens: {cb.prompt_tokens}") 752 | print(f"Completion Tokens: {cb.completion_tokens}") 753 | print(f"Total Cost (USD): ${cb.total_cost}") 754 | 755 | write_conversation_to_file(conversation, 'conversation.txt') 756 | --------------------------------------------------------------------------------