.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [![Discord](https://img.shields.io/badge/Discord-join%20chat-5865F2?style=for-the-badge&logo=discord)](https://discord.gg/WCNEFsrtjw)
2 | [![Stargazers][stars-shield]][stars-url]
3 | [![Issues][issues-shield]][issues-url]
4 | [![License][license-shield]][license-url]
5 |
6 | # Open Architect
7 |
8 | Orchestrate your fleet of AI software designers, engineers, and reviewers!
9 |
10 | *Created at the MistralAI hackathon in SF*
11 |
18 | Just create tickets (or have an AI architect assist you), and let the agents do the work. Your approval is required to merge PRs, so no buggy or unwanted code ever lands without your sign-off!
19 |
20 | ## Setup
21 |
22 | 1. Clone this repo
23 | 2. Install dependencies with `pip install -r requirements.txt`
24 | 3. Configure the connections to your repo, ticket board, and OpenAI account by running the `init_connections.py` script, which stores everything in a `.env` file (see the sample below)
25 | 4. Profit
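
After `init_connections.py` has run, your `.env` should contain the following keys (the values below are placeholders):

```env
GITHUB_REPO_URL=your-org/your-repo
GITHUB_TOKEN_INTERN=github_pat_xxx
GITHUB_TOKEN_REVIEWER=github_pat_xxx
OPENAI_API_KEY=sk-xxx
TRELLO_API_KEY=xxx
TRELLO_API_SECRET=xxx
TRELLO_TOKEN=xxx
TRELLO_BOARD_ID=xxx
```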
26 |
27 | ### How to connect to GitHub
28 |
29 | 1. Be a member of the org that owns your target repo
30 | 2. Go to https://github.com/settings/tokens?type=beta and create a fine-grained personal access token with access to the repo
31 | 3. Run the `init_connections.py` script and provide the URL to your repo and the access token
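
Under the hood, `src/helpers/github.py` authenticates with PyGithub roughly like this (a minimal sketch; the token and repo identifier are placeholders):

```python
from github import Auth, Github

# Placeholders: substitute your token and your "owner/repo" identifier
auth = Auth.Token("github_pat_your_token_here")
gh = Github(auth=auth)
repo = gh.get_repo("your-org/your-repo")
print(f"Selected repo: {repo.name}")
```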
32 |
33 | ### How to connect to Trello
34 |
35 | 1. Go to https://trello.com/ and log in. Navigate to your target board and save its board ID.
36 | 2. Create a Power-Up for your workspace at https://trello.com/power-ups/admin (you don't need the Iframe connector URL). Save your API key and secret.
37 | 3. Run the `init_connections.py` script and provide the needed secrets
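
For reference, `init_connections.py` obtains `TRELLO_TOKEN` through py-trello's interactive OAuth flow, roughly like this (a sketch; key and secret are placeholders):

```python
from trello import create_oauth_token

# Runs an interactive flow that asks you to authorize the app in your browser
auth_token = create_oauth_token(
    key="your_trello_api_key",
    secret="your_trello_api_secret",
    name="Trello API",
)
trello_token = auth_token["oauth_token"]
```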
38 |
39 | ### Other requirements
40 |
41 | We currently use OpenAI for inference, so you'll need an [OpenAI API key](https://platform.openai.com/api-keys) with some credits (we currently use GPT-3.5 Turbo and GPT-4, see pricing [here](https://openai.com/pricing#:~:text=1M%20tokens-,GPT%2D3.5%20Turbo,-GPT%2D3.5%20Turbo)).
42 |
43 | Then run the `init_connections.py` script and provide your key.
44 |
45 |
46 | ## How to run
47 |
48 | Design tickets and add them to your backlog with:
49 | - `streamlit run start_architecting.py` to spawn an architect chatbot that will design code tickets with you.
50 |
51 | Run your fleet with:
52 | - `./oa run intern` to spawn a developer that will process tickets and open PRs with code.
53 | - `./oa run reviewer` to spawn a reviewer that will review PRs and request changes.
54 |
55 | You can also start several agents at once, e.g. `./oa run intern reviewer`.
56 |
57 | ## Contributing
58 |
59 | This is intended to be a collaborative project, and we'd love to take suggestions for new features, improvements, and fixes!
60 |
61 | - If there is something you'd like us to work on, feel free to open an **Issue** with the appropriate tag and a clear description. If it's a bug, please include steps to reproduce it.
62 | - If you have a contribution you'd like to make: first of all, thanks! You rock! Please open a PR and we'll review it as soon as we can!
63 |
64 | [stars-shield]: https://img.shields.io/github/stars/OpenArchitectAI/open-architect?style=for-the-badge
65 | [stars-url]: https://github.com/OpenArchitectAI/open-architect/stargazers
66 | [issues-shield]: https://img.shields.io/github/issues/OpenArchitectAI/open-architect?style=for-the-badge
67 | [issues-url]: https://github.com/OpenArchitectAI/open-architect/issues
68 | [license-shield]: https://img.shields.io/github/license/OpenArchitectAI/open-architect?style=for-the-badge
69 | [license-url]: https://github.com/OpenArchitectAI/open-architect/blob/main/LICENSE
70 |
--------------------------------------------------------------------------------
/create_backlog_tickets_for_all.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | from random import randint
3 | import os
4 |
5 | from src.models import Ticket
6 | from src.helpers.trello import TrelloHelper
7 |
8 | load_dotenv()
9 |
10 | N_TICKETS = 10
11 |
12 | trello_api_key = os.getenv("TRELLO_API_KEY")
13 | trello_api_secret = os.getenv("TRELLO_API_SECRET")
14 | trello_token = os.getenv("TRELLO_TOKEN")
15 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
16 |
17 | th = TrelloHelper(trello_api_key, trello_token, trello_board_id)
18 |
19 | tickets = [
20 | Ticket(
21 | title="Test Ticket " + str(randint(0, 10000)),
22 | description="This is a test ticket",
23 | )
24 | for _ in range(N_TICKETS)
25 | ]
26 | th.push_tickets_to_backlog_and_assign(tickets)
27 |
--------------------------------------------------------------------------------
/init_connections.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import os
3 |
4 | from trello import create_oauth_token
5 |
6 | load_dotenv()
7 |
8 | gh_repo = os.getenv("GITHUB_REPO_URL")
9 | gh_api_token_intern = os.getenv("GITHUB_TOKEN_INTERN")
10 | gh_api_token_reviewer = os.getenv("GITHUB_TOKEN_REVIEWER")
11 | openai_api_key = os.getenv("OPENAI_API_KEY")
12 | trello_api_key = os.getenv("TRELLO_API_KEY")
13 | trello_api_secret = os.getenv("TRELLO_API_SECRET")
14 | trello_token = os.getenv("TRELLO_TOKEN")
15 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
16 |
17 | if gh_repo is None:
18 | gh_repo = input("Enter the GitHub repo URL: ")
19 | if gh_api_token_intern is None:
20 |     gh_api_token_intern = input("Enter your GitHub Personal Access token for the intern (https://github.com/settings/tokens): ")
21 | if gh_api_token_reviewer is None:
22 |     gh_api_token_reviewer = input("Enter your GitHub Personal Access token for the reviewer (https://github.com/settings/tokens): ")
23 |
24 | if trello_api_key is None:
25 | trello_api_key = input("Enter your Trello API key (https://trello.com/power-ups/admin): ")
26 | if trello_api_secret is None:
27 | trello_api_secret = input("Enter your Trello API secret (https://trello.com/power-ups/admin): ")
28 | if trello_token is None:
29 | auth_token = create_oauth_token(key=trello_api_key, secret=trello_api_secret, name='Trello API')
30 | trello_token = auth_token['oauth_token']
31 | if trello_board_id is None:
32 | trello_board_id = input("Enter your Trello Board ID: ")
33 |
34 | if openai_api_key is None:
35 | openai_api_key = input("Enter your OpenAI API key (https://platform.openai.com/api-keys): ")
36 |
37 |
38 | # Write them back to the .env file
39 | with open(".env", "w") as f:
40 | f.write(f"GITHUB_REPO_URL={gh_repo}\n")
41 | f.write(f"GITHUB_TOKEN_INTERN={gh_api_token_intern}\n")
42 | f.write(f"GITHUB_TOKEN_REVIEWER={gh_api_token_reviewer}\n")
43 | f.write(f"OPENAI_API_KEY={openai_api_key}\n")
44 | f.write(f"TRELLO_API_KEY={trello_api_key}\n")
45 | f.write(f"TRELLO_API_SECRET={trello_api_secret}\n")
46 | f.write(f"TRELLO_TOKEN={trello_token}\n")
47 | f.write(f"TRELLO_BOARD_ID={trello_board_id}\n")
48 |
49 | print("Environment variables set up successfully")
--------------------------------------------------------------------------------
/oa:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Get the directory of the current script
4 | SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
5 |
6 | # Run the Python script with the provided arguments
7 | python "${SCRIPT_DIR}/oa_cli.py" "$@"
8 |
--------------------------------------------------------------------------------
/oa_cli.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from dotenv import load_dotenv
3 | from threading import Thread
4 | import os
5 | from src.helpers.github import GHHelper
6 | from src.helpers.trello import TrelloHelper
7 | from src.agents.intern import Intern
8 | from src.agents.reviewer import Reviewer
9 |
10 |
11 | def start_intern(gh_helper_intern, trello_helper):
12 | intern = Intern("Alex", gh_helper=gh_helper_intern, board_helper=trello_helper)
13 | intern_thread = Thread(target=intern.run)
14 | intern_thread.start()
15 |
16 |
17 | def start_reviewer(gh_helper_reviewer, trello_helper):
18 | reviewer = Reviewer(
19 | "Charlie", gh_helper=gh_helper_reviewer, board_helper=trello_helper
20 | )
21 | reviewer_thread = Thread(target=reviewer.run)
22 | reviewer_thread.start()
23 |
24 |
25 | def main():
26 | parser = argparse.ArgumentParser(description="Open Architect CLI")
27 | parser.add_argument("command", choices=["run"], help="Command to execute")
28 | parser.add_argument(
29 | "agents", nargs="+", choices=["intern", "reviewer"], help="Agents to run"
30 | )
31 | args = parser.parse_args()
32 |
33 | load_dotenv()
34 |
35 | gh_repo = os.getenv("GITHUB_REPO_URL")
36 | gh_api_token_intern = os.getenv("GITHUB_TOKEN_INTERN")
37 | gh_api_token_reviewer = os.getenv("GITHUB_TOKEN_REVIEWER")
38 | openai_api_key = os.getenv("OPENAI_API_KEY")
39 | trello_api_key = os.getenv("TRELLO_API_KEY")
40 | trello_api_secret = os.getenv("TRELLO_API_SECRET")
41 | trello_token = os.getenv("TRELLO_TOKEN")
42 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
43 |
44 | if (
45 | gh_repo is None
46 | or gh_api_token_intern is None
47 | or gh_api_token_reviewer is None
48 | or openai_api_key is None
49 | or trello_api_key is None
50 | or trello_api_secret is None
51 | or trello_token is None
52 | or trello_board_id is None
53 | ):
54 | print(
55 | "Please run the init_connections.py script to set up the environment variables"
56 | )
57 | return
58 |
59 | gh_helper_intern = GHHelper(gh_api_token_intern, gh_repo)
60 | gh_helper_reviewer = GHHelper(gh_api_token_reviewer, gh_repo)
61 | trello_helper = TrelloHelper(trello_api_key, trello_token, trello_board_id)
62 |
63 | if args.command == "run":
64 | for agent in args.agents:
65 | if agent == "intern":
66 | start_intern(gh_helper_intern, trello_helper)
67 | elif agent == "reviewer":
68 | start_reviewer(gh_helper_reviewer, trello_helper)
69 |
70 |
71 | if __name__ == "__main__":
72 | main()
73 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aiohttp==3.9.3
2 | aiosignal==1.3.1
3 | alembic==1.13.1
4 | altair==5.2.0
5 | annotated-types==0.6.0
6 | anyio==4.3.0
7 | attrs==23.2.0
8 | backoff==2.2.1
9 | blinker==1.7.0
10 | cachetools==5.3.3
11 | certifi==2024.2.2
12 | cffi==1.16.0
13 | charset-normalizer==3.3.2
14 | click==8.1.7
15 | colorlog==6.8.2
16 | cryptography==42.0.5
17 | datasets==2.14.7
18 | Deprecated==1.2.14
19 | dill==0.3.7
20 | distro==1.9.0
21 | dspy-ai==2.4.0
22 | filelock==3.13.3
23 | frozenlist==1.4.1
24 | fsspec==2023.10.0
25 | gitdb==4.0.11
26 | GitPython==3.1.42
27 | h11==0.14.0
28 | httpcore==1.0.5
29 | httpx==0.27.0
30 | huggingface-hub==0.22.1
31 | idna==3.6
32 | Jinja2==3.1.3
33 | joblib==1.3.2
34 | jsonschema==4.21.1
35 | jsonschema-specifications==2023.12.1
36 | Mako==1.3.2
37 | markdown-it-py==3.0.0
38 | MarkupSafe==2.1.5
39 | mdurl==0.1.2
40 | multidict==6.0.5
41 | multiprocess==0.70.15
42 | numpy==1.26.4
43 | oauthlib==3.2.2
44 | openai==1.14.3
45 | optuna==3.6.0
46 | orjson==3.10.0
47 | packaging==23.2
48 | pandas==2.2.1
49 | pillow==10.2.0
50 | protobuf==4.25.3
51 | py-trello==0.19.0
52 | pyarrow==15.0.2
53 | pyarrow-hotfix==0.6
54 | pycparser==2.21
55 | pydantic==2.5.0
56 | pydantic_core==2.14.1
57 | pydeck==0.8.1b0
58 | PyGithub==2.3.0
59 | Pygments==2.17.2
60 | PyJWT==2.8.0
61 | PyNaCl==1.5.0
62 | python-dateutil==2.9.0.post0
63 | python-dotenv==1.0.1
64 | pytz==2024.1
65 | PyYAML==6.0.1
66 | referencing==0.34.0
67 | regex==2023.12.25
68 | requests==2.31.0
69 | requests-oauthlib==2.0.0
70 | rich==13.7.1
71 | rpds-py==0.18.0
72 | six==1.16.0
73 | smmap==5.0.1
74 | sniffio==1.3.1
75 | SQLAlchemy==2.0.29
76 | streamlit==1.32.2
77 | tenacity==8.2.3
78 | toml==0.10.2
79 | toolz==0.12.1
80 | tornado==6.4
81 | tqdm==4.66.2
82 | typing_extensions==4.10.0
83 | tzdata==2024.1
84 | ujson==5.9.0
85 | urllib3==2.2.1
86 | wrapt==1.16.0
87 | xxhash==3.4.1
88 | yarl==1.9.4
89 |
--------------------------------------------------------------------------------
/src/agents/architect/__init__.py:
--------------------------------------------------------------------------------
1 | import json
2 | from typing import Any, List
3 |
4 | from pydantic import BaseModel
5 | from openai import OpenAI
6 | import streamlit as st
7 |
8 | from src.helpers.github import GHHelper
9 | from src.helpers.board import BoardHelper
10 | from src.lib.terminal import colorize
11 | from src.models import Ticket
12 |
13 |
14 | class ArchitectAgentRequest(BaseModel):
15 | question: str
16 | history: Any
17 |
18 | class CreateTicketsRequest(BaseModel):
19 | question: str
20 | history: Any
21 |
22 | class CreateSubtasksRequest(BaseModel):
23 | question: str
24 | history: Any
25 |
26 | class AskFollowupQuestionsRequest(BaseModel):
27 | question: str
28 | history: Any
29 |
30 | class ReferenceExistingCodeRequest(BaseModel):
31 | question: str
32 | history: Any
33 |
34 | class Architect:
35 | def __init__(self, name, gh_helper: GHHelper, board_helper: BoardHelper):
36 | self.name = name
37 | self.gh_helper = gh_helper
38 | self.board_helper = board_helper
39 | self.log_name = colorize(f"[{self.name} the Architect]", bold=True, color="green")
40 | print(
41 | f"{self.log_name} Nice to meet you, I'm {self.name} the Architect! I'm here to help you break down your tasks into smaller tickets and create them for you! 🏗️🔨📝"
42 | )
43 |
44 | def run(self):
45 | # Step 1: Create a chat interface
46 | st.title("Open Architect")
47 |
48 | if "messages" not in st.session_state:
49 | st.session_state.messages = []
50 |             st.session_state.messages.append({"role": "assistant", "content": "Hey! What new features would you like to add to your project " + str(self.gh_helper.repo.full_name) + " today? I'll help you break it down into subtasks, figure out how to integrate with your existing code, and then set my crew of SWE agents to get it built out for you!"})
51 |
52 | for message in st.session_state.messages:
53 | with st.chat_message(message["role"]):
54 | st.markdown(message["content"])
55 |
56 | if prompt := st.chat_input("What do you want to build today?"):
57 | st.session_state.messages.append({"role": "user", "content": prompt})
58 | with st.chat_message("user"):
59 | st.markdown(prompt)
60 |
61 | with st.chat_message("assistant"):
62 | architectureAgentReq = ArchitectAgentRequest(
63 | question=prompt,
64 | history=[
65 | msg["content"]
66 | for msg in st.session_state.messages
67 | ],
68 | )
69 | response = self.compute_response(architectureAgentReq)
70 |                 st.write(response)
71 |
72 | st.session_state.messages.append({"role": "assistant", "content": response})
73 |
74 | def compute_response(self, architectAgentRequest: ArchitectAgentRequest):
75 | """
76 | Routing logic for all tools supported by the feedback agent.
77 | """
78 | messages = [
79 | {"role": "system", "content": f"""You are a principal software engineer who is responsible for mentoring engineers and breaking down tasks into smaller tickets.
80 |
81 | Reference the existing codebase to determine how to build the features in the existing code.
82 |
83 | Once you know what to build, you can then break down the task into smaller tickets and then create those tickets.
84 |
85 | After you have all the subtasks proceed to creating the tasks - "Here are the subtasks that I have created for this task - are we good to create the tasks?".
86 |
87 | Create the tasks for the user. - "Creating your tasks".
88 |
89 | You have been given the following question: {architectAgentRequest.question}. Based on the conversation so far {architectAgentRequest.history}, do the following
90 |
91 | - if there is context about what the user is trying to build, reference their existing code
92 | - if there are references to their existing code, create subtasks
93 | - if there are subtasks, ask to create tasks
94 |
95 | """},
96 | {"role": "user", "content": f"address the user's question: {architectAgentRequest.question}"}]
97 |
98 | tools = [
99 | # {
100 | # "type": "function",
101 | # "function": {
102 | # "name": "ask_followup_questions",
103 | # "description": "Ask additional questions to better understand what the user wants to build. This will help you to better understand the project requirements and break the task down into smaller tickets. You should ask questions to clarify the project requirements and get a detailed description of the project. You should not create tickets until you have a clear understanding of the project requirements. Once you have all the details of the project, you can then break down the task into smaller tickets and create those tickets. After you have all the subtasks, proceed to creating the tasks. You should ask the user if they are good to create the tasks and then create the tasks for the user.",
104 | # "parameters": {"type": "object", "properties": {}, "required": []},
105 | # },
106 | # },
107 | {
108 | "type": "function",
109 | "function": {
110 | "name": "create_subtasks",
111 | "description": "If the user asks to break it down into parts, then call create_subtasks. Based on the user's initial descriptions of the task, break the task down into detailed subtasks to accomplish the larger task. Each subtask should include a title and a detailed description of the subtask.",
112 | "parameters": {"type": "object", "properties": {}, "required": []},
113 | },
114 | },
115 | {
116 | "type": "function",
117 | "function": {
118 | "name": "create_tasks",
119 | "description": "When the user asks to create tasks or create tickets in trello, call create tickets. Create tickets based on the subtasks that are generated for the task. This will actually take the subtasks generated and create the trello tickets for them.",
120 | "parameters": {"type": "object", "properties": {}, "required": []},
121 | },
122 | },
123 | {
124 | "type": "function",
125 | "function": {
126 | "name": "reference_existing_code",
127 | "description": "If the user asks to implement in their codebase. Go through the existing code in order to better understand how to build the requested user feature in the codebase. Analyze the code files, and determine the best way to build out support for the new features in the existing code. ",
128 | "parameters": {"type": "object", "properties": {}, "required": []},
129 | }
130 | },
131 | ]
132 |
133 | openai_client = OpenAI()
134 |
135 | response = openai_client.chat.completions.create(
136 | model="gpt-3.5-turbo-1106",
137 | messages=messages,
138 | tools=tools,
139 | tool_choice="auto",
140 | )
141 | response_message = response.choices[0].message
142 | tool_calls = response_message.tool_calls
143 | function_request_mapping = {
144 | # "ask_followup_questions": AskFollowupQuestionsRequest(
145 | # question=architectAgentRequest.question,
146 | # history=architectAgentRequest.history,
147 | # ),
148 | "create_tasks": CreateTicketsRequest(
149 | question=architectAgentRequest.question,
150 | history=architectAgentRequest.history,
151 | ),
152 | "create_subtasks": CreateSubtasksRequest(
153 | question=architectAgentRequest.question,
154 | history=architectAgentRequest.history,
155 | ),
156 | "reference_existing_code": ReferenceExistingCodeRequest(
157 | question=architectAgentRequest.question,
158 | history=architectAgentRequest.history,
159 | )
160 | }
161 |
162 | if tool_calls:
163 | available_functions = {
164 | # "ask_followup_questions": ask_followup_questions,
165 | "create_tasks": self.create_tasks,
166 | "create_subtasks": self.create_subtasks,
167 | "reference_existing_code": self.reference_existing_code,
168 | }
169 | messages.append(response_message)
170 | for tool_call in tool_calls:
171 | function_name = tool_call.function.name
172 | print("Function called is: " + str(function_name))
173 | function_to_call = available_functions[function_name]
174 | print("Function to call is: " + str(function_to_call))
175 | function_args = function_request_mapping[function_name]
176 | print("Function args are: " + str(function_args))
177 | return function_to_call(function_args)
178 |
179 | print("returning message: " + str(response_message.content))
180 | return response_message.content
181 |
182 |
183 | def ask_followup_questions(self, askFollowupQuestionsRequest: AskFollowupQuestionsRequest):
184 | """
185 | This function will be responsible for asking follow up questions to better understand what the user wants to build.
186 | """
187 | try:
188 | questionPrompt = f"""Given the description of the project so far {askFollowupQuestionsRequest.history} and the user's latest question {askFollowupQuestionsRequest.question}, come up with additional follow up questions to further deepen your understanding of what the user is trying to build. Ask more questions about the front end, backend, or hosting requirements. Understand the details of the product features. Ask questions until you are confident that you are able to generate a detailed execution plan for the project. The response should be a list of questions that you can ask the user to better understand the project requirements. Limit to 2-3 questions at a time.
189 | """
190 |
191 | openai_client = OpenAI()
192 | response = openai_client.chat.completions.create(
193 | model="gpt-3.5-turbo-1106",
194 | messages=[
195 | {
196 | "role": "system",
197 | "content": "You are a senior staff engineer, who is responsible for asking in depth follow up questions to deepen your understanding of a problem before you determine a plan to build it.",
198 | },
199 | {"role": "user", "content": questionPrompt},
200 | ],
201 | )
202 | response = response.choices[0].message.content
203 |
204 | return response
205 |
206 |         except Exception as e:
207 |             print("Failed to generate follow-up questions with error " + str(e))
208 |             return "Failed to generate follow-up questions with error " + str(e)
209 |
210 |
211 | def reference_existing_code(self, referenceExistingCodeRequest: ReferenceExistingCodeRequest):
212 | """
213 | This method should take the user's question and search the current codebase for all references to that question. It should then summarize the current code and how the user's request can be built within that codebase.
214 | """
215 | # First get the codebase dict
216 | codebase_dict = self.gh_helper.get_entire_codebase()
217 |
218 | # Now search the codebase for the user's question
219 | codebase = codebase_dict.files
220 |
221 | try:
222 | questionPrompt = f"""Given the description of the project so far {referenceExistingCodeRequest.history} and the user's latest question {referenceExistingCodeRequest.question}, figure out which files in the codebase are most relevant for the user in order to best design a solution to the feature requests. You have this codebase to reference {codebase}. If the feature is completely irrelevant to the user's existing code, let the user know that the feature request seems to be unrelated to the existing codebase and ask them to verify if they do indeed want to build that feature in the current codebase.
223 |
224 | If the feature request seems relevant, your response should be something like
225 |
226 | Going through your existing codebase, I would suggest that we build out _feature_ by modifying the following files _files_ and adding the following functionality to them _functionality description_. Happy to break this down into granular subtasks for you next on how I'm planning to approach this execution.
227 | """
228 | openai_client = OpenAI()
229 |
230 | response = openai_client.chat.completions.create(
231 | model="gpt-3.5-turbo-1106",
232 | messages=[
233 | {
234 | "role": "system",
235 | "content": "You are a senior staff engineer, who is responsible for going through the codebase in detail and coming up with an execution plan of how to build out certain features. You need to reference the codebase to determine which files are most relevant for the user to build out the feature requests and then come up with an execution plan to build it out across several subtasks.",
236 | },
237 | {"role": "user", "content": questionPrompt},
238 | ],
239 | )
240 | response = response.choices[0].message.content
241 |
242 | return response
243 |
244 |         except Exception as e:
245 |             print("Failed to reference existing code with error " + str(e))
246 |             return "Failed to reference existing code with error " + str(e)
247 |
248 | def create_tasks(self, createTicketsRequest: CreateTicketsRequest):
249 | """
250 | This function will be responsible for creating multiple tickets in parallel.
251 | """
252 | # Given the conversation history, create tickets for each subtask
253 | try:
254 | questionPrompt = f"""Given the following subtask information {createTicketsRequest.history}, generate a list of tasks in the following json format
255 | {{
256 | "subtasks": [
257 | {{
258 | "title": "title of the ticket"
259 | "description": "description of the ticket"
260 | }},
261 | ]
262 | }}
263 |
264 | Take each subtask and generate a title and description. Each one should correspond with a list element in the subtask list. You need to cover all of the subtasks that are mentioned and create a ticket for each one. Each ticket should include the title and description of the subtask. The response should be a list of these json objects for each subtask.
265 | """
266 |
267 | openai_client = OpenAI()
268 | response = openai_client.chat.completions.create(
269 | model="gpt-3.5-turbo-1106",
270 | messages=[
271 | {
272 | "role": "system",
273 | "content": "You are a senior staff engineer, who is responsible for breaking up large complex tasks into small, granular subtasks that more junior engineers can easily work through and execute on.",
274 | },
275 | {"role": "user", "content": questionPrompt},
276 | ],
277 |
278 | response_format={ "type": "json_object" }
279 | )
280 |
281 | subtasks = response.choices[0].message.content
282 | print("The tasks created are: " + str(subtasks))
283 | subtask_json = json.loads(subtasks)["subtasks"]
284 |
285 | # Create a list of ticket objects from the subtasks and call create
286 | tickets = []
287 | ticket_titles = []
288 | for subtask in subtask_json:
289 | ticket = Ticket(title=subtask["title"], description=subtask["description"])
290 | tickets.append(ticket)
291 | ticket_titles.append(ticket.title)
292 |
293 | createdTickets = self.board_helper.push_tickets_to_backlog_and_assign(tickets)
294 |
295 | ticketMarkdown = generate_ticket_markdown(createdTickets)
296 |
297 | return "Great! I've just created the following tickets and assigned them to our agents to get started on immediately \n" + ticketMarkdown
298 |
299 | # response = client.chat.completions.create(
300 | # model="gpt-3.5-turbo-1106",
301 | # messages=[
302 | # {
303 | # "role": "system",
304 | # "content": "You are a senior staff engineer, who has just created several tasks for a project. You now need to let the user know which tasks have been created so that they can be worked on. Let them know the titles of the tasks that have been created and say that you will get to working on them right away.",
305 | # },
306 | # {
307 | # "role": "user",
308 | # "content": f"I've just created the following tickets {createdTickets}",
309 | # },
310 | # ],
311 | # )
312 | # finalResponse = response.choices[0].message.content
313 | # return finalResponse
314 |
315 |         except Exception as e:
316 |             print("Failed to create tickets with error " + str(e))
317 |             return "Failed to create tickets with error " + str(e)
318 |
319 | # Define the tool for breaking up the overall project description into multiple smaller tasks and then getting user feedback on them
320 |     def create_subtasks(self, createSubtasksRequest: CreateSubtasksRequest):
321 | """
322 | This function will be responsible for breaking up the overall project description into multiple smaller tasks and then getting user feedback on them.
323 | """
324 | try:
325 |             questionPrompt = f"""Given the following project description {createSubtasksRequest.history}, please break it down into smaller tasks that can be accomplished to complete the project. Each task should include a title and a detailed description of the task. The subtasks should all be small enough to be completed in a single day and should represent a micro chunk of work that a user can do to build up to solving the overall task. Focus only on engineering tasks, don't include design or user testing etc. The response should be in the following format
326 |
327 | Brief description of the task and breakdown. Don't include 'Title of the task' in the output, replace it with the actual title -
328 |
329 |
330 | Here is a break down of your task into a list of more manageable subtasks -
331 |
332 | 1. Title of the task
333 | Detailed description of the task with a breakdown of the steps that need to be taken to complete the task
334 | 2. Title of the task
335 | Detailed description of the task with a breakdown of the steps that need to be taken to complete the task
336 | 3. Title of the task
337 | Detailed description of the task with a breakdown of the steps that need to be taken to complete the task
338 | """
339 | openai_client = OpenAI()
340 | response = openai_client.chat.completions.create(
341 | model="gpt-3.5-turbo-1106",
342 | messages=[
343 | {
344 | "role": "system",
345 | "content": "You are a senior staff engineer, who is responsible for breaking up large complex tasks into small, granular subtasks that more junior engineers can easily work through and execute on.",
346 | },
347 | {"role": "user", "content": questionPrompt},
348 | ],
349 | )
350 | subtasks = response.choices[0].message.content
351 | return subtasks
352 | except Exception as e:
353 | print("Failed to generate subtasks with error " + str(e))
354 | return "Failed to generate subtasks with error " + str(e)
355 |
356 |
357 | def generate_ticket_markdown(tickets: List[Ticket]):
358 | markdown = ""
359 | for ticket in tickets:
360 | markdown += f"- **{ticket.title}**: {ticket.description}\n"
361 | return markdown
362 |
363 |
364 |
--------------------------------------------------------------------------------
/src/agents/intern/__init__.py:
--------------------------------------------------------------------------------
1 | from random import choice
2 | import time
3 | from threading import Thread
4 | from typing import List
5 |
6 | from src.lib.terminal import colorize
7 | from src.agents.intern.processors import better_code_change, generate_code_change
8 | from src.models import Ticket
9 | from src.helpers.github import GHHelper
10 | from src.helpers.board import BoardHelper
11 |
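# Loop tuning: REFRESH_EVERY and PROCESS_EVERY are sleep intervals (in seconds)
# between refresh/processing attempts; the MAX_*_CYCLES_WITHOUT_WORK values bound
# how many consecutive idle cycles each loop tolerates before the agent stops.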
12 | REFRESH_EVERY = 5
13 | PROCESS_EVERY = 5
14 | MAX_REFRESH_CYCLES_WITHOUT_WORK = 50000
15 | MAX_PROCESS_CYCLES_WITHOUT_WORK = 10000
16 |
17 |
18 | class Intern:
19 | def __init__(self, name, gh_helper: GHHelper, board_helper: BoardHelper):
20 | self.name = name
21 | self.id = choice(board_helper.get_intern_list())
22 | self.ticket_todo_list: List[Ticket] = []
23 | self.pr_backlog = []
24 | self.gh_helper = gh_helper
25 | self.board_helper = board_helper
26 | self.fetch_thread = Thread(target=self.refresh_loop)
27 | self.process_thread = Thread(target=self.process_loop)
28 | self.log_name = colorize(f"[{self.name} the intern]", bold=True, color="red")
29 | print(
30 | f"{self.log_name} Hey! I'm {self.name} the software dev intern 😁, excited to start working! I'll look for tickets to code!"
31 | )
32 |
33 | def refresh_ticket_todo_list(self):
34 | # for ticket in self.trello_helper.get_backlog_tickets():
35 | # print(f"Ticket ID: {ticket.id}, Title: {ticket.title}, Description: {ticket.description}, Assignee: {ticket.assignee_id}, Label: {ticket}")
36 | # next_tickets = [
37 | # t
38 | # for t in self.trello_helper.get_backlog_tickets()
39 | # if t not in self.ticket_backlog and t.assignee_id == self.id
40 | # ]
41 | # print(f"[INTERN {self.name}] Looking on Trello for new assigned tickets")
42 | next_tickets = [
43 | t
44 | for t in self.board_helper.get_tickets_todo_list()
45 | if t not in self.ticket_todo_list
46 | ]
47 | self.ticket_todo_list.extend(next_tickets)
48 | return len(next_tickets) != 0
49 |
50 | def refresh_pr_backlog(self):
51 | return False # Not implemented yet
52 | print(f"[INTERN {self.name}] Looking on GitHub for reviewed PRs")
53 | next_prs = [
54 | pr
55 | for pr in self.gh_helper.list_open_prs()
56 | if pr not in self.pr_backlog and pr.assignee_id == self.id
57 | ]
58 | self.pr_backlog.extend(next_prs)
59 | return len(next_prs) != 0
60 |
61 |     def process_pr(self):
62 |         # Note: this revision flow is not active yet (refresh_pr_backlog is stubbed out)
63 |         pr = self.pr_backlog.pop(0)
64 |         self.board_helper.move_to_wip(pr.ticket_id)
65 |         comments = self.gh_helper.get_comments(pr)
66 |         # Rework the change with an LLM, based on the review comments
67 |         code_change = better_code_change("", pr, comments)
68 |         # TODO: push_changes needs the same branch/PR arguments as in process_ticket
69 |         self.gh_helper.push_changes(code_change, pr.ticket_id, pr.assignee_id)
70 |         print(f"{self.log_name} Moving card to waiting for review")
71 |         self.board_helper.move_to_waiting_for_review(pr.ticket_id)
70 |
71 | def process_ticket(self):
72 | # Get the first ticket from the backlog
73 | # Process it (Trello + GH)
74 | ticket = self.ticket_todo_list.pop(0)
75 |
76 | print(f'{self.log_name} Starting to work on ticket "{ticket.title:.30}..."')
77 |
78 | # Move ticket to WIP
79 | self.board_helper.move_to_wip(ticket.id)
80 |
81 | # There should not be some PRs already assigned to this ticket (for now)
82 | # Call an agent to create a PR
83 | new_files, body = generate_code_change(
84 | ticket, self.gh_helper.get_entire_codebase()
85 | )
86 |
87 | print(f"{self.log_name} Finished coding! Pushing changes to a PR...")
88 | # Push the changes to the PR
89 | branch_name = f"{ticket.id}_{ticket.title.lower().replace(' ', '_')}"
90 | self.gh_helper.push_changes(
91 | branch_name=branch_name,
92 | pr_title=ticket.title,
93 | pr_body=body,
94 | new_files=new_files,
95 | ticket_id=ticket.id,
96 | author_id=ticket.assignee_id,
97 | )
98 |
99 | self.board_helper.move_to_waiting_for_review(ticket_id=ticket.id)
100 | print(f"{self.log_name} PR Created! Feel free to review it!")
101 |
102 | def refresh_loop(self):
103 | cycles_without_work = 0
104 | while True:
105 | if not self.refresh_pr_backlog() and not self.refresh_ticket_todo_list():
106 | cycles_without_work += 1
107 | if cycles_without_work == MAX_REFRESH_CYCLES_WITHOUT_WORK:
108 | print(f"{self.log_name} No more work to do, bye bye!")
109 | break
110 | else:
111 | cycles_without_work = 0
112 | time.sleep(REFRESH_EVERY)
115 |
116 | def process_loop(self):
117 | number_of_attempts = 0
118 | while True:
119 | if len(self.pr_backlog) > 0:
120 | self.process_pr()
121 | number_of_attempts = 0
122 | elif len(self.ticket_todo_list) > 0:
123 | self.process_ticket()
124 | number_of_attempts = 0
125 | else:
126 | # print(f"[INTERN {self.name}] No tickets to process, idling...")
127 | number_of_attempts += 1
128 | if number_of_attempts == MAX_PROCESS_CYCLES_WITHOUT_WORK:
129 | print(f"{self.log_name} No more work to do, bye bye!")
130 | break
131 |             time.sleep(PROCESS_EVERY)
132 |
135 |
136 |     def run(self):
137 |         # Two threads run in infinite loops:
138 |         # the first refreshes the ticket/PR backlogs,
139 |         # the second consumes them, processing the tickets and PRs
140 |         self.fetch_thread.start()
141 |         self.process_thread.start()
142 |         # Wait for both workers here; joining from inside the loops themselves
143 |         # would try to join the current thread and raise a RuntimeError
144 |         self.fetch_thread.join()
145 |         self.process_thread.join()
142 |
--------------------------------------------------------------------------------
/src/agents/intern/generators/diff_generator.py:
--------------------------------------------------------------------------------
1 | from typing import Dict, List
2 | import dspy
3 | import json
4 |
5 | from src.models import Codebase, Ticket
6 |
7 |
8 | class RelevantFileSelectionSignature(dspy.Signature):
9 | files_in_codebase = dspy.InputField()
10 | ticket = dspy.InputField()
11 | relevant_files: List[str] = dspy.OutputField(
12 | desc="Give the relevant files for you to observe to complete the ticket. They must be keys of the codebase dict."
13 | )
14 |
15 |
16 | # Define the agent
17 | class DiffGeneratorSignature(dspy.Signature):
18 | relevant_codebase = dspy.InputField()
19 | ticket = dspy.InputField()
20 | git_diff = dspy.OutputField(desc="Give ONLY the git diff")
21 | explanations = dspy.OutputField(desc="Give explanations for the diff generated")
22 |
23 |
24 | class NewFilesGeneratorSignature(dspy.Signature):
25 | relevant_codebase = dspy.InputField()
26 | ticket = dspy.InputField()
27 |     new_files: Dict[str, str] = dspy.OutputField(
28 |         desc="Generate the entire files that need to be updated or created to complete the ticket, with all of their content post-update. The key is the path of the file and the value is the content of the file."
29 |     )
30 | explanations = dspy.OutputField(
31 | desc="Give explanations for the new files generated. Use Markdown to format the text."
32 | )
33 |
34 |
35 | class DiffGenerator(dspy.Module):
36 | def __init__(self):
37 | super().__init__()
38 |
39 | self.diff_generator = dspy.ChainOfThought(DiffGeneratorSignature)
40 | self.relevant_file_selector = dspy.TypedChainOfThought(
41 | RelevantFileSelectionSignature
42 | )
43 | self.new_files_generator = dspy.TypedChainOfThought(NewFilesGeneratorSignature)
44 |
45 | def forward(self, codebase: Codebase, ticket: Ticket):
46 | relevant_files = self.relevant_file_selector(
47 | files_in_codebase=json.dumps(list(codebase.files.keys())),
48 | ticket=json.dumps(ticket.model_dump()),
49 | )
50 |
51 | subset_codebase = {
52 | file: codebase.files[file] for file in relevant_files.relevant_files
53 | }
54 |
55 | relevant_codebase = Codebase(files=subset_codebase)
56 |
57 | new_files = self.new_files_generator(
58 | relevant_codebase=json.dumps(relevant_codebase.model_dump()),
59 | ticket=json.dumps(ticket.model_dump()),
60 | )
61 |
62 | return new_files.new_files, new_files.explanations
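
# Usage sketch (mirrors src/agents/intern/processors.py):
#   dspy.configure(lm=gpt4)
#   new_files, explanations = DiffGenerator()(codebase, ticket)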
63 |
--------------------------------------------------------------------------------
/src/agents/intern/processors.py:
--------------------------------------------------------------------------------
1 | from src.agents.intern.generators.diff_generator import DiffGenerator
2 | from src.language_models import gpt4, mistral
3 | import dspy
4 |
5 | from src.models import Codebase, Ticket
6 |
7 |
8 | def better_code_change(previous_code_change, pr, comments):
9 | # This function will be called from Intern.process_pr
10 | # It will take the previous code_change, the pr and the comments
11 | # and will return a new code_change
12 | # The previous code_change will be the last code_change that was pushed to the PR
13 | # The pr will be the PR that is being processed
14 | # The comments will be the comments that were made on the PR since the last code_change
15 | # The function will return a new code_change that will be pushed to the PR
16 | return "new_code_change"
17 |
18 |
19 | def generate_code_change(ticket: Ticket, code_base: Codebase):
20 | # This function will be called from Intern.process_ticket
21 | # It will take the ticket and the code_base
22 | # and will return a new code_change
23 | dspy.configure(lm=gpt4)
24 |
25 | diff_generator = DiffGenerator()
26 |
27 | new_files, explanations = diff_generator(code_base, ticket)
28 |
29 | return new_files, explanations
30 |
--------------------------------------------------------------------------------
/src/agents/reviewer/__init__.py:
--------------------------------------------------------------------------------
1 | import time
2 | from typing import List
3 |
4 | from src.lib.terminal import colorize
5 | from src.agents.reviewer.generators.code_review_generator import GeneratedCodeReview
6 | from src.agents.reviewer.processors import review_code
7 | from src.models import PR
8 | from src.helpers.board import BoardHelper
9 | from src.helpers.github import GHHelper
10 |
11 |
12 | class Reviewer:
13 | def __init__(self, name, gh_helper: GHHelper, board_helper: BoardHelper):
14 | self.name = name
15 | self.pr_backlog: List[PR] = []
16 | self.gh_helper = gh_helper
17 | self.board_helper = board_helper
18 | self.log_name = colorize(f"[{self.name} the reviewer]", bold=True, color="cyan")
19 | print(
20 | f"{self.log_name} Hi, my name is {self.name} and I'm an experienced code reviewer. I'm ready to review some bad code 🧐."
21 | )
22 |
23 | def refresh_pr_backlog(self):
24 | next_prs = [
25 | pr for pr in self.gh_helper.list_open_prs() if pr not in self.pr_backlog
26 | ]
27 | self.pr_backlog.extend(next_prs)
28 |
29 | # def simplify_pr(self, raw_pr: PullRequest, ticket: Ticket) -> PR:
30 | # files_changed = raw_pr.get_files()
31 |
32 | # files_changed_with_diffs: List[ModifiedFile] = []
33 | # for file in files_changed:
34 | # files_changed_with_diffs.append(
35 | # ModifiedFile(
36 | # filename=file.filename,
37 | # status=file.status,
38 | # additions=file.additions,
39 | # deletions=file.deletions,
40 | # changes=file.changes,
41 | # patch=file.patch, # This contains the diff for the file
42 | # )
43 | # )
44 |
45 | # pr: PR = PR(
46 | # id=raw_pr.number,
47 | # ticket_id=ticket.id,
48 | # assignee_id=ticket.assignee_id,
49 | # title=raw_pr.title,
50 | # description=raw_pr.body if raw_pr.body is not None else "",
51 | # files_changed=files_changed_with_diffs,
52 | # )
53 |
54 | # return pr
55 |
56 | def submit_code_review(self, generated_code_review: GeneratedCodeReview):
57 | for comment in generated_code_review.code_review.comments:
58 |             if "position" not in comment:
59 | comment["position"] = 0
60 |
61 | self.gh_helper.submit_code_review(generated_code_review.code_review)
62 |
63 | trello_ticket_id = generated_code_review.code_review.pr.ticket_id
64 | if (
65 | generated_code_review.is_valid_code
66 | and generated_code_review.resolves_ticket
67 | ):
68 | self.board_helper.move_to_approved(ticket_id=trello_ticket_id)
69 | else:
70 | self.board_helper.move_to_reviewed(ticket_id=trello_ticket_id)
71 |
72 | def process_pr(self):
73 | codebase = self.gh_helper.get_entire_codebase()
74 |
75 | # Get first open PR from GH that hasn't been approved yet
76 | pr = self.pr_backlog.pop(0)
77 |
78 | # Fetch the Trello ticket that corresponds to this PR
79 | ticket = self.board_helper.get_ticket(ticket_id=pr.ticket_id)
80 |
81 | print(
82 | f'{self.log_name} I am reviewing the PR for this ticket: "{ticket.title:.30}..."'
83 | )
84 | generated_code_review = review_code(codebase=codebase, ticket=ticket, pr=pr)
85 | self.submit_code_review(generated_code_review=generated_code_review)
86 | print(
87 | f"{self.log_name} Finished reviewing the PR! Check it out on Github: {pr.url}"
88 | )
89 |
90 | def run(self):
91 | while True:
92 | self.refresh_pr_backlog()
93 | if len(self.pr_backlog) > 0:
94 | self.process_pr()
95 | time.sleep(10)
96 |
--------------------------------------------------------------------------------
/src/agents/reviewer/generators/code_review_generator.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | import dspy
3 |
4 | from src.models import Codebase, Ticket, PR, CodeReview
5 |
6 |
7 | class GeneratedCodeReview:
8 | is_valid_code: bool
9 | resolves_ticket: bool
10 | code_review: CodeReview
11 |
12 |
13 | class ReviewerSignature(dspy.Signature):
14 | # Inputs
15 | # codebase: Codebase = dspy.InputField()
16 | ticket: Ticket = dspy.InputField()
17 | pr: PR = dspy.InputField()
18 | # Outputs
19 | is_valid_code: bool = dspy.OutputField(desc="Check if the code is actually valid.")
20 | resolves_ticket: bool = dspy.OutputField(
21 | desc="Does this code actually resolve the ticket? Is it changing the right files?"
22 | )
23 | code_review: CodeReview = dspy.OutputField(
24 | desc="If valid, provide a brief comment saying that it looks good. If not valid, provide comments for changes that are needed along with exact line numbers and/or start and end line number. If adding comments, make it different from the main body text."
25 | )
26 |
27 |
28 | class ReviewerAgent(dspy.Module):
29 | def __init__(self):
30 | super().__init__()
31 | self.code_review_generator = dspy.TypedPredictor(signature=ReviewerSignature)
32 |
33 | def forward(
34 | self, codebase: Codebase, pr: PR, ticket: Ticket
35 | ) -> GeneratedCodeReview:
36 | # Get the review
37 | generated_review = self.code_review_generator(
38 | codebase=codebase, ticket=ticket, pr=pr
39 | )
40 |
41 | return generated_review
42 |
--------------------------------------------------------------------------------
/src/agents/reviewer/processors.py:
--------------------------------------------------------------------------------
1 | import dspy
2 |
3 | from src.agents.reviewer.generators.code_review_generator import (
4 | GeneratedCodeReview,
5 | ReviewerAgent,
6 | )
7 |
8 | from src.language_models import gpt4
9 | from src.models import Codebase, Ticket, PR
10 |
11 |
12 | def fix_code(codebase: Codebase, ticket: Ticket, pr: PR):
13 | # Note this function will most likely be implemented by the intern
14 | # This function is responsible for fixing the code
15 | # It should take the codebase, the ticket and the PR
16 | # It should update the codebase that is passed in and return that codebase
17 | updatedCodebase = codebase
18 | return updatedCodebase
19 |
20 |
21 | def review_code(codebase: Codebase, ticket: Ticket, pr: PR) -> GeneratedCodeReview:
22 | # This function is responsible for reviewing the code
23 | # It should take each of the PR comments, the associated code chunks
24 | # It should update comment on the PR and either approve or request changes
25 | dspy.configure(lm=gpt4)
26 |
27 | reviewer_agent = ReviewerAgent()
28 |
29 | code_review = reviewer_agent(codebase=codebase, ticket=ticket, pr=pr)
30 |
31 | return code_review
32 |
--------------------------------------------------------------------------------
/src/constants.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 |
3 |
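# These values must match the list names on your Trello board exactly;
# TrelloHelper checks for them at startup and exits if any list is missing
# (see src/helpers/trello.py).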
4 | class TicketStatus(Enum):
5 | BACKLOG = "Backlog"
6 | TODO = "To Do"
7 | WIP = "WIP"
8 | READY_FOR_REVIEW = "Ready for Review"
9 | REVIEWED = "Reviewed"
10 | APPROVED = "Approved"
11 |
--------------------------------------------------------------------------------
/src/helpers/board.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | from src.models import Ticket
3 |
4 |
5 | class BoardHelper:
6 | def __init__(self):
7 | raise NotImplementedError("This is an interface class")
8 |
9 | def get_ticket(self, ticket_id) -> Ticket:
10 | raise NotImplementedError("This is an interface class")
11 |
12 | def get_tickets_todo_list(self) -> List[Ticket]:
13 | raise NotImplementedError("This is an interface class")
14 |
15 | def get_waiting_for_review_tickets(self) -> List[Ticket]:
16 | raise NotImplementedError("This is an interface class")
17 |
18 | def move_to_waiting_for_review(self, ticket_id):
19 | raise NotImplementedError("This is an interface class")
20 |
21 | def move_to_reviewed(self, ticket_id):
22 | raise NotImplementedError("This is an interface class")
23 |
24 |     def move_to_approved(self, ticket_id):
25 |         raise NotImplementedError("This is an interface class")
26 |
27 | def move_to_wip(self, ticket_id):
28 | raise NotImplementedError("This is an interface class")
29 |
30 | def move_to_backlog(self, ticket_id):
31 | raise NotImplementedError("This is an interface class")
32 |
33 | def push_tickets_to_backlog_and_assign(self, tickets: List[Ticket]) -> List[Ticket]:
34 | raise NotImplementedError("This is an interface class")
35 |
36 | def assign_ticket(self, ticket_id, assignee_id):
37 | raise NotImplementedError("This is an interface class")
38 |
39 | def get_intern_list(self):
40 | raise NotImplementedError("This is an interface class")
41 |
42 | def create_intern(self, name):
43 | raise NotImplementedError("This is an interface class")
44 |
--------------------------------------------------------------------------------
/src/helpers/github.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 |
3 | from github import Auth, Github, InputGitTreeElement, InputGitAuthor
4 |
5 | from src.models import CodeReview, Codebase, ModifiedFile, Ticket, PR
6 |
7 | TICKET_DELIMITER = "Ticket: "
8 | AUTHOR_DELIMITER = "Author: "
9 |
10 |
11 | def add_ticket_info_to_pr_body(ticket_id, author_id, pr_body):
12 | return f"{TICKET_DELIMITER}{ticket_id}\n{AUTHOR_DELIMITER}{author_id}\n{pr_body}"
13 |
14 |
15 | def extract_ticket_info_from_pr_body(pr_body):
16 | ticket_id = None
17 | author_id = None
18 | lines = pr_body.split("\n")
19 | for line in lines:
20 | if line.startswith(TICKET_DELIMITER):
21 | ticket_id = line.replace(TICKET_DELIMITER, "")
22 | if line.startswith(AUTHOR_DELIMITER):
23 | author_id = line.replace(AUTHOR_DELIMITER, "")
24 |
25 | return ticket_id, author_id
26 |
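# Example round-trip (illustrative values):
#   body = add_ticket_info_to_pr_body("abc123", "label42", "Implements the feature")
#     -> "Ticket: abc123\nAuthor: label42\nImplements the feature"
#   extract_ticket_info_from_pr_body(body) -> ("abc123", "label42")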
27 |
28 | class GHHelper:
29 | def __init__(self, gh_api_token, gh_repo):
30 | auth = Auth.Token(gh_api_token)
31 | self.gh = Github(auth=auth)
32 |
33 | try:
34 | self.repo = self.gh.get_repo(gh_repo)
35 | print(f"Selected repo: {self.repo.name}")
36 | except Exception as e:
37 | print(f"Error: {e}")
38 |
39 | def list_open_prs(self) -> List[PR]:
40 | open_prs = []
41 | for pr in self.repo.get_pulls(state="open"):
42 | # Check if PR is not approved or not changes requested
43 | if not any(
44 | review.state in ["APPROVED", "CHANGES_REQUESTED"]
45 | for review in pr.get_reviews()
46 | ):
47 | files_changed = pr.get_files()
48 | files_changed_with_diffs = []
49 | for file in files_changed:
50 | files_changed_with_diffs.append(
51 | ModifiedFile(
52 | filename=file.filename,
53 | status=file.status,
54 | additions=file.additions,
55 | deletions=file.deletions,
56 | changes=file.changes,
57 | patch=file.patch, # This contains the diff for the file
58 | )
59 | )
60 | ticket_id, assignee_id = extract_ticket_info_from_pr_body(pr.body)
61 | open_prs.append(
62 | PR(
63 | id=pr.number,
64 | title=pr.title,
65 | description=pr.body,
66 | assignee_id=assignee_id,
67 | ticket_id=ticket_id,
68 | files_changed=files_changed_with_diffs,
69 | raw_pr=pr,
70 | url=pr.html_url,
71 | )
72 | )
73 | return open_prs
74 |
75 | def get_pr(self, pr_number):
76 | try:
77 | pr = self.repo.get_pull(pr_number)
78 | return pr
79 | except Exception as e:
80 | print(f"Error fetching PR #{pr_number}: {e}")
81 | return None
82 |
83 |     def get_comments(self, pr_number):
84 |         pr = self.repo.get_pull(pr_number)
85 |         return pr.get_issue_comments()
86 |
87 |     def add_comment(self, pr_number, comment):
88 |         pr = self.repo.get_pull(pr_number)
89 |         pr.create_issue_comment(comment)
90 |
91 |     def mark_pr_as_approved(self, pr_number):
92 |         pr = self.repo.get_pull(pr_number)
93 |         pr.create_review(event="APPROVE")
94 |
95 | def submit_code_review(self, code_review: CodeReview):
96 | pr = self.repo.get_pull(number=code_review.pr.id)
97 | pr.create_review(
98 | event=code_review.event,
99 | body=code_review.body,
100 | comments=code_review.comments,
101 | )
102 |
103 | def push_changes(
104 | self, branch_name, pr_title, pr_body, new_files, ticket_id, author_id
105 | ):
106 | # Parse the diff and create blobs for each modified file
107 | # Hoping that the diff is properly formatted
108 |
109 | modified_files = []
110 | for file_path, file_content in new_files.items():
111 | blob = self.repo.create_git_blob(file_content, "utf-8")
112 | modified_files.append(
113 | InputGitTreeElement(
114 | path=file_path, mode="100644", type="blob", sha=blob.sha
115 | )
116 | )
117 |
118 | # Get the current commit's tree
119 | base_tree = self.repo.get_git_tree(sha=self.repo.get_commits()[0].sha)
120 |
121 | # Create a new tree with the modified files
122 | new_tree = self.repo.create_git_tree(modified_files, base_tree)
123 |
124 | # Create a new branch
125 | ref = self.repo.create_git_ref(
126 | f"refs/heads/{branch_name}", self.repo.get_commits()[0].sha
127 | )
128 |
129 | # Create a new commit with the new tree on the new branch
130 | author = InputGitAuthor("Open Architect", "openarchitect@gmail.com")
131 |
132 | commit_message = "Apply diff to multiple files"
133 | new_commit = self.repo.create_git_commit(
134 | commit_message,
135 | new_tree,
136 | [self.repo.get_git_commit(self.repo.get_commits()[0].sha)],
137 | author,
138 | )
139 |
140 | # Update the branch reference to point to the new commit
141 | ref.edit(sha=new_commit.sha)
142 |
143 | # Create a new pull request
144 | base_branch = "main" # Replace with the target branch for the pull request
145 | head = f"{self.repo.owner.login}:{branch_name}" # Update the head parameter
146 | self.repo.create_pull(
147 | title=pr_title,
148 | body=add_ticket_info_to_pr_body(ticket_id, author_id, pr_body),
149 | head=head,
150 | base=base_branch,
151 | )
152 |
153 | def get_entire_codebase(self) -> Codebase:
154 | contents = self.repo.get_contents("")
155 | if not isinstance(contents, list):
156 | contents = [contents]
157 |
158 | codebase_dict = {}
159 | for file in contents:
160 | try:
161 | codebase_dict[file.path] = file.decoded_content.decode("utf-8")
162 | except Exception as e:
163 | pass
164 |
165 | codebase = Codebase(files=codebase_dict)
166 |
167 | return codebase
168 |
169 | def get_file_content(self, file):
170 | contents = self.repo.get_contents(file)
171 | if isinstance(contents, list):
172 | contents = contents[0]
173 |
174 | return contents.decoded_content.decode("utf-8")
175 |
--------------------------------------------------------------------------------
/src/helpers/trello.py:
--------------------------------------------------------------------------------
1 | import json
2 | from typing import List
3 | from random import choice
4 | import requests
5 |
6 | from trello import TrelloClient
7 |
8 | from src.helpers.board import BoardHelper
9 | from src.constants import TicketStatus
10 | from src.models import Ticket
11 |
12 | HEADERS = {
13 | "Accept": "application/json",
14 | "Content-Type": "application/json",
15 | }
16 |
17 | COLORS = ["green", "yellow", "orange", "red", "purple", "blue", "sky", "lime", "pink", "black"]
18 |
19 | class CustomTrelloClient(TrelloClient):
20 |     # Patched client: the stock TrelloClient wasn't working for us, so
21 |     # fetch_json and add_card are re-implemented on top of plain requests
21 | def __init__(self, api_key, token):
22 | self.api_key = api_key
23 | self.token = token
24 |
25 | def add_card(self, name, desc, list_id, label_id):
26 |         url = "https://api.trello.com/1/cards"
27 | query_params = {
28 | "key": self.api_key,
29 | "token": self.token,
30 | "name": name,
31 | "desc": desc,
32 | "idList": list_id,
33 | "idLabels": [label_id],
34 | }
35 | res = requests.post(url, headers=HEADERS, params=query_params)
36 | # print("Response from adding card to trello board: " + str(res.json()))
37 |
38 | def fetch_json(
39 | self,
40 | uri_path,
41 | http_method="GET",
42 | headers=None,
43 | query_params=None,
44 | post_args=None,
45 | files=None,
46 | ):
47 | if headers is None:
48 | headers = HEADERS
49 | if post_args is None:
50 | post_args = {}
51 | if query_params is None:
52 | query_params = {}
53 | if http_method in ("POST", "PUT", "DELETE") and not files:
54 | headers["Content-Type"] = "application/json; charset=utf-8"
55 |
56 | data = None
57 | if post_args:
58 | data = json.dumps(post_args)
59 |
60 | query_params["key"] = self.api_key
61 | query_params["token"] = self.token
62 |
63 | url = f"https://api.trello.com/1/{uri_path}"
64 | response = requests.request(
65 | http_method, url, headers=headers, params=query_params, data=data
66 | )
67 | return response.json()
68 |
69 |
70 | class TrelloHelper(BoardHelper):
71 | def __init__(self, trello_api_key, trello_token, trello_board_id):
72 | self.client = CustomTrelloClient(
73 | api_key=trello_api_key,
74 | token=trello_token,
75 | )
76 |
77 | board = self.client.get_board(trello_board_id)
78 | self.board_id = trello_board_id
79 |         self.list_ids = {lst.name: lst.id for lst in board.list_lists()}
80 | expected_values = [v.value for v in TicketStatus]
81 | for val in expected_values:
82 | if val not in self.list_ids:
83 | print(
84 | f"Error: List {val} not found. Make sure you have the correct list names {expected_values}."
85 | )
86 | exit(1)
87 |
88 | def get_ticket(self, ticket_id) -> Ticket:
89 | card = self.client.get_card(ticket_id)
90 | return Ticket(
91 | id=card.id,
92 | title=card.name,
93 | description=card.description,
94 | assignee_id=card.labels[0].id if card.labels else None,
95 | )
96 |
97 | def get_tickets_todo_list(self) -> List[Ticket]:
98 | cards = self.client.get_list(
99 | self.list_ids[TicketStatus.TODO.value]
100 | ).list_cards()
101 | if not cards:
102 | return []
103 |
104 | return [
105 | Ticket(
106 | id=card.id,
107 | title=card.name,
108 | description=card.description,
109 | assignee_id=card.labels[0].id if card.labels else "unassigned",
110 | )
111 | for card in cards
112 | ]
113 |
114 | def get_waiting_for_review_tickets(self) -> List[Ticket]:
115 | cards = self.client.get_list(
116 | self.list_ids[TicketStatus.READY_FOR_REVIEW.value]
117 | ).list_cards()
118 | if not cards:
119 | return []
120 |
121 | return [
122 | Ticket(
123 | id=card.id,
124 | title=card.name,
125 | description=card.description,
126 | assignee_id=card.labels[0].id if card.labels else "unassigned",
127 | )
128 | for card in cards
129 | ]
130 |
131 | def move_to_waiting_for_review(self, ticket_id):
132 | ticket = self.client.get_card(ticket_id)
133 | ticket.change_list(self.list_ids[TicketStatus.READY_FOR_REVIEW.value])
134 |
135 | def move_to_reviewed(self, ticket_id):
136 | ticket = self.client.get_card(ticket_id)
137 | ticket.change_list(self.list_ids[TicketStatus.REVIEWED.value])
138 |
139 | def move_to_approved(self, ticket_id):
140 | ticket = self.client.get_card(ticket_id)
141 | ticket.change_list(self.list_ids[TicketStatus.APPROVED.value])
142 |
143 | def move_to_wip(self, ticket_id):
144 | ticket = self.client.get_card(ticket_id)
145 | ticket.change_list(self.list_ids[TicketStatus.WIP.value])
146 |
147 | def push_tickets_to_backlog_and_assign(self, tickets: List[Ticket]) -> List[Ticket]:
148 | interns = self.get_intern_list()
149 | ticket_list = []
150 | for ticket in tickets:
151 | assignee = choice(interns)
152 | print(f"Assigning {ticket.title} to {assignee}")
153 | ticket.assignee_id = assignee
154 | ticket.status = TicketStatus.BACKLOG
155 | print(f"Adding card to list {self.list_ids[TicketStatus.BACKLOG.value]}")
156 | # Create a card in the backlog
157 | self.client.add_card(
158 | ticket.title,
159 | ticket.description,
160 | self.list_ids[TicketStatus.BACKLOG.value],
161 | assignee,
162 | )
163 | ticket_list.append(ticket)
164 |
165 | return ticket_list
166 |
167 | def get_intern_list(self):
168 | return [label.id for label in self.client.get_board(self.board_id).get_labels()]
169 |
170 | def create_intern(self, name):
171 | label = self.client.get_board(self.board_id).add_label(name, color=choice(COLORS))
172 | return label.id
173 |
174 | # TESTING METHODS
175 |
176 | def move_to_todo(self, ticket_id):
177 | ticket = self.client.get_card(ticket_id)
178 | ticket.change_list(self.list_ids[TicketStatus.TODO.value])
179 |
180 | def get_last_ticket(self):
181 | return self.client.get_list(self.list_ids[TicketStatus.BACKLOG.value]).list_cards()[-1]
182 |
183 | def delete_ticket(self, ticket_id):
184 | self.client.get_board(self.board_id).get_card(ticket_id).delete()
185 |
186 | def fire_intern(self, intern_id):
187 | self.client.get_board(self.board_id).delete_label(intern_id)
188 |
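189 | # --- Illustrative usage (a sketch; requires real Trello credentials) --------
190 | # The placeholders below are yours to fill in; see test_trello_helper.py for
191 | # the runnable version of this flow.
192 | #
193 | # helper = TrelloHelper("<api_key>", "<token>", "<board_id>")
194 | # intern_id = helper.create_intern("Ada")
195 | # [ticket] = helper.push_tickets_to_backlog_and_assign(
196 | #     [Ticket(title="Add logging", description="Use the logging module")]
197 | # )
198 | # helper.move_to_todo(helper.get_last_ticket().id)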
--------------------------------------------------------------------------------
/src/language_models.py:
--------------------------------------------------------------------------------
1 | import dspy
2 | import json
3 | from src.lib.mistral_dspy import Mistral
4 |
5 | with open("creds.json") as f:
6 |     mistral_key = json.load(f)["mistral"]
7 | 
8 | turbo = dspy.OpenAI(model="gpt-3.5-turbo-0125", max_tokens=4096)
9 | gpt4 = dspy.OpenAI(model="gpt-4-0125-preview", max_tokens=4096)
10 | mistral_7b = dspy.OllamaLocal(model="mistral")
11 | mistral = Mistral(model="mistral-large-latest", api_key=mistral_key, max_tokens=4096)
12 | 
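13 | # --- Illustrative usage (a sketch, not part of the pipeline) ----------------
14 | # Any of the handles above can be installed as dspy's default LM. The Predict
15 | # call below is a hypothetical example, not code used elsewhere in this repo.
16 | #
17 | # dspy.settings.configure(lm=turbo)
18 | # summarize = dspy.Predict("document -> summary")
19 | # print(summarize(document="Open Architect orchestrates a fleet of AI agents.").summary)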
--------------------------------------------------------------------------------
/src/lib/mistral_chat_completion.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 | from typing import List, Optional, Union
3 |
4 | from pydantic import BaseModel
5 |
6 | class UsageInfo(BaseModel):
7 | prompt_tokens: int
8 | total_tokens: int
9 | completion_tokens: Optional[int]
10 |
11 | class Function(BaseModel):
12 | name: str
13 | description: str
14 | parameters: dict
15 |
16 |
17 | class ToolType(str, Enum):
18 | function = "function"
19 |
20 |
21 | class FunctionCall(BaseModel):
22 | name: str
23 | arguments: str
24 |
25 |
26 | class ToolCall(BaseModel):
27 | id: str = "null"
28 | type: ToolType = ToolType.function
29 | function: FunctionCall
30 |
31 |
32 | class ResponseFormats(str, Enum):
33 |     text = "text"
34 |     json_object = "json_object"
35 |
36 |
37 | class ToolChoice(str, Enum):
38 |     auto = "auto"
39 |     any = "any"
40 |     none = "none"
41 |
42 |
43 | class ResponseFormat(BaseModel):
44 | type: ResponseFormats = ResponseFormats.text
45 |
46 |
47 | class ChatMessage(BaseModel):
48 | role: str
49 | content: Union[str, List[str]]
50 | name: Optional[str] = None
51 | tool_calls: Optional[List[ToolCall]] = None
52 |
53 |
54 | class FinishReason(str, Enum):
55 | stop = "stop"
56 | length = "length"
57 | error = "error"
58 | tool_calls = "tool_calls"
59 |
60 |
61 | class ChatCompletionResponseChoice(BaseModel):
62 | index: int
63 | message: ChatMessage
64 | finish_reason: Optional[FinishReason]
65 |
66 |
67 | class ChatCompletionResponse(BaseModel):
68 | id: str
69 | object: str
70 | created: int
71 | model: str
72 | choices: List[ChatCompletionResponseChoice]
73 | usage: UsageInfo
74 |
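75 | # --- Illustrative usage: parsing an API payload into these models -----------
76 | # A minimal, hand-written sample response (not captured from the live API).
77 | if __name__ == "__main__":
78 |     sample = {
79 |         "id": "cmpl-123",
80 |         "object": "chat.completion",
81 |         "created": 1710000000,
82 |         "model": "mistral-large-latest",
83 |         "choices": [
84 |             {
85 |                 "index": 0,
86 |                 "message": {"role": "assistant", "content": "Hello!"},
87 |                 "finish_reason": "stop",
88 |             }
89 |         ],
90 |         "usage": {"prompt_tokens": 5, "total_tokens": 12, "completion_tokens": 7},
91 |     }
92 |     parsed = ChatCompletionResponse(**sample)
93 |     print(parsed.choices[0].message.content)  # -> "Hello!"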
--------------------------------------------------------------------------------
/src/lib/mistral_dspy.py:
--------------------------------------------------------------------------------
1 | from typing import Any, List, Optional, Union
2 |
3 | import backoff
4 |
5 | from dsp.modules.lm import LM
6 | from pydantic import BaseModel
7 |
8 | from src.lib.ported_exceptions import MistralAPIException
9 | from src.lib.ported_clients import MistralClient
10 |
11 | class ChatMessage(BaseModel):
12 | role: str
13 | content: Union[str, List[str]]
14 |
15 | def backoff_hdlr(details):
16 | """Handler from https://pypi.org/project/backoff/"""
17 | print(
18 | "Backing off {wait:0.1f} seconds after {tries} tries "
19 | "calling function {target} with kwargs "
20 | "{kwargs}".format(**details),
21 | )
22 |
23 |
24 | def giveup_hdlr(exc):
25 |     """Decides when to give up on a retry; backoff passes the raised exception itself."""
26 |     if "rate limits" in str(exc):  # exc.message may be None, so stringify first
27 |         return False
28 |     return True
29 |
30 |
31 | class Mistral(LM):
32 | """Wrapper around Mistral AI's API.
33 |
34 | Currently supported models include `mistral-small-latest`, `mistral-medium-latest`, `mistral-large-latest`.
35 | """
36 |
37 | def __init__(
38 | self,
39 | model: str = "mistral-medium-latest",
40 | api_key: Optional[str] = None,
41 | **kwargs,
42 | ):
43 | """
44 | Parameters
45 | ----------
46 | model : str
47 |             Which Mistral AI model to use.
48 | Choices are [`mistral-small-latest`, `mistral-medium-latest`, `mistral-large-latest`]
49 | api_key : str
50 | The API key for Mistral AI.
51 | **kwargs: dict
52 | Additional arguments to pass to the API provider.
53 | """
54 | super().__init__(model)
55 |
56 | self.client = MistralClient(api_key=api_key)
57 |
58 | self.provider = "mistral"
59 | self.kwargs = {
60 | "model": model,
61 | "temperature": 0.17,
62 | "max_tokens": 150,
63 | **kwargs,
64 | }
65 |
66 | self.history: list[dict[str, Any]] = []
67 |
68 | def basic_request(self, prompt: str, **kwargs):
69 | """Basic request to Mistral AI's API."""
70 | raw_kwargs = kwargs
71 | kwargs = {
72 | **self.kwargs,
73 | "messages": [ChatMessage(role="user", content=prompt)],
74 | **kwargs,
75 | }
76 |
77 |         # Mistral's API has no "n" argument: drop it, and raise the temperature so repeated samples can differ
78 | n = kwargs.pop("n", None)
79 | if n is not None and n > 1 and kwargs["temperature"] == 0.0:
80 | kwargs["temperature"] = 0.7
81 |
82 | response = self.client.chat(**kwargs)
83 |
84 | history = {
85 | "prompt": prompt,
86 | "response": response,
87 | "kwargs": kwargs,
88 | "raw_kwargs": raw_kwargs,
89 | }
90 | self.history.append(history)
91 |
92 | return response
93 |
94 | @backoff.on_exception(
95 | backoff.expo,
96 | MistralAPIException,
97 | max_time=1000,
98 | on_backoff=backoff_hdlr,
99 | giveup=giveup_hdlr,
100 | )
101 | def request(self, prompt: str, **kwargs):
102 | """Handles retrieval of completions from Mistral AI whilst handling API errors."""
103 |         prompt = prompt + "\nFollow the format only once!"  # nudge the model not to repeat dspy's output format
104 | return self.basic_request(prompt, **kwargs)
105 |
106 | def __call__(
107 | self,
108 | prompt: str,
109 | only_completed: bool = True,
110 | return_sorted: bool = False,
111 | **kwargs,
112 | ):
113 |
114 | assert only_completed, "for now"
115 | assert return_sorted is False, "for now"
116 |
117 | n = kwargs.pop("n", 1)
118 |
119 | completions = []
120 | for _ in range(n):
121 | response = self.request(prompt, **kwargs)
122 | completions.append(response.choices[0].message.content)
123 |
124 | return completions
125 |
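126 | # --- Illustrative usage (a sketch; requires a real Mistral API key) ----------
127 | # lm = Mistral(model="mistral-large-latest", api_key="<your-key>", max_tokens=256)
128 | # [completion] = lm("Name three French cheeses.")
129 | # print(completion)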
--------------------------------------------------------------------------------
/src/lib/ported_clients.py:
--------------------------------------------------------------------------------
1 | from json import JSONDecodeError
2 | import logging
3 | import os
4 | from abc import ABC
5 | import posixpath
6 | import time
7 | from typing import Any, Dict, Iterator, List, Optional, Union
8 | from httpx import Client, ConnectError, HTTPTransport, RequestError, Response
9 |
10 | import orjson
11 |
12 | from src.lib.ported_exceptions import (
13 | MistralAPIException,
14 | MistralAPIStatusException,
15 | MistralConnectionException,
16 | MistralException,
17 | )
18 | from src.lib.mistral_chat_completion import ChatCompletionResponse, ChatMessage, Function, ResponseFormat, ToolChoice
19 |
20 | logging.basicConfig(
21 | format="%(asctime)s %(levelname)s %(name)s: %(message)s",
22 | level=os.getenv("LOG_LEVEL", "ERROR"),
23 | )
24 |
25 | RETRY_STATUS_CODES = {429, 500, 502, 503, 504}
26 |
27 | ENDPOINT = "https://api.mistral.ai"
28 |
29 | class ClientBase(ABC):
30 | def __init__(
31 | self,
32 | endpoint: str,
33 | api_key: Optional[str] = None,
34 | max_retries: int = 5,
35 | timeout: int = 120,
36 | ):
37 | self._max_retries = max_retries
38 | self._timeout = timeout
39 |
40 | self._endpoint = endpoint
41 | self._api_key = api_key
42 | self._logger = logging.getLogger(__name__)
43 |
44 |         self._default_model: Optional[str] = None  # only Azure endpoints get a default model
45 |         if "inference.azure.com" in self._endpoint:
46 |             self._default_model = "mistral"
47 |
48 | # This should be automatically updated by the deploy script
49 | self._version = "0.1.8"
50 |
51 | def _parse_tools(self, tools: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
52 | parsed_tools: List[Dict[str, Any]] = []
53 | for tool in tools:
54 | if tool["type"] == "function":
55 | parsed_function = {}
56 | parsed_function["type"] = tool["type"]
57 | if isinstance(tool["function"], Function):
58 | parsed_function["function"] = tool["function"].model_dump(exclude_none=True)
59 | else:
60 | parsed_function["function"] = tool["function"]
61 |
62 | parsed_tools.append(parsed_function)
63 |
64 | return parsed_tools
65 |
66 | def _parse_tool_choice(self, tool_choice: Union[str, ToolChoice]) -> str:
67 | if isinstance(tool_choice, ToolChoice):
68 | return tool_choice.value
69 | return tool_choice
70 |
71 | def _parse_response_format(self, response_format: Union[Dict[str, Any], ResponseFormat]) -> Dict[str, Any]:
72 | if isinstance(response_format, ResponseFormat):
73 | return response_format.model_dump(exclude_none=True)
74 | return response_format
75 |
76 | def _parse_messages(self, messages: List[Any]) -> List[Dict[str, Any]]:
77 | parsed_messages: List[Dict[str, Any]] = []
78 | for message in messages:
79 | if isinstance(message, ChatMessage):
80 | parsed_messages.append(message.model_dump(exclude_none=True))
81 | else:
82 | parsed_messages.append(message)
83 |
84 | return parsed_messages
85 |
86 | def _make_chat_request(
87 | self,
88 | messages: List[Any],
89 | model: Optional[str] = None,
90 | tools: Optional[List[Dict[str, Any]]] = None,
91 | temperature: Optional[float] = None,
92 | max_tokens: Optional[int] = None,
93 | top_p: Optional[float] = None,
94 | random_seed: Optional[int] = None,
95 | stream: Optional[bool] = None,
96 | safe_prompt: Optional[bool] = False,
97 | tool_choice: Optional[Union[str, ToolChoice]] = None,
98 | response_format: Optional[Union[Dict[str, str], ResponseFormat]] = None,
99 | ) -> Dict[str, Any]:
100 | request_data: Dict[str, Any] = {
101 | "messages": self._parse_messages(messages),
102 | "safe_prompt": safe_prompt,
103 | }
104 |
105 | if model is not None:
106 | request_data["model"] = model
107 | else:
108 | if self._default_model is None:
109 | raise MistralException(message="model must be provided")
110 | request_data["model"] = self._default_model
111 |
112 | if tools is not None:
113 | request_data["tools"] = self._parse_tools(tools)
114 | if temperature is not None:
115 | request_data["temperature"] = temperature
116 | if max_tokens is not None:
117 | request_data["max_tokens"] = max_tokens
118 | if top_p is not None:
119 | request_data["top_p"] = top_p
120 | if random_seed is not None:
121 | request_data["random_seed"] = random_seed
122 | if stream is not None:
123 | request_data["stream"] = stream
124 |
125 | if tool_choice is not None:
126 | request_data["tool_choice"] = self._parse_tool_choice(tool_choice)
127 | if response_format is not None:
128 | request_data["response_format"] = self._parse_response_format(response_format)
129 |
130 | self._logger.debug(f"Chat request: {request_data}")
131 |
132 | return request_data
133 |
134 | def _process_line(self, line: str) -> Optional[Dict[str, Any]]:
135 | if line.startswith("data: "):
136 | line = line[6:].strip()
137 | if line != "[DONE]":
138 | json_streamed_response: Dict[str, Any] = orjson.loads(line)
139 | return json_streamed_response
140 | return None
141 |
142 |
143 | class MistralClient(ClientBase):
144 | """
145 |     Synchronous client for the Mistral API (ported from the official Python SDK)
146 | """
147 |
148 | def __init__(
149 | self,
150 | api_key: Optional[str] = os.environ.get("MISTRAL_API_KEY", None),
151 | endpoint: str = ENDPOINT,
152 | max_retries: int = 5,
153 | timeout: int = 120,
154 | ):
155 | super().__init__(endpoint, api_key, max_retries, timeout)
156 |
157 | self._client = Client(
158 | follow_redirects=True, timeout=self._timeout, transport=HTTPTransport(retries=self._max_retries)
159 | )
160 |
161 | def __del__(self) -> None:
162 | self._client.close()
163 |
164 | def _check_response_status_codes(self, response: Response) -> None:
165 | if response.status_code in RETRY_STATUS_CODES:
166 | raise MistralAPIStatusException.from_response(
167 | response,
168 | message=f"Status: {response.status_code}. Message: {response.text}",
169 | )
170 | elif 400 <= response.status_code < 500:
171 | if response.stream:
172 | response.read()
173 | raise MistralAPIException.from_response(
174 | response,
175 | message=f"Status: {response.status_code}. Message: {response.text}",
176 | )
177 | elif response.status_code >= 500:
178 | if response.stream:
179 | response.read()
180 | raise MistralException(
181 | message=f"Status: {response.status_code}. Message: {response.text}",
182 | )
183 |
184 | def _check_streaming_response(self, response: Response) -> None:
185 | self._check_response_status_codes(response)
186 |
187 | def _check_response(self, response: Response) -> Dict[str, Any]:
188 | self._check_response_status_codes(response)
189 |
190 | json_response: Dict[str, Any] = response.json()
191 |
192 | if "object" not in json_response:
193 | raise MistralException(message=f"Unexpected response: {json_response}")
194 | if "error" == json_response["object"]: # has errors
195 | raise MistralAPIException.from_response(
196 | response,
197 | message=json_response["message"],
198 | )
199 |
200 | return json_response
201 |
202 | def _request(
203 | self,
204 | method: str,
205 | json: Dict[str, Any],
206 | path: str,
207 | attempt: int = 1,
208 | ) -> Iterator[Dict[str, Any]]:
209 | accept_header = "application/json"
210 | headers = {
211 | "Accept": accept_header,
212 | "User-Agent": f"mistral-client-python/{self._version}",
213 | "Authorization": f"Bearer {self._api_key}",
214 | "Content-Type": "application/json",
215 | }
216 |
217 | url = posixpath.join(self._endpoint, path)
218 |
219 | self._logger.debug(f"Sending request: {method} {url} {json}")
220 |
221 |         new_json = json.copy()
222 |         # messages may still be pydantic models (e.g. mistral_dspy.ChatMessage); serialize them for httpx
223 |         new_json["messages"] = [
224 |             m.model_dump(exclude_none=True) if hasattr(m, "model_dump") else m for m in json["messages"]
225 |         ]
226 |
227 | response: Response
228 |
229 | try:
230 | response = self._client.request(
231 | method,
232 | url,
233 | headers=headers,
234 | json=new_json,
235 | )
236 |
237 | yield self._check_response(response)
238 |
239 | except ConnectError as e:
240 | raise MistralConnectionException(str(e)) from e
241 | except RequestError as e:
242 | raise MistralException(f"Unexpected exception ({e.__class__.__name__}): {e}") from e
243 | except JSONDecodeError as e:
244 | raise MistralAPIException.from_response(
245 | response,
246 | message=f"Failed to decode json body: {response.text}",
247 | ) from e
248 | except MistralAPIStatusException as e:
249 | attempt += 1
250 | if attempt > self._max_retries:
251 | raise MistralAPIStatusException.from_response(response, message=str(e)) from e
252 | backoff = 2.0**attempt # exponential backoff
253 | time.sleep(backoff)
254 |
255 | # Retry as a generator
256 | for r in self._request(method, json, path, attempt=attempt):
257 | yield r
258 |
259 | def chat(
260 | self,
261 | messages: List[Any],
262 | model: Optional[str] = None,
263 | tools: Optional[List[Dict[str, Any]]] = None,
264 | temperature: Optional[float] = None,
265 | max_tokens: Optional[int] = None,
266 | top_p: Optional[float] = None,
267 | random_seed: Optional[int] = None,
268 | safe_mode: bool = False,
269 | safe_prompt: bool = False,
270 | tool_choice: Optional[Union[str, ToolChoice]] = None,
271 | response_format: Optional[Union[Dict[str, str], ResponseFormat]] = None,
272 | ) -> ChatCompletionResponse:
273 | """A chat endpoint that returns a single response.
274 |
275 | Args:
276 |             model (str): the name of the model to chat with, e.g. mistral-tiny
277 |             messages (List[Any]): an array of messages to chat with, e.g.
278 |                 [{role: 'user', content: 'What is the best French cheese?'}]
279 |             tools (Optional[List[Function]], optional): a list of tools to use.
280 |             temperature (Optional[float], optional): the temperature to use for sampling, e.g. 0.5.
281 | max_tokens (Optional[int], optional): the maximum number of tokens to generate, e.g. 100. Defaults to None.
282 | top_p (Optional[float], optional): the cumulative probability of tokens to generate, e.g. 0.9.
283 | Defaults to None.
284 | random_seed (Optional[int], optional): the random seed to use for sampling, e.g. 42. Defaults to None.
285 | safe_mode (bool, optional): deprecated, use safe_prompt instead. Defaults to False.
286 | safe_prompt (bool, optional): whether to use safe prompt, e.g. true. Defaults to False.
287 |
288 | Returns:
289 | ChatCompletionResponse: a response object containing the generated text.
290 | """
291 | request = self._make_chat_request(
292 | messages,
293 | model,
294 | tools=tools,
295 | temperature=temperature,
296 | max_tokens=max_tokens,
297 | top_p=top_p,
298 | random_seed=random_seed,
299 | stream=False,
300 | safe_prompt=safe_mode or safe_prompt,
301 | tool_choice=tool_choice,
302 | response_format=response_format,
303 | )
304 |
305 | single_response = self._request("post", request, "v1/chat/completions")
306 |
307 | for response in single_response:
308 | return ChatCompletionResponse(**response)
309 |
310 | raise MistralException("No response received")
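311 | 
312 | 
313 | # --- Illustrative usage (a sketch; requires MISTRAL_API_KEY to be set) -------
314 | # client = MistralClient()  # picks the key up from the environment
315 | # response = client.chat(
316 | #     model="mistral-large-latest",
317 | #     messages=[ChatMessage(role="user", content="What is the best French cheese?")],
318 | # )
319 | # print(response.choices[0].message.content)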
--------------------------------------------------------------------------------
/src/lib/ported_exceptions.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | from typing import Any, Dict, Optional
4 |
5 | from httpx import Response
6 |
7 |
8 | class MistralException(Exception):
9 | """Base Exception class, returned when nothing more specific applies"""
10 |
11 | def __init__(self, message: Optional[str] = None) -> None:
12 |         super().__init__(message)
13 |
14 | self.message = message
15 |
16 | def __str__(self) -> str:
17 | msg = self.message or ""
18 | return msg
19 |
20 | def __repr__(self) -> str:
21 | return f"{self.__class__.__name__}(message={str(self)})"
22 |
23 |
24 | class MistralAPIException(MistralException):
25 | """Returned when the API responds with an error message"""
26 |
27 | def __init__(
28 | self,
29 | message: Optional[str] = None,
30 | http_status: Optional[int] = None,
31 | headers: Optional[Dict[str, Any]] = None,
32 | ) -> None:
33 | super().__init__(message)
34 | self.http_status = http_status
35 | self.headers = headers or {}
36 |
37 | @classmethod
38 | def from_response(
39 | cls, response: Response, message: Optional[str] = None
40 | ) -> MistralAPIException:
41 | return cls(
42 | message=message or response.text,
43 | http_status=response.status_code,
44 | headers=dict(response.headers),
45 | )
46 |
47 | def __repr__(self) -> str:
48 | return f"{self.__class__.__name__}(message={str(self)}, http_status={self.http_status})"
49 |
50 | class MistralAPIStatusException(MistralAPIException):
51 | """Returned when we receive a non-200 response from the API that we should retry"""
52 |
53 | class MistralConnectionException(MistralException):
54 | """Returned when the SDK can not reach the API server for any reason"""
55 |
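56 | # --- Illustrative usage: the intended retry-vs-fail split --------------------
57 | # A self-contained sketch (no network involved); flaky_call is hypothetical.
58 | if __name__ == "__main__":
59 |     def flaky_call() -> None:
60 |         raise MistralAPIStatusException(message="Status: 503", http_status=503)
61 | 
62 |     try:
63 |         flaky_call()
64 |     except MistralAPIStatusException as e:
65 |         print(f"retryable: {e!r}")  # 429/5xx responses: retry with backoff
66 |     except MistralAPIException as e:
67 |         print(f"permanent: {e!r}")  # other 4xx responses: surface to the caller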
--------------------------------------------------------------------------------
/src/lib/terminal.py:
--------------------------------------------------------------------------------
1 | def colorize(text, color, bold=False):
2 | colors = {
3 | "red": "\033[91m",
4 | "green": "\033[92m",
5 | "yellow": "\033[93m",
6 | "blue": "\033[94m",
7 | "magenta": "\033[95m",
8 | "cyan": "\033[96m",
9 | "white": "\033[97m",
10 | "reset": "\033[0m",
11 | }
12 |
13 | bold_code = "\033[1m" if bold else ""
14 | color_code = colors.get(color.lower(), "")
15 |
16 | return f"{bold_code}{color_code}{text}{colors['reset']}"
17 |
18 |
19 | # # Example usage
20 | # print(colorize("Hello, World!", "red", bold=True))
21 | # print(colorize("This is a green text.", "green"))
22 | # print(colorize("This is a bold blue text.", "blue", bold=True))
23 |
--------------------------------------------------------------------------------
/src/models.py:
--------------------------------------------------------------------------------
1 | from typing import Dict, List, Literal, Optional
2 | from pydantic import BaseModel
3 |
4 | from github.PullRequest import ReviewComment
5 |
6 | ticket_stages = ["backlog", "todo", "wip", "review", "done"]
7 |
8 |
9 | class Ticket(BaseModel):
10 | id: Optional[str] = None
11 | title: str
12 | description: str
13 | status: Optional[str] = None
14 | assignee_id: Optional[str] = None
15 |
16 |
17 | class ModifiedFile(BaseModel):
18 | filename: str
19 | status: str
20 | additions: int
21 | deletions: int
22 | changes: int
23 | patch: str
24 |
25 |
26 | # Todo: Use github.PullRequest class instead of this
27 | class PR(BaseModel):
28 | id: int
29 | ticket_id: Optional[str] = None
30 | assignee_id: Optional[str] = None
31 | title: str
32 | description: str
33 | files_changed: List[ModifiedFile] = []
34 | url: Optional[str] = None
35 |
36 |
37 | class CodeReview(BaseModel):
38 | pr: PR
39 | body: str
40 | event: Literal["APPROVE", "REQUEST_CHANGES", "COMMENT"]
41 | comments: List[ReviewComment] = []
42 |
43 |
44 | class Codebase(BaseModel):
45 | files: Dict[str, str]
46 |
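47 | # --- Illustrative usage: the shapes the agents pass around -------------------
48 | # Hand-written sample objects (not real repo data), just to show the schema.
49 | if __name__ == "__main__":
50 |     ticket = Ticket(title="Add logging", description="Use the logging module")
51 |     pr = PR(id=1, ticket_id=ticket.id, title=ticket.title, description="Adds logging")
52 |     review = CodeReview(pr=pr, body="LGTM", event="APPROVE")
53 |     print(review.event)  # -> "APPROVE"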
--------------------------------------------------------------------------------
/start_architecting.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import os
3 |
4 | from src.helpers.github import GHHelper
5 | from src.helpers.trello import TrelloHelper
6 |
7 | from src.agents.architect import Architect
8 |
9 | load_dotenv()
10 |
11 | gh_repo = os.getenv("GITHUB_REPO_URL")
12 | gh_api_token_reviewer = os.getenv("GITHUB_TOKEN_REVIEWER")
13 | openai_api_key = os.getenv("OPENAI_API_KEY")
14 | trello_api_key = os.getenv("TRELLO_API_KEY")
15 | trello_api_secret = os.getenv("TRELLO_API_SECRET")
16 | trello_token = os.getenv("TRELLO_TOKEN")
17 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
18 |
19 | if (
20 | gh_repo is None
21 | or gh_api_token_reviewer is None
22 | or openai_api_key is None
23 | or trello_api_key is None
24 | or trello_api_secret is None
25 | or trello_token is None
26 | or trello_board_id is None
27 | ):
28 |     print("Please run the init_connections.py script to set up the environment variables")
29 |     # Bail out early instead of crashing later with None credentials
30 |     exit(1)
31 |
32 | gh_helper = GHHelper(gh_api_token_reviewer, gh_repo)
33 | trello_helper = TrelloHelper(trello_api_key, trello_token, trello_board_id)
34 |
35 | architect = Architect(
36 | "Sophia",
37 | gh_helper=gh_helper,
38 | board_helper=trello_helper,
39 | )
40 |
41 | architect.run()
--------------------------------------------------------------------------------
/start_coding.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | from threading import Thread
3 | import os
4 |
5 | from src.helpers.github import GHHelper
6 | from src.helpers.trello import TrelloHelper
7 |
8 | from src.agents.intern import Intern
9 | from src.agents.reviewer import Reviewer
10 |
11 | load_dotenv()
12 |
13 | gh_repo = os.getenv("GITHUB_REPO_URL")
14 | gh_api_token_intern = os.getenv("GITHUB_TOKEN_INTERN")
15 | gh_api_token_reviewer = os.getenv("GITHUB_TOKEN_REVIEWER")
16 | trello_api_key = os.getenv("TRELLO_API_KEY")
17 | trello_api_secret = os.getenv("TRELLO_API_SECRET")
18 | trello_token = os.getenv("TRELLO_TOKEN")
19 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
20 |
21 | if (
22 | gh_repo is None
23 | or gh_api_token_intern is None
24 | or gh_api_token_reviewer is None
25 | or trello_api_key is None
26 | or trello_api_secret is None
27 | or trello_token is None
28 | or trello_board_id is None
29 | ):
30 |     print("Please run the init_connections.py script to set up the environment variables")
31 |     # Bail out early instead of crashing later with None credentials
32 |     exit(1)
33 |
34 | gh_helper_intern = GHHelper(gh_api_token_intern, gh_repo)
35 | gh_helper_reviewer = GHHelper(gh_api_token_reviewer, gh_repo)
36 | trello_helper = TrelloHelper(trello_api_key, trello_token, trello_board_id)
37 |
38 | intern = Intern("alex", gh_helper=gh_helper_intern, board_helper=trello_helper)
39 | reviewer = Reviewer(
40 | "charlie",
41 | gh_helper=gh_helper_reviewer,
42 | board_helper=trello_helper,
43 | )
44 |
45 | intern_thread = Thread(target=intern.run)
46 | reviewer_thread = Thread(target=reviewer.run)
47 |
48 | # Step 1: With user input (Streamlit), define tickets and push them to Trello's backlog
49 | 
50 | # Step 2: Let's get to work (n + 1 threads). Thread.start() may only be called
51 | # once per thread, so start each agent once and block on it, rather than
52 | # re-starting in a loop (which raises RuntimeError on the second pass).
53 | intern_thread.start()
54 | reviewer_thread.start()
55 | intern_thread.join()
56 | reviewer_thread.join()
--------------------------------------------------------------------------------
/test_gh_helper.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import os
3 |
4 | from src.helpers.github import GHHelper
5 |
6 | load_dotenv()
7 |
8 | gh_repo = os.getenv("GITHUB_REPO_URL")
9 | gh_api_token = os.getenv("GITHUB_TOKEN")
10 | if gh_repo is None or gh_api_token is None:
11 |     print("Please run the init_connections.py script to set up the environment variables")
12 |
13 | gh = GHHelper(gh_api_token, gh_repo)
14 |
15 | gh.list_open_prs()
16 |
17 | print(gh.get_entire_codebase())
18 |
--------------------------------------------------------------------------------
/test_trello_helper.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import os
3 |
4 | from src.models import Ticket
5 | from src.helpers.trello import TrelloHelper
6 |
7 | load_dotenv()
8 |
9 | trello_api_key = os.getenv("TRELLO_API_KEY")
10 | trello_token = os.getenv("TRELLO_TOKEN")
11 | trello_board_id = os.getenv("TRELLO_BOARD_ID")
12 |
13 | if trello_api_key is None or trello_token is None or trello_board_id is None:
14 |     print("Please run the init_connections.py script to set up the environment variables")
15 |
16 | trello_helper = TrelloHelper(trello_api_key, trello_token, trello_board_id)
17 |
18 | ## Step 0: Create an intern
19 | new_intern = trello_helper.create_intern("Test Intern")
20 |
21 | ## Step 1: Get the available "candidates" (interns) from Trello
22 | interns = trello_helper.get_intern_list()
23 | print("ok" if interns[-1] else "not ok")
24 |
25 | ## Step 2: Create a ticket and assign it to an intern
26 | new_ticket = Ticket(
27 | title="Test Ticket",
28 | description="This is a test ticket",
29 | assignee_id=new_intern,
30 | )
31 | created_ticket = trello_helper.push_tickets_to_backlog_and_assign([new_ticket])[-1]
32 | print("ok" if created_ticket.title == new_ticket.title else "not ok")
33 | trello_helper.move_to_todo(trello_helper.get_last_ticket().id)
34 |
35 | ## Step 3: Get the tickets from the To Do list
36 | tickets = trello_helper.get_tickets_todo_list()
37 | print("ok" if tickets[-1] else "not ok")
38 |
39 | ## Step 4: Cleanup
40 | trello_helper.delete_ticket(tickets[-1].id)
41 | trello_helper.fire_intern(new_intern)
42 | print("Done!")
43 |
--------------------------------------------------------------------------------