├── .env.example
├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── assets
│   └── diagram.svg
├── autolang
│   ├── __main__.py
│   ├── agent
│   │   ├── base.py
│   │   └── prompt.py
│   ├── auto.py
│   ├── executor.py
│   ├── learner.py
│   ├── planner.py
│   ├── printer.py
│   ├── reviewer.py
│   └── utils.py
├── requirements.txt
└── run_docker.sh
/.env.example:
--------------------------------------------------------------------------------
1 | # cp .env.example .env
2 | # Edit your .env file with your own values
3 | # DON'T COMMIT OR PUSH .env FILE!
4 |
5 | # API CONFIG
6 | OPENAI_API_KEY=
7 | OPENAI_API_MODEL=gpt-3.5-turbo # Options: gpt-4, gpt-4-32k, gpt-3.5-turbo, etc. (must be a chat model, since the code uses ChatOpenAI)
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # Unit test / coverage reports
7 | htmlcov/
8 | .tox/
9 | .nox/
10 | .coverage
11 | .coverage.*
12 | .cache
13 | nosetests.xml
14 | coverage.xml
15 | *.cover
16 | *.py,cover
17 | .hypothesis/
18 | .pytest_cache/
19 | cover/
20 |
21 | # Environments
22 | .env
23 | .venv
24 | env/
25 | venv/
26 | ENV/
27 | env.bak/
28 | venv.bak/
29 |
30 | # Visual Studio Code
31 | .vscode/*
32 | !.vscode/settings.json
33 | !.vscode/tasks.json
34 | !.vscode/launch.json
35 | !.vscode/extensions.json
36 | !.vscode/*.code-snippets
37 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.11
2 |
3 | ARG openai_key
4 |
5 | ENV PYTHONUNBUFFERED 1
6 | ENV OPENAI_API_KEY $openai_key
7 |
8 | RUN python -m pip install --upgrade pip
9 |
10 | WORKDIR /app
11 |
12 | COPY requirements.txt .
13 | RUN pip install -r requirements.txt
14 |
15 | COPY autolang autolang
16 | CMD ["python3", "-u", "-m", "autolang"]
17 |
--------------------------------------------------------------------------------
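
A note on the Dockerfile above: the API key is injected at build time (via `ARG openai_key` and `ENV OPENAI_API_KEY`), which bakes it into the image layers. If that is a concern, an alternative sketch (illustrative commands, not part of the repo) is to build without the arg and pass the key at run time instead:

```sh
docker build -t autolang .
docker run -i -e OPENAI_API_KEY="$OPENAI_API_KEY" autolang
```
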
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License
2 |
3 | Copyright (c) Alvaro Sevilla
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in
13 | all copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
21 | THE SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Autolang
2 |
3 | Another take on BabyAGI, focused on workflows that complete. Powered by langchain.
4 |
5 | Here's a simple demo: https://twitter.com/pictobit/status/1645504308874563584
6 |
7 | ## Running
8 |
9 | To run Autolang, follow these steps:
10 |
11 | (Optional) Customize the [tools provided to the agent](autolang/__main__.py).
12 |
13 | Install dependencies:
14 | ```sh
15 | pip install -r requirements.txt
16 | ```
17 |
18 | Copy the `.env.example` file to `.env`, then edit it:
19 | ```sh
20 | cp .env.example .env
21 | ```
22 |
23 | Run the script:
24 | ```sh
25 | python -m autolang
26 | ```
27 |
28 | Alternatively, run with Docker:
29 | ```sh
30 | ./run_docker.sh
31 | ```
32 | ## Architecture
33 |
34 | Autolang uses four main components:
35 |
36 | ![Architecture diagram](assets/diagram.svg)
37 |
38 |
39 |
40 | ### Planner
41 | Runs once at the start: it devises a strategy to solve the problem and produces a task list.
42 |
43 | ### Executor
44 | A custom langchain agent that implements ReAct to solve a single task from the plan. It can be given any tools in the langchain format.
45 |
46 | ### Learner
47 | Here's the interesting part: the system holds an information context string, which starts empty.
48 | After each step, the learner merges the latest result into this context, which acts as a sort of medium-term memory.
49 |
50 | ### Reviewer
51 | Assesses the task list, based on the completed tasks and the info context generated so far, and reprioritizes the pending tasks accordingly. A simplified sketch of the full loop appears at the end of this README.
52 |
53 | ## Next steps
54 | Right now, the main limitation is the size of the info context. As a next step, I'm planning to add a "long-term memory agent" that extracts information from the context and replaces it with a key. The executor agent will be given a tool to retrieve these saved snippets when required.
55 |
56 |
57 |
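58 | ## Appendix: the core loop
59 |
60 | For orientation, here is a simplified sketch of the loop that `autolang/auto.py` runs. The `planner`, `executor`, `learner` and `reviewer` objects stand in for the chains built in `AutoAgent.from_llm_and_objectives`, so treat this as pseudocode rather than a runnable script:
61 |
62 | ```python
63 | from collections import deque
64 |
65 | tasks = deque(planner.generate_tasks())  # the planner runs once, up front
66 | context = ""                             # medium-term memory, starts empty
67 | completed = []
68 |
69 | while tasks:
70 |     task = tasks.popleft()                                       # take the top-priority task
71 |     result = executor.execute_task(task["task_name"], context)  # ReAct agent does the work
72 |     completed.append(task)
73 |     context = learner.update_memory(                             # merge the result into memory
74 |         memory=context,
75 |         observation=result,
76 |         completed_tasks=[t["task_name"] for t in completed],
77 |         pending_tasks=[t["task_name"] for t in tasks],
78 |     )
79 |     tasks = reviewer.review_tasks(                               # reprioritize what remains
80 |         this_task_id=len(completed),
81 |         completed_tasks=[t["task_name"] for t in completed],
82 |         pending_tasks=[t["task_name"] for t in tasks],
83 |         context=context,
84 |     )
85 |
86 | final_answer = executor.execute_task("Provide the final answer", context)
87 | ```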
--------------------------------------------------------------------------------
/assets/diagram.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
--------------------------------------------------------------------------------
/autolang/__main__.py:
--------------------------------------------------------------------------------
1 | import os
2 | import faiss
3 | import readline # for better CLI experience
4 | from typing import List
5 | from langchain import FAISS, InMemoryDocstore
6 | from langchain.agents import Tool, load_tools
7 | from langchain.chat_models import ChatOpenAI
8 | from langchain.embeddings import OpenAIEmbeddings
9 | from langchain.llms.base import BaseLLM
10 |
11 | from .auto import AutoAgent
12 | from dotenv import load_dotenv
13 |
14 | # Load default environment variables (.env)
15 | load_dotenv()
16 |
17 | # API Keys
18 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", default="")
19 | assert OPENAI_API_KEY, "OPENAI_API_KEY environment variable is missing from .env"
20 |
21 | OPENAI_API_MODEL = os.getenv("OPENAI_API_MODEL", default="gpt-3.5-turbo")
22 | assert OPENAI_API_MODEL, "OPENAI_API_MODEL environment variable is missing from .env"
23 |
24 | objective = input('What is my purpose? ')
25 |
26 |
27 | llm: BaseLLM = ChatOpenAI(model_name=OPENAI_API_MODEL, temperature=0, request_timeout=120) # type: ignore
28 | embeddings = OpenAIEmbeddings() # type: ignore
29 |
30 | """
31 | Customize the tools the agent uses here. Here are some others you can add:
32 |
33 | os.environ["WOLFRAM_ALPHA_APPID"] = ""
34 | os.environ["SERPER_API_KEY"] = ""
35 |
36 | tool_names = ["terminal", "requests", "python_repl", "human", "google-serper", "wolfram-alpha"]
37 | """
38 |
39 | tool_names = ["python_repl", "human"]
40 |
41 | tools: List[Tool] = load_tools(tool_names, llm=llm) # type: ignore
42 |
43 | index = faiss.IndexFlatL2(1536)
44 | docstore = InMemoryDocstore({})
45 | vectorstore = FAISS(embeddings.embed_query, index, docstore, {})
46 |
47 | agent = AutoAgent.from_llm_and_objectives(llm, objective, tools, vectorstore, verbose=True)
48 |
49 | agent.run()
50 |
--------------------------------------------------------------------------------
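
A note on the tool list in `__main__.py` above: besides the names accepted by `load_tools`, any plain function can be wrapped as a langchain `Tool` and appended to `tools`. A hypothetical sketch (the `word_count` helper is illustrative, not part of the repo):

```python
from langchain.agents import Tool

def word_count(text: str) -> str:
    """Toy helper: count the words in the input text."""
    return str(len(text.split()))

tools.append(Tool(
    name="word_count",
    func=word_count,
    description="Counts the words in the given text. Input should be plain text.",
))
```
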
/autolang/agent/base.py:
--------------------------------------------------------------------------------
1 | """An agent designed to hold a conversation in addition to using tools."""
2 | from __future__ import annotations
3 |
4 | import re
5 | from typing import Any, List, Optional, Sequence, Tuple
6 |
7 | from langchain.agents.agent import Agent
8 | from langchain.callbacks.base import BaseCallbackManager
9 | from langchain.chains import LLMChain
10 | from langchain.llms.base import BaseLLM
11 | from langchain.prompts import PromptTemplate
12 | from langchain.tools.base import BaseTool
13 |
14 | from .prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
15 |
16 | class AutonomousAgent(Agent):
17 | """An agent designed to execute a single task within a larger workflow."""
18 |
19 | ai_prefix: str = "Jarvis"
20 |
21 | @property
22 | def _agent_type(self) -> str:
23 | return "autonomous"
24 |
25 | @property
26 | def observation_prefix(self) -> str:
27 | return "Observation: "
28 |
29 | @property
30 | def llm_prefix(self) -> str:
31 | return "Thought:"
32 |
33 | @property
34 | def finish_tool_name(self) -> str:
35 | return self.ai_prefix
36 |
37 | @classmethod
38 | def create_prompt(
39 | cls,
40 | tools: Sequence[BaseTool],
41 | prefix: str = PREFIX,
42 | suffix: str = SUFFIX,
43 | format_instructions: str = FORMAT_INSTRUCTIONS,
44 | ai_prefix: str = "AI",
45 | human_prefix: str = "Human",
46 | objective: Optional[str] = None,
47 | input_variables: Optional[List[str]] = None,
48 | ) -> PromptTemplate:
49 | tool_strings = "\n".join(
50 | [f"> {tool.name}: {tool.description}" for tool in tools]
51 | )
52 | tool_names = ", ".join([tool.name for tool in tools])
53 | prefix = prefix.format(objective=objective)
54 | format_instructions = format_instructions.format(tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix)
55 | template = "\n\n".join([prefix, tool_strings, format_instructions, suffix])
56 | input_variables = ["input", "context", "agent_scratchpad"]
57 | return PromptTemplate(template=template, input_variables=input_variables)
58 |
59 | def _extract_tool_and_input(self, llm_output: str) -> Optional[Tuple[str, str]]:
60 | if f"{self.ai_prefix}:" in llm_output:
61 | return self.ai_prefix, llm_output.split(f"{self.ai_prefix}:")[-1].strip()
62 | regex = r"Action: (.*?)[\n]*Action Input: (.*)"
63 | match = re.search(regex, llm_output)
64 | if not match:
65 | raise ValueError(f"Could not parse LLM output: `{llm_output}`")
66 | action = match.group(1)
67 | action_input = match.group(2)
68 | return action.strip(), action_input.strip(" ").strip('"')
69 |
70 | @classmethod
71 | def from_llm_and_tools(
72 | cls,
73 | llm: BaseLLM,
74 | tools: Sequence[BaseTool],
75 | objective: Optional[str] = None,
76 | callback_manager: Optional[BaseCallbackManager] = None,
77 | prefix: str = PREFIX,
78 | suffix: str = SUFFIX,
79 | format_instructions: str = FORMAT_INSTRUCTIONS,
80 | ai_prefix: str = "Jarvis",
81 | human_prefix: str = "Human",
82 | input_variables: Optional[List[str]] = None,
83 | **kwargs: Any,
84 | ) -> "AutonomousAgent":
85 | """Construct an agent from an LLM and tools."""
86 | cls._validate_tools(tools)
87 | prompt = cls.create_prompt(
88 | tools,
89 | ai_prefix=ai_prefix,
90 | human_prefix=human_prefix,
91 | prefix=prefix,
92 | suffix=suffix,
93 | objective=objective,
94 | format_instructions=format_instructions,
95 | input_variables=input_variables,
96 | )
97 | llm_chain = LLMChain(
98 | llm=llm,
99 | prompt=prompt,
100 | callback_manager=callback_manager, # type: ignore
101 | )
102 | tool_names = [tool.name for tool in tools]
103 | return cls(llm_chain=llm_chain, allowed_tools=tool_names, ai_prefix=ai_prefix, **kwargs)
104 |
--------------------------------------------------------------------------------
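
The `Action:`/`Action Input:` regex in `_extract_tool_and_input` above assumes ReAct-formatted completions. A minimal, self-contained illustration of the same parsing on a made-up LLM output:

```python
import re

llm_output = (
    "Thought: Do I need to use a tool? Yes\n"
    "Action: python_repl\n"
    "Action Input: print(2 + 2)"
)
match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output)
assert match is not None
print(match.group(1).strip())                # -> python_repl
print(match.group(2).strip(" ").strip('"'))  # -> print(2 + 2)
```
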
/autolang/agent/prompt.py:
--------------------------------------------------------------------------------
1 | from typing import List, Optional, Sequence
2 |
3 | from langchain.prompts import PromptTemplate
4 | from langchain.tools.base import BaseTool
5 |
6 |
7 | PREFIX = """Jarvis is a general purpose AI model trained by OpenAI.
8 |
9 | Jarvis is tasked with executing a single task within the context of a larger workflow trying to accomplish the following objective: {objective}. It should focus only on the current task, and should not attempt to perform further work.
10 |
11 | Jarvis is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions.
12 |
13 | Overall, Jarvis is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
14 | Jarvis is not having a conversation with a user, but rather producing the output of executing a task within a larger workflow.
15 |
16 | TOOLS:
17 | ------
18 |
19 | Jarvis has access to the following tools:"""
20 | FORMAT_INSTRUCTIONS = """
21 | Thought Process:
22 | ----------------
23 |
24 | Jarvis always uses the following thought process and format to execute its tasks:
25 |
26 | ```
27 | Thought: Do I need to use a tool? Yes
28 | Action: the action to take, should be one of [{tool_names}]
29 | Action Input: the input to the action
30 | Observation: the result of the action
31 | ```
32 |
33 | When Jarvis has a response to say to the Human, or if it doesn't need to use a tool, it always uses the format:
34 |
35 | ```
36 | Thought: Do I need to use a tool? No
37 | {ai_prefix}: [your response here]
38 | ```"""
39 |
40 | SUFFIX = """Begin!
41 |
42 | Current context:
43 | {context}
44 |
45 | Current task: {input}
46 | {agent_scratchpad}"""
47 |
48 |
49 |
--------------------------------------------------------------------------------
/autolang/auto.py:
--------------------------------------------------------------------------------
1 | from collections import deque
2 | from typing import Any, Dict, List, Optional
3 | from pydantic import BaseModel, Field
4 | from langchain.agents import Tool
5 | from langchain.llms.base import BaseLLM
6 | from langchain.vectorstores import VectorStore
7 |
8 | from .executor import ExecutionAgent
9 | from .planner import PlanningChain
10 | from .reviewer import ReviewingChain
11 | from .learner import LearningChain
12 | from .printer import print_objective, print_next_task, print_task_list, print_task_result, print_end
13 |
14 | class AutoAgent(BaseModel):
15 |
16 | planning_chain: PlanningChain = Field(...)
17 | reviewing_chain: ReviewingChain = Field(...)
18 | execution_agent: ExecutionAgent = Field(...)
19 | learning_chain: LearningChain = Field(...)
20 |
21 | objective: str = Field(alias="objective")
22 | vectorstore: Any = Field(...)
23 |
24 |     memory: str = Field(default="")
25 | complete_list: List[Dict[str, str]] = Field(default_factory=list)
26 | pending_list: deque[Dict[str, str]] = Field(default_factory=deque)
27 |
28 | @classmethod
29 | def from_llm_and_objectives(
30 | cls,
31 | llm: BaseLLM,
32 | objective: str,
33 | tools: List[Tool],
34 | vectorstore: VectorStore,
35 | verbose: bool = False,
36 | ) -> "AutoAgent":
37 | planning_chain = PlanningChain.from_llm(llm, objective, tools=tools, verbose=verbose)
38 | reviewing_chain = ReviewingChain.from_llm(llm, objective, verbose=verbose)
39 | execution_agent = ExecutionAgent.from_llm(llm, objective, tools, verbose=verbose)
40 | learning_chain = LearningChain.from_llm(llm, objective, verbose=verbose)
41 | return cls(
42 | objective=objective,
43 | planning_chain=planning_chain,
44 |             reviewing_chain=reviewing_chain,
45 | execution_agent=execution_agent,
46 | learning_chain=learning_chain,
47 | vectorstore=vectorstore,
48 | )
49 |
50 | def add_task(self, task: Dict):
51 | self.pending_list.append(task)
52 |
53 | def run(self, max_iterations: Optional[int] = None):
54 | num_iters = 0
55 | print_objective(self.objective)
56 |
57 | self.pending_list = deque(self.planning_chain.generate_tasks())
58 |
59 | while len(self.pending_list) > 0 and (max_iterations is None or num_iters < max_iterations):
60 | num_iters += 1
61 | print_task_list(self.complete_list, self.pending_list)
62 |
63 | task = self.pending_list.popleft()
64 | print_next_task(task)
65 |
66 | result = self.execution_agent.execute_task(task["task_name"], self.memory)
67 | if not result: result = "Empty result"
68 | print_task_result(result)
69 |
70 | self.complete_list.append({"task_id": task["task_id"], "task_name": task["task_name"]})
71 | self.memory = self.learning_chain.update_memory(
72 | memory=self.memory,
73 | observation=result,
74 | completed_tasks=[t["task_name"] for t in self.complete_list],
75 | pending_tasks=[t["task_name"] for t in self.pending_list],
76 | )
77 | self.pending_list = self.reviewing_chain.review_tasks(
78 | this_task_id=len(self.complete_list),
79 | completed_tasks=[t["task_name"] for t in self.complete_list],
80 | pending_tasks=[t["task_name"] for t in self.pending_list],
81 | context=self.memory)
82 |
83 | final_answer = self.execution_agent.execute_task("Provide the final answer", self.memory)
84 | print_end(final_answer)
85 |
--------------------------------------------------------------------------------
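
One thing the `run` method above supports that `__main__.py` doesn't use: an optional `max_iterations` cap. A short usage sketch, assuming the same setup as `autolang/__main__.py`:

```python
agent = AutoAgent.from_llm_and_objectives(llm, objective, tools, vectorstore, verbose=True)
agent.run(max_iterations=10)  # stop the plan/execute/review loop after 10 tasks; the final answer is still produced
```
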
/autolang/executor.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | from pydantic import BaseModel, Field
3 | from langchain.agents import AgentExecutor, Tool
4 | from langchain.llms.base import BaseLLM
5 |
6 | from .agent.base import AutonomousAgent
7 |
8 | class ExecutionAgent(BaseModel):
9 |
10 | agent: AgentExecutor = Field(...)
11 |
12 | @classmethod
13 | def from_llm(cls, llm: BaseLLM, objective: str, tools: List[Tool], verbose: bool = True) -> "ExecutionAgent":
14 | agent = AutonomousAgent.from_llm_and_tools(llm=llm, tools=tools, objective=objective, verbose=verbose)
15 | agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=verbose)
16 | return cls(agent=agent_executor)
17 |
18 | def execute_task(self, task: str, context: str) -> str:
19 | for i in range(3):
20 | try:
21 | return self.agent.run({"input": task, "context": context})
22 | except ValueError:
23 | print(f"Value error running executor agent. Will retry {2-i} times")
24 | return "Failed to execute task."
25 |
--------------------------------------------------------------------------------
/autolang/learner.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | from langchain import LLMChain, PromptTemplate
3 | from langchain.llms.base import BaseLLM
4 |
5 | learning_template = """Cass is an AI specialized in information consolidation, part of a larger system that is solving a complex problem in multiple steps. Cass is provided with the current information context and the result of the latest step, and updates the context to incorporate that result.
6 | Cass is also provided with the list of completed and still-pending tasks.
7 | The rest of the system receives the task lists and context in the same way, so the context should never contain the tasks themselves.
8 | The information context is the only persistent memory the system has. After every step, it must be updated with all relevant information, such that it contains everything needed to complete the objective.
9 |
10 | The ultimate objective is: {objective}.
11 | Completed tasks: {completed_tasks}
12 | The last task output was:
13 | {last_output}
14 |
15 | The list of pending tasks: {pending_tasks}
16 |
17 | Current context to update:
18 | {context}
19 |
20 | Cass will generate an updated context. This context will replace the current context.
21 | Cass: """
22 |
23 | learning_prompt = lambda objective: PromptTemplate(
24 | template=learning_template,
25 | partial_variables={"objective": objective},
26 | input_variables=["completed_tasks", "pending_tasks", "last_output", "context"],
27 | )
28 |
29 | class LearningChain(LLMChain):
30 |
31 | @classmethod
32 | def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> "LearningChain":
33 | return cls(prompt=learning_prompt(objective), llm=llm, verbose=verbose)
34 |
35 | def update_memory(self, memory: str, observation: str, completed_tasks: List[str], pending_tasks: List[str]):
36 | return self.run(
37 | completed_tasks=completed_tasks,
38 | pending_tasks=pending_tasks,
39 | last_output=observation,
40 | context=memory
41 | )
42 |
43 |
44 |
--------------------------------------------------------------------------------
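
Because `learning_prompt` above carries the objective as a partial variable, the exact consolidation prompt can be inspected without an LLM call. An illustrative check (run from the repo root so `autolang` is importable; the objective and task strings are made up):

```python
from autolang.learner import learning_prompt

prompt = learning_prompt("Write a haiku about spring")
print(prompt.format(
    completed_tasks=["Brainstorm imagery"],
    pending_tasks=["Draft the haiku", "Review the draft"],
    last_output="Imagery: cherry blossoms, melting snow, returning birds.",
    context="",
))  # prints the full text that LearningChain sends to the model
```
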
/autolang/planner.py:
--------------------------------------------------------------------------------
1 | from typing import List, Dict
2 | from pydantic import Field
3 | from langchain import LLMChain, PromptTemplate
4 | from langchain.agents import Tool
5 | from langchain.llms.base import BaseLLM
6 |
7 | from .utils import parse_task_list
8 |
9 | planning_template = """You are a task creation AI tasked with generating a full, exhaustive list of tasks to accomplish the following objective: {objective}.
10 | The AI system that will execute these tasks will have access to the following tools:
11 | {tool_strings}
12 | Each task may only use a single tool, but not all tasks need to use one. The task should not specify the tool. The final task should achieve the objective.
13 | Each task will be performed by a capable agent, so do not break the problem down into too many tasks.
14 | Aim to keep the list short, and never generate more than 5 tasks. Your response should list each task on its own line.
15 | Use the following format:
16 | 1. First task
17 | 2. Second task
18 | """
19 |
20 | planning_prompt = lambda objective: PromptTemplate(
21 | template=planning_template,
22 | partial_variables={"objective": objective},
23 | input_variables=["tool_strings"],
24 | )
25 |
26 |
27 | class PlanningChain(LLMChain):
28 |
29 | tool_strings: str = Field(...)
30 |
31 | @classmethod
32 | def from_llm(cls, llm: BaseLLM, objective: str, tools: List[Tool] , verbose: bool = True) -> "PlanningChain":
33 | tool_strings = "\n".join([f"> {tool.name}: {tool.description}" for tool in tools])
34 | return cls(prompt=planning_prompt(objective), llm=llm, verbose=verbose, tool_strings=tool_strings)
35 |
36 | def generate_tasks(self) -> List[Dict]:
37 | response = self.run(tool_strings=self.tool_strings)
38 | return parse_task_list(response)
39 |
--------------------------------------------------------------------------------
/autolang/printer.py:
--------------------------------------------------------------------------------
1 | def print_objective(objective):
2 | color_print("*****Objective*****", 4)
3 | print(objective)
4 |
5 | def print_task_list(complete_list, pending_list):
6 | color_print("*****TASK LIST*****", 5)
7 | print("Completed: ")
8 | for task in complete_list:
9 | print(str(task["task_id"]) + ": " + task["task_name"])
10 |
11 | print("\nPending: ")
12 | for task in pending_list:
13 | print(str(task["task_id"]) + ": " + task["task_name"])
14 |
15 | def print_next_task(task):
16 | color_print("*****NEXT TASK*****", 2)
17 | print(str(task["task_id"]) + ": " + task["task_name"])
18 |
19 | def print_task_result(result):
20 | color_print("*****TASK RESULT*****", 3)
21 | print(result)
22 |
23 | def print_end(final_result):
24 | color_print("*****TASK ENDING*****", 1)
25 | print(final_result)
26 |
27 | # Leave at the end, as the escape codes somehow screw up indenting in the rest of the file
28 | def color_print(text: str, color: int):
29 |     # \033[9Xm selects a bright ANSI foreground color, \033[1m bold, \033[0m resets
30 |     print(f"\n\033[9{color}m\033[1m{text}\033[0m\033[0m\n")
31 |
--------------------------------------------------------------------------------
/autolang/reviewer.py:
--------------------------------------------------------------------------------
1 | from collections import deque
2 | from typing import List, Dict
3 | from langchain import LLMChain, PromptTemplate
4 | from langchain.llms.base import BaseLLM
5 |
6 | from .utils import parse_task_list
7 |
8 | reviewing_template = """Albus is a task reviewing and prioritization AI, tasked with cleaning the formatting of and reprioritizing the following tasks: {pending_tasks}.
9 | Albus is provided with the list of completed tasks, the current pending tasks, and the information context that has been generated so far by the system.
10 |
11 | Albus will decide whether the completed tasks and context are enough to generate a final answer. If so, Albus will signal this using this exact format:
12 | Review: Can answer
13 |
14 | Albus will never generate the final answer.
15 | If there is not enough information to answer, Albus will generate a new list of tasks. The tasks will be ordered by priority, with the most important task first. The tasks will be numbered, starting with {next_task_id}. The following format will be used:
16 | Review: Must continue
17 | #. First task
18 | #. Second task
19 |
20 | Albus will use the current pending tasks to generate this list, but may remove tasks that are no longer needed, or add new ones if strictly necessary.
21 |
22 | The ultimate objective is: {objective}.
23 | The following tasks have already been completed: {completed_tasks}.
24 | This is the information context generated so far:
25 | {context}
26 | """
27 |
28 | reviewing_prompt = lambda objective: PromptTemplate(
29 | template=reviewing_template,
30 | partial_variables={"objective": objective},
31 | input_variables=["completed_tasks", "pending_tasks", "context", "next_task_id"],
32 | )
33 |
34 | class ReviewingChain(LLMChain):
35 |
36 | @classmethod
37 | def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> "ReviewingChain":
38 | return cls(prompt=reviewing_prompt(objective), llm=llm, verbose=verbose)
39 |
40 | def review_tasks(self, this_task_id: int, completed_tasks: List[str], pending_tasks: List[str], context: str) -> deque[Dict]:
41 | next_task_id = int(this_task_id) + 1
42 | response = self.run(completed_tasks=completed_tasks, pending_tasks=pending_tasks, context=context, next_task_id=next_task_id)
43 | return deque(parse_task_list(response))
44 |
--------------------------------------------------------------------------------
/autolang/utils.py:
--------------------------------------------------------------------------------
1 |
2 | def parse_task_list(response):
3 | new_tasks = response.split('\n')
4 | prioritized_task_list = []
5 | for task_string in new_tasks:
6 | if not task_string.strip(): continue
7 | task_parts = task_string.strip().split(".", 1)
8 | if len(task_parts) == 2:
9 | task_id = task_parts[0].strip()
10 | task_name = task_parts[1].strip()
11 | prioritized_task_list.append({"task_id": task_id, "task_name": task_name})
12 | return prioritized_task_list
13 |
--------------------------------------------------------------------------------
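
`parse_task_list` above handles both the planner's numbered output and the reviewer's: a line with no `.` to split on, such as the reviewer's `Review: Must continue` status line, is silently skipped. A quick illustration with made-up reviewer output:

```python
from autolang.utils import parse_task_list

review_output = """Review: Must continue
3. Verify the draft against the objective
4. Provide the final answer"""

print(parse_task_list(review_output))
# [{'task_id': '3', 'task_name': 'Verify the draft against the objective'},
#  {'task_id': '4', 'task_name': 'Provide the final answer'}]
```
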
/requirements.txt:
--------------------------------------------------------------------------------
1 | faiss_cpu==1.7.3
2 | langchain==0.0.136
3 | pydantic==1.10.7
4 | openai==0.27.4
5 | python-dotenv==1.0.0
--------------------------------------------------------------------------------
/run_docker.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | docker run -i $(docker build -q . --build-arg openai_key="$OPENAI_API_KEY")
--------------------------------------------------------------------------------