4 |
5 | **Dev Assistant** is a Python project that demonstrates an intelligent agent capable of performing tasks, learning from its environment, and evaluating its progress towards a given objective. The agent is composed of several modules, each responsible for a specific aspect of the agent's behavior.
6 |
7 | The agent operates on an objective provided by the user and employs a range of tools to achieve the desired outcome.
8 |
9 | The assistant is particularly useful for tasks that produce multiple files, and it is designed to streamline a developer's workflow.
10 |
11 | ## Key Components
12 |
13 | - `ReasoningModule`: Generates and prioritizes tasks based on the agent's objective and current state.
14 | - `PerceptionModule`: Pre-processes tasks and results so they are easier for the agent to understand and act on.
15 | - `ExecutionModule`: Executes tasks using the available tools and returns the results.
16 | - `LearningModule`: Learns from observations and adjusts the agent's behavior to improve efficiency.
17 | - `MemoryModule`: Stores and retrieves relevant information based on the agent's tasks and objectives.
18 | - `EvaluationModule`: Evaluates the agent's progress towards its objective and determines whether it has been achieved.
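
At a high level, the `AgentOrchestrator` wires these modules into a loop: pull the highest-priority task, execute it, store the result, and replan. Below is a condensed sketch of the loop implemented in `AgentOrchestrator._call` in `orchestrator.py` (simplified; the real loop also prints progress, tracks token usage, and enforces `max_iterations`):

```python
# Condensed from AgentOrchestrator._call in orchestrator.py.
def run_agent(orchestrator, objective: str) -> None:
    orchestrator.memory_module.objective = objective
    orchestrator.reasoning_module.initialize_tasks()
    while orchestrator.reasoning_module.task_list:
        task = orchestrator.reasoning_module.task_list.popleft()   # 1. pull the next task
        result = orchestrator.execution_module.execute(task)       # 2. execute it with the available tools
        orchestrator.reasoning_module.completed_task_list.append(task)
        orchestrator.memory_module.store_result(result, task)      # 3. remember the outcome
        orchestrator.reasoning_module.update_tasks(task, result)   # 4. create new tasks and reprioritize
```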
19 |
20 | ## How to Use
21 |
22 | To set up the project, follow these steps:
23 |
24 | 1. Clone the repository to your local machine.
25 | 2. Install the required dependencies by running `make install`.
26 | 3. Set up the necessary environment variables in a `.envrc` file; you will need to provide your OpenAI API key (see the example below).
27 | 4. Run the project using the command `make docker` or `make`.
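
If you use direnv (the usual consumer of `.envrc` files), a minimal file could look like this; `OPENAI_API_KEY` is the variable name the OpenAI client reads:

```bash
# .envrc -- run `direnv allow` after editing
export OPENAI_API_KEY="sk-..."
```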
28 |
29 | ## Running the Project
30 |
31 | You can run the project in different modes:
32 |
33 | - To run the project with a specific objective, use the command `python -u -m main --obj "Your objective here"`.
34 | - To run the project in verbose mode, add the `--verbose` flag to the command.
35 | - To run the project with a visualizer, add the `--visualizer` flag to the command.
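
For example, to run the agent on a hypothetical objective with both flags enabled:

```bash
python -u -m main --obj "Write a CLI tool that converts CSV files to JSON" --verbose --visualizer
```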
36 |
37 |
38 |
39 | > **WARNING:** The agent is equipped with tools that can modify the machine it is running on. It is strongly recommended to run the agent inside a Docker container; start one with the `make docker` command.
40 |
41 |
42 |
43 | ## Tools
44 |
45 | The assistant uses several tools to complete tasks, including:
46 |
47 | - Python REPL
48 | - Bash commands
49 | - File manipulation (read, write, delete, etc.)
50 | - GitHub integration
51 | - Web scraping
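
The execution agent invokes these tools by emitting JSON that names an action and its input; the schema is generated from the Guardrails rail spec in `modules/execution.py`. A request to run a shell command, for instance, would look roughly like this (values are illustrative):

```json
{
  "action": "bash",
  "bash": {
    "action_input": "ls -la"
  }
}
```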
52 |
53 | ## Structure
54 |
55 | The project consists of several Python files, each containing a specific module or class; the individual modules live in the modules/ directory:
56 |
57 | - orchestrator.py: Contains the main AgentOrchestrator class, which coordinates the different modules to achieve the agent's objective.
58 | - main.py: The main script that runs the agent and handles command-line arguments.
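
An abridged layout of the repository, inferred from the files and imports shown in this document:

```
.
├── main.py
├── orchestrator.py
└── modules/
    ├── evaluation.py
    ├── execution.py
    ├── execution_tools.py
    ├── learning.py
    ├── memory.py
    ├── perception.py
    ├── reasoning.py
    └── tools/
        ├── google.js
        └── package-lock.json
```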
59 |
60 | ## Future Improvements
61 |
62 | - Improve the agent's ability to handle more complex objectives and tasks.
63 | - Add more tools and capabilities to the ExecutionModule.
64 | - Enhance the agent's learning and adaptation capabilities.
65 | - Extend the visualizer to better display the agent's progress and decision-making process.
66 |
67 | ## Contributing
68 |
69 | If you'd like to contribute to the project, feel free to submit a pull request or open an issue on the repository.
70 |
71 | ## Links
72 |
73 | - [Auto-GPT](https://github.com/Torantulino/Auto-GPT)
74 | - [babyAGI](https://github.com/yoheinakajima/babyagi)
75 | - [Llama Index](https://github.com/jerryjliu/llama_index)
76 | - [langchain](https://github.com/hwchase17/langchain)
77 |
78 | ## License
79 |
80 | This project is licensed under the MIT License.
81 |
--------------------------------------------------------------------------------
/modules/reasoning.py:
--------------------------------------------------------------------------------
1 | from collections import deque
3 | from langchain.chains import LLMChain
4 | from langchain.llms import BaseLLM
5 | from langchain.prompts import PromptTemplate
6 | from modules.memory import MemoryModule
7 | from typing import Dict, List
8 |
9 |
10 | class ReasoningModule:
11 |     def __init__(self, llm: BaseLLM, memory_module: MemoryModule, verbose: bool = True):
12 | self.task_list = deque()
13 | self.completed_task_list = deque()
14 | self.memory_module = memory_module
15 | self.task_creation_chain = TaskCreationChain.from_llm(llm, verbose)
16 | self.task_prioritization_chain = TaskPrioritizationChain.from_llm(llm, verbose)
17 | self.milestone_chain = MilestoneChain.from_llm(llm, verbose)
18 |
19 | def initialize_tasks(self):
20 | milestones = self.milestone_chain.run(objective=self.memory_module.objective)
21 | self.memory_module.store(str(milestones))
22 | for milestone in milestones:
23 | self.task_list.append({"task_name": milestone})
24 | self.task_list = deque(self.prioritize_tasks(0))
25 |
26 | def update_tasks(self, task: dict, result: dict):
27 | incomplete_tasks = [t["task_name"] for t in self.task_list]
28 | task_description = task["task_name"]
29 | incomplete_tasks = "\n".join(incomplete_tasks)
30 | if len(self.task_list) == 0:
31 | incomplete_tasks = "all"
32 | objective = self.memory_module.objective
33 | response = self.task_creation_chain.run(
34 | result=result,
35 | task_description=task_description,
36 | incomplete_tasks=incomplete_tasks,
37 | objective=objective,
38 | )
39 | new_tasks = response.split("\n")
40 | new_tasks = [{"task_name": task_name} for task_name in new_tasks if task_name.strip()]
41 | this_task_id = int("".join(filter(str.isdigit, task["task_id"]))) if isinstance(task["task_id"], str) else task["task_id"]
42 | task_id_counter = this_task_id
43 |
44 | for new_task in new_tasks:
45 | task_id_counter += 1
46 | new_task.update({"task_id": task_id_counter})
47 | self.task_list.append(new_task)
48 |
49 | self.task_list = deque(self.prioritize_tasks(this_task_id))
50 |
51 | def prioritize_tasks(self, this_task_id: int) -> List[Dict]:
52 | """Prioritize tasks."""
53 | task_names = [t["task_name"] for t in self.task_list]
54 | task_names = "\n".join(task_names)
55 | objective = self.memory_module.objective
56 | next_task_id = this_task_id + 1
57 | response = self.task_prioritization_chain.run(task_names=task_names, next_task_id=next_task_id, objective=objective)
58 | new_tasks = response.split("\n")
59 | prioritized_task_list = []
60 | task_id_counter = this_task_id
61 | for task_string in new_tasks:
62 | if not task_string.strip():
63 | continue
64 | task_parts = task_string.strip().split(".", 1)
65 | if len(task_parts) == 2:
66 | task_id_counter += 1
67 | # task_id = task_parts[0].strip()
68 | task_name = task_parts[1].strip()
69 | prioritized_task_list.append({"task_id": task_id_counter, "task_name": task_name})
70 | return prioritized_task_list
71 |
72 |
73 | class TaskCreationChain(LLMChain):
74 | """Chain to generate tasks."""
75 |
76 | @classmethod
77 |     def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> "TaskCreationChain":
78 |         """Construct a TaskCreationChain from an LLM."""
79 | task_creation_template = (
80 | "As a task creation AI, create new tasks with the objective: {objective}.\n"
81 | "Last completed task's result: {result}.\n"
82 | "Task description: {task_description}.\n"
83 | "Incomplete tasks: {incomplete_tasks}\n\n"
84 | "Ensure tasks are actionable and achievable by an agent with limited resources.\n"
85 | "Create short, finite tasks. Avoid continuous tasks like monitoring or testing.\n"
86 | "Consider if a new task is essential for reaching the objective.\n"
87 | "Return tasks as an array.\n"
88 | )
89 | prompt = PromptTemplate(
90 | template=task_creation_template,
91 | input_variables=["result", "task_description", "incomplete_tasks", "objective"],
92 | )
93 | return cls(prompt=prompt, llm=llm, verbose=verbose)
94 |
95 |
96 | class TaskPrioritizationChain(LLMChain):
97 | """Chain to prioritize tasks."""
98 |
99 | @classmethod
100 |     def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> "TaskPrioritizationChain":
101 |         """Construct a TaskPrioritizationChain from an LLM."""
102 | task_prioritization_template = (
103 | "As a task prioritization AI, format and prioritize tasks: {task_names}\n"
104 | "Objective: {objective}\n\n"
105 | "Return prioritized tasks as a numbered list starting with {next_task_id}.\n"
106 | )
107 | prompt = PromptTemplate(
108 | template=task_prioritization_template,
109 | input_variables=["task_names", "next_task_id", "objective"],
110 | )
111 | return cls(prompt=prompt, llm=llm, verbose=verbose)
112 |
113 |
114 | class MilestoneChain(LLMChain):
115 | """Chain to generate milestones."""
116 |
117 | @classmethod
118 |     def from_llm(cls, llm: BaseLLM, verbose: bool = True) -> "MilestoneChain":
119 |         """Construct a MilestoneChain from an LLM."""
120 |         milestone_template = "As a milestone AI, generate milestones for the objective: {objective}.\nReturn milestones as an array.\n"
121 |
122 | return cls(llm=llm, prompt=PromptTemplate(input_variables=["objective"], template=milestone_template), verbose=verbose)
123 |
124 | def run(self, objective: str) -> List[str]:
125 | """Run the chain."""
126 | return self.generate_milestones(objective=objective)
127 |
128 | def generate_milestones(self, objective: str) -> List[str]:
129 | """Generate milestones."""
130 | response = self.predict(objective=objective)
131 | return response.strip().split("\n") if response else []
132 |
--------------------------------------------------------------------------------
/orchestrator.py:
--------------------------------------------------------------------------------
1 | from langchain import OpenAI
2 | from langchain.chains.base import Chain
3 | from langchain.llms import BaseLLM
4 | from modules.evaluation import EvaluationModule
6 | from modules.execution import ExecutionModule
7 | from modules.learning import LearningModule
8 | from modules.memory import MemoryModule
9 | from modules.perception import PerceptionModule
10 | from modules.reasoning import ReasoningModule
11 | from typing import Any, Dict, List, Optional
12 | from colorama import Fore
13 | from langchain.callbacks import get_openai_callback
14 | import rich
15 |
16 |
17 | class AgentOrchestrator(Chain):
18 | memory_module: MemoryModule
19 | perception_module: PerceptionModule
20 | learning_module: LearningModule
21 | reasoning_module: ReasoningModule
22 | execution_module: ExecutionModule
23 | evaluation_module: EvaluationModule
24 |
25 | max_iterations: Optional[int] = None
26 |
27 | @property
28 | def input_keys(self) -> List[str]:
29 | return ["objective"]
30 |
31 | @property
32 | def output_keys(self) -> List[str]:
33 | return []
34 |
35 | def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
36 | self.memory_module.objective = inputs["objective"]
37 | self.reasoning_module.initialize_tasks()
38 |
39 | num_iters = 0
40 | while True:
41 | with get_openai_callback() as cb:
42 | if self.reasoning_module.task_list:
43 | # Step 1: Pull the first task
44 | task = self.reasoning_module.task_list.popleft()
45 |
46 | self.print_task_list()
47 | self.print_next_task(task)
48 |
49 |                     # TODO: Re-enable when the flow is completely tested
50 | # # Process the current task using PerceptionModule
51 | # processed_task = self.perception_module.process_task(task)
52 | # self.print_optimized_next_task(processed_task)
53 | processed_task = task
54 |
55 | # Step 2: Execute the task
56 | execution_result = self.execution_module.execute(processed_task)
57 | self.print_task_result(execution_result)
58 | self.reasoning_module.completed_task_list.append(task)
59 |
60 | self.memory_module.store_result(execution_result, processed_task)
61 | print(f"\n{Fore.LIGHTMAGENTA_EX}Saved new result to memory{Fore.RESET}")
62 |
63 |                     # TODO: Re-enable when the flow is completely tested
64 | # # Process the execution result using PerceptionModule before storing it in the MemoryModule
65 | # processed_execution_result = self.perception_module.process_result(execution_result)
66 | # self.print_optimized_task_result(processed_execution_result)
67 | processed_execution_result = execution_result
68 |
69 |                     # TODO: Re-enable when the flow is completely tested
70 | # new_memory = self.learning_module.learn_from(
71 | # observation=processed_execution_result,
72 | # completed_tasks=list(self.reasoning_module.completed_task_list),
73 | # pending_tasks=list(self.reasoning_module.task_list),
74 | # )
75 | # self.print_new_memory(new_memory)
76 |
77 | # # Step 3: Store the result in Memory
78 | # self.memory_module.store(new_memory)
79 | # print(f"\n{Fore.LIGHTMAGENTA_EX}Saved new learnings to memory{Fore.RESET}")
80 |
81 | # Step 4: Create new tasks and reprioritize task list
82 | self.reasoning_module.update_tasks(processed_task, processed_execution_result)
83 | print(f"\n{Fore.LIGHTMAGENTA_EX}Updated tasks based on stored data{Fore.RESET}")
84 |
85 | # # Evaluate the task result
86 | # is_finished, final_answer = self.evaluation_module.evaluate_from(
87 | # observation=processed_execution_result,
88 | # )
89 |
90 | # self.print_evaluated_task_result(is_finished, final_answer)
91 |
92 | # if is_finished:
93 | # break
94 |
95 | rich.print(cb)
96 |
97 | num_iters += 1
98 | if self.max_iterations is not None and num_iters == self.max_iterations:
99 | print(f"\n{Fore.RED}\n*****TASK ENDING*****\n{Fore.RESET}")
100 | break
101 |
102 | # self.print_end(final_answer)
103 |
104 | return {}
105 |
106 | @classmethod
107 | def from_llm(cls, llm: OpenAI, exec_llm: BaseLLM, verbose: bool = False, **kwargs) -> "AgentOrchestrator":
108 | memory_module = MemoryModule(llm, verbose=verbose)
109 | perception_module = PerceptionModule(llm, memory_module=memory_module, verbose=verbose)
110 | learning_module = LearningModule(llm, memory_module=memory_module, verbose=verbose)
111 | reasoning_module = ReasoningModule(llm, memory_module=memory_module, verbose=verbose)
112 | execution_module = ExecutionModule(exec_llm, memory_module=memory_module, verbose=verbose)
113 | evaluation_module = EvaluationModule(llm, memory_module=memory_module, verbose=verbose)
114 |
115 | return cls(
116 | memory_module=memory_module,
117 | perception_module=perception_module,
118 | reasoning_module=reasoning_module,
119 | learning_module=learning_module,
120 | execution_module=execution_module,
121 | evaluation_module=evaluation_module,
122 | **kwargs,
123 | )
124 |
125 | def print_task_list(self):
126 | print(f"\n{Fore.BLUE}*****Completed*****{Fore.RESET}")
127 | for task in self.reasoning_module.completed_task_list:
128 | print(f"- {task['task_name']}")
129 | print(f"\n{Fore.GREEN}*****Pending*****{Fore.RESET}")
130 | for task in self.reasoning_module.task_list:
131 | print(f"- {task['task_name']}")
132 |
133 | def print_next_task(self, task: Dict):
134 | print(f"\n{Fore.LIGHTBLUE_EX}*****Next Task*****{Fore.RESET}")
135 | rich.print(task)
136 |
137 | def print_optimized_next_task(self, task: Dict):
138 | print(f"\n{Fore.LIGHTBLUE_EX}*****Optimized Next Task*****{Fore.RESET}")
139 | rich.print(task)
140 |
141 | def print_task_result(self, result: str):
142 | print(f"\n{Fore.LIGHTGREEN_EX}*****Task Result*****{Fore.RESET}")
143 | rich.print(result)
144 |
145 | def print_optimized_task_result(self, result: str):
146 | print(f"\n{Fore.LIGHTGREEN_EX}*****Optimized Task Result*****{Fore.RESET}")
147 | rich.print(result)
148 |
149 | def print_evaluated_task_result(self, is_finished: bool, result: str):
150 | print(f"\n{Fore.LIGHTCYAN_EX}*****Evaluated Task Result*****{Fore.RESET}")
151 | print(f"\n{Fore.LIGHTYELLOW_EX}Is finished: {is_finished}{Fore.RESET}")
152 | rich.print(result)
153 |
154 | def print_new_memory(self, new_memory: str):
155 | print(f"\n{Fore.LIGHTMAGENTA_EX}*****New Memory*****{Fore.RESET}")
156 | rich.print(new_memory)
157 |
158 | def print_end(self, final_result):
159 | print(f"\n{Fore.RED}*****End Result*****{Fore.RESET}")
160 | rich.print(final_result)
161 |
--------------------------------------------------------------------------------
/modules/execution.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | import json
3 | import traceback
4 | from typing import Any, Dict, List, Optional, Sequence, Tuple, Union
5 | import platform
6 | from langchain import OpenAI
7 | from pydantic import Field
8 | from langchain.agents import Agent, AgentExecutor
9 | from langchain.agents.agent import AgentOutputParser
10 | from langchain.callbacks import BaseCallbackManager
11 | from langchain.chains import LLMChain
12 | from langchain.docstore.document import Document
13 | from langchain.llms.base import BaseLLM
14 | from langchain.output_parsers import GuardrailsOutputParser
15 | from langchain.prompts.base import BasePromptTemplate
16 | from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
17 | from langchain.schema import AgentAction, AgentFinish, BaseLanguageModel
18 | from langchain.tools import BaseTool
21 | from langchain.schema import OutputParserException
22 | from langchain.agents.chat.base import ChatAgent
23 | from modules.execution_tools import GuardRailTool, get_tools, tree
24 | from modules.memory import MemoryModule
25 | from rich import print
26 |
27 | # Define the Guardrails Schema for the Execution Assistant
28 | rail_spec = """
29 | <rail version="0.1">
30 |
31 | <output>
32 |
33 |     <choice name="action" on-fail-choice="reask">
34 | {tool_strings_spec}
35 |     </choice>
36 |
37 | </output>
38 |
39 | <instructions>
40 |
41 | You are a helpful Task Driven Autonomous Agent running on {operating_system}, only capable of communicating with valid JSON, and no other text.
42 | You should always respond with one of the provided actions and the input corresponding to that action. If you don't know what to do, you should decide by yourself.
43 | You can take as many actions as you want, but you should always return a valid JSON that follows the schema, and only one action at a time.
44 |
45 | @complete_json_suffix_v2
46 |
47 | </instructions>
48 |
49 | <prompt>
50 |
51 | Ultimate objective: {{{{objective}}}}
52 | Previously completed tasks and project context: {{{{context}}}}
53 | Working directory tree: {{{{dir_tree}}}}
54 |
55 | Finish the following task.
56 |
57 | Task: {{{{input}}}}
58 |
59 | Choose one of the available actions and return a JSON that follows the correct schema.
60 |
61 | {{{{agent_scratchpad}}}}
62 |
63 | </prompt>
64 |
65 | </rail>
66 | """
67 | # The objective, context, and directory tree are filled into the prompt from the agent's inputs at run time
68 |
69 |
70 | class ExecutionModule:
71 | def __init__(
72 | self,
73 | llm: BaseLLM,
74 | memory_module: MemoryModule,
75 | verbose: bool = True,
76 | ):
77 | self.memory_module = memory_module
78 | tools = get_tools(llm, memory_module)
79 | agent = ExecutionAgent.from_llm_and_tools(llm=llm, tools=tools, verbose=verbose)
80 | self.agent = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=verbose)
81 |
82 | def execute(self, task: Dict[str, Any]) -> Union[str, Document]:
83 | task_name = task["task_name"]
84 | objective = self.memory_module.objective
85 | context = self.memory_module.retrieve_related_information(task_name)
86 | dir_tree = tree() or "No directory tree available"
87 | for i in range(3):
88 | try:
89 | return self.agent.run(
90 | {
91 | "input": task_name,
92 | "objective": objective,
93 | "context": context,
94 | "dir_tree": dir_tree,
95 | }
96 | )
97 | except Exception as e:
98 | print(traceback.format_exc())
99 | print(f"Exception running executor agent. Will retry {2-i} times")
100 | return "Failed to execute task."
101 |
102 |
103 | FINAL_ANSWER_ACTION = "final"
104 |
105 | class ExecutionOutputParser(GuardrailsOutputParser):
106 |     def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
107 |         try:
108 |             result = json.loads(text)
109 |             action = result["action"]
110 |         except Exception as e:
111 |             # Retry once by calling the LLM directly and asking it to extract the action fields as JSON
112 |             try:
113 |                 print(f"---\nCould not parse LLM output: {text}\nerror: {e}\nRetrying...\n---")
114 |                 llm = OpenAI(temperature=0)
115 |                 text = llm(f"{self.guard.instructions.source}\n\nExtract and return action and other fields in json format from this: {text}")
116 |                 result = json.loads(text)
117 |                 action = result["action"]
118 |             except Exception as e2:
119 |                 raise OutputParserException(f"---\nCould not parse LLM output: {text}\nerror: {e2}\n---") from e2
120 |         if FINAL_ANSWER_ACTION in action:
121 |             # The final answer payload may sit at the top level or be nested under the action name
122 |             if "action_input" in result:
123 |                 action_input = result["action_input"]
124 |             elif action in result:
125 |                 payload = result[action]
126 |                 if isinstance(payload, str):
127 |                     action_input = payload
128 |                 elif isinstance(payload, dict) and "action_input" in payload:
129 |                     action_input = payload["action_input"]
130 |                 else:
131 |                     action_input = str(payload)
132 |             else:
133 |                 action_input = str(result)
134 |             return AgentFinish({"output": action_input}, text)
135 |         # Pass the raw JSON text through as the tool input; GuardRailTool parses it again in _run
136 |         return AgentAction(action, text, text)
137 |
138 | class ExecutionAgent(ChatAgent):
139 | output_parser: ExecutionOutputParser = Field(default_factory=ExecutionOutputParser)
140 |
141 | @property
142 | def observation_prefix(self) -> str:
143 | """Prefix to append the observation with."""
144 | return "Result of Action JSON: "
145 |
146 | @property
147 | def llm_prefix(self) -> str:
148 | """Prefix to append the llm call with."""
149 | return "Action JSON:"
150 |
151 | def _construct_scratchpad(self, intermediate_steps: List[Tuple[AgentAction, str]]) -> str:
152 | """Construct the scratchpad that lets the agent continue its thought process."""
153 | return super()._construct_scratchpad(intermediate_steps[-3:])
154 |
155 | def get_full_inputs(self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) -> Dict[str, Any]:
156 | """Create the full inputs for the LLMChain from intermediate steps."""
157 | thoughts = self._construct_scratchpad(intermediate_steps)
158 | new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
159 | return kwargs | new_inputs
160 |
161 | @classmethod
162 |     def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
163 |         return ExecutionOutputParser()
164 |
165 | @classmethod
166 | def create_prompt(cls, output_parser: GuardrailsOutputParser) -> BasePromptTemplate:
167 | messages = [
168 | SystemMessagePromptTemplate.from_template(output_parser.guard.instructions.source),
169 | HumanMessagePromptTemplate.from_template(output_parser.guard.base_prompt),
170 | ]
171 | return ChatPromptTemplate.from_messages(messages=messages)
172 |
173 | @classmethod
174 | def from_llm_and_tools(
175 | cls,
176 | llm: BaseLanguageModel,
177 | tools: Sequence[GuardRailTool],
178 | callback_manager: Optional[BaseCallbackManager] = None,
179 | output_parser: Optional[ExecutionOutputParser] = None,
180 | verbose: bool = True,
181 | **kwargs: Any,
182 | ) -> Agent:
183 | cls._validate_tools(tools)
184 |         tool_strings_spec = "\n".join(
185 |             [
186 |                 f'<case name="{tool.name}">\n'
187 |                 f'<object name="{tool.name}">\n'
188 |                 + "".join(
189 |                     f'<string name="{arg_name}" description="{arg_description}"/>\n'
190 |                     for arg_name, arg_description in tool.input_args.items()
191 |                 )
192 |                 + "</object>\n"
193 |                 + "</case>"
194 |                 for tool in tools
195 |             ]
196 |         )
197 | operating_system = platform.platform()
198 | complete_rail_spec = rail_spec.format(tool_strings_spec=tool_strings_spec, operating_system=operating_system)
199 | output_parser = ExecutionOutputParser.from_rail_string(complete_rail_spec)
200 | prompt = cls.create_prompt(output_parser)
201 | llm_chain = LLMChain(
202 | llm=llm,
203 | prompt=prompt,
204 | callback_manager=callback_manager,
205 | verbose=verbose,
206 | )
207 | tool_names = [tool.name for tool in tools]
208 | return cls(
209 | llm_chain=llm_chain,
210 | allowed_tools=tool_names,
211 | output_parser=output_parser,
212 | **kwargs,
213 | )
214 |
215 | @property
216 | def _stop(self) -> List[str]:
217 | return ["Result of Action JSON:"]
218 |
219 | @property
220 | def _agent_type(self) -> str:
221 | raise ValueError
222 |
--------------------------------------------------------------------------------
/modules/tools/package-lock.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "tools",
3 | "lockfileVersion": 3,
4 | "requires": true,
5 | "packages": {
6 | "": {
7 | "dependencies": {
8 | "googlethis": "^1.7.1"
9 | }
10 | },
11 | "node_modules/asynckit": {
12 | "version": "0.4.0",
13 | "resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
14 | "integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q=="
15 | },
16 | "node_modules/axios": {
17 | "version": "0.21.4",
18 | "resolved": "https://registry.npmjs.org/axios/-/axios-0.21.4.tgz",
19 | "integrity": "sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==",
20 | "dependencies": {
21 | "follow-redirects": "^1.14.0"
22 | }
23 | },
24 | "node_modules/boolbase": {
25 | "version": "1.0.0",
26 | "resolved": "https://registry.npmjs.org/boolbase/-/boolbase-1.0.0.tgz",
27 | "integrity": "sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww=="
28 | },
29 | "node_modules/cheerio": {
30 | "version": "1.0.0-rc.10",
31 | "resolved": "https://registry.npmjs.org/cheerio/-/cheerio-1.0.0-rc.10.tgz",
32 | "integrity": "sha512-g0J0q/O6mW8z5zxQ3A8E8J1hUgp4SMOvEoW/x84OwyHKe/Zccz83PVT4y5Crcr530FV6NgmKI1qvGTKVl9XXVw==",
33 | "dependencies": {
34 | "cheerio-select": "^1.5.0",
35 | "dom-serializer": "^1.3.2",
36 | "domhandler": "^4.2.0",
37 | "htmlparser2": "^6.1.0",
38 | "parse5": "^6.0.1",
39 | "parse5-htmlparser2-tree-adapter": "^6.0.1",
40 | "tslib": "^2.2.0"
41 | },
42 | "engines": {
43 | "node": ">= 6"
44 | },
45 | "funding": {
46 | "url": "https://github.com/cheeriojs/cheerio?sponsor=1"
47 | }
48 | },
49 | "node_modules/cheerio-select": {
50 | "version": "1.6.0",
51 | "resolved": "https://registry.npmjs.org/cheerio-select/-/cheerio-select-1.6.0.tgz",
52 | "integrity": "sha512-eq0GdBvxVFbqWgmCm7M3XGs1I8oLy/nExUnh6oLqmBditPO9AqQJrkslDpMun/hZ0yyTs8L0m85OHp4ho6Qm9g==",
53 | "dependencies": {
54 | "css-select": "^4.3.0",
55 | "css-what": "^6.0.1",
56 | "domelementtype": "^2.2.0",
57 | "domhandler": "^4.3.1",
58 | "domutils": "^2.8.0"
59 | },
60 | "funding": {
61 | "url": "https://github.com/sponsors/fb55"
62 | }
63 | },
64 | "node_modules/combined-stream": {
65 | "version": "1.0.8",
66 | "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
67 | "integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
68 | "dependencies": {
69 | "delayed-stream": "~1.0.0"
70 | },
71 | "engines": {
72 | "node": ">= 0.8"
73 | }
74 | },
75 | "node_modules/css-select": {
76 | "version": "4.3.0",
77 | "resolved": "https://registry.npmjs.org/css-select/-/css-select-4.3.0.tgz",
78 | "integrity": "sha512-wPpOYtnsVontu2mODhA19JrqWxNsfdatRKd64kmpRbQgh1KtItko5sTnEpPdpSaJszTOhEMlF/RPz28qj4HqhQ==",
79 | "dependencies": {
80 | "boolbase": "^1.0.0",
81 | "css-what": "^6.0.1",
82 | "domhandler": "^4.3.1",
83 | "domutils": "^2.8.0",
84 | "nth-check": "^2.0.1"
85 | },
86 | "funding": {
87 | "url": "https://github.com/sponsors/fb55"
88 | }
89 | },
90 | "node_modules/css-what": {
91 | "version": "6.1.0",
92 | "resolved": "https://registry.npmjs.org/css-what/-/css-what-6.1.0.tgz",
93 | "integrity": "sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw==",
94 | "engines": {
95 | "node": ">= 6"
96 | },
97 | "funding": {
98 | "url": "https://github.com/sponsors/fb55"
99 | }
100 | },
101 | "node_modules/delayed-stream": {
102 | "version": "1.0.0",
103 | "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
104 | "integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==",
105 | "engines": {
106 | "node": ">=0.4.0"
107 | }
108 | },
109 | "node_modules/dom-serializer": {
110 | "version": "1.4.1",
111 | "resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-1.4.1.tgz",
112 | "integrity": "sha512-VHwB3KfrcOOkelEG2ZOfxqLZdfkil8PtJi4P8N2MMXucZq2yLp75ClViUlOVwyoHEDjYU433Aq+5zWP61+RGag==",
113 | "dependencies": {
114 | "domelementtype": "^2.0.1",
115 | "domhandler": "^4.2.0",
116 | "entities": "^2.0.0"
117 | },
118 | "funding": {
119 | "url": "https://github.com/cheeriojs/dom-serializer?sponsor=1"
120 | }
121 | },
122 | "node_modules/domelementtype": {
123 | "version": "2.3.0",
124 | "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-2.3.0.tgz",
125 | "integrity": "sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==",
126 | "funding": [
127 | {
128 | "type": "github",
129 | "url": "https://github.com/sponsors/fb55"
130 | }
131 | ]
132 | },
133 | "node_modules/domhandler": {
134 | "version": "4.3.1",
135 | "resolved": "https://registry.npmjs.org/domhandler/-/domhandler-4.3.1.tgz",
136 | "integrity": "sha512-GrwoxYN+uWlzO8uhUXRl0P+kHE4GtVPfYzVLcUxPL7KNdHKj66vvlhiweIHqYYXWlw+T8iLMp42Lm67ghw4WMQ==",
137 | "dependencies": {
138 | "domelementtype": "^2.2.0"
139 | },
140 | "engines": {
141 | "node": ">= 4"
142 | },
143 | "funding": {
144 | "url": "https://github.com/fb55/domhandler?sponsor=1"
145 | }
146 | },
147 | "node_modules/domutils": {
148 | "version": "2.8.0",
149 | "resolved": "https://registry.npmjs.org/domutils/-/domutils-2.8.0.tgz",
150 | "integrity": "sha512-w96Cjofp72M5IIhpjgobBimYEfoPjx1Vx0BSX9P30WBdZW2WIKU0T1Bd0kz2eNZ9ikjKgHbEyKx8BB6H1L3h3A==",
151 | "dependencies": {
152 | "dom-serializer": "^1.0.1",
153 | "domelementtype": "^2.2.0",
154 | "domhandler": "^4.2.0"
155 | },
156 | "funding": {
157 | "url": "https://github.com/fb55/domutils?sponsor=1"
158 | }
159 | },
160 | "node_modules/entities": {
161 | "version": "2.2.0",
162 | "resolved": "https://registry.npmjs.org/entities/-/entities-2.2.0.tgz",
163 | "integrity": "sha512-p92if5Nz619I0w+akJrLZH0MX0Pb5DX39XOwQTtXSdQQOaYH03S1uIQp4mhOZtAXrxq4ViO67YTiLBo2638o9A==",
164 | "funding": {
165 | "url": "https://github.com/fb55/entities?sponsor=1"
166 | }
167 | },
168 | "node_modules/follow-redirects": {
169 | "version": "1.15.2",
170 | "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.2.tgz",
171 | "integrity": "sha512-VQLG33o04KaQ8uYi2tVNbdrWp1QWxNNea+nmIB4EVM28v0hmP17z7aG1+wAkNzVq4KeXTq3221ye5qTJP91JwA==",
172 | "funding": [
173 | {
174 | "type": "individual",
175 | "url": "https://github.com/sponsors/RubenVerborgh"
176 | }
177 | ],
178 | "engines": {
179 | "node": ">=4.0"
180 | },
181 | "peerDependenciesMeta": {
182 | "debug": {
183 | "optional": true
184 | }
185 | }
186 | },
187 | "node_modules/form-data": {
188 | "version": "4.0.0",
189 | "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
190 | "integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
191 | "dependencies": {
192 | "asynckit": "^0.4.0",
193 | "combined-stream": "^1.0.8",
194 | "mime-types": "^2.1.12"
195 | },
196 | "engines": {
197 | "node": ">= 6"
198 | }
199 | },
200 | "node_modules/googlethis": {
201 | "version": "1.7.1",
202 | "resolved": "https://registry.npmjs.org/googlethis/-/googlethis-1.7.1.tgz",
203 | "integrity": "sha512-2rJWqcNQeY5/D3HUQn4aM9piru5LjPFG6TTumww5hRMm6tmdnHu/WwSXji6ueUbKVZUHsHQO8mICV7mJK9I7ug==",
204 | "dependencies": {
205 | "axios": "^0.21.1",
206 | "cheerio": "1.0.0-rc.10",
207 | "form-data": "^4.0.0",
208 | "unraw": "^2.0.1"
209 | },
210 | "engines": {
211 | "node": ">=14"
212 | },
213 | "funding": {
214 | "url": "https://github.com/sponsors/LuanRT"
215 | }
216 | },
217 | "node_modules/htmlparser2": {
218 | "version": "6.1.0",
219 | "resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-6.1.0.tgz",
220 | "integrity": "sha512-gyyPk6rgonLFEDGoeRgQNaEUvdJ4ktTmmUh/h2t7s+M8oPpIPxgNACWa+6ESR57kXstwqPiCut0V8NRpcwgU7A==",
221 | "funding": [
222 | "https://github.com/fb55/htmlparser2?sponsor=1",
223 | {
224 | "type": "github",
225 | "url": "https://github.com/sponsors/fb55"
226 | }
227 | ],
228 | "dependencies": {
229 | "domelementtype": "^2.0.1",
230 | "domhandler": "^4.0.0",
231 | "domutils": "^2.5.2",
232 | "entities": "^2.0.0"
233 | }
234 | },
235 | "node_modules/mime-db": {
236 | "version": "1.52.0",
237 | "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
238 | "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
239 | "engines": {
240 | "node": ">= 0.6"
241 | }
242 | },
243 | "node_modules/mime-types": {
244 | "version": "2.1.35",
245 | "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
246 | "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
247 | "dependencies": {
248 | "mime-db": "1.52.0"
249 | },
250 | "engines": {
251 | "node": ">= 0.6"
252 | }
253 | },
254 | "node_modules/nth-check": {
255 | "version": "2.1.1",
256 | "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-2.1.1.tgz",
257 | "integrity": "sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==",
258 | "dependencies": {
259 | "boolbase": "^1.0.0"
260 | },
261 | "funding": {
262 | "url": "https://github.com/fb55/nth-check?sponsor=1"
263 | }
264 | },
265 | "node_modules/parse5": {
266 | "version": "6.0.1",
267 | "resolved": "https://registry.npmjs.org/parse5/-/parse5-6.0.1.tgz",
268 | "integrity": "sha512-Ofn/CTFzRGTTxwpNEs9PP93gXShHcTq255nzRYSKe8AkVpZY7e1fpmTfOyoIvjP5HG7Z2ZM7VS9PPhQGW2pOpw=="
269 | },
270 | "node_modules/parse5-htmlparser2-tree-adapter": {
271 | "version": "6.0.1",
272 | "resolved": "https://registry.npmjs.org/parse5-htmlparser2-tree-adapter/-/parse5-htmlparser2-tree-adapter-6.0.1.tgz",
273 | "integrity": "sha512-qPuWvbLgvDGilKc5BoicRovlT4MtYT6JfJyBOMDsKoiT+GiuP5qyrPCnR9HcPECIJJmZh5jRndyNThnhhb/vlA==",
274 | "dependencies": {
275 | "parse5": "^6.0.1"
276 | }
277 | },
278 | "node_modules/tslib": {
279 | "version": "2.5.0",
280 | "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.5.0.tgz",
281 | "integrity": "sha512-336iVw3rtn2BUK7ORdIAHTyxHGRIHVReokCR3XjbckJMK7ms8FysBfhLR8IXnAgy7T0PTPNBWKiH514FOW/WSg=="
282 | },
283 | "node_modules/unraw": {
284 | "version": "2.0.1",
285 | "resolved": "https://registry.npmjs.org/unraw/-/unraw-2.0.1.tgz",
286 | "integrity": "sha512-tdOvLfRzHolwYcHS6HIX860MkK9LQ4+oLuNwFYL7bpgTEO64PZrcQxkisgwJYCfF8sKiWLwwu1c83DvMkbefIQ=="
287 | }
288 | }
289 | }
290 |
--------------------------------------------------------------------------------
/modules/execution_tools.py:
--------------------------------------------------------------------------------
1 | import json
2 | import shutil
3 | import re
4 | import os
5 | from typing import Dict, List
6 | from pathlib import Path
7 | from langchain.utilities import BashProcess
8 | from langchain.tools.python.tool import PythonREPLTool
9 | from modules.memory import MemoryModule
10 | from llama_index.optimization.optimizer import SentenceEmbeddingOptimizer
11 | from llama_index import GPTSimpleVectorIndex
12 | from llama_index import download_loader
19 | from langchain.agents import Tool
20 | from langchain.agents.tools import BaseTool
21 | from datetime import datetime
22 | import rich
23 | import traceback
24 | from colorama import Fore
25 |
26 |
27 | current_datetime = datetime.now().strftime("%Y%m%d_%H%M%S")
28 | PREFIX_PATH = f"{str(Path(__file__).resolve().parent.parent)}/runs/test_output_{current_datetime}/"
29 |
30 |
31 | class GuardRailTool(BaseTool):
32 | child_tool: BaseTool
33 | input_args: Dict[str, str]
34 |
35 | def __init__(self, child_tool: BaseTool, input_args: Dict[str, str]):
36 | super().__init__(name=child_tool.name, description=child_tool.description, child_tool=child_tool, input_args=input_args)
37 |
38 |     def _run(self, input_str: str) -> str:
39 |         try:
40 |             result = json.loads(input_str)
41 |             action = result["action"]
42 |             payload = result[action] if action in result else result
43 |             if len(self.input_args) == 1:
44 |                 final_input = str(payload[list(self.input_args.keys())[0]])
45 |             else:
46 |                 final_input = {key: str(payload[key]) for key in self.input_args}
47 | print(f"\n{Fore.LIGHTBLUE_EX} Using action {action} with input:\n{final_input}{Fore.RESET}\n")
48 | return str(self.child_tool.run(final_input))
49 |
50 | except Exception as e:
51 | rich.print(traceback.format_exc())
52 | return f"Error occurred while executing tool {self.name}: {str(e)}\ninput_str: {input_str}"
53 |
54 | async def _arun(self, input_str: str) -> str:
55 | raise NotImplementedError("Tool does not support async")
56 |
57 |
58 | def get_tools(llm, memory_module: MemoryModule) -> List[GuardRailTool]:
59 | make_directory(".")
60 |
61 | tools = []
62 |
63 | return tools + [
64 | bash_tool,
65 | google_search_tool,
66 | write_tool,
67 | read_tool,
68 | # tree_tool,
69 | # mkdir_tool,
70 | # replace_content_tool,
71 | # copy_tool,
72 | # move_tool,
73 | # delete_tool,
74 | # append_tool,
75 | # search_memory_tool_factory(memory_module),
76 | # simple_web_page_reader_tool,
77 | # read_remote_depth_tool,
78 | # apply_patch_tool,
79 | # read_web_unstructured_tool,
80 | query_website_tool,
81 | # git_tool,
82 | # directory_qa_tool,
83 | # python_tool
84 | ]
85 |
86 |
87 | def absolute(path):
88 | cwd = PREFIX_PATH
89 | return path if path.startswith(cwd) else os.path.join(cwd, path.lstrip("/"))
90 |
91 |
92 | BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader")
93 | bash = BashProcess(strip_newlines=True, return_err_output=False)
94 |
95 |
96 | def query_website(url: str, question: str) -> str:
97 | loader = BeautifulSoupWebReader()
98 | documents = loader.load_data(urls=[url])
99 | index = GPTSimpleVectorIndex.from_documents(documents)
100 | return index.query(question).response
101 |
102 |
103 | def query_local_directory(q: str) -> str:
104 | SimpleDirectoryReader = download_loader("SimpleDirectoryReader")
105 | loader = SimpleDirectoryReader(PREFIX_PATH, recursive=True, exclude_hidden=True)
106 | documents = loader.load_data()
107 | index = GPTSimpleVectorIndex.from_documents(documents)
108 |     return str(index.query(q))
109 |
110 |
111 | def bash_func(command):
112 | # Check if command uses sudo
113 | if "sudo" in command:
114 | return "Error: Command cannot use sudo"
115 | return bash.run(f"cd {PREFIX_PATH} && {command}")
116 |
117 |
118 | def read_remote_depth(url: str, depth: int, query: str) -> str:
119 | RemoteDepthReader = download_loader("RemoteDepthReader")
120 | loader = RemoteDepthReader(depth=depth, domain_lock=True)
121 | documents = loader.load_data(url=url)
122 | index = GPTSimpleVectorIndex.from_documents(documents)
123 | return index.query(query, optimizer=SentenceEmbeddingOptimizer(percentile_cutoff=0.5))
124 |
125 |
126 | def write_file(relative_path: str, content: str) -> str:
127 | path = absolute(relative_path)
128 | os.makedirs(os.path.dirname(path), exist_ok=True)
129 | with open(path, "w") as file:
130 | file.write(content)
131 | return f"Written content to file at {relative_path}"
132 |
133 |
134 | def apply_patch(patch_content: str) -> str:
135 | patch_file = f"{PREFIX_PATH}temp_patch_file.patch"
136 | with open(patch_file, "w") as file:
137 | file.write(patch_content)
139 | result = bash.run(f"cd {PREFIX_PATH} && patch -p1 -u -f -i temp_patch_file.patch")
140 | os.remove(patch_file)
141 | return f"Patch applied:\n{result}"
142 |
143 |
144 | def read_file(relative_path: str) -> str:
145 | path = absolute(relative_path)
146 | with open(path, "r") as file:
147 | content = file.read()
148 | return content
149 |
150 |
151 | def tree() -> str:
152 | return bash.run(f"tree --noreport -L 2 {PREFIX_PATH}")
153 |
154 |
155 | def make_directory(relative_path: str) -> str:
156 | path = absolute(relative_path)
157 | os.makedirs(path, exist_ok=True)
158 | return f"Created directory at {path}"
159 |
160 |
161 | def replace_content(relative_path: str, pattern: str, replacement: str) -> str:
162 | path = absolute(relative_path)
163 | with open(path, "r") as file:
164 | content = file.read()
165 | content = re.sub(pattern, replacement, content)
166 | with open(path, "w") as file:
167 | file.write(content)
168 | return f"Replaced content in file at {relative_path}"
169 |
170 |
171 | def copy_file(source_path: str, destination_path: str) -> str:
172 | src_path = absolute(source_path)
173 | dest_path = absolute(destination_path)
174 | shutil.copy(src_path, dest_path)
175 | return f"Copied file from {source_path} to {destination_path}"
176 |
177 |
178 | def move_file(source_path: str, destination_path: str) -> str:
179 | src_path = absolute(source_path)
180 | dest_path = absolute(destination_path)
181 | shutil.move(src_path, dest_path)
182 | return f"Moved file from {source_path} to {destination_path}"
183 |
184 |
185 | def delete_file(relative_path: str) -> str:
186 | path = absolute(relative_path)
187 | if os.path.isfile(path):
188 | os.remove(path)
189 | elif os.path.isdir(path):
190 | shutil.rmtree(path)
191 | else:
192 | return "Invalid path."
193 | return f"Deleted {relative_path}"
194 |
195 |
196 | def append_file(relative_path: str, content: str) -> str:
197 | path = absolute(relative_path)
198 | os.makedirs(os.path.dirname(path), exist_ok=True)
199 | with open(path, "a") as file:
200 | file.write(content)
201 | return f"Appended content to file at {relative_path}"
202 |
203 |
204 | # Git tool function
205 | def git_func(relative_path: str, command: str) -> str:
206 |     return bash.run(f"cd {absolute(relative_path)} && git {command}")
207 |
208 |
209 | # Git tool
210 | git_tool = GuardRailTool(
211 | child_tool=Tool(
212 | name="git",
213 | func=git_func,
214 |         description="Execute a git command in the given folder inside the current work directory.",
215 | ),
216 | input_args={
217 | "relative_path": "The relative path of the folder where git command should be executed.",
218 | "command": "The git command to execute.",
219 | },
220 | )
221 |
223 |
224 | def google_search_func(query: str) -> str:
225 | result = bash.run(f'cd {os.path.dirname(os.path.realpath(__file__))}/tools && node google.js "{query}"')
226 | result_json = json.loads(result)
227 | results = result_json['results']
228 | return "".join(
229 | f"{r['title']}\n{r['description']}\n{r['url']}\n" for r in results
230 | )
231 |
232 | google_search_tool = GuardRailTool(
233 | child_tool=Tool(
234 | name="google_search",
235 | func=google_search_func,
236 | description="This is Google. Use this tool to search the internet. Input should be a string",
237 | ),
238 | input_args={"action_input": "The search query to be passed to Google."},
239 | )
240 |
241 |
242 | bash_tool = GuardRailTool(
243 | child_tool=Tool(
244 | name="bash",
245 | func=bash_func,
246 | description="Execute a bash command within the current work directory. Input is the command.",
247 | ),
248 | input_args={"action_input": "The bash command to execute."},
249 | )
250 |
251 | python_tool = GuardRailTool(child_tool=PythonREPLTool(), input_args={"action_input": "The Python code to execute in the REPL environment."})
252 | python_tool.name = "python_repl"
253 |
254 |
255 | query_website_tool = GuardRailTool(
256 | child_tool=Tool(
257 | name="qa_about_website",
258 | func=query_website,
259 |         description="Useful when you want to get an answer to a question about the content of a website.",
260 | ),
261 | input_args={
262 | "url": "The URL of the website to query.",
263 | "question": "The question to ask about the content of the website.",
264 | },
265 | )
266 |
267 | directory_qa_tool = GuardRailTool(
268 | child_tool=Tool(
269 | name="qa_about_local_directory",
270 | func=query_local_directory,
271 |         description="Useful when you want to answer questions about the files in your local directory.",
272 | ),
273 | input_args={"action_input": "The question to ask about the content of the local directory."},
274 | )
275 |
276 | read_remote_depth_tool = GuardRailTool(
277 | child_tool=Tool(
278 | name="read_remote_depth",
279 | func=read_remote_depth,
280 |         description="Read data from a remote url to the specified link depth and answer the provided question.",
281 | ),
282 | input_args={
283 | "url": "The URL of the website to load.",
284 | "depth": "The depth to load content from the website.",
285 | "question": "The question to ask about the content of the website.",
286 | },
287 | )
288 |
289 | simple_web_page_reader = download_loader("SimpleWebPageReader")(html_to_text=True)
290 |
291 |
292 | def read_simple_web_page(url: str) -> str:
293 | documents = simple_web_page_reader.load_data([url])
294 | return "".join([doc.text for doc in documents])
295 |
296 |
297 | simple_web_page_reader_tool = GuardRailTool(
298 | child_tool=Tool(
299 | name="read_simple_web_page",
300 | func=read_simple_web_page,
301 |         description="Read a web page using the SimpleWebPageReader. Input is a URL.",
302 | ),
303 | input_args={"action_input": "The URL to read using the SimpleWebPageReader."},
304 | )
305 |
306 |
307 | def search_memory_tool_factory(memory_module: MemoryModule):
308 | def search_memory(input_str: str) -> str:
309 | return memory_module.retrieve_related_information(input_str, top_k=20)
310 |
311 | return GuardRailTool(
312 | child_tool=Tool(
313 | name="search_memory",
314 | func=search_memory,
315 | description="Search through your memory of completed tasks and research results. Input is a search query.",
316 | ),
317 | input_args={"action_input": "The search query to search through memory."},
318 | )
319 |
320 |
321 | write_tool = GuardRailTool(
322 | child_tool=Tool(
323 | name="write_file",
324 | func=write_file,
325 |         description="Write content to a file at the given relative path.",
326 | ),
327 | input_args={
328 | "relative_path": "The relative path of the file to write content to.",
329 | "content": "The content to write to the file.",
330 | },
331 | )
332 |
333 | apply_patch_tool = GuardRailTool(
334 | child_tool=Tool(
335 | name="apply_patch",
336 | func=apply_patch,
337 | description="Apply a patch to the current folder. Input is the patch file content.",
338 | ),
339 | input_args={"action_input": "The content of the patch file to apply."},
340 | )
341 |
342 | read_tool = GuardRailTool(
343 | child_tool=Tool(
344 | name="read_file",
345 | func=read_file,
346 | description="Read content from a file. Input is the relative path.",
347 | ),
348 | input_args={"action_input": "The relative path of the file to read content from."},
349 | )
350 |
351 | tree_tool = GuardRailTool(
352 | child_tool=Tool(
353 | name="directory_tree",
354 | func=lambda _: tree(),
355 | description="Display the directory tree.",
356 | ),
357 | input_args={"action_input": "No input required."},
358 | )
359 |
360 | mkdir_tool = GuardRailTool(
361 | child_tool=Tool(
362 | name="make_directory",
363 | func=make_directory,
364 | description="Create a new directory. Input is the relative path.",
365 | ),
366 | input_args={"action_input": "The relative path of the directory to create."},
367 | )
368 |
369 | replace_content_tool = GuardRailTool(
370 | child_tool=Tool(
371 | name="replace_content",
372 | func=replace_content,
373 |         description="Replace content in a file using a regex pattern and a replacement string.",
374 | ),
375 | input_args={
376 | "relative_path": "The relative path of the file to replace content in.",
377 | "pattern": "The regex pattern to match content to replace.",
378 | "replacement": "The replacement content for the matched pattern.",
379 | },
380 | )
381 |
382 | copy_tool = GuardRailTool(
383 | child_tool=Tool(
384 | name="copy_file",
385 | func=copy_file,
386 |         description="Copy a file from a source path to a destination path.",
387 | ),
388 | input_args={
389 | "source_path": "The source relative path of the file to copy.",
390 | "destination_path": "The destination relative path to copy the file to.",
391 | },
392 | )
393 |
394 | move_tool = GuardRailTool(
395 | child_tool=Tool(
396 | name="move_file",
397 | func=move_file,
398 |         description="Move a file from a source path to a destination path.",
399 | ),
400 | input_args={
401 | "source_path": "The source relative path of the file to move.",
402 | "destination_path": "The destination relative path to move the file to.",
403 | },
404 | )
405 |
406 | delete_tool = GuardRailTool(
407 | child_tool=Tool(
408 | name="delete_file",
409 | func=delete_file,
410 | description="Delete a file or directory. Input is the relative path.",
411 | ),
412 | input_args={"action_input": "The relative path of the file or directory to delete."},
413 | )
414 |
415 | append_tool = GuardRailTool(
416 | child_tool=Tool(
417 | name="append_to_file",
418 | func=append_file,
419 |         description="Append content to a file at the given relative path.",
420 | ),
421 | input_args={
422 | "relative_path": "The relative path of the file to append content to.",
423 | "content": "The content to append to the file.",
424 | },
425 | )
426 |
--------------------------------------------------------------------------------