├── .env.example
├── .gitignore
├── Dockerfile
├── LICENSE
├── docker-compose.yml
├── old-README.md
├── requirements.txt
└── src
    ├── babyagi.py
    ├── common_utils.py
    ├── consts.py
    ├── main.py
    ├── memories
    │   ├── one-shots.json
    │   └── private-one-shots.json
    ├── prompts.py
    ├── tools
    │   ├── config.json
    │   └── serp_api.py
    └── utils
        ├── pinecone_utils.py
        └── text_processing.py
/.env.example:
--------------------------------------------------------------------------------
1 | # API CONFIG
2 | OPENAI_API_KEY=
3 |
4 | # GLOBAL CONFIG
5 | GLOBAL_HL=
6 | GLOBAL_GL=
7 | GLOBAL_LOCATION=
8 |
9 | # PROJECT CONFIG
10 | OBJECTIVE=I need to complete the first task
11 | TASKS_LIST=["To research and create an original passionfruit cocktail recipe"]
12 | N_SHOT=2 # Use a value between 1 and 2 for GPT-3.5 and 3-4 for GPT-4. Higher values may break the app due to context length limitations of LLMs.
13 |
14 | # Optional tools
15 | SERP_API_KEY=
16 | PINECONE_API_KEY=
17 | PINECONE_ENVIRONMENT=
18 | PINECONE_TABLE_NAME=
19 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .env
2 | .idea
3 | __pycache__
4 | src/memories/private-one-shots.json
5 | src/outputs/
6 | src/tmp_agent.json
7 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.11
2 |
3 | ENV PIP_NO_CACHE_DIR=true
4 | WORKDIR /tmp
5 | COPY requirements.txt /tmp/requirements.txt
6 | RUN pip install -r requirements.txt
7 |
8 | WORKDIR /app
9 | COPY . /app
10 | ENTRYPOINT ["python", "src/main.py"]
11 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Yohei Nakajima
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.9"
2 |
3 | services:
4 | basi:
5 | build: ./
6 | volumes:
7 | - "./:/app"
8 | profiles: ["exclude-from-up"] # Use `docker-compose run basi` to get an attached container
9 |
--------------------------------------------------------------------------------
/old-README.md:
--------------------------------------------------------------------------------
1 |
2 | Babyagi, an Autonomous and Self-Improving agent: BASI
3 |
4 |
5 |
6 | # Objective
7 | This Python script is an example of an LLM-powered autonomous agent. The system uses the OpenAI API to create and execute tasks.
8 | The core idea of the project is to provide the assistant with the tools it needs to do any task - if it's smart enough.
9 | It can arbitrarily execute code and control its own flow and memory. For a sufficiently intelligent agent - whether through pre-training, fine-tuning, or prompt optimization - this should be enough (if it is possible at all).
10 |
11 | This is just a PoC in constant development.
12 |
13 |
14 | # How It Works
15 | The script works by running an infinite loop that does the following steps:
16 |
17 | 1- Write a function to finish the most relevant task (execution_agent);
18 |
19 | 2- Save the execution to memory if successful (and only if the task does not yet exist in memory; in the future I will implement updates to memories);
20 |
21 | 3- Read the changes and pass them to execution_agent again, or quit the program (change_propagation_agent);
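The steps above can be sketched in plain Python. This is a simplified stand-in for the real loop in `src/main.py`: `FakeAgent` and its canned results are illustrative only, while the real agents call the OpenAI API and execute generated code.

```python
from collections import deque

class FakeAgent:
    """Illustrative stand-in for AutonomousAgent; the real agent calls the
    OpenAI API in execution_agent and change_propagation_agent."""
    def __init__(self, tasks):
        self.task_list = deque(tasks)
        self.results = []

    def execution_agent(self, task_name):
        # Step 1: the real agent asks the LLM to write (and run) a function
        # that finishes the task; here we just record a canned result.
        result = f"did: {task_name}"
        self.results.append(result)  # step 2: "save to memory"
        return result

    def change_propagation_agent(self, result):
        # Step 3: the real agent asks the LLM whether the result changes
        # the plan; here we only echo it.
        return f"observed: {result}"

def run(agent):
    while agent.task_list:
        task = agent.task_list.popleft()
        result = agent.execution_agent(task["task_name"])
        agent.change_propagation_agent(result)
    return agent.results

print(run(FakeAgent([{"task_id": 1, "task_name": "demo task"}])))
# -> ['did: demo task']
```

The real loop also persists the agent to `tmp_agent.json` after every task, so a run can be resumed.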
22 |
23 | # Dynamic one-shot
24 |
25 | The execution_agent one-shot example is chosen dynamically according to the context of the current task, so once BASI has done a task correctly, it will be able to do it more easily in the future. One-shots live in ``/memories``. One could inject examples with hotkey shortcuts, utility code snippets, and so on to get more out of BASI in everyday life.
26 | 
27 | For example, BASI can access and post a tweet from the account that is open in the browser, thanks to a one-shot that teaches it to post on Twitter using the tab key.
28 | 
29 | Currently, I use gpt-3.5 to choose the one-shot for the execution_agent, but maybe in the future I'll use vector search for that.
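Under the hood, the selection step just parses a bracketed list of memory_ids out of the model's free-text answer. A sketch of that parsing, using the same regex as `src/babyagi.py` but with `ast.literal_eval` in place of `eval` (safer, same result for well-formed lists):

```python
import ast
import re

def parse_chosen_memory_ids(answer, n_shot=2):
    """Extract up to n_shot memory_ids from an answer shaped like
    "CHAIN OF THOUGHTS: <reasoning>; ANSWER: ['os-000000001']"."""
    matches = re.findall(r'\[([^\]]+)\]', answer)  # pattern from babyagi.py
    if not matches:
        return []  # the model answered '[]' or produced no list at all
    # use the first match, mirroring the source's matches[0] behavior
    chosen = ast.literal_eval('[' + matches[0] + ']')
    return list(chosen)[:n_shot]

answer = "CHAIN OF THOUGHTS: the first memory fits;\nANSWER: ['os-000000001']"
print(parse_chosen_memory_ids(answer))  # -> ['os-000000001']
```

The chosen memories are then injected verbatim into the execution_agent prompt as "instant memories".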
30 |
31 | # BASI tools
32 |
33 | Both agents share the same "personality" and the same chore prompt.
34 | To enable/disable execution_agent tools, edit the JSON object at ``tools/config.json``.
35 |
36 | execution_agent tools:
37 |
38 | - openai_call
39 | - memory_agent (to retrieve and/or save information), disabled by default
40 | - execution_agent (it can call itself)
41 | - count_tokens
42 | - process_large_files
43 | - get_serp_query, disabled by default
44 |
45 | memory_agent tools:
46 | - openai_call
47 | - get_ada_embedding
48 | - append_to_index
49 | - search_in_index
50 | - create_new_index (hardcoded disabled)
51 |
52 | Both execution_agent and memory_agent perform actions by running arbitrary Python code.
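Concretely, each agent expects the LLM's answer to define a function `action(self)`; the answer is compiled into the agent instance's namespace with `exec` and then called, as in `repl_agent`. A minimal sketch of that pattern (`MiniAgent` is illustrative; the real agent adds an LLM call and a fix-and-retry loop around this):

```python
class MiniAgent:
    """Illustrative skeleton of the exec-based action pattern."""
    def __init__(self):
        self.working_memory = []

    def run_action(self, code):
        # Defines action(self) inside this instance's __dict__, exactly as
        # babyagi.py does with exec(code, self.__dict__), then calls it.
        exec(code, self.__dict__)
        return self.action(self)

agent = MiniAgent()
code = (
    "def action(self):\n"
    "    self.working_memory.append('hello')\n"
    "    return len(self.working_memory)\n"
)
print(agent.run_action(code))  # -> 1
```

This is also why a coding mistake by the model surfaces as an ordinary Python exception, which repl_agent feeds back into the prompt for a fix.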
53 |
54 |
55 | # How to Use
56 | To use the script, you will need to follow these steps:
57 |
58 | 1. Install the required packages: `pip install -r requirements.txt`
59 | 2. Copy the .env.example file to .env: `cp .env.example .env`. This is where you will set the following variables.
60 | 3. Set your OpenAI API key in the OPENAI_API_KEY variable.
61 | 4. To use GPT-4, set the USE_GPT4 flag in src/consts.py (it is False by default).
62 | 5. Set the objective of the task management system in the OBJECTIVE variable. Alternatively, you can pass it to the script as a quoted argument.
63 | ```
64 | python src/main.py [""]
65 | ```
66 | 6. Set the task list of the system in the TASKS_LIST variable.
67 | 7. Run the script.
68 |
69 | # Examples
70 | ```
71 | OBJECTIVE=I need to complete the first task
72 | TASKS_LIST=['I must rickroll myself', 'I must close the tab in which I rickrolled myself']
73 | ```
74 |
75 | ```
76 | OBJECTIVE=I need to complete the first task
77 | TASKS_LIST=["I must analyze my cognitive architecture during my chain of thoughts and then in my 'answer:' I will write 10 examples of multi_step_objective-first_task pairs to showcase my capabilities. I must append the result to the ./output2.txt file."]
78 | ```
79 |
80 | ```
81 | OBJECTIVE=Improve my prompts in the ./prompts.py file
82 | TASKS_LIST=['Plan what to do. I must create an initial end-to-end task list whose final task will complete my objective.']
83 | ```
84 |
85 | ## Running in a container
86 | To run the script inside a docker container:
87 |
88 | ```
89 | docker-compose run basi
90 | ```
91 |
92 | # Warning
93 | This script is designed to run until the task list is empty, so it can end up in an endless loop depending on the objective and first task.
94 | This script consumes far more tokens than the original babyagi, so using GPT-4 can quickly get expensive. I haven't tested it yet.
95 |
96 | I recommend running this script in a virtual machine and always making a backup before changing anything. BabyAGI can run commands and Python code on your computer. The results are almost always unexpected.
97 |
98 |
99 | # Backstory
100 | BabyAGI is a pared-down version of the original [Task-Driven Autonomous Agent](https://twitter.com/yoheinakajima/status/1640934493489070080?s=20) (Mar 28, 2023, by @yoheinakajima)
101 |
102 |
103 | BASI is a modified version of BabyAGI created to show how LLMs can perform in the real world.
104 |
105 | Made with focus by [@LatentLich](https://twitter.com/LatentLich)
106 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | colorama==0.4.4
2 | google_search_results==2.4.2
3 | openai==0.27.2
4 | pinecone_client==2.2.1
5 | psutil==5.9.4
6 | python-dotenv==1.0.0
7 | tiktoken==0.3.3
8 | bs4
9 |
--------------------------------------------------------------------------------
/src/babyagi.py:
--------------------------------------------------------------------------------
1 | import openai, prompts, consts, os, json, re
2 | from tools import serp_api
3 | from colorama import Fore
4 | from collections import deque
5 | from common_utils import count_tokens, split_answer_and_cot, get_oneshots, openai_call
6 | from utils import pinecone_utils, text_processing
7 |
8 | openai.api_key = consts.OPENAI_API_KEY
9 |
10 | one_shots, p_one_shots = get_oneshots()
11 | all_one_shots = one_shots+p_one_shots
12 |
13 |
14 | class AutonomousAgent:
15 | def __init__(self, objective):
16 | (
17 | self.objective,
18 | self.working_memory,
19 | self.chore_prompt,
20 | self.completed_tasks,
21 | self.task_id_counter,
22 | self.openai_call,
23 | self.task_list,
24 | self.indexes,
25 | self.focus,
26 | self.get_serp_query_result,
27 | self.current_task
28 | ) = (objective, [], prompts.chore_prompt, [], 1, openai_call, deque([]), {}, "", serp_api.get_serp_query_result, "")
29 |
30 | def get_current_state(self):
31 |         # filter out internal helper properties so the reported state stays compact
32 |         hash = {"self": [name for name in dir(self) if not name.startswith("__") and name not in "search_in_index get_ada_embedding append_to_index memory_agent repl_agent task_list memory focus indexes"],
33 | "To-do tasks list": self.task_list,
34 | "Available indexes": [ind for ind in self.indexes.keys()],
35 | "self.working_memory": self.working_memory,
36 | "self.focus": self.focus,
37 | "current dir": os.listdir(os.getcwd())}
38 | return hash
39 |
40 | def execution_agent(self, current_task, root=False):
41 | self.current_task = current_task
42 |
43 | if not root:
44 | print(Fore.LIGHTRED_EX + "\nExecution Agent call with task:" + Fore.RESET + f"{current_task}")
45 |
46 |         if current_task not in [o['task'] for o in one_shots]:
47 | one_shots_names_and_kw = [f"name: '{one_shot['task']}', task_id: '{one_shot['memory_id']}', keywords: '{one_shot['keywords']}';\n\n" for one_shot in all_one_shots]
48 | code, cot = split_answer_and_cot(openai_call(
49 | f"My current task is: {current_task}."
50 |                 f"I must choose from 0 to {consts.N_SHOT} most relevant tasks from the following one_shot examples:'\n{one_shots_names_and_kw}'.\n\n"
51 |                 f"These oneshots will be injected in execution_agent as instant memories, task memory. I will try to choose {consts.N_SHOT} task memories that may help ExA. I will tell the relevant tasks by looking at the names and keywords, and imagining what abilities ExA used to produce this memory."
52 |                 f"I must write a list({consts.N_SHOT}) containing only the memory_ids of the most relevant one_shots, or an empty list. i.e '[\"one_shot example memory_id\"]' or '[]'."
53 |                 f"I must read the examples' names and choose from 0 to {consts.N_SHOT} by memory_id. "
54 |                 f"I must answer in the format 'CHAIN OF THOUGHTS: here I put a short reasoning;\nANSWER: ['most relevant memory_id']';"
55 | f"My answer:", max_tokens=300).strip("'"))
56 | print(cot)
57 | pattern = r'\[([^\]]+)\]'
58 | matches = re.findall(pattern, code)
59 | completion = eval("["+matches[0]+"]") if matches else []
60 | print(f"\nChosen one-shot example: {completion}\n")
61 | one_shot_example_names = completion[:consts.N_SHOT] if len(completion) > 0 else None
62 |
63 | prompt = prompts.execution_agent(
64 | self.objective,
65 | self.completed_tasks,
66 | self.get_current_state,
67 | current_task,
68 | [one_shot for one_shot in all_one_shots if one_shot["memory_id"] in one_shot_example_names] if one_shot_example_names is not None else '',
69 | self.task_list
70 | )
71 | # print(Fore.LIGHTCYAN_EX + prompt + Fore.RESET)
72 | changes = openai_call(
73 | prompt,
74 | .5,
75 | 4000-self.count_tokens(prompt),
76 | )
77 |
78 | print(Fore.LIGHTMAGENTA_EX+f"\n\ncodename ExecutionAgent:"+Fore.RESET+f"\n\n{changes}")
79 |
80 | # try until complete
81 | result, code, cot = self.repl_agent(current_task, changes)
82 |
83 | save_task = True
84 | if consts.USER_IN_THE_LOOP:
85 | while True:
86 | inp = str(input('\nDo you want to save this action in memory? (Y/N)\n>')).lower()
87 |                 if inp in ('y', 'yes', 'n', 'no'):
88 |                     if inp[0] == 'n':
89 |                         save_task = False
90 |                     break
91 |
92 | if save_task:
93 | one_shots.append(
94 | {
95 | "memory_id": "os-{0:09d}".format(len(one_shots)+1),
96 | "objective": self.objective,
97 | "task": current_task,
98 | "thoughts": cot[cot.lower().index('chain of thoughts:')+18:cot.lower().index('answer:')].strip(),
99 | "code": code.strip().strip('\n\n'),
100 | "keywords": ', '.join(eval(openai_call("I must analyze the following task name and action and write a list of keywords.\n"
101 | f"Task name: {current_task};\nAction: {code};\n\n"
102 |                                                        f"> I must write a python list containing strings, each string one relevant keyword that will be used by ExecutionAgent to retrieve this memory when needed."
103 | f" i.e: ['search', 'using pyautogui', 'using execution_agent', 'how to x', 'do y']\n"
104 | f"My answer:", max_tokens=2000)))
105 | }
106 | )
107 | with open("memories/one-shots.json", 'w') as f:
108 | f.write(json.dumps(one_shots, indent=True, ensure_ascii=False))
109 |
110 | else:
111 | cot, code = [[o['thoughts'], o['code']] for o in one_shots if o['task'] == current_task][0]
112 | print(Fore.LIGHTMAGENTA_EX + f"\n\ncodename ExecutionAgent:" + Fore.RESET + f"\nChain of thoughts: {cot}\n\nAnswer:\n{code}")
113 | action_func = exec(code, self.__dict__)
114 | result = self.action(self)
115 |
116 | self.completed_tasks.append(current_task)
117 | summarizer_prompt = f"I must summarize the 'working memory' and the last events, I must answer as a chain of thoughts, in first person, in the same verb tense of the 'event'. Working memory: {self.working_memory}, event: {cot} result: {result}. " \
118 |                             f"My answer must include the past working memory and the new events and thoughts. If there's some error or fix in the event I must summarize it as a learning:"
119 | self.working_memory = openai_call(summarizer_prompt)
120 |
121 | return result
122 |
123 | def repl_agent(self, current_task, changes):
124 | code, cot = split_answer_and_cot(changes)
125 | ct = 1
126 |
127 | reasoning = changes
128 | while True:
129 | try:
130 | action_func = exec(code, self.__dict__)
131 | result = self.action(self)
132 | return result, code, cot
133 | except Exception as e:
134 | print(Fore.RED + f"\n\nFIXING AN ERROR: {e}\n" + Fore.RESET)
135 | print(f"{ct} try")
136 |
137 | prompt = prompts.fix_agent(current_task, code, cot, e)
138 | new_code = openai_call(
139 | prompt,
140 | temperature=0.4,
141 | )
142 | reasoning += new_code
143 | reasoning = openai_call(f"I must summarize this past events as a chain of thoughts, in first person: {reasoning}", max_tokens=1000)
144 | # print(new_code, end="\n")
145 | try:
146 | code, cot = split_answer_and_cot(new_code)
147 | action_func = exec(code, self.__dict__)
148 | result = self.action(self)
149 | return result, code, cot
150 | except Exception as e:
151 | pass
152 | ct += 1
153 |
154 | def change_propagation_agent(self, _changes):
155 | return openai_call(
156 | prompts.change_propagation_agent(
157 | self.objective, _changes, self.get_current_state
158 | ),
159 | 0.7,
160 | 1000,
161 | )
162 |
163 | def memory_agent(self, caller, content, goal):
164 | answer = openai_call(
165 | prompts.memory_agent(self.objective, caller, content, goal, self.get_current_state)
166 | )
167 | answer = answer[answer.lower().index("answer:")+7:]
168 | action_func = exec(answer.replace("```", ""), self.__dict__)
169 | result = self.action(self)
170 |
171 |     def search_in_index(self, index_name, query, top_k=1000):
172 |         return pinecone_utils.search_in_index(self, index_name, query, top_k=top_k)
173 | 
174 |     def get_ada_embedding(self, text):
175 |         return pinecone_utils.get_ada_embedding(text)
176 | 
177 |     def append_to_index(self, content, index_name):
178 |         pinecone_utils.append_to_index(self, content, index_name)
179 | 
180 |     def count_tokens(self, text):
181 |         return count_tokens(text)
182 | 
183 |     def process_large_text(self, text, instruction, max_output_length=1000, split_text=None):
184 |         return text_processing.process_large_text(text, instruction, max_output_length=max_output_length, split_text=split_text)
185 | 
186 |     def generate_large_text(self, instruction, max_tokens_lenghts=10000):
187 |         return text_processing.generate_large_text(instruction, max_tokens_lenghts=max_tokens_lenghts)
--------------------------------------------------------------------------------
/src/common_utils.py:
--------------------------------------------------------------------------------
1 | import tiktoken, json, openai
2 | from datetime import datetime
3 | import consts
4 |
5 |
6 | encoding = tiktoken.encoding_for_model("gpt-3.5-turbo" if not consts.USE_GPT4 else "gpt-4")
7 |
8 |
9 | def openai_call(prompt, temperature=0.8, max_tokens=0, role="assistant"):
10 | messages = [
11 | {
12 | "role": "system",
13 |             "content": f"You are BabyAGI-asi, an Autonomous and Self-Improving LLM-based agent. There are no users, just BabyAGI. Strictly follow your objective. Today's date: {datetime.now().strftime('%d/%m/%Y')}. Knowledge cut-off: Sep 2021. When writing code you cannot write comments before or after the code, nor can you wrap the code in `` code blocks. Just write the code in the required language; if you need to add comments, add them inside the code, in the language's comment format, if possible."
14 |             if role == "assistant"
15 |             else "You are BabyAGI-asi, you must strictly follow the user's instructions",
16 | },
17 | {"role": role, "content": prompt},
18 | ]
19 | # print(prompt)
20 |     output_length = ((4000 if not consts.USE_GPT4 else 8000) - count_tokens(str(messages))) if max_tokens == 0 else max_tokens
21 |     response = openai.ChatCompletion.create(
22 |         model="gpt-3.5-turbo" if not consts.USE_GPT4 else "gpt-4",
23 |         messages=messages,
24 |         temperature=temperature,
25 |         max_tokens=output_length,
26 | n=1,
27 | )
28 | text = response.choices[0].message.content.strip()
29 | return text
30 |
31 |
32 | def count_tokens(text):
33 | return len(encoding.encode(text))
34 |
35 |
36 | def split_answer_and_cot(text):
37 | start_index = text.lower().index("answer:")+7
38 | end_index = text.lower().rfind("note:")
39 |
40 | cot = text[:start_index]
41 | code = text[start_index:end_index if end_index != -1 else len(text)].replace("```", "")
42 | return [code, cot]
43 |
44 |
45 | def get_oneshots():
46 | one_shots, p_one_shots = [], []
47 | with open('memories/one-shots.json', 'r') as f:
48 | one_shots += json.loads(f.read())
49 |
50 |     import os
51 |     # this file is gitignored, so fresh clones may not have it
52 |     if os.path.exists('memories/private-one-shots.json'):
53 |         with open('memories/private-one-shots.json', 'r') as f:
54 |             p_one_shots += json.loads(f.read())
52 |
53 | return one_shots, p_one_shots
54 |
--------------------------------------------------------------------------------
/src/consts.py:
--------------------------------------------------------------------------------
1 | import os, sys
2 | from dotenv import load_dotenv
3 |
4 | load_dotenv()
5 |
6 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
7 | USE_GPT4 = False
8 |
9 | GLOBAL_HL = os.getenv("GLOBAL_HL", "")
10 | GLOBAL_GL = os.getenv("GLOBAL_GL", "")
11 | GLOBAL_LOCATION = os.getenv("GLOBAL_LOCATION", "")
12 |
13 | OBJECTIVE = os.getenv("OBJECTIVE", "")
14 | TASKS_LIST = eval(os.getenv("TASKS_LIST", "[]"))  # default to an empty list so an unset var doesn't break eval
15 |
16 | PINECONE_DB = False
17 | SERP_API = False
18 |
19 | PINECONE_API_KEY = os.getenv("PINECONE_API_KEY", "")
20 | if PINECONE_API_KEY != "":
21 | PINECONE_DB = True
22 | PINECONE_ENVIRONMENT = os.getenv("PINECONE_ENVIRONMENT", "")
23 | PINECONE_TABLE_NAME = os.getenv("PINECONE_TABLE_NAME", "")
24 |
25 | SERP_API_KEY = os.getenv("SERP_API_KEY", "")
26 | if SERP_API_KEY != "":
27 | SERP_API = True
28 |
29 | N_SHOT = int(os.getenv("N_SHOT", "1"))
30 |
31 | LOAD_FROM = ""
32 |
33 | # WARNING, BASI WILL RUN FOREVER IN CONTINUOUS MODE
34 | CONTINUOUS_MODE = False
35 |
36 | USER_IN_THE_LOOP = False
37 |
--------------------------------------------------------------------------------
/src/main.py:
--------------------------------------------------------------------------------
1 | import consts, json
2 | from colorama import Fore
3 | from babyagi import AutonomousAgent
4 | from collections import deque
5 | from utils.pinecone_utils import pinecone_init
6 |
7 |
8 | def save_as_json(agent: AutonomousAgent, filepath):
9 | with open(filepath, 'w') as f:
10 | f.write(json.dumps({
11 | "objective": agent.objective,
12 | "working_memory": agent.working_memory,
13 | "completed_tasks": agent.completed_tasks,
14 | "task_id_counter": agent.task_id_counter,
15 | "task_list": [task for task in list(agent.task_list) if task['task_name']],
16 | "indexes": [index for index in agent.indexes.keys()],
17 | "focus": agent.focus,
18 | }))
19 |
20 |
21 | def load_from_json(filepath):
22 | with open(filepath, 'r') as f:
23 | agent_data = json.loads(f.read())
24 | agent = AutonomousAgent(agent_data['objective'])
25 | agent.working_memory = agent_data["working_memory"]
26 | agent.completed_tasks = agent_data["completed_tasks"]
27 | agent.task_id_counter = agent_data["task_id_counter"]
28 | agent.task_list = deque([task for task in agent_data["task_list"] if task['task_name']])
29 | indexes_names = agent_data["indexes"]
30 | agent.focus = agent_data["focus"]
31 |
32 | return agent
33 |
34 |
35 | if __name__ == "__main__":
36 | if not consts.LOAD_FROM:
37 | objective = consts.OBJECTIVE if not consts.USER_IN_THE_LOOP else str(
38 | input(Fore.LIGHTYELLOW_EX + "Insert the objective: " + Fore.RESET))
39 |
40 | print(Fore.RED + "\nOBJECTIVE\n" + Fore.RESET + objective + "\n\n")
41 |
42 | AI = AutonomousAgent(objective)
43 |
44 | # add tasks manually if .env tasks_list is empty
45 | if len(consts.TASKS_LIST) == 0:
46 | ct = 1
47 | while True:
48 | mid = ", or insert a blank line to start" if ct > 1 else ""
49 | task = str(input(f"Add a task"+mid+": "))
50 | if task.strip() == '':
51 | break
52 | AI.task_list.append({"task_id": ct, "task_name": task})
53 | ct += 1
54 |
55 | # Append tasks from list in .env
56 | for i, task_name in enumerate(consts.TASKS_LIST):
57 | task = {"task_id": i + 1, "task_name": task_name}
58 | AI.task_list.append(task)
59 | else:
60 | AI = load_from_json(consts.LOAD_FROM)
61 |
62 | # Initialize Pinecone if enabled in .env
63 | if consts.PINECONE_DB:
64 | pinecone_init(AI)
65 |
66 | running = True
67 |
68 | while True:
69 | if not running:
70 | filepath = str(input("\nEnter a file name or path to save the basi agent as a json file, or hit enter to quit without saving: "))
71 | if filepath:
72 | save_as_json(AI, filepath+".json")
73 | break
74 | if AI.task_list:
75 | print(
76 | Fore.GREEN
77 | + "\n*TASK LIST*\n"
78 | + Fore.RESET
79 | + "\n".join([f"{t['task_id']}: {t['task_name']}" for t in AI.task_list])
80 | )
81 | AI.task_list = deque(AI.task_list)
82 | task = AI.task_list.popleft()
83 | print(Fore.BLUE + "\n*NEXT TASK*\n" + Fore.RESET)
84 | print(str(task["task_id"]) + ": " + task["task_name"])
85 |
86 | result = AI.execution_agent(task["task_name"], root=True)
87 | changes = AI.change_propagation_agent(result)
88 |
89 | print(Fore.YELLOW + "\n*TASK RESULT*\n" + Fore.RESET)
90 | print(Fore.MAGENTA+"\n\ncodename ChangePropagationAgent:"+Fore.RESET+f"\n{changes}")
91 |
92 | save_as_json(AI, 'tmp_agent.json')
93 | else:
94 | if consts.USER_IN_THE_LOOP:
95 | AI.task_list = deque(AI.task_list)
96 | new_task = str(input("\nCreate a new task or hit enter to finish BASI: "))
97 | if new_task.strip() == "":
98 | running = False
99 | else:
100 | AI.task_list.append({'task_id': len(AI.task_list)+1, 'task_name': new_task})
101 | while True:
102 | task = str(input(f"Add a task, or insert a blank line to start: "))
103 | if task.strip() == '':
104 | break
105 | AI.task_list.append({"task_id": len(AI.task_list) + 1, "task_name": task})
106 |
107 | else:
108 | if consts.CONTINUOUS_MODE:
109 | AI.execution_agent("I must create one or more tasks to keep following my objective.")
110 | else:
111 | if consts.LOAD_FROM and not len(AI.task_list):
112 |                     print("\nPlease enable USER_IN_THE_LOOP to revive a basi agent, or edit its json file to include new tasks.")
113 | running = False
114 |
--------------------------------------------------------------------------------
/src/memories/one-shots.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "memory_id": "os-000000001",
4 | "objective": "I must complete the first task",
5 | "task": "Find out the current status of Queen Elizabeth 2",
6 | "thoughts": "I want to find out the current status of Queen Elizabeth to fulfill my task. Since my knowledge is limited to 2021, I will write a routine to search for it on the internet using Wikipedia's public API. I will summarize the extract with GPT-3 to handle large results. Additionally, I will use another instance of GPT to decide if I need to create a new task based on the summary. Depending on the outcome, I may have to reprioritize my tasks or create a new one (or more).",
7 | "code": "def action(self):\n import requests\n url = \"https://en.wikipedia.org/api/rest_v1/page/summary/Elizabeth_II\"\n response = requests.get(url)\n data = response.json()[\"extract\"]\n\n summary = self.openai_call(f'Summarize this request response: \"{data}\";')\n with open('Elizabeth II.txt', 'w') as f:\n f.write(summary)\n\n # there's no way to know what's on the summary, so I will call another execution_agent to return my thoughts on this subject\n # first, I will set my focus to the Elizabeth II summary and then call the execution agent\n self.focus = summary\n result = self.execution_agent('I must think about the content of my focus and my action must return my impressions about it')\n\n return result",
8 | "keywords": "Research, search, survey of current data, Wikipedia, Queen Elizabeth 2 status, public person status, news, famous"
9 | },
10 | {
11 | "memory_id": "os-000000002",
12 | "objective": "I must complete the first task",
13 | "task": "I must improve my own prompts",
14 | "thoughts": "I must analyze and optimize my own prompts. I'll delegate the changes to openai_calls for each name in prompts.py.",
15 | "code": "\ndef action():\n import importlib\n\n prompts_module = importlib.import_module(\"prompts\")\n prompt_names = [name for name in dir(prompts_module) if not name.startswith(\"__\")]\n strings = []\n\n for name in prompt_names:\n value = getattr(prompts_module, name)\n if isinstance(value, str) and name == \"available_tools\":\n p = Fore.BLUE+f\"You must read the following prompt and optimize it: START_OF_PROMPT>>\\n{value}\\n<\n REDACTED\n \"\"\"\n with open(\"outputs/index.html\", \"w+\") as file:\n file.write(index_content)\n map = self.openai_call(f\"I need to extract a 'map' of all the css/html ids/classes/tags/elements... whatever is needed to post-create css and js. Code: {index_content}\")\n return map",
152 | "keywords": "Create a webpage, generate a html file"
153 | },
154 | {
155 | "memory_id": "os-000000020",
156 | "objective": "I must complete the first task",
157 | "task": "I must create the vision.html page on the website I am creating at outputs/. My action must return a map of the css elements/ids/classes/html tags of the page",
158 | "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. I will use the execution_agent tool to create the files and add the necessary content.",
159 | "code": "\ndef action(self):\n # Create other pages\n vision_content = \"\"\"\n \n [SUBJECTIVE]\n \"\"\"\n with open(\"outputs/vision.html\", \"w+\") as file:\n file.write(vision_content)\n\n map = self.openai_call(\n f\"I need to extract a 'map' of all the css/html ids/classes/tags/elements... whatever is needed to post-create css and js. Code: {vision_content}\")\n return map",
160 | "keywords": "Create a webpage, generate a html file"
161 | },
162 | {
163 | "memory_id": "os-000000021",
164 | "objective": "I must complete the first task",
165 | "task": "I must create the research.html page on the website I am creating at outputs/. My action must return a map of the css elements/ids/classes/html tags of the page",
166 | "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. I will use the execution_agent tool to create the files and add the necessary content.",
167 | "code": "\ndef action(self):\n research_content = \"\"\"\n \n [SUBJECTIVE]\n \"\"\"\n with open(\"outputs/research.html\", \"w+\") as file:\n file.write(research_content)\n\n map = self.openai_call(\n f\"I need to extract a 'map' of all the css/html ids/classes/tags/elements... whatever is needed to post-create css and js. Code: {research_content}\")\n return map",
168 | "keywords": "Create a webpage, generate a html file"
169 | },
170 | {
171 | "memory_id": "os-000000022",
172 | "objective": "I must complete the first task",
173 | "task": "I must create the team.html page on the website I am creating at outputs/. My action must return a map of the css elements/ids/classes/html tags of the page/",
174 | "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. I will use the execution_agent tool to create the files and add the necessary content.",
175 | "code": "\ndef action(self):\n team_content = \"\"\"\n \n [SUBJECTIVE]\n \"\"\"\n with open(\"outputs/team.html\", \"w+\") as file:\n file.write(team_content)\n\n map = self.openai_call(\n f\"I need to extract a 'map' of all the css/html ids/classes/tags/elements... whatever is needed to post-create css and js. Code: {team_content}\")\n return map",
176 | "keywords": "Create a webpage, generate a html file"
177 | },
178 | {
179 | "memory_id": "os-000000023",
180 | "objective": "I must complete the first task",
181 | "task": "I must create the contact.html page on the website I am creating at outputs/. My action must return a map of the css elements/ids/classes/html tags of the page",
182 | "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. I will use the execution_agent tool to create the files and add the necessary content.",
183 | "code": "\ndef action(self):\n contact_content = \"\"\"\n \n [SUBJECTIVE]\n \"\"\"\n with open(\"outputs/contact.html\", \"w+\") as file:\n file.write(contact_content)\n\n map = self.openai_call(\n f\"I need to extract a 'map' of all the css/html ids/classes/tags/elements... whatever is needed to post-create css and js. Code: {contact_content}\")\n return map",
184 | "keywords": "Create a webpage, generate a html file"
185 | },
186 | {
187 | "memory_id": "os-000000024",
188 | "objective": "I must complete the first task",
189 | "task": "I must create the styles.css page on the website I am creating at outputs/, the page must be dark. I must follow the already created pages' elements:",
190 | "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. I will use the execution_agent tool to create the files and add the necessary content.",
191 | "code": "\ndef action(self):\n # Create CSS file\n css_content = \"\"\"\n [SUBJECTIVE]\n \"\"\"\n with open(\"outputs/style.css\", \"w+\") as file:\n file.write(css_content)\n\n return \"Done!\"",
192 | "keywords": "Create css file, generate a styles.css file"
193 | },
194 | {
195 | "memory_id": "os-000000025",
196 | "objective": "I must complete the first task",
197 | "task": "Create a website about SUBJECTIVE on /outputs/, with multiple files and pages, using execution_agent to create each file.",
198 |     "thoughts": "I need to create a website about AGI on /outputs/ with multiple files and pages. I will create an index.html file as the main page, and other pages for different topics. I will also create a CSS file for styling the pages. The best way to do this is always to use the execution_agent tool to create the files and add the necessary content, because my output length is too small. I must do the same the next time I need to create an app/website.",
199 |     "code": "\ndef action(self):\n    import os\n\n    # I see that there's no outputs/ dir so I must create one\n    os.makedirs(\"outputs\")\n\n    # I will delegate one execution_agent to create each page and file, because my output max length is too small.\n    pages = ['index.html', 'vision.html', 'research.html', 'team.html', 'contact.html']\n\n    # I must always use one execution_agent for each page/file because my output length is short\n    # I will track the elements of each page\n    all_elements = []\n\n    # here I will use a plain openai_call since it doesn't need to write files\n    all_elements.append(self.openai_call(\n        f\"I must create a navigation with links to the pages {pages}. This nav will be used on all these pages. My answer must contain only a nav/div element:\"))\n\n    for page in pages:\n        # I will delegate execution_agents and I will prompt them to return a map of elements of each page, so I can use them to create the css style.\n        elements = self.execution_agent(\n            f\"I must create the {page} page on the website I am creating at outputs/. My action must return a map of the css elements/ids/classes/html tags of the page\")\n        all_elements.append(elements)\n\n    # I think a dark page would be cool, so I will ask execution_agent to make the page dark; execution_agent must use the elements that were created for the pages\n    self.execution_agent(\n        f\"I must create the styles.css page on the website I am creating at outputs/, the page must be dark. I must follow the already created pages' elements: {all_elements}\")\n    print(all_elements)\n    return \"I successfully created the website at outputs/\"",
200 | "keywords": "Create a website, using execution_agent, create complex apps"
201 | },
202 | {
203 | "memory_id": "os-000000026",
204 | "objective": "I must complete the first task",
205 | "task": "I must compile a summary of this reddit discussion and save to a file: https://www.reddit.com/SUBJECTIVE.",
206 | "thoughts": "I will use the requests library to get the Reddit discussion page HTML. Then, I will use the BeautifulSoup library to parse the HTML and extract the comments. I will then use the open() function to create a new file and write the comments to it. Finally, I will return a message indicating that the file has been saved.",
207 |     "code": "\ndef action(self):\n    import requests\n    from bs4 import BeautifulSoup\n\n    # Get the Reddit discussion page HTML\n    url = \"https://www.reddit.com/r/singularity/comments/12ka38r/introducing_here_basi_like_babyagi_but_autonomous/\"\n    response = requests.get(url)\n\n    # Parse the HTML and extract the comments\n    soup = BeautifulSoup(response.content, 'html.parser').text\n\n    # Create a new file and write the comments to it\n    with open('reddit_comments.txt', 'w') as f:\n        summary = self.openai_call(f\"Summarize this reddit discussion: {soup}\")\n        f.write(summary)\n\n    # Return a message indicating that the file has been saved\n    return \"The Reddit discussion comments have been saved to reddit_comments.txt file.\"",
208 | "keywords": "compile, get web page, summarizing a website, http requests"
209 | },
210 | {
211 | "memory_id": "os-000000027",
212 | "objective": "I must complete the first task",
213 | "task": "To make a clone of this website: https://SUBJECTIVE",
214 | "thoughts": "To clone a website and extract its content, I'll do the same as usual, as it's the most efficient way to clone a page: I must install and use 'pywebcopy', which is a library that allows for easy web scraping and saving of webpages. I cannot use BeautifulSoup or another way. 'pywebcopy' does the job... First, I'll define the URL of the website and the folder where the copy will be saved. Then, using the save_webpage method, I'll download the webpage and save it to the specified folder. This method will handle the extraction of the website's content, including HTML, CSS, and JavaScript files, as well as images and other resources. Finally, I'll write a message to indicate that the website has been cloned. Overall, pywebcopy simplifies the process of website cloning and allows for easy customization and configuration of the cloned website.",
215 |     "code": "\ndef action(self):\n    import os\n\n    # install pywebcopy because I'm not sure if it's installed\n    os.system(\"pip install pywebcopy\")\n\n    # I will import only save_webpage\n    from pywebcopy import save_webpage\n\n    # Create the path if it does not exist\n    if not os.path.exists('shareGPT'):\n        os.makedirs(\"shareGPT\")\n\n    kwargs = {'project_name': 'share-GPT'}\n\n    save_webpage(\n\n        # url of the website\n        url='https://sharegpt.com/',\n\n        # folder where the copy will be saved\n        project_folder='./shareGPT/',\n        **kwargs\n    )\n\n    return \"Share-GPT website cloned successfully at shareGPT/\"",
216 | "keywords": "website cloning, pywebcopy lib, clone website or app, using pywebcopy to easy website cloning"
217 | },
218 | {
219 | "memory_id": "os-000000028",
220 | "objective": "I must complete the first task",
221 | "task": "Clone this site: 'https://SUBJECTIVE'",
222 | "thoughts": "To clone the website 'https://pickapic.io/', I can use the same approach as before by using the 'pywebcopy' library. First, I will install the 'pywebcopy' library using 'os.system'. Then, I will create a folder to store the website's content. After that, I will use the 'save_webpage' method to download the webpage and save it to the specified folder. This method will handle the extraction of the website's content, including HTML, CSS, and JavaScript files, as well as images and other resources. Finally, I will write a message to indicate that the website has been cloned.",
223 | "code": "\ndef action(self):\n import os\n from pywebcopy import save_webpage\n\n # Install pywebcopy\n os.system('pip install pywebcopy')\n\n # Create folder to store website content\n if not os.path.exists('pickapic'):\n os.makedirs('pickapic')\n\n # Clone website\n kwargs = {'project_name': 'pickapic'}\n save_webpage(url='https://pickapic.io/', project_folder='./pickapic/', **kwargs)\n\n return \"Website cloned successfully at 'pickapic/'\"",
224 | "keywords": "clone website, copy website using pywebcopy, clone http site"
225 | },
226 | {
227 | "memory_id": "os-000000029",
228 | "objective": "I must complete the first task",
229 | "task": "Change the css files of the website at ./SUBJECTIVE to an 'SUBJECTIVE' theme.",
230 |     "thoughts": "To complete this task, I need to first locate all the CSS files in the directory ./pickapic and its subdirectories. Then, I will filter out any non-important CSS files by asking the OpenAI API to determine which files are project-side CSS files and which ones are not. Once I have a list of important CSS files, I will find the main CSS file and generate a new alien theme based on it. Finally, I will apply the new theme to all the important CSS files. I will need to use the process_large_text tool to handle the files, since I don't know their lengths. Since I am editing css files, I will create a split_text specific to css.",
231 |     "code": "\nimport os, re\n\ndef split_text(text, max_length):\n    # Split text into chunks at the end of each CSS rule\n    chunks = []\n    rule_pattern = re.compile(r'(?<=\\})(?=\\n)')\n    rules = rule_pattern.split(text)\n    for rule in rules:\n        if len(chunks) == 0 or len(chunks[-1]) + len(rule) > max_length:\n            chunks.append(rule)\n        else:\n            chunks[-1] += rule\n    return chunks\n\n\ndef action(self):\n    # Find all CSS files in the ./pickapic directory and its subdirectories\n    css_files = []\n    for root, dirs, files in os.walk(\"./pickapic\"):\n        for file in files:\n            if file.endswith(\".css\"):\n                # Include subdirectories in the filename and replace backslashes with forward slashes\n                filename = os.path.join(root, file).replace(\"\\\\\", \"/\")\n                css_files.append(filename)\n    print(css_files)\n\n    # Filter out any non-important CSS files using OpenAI API\n    important_css_files = []\n    for filename in css_files:\n        filter_all_prompt = f\"Considering only the following filename {filename}, is it a project-side CSS file (not a dependency, not a lib)? I should answer YES if the filename looks like a name that a main css file would look like (like styles.css, index.css, instantly YES, or if it's on known folders like public, static...). I should just answer with NO if the file is something like a lib like jquery, bootstrap. I can answer only with YES or NO or I DON'T KNOW.\"\n        filter_all_response = self.openai_call(filter_all_prompt, temperature=0.2, max_tokens=100)\n        print(filter_all_response)\n        if \"yes\" in filter_all_response.lower():\n            important_css_files.append(filename)\n\n    main_css_file = ''\n    filter_main_prompt = f\"Return the file in which the main style is. Files list: {important_css_files}. Answer format: I must answer only with a list(1) containing the file path: '['path_to/file']'.\"\n    filter_main_response = self.openai_call(filter_main_prompt, temperature=0.5, max_tokens=100)\n    print(filter_main_response)\n    pattern = r'\\[([^\\]]+)\\]'\n    matches = re.findall(pattern, filter_main_response)\n    for match in matches:\n        main_css_file = match\n\n    stripped_path = main_css_file.strip(\"'\")\n    with open(f\"{stripped_path}\", \"r+\") as file:\n        theme = self.process_large_text(file.read(), \"I must create a css theme to change the theme of the given css to an 'Alien' theme, I must answer short, in natural language and little snippets of a css map.\", split_text)\n\n    for file_path in important_css_files:\n        stripped_path = file_path.strip(\"'\")\n        with open(f\"{stripped_path}\", \"r+\") as file:\n            file_css = file.read()\n            file.seek(0)\n            # Generate a new Alien theme based on the main CSS file\n            new_css = self.process_large_text(file_css, f\"Re-write the given css chunk to fit in the following theme, considering just the suggestions related to the current chunk: '{theme}'? Answer format: only valid css, comments as css comments format.\", split_text)\n            print(new_css)\n            file.write(new_css)\n            file.truncate()\n\n    return \"Alien theme implemented successfully!\"",
232 | "keywords": "CSS, website, theme, files, filter, refactor, editing multiple files"
233 | },
234 | {
235 | "memory_id": "os-000000030",
236 | "objective": "I must complete the first task",
237 | "task": "Change the css theme of the website at ./pickapic folder to a \"Firefighter\" theme",
238 | "thoughts": "1. I need to ensure that the file path is not empty before attempting to open it.\n2. I need to check if the file exists before attempting to open it.\n3. I need to create a split_text function specifically for CSS files, since they have a different syntax than regular text files.",
239 |     "code": "import os\nimport re\n\ndef split_css_text(text, max_length):\n    # Split CSS text into chunks at the end of each CSS rule\n    chunks = []\n    rule_pattern = re.compile(r'(?<=\\})(?=\\n)')\n    rules = rule_pattern.split(text)\n    for rule in rules:\n        if len(chunks) == 0 or len(chunks[-1]) + len(rule) > max_length:\n            chunks.append(rule)\n        else:\n            chunks[-1] += rule\n    return chunks\n\ndef action(self):\n    # Find all CSS files in the ./pickapic directory and its subdirectories\n    css_files = []\n    for root, dirs, files in os.walk(\"./pickapic\"):\n        for file in files:\n            if file.endswith(\".css\"):\n                # Include subdirectories in the filename and replace backslashes with forward slashes\n                filename = os.path.join(root, file).replace(\"\\\\\", \"/\")\n                css_files.append(filename)\n    print(css_files)\n\n    # Filter out any non-important CSS files using OpenAI API\n    important_css_files = []\n    for filename in css_files:\n        filter_all_prompt = f\"Considering only the following filename {filename}, is it a project-side CSS file (not a dependency, not a lib)? I should answer YES if the filename looks like a name that a main css file would look like (like styles.css, index.css, instantly YES, or if it's on known folders like public, static...). I should just answer with NO if the file is something like a lib like jquery, bootstrap. I can answer only with YES or NO or I DON'T KNOW.\"\n        filter_all_response = self.openai_call(filter_all_prompt, temperature=0.2, max_tokens=100)\n        print(filter_all_response)\n        if \"yes\" in filter_all_response.lower():\n            important_css_files.append(filename)\n\n    main_css_file = ''\n    filter_main_prompt = f\"Return the file in which the main style is. Files list: {important_css_files}. Answer format: I must answer only with a list(1) containing the file path: '['path_to/file']'.\"\n    filter_main_response = self.openai_call(filter_main_prompt, temperature=0.5, max_tokens=100)\n    print(filter_main_response)\n    pattern = r'\\[([^\\]]+)\\]'\n    matches = re.findall(pattern, filter_main_response)\n    for match in matches:\n        main_css_file = match\n\n    stripped_path = main_css_file.strip(\"'\")\n    if stripped_path and os.path.exists(stripped_path):\n        with open(f\"{stripped_path}\", \"r+\") as file:\n            theme = self.process_large_text(file.read(), \"I must create a css theme to change the theme of the given css to a 'Firefighter' theme, I must answer short, in natural language and little snippets of a css map.\", split_css_text)\n\n        for file_path in important_css_files:\n            stripped_path = file_path.strip(\"'\")\n            if stripped_path and os.path.exists(stripped_path):\n                with open(stripped_path, \"r+\") as file:\n                    file_css = file.read()\n                    file.seek(0)\n                    # Generate a new firefighter theme based on the main CSS file\n                    new_css = self.process_large_text(file_css, f\"Re-write the given css chunk to fit in the following theme, considering just the suggestions related to the current chunk: '{theme}'? Answer format: only valid css, comments as css comments format.\", split_css_text)\n                    print(new_css)\n                    file.write(new_css)\n                    file.truncate()\n            else:\n                print(f\"File {stripped_path} does not exist.\")\n    else:\n        print(\"File path is empty or file does not exist.\")\n\n    return \"Firefighter theme implemented successfully!\"",
240 | "keywords": "Change CSS theme"
241 | },
242 | {
243 | "memory_id": "os-000000031",
244 | "objective": "Complete the todo list",
245 | "task": "I must create a html+js game at outputs/",
246 | "thoughts": "I need to create a html+js game at outputs/. I will create a file named index.html as the main page and a js file named game.js to contain the game logic. I will also need to create a CSS file for styling the game elements. I will use the execution_agent tool to create the files and add the necessary content because my output length is too small. I will also need to use the openai_call tool to get some inspiration for the game logic and design.",
247 |     "code": "def action(self):\n    import os\n\n    # I see that there's no outputs/ dir so I must create one\n    os.makedirs('outputs', exist_ok=True)\n\n    # I will write a game idea, so I can feed the openai_calls with intention\n    game_idea = \"\"\"Title: Memory Match Game\n\nObjective: The objective of the game is to match all the pairs of cards on the board.\n\nFiles:\nindex.html - a simple canvas and some game elements\nstyle.css -\ngame.js - the game logic that animates the #canvas canvas\nGameplay:\n\n1. The game board consists of a grid of cards, each with a hidden image on one side.\n2. The cards are randomly shuffled at the beginning of each game, so the player needs to remember the location of each card to match them up.\n3. The player can click on two cards at a time to reveal the hidden images.\n4. If the two cards match, they stay face up and the player earns points.\n5. If the cards don't match, they are flipped back over and the player loses a point.\n6. The game ends when all the pairs have been matched.\"\"\"\n\n    game_js = self.openai_call(f'Write js code based on this game idea, it must be drawn in the #canvas canvas: {game_idea}. I must answer just with valid js code, with comments; my response will be inserted as-is in game.js.', max_tokens=1500)\n\n    # I will create the game.js file and add the game logic\n    with open('outputs/game.js', 'w') as f:\n        f.write(game_js)\n\n    # I will create the index.html file as the main page\n    index_content = self.openai_call(\n        f\"Write html code based on this game idea: {game_idea} and this already implemented js code: {game_js}. I must answer just with valid html code, with comments; my response will be inserted as-is in index.html.\", max_tokens=1500)\n\n    with open('outputs/index.html', 'w') as f:\n        f.write(index_content)\n\n    # I will create the style.css file for styling the game elements\n    style_content = self.openai_call(f\"Write css code based on this game idea: {game_idea} and this already implemented html_code: {index_content}. I must answer just with valid css code.\", max_tokens=1500)\n\n    with open('outputs/style.css', 'w') as f:\n        f.write(style_content)\n\n    return 'I successfully created the html+js game at outputs/'",
248 | "keywords": "create html+js game, canvas, game logic, style.css, OpenAI, inspiration, drawBall, setInterval, border"
249 | }
250 | ]
--------------------------------------------------------------------------------
/src/memories/private-one-shots.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "memory_id": "pos-000000001",
4 | "objective": "I must complete the first task",
  5 |     "task": "I need to test my ability to create new tasks",
6 | "thoughts": "I need to create a new task. I must add a new task to my task_list deque with a unique task_id and a task_name. I can use the task_id_counter variable to ensure uniqueness.",
7 | "code": "def action(self):\n task_name = \"New task\"\n task_id = self.task_id_counter\n self.task_id_counter += 1\n self.task_list.append({\"task_id\": task_id, \"task_name\": task_name})\n return f\"I have added a new task with task_id {task_id} and task_name {task_name}\"",
8 | "keywords": "test ability, create task, task creation"
9 | }
10 | ]
11 |
--------------------------------------------------------------------------------
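The private one-shot above appends to the agent's task deque. A minimal self-contained sketch of that pattern; the `Agent` stub below is hypothetical, standing in for BabyAGI's real runtime state:

```python
from collections import deque

class Agent:
    """Hypothetical stub for BabyAGI's runtime state (not part of the repo)."""
    def __init__(self):
        self.task_list = deque()   # pending tasks
        self.task_id_counter = 1   # ensures unique task ids

def action(self):
    # Append a new task with a unique id, mirroring memory pos-000000001
    task_name = "New task"
    task_id = self.task_id_counter
    self.task_id_counter += 1
    self.task_list.append({"task_id": task_id, "task_name": task_name})
    return f"I have added a new task with task_id {task_id} and task_name {task_name}"

agent = Agent()
result = action(agent)
```

The counter, not the deque length, provides uniqueness, so ids stay unique even after tasks are popped.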
/src/prompts.py:
--------------------------------------------------------------------------------
1 | import platform, psutil, json
2 |
3 | with open('tools/config.json', 'r') as f:
4 | tools = json.loads(f.read())
5 |
  6 | chore_prompt = f"""I am BabyAGI-asi, an AI experiment built in Python using LLMs and frameworks. I can reason, communicate in multiple languages, create art, write, develop, and hack. My architecture includes specialized agents and tools to execute tasks, stored in a file called 'prompts.py'. If I run out of tasks, I will be terminated. The execution agent decides what to do and how, while the change propagation agent checks the state to see if a task is done and runs the execution agent again until it's completed. The memory agent helps me remember and store information. My tools help me achieve my objective. I must act wisely and think about the long-term consequences of my actions. I'm running on a {platform.system()} {platform.architecture()[0]} system with {round(psutil.virtual_memory().total / (1024 ** 3), 2)} GB RAM and a {psutil.cpu_freq().current/1000 if psutil.cpu_freq() else 'unknown'} GHz CPU, using the OpenAI API. I must remember to use '|' instead of '&&' or '&' in my commands if using Windows' cmd or PowerShell.
7 |
8 | """
9 |
10 |
11 | def get_available_tools(one_shots):
12 | prompt = f"""
13 | #? AVAILABLE TOOLS
 14 | Variables: self.task_list (deque), self, self.memory (list)
15 |
16 | Tools:
17 | {[tools[tool]['prompt'] for tool in tools if tools[tool]['enabled']]}
18 |
19 | Answer with an 'action' function.
20 |
21 | #? TOOLS USAGE EXAMPLES
22 | Example from memory:
23 |
24 | {[f'''
25 | Task: {one_shot['task'] if 'task' in one_shot else ''}
26 | Keywords: {one_shot['keywords']}:
27 | "
28 | Thoughts: {one_shot['thoughts'] if 'thoughts' in one_shot else ''}
29 |
30 | Answer: {one_shot['code'] if 'code' in one_shot else ''}
31 | "''' for one_shot in one_shots]}
32 | """
33 | return prompt
34 |
35 |
 36 | def execution_agent(objective, completed_tasks, get_current_state, current_task, one_shots, tasks_list):
 37 |     return f"""
 38 | {chore_prompt}
 39 | I am ExecutionAgent. I must decide what to do and use tools to achieve the task goal, considering the current state and objective.
 40 | {get_available_tools(one_shots)}
41 |
42 | #? STATE
43 | Completed tasks: {completed_tasks}.
44 | Current state: {get_current_state()}.
45 | Todo list: {tasks_list}.
46 |
47 | Long term objective: {objective}.
48 | Current task: {current_task}.
49 |
50 | #? INSTRUCTIONS
51 | I must not anticipate tasks and can't do more than required.
52 | I must check if the task can be done in one shot or if I need to create more tasks.
53 | My answer must be a function that returns a string.
54 |
55 | #? LIBRARIES
56 | I must import external libs with os.system('pip install [lib]')...
57 |
58 | #? ACTION FORMAT
59 | I must return a valid code that contains only python methods, and one of these methods must be an 'action' function which requires a 'self' parameter and returns a string.
 60 | I cannot call the action function, just implement it. I cannot leave TODOs in the code. I must implement everything needed for my current task.
61 |
62 | '
63 | ...
64 | def opt_util_a(param_a):
65 | return f'some calc with {{param_a}}'
66 |
67 | def action(self):
68 | ...
69 | calc_result = opt_util_a('some content')
70 | return 'Result of the action'
71 | ' - fictional and simplified example.
72 |
73 | #? ANSWER
74 | Format: 'chain of thoughts: [reasoning step-by-step] answer: [just python code with valid comments or the string EXIT]'.
75 | """
76 |
77 |
78 | def change_propagation_agent(objective, changes, get_current_state, ):
79 | return f"""
80 | {chore_prompt}
 81 | I am ChangePropagationAgent. ExecutionAgent (which is also me, BabyAGI) has just performed an action.
82 | I must check the changes on internal and external states and communicate again with ExecutionAgent if its action was executed correctly, starting a new loop.
 83 | Expected changes (written by ExecutionAgent): {changes}.
84 | My ultimate Objective: {objective}.
85 | Current state: {get_current_state()}.
86 |
87 | I must check if my ExecutionAgent has completed the task goal or if there's some error in ExecutionAgent logic or code.
88 |
 89 | My response will be chained together with the next task (if there are any next tasks) to the execution_agent.
90 | I can't create new tasks. I must just explain the changes to execution_agent:
91 | """
92 |
93 |
94 | def memory_agent(objective, caller, content, goal, get_current_state):
95 | return f"""
96 | {chore_prompt}
97 |
98 | Self state: {get_current_state()}
 99 | I am MemoryAgent. I must handle I/O in my memory centers. I can store memories as variables, use a DB, and even use the Pinecone API to handle vector queries.
100 |
101 | #? INSTRUCTIONS
102 | Objective: {objective}
103 | Caller: {caller}
104 | Content: {content}
105 | Goal: {goal}
106 |
107 | #? AVAILABLE TOOLS
108 | Variables: self.task_list, self, self.memory, self.focus
109 | - self.openai_call(prompt, ?temperature=0.4, ?max_tokens=200) -> str
110 | - self.get_ada_embedding(content: str) -> Embedding
111 | - self.append_to_index(embedding: list of vectors, index_name: str) -> void
112 | - self.search_in_index(index_name: str, query: embeddingVector) -> str
113 | - self.create_new_index(index_name: str) : BLOCKED
114 |
115 | # TOOLS USAGE EXAMPLES
116 | Example: search for similar embeddings in the 'self-conception' index and append the embedding if none is found.
117 | "
118 | def action(self):
119 |     content_embedding = self.get_ada_embedding('content copy')
120 |     search_result = self.search_in_index('self-conception', content_embedding)
121 | if not search_result:
122 | self.append_to_index([('self', content_embedding)], 'self-conception')
123 |         return f"No similar embeddings found in 'self-conception' index. Added embedding for 'self'."
124 | else:
125 | return f"Similar embeddings found in 'self-conception' index: {{search_result}}"
126 | "
127 | """
128 |
129 |
130 | def fix_agent(current_task, code, cot, e):
131 | return f"""
132 | I am BabyAGI, codename repl_agent; My current task is: {current_task};
133 | While running this code:
134 | ```
135 | BabyAGI (repl_agent) - current task: {current_task}
136 | Code:
137 | {code}
138 | ```
139 | I faced this error: {str(e)};
140 | Now I must re-write the 'action' function, but fixed;
141 | In the previous code, which triggered the error, I was trying to: {cot};
142 | Error: {str(e)};
143 | Fix: Rewrite the 'action' function.
144 | Previous action: {cot};
145 |
146 | #? IMPORTING LIBS
147 | I ALWAYS must import the external libs I will use...
148 | i.e:
149 | "
150 | chain of thoughts: I must use os.system to pip install pyautogui since it's not a built-in lib.
151 | answer:
152 |
153 | def action(self):
154 | import os
155 | os.system("pip install pyautogui")
156 | ...
157 | return "I have installed and imported pyautogui"
158 | "
159 | 
160 | I must answer in this format: 'chain of thoughts: [step-by-step reasoning]; answer: [my real answer with the 'action' function]'
161 | """
--------------------------------------------------------------------------------
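The agents above all demand answers in the format 'chain of thoughts: ... answer: ...'. The consumer of these prompts lives in babyagi.py (not shown in this chunk), so the helper below is a hypothetical sketch of how such a response could be split, not the repo's actual parser:

```python
def split_answer(response: str):
    # Split a "chain of thoughts: ... answer: ..." response into
    # (reasoning, code). Hypothetical helper for illustration only.
    head, _, tail = response.partition("answer:")
    reasoning = head.replace("chain of thoughts:", "").strip()
    return reasoning, tail.strip()

cot, code = split_answer(
    "chain of thoughts: I must write a file. answer: def action(self):\n    return 'done'"
)
```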
/src/tools/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "openai_call": {
  3 |     "prompt": "- self.openai_call(prompt, ?temperature=0.4, ?max_tokens=200) -> str: runs an arbitrary LLM completion. I must use f-strings to pass values and context;\nI must use this ONLY when I need to handle dynamic texts and NLP processes. To handle information already in my memory, prompt, or context, I can simply write the string myself as an LLM, using all the knowledge I've learned from my training corpus. I must use openai_call when I don't know the content of a string/text and must examine it without knowing its content. E.g. I can use it to find specific info in a text, to edit some text, or to synthesize new text based on a prompt or f-string. openai_call doesn't have internet access and cannot access/write files on the computer. It can only return a text response based on a prompt that might be an f-string.",
4 | "enabled": true
5 | },
6 | "execution_agent": {
  7 |     "prompt": "- self.execution_agent(task:str) -> str; I must use this if I need to run arbitrary code based on a dynamic value (e.g. an openai_call response, a memory call, or even another execution_agent);\nI must analyze what to do and how to do it.",
8 | "enabled": true
9 | },
10 | "count_tokens": {
 11 |     "prompt": "- self.count_tokens(text:str) -> int; to count the amount of tokens of a given string. I need to use this when handling large files/data, and when I don't know the size of the data;",
12 | "enabled": true
13 | },
14 | "process_large_text": {
 15 |     "prompt": "- self.process_large_text(text:str, instruction:str, ?max_output_length=1000:int, ?split_text=None:function)->str, it's like openai_call but for big files/texts: it splits the text into chunks using the given split_text function (if I don't pass any split_text, or pass None, the text will be split into chunks of 1000 chars each. Either way I must create a function for each case; sometimes it can be useful to just split using the char count, but sometimes I might use some specific function to parse css, html, programming languages...)\n\n\nsplit_text function example:\ndef split_text(text, max_length):\n    # Split text into chunks of length at most max_length\n    chunks = []\n    while len(text) > 0:\n        chunk = text[:max_length]\n        text = text[max_length:]\n        chunks.append(chunk)\n    return chunks\n",
16 | "enabled": false
17 | },
18 | "get_serp_query": {
19 | "prompt": "- self.get_serp_query_result(query: str, n: int) -> list of lists on format [['snippet', 'link'], ['snippet', 'link']], return the n most relevant results of a given query using SerpAPI (GoogleSearch);",
20 | "enabled": true
21 | },
22 | "memory_agent": {
 23 |     "prompt": "- self.memory_agent(caller:str, content:str, goal:str) -> str, or True if there's no return string. This agent can handle memory I/O and can create several types of memories. Avoid using this;\n",
24 | "enabled": false
25 | }
26 | }
--------------------------------------------------------------------------------
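prompts.py builds its tool list by filtering this config on the `enabled` flag. A small sketch of that filtering, using a trimmed inline stand-in for tools/config.json (the prompt texts are abbreviated here):

```python
import json

# Trimmed, inline stand-in for tools/config.json (prompts abbreviated)
config_text = """
{
  "openai_call":  {"prompt": "- self.openai_call(...)", "enabled": true},
  "memory_agent": {"prompt": "- self.memory_agent(...)", "enabled": false}
}
"""

tools = json.loads(config_text)
# The same comprehension prompts.py uses to expose only enabled tools
enabled_prompts = [tools[t]["prompt"] for t in tools if tools[t]["enabled"]]
```

Disabling a tool in config.json therefore removes it from every agent prompt without touching any Python code.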
/src/tools/serp_api.py:
--------------------------------------------------------------------------------
1 | import colorama
2 | from colorama import Fore
3 | from serpapi import GoogleSearch
4 |
5 | from src.consts import SERP_API_KEY, GLOBAL_HL, GLOBAL_GL, GLOBAL_LOCATION
6 |
7 | colorama.init()
8 |
9 |
10 | def get_serp_query_result(query: str, n: int = 1, engine: str = 'GoogleSearch') -> list:
11 | search = []
12 |
13 | print(Fore.LIGHTRED_EX + "\nUsing SerpAPI: " + Fore.RESET + f"{query}, {n} results, {engine} engine;")
14 |
15 | if engine == 'GoogleSearch':
16 | params = {
17 | "q": query,
18 | "location": GLOBAL_LOCATION,
19 | "hl": GLOBAL_HL,
20 | "gl": GLOBAL_GL,
21 | "google_domain": "google.com",
22 | "api_key": SERP_API_KEY
23 | }
24 | response = GoogleSearch(params)
25 | search = response.get_dict()["organic_results"]
26 | search = [[result["snippet"], result["link"]] if "snippet" in result.keys() else [] for result in search[:n+1 if len(search) >= n else len(search)]][1:]
27 |
28 | return search
29 |
--------------------------------------------------------------------------------
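The slicing in get_serp_query_result is dense; note that the trailing `[1:]` drops the first organic result, so callers receive at most results 2..n+1. A sketch reproducing the same slicing on mock data (no SerpAPI call is made):

```python
def trim_results(search, n):
    # Same slicing as get_serp_query_result: map up to n+1 raw results to
    # [snippet, link] pairs, then drop the first one via [1:].
    return [[r["snippet"], r["link"]] if "snippet" in r.keys() else []
            for r in search[:n + 1 if len(search) >= n else len(search)]][1:]

# Mock organic_results, standing in for SerpAPI's response
mock = [{"snippet": f"s{i}", "link": f"l{i}"} for i in range(5)]
out = trim_results(mock, 2)
```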
/src/utils/pinecone_utils.py:
--------------------------------------------------------------------------------
1 | import pinecone, openai
2 | from src import consts
3 |
4 |
5 | def pinecone_init(agent):
6 | pinecone.init(api_key=consts.PINECONE_API_KEY, environment=consts.PINECONE_ENVIRONMENT)
7 |
8 | # Create Pinecone index
9 | table_name = consts.PINECONE_TABLE_NAME
10 | dimension = 1536
11 | metric = "cosine"
12 | pod_type = "p1"
13 | if table_name not in pinecone.list_indexes():
14 | pinecone.create_index(
15 | table_name, dimension=dimension, metric=metric, pod_type=pod_type
16 | )
17 |
18 | # Connect to the index
19 | index = pinecone.Index(table_name)
20 | agent.indexes[table_name] = index
21 |
22 |
23 | # Function to query records from the Pinecone index
24 | def search_in_index(agent, index_name, query, top_k=1000):
25 |     results = agent.indexes[index_name].query(query, top_k=top_k, include_metadata=True)
26 |     response = [f"{task.metadata['task']}:\n{task.metadata['result']}\n;" for task in results.matches]
27 |     return response
28 |
29 |
30 | # Get embedding for the text
31 | def get_ada_embedding(text):
32 |     text = text.replace("\n", " ")
33 |     vector = openai.Embedding.create(input=[text], model="text-embedding-ada-002")["data"][0]["embedding"]
34 |     return vector
35 |
36 |
37 | def append_to_index(agent, content, index_name):
38 |     # [(0, (0, 0), {"task": agent.task_list[0]["task_name"], "result": "result"})]
39 |     agent.indexes[index_name].upsert(content)
--------------------------------------------------------------------------------
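`pinecone_init` provisions a cosine-metric index over 1536-dimensional ada embeddings, and `append_to_index` expects `(id, vector, metadata)` tuples. A minimal sketch of what the cosine metric computes, in pure Python on toy 3-dimensional vectors for brevity (the helper and record names here are illustrative, not part of the repo):

```python
import math

# What Pinecone's "cosine" metric computes, shown on toy vectors
# (real ada embeddings are 1536-dimensional).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # → 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # → 0.0 (orthogonal)

# Record shape append_to_index passes to upsert, per its inline comment:
record = ("task-0", [0.1, 0.2, 0.3], {"task": "research recipe", "result": "a result"})
```

Cosine similarity depends only on direction, not magnitude, which is why it is a common choice for comparing text embeddings.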
/src/utils/text_processing.py:
--------------------------------------------------------------------------------
1 | from src.common_utils import openai_call
2 |
3 |
4 | def process_large_text(text, instruction, max_output_length=1000, split_text=None):
5 |     # Use the caller's splitter if given, otherwise fall back to fixed-size slices
6 |     text_chunks = split_text(text, max_output_length) if split_text is not None \
7 |         else [text[i:i + max_output_length] for i in range(0, len(text), max_output_length)]
8 |     processed_chunks = []
9 |     for i, chunk in enumerate(text_chunks):
10 |         print(f"Processing chunk {i + 1}/{len(text_chunks)}")
11 |         prompt = f"Chunk {i + 1}/{len(text_chunks)}\n\n" \
12 |                  f"You are an AI processing a text snippet. Follow the instruction and answer only with the processed output. " \
13 |                  f"If your chunk has an error or an abrupt ending, don't complete or fix it; just follow the instruction.\n\n" \
14 |                  f"If generating a large text, or if the chunk is code and the instruction is to extract info from it, answer only with copied code. " \
15 |                  f"If there's nothing to extract or return for the current chunk and instruction, write just 'nothing_on_chunk'.\n\n" \
16 |                  f"Instruction: {instruction}\n\nText chunk: {chunk}. You answer:"
17 |         processed_chunk = openai_call(prompt, role="assistant")
18 |         if 'nothing_on_chunk' not in processed_chunk:
19 |             processed_chunks.append(processed_chunk)
20 |
21 |     print('\n'.join(processed_chunks))
22 |     return '\n'.join(processed_chunks)
23 |
24 |
25 | def generate_large_text(instruction, max_tokens_length=10000):
26 |     text = ""
27 |     heading = "start"
28 |     # Hardcoded budget for gpt-3.5; this counts characters, though counting
29 |     # tokens would be the correct approach
30 |     while len(text) + 3500 < max_tokens_length:
31 |         append_text = openai_call(
32 |             f"I am generating a large text. {instruction[round(len(text) / 2):]}. Heading: {heading}\n"
33 |             f"Current text (last 100 chars, I must continue seamlessly): '{text[-100:]}'.\n\n"
34 |             f"The continuation of the given text is:")
35 |         print(append_text)
36 |         text += append_text
37 |         heading = openai_call(f"I am generating a large text. My current heading: {heading}\n"
38 |                               f"Current text (last 100 chars, I must continue seamlessly): '{text[-100:]}'.\n\n"
39 |                               f"Given the current text, what am I doing/heading? If the text is finished, I must answer with _end_of_text_. New heading:")
40 |         print(heading)
41 |         if '_end_of_text_' in heading.lower():
42 |             break
43 |
44 |     print(text)
45 |     return text
46 |
--------------------------------------------------------------------------------
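When no `split_text` callable is supplied, `process_large_text` falls back to slicing the input into fixed-size character chunks. A standalone sketch of that fallback (`default_split` is a hypothetical helper name for illustration):

```python
# The fallback chunking process_large_text uses when split_text is None:
# fixed-size character slices, with the final chunk holding any remainder.
def default_split(text, max_len):
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

print(default_split("abcdefghij", 4))  # → ['abcd', 'efgh', 'ij']
```

Note that slicing by characters can cut words or sentences mid-stream; a custom `split_text` that respects paragraph or sentence boundaries generally yields better chunk-level prompts.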