├── .env.template
├── .gitignore
├── Dockerfile
├── LICENSE.md
├── README.md
├── agent.py
├── docker-compose.yml
├── main.py
├── memory_count.yaml
├── prompts.yaml
└── requirements.txt

/.env.template:
--------------------------------------------------------------------------------
OPENAI_API_KEY=replace_me
PINECONE_API_KEY=replace_me
PINECONE_API_ENV=replace_me
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.env
__pycache__
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.11-slim

ENV PIP_NO_CACHE_DIR=true
WORKDIR /tmp
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r requirements.txt

WORKDIR /app
COPY . /app
ENTRYPOINT ["python", "main.py"]
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 seanpixel

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Teenage-AGI

## Objective
Inspired by several Auto-GPT-related projects (predominantly BabyAGI) and the paper ["Generative Agents: Interactive Simulacra of Human Behavior"](https://arxiv.org/abs/2304.03442), this Python project uses OpenAI and Pinecone to give an AI agent memory and the ability to "think" before taking an action (outputting text). Because its memories live in Pinecone and `memory_count.yaml` saves the memory index it is on, the agent does not forget them even when it is shut down.

## Updates
April 12: Added "read" and "think" commands. Prefix a query with "read: " to feed the agent information (any length works, since the text is split into chunks), or with "think: " to insert a memory directly into the agent.
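For example, an illustrative terminal session (the confirmation lines are the ones main.py prints):

```
Talk to the AI!
read: Pinecone is a managed vector database.
Understood! The information is stored in my memory.
think: I enjoy helping people with their questions.
Understood! I stored that thought into my memory.
```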
### Sections
- [How it Works](https://github.com/seanpixel/Teenage-AGI/blob/main/README.md#how-it-works)
- [How to Use](https://github.com/seanpixel/Teenage-AGI/blob/main/README.md#how-to-use)
- [Experiments](https://github.com/seanpixel/Teenage-AGI/blob/main/README.md#experiments)
- [More About Project & Me](https://github.com/seanpixel/Teenage-AGI/blob/main/README.md#more-about-the-project--me)
- [Credits](https://github.com/seanpixel/Teenage-AGI/blob/main/README.md#credits)

## How it Works
Here is what happens every time the user queries the AI (a code sketch follows the list):
1. AI vectorizes the query and stores it in a Pinecone vector database
2. AI looks inside its memory and finds memories and past queries that are relevant to the current query
3. AI thinks about what action to take
4. AI stores the thought from Step 3
5. Based on the thought from Step 3 and relevant memories from Step 2, AI generates an output
6. AI stores the current query and its answer in its Pinecone vector database memory
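A minimal sketch of that loop using this repository's own `Agent` class (assumes your `.env` keys are set; the index name is just a placeholder):

```python
from agent import Agent

agent = Agent("my-agent")  # name of the Pinecone index that backs the agent's memory
agent.createIndex()        # connects to the index, creating it if it does not exist

# action() performs steps 1-6: embed the query, recall related memories,
# generate and store an internal thought, then produce the outward reply.
print(agent.action("What did we talk about last time?"))
```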
## How to Use
1. Clone the repository via `git clone https://github.com/seanpixel/Teenage-AGI.git` and cd into the cloned repository.
2. Install the required packages with `pip install -r requirements.txt`.
3. Create a .env file from the template: `cp .env.template .env`
4. Open `.env` and set your OpenAI and Pinecone API info.
5. Run `python main.py` and talk to the AI in the terminal.
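The same steps as a single shell session:

```
git clone https://github.com/seanpixel/Teenage-AGI.git
cd Teenage-AGI
pip install -r requirements.txt
cp .env.template .env
# fill in your OpenAI and Pinecone keys in .env, then:
python main.py
```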
## Running in a Docker container
You can run the agent isolated in a container using docker-compose:
```
docker-compose run teenage-agi
```

## Experiments
Currently, using GPT-4, I found that it can remember its name and other characteristics. It also carries on the conversation quite well without a context window (although I might add one soon). I will update this section as I keep playing with it.

## More about the Project & Me
After reading the Simulacra paper, I made this project in my college dorm. I realized that most of the "language" I generate happens inside my head, so I thought it might make sense for an AGI to work the same way. I'm a founder currently running a startup called [DSNR](https://www.dsnr.ai/) and also a first-year at USC. Contact me on [twitter](https://twitter.com/sean_pixel) about anything; I'd love to chat.

## Credits
Thank you to [@yoheinakajima](https://twitter.com/yoheinakajima) and the team behind ["Generative Agents: Interactive Simulacra of Human Behavior"](https://arxiv.org/abs/2304.03442) for the idea!
--------------------------------------------------------------------------------
/agent.py:
--------------------------------------------------------------------------------
import openai
import os
import pinecone
import yaml
from dotenv import load_dotenv
import nltk
from langchain.text_splitter import NLTKTextSplitter

# Download the NLTK sentence tokenizer used when reading long text
nltk.download('punkt')

# Initialize the text splitter used to chunk long inputs before embedding
text_splitter = NLTKTextSplitter(chunk_size=2500)

# Load default environment variables (.env)
load_dotenv()

OPENAI_MODEL = os.getenv("OPENAI_MODEL") or "gpt-4"

def generate(prompt):
    completion = openai.ChatCompletion.create(
        model=OPENAI_MODEL,
        messages=[
            {"role": "system", "content": "You are an intelligent agent with thoughts and memories. You have a memory which stores your past thoughts and actions and also how other users have interacted with you."},
            {"role": "system", "content": "Keep your thoughts relatively simple and concise"},
            {"role": "user", "content": prompt},
        ]
    )

    return completion.choices[0].message["content"]

PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
PINECONE_API_ENV = os.getenv("PINECONE_API_ENV")
#PINECONE_API_ENV = "asia-southeast1-gcp"

# Prompt initialization
with open('prompts.yaml', 'r') as f:
    data = yaml.load(f, Loader=yaml.FullLoader)

# Counter initialization
with open('memory_count.yaml', 'r') as f:
    counter = yaml.load(f, Loader=yaml.FullLoader)

# internalThoughtPrompt = data['internal_thought']
# externalThoughtPrompt = data['external_thought']
# internalMemoryPrompt = data['internal_thought_memory']
# externalMemoryPrompt = data['external_thought_memory']

# Thought types, used as Pinecone namespaces
THOUGHTS = "Thoughts"
QUERIES = "Queries"
INFORMATION = "Information"
ACTIONS = "Actions"

# Top matches length (not currently applied to the queries in internalThought)
k_n = 3

# initialize pinecone
pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_API_ENV)

# initialize openAI
openai.api_key = OPENAI_API_KEY  # you can just copy and paste your key here if you want

def get_ada_embedding(text):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
        "data"
    ][0]["embedding"]

def read_txtFile(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()
    return text

class Agent():
    def __init__(self, table_name=None) -> None:
        self.table_name = table_name
        self.memory = None
        self.thought_id_count = int(counter['count'])
        self.last_message = ""

    # Keep remembering!
    # def __del__(self) -> None:
    #     with open('memory_count.yaml', 'w') as f:
    #         yaml.dump({'count': str(self.thought_id_count)}, f)

    def createIndex(self, table_name=None):
        # Create Pinecone index
        if table_name:
            self.table_name = table_name

        if self.table_name is None:
            return

        dimension = 1536
        metric = "cosine"
        pod_type = "p1"
        if self.table_name not in pinecone.list_indexes():
            pinecone.create_index(
                self.table_name, dimension=dimension, metric=metric, pod_type=pod_type
            )

        # Give memory
        self.memory = pinecone.Index(self.table_name)

    # Adds a new memory to the agent; types are: THOUGHTS, ACTIONS, QUERIES, INFORMATION
    def updateMemory(self, new_thought, thought_type):
        if thought_type == INFORMATION:
            new_thought = "This is information fed to you by the user:\n" + new_thought
        elif thought_type == QUERIES:
            new_thought = "The user has said to you before:\n" + new_thought
        elif thought_type == THOUGHTS:
            # Not needed since already in prompts.yaml
            # new_thought = "You have previously thought:\n" + new_thought
            pass
        elif thought_type == ACTIONS:
            # Not needed since already in prompts.yaml as external thought memory
            pass

        vector = get_ada_embedding(new_thought)
        upsert_response = self.memory.upsert(
            vectors=[
                {
                    'id': f"thought-{self.thought_id_count}",
                    'values': vector,
                    'metadata': {"thought_string": new_thought}
                }],
            namespace=thought_type,
        )

        self.thought_id_count += 1
        # Persist the counter after incrementing so a restart does not reuse the last id
        with open('memory_count.yaml', 'w') as f:
            yaml.dump({'count': str(self.thought_id_count)}, f)

    # Agent thinks about the given query based on the top k related memories.
    # The internal thought is then passed to the external thought.
    def internalThought(self, query) -> str:
        query_embedding = get_ada_embedding(query)
        query_results = self.memory.query(query_embedding, top_k=2, include_metadata=True, namespace=QUERIES)
        thought_results = self.memory.query(query_embedding, top_k=2, include_metadata=True, namespace=THOUGHTS)
        results = query_results.matches + thought_results.matches
        sorted_results = sorted(results, key=lambda x: x.score, reverse=True)
        top_matches = "\n\n".join([(str(item.metadata["thought_string"])) for item in sorted_results])
        #print(top_matches)

        internalThoughtPrompt = data['internal_thought']
        internalThoughtPrompt = internalThoughtPrompt.replace("{query}", query).replace("{top_matches}", top_matches).replace("{last_message}", self.last_message)
        print("------------INTERNAL THOUGHT PROMPT------------")
        print(internalThoughtPrompt)
        internal_thought = generate(internalThoughtPrompt)  # OPENAI CALL: top_matches and query text are used here

        # Debugging purposes
        #print(internal_thought)

        internalMemoryPrompt = data['internal_thought_memory']
        internalMemoryPrompt = internalMemoryPrompt.replace("{query}", query).replace("{internal_thought}", internal_thought).replace("{last_message}", self.last_message)
        self.updateMemory(internalMemoryPrompt, THOUGHTS)
        return internal_thought, top_matches

    def action(self, query) -> str:
        internal_thought, top_matches = self.internalThought(query)

        externalThoughtPrompt = data['external_thought']
        externalThoughtPrompt = externalThoughtPrompt.replace("{query}", query).replace("{top_matches}", top_matches).replace("{internal_thought}", internal_thought).replace("{last_message}", self.last_message)
        print("------------EXTERNAL THOUGHT PROMPT------------")
        print(externalThoughtPrompt)
        external_thought = generate(externalThoughtPrompt)  # OPENAI CALL: top_matches and query text are used here

        externalMemoryPrompt = data['external_thought_memory']
        externalMemoryPrompt = externalMemoryPrompt.replace("{query}", query).replace("{external_thought}", external_thought)
        self.updateMemory(externalMemoryPrompt, THOUGHTS)
        request_memory = data["request_memory"]
        self.updateMemory(request_memory.replace("{query}", query), QUERIES)
        self.last_message = query
        return external_thought

    # Make the agent think some information
    def think(self, text) -> None:
        self.updateMemory(text, THOUGHTS)

    # Make the agent read some information
    def read(self, text) -> None:
        texts = text_splitter.split_text(text)
        vectors = []
        for t in texts:
            t = "This is information fed to you by the user:\n" + t
            vector = get_ada_embedding(t)
            vectors.append({
                'id': f"thought-{self.thought_id_count}",
                'values': vector,
                'metadata': {"thought_string": t}
            })
            self.thought_id_count += 1

        upsert_response = self.memory.upsert(
            vectors,
            namespace=INFORMATION,
        )
{"thought_string": t, 219 | } 220 | }) 221 | self.thought_id_count += 1 222 | 223 | upsert_response = self.memory.upsert( 224 | vectors, 225 | namespace=INFORMATION, 226 | ) 227 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | teenage-agi: 5 | build: ./ 6 | volumes: 7 | - "./:/app" 8 | profiles: ["exclude-from-up"] # Use `docker-compose run teenage-agi` to get an attached container 9 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import agent 2 | import os 3 | from agent import Agent 4 | from dotenv import load_dotenv 5 | 6 | # Load default environment variables (.env) 7 | load_dotenv() 8 | 9 | AGENT_NAME = os.getenv("AGENT_NAME", "my-agent") 10 | 11 | agent = Agent(AGENT_NAME) 12 | 13 | # Creates Pinecone Index 14 | agent.createIndex() 15 | 16 | print("Talk to the AI!") 17 | 18 | while True: 19 | userInput = input() 20 | if userInput: 21 | if (userInput.startswith("read:")): 22 | agent.read(" ".join(userInput.split(" ")[1:])) 23 | print("Understood! The information is stored in my memory.") 24 | elif (userInput.startswith("think:")): 25 | agent.think(" ".join(userInput.split(" ")[1:])) 26 | print("Understood! I stored that thought into my memory.") 27 | elif (userInput.startswith("readDoc:")): 28 | agent.readDoc(" ".join(userInput.split(" ")[1:])) 29 | print("Understood! I stored the document into my memory.") 30 | else: 31 | print(agent.action(userInput), "\n") 32 | else: 33 | print("SYSTEM - Give a valid input") 34 | -------------------------------------------------------------------------------- /memory_count.yaml: -------------------------------------------------------------------------------- 1 | count: '0' 2 | -------------------------------------------------------------------------------- /prompts.yaml: -------------------------------------------------------------------------------- 1 | # Prompts to generate internal and external thoughts based on top relevant memories 2 | internal_thought: |- 3 | You have been given the following input: {query}. 4 | You must think about it and plan what action to take. 5 | For some context, here are your memories related to the query. 6 | 7 | MEMORIES sorted in relevance: 8 | {top_matches} 9 | 10 | Think of some actions you would take after hearing "{query}" based on your past thoughts and actions. 11 | This is not shown to the outside world but only to yourself. It is just your internal thought." 12 | 13 | external_thought: |- 14 | MEMORIES sorted in relevance: 15 | {top_matches} 16 | 17 | Based on the query, you have thought about the input and had the following thought: 18 | 19 | {internal_thought}. 20 | 21 | Based on your thought and memories, respond to the query. You don't have to refer to them if they aren't useful. 
  {query}

# Prompts to create memories
request_memory: |-
  The user said to you:
  '{query}'

internal_thought_memory: |-
  Based on the query "{query}", you thought:
  '{internal_thought}'

external_thought_memory: |-
  Based on the query "{query}", you responded with:
  '{external_thought}'
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
langchain==0.0.138
nltk==3.7
openai==0.27.2
pinecone_client==2.2.1
python-dotenv==1.0.0
PyYAML==6.0
--------------------------------------------------------------------------------