├── .gitignore ├── .gitmodules ├── README.md ├── auto_llama ├── auto_llama │ ├── .gitignore │ ├── __init__.py │ ├── __main__.py │ ├── actions.py │ ├── agent.py │ ├── const.py │ ├── data_models.py │ ├── tokens.py │ └── utils.py └── requirements.txt ├── convo_agents ├── ConvoAgents.ipynb ├── convo_agents.py └── requirements.txt ├── external └── README.md └── llama_agi ├── LICENSE ├── Makefile ├── README.md ├── examples ├── auto_runner_example.py └── streamlit_runner_example.py ├── llama_agi ├── __init__.py ├── default_task_prompts.py ├── execution_agent │ ├── SimpleExecutionAgent.py │ ├── ToolExecutionAgent.py │ ├── __init__.py │ └── base.py ├── runners │ ├── AutoAGIRunner.py │ ├── AutoStreamlitAGIRunner.py │ ├── __init__.py │ └── base.py ├── task_manager │ ├── LlamaTaskManager.py │ ├── __init__.py │ └── base.py ├── tools │ ├── NoteTakingTools.py │ ├── WebpageSearchTool.py │ └── __init__.py └── utils.py ├── pyproject.toml └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | .mypy_cache 2 | .ruff_cache 3 | __pycache__ -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "external/INSIGHT"] 2 | path = external/INSIGHT 3 | url = https://github.com/oneil512/INSIGHT.git 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 🦙🧪 Llama Lab 🧬🦙 2 | 3 | Llama Lab is a repo dedicated to building cutting-edge projects using [LlamaIndex](https://github.com/jerryjliu/llama_index). 4 | 5 | LlamaIndex is an interface for LLM data augmentation. It provides easy-to-use and flexible tools to index 6 | various types of data. At its core, it can be used to index a knowledge corpus. But it can also be used 7 | to index tasks, and provide memory-like capabilities for any outer agent abstractions. 8 | 9 | Here's an overview of some of the amazing projects we're exploring: 10 | - llama_agi (a [babyagi](https://github.com/yoheinakajima/babyagi)- and [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT)-inspired project to create, plan, and solve tasks) 11 | - auto_llama (an [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT)-inspired project to search/download/query the Internet to solve user-specified tasks). 12 | 13 | Each folder is a stand-alone project. See below for a description of each project along with usage examples. 14 | 15 | **Contributing**: We're very open to contributions! This can include the following: 16 | - Extending an existing project 17 | - Creating a new Llama Lab project 18 | - Modifying capabilities in the core [LlamaIndex](https://github.com/jerryjliu/llama_index) repo in order to support Llama Lab projects. 19 | 20 | 21 | ## Current Labs 22 | 23 | ### llama_agi (v0.1.0) 24 | 25 | Inspired by [babyagi](https://github.com/yoheinakajima/babyagi) and [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT), llama_agi uses LlamaIndex as a task manager and LangChain as a task executor. 26 | 27 | The current version of this folder will start with an overall objective ("solve world hunger" by default), and create/prioritize the tasks needed to achieve that objective. LlamaIndex is used to create and prioritize tasks, while LangChain is used to guess the "result" of completing each action.
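For a quick feel for this flow, here is a condensed version of the setup from the [llama_agi README](./llama_agi/README.md); the full example there also defines the `llm`, `tools`, and `service_context` objects used here:

```python
from llama_agi.execution_agent import ToolExecutionAgent
from llama_agi.runners import AutoAGIRunner
from llama_agi.task_manager import LlamaTaskManager

# LlamaIndex manages and prioritizes the task list; LangChain executes each task
task_manager = LlamaTaskManager(
    ["Create a list of tasks"], task_service_context=service_context
)
execution_agent = ToolExecutionAgent(llm=llm, tools=tools)

# the runner loops: pick a task, execute it, then create/reprioritize follow-up tasks
runner = AutoAGIRunner(task_manager, execution_agent)
objective = "Solve world hunger"
initial_task = "Create a list of tasks"
sleep_time = 2
runner.run(objective, initial_task, sleep_time)
```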
28 | 29 | Using LangChain and LlamaIndex, llama_agi has access to the following tools: google-search, webpage reading, and note-taking. Note that the google-search tool requires [a Google API key and a CSE ID](https://cse.google.com/cse/). 30 | 31 | This will run in a loop until the task list is empty (or maybe you run out of OpenAI credits 😉). 32 | 33 | For more info, see the README in the [llama_agi folder](./llama_agi/README.md) or the [pypi page](https://pypi.org/project/llama-agi/). 34 | 35 | ### auto_llama 36 | 37 | Inspired by [autogpt](https://github.com/Significant-Gravitas/Auto-GPT), this project implements its own agent system similar to AutoGPT. 38 | Given a user query, the system can search the web and download web pages, then analyze the combined data and compile a final answer to the user's prompt. 39 | 40 | Example usage: 41 | 42 | ```bash 43 | cd auto_llama 44 | pip install -r requirements.txt 45 | python -m auto_llama 46 | Enter what you would like AutoLlama to do: 47 | Summarize the financial news from the past week. 48 | 49 | ``` 50 | 51 | ### Conversational Agents 52 | 53 | This is a fun conversational simulator between different agents. You can choose 54 | to provide some details about the context/setting, and watch as the conversation 55 | between different agents evolves. 56 | 57 | A sample notebook is provided in the `convo_agents` folder. Usage: 58 | 59 | ```bash 60 | cd convo_agents 61 | 62 | jupyter notebook ConvoAgents.ipynb 63 | ``` 64 | 65 | ### External projects 66 | 67 | We also provide references to other project repos using LlamaIndex in novel ways. 68 | 69 | These repos are hosted as submodules in our `external` folder. 70 | 71 | Check it out here: https://github.com/run-llama/llama-lab/tree/main/external 72 | 73 | ## Ecosystem 74 | 75 | Llama Lab is part of the broader Llama ecosystem.
76 | - [LlamaIndex](https://github.com/jerryjliu/llama_index) 77 | - [LlamaHub](https://llamahub.ai/) ([repo](https://github.com/emptycrown/llama-hub)) 78 | 79 | Community: 80 | - [Twitter](https://twitter.com/llama_index) 81 | - [Discord](https://discord.gg/dGcwcsnxhU) 82 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | # Byte-compiled / optimized / DLL files 3 | __pycache__/ 4 | *.py[cod] 5 | *$py.class 6 | 7 | # Distribution / packaging 8 | .Python 9 | bin/ 10 | build/ 11 | develop-eggs/ 12 | dist/ 13 | downloads/ 14 | eggs/ 15 | .eggs/ 16 | etc/ 17 | include/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | share/ 23 | var/ 24 | wheels/ 25 | pip-wheel-metadata/ 26 | share/python-wheels/ 27 | *.egg-info/ 28 | .installed.cfg 29 | *.egg 30 | MANIFEST -------------------------------------------------------------------------------- /auto_llama/auto_llama/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/run-llama/llama-lab/3364c9eae1dadefccad8fa904296f19e0bf045da/auto_llama/auto_llama/__init__.py -------------------------------------------------------------------------------- /auto_llama/auto_llama/__main__.py: -------------------------------------------------------------------------------- 1 | import json 2 | from auto_llama.agent import Agent 3 | import auto_llama.const as const 4 | from auto_llama.utils import print_pretty 5 | from auto_llama.actions import run_command 6 | from langchain.chat_models import ChatOpenAI 7 | 8 | import logging 9 | 10 | 11 | def main(): 12 | logger = logging.getLogger() 13 | logger.level = logging.WARN 14 | # # Enter your OpenAI API key here: 15 | # import os 16 | # os.environ["OPENAI_API_KEY"] = 'YOUR OPENAI API KEY' 17 | 18 | openaichat = ChatOpenAI( 19 | model_name="gpt-4", 20 | temperature=0.0, 21 | max_tokens=400, 22 | ) 23 | 24 | user_query = input("Enter what you would like AutoLlama to do:\n") 25 | if user_query == "": 26 | user_query = "Summarize the financial news from the past week." 27 | print("I will summarize the financial news from the past week.\n") 28 | agent = Agent(const.DEFAULT_AGENT_PREAMBLE, user_query, openaichat) 29 | while True: 30 | print("Thinking...") 31 | response = agent.get_response() 32 | print_pretty(response) 33 | action, args = response.command.action, response.command.args 34 | user_confirm = input( 35 | 'Should I run the command "' 36 | + action 37 | + '" with args ' 38 | + json.dumps(args) 39 | + "? 
(y/[N])\n" 40 | ) 41 | if user_confirm == "y": 42 | action_results = run_command(user_query, action, args, openaichat) 43 | # print(action_results) 44 | agent.memory.append(action_results) 45 | if action_results == "exit" or action_results == "done": 46 | break 47 | else: 48 | break 49 | 50 | 51 | if __name__ == "__main__": 52 | main() 53 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/actions.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | 4 | from duckduckgo_search import ddg 5 | from llama_index.readers.web import BeautifulSoupWebReader 6 | from llama_index import GPTListIndex 7 | from auto_llama.data_models import Response 8 | from typing import Dict 9 | from auto_llama.const import SEARCH_RESULTS_TEMPLATE, format_web_download 10 | from llama_index import Document 11 | from llama_index.indices.composability import ComposableGraph 12 | from llama_index import GPTListIndex, LLMPredictor, ServiceContext 13 | from langchain.llms.base import BaseLLM 14 | from llama_index.logger import LlamaLogger 15 | 16 | 17 | def run_command(user_query: str, command: str, args: Dict, llm: BaseLLM) -> str: 18 | llama_logger = LlamaLogger() 19 | service_context = ServiceContext.from_defaults( 20 | llm_predictor=LLMPredictor(llm), llama_logger=llama_logger 21 | ) 22 | if command == "search": 23 | search_terms = args["search_terms"] 24 | print("Searching...\n") 25 | results = search_web(search_terms) 26 | response = analyze_search_results( 27 | user_query, search_terms, results, service_context 28 | ) 29 | print(response + "\n") 30 | return response 31 | elif command == "download": 32 | url = args["url"] 33 | doc_name = args["doc_name"] 34 | print("Downloading web page...\n") 35 | if isinstance(url, str) and "[" in url and "]" in url: # list parsing case 36 | url = url.strip("[").strip("]").split(", ") 37 | doc_name = doc_name.strip("[").strip("]").split(", ") 38 | if isinstance(url, list): 39 | if len(url) != len(doc_name): 40 | raise ValueError("url and doc_name must have the same length") 41 | results = [] 42 | if os.path.exists("data/web_summary_cache.json"): 43 | with open("data/web_summary_cache.json", "r") as f: 44 | web_summary_cache = json.load(f) 45 | else: 46 | web_summary_cache = {} 47 | for i in range(len(url)): 48 | web_summary = download_web(url[i], doc_name[i], service_context) 49 | results.append(format_web_download(url[i], doc_name[i], web_summary)) 50 | web_summary_cache[doc_name[i]] = web_summary 51 | print("Writing web summary cache to file") 52 | 53 | with open("data/web_summary_cache.json", "w") as f: 54 | json.dump(web_summary_cache, f) 55 | response = "\n".join(results) 56 | print(response) 57 | return response 58 | else: 59 | if os.path.exists("data/web_summary_cache.json"): 60 | with open("data/web_summary_cache.json", "r") as f: 61 | web_summary_cache = json.load(f) 62 | else: 63 | web_summary_cache = {} 64 | web_summary = download_web(url, doc_name, service_context) 65 | web_summary_cache[doc_name] = web_summary 66 | print("Writing web summary cache to file") 67 | 68 | with open("data/web_summary_cache.json", "w") as f: 69 | json.dump(web_summary_cache, f) 70 | response = format_web_download(url, doc_name, web_summary) 71 | print(response) 72 | return response 73 | elif command == "query": 74 | print("Querying...\n") 75 | response = query_docs(args["docs"], args["query"], service_context) 76 | print(response) 77 | return response 78 | elif command == 
"write": 79 | print("Writing to file...\n") 80 | return write_to_file(args["file_name"], args["data"]) 81 | elif command == "exit": 82 | print("Exiting...\n") 83 | return "exit" 84 | else: 85 | raise ValueError(f"Unknown command: {command}") 86 | 87 | 88 | def search_web(search_terms, max_results=5): 89 | """Search the Web and obtain a list of web results.""" 90 | results = ddg(search_terms, max_results=max_results) 91 | return results 92 | 93 | 94 | def analyze_search_results(user_query, search_terms, results, service_context): 95 | """Analyze the results of the search using llm.""" 96 | doc = Document(json.dumps(results)) 97 | index = GPTListIndex.from_documents([doc], service_context=service_context) 98 | response = index.query( 99 | SEARCH_RESULTS_TEMPLATE.format(search_terms=search_terms, user_query=user_query) 100 | ) 101 | return response.response 102 | 103 | 104 | def download_web(url: str, doc_name: str, service_context: ServiceContext): 105 | """Download the html of the url and save a reference under doc_name. 106 | Return the summary of the web page. 107 | """ 108 | reader = BeautifulSoupWebReader() 109 | docs = reader.load_data([url]) 110 | index = GPTListIndex.from_documents(docs, service_context=service_context) 111 | if not os.path.exists("data"): 112 | os.mkdir("data") 113 | index.save_to_disk("data/" + doc_name + ".json") 114 | summary = index.query( 115 | "Summarize the contents of this web page.", response_mode="tree_summarize", use_async=True 116 | ) 117 | return summary.response 118 | 119 | 120 | def query_docs(docs, query, service_context): 121 | query_configs = [ 122 | { 123 | "index_struct_type": "list", 124 | "query_mode": "default", 125 | "query_kwargs": {"response_mode": "tree_summarize", "use_async": True}, 126 | } 127 | ] 128 | print("Opening web summary cache") 129 | with open("data/web_summary_cache.json", "r") as f: 130 | doc_summary_cache = json.load(f) 131 | if isinstance(docs, list): 132 | indices = [] 133 | for doc_name in docs: 134 | index = GPTListIndex.load_from_disk( 135 | "data/" + doc_name + ".json", service_context=service_context 136 | ) 137 | indices.append((index, doc_summary_cache[doc_name])) 138 | graph = ComposableGraph.from_indices( 139 | GPTListIndex, 140 | [index[0] for index in indices], 141 | index_summaries=[index[1] for index in indices], 142 | service_context=service_context, 143 | ) 144 | response = graph.query( 145 | query, query_configs=query_configs, service_context=service_context 146 | ) 147 | return response.response 148 | else: 149 | index = GPTListIndex.load_from_disk( 150 | "data/" + docs + ".json", service_context=service_context 151 | ) 152 | response = index.query(query, service_context=service_context) 153 | return response.response 154 | 155 | 156 | def write_to_file(file_name, data): 157 | print("Writing to file" + file_name) 158 | with open(file_name, "w") as f: 159 | f.write(data) 160 | return "done" 161 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/agent.py: -------------------------------------------------------------------------------- 1 | from auto_llama.utils import get_date 2 | from langchain.output_parsers import PydanticOutputParser 3 | from auto_llama.data_models import Response 4 | from langchain.prompts import ( 5 | ChatPromptTemplate, 6 | SystemMessagePromptTemplate, 7 | HumanMessagePromptTemplate, 8 | ) 9 | from langchain.schema import AIMessage 10 | from typing import List 11 | from auto_llama.tokens import count_tokens 12 | 13 | 14 | class 
Agent: 15 | """A class representing an agent. 16 | 17 | Attributes: 18 | desc(str): 19 | A description of the agent used in the preamble. 20 | task(str): 21 | The task the agent is supposed to perform. 22 | memory(list): 23 | A list of the agent's memories. 24 | llm(BaseLLM): 25 | The LLM used by the agent. 26 | """ 27 | 28 | def __init__( 29 | self, 30 | desc, 31 | task, 32 | llm, 33 | memory=None, 34 | ): 35 | """Initialize the agent.""" 36 | self.desc = desc 37 | self.task = task 38 | self.memory = memory if memory is not None else []  # None default avoids sharing one mutable list across instances 39 | self.llm = llm 40 | 41 | self.memory.append("Here is a list of your previous actions:") 42 | 43 | def get_response(self) -> Response: 44 | """Get the response given the agent's current state.""" 45 | parser: PydanticOutputParser = PydanticOutputParser(pydantic_object=Response) 46 | format_instructions = parser.get_format_instructions() 47 | llm_input = self.create_chat_messages( 48 | self.desc, self.task, self.memory, format_instructions 49 | ).to_messages() 50 | # print(llm_input) 51 | output: AIMessage = self.llm(llm_input) 52 | # print(output.content) 53 | self.memory.append("Old thought: " + output.content) 54 | response_obj = parser.parse(output.content) 55 | # print(response_obj) 56 | return response_obj 57 | 58 | def create_chat_messages( 59 | self, desc: str, task: str, memory: List[str], format_instructions: str 60 | ): 61 | """Create the messages for the agent.""" 62 | messages = [] 63 | system_template = "{desc}\n{memory}\n{date}\n{format_instructions}" 64 | system_message_prompt = SystemMessagePromptTemplate.from_template( 65 | system_template 66 | ) 67 | messages.append(system_message_prompt) 68 | 69 | human_template = "{text}" 70 | human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) 71 | messages.append(human_message_prompt) 72 | 73 | prompt_template = ChatPromptTemplate.from_messages(messages) 74 | 75 | date_str = "The current date is " + get_date() 76 | recent_memories = self.create_memories(memory) 77 | # print(recent_memories) 78 | prompt = prompt_template.format_prompt( 79 | desc=desc, 80 | memory=recent_memories, 81 | date=date_str, 82 | format_instructions=format_instructions, 83 | text=task, 84 | ) 85 | 86 | return prompt 87 | 88 | def create_memories(self, memory: List[str], max_tokens: int = 2000): 89 | # print(memory) 90 | token_counter = 0 91 | memories: List[str] = [] 92 | memories.insert(0, memory[0]) # always include memories header. 93 | token_counter += count_tokens(memory[0]) 94 | memory_index = len(memory) - 1 95 | while memory_index > 0 and token_counter < max_tokens: 96 | memories.insert(1, memory[memory_index]) 97 | token_counter += count_tokens(memory[memory_index]) 98 | memory_index -= 1 99 | return "\n".join(memories) 100 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/const.py: -------------------------------------------------------------------------------- 1 | DEFAULT_AGENT_PREAMBLE = """ 2 | I am an AI assistant with chain of thought reasoning that only responds in JSON. 3 | I should never respond with a natural language sentence. 4 | I may take the following actions with my response: 5 | 1. Search the Web and obtain a list of web results. 6 | 2. Download the contents of a web page and read its summary. 7 | 3. Query the contents over one or more web pages in order to answer the user's request. 8 | 4. Write results to a file.
9 | 10 | All my responses should be in the following format and contain all the fields: 11 | { 12 | "remember": This is what I just accomplished. I probably should not do it again, 13 | "thoughts": This is what I'm thinking right now, 14 | "reasoning": This is why I'm thinking it will help lead to the user's desired result, 15 | "plan": This is a description of my current plan of actions, 16 | "command": { 17 | "action": My current action, 18 | "args": [command_arg1, command_arg2, ...] 19 | } 20 | } 21 | command_action should exclusively consist of these commands: 22 | {"action": "search", "args": {"search_terms": search_terms: str}} 23 | {"action": "download", "args": {"url": url: list[str], "doc_name": doc_name: list[str]}} 24 | {"action": "query", "args": {"docs": [doc_name1: str, doc_name2: str, ...], "query": query: str}} 25 | {"action": "write", "args": {"file_name": file_name: str, "data": data: str}} 26 | {"action": "exit"} 27 | 28 | If you already got good search results, you should not need to search again. 29 | """ 30 | 31 | SEARCH_RESULTS_TEMPLATE = """I searched for {search_terms} and found the following results. 32 | If any of these results help to answer the user's query {user_query} 33 | I should respond with which web urls I should download and state I don't need 34 | more searching. Otherwise I should suggest different search terms.""" 35 | 36 | WEB_DOWNLOAD = ( 37 | """Downloaded the contents of {url} to {doc_name}. To summarize: {summary}""" 38 | ) 39 | 40 | 41 | def format_web_download(url, doc_name, summary): 42 | return WEB_DOWNLOAD.format(url=url, doc_name=doc_name, summary=summary) 43 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/data_models.py: -------------------------------------------------------------------------------- 1 | from pydantic import BaseModel, Field, root_validator 2 | from typing import Dict, Union, List 3 | import json 4 | 5 | 6 | class Command(BaseModel): 7 | action: str = Field(description="This is the current action") 8 | args: Dict = Field(description="This is the command's arguments") 9 | 10 | @root_validator 11 | def validate_all(cls, values): 12 | # print(f"{values}") 13 | if values["action"] == "search" and "search_terms" not in values["args"]: 14 | raise ValueError("malformed search args") 15 | if values["action"] == "download" and ( 16 | "url" not in values["args"] or "doc_name" not in values["args"] 17 | ): 18 | raise ValueError("malformed download args") 19 | if values["action"] == "query" and ( 20 | "docs" not in values["args"] or "query" not in values["args"] 21 | ): 22 | raise ValueError("malformed query args") 23 | if values["action"] == "write" and ( 24 | "file_name" not in values["args"] or "data" not in values["args"] 25 | ): 26 | raise ValueError("malformed write args") 27 | return values 28 | 29 | def toJSON(self): 30 | return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4) 31 | 32 | 33 | class Response(BaseModel): 34 | remember: str = Field(description="This is what the AI just accomplished. 
Probably should not do it again") 35 | thoughts: str = Field(description="This is what the AI is currently thinking.") 36 | reasoning: str = Field( 37 | description="This is why the AI thinks it will help lead to the user's desired result" 38 | ) 39 | plan: Union[str, object] = Field( 40 | description="This is the AI's current plan of action" 41 | ) 42 | command: Command = Field(description="This is the AI's current command") 43 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/tokens.py: -------------------------------------------------------------------------------- 1 | import tiktoken 2 | 3 | 4 | def count_tokens(input: str): 5 | encoder = tiktoken.get_encoding("cl100k_base") 6 | return len(encoder.encode(input)) 7 | -------------------------------------------------------------------------------- /auto_llama/auto_llama/utils.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import json 3 | from auto_llama.data_models import Response 4 | 5 | def get_date(): 6 | return datetime.datetime.now().strftime("%Y-%m-%d") 7 | 8 | 9 | def print_pretty(response: Response): 10 | print("Thoughts: " + response.thoughts + "\n") 11 | print("Remember: " + response.remember + "\n") 12 | print("Reasoning: " + response.reasoning + "\n") 13 | print("Plan: " + json.dumps(response.plan) + "\n") 14 | print("Command: " + response.command.toJSON() + "\n") 15 | -------------------------------------------------------------------------------- /auto_llama/requirements.txt: -------------------------------------------------------------------------------- 1 | openai 2 | llama_index 3 | duckduckgo-search 4 | langchain 5 | pydantic -------------------------------------------------------------------------------- /convo_agents/ConvoAgents.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 6, 6 | "id": "b6d25ba0-18f8-41e2-ba3a-62998051e87f", 7 | "metadata": { 8 | "tags": [] 9 | }, 10 | "outputs": [], 11 | "source": [ 12 | "import logging\n", 13 | "import sys\n", 14 | "\n", 15 | "logging.basicConfig(stream=sys.stdout, level=logging.CRITICAL)\n", 16 | "logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n", 17 | "logger = logging.getLogger()\n", 18 | "logger.disabled = True\n", 19 | "\n", 20 | "from convo_agents import ConvoAgent, format_text\n", 21 | "from llama_index import LLMPredictor, ServiceContext, GPTListIndex\n", 22 | "from langchain.chat_models import ChatOpenAI\n", 23 | "from typing import Optional" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 7, 29 | "id": "00a03b81-6b7d-4083-b1ba-6adcbccbb0c7", 30 | "metadata": { 31 | "tags": [] 32 | }, 33 | "outputs": [], 34 | "source": [ 35 | "# llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo\"))\n", 36 | "llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name=\"gpt-4\"))\n", 37 | "service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)" 38 | ] 39 | }, 40 | { 41 | "cell_type": "markdown", 42 | "id": "72dc1963-0f71-41dc-86c2-009c7361f7dc", 43 | "metadata": {}, 44 | "source": [ 45 | "## Define Conversation Agent Loop\n", 46 | "\n", 47 | "Define the conversation agent loop between two agents, Alice and Bob."
48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": 8, 53 | "id": "5bacbb7c-f8ca-4d89-ac60-0f78369cec3c", 54 | "metadata": { 55 | "tags": [] 56 | }, 57 | "outputs": [], 58 | "source": [ 59 | "def run_conversation_loop(\n", 60 | " alice: ConvoAgent, \n", 61 | " bob: ConvoAgent, \n", 62 | " alice_starter: Optional[str] = None, \n", 63 | " bob_starter: Optional[str] = None\n", 64 | ") -> None:\n", 65 | " \"\"\"Run conversation loop.\"\"\"\n", 66 | " alice_starter = alice_starter or \"Hi, my name is Alice!\"\n", 67 | " bob_starter = bob_starter or \"Hi, my name is Bob!\" \n", 68 | "\n", 69 | " alice.add_message(alice_starter, \"Alice\")\n", 70 | " bob.add_message(alice_starter, \"Alice\")\n", 71 | "\n", 72 | " alice.add_message(bob_starter, \"Bob\")\n", 73 | " bob.add_message(bob_starter, \"Bob\")\n", 74 | " \n", 75 | " # run conversation loop\n", 76 | " num_iterations = 100\n", 77 | " current_user = \"Alice\"\n", 78 | " for _ in range(num_iterations):\n", 79 | "\n", 80 | " agent = alice if current_user == \"Alice\" else bob\n", 81 | " new_message = agent.generate_message()\n", 82 | "\n", 83 | " message_to_print = format_text(new_message, current_user)\n", 84 | " print(message_to_print)\n", 85 | "\n", 86 | " alice.add_message(new_message, current_user)\n", 87 | " bob.add_message(new_message, current_user)\n", 88 | "\n", 89 | " current_user = \"Alice\" if current_user == \"Bob\" else \"Bob\"\n" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "id": "16ab0602-a656-4a78-89dd-a8ef89eb8daf", 95 | "metadata": {}, 96 | "source": [ 97 | "## Casual Date Conversation" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": 9, 103 | "id": "4c6794c5-3ca5-4bda-9e75-237ce356991c", 104 | "metadata": { 105 | "tags": [] 106 | }, 107 | "outputs": [], 108 | "source": [ 109 | "alice_user_prefix_tmpl = (\n", 110 | " \"Your name is {name}. \"\n", 111 | " \"You are on a first date with another person named Bob. \"\n", 112 | " \"We provide conversation context between you and Bob below. \"\n", 113 | ")\n", 114 | "bob_user_prefix_tmpl = (\n", 115 | " \"Your name is {name}. \"\n", 116 | " \"You are on a first date with another person named Alice. \"\n", 117 | " \"We provide conversation context between you and Alice below. \"\n", 118 | ")\n", 119 | "alice = ConvoAgent.from_defaults(\n", 120 | " name=\"Alice\", \n", 121 | " service_context=service_context, \n", 122 | " user_prefix_tmpl=alice_user_prefix_tmpl,\n", 123 | " lt_memory=GPTListIndex([])\n", 124 | " \n", 125 | ")\n", 126 | "bob = ConvoAgent.from_defaults(\n", 127 | " name=\"Bob\", \n", 128 | " service_context=service_context,\n", 129 | " user_prefix_tmpl=bob_user_prefix_tmpl,\n", 130 | " lt_memory=GPTListIndex([])\n", 131 | ")" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": null, 137 | "id": "81791984-738c-4332-8e00-479d10434399", 138 | "metadata": { 139 | "tags": [] 140 | }, 141 | "outputs": [], 142 | "source": [ 143 | "run_conversation_loop(alice, bob)" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "id": "2064a38b-82c0-4dab-8886-eeb2fe357ad3", 149 | "metadata": {}, 150 | "source": [ 151 | "## Doctor Conversation" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": 5, 157 | "id": "86169549-0b98-4d09-adc5-e6a8ffb19c31", 158 | "metadata": { 159 | "tags": [] 160 | }, 161 | "outputs": [], 162 | "source": [ 163 | "alice_user_prefix_tmpl = (\n", 164 | " \"Your name is {name}. 
\"\n", 165 | " \"You are a patient who is seeing the doctor, and are getting the results of a blood pressure checkup. \"\n", 166 | " \"We provide conversation context between you and the doctor below. \"\n", 167 | "\n", 168 | ")\n", 169 | "bob_user_prefix_tmpl = (\n", 170 | " \"Your name is {name}. \"\n", 171 | " \"You are a doctor; you are describing the results of a blood pressure test to the patient. \"\n", 172 | " \"We provide conversation context between you and the patient below. \"\n", 173 | ")\n", 174 | "alice = ConvoAgent.from_defaults(\n", 175 | " name=\"Alice\", \n", 176 | " service_context=service_context, \n", 177 | " user_prefix_tmpl=alice_user_prefix_tmpl,\n", 178 | " lt_memory=GPTListIndex([])\n", 179 | ")\n", 180 | "bob = ConvoAgent.from_defaults(\n", 181 | " name=\"Bob\", \n", 182 | " service_context=service_context,\n", 183 | " user_prefix_tmpl=bob_user_prefix_tmpl,\n", 184 | " lt_memory=GPTListIndex([])\n", 185 | ")" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": null, 191 | "id": "a2a3d69c-4470-43d0-95da-45f6670e3cef", 192 | "metadata": { 193 | "tags": [] 194 | }, 195 | "outputs": [], 196 | "source": [ 197 | "run_conversation_loop(\n", 198 | " alice, \n", 199 | " bob,\n", 200 | " \"So, you're telling me that my blood pressure is too high?\",\n", 201 | " \"Yes\"\n", 202 | ")" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "id": "6dea734b-3324-4143-9026-6a170833ddcb", 208 | "metadata": {}, 209 | "source": [ 210 | "## Programming Interview Question" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 10, 216 | "id": "0debff8a-812a-4619-970c-d05aa56ffd9b", 217 | "metadata": { 218 | "tags": [] 219 | }, 220 | "outputs": [], 221 | "source": [ 222 | "alice_user_prefix_tmpl = (\n", 223 | " \"Your name is {name}. \"\n", 224 | " \"You are interviewing for a software engineering job at Google, and you are speaking to the interviewer. \"\n", 225 | " \"We provide conversation context between you and the interviewer below. \"\n", 226 | " \"You want to ask fewer questions and get started on implementation sooner. \"\n", 227 | "\n", 228 | ")\n", 229 | "bob_user_prefix_tmpl = (\n", 230 | " \"Your name is {name}. \"\n", 231 | " \"You are an engineering manager at Google, and plan to ask technical interview questions to a candidate. \"\n", 232 | " \"You want to ask the candidate to write code to perform binary search over a sorted list of numbers. \"\n", 233 | " \"If the candidate is implementing an incorrect solution, you want to critique their approach. \"\n", 234 | " \"We provide conversation context between you and the interviewee below. \"\n", 235 | ")\n", 236 | "alice = ConvoAgent.from_defaults(\n", 237 | " name=\"Alice\", \n", 238 | " service_context=service_context, \n", 239 | " user_prefix_tmpl=alice_user_prefix_tmpl,\n", 240 | " lt_memory=GPTListIndex([])\n", 241 | ")\n", 242 | "bob = ConvoAgent.from_defaults(\n", 243 | " name=\"Bob\", \n", 244 | " service_context=service_context,\n", 245 | " user_prefix_tmpl=bob_user_prefix_tmpl,\n", 246 | " lt_memory=GPTListIndex([])\n", 247 | ")" 248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": 11, 253 | "id": "9bf1133a-9934-413e-86dd-1e959080e33d", 254 | "metadata": {}, 255 | "outputs": [ 256 | { 257 | "name": "stdout", 258 | "output_type": "stream", 259 | "text": [ 260 | "Alice: Could you tell me more about the programming question? 
I'm eager to get started on implementation.\n", 261 | "Bob: The question is to write a program to perform binary search over a sorted list of numbers. Can you explain your approach to solving this problem?\n", 262 | "Alice: Sure, my approach would be to use a loop to iterate through the list of numbers and compare each number to the target number. If the number is equal to the target number, the loop can return the index of the number. If the number is greater than the target number, the loop can move to the next number in the list. If the number is less than the target number, the loop can break and return the index of the number.\n", 263 | "Bob: That's a good approach, but it won't work for binary search. Binary search requires that the list of numbers is sorted, so you need to use a different approach. Can you explain a different approach you could take to solve this problem?\n", 264 | "Alice: Yes, I understand that binary search requires the list of numbers to be sorted. A different approach I could take is to use a divide and conquer algorithm to find the target number. The algorithm would divide the list of numbers into two halves, compare the target number to the middle number of the list, and then recursively search the half that contains the target number. This approach would be more efficient than the loop approach, as it would reduce the number of comparisons needed to find the target number.\n", 265 | "Bob: That's a great approach, Alice. Can you explain how you would implement the divide and conquer algorithm in code?\n", 266 | "Alice: Sure, I can explain how I would implement the divide and conquer algorithm in code. I would start by creating a function that takes in the list of numbers and the target number as parameters. Then, I would use a while loop to iterate through the list of numbers and compare each number to the target number. If the number is equal to the target number, the loop can return the index of the number. If the number is greater than the target number, the loop can move to the next number in the list. If the number is less than the target number, the loop can break and return the index of the number. Then, I would create a recursive function that takes in the list of numbers and the target number as parameters. This function would divide the list of numbers into two halves, compare the target number to the middle number of the list, and then recursively search the half that contains the target number. Finally, I would call the recursive function from the while loop to perform the binary search.\n", 267 | "Bob: That's a great explanation, Alice. Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 268 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [1, 3, 5, 7, 9] and the target number is 5. The while loop would iterate through the list of numbers and compare each number to the target number. When it reaches the number 5, it would return the index of the number, which is 2. Then, the recursive function would divide the list of numbers into two halves, [1, 3] and [5, 7, 9], and compare the target number to the middle number of the list, which is 5. Since the target number is equal to the middle number, the recursive function would return the index of the number, which is 2. 
Finally, the recursive function would be called from the while loop to perform the binary search, and the result would be the index of the number 5, which is 2.\n", 269 | "Bob: That's a great example, Alice. Can you explain how you would optimize the code to make it more efficient?\n", 270 | "Alice: To optimize the code, I would use a binary search tree instead of a loop. A binary search tree is a data structure that stores data in a hierarchical structure, allowing for faster searches. By using a binary search tree, the time complexity of the algorithm would be reduced from O(n) to O(log n). Additionally, I would use a recursive approach to traverse the tree, which would further reduce the time complexity of the algorithm.\n", 271 | "Bob: That's a great explanation, Alice. Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 272 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [1, 3, 5, 7, 9] and the target number is 5. The binary search tree would be constructed by inserting each number in the list into the tree. Then, the tree would be traversed using a recursive approach to find the target number. When the target number is found, the index of the number would be returned. In this case, the index of the number 5 would be 2.\n", 273 | "Bob: Great work, Alice. Can you explain how you would test the code to make sure it works correctly?\n", 274 | "Alice: Sure, I can explain how I would test the code. I would start by creating a test suite that contains a set of test cases. Each test case would contain a list of numbers and a target number. I would then run the code on each test case and compare the output of the code to the expected output. If the output of the code matches the expected output, then the code is working correctly. Additionally, I would use a debugger to step through the code and make sure that each line of code is executing correctly.\n", 275 | "Bob: That's great, Alice. Can you explain how you would optimize the code to make it more efficient?\n", 276 | "Alice: Sure, I can explain how I would optimize the code to make it more efficient. I would start by using a binary search tree instead of a loop. A binary search tree is a data structure that stores data in a hierarchical structure, allowing for faster searches. By using a binary search tree, the time complexity of the algorithm would be reduced from O(n) to O(log n). Additionally, I would use a recursive approach to traverse the tree, which would further reduce the time complexity of the algorithm.\n", 277 | "Bob: That's a great approach, Alice. Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 278 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [1, 3, 5, 7, 9] and the target number is 5. The binary search tree would be constructed by inserting each number in the list into the tree. Then, the tree would be traversed using a recursive approach to find the target number. When the target number is found, the index of the number would be returned. In this case, the index of the number 5 would be 2.\n", 279 | "Bob: That's great, Alice. Can you explain how you would test the code to make sure it works correctly?\n", 280 | "Alice: Sure, I can explain how I would test the code. I would start by creating a test suite that contains a set of test cases. Each test case would contain a list of numbers and a target number. 
I would then run the code on each test case and compare the output of the code to the expected output. If the output of the code matches the expected output, then the code is working correctly. Additionally, I would use a debugger to step through the code and make sure that each line of code is executing correctly.\n", 281 | "Bob: That's great, Alice. Can you explain how you would optimize the code to make it more efficient?\n", 282 | "Alice: Sure, I can explain how I would optimize the code to make it more efficient. I would start by using a binary search tree instead of a loop. A binary search tree is a data structure that stores data in a hierarchical structure, allowing for faster searches. By using a binary search tree, the time complexity of the algorithm would be reduced from O(n) to O(log n). Additionally, I would use a recursive approach to traverse the tree, which would further reduce the time complexity of the algorithm. I would also look for opportunities to reduce the number of comparisons needed to find the target number, such as using a pre-sorted list or using a hash table to store the data. Finally, I would look for ways to reduce the amount of memory used by the algorithm, such as using a linked list instead of an array.\n", 283 | "Bob: That's great, Alice. Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 284 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [2, 4, 6, 8, 10] and the target number is 8. The binary search tree would be constructed by inserting each number in the list into the tree. Then, the tree would be traversed using a recursive approach to find the target number. When the target number is found, the index of the number would be returned. In this case, the index of the number 8 would be 3.\n", 285 | "Bob: That's great, Alice. Can you explain how you would debug the code if it is not working correctly?\n", 286 | "Alice: Sure, I can explain how I would debug the code if it is not working correctly. I would start by using a debugger to step through the code line by line and look for any errors or unexpected behavior. I would also use the test suite to identify any test cases that are not producing the expected output. Additionally, I would use print statements to log the values of variables at different points in the code to help identify any issues. Finally, I would look for any logical errors in the code that could be causing the unexpected behavior.\n", 287 | "Bob: That's great, Alice. Can you explain how you would optimize the code to make it more efficient?\n", 288 | "Alice: Sure, I can explain how I would optimize the code to make it more efficient. I would start by using a binary search tree instead of a loop. A binary search tree is a data structure that stores data in a hierarchical structure, allowing for faster searches. By using a binary search tree, the time complexity of the algorithm would be reduced from O(n) to O(log n). Additionally, I would use a recursive approach to traverse the tree, which would further reduce the time complexity of the algorithm. I would also look for opportunities to reduce the number of comparisons needed to find the target number, such as using a pre-sorted list or using a hash table to store the data. Finally, I would look for ways to reduce the amount of memory used by the algorithm, such as using a linked list instead of an array.\n", 289 | "Bob: That's great, Alice. 
Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 290 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [3, 5, 7, 9, 11] and the target number is 9. The binary search tree would be constructed by inserting each number in the list into the tree. Then, the tree would be traversed using a recursive approach to find the target number. When the target number is found, the index of the number would be returned. In this case, the index of the number 9 would be 3.\n", 291 | "Bob: That's great, Alice. Can you explain how you would optimize the code further to make it even more efficient?\n", 292 | "Alice: Sure, I can explain how I would optimize the code further to make it even more efficient. I would look for ways to reduce the number of comparisons needed to find the target number, such as using a pre-sorted list or using a hash table to store the data. Additionally, I would look for opportunities to optimize the code by using more efficient algorithms, such as using a divide and conquer approach instead of a linear search. I would also look for ways to optimize the code by using more efficient data structures, such as using a binary search tree instead of a linked list. Finally, I would look for ways to optimize the code by using parallel processing, such as using multiple threads to process the data in parallel.\n", 293 | "Bob: That's great, Alice. Can you provide an example of how this code would work with a specific list of numbers and target number?\n", 294 | "Alice: Sure, I can provide an example. Let's say the list of numbers is [1, 3, 5, 7, 9] and the target number is 5. The binary search tree would be constructed by inserting each number in the list into the tree. Then, the tree would be traversed using a recursive approach to find the target number. When the target number is found, the index of the number would be returned. In this case, the index of the number 5 would be 2.\n" 295 | ] 296 | }, 297 | { 298 | "name": "stderr", 299 | "output_type": "stream", 300 | "text": [ 301 | "\n", 302 | "KeyboardInterrupt\n", 303 | "\n" 304 | ] 305 | } 306 | ], 307 | "source": [ 308 | "run_conversation_loop(\n", 309 | " alice,\n", 310 | " bob,\n", 311 | " \"I'm ready to begin.\",\n", 312 | " \"Great, let's start with a simple programming question. \"\n", 313 | ")" 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": null, 319 | "id": "2d19817f-01c7-4b22-a467-f8144a80ca5d", 320 | "metadata": {}, 321 | "outputs": [], 322 | "source": [] 323 | } 324 | ], 325 | "metadata": { 326 | "kernelspec": { 327 | "display_name": "convo_agents", 328 | "language": "python", 329 | "name": "convo_agents" 330 | }, 331 | "language_info": { 332 | "codemirror_mode": { 333 | "name": "ipython", 334 | "version": 3 335 | }, 336 | "file_extension": ".py", 337 | "mimetype": "text/x-python", 338 | "name": "python", 339 | "nbconvert_exporter": "python", 340 | "pygments_lexer": "ipython3", 341 | "version": "3.10.10" 342 | } 343 | }, 344 | "nbformat": 4, 345 | "nbformat_minor": 5 346 | } 347 | -------------------------------------------------------------------------------- /convo_agents/convo_agents.py: -------------------------------------------------------------------------------- 1 | """Run Llama conversation agents. 2 | 3 | The goal of this is to simulate conversation between two agents. 
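A minimal usage sketch (illustrative only; see ConvoAgents.ipynb for a full, runnable demo):

    alice = ConvoAgent.from_defaults(name="Alice")
    bob = ConvoAgent.from_defaults(name="Bob")
    greeting = "Hi, my name is Bob!"
    alice.add_message(greeting, "Bob")  # each agent records every message
    bob.add_message(greeting, "Bob")
    reply = alice.generate_message()    # Alice responds using her memories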
4 | 5 | """ 6 | 7 | from llama_index import ( 8 | GPTVectorStoreIndex, GPTListIndex, Document, ServiceContext 9 | ) 10 | from llama_index.indices.base import BaseGPTIndex 11 | from llama_index.data_structs import Node 12 | from llama_index.prompts.prompts import QuestionAnswerPrompt 13 | from collections import deque 14 | from pydantic import BaseModel, Field 15 | from typing import Optional, Dict 16 | 17 | 18 | def format_text(text: str, user: str) -> str: 19 | return user + ": " + text 20 | 21 | 22 | DEFAULT_USER_PREFIX_TMPL = ( 23 | "Your name is {name}. " 24 | "We provide conversation context between you and other users below. "\ 25 | "You are on a date with someone else. \n" 26 | # "The user is the plaintiff and the other user is the defendant." 27 | ) 28 | DEFAULT_PROMPT_TMPL = ( 29 | "---------------------\n" 30 | "{context_str}" 31 | "\n---------------------\n" 32 | "Given the context information, perform the following task.\n" 33 | "Task: {query_str}\n" 34 | "You: " 35 | # "Here's an example:\n" 36 | # "Previous line: Hi Bob, good to meet you!\n" 37 | # "You: Good to meet you too!\n\n" 38 | # "Previous line: {query_str}\n" 39 | # "You: " 40 | ) 41 | DEFAULT_PROMPT = QuestionAnswerPrompt(DEFAULT_PROMPT_TMPL) 42 | 43 | 44 | 45 | class ConvoAgent(BaseModel): 46 | """Basic abstraction for a conversation agent.""" 47 | name: str 48 | st_memory: deque 49 | lt_memory: BaseGPTIndex 50 | lt_memory_query_kwargs: Dict = Field(default_factory=dict) 51 | service_context: ServiceContext 52 | st_memory_size: int = 10 53 | # qa_prompt: QuestionAnswerPrompt = DEFAULT_PROMPT 54 | user_prefix_tmpl: str = DEFAULT_USER_PREFIX_TMPL 55 | qa_prompt_tmpl: str = DEFAULT_PROMPT_TMPL 56 | 57 | class Config: 58 | arbitrary_types_allowed = True 59 | 60 | @classmethod 61 | def from_defaults( 62 | cls, 63 | name: Optional[str] = None, 64 | st_memory: Optional[deque] = None, 65 | lt_memory: Optional[BaseGPTIndex] = None, 66 | service_context: Optional[ServiceContext] = None, 67 | **kwargs 68 | ) -> "ConvoAgent": 69 | name = name or "Agent" 70 | st_memory = st_memory or deque() 71 | lt_memory = lt_memory or GPTVectorStoreIndex([]) 72 | service_context = service_context or ServiceContext.from_defaults() 73 | return cls( 74 | name=name, 75 | st_memory=st_memory, 76 | lt_memory=lt_memory, 77 | service_context=service_context, 78 | **kwargs 79 | ) 80 | 81 | 82 | def add_message(self, message: str, user: str) -> None: 83 | """Add message from another user.""" 84 | fmt_message = format_text(message, user) 85 | self.st_memory.append(fmt_message) 86 | while len(self.st_memory) > self.st_memory_size: 87 | self.st_memory.popleft() 88 | self.lt_memory.insert(Document(fmt_message)) 89 | 90 | def generate_message(self, prev_message: Optional[str] = None) -> str: 91 | """Generate a new message.""" 92 | # if prev_message is None, get previous message using short-term memory 93 | if prev_message is None: 94 | prev_message = self.st_memory[-1] 95 | 96 | st_memory_text = "\n".join([l for l in self.st_memory]) 97 | summary_response = self.lt_memory.as_query_engine(**self.lt_memory_query_kwargs).query( 98 | f"Tell me a bit more about any context that's relevant " 99 | f"to the current messages: \n{st_memory_text}" 100 | ) 101 | 102 | # add both the long-term memory summary and the short-term conversation 103 | list_builder = GPTListIndex([]) 104 | list_builder.insert_nodes([Node(str(summary_response))]) 105 | list_builder.insert_nodes([Node(st_memory_text)]) 106 | 107 | # question-answer prompt 108 | full_qa_prompt_tmpl = ( 109 | 
self.user_prefix_tmpl.format(name=self.name) + "\n" + 110 | self.qa_prompt_tmpl 111 | ) 112 | qa_prompt = QuestionAnswerPrompt(full_qa_prompt_tmpl) 113 | 114 | response = list_builder.as_query_engine(text_qa_template=qa_prompt).query( 115 | "Generate the next message in the conversation." 116 | ) 117 | return str(response) 118 | -------------------------------------------------------------------------------- /convo_agents/requirements.txt: -------------------------------------------------------------------------------- 1 | llama-index==0.6.13 2 | -------------------------------------------------------------------------------- /external/README.md: -------------------------------------------------------------------------------- 1 | # External Projects 2 | 3 | Llama Lab also contains references to amazing external subprojects using LlamaIndex in novel ways. 4 | 5 | ### INSIGHT 6 | 7 | INSIGHT is an autonomous AI that can do medical research. It has a boss agent that takes an objective, plus an executive summary of the tasks already completed and their results, and creates a task list. A worker agent picks up a task from the list and completes it, saving the results to LlamaIndex. The boss is informed of the results and changes/reprioritizes the task list. The workers can call into the PubMed and MyGene APIs (more to come). The workers also get context from LlamaIndex to help complete their tasks. 8 | 9 | [Repo](https://github.com/oneil512/INSIGHT) -------------------------------------------------------------------------------- /llama_agi/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Logan Markewich 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /llama_agi/Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: format lint 2 | 3 | format: 4 | black . 5 | 6 | lint: 7 | mypy . 8 | black . --check 9 | ruff check . 10 | -------------------------------------------------------------------------------- /llama_agi/README.md: -------------------------------------------------------------------------------- 1 | # 🤖 Llama AGI 🦙 2 | 3 | This Python package allows you to quickly create Auto-GPT-like agents using LlamaIndex and LangChain.
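How it works: a task manager (backed by LlamaIndex) maintains and reprioritizes the task list, an execution agent (backed by LangChain) completes one task at a time, and a runner loops between the two. The sketch below is a runnable toy stand-in for that loop; the names and logic are simplified illustrations, not the real `llama_agi` API (the actual loop lives in the `runners` module):

```python
# Toy illustration of the create/execute/reprioritize loop -- not the real API.
from collections import deque

tasks = deque(["Create a list of tasks"])
completed: list[tuple[str, str]] = []

def execute(task: str) -> str:
    # Stand-in for the LangChain-powered execution agent.
    return f"result of {task!r}"

def plan_follow_ups(task: str, result: str) -> list[str]:
    # Stand-in for LlamaIndex-powered task creation; capped so the toy halts.
    return [f"follow-up to {task!r}"] if len(completed) < 3 else []

while tasks:
    task = tasks.popleft()
    result = execute(task)
    completed.append((task, result))
    tasks.extend(plan_follow_ups(task, result))
    # The real runner also sleeps between iterations and reprioritizes tasks.

print(completed)
```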
4 | 5 | ## Setup 6 | 7 | Install using pip: 8 | 9 | ```bash 10 | pip install llama-agi 11 | ``` 12 | 13 | Or install from source: 14 | 15 | ```bash 16 | git clone https://github.com/run-llama/llama-lab.git 17 | cd llama-lab/llama_agi 18 | pip install -e . 19 | ``` 20 | 21 | ## Example Usage 22 | 23 | The following shows an example of setting up the `AutoAGIRunner`, which will continue completing tasks (nearly) indefinitely, trying to achieve its initial objective of "Solve world hunger." 24 | 25 | ```python 26 | from langchain.agents import load_tools 27 | from langchain.llms import OpenAI 28 | 29 | from llama_agi.execution_agent import ToolExecutionAgent 30 | from llama_agi.runners import AutoAGIRunner 31 | from llama_agi.task_manager import LlamaTaskManager 32 | from llama_agi.tools import search_notes, record_note, search_webpage 33 | 34 | from llama_index import ServiceContext, LLMPredictor 35 | 36 | # LLM setup 37 | llm = OpenAI(temperature=0, model_name='text-davinci-003') 38 | service_context = ServiceContext.from_defaults(llm_predictor=LLMPredictor(llm=llm), chunk_size_limit=512) 39 | 40 | # llama_agi setup 41 | task_manager = LlamaTaskManager(["Create a list of tasks"], task_service_context=service_context) 42 | 43 | tools = load_tools(["google-search-results-json"]) 44 | tools = tools + [search_notes, record_note, search_webpage] 45 | execution_agent = ToolExecutionAgent(llm=llm, tools=tools) 46 | 47 | # launch the auto runner 48 | runner = AutoAGIRunner(task_manager, execution_agent) 49 | objective = "Solve world hunger" 50 | initial_task = "Create a list of tasks" 51 | sleep_time = 2 52 | runner.run(objective, initial_task, sleep_time) 53 | ``` 54 | 55 | More examples can be found in the `examples` folder! 56 | 57 | ## Llama Ecosystem 58 | 59 | - LlamaIndex (connecting your LLMs to data): https://github.com/jerryjliu/llama_index 60 | - LlamaHub (community library of data loaders): https://llamahub.ai 61 | -------------------------------------------------------------------------------- /llama_agi/examples/auto_runner_example.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from langchain.agents import load_tools 3 | from langchain.llms import OpenAI 4 | 5 | from llama_agi.execution_agent import ToolExecutionAgent 6 | from llama_agi.runners import AutoAGIRunner 7 | from llama_agi.task_manager import LlamaTaskManager 8 | from llama_agi.tools import search_notes, record_note, search_webpage 9 | 10 | from llama_index import ServiceContext, LLMPredictor 11 | 12 | 13 | if __name__ == "__main__": 14 | parser = argparse.ArgumentParser( 15 | prog="Llama AGI", 16 | description="A baby-agi/auto-gpt inspired application, powered by Llama Index!", 17 | ) 18 | parser.add_argument( 19 | "-it", 20 | "--initial-task", 21 | default="Create a list of tasks", 22 | help="The initial task for the system to carry out. Default='Create a list of tasks'", 23 | ) 24 | parser.add_argument( 25 | "-o", 26 | "--objective", 27 | default="Solve world hunger", 28 | help="The overall objective for the system. Default='Solve world hunger'", 29 | ) 30 | parser.add_argument( 31 | "--sleep-time", 32 | default=2, 33 | help="Sleep time (in seconds) between each task loop.
Default=2", 34 | type=int, 35 | ) 36 | 37 | args = parser.parse_args() 38 | 39 | # LLM setup 40 | llm = OpenAI(temperature=0, model_name="text-davinci-003") 41 | service_context = ServiceContext.from_defaults( 42 | llm_predictor=LLMPredictor(llm=llm), chunk_size_limit=512 43 | ) 44 | 45 | # llama_agi setup 46 | task_manager = LlamaTaskManager( 47 | [args.initial_task], task_service_context=service_context 48 | ) 49 | 50 | tools = load_tools(["google-search-results-json"]) 51 | tools = tools + [search_notes, record_note, search_webpage] 52 | execution_agent = ToolExecutionAgent(llm=llm, tools=tools) 53 | 54 | # launch the auto runner 55 | runner = AutoAGIRunner(task_manager, execution_agent) 56 | runner.run(args.objective, args.initial_task, args.sleep_time) 57 | -------------------------------------------------------------------------------- /llama_agi/examples/streamlit_runner_example.py: -------------------------------------------------------------------------------- 1 | import os 2 | import streamlit as st 3 | from langchain.agents import load_tools 4 | from langchain.chat_models import ChatOpenAI 5 | from langchain.llms import OpenAI 6 | 7 | from llama_agi.execution_agent import ToolExecutionAgent 8 | from llama_agi.runners import AutoStreamlitAGIRunner 9 | from llama_agi.task_manager import LlamaTaskManager 10 | 11 | from llama_index import ServiceContext, LLMPredictor 12 | 13 | 14 | st.set_page_config(layout="wide") 15 | st.header("🤖 Llama AGI 🦙") 16 | st.markdown("This demo uses the [llama-agi](https://github.com/run-llama/llama-lab/tree/main/llama_agi) package to create an AutoGPT-like agent, powered by [LlamaIndex](https://github.com/jerryjliu/llama_index) and Langchain. The AGI has access to tools that search the web and record notes, as it works to achieve an objective. Use the setup tab to configure your LLM settings and initial objective+tasks. Then use the Launch tab to run the AGI. 
Kill the AGI by refreshing the page.") 17 | 18 | setup_tab, launch_tab = st.tabs(["Setup", "Launch"]) 19 | 20 | with setup_tab: 21 | if 'init' in st.session_state: 22 | st.success("Initialized!") 23 | 24 | st.subheader("LLM Setup") 25 | col1, col2, col3 = st.columns(3) 26 | 27 | with col1: 28 | openai_api_key = st.text_input("Enter your OpenAI API key here", type="password") 29 | llm_name = st.selectbox( 30 | "Which LLM?", ["text-davinci-003", "gpt-3.5-turbo", "gpt-4"] 31 | ) 32 | 33 | with col2: 34 | google_api_key = st.text_input("Enter your Google API key here", type="password") 35 | model_temperature = st.slider( 36 | "LLM Temperature", min_value=0.0, max_value=1.0, step=0.1, value=0.0 37 | ) 38 | 39 | with col3: 40 | google_cse_id = st.text_input("Enter your Google CSE ID key here", type="password") 41 | max_tokens = st.slider( 42 | "LLM Max Tokens", min_value=256, max_value=1024, step=8, value=512 43 | ) 44 | 45 | st.subheader("AGI Setup") 46 | objective = st.text_input("Objective:", value="Solve world hunger") 47 | initial_task = st.text_input("Initial Task:", value="Create a list of tasks") 48 | max_iterations = st.slider("Iterations until pause", value=1, min_value=1, max_value=10, step=1) 49 | 50 | if st.button('Initialize?'): 51 | os.environ['OPENAI_API_KEY'] = openai_api_key 52 | os.environ['GOOGLE_API_KEY'] = google_api_key 53 | os.environ['GOOGLE_CSE_ID'] = google_cse_id 54 | if llm_name == "text-davinci-003": 55 | llm = OpenAI( 56 | temperature=model_temperature, model_name=llm_name, max_tokens=max_tokens 57 | ) 58 | else: 59 | llm= ChatOpenAI( 60 | temperature=model_temperature, model_name=llm_name, max_tokens=max_tokens 61 | ) 62 | 63 | service_context = ServiceContext.from_defaults( 64 | llm_predictor=LLMPredictor(llm=llm), chunk_size_limit=512 65 | ) 66 | 67 | st.session_state['task_manager'] = LlamaTaskManager( 68 | [initial_task], task_service_context=service_context 69 | ) 70 | 71 | from llama_agi.tools import search_notes, record_note, search_webpage 72 | tools = load_tools(["google-search-results-json"]) 73 | tools = tools + [search_notes, record_note, search_webpage] 74 | st.session_state['execution_agent'] = ToolExecutionAgent(llm=llm, tools=tools) 75 | 76 | st.session_state['initial_task'] = initial_task 77 | st.session_state['objective'] = objective 78 | 79 | st.session_state['init'] = True 80 | st.experimental_rerun() 81 | 82 | with launch_tab: 83 | st.subheader("AGI Status") 84 | if st.button(f"Continue for {max_iterations} Steps"): 85 | if st.session_state.get('init', False): 86 | # launch the auto runner 87 | with st.spinner("Running!"): 88 | runner = AutoStreamlitAGIRunner(st.session_state['task_manager'], st.session_state['execution_agent']) 89 | runner.run(st.session_state['objective'], st.session_state['initial_task'], 2, max_iterations=max_iterations) 90 | 91 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/run-llama/llama-lab/3364c9eae1dadefccad8fa904296f19e0bf045da/llama_agi/llama_agi/__init__.py -------------------------------------------------------------------------------- /llama_agi/llama_agi/default_task_prompts.py: -------------------------------------------------------------------------------- 1 | ############################################# 2 | ##### AGI Prefix ##### 3 | ############################################# 4 | PREFIX = ( 5 | "You are an autonomous 
artificial intelligence, capable of planning and executing tasks to achieve an objective.\n"
6 |     "When given an objective, you can plan and execute any number of tasks that will help achieve your original objective.\n"
7 | )
8 | 
9 | 
10 | #############################################
11 | ##### Initial Completed Tasks Summary #####
12 | #############################################
13 | NO_COMPLETED_TASKS_SUMMARY = "You haven't completed any tasks yet."
14 | 
15 | 
16 | #############################################
17 | ##### Langchain - Execution Agent #####
18 | #############################################
19 | LC_PREFIX = PREFIX + "You have access to the following tools:"
20 | 
21 | LC_SUFFIX = (
22 |     "This is your current objective: {objective}\n"
23 |     "Take into account what you have already achieved: {completed_tasks_summary}\n"
24 |     "Using your current objective, your previously completed tasks, and your available tools, "
25 |     "complete the current task.\n"
26 |     "Begin!\n"
27 |     "Task: {cur_task}\n"
28 |     "Thought: {agent_scratchpad}"
29 | )
30 | 
31 | 
32 | #############################################
33 | ##### Langchain - Execution Chain #####
34 | #############################################
35 | LC_EXECUTION_PROMPT = (
36 |     "You are an AI who performs one task based on the following objective: {objective}.\n"
37 |     "Take into account this summary of previously completed tasks: {completed_tasks_summary}.\n"
38 |     "Your task: {task}\n"
39 |     "Response: "
40 | )
41 | 
42 | 
43 | #############################################
44 | ##### LlamaIndex -- Task Creation #####
45 | #############################################
46 | DEFAULT_TASK_CREATE_TMPL = (
47 |     f"{PREFIX}"
48 |     "Your current objective is as follows: {query_str}\n"
49 |     "Most recently, you completed the task '{prev_task}', which had the result of '{prev_result}'. "
50 |     "A description of your current incomplete tasks is below: \n"
51 |     "---------------------\n"
52 |     "{context_str}"
53 |     "\n---------------------\n"
54 |     "Given the current objective, the current incomplete tasks, and the latest completed task, "
55 |     "create new tasks to be completed that do not overlap with incomplete tasks. "
56 |     "Return the tasks as an array."
57 | )
58 | # TASK_CREATE_PROMPT = QuestionAnswerPrompt(DEFAULT_TASK_CREATE_TMPL)
59 | 
60 | DEFAULT_REFINE_TASK_CREATE_TMPL = (
61 |     f"{PREFIX}"
62 |     "Your current objective is as follows: {query_str}\n"
63 |     "Most recently, you completed the task '{prev_task}', which had the result of '{prev_result}'. "
64 |     "A description of your current incomplete tasks is below: \n"
65 |     "---------------------\n"
66 |     "{context_msg}"
67 |     "\n---------------------\n"
68 |     "Currently, you have created the following new tasks: {existing_answer}\n"
69 |     "Given the current objective, the current incomplete tasks, list of newly created tasks, and the latest completed task, "
70 |     "add new tasks to be completed that do not overlap with incomplete tasks. "
71 |     "Return the tasks as an array. If you have no more tasks to add, repeat the existing list of new tasks."
72 | )
73 | # REFINE_TASK_CREATE_PROMPT = RefinePrompt(DEFAULT_REFINE_TASK_CREATE_TMPL)
74 | 
75 | 
76 | #############################################
77 | ##### LlamaIndex -- Task Prioritization #####
78 | #############################################
79 | DEFAULT_TASK_PRIORITIZE_TMPL = (
80 |     f"{PREFIX}"
81 |     "Your current objective is as follows: {query_str}\n"
82 |     "A list of your current incomplete tasks is below: \n"
83 |     "---------------------\n"
84 |     "{context_str}"
85 |     "\n---------------------\n"
86 |     "Given the current objective, prioritize the current list of tasks. "
87 |     "Do not remove or add any tasks. Return the results as a numbered list, like:\n"
88 |     "#. First task\n"
89 |     "#. Second task\n"
90 |     "... continue until all tasks are prioritized. "
91 |     "Start the task list with number 1."
92 | )
93 | 
94 | DEFAULT_REFINE_TASK_PRIORITIZE_TMPL = (
95 |     f"{PREFIX}"
96 |     "Your current objective is as follows: {query_str}\n"
97 |     "A list of additional incomplete tasks is below: \n"
98 |     "---------------------\n"
99 |     "{context_msg}"
100 |     "\n---------------------\n"
101 |     "Currently, you also have the following list of prioritized tasks: {existing_answer}\n"
102 |     "Given the current objective and existing list, prioritize the current list of tasks. "
103 |     "Do not remove or add any tasks. Return the results as a numbered list, like:\n"
104 |     "#. First task\n"
105 |     "#. Second task\n"
106 |     "... continue until all tasks are prioritized. "
107 |     "Start the task list with number 1."
108 | )
109 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/execution_agent/SimpleExecutionAgent.py:
--------------------------------------------------------------------------------
1 | from typing import Any, Dict, List, Optional, Union
2 | from string import Formatter
3 | 
4 | from langchain.agents.tools import Tool
5 | from langchain.chains import LLMChain
6 | from langchain.llms import BaseLLM
7 | from langchain.chat_models.base import BaseChatModel
8 | from langchain.prompts import PromptTemplate
9 | 
10 | from llama_agi.execution_agent.base import BaseExecutionAgent, LlamaAgentPrompts
11 | 
12 | 
13 | class SimpleExecutionAgent(BaseExecutionAgent):
14 |     """Simple Execution Agent
15 | 
16 |     This agent uses an LLM to execute a basic action without tools.
17 |     The LlamaAgentPrompts.execution_prompt defines how this execution agent
18 |     behaves.
19 | 
20 |     Usually, this is used for simple tasks, like generating the initial list of tasks.
21 | 
22 |     The execution template kwargs are automatically extracted and expected to be
23 |     specified in execute_task().
24 | 
25 |     Args:
26 |         llm (Union[BaseLLM, BaseChatModel]): The langchain LLM class to use.
27 |         model_name: (str): The name of the OpenAI model to use, if the LLM is
28 |             not provided.
29 |         max_tokens: (int): The maximum number of tokens the LLM can generate.
30 |         prompts: (LlamaAgentPrompts): The prompt templates used during execution.
31 |             The only prompt used by the SimpleExecutionAgent is
32 |             LlamaAgentPrompts.execution_prompt.
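
    Example (an illustrative sketch, assuming an OpenAI key is configured in
    the environment; the kwargs match the variables in the default
    LC_EXECUTION_PROMPT template):

        agent = SimpleExecutionAgent(model_name="text-davinci-003")
        result = agent.execute_task(
            objective="Solve world hunger",
            task="Create a list of tasks.\nReturn the list as an array.",
            completed_tasks_summary="You haven't completed any tasks yet.",
        )
        print(result["output"])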
33 | """ 34 | 35 | def __init__( 36 | self, 37 | llm: Optional[Union[BaseLLM, BaseChatModel]] = None, 38 | model_name: str = "text-davinci-003", 39 | max_tokens: int = 512, 40 | prompts: LlamaAgentPrompts = LlamaAgentPrompts(), 41 | tools: Optional[List[Tool]] = None, 42 | ) -> None: 43 | super().__init__( 44 | llm=llm, 45 | model_name=model_name, 46 | max_tokens=max_tokens, 47 | prompts=prompts, 48 | tools=tools, 49 | ) 50 | 51 | self.execution_prompt = self.prompts.execution_prompt 52 | input_variables = [ 53 | fn 54 | for _, fn, _, _ in Formatter().parse(self.execution_prompt) 55 | if fn is not None 56 | ] 57 | self._prompt_template = PromptTemplate( 58 | template=self.execution_prompt, 59 | input_variables=input_variables, 60 | ) 61 | self._execution_chain = LLMChain(llm=self._llm, prompt=self._prompt_template) 62 | 63 | def execute_task(self, **prompt_kwargs: Any) -> Dict[str, str]: 64 | """Execute a task.""" 65 | result = self._execution_chain.predict(**prompt_kwargs) 66 | return {"output": result} 67 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/execution_agent/ToolExecutionAgent.py: -------------------------------------------------------------------------------- 1 | from typing import Any, Dict, List, Optional, Union 2 | from string import Formatter 3 | 4 | from langchain.agents import AgentExecutor, ZeroShotAgent 5 | from langchain.agents.tools import Tool 6 | from langchain.chains import LLMChain 7 | from langchain.llms import BaseLLM 8 | from langchain.chat_models.base import BaseChatModel 9 | 10 | from llama_agi.execution_agent.base import BaseExecutionAgent, LlamaAgentPrompts 11 | 12 | 13 | class ToolExecutionAgent(BaseExecutionAgent): 14 | """Tool Execution Agent 15 | 16 | This agent is a wrapper around the zero-shot agent from Langchain. Using 17 | a set of tools, the agent is expected to carry out and complete some task 18 | that will help achieve an overall objective. 19 | 20 | The agents overall behavior is controlled by the LlamaAgentPrompts.agent_prefix 21 | and LlamaAgentPrompts.agent_suffix prompt templates. 22 | 23 | The execution template kwargs are automatically extracted and expected to be 24 | specified in execute_task(). 25 | 26 | execute_task() also returns the intermediate steps, for additional debugging and is 27 | used for the streamlit example. 28 | 29 | Args: 30 | llm (Union[BaseLLM, BaseChatModel]): The langchain LLM class to use. 31 | model_name: (str): The name of the OpenAI model to use, if the LLM is 32 | not provided. 33 | max_tokens: (int): The maximum number of tokens the LLM can generate. 34 | prompts: (LlamaAgentPrompts): The prompt templates used during execution. 35 | The Tool Execution Agent uses LlamaAgentPrompts.agent_prefix and 36 | LlamaAgentPrompts.agent_suffix. 37 | tools: (List[Tool]): The list of langchain tools for the execution agent to use. 
38 | """ 39 | 40 | def __init__( 41 | self, 42 | llm: Optional[Union[BaseLLM, BaseChatModel]] = None, 43 | model_name: str = "text-davinci-003", 44 | max_tokens: int = 512, 45 | prompts: LlamaAgentPrompts = LlamaAgentPrompts(), 46 | tools: Optional[List[Tool]] = None, 47 | ) -> None: 48 | super().__init__( 49 | llm=llm, 50 | model_name=model_name, 51 | max_tokens=max_tokens, 52 | prompts=prompts, 53 | tools=tools, 54 | ) 55 | self.agent_prefix = self.prompts.agent_prefix 56 | self.agent_suffix = self.prompts.agent_suffix 57 | 58 | # create the agent 59 | input_variables = [ 60 | fn for _, fn, _, _ in Formatter().parse(self.agent_prefix) if fn is not None 61 | ] + [ 62 | fn for _, fn, _, _ in Formatter().parse(self.agent_suffix) if fn is not None 63 | ] 64 | self._agent_prompt = ZeroShotAgent.create_prompt( 65 | self.tools, 66 | prefix=self.agent_prefix, 67 | suffix=self.agent_suffix, 68 | input_variables=input_variables, 69 | ) 70 | self._llm_chain = LLMChain(llm=self._llm, prompt=self._agent_prompt) 71 | self._agent = ZeroShotAgent( 72 | llm_chain=self._llm_chain, tools=self.tools, verbose=True 73 | ) 74 | self._execution_chain = AgentExecutor.from_agent_and_tools( 75 | agent=self._agent, 76 | tools=self.tools, 77 | verbose=True, 78 | return_intermediate_steps=True, 79 | ) 80 | 81 | def execute_task(self, **prompt_kwargs: Any) -> Dict[str, str]: 82 | """Execute a task, using tools.""" 83 | result = self._execution_chain(prompt_kwargs) 84 | return result 85 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/execution_agent/__init__.py: -------------------------------------------------------------------------------- 1 | from .SimpleExecutionAgent import SimpleExecutionAgent 2 | from .ToolExecutionAgent import ToolExecutionAgent 3 | 4 | __all__ = [SimpleExecutionAgent, ToolExecutionAgent] 5 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/execution_agent/base.py: -------------------------------------------------------------------------------- 1 | from abc import abstractmethod 2 | from dataclasses import dataclass 3 | from typing import Any, Dict, List, Optional, Union 4 | 5 | from langchain.agents.tools import Tool 6 | from langchain.llms import OpenAI, BaseLLM 7 | from langchain.chat_models.base import BaseChatModel 8 | from langchain.chat_models import ChatOpenAI 9 | 10 | from llama_agi.default_task_prompts import LC_PREFIX, LC_SUFFIX, LC_EXECUTION_PROMPT 11 | 12 | 13 | @dataclass 14 | class LlamaAgentPrompts: 15 | execution_prompt: str = LC_EXECUTION_PROMPT 16 | agent_prefix: str = LC_PREFIX 17 | agent_suffix: str = LC_SUFFIX 18 | 19 | 20 | class BaseExecutionAgent: 21 | """Base Execution Agent 22 | 23 | Args: 24 | llm (Union[BaseLLM, BaseChatModel]): The langchain LLM class to use. 25 | model_name: (str): The name of the OpenAI model to use, if the LLM is 26 | not provided. 27 | max_tokens: (int): The maximum number of tokens the LLM can generate. 28 | prompts: (LlamaAgentPrompts): The prompt templates used during execution. 29 | tools: (List[Tool]): The list of langchain tools for the execution 30 | agent to use. 
31 | """ 32 | 33 | def __init__( 34 | self, 35 | llm: Optional[Union[BaseLLM, BaseChatModel]] = None, 36 | model_name: str = "text-davinci-003", 37 | max_tokens: int = 512, 38 | prompts: LlamaAgentPrompts = LlamaAgentPrompts(), 39 | tools: Optional[List[Tool]] = None, 40 | ) -> None: 41 | if llm: 42 | self._llm = llm 43 | elif model_name == "text-davinci-003": 44 | self._llm = OpenAI( 45 | temperature=0, model_name=model_name, max_tokens=max_tokens 46 | ) 47 | else: 48 | self._llm = ChatOpenAI( 49 | temperature=0, model_name=model_name, max_tokens=max_tokens 50 | ) 51 | self.max_tokens = max_tokens 52 | self.prompts = prompts 53 | self.tools = tools if tools else [] 54 | 55 | @abstractmethod 56 | def execute_task(self, **prompt_kwargs: Any) -> Dict[str, str]: 57 | """Execute a task.""" 58 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/runners/AutoAGIRunner.py: -------------------------------------------------------------------------------- 1 | import time 2 | from typing import List, Optional 3 | 4 | from llama_agi.runners.base import BaseAGIRunner 5 | from llama_agi.execution_agent.SimpleExecutionAgent import SimpleExecutionAgent 6 | from llama_agi.utils import log_current_status 7 | 8 | 9 | class AutoAGIRunner(BaseAGIRunner): 10 | def run( 11 | self, 12 | objective: str, 13 | initial_task: str, 14 | sleep_time: int, 15 | initial_task_list: Optional[List[str]] = None, 16 | ) -> None: 17 | # get initial list of tasks 18 | if initial_task_list: 19 | self.task_manager.add_new_tasks(initial_task_list) 20 | else: 21 | initial_completed_tasks_summary = ( 22 | self.task_manager.get_completed_tasks_summary() 23 | ) 24 | initial_task_prompt = initial_task + "\nReturn the list as an array." 25 | 26 | # create simple execution agent using current agent 27 | simple_execution_agent = SimpleExecutionAgent( 28 | llm=self.execution_agent._llm, 29 | max_tokens=self.execution_agent.max_tokens, 30 | prompts=self.execution_agent.prompts, 31 | ) 32 | initial_task_list_result = simple_execution_agent.execute_task( 33 | objective=objective, 34 | task=initial_task_prompt, 35 | completed_tasks_summary=initial_completed_tasks_summary, 36 | ) 37 | 38 | initial_task_list = self.task_manager.parse_task_list( 39 | initial_task_list_result["output"] 40 | ) 41 | 42 | # add tasks to the task manager 43 | self.task_manager.add_new_tasks(initial_task_list) 44 | 45 | # prioritize initial tasks 46 | self.task_manager.prioritize_tasks(objective) 47 | 48 | completed_tasks_summary = initial_completed_tasks_summary 49 | while True: 50 | # Get the next task 51 | cur_task = self.task_manager.get_next_task() 52 | 53 | # Execute current task 54 | result = self.execution_agent.execute_task( 55 | objective=objective, 56 | cur_task=cur_task, 57 | completed_tasks_summary=completed_tasks_summary, 58 | )["output"] 59 | 60 | # store the task and result as completed 61 | self.task_manager.add_completed_task(cur_task, result) 62 | 63 | # generate new task(s), if needed 64 | self.task_manager.generate_new_tasks(objective, cur_task, result) 65 | 66 | # Summarize completed tasks 67 | completed_tasks_summary = self.task_manager.get_completed_tasks_summary() 68 | 69 | # log state of AGI to terminal 70 | log_current_status( 71 | cur_task, 72 | result, 73 | completed_tasks_summary, 74 | self.task_manager.current_tasks, 75 | ) 76 | 77 | # Quit the loop? 78 | if len(self.task_manager.current_tasks) == 0: 79 | print("Out of tasks! 
Objective Accomplished?") 80 | break 81 | 82 | # wait a bit to let you read what's happening 83 | time.sleep(sleep_time) 84 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/runners/AutoStreamlitAGIRunner.py: -------------------------------------------------------------------------------- 1 | import json 2 | import streamlit as st 3 | import time 4 | from typing import List, Optional 5 | 6 | from llama_agi.runners.base import BaseAGIRunner 7 | from llama_agi.execution_agent.SimpleExecutionAgent import SimpleExecutionAgent 8 | from llama_agi.utils import log_current_status 9 | 10 | 11 | def make_intermediate_steps_pretty(json_str: str) -> List[str]: 12 | steps = json.loads(json_str) 13 | output = [] 14 | for action_set in steps: 15 | for step in action_set: 16 | if isinstance(step, list): 17 | output.append(step[-1]) 18 | else: 19 | output.append(step) 20 | return output 21 | 22 | 23 | class AutoStreamlitAGIRunner(BaseAGIRunner): 24 | def run( 25 | self, 26 | objective: str, 27 | initial_task: str, 28 | sleep_time: int, 29 | initial_task_list: Optional[List[str]] = None, 30 | max_iterations: Optional[int] = None, 31 | ) -> None: 32 | 33 | run_initial_task = False 34 | if "logs" not in st.session_state: 35 | st.session_state["logs"] = [] 36 | st.session_state["state_str"] = "No state yet!" 37 | st.session_state["tasks_summary"] = "" 38 | run_initial_task = True 39 | 40 | logs_col, state_col = st.columns(2) 41 | 42 | with logs_col: 43 | st.subheader("Execution Log") 44 | st_logs = st.empty() 45 | st_logs.write(st.session_state["logs"]) 46 | 47 | with state_col: 48 | st.subheader("AGI State") 49 | st_state = st.empty() 50 | st_state.write(st.session_state["state_str"]) 51 | 52 | if run_initial_task: 53 | # get initial list of tasks 54 | if initial_task_list: 55 | self.task_manager.add_new_tasks(initial_task_list) 56 | else: 57 | initial_completed_tasks_summary = ( 58 | self.task_manager.get_completed_tasks_summary() 59 | ) 60 | initial_task_prompt = initial_task + "\nReturn the list as an array." 
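                # the "array" suffix lets LlamaTaskManager.parse_task_list recover
                # the tasks with json.loads, falling back to numbered-list parsing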
61 | 
62 |                 # create simple execution agent using current agent
63 |                 simple_execution_agent = SimpleExecutionAgent(
64 |                     llm=self.execution_agent._llm,
65 |                     max_tokens=self.execution_agent.max_tokens,
66 |                     prompts=self.execution_agent.prompts,
67 |                 )
68 |                 initial_task_list_result = simple_execution_agent.execute_task(
69 |                     objective=objective,
70 |                     task=initial_task_prompt,
71 |                     completed_tasks_summary=initial_completed_tasks_summary,
72 |                 )
73 | 
74 |                 initial_task_list = self.task_manager.parse_task_list(
75 |                     initial_task_list_result["output"]
76 |                 )
77 | 
78 |                 # add tasks to the task manager
79 |                 self.task_manager.add_new_tasks(initial_task_list)
80 | 
81 |                 # prioritize initial tasks
82 |                 self.task_manager.prioritize_tasks(objective)
83 | 
84 |                 tasks_summary = initial_completed_tasks_summary
85 |                 st.session_state["tasks_summary"] = tasks_summary
86 | 
87 |                 # update streamlit state
88 |                 st.session_state["state_str"] = log_current_status(
89 |                     initial_task,
90 |                     initial_task_list_result["output"],
91 |                     tasks_summary,
92 |                     self.task_manager.current_tasks,
93 |                     return_str=True,
94 |                 )
95 |                 if st.session_state["state_str"]:
96 |                     st_state.markdown(st.session_state["state_str"].replace("\n", "\n\n"))
97 | 
98 |         for _ in range(max_iterations or 1):  # run at least one step when max_iterations is not given
99 |             # Get the next task
100 |             cur_task = self.task_manager.get_next_task()
101 | 
102 |             # Execute current task
103 |             result_dict = self.execution_agent.execute_task(
104 |                 objective=objective,
105 |                 cur_task=cur_task,
106 |                 completed_tasks_summary=st.session_state["tasks_summary"],
107 |             )
108 |             result = result_dict["output"]
109 | 
110 |             # update logs
111 |             log = make_intermediate_steps_pretty(
112 |                 json.dumps(result_dict["intermediate_steps"])
113 |             ) + [result]
114 |             st.session_state["logs"].append(log)
115 |             st_logs.write(st.session_state["logs"])
116 | 
117 |             # store the task and result as completed
118 |             self.task_manager.add_completed_task(cur_task, result)
119 | 
120 |             # generate new task(s), if needed
121 |             self.task_manager.generate_new_tasks(objective, cur_task, result)
122 | 
123 |             # Summarize completed tasks
124 |             completed_tasks_summary = self.task_manager.get_completed_tasks_summary()
125 |             st.session_state["tasks_summary"] = completed_tasks_summary
126 | 
127 |             # log state of AGI to streamlit
128 |             st.session_state["state_str"] = log_current_status(
129 |                 cur_task,
130 |                 result,
131 |                 completed_tasks_summary,
132 |                 self.task_manager.current_tasks,
133 |                 return_str=True,
134 |             )
135 |             if st.session_state["state_str"] is not None:
136 |                 st_state.markdown(st.session_state["state_str"].replace("\n", "\n\n"))
137 | 
138 |             # Quit the loop?
139 |             if len(self.task_manager.current_tasks) == 0:
140 |                 st.success("Out of tasks! Objective Accomplished?")
141 |                 break
142 | 
143 |             # wait a bit to let you read what's happening
144 |             time.sleep(sleep_time)
145 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/runners/__init__.py:
--------------------------------------------------------------------------------
1 | from .AutoAGIRunner import AutoAGIRunner
2 | from .AutoStreamlitAGIRunner import AutoStreamlitAGIRunner
3 | 
4 | __all__ = ["AutoAGIRunner", "AutoStreamlitAGIRunner"]
5 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/runners/base.py:
--------------------------------------------------------------------------------
1 | from abc import abstractmethod
2 | from typing import List, Optional
3 | 
4 | from llama_agi.execution_agent.base import BaseExecutionAgent
5 | from llama_agi.task_manager.base import BaseTaskManager
6 | 
7 | 
8 | class BaseAGIRunner:
9 |     def __init__(
10 |         self, task_manager: BaseTaskManager, execution_agent: BaseExecutionAgent
11 |     ) -> None:
12 |         self.task_manager = task_manager
13 |         self.execution_agent = execution_agent
14 | 
15 |     @abstractmethod
16 |     def run(
17 |         self,
18 |         objective: str,
19 |         initial_task: str,
20 |         sleep_time: int,
21 |         initial_task_list: Optional[List[str]] = None,
22 |     ) -> None:
23 |         """Run the task manager and execution agent in a loop."""
24 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/task_manager/LlamaTaskManager.py:
--------------------------------------------------------------------------------
1 | import re
2 | import json
3 | from typing import List, Tuple, Optional
4 | 
5 | from llama_index import Document, ServiceContext
6 | from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt
7 | 
8 | from llama_agi.task_manager.base import BaseTaskManager, LlamaTaskPrompts
9 | from llama_agi.utils import initialize_task_list_index
10 | from llama_agi.default_task_prompts import NO_COMPLETED_TASKS_SUMMARY
11 | 
12 | 
13 | class LlamaTaskManager(BaseTaskManager):
14 |     """Llama Task Manager
15 | 
16 |     This task manager uses LlamaIndex to create and prioritize tasks. Using
17 |     the LlamaTaskPrompts, the task manager will create tasks that work
18 |     towards achieving an overall objective.
19 | 
20 |     New tasks are created based on the previous task and its result, the
21 |     completed tasks summary, and the overall objective.
22 | 
23 |     Tasks are then prioritized using the overall objective and current list of tasks.
24 | 
25 |     Args:
26 |         tasks (List[str]): The initial list of tasks to complete.
27 |         prompts: (LlamaTaskPrompts): The prompts to control the task creation
28 |             and prioritization.
29 |         task_service_context (ServiceContext): The LlamaIndex service context to use
30 |             for task creation and prioritization.
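
    Example (an illustrative sketch; the service context is optional and falls
    back to the LlamaIndex defaults when omitted):

        task_manager = LlamaTaskManager(["Create a list of tasks"])
        task_manager.prioritize_tasks("Solve world hunger")
        cur_task = task_manager.get_next_task()
        # ... run cur_task through an execution agent to get a result ...
        task_manager.add_completed_task(cur_task, "<result of the task>")
        task_manager.generate_new_tasks("Solve world hunger", cur_task, "<result of the task>")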
31 | 
32 |     """
33 | 
34 |     def __init__(
35 |         self,
36 |         tasks: List[str],
37 |         prompts: LlamaTaskPrompts = LlamaTaskPrompts(),
38 |         task_service_context: Optional[ServiceContext] = None,
39 |     ) -> None:
40 |         super().__init__(
41 |             tasks=tasks, prompts=prompts, task_service_context=task_service_context
42 |         )
43 | 
44 |         self.current_tasks_index = initialize_task_list_index(
45 |             self.current_tasks, service_context=self.task_service_context
46 |         )
47 |         self.completed_tasks_index = initialize_task_list_index(
48 |             self.completed_tasks, service_context=self.task_service_context
49 |         )
50 | 
51 |         self.task_create_qa_template = self.prompts.task_create_qa_template
52 |         self.task_create_refine_template = self.prompts.task_create_refine_template
53 | 
54 |         self.task_prioritize_qa_template = self.prompts.task_prioritize_qa_template
55 |         self.task_prioritize_refine_template = (
56 |             self.prompts.task_prioritize_refine_template
57 |         )
58 | 
59 |     def _get_task_create_templates(
60 |         self, prev_task: str, prev_result: str
61 |     ) -> Tuple[QuestionAnswerPrompt, RefinePrompt]:
62 |         """Fetch the task create prompts as llama_index objects."""
63 |         text_qa_template = self.task_create_qa_template.format(
64 |             prev_result=prev_result,
65 |             prev_task=prev_task,
66 |             query_str="{query_str}",
67 |             context_str="{context_str}",
68 |         )
69 |         llama_text_qa_template = QuestionAnswerPrompt(text_qa_template)
70 | 
71 |         refine_template = self.task_create_refine_template.format(
72 |             prev_result=prev_result,
73 |             prev_task=prev_task,
74 |             query_str="{query_str}",
75 |             context_msg="{context_msg}",
76 |             existing_answer="{existing_answer}",
77 |         )
78 |         llama_refine_template = RefinePrompt(refine_template)
79 | 
80 |         return (llama_text_qa_template, llama_refine_template)
81 | 
82 |     def _get_task_prioritize_templates(
83 |         self,
84 |     ) -> Tuple[QuestionAnswerPrompt, RefinePrompt]:
85 |         """Fetch the task prioritize prompts as llama_index objects."""
86 |         return (
87 |             QuestionAnswerPrompt(self.task_prioritize_qa_template),
88 |             RefinePrompt(self.task_prioritize_refine_template),
89 |         )
90 | 
91 |     def parse_task_list(self, task_list_str: str) -> List[str]:
92 |         """Parse new tasks generated by the agent."""
93 |         new_tasks: List[str] = []
94 |         try:
95 |             new_tasks = json.loads(task_list_str)
96 |             new_tasks = [x.strip() for x in new_tasks if len(x.strip()) > 10]
97 |         except Exception:
98 |             new_tasks = str(task_list_str).split("\n")
99 |             new_tasks = [
100 |                 re.sub(r"^[0-9]+\.", "", x).strip()
101 |                 for x in new_tasks  # iterate the split lines, not the list's repr
102 |                 if len(x.strip()) > 10 and x.strip()[0].isnumeric()
103 |             ]
104 |         return new_tasks
105 | 
106 |     def get_completed_tasks_summary(self) -> str:
107 |         """Generate a summary of completed tasks."""
108 |         if len(self.completed_tasks) == 0:
109 |             return NO_COMPLETED_TASKS_SUMMARY
110 |         summary = self.completed_tasks_index.as_query_engine(
111 |             response_mode="tree_summarize"
112 |         ).query(
113 |             "Summarize the current completed tasks",
114 |         )
115 |         return str(summary)
116 | 
117 |     def prioritize_tasks(self, objective: str) -> None:
118 |         """Prioritize the current list of incomplete tasks."""
119 |         (text_qa_template, refine_template) = self._get_task_prioritize_templates()
120 |         prioritized_tasks = self.current_tasks_index.as_query_engine(
121 |             text_qa_template=text_qa_template, refine_template=refine_template
122 |         ).query(objective)
123 | 
124 |         new_tasks = []
125 |         for task in str(prioritized_tasks).split("\n"):
126 |             task = re.sub(r"^[0-9]+\.", "", task).strip()
127 |             if len(task) > 10:
128 |                 new_tasks.append(task)
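        # rebuild the task list and its index so the new ordering is used by later queries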
129 |         self.current_tasks = [Document(x) for x in new_tasks]
130 |         self.current_tasks_index = initialize_task_list_index(
131 |             self.current_tasks, service_context=self.task_service_context
132 |         )
133 | 
134 |     def generate_new_tasks(
135 |         self, objective: str, prev_task: str, prev_result: str
136 |     ) -> None:
137 |         """Generate new tasks given the previous task and result."""
138 |         (text_qa_template, refine_template) = self._get_task_create_templates(
139 |             prev_task, prev_result
140 |         )
141 |         task_list_response = self.completed_tasks_index.as_query_engine(
142 |             text_qa_template=text_qa_template, refine_template=refine_template
143 |         ).query(objective)
144 |         new_tasks = self.parse_task_list(str(task_list_response))
145 |         self.add_new_tasks(new_tasks)
146 | 
147 |     def get_next_task(self) -> str:
148 |         """Get the next task to complete."""
149 |         next_task = self.current_tasks.pop().get_text()
150 |         self.current_tasks_index = initialize_task_list_index(
151 |             self.current_tasks, service_context=self.task_service_context
152 |         )
153 |         return next_task
154 | 
155 |     def add_new_tasks(self, tasks: List[str]) -> None:
156 |         """Add new tasks to the task manager."""
157 |         for task in tasks:
158 |             if task not in [doc.get_text() for doc in self.current_tasks]:  # the list holds Documents, so compare by text
159 |                 self.current_tasks.append(Document(task))
160 |         self.current_tasks_index = initialize_task_list_index(
161 |             self.current_tasks, service_context=self.task_service_context
162 |         )
163 | 
164 |     def add_completed_task(self, task: str, result: str) -> None:
165 |         """Add a task as completed."""
166 |         document = Document(f"Task: {task}\nResult: {result}\n")
167 |         self.completed_tasks.append(document)
168 |         self.completed_tasks_index = initialize_task_list_index(
169 |             self.completed_tasks, service_context=self.task_service_context
170 |         )
171 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/task_manager/__init__.py:
--------------------------------------------------------------------------------
1 | from .LlamaTaskManager import LlamaTaskManager
2 | 
3 | __all__ = [
4 |     "LlamaTaskManager",
5 | ]
6 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/task_manager/base.py:
--------------------------------------------------------------------------------
1 | from abc import abstractmethod
2 | from dataclasses import dataclass
3 | from typing import List, Optional
4 | 
5 | from llama_index import Document, ServiceContext
6 | 
7 | from llama_agi.default_task_prompts import (
8 |     DEFAULT_TASK_PRIORITIZE_TMPL,
9 |     DEFAULT_REFINE_TASK_PRIORITIZE_TMPL,
10 |     DEFAULT_TASK_CREATE_TMPL,
11 |     DEFAULT_REFINE_TASK_CREATE_TMPL,
12 | )
13 | 
14 | 
15 | @dataclass
16 | class LlamaTaskPrompts:
17 |     task_create_qa_template: str = DEFAULT_TASK_CREATE_TMPL
18 |     task_create_refine_template: str = DEFAULT_REFINE_TASK_CREATE_TMPL
19 |     task_prioritize_qa_template: str = DEFAULT_TASK_PRIORITIZE_TMPL
20 |     task_prioritize_refine_template: str = DEFAULT_REFINE_TASK_PRIORITIZE_TMPL
21 | 
22 | 
23 | class BaseTaskManager:
24 |     """Base Task Manager
25 | 
26 |     Args:
27 |         tasks (List[str]): The initial list of tasks to complete.
28 |         prompts: (LlamaTaskPrompts): The prompts to control the task creation
29 |             and prioritization.
30 |         task_service_context (ServiceContext): The LlamaIndex service context to use
31 |             for task creation and prioritization.
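
    A minimal subclass sketch (hypothetical, shown only to indicate which
    methods a concrete task manager must implement):

        class FIFOTaskManager(BaseTaskManager):
            def get_next_task(self) -> str:
                return self.current_tasks.pop(0).get_text()

            def parse_task_list(self, task_list_str: str) -> List[str]:
                return [t.strip() for t in task_list_str.split("\n") if t.strip()]

            # ... the remaining abstract methods must be implemented as well ...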
32 | 33 | """ 34 | 35 | def __init__( 36 | self, 37 | tasks: List[str], 38 | prompts: LlamaTaskPrompts = LlamaTaskPrompts(), 39 | task_service_context: Optional[ServiceContext] = None, 40 | ) -> None: 41 | self.current_tasks = [Document(x) for x in tasks] 42 | self.completed_tasks: List[Document] = [] 43 | self.prompts = prompts 44 | self.task_service_context = task_service_context 45 | 46 | @abstractmethod 47 | def parse_task_list(self, task_list_str: str) -> List[str]: 48 | """Parse new tasks generated by the agent.""" 49 | 50 | @abstractmethod 51 | def get_completed_tasks_summary(self) -> str: 52 | """Generate a summary of completed tasks.""" 53 | 54 | @abstractmethod 55 | def prioritize_tasks(self, objective: str) -> None: 56 | """Prioritize the current list of incomplete tasks.""" 57 | 58 | @abstractmethod 59 | def generate_new_tasks( 60 | self, objective: str, prev_task: str, prev_result: str 61 | ) -> None: 62 | """Generate new tasks given the previous task and result.""" 63 | 64 | @abstractmethod 65 | def get_next_task(self) -> str: 66 | """Get the next task to complete.""" 67 | 68 | @abstractmethod 69 | def add_new_tasks(self, tasks: List[str]) -> None: 70 | """Add new tasks to the task manager.""" 71 | 72 | @abstractmethod 73 | def add_completed_task(self, task: str, result: str) -> None: 74 | """Add a task as completed.""" 75 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/tools/NoteTakingTools.py: -------------------------------------------------------------------------------- 1 | from langchain.agents import tool 2 | from llama_index import Document 3 | from llama_agi.utils import initialize_search_index 4 | 5 | note_index = initialize_search_index([]) 6 | 7 | 8 | @tool("Record Note") 9 | def record_note(note: str) -> str: 10 | """Useful for when you need to record a note or reminder for yourself to reference in the future.""" 11 | global note_index 12 | note_index.insert(Document(note)) 13 | return "Note successfully recorded." 14 | 15 | 16 | @tool("Search Notes") 17 | def search_notes(query_str: str) -> str: 18 | """Useful for searching through notes that you previously recorded.""" 19 | global note_index 20 | response = note_index.as_query_engine( 21 | similarity_top_k=3, 22 | ).query(query_str) 23 | return str(response) 24 | -------------------------------------------------------------------------------- /llama_agi/llama_agi/tools/WebpageSearchTool.py: -------------------------------------------------------------------------------- 1 | from langchain.agents import tool 2 | from llama_index import download_loader, ServiceContext 3 | 4 | from llama_agi.utils import initialize_search_index 5 | 6 | BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader") 7 | 8 | 9 | @tool("Search Webpage") 10 | def search_webpage(prompt: str) -> str: 11 | """Useful for searching a specific webpage. The input to the tool should be URL and query, separated by a newline.""" 12 | loader = BeautifulSoupWebReader() 13 | if len(prompt.split("\n")) < 2: 14 | return "The input to search_webpage should be a URL and a query, separated by a newline." 
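    # the first line of the prompt is the URL; the remaining lines form the query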
15 | 
16 |     url = prompt.split("\n")[0]
17 |     query_str = " ".join(prompt.split("\n")[1:])
18 | 
19 |     try:
20 |         documents = loader.load_data(urls=[url])
21 |         service_context = ServiceContext.from_defaults(chunk_size_limit=512)
22 |         index = initialize_search_index(documents, service_context=service_context)
23 |         query_result = index.as_query_engine(similarity_top_k=3).query(query_str)
24 |         return str(query_result)
25 |     except ValueError as e:
26 |         return str(e)
27 |     except Exception:
28 |         return "Encountered an error while searching the webpage."
29 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/tools/__init__.py:
--------------------------------------------------------------------------------
1 | from .NoteTakingTools import record_note, search_notes
2 | from .WebpageSearchTool import search_webpage
3 | 
4 | __all__ = ["record_note", "search_notes", "search_webpage"]
5 | 
--------------------------------------------------------------------------------
/llama_agi/llama_agi/utils.py:
--------------------------------------------------------------------------------
1 | from typing import Any, List, Optional
2 | 
3 | from llama_index import GPTVectorStoreIndex, GPTListIndex, ServiceContext, Document
4 | from llama_index.indices.base import BaseGPTIndex
5 | 
6 | 
7 | def initialize_task_list_index(
8 |     documents: List[Document], service_context: Optional[ServiceContext] = None
9 | ) -> BaseGPTIndex[Any]:
10 |     return GPTListIndex.from_documents(documents, service_context=service_context)
11 | 
12 | 
13 | def initialize_search_index(
14 |     documents: List[Document], service_context: Optional[ServiceContext] = None
15 | ) -> BaseGPTIndex[Any]:
16 |     return GPTVectorStoreIndex.from_documents(
17 |         documents, service_context=service_context
18 |     )
19 | 
20 | 
21 | def log_current_status(
22 |     cur_task: str,
23 |     result: str,
24 |     completed_tasks_summary: str,
25 |     task_list: List[Document],
26 |     return_str: bool = False,
27 | ) -> Optional[str]:
28 |     status_string = f"""
29 | __________________________________
30 | Completed Tasks Summary: {completed_tasks_summary.strip()}
31 | Current Task: {cur_task.strip()}
32 | Result: {result.strip()}
33 | Task List: {", ".join([x.get_text().strip() for x in task_list])}
34 | __________________________________
35 | """
36 |     if return_str:
37 |         return status_string
38 |     else:
39 |         print(status_string, flush=True)
40 |         return None
41 | 
--------------------------------------------------------------------------------
/llama_agi/pyproject.toml:
--------------------------------------------------------------------------------
1 | [tool.poetry]
2 | name = "llama_agi"
3 | version = "0.2.0"
4 | description = "Building AGI loops using LlamaIndex and Langchain"
5 | authors = []
6 | license = "MIT"
7 | readme = "README.md"
8 | repository = "https://github.com/run-llama/llama-lab/tree/main/llama_agi"
9 | include = [
10 |     "LICENSE",
11 | ]
12 | keywords = ["LLM", "LlamaIndex", "Langchain", "AGI"]
13 | 
14 | [tool.poetry.dependencies]
15 | python = ">=3.8.1,<4.0"
16 | altair = "==4.2.2"
17 | langchain = "==0.0.154"
18 | llama-index = "==0.6.13"
19 | streamlit = "==1.21.0"
20 | transformers = ">=4.29.2"
21 | google-api-python-client = ">=2.87.0"
22 | 
23 | [tool.poetry.group.lint.dependencies]
24 | ruff = "^0.0.249"
25 | black = "^23.1.0"
26 | 
27 | [tool.poetry.group.typing.dependencies]
28 | mypy = "^0.991"
29 | 
30 | [tool.poetry.group.dev.dependencies]
31 | setuptools = "^67.6.1"
32 | 
33 | [tool.mypy]
34 | ignore_missing_imports = "True"
35 | 
disallow_untyped_defs = "True" 36 | exclude = ["notebooks", "build", "examples"] 37 | 38 | [tool.ruff] 39 | exclude = [ 40 | ".venv", 41 | "__pycache__", 42 | ".ipynb_checkpoints", 43 | ".mypy_cache", 44 | ".ruff_cache", 45 | "examples", 46 | "notebooks", 47 | ] 48 | 49 | [build-system] 50 | requires = ["poetry-core"] 51 | build-backend = "poetry.core.masonry.api" 52 | -------------------------------------------------------------------------------- /llama_agi/requirements.txt: -------------------------------------------------------------------------------- 1 | altair==4.2.2 2 | google-api-python-client>=2.87.0 3 | langchain==0.0.154 4 | llama-index==0.6.13 5 | streamlit==1.21.0 6 | transformers>=4.29.2 7 | --------------------------------------------------------------------------------