├── .github └── workflows │ └── file-check.yml ├── Makefile ├── README.md ├── agents ├── README.md ├── readme_template.md ├── self-ask-with-search │ ├── README.md │ └── agent.json ├── zero-shot-react-conversation │ ├── README.md │ └── agent.json ├── zero-shot-react-description │ ├── README.md │ └── agent.json └── zero-shot-react-sql │ ├── README.md │ └── agent.json ├── chains ├── README.md ├── api │ └── meteo │ │ └── chain.json ├── hello-world │ ├── README.md │ └── chain.json ├── llm-bash │ ├── README.md │ └── chain.json ├── llm-checker │ ├── README.md │ └── chain.json ├── llm-math │ ├── README.md │ └── chain.json ├── llm-requests │ ├── README.md │ └── chain.json ├── pal │ └── math │ │ ├── README.md │ │ └── chain.json ├── qa_with_sources │ ├── map-reduce │ │ └── chain.json │ ├── map-rerank │ │ └── chain.json │ ├── refine │ │ └── chain.json │ └── stuff │ │ └── chain.json ├── question_answering │ ├── map-reduce │ │ └── chain.json │ ├── map-rerank │ │ └── chain.json │ ├── refine │ │ └── chain.json │ └── stuff │ │ └── chain.json ├── readme_template.md ├── summarize │ ├── map-reduce │ │ └── chain.json │ ├── refine │ │ └── chain.json │ └── stuff │ │ └── chain.json └── vector-db-qa │ ├── map-reduce │ └── chain.json │ └── stuff │ └── chain.json ├── ci_scripts └── file-check.py └── prompts ├── README.md ├── api ├── api_response │ ├── README.md │ └── prompt.json └── api_url │ ├── README.md │ └── prompt.json ├── conversation ├── README.md └── prompt.json ├── hello-world ├── README.md └── prompt.yaml ├── llm_bash ├── README.md └── prompt.json ├── llm_math ├── README.md └── prompt.json ├── memory └── summarize │ ├── README.md │ └── prompt.json ├── pal ├── README.md ├── colored_objects.json └── math.json ├── qa ├── map_reduce │ ├── question │ │ ├── README.md │ │ └── basic.json │ └── reduce │ │ ├── README.md │ │ └── basic.json ├── refine │ ├── README.md │ └── basic.json └── stuff │ ├── README.md │ └── basic.yaml ├── qa_with_sources ├── map_reduce │ └── reduce │ │ ├── README.md │ │ └── basic.json ├── refine │ ├── README.md │ └── basic.json └── stuff │ ├── README.md │ └── basic.json ├── readme_template.md ├── sql_query ├── language_to_sql_output │ ├── README.md │ └── prompt.json └── relevant_tables │ ├── README.md │ └── relevant_tables.py ├── summarize ├── map_reduce │ └── map │ │ ├── README.md │ │ └── prompt.yaml ├── refine │ ├── README.md │ └── prompt.yaml └── stuff │ ├── README.md │ └── prompt.yaml └── vector_db_qa ├── README.md └── prompt.json /.github/workflows/file-check.yml: -------------------------------------------------------------------------------- 1 | name: file-check 2 | 3 | on: 4 | push: 5 | branches: [master] 6 | pull_request: 7 | 8 | jobs: 9 | build: 10 | runs-on: ubuntu-latest 11 | strategy: 12 | matrix: 13 | python-version: 14 | - "3.9" 15 | steps: 16 | - uses: actions/checkout@v3 17 | - name: Install langchain 18 | run: | 19 | pip install -U langchain 20 | - name: Analysing the files with our Make command 21 | run: | 22 | make file-check 23 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: file-check 2 | 3 | file-check: 4 | python ci_scripts/file-check.py 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LangChainHub 2 | 3 | | 🌐 This repo is getting replaced by our hosted LangChain Hub Product! 
Visit it at [https://smith.langchain.com/hub](https://smith.langchain.com/hub) 🌐 |
| --- |

## Introduction

Taking inspiration from Hugging Face Hub, LangChainHub is a collection of artifacts useful for working with LangChain primitives such as prompts, chains, and agents.
The goal of this repository is to be a central resource for sharing and discovering high-quality prompts, chains, and agents that combine to form complex LLM applications.

We are starting off the hub with a collection of prompts, and we look forward to the LangChain community adding to this collection. We hope to expand to chains and agents shortly.

## Contributing

Since we are using GitHub to organize this Hub, adding artifacts can best be done in one of three ways:

1. Create a fork and then open a PR against the repo.
2. Create an issue on the repo with details of the artifact you would like to add.
3. Add an artifact with the appropriate Google form:
   - [Prompts](https://forms.gle/aAhZ6nEUybdzVbYq6)

Each of the different types of artifacts (listed below) will have different instructions on how to upload them.
Please refer to the appropriate documentation to do so.

## 📖 Prompts

At a high level, prompts are organized by use case inside the `prompts` directory.
To load a prompt in LangChain, you should use the following code snippet:

```python
from langchain.prompts import load_prompt

prompt = load_prompt('lc://prompts/path/to/file.json')
```

In addition to the prompt files themselves, each sub-directory also contains a README explaining how best to use that prompt in the appropriate LangChain chain.

For more detailed information on how prompts are organized in the Hub, and how best to upload one, please see the documentation [here](./prompts/README.md).

## 🔗 Chains

At a high level, chains are organized by use case inside the `chains` directory.
To load a chain in LangChain, you should use the following code snippet:

```python
from langchain.chains import load_chain

chain = load_chain('lc://chains/path/to/file.json')
```

In addition to the chain files themselves, each sub-directory also contains a README explaining what that chain does.

For more detailed information on how chains are organized in the Hub, and how best to upload one, please see the documentation [here](./chains/README.md).

## 🤖 Agents

At a high level, agents are organized by use case inside the `agents` directory.
To load an agent in LangChain, you should use the following code snippet:

```python
from langchain.agents import initialize_agent

llm = ...
tools = ...

agent = initialize_agent(tools, llm, agent_path="lc://agents/self-ask-with-search/agent.json")
```

In addition to the agent files themselves, each sub-directory also contains a README explaining what that agent does.

For more detailed information on how agents are organized in the Hub, and how best to upload one, please see the documentation [here](./agents/README.md).

## 👷 Agent Executors

Coming soon!
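## 🧪 Putting It Together

As an end-to-end illustration, here is a minimal sketch that loads a Hub prompt and runs it in a chain. It assumes `langchain` is installed with an OpenAI key configured, and that the `hello-world` prompt expects a `topic` variable (mirroring the hello-world chain in this repo); any other Hub path works the same way.

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import load_prompt

# Load a serialized prompt from the Hub using the lc:// path convention.
prompt = load_prompt("lc://prompts/hello-world/prompt.yaml")

# Run it inside the simplest possible chain.
llm = OpenAI(temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="machine learning"))
```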
79 | -------------------------------------------------------------------------------- /agents/README.md: -------------------------------------------------------------------------------- 1 | # Agents 2 | 3 | This directory covers loading and uploading of agents. 4 | Each sub-directory covers a different agent, and has not only the serialized agent but also a README file describing that agent. 5 | 6 | ## Loading 7 | 8 | All agents can be loaded from LangChain by specifying the desired path, and adding the `lc://` prefix. The path should be relative to the `langchain-hub` repo. 9 | 10 | For example, to load the agent at the path `langchain-hub/agents/self-ask-with-search/agent.json`, the path you want to specify is `lc://agents/self-ask-with-search/agent.json` 11 | 12 | Once you have that path, you can load it in the following manner: 13 | 14 | ```python 15 | from langchain.agents import initialize_agent 16 | 17 | llm = ... 18 | tools = ... 19 | 20 | agent = initialize_agent(tools, llm, agent_path="lc://agents/self-ask-with-search/agent.json") 21 | ``` 22 | 23 | ## Uploading 24 | 25 | To upload an agent to the LangChainHub, you must upload 2 files: 26 | 1. The agent. There are 2 supported file formats for agents: `json` and `yaml`. 27 | 2. Associated README file for the agent. This provides a high level description of the agent. 28 | 29 | 30 | ### Supported file formats 31 | 32 | #### `json` 33 | To get a properly formatted json file, if you have an agent in memory in Python you can run: 34 | ```python 35 | agent.save_agent("file_name.json") 36 | ``` 37 | 38 | Replace `"file_name"` with the desired name of the file. 39 | 40 | #### `yaml` 41 | To get a properly formatted yaml file, if you have an agent in memory in Python you can run: 42 | ```python 43 | agent.save_agent("file_name.yaml") 44 | ``` 45 | 46 | Replace `"file_name"` with the desired name of the file. 
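For instance, the following sketch builds an agent in memory and then serializes it. The tool set and model here are placeholder choices (the `serpapi` tool assumes a SerpAPI key is configured); any supported combination works the same way:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)

# Build the agent, then write it to disk, ready for upload.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
agent.save_agent("agent.json")  # use a .yaml extension for YAML output
```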
47 | -------------------------------------------------------------------------------- /agents/readme_template.md: -------------------------------------------------------------------------------- 1 | # {Title} 2 | 3 | ## Description 4 | 5 | ## Agent type 6 | 7 | ## Required Tool Names 8 | -------------------------------------------------------------------------------- /agents/self-ask-with-search/README.md: -------------------------------------------------------------------------------- 1 | # Self Ask With Search 2 | 3 | ## Description 4 | 5 | Agent that asks intermediate questions, but answers them with a search tool 6 | 7 | ## Agent type 8 | 9 | `self-ask-with-search` 10 | 11 | ## Required Tool Names 12 | 13 | - Intermediate Answer 14 | -------------------------------------------------------------------------------- /agents/self-ask-with-search/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "llm_chain": { 3 | "memory": null, 4 | "verbose": false, 5 | "prompt": { 6 | "input_variables": [ 7 | "input", 8 | "agent_scratchpad" 9 | ], 10 | "output_parser": null, 11 | "template": "Question: Who lived longer, Muhammad Ali or Alan Turing?\nAre follow up questions needed here: Yes.\nFollow up: How old was Muhammad Ali when he died?\nIntermediate answer: Muhammad Ali was 74 years old when he died.\nFollow up: How old was Alan Turing when he died?\nIntermediate answer: Alan Turing was 41 years old when he died.\nSo the final answer is: Muhammad Ali\n\nQuestion: When was the founder of craigslist born?\nAre follow up questions needed here: Yes.\nFollow up: Who was the founder of craigslist?\nIntermediate answer: Craigslist was founded by Craig Newmark.\nFollow up: When was Craig Newmark born?\nIntermediate answer: Craig Newmark was born on December 6, 1952.\nSo the final answer is: December 6, 1952\n\nQuestion: Who was the maternal grandfather of George Washington?\nAre follow up questions needed here: Yes.\nFollow up: Who was the mother of George Washington?\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\nFollow up: Who was the father of Mary Ball Washington?\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\nSo the final answer is: Joseph Ball\n\nQuestion: Are both the directors of Jaws and Casino Royale from the same country?\nAre follow up questions needed here: Yes.\nFollow up: Who is the director of Jaws?\nIntermediate answer: The director of Jaws is Steven Spielberg.\nFollow up: Where is Steven Spielberg from?\nIntermediate answer: The United States.\nFollow up: Who is the director of Casino Royale?\nIntermediate answer: The director of Casino Royale is Martin Campbell.\nFollow up: Where is Martin Campbell from?\nIntermediate answer: New Zealand.\nSo the final answer is: No\n\nQuestion: {input}\nAre followup questions needed here:{agent_scratchpad}", 12 | "template_format": "f-string" 13 | }, 14 | "llm": { 15 | "model_name": "text-davinci-003", 16 | "temperature": 0.0, 17 | "max_tokens": 256, 18 | "top_p": 1, 19 | "frequency_penalty": 0, 20 | "presence_penalty": 0, 21 | "n": 1, 22 | "best_of": 1, 23 | "request_timeout": null, 24 | "logit_bias": {}, 25 | "_type": "openai" 26 | }, 27 | "output_key": "text", 28 | "_type": "llm_chain" 29 | }, 30 | "allowed_tools": [ 31 | "Intermediate Answer" 32 | ], 33 | "return_values": [ 34 | "output" 35 | ], 36 | "_type": "self-ask-with-search" 37 | } -------------------------------------------------------------------------------- 
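The self-ask-with-search agent serialized above expects a single tool named exactly "Intermediate Answer" (per its README). A minimal loading sketch, assuming SerpAPI as the search backend, might look like:

```python
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

# This agent requires exactly one tool, and it must be
# named "Intermediate Answer".
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for answering factual follow-up questions",
    )
]

llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent_path="lc://agents/self-ask-with-search/agent.json")
agent.run("Who lived longer, Muhammad Ali or Alan Turing?")
```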
/agents/zero-shot-react-conversation/README.md:
--------------------------------------------------------------------------------
# Zero Shot ReAct Conversational Agent

## Description

Agent that interacts with arbitrary tools and carries on a conversation

## Agent type

`conversational-react-description`

## Required Tool Names

- Can be any

--------------------------------------------------------------------------------
/agents/zero-shot-react-conversation/agent.json:
--------------------------------------------------------------------------------
{
    "load_from_llm_and_tools": true,
    "_type": "conversational-react-description",
    "prefix": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:",
    "suffix": "Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}",
    "ai_prefix": "AI",
    "human_prefix": "Human"
}
--------------------------------------------------------------------------------
/agents/zero-shot-react-description/README.md:
--------------------------------------------------------------------------------
# Zero Shot ReAct Agent

## Description

Agent that interacts with arbitrary tools

## Agent type

`zero-shot-react-description`

## Required Tool Names

- Can be any

--------------------------------------------------------------------------------
/agents/zero-shot-react-description/agent.json:
--------------------------------------------------------------------------------
{
    "load_from_llm_and_tools": true,
    "_type": "zero-shot-react-description",
    "prefix": "Answer the following questions as best you can. You have access to the following tools:",
    "suffix": "Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}"
}
--------------------------------------------------------------------------------
/agents/zero-shot-react-sql/README.md:
--------------------------------------------------------------------------------
# SQL Agent

## Description

Agent that interacts with SQL databases

## Agent type

`zero-shot-react-description`

## Required Tool Names

- Can be any, but they should be tools for interacting with SQL databases

--------------------------------------------------------------------------------
/agents/zero-shot-react-sql/agent.json:
--------------------------------------------------------------------------------
{
    "load_from_llm_and_tools": true,
    "_type": "zero-shot-react-description",
    "prefix": "Answer the question as best you can.\n The answer you return should come directly from the database tools. Be very explicit and specific with what you want when you ask the databases for information. Finally, don't use SQL syntax in your request but rather use natural language. If you cannot get an answer from the provided tools, say \"There is not enough information in the DB to answer the question.\"\nYou have access to the following DB:",
    "suffix": "Begin!\nQuestion: {input}\nThought:{agent_scratchpad}"
}

--------------------------------------------------------------------------------
/chains/README.md:
--------------------------------------------------------------------------------
# Chains

This directory covers loading and uploading of chains.
Each sub-directory covers a different chain, and has not only the serialized chain but also a README file describing that chain.

## Loading

All chains can be loaded from LangChain by specifying the desired path, and adding the `lc://` prefix. The path should be relative to the `langchain-hub` repo.

For example, to load the chain at the path `langchain-hub/chains/hello-world/chain.json`, the path you want to specify is `lc://chains/hello-world/chain.json`

Once you have that path, you can load it in the following manner:

```python
from langchain.chains import load_chain

chain = load_chain("lc://chains/hello-world/chain.json")
```

Extra arguments (kwargs) may be needed depending on the type of chain.

## Uploading

To upload a chain to the LangChainHub, you must upload 2 files:
1. The chain. There are 2 supported file formats for chains: `json` and `yaml`.
2. Associated README file for the chain. This provides a high level description of the chain.

### Supported file formats

#### `json`
To get a properly formatted json file, if you have a chain in memory in Python you can run:
```python
chain.save("file_name.json")
```

Replace `"file_name"` with the desired name of the file.

#### `yaml`
To get a properly formatted yaml file, if you have a chain in memory in Python you can run:
```python
chain.save("file_name.yaml")
```

Replace `"file_name"` with the desired name of the file.
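As a concrete sketch, the following builds a trivial chain in memory (mirroring the hello-world chain in this repo) and serializes it; the prompt text and model settings are placeholders:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Tell me a joke about {topic}:",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

# Writes the serialized chain, ready for upload.
chain.save("chain.json")
```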
46 | -------------------------------------------------------------------------------- /chains/api/meteo/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": true, 4 | "api_request_chain": { 5 | "memory": null, 6 | "verbose": false, 7 | "prompt": { 8 | "input_variables": [ 9 | "api_docs", 10 | "question" 11 | ], 12 | "output_parser": null, 13 | "template": "You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:", 14 | "template_format": "f-string", 15 | "_type": "prompt" 16 | }, 17 | "llm": { 18 | "model_name": "text-davinci-003", 19 | "temperature": 0.0, 20 | "max_tokens": 256, 21 | "top_p": 1, 22 | "frequency_penalty": 0, 23 | "presence_penalty": 0, 24 | "n": 1, 25 | "best_of": 1, 26 | "request_timeout": null, 27 | "logit_bias": {}, 28 | "_type": "openai" 29 | }, 30 | "output_key": "text", 31 | "_type": "llm_chain" 32 | }, 33 | "api_answer_chain": { 34 | "memory": null, 35 | "verbose": false, 36 | "prompt": { 37 | "input_variables": [ 38 | "api_docs", 39 | "question", 40 | "api_url", 41 | "api_response" 42 | ], 43 | "output_parser": null, 44 | "template": "You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:", 45 | "template_format": "f-string", 46 | "_type": "prompt" 47 | }, 48 | "llm": { 49 | "model_name": "text-davinci-003", 50 | "temperature": 0.0, 51 | "max_tokens": 256, 52 | "top_p": 1, 53 | "frequency_penalty": 0, 54 | "presence_penalty": 0, 55 | "n": 1, 56 | "best_of": 1, 57 | "request_timeout": null, 58 | "logit_bias": {}, 59 | "_type": "openai" 60 | }, 61 | "output_key": "text", 62 | "_type": "llm_chain" 63 | }, 64 | "api_docs": "BASE URL: https://api.open-meteo.com/\n\nAPI Documentation\nThe API endpoint /v1/forecast accepts a geographical coordinate, a list of weather variables and responds with a JSON hourly weather forecast for 7 days. Time always starts at 0:00 today and contains 168 hours. All URL parameters are listed below:\n\nParameter\tFormat\tRequired\tDefault\tDescription\nlatitude, longitude\tFloating point\tYes\t\tGeographical WGS84 coordinate of the location\nhourly\tString array\tNo\t\tA list of weather variables which should be returned. Values can be comma separated, or multiple &hourly= parameter in the URL can be used.\ndaily\tString array\tNo\t\tA list of daily weather variable aggregations which should be returned. Values can be comma separated, or multiple &daily= parameter in the URL can be used. 
If daily weather variables are specified, parameter timezone is required.\ncurrent_weather\tBool\tNo\tfalse\tInclude current weather conditions in the JSON output.\ntemperature_unit\tString\tNo\tcelsius\tIf fahrenheit is set, all temperature values are converted to Fahrenheit.\nwindspeed_unit\tString\tNo\tkmh\tOther wind speed speed units: ms, mph and kn\nprecipitation_unit\tString\tNo\tmm\tOther precipitation amount units: inch\ntimeformat\tString\tNo\tiso8601\tIf format unixtime is selected, all time values are returned in UNIX epoch time in seconds. Please note that all timestamp are in GMT+0! For daily values with unix timestamps, please apply utc_offset_seconds again to get the correct date.\ntimezone\tString\tNo\tGMT\tIf timezone is set, all timestamps are returned as local-time and data is returned starting at 00:00 local-time. Any time zone name from the time zone database is supported. If auto is set as a time zone, the coordinates will be automatically resolved to the local time zone.\npast_days\tInteger (0-2)\tNo\t0\tIf past_days is set, yesterday or the day before yesterday data are also returned.\nstart_date\nend_date\tString (yyyy-mm-dd)\tNo\t\tThe time interval to get weather data. A day must be specified as an ISO8601 date (e.g. 2022-06-30).\nmodels\tString array\tNo\tauto\tManually select one or more weather models. Per default, the best suitable weather models will be combined.\n\nHourly Parameter Definition\nThe parameter &hourly= accepts the following values. Most weather variables are given as an instantaneous value for the indicated hour. Some variables like precipitation are calculated from the preceding hour as an average or sum.\n\nVariable\tValid time\tUnit\tDescription\ntemperature_2m\tInstant\t\u00b0C (\u00b0F)\tAir temperature at 2 meters above ground\nsnowfall\tPreceding hour sum\tcm (inch)\tSnowfall amount of the preceding hour in centimeters. For the water equivalent in millimeter, divide by 7. E.g. 7 cm snow = 10 mm precipitation water equivalent\nrain\tPreceding hour sum\tmm (inch)\tRain from large scale weather systems of the preceding hour in millimeter\nshowers\tPreceding hour sum\tmm (inch)\tShowers from convective precipitation in millimeters from the preceding hour\nweathercode\tInstant\tWMO code\tWeather condition as a numeric code. Follow WMO weather interpretation codes. See table below for details.\nsnow_depth\tInstant\tmeters\tSnow depth on the ground\nfreezinglevel_height\tInstant\tmeters\tAltitude above sea level of the 0\u00b0C level\nvisibility\tInstant\tmeters\tViewing distance in meters. Influenced by low clouds, humidity and aerosols. Maximum visibility is approximately 24 km.", 65 | "question_key": "question", 66 | "output_key": "output", 67 | "_type": "api_chain" 68 | } -------------------------------------------------------------------------------- /chains/hello-world/README.md: -------------------------------------------------------------------------------- 1 | # Hello World 2 | 3 | ## Description 4 | 5 | A simple LLMChain that takes in a topic and generates a joke about it. 

## Chain type

LLMChain

## Input Variables

- `topic`: topic to generate a joke about
--------------------------------------------------------------------------------
/chains/hello-world/chain.json:
--------------------------------------------------------------------------------
{
    "memory": null,
    "verbose": false,
    "prompt": {
        "input_variables": [
            "topic"
        ],
        "output_parser": null,
        "template": "Tell me a joke about {topic}:",
        "template_format": "f-string",
        "_type": "prompt"
    },
    "llm": {
        "model_name": "text-davinci-003",
        "temperature": 0.9,
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
        "n": 1,
        "best_of": 1,
        "request_timeout": null,
        "logit_bias": {},
        "_type": "openai"
    },
    "output_key": "text",
    "_type": "llm_chain"
}
--------------------------------------------------------------------------------
/chains/llm-bash/README.md:
--------------------------------------------------------------------------------
# BashChain

## Description

The LLMBashChain takes in a task and generates a series of bash commands that will accomplish it.

## Chain type

LLMBashChain

## Input Variables

question: a question about how to perform a task in bash.
--------------------------------------------------------------------------------
/chains/llm-bash/chain.json:
--------------------------------------------------------------------------------
{
    "memory": null,
    "verbose": true,
    "llm": {
        "model_name": "text-davinci-003",
        "temperature": 0.0,
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
        "n": 1,
        "best_of": 1,
        "request_timeout": null,
        "logit_bias": {},
        "_type": "openai"
    },
    "input_key": "question",
    "output_key": "answer",
    "prompt": {
        "input_variables": [
            "question"
        ],
        "output_parser": null,
        "template": "If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: \"copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'\"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}",
        "template_format": "f-string",
        "_type": "prompt"
    },
    "_type": "llm_bash_chain"
}
--------------------------------------------------------------------------------
/chains/llm-checker/README.md:
--------------------------------------------------------------------------------
# LLMCheckerChain

## Description

The LLMCheckerChain is designed to generate better answers to factual questions. It first generates a draft answer to the question, then asks the model to list the assumptions behind that draft, then asks it to check whether each assertion is true or false, explaining why when one is false. Finally, the model is prompted to revise its answer based on the above checks and assertions.
6 | 7 | ## Chain type 8 | 9 | LLMCheckerChain 10 | 11 | ## Input Variables 12 | 13 | question: the question to answer -------------------------------------------------------------------------------- /chains/llm-checker/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": true, 4 | "llm": { 5 | "model_name": "text-davinci-003", 6 | "temperature": 0.7, 7 | "max_tokens": 256, 8 | "top_p": 1, 9 | "frequency_penalty": 0, 10 | "presence_penalty": 0, 11 | "n": 1, 12 | "best_of": 1, 13 | "request_timeout": null, 14 | "logit_bias": {}, 15 | "_type": "openai" 16 | }, 17 | "create_draft_answer_prompt": { 18 | "input_variables": [ 19 | "question" 20 | ], 21 | "output_parser": null, 22 | "template": "{question}\n\n", 23 | "template_format": "f-string", 24 | "_type": "prompt" 25 | }, 26 | "list_assertions_prompt": { 27 | "input_variables": [ 28 | "statement" 29 | ], 30 | "output_parser": null, 31 | "template": "Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n", 32 | "template_format": "f-string", 33 | "_type": "prompt" 34 | }, 35 | "check_assertions_prompt": { 36 | "input_variables": [ 37 | "assertions" 38 | ], 39 | "output_parser": null, 40 | "template": "Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n", 41 | "template_format": "f-string", 42 | "_type": "prompt" 43 | }, 44 | "revised_answer_prompt": { 45 | "input_variables": [ 46 | "checked_assertions", 47 | "question" 48 | ], 49 | "output_parser": null, 50 | "template": "{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", 51 | "template_format": "f-string", 52 | "_type": "prompt" 53 | }, 54 | "input_key": "query", 55 | "output_key": "result", 56 | "_type": "llm_checker_chain" 57 | } -------------------------------------------------------------------------------- /chains/llm-math/README.md: -------------------------------------------------------------------------------- 1 | # LLM Math 2 | 3 | 4 | ## Description 5 | 6 | A LLMMathChain that uses LLMs and the Python REPL to do complex word math problems. 7 | 8 | ## Chain type 9 | 10 | LLMMathChain 11 | 12 | ## Input Variables 13 | 14 | question: math problem to be solved. -------------------------------------------------------------------------------- /chains/llm-math/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": true, 4 | "llm": { 5 | "model_name": "text-davinci-003", 6 | "temperature": 0.0, 7 | "max_tokens": 256, 8 | "top_p": 1, 9 | "frequency_penalty": 0, 10 | "presence_penalty": 0, 11 | "n": 1, 12 | "best_of": 1, 13 | "request_timeout": null, 14 | "logit_bias": {}, 15 | "_type": "openai" 16 | }, 17 | "prompt": { 18 | "input_variables": [ 19 | "question" 20 | ], 21 | "output_parser": null, 22 | "template": "You are GPT-3, and you can't do math.\n\nYou can do basic math, and your memorization abilities are impressive, but you can't do any complex calculations that a human could not do in their head. You also have an annoying tendency to just make up highly specific, but wrong, answers.\n\nSo we hooked you up to a Python 3 kernel, and now you can execute code. 
If anyone gives you a hard math problem, just use this format and we\u2019ll take care of the rest:\n\nQuestion: ${{Question with hard calculation.}}\n```python\n${{Code that prints what you need to know}}\n```\n```output\n${{Output of your code}}\n```\nAnswer: ${{Answer}}\n\nOtherwise, use this simpler format:\n\nQuestion: ${{Question without hard calculation}}\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```python\nprint(37593 * 67)\n```\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n",
        "template_format": "f-string",
        "_type": "prompt"
    },
    "input_key": "question",
    "output_key": "answer",
    "_type": "llm_math_chain"
}
--------------------------------------------------------------------------------
/chains/llm-requests/README.md:
--------------------------------------------------------------------------------
# LLMRequestsChain

## Description

The LLMRequestsChain extracts answers from the HTML returned by a URL. Given a URL and a query, the chain will extract the answer to the query from the text of the response, or return "not found" if the information is not contained in the response text.

## Chain type

LLMRequestsChain

## Input Variables

url: the URL to fetch
query: the query to extract information from the fetched results.
--------------------------------------------------------------------------------
/chains/llm-requests/chain.json:
--------------------------------------------------------------------------------
{
    "memory": null,
    "verbose": false,
    "llm_chain": {
        "memory": null,
        "verbose": false,
        "prompt": {
            "input_variables": [
                "query",
                "requests_result"
            ],
            "output_parser": null,
            "template": "Between >>> and <<< are the raw search result text from google.\nExtract the answer to the question '{query}' or say \"not found\" if the information is not contained.\nUse the format\nExtracted:<answer or \"not found\">\n>>> {requests_result} <<<\nExtracted:",
            "template_format": "f-string",
            "_type": "prompt"
        },
        "llm": {
            "model_name": "text-davinci-003",
            "temperature": 0.0,
            "max_tokens": 256,
            "top_p": 1,
            "frequency_penalty": 0,
            "presence_penalty": 0,
            "n": 1,
            "best_of": 1,
            "request_timeout": null,
            "logit_bias": {},
            "_type": "openai"
        },
        "output_key": "text",
        "_type": "llm_chain"
    },
    "text_length": 8000,
    "requests_key": "requests_result",
    "input_key": "url",
    "output_key": "output",
    "_type": "llm_requests_chain"
}
--------------------------------------------------------------------------------
/chains/pal/math/README.md:
--------------------------------------------------------------------------------
# PAL Math Chain

## Description

The PALChain implements Program-Aided Language Models, as described in https://arxiv.org/pdf/2211.10435.pdf. It shows the model examples of math questions posed in plain text with answers written as Python functions, then asks it to answer the input question in the same way.
6 | 7 | ## Chain type 8 | 9 | PALChain 10 | 11 | ## Input Variables 12 | 13 | question: a math problem in plain text -------------------------------------------------------------------------------- /chains/pal/math/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": true, 4 | "llm": { 5 | "model_name": "code-davinci-002", 6 | "temperature": 0.0, 7 | "max_tokens": 512, 8 | "top_p": 1, 9 | "frequency_penalty": 0, 10 | "presence_penalty": 0, 11 | "n": 1, 12 | "best_of": 1, 13 | "request_timeout": null, 14 | "logit_bias": {}, 15 | "_type": "openai" 16 | }, 17 | "prompt": { 18 | "input_variables": [ 19 | "question" 20 | ], 21 | "output_parser": null, 22 | "template": "Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\"\"\"\n money_initial = 23\n bagels = 5\n bagel_cost = 3\n money_spent = bagels * bagel_cost\n money_left = money_initial - money_spent\n result = money_left\n return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\"\"\"\n golf_balls_initial = 58\n golf_balls_lost_tuesday = 23\n golf_balls_lost_wednesday = 2\n golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n result = golf_balls_left\n return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\"\"\"\n computers_initial = 9\n computers_per_day = 5\n num_days = 4 # 4 days between monday and thursday\n computers_added = computers_per_day * num_days\n computers_total = computers_initial + computers_added\n result = computers_total\n return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\"\"\"\n toys_initial = 5\n mom_toys = 2\n dad_toys = 2\n total_received = mom_toys + dad_toys\n total_toys = toys_initial + total_received\n result = total_toys\n return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\"\"\"\n jason_lollipops_initial = 20\n jason_lollipops_after = 12\n denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n result = denny_lollipops\n return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. 
If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\"\"\"\n leah_chocolates = 32\n sister_chocolates = 42\n total_chocolates = leah_chocolates + sister_chocolates\n chocolates_eaten = 35\n chocolates_left = total_chocolates - chocolates_eaten\n result = chocolates_left\n return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\"\"\"\n cars_initial = 3\n cars_arrived = 2\n total_cars = cars_initial + cars_arrived\n result = total_cars\n return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\"\"\"\n trees_initial = 15\n trees_after = 21\n trees_added = trees_after - trees_initial\n result = trees_added\n return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n", 23 | "template_format": "f-string", 24 | "_type": "prompt" 25 | }, 26 | "stop": "\n\n", 27 | "get_answer_expr": "print(solution())", 28 | "output_key": "result", 29 | "_type": "pal_chain" 30 | } -------------------------------------------------------------------------------- /chains/qa_with_sources/map-reduce/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "combine_document_chain": { 36 | "memory": null, 37 | "verbose": false, 38 | "input_key": "input_documents", 39 | "output_key": "output_text", 40 | "llm_chain": { 41 | "memory": null, 42 | "verbose": false, 43 | "prompt": { 44 | "input_variables": [ 45 | "summaries", 46 | "question" 47 | ], 48 | "output_parser": null, 49 | "template": "Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. 
Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. 
\n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 50 | "template_format": "f-string", 51 | "_type": "prompt" 52 | }, 53 | "llm": { 54 | "model_name": "text-davinci-003", 55 | "temperature": 0.0, 56 | "max_tokens": 256, 57 | "top_p": 1, 58 | "frequency_penalty": 0, 59 | "presence_penalty": 0, 60 | "n": 1, 61 | "best_of": 1, 62 | "request_timeout": null, 63 | "logit_bias": {}, 64 | "_type": "openai" 65 | }, 66 | "output_key": "text", 67 | "_type": "llm_chain" 68 | }, 69 | "document_prompt": { 70 | "input_variables": [ 71 | "page_content", 72 | "source" 73 | ], 74 | "output_parser": null, 75 | "template": "Content: {page_content}\nSource: {source}", 76 | "template_format": "f-string", 77 | "_type": "prompt" 78 | }, 79 | "document_variable_name": "summaries", 80 | "_type": "stuff_documents_chain" 81 | }, 82 | "collapse_document_chain": null, 83 | "document_variable_name": "context", 84 | "return_intermediate_steps": true, 85 | "_type": "map_reduce_documents_chain" 86 | } -------------------------------------------------------------------------------- /chains/qa_with_sources/map-rerank/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context", 12 | "question" 13 | ], 14 | "output_parser": { 15 | "regex": "(.*?)\\nScore: (.*)", 16 | "output_keys": [ 17 | "answer", 18 | "score" 19 | ], 20 | "default_output_key": null, 21 | "_type": "regex_parser" 22 | }, 23 | "template": "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\nIn addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:\n\nQuestion: [question here]\nHelpful Answer: [answer here]\nScore: [score between 0 and 100]\n\nHow to determine the score:\n- Higher is a better answer\n- Better responds fully to the asked question, with sufficient level of detail\n- If you do not know the answer based on the context, that should be a score of 0\n- Don't be overconfident!\n\nExample #1\n\nContext:\n---------\nApples are red\n---------\nQuestion: what color are apples?\nHelpful Answer: red\nScore: 100\n\nExample #2\n\nContext:\n---------\nit was night and the witness forgot his glasses. 
he was not sure if it was a sports car or an suv\n---------\nQuestion: what type was the car?\nHelpful Answer: a sports car or an suv\nScore: 60\n\nExample #3\n\nContext:\n---------\nPears are either red or orange\n---------\nQuestion: what color are apples?\nHelpful Answer: This document does not answer the question\nScore: 0\n\nBegin!\n\nContext:\n---------\n{context}\n---------\nQuestion: {question}\nHelpful Answer:", 24 | "template_format": "f-string", 25 | "_type": "prompt" 26 | }, 27 | "llm": { 28 | "model_name": "text-davinci-003", 29 | "temperature": 0.0, 30 | "max_tokens": 256, 31 | "top_p": 1, 32 | "frequency_penalty": 0, 33 | "presence_penalty": 0, 34 | "n": 1, 35 | "best_of": 1, 36 | "request_timeout": null, 37 | "logit_bias": {}, 38 | "_type": "openai" 39 | }, 40 | "output_key": "text", 41 | "_type": "llm_chain" 42 | }, 43 | "document_variable_name": "context", 44 | "rank_key": "score", 45 | "answer_key": "answer", 46 | "metadata_keys": [ 47 | "source" 48 | ], 49 | "return_intermediate_steps": true, 50 | "_type": "map_rerank_documents_chain" 51 | } -------------------------------------------------------------------------------- /chains/qa_with_sources/refine/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "initial_llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context_str", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Context information is below. \n---------------------\n{context_str}\n---------------------\nGiven the context information and not prior knowledge, answer the question: {question}\n", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "refine_llm_chain": { 36 | "memory": null, 37 | "verbose": false, 38 | "prompt": { 39 | "input_variables": [ 40 | "question", 41 | "existing_answer", 42 | "context_str" 43 | ], 44 | "output_parser": null, 45 | "template": "The original question is as follows: {question}\nWe have provided an existing answer, including sources: {existing_answer}\nWe have the opportunity to refine the existing answer(only if needed) with some more context below.\n------------\n{context_str}\n------------\nGiven the new context, refine the original answer to better answer the question. If you do update it, please update the sources as well. 
If the context isn't useful, return the original answer.", 46 | "template_format": "f-string", 47 | "_type": "prompt" 48 | }, 49 | "llm": { 50 | "model_name": "text-davinci-003", 51 | "temperature": 0.0, 52 | "max_tokens": 256, 53 | "top_p": 1, 54 | "frequency_penalty": 0, 55 | "presence_penalty": 0, 56 | "n": 1, 57 | "best_of": 1, 58 | "request_timeout": null, 59 | "logit_bias": {}, 60 | "_type": "openai" 61 | }, 62 | "output_key": "text", 63 | "_type": "llm_chain" 64 | }, 65 | "document_variable_name": "context_str", 66 | "initial_response_name": "existing_answer", 67 | "document_prompt": { 68 | "input_variables": [ 69 | "page_content", 70 | "source" 71 | ], 72 | "output_parser": null, 73 | "template": "Content: {page_content}\nSource: {source}", 74 | "template_format": "f-string", 75 | "_type": "prompt" 76 | }, 77 | "return_intermediate_steps": false, 78 | "_type": "refine_documents_chain" 79 | } -------------------------------------------------------------------------------- /chains/qa_with_sources/stuff/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "summaries", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. 
\n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. 
\n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "document_prompt": { 36 | "input_variables": [ 37 | "page_content", 38 | "source" 39 | ], 40 | "output_parser": null, 41 | "template": "Content: {page_content}\nSource: {source}", 42 | "template_format": "f-string", 43 | "_type": "prompt" 44 | }, 45 | "document_variable_name": "summaries", 46 | "_type": "stuff_documents_chain" 47 | } -------------------------------------------------------------------------------- /chains/question_answering/map-reduce/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "combine_document_chain": { 36 | "memory": null, 37 | "verbose": false, 38 | "input_key": "input_documents", 39 | "output_key": "output_text", 40 | "llm_chain": { 41 | "memory": null, 42 | "verbose": false, 43 | "prompt": { 44 | "input_variables": [ 45 | "summaries", 46 | "question" 47 | ], 48 | "output_parser": null, 49 | "template": "Given the following extracted parts of a long document and a question, create a final answer. \nIf you don't know the answer, just say that you don't know. 
Don't try to make up an answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\n\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\n\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\n=========\nFINAL ANSWER: This Agreement is governed by English law.\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\n\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. 
\n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\n\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\n\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 50 | "template_format": "f-string", 51 | "_type": "prompt" 52 | }, 53 | "llm": { 54 | "model_name": "text-davinci-003", 55 | "temperature": 0.0, 56 | "max_tokens": 256, 57 | "top_p": 1, 58 | "frequency_penalty": 0, 59 | "presence_penalty": 0, 60 | "n": 1, 61 | "best_of": 1, 62 | "request_timeout": null, 63 | "logit_bias": {}, 64 | "_type": "openai" 65 | }, 66 | "output_key": "text", 67 | "_type": "llm_chain" 68 | }, 69 | "document_prompt": { 70 | "input_variables": [ 71 | "page_content" 72 | ], 73 | "output_parser": null, 74 | "template": "{page_content}", 75 | "template_format": "f-string", 76 | "_type": "prompt" 77 | }, 78 | "document_variable_name": "summaries", 79 | "_type": "stuff_documents_chain" 80 | }, 81 | "collapse_document_chain": null, 82 | "document_variable_name": "context", 83 | "return_intermediate_steps": false, 84 | "_type": "map_reduce_documents_chain" 85 | } -------------------------------------------------------------------------------- /chains/question_answering/map-rerank/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context", 12 | "question" 13 | ], 14 | "output_parser": { 15 | "regex": "(.*?)\\nScore: (.*)", 16 | "output_keys": [ 17 | "answer", 18 | "score" 19 | ], 20 | "default_output_key": null, 21 | "_type": "regex_parser" 22 | }, 23 | "template": "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\nIn addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:\n\nQuestion: [question here]\nHelpful Answer: [answer here]\nScore: [score between 0 and 100]\n\nHow to determine the score:\n- Higher is a better answer\n- Better responds fully to the asked question, with sufficient level of detail\n- If you do not know the answer based on the context, that should be a score of 0\n- Don't be overconfident!\n\nExample #1\n\nContext:\n---------\nApples are red\n---------\nQuestion: what color are apples?\nHelpful Answer: red\nScore: 100\n\nExample #2\n\nContext:\n---------\nit was night and the witness forgot his glasses. 
he was not sure if it was a sports car or an suv\n---------\nQuestion: what type was the car?\nHelpful Answer: a sports car or an suv\nScore: 60\n\nExample #3\n\nContext:\n---------\nPears are either red or orange\n---------\nQuestion: what color are apples?\nHelpful Answer: This document does not answer the question\nScore: 0\n\nBegin!\n\nContext:\n---------\n{context}\n---------\nQuestion: {question}\nHelpful Answer:", 24 | "template_format": "f-string", 25 | "_type": "prompt" 26 | }, 27 | "llm": { 28 | "model_name": "text-davinci-003", 29 | "temperature": 0.0, 30 | "max_tokens": 256, 31 | "top_p": 1, 32 | "frequency_penalty": 0, 33 | "presence_penalty": 0, 34 | "n": 1, 35 | "best_of": 1, 36 | "request_timeout": null, 37 | "logit_bias": {}, 38 | "_type": "openai" 39 | }, 40 | "output_key": "text", 41 | "_type": "llm_chain" 42 | }, 43 | "document_variable_name": "context", 44 | "rank_key": "score", 45 | "answer_key": "answer", 46 | "metadata_keys": null, 47 | "return_intermediate_steps": true, 48 | "_type": "map_rerank_documents_chain" 49 | } -------------------------------------------------------------------------------- /chains/question_answering/refine/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "initial_llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context_str", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Context information is below. \n---------------------\n{context_str}\n---------------------\nGiven the context information and not prior knowledge, answer the question: {question}\n", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "refine_llm_chain": { 36 | "memory": null, 37 | "verbose": false, 38 | "prompt": { 39 | "input_variables": [ 40 | "question", 41 | "existing_answer", 42 | "context_str" 43 | ], 44 | "output_parser": null, 45 | "template": "The original question is as follows: {question}\nWe have provided an existing answer: {existing_answer}\nWe have the opportunity to refine the existing answer(only if needed) with some more context below.\n------------\n{context_str}\n------------\nGiven the new context, refine the original answer to better answer the question. 
If the context isn't useful, return the original answer.", 46 | "template_format": "f-string", 47 | "_type": "prompt" 48 | }, 49 | "llm": { 50 | "model_name": "text-davinci-003", 51 | "temperature": 0.0, 52 | "max_tokens": 256, 53 | "top_p": 1, 54 | "frequency_penalty": 0, 55 | "presence_penalty": 0, 56 | "n": 1, 57 | "best_of": 1, 58 | "request_timeout": null, 59 | "logit_bias": {}, 60 | "_type": "openai" 61 | }, 62 | "output_key": "text", 63 | "_type": "llm_chain" 64 | }, 65 | "document_variable_name": "context_str", 66 | "initial_response_name": "existing_answer", 67 | "document_prompt": { 68 | "input_variables": [ 69 | "page_content" 70 | ], 71 | "output_parser": null, 72 | "template": "{page_content}", 73 | "template_format": "f-string", 74 | "_type": "prompt" 75 | }, 76 | "return_intermediate_steps": false, 77 | "_type": "refine_documents_chain" 78 | } -------------------------------------------------------------------------------- /chains/question_answering/stuff/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "context", 12 | "question" 13 | ], 14 | "output_parser": null, 15 | "template": "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", 16 | "template_format": "f-string", 17 | "_type": "prompt" 18 | }, 19 | "llm": { 20 | "model_name": "text-davinci-003", 21 | "temperature": 0.0, 22 | "max_tokens": 256, 23 | "top_p": 1, 24 | "frequency_penalty": 0, 25 | "presence_penalty": 0, 26 | "n": 1, 27 | "best_of": 1, 28 | "request_timeout": null, 29 | "logit_bias": {}, 30 | "_type": "openai" 31 | }, 32 | "output_key": "text", 33 | "_type": "llm_chain" 34 | }, 35 | "document_prompt": { 36 | "input_variables": [ 37 | "page_content" 38 | ], 39 | "output_parser": null, 40 | "template": "{page_content}", 41 | "template_format": "f-string", 42 | "_type": "prompt" 43 | }, 44 | "document_variable_name": "context", 45 | "_type": "stuff_documents_chain" 46 | } -------------------------------------------------------------------------------- /chains/readme_template.md: -------------------------------------------------------------------------------- 1 | # {Title} 2 | 3 | ## Description 4 | 5 | ## Chain type 6 | 7 | ## Input Variables 8 | -------------------------------------------------------------------------------- /chains/summarize/map-reduce/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "text" 12 | ], 13 | "output_parser": null, 14 | "template": "Write a concise summary of the following:\n\n\n\"{text}\"\n\n\nCONCISE SUMMARY:", 15 | "template_format": "f-string", 16 | "_type": "prompt" 17 | }, 18 | "llm": { 19 | "model_name": "text-davinci-003", 20 | "temperature": 0.0, 21 | "max_tokens": 256, 22 | "top_p": 1, 23 | "frequency_penalty": 0, 24 | "presence_penalty": 0, 25 | "n": 1, 26 | "best_of": 1, 27 | "request_timeout": null, 28 | "logit_bias": {}, 29 | "_type": "openai" 30 | }, 31 | "output_key": 
"text", 32 | "_type": "llm_chain" 33 | }, 34 | "combine_document_chain": { 35 | "memory": null, 36 | "verbose": false, 37 | "input_key": "input_documents", 38 | "output_key": "output_text", 39 | "llm_chain": { 40 | "memory": null, 41 | "verbose": false, 42 | "prompt": { 43 | "input_variables": [ 44 | "text" 45 | ], 46 | "output_parser": null, 47 | "template": "Write a concise summary of the following:\n\n\n\"{text}\"\n\n\nCONCISE SUMMARY:", 48 | "template_format": "f-string", 49 | "_type": "prompt" 50 | }, 51 | "llm": { 52 | "model_name": "text-davinci-003", 53 | "temperature": 0.0, 54 | "max_tokens": 256, 55 | "top_p": 1, 56 | "frequency_penalty": 0, 57 | "presence_penalty": 0, 58 | "n": 1, 59 | "best_of": 1, 60 | "request_timeout": null, 61 | "logit_bias": {}, 62 | "_type": "openai" 63 | }, 64 | "output_key": "text", 65 | "_type": "llm_chain" 66 | }, 67 | "document_prompt": { 68 | "input_variables": [ 69 | "page_content" 70 | ], 71 | "output_parser": null, 72 | "template": "{page_content}", 73 | "template_format": "f-string", 74 | "_type": "prompt" 75 | }, 76 | "document_variable_name": "text", 77 | "_type": "stuff_documents_chain" 78 | }, 79 | "collapse_document_chain": null, 80 | "document_variable_name": "text", 81 | "return_intermediate_steps": false, 82 | "_type": "map_reduce_documents_chain" 83 | } -------------------------------------------------------------------------------- /chains/summarize/refine/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "initial_llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "text" 12 | ], 13 | "output_parser": null, 14 | "template": "Write a concise summary of the following:\n\n\n\"{text}\"\n\n\nCONCISE SUMMARY:", 15 | "template_format": "f-string", 16 | "_type": "prompt" 17 | }, 18 | "llm": { 19 | "model_name": "text-davinci-003", 20 | "temperature": 0.0, 21 | "max_tokens": 256, 22 | "top_p": 1, 23 | "frequency_penalty": 0, 24 | "presence_penalty": 0, 25 | "n": 1, 26 | "best_of": 1, 27 | "request_timeout": null, 28 | "logit_bias": {}, 29 | "_type": "openai" 30 | }, 31 | "output_key": "text", 32 | "_type": "llm_chain" 33 | }, 34 | "refine_llm_chain": { 35 | "memory": null, 36 | "verbose": false, 37 | "prompt": { 38 | "input_variables": [ 39 | "existing_answer", 40 | "text" 41 | ], 42 | "output_parser": null, 43 | "template": "Your job is to produce a final summary\nWe have provided an existing summary up to a certain point: {existing_answer}\nWe have the opportunity to refine the existing summary(only if needed) with some more context below.\n------------\n{text}\n------------\nGiven the new context, refine the original summaryIf the context isn't useful, return the original summary.", 44 | "template_format": "f-string", 45 | "_type": "prompt" 46 | }, 47 | "llm": { 48 | "model_name": "text-davinci-003", 49 | "temperature": 0.0, 50 | "max_tokens": 256, 51 | "top_p": 1, 52 | "frequency_penalty": 0, 53 | "presence_penalty": 0, 54 | "n": 1, 55 | "best_of": 1, 56 | "request_timeout": null, 57 | "logit_bias": {}, 58 | "_type": "openai" 59 | }, 60 | "output_key": "text", 61 | "_type": "llm_chain" 62 | }, 63 | "document_variable_name": "text", 64 | "initial_response_name": "existing_answer", 65 | "document_prompt": { 66 | "input_variables": [ 67 | "page_content" 68 | ], 69 | "output_parser": null, 70 | "template": "{page_content}", 71 | 
"template_format": "f-string", 72 | "_type": "prompt" 73 | }, 74 | "return_intermediate_steps": false, 75 | "_type": "refine_documents_chain" 76 | } -------------------------------------------------------------------------------- /chains/summarize/stuff/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "input_key": "input_documents", 5 | "output_key": "output_text", 6 | "llm_chain": { 7 | "memory": null, 8 | "verbose": false, 9 | "prompt": { 10 | "input_variables": [ 11 | "text" 12 | ], 13 | "output_parser": null, 14 | "template": "Write a concise summary of the following:\n\n\n\"{text}\"\n\n\nCONCISE SUMMARY:", 15 | "template_format": "f-string", 16 | "_type": "prompt" 17 | }, 18 | "llm": { 19 | "model_name": "text-davinci-003", 20 | "temperature": 0.0, 21 | "max_tokens": 256, 22 | "top_p": 1, 23 | "frequency_penalty": 0, 24 | "presence_penalty": 0, 25 | "n": 1, 26 | "best_of": 1, 27 | "request_timeout": null, 28 | "logit_bias": {}, 29 | "_type": "openai" 30 | }, 31 | "output_key": "text", 32 | "_type": "llm_chain" 33 | }, 34 | "document_prompt": { 35 | "input_variables": [ 36 | "page_content" 37 | ], 38 | "output_parser": null, 39 | "template": "{page_content}", 40 | "template_format": "f-string", 41 | "_type": "prompt" 42 | }, 43 | "document_variable_name": "text", 44 | "_type": "stuff_documents_chain" 45 | } -------------------------------------------------------------------------------- /chains/vector-db-qa/map-reduce/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "k": 4, 5 | "combine_documents_chain": { 6 | "memory": null, 7 | "verbose": false, 8 | "input_key": "input_documents", 9 | "output_key": "output_text", 10 | "llm_chain": { 11 | "memory": null, 12 | "verbose": false, 13 | "prompt": { 14 | "input_variables": [ 15 | "context", 16 | "question" 17 | ], 18 | "output_parser": null, 19 | "template": "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:", 20 | "template_format": "f-string", 21 | "_type": "prompt" 22 | }, 23 | "llm": { 24 | "model_name": "text-davinci-003", 25 | "temperature": 0.7, 26 | "max_tokens": 256, 27 | "top_p": 1, 28 | "frequency_penalty": 0, 29 | "presence_penalty": 0, 30 | "n": 1, 31 | "best_of": 1, 32 | "request_timeout": null, 33 | "logit_bias": {}, 34 | "_type": "openai" 35 | }, 36 | "output_key": "text", 37 | "_type": "llm_chain" 38 | }, 39 | "combine_document_chain": { 40 | "memory": null, 41 | "verbose": false, 42 | "input_key": "input_documents", 43 | "output_key": "output_text", 44 | "llm_chain": { 45 | "memory": null, 46 | "verbose": false, 47 | "prompt": { 48 | "input_variables": [ 49 | "summaries", 50 | "question" 51 | ], 52 | "output_parser": null, 53 | "template": "Given the following extracted parts of a long document and a question, create a final answer. \nIf you don't know the answer, just say that you don't know. 
Don't try to make up an answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\n\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\n\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\n=========\nFINAL ANSWER: This Agreement is governed by English law.\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\n\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. 
\n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\n\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\n\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 54 | "template_format": "f-string", 55 | "_type": "prompt" 56 | }, 57 | "llm": { 58 | "model_name": "text-davinci-003", 59 | "temperature": 0.7, 60 | "max_tokens": 256, 61 | "top_p": 1, 62 | "frequency_penalty": 0, 63 | "presence_penalty": 0, 64 | "n": 1, 65 | "best_of": 1, 66 | "request_timeout": null, 67 | "logit_bias": {}, 68 | "_type": "openai" 69 | }, 70 | "output_key": "text", 71 | "_type": "llm_chain" 72 | }, 73 | "document_prompt": { 74 | "input_variables": [ 75 | "page_content" 76 | ], 77 | "output_parser": null, 78 | "template": "{page_content}", 79 | "template_format": "f-string", 80 | "_type": "prompt" 81 | }, 82 | "document_variable_name": "summaries", 83 | "_type": "stuff_documents_chain" 84 | }, 85 | "collapse_document_chain": null, 86 | "document_variable_name": "context", 87 | "return_intermediate_steps": false, 88 | "_type": "map_reduce_documents_chain" 89 | }, 90 | "input_key": "query", 91 | "output_key": "result", 92 | "return_source_documents": false, 93 | "search_kwargs": {}, 94 | "_type": "vector_db_qa" 95 | } -------------------------------------------------------------------------------- /chains/vector-db-qa/stuff/chain.json: -------------------------------------------------------------------------------- 1 | { 2 | "memory": null, 3 | "verbose": false, 4 | "k": 4, 5 | "combine_documents_chain": { 6 | "memory": null, 7 | "verbose": false, 8 | "input_key": "input_documents", 9 | "output_key": "output_text", 10 | "llm_chain": { 11 | "memory": null, 12 | "verbose": false, 13 | "prompt": { 14 | "input_variables": [ 15 | "context", 16 | "question" 17 | ], 18 | "output_parser": null, 19 | "template": "Use the following pieces of context to answer the question at the end. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", 20 | "template_format": "f-string", 21 | "_type": "prompt" 22 | }, 23 | "llm": { 24 | "model_name": "text-davinci-003", 25 | "temperature": 0.7, 26 | "max_tokens": 256, 27 | "top_p": 1, 28 | "frequency_penalty": 0, 29 | "presence_penalty": 0, 30 | "n": 1, 31 | "best_of": 1, 32 | "request_timeout": null, 33 | "logit_bias": {}, 34 | "_type": "openai" 35 | }, 36 | "output_key": "text", 37 | "_type": "llm_chain" 38 | }, 39 | "document_prompt": { 40 | "input_variables": [ 41 | "page_content" 42 | ], 43 | "output_parser": null, 44 | "template": "{page_content}", 45 | "template_format": "f-string", 46 | "_type": "prompt" 47 | }, 48 | "document_variable_name": "context", 49 | "_type": "stuff_documents_chain" 50 | }, 51 | "input_key": "query", 52 | "output_key": "result", 53 | "return_source_documents": false, 54 | "search_kwargs": {}, 55 | "_type": "vector_db_qa" 56 | } -------------------------------------------------------------------------------- /ci_scripts/file-check.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | 3 | from langchain.prompts import load_prompt 4 | 5 | BASE_FOLDER = Path("prompts") 6 | folders = BASE_FOLDER.glob("**") 7 | 8 | 9 | def check_files(files): 10 | file_names = [f.name for f in files] 11 | if "README.md" not in file_names: 12 | raise ValueError(f"Expected to find a README.md file, but found {files}") 13 | other_files = [file for file in files if file.name != "README.md"] 14 | for other_file in other_files: 15 | if other_file.suffix in (".json", ".yaml"): 16 | load_prompt(other_file) 17 | # TODO: testing for python files 18 | 19 | 20 | def check_all_folders(): 21 | for folder in folders: 22 | folder_path = Path(folder) 23 | files = [x for x in folder_path.iterdir() if x.is_file()] 24 | if len(files) > 0: 25 | try: 26 | check_files(files) 27 | except Exception as e: 28 | raise ValueError(f"Found error with {folder}: {e}") 29 | 30 | 31 | if __name__ == "__main__": 32 | check_all_folders() 33 | -------------------------------------------------------------------------------- /prompts/README.md: -------------------------------------------------------------------------------- 1 | # Prompts 2 | 3 | This directory covers loading and uploading of prompts. 4 | Each sub-directory covers a different use case, and has not only relevant prompts for that use case but also a README file describing how to best use that prompt. 5 | 6 | ## Loading 7 | 8 | All prompts can be loaded from LangChain by specifying the desired path, and adding the `lc://` prefix. The path should be relative to the `langchain-hub` repo. 9 | 10 | For example, to load the prompt at the path `langchain-hub/prompts/qa/stuff/basic.yaml`, the path you want to specify is `lc://prompts/qa/stuff/basic.yaml` 11 | 12 | Once you have that path, you can load it in the following manner: 13 | 14 | ```python 15 | from langchain.prompts import load_prompt 16 | 17 | prompt = load_prompt('lc://prompts/qa/stuff/basic.yaml') 18 | ``` 19 | 20 | ## Uploading 21 | 22 | To upload a prompt to the LangChainHub, you must upload 2 files: 23 | 1. The prompt. There are 3 supported file formats for prompts: `json`, `yaml`, and `python`. The suggested options are `json` and `yaml`, but we provide `python` as an option for more flexibility.
Please see the sections below for instructions on uploading each format. 24 | 2. Associated README file for the prompt. This provides a high-level description of the prompt, its usage patterns, and the chains it is compatible with. For more details, check out langchain-hub/readme_template. 25 | If you are uploading a prompt to an existing directory, it should already have a README file and so this should not be necessary. 26 | 27 | 28 | The prompts on the hub are organized by use case. The use cases are reflected in the directory structure and names, and each separate directory represents a different use case. You should upload your prompt file to a folder in the appropriate use case section. 29 | 30 | 31 | If adding a prompt to an existing use case folder, then make sure that the prompt: 32 | 1. serves the same use case as the existing prompt(s) in that folder, and 33 | 2. has the same inputs as the existing prompt(s). 34 | 35 | A litmus test to make sure that multiple prompts belong in the same folder: the existing README file for that folder should also apply to the new prompt being added. 36 | 37 | 38 | ### Supported file formats 39 | 40 | #### `json` 41 | To get a properly formatted json file, if you have a prompt in memory in Python, you can run: 42 | ```python 43 | prompt.save("file_name.json") 44 | ``` 45 | 46 | Replace `"file_name"` with the desired name of the file. 47 | 48 | #### `yaml` 49 | To get a properly formatted yaml file, if you have a prompt in memory in Python, you can run: 50 | ```python 51 | prompt.save("file_name.yaml") 52 | ``` 53 | 54 | Replace `"file_name"` with the desired name of the file. 55 | 56 | 57 | #### `python` 58 | To get a properly formatted Python file, you should upload a Python file that exposes a `PROMPT` variable. 59 | This is the variable that will be loaded. 60 | This variable should be an instance of a subclass of BasePromptTemplate in LangChain. 61 | -------------------------------------------------------------------------------- /prompts/api/api_response/README.md: -------------------------------------------------------------------------------- 1 | # Description of API Response Prompts 2 | 3 | Prompts used to convert the result of an API call into natural language. 4 | 5 | ## Inputs 6 | 7 | This is a description of the inputs that the prompt expects. 8 | 9 | 1. `api_docs`: Docs for the API being hit. 10 | 2. `question`: Question to be answered. 11 | 3. `api_url`: URL that was hit. 12 | 4. `api_response`: The response returned from hitting said URL. 13 | 14 | 15 | ## Usage 16 | 17 | Below is a code snippet for how to use the prompt. 18 | 19 | ```python 20 | from langchain.prompts import load_prompt 21 | from langchain.chains import APIChain 22 | 23 | llm = ... 24 | api_docs = ...
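# Illustrative only (not part of the original snippet): assuming the OpenAI wrapper and LangChain's bundled Open-Meteo docs, the placeholders above could be filled in as llm = OpenAI(temperature=0) and api_docs = open_meteo_docs.OPEN_METEO_DOCS.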
25 | prompt = load_prompt('lc://prompts/api/api_response/') 26 | chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_response_prompt=prompt) 27 | ``` 28 | 29 | -------------------------------------------------------------------------------- /prompts/api/api_response/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "api_docs", 4 | "question", 5 | "api_url", 6 | "api_response" 7 | ], 8 | "output_parser": null, 9 | "template": "You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:", 10 | "template_format": "f-string" 11 | } -------------------------------------------------------------------------------- /prompts/api/api_url/README.md: -------------------------------------------------------------------------------- 1 | # Description of API URL Prompts 2 | 3 | Prompts used to construct an API URL to answer a specific question. 4 | 5 | ## Inputs 6 | 7 | This is a description of the inputs that the prompt expects. 8 | 9 | 1. `api_docs`: The documentation for a given API. 10 | 2. `question`: The question being asked of the API. 11 | 12 | 13 | ## Usage 14 | 15 | Below is a code snippet for how to use the prompt. 16 | 17 | ```python 18 | from langchain.prompts import load_prompt 19 | from langchain.chains import APIChain 20 | 21 | llm = ... 22 | api_docs = ... 23 | prompt = load_prompt('lc://prompts/api/api_url/') 24 | chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_url_prompt=prompt) 25 | ``` 26 | -------------------------------------------------------------------------------- /prompts/api/api_url/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "api_docs", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/conversation/README.md: -------------------------------------------------------------------------------- 1 | # Description of Conversation Prompt 2 | 3 | Prompt designed to simulate a conversation and return a conversational response. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `history`: History of the conversation up to that point. 11 | 2. `input`: New user input. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt.
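In the snippet, `llm` is a placeholder for whichever LangChain LLM you have already constructed.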
17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains import ConversationChain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/conversation/') 24 | chain = ConversationChain(llm=llm, prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/conversation/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "history", 4 | "input" 5 | ], 6 | "output_parser": null, 7 | "template": "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/hello-world/README.md: -------------------------------------------------------------------------------- 1 | # Description of Hello World 2 | 3 | Basic prompt designed to be used as a test case; it simply instructs the LLM to say "Hello World". 4 | 5 | 6 | ## Inputs 7 | 8 | This prompt doesn't have any inputs. 9 | 10 | 11 | ## Usage 12 | 13 | Below is a code snippet for how to use the prompt. 14 | 15 | ```python 16 | from langchain.prompts import load_prompt 17 | from langchain.chains import LLMChain 18 | 19 | llm = ... 20 | prompt = load_prompt('lc://prompts/hello-world/') 21 | chain = LLMChain(llm=llm, prompt=prompt) 22 | ``` 23 | 24 | -------------------------------------------------------------------------------- /prompts/hello-world/prompt.yaml: -------------------------------------------------------------------------------- 1 | input_variables: [] 2 | output_parser: null 3 | template: 'Say hello world.' 4 | template_format: f-string 5 | -------------------------------------------------------------------------------- /prompts/llm_bash/README.md: -------------------------------------------------------------------------------- 1 | # Description of LLM Bash Prompt 2 | 3 | Prompt designed to convert natural language to a bash command. 4 | 5 | ## Inputs 6 | 7 | This is a description of the inputs that the prompt expects. 8 | 9 | 1. `question`: User question to be answered by writing a bash command. 10 | 11 | 12 | ## Usage 13 | 14 | Below is a code snippet for how to use the prompt. 15 | 16 | ```python 17 | from langchain.prompts import load_prompt 18 | from langchain.chains import LLMBashChain 19 | 20 | llm = ... 21 | prompt = load_prompt('lc://prompts/llm_bash/') 22 | chain = LLMBashChain(llm=llm, prompt=prompt) 23 | ``` 24 | 25 | -------------------------------------------------------------------------------- /prompts/llm_bash/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question" 4 | ], 5 | "output_parser": null, 6 | "template": "If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer.
Make sure to reason step by step, using this format:\n\nQuestion: \"copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'\"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}", 7 | "template_format": "f-string" 8 | } -------------------------------------------------------------------------------- /prompts/llm_math/README.md: -------------------------------------------------------------------------------- 1 | # Description of LLM Math Prompt 2 | 3 | Prompt designed to optionally output Python code to be run in order to better answer math questions. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `question`: User question to be answered. 11 | 12 | 13 | ## Usage 14 | 15 | Below is a code snippet for how to use the prompt. 16 | 17 | ```python 18 | from langchain.prompts import load_prompt 19 | from langchain.chains import LLMMathChain 20 | 21 | llm = ... 22 | prompt = load_prompt('lc://prompts/llm_math/') 23 | chain = LLMMathChain(llm=llm, prompt=prompt) 24 | ``` 25 | 26 | -------------------------------------------------------------------------------- /prompts/llm_math/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question" 4 | ], 5 | "output_parser": null, 6 | "template": "You are GPT-3, and you can't do math.\n\nYou can do basic math, and your memorization abilities are impressive, but you can't do any complex calculations that a human could not do in their head. You also have an annoying tendency to just make up highly specific, but wrong, answers.\n\nSo we hooked you up to a Python 3 kernel, and now you can execute code. If anyone gives you a hard math problem, just use this format and we\u2019ll take care of the rest:\n\nQuestion: ${{Question with hard calculation.}}\n```python\n${{Code that prints what you need to know}}\n```\n```output\n${{Output of your code}}\n```\nAnswer: ${{Answer}}\n\nOtherwise, use this simpler format:\n\nQuestion: ${{Question without hard calculation}}\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```python\nprint(37593 * 67)\n```\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n", 7 | "template_format": "f-string" 8 | } -------------------------------------------------------------------------------- /prompts/memory/summarize/README.md: -------------------------------------------------------------------------------- 1 | # Description of Summarize Memory Prompt 2 | 3 | Prompt designed to summarize conversation history. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `summary`: Existing summary of conversation up to this point. 11 | 2. `new_lines`: New lines of conversation to be added into the summary. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains import ConversationChain 21 | from langchain.chains.conversation.memory import ConversationSummaryMemory 22 | 23 | llm = ...
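# Illustrative only: any LangChain LLM can fill the placeholder above, e.g. llm = OpenAI(temperature=0).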
24 | prompt = load_prompt('lc://prompts/memory/summarize/') 25 | memory = ConversationSummaryMemory(llm=llm, prompt=prompt) 26 | chain = ConversationChain(llm=llm, memory=memory) 27 | ``` 28 | 29 | -------------------------------------------------------------------------------- /prompts/memory/summarize/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "summary", 4 | "new_lines" 5 | ], 6 | "output_parser": null, 7 | "template": "Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/pal/README.md: -------------------------------------------------------------------------------- 1 | # Description of PAL Prompts 2 | 3 | Prompts to be used with the [PAL](https://arxiv.org/pdf/2211.10435.pdf) chain. 4 | These prompts should convert a natural language problem into a series of code snippets to be run to give an answer. 5 | 6 | 7 | ## Inputs 8 | 9 | This is a description of the inputs that the prompt expects. 10 | 11 | 1. `question`: The question to be answered. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains import PALChain 21 | 22 | llm = ... 23 | stop = ... 24 | get_answer_expr = ... 25 | prompt = load_prompt('lc://prompts/pal/') 26 | chain = PALChain(llm=llm, prompt=prompt, stop=stop, get_answer_expr=get_answer_expr) 27 | ``` 28 | 29 | -------------------------------------------------------------------------------- /prompts/pal/colored_objects.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question" 4 | ], 5 | "output_parser": null, 6 | "template": "# Generate Python3 Code to solve problems\n# Q: On the nightstand, there is a red pencil, a purple mug, a burgundy keychain, a fuchsia teddy bear, a black plate, and a blue stress ball. What color is the stress ball?\n# Put objects into a dictionary for quick look up\nobjects = dict()\nobjects['pencil'] = 'red'\nobjects['mug'] = 'purple'\nobjects['keychain'] = 'burgundy'\nobjects['teddy bear'] = 'fuchsia'\nobjects['plate'] = 'black'\nobjects['stress ball'] = 'blue'\n\n# Look up the color of stress ball\nstress_ball_color = objects['stress ball']\nanswer = stress_ball_color\n\n\n# Q: On the table, you see a bunch of objects arranged in a row: a purple paperclip, a pink stress ball, a brown keychain, a green scrunchiephone charger, a mauve fidget spinner, and a burgundy pen. 
What is the color of the object directly to the right of the stress ball?\n# Put objects into a list to record ordering\nobjects = []\nobjects += [('paperclip', 'purple')] * 1\nobjects += [('stress ball', 'pink')] * 1\nobjects += [('keychain', 'brown')] * 1\nobjects += [('scrunchiephone charger', 'green')] * 1\nobjects += [('fidget spinner', 'mauve')] * 1\nobjects += [('pen', 'burgundy')] * 1\n\n# Find the index of the stress ball\nstress_ball_idx = None\nfor i, object in enumerate(objects):\n if object[0] == 'stress ball':\n stress_ball_idx = i\n break\n\n# Find the directly right object\ndirect_right = objects[i+1]\n\n# Check the directly right object's color\ndirect_right_color = direct_right[1]\nanswer = direct_right_color\n\n\n# Q: On the nightstand, you see the following items arranged in a row: a teal plate, a burgundy keychain, a yellow scrunchiephone charger, an orange mug, a pink notebook, and a grey cup. How many non-orange items do you see to the left of the teal item?\n# Put objects into a list to record ordering\nobjects = []\nobjects += [('plate', 'teal')] * 1\nobjects += [('keychain', 'burgundy')] * 1\nobjects += [('scrunchiephone charger', 'yellow')] * 1\nobjects += [('mug', 'orange')] * 1\nobjects += [('notebook', 'pink')] * 1\nobjects += [('cup', 'grey')] * 1\n\n# Find the index of the teal item\nteal_idx = None\nfor i, object in enumerate(objects):\n if object[1] == 'teal':\n teal_idx = i\n break\n\n# Find non-orange items to the left of the teal item\nnon_orange = [object for object in objects[:i] if object[1] != 'orange']\n\n# Count number of non-orange objects\nnum_non_orange = len(non_orange)\nanswer = num_non_orange\n\n\n# Q: {question}\n", 7 | "template_format": "f-string" 8 | } -------------------------------------------------------------------------------- /prompts/pal/math.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question" 4 | ], 5 | "output_parser": null, 6 | "template": "Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\"\"\"\n money_initial = 23\n bagels = 5\n bagel_cost = 3\n money_spent = bagels * bagel_cost\n money_left = money_initial - money_spent\n result = money_left\n return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\"\"\"\n golf_balls_initial = 58\n golf_balls_lost_tuesday = 23\n golf_balls_lost_wednesday = 2\n golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n result = golf_balls_left\n return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. 
How many computers are now in the server room?\"\"\"\n computers_initial = 9\n computers_per_day = 5\n num_days = 4 # 4 days between monday and thursday\n computers_added = computers_per_day * num_days\n computers_total = computers_initial + computers_added\n result = computers_total\n return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\"\"\"\n toys_initial = 5\n mom_toys = 2\n dad_toys = 2\n total_received = mom_toys + dad_toys\n total_toys = toys_initial + total_received\n result = total_toys\n return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\"\"\"\n jason_lollipops_initial = 20\n jason_lollipops_after = 12\n denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n result = denny_lollipops\n return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\"\"\"\n leah_chocolates = 32\n sister_chocolates = 42\n total_chocolates = leah_chocolates + sister_chocolates\n chocolates_eaten = 35\n chocolates_left = total_chocolates - chocolates_eaten\n result = chocolates_left\n return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\"\"\"\n cars_initial = 3\n cars_arrived = 2\n total_cars = cars_initial + cars_arrived\n result = total_cars\n return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n \"\"\"There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\"\"\"\n trees_initial = 15\n trees_after = 21\n trees_added = trees_after - trees_initial\n result = trees_added\n return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n", 7 | "template_format": "f-string" 8 | } -------------------------------------------------------------------------------- /prompts/qa/map_reduce/question/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA Map Reduce Prompt 2 | 3 | Prompts designed to be used in the initial question (map) step of a map-reduce chain to do question answering over a series of documents. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `context`: The document to ask a question over. 11 | 2. `question`: The question being asked of the document. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt.
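Here, as in the other READMEs, `llm` is a placeholder for a concrete LangChain LLM instance.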
17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.question_answering import load_qa_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa/map_reduce/question/') 24 | chain = load_qa_chain(llm, chain_type="map_reduce", question_prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa/map_reduce/question/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "context", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/qa/map_reduce/reduce/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA Map Reduce Prompt 2 | 3 | Prompts designed to be used to reduce the answers generated during the map step. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `summaries`: Summaries generated during the map step. 11 | 2. `question`: Original question to be answered. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.question_answering import load_qa_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa/map_reduce/reduce/') 24 | chain = load_qa_chain(llm, chain_type="map_reduce", combine_prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa/map_reduce/reduce/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "summaries", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "Given the following extracted parts of a long document and a question, create a final answer. \nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\n\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. 
Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\n\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\n=========\nFINAL ANSWER: This Agreement is governed by English law.\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\n\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\n\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. 
\n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\n\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/qa/refine/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA Refine Prompts 2 | 3 | Prompts designed to be used to refine original answers during question answering chains using the `refine` method. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `question`: Original question to be answered. 11 | 2. `existing_answer`: Existing answer from previous documents. 12 | 3. `context_str`: New piece of context to use to refine the existing answer. 13 | 14 | 15 | ## Usage 16 | 17 | Below is a code snippet for how to use the prompt. 18 | 19 | ```python 20 | from langchain.prompts import load_prompt 21 | from langchain.chains.question_answering import load_qa_chain 22 | 23 | llm = ... 24 | prompt = load_prompt('lc://prompts/qa/refine/') 25 | chain = load_qa_chain(llm, chain_type="refine", refine_prompt=prompt) 26 | ``` 27 | 28 | -------------------------------------------------------------------------------- /prompts/qa/refine/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question", 4 | "existing_answer", 5 | "context_str" 6 | ], 7 | "output_parser": null, 8 | "template": "The original question is as follows: {question}\nWe have provided an existing answer: {existing_answer}\nWe have the opportunity to refine the existing answer(only if needed) with some more context below.\n------------\n{context_str}\n------------\nGiven the new context, refine the original answer to better answer the question. 
If the context isn't useful, return the original answer.", 9 | "template_format": "f-string" 10 | } -------------------------------------------------------------------------------- /prompts/qa/stuff/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA Stuff Prompts 2 | 3 | Prompts to use when doing question answering with chains using the `stuff` method. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `context`: Context to answer the question; usually a concatenation of all relevant documents. 11 | 2. `question`: Question to be answered. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.question_answering import load_qa_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa/stuff/') 24 | chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa/stuff/basic.yaml: -------------------------------------------------------------------------------- 1 | input_variables: 2 | - context 3 | - question 4 | output_parser: null 5 | template: 'Use the following pieces of context to answer the question at the end. 6 | If you don''t know the answer, just say that you don''t know, don''t try to make 7 | up an answer. 8 | 9 | 10 | {context} 11 | 12 | 13 | Question: {question} 14 | 15 | Helpful Answer:' 16 | template_format: f-string 17 | -------------------------------------------------------------------------------- /prompts/qa_with_sources/map_reduce/reduce/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA with Sources Map Reduce Prompts 2 | 3 | This prompt enables the user to perform question answering while providing sources. 4 | It uses the map-reduce chain for doing QA. This specific prompt reduces the answers generated during the map step into a final answer with sources. 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `summaries`: Summaries generated during the map step. 11 | 2. `question`: Original question to be answered. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.qa_with_sources import load_qa_with_sources_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa_with_sources/map_reduce/reduce/') 24 | chain = load_qa_with_sources_chain(llm, chain_type="map_reduce", combine_prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa_with_sources/map_reduce/reduce/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "summaries", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. 
Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. 
\n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/qa_with_sources/refine/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA with Sources Refine Prompts 2 | 3 | Prompts used to do question answering with sources in chains that use the `refine` method. 4 | 5 | ## Inputs 6 | 7 | This is a description of the inputs that the prompt expects. 8 | 9 | 1. `question`: Original question to be answered. 10 | 2. `existing_answer`: Existing answer from previous documents. 11 | 3. `context_str`: New piece of context to use to refine the existing answer. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 
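The `llm = ...` placeholder in the snippet below is deliberately left open. Purely as an illustrative assumption, it could be filled with any LangChain-compatible model, for example:

```python
# Hypothetical model setup; swap in whichever LLM integration you use.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # assumes an OpenAI API key is set in the environment
```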
17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.qa_with_sources import load_qa_with_sources_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa_with_sources/refine/') 24 | chain = load_qa_with_sources_chain(llm, chain_type="refine", refine_prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa_with_sources/refine/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "question", 4 | "existing_answer", 5 | "context_str" 6 | ], 7 | "output_parser": null, 8 | "template": "The original question is as follows: {question}\nWe have provided an existing answer, including sources: {existing_answer}\nWe have the opportunity to refine the existing answer(only if needed) with some more context below.\n------------\n{context_str}\n------------\nGiven the new context, refine the original answer to better answer the question. If you do update it, please update the sources as well. If the context isn't useful, return the original answer.", 9 | "template_format": "f-string" 10 | } -------------------------------------------------------------------------------- /prompts/qa_with_sources/stuff/README.md: -------------------------------------------------------------------------------- 1 | # Description of QA with Sources Stuff Prompts 2 | 3 | Prompts to use in question answering with sources chains that use the `stuff` method. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `summaries`: Concatenated summaries of all the documents. 11 | 2. `question`: Question to be answered. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.qa_with_sources import load_qa_with_sources_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/qa_with_sources/stuff/') 24 | chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/qa_with_sources/stuff/basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "summaries", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. 
The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. 
\n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:", 8 | "template_format": "f-string" 9 | } -------------------------------------------------------------------------------- /prompts/readme_template.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Description of {{prompt}} 4 | 5 | {{High level text description of the prompt, including use cases.}} 6 | 7 | ## Inputs 8 | 9 | This is a description of the inputs that the prompt expects. 10 | 11 | 1. {{input_var}}: {{Description}} 12 | 2. ... 13 | 14 | 15 | ## Usage 16 | 17 | Below is a code snippet for how to use the prompt. 18 | 19 | {{Code snippet}} 20 | 21 | -------------------------------------------------------------------------------- /prompts/sql_query/language_to_sql_output/README.md: -------------------------------------------------------------------------------- 1 | # Description of Language-To-SQL Prompts 2 | 3 | Prompts designed to convert language to relevant SQL query to execute. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `input`: Question to be answered. 11 | 2. `table_info`: Relevant information about the tables present in the schema. 12 | 3. `dialect`: Dialect of SQL to write the query in 13 | 4. `top_k`: Number of rows to return for most queries (used to avoid returning too much and overloading the context window). 14 | 15 | 16 | ## Usage 17 | 18 | Below is a code snippet for how to use the prompt. 
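The `llm` and `database` placeholders in the snippet below both need real objects. A hedged sketch of one way to construct them, where the SQLite URI is an illustrative assumption:

```python
# Hypothetical setup for the placeholders used in the snippet below.
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

llm = OpenAI(temperature=0)
database = SQLDatabase.from_uri("sqlite:///example.db")  # any SQLAlchemy-style URI works
```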
19 | 20 | ```python 21 | from langchain.prompts import load_prompt 22 | from langchain.chains import SQLDatabaseChain 23 | 24 | llm = ... 25 | database = ... 26 | prompt = load_prompt('lc://prompts/sql_query/language_to_sql_output/') 27 | chain = SQLDatabaseChain(llm=llm, database=database, prompt=prompt) 28 | ``` 29 | 30 | -------------------------------------------------------------------------------- /prompts/sql_query/language_to_sql_output/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "input", 4 | "table_info", 5 | "dialect", 6 | "top_k" 7 | ], 8 | "output_parser": null, 9 | "template": "Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results using the LIMIT clause. You can order the results by a relevant column to return the most interesting examples in the database.\nUse the following format:\n\nQuestion: \"Question here\"\nSQLQuery: \"SQL Query to run\"\nSQLResult: \"Result of the SQLQuery\"\nAnswer: \"Final answer here\"\n\nOnly use the following tables:\n\n{table_info}\n\nQuestion: {input}", 10 | "template_format": "f-string" 11 | } -------------------------------------------------------------------------------- /prompts/sql_query/relevant_tables/README.md: -------------------------------------------------------------------------------- 1 | # Description of SQL Relevant Tables 2 | 3 | Prompts designed to identify relevant SQL tables to use to answer a query. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `query`: Question to be answered. 11 | 2. `table_names`: Table names available as options to pull in. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains import SQLDatabaseSequentialChain 21 | 22 | llm = ... 23 | database = ... 24 | prompt = load_prompt('lc://prompts/sql_query/relevant_tables/') 25 | chain = SQLDatabaseSequentialChain.from_llm(llm, database, decider_prompt=prompt) 26 | ``` 27 | 28 | -------------------------------------------------------------------------------- /prompts/sql_query/relevant_tables/relevant_tables.py: -------------------------------------------------------------------------------- 1 | from langchain.prompts.base import CommaSeparatedListOutputParser 2 | from langchain.prompts.prompt import PromptTemplate 3 | 4 | 5 | _DECIDER_TEMPLATE = """Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question. 6 | 7 | Question: {query} 8 | 9 | Table Names: {table_names} 10 | 11 | Relevant Table Names:""" 12 | PROMPT = PromptTemplate( 13 | input_variables=["query", "table_names"], 14 | template=_DECIDER_TEMPLATE, 15 | output_parser=CommaSeparatedListOutputParser(), 16 | ) 17 | -------------------------------------------------------------------------------- /prompts/summarize/map_reduce/map/README.md: -------------------------------------------------------------------------------- 1 | # Description of Summarize Map Reduce Prompts 2 | 3 | Prompts designed to be used in the map step of chains doing summarization with the `map-reduce` method. 
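In a map-reduce summarization chain, this prompt is formatted once per document chunk, and each chunk is summarized independently before the reduce step combines the results. A minimal sketch of a single map-step call, using this directory's `prompt.yaml` and a toy chunk string as stand-ins:

```python
# Hypothetical illustration of one map-step formatting call.
from langchain.prompts import load_prompt

map_prompt = load_prompt('lc://prompts/summarize/map_reduce/map/prompt.yaml')
print(map_prompt.format(text="<one chunk of the long document>"))
```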
4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `text`: Text to be summarized. 11 | 12 | 13 | ## Usage 14 | 15 | Below is a code snippet for how to use the prompt. 16 | 17 | ```python 18 | from langchain.prompts import load_prompt 19 | from langchain.chains.summarize import load_summarize_chain 20 | 21 | llm = ... 22 | prompt = load_prompt('lc://prompts/summarize/map_reduce/map/') 23 | chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=prompt) 24 | ``` 25 | 26 | -------------------------------------------------------------------------------- /prompts/summarize/map_reduce/map/prompt.yaml: -------------------------------------------------------------------------------- 1 | input_variables: [text] 2 | output_parser: null 3 | template: 'Write a concise summary of the following: 4 | 5 | 6 | {text} 7 | 8 | 9 | CONCISE SUMMARY:' 10 | template_format: f-string 11 | -------------------------------------------------------------------------------- /prompts/summarize/refine/README.md: -------------------------------------------------------------------------------- 1 | # Description of Summarize Refine Prompt 2 | 3 | Prompts designed to be used in summarization chains that use the `refine` method. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `existing_answer`: Existing summarization. 11 | 2. `text`: New text to be included in the summarization. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains.summarize import load_summarize_chain 21 | 22 | llm = ... 23 | prompt = load_prompt('lc://prompts/summarize/refine/') 24 | chain = load_summarize_chain(llm, chain_type="refine", refine_prompt=prompt) 25 | ``` 26 | 27 | -------------------------------------------------------------------------------- /prompts/summarize/refine/prompt.yaml: -------------------------------------------------------------------------------- 1 | input_variables: [existing_answer, text] 2 | output_parser: null 3 | template: " 4 | Your job is to produce a final summary\n 5 | We have provided an existing summary up to a certain point: {existing_answer}\n 6 | We have the opportunity to refine the existing summary 7 | (only if needed) with some more context below.\n 8 | ------------\n 9 | {text}\n 10 | ------------\n 11 | Given the new context, refine the original summary. 12 | If the context isn't useful, return the original summary." 13 | template_format: f-string 14 | -------------------------------------------------------------------------------- /prompts/summarize/stuff/README.md: -------------------------------------------------------------------------------- 1 | # Description of Summarization Stuff Prompts 2 | 3 | Prompts designed to be used in summarization chains that use the `stuff` method. 4 | 5 | ## Inputs 6 | 7 | This is a description of the inputs that the prompt expects. 8 | 9 | 1. `text`: Text to be summarized. 10 | 11 | 12 | ## Usage 13 | 14 | Below is a code snippet for how to use the prompt. 15 | 16 | ```python 17 | from langchain.prompts import load_prompt 18 | from langchain.chains.summarize import load_summarize_chain 19 | 20 | llm = ... 
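# (illustrative assumption for the `llm = ...` placeholder above: any
#  LangChain-compatible LLM works, e.g. `from langchain.llms import OpenAI; llm = OpenAI(temperature=0)`)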
21 | prompt = load_prompt('lc://prompts/summarize/stuff/') 22 | chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt) 23 | ``` 24 | 25 | -------------------------------------------------------------------------------- /prompts/summarize/stuff/prompt.yaml: -------------------------------------------------------------------------------- 1 | input_variables: [text] 2 | output_parser: null 3 | template: 'Write a concise summary of the following: 4 | 5 | 6 | {text} 7 | 8 | 9 | CONCISE SUMMARY:' 10 | template_format: f-string 11 | -------------------------------------------------------------------------------- /prompts/vector_db_qa/README.md: -------------------------------------------------------------------------------- 1 | # Description of Vector DB Question Answering Prompt 2 | 3 | Prompts to be used in vector DB question answering chains. 4 | 5 | 6 | ## Inputs 7 | 8 | This is a description of the inputs that the prompt expects. 9 | 10 | 1. `context`: Documents pulled from the DB to do question answering over. 11 | 2. `question`: Question to ask of the documents. 12 | 13 | 14 | ## Usage 15 | 16 | Below is a code snippet for how to use the prompt. 17 | 18 | ```python 19 | from langchain.prompts import load_prompt 20 | from langchain.chains import VectorDBQA 21 | 22 | llm = ... 23 | vectorstore = ... 24 | prompt = load_prompt('lc://prompts/vector_db_qa/') 25 | chain = VectorDBQA.from_llm(llm, prompt=prompt, vectorstore=vectorstore) 26 | ``` 27 | 28 | -------------------------------------------------------------------------------- /prompts/vector_db_qa/prompt.json: -------------------------------------------------------------------------------- 1 | { 2 | "input_variables": [ 3 | "context", 4 | "question" 5 | ], 6 | "output_parser": null, 7 | "template": "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", 8 | "template_format": "f-string" 9 | } --------------------------------------------------------------------------------