├── .gitignore
├── README.md
├── requirements.txt
└── src
├── cove_chains.py
├── execute_verification_chain.py
├── main.py
├── prompts.py
└── route_chain.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | __pypackages__/
10 |
11 | **/__pycache__/
12 |
13 | .ipynb_checkpoints/
14 |
15 | .env
16 |
17 | *.txt
18 | *.json
19 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ⛓ chain-of-verification 💡
2 | How Chain-of-Verification (CoVe) works and how to implement it using Python 🐍 + Langchain 🔗 + OpenAI 🦾 + Search Tool 🔍
3 |
4 | 📄 **Article**: [I highly recommend reading this article before diving into the code.](https://sourajit16-02-93.medium.com/chain-of-verification-cove-understanding-implementation-e7338c7f4cb5)
5 |
6 | ## Architecture
7 | 
8 |
9 |
10 |
11 | ## 🚀 Getting Started
12 | 1. **Clone the Repository**
13 | 2. **Install Dependencies**:
14 | ```bash
15 | python3 -m pip install -r requirements.txt
16 | ```
17 | 3. **Set Up OpenAI API Key**:
18 | ```bash
19 | export OPENAI_API_KEY='sk-...'
20 | ```
21 | 4. **Run the Program**:
22 | ```bash
23 | cd src/
24 | python3 main.py --question "Who are some politicians born in Boston?"
25 | ```
26 |
27 | ## 🛠 Other Arguments
28 | ```bash
29 | python3 main.py --question "Who are some politicians born in Boston?" --llm-name "gpt-3.5-turbo-0613" --temperature 0.1 --max-tokens 500 --show-intermediate-steps
30 | ```
31 | - `--question`: The original query/question asked by the user
32 | - `--llm-name`: The OpenAI model name to use
33 | - `--temperature`: The sampling temperature of the LLM (you know it 😉)
34 | - `--max-tokens`: The maximum number of tokens the LLM may generate (you know it as well 😉)
35 | - `--show-intermediate-steps`: Enables printing of intermediate results such as the `baseline response` and the `verification questions and answers`.
36 |
37 | # A few ways to improve
38 | This implementation is meant as a comprehensive starting point that you can modify according to your needs and use case. Below are some ideas you can employ to make it more robust and effective.
39 | 1. **Prompt Engineering**: One of the major ways to improve the performance of any LLM-powered application is prompt engineering and prompt optimization. You can check all the prompts used in the [prompts.py](https://github.com/ritun16/chain-of-verification/blob/b30cc401eece51ea59e81765077bb0481cc5747b/src/prompts.py#L1) file. Try your own prompt engineering and experiment for your use case (see the prompt-override sketch below this list).
40 | 2. **External Tools**: As the final output highly depends on the answers to the verification questions, you can try out different tools for different use cases. For factual question answering you can use advanced search tools such as Google Search or a SERP API. For custom use cases you can always use RAG or other retrieval techniques to answer the verification questions (see the search-tool sketch below this list).
41 | 3. **More Chains**: I have implemented three chains corresponding to the three question types (Wiki Data, Multi-Span QA & Long-Form QA) the authors used in their research. Depending on your use case you can create other chains that handle other types of QA to increase the variability.
42 | 4. **Human In the Loop (HIL)**: HIL is an important step in many LLM-powered applications. In your specific application, the whole pipeline can be designed to incorporate HIL, either for generating proper verification questions or for answering them, to further improve the overall CoVe pipeline.
43 |
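As a minimal, hypothetical sketch of idea 1: the chains read the prompt strings from the `prompts` module only when the chain object is built, so you can experiment with alternative wording by overriding a prompt before constructing the chain. The replacement prompt below is only an illustration, not tuned wording; run it from inside `src/` with `OPENAI_API_KEY` set.
```python
# Hypothetical prompt-engineering experiment: override a prompt from prompts.py
# before the chain is built. The alternative wording here is illustrative only.
from langchain.chat_models import ChatOpenAI

import prompts
from cove_chains import WikiDataCategoryListCOVEChain

prompts.BASELINE_PROMPT_WIKI = """Answer the question below, which asks for a list of entities.
Return ONLY a numbered list of entity names, one per line, with no extra commentary.

Question: {original_question}

Answer:"""

llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0.1, max_tokens=500)
wiki_cove_chain = WikiDataCategoryListCOVEChain(llm)()  # built with the edited prompt
print(wiki_cove_chain({"original_question": "Who are some politicians born in Boston?"})["final_answer"])
```
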
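And a minimal, hypothetical sketch of idea 2: `ExecuteVerificationChain` only calls `search_tool.run(question)` and its `search_tool` field is typed `Any`, so any object exposing `run(query) -> str` (your own RAG lookup, a SERP wrapper, etc.) can stand in for DuckDuckGo. The `MyRAGTool` class below is a made-up placeholder; wiring the step into the full `SequentialChain` then follows the same pattern as `cove_chains.py`.
```python
# Hypothetical sketch: answer verification questions with your own retriever
# instead of DuckDuckGo. Anything exposing `run(query) -> str` will work here.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

import prompts
from execute_verification_chain import ExecuteVerificationChain


class MyRAGTool:
    """Made-up stand-in for a RAG / vector-store lookup."""

    def run(self, query: str) -> str:
        # Replace with your retrieval logic; the returned string becomes the
        # `search_result` context used to answer the verification question.
        return "Context retrieved for: " + query


llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0.1, max_tokens=500)
execute_chain = ExecuteVerificationChain(
    llm=llm,
    prompt=PromptTemplate(input_variables=["verification_questions"],
                          template=prompts.EXECUTE_PLAN_PROMPT),
    output_key="verification_answers",
    search_tool=MyRAGTool(),   # swapped-in tool
    use_search_tool=True,
)
print(execute_chain({"verification_questions": "1. Was Matt Damon born in Boston?"}))
```
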
44 | ❤️ If this repository helps, please star ⭐, and share ✔️!
45 | If you also found the [article](https://sourajit16-02-93.medium.com/chain-of-verification-cove-understanding-implementation-e7338c7f4cb5) informative and think it could be beneficial to others, I'd be grateful if you could like 👍, follow 👉, and share ✔️ the piece with others.
46 | Happy coding!
47 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | openai==0.28.0
2 | langchain==0.0.300
3 | duckduckgo-search
4 | tiktoken
5 | tenacity
6 | python-dotenv
--------------------------------------------------------------------------------
/src/cove_chains.py:
--------------------------------------------------------------------------------
1 | # from __future__ import annotations
2 |
3 | import os
4 | import re
5 | import itertools
6 | import openai
7 | import tiktoken
8 | import json
9 | from dotenv import load_dotenv
10 |
11 | from typing import Any, Dict, List, Optional
12 |
13 | from pydantic import Extra
14 |
15 | from langchain.schema.language_model import BaseLanguageModel
16 | from langchain.callbacks.manager import (
17 | AsyncCallbackManagerForChainRun,
18 | CallbackManagerForChainRun,
19 | )
20 | from langchain.schema import (
21 | AIMessage,
22 | HumanMessage,
23 | SystemMessage
24 | )
25 | from langchain.chains.base import Chain
26 | from langchain.prompts.base import BasePromptTemplate
27 | from langchain.tools import DuckDuckGoSearchRun
28 | import langchain
29 | from langchain.chat_models import ChatOpenAI
30 | from langchain.tools import DuckDuckGoSearchRun
31 | from langchain.schema import (
32 | AIMessage,
33 | HumanMessage,
34 | SystemMessage
35 | )
36 | from langchain.chains.llm import LLMChain
37 | from langchain.prompts import PromptTemplate
38 | from langchain.chains import SequentialChain
39 |
40 | import prompts
41 | from execute_verification_chain import ExecuteVerificationChain
42 |
43 |
44 | class WikiDataCategoryListCOVEChain(object):
45 | def __init__(self, llm):
46 | self.llm = llm
47 |
48 | def __call__(self):
49 | # Create baseline response chain
50 | baseline_response_prompt_template = PromptTemplate(input_variables=["original_question"],
51 | template=prompts.BASELINE_PROMPT_WIKI)
52 | baseline_response_chain = LLMChain(llm=self.llm,
53 | prompt=baseline_response_prompt_template,
54 | output_key="baseline_response")
55 | # Create plan verification chain
56 | ## Create plan verification template
57 | verification_question_template_prompt_template = PromptTemplate(input_variables=["original_question"],
58 | template=prompts.VERIFICATION_QUESTION_TEMPLATE_PROMPT_WIKI)
59 | verification_question_template_chain = LLMChain(llm=self.llm,
60 | prompt=verification_question_template_prompt_template,
61 | output_key="verification_question_template")
62 | ## Create plan verification questions
63 | verification_question_generation_prompt_template = PromptTemplate(input_variables=["original_question",
64 | "baseline_response",
65 | "verification_question_template"],
66 | template=prompts.VERIFICATION_QUESTION_PROMPT_WIKI)
67 | verification_question_generation_chain = LLMChain(llm=self.llm,
68 | prompt=verification_question_generation_prompt_template,
69 | output_key="verification_questions")
70 | # Create execution verification
71 | execute_verification_question_prompt_template = PromptTemplate(input_variables=["verification_questions"],
72 | template=prompts.EXECUTE_PLAN_PROMPT)
73 | execute_verification_question_chain = ExecuteVerificationChain(llm=self.llm,
74 | prompt=execute_verification_question_prompt_template,
75 | output_key="verification_answers")
76 | # Create final refined response
77 | final_answer_prompt_template = PromptTemplate(input_variables=["original_question",
78 | "baseline_response",
79 | "verification_answers"],
80 | template=prompts.FINAL_REFINED_PROMPT)
81 | final_answer_chain = LLMChain(llm=self.llm,
82 | prompt=final_answer_prompt_template,
83 | output_key="final_answer")
84 |
85 |         # Create sequential chain
86 | wiki_data_category_list_cove_chain = SequentialChain(
87 | chains=[baseline_response_chain,
88 | verification_question_template_chain,
89 | verification_question_generation_chain,
90 | execute_verification_question_chain,
91 | final_answer_chain],
92 | input_variables=["original_question"],
93 | # Here we return multiple variables
94 | output_variables=["original_question",
95 | "baseline_response",
96 | "verification_question_template",
97 | "verification_questions",
98 | "verification_answers",
99 | "final_answer"],
100 | verbose=False)
101 | return wiki_data_category_list_cove_chain
102 |
103 |
104 | class MultiSpanCOVEChain(object):
105 | def __init__(self, llm):
106 | self.llm = llm
107 |
108 | def __call__(self):
109 | # Create baseline response chain
110 | baseline_response_prompt_template = PromptTemplate(input_variables=["original_question"],
111 | template=prompts.BASELINE_PROMPT_MULTI)
112 | baseline_response_chain = LLMChain(llm=self.llm,
113 | prompt=baseline_response_prompt_template,
114 | output_key="baseline_response")
115 | ## Create plan verification questions
116 | verification_question_generation_prompt_template = PromptTemplate(input_variables=["original_question",
117 | "baseline_response"],
118 | template=prompts.VERIFICATION_QUESTION_PROMPT_MULTI)
119 | verification_question_generation_chain = LLMChain(llm=self.llm,
120 | prompt=verification_question_generation_prompt_template,
121 | output_key="verification_questions")
122 | # Create execution verification
123 | execute_verification_question_prompt_template = PromptTemplate(input_variables=["verification_questions"],
124 | template=prompts.EXECUTE_PLAN_PROMPT)
125 | execute_verification_question_chain = ExecuteVerificationChain(llm=self.llm,
126 | prompt=execute_verification_question_prompt_template,
127 | output_key="verification_answers")
128 | # Create final refined response
129 | final_answer_prompt_template = PromptTemplate(input_variables=["original_question",
130 | "baseline_response",
131 | "verification_answers"],
132 | template=prompts.FINAL_REFINED_PROMPT)
133 | final_answer_chain = LLMChain(llm=self.llm,
134 | prompt=final_answer_prompt_template,
135 | output_key="final_answer")
136 |
137 |         # Create sequential chain
138 | multi_span_cove_chain = SequentialChain(
139 | chains=[baseline_response_chain,
140 | verification_question_generation_chain,
141 | execute_verification_question_chain,
142 | final_answer_chain],
143 | input_variables=["original_question"],
144 | # Here we return multiple variables
145 | output_variables=["original_question",
146 | "baseline_response",
147 | "verification_questions",
148 | "verification_answers",
149 | "final_answer"],
150 | verbose=False)
151 | return multi_span_cove_chain
152 |
153 |
154 | class LongFormCOVEChain(object):
155 | def __init__(self, llm):
156 | self.llm = llm
157 |
158 | def __call__(self):
159 | # Create baseline response chain
160 | baseline_response_prompt_template = PromptTemplate(input_variables=["original_question"],
161 | template=prompts.BASELINE_PROMPT_LONG)
162 | baseline_response_chain = LLMChain(llm=self.llm,
163 | prompt=baseline_response_prompt_template,
164 | output_key="baseline_response")
165 | ## Create plan verification questions
166 | verification_question_generation_prompt_template = PromptTemplate(input_variables=["original_question",
167 | "baseline_response"],
168 | template=prompts.VERIFICATION_QUESTION_PROMPT_LONG)
169 | verification_question_generation_chain = LLMChain(llm=self.llm,
170 | prompt=verification_question_generation_prompt_template,
171 | output_key="verification_questions")
172 | # Create execution verification
173 | execute_verification_question_prompt_template = PromptTemplate(input_variables=["verification_questions"],
174 | template=prompts.EXECUTE_PLAN_PROMPT)
175 | execute_verification_question_chain = ExecuteVerificationChain(llm=self.llm,
176 | prompt=execute_verification_question_prompt_template,
177 | output_key="verification_answers")
178 | # Create final refined response
179 | final_answer_prompt_template = PromptTemplate(input_variables=["original_question",
180 | "baseline_response",
181 | "verification_answers"],
182 | template=prompts.FINAL_REFINED_PROMPT)
183 | final_answer_chain = LLMChain(llm=self.llm,
184 | prompt=final_answer_prompt_template,
185 | output_key="final_answer")
186 |
187 |         # Create sequential chain
188 | long_form_cove_chain = SequentialChain(
189 | chains=[baseline_response_chain,
190 | verification_question_generation_chain,
191 | execute_verification_question_chain,
192 | final_answer_chain],
193 | input_variables=["original_question"],
194 | # Here we return multiple variables
195 | output_variables=["original_question",
196 | "baseline_response",
197 | "verification_questions",
198 | "verification_answers",
199 | "final_answer"],
200 | verbose=False)
201 | return long_form_cove_chain
--------------------------------------------------------------------------------
/src/execute_verification_chain.py:
--------------------------------------------------------------------------------
1 | # from __future__ import annotations
2 |
3 | import os
4 | import re
5 | import itertools
6 | import openai
7 | import tiktoken
8 | import json
9 | from dotenv import load_dotenv
10 |
11 | from typing import Any, Dict, List, Optional
12 |
13 | from pydantic import Extra
14 |
15 | from langchain.schema.language_model import BaseLanguageModel
16 | from langchain.callbacks.manager import (
17 | AsyncCallbackManagerForChainRun,
18 | CallbackManagerForChainRun,
19 | )
20 | from langchain.schema import (
21 | AIMessage,
22 | HumanMessage,
23 | SystemMessage
24 | )
25 | from langchain.chains.base import Chain
26 | from langchain.prompts.base import BasePromptTemplate
27 | from langchain.tools import DuckDuckGoSearchRun
28 | import langchain
29 | from langchain.chat_models import ChatOpenAI
30 | from langchain.tools import DuckDuckGoSearchRun
31 | from langchain.schema import (
32 | AIMessage,
33 | HumanMessage,
34 | SystemMessage
35 | )
36 | from langchain.chains.llm import LLMChain
37 | from langchain.prompts import PromptTemplate
38 | from langchain.chains import SequentialChain
39 |
40 | import prompts
41 |
42 |
43 |
44 | class ExecuteVerificationChain(Chain):
45 | """
46 |     Implements the logic to execute the verification questions for factual accuracy
47 | """
48 |
49 | prompt: BasePromptTemplate
50 | llm: BaseLanguageModel
51 | input_key: str = "verification_questions"
52 | output_key: str = "verification_answers"
53 | use_search_tool: bool = True
54 | search_tool: Any = DuckDuckGoSearchRun()
55 |
56 | class Config:
57 | """Configuration for this pydantic object."""
58 |
59 | extra = Extra.forbid
60 | arbitrary_types_allowed = True
61 |
62 | @property
63 | def input_keys(self) -> List[str]:
64 | """Will be whatever keys the prompt expects.
65 |
66 | :meta private:
67 | """
68 | return [self.input_key]
69 |
70 | @property
71 | def output_keys(self) -> List[str]:
72 | """Will always return text key.
73 |
74 | :meta private:
75 | """
76 | return [self.output_key]
77 |
78 | def search_for_verification_question(self,
79 | verification_question: str
80 | ) -> str:
81 | search_result = self.search_tool.run(verification_question)
82 | return search_result
83 |
84 | def _call(
85 | self,
86 | inputs: Dict[str, Any],
87 | run_manager: Optional[CallbackManagerForChainRun] = None,
88 | ) -> Dict[str, str]:
89 |         verification_answers_list = list() # Will contain the answer to each verification question
90 |         question_answer_pair = "" # Final output: concatenated verification question and answer pairs
91 |
92 |         # Convert all the verification questions into a list of strings
93 | sub_inputs = {k:v for k,v in inputs.items() if k==self.input_key}
94 | verification_questions_prompt_value = self.prompt.format_prompt(**sub_inputs)
95 | verification_questions_str = verification_questions_prompt_value.text
96 | verification_questions_list = verification_questions_str.split("\n")
97 |
98 |         # Setting up prompts for both the search tool and LLM self-evaluation
99 | execution_prompt_search_tool = PromptTemplate.from_template(prompts.EXECUTE_PLAN_PROMPT_SEARCH_TOOL)
100 | execution_prompt_self_llm = PromptTemplate.from_template(prompts.EXECUTE_PLAN_PROMPT_SELF_LLM)
101 |
102 |         # Executing the verification questions, either using the search tool or the LLM itself
103 | for question in verification_questions_list:
104 | if self.use_search_tool:
105 | search_result = self.search_for_verification_question(question)
106 | execution_prompt_value = execution_prompt_search_tool.format_prompt(**{"search_result": search_result, "verification_question": question})
107 | else:
108 | execution_prompt_value = execution_prompt_self_llm.format_prompt(**{"verification_question": question})
109 | verification_answer_llm_result = self.llm.generate_prompt([execution_prompt_value], callbacks=run_manager.get_child() if run_manager else None)
110 | verification_answer_str = verification_answer_llm_result.generations[0][0].text
111 | verification_answers_list.append(verification_answer_str)
112 |
113 | # Create verification question and answer pair
114 | for question, answer in itertools.zip_longest(verification_questions_list, verification_answers_list):
115 | question_answer_pair += "Question: {} Answer: {}\n".format(question, answer)
116 |
117 | if run_manager:
118 | run_manager.on_text("Log something about this run")
119 |
120 | return {self.output_key: question_answer_pair}
121 |
122 | async def _acall(
123 | self,
124 | inputs: Dict[str, Any],
125 | run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
126 | ) -> Dict[str, str]:
127 | # Your custom chain logic goes here
128 | # This is just an example that mimics LLMChain
129 | prompt_value = self.prompt.format_prompt(**inputs)
130 |
131 | # Whenever you call a language model, or another chain, you should pass
132 | # a callback manager to it. This allows the inner run to be tracked by
133 | # any callbacks that are registered on the outer run.
134 | # You can always obtain a callback manager for this by calling
135 | # `run_manager.get_child()` as shown below.
136 | response = await self.llm.agenerate_prompt(
137 | [prompt_value], callbacks=run_manager.get_child() if run_manager else None
138 | )
139 |
140 | # If you want to log something about this run, you can do so by calling
141 | # methods on the `run_manager`, as shown below. This will trigger any
142 | # callbacks that are registered for that event.
143 | if run_manager:
144 | await run_manager.on_text("Log something about this run")
145 |
146 | return {self.output_key: response.generations[0][0].text}
147 |
148 | @property
149 | def _chain_type(self) -> str:
150 | return "execute_verification_chain"
--------------------------------------------------------------------------------
/src/main.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from dotenv import load_dotenv
3 | from pprint import pprint
4 |
5 | from langchain.chat_models import ChatOpenAI
6 |
7 | from route_chain import RouteCOVEChain
8 |
9 | load_dotenv("/workspace/.env")
10 |
11 |
12 | if __name__ == "__main__":
13 | parser = argparse.ArgumentParser(description ='Chain of Verification (CoVE) parser.')
14 | parser.add_argument('--question',
15 | type = str,
16 | required = True,
17 |                         help ='The original question the user wants to ask')
18 | parser.add_argument('--llm-name',
19 | type = str,
20 | required = False,
21 | default = "gpt-3.5-turbo-0613",
22 | help ='The openai llm name')
23 | parser.add_argument('--temperature',
24 | type = float,
25 | required = False,
26 | default = 0.1,
27 | help ='The temperature of the llm')
28 | parser.add_argument('--max-tokens',
29 | type = int,
30 | required = False,
31 | default = 500,
32 | help ='The max_tokens of the llm')
33 |     parser.add_argument('--show-intermediate-steps',
34 |                         action = 'store_true',
35 |                         required = False,
36 |                         default = False,
37 |                         help ='Print intermediate results such as the baseline response and the verification questions & answers')
38 | args = parser.parse_args()
39 |
40 | original_query = args.question
41 | chain_llm = ChatOpenAI(model_name=args.llm_name,
42 | temperature=args.temperature,
43 | max_tokens=args.max_tokens)
44 |
45 | route_llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613",
46 | temperature=0.1,
47 | max_tokens=500)
48 |
49 | router_cove_chain_instance = RouteCOVEChain(original_query, route_llm, chain_llm, args.show_intermediate_steps)
50 | router_cove_chain = router_cove_chain_instance()
51 | router_cove_chain_result = router_cove_chain({"original_question":original_query})
52 |
53 | if args.show_intermediate_steps:
54 | print("\n" + 80*"#" + "\n")
55 | pprint(router_cove_chain_result)
56 | print("\n" + 80*"#" + "\n")
57 | print("Final Answer: {}".format(router_cove_chain_result["final_answer"]))
58 |
--------------------------------------------------------------------------------
/src/prompts.py:
--------------------------------------------------------------------------------
1 | ######################################################################## BASELINE PROMPTS ########################################################################
2 | BASELINE_PROMPT_WIKI = """Answer the below question which is asking for a list of entities (names, places, locations etc). Output should be a numbered list that only contains the relevant & concise entities as the answer. NO ADDITIONAL DETAILS.
3 |
4 | Question: {original_question}
5 |
6 | Answer:"""
7 |
8 | BASELINE_PROMPT_MULTI = """Answer the below question correctly and in a concise manner without much detail. Only answer what the question asks.
9 |
10 | Question: {original_question}
11 |
12 | Answer:"""
13 |
14 | BASELINE_PROMPT_LONG = """Answer the below question correctly.
15 |
16 | Question: {original_question}
17 |
18 | Answer:"""
19 |
20 | ################################################################### PLAN VERIFICATION PROMPTS ###################################################################
21 | VERIFICATION_QUESTION_TEMPLATE_PROMPT_WIKI = """Your task is to create a verification question based on the below question provided.
22 | Example Question: Who are some movie actors who were born in Boston?
23 | Example Verification Question: Was [movie actor] born in [Boston]?
24 | Explanation: In the above example the verification question focused only on the ANSWER_ENTITY (name of the movie actor) and QUESTION_ENTITY (birth place).
25 | Similarly you need to focus on the ANSWER_ENTITY and QUESTION_ENTITY from the actual question and generate verification question.
26 |
27 | Actual Question: {original_question}
28 |
29 | Final Verification Question:"""
30 |
31 | VERIFICATION_QUESTION_PROMPT_WIKI = """Your task is to create a series of verification questions based on the below question, the verification question template and the baseline response.
32 | Example Question: Who are some movie actors who were born in Boston?
33 | Example Verification Question Template: Was [movie actor] born in Boston?
34 | Example Baseline Response: 1. Matt Damon - Famous for his roles in films like "Good Will Hunting," "The Bourne Identity" series, and "The Martian," Damon is an Academy Award-winning actor, screenwriter, and producer.
35 | 2. Chris Evans - Famous for his portrayal of Captain America in the Marvel Cinematic Universe, Evans has also appeared in movies like "Snowpiercer" and "Knives Out."
36 | Verification questions: 1. Was Matt Damon born in Boston?
37 | 2. Was Chris Evans born in Boston?
38 | etc.
39 | Example Verification Question: 1. Was Matt Damon born in Boston?
40 | 2. Was Chris Evans born in Boston?
41 |
42 | Explanation: In the above example the verification questions focused only on the ANSWER_ENTITY (name of the movie actor) and QUESTION_ENTITY (birth place) based on the template, substituting entity values from the baseline response.
43 | Similarly you need to focus on the ANSWER_ENTITY and QUESTION_ENTITY from the actual question and substitute the entity values from the baseline response to generate verification questions.
44 |
45 | Actual Question: {original_question}
46 | Baseline Response: {baseline_response}
47 | Verification Question Template: {verification_question_template}
48 |
49 | Final Verification Questions:"""
50 |
51 | VERIFICATION_QUESTION_PROMPT_MULTI = """Your task is to create verification questions based on the below original question and the baseline response. The verification questions are meant for verifying the factual accuracy of the baseline response.
52 | Example Question: Who invented the first printing press and in what year?
53 | Example Baseline Response: Johannes Gutenberg, 1450.
54 | Example Verification Questions: 1. Did Johannes Gutenberg invent the first printing press?
55 | 2. Did Johannes Gutenberg invent the first printing press in the year 1450?
56 |
57 | Explanation: The verification questions are highly aligned with both the actual question and the baseline response. The actual question comprises multiple independent questions, which in turn have multiple independent answers in the baseline response. Hence, the verification questions should also be independent for factual verification.
58 |
59 | Actual Question: {original_question}
60 | Baseline Response: {baseline_response}
61 |
62 | Final Verification Questions:"""
63 |
64 | VERIFICATION_QUESTION_PROMPT_LONG = """Your task is to create verification questions based on the below original question and the baseline response. The verification questions are meant for verifying the factual accuracy of the baseline response. Output should be a numbered list of verification questions.
65 |
66 | Actual Question: {original_question}
67 | Baseline Response: {baseline_response}
68 |
69 | Final Verification Questions:"""
70 |
71 | ################################################################## EXECUTE VERIFICATION PROMPTS ##################################################################
72 | EXECUTE_PLAN_PROMPT_SEARCH_TOOL = """Answer the following question correctly based on the provided context. The question could be tricky as well, so think step by step and answer it correctly.
73 |
74 | Context: {search_result}
75 |
76 | Question: {verification_question}
77 |
78 | Answer:"""
79 |
80 |
81 | EXECUTE_PLAN_PROMPT_SELF_LLM = """Answer the following question correctly.
82 |
83 | Question: {verification_question}
84 |
85 | Answer:"""
86 |
87 | EXECUTE_PLAN_PROMPT = "{verification_questions}"
88 |
89 | ################################################################## FINAL REFINED PROMPTS ##################################################################
90 | FINAL_REFINED_PROMPT = """Given the below `Original Query` and `Baseline Answer`, analyze the `Verification Questions & Answers` to finally filter the refined answer.
91 | Original Query: {original_question}
92 | Baseline Answer: {baseline_response}
93 |
94 | Verification Questions & Answer Pairs:
95 | {verification_answers}
96 |
97 | Final Refined Answer:"""
98 |
99 | ################################################################## ROUTER PROMPTS ##################################################################
100 | ROUTER_CHAIN_PROMPT = """Please classify the below question into one of the following categories. The output should be JSON as shown in the Examples.
101 |
102 | Categories:
103 | WIKI_CHAIN: Good for answering questions which ask for a list or set of entities as their answer.
104 | MULTI_CHAIN: Good for answering questions which comprise multiple sub-questions that have multiple independent answers (derived from a series of discontiguous spans in the text), i.e. multiple questions are asked in the original question.
105 | LONG_CHAIN: Good for answering questions whose answer is long.
106 |
107 | Examples:
108 | WIKI_CHAIN:
109 | Question: Name some Endemic orchids of Vietnam.
110 | JSON Output: {{"category": "WIKI_CHAIN"}}
111 | Question: Which scientists won the Nobel Prize in the year 1970?
112 | JSON Output: {{"category": "WIKI_CHAIN"}}
113 | Question: List some cricket players who are playing in the Indian cricket team.
114 | JSON Output: {{"category": "WIKI_CHAIN"}}
115 | MULTI_CHAIN:
116 | Question: Who is known for developing the theory of relativity, and in which year was it introduced?
117 | JSON Output: {{"category": "MULTI_CHAIN"}}
118 | Question: Who is credited with inventing the telephone, and when did this invention take place?
119 | JSON Output: {{"category": "MULTI_CHAIN"}}
120 | Question: Who was the first person to orbit the Earth in space, and during which year did this historic event occur?
121 | JSON Output: {{"category": "MULTI_CHAIN"}}
122 | LONG_CHAIN:
123 | Question: Write few lines about Einstein.
124 | JSON Output: {{"category": "LONG_CHAIN"}}
125 | Question: Tell me in short about first moon landing.
126 | JSON Output: {{"category": "LONG_CHAIN"}}
127 | Question: Write a short biography of Karl Marx.
128 | JSON Output: {{"category": "LONG_CHAIN"}}
129 |
130 | Actual Question: {}
131 | Final JSON Output:"""
132 |
--------------------------------------------------------------------------------
/src/route_chain.py:
--------------------------------------------------------------------------------
1 | import json
2 |
3 | from langchain.chains.router import MultiPromptChain
4 | from langchain.chains.llm import LLMChain
5 | from langchain.chains import ConversationChain
6 | from langchain.prompts import PromptTemplate
7 | from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
8 | from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
9 | from langchain.schema import (
10 | AIMessage,
11 | HumanMessage,
12 | SystemMessage
13 | )
14 |
15 | from cove_chains import (
16 | WikiDataCategoryListCOVEChain,
17 | MultiSpanCOVEChain,
18 | LongFormCOVEChain
19 | )
20 | import prompts
21 |
22 |
23 | class RouteCOVEChain(object):
24 | def __init__(self, question, llm, chain_llm, show_intermediate_steps):
25 | self.llm = llm
26 | self.question = question
27 | self.show_intermediate_steps = show_intermediate_steps
28 |
29 | wiki_data_category_list_cove_chain_instance = WikiDataCategoryListCOVEChain(chain_llm)
30 | wiki_data_category_list_cove_chain = wiki_data_category_list_cove_chain_instance()
31 |
32 | multi_span_cove_chain_instance = MultiSpanCOVEChain(chain_llm)
33 | multi_span_cove_chain = multi_span_cove_chain_instance()
34 |
35 | long_form_cove_chain_instance = LongFormCOVEChain(chain_llm)
36 | long_form_cove_chain = long_form_cove_chain_instance()
37 |
38 | self.destination_chains = {
39 | "WIKI_CHAIN": wiki_data_category_list_cove_chain,
40 | "MULTI_CHAIN": multi_span_cove_chain,
41 | "LONG_CHAIN": long_form_cove_chain
42 | }
43 | self.default_chain = ConversationChain(llm=chain_llm, output_key="final_answer")
44 |
45 | def __call__(self):
46 | route_message = [HumanMessage(content=prompts.ROUTER_CHAIN_PROMPT.format(self.question))]
47 | response = self.llm(route_message)
48 | response_str = response.content
49 | try:
50 | chain_dict = json.loads(response_str)
51 | try:
52 | if self.show_intermediate_steps:
53 | print("Chain selected: {}".format(chain_dict["category"]))
54 | return self.destination_chains[chain_dict["category"]]
55 | except KeyError:
56 | if self.show_intermediate_steps:
57 | print("KeyError! Switching back to default chain. `ConversationChain`!")
58 | return self.default_chain
59 | except json.JSONDecodeError:
60 | if self.show_intermediate_steps:
61 | print("JSONDecodeError! Switching back to default chain. `ConversationChain`!")
62 | return self.default_chain
63 |
64 |
65 |
--------------------------------------------------------------------------------