├── Flowchart.png
├── IMG_1851.JPG
├── LICENSE
├── README.md
├── README_CN.md
├── WechatIMG325.jpg
├── commons
│   ├── ThinkerInterface.py
│   ├── __pycache__
│   │   ├── LLMCores.cpython-311.pyc
│   │   ├── MemoryComponents.cpython-311.pyc
│   │   ├── QueryComponents.cpython-311.pyc
│   │   ├── ThinkerComponents.cpython-311.pyc
│   │   └── ThinkerInterface.cpython-311.pyc
│   └── components
│       ├── LLMCores.py
│       ├── MemoryComponents.py
│       ├── QueryComponents.py
│       ├── ThinkComponents.py
│       ├── __pycache__
│       │   ├── LLMCores.cpython-311.pyc
│       │   ├── MemoryComponents.cpython-311.pyc
│       │   ├── QueryComponents.cpython-311.pyc
│       │   └── ThinkComponents.cpython-311.pyc
│       └── mechanics
│           ├── Loaders.py
│           ├── QueryOperations.py
│           └── __pycache__
│               ├── Loaders.cpython-311.pyc
│               └── QueryOperations.cpython-311.pyc
├── decision_maker_logo.png
├── requirements.txt
├── resources
│   ├── .DS_Store
│   ├── prompts
│   │   ├── prompt_03_brief
│   │   ├── prompt_04_predictions
│   │   ├── prompt_05_suggestions
│   │   ├── prompt_06_simulation
│   │   └── prompt_07_finalOutput
│   ├── references
│   │   └── config.json
│   └── remote_services
│       └── api_key
└── user_interface.py
/Flowchart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/Flowchart.png
--------------------------------------------------------------------------------
/IMG_1851.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/IMG_1851.JPG
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Xinyu Bao
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | [Chinese Version](README_CN.md)
7 |
8 | # Thinker - A Decision Making Assistant
9 |
10 | Thinker provides personalized advice using ~~Anthropic's Claude AI~~ GPT based on your unique context.
11 |
12 | This Python script implements a decision making assistant using the Tree of Thoughts prompting technique with ~~the Claude API from Anthropic~~ GPT from OpenAI. It allows the user to iteratively explore potential actions by simulating and discussing interactive advice.
13 |
14 | Find me on Discord! I am sure that people who find this project have the talent to make it better. My Discord: [https://discord.gg/7fY8EQtv](https://discord.gg/7fY8EQtv)
15 |
16 | ## Background
17 | The Tree of Thoughts is a prompting approach that starts broad and gradually narrows down on useful specific ideas through an iterative cycle of generation, simulation, and ranking.
18 |
19 | This program follows that pattern:
20 |
21 | - The user provides a situation
22 | - The LLM generates potential scenarios
23 | - The LLM suggests actions for each scenario
24 | - The actions are scored by feasibility via simulation
25 | - By recursively prompting the LLM to simulate and build on its own ideas, the assistant can rapidly explore the space of options and zoom in on targeted, relevant advice (see the sketch below).
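
Here is a minimal pseudocode sketch of that cycle (illustrative only: the helper functions are placeholders, while `base_tree_size`, `branch_size_factor` and `top_n_advices` are the knobs defined in `resources/references/config.json`):

```
# one pass of the generate -> simulate -> rank cycle
brief = summarize(situation, thoughts, relevant_memories)
predictions = [predict(brief) for _ in range(base_tree_size)]
suggestions = [
    suggest(brief, prediction)
    for prediction in predictions
    for _ in range(branch_size_factor)
]
suggestions = deduplicate(suggestions)                  # prune similar branches
suggestions.sort(key=simulated_success_rate, reverse=True)
advice = suggestions[:top_n_advices]
```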
26 |
27 | Here is a flowchart that explains how the second generation of Thinker works:
28 |
A flowchart of the underlying design of Thinker Gen.2
29 |
30 | ## Features
31 | - Saves conversation history for contextual awareness
32 | - Uses NLP embedding similarity to find relevant past situations
33 | - Generates a multi-step Tree of Thoughts:
34 |   - Situational summary
35 |   - Potential scenarios
36 |   - Suggested actions
37 |   - Simulated evaluations
38 |   - Interactive discussion
39 | - Ranks suggestions by running LLM simulations
40 | - Extracts representative suggestions via clustering
41 | - Allows interactive elaboration on top advice
42 |
43 | ## Usage
44 | To run locally:
45 |
46 | First, you will need an OpenAI API key from openai.com, since the current Thinker program is powered by the GPT models.
47 |
48 | Then, put your API key into the `/resources/remote_services/api_key` file like this:
49 | ```
50 | {
51 | "openai_api_key":"your api key here",
52 | "openai_base_url":"put your base url here if you need a proxy",
53 | "openai_official_api_key":"your api key here"
54 | }
55 | ```
56 | If you need a proxy to access OpenAI services, modify `commons/components/LLMCores.py` as follows:
57 |
58 | Change
59 | ```
60 | api_type: str = 'openai'
61 | ```
62 |
63 | to
64 | ```
65 | api_type: str = 'proxy'
66 | ```
67 |
68 | Finally, run `pip install -r requirements.txt` to install the dependencies, then `python3 user_interface.py` to launch the Gradio demo.
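
If you would rather drive the pipeline from Python than through the UI, here is a minimal sketch (run it from the repo root so the relative resource paths resolve; the sample situation is only an illustration):

```
import asyncio

from commons.ThinkerInterface import Thinker

async def main() -> None:
    thinker = Thinker(situation="I lost my purse.", thoughts="I want to limit the damage.")
    await thinker.query_process()   # brief -> predict -> suggest -> evaluate
    for index, suggestion in enumerate(thinker.the_suggestions):
        print(index, suggestion["move"])
    # elaborate the top-ranked suggestion into concrete steps
    print(await thinker.think_process(selected_suggestion=0))

asyncio.run(main())
```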
69 |
70 | ## Roadmap
71 | Ideas and improvements welcome! Some possibilities:
72 |
73 | - Integrate OpenAI models, or other capable LLMs, as the underlying engine.
74 | - Alternative simulation scoring methods
75 | - Front-end UI for a better experience
76 | - Integration with user calendar and task tracking
77 | - Containerization for easy deployment
78 | - More refined clustering and ranking approaches
79 | - The core prompting framework could also be extended to other use cases like creative ideation, strategic planning, and more. Please open issues or PRs if you build on this project!
80 |
81 | ## Buy me a coffee if you like it :))
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 | ## License
91 | This project is licensed under the MIT license. Feel free to join my efforts in making this app more useful and helpful to people!
92 | Just do not forget to reference this project when needed.
93 |
--------------------------------------------------------------------------------
/README_CN.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | [English](README.md)
7 |
8 | # Thinker - A Decision Making Assistant
9 |
10 | Thinker provides personalized advice using ~~Anthropic's Claude AI~~ GPT based on your unique context.
11 |
12 | This Python program implements a decision making assistant using the Tree of Thoughts prompting technique with ~~the Claude API from Anthropic~~ GPT from OpenAI. It allows the user to iteratively explore potential actions by simulating and discussing interactive advice.
13 |
14 | Feel free to add me on WeChat: `baoxinyu2007`
15 |
16 | ## Background
17 | The Tree of Thoughts is a prompting approach that moves from broad to specific through an iterative cycle of generation, simulation, and ranking.
18 |
19 | This program follows that pattern:
20 |
21 | - The user provides a situation
22 | - The LLM generates potential scenarios
23 | - The LLM suggests actions for each scenario
24 | - The actions are scored by feasibility via simulation
25 | - By recursively prompting the LLM to simulate and build on its own ideas, the assistant can rapidly explore the space of options and zoom in on targeted, relevant advice.
26 |
27 | Here is a flowchart that explains how the second generation of Thinker works:
28 |
A flowchart of the underlying design of Thinker Gen.2
29 |
30 | ## Features
31 | - Saves conversation history for contextual awareness
32 | - Uses NLP embedding similarity to find relevant past situations
33 | - Generates a multi-step Tree of Thoughts:
34 |   - Situational summary
35 |   - Potential scenarios
36 |   - Suggested actions
37 |   - Simulated evaluations
38 |   - Interactive discussion
39 | - Ranks suggestions by running LLM simulations
40 | - Extracts representative suggestions via clustering
41 | - Allows interactive elaboration on top advice
42 |
43 | ## Usage
44 | To run locally:
45 |
46 | First, you will need an OpenAI API key from openai.com, since the current Thinker program is powered by the GPT models.
47 |
48 | Then, put your API key into the `/resources/remote_services/api_key` file like this:
49 |
50 | ```
51 | {
52 | "openai_api_key":"your api key here",
53 | "openai_base_url":"put your base url here if you need a proxy",
54 | "openai_official_api_key":"your api key here"
55 | }
56 | ```
57 |
58 | If you need a proxy to access OpenAI services, modify `commons/components/LLMCores.py` as follows:
59 |
60 | Change
61 | ```
62 | api_type: str = 'openai'
63 | ```
64 |
65 | to
66 | ```
67 | api_type: str = 'proxy'
68 | ```
69 |
70 |
71 | Finally, run `pip install -r requirements.txt` to install the dependencies, then `python3 user_interface.py` to launch the Gradio demo.
72 |
73 | ## Roadmap
74 | Ideas and improvements welcome! Some possibilities:
75 |
76 | - Integrate OpenAI models, or other capable LLMs, as the underlying engine.
77 | - Alternative simulation scoring methods
78 | - Front-end UI for a better experience
79 | - Integration with user calendar and task tracking
80 | - Containerization for easy deployment
81 | - More refined clustering and ranking approaches
82 | - The core prompting framework could also be extended to other use cases like creative ideation, strategic planning, and more. Please open issues or PRs if you build on this project!
83 |
84 | ## Buy me a coffee if you like it :))
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 | ## License
94 | This project is licensed under the MIT license. Feel free to join my efforts in making this app more useful and helpful to people!
95 | Just do not forget to reference this project when needed.
--------------------------------------------------------------------------------
/WechatIMG325.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/WechatIMG325.jpg
--------------------------------------------------------------------------------
/commons/ThinkerInterface.py:
--------------------------------------------------------------------------------
1 | from .components.MemoryComponents import MemoryComponent
2 | from .components.QueryComponents import QueryComponent
3 | from .components.ThinkComponents import StrategyComponent
4 | from .components.mechanics.Loaders import ConfigLoader
5 |
6 | # The `Thinker` class is a component that processes queries, retrieves suggestions, and elaborates on
7 | # selected suggestions using a strategy component.
8 | class Thinker:
9 |
10 | def __init__(
11 | self,
12 | situation: str,
13 | thoughts: str
14 | ) -> None:
15 |
16 | # retrieve configs
17 | self.config: dict = ConfigLoader().configurations()
18 |
19 | # extract relevant memories from the database
20 | self.memory: MemoryComponent = MemoryComponent(situation=situation, thoughts=thoughts)
21 |
22 | # initialize `QueryComponent` with the config
23 | self.query: QueryComponent = QueryComponent(
24 | memory=self.memory,
25 | base_tree_size=self.config['base_tree_size'],
26 | branch_size_factor=self.config['branch_size_factor'],
27 | top_n_advices=self.config['top_n_advices'],
28 | inference_model=self.config['inference_model']
29 | )
30 |
31 | # initialize `StrategyComponent`
32 | self.strategizer: StrategyComponent = StrategyComponent()
33 |
34 | async def query_process(self) -> None:
35 | """
36 | The `query_process` function saves the current situation into memory, performs a series of queries,
37 | and retrieves the suggestions from the `QueryComponent`.
38 | """
39 |
40 | # save the current situation into the memory
41 | await self.memory.store_memory()
42 |
43 | # query
44 | await self.query.brief()
45 | await self.query.predict()
46 | await self.query.suggest()
47 | await self.query.evaluate()
48 |
49 | # retrieve the final outputs of the `QueryComponent`
50 | self.the_suggestions: list = self.query.the_suggestions
51 |
52 | async def think_process(self, selected_suggestion: int) -> str:
53 | """
54 | The `think_process` function takes a selected suggestion, elaborates on it using the `strategizer`,
55 | and returns the suggestion steps.
56 |
57 | :param selected_suggestion: An integer representing the index of the selected suggestion from a list
58 | of suggestions
59 | :type selected_suggestion: int
60 | :return: The function `think_process` returns a string, which is the value of the variable
61 | `suggestion_steps`.
62 | """
63 |
64 | selection: str = self.the_suggestions[selected_suggestion]
65 | suggestion_steps: str = await self.strategizer.elaborate(
66 | suggestion=selection,
67 | query=self.query,
68 | memory=self.memory
69 | )
70 |
71 | return suggestion_steps
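72 |
73 | if __name__ == "__main__":
74 |     # Illustrative sketch (an assumption, not part of the original app): drive
75 |     # the full pipeline once. Because this module uses relative imports, run it
76 |     # from the repo root as `python -m commons.ThinkerInterface`, with a valid
77 |     # API key in resources/remote_services/api_key.
78 |     import asyncio
79 |
80 |     async def demo() -> None:
81 |         thinker = Thinker(situation="I lost my purse.", thoughts="How can I limit the damage?")
82 |         await thinker.query_process()
83 |         for index, suggestion in enumerate(thinker.the_suggestions):
84 |             print(index, suggestion['move'])
85 |         print(await thinker.think_process(selected_suggestion=0))
86 |
87 |     asyncio.run(demo())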
--------------------------------------------------------------------------------
/commons/__pycache__/LLMCores.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/__pycache__/LLMCores.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/__pycache__/MemoryComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/__pycache__/MemoryComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/__pycache__/QueryComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/__pycache__/QueryComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/__pycache__/ThinkerComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/__pycache__/ThinkerComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/__pycache__/ThinkerInterface.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/__pycache__/ThinkerInterface.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/LLMCores.py:
--------------------------------------------------------------------------------
1 | """
2 | The code defines a class called `TextGenerationCore` that initializes an OpenAI instance and
3 | provides a method to get a JSON response from the OpenAI language model. The main function
4 | demonstrates the usage of the class by generating responses to multiple prompts.
5 | """
6 |
7 | from typing import List, Any, Dict
8 | import random
9 | import json
10 | import logging
11 |
12 | import openai
13 | from openai.types.chat import ChatCompletion
14 | from openai import AsyncOpenAI
15 |
16 | import google.generativeai as genai
17 |
18 | class TextGenerationCore:
19 |
20 | def __init__(self, api_type: str = 'openai', model: str = 'gpt-3.5-turbo-1106') -> None:
21 | self.api_type = api_type
22 | self.overall_token_consumption: int = 0
23 | # initialize the variables
24 | match self.api_type:
25 | case 'proxy':
26 | try:
27 | with open('resources/remote_services/api_key', 'r') as config:
28 | config = json.load(config)
29 | self.model = model
30 | self.base_url = config['openai_base_url']
31 | self.api_key = config['openai_api_key']
32 | except IOError as e:
33 | logging.error(f'Error: {e}')
34 | raise
35 |
36 | case 'openai':
37 | try:
38 | with open('resources/remote_services/api_key', 'r') as config:
39 | config = json.load(config)
40 | self.model = model
41 | self.api_key = config['openai_official_api_key']
42 | except IOError as e:
43 | logging.error(f'Error: {e}')
44 | raise
45 |
46 | case 'google':
47 | ...
48 |
49 | def __initialize_OpenAI(self) -> AsyncOpenAI:
50 | """
51 | initialize an OpenAI instance.
52 | """
53 | if self.api_type == 'proxy':
54 | self.client: AsyncOpenAI = AsyncOpenAI(
55 | api_key=self.api_key,
56 | base_url=self.base_url
57 | )
58 | else:
59 | self.client: AsyncOpenAI = AsyncOpenAI(
60 | api_key=self.api_key,
61 | )
62 |
63 | return self.client
64 |
65 | async def __response(
66 | self,
67 | message: list,
68 | response_format: dict = None,
69 | seed: int = None
70 | ) -> ChatCompletion:
71 |
72 | """
73 | The function `__response` is an asynchronous function that takes in a list of messages, a
74 | response format dictionary, and a seed integer as parameters. It uses the OpenAI API to create a
75 | chat completion based on the given parameters and returns the response. If there is a timeout
76 | error, it logs the error message.
77 |
78 | :param message: The `message` parameter is a list of message objects that represent the
79 | conversation between the user and the AI model. Each message object has two properties: `role`
80 | and `content`. The `role` can be either "system", "user", or "assistant", and the `content`
81 |         contains the text of the message.
82 | :type message: list
83 | :param response_format: The `response_format` parameter is an optional dictionary that allows
84 | you to specify the format in which you want the response to be returned. It can include the
85 | following keys:
86 | :type response_format: dict
87 | :param seed: The `seed` parameter is an optional integer that can be used to control the
88 | randomness of the model's response. By setting a specific seed value, you can ensure that the
89 | model generates the same response for the same input message. This can be useful for debugging
90 |         or reproducibility purposes. If no seed is supplied, the calling methods generate a random one.
91 | :type seed: int
92 | :return: a ChatCompletion object.
93 | """
94 |
95 | try:
96 | response: ChatCompletion = await self.client.chat.completions.create(
97 | model=self.model,
98 | messages=message,
99 | response_format=response_format,
100 | seed=seed
101 | )
102 |
103 | remote_system_fingerprint: str = response.system_fingerprint
104 | logging.info(f"Current remote system fingerprint is {remote_system_fingerprint}")
105 |
106 | completion_tokens: int = response.usage.completion_tokens
107 | prompt_tokens: int = response.usage.prompt_tokens
108 | total_tokens: int = response.usage.total_tokens
109 | logging.info(
110 | f"Completion tokens: {completion_tokens}"
111 | f"\nPrompt tokens: {prompt_tokens}"
112 | f"\nTotal tokens: {total_tokens}"
113 | )
114 |
115 | # update the `self.overall_token_consumption`
116 | self.overall_token_consumption += total_tokens
117 | logging.info(f"Overall token consumption by far: {self.overall_token_consumption}")
118 |
119 | except openai.APITimeoutError as e:
120 | logging.error(f"Timeout error: {e}")
121 |
122 | except TimeoutError as e:
123 | logging.error(f"Timeout error: {e}")
124 |
125 | except openai.APIConnectionError as e:
126 | logging.error(f"Connection error: {e}")
127 |
128 | else:
129 | return response
130 |
131 | async def get_chat_response_OpenAI(self, message: str, seed: int = None) -> ChatCompletion:
132 |
133 |         # we randomize the seed here instead of in the parameter default
134 |         # to prevent concurrent calls from ending up with the same seed
135 | if seed is None:
136 | seed = random.randint(a=0, b=999999)
137 |
138 | # OpenAI instance initiated
139 | self.__initialize_OpenAI()
140 |
141 | # log the generation specifications
142 | logging.info(f"Seed for generation: {seed}")
143 | logging.info("Generation mode: generic / all at once")
144 |
145 | # initiate an empty chat history for stateful conversation
146 | self.chat_message: list = []
147 | self.chat_message.append(
148 | {"role":"user", "content": str(message)}
149 | )
150 |
151 | response: ChatCompletion = await self.__response(
152 | message=self.chat_message,
153 | seed=seed
154 | )
155 |
156 | # append the assistant response to the `self.chat_message`
157 | self.chat_message.append(
158 | {"role":"assistant", "content":response.choices[0].message.content}
159 | )
160 |
161 | return response
162 |
163 | async def get_json_response_OpenAI(self, message: str, seed: int = None) -> ChatCompletion:
164 |
165 |         # we randomize the seed here instead of in the parameter default
166 |         # to prevent concurrent calls from ending up with the same seed
167 | if seed is None:
168 | seed = random.randint(a=0, b=999999)
169 |
170 | # OpenAI instance initiated
171 | self.__initialize_OpenAI()
172 |
173 | # response format
174 | response_format: dict = {
175 | "type": "json_object"
176 | }
177 |
178 | # log the generation specifications
179 | logging.info(f"Seed for generation: {seed}")
180 | logging.info("Generation mode: json")
181 |
182 |         message: List[Dict[str, str]] = [
183 | {"role":"user", "content": message}
184 | ]
185 |
186 | # get the response from the LLM
187 | response: ChatCompletion = await self.__response(
188 | message=message,
189 | response_format=response_format,
190 | seed=seed
191 | )
192 |
193 | return response
194 |
195 | if __name__ == "__main__":
196 |
197 | import asyncio
198 |
199 | logging.basicConfig(level=logging.INFO)
200 |
201 | async def main():
202 |
203 | TGC = TextGenerationCore()
204 |
205 | situation = "I lost my purse."
206 | prompt: str = f"""
207 | YOUR RESPONSE SHOULD BE IN JSON ONLY.
208 | [Task_01]:"breakdown the [situation] and [goal], then analyse it. Print it out in JSON."
209 | [Task_02]:"Give 5 [inference] on how the [situation] is going to develop. Then, provide 5 [prediction]. Give a score by percentage for each [predicted_scenario] on the probability. Print it out in JSON."
210 | [Task_03]:"Give 5 [suggestion] based on [Task_01] and [predicted_scenario]. Also, for each [suggestion], evaluate the pros and cons, then give a [recommendation_score] by percentage on how much that you recommend to execute. Print it out in JSON."
211 | [situation]:{situation}
212 | """
213 |
214 |         # `get_json_response_OpenAI` takes a prompt string and wraps it into
215 |         # a chat message internally, so each task below is an independent
216 |         # sample of the same prompt
217 |         prompts: list = [prompt, prompt, prompt]
218 |
219 |         # run the three requests concurrently
220 |         tasks = [TGC.get_json_response_OpenAI(message=p) for p in prompts]
221 |
222 | results = await asyncio.gather(*tasks)
223 |
224 | for result in results:
225 | print(result.choices[0].message.content)
226 |
227 | asyncio.run(main())
--------------------------------------------------------------------------------
/commons/components/MemoryComponents.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import uuid
3 | import asyncio
4 |
5 | import chromadb
6 |
7 | from .LLMCores import *
8 |
9 | """
10 | [ ] Write down daily events, and then save them into the vector database for future retrieval
11 | [ ] Query the current event, as well as saving the queried event to the vector database
12 | [ ] Automatically dissect the current events, if multiple events are entered
13 | [ ] Feedback on events' eventual outcomes and records of the suggestions for the previous queried events
14 | """
15 |
16 | # The `MemoryComponent` class is a Python class that represents a memory component, which allows for
17 | # recording, storing, retrieving, and deleting memories based on a given situation.
18 | class MemoryComponent:
19 |
20 | def __init__(self, situation: str, thoughts: str) -> None:
21 | """
22 | The above function is the initialization method for a class, which sets the initial values for the
23 | situation and thoughts attributes, and also initializes the database and collection for storing
24 | data.
25 |
26 | :param situation: The "situation" parameter is a string that represents the current situation or
27 | context in which the code is being executed. It could be any relevant information or description of
28 | the current state of the program
29 | :type situation: str
30 | :param thoughts: The "thoughts" parameter in the `__init__` method is a string that represents the
31 | initial thoughts or reflections of the object. It is used to store the thoughts or reflections
32 | related to the situation that the object is in
33 | :type thoughts: str
34 | """
35 | # initiated right away, the attributes of a memory
36 | self.situation = situation
37 | self.thoughts = thoughts
38 | self.historical_events: chromadb.QueryResult = None
39 | # added later on
40 | self.selected_recommendation: str = ""
41 | self.eventual_outcome: str = ""
42 |
43 | # initiate the database and the collection
44 | self.client = chromadb.PersistentClient('resources/VectorDB')
45 | self.collection = self.client.create_collection(
46 | name="ThinkerMemory",
47 | get_or_create=True
48 | )
49 |
50 | async def record_selected_recommendation(self, recommendation: str) -> None:
51 | """
52 | The function `record_selected_recommendation` assigns the value of the `recommendation` parameter to
53 | the `selected_recommendation` attribute of the object.
54 |
55 | :param recommendation: The parameter "recommendation" is a string that represents the recommendation
56 | that is being recorded
57 | :type recommendation: str
58 | """
59 |         self.selected_recommendation = recommendation
60 |
61 | async def record_eventual_outcome(self, memory_uuid: str, eventual_outcome: str) -> None:
62 | """
63 | The function `record_eventual_outcome` sets the `eventual_outcome` attribute and then calls the
64 | `__edit_memory` method.
65 |
66 | :param memory_uuid: A unique identifier for a memory
67 | :type memory_uuid: str
68 | :param eventual_outcome: The `eventual_outcome` parameter is a string that represents the eventual
69 | outcome of an event or process. It is used to update the `eventual_outcome` attribute of the object
70 | :type eventual_outcome: str
71 | :return: a coroutine object.
72 | """
73 | self.eventual_outcome = eventual_outcome
74 |
75 | return await self.__edit_memory(memory_uuid=memory_uuid)
76 |
77 | async def store_memory(self) -> None:
78 | """
79 | The `store_memory` function adds an event to the memory collection, including the situation,
80 | timestamp, selected recommendation, and an empty field for eventual outcome.
81 | """
82 | # add the event to the memory
83 | self.collection.add(
84 | ids=[str(uuid.uuid4())],
85 | documents=[self.situation],
86 | metadatas=[
87 | {
88 | "timestamp":str(datetime.datetime.now()),
89 | "selected_recommendation": self.selected_recommendation,
90 | "eventual_outcome": "", # this needs to be handled differently, as the eventual outcome usually won't be recorded along with the other threes
91 | }
92 | ]
93 | )
94 | logging.info("Memory stored")
95 |
96 | async def get_all_memories(self) -> list:
97 | """
98 | The function `get_all_memories` returns all the memories from a collection.
99 | :return: a list of all the memories.
100 | """
101 | return self.collection.get()
102 |
103 | async def delete_memories(self, ids: list) -> None:
104 | """
105 |         The function `delete_memories` deletes memories from the collection based on the given list of IDs.
106 |
107 | :param ids: A list of memory IDs that need to be deleted
108 | :type ids: list
109 | """
110 | self.collection.delete(
111 | ids=ids
112 | )
113 | logging.info("Memories deleted.")
114 |
115 | async def __edit_memory(self, memory_uuid: str) -> None:
116 | """
117 | The function edits the eventual outcome of an event in the memory collection.
118 |
119 | :param memory_uuid: A unique identifier for the memory that needs to be edited
120 | :type memory_uuid: str
121 | """
122 | # edit the event in the memory
123 | if self.eventual_outcome:
124 | self.collection.update(
125 | ids=[memory_uuid],
126 | metadatas=[
127 | {"eventual_outcome": self.eventual_outcome}
128 | ]
129 | )
130 | logging.info("Eventual outcome is recorded.")
131 | else:
132 | logging.info("Eventual outcome is not recorded yet. Edit operation aborted.")
133 |
134 | async def __retrieve_memory_organizer(self, event_content: str, metadata: dict) -> str:
135 | return "event_content: " + event_content + "\n" + "eventual_outcome: " + metadata['eventual_outcome'] + "\n" + "selected_recommendation: " + metadata['selected_recommendation'] + "\n" + "timestamp: " + metadata['timestamp']
136 |
137 |     async def retrieve_memory(self) -> list:
138 | """
139 | The function retrieves the most relevant historical events based on a given situation.
140 | :return: The function `retrieve_memory` returns a list of historical events.
141 | """
142 | # get the most relevant historical events
143 | historical_events: chromadb.QueryResult = self.collection.query(
144 | query_texts=[self.situation],
145 | n_results=10
146 | )
147 | logging.info("Historical events retrieved.")
148 | logging.info(f"Historical events' distances: {historical_events['distances']}")
149 | logging.info(f"Historical events: {historical_events['documents']}")
150 |
151 | # organize the `historical_events`
152 | tasks = [
153 | self.__retrieve_memory_organizer(
154 | event_content,
155 | metadata
156 | ) for event_content, metadata in zip(historical_events['documents'][0], historical_events['metadatas'][0])
157 | ]
158 | organized_historical_events: list = await asyncio.gather(*tasks)
159 |
160 | return organized_historical_events
161 |
162 | class FeedbackComponent:
163 |
164 | def __init__(self) -> None:
165 | pass
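166 |
167 | if __name__ == "__main__":
168 |     # Illustrative smoke test (an assumption, not part of the original app):
169 |     # store one memory, list everything back, then clean up. Needs a writable
170 |     # resources/VectorDB directory; run from the repo root as
171 |     # `python -m commons.components.MemoryComponents`.
172 |     logging.basicConfig(level=logging.INFO)
173 |
174 |     async def demo() -> None:
175 |         memory = MemoryComponent(situation="I lost my purse.", thoughts="How can I limit the damage?")
176 |         await memory.store_memory()
177 |         memories = await memory.get_all_memories()
178 |         print(memories['documents'])
179 |         await memory.delete_memories(ids=memories['ids'])
180 |
181 |     asyncio.run(demo())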
--------------------------------------------------------------------------------
/commons/components/QueryComponents.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | import json
3 | import re
4 |
5 | from .MemoryComponents import *
6 | from .mechanics.QueryOperations import QueryOperation
7 | from .LLMCores import *
8 |
9 |
10 | class QueryComponent:
11 |
12 | def __init__(
13 | self,
14 | memory: MemoryComponent,
15 | base_tree_size: int = 5,
16 | branch_size_factor: int = 5,
17 | top_n_advices: int = 5,
18 | inference_model: str = "gpt-3.5-turbo-1106",
19 | api_type: str = "openai",
20 | ) -> None:
21 |
22 | # initialized variables
23 | self.memory: MemoryComponent = memory
24 |         self.base_tree_size: int = base_tree_size # how many predictions to generate for the query
25 |         self.branch_size_factor: int = branch_size_factor # how many suggestions to generate for each prediction
26 |         self.top_n_advices: int = top_n_advices # how many top suggestions to keep after ranking
27 | logging.info(f'QueryComponent initialized with base_tree_size: {base_tree_size}')
28 | logging.info(f'QueryComponent initialized with branch_size_factor: {branch_size_factor}')
29 | logging.info(f'QueryComponent initialized with top_n_advices: {top_n_advices}')
30 |
31 | # initialize the `text_generator`
32 | self.text_generator: TextGenerationCore = TextGenerationCore(api_type=api_type, model=inference_model)
33 |
34 | # other variables that we will get later on
35 | self.the_brief: dict = {}
36 | self.the_predictions: list = []
37 | self.the_suggestions: list = []
38 | self.the_final_output: list = []
39 |
40 | async def brief(self) -> None:
41 | """
42 | The `brief` function generates a brief prompt by combining various pieces of information and
43 | sends it to the OpenAI text generator to generate a brief.
44 | """
45 | with open('resources/prompts/prompt_03_brief', 'r') as prompt:
46 | prompt: str = prompt.read()
47 |
48 | # retrieve the historical events
49 | historical_events: list = await self.memory.retrieve_memory()
50 |
51 | # construct the prompt to send
52 | situation = "[situation]:" + self.memory.situation + "\n"
53 | context = "[context]:" + str(historical_events) + "\n"
54 | thoughts = "\n[my thoughts to the situation]:" + self.memory.thoughts + "\n"
55 | info_lookup = "[info_lookup]:" + "Current date is " + str(datetime.datetime.now()) + "\n"
56 |
57 | brief_prompt: str = prompt + thoughts + situation + info_lookup + context
58 | logging.info("Brief prompt generated.")
59 | logging.debug(brief_prompt)
60 |
61 |         # get the brief; retry until the model returns parseable JSON
62 |         while True:
63 |             brief: ChatCompletion = await self.text_generator.get_json_response_OpenAI(message=brief_prompt)
64 |             if brief is None:
65 |                 continue
66 |             logging.info("Brief generated.")
67 |
68 |             try:
69 |                 # record the brief into the object instance
70 |                 self.the_brief = json.loads(brief.choices[0].message.content)
71 |             except json.JSONDecodeError as e:
72 |                 logging.error(f"Brief generation failed due to {e}. Retrying.")
73 |                 continue
74 |             logging.info("Brief recorded.")
75 |             break
76 |
77 | async def predict(self) -> None:
78 | """
79 | The `predict` function reads a prompt from a file, appends a summary to it, generates a
80 | prediction using OpenAI's text generator, and records the prediction in the object instance.
81 | """
82 | with open('resources/prompts/prompt_04_predictions', 'r') as prompt:
83 | prompt: str = prompt.read()
84 |
85 | # construct the prompt to send
86 | prompt = prompt + "\n" + "[summary]:" + str(self.the_brief)
87 |
88 | # get the predictions based on the `base_tree_size`
89 | # original seed: seed=458282
90 | tasks = [self.text_generator.get_json_response_OpenAI(message=prompt) for i in range(self.base_tree_size)]
91 | predictions = await asyncio.gather(*tasks)
92 | logging.info("Predictions generated. ")
93 |
94 | for prediction in predictions:
95 |
96 |             try:
97 |                 # skip this prediction and move on if the API request failed
98 |                 if prediction is None:
99 |                     continue
100 |
101 |                 content: str = prediction.choices[0].message.content
102 |
103 |                 # the model sometimes wraps its JSON in a ```json fence; unwrap it
104 |                 matches = re.findall(r"```json\n([\s\S]*?)\n```", content)
105 |
106 |                 if matches:
107 |                     # record the prediction into the object instance
108 |                     self.the_predictions.append(json.loads(matches[0]))
109 |                 else:
110 |                     # fall back to parsing the content directly
111 |                     self.the_predictions.append(json.loads(content))
112 |                 logging.info("Prediction recorded.")
113 |
114 |             except json.JSONDecodeError as e:
115 |                 logging.error(f"Prediction generation failed due to {e}. Retrying.")
116 |                 valid_prediction = False
117 |
118 |                 # retry until the model returns parseable JSON
119 |                 while not valid_prediction:
120 |                     prediction = await self.text_generator.get_json_response_OpenAI(message=prompt)
121 |                     if prediction is None:
122 |                         continue
123 |
124 |                     content = prediction.choices[0].message.content
125 |                     matches = re.findall(r"```json\n([\s\S]*?)\n```", content)
126 |
127 |                     try:
128 |                         self.the_predictions.append(json.loads(matches[0] if matches else content))
129 |                         valid_prediction = True
130 |                         logging.info("Retry succeeded. Prediction recorded.")
131 |                     except json.JSONDecodeError:
132 |                         continue
133 |
134 | async def suggest(self) -> None:
135 | # load the prompt
136 | with open('resources/prompts/prompt_05_suggestions', 'r') as prompt:
137 | prompt: str = prompt.read()
138 |
139 | # construct the prompt to send
140 | prompts: list = []
141 | counter: int = 0
142 | for prediction in self.the_predictions:
143 | iterative_prompt = prompt + "[predictions]:" + str(prediction) + "\n" + "[summary]:" + str(self.the_brief) + "\n" + "[thoughts]:" + self.memory.thoughts + "\n"
144 | prompts.append(iterative_prompt)
145 | counter += 1
146 | logging.info(f"Suggestions prompt generated ({counter}/{len(self.the_predictions)})")
147 | logging.debug(f"Suggestions prompt preview: {iterative_prompt}")
148 |
149 | # get the suggestions
150 |         # schedule `branch_size_factor` generations for each prompt
151 | tasks = [self.text_generator.get_json_response_OpenAI(message=prompt) for prompt in prompts for i in range(self.branch_size_factor)]
152 | results: list = await asyncio.gather(*tasks)
153 |
154 | # post-process the results
155 | for result in results:
156 |
157 |             # skip this result and move on if the API request failed
158 | if result is None:
159 | continue
160 |
161 | content = result.choices[0].message.content
162 |
163 | # Use a regular expression to find JSON within triple backticks
164 | matches = re.findall(r"```json\n([\s\S]*?)\n```", content)
165 | try:
166 | if matches:
167 | # Take the first match as the JSON content
168 | json_str = matches[0]
169 | json_result: dict = json.loads(json_str)
170 | else:
171 | # Fallback to parsing the content directly if no markdown was found
172 | json_result: dict = json.loads(content)
173 |
174 | # post-process the `success_rate_in_percentage`
175 | if isinstance(json_result['success_rate_in_percentage'], str) and json_result['success_rate_in_percentage'].endswith('%'):
176 | json_result['success_rate_in_percentage'] = int(json_result['success_rate_in_percentage'].rstrip('%'))
177 | logging.info(f"Success rate is in percentage: {json_result['success_rate_in_percentage']}, converted")
178 |
179 |                 elif isinstance(json_result['success_rate_in_percentage'], int):
180 |                     logging.info(f"Success rate is already an integer: {json_result['success_rate_in_percentage']}")
181 |
182 |
183 | else:
184 | json_result['success_rate_in_percentage'] = int(json_result['success_rate_in_percentage'])
185 | logging.info(f"Success rate is not an integer: {json_result['success_rate_in_percentage']}, converted.")
186 |
187 | # Append the parsed JSON to the suggestions list
188 | self.the_suggestions.append(json_result)
189 |
190 | except json.JSONDecodeError as e:
191 | logging.error(f"Could not decode the JSON response: {e}")
192 | logging.error(f"Problematic content: {content}")
193 |
194 | except Exception as e:
195 | logging.error(f"An unexpected error occurred while parsing the JSON: {e}")
196 |
197 | # remove the duplicated branches
198 | self.the_suggestions = QueryOperation(query_object=self.the_suggestions).prune_branches(key='move')
199 |
205 | async def evaluate(self):
206 | """
207 | Evaluate the suggestions and decide whether to keep making branches or not.
208 | """
209 |
210 | self.the_suggestions.sort(key=lambda x: x['success_rate_in_percentage'], reverse=True)
211 |
212 | # we keep the `top_n_advices`
213 | self.the_suggestions = self.the_suggestions[:self.top_n_advices]
214 | logging.info(f"The top {self.top_n_advices} suggestions are kept.")
--------------------------------------------------------------------------------
/commons/components/ThinkComponents.py:
--------------------------------------------------------------------------------
1 | from .LLMCores import TextGenerationCore
2 | from .mechanics.Loaders import PromptLoader
3 | from .QueryComponents import QueryComponent
4 | from .MemoryComponents import MemoryComponent
5 |
6 | class StrategyComponent:
7 |
8 | def __init__(self) -> None:
9 | self.text_generation_core: TextGenerationCore = TextGenerationCore()
10 |
11 | async def elaborate(self, suggestion: str, query: QueryComponent, memory: MemoryComponent):
12 | """
13 | This component is used to expand the suggestions into concrete steps.
14 | """
15 | prompt: str = PromptLoader("prompt_07_finalOutput").prompt_constructor(
16 | summary=query.the_brief,
17 | thoughts=memory.thoughts,
18 | suggestion=suggestion,
19 | )
20 |
21 | response = await self.text_generation_core.get_chat_response_OpenAI(message=prompt)
22 |
23 | return response.choices[0].message.content
--------------------------------------------------------------------------------
/commons/components/__pycache__/LLMCores.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/__pycache__/LLMCores.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/__pycache__/MemoryComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/__pycache__/MemoryComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/__pycache__/QueryComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/__pycache__/QueryComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/__pycache__/ThinkComponents.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/__pycache__/ThinkComponents.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/mechanics/Loaders.py:
--------------------------------------------------------------------------------
1 | # The code defines two classes, `ConfigLoader` and `PromptLoader`, which are used to load
2 | # configurations from a JSON file and prompts from text files, respectively.
3 | import json
4 |
5 | # The `ConfigLoader` class loads a JSON configuration file and provides a method to access the loaded
6 | # configurations.
7 | class ConfigLoader:
8 |
9 | def __init__(self) -> None:
10 | self.config_path: str = "resources/references/config.json"
11 |
12 | with open(self.config_path, "r") as file:
13 | self.config: dict = json.load(file)
14 |
15 | def configurations(self) -> dict:
16 | return self.config
17 |
18 |
19 | # The `PromptLoader` class is a Python class that loads prompts from a file and allows for the
20 | # construction of prompts with provided arguments.
21 | class PromptLoader:
22 |
23 | def __init__(self, prompt_name: str) -> None:
24 | self.prompts_path: str = "resources/prompts/"
25 | self.prompt_name: str = prompt_name
26 | self.prompt: str = self.__load_prompt()
27 |
28 | def __repr__(self) -> str:
29 | return self.prompt
30 |
31 | def __len__(self) -> int:
32 | return len(self.prompt)
33 |
34 |     def __load_prompt(self) -> str:
35 |         # read the prompt template from resources/prompts/ as a local file handle
36 |         with open(self.prompts_path + self.prompt_name, "r") as file:
37 |             return file.read()
38 |
39 | def prompt_constructor(self, *args, **kwargs) -> str:
40 | """
41 | The function `prompt_constructor` takes in arguments and keyword arguments, constructs a prompt
42 | using a predefined format, and returns the constructed prompt as a string.
43 | :return: The function `prompt_constructor` returns a string that is constructed using the
44 | `self.prompt` attribute, which is a string format template. The template is formatted with the
45 | `args` and `kwargs` arguments passed to the function.
46 | """
47 | constructed_prompt: str = self.prompt.format(*args, **kwargs)
48 |
49 | return constructed_prompt
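50 |
51 | if __name__ == "__main__":
52 |     # Illustrative sketch (an assumption, not part of the original app): load the
53 |     # config and fill a prompt template; run from the repo root so the relative
54 |     # paths resolve.
55 |     print(ConfigLoader().configurations()['inference_model'])
56 |
57 |     loader = PromptLoader("prompt_07_finalOutput")
58 |     print(loader.prompt_constructor(
59 |         summary="a brief of the situation",
60 |         thoughts="my goal",
61 |         suggestion="a suggested move",
62 |     ))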
--------------------------------------------------------------------------------
/commons/components/mechanics/QueryOperations.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from sklearn.feature_extraction.text import TfidfVectorizer
4 | from sklearn.metrics.pairwise import cosine_similarity
5 |
6 | class QueryOperation:
7 |
8 | def __init__(self, query_object: list) -> None:
9 | self.query_object = query_object
10 |
11 | def __tfidf_cleaner(self, documents, key: str) -> list:
12 | """
13 | The function `__tfidf_cleaner` takes a list of documents, calculates the TF-IDF matrix and
14 | cosine similarity matrix, filters out similar documents based on a similarity threshold, and
15 | returns a list of unique documents.
16 |
17 | :param documents: The `documents` parameter is a list of strings, where each string represents a
18 | document
19 | """
20 | moves: list = [document[key] for document in documents]
21 |
22 | vectorizer = TfidfVectorizer()
23 | tfidf_matrix = vectorizer.fit_transform(moves)
24 |
25 | # Calculate the cosine similarity matrix
26 | cosine_sim = cosine_similarity(tfidf_matrix, tfidf_matrix)
27 |
28 | # Set a threshold for filtering similar documents (you can adjust this)
29 | similarity_threshold = 0.8
30 |
31 | # Filter out similar documents based on the cosine similarity
32 | unique_doc_indices = []
33 | for i in range(cosine_sim.shape[0]):
34 | if all(cosine_sim[i, j] < similarity_threshold or i == j for j in unique_doc_indices):
35 | unique_doc_indices.append(i)
36 |
37 | # The resulting list of unique documents
38 | unique_moves: list = [moves[i] for i in unique_doc_indices]
39 |
40 | # Retrieve the original dictionaries corresponding to the unique moves
41 | unique_documents: list = [document for document in documents if document[key] in unique_moves]
42 | logging.info(f"The number of queries after cleaning: {len(unique_documents)}")
43 |
44 | return unique_documents
45 |
46 | def prune_branches(self, key: str) -> list:
47 | """
48 | The `prune_branches` function removes duplicate entries from a list of queries based on a
49 | specified key and then applies a cleaning process using the `__tfidf_cleaner` method.
50 |
51 | :param key: The `key` parameter is a string that represents the key in the query object
52 | dictionary that will be used to determine uniqueness
53 | :type key: str
54 | :return: the result of calling the `__tfidf_cleaner` method with the `temporary_query_object` as
55 | the `documents` parameter.
56 | """
57 |
58 | # the `window` is responsible for collecting unique moves
59 | window: set = set()
60 |
61 | # for storing the processed queries
62 | temporary_query_object: list = []
63 |
64 | # strip away the repeated `move`
65 | for query in self.query_object:
66 | if query[key] not in window:
67 | window.add(query[key])
68 | temporary_query_object.append(query)
69 |
70 | return self.__tfidf_cleaner(documents=temporary_query_object, key=key)
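71 |
72 | if __name__ == "__main__":
73 |     # Illustrative sketch (an assumption, not part of the original app): prune
74 |     # exact and near-duplicate suggestions keyed on 'move', mirroring how
75 |     # QueryComponent.suggest() uses this class.
76 |     logging.basicConfig(level=logging.INFO)
77 |
78 |     suggestions: list = [
79 |         {"move": "Call the bank to freeze your cards"},
80 |         {"move": "Call the bank to freeze your cards"},   # exact duplicate
81 |         {"move": "Call your bank and freeze the cards"},  # near duplicate
82 |         {"move": "Retrace your steps to find the purse"},
83 |     ]
84 |
85 |     for suggestion in QueryOperation(query_object=suggestions).prune_branches(key='move'):
86 |         print(suggestion['move'])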
--------------------------------------------------------------------------------
/commons/components/mechanics/__pycache__/Loaders.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/mechanics/__pycache__/Loaders.cpython-311.pyc
--------------------------------------------------------------------------------
/commons/components/mechanics/__pycache__/QueryOperations.cpython-311.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/commons/components/mechanics/__pycache__/QueryOperations.cpython-311.pyc
--------------------------------------------------------------------------------
/decision_maker_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/decision_maker_logo.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | openai==1.3.9
2 | google-generativeai==0.3.1
3 | chromadb==0.4.15
4 | gradio==3.44.4
5 | scikit-learn  # used by commons/components/mechanics/QueryOperations.py
--------------------------------------------------------------------------------
/resources/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AspadaX/Thinker/1d555897a8b2b77e7362fbbce6199d026c97f853/resources/.DS_Store
--------------------------------------------------------------------------------
/resources/prompts/prompt_03_brief:
--------------------------------------------------------------------------------
1 | [instruction]:"Write a high level abstraction from [situation], [my thoughts to the situation], [context] and [info_lookup]. The 'event_content' in [context] indicates the content of a previous event, the 'eventual_outcome' indicates the final outcome of that event, the 'selected_recommendation' indicates the recommendation selected by me in the previous query and the 'timestamp' indicates when the event happened. The high level abstraction should be informative to a consultant who helps me to make a decision."
2 | [output_format]:"json"
--------------------------------------------------------------------------------
/resources/prompts/prompt_04_predictions:
--------------------------------------------------------------------------------
1 | [instruction]:"Based on the [summary], simulate 5 potential scenarios on how the situation is going to develop. Each potential scenario should come with a score of probability in percentage. Respond in json."
2 |
--------------------------------------------------------------------------------
/resources/prompts/prompt_05_suggestions:
--------------------------------------------------------------------------------
1 | [instruction]:"Give a suggested next move based on [summary], [thoughts], [predictions] and the Art of War. respond in json. The key values for the move are: 'move', 'rationale', 'success_rate_in_percentage'"
2 |
--------------------------------------------------------------------------------
/resources/prompts/prompt_06_simulation:
--------------------------------------------------------------------------------
1 | [instruction]:"Simulate how the situation in [summary] is going to develop if [suggested_next_move] is taken. Consider [summary] and [thoughts], then return me how much likely that you recommend the [suggested_next_move] by percentage. Your response should be like: Recommendation Score = "the percentage"."
2 |
--------------------------------------------------------------------------------
/resources/prompts/prompt_07_finalOutput:
--------------------------------------------------------------------------------
1 | [instruction]:"Act as a master of the Art of War. Consider [summary] and [thoughts]. Give me an elaborated version of [suggestion] to help me out and include at least 5 feasible steps to reach my [thoughts]."
2 | [summary]:{summary}
3 | [thoughts]:{thoughts}
4 | [suggestion]:{suggestion}
--------------------------------------------------------------------------------
/resources/references/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "base_tree_size": 5,
3 | "branch_size_factor": 5,
4 | "top_n_advices": 5,
5 | "inference_model": "gpt-3.5-turbo-1106"
6 | }
--------------------------------------------------------------------------------
/resources/remote_services/api_key:
--------------------------------------------------------------------------------
1 | {
2 | "openai_api_key":"",
3 | "openai_base_url":"",
4 | "openai_official_api_key":""
5 | }
--------------------------------------------------------------------------------
/user_interface.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 |
3 | from commons.ThinkerInterface import Thinker
4 |
5 |
6 | class Wrapper:
7 |
8 | def __init__(self) -> None:
9 | pass
10 |
11 | async def thinker_wrapper_function(self, situation: str, thoughts: str) -> list:
12 | """
13 | This function is an asynchronous function used to encapsulate the use of the Thinker class.
14 |
15 | Parameters:
16 | situation (str): Represents the current situation.
17 | thoughts (str): Represents the current thoughts.
18 |
19 | Returns:
20 | list: A list of suggestions generated by the Thinker class.
21 | """
22 |
23 | self.thinker: Thinker = Thinker(
24 | situation=situation,
25 | thoughts=thoughts
26 | )
27 |
28 | await self.thinker.query_process()
29 |
30 | return self.thinker.the_suggestions
31 |
32 | async def elaboration_wrapper_function(self, selected_suggestion: int) -> str:
33 | """
34 | The function `elaboration_wrapper_function` takes in a selected suggestion and returns the
35 | elaboration generated by the `think_process` method of the `thinker` object.
36 |
37 | :param selected_suggestion: The `selected_suggestion` parameter is an integer that represents
38 | the index of the suggestion that the user has selected. It is used as an input to the
39 | `think_process` method of the `thinker` object
40 | :type selected_suggestion: int
41 | :return: a string, which is the elaboration generated by the `think_process` method of the
42 | `thinker` object.
43 | """
44 | elaboration: str = await self.thinker.think_process(selected_suggestion=selected_suggestion)
45 |
46 | return elaboration
47 |
48 |
49 | if __name__ == '__main__':
50 |
51 | import logging
52 | import os
53 |
54 | logging.basicConfig(level=logging.INFO)
55 | os.environ['TOKENIZERS_PARALLELISM'] = "false"
56 |
57 | wrapper: Wrapper = Wrapper()
58 |
59 | interface: gr.Interface = gr.Interface(
60 | fn=wrapper.thinker_wrapper_function,
61 |         inputs=[
62 |             gr.components.Textbox(
63 |                 label="Situation",
64 |                 placeholder="Enter your situation here",
65 |                 value="I am feeling tired"
66 |             ),
67 |             gr.components.Textbox(
68 |                 label="Thoughts",
69 |                 placeholder="Enter your thoughts here",
70 |                 value="I am feeling tired"
71 |             )
72 |         ],
73 | outputs=[
74 | gr.components.Textbox(label="Suggestions"),
75 | gr.components.Textbox(label="Elaboration")
76 | ],
77 | title="Thinker",
78 | description="A tool to help you think and make decisions",
79 | article="This tool is a wrapper for the Thinker class, which is a class that helps you think and make decisions.",
80 | examples=[
81 | ["I am feeling tired", "I am feeling tired"],
82 | ]
83 | )
84 |
85 | interface.launch()
--------------------------------------------------------------------------------