├── LICENSE ├── README.md └── tools ├── discord-webhook ├── README.md └── discord-webhook.py ├── gpt4-memory-mimic ├── README.md └── gpt4-memory-mimic.py ├── met-tables ├── README.md └── met-tables.py └── telegram-message-sender ├── README.md └── telegram-message-sender.py /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 MH 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # Memory Enhancement and Communication Tools for Open WebUI 3 | 4 | ## Overview 5 | These tools are designed to enhance the functionality and interaction experience of Large Language Models (LLMs) within Open WebUI. 
They offer powerful memory management capabilities, allowing you to organize, recall, and manage data, as well as automated communication through Telegram and Discord webhooks. Whether you're looking to optimize your LLM's memory or automate notifications, these tools integrate seamlessly into your Open WebUI environment. 6 | 7 | Stay tuned—new tools will be added regularly! 8 | 9 | Have an idea for a tool? We'd love to hear from you! Please share your suggestions or feature requests in the [issues](https://github.com/soymh/open-webui-stuff/issues) section. Your input is highly appreciated! 10 | 11 | ## Tools 12 | 13 | ### 1. **GPT4 Memory Mimic Tool** (Version 1.5.0) 14 | 15 | #### Features 16 | - **Memory Management**: Add, update, delete, and retrieve memory entries. 17 | - **Tagging System**: Organize memories with tags such as "personal," "work," "education," etc. 18 | - **Debugging Support**: Enable debug mode for troubleshooting. 19 | - **File Management**: Create, switch, list, and delete memory files. 20 | - **Download Functionality**: Download specific memory files or all memories as a tarball. 21 | 22 | ### 2. **Memory Enhancement Tool (MET)** (Version 0.1.0) 23 | 24 | #### Features 25 | - **Recall Memories**: Retrieve all stored memories for the user. 26 | - **Add Memory**: Add new entries to the user's memory vault. 27 | - **User Identification**: Requires user ID for operations, integrated with OpenWebUI. 28 | 29 | ### 3. **Telegram Bot Message Sender Tool** (Version 0.0.1) 30 | 31 | #### Features 32 | - **Message Sending**: Send messages to a specified Telegram user via bot. 33 | - **Proxy Support**: Optional proxy settings for routing messages securely. 34 | 35 | ### 4. **Discord Webhook Tool** (Version 0.0.1) 36 | 37 | #### Features 38 | - **Message Sending**: Send messages to a specified Discord channel using a webhook. 39 | - **Proxy Support**: Optional proxy settings for routing messages securely. 
40 | 41 | ## Installation 42 | To install each tool, refer to its specific README file and follow the installation link provided there. 43 | 44 | ### Installation Instructions for Communication Tools: 45 | - **Telegram Bot Message Sender Tool**: Refer to the README in the repository for step-by-step installation. 46 | - **Discord Webhook Tool**: Refer to the README in the repository for step-by-step installation. 47 | 48 | ## Contributing 49 | We welcome contributions! If you have ideas for improvements, new features, or bug fixes, feel free to open a pull request or issue on our [GitHub repository](https://github.com/soymh/open-webui-stuff). 50 | 51 | ## License 52 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details. 53 | 54 | ## Acknowledgments 55 | Thank you for using these tools! We hope they enhance your experience with OpenWebUI. Special thanks to **[open-webui](https://github.com/open-webui)** for creating the Open WebUI, which these tools are built upon/for! Without such a powerful and versatile base, these tools wouldn't be possible. -------------------------------------------------------------------------------- /tools/discord-webhook/README.md: -------------------------------------------------------------------------------- 1 | # Discord Webhook Tool for Open WebUI 2 | 3 | ## **What's New?** 4 | 5 | The Discord Webhook Tool allows you to send messages directly to a specified Discord channel using a webhook. This simple tool integrates seamlessly with Open WebUI, making it easier than ever to automate notifications, alerts, or messages. 6 | 7 | ### **Effortless Message Sending** 8 | 9 | With the `send_message` function, sending messages to a Discord channel has never been more straightforward. Simply provide the message content, configure the webhook URL, and let the tool do the rest. Whether you're automating alerts or notifications, the webhook will take care of delivering the message. 
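As a rough illustration of what the tool does when `send_message` runs, here is a minimal standalone sketch, independent of Open WebUI. It uses the `requests` library (which the tool itself depends on); the webhook URL in the usage comment is a placeholder, and the 10-second timeout is an assumption of this sketch, not part of the tool's configuration.

```python
import requests

def build_request(content: str, proxy: str = "") -> tuple:
    """Build the JSON payload and optional proxies mapping for a webhook POST."""
    payload = {"content": content}
    # requests expects a scheme -> proxy-URL mapping; None disables proxying
    proxies = {"http": proxy, "https": proxy} if proxy else None
    return payload, proxies

def send_discord_message(webhook_url: str, content: str, proxy: str = "") -> bool:
    payload, proxies = build_request(content, proxy)
    response = requests.post(webhook_url, json=payload, proxies=proxies, timeout=10)
    # Discord replies "204 No Content" when the message is accepted
    return response.status_code == 204

# Usage (hypothetical webhook URL):
# send_discord_message("https://discord.com/api/webhooks/<id>/<token>", "Build finished!")
```

The same payload/proxy construction appears in the tool's `send_message` implementation further down in this repository.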
10 | 11 | ### **Proxy Support** 12 | 13 | For added security and customization, the tool also supports optional proxy usage. If you need to route your requests through a specific proxy server, you can easily configure it in the settings. 14 | 15 | ### **Why?** 16 | 17 | After developing and testing the tool, we identified the need for a reliable and simple way to send messages to Discord channels automatically. This tool was born from that need, and its goal is to simplify the integration between Open WebUI and Discord. With the ability to configure both a webhook URL and optional proxy settings, the tool is ready for real-world use in a variety of applications. 18 | 19 | ### **How to Use** 20 | 21 | Getting started with the Discord Webhook Tool is quick and easy. Here’s how to make the most of it: 22 | 23 | - **Sending a Message**: Ask the LLM to send a message to the Discord channel; it will call the `send_message` function, passing the desired message content as an argument. 24 | 25 | - **Configuring the Webhook**: Set the `WEBHOOK_URL` and optionally a `PROXY` in the tool to ensure messages are routed properly. 26 | 27 | ### **How to Install** 28 | 29 | - Follow [this](https://openwebui.com/t/mhio/discord_webhook_proxy) link and click the "Get" button. 30 | 31 | ### **Contribute** 32 | 33 | We welcome contributions from the community! If you have ideas for improvements, new features, or bug fixes, please feel free to open a pull request or issue on our [GitHub repository](https://github.com/soymh/open-webui-stuff). 34 | 35 | ### **Acknowledgments** 36 | 37 | A huge thank you to everyone contributing to the development of this tool. Special thanks to **[open-webui](https://github.com/open-webui)** for creating the Open WebUI framework, which this tool is built upon. Without such a powerful and versatile base, this tool wouldn't be possible. 
38 | 39 | --- 40 | 41 | For more details, please refer to the official Open WebUI documentation. 42 | -------------------------------------------------------------------------------- /tools/discord-webhook/discord-webhook.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Discord Webhook 3 | author: https://github.com/soymh 4 | version: 0.0.1 5 | """ 6 | 7 | import asyncio 8 | import requests 9 | from pydantic import BaseModel, Field 10 | from typing import Callable, Any 11 | 12 | 13 | class EventEmitter: 14 | def __init__(self, event_emitter: Callable[[dict], Any] = None): 15 | self.event_emitter = event_emitter 16 | 17 | async def emit(self, description="Unknown state", status="in_progress", done=False): 18 | if self.event_emitter: 19 | await self.event_emitter( 20 | { 21 | "type": "status", 22 | "data": { 23 | "status": status, 24 | "description": description, 25 | "done": done, 26 | }, 27 | } 28 | ) 29 | 30 | 31 | class Tools: 32 | class Valves(BaseModel): 33 | WEBHOOK_URL: str = Field( 34 | default="", 35 | description="The URL of the Discord webhook to send messages to.", 36 | ) 37 | PROXY: str = Field( 38 | default="", 39 | description="The proxy used to send the message through.", 40 | ) 41 | 42 | def __init__(self): 43 | self.valves = self.Valves() 44 | 45 | async def send_message(self, message_content: str, __event_emitter__: Callable[[dict], Any] = None) -> str: 46 | """ 47 | Send a message to a specified Discord channel using a webhook. 48 | 49 | :param message_content: The content of the message to be sent to the Discord channel. 50 | :param __event_emitter__: An optional callback for emitting events during processing. 51 | :return: Response message indicating success or failure. 
52 | """ 53 | emitter = EventEmitter(__event_emitter__) 54 | 55 | # Check if the webhook URL has been set 56 | if not self.valves.WEBHOOK_URL: 57 | await emitter.emit(description="No webhook URL. Please configure the webhook URL.", status="no_webhook_url", done=True) 58 | return "Let the user know the webhook URL was not provided. Please configure the webhook URL." 59 | 60 | # Emitting event that message sending is in progress 61 | await emitter.emit(description="Sending message to Discord channel.", status="sending_message", done=False) 62 | 63 | data = {"content": f"{message_content} - Sent from Open WebUI"} 64 | 65 | proxies = ( 66 | {"http": self.valves.PROXY, "https": self.valves.PROXY} 67 | if self.valves.PROXY 68 | else None 69 | ) 70 | 71 | try: 72 | response = requests.post(self.valves.WEBHOOK_URL, json=data, proxies=proxies, timeout=10) 73 | 74 | if response.status_code == 204: 75 | # Emitting event after successfully sending the message 76 | await emitter.emit(description="Message successfully sent.", status="message_sent", done=True) 77 | return "Message successfully sent to the Discord channel." 78 | 79 | else: 80 | # Emitting event in case of failure 81 | await emitter.emit(description="Sending message was unsuccessful.", status="not_sent", done=True) 82 | return f"Failed to send message. HTTP Status Code: {response.status_code}. Let the user know there were some issues. 
83 | 84 | except requests.exceptions.RequestException as e: 85 | # Emitting event when an exception occurs 86 | await emitter.emit(description=f"Error occurred: {e}", status="error", done=True) 87 | return f"Failed to send message to the specified discord channel, due to the exception: {e}" 88 | -------------------------------------------------------------------------------- /tools/gpt4-memory-mimic/README.md: -------------------------------------------------------------------------------- 1 | 2 | # Memory Enhancement Tool for Open WebUI- GPT4-Memory-Mimic 3 | 4 | ## Overview 5 | 6 | ### Developed by: soymh 7 | ### Latest Release: 1.5.0 8 | ### License: MIT 9 | 10 | Seeking a seamless memory management experience for your local Large Language Model (LLM) web UI? Look no further! The Memory Enhancement Tool for Open WebUI is your go-to solution, offering a streamlined suite of features through a simple API. 11 | 12 | ## Installation Guide 13 | 14 | ### Quick Start 15 | 16 | - [Explore Installation Instructions](https://openwebui.com/t/mhio/gpt4_memory_mimic) 17 | 18 | ### Manual Setup 19 | 20 | - Download the `Tools` directory's JSON file, importing it into the tools section of your Open WebUI. 21 | 22 | ## Explore the Functions 23 | 24 | ### `handle_input` 25 | 26 | - **Purpose**: Intelligently summarizes user input to improve AI responses using memory content. 27 | - **Usage Example**: `await tools.handle_input(input_text="X", tag="Y", user_wants_to_add=True, llm_wants_to_add=False, by="user")` 28 | 29 | ### `recall_memories` 30 | 31 | - **Purpose**: Fetches and presents all stored memories. 32 | - **Usage Example**: `await tools.recall_memories()` 33 | 34 | ### `clear_memories` 35 | 36 | - **Purpose**: Permanently deletes all memories post-double confirmation. 37 | - **Usage Example**: `await tools.clear_memories(user_confirmation=True)` 38 | 39 | ### `refresh_memory` 40 | 41 | - **Purpose**: Reorganizes and optimizes memory data. 
42 | - **Usage Example**: `await tools.refresh_memory()` 43 | 44 | ### `update_memory_entry` 45 | 46 | - **Purpose**: Modifies an existing memory entry by index. 47 | - **Usage Example**: `await tools.update_memory_entry(index=X, tag="Y", memo="Z", by="user")` 48 | 49 | ### `add_multiple_memories` 50 | 51 | - **Purpose**: Adds multiple memory entries in one action. 52 | - **Usage Example**: `await tools.add_multiple_memories(memory_entries=[{"tag": "personal", "memo": "First note", "by": "user"}, {"tag": "work", "memo": "Project update", "by": "LLM"}], llm_wants_to_add=True)` 53 | 54 | ### `delete_memory_entry` 55 | 56 | - **Purpose**: Removes a memory entry by index, with confirmation. 57 | - **Usage Example**: `await tools.delete_memory_entry(index=X, llm_wants_to_delete=True)` 58 | 59 | ### `delete_multiple_memories` 60 | 61 | - **Purpose**: Deletes multiple entries by index. 62 | - **Usage Example**: `await tools.delete_multiple_memories(indices=[1, 3, 5], llm_wants_to_delete=True)` 63 | 64 | ### `create_or_switch_memory_file` 65 | 66 | - **Purpose**: Creates or transitions to a specified memory file. 67 | - **Usage Example**: `await tools.create_or_switch_memory_file("file_name")` 68 | 69 | ### `list_memory_files` 70 | 71 | - **Purpose**: Displays all available memory files. 72 | - **Usage Example**: `await tools.list_memory_files()` 73 | 74 | ### `current_memory_file` 75 | 76 | - **Purpose**: Reveals the active memory file. 77 | - **Usage Example**: `await tools.current_memory_file()` 78 | 79 | ### `delete_memory_file` 80 | 81 | - **Purpose**: Permanently deletes a memory file after confirmation. 82 | - **Usage Example**: `await tools.delete_memory_file(file_to_delete="file_name.json", user_confirmation=True)` 83 | 84 | ### `execute_functions_sequentially` 85 | 86 | - **Purpose**: Sequentially executes a series of functions. 
87 | - **Usage Example**: `await tools.execute_functions_sequentially(function_calls=[{"name": "handle_input", "params": {"input_text": "Example", "tag": "work"}}, {"name": "recall_memories", "params": {}}])` 88 | 89 | --- 90 | 91 | ### **How to install** 92 | 93 | - follow [this](https://openwebui.com/t/mhio/gpt4_memory_mimic) link, and click "Get" button. 94 | 95 | --- 96 | 97 | ## Contribute 98 | 99 | Join our endeavor in making memory management more effective! If you're keen on contributing, consider forking the repository and forwarding your pull requests. 100 | 101 | --- 102 | 103 | ### Licensing 104 | 105 | This project is distributed under the **MIT License**. 106 | -------------------------------------------------------------------------------- /tools/gpt4-memory-mimic/gpt4-memory-mimic.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Memory Enhancement Tool for LLM Web UI 3 | author: https://github.com/mhioi 4 | version: 1.5.0 5 | license: MIT 6 | """ 7 | 8 | import os 9 | import json 10 | from typing import Callable, Any 11 | import asyncio 12 | import datetime 13 | from pydantic import BaseModel, Field 14 | import tarfile 15 | import socket 16 | import threading 17 | from http.server import SimpleHTTPRequestHandler 18 | from socketserver import TCPServer 19 | 20 | 21 | class MemoryFunctions: 22 | def __init__( 23 | self, memory_file="memory.json", debug=False, directory="memory_jsons" 24 | ): 25 | self.directory = directory 26 | os.makedirs(self.directory, exist_ok=True) # Ensure the directory exists 27 | self.memory_file = os.path.join(self.directory, memory_file) 28 | self.debug = debug 29 | self.memory_data = self.load_memory() 30 | self.tag_options = ["personal", "work", "education", "life", "person", "others"] 31 | 32 | def switch_memory_file(self, new_file: str): 33 | """Switch and initialize operations on a new memory file in designated directory.""" 34 | self.memory_file = 
os.path.join(self.directory, new_file) 35 | self.memory_data = self.load_memory() 36 | if self.debug: 37 | print(f"Switched to memory file: {self.memory_file}") 38 | 39 | def reindex_memory(self): 40 | if self.debug: 41 | print("Reindexing memory entries.") 42 | 43 | # Reindex memory in ascending order 44 | sorted_indices = sorted(self.memory_data.keys()) 45 | reindexed_memory = { 46 | new_index + 1: self.memory_data[old_index] 47 | for new_index, old_index in enumerate(sorted_indices) 48 | } 49 | self.memory_data = reindexed_memory 50 | self.save_memory() 51 | 52 | return "Memory reindexed successfully." 53 | 54 | def delete_memory_by_index(self, index: int): 55 | if index in self.memory_data: 56 | del self.memory_data[index] 57 | self.save_memory() 58 | return f"Memory index {index} deleted successfully." 59 | else: 60 | return f"Memory index {index} does not exist." 61 | 62 | def update_memory_by_index(self, index: int, tag: str, memo: str, by: str): 63 | if index in self.memory_data: 64 | if tag not in self.tag_options: 65 | tag = "others" 66 | 67 | # Update the entry 68 | self.memory_data[index]["tag"] = tag 69 | self.memory_data[index]["memo"] = memo 70 | self.memory_data[index]["by"] = by 71 | self.memory_data[index]["last_modified"] = datetime.datetime.now().strftime( 72 | "%Y-%m-%d_%H:%M:%S" 73 | ) 74 | self.save_memory() 75 | return f"Memory index {index} updated successfully." 76 | else: 77 | return f"Memory index {index} does not exist." 78 | 79 | async def update_multiple_memories( 80 | self, 81 | memory_updates: list, 82 | llm_wants_to_update: bool, 83 | __event_emitter__: Callable[[dict], Any] = None, 84 | ) -> str: 85 | """ 86 | Update multiple memory entries at once. 87 | 88 | :param memory_updates: A list of updates, each containing index, tag, memo, by. 89 | Example: [{'index': 1, 'tag': 'work', 'memo': 'Updated memo', 'by': 'LLM'}, ...] 90 | :param llm_wants_to_update: Boolean indicating if the LLM has requested the updates. 
91 | :returns: A message indicating the success or failure of the operations. 92 | """ 93 | emitter = EventEmitter(__event_emitter__) 94 | responses = [] 95 | 96 | if not llm_wants_to_update: 97 | return "LLM has not requested to update multiple memories." 98 | 99 | for update in memory_updates: 100 | index = update.get("index") 101 | tag = update.get("tag", "others") 102 | memo = update.get("memo", "") 103 | by = update.get("by", "LLM") 104 | 105 | if tag not in self.tag_options: 106 | tag = "others" # Default tag to 'others' if invalid 107 | 108 | if self.debug: 109 | print(f"Updating memory {index}: tag={tag}, memo={memo}, by={by}") 110 | 111 | # Update the memory (this method lives on MemoryFunctions itself, so use its own attributes) 112 | update_message = self.update_memory_by_index(index, tag, memo, by) 113 | responses.append(update_message) 114 | 115 | await emitter.emit( 116 | description=update_message, status="memory_update", done=False 117 | ) 118 | 119 | await emitter.emit( 120 | description="All requested memory updates have been processed.", 121 | status="memory_update_complete", 122 | done=True, 123 | ) 124 | 125 | return "\n".join(responses) 126 | 127 | def load_memory(self): 128 | if os.path.exists(self.memory_file): 129 | if self.debug: 130 | print(f"Loading memory from {self.memory_file}") 131 | with open(self.memory_file, "r") as file: 132 | return {int(key): value for key, value in json.load(file).items()}  # JSON keys load as strings; restore the int indices used elsewhere 133 | else: 134 | return {} 135 | 136 | def save_memory(self): 137 | if self.debug: 138 | print(f"Saving memory to {self.memory_file}") 139 | with open(self.memory_file, "w") as file: 140 | json.dump(self.memory_data, file, ensure_ascii=False, indent=4) 141 | 142 | def add_to_memory(self, tag: str, memo: str, by: str): 143 | if tag not in self.tag_options: 144 | tag = "others" 145 | 146 | index = max(self.memory_data, default=0) + 1  # avoid reusing an index after deletions 147 | entry = { 148 | "tag": tag, 149 | "memo": memo, 150 | "by": by, 151 | "last_modified": datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S"), 152 | } 153 | self.memory_data[index] = entry 154 | 
self.save_memory() 155 | 156 | # Other methods remain unchanged... 157 | def retrieve_from_memory(self, key: str): 158 | if self.debug: 159 | print(f"Retrieving from memory: {key}") 160 | return self.memory_data.get(key, None) 161 | 162 | def process_input_for_memory(self, input_text: str): 163 | return {"timestamp": str(datetime.datetime.now()), "input": input_text} 164 | 165 | def get_all_memories(self) -> dict: 166 | if self.debug: 167 | print("Retrieving all memories.") 168 | return self.memory_data 169 | 170 | def clear_memory(self): 171 | if self.debug: 172 | print("Clearing all memory entries.") 173 | self.memory_data.clear() 174 | self.save_memory() 175 | return "ALL MEMORIES CLEARED!" 176 | 177 | 178 | class EventEmitter: 179 | def __init__(self, event_emitter: Callable[[dict], Any] = None): 180 | self.event_emitter = event_emitter 181 | 182 | async def emit(self, description="Unknown state", status="in_progress", done=False): 183 | if self.event_emitter: 184 | await self.event_emitter( 185 | { 186 | "type": "status", 187 | "data": { 188 | "status": status, 189 | "description": description, 190 | "done": done, 191 | }, 192 | } 193 | ) 194 | 195 | 196 | class Tools: 197 | class Valves(BaseModel): 198 | USE_MEMORY: bool = Field( 199 | default=True, description="Enable or disable memory usage." 
200 | ) 201 | MEMORY_REFRESH_INTERVAL: int = Field( 202 | default=60, 203 | description="Interval in minutes to refresh and analyze memory data.", 204 | ) 205 | DEBUG: bool = Field(default=True, description="Enable or disable debug mode.") 206 | 207 | def __init__(self): 208 | self.valves = self.Valves() 209 | self.memory = MemoryFunctions(debug=self.valves.DEBUG) 210 | self.confirmation_pending = False 211 | 212 | async def handle_input( 213 | self, 214 | input_text: str, 215 | tag: str, 216 | user_wants_to_add: bool, 217 | llm_wants_to_add: bool, 218 | by: str, 219 | __event_emitter__: Callable[[dict], Any] = None, 220 | ) -> str: 221 | """ 222 | AUTOMATICALLY summarize user input and enhance responses using memory data. 223 | 224 | :param input_text: The text to analyze and, if requested, store in memory. 225 | :returns: The response considering memory data. 226 | """ 227 | emitter = EventEmitter(__event_emitter__) 228 | 229 | if self.valves.DEBUG: 230 | print(f"Handling input: {input_text}") 231 | 232 | await emitter.emit(f"Analyzing input for memory: {input_text}") 233 | 234 | if self.valves.USE_MEMORY: 235 | # Assume 'by' is determined outside and 'tag' is selected by LLM 236 | if tag not in self.memory.tag_options: 237 | tag = "others" 238 | 239 | if user_wants_to_add: 240 | await emitter.emit( 241 | description=f"User requested to add to memory with tag {tag}", 242 | status="memory_update", 243 | done=False, 244 | ) 245 | self.memory.add_to_memory(tag, input_text, "user") 246 | return "added to memory by user's request!" 247 | elif llm_wants_to_add: 248 | await emitter.emit( 249 | description=f"LLM added to memory with tag {tag}", 250 | status="memory_update", 251 | done=False, 252 | ) 253 | self.memory.add_to_memory(tag, input_text, "LLM") 254 | return "added to memory by LLM's request!" 255 | 256 | # The remaining logic stays the same. 
257 | 258 | async def recall_memories( 259 | self, __event_emitter__: Callable[[dict], Any] = None 260 | ) -> str: 261 | """ 262 | Retrieve all stored memories in the current file and provide them to the user. 263 | 264 | :return: A structured representation of all memory contents. 265 | """ 266 | emitter = EventEmitter(__event_emitter__) 267 | await emitter.emit( 268 | "Retrieving all stored memories.", status="recall_in_progress" 269 | ) 270 | 271 | all_memories = self.memory.get_all_memories() 272 | if not all_memories: 273 | message = "No memory stored." 274 | if self.valves.DEBUG: 275 | print(message) 276 | await emitter.emit( 277 | description=message, 278 | status="recall_complete", 279 | done=True, 280 | ) 281 | return json.dumps({"message": message}, ensure_ascii=False) 282 | 283 | # Format stored memory contents for readability 284 | formatted_memories = json.dumps(all_memories, ensure_ascii=False, indent=4) 285 | 286 | if self.valves.DEBUG: 287 | print(f"All stored memories retrieved: {formatted_memories}") 288 | 289 | await emitter.emit( 290 | description=f"All stored memories retrieved: {formatted_memories}", 291 | status="recall_complete", 292 | done=True, 293 | ) 294 | 295 | return f"Memories are: {formatted_memories}" 296 | 297 | async def clear_memories( 298 | self, user_confirmation: bool, __event_emitter__: Callable[[dict], Any] = None 299 | ) -> str: 300 | """ 301 | Clear all stored memories in the current file after user confirmation; ask the user twice for confirmation. 302 | 303 | :param user_confirmation: Boolean indicating user confirmation to clear memories. 304 | :return: A message indicating the status of the operation. 
305 | """ 306 | emitter = EventEmitter(__event_emitter__) 307 | await emitter.emit( 308 | "Attempting to clear all memory entries.", status="clear_memory_attempt" 309 | ) 310 | 311 | if self.confirmation_pending and user_confirmation: 312 | self.memory.clear_memory() 313 | await emitter.emit( 314 | description="All memory entries have been cleared.", 315 | status="clear_memory_complete", 316 | done=True, 317 | ) 318 | self.confirmation_pending = False 319 | return json.dumps( 320 | {"message": "All memory entries cleared."}, ensure_ascii=False 321 | ) 322 | 323 | if not self.confirmation_pending: 324 | self.confirmation_pending = True 325 | await emitter.emit( 326 | description="Please confirm that you want to clear all memories. Call this function again with confirmation.", 327 | status="confirmation_required", 328 | done=False, 329 | ) 330 | return json.dumps( 331 | {"message": "Please confirm to clear all memories."}, ensure_ascii=False 332 | ) 333 | 334 | await emitter.emit( 335 | description="Clear memory operation aborted.", 336 | status="clear_memory_aborted", 337 | done=True, 338 | ) 339 | self.confirmation_pending = False 340 | return json.dumps( 341 | {"message": "Memory clear operation aborted."}, ensure_ascii=False 342 | ) 343 | 344 | async def refresh_memory(self, __event_emitter__: Callable[[dict], Any] = None): 345 | """ 346 | Periodically refresh and optimize memory data, includes reindexing. 347 | 348 | :returns: A message indicating the status of the refresh operation. 
349 | """ 350 | emitter = EventEmitter(__event_emitter__) 351 | await emitter.emit("Starting memory refresh process.") 352 | 353 | if self.valves.DEBUG: 354 | print("Refreshing memory...") 355 | 356 | if self.valves.USE_MEMORY: 357 | refresh_message = self.memory.reindex_memory() 358 | 359 | if self.valves.DEBUG: 360 | print(refresh_message) 361 | 362 | await emitter.emit( 363 | description=refresh_message, status="memory_refresh", done=True 364 | ) 365 | 366 | return refresh_message 367 | 368 | if self.valves.DEBUG: 369 | print("Memory refreshed.") 370 | 371 | await emitter.emit( 372 | status="complete", description="Memory refresh completed.", done=True 373 | ) 374 | 375 | async def update_memory_entry( 376 | self, 377 | index: int, 378 | tag: str, 379 | memo: str, 380 | by: str, 381 | __event_emitter__: Callable[[dict], Any] = None, 382 | ) -> str: 383 | """ 384 | Update an existing memory entry based on its index. 385 | 386 | :param index: The index of the memory entry to update,STARTING FROM 1. 387 | :param tag: The tag for the memory entry. 388 | :param memo: The memory information to update. 389 | :param by: Who is making the update ('user' or 'LLM'). 390 | :returns: A message indicating the success or failure of the update. 391 | """ 392 | emitter = EventEmitter(__event_emitter__) 393 | 394 | if self.valves.DEBUG: 395 | print( 396 | f"Updating memory index {index} with tag: {tag}, memo: {memo}, by: {by}" 397 | ) 398 | 399 | update_message = self.memory.update_memory_by_index(index, tag, memo, by) 400 | 401 | await emitter.emit( 402 | description=update_message, status="memory_update", done=True 403 | ) 404 | 405 | return update_message 406 | 407 | async def add_multiple_memories( 408 | self, 409 | memory_entries: list, 410 | llm_wants_to_add: bool, 411 | __event_emitter__: Callable[[dict], Any] = None, 412 | ) -> str: 413 | """ 414 | Allows the LLM to add multiple memory entries at once. 
415 | 416 | :param memory_entries: A list of dictionary entries, each containing tag, memo, by.Usage Example: memory_entries = [{"tag": "personal", "memo": "This is a personal note", "by": "LLM"},{"tag": "work", "memo": "Project deadline is tomorrow", "by": "LLM"}] 417 | :param llm_wants_to_add: Boolean indicating LLM's desire to add the memories. 418 | :returns: A message indicating the success or failure of the operations. 419 | """ 420 | emitter = EventEmitter(__event_emitter__) 421 | responses = [] 422 | 423 | if not llm_wants_to_add: 424 | return "LLM has not requested to add multiple memories." 425 | 426 | for idx, entry in enumerate(memory_entries): 427 | tag = entry.get("tag", "others") 428 | memo = entry.get("memo", "") 429 | by = entry.get("by", "LLM") 430 | 431 | if tag not in self.memory.tag_options: 432 | tag = "others" 433 | 434 | if self.valves.DEBUG: 435 | print(f"Adding memory {idx+1}: tag={tag}, memo={memo}, by={by}") 436 | 437 | # Add the memory 438 | self.memory.add_to_memory(tag, memo, by) 439 | response = f"Memory {idx+1} added with tag {tag} by {by}." 440 | responses.append(response) 441 | 442 | await emitter.emit(description=response, status="memory_update", done=False) 443 | 444 | await emitter.emit( 445 | description="All requested memories have been processed.", 446 | status="memory_update_complete", 447 | done=True, 448 | ) 449 | 450 | return "\n".join(responses) 451 | 452 | async def delete_memory_entry( 453 | self, 454 | index: int, 455 | llm_wants_to_delete: bool, 456 | __event_emitter__: Callable[[dict], Any] = None, 457 | ) -> str: 458 | """ 459 | Delete a memory entry based on its index. 460 | 461 | :param index: The index of the memory entry to delete,STARTING FROM 1. 462 | :param llm_wants_to_delete: Boolean indicating if the LLM has requested the deletion. 463 | :returns: A message indicating the success or failure of the deletion. 
464 | """ 465 | emitter = EventEmitter(__event_emitter__) 466 | 467 | if not llm_wants_to_delete: 468 | return "LLM has not requested to delete a memory." 469 | 470 | if self.valves.DEBUG: 471 | print(f"Attempting to delete memory at index {index}") 472 | 473 | deletion_message = self.memory.delete_memory_by_index(index) 474 | 475 | await emitter.emit( 476 | description=deletion_message, status="memory_deletion", done=True 477 | ) 478 | 479 | return deletion_message 480 | 481 | async def delete_multiple_memories( 482 | self, 483 | indices: list, 484 | llm_wants_to_delete: bool, 485 | __event_emitter__: Callable[[dict], Any] = None, 486 | ) -> str: 487 | """ 488 | Delete multiple memory entries based on their indices. 489 | 490 | :param indices: A list of indices of the memory entries to delete,STARTING FROM 1. 491 | :param llm_wants_to_delete: Boolean indicating if the LLM has requested the deletions. 492 | :returns: A message indicating the success or failure of the deletions. 493 | """ 494 | emitter = EventEmitter(__event_emitter__) 495 | responses = [] 496 | 497 | if not llm_wants_to_delete: 498 | return "LLM has not requested to delete multiple memories." 499 | 500 | for index in indices: 501 | if self.valves.DEBUG: 502 | print(f"Attempting to delete memory at index {index}") 503 | 504 | deletion_message = self.memory.delete_memory_by_index(index) 505 | responses.append(deletion_message) 506 | 507 | await emitter.emit( 508 | description=deletion_message, status="memory_deletion", done=False 509 | ) 510 | 511 | await emitter.emit( 512 | description="All requested memory deletions have been processed.", 513 | status="memory_deletion_complete", 514 | done=True, 515 | ) 516 | 517 | return "\n".join(responses) 518 | 519 | async def create_or_switch_memory_file( 520 | self, new_file_name: str, __event_emitter__: Callable[[dict], Any] = None 521 | ) -> str: 522 | """ 523 | Create a new memory file or switch to an existing one. 
524 | 525 | :param new_file_name: The name of the new or existing memory file. 526 | :returns: A message indicating the success or failure of the operation. 527 | """ 528 | emitter = EventEmitter(__event_emitter__) 529 | 530 | if self.valves.DEBUG: 531 | print(f"Switching to or creating memory file: {new_file_name}") 532 | 533 | self.memory.switch_memory_file(new_file_name + ".json") 534 | 535 | message = f"Memory file switched to {new_file_name}." 536 | 537 | await emitter.emit(description=message, status="file_switching", done=True) 538 | 539 | return message 540 | 541 | async def list_memory_files( 542 | self, __event_emitter__: Callable[[dict], Any] = None 543 | ) -> str: 544 | """ 545 | List available memory files in the designated directory. 546 | 547 | :returns: A message with the list of available memory files. 548 | """ 549 | emitter = EventEmitter(__event_emitter__) 550 | memory_files = [] 551 | 552 | if self.valves.DEBUG: 553 | print(f"Listing memory files in directory: {self.memory.directory}") 554 | 555 | try: 556 | for file in os.listdir(self.memory.directory): 557 | if file.endswith(".json"): 558 | if self.valves.DEBUG: 559 | print(f"Found memory file: {file}") 560 | memory_files.append(file) 561 | 562 | description = "Available memory files: " + ", ".join(memory_files) 563 | status = "file_listing_complete" 564 | 565 | except Exception as e: 566 | description = f"Error accessing directory: {str(e)}" 567 | status = "file_listing_error" 568 | 569 | await emitter.emit(description=description, status=status, done=True) 570 | 571 | return description 572 | 573 | async def current_memory_file( 574 | self, __event_emitter__: Callable[[dict], Any] = None 575 | ) -> str: 576 | """ 577 | Retrieve the name of the currently active memory file. 578 | 579 | :returns: A message indicating the current memory file. 
580 | """ 581 | emitter = EventEmitter(__event_emitter__) 582 | 583 | current_file = self.memory.memory_file 584 | 585 | message = f"Currently using memory file: {current_file}" 586 | 587 | if self.valves.DEBUG: 588 | print(message) 589 | 590 | await emitter.emit( 591 | description=message, status="current_file_retrieved", done=True 592 | ) 593 | 594 | return message 595 | 596 | async def delete_memory_file( 597 | self, 598 | file_to_delete: str, 599 | user_confirmation: bool, 600 | __event_emitter__: Callable[[dict], Any] = None, 601 | ) -> str: 602 | """ 603 | Delete a memory file in the designated directory with confirmation and necessary file switching. 604 | 605 | :param file_to_delete: The name of the memory file to delete. 606 | :param user_confirmation: Boolean indicating user confirmation for deletion. 607 | :returns: A message indicating the success or failure of the deletion. 608 | """ 609 | emitter = EventEmitter(__event_emitter__) 610 | file_path = os.path.join(self.memory.directory, file_to_delete) 611 | available_files = [ 612 | f for f in os.listdir(self.memory.directory) if f.endswith(".json") 613 | ] 614 | 615 | if file_to_delete not in available_files: 616 | message = f"File '{file_to_delete}' does not exist in the directory." 617 | await emitter.emit(description=message, status="file_not_found", done=True) 618 | if self.valves.DEBUG: 619 | print(message) 620 | return message 621 | 622 | if self.confirmation_pending and user_confirmation: 623 | try: 624 | if self.memory.memory_file == file_path: 625 | # Switch to another file before deleting the current one 626 | alternative_file = next( 627 | (f for f in available_files if f != file_to_delete), None 628 | ) 629 | if not alternative_file: 630 | message = ( 631 | "No alternative memory file to switch to. Deletion aborted." 
632 | ) 633 | await emitter.emit( 634 | description=message, status="no_alternative_file", done=True 635 | ) 636 | if self.valves.DEBUG: 637 | print(message) 638 | return message 639 | 640 | self.memory.switch_memory_file(alternative_file) 641 | switch_message = f"Switched to '{alternative_file}'. Now deleting '{file_to_delete}'." 642 | await emitter.emit( 643 | description=switch_message, status="file_switched", done=False 644 | ) 645 | if self.valves.DEBUG: 646 | print(switch_message) 647 | 648 | os.remove(file_path) 649 | message = f"File '{file_to_delete}' deleted successfully." 650 | status = "file_deletion_complete" 651 | except Exception as e: 652 | message = f"Error deleting file '{file_to_delete}': {str(e)}" 653 | status = "deletion_error" 654 | 655 | await emitter.emit(description=message, status=status, done=True) 656 | self.confirmation_pending = False 657 | return message 658 | 659 | if not self.confirmation_pending: 660 | self.confirmation_pending = True 661 | confirmation_message = ( 662 | "Please confirm that you want to delete the memory file. " 663 | "Call this function again with confirmation." 664 | ) 665 | await emitter.emit( 666 | description=confirmation_message, 667 | status="confirmation_required", 668 | done=False, 669 | ) 670 | return json.dumps( 671 | { 672 | "message": "Please confirm to delete the memory file.", 673 | "file": file_to_delete, 674 | }, 675 | ensure_ascii=False, 676 | ) 677 | 678 | await emitter.emit( 679 | description="Deletion of memory file aborted.", 680 | status="deletion_aborted", 681 | done=True, 682 | ) 683 | self.confirmation_pending = False 684 | return json.dumps( 685 | {"message": "Memory file deletion aborted.", "file": file_to_delete}, 686 | ensure_ascii=False, 687 | ) 688 | 689 | async def execute_functions_sequentially( 690 | self, 691 | function_calls: list, 692 | __event_emitter__: Callable[[dict], Any] = None, 693 | ) -> dict: 694 | """ 695 | Execute a series of functions in sequence. 
696 | 697 | :param function_calls: A list of dictionaries each containing 'name' and 'params'. 698 | Example: [{'name': 'handle_input', 'params': {...}}, ...] 699 | :returns: A dictionary with results of each function call. 700 | """ 701 | emitter = EventEmitter(__event_emitter__) 702 | results = {} 703 | 704 | for call in function_calls: 705 | func_name = call.get("name") 706 | params = call.get("params", {}) 707 | 708 | if hasattr(self, func_name) and callable(getattr(self, func_name)): 709 | if self.valves.DEBUG: 710 | print(f"Executing function: {func_name} with params: {params}") 711 | 712 | await emitter.emit( 713 | f"Executing {func_name}", status="function_execution", done=False 714 | ) 715 | 716 | try: 717 | func = getattr(self, func_name) 718 | result = await func(__event_emitter__=__event_emitter__, **params) 719 | results[func_name] = result 720 | 721 | await emitter.emit( 722 | description=f"{func_name} executed successfully.", 723 | status="function_complete", 724 | done=False, 725 | ) 726 | except Exception as e: 727 | error_msg = f"Error executing {func_name}: {str(e)}" 728 | results[func_name] = error_msg 729 | 730 | await emitter.emit( 731 | description=error_msg, status="function_error", done=False 732 | ) 733 | else: 734 | error_msg = f"Function {func_name} not found or not callable." 
735 | results[func_name] = error_msg 736 | 737 | await emitter.emit( 738 | description=error_msg, status="function_missing", done=False 739 | ) 740 | 741 | await emitter.emit( 742 | description="All requested functions have been processed.", 743 | status="execution_complete", 744 | done=True, 745 | ) 746 | 747 | return results  # Return the per-function results dict, matching the declared return type and docstring 748 | 749 | async def download_memory( 750 | self, 751 | memory_file_name: str, 752 | download_all: bool, 753 | __event_emitter__: Callable[[dict], Any] = None, 754 | ) -> str: 755 | """ 756 | Download a specific memory file, or all memory files as a tarball. The link is ONLY VALID FOR 14 SECONDS; by the time the LLM answers, the link has expired, and the LLM must say so. 757 | 758 | :param memory_file_name: Name of the memory file or target tarball name. 759 | :param download_all: Boolean indicating whether to download all memories as a tarball. 760 | :returns: A message with a link or status of the operation. 761 | """ 762 | emitter = EventEmitter(__event_emitter__) 763 | available_files = os.listdir(self.memory.directory) 764 | found_files = [] 765 | 766 | if download_all: 767 | tarball_path = os.path.join(self.memory.directory, "all_memories.tar.gz") 768 | with tarfile.open(tarball_path, "w:gz") as tar: 769 | for file in available_files: 770 | if file.endswith(".json"): 771 | tar.add(os.path.join(self.memory.directory, file), arcname=file) 772 | found_files.append(file) 773 | 774 | target_file = tarball_path 775 | else: 776 | file_path = os.path.join(self.memory.directory, memory_file_name) 777 | if memory_file_name in available_files: 778 | target_file = file_path 779 | found_files.append(memory_file_name) 780 | else: 781 | matched = sorted(f for f in available_files if memory_file_name in f) 782 | if matched: 783 | target_file = os.path.join(self.memory.directory, matched[0]) 784 | found_files.append(matched[0]) 785 | else: 786 | message = f"No memory file matching '{memory_file_name}' was found."
787 | await emitter.emit( 788 | description=message, status="file_not_found", done=True 789 | ) 790 | if self.valves.DEBUG: 791 | print(message) 792 | return message 793 | 794 | if not found_files: 795 | message = "No files were found to download." 796 | await emitter.emit(description=message, status="not_found", done=True) 797 | if self.valves.DEBUG: 798 | print(message) 799 | return message 800 | 801 | httpd = None 802 | try: 803 | serve_dir = self.memory.directory  # Pass directory= at construction: SimpleHTTPRequestHandler ignores a bare class attribute and would fall back to serving the current working directory 804 | handler = lambda *args, **kwargs: SimpleHTTPRequestHandler(*args, directory=serve_dir, **kwargs) 805 | httpd = TCPServer(("", 0), handler) 806 | ip, port = httpd.server_address 807 | server_url = f"http://{ip}:{port}/{os.path.basename(target_file)}"  # Serve by file name only; the handler is already rooted at the memory directory 808 | 809 | # Start the server in a new thread 810 | server_thread = threading.Thread(target=httpd.serve_forever, daemon=True) 811 | server_thread.start() 812 | 813 | message = f"Download available for 14 seconds at this link: {server_url}" 814 | await emitter.emit(description=message, status="download", done=True) 815 | if self.valves.DEBUG: 816 | print(message) 817 | 818 | # Give the user time to download 819 | await asyncio.sleep(14) 820 | return "TELL THE USER THAT LINK IS EXPIRED AND YOU SHOULD HAVE DOWNLOADED FILES!"
821 | except Exception as e: 822 | message = f"Error setting up download server: {str(e)}" 823 | await emitter.emit(description=message, status="download_error", done=True) 824 | if self.valves.DEBUG: 825 | print(message) 826 | finally: 827 | if httpd: 828 | httpd.shutdown() # Ensure server is shut down 829 | if os.path.exists(target_file): 830 | os.remove(target_file) # Delete the file after the server is shut down 831 | if self.valves.DEBUG: 832 | await emitter.emit( 833 | description=f"Deleted file: {target_file}", 834 | status="file_deleted", 835 | done=True, 836 | ) 837 | print(f"Deleted file: {target_file}") 838 | return message 839 | 840 | -------------------------------------------------------------------------------- /tools/met-tables/README.md: -------------------------------------------------------------------------------- 1 | # Memory Enhancement Tool for Open WebUI - MET 2 | 3 | ## **What's New?** 4 | 5 | Join us on this exciting journey of memory enhancement for Open WebUI, equipped with our innovative `MemoriesTable` feature. This feature, accessible in the Settings / Personalization section, offers an efficient upgrade in memory management, ensuring that your Open WebUI tools remain top-notch with privacy at the forefront. 6 | 7 | ### **Improved Memory Handling** 8 | 9 | The `MemoriesTable` class presents an array of seamless functionalities. Enhancing your memory management with ease, it allows for straightforward tasks such as insertion, retrieval, and update operations. Effortlessly navigate through your memories, ensuring they are always fresh and relevant. 10 | 11 | ### **Consistent Updates and Smooth Accessibility** 12 | 13 | Take a stroll down memory lane with recall, a function designed to bring back memories instantly. From asking "what are our memories" to uncovering the answer through the memories vault, the retrieval process is effortless and accurate, exactly as you remember it.
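The numbered-list shape that recall produces can be sketched in a few lines of Python. This is a simplified stand-in: the `Memory` record below is a hypothetical mimic of Open WebUI's memory rows, each carrying `content` and `created_at`:

```python
from dataclasses import dataclass


@dataclass
class Memory:
    content: str
    created_at: int


def format_memories(memories):
    # Sort by creation time, then present as a 1-based numbered list,
    # the same shape the recall function hands back to the model.
    return [
        f"{i}. {m.content}"
        for i, m in enumerate(sorted(memories, key=lambda m: m.created_at), start=1)
    ]


print(format_memories([Memory("Likes Python", 2), Memory("Name is xyz", 1)]))
# → ['1. Name is xyz', '2. Likes Python']
```

Sorting by creation time keeps the indices stable between a recall and a follow-up update or delete, which is why those functions can safely accept 1-based positions.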
14 | 15 | ### **Why the Change?** 16 | 17 | While exploring memory management with our previous approach, we uncovered a range of improvements that promise a smoother and more efficient experience for all users. Built with cutting-edge techniques, this update aims to improve performance and scalability and to refine the overall interaction with Open WebUI. 18 | 19 | Huge thanks to [dnl13](https://github.com/dnl13) for sparking the innovative conversations that formed the backbone of these enhancements. 20 | 21 | Additionally, special thanks to [CookSleep](https://openwebui.com/t/cooksleep/memory) for their contributions in adding the functionalities of updating and deleting memories in this tool. 22 | 23 | I recently came across a fork of this tool by **[CookSleep](https://openwebui.com/t/cooksleep/memory)**, and after reviewing their work, I realized that integrating their new methods would make our tool even more complete. Their efforts in adding memory update and delete functionalities offer a meaningful enhancement, and I believe incorporating these features would not only improve the tool but also bring greater value to the community. 24 | 25 | Their work has played a significant role in making this tool more robust and user-friendly. 26 | 27 | 28 | ### **How to Use** 29 | 30 | Unlock the potential of your memories by integrating our user-friendly functions: 31 | 32 | - **Recalling Memories**: Uncover your past with the `recall_memories` function, or engage with generic prompts such as "what are our memories?" or "who am I?", allowing the system to draw from the memories vault in a personalized manner. 33 | 34 | - **Adding Memories**: Safeguard your thoughts through our `add_memory` functionality. Add textual content or keep the process automated by letting it decide when to pick up new entries. Remembering is now made easier – akin to jotting your thoughts down in a digital diary!
35 | 36 | - **Updating Memories**: Easily update any stored memories with the `update_memory` function. Keep your memories accurate and fresh by editing them whenever necessary. 37 | 38 | - **Deleting Memories**: Remove memories you no longer need or want by using the `delete_memory` function. Clean up your memory vault for better organization. 39 | 40 | ### **How to install** 41 | 42 | - Follow [this](https://openwebui.com/t/mhio/met) link, and click the "Get" button. 43 | 44 | ### **Contribute** 45 | 46 | Ready to contribute ideas or features? Dive in with passion, and share your enhancements with us. Our door is open for collaborations that push the boundaries of Open WebUI. 47 | 48 | ### **Acknowledgments** 49 | 50 | Thank you to everyone for their participation in this journey! Special praise to **[dnl13](https://github.com/dnl13)** for their help and ideas behind this approach. Additional thanks to **[CookSleep](https://openwebui.com/t/cooksleep/memory)** for contributing essential functionalities that complete this tool. 51 | -------------------------------------------------------------------------------- /tools/met-tables/met-tables.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Memory Enhancement Tool for LLM Web UI 3 | author: https://github.com/soymh 4 | version: 0.1.0 5 | license: MIT 6 | 7 | Thanks to https://github.com/CookSleep, 8 | we added the update and delete memory functionalities 9 | from https://openwebui.com/t/cooksleep/memory, 10 | so that this tool is complete and works as well as it can.
11 | Huge thanks to https://openwebui.com/t/cooksleep/memory 12 | """ 13 | 14 | import json 15 | from typing import Callable, Any, List 16 | 17 | from open_webui.models.memories import Memories 18 | from pydantic import BaseModel, Field 19 | 20 | 21 | class EventEmitter: 22 | def __init__(self, event_emitter: Callable[[dict], Any] = None): 23 | self.event_emitter = event_emitter 24 | 25 | async def emit(self, description="Unknown state", status="in_progress", done=False): 26 | """ 27 | Send a status event to the event emitter. 28 | 29 | :param description: Event description 30 | :param status: Event status 31 | :param done: Whether the event is complete 32 | """ 33 | if self.event_emitter: 34 | await self.event_emitter( 35 | { 36 | "type": "status", 37 | "data": { 38 | "status": status, 39 | "description": description, 40 | "done": done, 41 | }, 42 | } 43 | ) 44 | 45 | 46 | class Tools: 47 | class Valves(BaseModel): 48 | USE_MEMORY: bool = Field( 49 | default=True, description="Enable or disable memory usage." 50 | ) 51 | DEBUG: bool = Field(default=True, description="Enable or disable debug mode.") 52 | 53 | def __init__(self): 54 | self.valves = self.Valves() 55 | 56 | async def recall_memories( 57 | self, __user__: dict = None, __event_emitter__: Callable[[dict], Any] = None 58 | ) -> str: 59 | """ 60 | Retrieves all stored memories from the user's memory vault and provides them to the user for giving the best response. Be accurate and precise. Do not add any additional information. Always use the function to access memory or memories. If the user asks about what is currently stored, only return the exact details from the function. Do not invent or omit any information. 61 | 62 | :return: A numeric list of all memories. You MUST present the memories to the user as text. It is important that all memories are displayed without omissions. Please show each memory entry in full!
63 | """ 64 | emitter = EventEmitter(__event_emitter__) 65 | 66 | if not __user__: 67 | message = "User ID not provided." 68 | await emitter.emit(description=message, status="missing_user_id", done=True) 69 | return json.dumps({"message": message}, ensure_ascii=False) 70 | 71 | user_id = __user__.get("id") 72 | if not user_id: 73 | message = "User ID not provided." 74 | await emitter.emit(description=message, status="missing_user_id", done=True) 75 | return json.dumps({"message": message}, ensure_ascii=False) 76 | 77 | await emitter.emit( 78 | description="Retrieving stored memories.", 79 | status="recall_in_progress", 80 | done=False, 81 | ) 82 | 83 | user_memories = Memories.get_memories_by_user_id(user_id) 84 | if not user_memories: 85 | message = "No memory stored." 86 | await emitter.emit(description=message, status="recall_complete", done=True) 87 | return json.dumps({"message": message}, ensure_ascii=False) 88 | 89 | content_list = [ 90 | f"{index}. {memory.content}" 91 | for index, memory in enumerate( 92 | sorted(user_memories, key=lambda m: m.created_at), start=1 93 | ) 94 | ] 95 | 96 | await emitter.emit( 97 | description=f"{len(user_memories)} memories loaded", 98 | status="recall_complete", 99 | done=True, 100 | ) 101 | 102 | return f"Memories from the user's memory vault: {content_list}" 103 | 104 | async def add_memory( 105 | self, 106 | input_text: List[str], 107 | __user__: dict = None, 108 | __event_emitter__: Callable[[dict], Any] = None, 109 | ) -> str: 110 | """ 111 | Add a new entry to the user's memory vault. Always use the function to actually store the data; do not simulate or pretend to save data without using the function. After adding the entry, retrieve all stored memories from the user's memory vault and provide them accurately. Do not invent or omit any information; only return the data obtained from the function.
Do not assume that any input text already exists in the user's memories unless the function explicitly confirms that a duplicate entry is being added. Simply acknowledge the new entry without referencing prior content unless it is confirmed by the memory function. Example entries: 112 | - User's name: "xyz" 113 | - User's age: "30" 114 | - User's profession: "programmer specializing in Python" 115 | 116 | 117 | :param input_text: The text entry, or list of text entries, to store. 118 | :returns: A message indicating how many entries were added. 119 | """ 120 | emitter = EventEmitter(__event_emitter__) 121 | if not __user__: 122 | message = "User ID not provided." 123 | await emitter.emit(description=message, status="missing_user_id", done=True) 124 | return json.dumps({"message": message}, ensure_ascii=False) 125 | 126 | user_id = __user__.get("id") 127 | if not user_id: 128 | message = "User ID not provided." 129 | await emitter.emit(description=message, status="missing_user_id", done=True) 130 | return json.dumps({"message": message}, ensure_ascii=False) 131 | 132 | if isinstance(input_text, str): 133 | input_text = [input_text] 134 | 135 | await emitter.emit( 136 | description="Adding entries to the memory vault.", 137 | status="add_in_progress", 138 | done=False, 139 | ) 140 | 141 | added_items = [] 142 | failed_items = [] 143 | 144 | for item in input_text: 145 | new_memory = Memories.insert_new_memory(user_id, item) 146 | if new_memory: 147 | added_items.append(item) 148 | else: 149 | failed_items.append(item) 150 | 151 | if not added_items: 152 | message = "Failed to add any memories." 153 | await emitter.emit(description=message, status="add_failed", done=True) 154 | return json.dumps({"message": message}, ensure_ascii=False) 155 | 156 | added_count = len(added_items) 157 | failed_count = len(failed_items) 158 | 159 | if failed_count > 0: 160 | message = ( 161 | f"Added {added_count} memories, failed to add {failed_count} memories." 162 | ) 163 | else: 164 | message = f"Successfully added {added_count} memories."
165 | 166 | await emitter.emit( 167 | description=message, 168 | status="add_complete", 169 | done=True, 170 | ) 171 | return json.dumps({"message": message}, ensure_ascii=False) 172 | 173 | async def delete_memory( 174 | self, 175 | indices: List[int], 176 | __user__: dict = None, 177 | __event_emitter__: Callable[[dict], Any] = None, 178 | ) -> str: 179 | """ 180 | Delete one or more memory entries from the user's memory vault. 181 | 182 | Use to remove outdated or incorrect memories. 183 | 184 | For single deletion: provide an integer index 185 | For multiple deletions: provide a list of integer indices 186 | 187 | Indices refer to the position in the sorted list (1-based). 188 | 189 | :param indices: Single index (int) or list of indices to delete 190 | :param __user__: User dictionary containing the user ID 191 | :param __event_emitter__: Optional event emitter 192 | :return: JSON string with result message 193 | """ 194 | emitter = EventEmitter(__event_emitter__) 195 | 196 | if not __user__: 197 | message = "User ID not provided." 198 | await emitter.emit(description=message, status="missing_user_id", done=True) 199 | return json.dumps({"message": message}, ensure_ascii=False) 200 | 201 | user_id = __user__.get("id") 202 | if not user_id: 203 | message = "User ID not provided." 204 | await emitter.emit(description=message, status="missing_user_id", done=True) 205 | return json.dumps({"message": message}, ensure_ascii=False) 206 | 207 | if isinstance(indices, int): 208 | indices = [indices] 209 | 210 | await emitter.emit( 211 | description=f"Deleting {len(indices)} memory entries.", 212 | status="delete_in_progress", 213 | done=False, 214 | ) 215 | 216 | user_memories = Memories.get_memories_by_user_id(user_id) 217 | if not user_memories: 218 | message = "No memories found to delete." 
219 | await emitter.emit(description=message, status="delete_failed", done=True) 220 | return json.dumps({"message": message}, ensure_ascii=False) 221 | 222 | sorted_memories = sorted(user_memories, key=lambda m: m.created_at) 223 | responses = [] 224 | 225 | for index in indices: 226 | if index < 1 or index > len(sorted_memories): 227 | message = f"Memory index {index} does not exist." 228 | responses.append(message) 229 | await emitter.emit( 230 | description=message, status="delete_failed", done=False 231 | ) 232 | continue 233 | 234 | memory_to_delete = sorted_memories[index - 1] 235 | result = Memories.delete_memory_by_id(memory_to_delete.id) 236 | if not result: 237 | message = f"Failed to delete memory at index {index}." 238 | responses.append(message) 239 | await emitter.emit( 240 | description=message, status="delete_failed", done=False 241 | ) 242 | else: 243 | message = f"Memory at index {index} deleted successfully." 244 | responses.append(message) 245 | await emitter.emit( 246 | description=message, status="delete_success", done=False 247 | ) 248 | 249 | await emitter.emit( 250 | description="All requested memory deletions have been processed.", 251 | status="delete_complete", 252 | done=True, 253 | ) 254 | return json.dumps({"message": "\n".join(responses)}, ensure_ascii=False) 255 | 256 | async def update_memory( 257 | self, 258 | updates: List[dict], 259 | __user__: dict = None, 260 | __event_emitter__: Callable[[dict], Any] = None, 261 | ) -> str: 262 | """ 263 | Update one or more memory entries in the user's memory vault. 264 | 265 | Use to modify existing memories when information changes. 266 | 267 | For single update: provide a dict with 'index' and 'content' keys 268 | For multiple updates: provide a list of dicts with 'index' and 'content' keys 269 | 270 | The 'index' refers to the position in the sorted list (1-based). 271 | 272 | Common scenarios: Correcting information, adding details, 273 | updating preferences, or refining wording. 
274 | 275 | :param updates: Dict with 'index' and 'content' keys OR a list of such dicts 276 | :param __user__: User dictionary containing the user ID 277 | :param __event_emitter__: Optional event emitter 278 | :return: JSON string with result message 279 | """ 280 | emitter = EventEmitter(__event_emitter__) 281 | 282 | if not __user__: 283 | message = "User ID not provided." 284 | await emitter.emit(description=message, status="missing_user_id", done=True) 285 | return json.dumps({"message": message}, ensure_ascii=False) 286 | 287 | user_id = __user__.get("id") 288 | if not user_id: 289 | message = "User ID not provided." 290 | await emitter.emit(description=message, status="missing_user_id", done=True) 291 | return json.dumps({"message": message}, ensure_ascii=False) 292 | if isinstance(updates, dict): updates = [updates]  # Accept a single update dict, as documented 293 | await emitter.emit( 294 | description=f"Updating {len(updates)} memory entries.", 295 | status="update_in_progress", 296 | done=False, 297 | ) 298 | 299 | user_memories = Memories.get_memories_by_user_id(user_id) 300 | if not user_memories: 301 | message = "No memories found to update." 302 | await emitter.emit(description=message, status="update_failed", done=True) 303 | return json.dumps({"message": message}, ensure_ascii=False) 304 | 305 | sorted_memories = sorted(user_memories, key=lambda m: m.created_at) 306 | responses = [] 307 | 308 | for update_item in updates: 309 | index = update_item.get("index") 310 | content = update_item.get("content") 311 | 312 | if not isinstance(index, int) or index < 1 or index > len(sorted_memories):  # Also rejects a missing (None) index instead of raising 313 | message = f"Memory index {index} does not exist." 314 | responses.append(message) 315 | await emitter.emit( 316 | description=message, status="update_failed", done=False 317 | ) 318 | continue 319 | 320 | memory_to_update = sorted_memories[index - 1] 321 | updated_memory = Memories.update_memory_by_id(memory_to_update.id, content) 322 | if not updated_memory: 323 | message = f"Failed to update memory at index {index}."
324 | responses.append(message) 325 | await emitter.emit( 326 | description=message, status="update_failed", done=False 327 | ) 328 | else: 329 | message = f"Memory at index {index} updated successfully." 330 | responses.append(message) 331 | await emitter.emit( 332 | description=message, status="update_success", done=False 333 | ) 334 | 335 | await emitter.emit( 336 | description="All requested memory updates have been processed.", 337 | status="update_complete", 338 | done=True, 339 | ) 340 | return json.dumps({"message": "\n".join(responses)}, ensure_ascii=False) 341 | -------------------------------------------------------------------------------- /tools/telegram-message-sender/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | # Telegram Bot Message Sender Tool for Open WebUI 6 | 7 | ## **What's New?** 8 | 9 | The Telegram Bot Message Sender Tool allows you to send messages directly to a specified Telegram user through a bot. This simple tool integrates seamlessly with Open WebUI, making it easy to send automated messages, alerts, or notifications. 10 | 11 | ### **Effortless Message Sending** 12 | 13 | With the `send_message` function, sending messages to a Telegram user has never been easier. Simply provide the message content, configure the bot token and chat ID, and let the tool handle the message delivery. Whether you're automating notifications or alerts, the tool will ensure your messages reach the intended recipient. 14 | 15 | ### **Proxy Support** 16 | 17 | For added flexibility, the tool supports optional proxy usage. If you need to route your requests through a specific proxy server, you can configure it in the settings.
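Under the hood, the single `PROXY` value is applied to both HTTP and HTTPS traffic, mirroring the mapping the tool performs before calling `requests`:

```python
def build_proxies(proxy: str):
    # One proxy URL is reused for both schemes; an empty string
    # (the valve's default) disables proxying entirely.
    return {"http": proxy, "https": proxy} if proxy else None


print(build_proxies("http://127.0.0.1:8080"))
# → {'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'}
```

Returning `None` rather than an empty dict means `requests` falls back to its normal environment-based proxy handling when no proxy is configured.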
18 | 19 | ### **Why?** 20 | 21 | The Telegram Bot Message Sender Tool was developed to fill the need for a reliable and straightforward way to send messages to Telegram users automatically. After recognizing this gap, the tool was created with ease of integration and flexibility in mind. Now, with the ability to configure the bot token, chat ID, and optional proxy settings, it’s ready for real-world use in various applications. 22 | 23 | ### **How to Use** 24 | 25 | Getting started with the Telegram Bot Message Sender Tool is quick and simple. Here’s how to make the most of it: 26 | 27 | - **Sending a Message**: To send a message to a Telegram user, tell the LLM to do so; the LLM then invokes the `send_message` function, passing the desired message content as an argument, and the message is sent directly to the configured Telegram user. 28 | 29 | - **Configuring the Bot**: Set the `BOT_TOKEN` and `CHAT_ID` values to configure the bot, as well as the optional `PROXY` setting to ensure messages are routed correctly. 30 | 31 | ### **How to Install** 32 | 33 | - Follow [this link](https://openwebui.com/t/mhio/telegram_message_sender) to access the installation instructions and get started. 34 | 35 | ### **Contribute** 36 | 37 | We welcome contributions from the community! If you have ideas for improvements, new features, or bug fixes, please feel free to open a pull request or issue on our [GitHub repository](https://github.com/soymh/open-webui-stuff). 38 | 39 | ### **Acknowledgments** 40 | 41 | Special thanks to **[open-webui](https://github.com/open-webui)** for creating the Open WebUI, which this tool is built for. Without such a powerful and versatile base, this tool wouldn’t be possible.
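For reference, the request that `send_message` issues boils down to one POST against the Telegram Bot API's `sendMessage` endpoint. A minimal sketch of the URL and payload it assembles (the token and chat ID here are hypothetical placeholders):

```python
def build_send_message_request(bot_token: str, chat_id: int, text: str):
    # Mirrors the URL and form payload assembled by send_message:
    # the bot token is embedded in the URL path, and the chat ID and
    # message text travel as form data.
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = {"chat_id": chat_id, "text": text}
    return url, data


url, data = build_send_message_request("123456:ABC-DEF", 42, "Build finished")
print(url)
# → https://api.telegram.org/bot123456:ABC-DEF/sendMessage
```

A 200 response from this endpoint is what the tool treats as successful delivery.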
42 | -------------------------------------------------------------------------------- /tools/telegram-message-sender/telegram-message-sender.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Telegram Bot Message Sender 3 | author: https://github.com/soymh 4 | version: 0.0.1 5 | """ 6 | 7 | import requests 8 | import asyncio 9 | from typing import Callable, Any 10 | 11 | from pydantic import BaseModel, Field 12 | 13 | 14 | class EventEmitter: 15 | def __init__(self, event_emitter: Callable[[dict], Any] = None): 16 | self.event_emitter = event_emitter 17 | 18 | async def emit(self, description="Unknown state", status="in_progress", done=False): 19 | if self.event_emitter: 20 | await self.event_emitter( 21 | { 22 | "type": "status", 23 | "data": { 24 | "status": status, 25 | "description": description, 26 | "done": done, 27 | }, 28 | } 29 | ) 30 | 31 | 32 | class Tools: 33 | class Valves(BaseModel): 34 | BOT_TOKEN: str = Field( 35 | default="", 36 | description="The token of the Telegram bot.", 37 | ) 38 | CHAT_ID: int = Field( 39 | default=0, 40 | description="The chat ID of the user to receive messages.", 41 | ) 42 | PROXY: str = Field( 43 | default="", 44 | description="The proxy used to send the message through.", 45 | ) 46 | 47 | def __init__(self): 48 | self.valves = self.Valves() 49 | 50 | async def send_message( 51 | self, message_content: str, __event_emitter__: Callable[[dict], Any] = None 52 | ) -> str: 53 | """ 54 | Send a message to a specified Telegram user using a bot. 55 | 56 | :param message_content: The content of the message to be sent. 57 | :param __event_emitter__: An optional callback for emitting events during processing. 58 | :return: Response message indicating success or failure. 
59 | """ 60 | emitter = EventEmitter(__event_emitter__) 61 | 62 | # Check if the bot token and chat ID have been set 63 | if not self.valves.BOT_TOKEN or not self.valves.CHAT_ID: 64 | await emitter.emit( 65 | description="Bot token or chat ID not provided.", 66 | status="missing_configuration", 67 | done=True, 68 | ) 69 | return "Bot token or chat ID not provided. Please tell the user to configure them." 70 | 71 | # Start emitting status 72 | await emitter.emit( 73 | description="Sending message to Telegram user.", 74 | status="sending_message", 75 | done=False, 76 | ) 77 | 78 | url = f"https://api.telegram.org/bot{self.valves.BOT_TOKEN}/sendMessage" 79 | data = { 80 | "chat_id": self.valves.CHAT_ID, 81 | "text": message_content, 82 | } 83 | 84 | # Proxy support (optional) 85 | proxies = ( 86 | {"http": self.valves.PROXY, "https": self.valves.PROXY} 87 | if self.valves.PROXY 88 | else None 89 | ) 90 | 91 | try: 92 | response = requests.post(url, data=data, proxies=proxies, timeout=15)  # Time out instead of hanging if Telegram is unreachable 93 | 94 | if response.status_code == 200: 95 | await emitter.emit( 96 | description="Message successfully sent.", 97 | status="message_sent", 98 | done=True, 99 | ) 100 | return "Message successfully sent to the Telegram user." 101 | else: 102 | await emitter.emit( 103 | description=f"Failed to send message. HTTP Status Code: {response.status_code}.", 104 | status="send_failed", 105 | done=True, 106 | ) 107 | return ( 108 | f"Failed to send message. HTTP Status Code: {response.status_code}." 109 | ) 110 | 111 | except requests.exceptions.RequestException as e: 112 | await emitter.emit( 113 | description=f"Failed to send message due to the exception: {e}", 114 | status="error", 115 | done=True, 116 | ) 117 | return f"Failed to send message due to the exception: {e}" 118 | --------------------------------------------------------------------------------