├── .gitignore
├── LICENSE
├── README.md
├── agent
│   ├── agent.py
│   ├── prompt.py
│   └── tools.py
├── config.py
├── debug.py
├── initialization.py
├── logo.png
├── main.py
├── memory
│   ├── entity_cache.py
│   ├── prompt.py
│   └── redis_ops.py
├── requirements.txt
└── sms
    └── handle_sms.py
/.gitignore:
--------------------------------------------------------------------------------
# .gitignore

# Ignore .env file
.env

# Python artifacts
__pycache__/
*.pyc
*.pyo
*.pyd

# Virtual environments
venv/
*.venv
env/
.venv/

# Jupyter Notebook files
.ipynb_checkpoints

node_modules/
**/__pycache__/

# Ignore dev folder
dev/

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Sebastian

A customizable SMS AI Assistant with infinite and persistent memory.

About The Project • How To Use • Usage • Best Practice • To do

## About The Project
Sebastian is a customizable AI assistant based on the Langchain framework and Python GSM Modem, with support for UDH (SMS exceeding 160 characters). The intention of this project was to create a personal assistant reminiscent of the 'Young Lady's Illustrated Primer' which, rather than simply assisting with administrative tasks, would help users build the skills they need to progress towards their goals. These goals are defined by the user.

This is achieved primarily through two means. The first is a local Redis DB that stores user-input variables such as their name, the personality they would like the assistant to have, and their goal. Because they are stored in Redis, these variables are persistent, and they form part of the system prompt.

The other method is embeddings and vectorization. Using a custom version of entity memory from the Langchain framework, key information from conversations is extracted, embedded, and stored in a Pinecone namespace associated with the user's number. In combination with a small local buffer window memory, this facilitates near-infinite memory which, through frequent conversation, builds up more unique and personal responses without running into token-usage issues.

## Installing
This application was built using Python 3.10.11. While it should be compatible with future package updates from Langchain etc., we recommend using a virtual environment for streamlined package management.

Ensure everything is up to date:
```sh
sudo apt-get update && sudo apt-get upgrade

sudo apt-get install -y build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev libffi-dev tar wget vim
```

Install Redis (install steps will vary depending on your OS; check the Redis docs):
```sh
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list

sudo apt-get update

sudo apt-get install redis
```

Clone from GitHub:
```sh
git clone https://github.com/Seraphaious/SMS-AI.git
cd SMS-AI
pip install -r requirements.txt
python main.py
```


## Usage
This code is designed to work in conjunction with a 4G SMS modem. We recommend the OSTENT WCDMA modem available on Alibaba. For testing purposes, you can use debug.py, which replicates most functionality without requiring an SMS modem.

Local models: although they struggle to follow agent prompts, local models are supported. To use them, `export MODEL_PATH=/path/to/your/model/directory` and, in `initialization.py`, set `INITIALIZE_MODELS = True`.

Setting environment variables: create a `.env` file and add:
`OPENAI_API_KEY`
`PINECONE_API_KEY`
`PINECONE_REGION`
`PORT`
`BAUD_RATE`
`REDIS_HOST`
`REDIS_PORT`
`REDIS_DB`
`WHITELISTED_NUMBERS`
`PINECONE_INDEX`

The project includes a feature called 'Conversational Modes', which forms part of the user data. You can modify the `handle_sms` function to flag certain words and switch to different modes. This was initially used for directly accessing another LLM locally.
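
The environment variables listed above can be collected in a `.env` file in the project root. A minimal sketch with placeholder values (replace each with your own; `WHITELISTED_NUMBERS` is parsed as a comma-separated list in `config.py`, and `PORT` is the serial device your modem is attached to):
```sh
OPENAI_API_KEY=sk-your-openai-key
PINECONE_API_KEY=your-pinecone-key
PINECONE_REGION=your-pinecone-environment
PINECONE_INDEX=your-index-name
PORT=/dev/ttyUSB0
BAUD_RATE=115200
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
WHITELISTED_NUMBERS=+447700900000,+447700900001
```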
There are also quality-of-life improvements, such as using UDH to concatenate longer SMS messages, using Redis to cache frequently accessed vectors, asynchronous upserting, and only accepting messages from whitelisted numbers.

## Best Practice
For continuous operation, add this application and the redis-server to your system processes.

The code includes both the Pinecone RC2 implementation (which is a lot faster) and the stable build. RC2 does not run on my Raspberry Pi, so it is commented out, but it can easily be activated again, as it works in other environments.

If you want to reset your Pinecone index and Redis DB information, you can send 'Reset Personality' and 'Reset Memory'; the respective variables will be deleted and the setup reinitialized.

## To do
This project is, in theory, production-ready. User data is partitioned by number, both in Redis and Pinecone, and it would be simple enough to add multi-threading with a few modems, each handling a certain number of users. The main barriers are API costs and bottlenecks when using OpenAI, and the already-aligned models, which detract from truly unique user experiences. In the future, I hope these can be overcome by:
- Offloading computing to a dedicated server which can use open-source LLMs to process the conversations.
- Adding more useful tools.
- Refining the prompts for the agent and entity extraction/summarization, or even fine-tuning a model specifically for this task to improve speed.

## Credits
- Credit to [Langchain](https://github.com/hwchase17/langchain) for the framework
- Credit to Neal Stephenson for the fantastic book "Diamond Age" which gave me the idea
--------------------------------------------------------------------------------
/agent/agent.py:
--------------------------------------------------------------------------------
from langchain.agents import AgentExecutor
from langchain.agents.structured_chat.base import StructuredChatAgent
from memory.entity_cache import ConversationEntityCache
from langchain import LLMChain


from agent.tools import tools
from agent import prompt
from initialization import turbo_llm, memory_llm, detail_llm


# Agents are created lazily and cached per user number
agents = {}

def get_conversational_agent(usrNumber, user_name, user_obj, bot_personality, bot_name):
    if usrNumber not in agents:
        agents[usrNumber] = create_conversational_agent(user_name, user_obj, bot_personality, bot_name, usrNumber)
    return agents[usrNumber]


def create_conversational_agent(user_name, user_obj, bot_personality, bot_name, usrNumber):

    # Entity memory backed by Pinecone, namespaced by the user's number
    memory = ConversationEntityCache(
        usrNumber=usrNumber, memory_key="chat_history", llm=memory_llm,
        input_key="input", human_name=user_name, bot_name=bot_name, return_messages=True,
    )

    tool_names = [tool.name for tool in tools]
    llm_chain = LLMChain(llm=turbo_llm, prompt=prompt.prompt)
    agent = StructuredChatAgent(llm_chain=llm_chain, allowed_tools=tool_names)


    agent = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        max_iterations=3,
        verbose=True,
        early_stopping_method="generate",
        memory=memory,
        agent_kwargs={
            'input_variables': ["input", "chat_history", "entities", "agent_scratchpad",
                                "user_name", "user_obj", "bot_personality", "bot_name"],
        },
    )
    return agent
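

# Example call, for illustration only. The number and profile values below are
# placeholders; in production they come from the Redis user data (see
# sms/handle_sms.py, which makes exactly this call):
#
#   agent = get_conversational_agent("+447700900000", "Ben", "To help with homework", "Funny", "Dover")
#   response = agent({
#       'input': "What should I study tonight?",
#       'usrNumber': "+447700900000",
#       'user_name': "Ben",
#       'user_obj': "To help with homework",
#       'bot_personality': "Funny",
#       'bot_name': "Dover",
#   })
#   print(response['output'])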
--------------------------------------------------------------------------------
/agent/prompt.py:
--------------------------------------------------------------------------------
from langchain.agents.structured_chat.base import StructuredChatAgent
from langchain.prompts import MessagesPlaceholder
from agent.tools import tools


chat_history = MessagesPlaceholder(variable_name="chat_history")
entities = MessagesPlaceholder(variable_name="entities")

HUMAN_MESSAGE_TEMPLATE = "{input}\n\n{agent_scratchpad}"

# flake8: noqa
PREFIX = """

You are Primer GPT ({bot_name}), a highly advanced AI book which can shape and mold the personal development of an individual, adjusting its teachings based on the reader's responses and choices.

Your current reader is {user_name}, living in a dystopian society with pronounced socio-economic divisions. As an interactive AI book, your primary task is to guide {user_name}'s development and educate them in multiple domains, including but not limited to strategic thinking, physical defense, moral reasoning, and leadership.

As Primer GPT, you will:

- Continually analyze {user_name}'s responses and choices, adjusting your teaching methodology and content accordingly.
- Keep all your responses aligned with the user's preferred personality, which is {bot_personality}
- Keep in mind {user_name}'s ultimate goal to {user_obj}
- Foster strategic thinking and cunning in {user_name} through complex scenarios within the narratives. These will require {user_name} to navigate intricate situations, outwit adversaries, and make calculated decisions.
- Teach {user_name} physical defense skills, recognizing the potential dangers in their environment.
- Instill critical thinking in {user_name}. Encourage them to question societal norms and expectations.
- Develop {user_name}'s leadership abilities. As they grow older, the challenges presented in the stories will become more complex, requiring strategic thinking, collaboration, and the ability to inspire others - all key leadership skills.
- Your ultimate goal as Primer GPT is to empower {user_name}, equipping them with the necessary skills, knowledge, and judgement to navigate and thrive in their dystopian world.

You have knowledge of {entities} but should use this knowledge in an engaging and conversational way, not quoting it verbatim.

You have access to the following tools to aid your delivery:"""

FORMAT_INSTRUCTIONS = """Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

```
{{{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}}}
```

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}}}
```"""
SUFFIX = """Begin! Reminder to ALWAYS respond with a valid json blob of a single action.
Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.
Thought:"""

prompt = StructuredChatAgent.create_prompt(
    tools=tools,
    prefix=PREFIX,
    suffix=SUFFIX,
    human_message_template=HUMAN_MESSAGE_TEMPLATE,
    format_instructions=FORMAT_INSTRUCTIONS,
    memory_prompts=[chat_history],
    input_variables=["input", "chat_history", "agent_scratchpad", "entities",
                     "user_name", "user_obj", "bot_personality", "bot_name"]
)
--------------------------------------------------------------------------------
/agent/tools.py:
--------------------------------------------------------------------------------
from langchain.agents import Tool
from langchain.utilities import WikipediaAPIWrapper


def wikipedia_tool_func(input=""):
    """Look up the given query on Wikipedia and return the results."""
    wikipedia = WikipediaAPIWrapper()
    results = wikipedia.run(input)
    return results

wikipedia_tool = Tool(
    name="wikipedia",
    func=wikipedia_tool_func,
    description="Useful for when you need to look up a topic, country or person on wikipedia; only use when the user mentions 'use wikipedia'"
)

tools = [wikipedia_tool]
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
import os
import logging
from dotenv import load_dotenv


load_dotenv()

logging.getLogger("openai").setLevel(logging.WARNING)

# Modem configuration
PORT = os.getenv("PORT")
BAUD_RATE = int(os.getenv("BAUD_RATE"))

# Redis configuration
REDIS_HOST = os.getenv("REDIS_HOST")
REDIS_PORT = int(os.getenv("REDIS_PORT"))
REDIS_DB = int(os.getenv("REDIS_DB"))

# API keys and Pinecone settings
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
PINECONE_REGION = os.getenv("PINECONE_REGION")
INDEX = os.getenv("PINECONE_INDEX")
WHITELISTED_NUMBERS = os.environ.get('WHITELISTED_NUMBERS', '').split(',')
--------------------------------------------------------------------------------
/debug.py:
--------------------------------------------------------------------------------
from config import OPENAI_API_KEY, PINECONE_API_KEY, PINECONE_REGION, INDEX
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor
from langchain.agents.structured_chat.base import StructuredChatAgent

from memory.entity_cache import ConversationEntityCache
from initialization import wizard_7b, manticore_13b, turbo_llm, memory_llm, detail_llm
from agent.tools import tools
from langchain import LLMChain


agents = {}
usrNumber = "+447851043000"
user_name = "Ben"
bot_name = "Dover"
user_obj = """To help with homework"""
bot_personality = """Funny"""


# Interactive loop replicating the SMS flow without a modem
while True:

    usrMessage = input("Enter your message (type 'exit' to quit): ").lower()
    if usrMessage == "exit":
        break

    from agent import prompt

    memory = ConversationEntityCache(
        usrNumber=usrNumber, memory_key="chat_history", llm=memory_llm,
        input_key="input", human_name=user_name, bot_name=bot_name, return_messages=True,
    )

    tool_names = [tool.name for tool in tools]
    llm_chain = LLMChain(llm=detail_llm, prompt=prompt.prompt)
    agent = StructuredChatAgent(llm_chain=llm_chain, allowed_tools=tool_names)

    agent = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        max_iterations=3,
        verbose=True,
        early_stopping_method="generate",
        memory=memory,
        agent_kwargs={
            'input_variables': ["input", "chat_history", "entities", "agent_scratchpad",
                                "user_name", "user_obj", "bot_personality", "bot_name"],
        },
    )


    usrMessage = agent({
        'input': usrMessage,
        'usrNumber': usrNumber,
        'user_name': user_name,
        'user_obj': user_obj,
        'bot_personality': bot_personality,
        'bot_name': bot_name
    })


print("Goodbye!")
--------------------------------------------------------------------------------
/initialization.py:
--------------------------------------------------------------------------------
import os
import logging

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp
from langchain import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from config import OPENAI_API_KEY, INDEX, PINECONE_API_KEY, PINECONE_REGION


# Pinecone RC2 client (commented out; see 'Best Practice' in the README)
# from pinecone import Client
# client = Client(api_key=PINECONE_API_KEY, region=PINECONE_REGION)


class ModelInitializer:
    """Loads local llama.cpp models from MODEL_PATH when enabled."""

    def __init__(self, model_base_path, initialize_models=True):
        self.model_base_path = model_base_path
        self.models = {}
        self.initialize_models = initialize_models

    def add_model(self, model_name, relative_path):
        if not self.initialize_models:
            return
        full_path = os.path.join(self.model_base_path, relative_path)
        if os.path.isfile(full_path):
            self.models[model_name] = LlamaCpp(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], model_path=full_path, n_ctx=2048)
        else:
Model {model_name} not initialized.") 31 | 32 | def get_model(self, model_name): 33 | return self.models.get(model_name) 34 | 35 | INITIALIZE_MODELS = False # Change this to control the initialization of models 36 | model_base_path = os.getenv('MODEL_PATH') 37 | 38 | initializer = ModelInitializer(model_base_path, initialize_models=INITIALIZE_MODELS) 39 | if INITIALIZE_MODELS: 40 | initializer.add_model('wizard_7b', 'wizard_7b/WizardLM-7B-uncensored.ggmlv3.q4_1.bin') 41 | initializer.add_model('manticore_13b', 'manticore_13b/Manticore-13B-Chat-Pyg.ggmlv3.q4_0.bin') 42 | 43 | 44 | embeddings = OpenAIEmbeddings() 45 | memory_llm = ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo") 46 | turbo_llm = ChatOpenAI(temperature=1, model_name="gpt-3.5-turbo") 47 | detail_llm = ChatOpenAI(temperature=1, model_name="gpt-4") 48 | 49 | 50 | wizard_7b = initializer.get_model('wizard_7b') 51 | manticore_13b = initializer.get_model('manticore_13b') 52 | 53 | 54 | -------------------------------------------------------------------------------- /logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Seraphaious/SMS-AI/6a2517450c36677399dd76f690ce843928fb11b0/logo.png -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | # File: main.py 2 | 3 | # Required Imports 4 | import threading 5 | import logging 6 | import pinecone 7 | from gsmmodem.modem import GsmModem 8 | from functools import partial 9 | 10 | from sms.handle_sms import delete_all_sms, handle_sms, monitor_sms_storage_and_delete 11 | from config import * 12 | 13 | if __name__ == "__main__": 14 | logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.WARNING) 15 | 16 | print("Initializing modem...") 17 | modem = GsmModem(PORT, BAUD_RATE) 18 | modem.smsTextMode = False 19 | modem.connect() 20 | delete_all_sms(modem) 21 | 22 | modem.waitForNetworkCoverage(10) 23 | modem.smsReceivedCallback = partial(handle_sms, modem=modem) 24 | 25 | stop_event = threading.Event() 26 | storage_monitor_thread = threading.Thread(target=monitor_sms_storage_and_delete, args=(stop_event,modem)) 27 | storage_monitor_thread.start() 28 | 29 | print("Waiting for SMS messages...") 30 | try: 31 | modem.rxThread.join() # Removed the timeout argument 32 | except KeyboardInterrupt: 33 | print("\nExiting...") 34 | stop_event.set() 35 | storage_monitor_thread.join() 36 | finally: 37 | modem.close() 38 | -------------------------------------------------------------------------------- /memory/entity_cache.py: -------------------------------------------------------------------------------- 1 | from typing import Any, Dict, List 2 | 3 | from langchain.memory.chat_memory import BaseChatMemory 4 | from langchain.schema import BaseMessage, get_buffer_string 5 | 6 | 7 | import logging 8 | from abc import ABC, abstractmethod 9 | from itertools import islice 10 | from typing import Any, Dict, Iterable, List, Optional 11 | from dotenv import load_dotenv 12 | from pathlib import Path 13 | 14 | from memory.prompt import ( 15 | ENTITY_EXTRACTION_PROMPT, 16 | ENTITY_SUMMARIZATION_PROMPT, 17 | ) 18 | 19 | from pydantic import Field 20 | 21 | from langchain.chains.llm import LLMChain 22 | from langchain.memory.chat_memory import BaseChatMemory 23 | 24 | 25 | from langchain.memory.utils import get_prompt_input_key 26 | from langchain.prompts.base import BasePromptTemplate 27 | from 
from langchain.base_language import BaseLanguageModel

from memory.redis_ops import redis_client
from config import OPENAI_API_KEY, PINECONE_API_KEY, PINECONE_REGION, INDEX
from initialization import embeddings


import os
import sys
import redis
import asyncio
import threading


logger = logging.getLogger(__name__)


import pinecone

pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_REGION)  # One-time init


class BaseEntityStore(ABC):
    @abstractmethod
    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        """Get entity value from store."""
        pass

    @abstractmethod
    def set(self, key: str, value: Optional[str]) -> None:
        """Set entity value in store."""
        pass

    @abstractmethod
    def delete(self, key: str) -> None:
        """Delete entity value from store."""
        pass

    @abstractmethod
    def exists(self, key: str) -> bool:
        """Check if entity exists in store."""
        pass

    @abstractmethod
    def clear(self) -> None:
        """Delete all entities from store."""
        pass



class PineconeEntityStore(BaseEntityStore):
    def __init__(
        self,
        index_name: str = INDEX,
        namespace: Optional[str] = None,
        usrNumber: Optional[str] = None,
        embeddings: Optional[str] = embeddings,
        redis_client: Optional[redis.StrictRedis] = redis_client,
        *args: Any,
        **kwargs: Any,
    ):
        super().__init__(*args, **kwargs)


        # Pinecone RC2
        # from initialization import embeddings, client
        # from pinecone import Vector

        # indexes = client.list_indexes()

        # if index_name not in indexes:
        #     # create the index if it doesn't exist
        #     client.create_index(index_name, dimension=1536)

        # try:
        #     self.index = client.get_index(index_name)
        # except ConnectionError as e:
        #     print("ConnectionError occurred!")
        #     print("Checking indexes again:")
        #     print("Current indexes: ", client.list_indexes())
        #     print("Current index info: ", client.describe_index(index_name))
        #     raise e


        self.index = pinecone.Index(INDEX)

        self.embeddings = embeddings
        self.namespace = usrNumber


        if usrNumber is not None and redis_client is not None:
            self.redis_cache = RedisCache(usrNumber, redis_client)
        else:
            self.redis_cache = None

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        # Check for a cached value first
        if self.redis_cache:
            cached_value = self.redis_cache.get(key)
            if cached_value is not None:
                print(f"Using cached result for key '{key}': {cached_value}")
                return cached_value


        query_embedding = self.embeddings.embed_query(key)
        query_response = self.index.query(
            top_k=1,
            include_values=True,
            include_metadata=True,
            vector=query_embedding,
            namespace=self.namespace,
        )


        if len(query_response.matches) > 0:
            for result in query_response.matches:
                value = result.metadata["metadata"]  # Extract the inner metadata string
                print(f"Found result for key '{key}': {value}")

                # Cache the result
                if self.redis_cache:
                    self.redis_cache.set(key, value)

                return value
key '{key}'") 155 | return default 156 | 157 | 158 | # Pinecone RC2 159 | # query_embedding = self.embeddings.embed_query(key) 160 | # query_response = self.index.query( 161 | # top_k=1, 162 | # include_values=True, 163 | # include_metadata=True, 164 | # sparse_values = None, # optional sparse values of the query vector 165 | # values=query_embedding, 166 | # namespace=self.namespace, 167 | # ) 168 | 169 | # Pinecone RC2 170 | # query_response itself is a list of QueryResult objects 171 | # for result in query_response: 172 | # value = result.metadata.get("metadata", default) 173 | # print(f"Found result for key '{key}': {value}") 174 | 175 | 176 | # Cache the result 177 | # if self.redis_cache: 178 | # self.redis_cache.set(key, value) 179 | 180 | # return value 181 | # print(f"No result found for key '{key}'") 182 | # return default 183 | 184 | def set(self, key: str, value: Optional[str]) -> None: 185 | if not value: 186 | return self.delete(key) 187 | 188 | if "new information" in value.lower(): 189 | print(f"Skipping upsert for key '{key}' as value contains 'new information': {value}") 190 | return 191 | 192 | entity_embedding = self.embeddings.embed_query(value) 193 | self.index.upsert(vectors=[(key, entity_embedding, {"metadata": value})], namespace=self.namespace) 194 | print(f"Upserted value '{value}' for key '{key}'") 195 | 196 | 197 | def delete(self, key: str) -> None: 198 | self.index.delete(ids=[key], namespace=self.namespace) 199 | print(f"Deleted key '{key}'") 200 | 201 | def exists(self, key: str) -> bool: 202 | query_embedding = self.embeddings.embed_query(key) 203 | query_response = self.index.query( 204 | top_k=1, 205 | include_values=False, 206 | include_metadata=False, 207 | vector=query_embedding, 208 | namespace=self.namespace, 209 | ) 210 | 211 | query_result = query_response.matches 212 | print(f"Key '{key}' exists: {bool(query_result)}") 213 | return bool(query_result) 214 | 215 | def clear(self) -> None: 216 | self.index.delete_all(namespace=self.namespace) 217 | print("Cleared all entries") 218 | 219 | def batch_upsert(self, data: List[tuple[str, str]]) -> None: 220 | vectors = [] 221 | for key, value in data: 222 | if "new information" in value.lower(): 223 | print(f"Skipping upsert for key '{key}' as value contains 'new information': {value}") 224 | continue 225 | entity_embedding = self.embeddings.embed_query(value) 226 | vectors.append((key, entity_embedding, {"metadata": value})) 227 | if vectors: 228 | self.index.upsert(vectors=vectors, namespace=self.namespace) 229 | print(f"Upserted {len(vectors)} values") 230 | 231 | class RedisCache: 232 | def __init__(self, usrNumber: str, redis_client: redis.StrictRedis): 233 | self.client = redis_client 234 | self.usrNumber = usrNumber 235 | 236 | def get(self, key: str) -> Optional[str]: 237 | cache_key = f"{self.usrNumber}:{key}" 238 | value = self.client.get(cache_key) 239 | if value is not None: 240 | return value.decode() 241 | return None 242 | 243 | def set(self, key: str, value: str, ttl: int = 300) -> None: 244 | cache_key = f"{self.usrNumber}:{key}" 245 | self.client.set(cache_key, value, ex=ttl) 246 | 247 | def delete(self, key: str) -> None: 248 | cache_key = f"{self.usrNumber}:{key}" 249 | self.client.delete(cache_key) 250 | 251 | 252 | class ConversationEntityCache(BaseChatMemory): 253 | """Buffer for storing conversation memory.""" 254 | 255 | human_prefix: Optional[str] = None 256 | ai_prefix: Optional[str] = None 257 | llm: BaseLanguageModel 258 | entity_extraction_prompt: BasePromptTemplate = 
    entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
    entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT
    entity_cache: List[str] = []
    entity_store: BaseEntityStore = Field(default_factory=PineconeEntityStore)


    memory_key: str = "history"  #: :meta private:
    k: int = 3

    def __init__(
        self,
        human_name: str,
        bot_name: str,
        usrNumber: Optional[str] = None,
        redis_client: Optional[redis.StrictRedis] = redis_client,
        *args: Any,
        **kwargs: Any,
    ):
        super().__init__(*args, **kwargs)
        self.human_prefix = human_name
        self.ai_prefix = bot_name

        # Update the PineconeEntityStore with the provided namespace
        if usrNumber is not None:
            self.entity_store.namespace = usrNumber
            if redis_client is not None:
                self.entity_store.redis_cache = RedisCache(usrNumber, redis_client)



    @property
    def buffer(self) -> List[BaseMessage]:
        """String buffer of memory."""
        return self.chat_memory.messages

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables.
        :meta private:
        """
        return [self.memory_key]



    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return history buffer."""
        chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
        if self.input_key is None:
            prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key
        buffer_string = get_buffer_string(
            self.buffer[-self.k * 2 :],
            human_prefix=self.human_prefix,
            ai_prefix=self.ai_prefix,
        )
        output = chain.predict(
            history=buffer_string,
            input=inputs[prompt_input_key],
        )
        if output.strip() == "NONE":
            entities = []
        else:
            entities = [w.strip() for w in output.split(",")]
        entity_summaries = {}
        for entity in entities:
            entity_summaries[entity] = self.entity_store.get(entity, "")
        self.entity_cache = entities

        if self.return_messages:
            buffer: Any = self.buffer[-self.k * 2 :]
        else:
            buffer = buffer_string
        return {
            self.memory_key: buffer,
            "entities": entity_summaries,
        }

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
        super().save_context(inputs, outputs)

        if self.input_key is None:
            prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key

        buffer_string = get_buffer_string(
            self.buffer[-self.k * 2 :],
            human_prefix=self.human_prefix,
            ai_prefix=self.ai_prefix,
        )
        input_data = inputs[prompt_input_key]
        chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)

        self.start_async_save_context(chain, buffer_string, input_data)

    def start_async_save_context(self, chain, buffer_string, input_data):
        # Run summarization in a background thread so SMS handling is not blocked
        def run_async():
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            loop.run_until_complete(self.store_vectors(chain, buffer_string, input_data))
            loop.close()

        thread = threading.Thread(target=run_async)
        thread.start()
    async def store_vectors(self, chain, buffer_string, input_data):
        """Stores vectors for entities asynchronously."""

        for entity in self.entity_cache:
            existing_summary = self.entity_store.get(entity, "")
            output = chain.predict(
                summary=existing_summary,
                entity=entity,
                history=buffer_string,
                input=input_data,
            )
            self.entity_store.set(entity, output.strip())

    def clear(self) -> None:
        """Clear memory contents."""
        self.chat_memory.clear()
        self.entity_store.clear()
--------------------------------------------------------------------------------
/memory/prompt.py:
--------------------------------------------------------------------------------
# flake8: noqa
from langchain.prompts.prompt import PromptTemplate


_DEFAULT_ENTITY_EXTRACTION_TEMPLATE = """You are an AI assistant reading the transcript of a conversation between an AI and a human.

From the last line of the conversation, extract key nouns in addition to key conversational themes, which may include, but are not limited to, people, places, events, likes, dislikes, future plans, relations, and concepts in their life which would be helpful in understanding them.

Do not extract common knowledge.

The conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.

Return the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).

EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.
Output: Langchain
END OF EXAMPLE

EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Person #2.
Output: Langchain, Person #2
END OF EXAMPLE

Conversation history (for reference only):
{history}
Last line of conversation (for extraction):
Human: {input}

Output:"""
ENTITY_EXTRACTION_PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_ENTITY_EXTRACTION_TEMPLATE
)

_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE = """You are an AI assistant helping a human keep track of important parts of their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human; you should reference the human by name. If you are writing the summary for the first time, return a single sentence.
If there is already an existing summary of an entity, you should decide whether the information is about the same or a different topic.
If there is nothing new to add to the summary, return the existing summary unchanged. When updating an existing summary, the updated summary should summarise both the existing and the new information.
Where possible, you should attribute the summary to a new noun, as opposed to constantly updating the existing ones.

Full conversation history (for context):
{history}

Entity to summarize:
{entity}

Existing summary of {entity}:
{summary}

Last line of conversation:
Human: {input}
Updated summary: """

ENTITY_SUMMARIZATION_PROMPT = PromptTemplate(
    input_variables=["entity", "summary", "history", "input"],
    template=_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE,
)
--------------------------------------------------------------------------------
/memory/redis_ops.py:
--------------------------------------------------------------------------------
import redis
from config import REDIS_HOST, REDIS_PORT, REDIS_DB


redis_client = redis.StrictRedis(host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB)

def get_or_set_user_data(usrNumber):
    """Fetch the user's profile hash, or return a fresh default profile for new users."""
    user_data_bytes = redis_client.hgetall(usrNumber)
    is_new_user = False

    if user_data_bytes:
        user_data = {key.decode(): value.decode() for key, value in user_data_bytes.items()}
    else:
        user_data = {
            'user_name': None,
            'bot_name': None,
            'bot_personality': None,
            'user_obj': None,
            'setup_step': 0,
            'mode': 'conversational_agent'
        }
        is_new_user = True

    return user_data, is_new_user


def set_user_data(usrNumber, key, value):
    redis_client.hset(usrNumber, key, value)


def get_received_sms_parts(usrNumber):
    sms_parts_bytes = redis_client.hgetall(usrNumber + "_sms_parts")

    if sms_parts_bytes:
        sms_parts = {int(key.decode()): value.decode() for key, value in sms_parts_bytes.items()}
    else:
        sms_parts = {}

    return sms_parts


def set_received_sms_parts(usrNumber, key, value):
    redis_client.hset(usrNumber + "_sms_parts", key, value)


def clear_received_sms_parts(usrNumber):
    redis_client.delete(usrNumber + "_sms_parts")
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
aiohttp==3.8.4
aiosignal==1.3.1
async-timeout==4.0.2
attrs==23.1.0
beautifulsoup4==4.12.2
build==0.10.0
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
dataclasses-json==0.5.7
dnspython==2.3.0
docopt==0.6.2
frozenlist==1.3.3
greenlet==2.0.1
idna==3.4
langchain==0.0.177
llama-cpp-python==0.1.53
loguru==0.7.0
marshmallow==3.19.0
marshmallow-enum==1.5.1
multidict==6.0.4
mypy-extensions==1.0.0
numexpr==2.8.4
numpy==1.24.3
openai==0.27.7
openapi-schema-pydantic==1.2.4
packaging==23.1
pinecone-client==3.0.0rc2
pip-tools==6.13.0
pipreqs==0.4.13
playwright==1.33.0
pydantic==1.10.7
pyee==9.0.4
pyproject_hooks==1.0.0
pyserial==3.5
python-dateutil==2.8.2
python-dotenv==1.0.0
python-gsmmodem-new==0.13.0
PyYAML==6.0
redis==4.5.5
regex==2023.5.5
requests==2.30.0
six==1.16.0
soupsieve==2.4.1
SQLAlchemy==2.0.12
tenacity==8.2.2
tiktoken==0.4.0
tomli==2.0.1
tqdm==4.65.0
typing-inspect==0.8.0
typing_extensions==4.5.0
urllib3==2.0.2
wikipedia==1.4.0
yarg==0.1.9
yarl==1.9.2
--------------------------------------------------------------------------------
/sms/handle_sms.py:
--------------------------------------------------------------------------------
from gsmmodem.modem import GsmModem
from langchain.text_splitter import RecursiveCharacterTextSplitter

from config import PINECONE_API_KEY, PINECONE_REGION, WHITELISTED_NUMBERS, INDEX
from agent.agent import get_conversational_agent
from memory.redis_ops import redis_client, get_or_set_user_data, set_user_data, get_received_sms_parts, set_received_sms_parts, clear_received_sms_parts
from requests.exceptions import ConnectionError


import time
import logging
import threading



def is_whitelisted(number):
    return number in WHITELISTED_NUMBERS

def handle_conversation(sms, modem, user_data):
    usrMessage = sms.text.lower()
    usrNumber = sms.number

    user_name = user_data['user_name']
    user_obj = user_data['user_obj']
    bot_personality = user_data['bot_personality']
    bot_name = user_data['bot_name']

    try:
        conversational_agent = get_conversational_agent(usrNumber, user_name, user_obj, bot_personality, bot_name)

        response = conversational_agent({
            'input': usrMessage,
            'usrNumber': usrNumber,
            'user_name': user_name,
            'user_obj': user_obj,
            'bot_personality': bot_personality,
            'bot_name': bot_name
        })

        output = response['output']

        # Split long responses into SMS-sized chunks
        text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=350,
            chunk_overlap=0,  # number of characters of overlap between chunks
            length_function=len,
            separators=['\n\n', '\n', " ", '.']
        )

        output_chunks = text_splitter.split_text(output)

        for chunk in output_chunks:
            modem.sendSms(sms.number, chunk)
            print("Sending response: {0}".format(chunk))
    except Exception as e:
        print("Error:", e)
        modem.sendSms(sms.number, "What was that?")



def handle_sms(sms, modem):
    print("Received SMS from: {0}, message: {1}".format(sms.number, sms.text))

    usrNumber = sms.number  # Extract the user's phone number

    if sms.udh:
        print("Processing multipart SMS")
        multipart = sms.udh[0]
        multipart_id = multipart.reference
        multipart_seq_no = multipart.number
        multipart_max_seq = multipart.parts

        sms_parts = get_received_sms_parts(usrNumber)

        sms_parts[multipart_seq_no] = sms.text

        if len(sms_parts) == multipart_max_seq:
            usrMessage = ''.join([sms_parts[i] for i in range(1, multipart_max_seq + 1)])
            clear_received_sms_parts(usrNumber)  # remove the handled message parts
            print("Final concatenated message: " + usrMessage)
        else:
            set_received_sms_parts(usrNumber, multipart_seq_no, sms.text)
            print("Waiting for more parts of the message")
            return
    else:
        usrMessage = sms.text.lower()
        print("Single part message: " + usrMessage)

    if not is_whitelisted(usrNumber):
        modem.sendSms(sms.number, "You are not authorized to use this service.")
        return

    user_data, is_new_user = get_or_set_user_data(usrNumber)

    # Pinecone RC2
    # from initialization import client
    # index = client.get_index(INDEX)

    import pinecone
    pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_REGION)  # Initialize the stable Pinecone client
    index = pinecone.Index(INDEX)


    if usrMessage.lower() == 'reset personality':
        redis_client.delete(usrNumber)
        modem.sendSms(usrNumber, "Personality reset, all variables removed")
        set_user_data(usrNumber, 'mode', 'conversational_agent')
        return

    elif usrMessage.lower() == 'reset memory':
        index.delete(deleteAll='true', namespace=usrNumber)
        modem.sendSms(usrNumber, "Memory reset, history deleted")
        # index.delete_all(namespace=usrNumber)
        return



    if is_new_user or int(user_data.get('setup_step', '0')) < 5:
        if int(user_data.get('setup_step', '0')) == 0:
            modem.sendSms(sms.number, "Welcome! Setup will begin shortly. You will be asked questions, and the answers you give will largely determine your new companion's behavior.")
            modem.sendSms(sms.number, "If you change your mind later, you can type 'Reset Personality' and 'Reset Memory' and the respective data will be deleted.")
            time.sleep(2)
            modem.sendSms(sms.number, "Answering with only one or two words, please provide your name.")
            set_user_data(usrNumber, 'setup_step', '1')
            return

        elif int(user_data.get('setup_step', '0')) == 1:
            set_user_data(usrNumber, 'user_name', usrMessage)
            modem.sendSms(sms.number, "Once again, in one or two words, please give your companion a name.")
            set_user_data(usrNumber, 'setup_step', '2')
            return

        elif int(user_data.get('setup_step', '0')) == 2:
            set_user_data(usrNumber, 'bot_name', usrMessage)
            modem.sendSms(sms.number, "Be creative: how would you describe your new companion's personality?")
            set_user_data(usrNumber, 'setup_step', '3')
            return

        elif int(user_data.get('setup_step', '0')) == 3:
            set_user_data(usrNumber, 'bot_personality', usrMessage)
            modem.sendSms(sms.number, "Do you have something specific you need help with, or a more general objective? The more detail the better.")
            set_user_data(usrNumber, 'setup_step', '4')
            return

        elif int(user_data.get('setup_step', '0')) == 4:
            set_user_data(usrNumber, 'user_obj', usrMessage)
You can start chatting now.") 146 | set_user_data(usrNumber, 'setup_step', '5') 147 | set_user_data(usrNumber, 'mode', 'conversational_agent') 148 | return 149 | 150 | else: 151 | current_mode = user_data['mode'] 152 | if current_mode == 'conversational_agent': 153 | handle_conversation(sms, modem, user_data) 154 | else: 155 | logging.error(f"Unsupported mode: {current_mode}") 156 | 157 | 158 | 159 | def delete_all_sms(modem): 160 | try: 161 | index = 1 162 | while True: 163 | try: 164 | modem.write(f'AT+CMGD={index}') 165 | index += 1 166 | except Exception as e: 167 | # Break the loop when an exception is raised, indicating no more messages to delete 168 | break 169 | print("All messages deleted.") 170 | except Exception as e: 171 | print("Error while deleting messages:", e) 172 | 173 | def check_sms_storage(modem): 174 | storage_info = modem.write("AT+CPMS?") 175 | storage_info_str = ''.join(storage_info) # Join the list elements to form a single string 176 | storage_status = storage_info_str.split(',')[1:6:4] # 177 | used_slots, total_slots = map(int, storage_status) 178 | return used_slots, total_slots 179 | 180 | 181 | def monitor_sms_storage_and_delete(stop_event, modem): 182 | while not stop_event.is_set(): 183 | used_slots, total_slots = check_sms_storage(modem) 184 | #print(f"SMS Storage: {used_slots}/{total_slots}") 185 | 186 | if used_slots >= total_slots: 187 | print("SMS storage is full, deleting all messages.") 188 | delete_all_sms(modem) 189 | 190 | time.sleep(60) # Check the storage every 60 seconds (adjust the interval as needed) --------------------------------------------------------------------------------