├── .gitignore ├── README.md └── chatterstack ├── chatterstack.py ├── chatterstackadvanced.py └── commands.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | chatterstack/__pycache__/ 3 | dist/ 4 | build/ 5 | chatterstack.egg-info/ 6 | .vscode/ 7 | .DS_Store 8 | ignored/ 9 | .python-version 10 | conversation.txt -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | chatterstack-logo-yellow-81 3 | 4 | Chatterstack is a dead simple and intuitive way to handle the "conversation" variables used by the ChatGPT API, while also giving you advanced functionality. 5 | 6 | # 🛠️ Setup 7 | Install the package from pip: 8 | ```py 9 | pip install chatterstack 10 | ``` 11 | 12 | ### import & initialization: 13 | There are two options for Chatterstack: the Base Library, and the Advanced Library, which extends the Base. 14 | ```py 15 | import chatterstack 16 | 17 | # The base library, if you are only really concerned with managing 18 | # the conversation variable, its order, and maybe tracking tokens. 19 | convo = chatterstack.Chatterstack() 20 | 21 | # The advanced library lets you issue commands from the chat input, 22 | # gives the bot the ability to reach out to you with reminders, etc. 23 | convo = chatterstack.ChatterstackAdvanced() 24 | ``` 25 | 26 | --- 27 | 28 | # 🤔 The Problem 29 | 30 | The ChatGPT API uses a "conversation" variable to keep track of your interactions with the bot. This is a list of dictionaries, with each dictionary representing a turn in the conversation.
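To make that concrete, here is a minimal sketch of what that raw variable looks like (the messages themselves are invented for illustration):

```py
# The raw "conversation" variable: a plain list of dicts, one per turn.
# Each dict has exactly two keys: "role" and "content".
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "hi!"},
    {"role": "assistant", "content": "Hi! How can I help you?"},
]

# Every read or write is manual index-and-key bookkeeping:
last_reply = conversation[-1]["content"]
```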
31 | 32 | This leaves your program littered with things like this: 33 | 34 | ```py 35 | # append the bot's response to your conversation 36 | conversation.append({'role': response.choices[0].message.role, 'content': response.choices[0].message.content}) 37 | 38 | # to print the last message 39 | print(conversation[-1]["content"]) 40 | ``` 41 | Ridiculous. 42 | 43 | If you want to do anything more advanced, like "keep the conversation variable to 8 messages max", or "make the System message always be the most recent message in the conversation"... well, I'll spare you the code snippets for those. 44 | 45 | ### But... 46 | Since many aspects of these dictionaries are highly predictable, and we are most often doing just a handful of basic tasks with them... 47 | 50 | 51 | ...this means we can actually abstract away a lot of this mess, in a way that makes it much more intuitive to use - while also keeping basically all of the inherent flexibility! That is a rare thing to be able to do. 52 | 53 | # Straightforward Solution: The Chatterstack Base Class 54 | 55 | This is a fully functional chatbot program using chatterstack: 56 | 57 | ```py 58 | import chatterstack 59 | import os 60 | 61 | os.environ['OPENAI_API_KEY'] = 'your_api_key_here' 62 | 63 | convo = chatterstack.Chatterstack() 64 | 65 | while True: 66 | convo.user_input() 67 | 68 | convo.send_to_bot() 69 | 70 | convo.print_last_message() 71 | ``` 72 | That's the whole thing! 73 | 74 | This is what that conversation would look like in the terminal: 75 | 76 | ```txt 77 | USER: hey, can you help me with something? 78 | 79 | ASSISTANT: Sure! What would you like help with? 80 | 81 | USER: I need to know if France has a President or a Prime Minister 82 | 83 | ASSISTANT: France has both a President and a Prime Minister, which [...
bot goes on] 84 | ``` 85 | 86 | ## Keeping Flexibility 87 | This library is built to be *intuitive and flexible*, so you can easily change the behavior or look in whatever way suits your needs. Here are some basic examples, staying in the terminal for now: 88 | 89 | ```py 90 | 91 | while True: 92 | # Change the user's display name 93 | convo.user_input(prefix="ME: ") 94 | 95 | # change any of the API arguments 96 | convo.send_to_bot(model="gpt-4", temperature=1, max_tokens=40) 97 | 98 | # change the line spacing of the convo 99 | convo.print_last_message(prefix="GPT: ", lines_before=0, lines_after=2) 100 | 101 | # and let's have it print the total tokens after each turn 102 | print(convo.tokens_total_all) 103 | 104 | 105 | ``` 106 | Now your conversation is with GPT-4, and the conversation looks like this, with token counts: 107 | 108 | ```txt 109 | ME: hey, can you help me with something? 110 | GPT: Of course! I'm here to help. Please let me know what you need assistance with, and I'll do my best to help you. 111 | 112 | 28 113 | 114 | ME: I need to know if France has a President or a Prime Minister 115 | GPT: France has both a President and a Prime Minister. The President of France is [...bot goes on] 116 | 117 | 87 118 | ``` 119 | There is more info about the current defaults, and the various methods to change them, below in the section about sending messages to the API. 120 | 121 | 122 | 123 | ## 📨 Getting Input 124 | The `user_input()` method is the same as the Python `input()` method, except it also automatically appends the user's input as a correctly-formatted dict to your conversation variable. 125 | 126 | ```py 127 | convo.user_input() 128 | ``` 129 | As seen above, this method defaults to prompting the user with "USER: ", but you can change it to whatever you'd like: 130 | ```py 131 | convo.user_input("Ask the bot: ") 132 | ``` 133 | ```txt 134 | Ask the bot: 135 | ``` 136 | 137 | ## 📨 Adding Messages 138 | Maybe you aren't using the terminal.
139 | 140 | Or maybe you want to alter the input somehow before appending it. 141 | 142 | There are several ways to take any string variable and add it to the conversation as a correctly formatted dict: 143 | ```py 144 | # Use the .add() method. Pass it the role, then the content 145 | convo.add("user", message_string) 146 | 147 | # or, use the role-specific methods & just pass the content 148 | convo.add_user(message_string) 149 | convo.add_assistant("I'm a manually added assistant response") 150 | convo.add_system("SYSTEM INSTRUCTIONS - you are a helpful assistant who responds only in JSON") 151 | ``` 152 | 153 | There is also .insert() if you want to add a message at a specific index, instead of appending it to the end of the conversation: 154 | 155 | ```py 156 | # Here's the format 157 | convo.insert(index, role, content) 158 | 159 | # example 160 | convo.insert(4, "system", "IMPORTANT: Remember to not apologize to the user so much") 161 | ``` 162 | 163 | ## 💌 Sending messages to the API 164 | The chatterstack "send_to_bot" method is a standard OpenAI API call, but it's simpler to use and does a bunch of handy stuff for you in the background. Call it like this: 165 | 166 | ```py 167 | convo.send_to_bot() 168 | ``` 169 | That's it! 170 | 171 | It will take care of passing all the default values for you, as well as appending the response to your conversation. It also keeps token counts for you (and in the advanced class, much more). 172 | 173 | ### Changing the defaults for the send_to_bot() method: 174 | By default, chatterstack uses these values: 175 | 176 | ```py 177 | model="gpt-3.5-turbo", 178 | temperature=0.8, 179 | top_p=1, 180 | frequency_penalty=0, 181 | presence_penalty=0, 182 | max_tokens=200 183 | ``` 184 | 185 | There are several ways to change these. Choose whichever is most convenient for you. 186 | 187 | The most obvious way is just to pass them as arguments when you make the call.
For instance, if you wanted GPT-4 and 800 max tokens: 188 | 189 | ```py 190 | convo.send_to_bot(model="gpt-4", max_tokens=800) 191 | ``` 192 | This approach is great when you want to make just one call with some different values. 193 | 194 | But if you know you want different values for the whole conversation, you can define them in caps at the top of your file, and initialize chatterstack using the `globals()` dict, like this: 195 | 196 | ```py 197 | MODEL = "gpt-4" 198 | TEMPERATURE = 0.6 199 | FREQUENCY_PENALTY = 1.25 200 | MAX_TOKENS = 500 201 | 202 | # initialize with 'globals()' 203 | convo = chatterstack.Chatterstack(user_defaults=globals()) 204 | 205 | # and now you can just call it like this again 206 | convo.send_to_bot() 207 | ``` 208 | 209 | Finally, if you want to just use the boilerplate OpenAI call, you can still do that! Just pass it the .list attribute of your Chatterstack, which is the raw list of dictionaries: 210 | 211 | ```py 212 | response = openai.ChatCompletion.create( 213 | model = "gpt-3.5-turbo", 214 | messages = convo.list, # <--- right here 215 | temperature = 0.9, 216 | top_p = 1, 217 | frequency_penalty = 0, 218 | presence_penalty = 0, 219 | max_tokens = 200, 220 | ) 221 | ``` 222 | 223 | ## 📂 Accessing and Printing Messages 224 | Super Simple: 225 | 226 | ```py 227 | # Print the "content" of the last message 228 | convo.print_last_message() 229 | ``` 230 | Or, if you wanted to do formatting on the string first... 231 | ```py 232 | # This represents/is the content string of the last message 233 | convo.last_message 234 | 235 | # So you can do stuff like this: 236 | message_in_caps = convo.last_message.upper() 237 | 238 | # print the message in all upper case 239 | print(message_in_caps) 240 | 241 | ``` 242 | 243 | ## 🪙 What About Tokens? 244 | 245 | oh yeah. We're keeping track of tokens. 
246 | 247 | ```py 248 | # See the tokens used on the last API call 249 | self.last_call_prompt_tokens 250 | self.last_call_full_context_prompt_tokens 251 | self.last_call_completion_tokens 252 | self.last_call_tokens_all 253 | 254 | # At any time, check the totals for the whole conversation so far 255 | self.prompt_tokens_total 256 | self.assistant_tokens_total 257 | self.tokens_total_all 258 | ``` 259 | 260 | 261 | ## ⤵️ List Manipulation 262 | Various methods are available to manipulate the order of the conversation. Here are a few: 263 | ```py 264 | # Insert message at a specified index 265 | convo.insert(index, role, content) 266 | 267 | # Remove N messages from the end of the list 268 | convo.remove_from_end(count) 269 | 270 | # Remove N messages from the start of the list 271 | convo.remove_from_start(count) 272 | ``` 273 | 274 | But much more importantly - methods for fine-grained control over the system message. 275 | 276 | System messages are usually used for instructions, and often it can be helpful to have the instructions appear more "recently" in the conversation, which means tracking and moving this message without disrupting the others. 277 | ```py 278 | # move your system message to be the most recent message in convo 279 | convo.move_system_to_end() 280 | 281 | # or second to last, etc 282 | convo.move_system_to_end(minus=1) 283 | ``` 284 | 285 | And my personal favorite - basically the whole reason I wrote this entire library - 286 | ```py 287 | convo.set_system_lock_index(-1) 288 | ``` 289 | Passing a positive value to this function will lock the system message to that index. Anytime messages are added, removed, or re-ordered, it will make sure the system message stays at that position (or as close to it as possible). 290 | 291 | Passing a negative value will lock your system message to the index counting from the end of the conversation (the example above would make it always be the second to last message in the conversation).
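The negative-index arithmetic can be sketched on its own. This mirrors the resolution step in the library's source (`len + index - 1`), but `resolve_lock_index` is a helper name invented here purely for illustration:

```py
# Hypothetical helper mirroring how set_system_lock_index() resolves a
# negative index: it counts from the end, offset by one, so -1 means
# "second to last".
def resolve_lock_index(index, conversation_length):
    if index < 0:
        index = conversation_length + index - 1
    return index

# In a 6-message conversation (positions 0..5), a lock of -1 resolves
# to position 4, i.e. the second-to-last slot.
print(resolve_lock_index(-1, 6))  # -> 4
```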
292 | 293 | *NOTE: Currently, these methods assume that you only have one system message* 294 | 295 | ## 📊 Track and Debug Your Conversation 296 | Print a formatted version of your conversation (great for debugging): 297 | ```py 298 | convo.print_formatted_conversation() 299 | ``` 300 | By default, prints like this: 301 | ```txt 302 | System: You are a helpful assistant. 303 | User: hi! 304 | Assistant: Hi! How can I help you? 305 | ``` 306 | 307 | See the overall stats of your conversation. 308 | ```py 309 | convo.summary() 310 | ``` 311 | ```txt 312 | SUMMARY: 313 | {'total_messages': 2, 'prompt_tokens': 200, 'assistant_tokens': 78, 'total_tokens': 278} 314 | ``` 315 | 316 | # Advanced Library 317 | The Chatterstack Advanced class extends the base class, and has much more functionality built-in. It is also easily extensible. 318 | 319 | ## Reminders 320 | 321 | >**Warning**: 322 | > Recent changes to the model seem to have broken this functionality. It may or may not work when you try it. It will likely have to be re-written to use function-calling to work consistently again. 323 | 324 | You can now tell the bot "remind me to take the garbage out at 8pm", or "remind me to take the garbage out in an hour" 325 | 326 | ``` 327 | USER: hey, can you remind me to take the garbage out in an hour 328 | 329 | ASSISTANT: Sure! I'll send you a reminder in an hour. 330 | 331 | USER: 332 | {...time passes...} 333 | 334 | ASSISTANT: Hey! Just a quick reminder to take the garbage out! 335 | ``` 336 | The bot can keep track of as many reminders as you want to issue. 337 | 338 | ## Issuing Commands 339 | Issue commands from the user input: 340 | ``` 341 | # saves the conversation to a txt file 342 | 343 | USER: [save] 344 | 345 | # quit the program 346 | 347 | USER: [quit] 348 | ``` 349 | By default you issue commands by using `[` and `]` as delimiters.
But you can change these to whatever you want: 350 | 351 | ``` 352 | convo.open_command = "{{" 353 | convo.close_command = "}}" 354 | ``` 355 | 356 | You can also call any method (or set any attribute) of the chatterstack class itself, right from the chat interface: 357 | ``` 358 | # if you want to see what is currently in the conversation history 359 | USER: [print_formatted_conversation] 360 | 361 | # or how many tokens you have used so far 362 | USER: [print_total_tokens] 363 | 364 | # you can even pass arguments to commands 365 | USER: [set_system_lock_index(-2)] 366 | ``` 367 | 368 | So, while the initial chat program example at the start of this repo may have seemed simplistic, you can see that it's really all you need, as almost any functionality you want can actually be called from inside the chat itself. 369 | 370 | ## Extra Advanced: Adding your own commands 371 | If you want to write your own commands, chatterstack provides a simple interface class to do so, called `ICommand`. 372 | 373 | ```py 374 | class ICommand: 375 | 376 | def execute(self): 377 | pass 378 | ``` 379 | 380 | Basically, you write your command as a class which inherits from the `ICommand` class and has an "execute" method (which is what you want to actually happen when your command gets called).
381 | 382 | Here is an example: 383 | 384 | ```py 385 | class ExampleCommand(ICommand): 386 | 387 | def execute(self): 388 | print("An example command that prints this statement right here.") 389 | ``` 390 | 391 | If your command needs arguments, you can also add an `__init__` method, and pass it `*args` exactly like this: 392 | 393 | ```py 394 | class ExampleCommand(ICommand): 395 | 396 | def __init__(self, *args): 397 | self.args = args 398 | 399 | def execute(self): 400 | print("Example command that prints this statement with this extra stuff:", self.args) 401 | ``` 402 | 403 | The last thing you need to do is assign your command a trigger word or phrase by adding it to the `__init__` method in the ChatterstackAdvanced class. 404 | 405 | ```py 406 | class ChatterstackAdvanced(Chatterstack): 407 | def __init__(self, ...): 408 | # ...Lots of other code here... 409 | 410 | # This command is already in the class: 411 | self.command_handler.register_command('save', SaveConversationCommand(self)) 412 | 413 | # Add your new command (register an instance, like 'save' above) 414 | self.command_handler.register_command("example", ExampleCommand()) 415 | ``` 416 | 417 | --- 418 | 419 | ## Javascript 420 | There is currently a Javascript version of Chatterstack, but I have not made it available yet because I don't know Javascript as well, and am not so confident in its dependability. If you do know Javascript and would like to help, please let me know!
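As a recap of the custom-command section, the pieces fit together like this. Note that `CommandHandler` lives in `commands.py`, which is not reproduced in this README, so the minimal handler below is an assumption sketched from how it is used, not the library's actual implementation:

```py
# Minimal, self-contained sketch of the command pattern described above.
# CommandHandler here is an assumed stand-in for the one in commands.py.
class ICommand:
    def execute(self):
        pass

class CommandHandler:
    def __init__(self):
        self.command_map = {}  # trigger word -> command object

    def register_command(self, name, command):
        self.command_map[name] = command

    def execute_command(self, name):
        # Look up the registered command object and run it
        return self.command_map[name].execute()

class ExampleCommand(ICommand):
    def __init__(self, *args):
        self.args = args

    def execute(self):
        return f"Example command ran with args: {self.args}"

handler = CommandHandler()
handler.register_command("example", ExampleCommand(1, 2))
print(handler.execute_command("example"))  # -> Example command ran with args: (1, 2)
```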
-------------------------------------------------------------------------------- /chatterstack/chatterstack.py: -------------------------------------------------------------------------------- 1 | 2 | import openai 3 | 4 | class Chatterstack: 5 | def __init__(self, user_defaults=None, existing_list=None): 6 | """Initialize the Chatterstack class with optional user default values & existing list of dictionaries, if any.""" 7 | self.config = { 8 | key: value 9 | for key, value in (user_defaults or {}).items() 10 | if key in {"MODEL", "TEMPERATURE", "TOP_P", "FREQUENCY_PENALTY", "PRESENCE_PENALTY", "MAX_TOKENS", "STOP", "STREAM", "LOGIT_BIAS"} 11 | } 12 | if existing_list is None: 13 | self.list = [] 14 | else: 15 | self.list = existing_list 16 | 17 | self.debug = False 18 | self.max_length = 4 19 | self.system_index = -1 20 | self.system_lock_index = None 21 | 22 | self.last_call_prompt_tokens=0 23 | self.last_call_full_context_prompt_tokens=0 24 | self.last_call_completion_tokens=0 25 | self.last_call_tokens_all=0 26 | 27 | self.prompt_tokens_total=0 28 | self.assistant_tokens_total=0 29 | self.tokens_total_all=0 30 | 31 | def __str__(self): 32 | """Return a string representation of the conversation.""" 33 | return str(self.list) 34 | 35 | def __len__(self): 36 | """Return the number of messages in the conversation.""" 37 | return len(self.list) 38 | 39 | def __getitem__(self, index): 40 | """Return the message at the specified index.""" 41 | if index < 0 or index >= len(self.list): 42 | raise IndexError("Index out of range") 43 | return self.list[index] 44 | 45 | def add(self, role, content): 46 | new_dict = {"role": role, "content": content} 47 | self.list.append(new_dict) 48 | 49 | def add_system(self, content): 50 | """Add a system message with specified content to the end of the conversation.""" 51 | self.add("system", content) 52 | 53 | def add_assistant(self, content): 54 | """Add an assistant message with specified content to the end of the
conversation.""" 55 | self.add("assistant", content) 56 | 57 | def add_user(self, content): 58 | """Add a user message with specified content to the end of the conversation.""" 59 | self.add("user", content) 60 | 61 | def user_input(self, prefix="USER: "): 62 | """Prompt the user for input and add it to the end of the conversation.""" 63 | user_input = input(prefix) 64 | self.add_user(user_input) 65 | return user_input 66 | 67 | def move_system_to(self, index): 68 | system_index = -1 69 | system_count = 0 70 | for i, d in enumerate(self.list): 71 | if d["role"] == "system": 72 | system_count += 1 73 | if system_count > 1: 74 | print("More than one 'system' dict found") 75 | return 76 | system_index = i 77 | if system_index == -1: 78 | print("No 'system' dict found") 79 | return 80 | if index < 0 or index >= len(self.list): 81 | print("Index out of range") 82 | return 83 | system_dict = self.list.pop(system_index) 84 | self.list.insert(index, system_dict) 85 | self.update_system_index() 86 | # Ensure the "system" dictionary stays as close as possible to the system_lock_index 87 | if self.system_lock_index is not None: 88 | if self.system_index > self.system_lock_index: 89 | self.move_system_to(self.system_lock_index) 90 | elif self.system_index < self.system_lock_index: 91 | self.move_system_to(min(self.system_lock_index, len(self.list) - 1)) 92 | 93 | def move_system_to_end(self, minus=0): 94 | if minus < 0: 95 | print("Minus value cannot be negative") 96 | return 97 | target_index = len(self.list) - 1 - minus 98 | self.move_system_to(target_index) 99 | self.update_system_index() 100 | 101 | def set_max_length(self, max_length): 102 | self.max_length = max_length 103 | 104 | def trim_to_max_length(self): 105 | if self.max_length is not None: 106 | if self.max_length <= 1: 107 | self.list = [self.list[self.system_index]] 108 | self.system_index = 0 109 | return 110 | while len(self.list) > self.max_length: 111 | if self.system_index > 0: 112 | self.remove_from_start(1)
113 | elif len(self.list) > 1: 114 | self.remove_from_start(2) 115 | else: 116 | break 117 | self.update_system_index() 118 | # Get system message as close as possible to its original position 119 | if self.system_lock_index is not None: 120 | if self.system_index > self.system_lock_index: 121 | self.move_system_to(self.system_lock_index) 122 | elif self.system_index < self.system_lock_index: 123 | self.move_system_to(min(self.system_lock_index, len(self.list) - 1)) 124 | 125 | def update_system_index(self): 126 | for i, d in enumerate(self.list): 127 | if d["role"] == "system": 128 | self.system_index = i 129 | return 130 | self.system_index = -1 131 | 132 | def set_system_lock_index(self, index): 133 | if index < 0: 134 | index = len(self.list) + index - 1 135 | 136 | if index < 0 or index >= len(self.list): 137 | print("Index out of range") 138 | return 139 | 140 | self.system_lock_index = index 141 | self.move_system_to(index) 142 | 143 | 144 | 145 | def send_to_bot(self, **kwargs): 146 | """Send the conversation to the OpenAI API and append the response to the end of the conversation. 
Uses 3.5-turbo by default.""" 147 | self.trim_to_max_length() 148 | model = kwargs.get("model", self.config.get("MODEL", "gpt-3.5-turbo")) 149 | temperature = kwargs.get("temperature", self.config.get("TEMPERATURE", 0.8)) 150 | top_p = kwargs.get("top_p", self.config.get("TOP_P", 1)) 151 | frequency_penalty = kwargs.get("frequency_penalty", self.config.get("FREQUENCY_PENALTY", 0)) 152 | presence_penalty = kwargs.get("presence_penalty", self.config.get("PRESENCE_PENALTY", 0)) 153 | max_tokens = kwargs.get("max_tokens", self.config.get("MAX_TOKENS", 200)) 154 | stop = kwargs.get("stop", self.config.get("STOP", None)) 155 | stream = kwargs.get("stream", self.config.get("STREAM", False)) 156 | logit_bias = kwargs.get("logit_bias", self.config.get("LOGIT_BIAS", {})) 157 | 158 | response = openai.ChatCompletion.create( 159 | model=model, 160 | messages=self.list, 161 | temperature=temperature, 162 | top_p=top_p, 163 | frequency_penalty=frequency_penalty, 164 | presence_penalty=presence_penalty, 165 | max_tokens=max_tokens, 166 | stop=stop, 167 | stream=stream, 168 | logit_bias=logit_bias, 169 | ) 170 | self.add_assistant(response.choices[0].message.content.strip()) 171 | 172 | api_usage = response['usage'] 173 | self.last_call_full_context_prompt_tokens = int(api_usage['prompt_tokens']) 174 | self.last_call_completion_tokens = int(api_usage['completion_tokens']) 175 | self.last_call_tokens_all = int(api_usage['total_tokens']) 176 | 177 | self.last_call_prompt_tokens = self.last_call_full_context_prompt_tokens - self.tokens_total_all 178 | 179 | self.prompt_tokens_total += self.last_call_full_context_prompt_tokens 180 | self.assistant_tokens_total += self.last_call_completion_tokens 181 | self.tokens_total_all += self.last_call_tokens_all 182 | return self 183 | 184 | 185 | def remove_from_end(self, count): 186 | """Remove N messages (count) from the end of the list.""" 187 | if count < 0: 188 | print("Count must be a non-negative integer") 189 | return 190 |
self.list = self.list[:len(self.list) - count] if count < len(self.list) else [] 191 | 192 | 193 | def remove_from_start(self, count): 194 | """Remove N messages (count) from the start of the list.""" 195 | if count < 0: 196 | print("Count must be a non-negative integer") 197 | return 198 | self.list = self.list[count:] if count < len(self.list) else [] 199 | 200 | 201 | def insert(self, index, role, content): 202 | """Insert a message at the specified index with the given role and content.""" 203 | if role not in ["system", "assistant", "user"]: 204 | print("Invalid role") 205 | return 206 | if index < 0 or index > len(self.list): 207 | print("Index out of range") 208 | return 209 | self.list.insert(index, {"role": role, "content": content}) 210 | 211 | 212 | def move_system_message(self, index, from_end=False): 213 | """Move the system message to the specified index. Only works if conversation contains one system message.""" 214 | system_index = -1 215 | system_count = 0 216 | for i, d in enumerate(self.list): 217 | if d["role"] == "system": 218 | system_count += 1 219 | if system_count > 1: 220 | print("More than one 'system' dict found") 221 | return 222 | system_index = i 223 | if system_index == -1: 224 | print("No 'system' dict found") 225 | return 226 | if from_end: 227 | index = len(self.list) - 1 - index 228 | if index < 0 or index >= len(self.list): 229 | print("Index out of range") 230 | return 231 | system_dict = self.list.pop(system_index) 232 | self.list.insert(index, system_dict) 233 | 234 | 235 | def find_message_containing(self, substring, check_lock=True): 236 | matching_indices = [i for i, d in enumerate(self.list) if substring in d["content"]] 237 | if len(matching_indices) == 0: 238 | print("Warning: No message containing the substring was found") 239 | return -1, False 240 | elif len(matching_indices) > 1: 241 | print("Warning: Multiple messages containing the substring were found") 242 | return -1, False 243 | else: 244 | message_index = matching_indices[0]
245 | is_locked = message_index == self.system_lock_index 246 | if is_locked and check_lock: 247 | print("Warning: The message containing the substring is locked") 248 | return message_index, is_locked 249 | 250 | def is_locked_by_substring(self, substring): 251 | _, is_locked = self.find_message_containing(substring, check_lock=False) 252 | return is_locked 253 | 254 | def move_message_containing(self, substring, index): 255 | if index < 0 or index >= len(self.list): 256 | print("Index out of range") 257 | return 258 | message_index, is_locked = self.find_message_containing(substring) 259 | if message_index == -1: 260 | return 261 | if is_locked: 262 | return 263 | message_dict = self.list.pop(message_index) 264 | self.list.insert(index, message_dict) 265 | 266 | def remove_message_containing(self, substring): 267 | message_index, is_locked = self.find_message_containing(substring) 268 | if message_index == -1: 269 | return 270 | if is_locked: 271 | return 272 | del self.list[message_index] 273 | self.dbprint(f'1 message containing the "{substring}" was removed') 274 | 275 | 276 | 277 | 278 | 279 | @property 280 | def last_message(self): 281 | return self.list[-1]["content"] 282 | 283 | @property 284 | def last_system_message(self): 285 | for message in reversed(self.list): 286 | if message["role"] == "system": 287 | return message["content"] 288 | return None 289 | 290 | @property 291 | def last_user_message(self): 292 | for message in reversed(self.list): 293 | if message["role"] == "user": 294 | return message["content"] 295 | return None 296 | 297 | @property 298 | def last_assistant_message(self): 299 | for message in reversed(self.list): 300 | if message["role"] == "assistant": 301 | return message["content"] 302 | return None 303 | 304 | 305 | def print_last_message(self, prefix="ASSISTANT: ", lines_before=1, lines_after=1): 306 | """Print the last message in the conversation.""" 307 | for i in range(lines_before): 308 | print() 309 | content = 
self.list[-1]["content"] 310 | print(f"{prefix}{content}") 311 | for i in range(lines_after): 312 | print() 313 | 314 | def print_total_tokens(self): 315 | """Print the total number of tokens used in the conversation.""" 316 | print(f"Total tokens: {self.tokens_total_all}") 317 | 318 | def print_formatted_conversation(self): 319 | for d in self.list: 320 | print(f'{d["role"].capitalize()}: {d["content"]}') 321 | 322 | 323 | def summary(self): 324 | """Return a summary of the conversation.""" 325 | summary_dict = { 326 | "total_messages": len(self.list), 327 | "prompt_tokens": self.prompt_tokens_total, 328 | "assistant_tokens": self.assistant_tokens_total, 329 | "total_tokens": self.tokens_total_all, 330 | } 331 | return summary_dict 332 | 333 | def dbprint(self, message): 334 | if self.debug: 335 | print(message) 336 | 337 | def clear(self): 338 | """Clear the conversation.""" 339 | self.list = [] 340 | 341 | 342 | 343 | 344 | 345 | 346 | -------------------------------------------------------------------------------- /chatterstack/chatterstackadvanced.py: -------------------------------------------------------------------------------- 1 | from chatterstack import * 2 | from commands import * 3 | import signal, ast, datetime, json, re 4 | 5 | 6 | # ---------------- --------------------- ---------------- --------------------- 7 | # CHATTERSTACK ADVANCED 8 | # ---------------- --------------------- ---------------- --------------------- 9 | 10 | 11 | class ChatterstackAdvanced(Chatterstack): 12 | def __init__(self, user_defaults=None, existing_list=None): 13 | super().__init__(user_defaults, existing_list) 14 | 15 | self.command_handler = CommandHandler() 16 | self.command_handler.register_command('quit', QuitCommand()) 17 | self.command_handler.register_command('save', SaveConversationCommand(self)) 18 | # the CallMethodCommand doesn't get registered here, because it is handled separately, from within the parse_message_for_commands() method directly, for argument
parsing reasons 19 | 20 | self.debug = False 21 | 22 | self.first_response_time = None 23 | self.last_response_time = None 24 | self.duration = None 25 | 26 | self.reminders = [] 27 | self.parse_for_reminders = True 28 | self.timestamps = False 29 | self.open_command = "[" 30 | self.close_command = "]" 31 | self.enable_commands = True 32 | self.enable_reminders = True 33 | 34 | self.next_event = None 35 | self.seconds_remaining = None 36 | 37 | 38 | 39 | 40 | def add(self, role, content): 41 | if self.timestamps and role != "assistant": 42 | timestamp = datetime.datetime.now().strftime('%m/%d %H:%M') 43 | content = f"{timestamp} {content}" 44 | new_dict = {"role": role, "content": content} 45 | self.list.insert(len(self.list), new_dict) 46 | return self 47 | 48 | 49 | def user_input(self, prefix="USER: ", parse_commands=None): 50 | while True: 51 | if parse_commands is None: 52 | parse_commands = self.enable_commands 53 | if self.enable_reminders: 54 | timeout = self.update_timeout() 55 | if timeout: 56 | signal.signal(signal.SIGALRM, self.alarm_handler) 57 | signal.alarm(timeout) 58 | try: 59 | user_text = input(prefix) 60 | if parse_commands: 61 | _, user_text = self.parse_message_for_commands(user_text) 62 | if user_text: 63 | self.add_user(user_text) 64 | if timeout: 65 | signal.alarm(0) 66 | break 67 | except TimeoutError: 68 | signal.alarm(0) 69 | self.send_reminder() 70 | 71 | @staticmethod 72 | def parse_argument(arg): 73 | try: 74 | return ast.literal_eval(arg) 75 | except (ValueError, SyntaxError): 76 | return arg.strip() 77 | 78 | @staticmethod 79 | def alarm_handler(signum, frame): 80 | raise TimeoutError() 81 | 82 | 83 | def parse_message_for_commands(self, message): 84 | pattern = re.escape(self.open_command) + r'([^' + re.escape(self.close_command) + r'\(]*)' + r'(\([^)]*\))?' 
+ re.escape(self.close_command) 85 | match = re.search(pattern, message) 86 | if match: 87 | command = match.group(1) 88 | args_str = match.group(2) 89 | remaining_message = re.sub(pattern, '', message, count=1).strip() 90 | if args_str: 91 | # Split and parse arguments 92 | args_str = args_str.strip("()") 93 | args = [self.parse_argument(arg) for arg in args_str.split(",")] 94 | else: 95 | args = [] 96 | if command in self.command_handler.command_map: 97 | self.command_handler.execute_command(command) 98 | else: 99 | call_method_command = CallMethodCommand(self, command, *args) 100 | call_method_command.execute() 101 | return command, remaining_message if remaining_message else None 102 | else: 103 | return None, message 104 | 105 | 106 | def parse_message_for_reminders(self, message_to_parse=None): 107 | """WARNING: The June updates to GPT-4 have significantly reduced the model's ability to accurately output reminders. This method probably needs to be reworked to use OpenAI's new "function calling" feature.""" 108 | if not self.list and message_to_parse is None: 109 | return [], None 110 | message = message_to_parse if message_to_parse is not None else self.list[-1]["content"] 111 | # Look for {{title|MM/DD HH:mm}} 112 | pattern = r'\{\{([^|]*?)\s*\|\s*([0-1]?[0-9]/[0-3]?[0-9]\s*[0-2]?[0-9]:[0-5][0-9])\}\}' 113 | matches = re.findall(pattern, message) 114 | for title, time in matches: 115 | self.reminders.append((title.strip(), time.strip())) # Store as tuple -- not a dict 116 | 117 | self.dbprint(f"\n\033[30mREMINDERS:\n{self.reminders}\033[0m") 118 | # Remove matches from message 119 | modified_message = re.sub(pattern, '', message) 120 | return matches, modified_message 121 | 122 | 123 | def to_json(self): 124 | """Return the conversation list as a JSON-formatted string.""" 125 | return json.dumps(self.list) 126 | 127 | 128 | def from_json(self, json_string, clear_all=False): 129 | """Load a conversation from a JSON-formatted string.
130 | WARNING: This will overwrite the current conversation. 131 | This may also lead to a mismatch between your conversation and some instance attributes. 132 | You can also clear all attributes by setting clear_all=True. 133 | Think about what you want when using this method.""" 134 | if clear_all: 135 | for attr in self.__dict__: 136 | self.__dict__[attr] = None 137 | self.list = json.loads(json_string) 138 | 139 | 140 | def get_conversation_duration(self): 141 | if self.first_response_time is None: 142 | print("No initial timestamp yet") 143 | return 144 | most_recent_message_time = int(datetime.datetime.now().timestamp()) 145 | duration_seconds = most_recent_message_time - self.first_response_time 146 | hours, remainder = divmod(duration_seconds, 3600) 147 | minutes, _ = divmod(remainder, 60) 148 | self.duration = f"{hours:02d}:{minutes:02d}" 149 | return self.duration 150 | 151 | 152 | def update_timeout(self): 153 | _, remaining_seconds = self.get_next_event() 154 | return remaining_seconds 155 | 156 | 157 | def send_reminder(self): 158 | title, time = self.next_event 159 | self.add_system(f"FROM SYSTEM: Generate the reminder message to send to the user now for: [{title}]. This message will be sent to the user via their calendar reminder system.
(DO NOT create a reminder at the beginning of your response to this message.)") 160 | self.reminders.remove(self.next_event) 161 | self.send_to_bot() 162 | self.remove_message_containing("FROM SYSTEM") 163 | self.print_last_message() 164 | 165 | 166 | def change_attribute(self, attribute, new_value): 167 | setattr(self, attribute, new_value) 168 | 169 | 170 | def get_next_event(self): 171 | now = datetime.datetime.now() 172 | remaining_seconds = float('inf') 173 | next_event_var = None 174 | for event in self.reminders: 175 | # Assuming event is a tuple with the format (title, time) 176 | title, time = event 177 | target = datetime.datetime.strptime(time, "%m/%d %H:%M").replace(year=now.year) 178 | if target < now: 179 | continue 180 | remaining = target - now 181 | if remaining.total_seconds() < remaining_seconds: 182 | remaining_seconds = remaining.total_seconds() 183 | self.seconds_remaining = int(remaining_seconds) 184 | self.next_event = next_event_var = event 185 | if next_event_var: 186 | self.dbprint(f"REMAINING SECONDS: {self.seconds_remaining}") 187 | return (next_event_var, self.seconds_remaining) if next_event_var else (None, None) 188 | 189 | 190 | def imagine_api(self, api_type, prompt, max_tokens=500, temperature=0.1): 191 | '''This method takes two strings - the first is what type of API you want the model to act as, and the second is the prompt you want to send to that API. It returns one string - just the content of the bot's response. This SHOULD be a JSON-formatted string (if the model follows its instructions).''' 192 | instructions = [{"role": "system", "content": f"You are an assistant that acts as an {api_type} API. Whatever the input, you must output a JSON string that would be returned from a {api_type} API. Do not include any other text or characters in your response but the JSON string, or it will break the text parser.
If you cannot come up with correct or relevant information for the API response, you should make up data (or return null data) rather than not return a JSON response."}, {"role": "user", "content": f"{prompt}"}] 193 | response = openai.ChatCompletion.create( 194 | model="gpt-4", 195 | messages=instructions, 196 | max_tokens=max_tokens, 197 | temperature=temperature, 198 | top_p=1, 199 | frequency_penalty=0, 200 | presence_penalty=0, 201 | ) 202 | response_string = response.choices[0].message.content.strip() 203 | self.dbprint(f"RESPONSE STRING: {response_string}") 204 | return response_string 205 | 206 | 207 | def get_completion(self, prompt, model="text-davinci-003", max_tokens=400, temperature=1, print_response=True): 208 | self.dbprint(prompt) 209 | response = openai.Completion.create( 210 | model=model, 211 | prompt=prompt, 212 | max_tokens=max_tokens, 213 | n=1, 214 | stop=None, 215 | temperature=temperature, 216 | ) 217 | if print_response: 218 | print(response.choices[0].text.strip()) 219 | return response.choices[0].text.strip() 220 | 221 | 222 | def send_to_bot(self, parse=None, **kwargs): 223 | """Send the conversation to the OpenAI API and append the response to the end of the conversation.
Uses 3.5-turbo by default.""" 224 | self.trim_to_max_length() 225 | parse = kwargs.get('parse', parse)  # keep the positional argument if no kwarg was given 226 | model = kwargs.get("model", self.config.get("model", "gpt-3.5-turbo")) 227 | temperature = kwargs.get("temperature", self.config.get("temperature", 0.8)) 228 | top_p = kwargs.get("top_p", self.config.get("top_p", 1)) 229 | frequency_penalty = kwargs.get("frequency_penalty", self.config.get("frequency_penalty", 0)) 230 | presence_penalty = kwargs.get("presence_penalty", self.config.get("presence_penalty", 0)) 231 | max_tokens = kwargs.get("max_tokens", self.config.get("max_tokens", 200)) 232 | stop = kwargs.get("stop", self.config.get("stop", None)) 233 | stream = kwargs.get("stream", self.config.get("stream", False)) 234 | logit_bias = kwargs.get("logit_bias", self.config.get("logit_bias", {})) 235 | 236 | response = openai.ChatCompletion.create( 237 | model=model, 238 | messages=self.list, 239 | temperature=temperature, 240 | top_p=top_p, 241 | frequency_penalty=frequency_penalty, 242 | presence_penalty=presence_penalty, 243 | max_tokens=max_tokens, 244 | stop=stop, 245 | stream=stream, 246 | logit_bias=logit_bias, 247 | ) 248 | 249 | self.dbprint(response.choices[0].message.content.strip()) 250 | 251 | message_to_append = response.choices[0].message.content.strip() 252 | 253 | 254 | # Check conditions for parsing reminders 255 | if parse is True or (parse is None and self.parse_for_reminders is True): 256 | self.dbprint(f"BEFORE PARSE: {message_to_append}") 257 | _, message_to_append = self.parse_message_for_reminders(message_to_append) 258 | self.dbprint(f"AFTER PARSE: {message_to_append}") 259 | 260 | 261 | self.add_assistant(message_to_append) 262 | 263 | self.last_response_time = response.created 264 | if self.first_response_time is None: 265 | self.first_response_time = response.created 266 | self.dbprint(f"First response time: {self.first_response_time}") 267 | 268 | api_usage = response['usage'] 269 | self.last_call_full_context_prompt_tokens =
int(api_usage['prompt_tokens']) 270 | self.last_call_completion_tokens = int(api_usage['completion_tokens']) 271 | self.last_call_tokens_all = int(api_usage['total_tokens']) 272 | 273 | self.last_call_prompt_tokens = self.last_call_full_context_prompt_tokens - self.tokens_total_all 274 | 275 | self.prompt_tokens_total += self.last_call_full_context_prompt_tokens 276 | self.assistant_tokens_total += self.last_call_completion_tokens 277 | self.tokens_total_all += self.last_call_tokens_all 278 | return self 279 | -------------------------------------------------------------------------------- /chatterstack/commands.py: -------------------------------------------------------------------------------- 1 | 2 | class CommandHandler: 3 | def __init__(self): 4 | self.command_map = {} 5 | 6 | def register_command(self, command_name, command_instance): 7 | self.command_map[command_name] = command_instance 8 | 9 | def execute_command(self, command_name, *args, **kwargs): 10 | if command_name in self.command_map: 11 | self.command_map[command_name].execute(*args, **kwargs) 12 | 13 | 14 | # Commands must inherit from this class 15 | # and must have the execute() method 16 | class ICommand: 17 | def execute(self): 18 | pass 19 | 20 | 21 | # Commands made using the ICommand class 22 | class ExampleCommand(ICommand): 23 | def __init__(self, *args): 24 | self.args = args 25 | 26 | def execute(self): 27 | print("Example command called with arguments:", self.args) 28 | 29 | class QuitCommand(ICommand): 30 | def execute(self): 31 | print("Quitting the program.") 32 | exit(0) 33 | 34 | class SaveConversationCommand(ICommand): 35 | def __init__(self, chatterstack, file_name="conversation.txt"): 36 | self.chatterstack = chatterstack 37 | self.file_name = file_name 38 | 39 | def execute(self): 40 | print(f"Saving conversation to {self.file_name}") 41 | conversation_json = self.chatterstack.to_json() 42 | with open(self.file_name, "w") as file: 43 | file.write(conversation_json) 44 |
print(f"Conversation saved to {self.file_name}") 45 | 46 | # This command allows you to call any method of the Chatterstack class 47 | # For instance, if you want to call the Chatterstack's add_system() method, 48 | # you would call this command like this: 49 | # [add_system("Hello, I am a system.")] 50 | class CallMethodCommand(ICommand): 51 | def __init__(self, chatterstack, method_name, *args): 52 | self.chatterstack = chatterstack 53 | self.method_name = method_name 54 | self.args = args 55 | 56 | def execute(self): 57 | method = getattr(self.chatterstack, self.method_name, None) 58 | if method and callable(method): 59 | try: 60 | method(*self.args) 61 | except Exception as e: 62 | print(f"Error calling method '{self.method_name}': {e}") 63 | else: 64 | print(f"Error: No method named '{self.method_name}' found in Chatterstack class.") 65 | --------------------------------------------------------------------------------
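
The classes in commands.py form a small command pattern: commands register with a `CommandHandler` and are dispatched by name. A minimal standalone sketch of how the pieces fit together — `CommandHandler` and `ICommand` mirror the classes above, while `AppendCommand` is a hypothetical command added just for demonstration:

```python
# Sketch of the command pattern used in commands.py.
# AppendCommand is illustrative, not part of the library.

class CommandHandler:
    def __init__(self):
        self.command_map = {}

    def register_command(self, command_name, command_instance):
        self.command_map[command_name] = command_instance

    def execute_command(self, command_name, *args, **kwargs):
        # Unknown command names are silently ignored
        if command_name in self.command_map:
            self.command_map[command_name].execute(*args, **kwargs)


class ICommand:
    def execute(self):
        pass


class AppendCommand(ICommand):
    """Records whatever arguments it is dispatched with."""
    def __init__(self, log):
        self.log = log

    def execute(self, *args):
        self.log.append(args)


log = []
handler = CommandHandler()
handler.register_command("append", AppendCommand(log))
handler.execute_command("append", "hello", 42)
handler.execute_command("missing")  # no-op: not registered
print(log)  # [('hello', 42)]
```

Note that commands receive their construction-time arguments via `__init__` (as `ExampleCommand` does), while runtime arguments flow through `execute_command(*args)`; `CallMethodCommand` is the only path in the library that forwards user-typed arguments.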
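
The `{{title|MM/DD HH:mm}}` reminder syntax handled by `parse_message_for_reminders` can be exercised on its own. This sketch reuses the exact regex from that method; the sample message and the trailing `.strip()` are illustrative:

```python
import re

# The same {{title|MM/DD HH:mm}} pattern used by parse_message_for_reminders
pattern = r'\{\{([^|]*?)\s*\|\s*([0-1]?[0-9]/[0-3]?[0-9]\s*[0-2]?[0-9]:[0-5][0-9])\}\}'

message = "Sure, I'll remind you. {{Call the dentist|06/12 14:30}}"
matches = re.findall(pattern, message)

# Store reminders as (title, time) tuples, as the method does
reminders = [(title.strip(), time.strip()) for title, time in matches]

# Strip the reminder markup before the message is shown to the user
modified_message = re.sub(pattern, '', message).strip()

print(reminders)         # [('Call the dentist', '06/12 14:30')]
print(modified_message)  # Sure, I'll remind you.
```

The two capture groups map directly onto the `(title, time)` tuples appended to `self.reminders`, and the time format matches the `"%m/%d %H:%M"` string that `get_next_event` later feeds to `datetime.strptime`.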