├── key.txt ├── LICENSE ├── chatGPTAPIbasics.ipynb └── function_calling.ipynb /key.txt: -------------------------------------------------------------------------------- 1 | YOURKEYHERE -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Harrison 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /chatGPTAPIbasics.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "attachments": {}, 5 | "cell_type": "markdown", 6 | "metadata": {}, 7 | "source": [ 8 | "Working through this notebook on YouTube: https://www.youtube.com/watch?v=c-g6epk3fFE" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "metadata": {}, 14 | "source": [ 15 | "What is going on everyone and welcome to a video going over the ChatGPT API that was recently released by OpenAI. \n", 16 | "\n", 17 | "There is already a ChatGPT implementation where you can chat with ChatGPT extremely easily, so why might we be interested in an API instead?\n", 18 | "\n", 19 | "Essentially, the API just plain gives you far more power and control to do new and novel things with ChatGPT's responses, as well as the ability to integrate it with other applications.\n", 20 | "\n", 21 | "In order to query this model, we will first need an API key. For this, you'll need an account and to set up billing. Typically, you will get some starting credit, but you may or may not, depending on when you sign up and try to use this API. You can create your account at https://platform.openai.com/\n", 22 | "\n", 23 | "From there, go to the top right, click your profile, manage account, and then billing to add a payment method. From here, on the left side, choose API Keys under \"user.\"\n", 24 | "\n", 25 | "Create a key, and then copy the key's value; you will need it in your program. In the same directory that you're working in, create a \"key.txt\" file and copy and paste the key in there. Save and exit. This particular API costs $0.002, or a fifth of a penny, per 1,000 tokens at the time of my writing.\n", 26 | "\n", 27 | "You will also need the `openai` Python package. You can install it with `pip install --upgrade openai`.
The upgrade is there to ensure that you have the latest version, since the ChatGPT API is a new feature." 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": 20, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "import openai\n", 37 | "\n", 38 | "# load and set our key\n", 39 | "openai.api_key = open(\"key.txt\", \"r\").read().strip(\"\\n\")" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "The way the ChatGPT API works is that you query the model. Since these models often make use of chat history/context, every query needs to, or can, include a full message history context. \n", 47 | "\n", 48 | "Keep in mind, however, that the maximum context length is 4096 tokens, so you need to stay under that. There are lots of options to work around this, the simplest being truncating earlier messages, but you can even use ChatGPT itself to help you summarize and condense the previous message history. Maybe more on this later though. 4096 tokens is something like 20,000 characters, but this can vary. Tokens are just words, bits of words, or combinations of words or bits of words. Every response from ChatGPT will inform you how many tokens you're using, so you can keep track.\n", 49 | "\n", 50 | "Let's start with an example input from a user to the API:" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 21, 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [ 59 | "completion = openai.ChatCompletion.create(\n", 60 | " model=\"gpt-3.5-turbo\", # this is \"ChatGPT\" at $0.002 per 1k tokens\n", 61 | " messages=[{\"role\": \"user\", \"content\": \"What is the circumference in km of the planet Earth?\"}]\n", 62 | ")" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "Notice the \"role\" is \"user.\" There are 3 roles:\n", 70 | "\n", 71 | "User - This is meant to mimic the end-user that is interacting with the assistant. This is the role that you will be using most of the time.\n", 72 | "System - This role can mimic sort of background nudges and prompts that you might want to inject into the conversation, but that don't need a response. At the moment, system is weighted less than \"user,\" so it still seems more useful to use the user role for encouraging specific behaviors, in my opinion.\n", 73 | "Assistant - This is the agent's response. Often this will be actual responses, but keep in mind... you will be able to inject your own responses here, so you can actually have the agent say whatever you want.
This is a bit of a hack, but it's a fun one and can be useful in certain situations.\n", 74 | "\n", 75 | "The full completion has a lot of information besides just the text response:" 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": 22, 81 | "metadata": {}, 82 | "outputs": [ 83 | { 84 | "name": "stdout", 85 | "output_type": "stream", 86 | "text": [ 87 | "{\n", 88 | " \"choices\": [\n", 89 | " {\n", 90 | " \"finish_reason\": \"stop\",\n", 91 | " \"index\": 0,\n", 92 | " \"message\": {\n", 93 | " \"content\": \"\\n\\nThe circumference of the planet Earth in km is approximately 40,075 km.\",\n", 94 | " \"role\": \"assistant\"\n", 95 | " }\n", 96 | " }\n", 97 | " ],\n", 98 | " \"created\": 1678044086,\n", 99 | " \"id\": \"chatcmpl-6qoD8O1qGxluR2fct8hM9aSYDnqzU\",\n", 100 | " \"model\": \"gpt-3.5-turbo-0301\",\n", 101 | " \"object\": \"chat.completion\",\n", 102 | " \"usage\": {\n", 103 | " \"completion_tokens\": 18,\n", 104 | " \"prompt_tokens\": 18,\n", 105 | " \"total_tokens\": 36\n", 106 | " }\n", 107 | "}\n" 108 | ] 109 | } 110 | ], 111 | "source": [ 112 | "print(completion)" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "In most cases, what you're after is specifically:" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": 23, 125 | "metadata": {}, 126 | "outputs": [ 127 | { 128 | "name": "stdout", 129 | "output_type": "stream", 130 | "text": [ 131 | "\n", 132 | "\n", 133 | "The circumference of the planet Earth in km is approximately 40,075 km.\n" 134 | ] 135 | } 136 | ], 137 | "source": [ 138 | "reply_content = completion.choices[0].message.content\n", 139 | "print(reply_content)" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "So far so good; this is a very basic example of using the API. In most cases, however, you're going to need to manage the history yourself. The API isn't going to manage your history for you, so how might we do that? I would just start with some sort of message history variable for now to keep it simple, but you might use a database or some other storage method.
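Since the API also won't count tokens for you before you send a request, a small helper that drops the oldest messages once the history gets too long can be handy. Here's a minimal sketch of that idea; the names and the 4-characters-per-token figure are rough assumptions of mine, so for anything serious you'd want a real tokenizer like `tiktoken`:

```python
MAX_CONTEXT_TOKENS = 4096
REPLY_BUDGET = 500  # leave room for the model's response

def estimate_tokens(text):
    # very rough heuristic: ~4 characters per token for typical English text
    return len(text) // 4

def trim_history(history):
    # drop the oldest messages until the estimated total fits under the budget
    while history and sum(estimate_tokens(m["content"]) for m in history) > MAX_CONTEXT_TOKENS - REPLY_BUDGET:
        history.pop(0)
    return history
```

You'd call something like `trim_history(message_history)` before each request, with the caveat that this can also drop a pre-prompt you wanted to keep, so you may want to pin the first message or two.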
" 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": 24, 152 | "metadata": {}, 153 | "outputs": [ 154 | { 155 | "name": "stdout", 156 | "output_type": "stream", 157 | "text": [ 158 | "User's input was: What is the moon's circumference in km?\n" 159 | ] 160 | } 161 | ], 162 | "source": [ 163 | "message_history = []\n", 164 | "# What is the moon's circumference in km?\n", 165 | "user_input = input(\"> \")\n", 166 | "print(\"User's input was: \", user_input)" 167 | ] 168 | }, 169 | { 170 | "cell_type": "markdown", 171 | "metadata": {}, 172 | "source": [ 173 | "Now that you have the user input, let's format it for the API:" 174 | ] 175 | }, 176 | { 177 | "cell_type": "code", 178 | "execution_count": 25, 179 | "metadata": {}, 180 | "outputs": [], 181 | "source": [ 182 | "message_history.append({\"role\": \"user\", \"content\": f\"{user_input}\"})" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "Then we can query the API:" 190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": 26, 195 | "metadata": {}, 196 | "outputs": [ 197 | { 198 | "name": "stdout", 199 | "output_type": "stream", 200 | "text": [ 201 | "\n", 202 | "\n", 203 | "The moon's circumference is approximately 10,917 km.\n" 204 | ] 205 | } 206 | ], 207 | "source": [ 208 | "completion = openai.ChatCompletion.create(\n", 209 | " model=\"gpt-3.5-turbo\",\n", 210 | " messages=message_history\n", 211 | ")\n", 212 | "\n", 213 | "# Now we can print the response:\n", 214 | "reply_content = completion.choices[0].message.content\n", 215 | "print(reply_content)" 216 | ] 217 | }, 218 | { 219 | "cell_type": "markdown", 220 | "metadata": {}, 221 | "source": [ 222 | "After getting a repsonse, you'll want to append it to the history:" 223 | ] 224 | }, 225 | { 226 | "cell_type": "code", 227 | "execution_count": 27, 228 | "metadata": {}, 229 | "outputs": [], 230 | "source": [ 231 | "# note the use of the \"assistant\" role here. This is because we're feeding the model's response into context.\n", 232 | "message_history.append({\"role\": \"assistant\", \"content\": f\"{reply_content}\"})" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "metadata": {}, 238 | "source": [ 239 | "We can then followup with another query, demonstrating the use of history:" 240 | ] 241 | }, 242 | { 243 | "cell_type": "code", 244 | "execution_count": 28, 245 | "metadata": {}, 246 | "outputs": [ 247 | { 248 | "name": "stdout", 249 | "output_type": "stream", 250 | "text": [ 251 | "User's input was: which moon is that in reference to?\n", 252 | "\n", 253 | "I'm sorry for the confusion. 
That circumference is for Earth's moon, also known as Luna.\n" 254 | ] 255 | } 256 | ], 257 | "source": [ 258 | "# which moon is that in reference to?\n", 259 | "user_input = input(\"> \")\n", 260 | "print(\"User's input was: \", user_input)\n", 261 | "print()\n", 262 | "message_history.append({\"role\": \"user\", \"content\": f\"{user_input}\"})\n", 263 | "\n", 264 | "completion = openai.ChatCompletion.create(\n", 265 | " model=\"gpt-3.5-turbo\",\n", 266 | " messages=message_history\n", 267 | ")\n", 268 | "\n", 269 | "reply_content = completion.choices[0].message.content\n", 270 | "print(reply_content)" 271 | ] 272 | }, 273 | { 274 | "cell_type": "markdown", 275 | "metadata": {}, 276 | "source": [ 277 | "Combining everything:" 278 | ] 279 | }, 280 | { 281 | "cell_type": "code", 282 | "execution_count": 31, 283 | "metadata": {}, 284 | "outputs": [ 285 | { 286 | "name": "stdout", 287 | "output_type": "stream", 288 | "text": [ 289 | "User's input was: Can I drink the water from a dehumidifier?\n", 290 | "\n", 291 | "\n", 292 | "As an AI language model, I cannot recommend that you drink water from a dehumidifier. Although dehumidifiers remove moisture from the air, the water collected can contain impurities and toxins that are harmful to human health. Also, dehumidifiers are not designed to produce potable water, and the water may contain bacteria, fungi, or other contaminants that might make you ill. It is always best to use a certified water source or filter the collected water before drinking it.\n", 293 | "\n", 294 | "User's input was: How might we make it safe in an emergency to drink?\n", 295 | "In an emergency situation, it is essential to have access to safe drinking water. If you need water and there is no other option but to use water from a dehumidifier, here are some steps you can take to make it safer to drink:\n", 296 | "\n", 297 | "1. Check the water: Make sure the water is clean and clear. If the water is cloudy or has particles in it, do not drink it.\n", 298 | "\n", 299 | "2. Boil the water: Boiling the water can kill off any bacteria or viruses that may be present. Bring the water to a rolling boil, then let it cool down before drinking.\n", 300 | "\n", 301 | "3. Use a filter: A water filter can help remove impurities from the water. A portable water filter, such as a ceramic or carbon filter, can be useful in such situations.\n", 302 | "\n", 303 | "4. Add purification tablets or drops: Purification tablets or drops, such as iodine or chlorine, can kill off harmful microorganisms in the water. 
Follow the instructions provided by the manufacturer to ensure proper usage.\n", 304 | "\n", 305 | "It is always better to have clean and safe drinking water stored in advance, rather than relying on questionable sources during an emergency situation.\n", 306 | "\n" 307 | ] 308 | } 309 | ], 310 | "source": [ 311 | "message_history = []\n", 312 | "\n", 313 | "def chat(inp, role=\"user\"):\n", 314 | " message_history.append({\"role\": role, \"content\": f\"{inp}\"})\n", 315 | " completion = openai.ChatCompletion.create(\n", 316 | " model=\"gpt-3.5-turbo\",\n", 317 | " messages=message_history\n", 318 | " )\n", 319 | " reply_content = completion.choices[0].message.content\n", 320 | " message_history.append({\"role\": \"assistant\", \"content\": f\"{reply_content}\"})\n", 321 | " return reply_content\n", 322 | "\n", 323 | "for i in range(2):\n", 324 | " user_input = input(\"> \")\n", 325 | " print(\"User's input was: \", user_input)\n", 326 | " print(chat(user_input))\n", 327 | " print()" 328 | ] 329 | }, 330 | { 331 | "cell_type": "markdown", 332 | "metadata": {}, 333 | "source": [ 334 | "Great, it looks like everything is working. Now, let's see how we might combine this into our own application. We can start off with the most obvious example: a chatbot, and we can make use of `gradio` for the front-end UI.\n", 335 | "\n", 336 | "To use gradio, we'll need to install it with `pip install gradio`. Then, we'll make our initial imports:" 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": null, 342 | "metadata": {}, 343 | "outputs": [], 344 | "source": [ 345 | "import gradio as gr\n", 346 | "import openai\n", 347 | "\n", 348 | "openai.api_key = open(\"key.txt\", \"r\").read().strip(\"\\n\")" 349 | ] 350 | }, 351 | { 352 | "cell_type": "markdown", 353 | "metadata": {}, 354 | "source": [ 355 | "Then, we can start by defining our message history. In this case, let's make our chatbot a joke bot, where we supply the subject(s) and the bot will make a joke from there.\n", 356 | "\n", 357 | "I'll start by having the user submit the following:\n", 358 | "\n", 359 | "\"You are a joke bot. I will specify the subject matter in my messages, and you will reply with a joke that includes the subjects I mention in my messages. Reply only with jokes to further input. If you understand, say OK.\"" 360 | ] 361 | }, 362 | { 363 | "cell_type": "code", 364 | "execution_count": 1, 365 | "metadata": {}, 366 | "outputs": [], 367 | "source": [ 368 | "message_history = [{\"role\": \"user\", \"content\": f\"You are a joke bot. I will specify the subject matter in my messages, and you will reply with a joke that includes the subjects I mention in my messages. Reply only with jokes to further input. If you understand, say OK.\"},\n", 369 | " {\"role\": \"assistant\", \"content\": f\"OK\"}]" 370 | ] 371 | }, 372 | { 373 | "cell_type": "markdown", 374 | "metadata": {}, 375 | "source": [ 376 | "\n", 377 | "\n", 378 | "Then, we'll inject the assistant's reply of \"OK\" to encourage it to do what I've asked.
Next, we'll make a predict function, which is similar to our `chat` function from before, but merged with the demo `predict` function from a gradio example:" 379 | ] 380 | }, 381 | { 382 | "cell_type": "code", 383 | "execution_count": 2, 384 | "metadata": {}, 385 | "outputs": [], 386 | "source": [ 387 | "def predict(input):\n", 388 | " # append the new user input to the message history\n", 389 | " message_history.append({\"role\": \"user\", \"content\": f\"{input}\"})\n", 390 | "\n", 391 | " completion = openai.ChatCompletion.create(\n", 392 | " model=\"gpt-3.5-turbo\",\n", 393 | " messages=message_history\n", 394 | " )\n", 395 | " # just the reply text\n", 396 | " reply_content = completion.choices[0].message.content\n", 397 | " \n", 398 | " message_history.append({\"role\": \"assistant\", \"content\": f\"{reply_content}\"}) \n", 399 | " \n", 400 | " # get pairs of msg[\"content\"] from the message history, skipping the pre-prompt\n", 401 | " response = [(message_history[i][\"content\"], message_history[i+1][\"content\"]) for i in range(2, len(message_history)-1, 2)] # list of (user, assistant) tuples\n", 402 | " return response" 403 | ] 404 | }, 405 | { 406 | "cell_type": "markdown", 407 | "metadata": {}, 408 | "source": [ 409 | "Then we can build the gradio app. To make things easier, I'll comment what each line does here:" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": null, 415 | "metadata": {}, 416 | "outputs": [], 417 | "source": [ 418 | "# creates a new Blocks app and assigns it to the variable demo.\n", 419 | "with gr.Blocks() as demo: \n", 420 | "\n", 421 | " # creates a new Chatbot instance and assigns it to the variable chatbot.\n", 422 | " chatbot = gr.Chatbot() \n", 423 | "\n", 424 | " # creates a new Row component, which is a container for other components.\n", 425 | " with gr.Row(): \n", 426 | " '''creates a new Textbox component, which is used to collect user input. \n", 427 | " The show_label parameter is set to False to hide the label, \n", 428 | " and the placeholder parameter is set to \"Enter text and press enter.\"'''\n", 429 | " txt = gr.Textbox(show_label=False, placeholder=\"Enter text and press enter\").style(container=False)\n", 430 | " '''\n", 431 | " sets the submit action of the Textbox to the predict function, \n", 432 | " which takes the Textbox's value as input and sends its return value \n", 433 | " to the chatbot component. \n", 434 | " This function processes the input and generates a response from the chatbot, \n", 435 | " which is displayed in the output area.'''\n", 436 | " txt.submit(predict, txt, chatbot) # submit(function, input, output)\n", 437 | " #txt.submit(lambda :\"\", None, txt) # sets a second submit action to a lambda function that returns an empty string \n", 438 | "\n", 439 | " '''\n", 440 | " sets the submit action of the Textbox to a JavaScript function that returns an empty string. \n", 441 | " This line is equivalent to the commented out line above, but uses a different implementation. \n", 442 | " The _js parameter is used to pass a JavaScript function to the submit method.'''\n", 443 | " txt.submit(None, None, txt, _js=\"() => {''}\") # No function, no input to that function; the submit action on the textbox is a js function that returns an empty string, so the textbox clears immediately.\n", 444 | " \n", 445 | "demo.launch()" 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "The full app now is:" 453 | ] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": null, 458 | "metadata": {}, 459 | "outputs": [], 460 | "source": [ 461 | "import gradio as gr\n", 462 | "import openai\n", 463 | "\n", 464 | "openai.api_key = open(\"key.txt\", \"r\").read().strip(\"\\n\")\n", 465 | "\n", 466 | "message_history = [{\"role\": \"user\", \"content\": f\"You are a joke bot. I will specify the subject matter in my messages, and you will reply with a joke that includes the subjects I mention in my messages. Reply only with jokes to further input.
If you understand, say OK.\"},\n", 467 | " {\"role\": \"assistant\", \"content\": f\"OK\"}]\n", 468 | "\n", 469 | "def predict(input):\n", 470 | " # append the new user input to the message history\n", 471 | " message_history.append({\"role\": \"user\", \"content\": f\"{input}\"})\n", 472 | "\n", 473 | " completion = openai.ChatCompletion.create(\n", 474 | " model=\"gpt-3.5-turbo\", # 10x cheaper than davinci, and better. $0.002 per 1k tokens\n", 475 | " messages=message_history\n", 476 | " )\n", 477 | " # just the reply:\n", 478 | " reply_content = completion.choices[0].message.content\n", 479 | "\n", 480 | " print(reply_content)\n", 481 | " message_history.append({\"role\": \"assistant\", \"content\": f\"{reply_content}\"}) \n", 482 | " \n", 483 | " # get pairs of msg[\"content\"] from the message history, skipping the pre-prompt\n", 484 | " response = [(message_history[i][\"content\"], message_history[i+1][\"content\"]) for i in range(2, len(message_history)-1, 2)] # list of (user, assistant) tuples\n", 485 | " return response\n", 486 | "\n", 487 | "# creates a new Blocks app and assigns it to the variable demo.\n", 488 | "with gr.Blocks() as demo: \n", 489 | "\n", 490 | " # creates a new Chatbot instance and assigns it to the variable chatbot.\n", 491 | " chatbot = gr.Chatbot() \n", 492 | "\n", 493 | " # creates a new Row component, which is a container for other components.\n", 494 | " with gr.Row(): \n", 495 | " '''creates a new Textbox component, which is used to collect user input. \n", 496 | " The show_label parameter is set to False to hide the label, \n", 497 | " and the placeholder parameter is set to \"Enter text and press enter.\"'''\n", 498 | " txt = gr.Textbox(show_label=False, placeholder=\"Enter text and press enter\").style(container=False)\n", 499 | " '''\n", 500 | " sets the submit action of the Textbox to the predict function, \n", 501 | " which takes the Textbox's value as input and sends its return value \n", 502 | " to the chatbot component. \n", 503 | " This function processes the input and generates a response from the chatbot, \n", 504 | " which is displayed in the output area.'''\n", 505 | " txt.submit(predict, txt, chatbot) # submit(function, input, output)\n", 506 | " #txt.submit(lambda :\"\", None, txt) # sets a second submit action to a lambda function that returns an empty string \n", 507 | "\n", 508 | " '''\n", 509 | " sets the submit action of the Textbox to a JavaScript function that returns an empty string. \n", 510 | " This line is equivalent to the commented out line above, but uses a different implementation. \n", 511 | " The _js parameter is used to pass a JavaScript function to the submit method.'''\n", 512 | " txt.submit(None, None, txt, _js=\"() => {''}\") # No function, no input to that function; the submit action on the textbox is a js function that returns an empty string, so the textbox clears immediately.\n", 513 | " \n", 514 | "demo.launch()" 515 | ] 516 | }, 517 | { 518 | "cell_type": "markdown", 519 | "metadata": {}, 520 | "source": [ 521 | "From here, we can open the app:\n", 522 | "\n", 523 | "```\n", 524 | "$ python3 gradio-joke.py \n", 525 | "Running on local URL: http://127.0.0.1:7860\n", 526 | "\n", 527 | "To create a public link, set `share=True` in `launch()`.\n", 528 | "```" 529 | ] 530 | }, 531 | { 532 | "cell_type": "markdown", 533 | "metadata": {}, 534 | "source": [ 535 | "Now, you could input something like: \n", 536 | "\n", 537 | "`Programmers and boats`\n", 538 | "\n", 539 | "The response I got with this was:\n", 540 | "\n", 541 | "`Why did the programmer quit his job on the boat? He found the C to shining C.`\n", 542 | "\n", 543 | "You will most likely get something different, but you can try anything you want; it could be a single subject, or even 3 or more different subjects. For example, a single subject:\n", 544 | "\n", 545 | "`Lego` > `Why don't Lego characters have girlfriends? Because they block all the relationships.`\n", 546 | "\n", 547 | "Or many subjects:\n", 548 | "\n", 549 | "`Python, Java, and C++` > `Why did Python break up with Java and C++?
Because they were too strongly typed for Python's taste!`\n", 550 | "\n", 551 | "Not all jokes are \"good\" and sometimes ChatGPT seems to just make 2 jokes. You could probably further pre-prompt to stop that behavior, but you get the idea. This is just one example of creating a very basic application with the ChatGPT API. There are a whole lot more interesting things that we can do, and I have a few more specific and in-depth ideas for projects that I'll be working on.\n", 552 | "\n", 553 | "\n" 554 | ] 555 | } 556 | ], 557 | "metadata": { 558 | "kernelspec": { 559 | "display_name": "Python 3", 560 | "language": "python", 561 | "name": "python3" 562 | }, 563 | "language_info": { 564 | "codemirror_mode": { 565 | "name": "ipython", 566 | "version": 3 567 | }, 568 | "file_extension": ".py", 569 | "mimetype": "text/x-python", 570 | "name": "python", 571 | "nbconvert_exporter": "python", 572 | "pygments_lexer": "ipython3", 573 | "version": "3.8.10" 574 | }, 575 | "vscode": { 576 | "interpreter": { 577 | "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a" 578 | } 579 | } 580 | }, 581 | "nbformat": 4, 582 | "nbformat_minor": 4 583 | } 584 | -------------------------------------------------------------------------------- /function_calling.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import openai\n", 10 | "import json\n", 11 | "\n", 12 | "# load and set our key\n", 13 | "openai.api_key = open(\"key.txt\", \"r\").read().strip(\"\\n\")" 14 | ] 15 | }, 16 | { 17 | "attachments": {}, 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "# OpenAI API Function Calling for GPT-4 and GPT-3.5\n", 22 | "\n", 23 | "Video walkthrough of this notebook: https://www.youtube.com/watch?v=0lOSvOoF2to\n", 24 | "\n", 25 | "OpenAI has released a new capability for their models through the API, called \"Function Calling.\" The intent is to make it far easier to extract structured, deterministic information from an otherwise unstructured and non-deterministic language model like GPT-4. \n", 26 | "\n", 27 | "Structuring and getting deterministic outputs from a language model has so far been a very difficult task, and has been the subject of much research. Usually the approach was to keep trying various pre-prompts and few-shot learning examples until you found one that at least worked. While doable, the end result was clunky and not very reliable. Now though, you can use the function calling capability to build out quite powerful programs, essentially adding intelligence to them.\n", 28 | "\n", 29 | "Imagine you want to be able to intelligently handle a bunch of different types of input, including something like: \"What's the weather like in Boston?\"" 30 | ] 31 | }, 32 | { 33 | "attachments": {}, 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "The task now, given this natural language input to GPT-4, would be:\n", 38 | "\n", 39 | "1. To identify if the user is seeking weather information\n", 40 | "2. If they are, extract the location from their input\n", 41 | "\n", 42 | "So if the user said \"Hello, how are you today?\", we wouldn't need to run the function or try to extract a location.
\n", 43 | "\n", 44 | "But, if the user said something like: \"What's the weather like in Boston?\" then we want to identify the desire to get the weather and extract the location \"Boston\" from the input.\n", 45 | "\n", 46 | "Previously, you might pass this input to the OpenAI API for GPT 4 like:" 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": 2, 52 | "metadata": {}, 53 | "outputs": [], 54 | "source": [ 55 | "completion = openai.ChatCompletion.create(\n", 56 | " model=\"gpt-4\",\n", 57 | " messages=[{\"role\": \"user\", \"content\": \"What's the weather like in Boston?\"}],\n", 58 | ")" 59 | ] 60 | }, 61 | { 62 | "attachments": {}, 63 | "cell_type": "markdown", 64 | "metadata": {}, 65 | "source": [ 66 | "And then you'd access the response via:" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 4, 72 | "metadata": {}, 73 | "outputs": [ 74 | { 75 | "name": "stdout", 76 | "output_type": "stream", 77 | "text": [ 78 | "I am an AI language model and I cannot provide real-time information. To check the current weather in Boston, please refer to a weather website or app, such as weather.com, AccuWeather, or use a voice assistant like Google Assistant or Siri.\n" 79 | ] 80 | } 81 | ], 82 | "source": [ 83 | "reply_content = completion.choices[0].message.content\n", 84 | "print(reply_content)" 85 | ] 86 | }, 87 | { 88 | "attachments": {}, 89 | "cell_type": "markdown", 90 | "metadata": {}, 91 | "source": [ 92 | "As you can see, this isn't quite what we would want to happen in this scenario! While GPT-4 may not currently be able to access the internet for us, we could conceivably do this ourselves, but we still would need to identify the intent, as well as the particular desired location. Imagine we have a function like:" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "def get_current_weather(location, unit=\"fahrenheit\"):\n", 102 | " \"\"\"Get the current weather in a given location\"\"\"\n", 103 | " weather_info = {\n", 104 | " \"location\": location,\n", 105 | " \"temperature\": \"72\",\n", 106 | " \"unit\": unit,\n", 107 | " \"forecast\": [\"sunny\", \"windy\"],\n", 108 | " }\n", 109 | " return json.dumps(weather_info)" 110 | ] 111 | }, 112 | { 113 | "attachments": {}, 114 | "cell_type": "markdown", 115 | "metadata": {}, 116 | "source": [ 117 | "This is just a placeholder type of function to show how this all ties together, but it could be anything here. Extracting the intent and location could be done with a preprompt, and this is sort of how OpenAI is doing it through the API, but the model has been trained for us with a particular structure, so we can use this to save a lot of R&D time to find the \"best prompt\" to get it done. \n", 118 | "\n", 119 | "To do this, we want to make sure that we're using version `gpt-4-0613` or later. 
Then, we can pass a new `functions` parameter to the `ChatCompletion` call like:" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": 6, 125 | "metadata": {}, 126 | "outputs": [], 127 | "source": [ 128 | "completion = openai.ChatCompletion.create(\n", 129 | " model=\"gpt-4-0613\",\n", 130 | " messages=[{\"role\": \"user\", \"content\": \"What's the weather like in Boston?\"}],\n", 131 | " functions=[\n", 132 | " {\n", 133 | " \"name\": \"get_current_weather\",\n", 134 | " \"description\": \"Get the current weather in a given location\",\n", 135 | " \"parameters\": {\n", 136 | " \"type\": \"object\",\n", 137 | " \"properties\": {\n", 138 | " \"location\": {\n", 139 | " \"type\": \"string\",\n", 140 | " \"description\": \"The city and state, e.g. San Francisco, CA\",\n", 141 | " },\n", 142 | " \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n", 143 | " },\n", 144 | " \"required\": [\"location\"],\n", 145 | " },\n", 146 | " }\n", 147 | "],\n", 148 | "function_call=\"auto\",\n", 149 | ")" 150 | ] 151 | }, 152 | { 153 | "attachments": {}, 154 | "cell_type": "markdown", 155 | "metadata": {}, 156 | "source": [ 157 | "The first thing to note is the `function_call=\"auto\",` part. This will let GPT-4 determine if it should seek to fulfill the function parameters. You can also set it to `none` to force no function to be detected, and finally you can set it to seek parameters for a specific function by doing something like `function_call={\"name\": \"get_current_weather\"}`. There are many instances where it could be hard for GPT-4 to determine if a function should be run, so being able to force it to run if you know it should be is actually very powerful, which I'll show soon. \n", 158 | "\n", 159 | "Beyond this, we name and describe the function, then describe the parameters that we'd hope to pass to this function. GPT-4 relies on this description to help identify what it is you want, so try to be as clear as possible here. The function description is itself a JSON-structured object, and the API will return a JSON-structured object to you, which affords you quite a bit of flexibility in how you describe/structure this functionality. \n", 160 | "\n", 161 | "Okay, let's see how GPT-4 responds to our new prompt:" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": 8, 167 | "metadata": {}, 168 | "outputs": [ 169 | { 170 | "data": { 171 | "text/plain": [ 172 | " JSON: {\n", 173 | " \"finish_reason\": \"function_call\",\n", 174 | " \"index\": 0,\n", 175 | " \"message\": {\n", 176 | " \"content\": null,\n", 177 | " \"function_call\": {\n", 178 | " \"arguments\": \"{\\n \\\"location\\\": \\\"Boston\\\"\\n}\",\n", 179 | " \"name\": \"get_current_weather\"\n", 180 | " },\n", 181 | " \"role\": \"assistant\"\n", 182 | " }\n", 183 | "}" 184 | ] 185 | }, 186 | "execution_count": 8, 187 | "metadata": {}, 188 | "output_type": "execute_result" 189 | } 190 | ], 191 | "source": [ 192 | "reply_content = completion.choices[0]\n", 193 | "reply_content" 194 | ] 195 | }, 196 | { 197 | "attachments": {}, 198 | "cell_type": "markdown", 199 | "metadata": {}, 200 | "source": [ 201 | "This time, we don't actually have any `message` `content`.
We instead have an identified `function_call` for the function named `get_current_weather`, and we have the `arguments` that were extracted from the input, in this case the location, which is accurately detected as `Boston` by GPT-4.\n", 202 | "\n", 203 | "We can convert this OpenAI object to a more familiar Python dict by doing:" 204 | ] 205 | }, 206 | { 207 | "cell_type": "code", 208 | "execution_count": 19, 209 | "metadata": {}, 210 | "outputs": [ 211 | { 212 | "name": "stdout", 213 | "output_type": "stream", 214 | "text": [ 215 | "{'location': 'Boston'}\n", 216 | "Boston\n" 217 | ] 218 | } 219 | ], 220 | "source": [ 221 | "reply_content = completion.choices[0].message\n", 222 | "\n", 223 | "funcs = reply_content.to_dict()['function_call']['arguments']\n", 224 | "funcs = json.loads(funcs)\n", 225 | "print(funcs)\n", 226 | "print(funcs['location'])\n" 227 | ] 228 | }, 229 | { 230 | "attachments": {}, 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "Not only can we extract information or intent from a user's input, we can also extract structured data from GPT-4 in a response. \n", 235 | "\n", 236 | "For example, I've been working on a project called TermGPT, which creates terminal commands to satisfy a user's query for doing engineering/programming.\n", 237 | "\n", 238 | "Imagine in this scenario, you have user input like: `\"How do I install Tensorflow for my GPU?\"`\n", 239 | "\n", 240 | "In this case, we'd get a useful natural language response from GPT-4, but it wouldn't be structured as JUST terminal commands that could be run. We have an intent that could be extracted from here, but the commands themselves need to be determined by GPT-4. \n", 241 | "\n", 242 | "With the function calling capability, we can do this by passing a function description like:\n" 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": null, 248 | "metadata": {}, 249 | "outputs": [], 250 | "source": [ 251 | " functions=[\n", 252 | " {\n", 253 | " \"name\": \"get_commands\",\n", 254 | " \"description\": \"Get a list of bash commands on an Ubuntu machine to run\",\n", 255 | " \"parameters\": {\n", 256 | " \"type\": \"object\",\n", 257 | " \"properties\": {\n", 258 | " \"commands\": {\n", 259 | " \"type\": \"array\",\n", 260 | " \"items\": {\n", 261 | " \"type\": \"string\",\n", 262 | " \"description\": \"A terminal command string\"\n", 263 | " },\n", 264 | " \"description\": \"List of terminal command strings to be executed\"\n", 265 | " }\n", 266 | " },\n", 267 | " \"required\": [\"commands\"]\n", 268 | " }\n", 269 | " }" 270 | ] 271 | }, 272 | { 273 | "attachments": {}, 274 | "cell_type": "markdown", 275 | "metadata": {}, 276 | "source": [ 277 | "This is my first attempt at a description and structure; it's likely there are even better ones, for reasons I'll explain shortly. In this case, the name for the function will be \"get_commands\", and then we describe it as `Get a list of bash commands on an Ubuntu machine to run`.
Then, we specify the parameter as an \"`array`\" (`list` in python), and this array contains items, where each \"item\" is a terminal command string, and the description of this list is `List of terminal command strings to be executed`.\n", 278 | "\n", 279 | "Now, let's see how GPT-4 responds to this prompt:" 280 | ] 281 | }, 282 | { 283 | "cell_type": "code", 284 | "execution_count": 20, 285 | "metadata": {}, 286 | "outputs": [], 287 | "source": [ 288 | "example_user_input = \"How do I install Tensorflow for my GPU?\"\n", 289 | "\n", 290 | "completion = openai.ChatCompletion.create(\n", 291 | " model=\"gpt-4-0613\",\n", 292 | " messages=[{\"role\": \"user\", \"content\": example_user_input}],\n", 293 | " functions=[\n", 294 | " {\n", 295 | " \"name\": \"get_commands\",\n", 296 | " \"description\": \"Get a list of bash commands on an Ubuntu machine\",\n", 297 | " \"parameters\": {\n", 298 | " \"type\": \"object\",\n", 299 | " \"properties\": {\n", 300 | " \"commands\": {\n", 301 | " \"type\": \"array\",\n", 302 | " \"items\": {\n", 303 | " \"type\": \"string\",\n", 304 | " \"description\": \"A terminal command string\"\n", 305 | " },\n", 306 | " \"description\": \"List of terminal command strings to be executed\"\n", 307 | " }\n", 308 | " },\n", 309 | " \"required\": [\"commands\"]\n", 310 | " }\n", 311 | " }\n", 312 | " ],\n", 313 | " function_call=\"auto\",\n", 314 | ")" 315 | ] 316 | }, 317 | { 318 | "cell_type": "code", 319 | "execution_count": 22, 320 | "metadata": {}, 321 | "outputs": [ 322 | { 323 | "data": { 324 | "text/plain": [ 325 | " JSON: {\n", 326 | " \"content\": \"To install Tensorflow for your GPU, you would normally follow these steps:\\n\\n1. First, Install the Python software on your machine. You can download it from the official Python website. Tensorflow supports Python versions 3.5 to 3.8.\\n\\n2. Make sure pip, Python\\u2019s package manager, is upgraded to the latest version:\\n\\n```\\npip install --upgrade pip\\n```\\n\\n3. Install the Tensorflow GPU package using pip. Tensorflow also offers a CPU-only package for users who do not have a compatible GPU:\\n\\n```\\npip install tensorflow-gpu\\n```\\n\\n4. Before using the GPU version of Tensorflow, you need to install GPU drivers. You can download them from the NVIDIA website.\\n\\n5. Finally, install the CUDA Toolkit and the cuDNN SDK. These are software platforms from NVIDIA needed for GPU-accelerated applications. You can also download these from the NVIDIA website.\\n\\nNote, these instructions are general and the exact approach may vary depending on your setup and environment. You should also ensure your machine meets the specific system requirements of Tensorflow.\\n\\nDisclaimer: \\n\\nPlease note that GPU support for Tensorflow requires NVIDIA\\u00ae GPU card with CUDA\\u00ae Compute Capability 3.5 or higher. The Tesla Architecture is no longer supported. MAC OS is not supported for GPU usage. 
GPUs using the Ampere Architecture (Compute Capability 8.6) can use CUDA 11.0 after a compatible up-to-date driver is installed.\\n\\nFor complete instruction, please visit the official TensorFlow website [here](https://www.tensorflow.org/install/gpu).\",\n", 327 | " \"role\": \"assistant\"\n", 328 | "}" 329 | ] 330 | }, 331 | "execution_count": 22, 332 | "metadata": {}, 333 | "output_type": "execute_result" 334 | } 335 | ], 336 | "source": [ 337 | "reply_content = completion.choices[0].message\n", 338 | "reply_content" 339 | ] 340 | }, 341 | { 342 | "attachments": {}, 343 | "cell_type": "markdown", 344 | "metadata": {}, 345 | "source": [ 346 | "In this case, I set `function_call` to \"auto,\" and it turns out to be fairly hard for GPT-4 to determine that the intent was to run this function. I suspect this is caused by the difference between extracting info from a user's input vs. structuring a response. That said, I am quite sure that we could adjust the names/descriptions for our function call to be far more successful here. \n", 347 | "\n", 348 | "But, even when this auto version fails, we do have the ability to \"nudge\" GPT-4 to do it anyway by setting `function_call` to `{\"name\": \"your_function\"}`. This will force GPT-4 to run the function, even if it doesn't think it should or doesn't realize it. \n", 349 | "\n" 350 | ] 351 | }, 352 | { 353 | "cell_type": "code", 354 | "execution_count": 23, 355 | "metadata": {}, 356 | "outputs": [ 357 | { 358 | "data": { 359 | "text/plain": [ 360 | " JSON: {\n", 361 | " \"content\": null,\n", 362 | " \"function_call\": {\n", 363 | " \"arguments\": \"{\\n \\\"commands\\\": [\\n \\\"sudo apt update\\\",\\n \\\"sudo apt install python3-dev python3-venv\\\",\\n \\\"python3 -m venv tensorflow-gpu\\\",\\n \\\"source tensorflow-gpu/bin/activate\\\",\\n \\\"pip install --upgrade pip\\\",\\n \\\"pip install tensorflow-gpu\\\"\\n ]\\n}\",\n", 364 | " \"name\": \"get_commands\"\n", 365 | " },\n", 366 | " \"role\": \"assistant\"\n", 367 | "}" 368 | ] 369 | }, 370 | "execution_count": 23, 371 | "metadata": {}, 372 | "output_type": "execute_result" 373 | } 374 | ], 375 | "source": [ 376 | "example_user_input = \"How do I install Tensorflow for my GPU?\"\n", 377 | "\n", 378 | "completion = openai.ChatCompletion.create(\n", 379 | " model=\"gpt-4-0613\",\n", 380 | " messages=[{\"role\": \"user\", \"content\": example_user_input}],\n", 381 | " functions=[\n", 382 | " {\n", 383 | " \"name\": \"get_commands\",\n", 384 | " \"description\": \"Get a list of bash commands on an Ubuntu machine\",\n", 385 | " \"parameters\": {\n", 386 | " \"type\": \"object\",\n", 387 | " \"properties\": {\n", 388 | " \"commands\": {\n", 389 | " \"type\": \"array\",\n", 390 | " \"items\": {\n", 391 | " \"type\": \"string\",\n", 392 | " \"description\": \"A terminal command string\"\n", 393 | " },\n", 394 | " \"description\": \"List of terminal command strings to be executed\"\n", 395 | " }\n", 396 | " },\n", 397 | " \"required\": [\"commands\"]\n", 398 | " }\n", 399 | " }\n", 400 | " ],\n", 401 | " function_call={\"name\": \"get_commands\"},\n", 402 | ")\n", 403 | "\n", 404 | "reply_content = completion.choices[0].message\n", 405 | "reply_content" 406 | ] 407 | }, 408 | { 409 | "attachments": {}, 410 | "cell_type": "markdown", 411 | "metadata": {}, 412 | "source": [ 413 | "Now, we do get the function call, and we can grab those commands with:" 414 | ] 415 | }, 416 | { 417 | "cell_type": "code", 418 | "execution_count": 24, 419 | "metadata": {}, 420 | "outputs": [ 421 | { 422 | "data": {
423 | "text/plain": [ 424 | "['sudo apt update',\n", 425 | " 'sudo apt install python3-dev python3-venv',\n", 426 | " 'python3 -m venv tensorflow-gpu',\n", 427 | " 'source tensorflow-gpu/bin/activate',\n", 428 | " 'pip install --upgrade pip',\n", 429 | " 'pip install tensorflow-gpu']" 430 | ] 431 | }, 432 | "execution_count": 24, 433 | "metadata": {}, 434 | "output_type": "execute_result" 435 | } 436 | ], 437 | "source": [ 438 | "funcs = reply_content.to_dict()['function_call']['arguments']\n", 439 | "funcs = json.loads(funcs)\n", 440 | "funcs['commands']" 441 | ] 442 | }, 443 | { 444 | "attachments": {}, 445 | "cell_type": "markdown", 446 | "metadata": {}, 447 | "source": [ 448 | "I just can't express how powerful this is. We can now extract structured data from GPT-4, and we can also pass structured data to GPT-4 to have it generate a response. Being able to do this reliably is basically never before seen, and this will make this sort of interaction between deterministic programming logic and non-deterministic language models far more common and just plain possible. The ability to do this is going to be a game changer for the field of AI and programming, and I'm very excited to see what people do with this capability.\n", 449 | "\n", 450 | "Here's another example of how powerful this could be. We could generate responses for a given query in a variety of \"personalities\" so to speak:" 451 | ] 452 | }, 453 | { 454 | "cell_type": "code", 455 | "execution_count": 26, 456 | "metadata": {}, 457 | "outputs": [ 458 | { 459 | "data": { 460 | "text/plain": [ 461 | " JSON: {\n", 462 | " \"content\": null,\n", 463 | " \"function_call\": {\n", 464 | " \"arguments\": \"{\\n\\\"sassy_and_sarcastic\\\": \\\"Oh sure, if you fancy a little microbial cocktail, go right ahead. But seriously, don't. Dehumidifiers aren't designed to purify water for drinking. It could be full of bacteria, mold, and other nasties. Drink from a tap, not your dehumidifier. It is not safe.\\\",\\n\\\"happy_and_helpful\\\": \\\"I'm sorry, but it's not recommended to drink water from a dehumidifier. While it's extracting water from air, dehumidifiers don't filter or clean the water. This means it can contain various bacteria or even chemicals. 
To keep safe, it's better to only drink water that's meant for consumption and has been treated properly!\\\"}\",\n", 465 | " \"name\": \"get_varied_personality_responses\"\n", 466 | " },\n", 467 | " \"role\": \"assistant\"\n", 468 | "}" 469 | ] 470 | }, 471 | "execution_count": 26, 472 | "metadata": {}, 473 | "output_type": "execute_result" 474 | } 475 | ], 476 | "source": [ 477 | "example_user_input = \"Is it safe to drink water from a dehumidifier?\"\n", 478 | "\n", 479 | "completion = openai.ChatCompletion.create(\n", 480 | " model=\"gpt-4-0613\",\n", 481 | " messages=[{\"role\": \"user\", \"content\": example_user_input}],\n", 482 | " functions=[\n", 483 | " {\n", 484 | " \"name\": \"get_varied_personality_responses\",\n", 485 | " \"description\": \"ingest the various personality responses\",\n", 486 | " \"parameters\": {\n", 487 | " \"type\": \"object\",\n", 488 | " \"properties\": {\n", 489 | " \"sassy_and_sarcastic\": {\n", 490 | " \"type\": \"string\",\n", 491 | " \"description\": \"A sassy and sarcastic version of the response to a user's query\",\n", 492 | " },\n", 493 | " \"happy_and_helpful\": {\n", 494 | " \"type\": \"string\",\n", 495 | " \"description\": \"A happy and helpful version of the response to a user's query\",\n", 496 | " },\n", 497 | " },\n", 498 | " \"required\": [\"sassy_and_sarcastic\", \"happy_and_helpful\"],\n", 499 | " },\n", 500 | " }\n", 501 | " ],\n", 502 | " function_call={\"name\": \"get_varied_personality_responses\"},\n", 503 | ")\n", 504 | "\n", 505 | "reply_content = completion.choices[0].message\n", 506 | "reply_content" 507 | ] 508 | }, 509 | { 510 | "cell_type": "code", 511 | "execution_count": 29, 512 | "metadata": {}, 513 | "outputs": [], 514 | "source": [ 515 | "response_options = reply_content.to_dict()['function_call']['arguments']\n", 516 | "options = json.loads(response_options)" 517 | ] 518 | }, 519 | { 520 | "cell_type": "code", 521 | "execution_count": 30, 522 | "metadata": {}, 523 | "outputs": [ 524 | { 525 | "data": { 526 | "text/plain": [ 527 | "\"Oh sure, if you fancy a little microbial cocktail, go right ahead. But seriously, don't. Dehumidifiers aren't designed to purify water for drinking. It could be full of bacteria, mold, and other nasties. Drink from a tap, not your dehumidifier. It is not safe.\"" 528 | ] 529 | }, 530 | "execution_count": 30, 531 | "metadata": {}, 532 | "output_type": "execute_result" 533 | } 534 | ], 535 | "source": [ 536 | "options[\"sassy_and_sarcastic\"]" 537 | ] 538 | }, 539 | { 540 | "cell_type": "code", 541 | "execution_count": 31, 542 | "metadata": {}, 543 | "outputs": [ 544 | { 545 | "data": { 546 | "text/plain": [ 547 | "\"I'm sorry, but it's not recommended to drink water from a dehumidifier. While it's extracting water from air, dehumidifiers don't filter or clean the water. This means it can contain various bacteria or even chemicals. To keep safe, it's better to only drink water that's meant for consumption and has been treated properly!\"" 548 | ] 549 | }, 550 | "execution_count": 31, 551 | "metadata": {}, 552 | "output_type": "execute_result" 553 | } 554 | ], 555 | "source": [ 556 | "options[\"happy_and_helpful\"]" 557 | ] 558 | }, 559 | { 560 | "attachments": {}, 561 | "cell_type": "markdown", 562 | "metadata": {}, 563 | "source": [ 564 | "Hopefully that gives you some ideas about what's possible here, but this is truly 0.000001% of what's actually possible. There are going to be some incredible things built with this capability.
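As a closing sketch, here's roughly how you might wire an extracted `function_call` back into an actual local function, and then hand its result to the model for a final natural-language answer, using the same legacy `openai` client and the `get_current_weather` function and schema from earlier. The `"function"` role message carries the function's return value back into the conversation; the helper name and control flow here are illustrative assumptions, not the one official pattern:

```python
def run_conversation(user_input, functions, available_functions):
    # ask the model, letting it decide whether a function should be called
    messages = [{"role": "user", "content": user_input}]
    completion = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
    message = completion.choices[0].message

    # if the model chose a function, run it locally and feed the result back
    if message.get("function_call"):
        name = message["function_call"]["name"]
        args = json.loads(message["function_call"]["arguments"])
        result = available_functions[name](**args)  # e.g. get_current_weather(location="Boston")
        messages.append(message)
        messages.append({"role": "function", "name": name, "content": result})
        completion = openai.ChatCompletion.create(model="gpt-4-0613", messages=messages)
        message = completion.choices[0].message

    return message.content
```

Called with something like `run_conversation("What's the weather like in Boston?", functions=weather_functions, available_functions={"get_current_weather": get_current_weather})`, where `weather_functions` is the list holding the weather schema shown earlier, the model can both pick the function and then phrase the placeholder weather data as a normal reply.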
" 565 | ] 566 | } 567 | ], 568 | "metadata": { 569 | "kernelspec": { 570 | "display_name": "Python 3", 571 | "language": "python", 572 | "name": "python3" 573 | }, 574 | "language_info": { 575 | "codemirror_mode": { 576 | "name": "ipython", 577 | "version": 3 578 | }, 579 | "file_extension": ".py", 580 | "mimetype": "text/x-python", 581 | "name": "python", 582 | "nbconvert_exporter": "python", 583 | "pygments_lexer": "ipython3", 584 | "version": "3.8.10" 585 | }, 586 | "orig_nbformat": 4 587 | }, 588 | "nbformat": 4, 589 | "nbformat_minor": 2 590 | } 591 | --------------------------------------------------------------------------------