├── 01_NLP_App.ipynb
├── 02_Image_Captioning_App.ipynb
├── 03_Image_Generation_App.ipynb
├── 04_AI_Game_App.ipynb
├── 05_Chat_App.ipynb
├── 06_Graphs.ipynb
├── 07_UI.ipynb
├── Images
│   ├── BART_and_BERT.png
│   └── Stable_Diffusion.png
├── bird_flight.jpeg
├── christmas_dog.jpeg
├── components.png
├── cover.jpg
├── cow.jpeg
└── readme.md
/01_NLP_App.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "88535ead", 6 | "metadata": {}, 7 | "source": [ 8 | "# **Gradio:** NLP App 🗞️" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "6faa43ba", 14 | "metadata": {}, 15 | "source": [ 16 | "Load your HF API key and relevant Python libraries." 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "d2698081-4deb-436a-a821-8ea48bdd6e6a", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import os\n", 27 | "import io\n", 28 | "from IPython.display import Image, display, HTML\n", 29 | "from PIL import Image\n", 30 | "import base64 \n", 31 | "from dotenv import load_dotenv, find_dotenv\n", 32 | "_ = load_dotenv(find_dotenv()) # read local .env file\n", 33 | "hf_api_key = os.environ['HF_API_KEY']" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "id": "a106ab02-f248-4c03-9dd8-b1991db7f778", 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# Helper function\n", 44 | "import requests, json\n", 45 | "\n", 46 | "#Summarization endpoint\n", 47 | "def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_SUMMARY_BASE']): \n", 48 | " headers = {\n", 49 | " \"Authorization\": f\"Bearer {hf_api_key}\",\n", 50 | " \"Content-Type\": \"application/json\"\n", 51 | " }\n", 52 | " data = { \"inputs\": inputs }\n", 53 | " if parameters is not None:\n", 54 | " data.update({\"parameters\": parameters})\n", 55 | " response = requests.request(\"POST\",\n", 56 | " ENDPOINT_URL, 
headers=headers,\n", 57 | " data=json.dumps(data)\n", 58 | " )\n", 59 | " return json.loads(response.content.decode(\"utf-8\"))" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "a97a06f9", 65 | "metadata": {}, 66 | "source": [ 67 | "## Building a text summarization app" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "id": "d4794502", 73 | "metadata": {}, 74 | "source": [ 75 | "Here we are using an [Inference Endpoint](https://huggingface.co/inference-endpoints) for `shleifer/distilbart-cnn-12-6`, a 306M parameter model distilled from `facebook/bart-large-cnn`. " 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "id": "01892d1a", 81 | "metadata": {}, 82 | "source": [ 83 | "### How about running it locally?\n", 84 | "The code would look very similar if you were running it locally instead of from an API. The same is true for all the models in the rest of the course; make sure to check the [Pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) documentation page.\n", 85 | "\n", 86 | "```py\n", 87 | "from transformers import pipeline\n", 88 | "\n", 89 | "get_completion = pipeline(\"summarization\", model=\"shleifer/distilbart-cnn-12-6\")\n", 90 | "\n", 91 | "def summarize(input):\n", 92 | " output = get_completion(input)\n", 93 | " return output[0]['summary_text']\n", 94 | " \n", 95 | "```" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": null, 101 | "id": "c2f0fc58-91d6-48f2-a014-052192586be8", 102 | "metadata": {}, 103 | "outputs": [], 104 | "source": [ 105 | "text = ('''The tower is 324 metres (1,063 ft) tall, about the same height\n", 106 | " as an 81-storey building, and the tallest structure in Paris. \n", 107 | " Its base is square, measuring 125 metres (410 ft) on each side. 
\n", 108 | " During its construction, the Eiffel Tower surpassed the Washington \n", 109 | " Monument to become the tallest man-made structure in the world,\n", 110 | " a title it held for 41 years until the Chrysler Building\n", 111 | " in New York City was finished in 1930. It was the first structure \n", 112 | " to reach a height of 300 metres. Due to the addition of a broadcasting \n", 113 | " aerial at the top of the tower in 1957, it is now taller than the \n", 114 | " Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the \n", 115 | " Eiffel Tower is the second tallest free-standing structure in France \n", 116 | " after the Millau Viaduct.''')\n", 117 | "\n", 118 | "get_completion(text)" 119 | ] 120 | }, 121 | { 122 | "cell_type": "markdown", 123 | "id": "f144593f", 124 | "metadata": {}, 125 | "source": [ 126 | "### Getting started with Gradio `gr.Interface` \n", 127 | "\n", 128 | "#### How about running it locally?\n", 129 | "The code would look very similar if you were running it locally. Simply remove all the parameters in the `launch` method.\n", 130 | "\n", 131 | "```py\n", 132 | "demo.launch()\n", 133 | "```" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "id": "3eb11460", 140 | "metadata": {}, 141 | "outputs": [], 142 | "source": [ 143 | "import gradio as gr\n", 144 | "def summarize(input):\n", 145 | " output = get_completion(input)\n", 146 | " return output[0]['summary_text']\n", 147 | " \n", 148 | "gr.close_all()\n", 149 | "demo = gr.Interface(fn=summarize, inputs=\"text\", outputs=\"text\")\n", 150 | "demo.launch(share=True, server_port=int(os.environ['PORT1']))" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "id": "9b310770", 156 | "metadata": {}, 157 | "source": [ 158 | "You can add `demo.launch(share=True)` to create a public link to share with your team or friends." 
159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": null, 164 | "id": "60684b55-c7ae-4c9e-88ea-bbc2e702ecdb", 165 | "metadata": {}, 166 | "outputs": [], 167 | "source": [ 168 | "import gradio as gr\n", 169 | "\n", 170 | "def summarize(input):\n", 171 | " output = get_completion(input)\n", 172 | " return output[0]['summary_text']\n", 173 | " \n", 174 | "gr.close_all()\n", 175 | "demo = gr.Interface(fn=summarize, \n", 176 | " inputs=[gr.Textbox(label=\"Text to summarize\", lines=6)],\n", 177 | " outputs=[gr.Textbox(label=\"Result\", lines=3)],\n", 178 | " title=\"Text summarization with distilbart-cnn\",\n", 179 | " description=\"Summarize any text using the `shleifer/distilbart-cnn-12-6` model under the hood!\"\n", 180 | " )\n", 181 | "demo.launch(share=True, server_port=int(os.environ['PORT2']))" 182 | ] 183 | }, 184 | { 185 | "cell_type": "markdown", 186 | "id": "4b300d17", 187 | "metadata": {}, 188 | "source": [ 189 | "## Building a Named Entity Recognition app" 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "id": "c0d1043f", 195 | "metadata": {}, 196 | "source": [ 197 | "We are using this [Inference Endpoint](https://huggingface.co/inference-endpoints) for `dslim/bert-base-NER`, a 108M parameter BERT model fine-tuned on the NER task." 
198 | ] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "id": "f663dcbb", 203 | "metadata": {}, 204 | "source": [ 205 | "### How about running it locally?\n", 206 | "\n", 207 | "```py\n", 208 | "from transformers import pipeline\n", 209 | "\n", 210 | "get_completion = pipeline(\"ner\", model=\"dslim/bert-base-NER\")\n", 211 | "\n", 212 | "def ner(input):\n", 213 | " output = get_completion(input)\n", 214 | " return {\"text\": input, \"entities\": output}\n", 215 | " \n", 216 | "```" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "id": "0db4a922-b300-4dbc-8768-955b6a18dce4", 223 | "metadata": {}, 224 | "outputs": [], 225 | "source": [ 226 | "API_URL = os.environ['HF_API_NER_BASE'] #NER endpoint\n", 227 | "text = \"My name is Andrew, I'm building DeepLearningAI and I live in California\"\n", 228 | "get_completion(text, parameters=None, ENDPOINT_URL=API_URL)" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": null, 234 | "id": "e5c21254-128d-446c-b6dd-e30af26d436d", 235 | "metadata": {}, 236 | "outputs": [], 237 | "source": [ 238 | "def ner(input):\n", 239 | " output = get_completion(input, parameters=None, ENDPOINT_URL=API_URL)\n", 240 | " return {\"text\": input, \"entities\": output}\n", 241 | "\n", 242 | "gr.close_all()\n", 243 | "demo = gr.Interface(fn=ner,\n", 244 | " inputs=[gr.Textbox(label=\"Text to find entities\", lines=2)],\n", 245 | " outputs=[gr.HighlightedText(label=\"Text with entities\")],\n", 246 | " title=\"NER with dslim/bert-base-NER\",\n", 247 | " description=\"Find entities using the `dslim/bert-base-NER` model under the hood!\",\n", 248 | " allow_flagging=\"never\",\n", 249 | " #Here we introduce a new argument, examples: easy-to-use example inputs for your application\n", 250 | " examples=[\"My name is Andrew and I live in California\", \"My name is Poli and I work at HuggingFace\"])\n", 251 | "demo.launch(share=True, server_port=int(os.environ['PORT3']))" 252 | ] 253 | }, 254 | { 
255 | "cell_type": "markdown", 256 | "id": "60f16ad4", 257 | "metadata": {}, 258 | "source": [ 259 | "### Adding a helper function to merge tokens" 260 | ] 261 | }, 262 | { 263 | "cell_type": "code", 264 | "execution_count": null, 265 | "id": "4dc278e9-87b4-420b-89e9-7120dc4be754", 266 | "metadata": {}, 267 | "outputs": [], 268 | "source": [ 269 | "def merge_tokens(tokens):\n", 270 | " merged_tokens = []\n", 271 | " for token in tokens:\n", 272 | " if merged_tokens and token['entity'].startswith('I-') and merged_tokens[-1]['entity'].endswith(token['entity'][2:]):\n", 273 | " # If current token continues the entity of the last one, merge them\n", 274 | " last_token = merged_tokens[-1]\n", 275 | " last_token['word'] += token['word'].replace('##', '')\n", 276 | " last_token['end'] = token['end']\n", 277 | " last_token['score'] = (last_token['score'] + token['score']) / 2\n", 278 | " else:\n", 279 | " # Otherwise, add the token to the list\n", 280 | " merged_tokens.append(token)\n", 281 | "\n", 282 | " return merged_tokens\n", 283 | "\n", 284 | "def ner(input):\n", 285 | " output = get_completion(input, parameters=None, ENDPOINT_URL=API_URL)\n", 286 | " merged_tokens = merge_tokens(output)\n", 287 | " return {\"text\": input, \"entities\": merged_tokens}\n", 288 | "\n", 289 | "gr.close_all()\n", 290 | "demo = gr.Interface(fn=ner,\n", 291 | " inputs=[gr.Textbox(label=\"Text to find entities\", lines=2)],\n", 292 | " outputs=[gr.HighlightedText(label=\"Text with entities\")],\n", 293 | " title=\"NER with dslim/bert-base-NER\",\n", 294 | " description=\"Find entities using the `dslim/bert-base-NER` model under the hood!\",\n", 295 | " allow_flagging=\"never\",\n", 296 | " examples=[\"My name is Andrew, I'm building DeeplearningAI and I live in California\", \"My name is Poli, I live in Vienna and work at HuggingFace\"])\n", 297 | "\n", 298 | "demo.launch(share=True, server_port=int(os.environ['PORT4']))" 299 | ] 300 | }, 301 | { 302 | "cell_type": "code", 303 | 
"execution_count": null, 304 | "id": "3cccdb9b-0c3a-406e-95bc-106705aeb010", 305 | "metadata": {}, 306 | "outputs": [], 307 | "source": [ 308 | "gr.close_all()" 309 | ] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "id": "5c428938-aec5-46f6-9315-af93a7a8abb5", 315 | "metadata": {}, 316 | "outputs": [], 317 | "source": [] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "id": "ca47b403-f187-4d7f-929e-a1c09012b741", 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "d33eac55-3520-443a-a7fd-a2a6c8768ec7", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": null, 338 | "id": "a37d1a9d-9f3a-4f38-bdbc-08b97d17a72c", 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [] 342 | }, 343 | { 344 | "cell_type": "code", 345 | "execution_count": null, 346 | "id": "c468c2f8-5123-4121-962c-326befeb7789", 347 | "metadata": {}, 348 | "outputs": [], 349 | "source": [] 350 | }, 351 | { 352 | "cell_type": "code", 353 | "execution_count": null, 354 | "id": "5eda551f-ca2e-4a8c-8fc0-f921ba1eb172", 355 | "metadata": {}, 356 | "outputs": [], 357 | "source": [] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "execution_count": null, 362 | "id": "2af69688-df59-45a6-be9f-1e66cae7ac1f", 363 | "metadata": {}, 364 | "outputs": [], 365 | "source": [] 366 | }, 367 | { 368 | "cell_type": "code", 369 | "execution_count": null, 370 | "id": "3b808da8-a99e-4fd0-b984-32640d90c13f", 371 | "metadata": {}, 372 | "outputs": [], 373 | "source": [] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": null, 378 | "id": "bc373ed3-ce8d-402e-bbda-837e9478c2c5", 379 | "metadata": {}, 380 | "outputs": [], 381 | "source": [] 382 | }, 383 | { 384 | "cell_type": "code", 385 | "execution_count": null, 386 | "id": "31eba1e9-e994-4480-9076-99a7b71d3211", 
387 | "metadata": {}, 388 | "outputs": [], 389 | "source": [] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": null, 394 | "id": "189844a4-1fcc-4ccc-bcff-9c9d3ec6759c", 395 | "metadata": {}, 396 | "outputs": [], 397 | "source": [] 398 | }, 399 | { 400 | "cell_type": "code", 401 | "execution_count": null, 402 | "id": "2bce126b-c54e-4fec-8d4b-462317149f1e", 403 | "metadata": {}, 404 | "outputs": [], 405 | "source": [] 406 | }, 407 | { 408 | "cell_type": "code", 409 | "execution_count": null, 410 | "id": "6833b2ac-f605-4566-b84a-c0c2bc529bd7", 411 | "metadata": {}, 412 | "outputs": [], 413 | "source": [] 414 | }, 415 | { 416 | "cell_type": "code", 417 | "execution_count": null, 418 | "id": "2ef1d07f-ac8b-4f38-9c6c-ecdd5942ffb6", 419 | "metadata": {}, 420 | "outputs": [], 421 | "source": [] 422 | }, 423 | { 424 | "cell_type": "code", 425 | "execution_count": null, 426 | "id": "ab5c6dd3-bc15-4139-bee8-d7291c20a1e8", 427 | "metadata": {}, 428 | "outputs": [], 429 | "source": [] 430 | }, 431 | { 432 | "cell_type": "code", 433 | "execution_count": null, 434 | "id": "62ddfa37-da43-40be-a3da-05aa3d2ee9c8", 435 | "metadata": {}, 436 | "outputs": [], 437 | "source": [] 438 | }, 439 | { 440 | "cell_type": "code", 441 | "execution_count": null, 442 | "id": "e50dafb5-50bf-4b8f-9eec-84682ae339e9", 443 | "metadata": {}, 444 | "outputs": [], 445 | "source": [] 446 | }, 447 | { 448 | "cell_type": "code", 449 | "execution_count": null, 450 | "id": "13755d46-db04-435f-bbe8-c6b6df9f46fc", 451 | "metadata": {}, 452 | "outputs": [], 453 | "source": [] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": null, 458 | "id": "63ea1054-75c1-42c5-8fe3-1f5a1eb9b8fc", 459 | "metadata": {}, 460 | "outputs": [], 461 | "source": [] 462 | }, 463 | { 464 | "cell_type": "code", 465 | "execution_count": null, 466 | "id": "af00aba6-d0e7-49be-816c-0fa4670737fd", 467 | "metadata": {}, 468 | "outputs": [], 469 | "source": [] 470 | } 471 | ], 472 | "metadata": { 473 | 
"kernelspec": { 474 | "display_name": "Python 3 (ipykernel)", 475 | "language": "python", 476 | "name": "python3" 477 | }, 478 | "language_info": { 479 | "codemirror_mode": { 480 | "name": "ipython", 481 | "version": 3 482 | }, 483 | "file_extension": ".py", 484 | "mimetype": "text/x-python", 485 | "name": "python", 486 | "nbconvert_exporter": "python", 487 | "pygments_lexer": "ipython3", 488 | "version": "3.11.3" 489 | } 490 | }, 491 | "nbformat": 4, 492 | "nbformat_minor": 5 493 | } 494 | -------------------------------------------------------------------------------- /02_Image_Captioning_App.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "adf933f7", 6 | "metadata": {}, 7 | "source": [ 8 | "# **Gradio**: Image Captioning App 🖼️📝" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "8b8996b4", 14 | "metadata": {}, 15 | "source": [ 16 | "Load your HF API key and relevant Python libraries" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "3471d1ed-41a0-473c-b3c7-99a9e14dffaf", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import os\n", 27 | "import io\n", 28 | "import IPython.display\n", 29 | "from PIL import Image\n", 30 | "import base64 \n", 31 | "from dotenv import load_dotenv, find_dotenv\n", 32 | "_ = load_dotenv(find_dotenv()) # read local .env file\n", 33 | "hf_api_key = os.environ['HF_API_KEY']" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "id": "05896028-3b43-408e-a899-109bde9625e5", 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# Helper functions\n", 44 | "import requests, json\n", 45 | "\n", 46 | "#Image-to-text endpoint\n", 47 | "def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_ITT_BASE']): \n", 48 | " headers = {\n", 49 | " \"Authorization\": f\"Bearer {hf_api_key}\",\n", 50 | " \"Content-Type\": 
\"application/json\"\n", 51 | " }\n", 52 | " data = { \"inputs\": inputs }\n", 53 | " if parameters is not None:\n", 54 | " data.update({\"parameters\": parameters})\n", 55 | " response = requests.request(\"POST\",\n", 56 | " ENDPOINT_URL,\n", 57 | " headers=headers,\n", 58 | " data=json.dumps(data))\n", 59 | " return json.loads(response.content.decode(\"utf-8\"))" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "2541e21e", 65 | "metadata": {}, 66 | "source": [ 67 | "## Building an image captioning app " 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "id": "f1ca462d", 73 | "metadata": {}, 74 | "source": [ 75 | "Here we'll be using an [Inference Endpoint](https://huggingface.co/inference-endpoints) for `Salesforce/blip-image-captioning-base`, a 14M parameter captioning model." 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "id": "f9e3deb8", 81 | "metadata": {}, 82 | "source": [ 83 | "The code would look very similar if you were running it locally instead of from an API. 
You can check the [Pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) documentation page.\n", 84 | "\n", 85 | "```py\n", 86 | "from transformers import pipeline\n", 87 | "\n", 88 | "get_completion = pipeline(\"image-to-text\",model=\"Salesforce/blip-image-captioning-base\")\n", 89 | "\n", 90 | "def summarize(input):\n", 91 | " output = get_completion(input)\n", 92 | " return output[0]['generated_text']\n", 93 | " \n", 94 | "```" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "id": "b98cf3f7", 100 | "metadata": {}, 101 | "source": [ 102 | "The free images are available on: https://free-images.com/" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": null, 108 | "id": "eb7154b9-fa1c-416e-8ee2-635c27720539", 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "image_url = \"https://free-images.com/sm/9596/dog_animal_greyhound_983023.jpg\"\n", 113 | "display(IPython.display.Image(url=image_url))\n", 114 | "get_completion(image_url)" 115 | ] 116 | }, 117 | { 118 | "cell_type": "markdown", 119 | "id": "90da616d", 120 | "metadata": {}, 121 | "source": [ 122 | "## Captioning with `gr.Interface()`" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "id": "8021fac0-9300-44d5-adb1-6ac7111f38f6", 129 | "metadata": {}, 130 | "outputs": [], 131 | "source": [ 132 | "import gradio as gr \n", 133 | "\n", 134 | "def image_to_base64_str(pil_image):\n", 135 | " byte_arr = io.BytesIO()\n", 136 | " pil_image.save(byte_arr, format='PNG')\n", 137 | " byte_arr = byte_arr.getvalue()\n", 138 | " return str(base64.b64encode(byte_arr).decode('utf-8'))\n", 139 | "\n", 140 | "def captioner(image):\n", 141 | " base64_image = image_to_base64_str(image)\n", 142 | " result = get_completion(base64_image)\n", 143 | " return result[0]['generated_text']\n", 144 | "\n", 145 | "gr.close_all()\n", 146 | "demo = gr.Interface(fn=captioner,\n", 147 | " inputs=[gr.Image(label=\"Upload image\", 
type=\"pil\")],\n", 148 | " outputs=[gr.Textbox(label=\"Caption\")],\n", 149 | " title=\"Image Captioning with BLIP\",\n", 150 | " description=\"Caption any image using the BLIP model\",\n", 151 | " allow_flagging=\"never\",\n", 152 | " examples=[\"christmas_dog.jpeg\", \"bird_flight.jpeg\", \"cow.jpeg\"])\n", 153 | "\n", 154 | "demo.launch(share=True, server_port=int(os.environ['PORT1']))" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "id": "4e036a7f-c34b-42d3-84bc-ca7da41bccb6", 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [ 164 | "gr.close_all()" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "id": "9bc2a151-b192-46c0-ac6f-b5a876799904", 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": null, 178 | "id": "7a215ec9-8e11-4282-b32e-77babd2d9c12", 179 | "metadata": {}, 180 | "outputs": [], 181 | "source": [] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": null, 186 | "id": "509e418b-fdcb-450e-b5c2-aad55cccffe4", 187 | "metadata": {}, 188 | "outputs": [], 189 | "source": [] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "id": "2bae4df7-1eb8-4153-8931-53285e8fe68b", 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": null, 202 | "id": "f69af786-0324-4621-9737-ab69ab7f1a92", 203 | "metadata": {}, 204 | "outputs": [], 205 | "source": [] 206 | }, 207 | { 208 | "cell_type": "code", 209 | "execution_count": null, 210 | "id": "781744e2-6840-4af9-92ae-835028fc24b7", 211 | "metadata": {}, 212 | "outputs": [], 213 | "source": [] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": null, 218 | "id": "b918c135-ab79-446e-9b0a-f4c1632ba545", 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [] 222 | }, 223 | { 224 | "cell_type": "code", 225 | 
"execution_count": null, 226 | "id": "9e64c5f9-be66-4758-8d38-46559cfc823e", 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": null, 234 | "id": "c47a6eb1-602d-40a5-b905-8baa1484c35e", 235 | "metadata": {}, 236 | "outputs": [], 237 | "source": [] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": null, 242 | "id": "aa9190dc-c9aa-47d3-a77a-afa5a5e36430", 243 | "metadata": {}, 244 | "outputs": [], 245 | "source": [] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "499fb577-ad51-4deb-bf0f-010c55c13d76", 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": null, 258 | "id": "ae8ce1c4-c8fe-4a01-9fd4-828d2fdb4100", 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": null, 266 | "id": "d1ba830c-4a75-4aca-ac09-a3451dea7180", 267 | "metadata": {}, 268 | "outputs": [], 269 | "source": [] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": null, 274 | "id": "13a87f3c-8906-48e6-a2f2-c1d74e8190ff", 275 | "metadata": {}, 276 | "outputs": [], 277 | "source": [] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "id": "c4666b4b-4b79-4557-89c9-6b8401e07303", 283 | "metadata": {}, 284 | "outputs": [], 285 | "source": [] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "id": "cabdf254-ba60-4ae3-a53b-ec76c2325e97", 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": null, 298 | "id": "d28faa71-fd87-46c1-9c75-f81d85c64538", 299 | "metadata": {}, 300 | "outputs": [], 301 | "source": [] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "id": "5d854f3f-b354-4d1b-9063-67a7b15e53e5", 307 | "metadata": {}, 308 | 
"outputs": [], 309 | "source": [] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "id": "505950a7-c516-4dcc-bb50-a0628e7b57b3", 315 | "metadata": {}, 316 | "outputs": [], 317 | "source": [] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "id": "491bf91e-2f37-4762-89b1-85dd8165d564", 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "fb62c6c5-6a6b-41b6-9612-41a6181bd8f1", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": null, 338 | "id": "ac50ad76-caf8-4fba-acd0-0de2ffa0a36a", 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [] 342 | }, 343 | { 344 | "cell_type": "code", 345 | "execution_count": null, 346 | "id": "8d70c5c8-4ad2-4c86-a7d4-683ff4d613e2", 347 | "metadata": {}, 348 | "outputs": [], 349 | "source": [] 350 | }, 351 | { 352 | "cell_type": "code", 353 | "execution_count": null, 354 | "id": "709519e8-bf9c-4b60-be03-e1a0282c3489", 355 | "metadata": {}, 356 | "outputs": [], 357 | "source": [] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "execution_count": null, 362 | "id": "890563c2-979b-4f5d-9840-6fc25715803d", 363 | "metadata": {}, 364 | "outputs": [], 365 | "source": [] 366 | }, 367 | { 368 | "cell_type": "code", 369 | "execution_count": null, 370 | "id": "a50823fd-f756-4cfa-9314-44272bc88038", 371 | "metadata": {}, 372 | "outputs": [], 373 | "source": [] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": null, 378 | "id": "14e2ae36-774f-4b08-8bf5-f47eb85da243", 379 | "metadata": {}, 380 | "outputs": [], 381 | "source": [] 382 | } 383 | ], 384 | "metadata": { 385 | "kernelspec": { 386 | "display_name": "Python 3 (ipykernel)", 387 | "language": "python", 388 | "name": "python3" 389 | }, 390 | "language_info": { 391 | "codemirror_mode": { 392 | "name": "ipython", 393 | "version": 3 
394 | }, 395 | "file_extension": ".py", 396 | "mimetype": "text/x-python", 397 | "name": "python", 398 | "nbconvert_exporter": "python", 399 | "pygments_lexer": "ipython3", 400 | "version": "3.11.3" 401 | } 402 | }, 403 | "nbformat": 4, 404 | "nbformat_minor": 5 405 | } 406 | -------------------------------------------------------------------------------- /03_Image_Generation_App.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "2c635960", 6 | "metadata": {}, 7 | "source": [ 8 | "# **Gradio:** Image Generation App 🎨" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "a2c186d0", 14 | "metadata": {}, 15 | "source": [ 16 | "Load your HF API key and relevant Python libraries" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "e0447fb8-b684-4835-9a31-1d0a824d2607", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import os\n", 27 | "import io\n", 28 | "import IPython.display\n", 29 | "from PIL import Image\n", 30 | "import base64 \n", 31 | "from dotenv import load_dotenv, find_dotenv\n", 32 | "_ = load_dotenv(find_dotenv()) # read local .env file\n", 33 | "hf_api_key = os.environ['HF_API_KEY']" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "id": "2e1b5677-271a-46ac-a02e-656f2e315e15", 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# Helper function\n", 44 | "import requests, json\n", 45 | "\n", 46 | "#Text-to-image endpoint\n", 47 | "def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_TTI_BASE']):\n", 48 | " headers = {\n", 49 | " \"Authorization\": f\"Bearer {hf_api_key}\",\n", 50 | " \"Content-Type\": \"application/json\"\n", 51 | " } \n", 52 | " data = { \"inputs\": inputs }\n", 53 | " if parameters is not None:\n", 54 | " data.update({\"parameters\": parameters})\n", 55 | " response = requests.request(\"POST\",\n", 56 | " 
ENDPOINT_URL,\n", 57 | " headers=headers,\n", 58 | " data=json.dumps(data))\n", 59 | " return json.loads(response.content.decode(\"utf-8\"))" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "764a85ed", 65 | "metadata": {}, 66 | "source": [ 67 | "## Building an image generation app " 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "id": "c05fab52", 73 | "metadata": {}, 74 | "source": [ 75 | "Here we are going to run `runwayml/stable-diffusion-v1-5` using the `🧨 diffusers` library." 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "id": "47bde488", 81 | "metadata": {}, 82 | "source": [ 83 | "### How about running it locally?\n", 84 | "The code would look very similar if you were running it locally instead of from an API.\n", 85 | "```py\n", 86 | "from diffusers import DiffusionPipeline\n", 87 | "\n", 88 | "pipeline = DiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\n", 89 | "\n", 90 | "def get_completion(prompt):\n", 91 | " return pipeline(prompt).images[0] \n", 92 | "```" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "id": "a28748bf-2618-4b8c-aa6f-7aad9c573af8", 99 | "metadata": {}, 100 | "outputs": [], 101 | "source": [ 102 | "prompt = \"a dog in a park\"\n", 103 | "\n", 104 | "result = get_completion(prompt)\n", 105 | "IPython.display.HTML(f'<img src=\"data:image/png;base64,{result}\" />')" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "id": "1ee90440", 111 | "metadata": {}, 112 | "source": [ 113 | "## Generating with `gr.Interface()`" 114 | ] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "execution_count": null, 119 | "id": "024026ce-7832-4749-8992-50d31d61fcda", 120 | "metadata": {}, 121 | "outputs": [], 122 | "source": [ 123 | "import gradio as gr \n", 124 | "\n", 125 | "#A helper function to convert the base64 image\n", 126 | "#returned by the API to a PIL image\n", 127 | "def base64_to_pil(img_base64):\n", 128 | " base64_decoded = base64.b64decode(img_base64)\n", 129 | " byte_stream = 
io.BytesIO(base64_decoded)\n", 130 | " pil_image = Image.open(byte_stream)\n", 131 | " return pil_image\n", 132 | "\n", 133 | "def generate(prompt):\n", 134 | " output = get_completion(prompt)\n", 135 | " result_image = base64_to_pil(output)\n", 136 | " return result_image\n", 137 | "\n", 138 | "gr.close_all()\n", 139 | "demo = gr.Interface(fn=generate,\n", 140 | " inputs=[gr.Textbox(label=\"Your prompt\")],\n", 141 | " outputs=[gr.Image(label=\"Result\")],\n", 142 | " title=\"Image Generation with Stable Diffusion\",\n", 143 | " description=\"Generate any image with Stable Diffusion\",\n", 144 | " allow_flagging=\"never\",\n", 145 | " examples=[\"the spirit of a tamagotchi wandering in the city of Vienna\",\"a mecha robot in a favela\"])\n", 146 | "\n", 147 | "demo.launch(share=True, server_port=int(os.environ['PORT1']))" 148 | ] 149 | }, 150 | { 151 | "cell_type": "code", 152 | "execution_count": null, 153 | "id": "37d16450", 154 | "metadata": {}, 155 | "outputs": [], 156 | "source": [ 157 | "demo.close()" 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "id": "3e7a7330", 163 | "metadata": {}, 164 | "source": [ 165 | "## Building a more advanced interface" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "id": "0c844370-4c46-4115-858e-f5cd5fe337ab", 172 | "metadata": {}, 173 | "outputs": [], 174 | "source": [ 175 | "import gradio as gr \n", 176 | "\n", 177 | "#A helper function to convert the base64 image \n", 178 | "# returned by the API to a PIL image\n", 179 | "def base64_to_pil(img_base64):\n", 180 | " base64_decoded = base64.b64decode(img_base64)\n", 181 | " byte_stream = io.BytesIO(base64_decoded)\n", 182 | " pil_image = Image.open(byte_stream)\n", 183 | " return pil_image\n", 184 | "\n", 185 | "def generate(prompt, negative_prompt, steps, guidance, width, height):\n", 186 | " params = {\n", 187 | " \"negative_prompt\": negative_prompt,\n", 188 | " \"num_inference_steps\": steps,\n", 189 | " 
\"guidance_scale\": guidance,\n", 190 | " \"width\": width,\n", 191 | " \"height\": height\n", 192 | " }\n", 193 | " \n", 194 | " output = get_completion(prompt, params)\n", 195 | " pil_image = base64_to_pil(output)\n", 196 | " return pil_image\n", 197 | "\n", 198 | "gr.close_all()\n", 199 | "demo = gr.Interface(fn=generate,\n", 200 | " inputs=[\n", 201 | " gr.Textbox(label=\"Your prompt\"),\n", 202 | " gr.Textbox(label=\"Negative prompt\"),\n", 203 | " gr.Slider(label=\"Inference Steps\", minimum=1, maximum=100, value=25,\n", 204 | " info=\"In how many steps will the denoiser denoise the image?\"),\n", 205 | " gr.Slider(label=\"Guidance Scale\", minimum=1, maximum=20, value=7, \n", 206 | " info=\"Controls how much the text prompt influences the result\"),\n", 207 | " gr.Slider(label=\"Width\", minimum=64, maximum=512, step=64, value=512),\n", 208 | " gr.Slider(label=\"Height\", minimum=64, maximum=512, step=64, value=512),\n", 209 | " ],\n", 210 | " outputs=[gr.Image(label=\"Result\")],\n", 211 | " title=\"Image Generation with Stable Diffusion\",\n", 212 | " description=\"Generate any image with Stable Diffusion\",\n", 213 | " allow_flagging=\"never\"\n", 214 | " )\n", 215 | "\n", 216 | "demo.launch(share=True, server_port=int(os.environ['PORT2']))" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "id": "71999364", 223 | "metadata": {}, 224 | "outputs": [], 225 | "source": [ 226 | "demo.close()" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "id": "3a95b525", 232 | "metadata": {}, 233 | "source": [ 234 | "## `gr.Blocks()` to the rescue!" 
235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": null, 240 | "id": "e6818b5e-1342-439b-b129-a8904719d09c", 241 | "metadata": {}, 242 | "outputs": [], 243 | "source": [ 244 | "with gr.Blocks() as demo:\n", 245 | " gr.Markdown(\"# Image Generation with Stable Diffusion\")\n", 246 | " prompt = gr.Textbox(label=\"Your prompt\")\n", 247 | " with gr.Row():\n", 248 | " with gr.Column():\n", 249 | " negative_prompt = gr.Textbox(label=\"Negative prompt\")\n", 250 | " steps = gr.Slider(label=\"Inference Steps\", minimum=1, maximum=100, value=25,\n", 251 | " info=\"In how many steps will the denoiser denoise the image?\"),\n", 252 | " guidance = gr.Slider(label=\"Guidance Scale\", minimum=1, maximum=20, value=7,\n", 253 | " info=\"Controls how much the text prompt influences the result\")\n", 254 | " width = gr.Slider(label=\"Width\", minimum=64, maximum=512, step=64, value=512)\n", 255 | " height = gr.Slider(label=\"Height\", minimum=64, maximum=512, step=64, value=512)\n", 256 | " btn = gr.Button(\"Submit\")\n", 257 | " with gr.Column():\n", 258 | " output = gr.Image(label=\"Result\")\n", 259 | "\n", 260 | " btn.click(fn=generate, inputs=[prompt,negative_prompt,steps,guidance,width,height], outputs=[output])\n", 261 | "gr.close_all()\n", 262 | "demo.launch(share=True, server_port=int(os.environ['PORT3']))" 263 | ] 264 | }, 265 | { 266 | "cell_type": "code", 267 | "execution_count": null, 268 | "id": "655908ab-fa08-4b86-b218-f685f62d3638", 269 | "metadata": {}, 270 | "outputs": [], 271 | "source": [ 272 | "with gr.Blocks() as demo:\n", 273 | " gr.Markdown(\"# Image Generation with Stable Diffusion\")\n", 274 | " with gr.Row():\n", 275 | " with gr.Column(scale=4):\n", 276 | " prompt = gr.Textbox(label=\"Your prompt\") #Give prompt some real estate\n", 277 | " with gr.Column(scale=1, min_width=50):\n", 278 | " btn = gr.Button(\"Submit\") #Submit button side by side!\n", 279 | " with gr.Accordion(\"Advanced options\", open=False): #Let's hide the advanced options!\n",
280 | " negative_prompt = gr.Textbox(label=\"Negative prompt\")\n", 281 | " with gr.Row():\n", 282 | " with gr.Column():\n", 283 | " steps = gr.Slider(label=\"Inference Steps\", minimum=1, maximum=100, value=25,\n", 284 | " info=\"In how many steps will the denoiser denoise the image?\")\n", 285 | " guidance = gr.Slider(label=\"Guidance Scale\", minimum=1, maximum=20, value=7,\n", 286 | " info=\"Controls how much the text prompt influences the result\")\n", 287 | " with gr.Column():\n", 288 | " width = gr.Slider(label=\"Width\", minimum=64, maximum=512, step=64, value=512)\n", 289 | " height = gr.Slider(label=\"Height\", minimum=64, maximum=512, step=64, value=512)\n", 290 | " output = gr.Image(label=\"Result\") #Move the output up too\n", 291 | " \n", 292 | " btn.click(fn=generate, inputs=[prompt,negative_prompt,steps,guidance,width,height], outputs=[output])\n", 293 | "\n", 294 | "gr.close_all()\n", 295 | "demo.launch(share=True, server_port=int(os.environ['PORT4']))" 296 | ] 297 | }, 298 | { 299 | "cell_type": "code", 300 | "execution_count": null, 301 | "id": "d91ece7c-f5a5-4025-8558-23bd2e007ed2", 302 | "metadata": {}, 303 | "outputs": [], 304 | "source": [ 305 | "gr.close_all()" 306 | ] 307 | }, 308 | { 309 | "cell_type": "code", 310 | "execution_count": null, 311 | "id": "f30d2cbd-a799-40dc-9c50-a3e26080a16b", 312 | "metadata": {}, 313 | "outputs": [], 314 | "source": [] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": null, 319 | "id": "3d29fd6c-ab10-44e1-93b9-46404dba15ac", 320 | "metadata": {}, 321 | "outputs": [], 322 | "source": [] 323 | }, 324 | { 325 | "cell_type": "code", 326 | "execution_count": null, 327 | "id": "d73a8109-6904-4d08-9df9-8bacc300ac93", 328 | "metadata": {}, 329 | "outputs": [], 330 | "source": [] 331 | }, 332 | { 333 | "cell_type": "code", 334 | "execution_count": null, 335 | "id": "4a03f8af-25d4-4f50-a2a9-203348440550", 336 | "metadata": {}, 337 | "outputs": [], 338 | "source": [] 339 | }, 340 | { 341 | 
"cell_type": "code", 342 | "execution_count": null, 343 | "id": "53ea6093-7353-4e40-8b61-2c1b712f6e4f", 344 | "metadata": {}, 345 | "outputs": [], 346 | "source": [] 347 | }, 348 | { 349 | "cell_type": "code", 350 | "execution_count": null, 351 | "id": "3a4483b1-850c-4491-9e2b-b2ddc4d155b8", 352 | "metadata": {}, 353 | "outputs": [], 354 | "source": [] 355 | }, 356 | { 357 | "cell_type": "code", 358 | "execution_count": null, 359 | "id": "a28194a7-1d6a-4bfa-a3c8-f1acee01c74c", 360 | "metadata": {}, 361 | "outputs": [], 362 | "source": [] 363 | }, 364 | { 365 | "cell_type": "code", 366 | "execution_count": null, 367 | "id": "361a6a03-227a-4095-9ba1-2d5cd29d3cab", 368 | "metadata": {}, 369 | "outputs": [], 370 | "source": [] 371 | }, 372 | { 373 | "cell_type": "code", 374 | "execution_count": null, 375 | "id": "f0e20904-8f5f-4cc9-a0e6-19a46c38e3ce", 376 | "metadata": {}, 377 | "outputs": [], 378 | "source": [] 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": null, 383 | "id": "6a24cc12-247a-451a-b274-5100c89b1411", 384 | "metadata": {}, 385 | "outputs": [], 386 | "source": [] 387 | }, 388 | { 389 | "cell_type": "code", 390 | "execution_count": null, 391 | "id": "9444cad0-10e4-4fee-a7a3-61ec2d663a09", 392 | "metadata": {}, 393 | "outputs": [], 394 | "source": [] 395 | }, 396 | { 397 | "cell_type": "code", 398 | "execution_count": null, 399 | "id": "6184fbc4-0c18-499e-b409-ece2ef223736", 400 | "metadata": {}, 401 | "outputs": [], 402 | "source": [] 403 | }, 404 | { 405 | "cell_type": "code", 406 | "execution_count": null, 407 | "id": "8adf5565-fb5d-47f7-919c-0c6790090883", 408 | "metadata": {}, 409 | "outputs": [], 410 | "source": [] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": null, 415 | "id": "45274e82-2919-4f2e-9abc-c2d6b9781b69", 416 | "metadata": {}, 417 | "outputs": [], 418 | "source": [] 419 | }, 420 | { 421 | "cell_type": "code", 422 | "execution_count": null, 423 | "id": "59e908aa-cab9-4c9f-876f-4eeb3d746f21", 424 | 
"metadata": {}, 425 | "outputs": [], 426 | "source": [] 427 | }, 428 | { 429 | "cell_type": "code", 430 | "execution_count": null, 431 | "id": "5eec29fb-a992-47c6-9148-dee01802a6b1", 432 | "metadata": {}, 433 | "outputs": [], 434 | "source": [] 435 | }, 436 | { 437 | "cell_type": "code", 438 | "execution_count": null, 439 | "id": "c4bd6045-da58-474d-8db4-e7eeb6de36f3", 440 | "metadata": {}, 441 | "outputs": [], 442 | "source": [] 443 | }, 444 | { 445 | "cell_type": "code", 446 | "execution_count": null, 447 | "id": "d4212dc4-f33d-4b55-8ed6-5f4103b2bfa0", 448 | "metadata": {}, 449 | "outputs": [], 450 | "source": [] 451 | } 452 | ], 453 | "metadata": { 454 | "kernelspec": { 455 | "display_name": "Python 3 (ipykernel)", 456 | "language": "python", 457 | "name": "python3" 458 | }, 459 | "language_info": { 460 | "codemirror_mode": { 461 | "name": "ipython", 462 | "version": 3 463 | }, 464 | "file_extension": ".py", 465 | "mimetype": "text/x-python", 466 | "name": "python", 467 | "nbconvert_exporter": "python", 468 | "pygments_lexer": "ipython3", 469 | "version": "3.11.3" 470 | } 471 | }, 472 | "nbformat": 4, 473 | "nbformat_minor": 5 474 | } 475 | -------------------------------------------------------------------------------- /04_AI_Game_App.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "6652a472", 6 | "metadata": {}, 7 | "source": [ 8 | "# **Gradio:** Describe-and-Generate game 🖍️" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "6b21aa74", 14 | "metadata": {}, 15 | "source": [ 16 | "Load your HF API key and relevant Python libraries" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "92f16527-fcab-41e0-bce8-634fae58b1b8", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import os\n", 27 | "import io\n", 28 | "from IPython.display import Image, display, HTML\n", 29 | "from PIL import Image\n", 
30 | "import base64 \n", 31 | "\n", 32 | "from dotenv import load_dotenv, find_dotenv\n", 33 | "_ = load_dotenv(find_dotenv()) # read local .env file\n", 34 | "hf_api_key = os.environ['HF_API_KEY']" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "id": "ac9eab7f-4013-42ec-b34e-b413b0f8adb6", 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "# Helper function\n", 45 | "import requests, json\n", 46 | "\n", 47 | "#Here we are going to call multiple endpoints!\n", 48 | "def get_completion(inputs, parameters=None, ENDPOINT_URL=\"\"):\n", 49 | " headers = {\n", 50 | " \"Authorization\": f\"Bearer {hf_api_key}\",\n", 51 | " \"Content-Type\": \"application/json\"\n", 52 | " } \n", 53 | " data = { \"inputs\": inputs }\n", 54 | " if parameters is not None:\n", 55 | " data.update({\"parameters\": parameters})\n", 56 | " response = requests.request(\"POST\",\n", 57 | " ENDPOINT_URL,\n", 58 | " headers=headers,\n", 59 | " data=json.dumps(data))\n", 60 | " return json.loads(response.content.decode(\"utf-8\"))" 61 | ] 62 | }, 63 | { 64 | "cell_type": "code", 65 | "execution_count": null, 66 | "id": "52cb1837", 67 | "metadata": {}, 68 | "outputs": [], 69 | "source": [ 70 | "#text-to-image\n", 71 | "TTI_ENDPOINT = os.environ['HF_API_TTI_BASE']\n", 72 | "#image-to-text\n", 73 | "ITT_ENDPOINT = os.environ['HF_API_ITT_BASE']" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "id": "b3f9c4d4", 79 | "metadata": {}, 80 | "source": [ 81 | "## Building your game with `gr.Blocks()`" 82 | ] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "execution_count": null, 87 | "id": "76170d93-0de0-4359-8f13-3d1bcc41f86a", 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "#Bringing the functions from lessons 3 and 4!\n", 92 | "def image_to_base64_str(pil_image):\n", 93 | " byte_arr = io.BytesIO()\n", 94 | " pil_image.save(byte_arr, format='PNG')\n", 95 | " byte_arr = byte_arr.getvalue()\n", 96 | " return 
str(base64.b64encode(byte_arr).decode('utf-8'))\n", 97 | "\n", 98 | "def base64_to_pil(img_base64):\n", 99 | " base64_decoded = base64.b64decode(img_base64)\n", 100 | " byte_stream = io.BytesIO(base64_decoded)\n", 101 | " pil_image = Image.open(byte_stream)\n", 102 | " return pil_image\n", 103 | "\n", 104 | "def captioner(image):\n", 105 | " base64_image = image_to_base64_str(image)\n", 106 | " result = get_completion(base64_image, None, ITT_ENDPOINT)\n", 107 | " return result[0]['generated_text']\n", 108 | "\n", 109 | "def generate(prompt):\n", 110 | " output = get_completion(prompt, None, TTI_ENDPOINT)\n", 111 | " result_image = base64_to_pil(output)\n", 112 | " return result_image" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "id": "bd1f3485", 118 | "metadata": {}, 119 | "source": [ 120 | "### First attempt, just captioning" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": null, 126 | "id": "64e403a9-bfcb-47e8-b741-743d5f4980fd", 127 | "metadata": {}, 128 | "outputs": [], 129 | "source": [ 130 | "import gradio as gr \n", 131 | "with gr.Blocks() as demo:\n", 132 | " gr.Markdown(\"# Describe-and-Generate game 🖍️\")\n", 133 | " image_upload = gr.Image(label=\"Your first image\",type=\"pil\")\n", 134 | " btn_caption = gr.Button(\"Generate caption\")\n", 135 | " caption = gr.Textbox(label=\"Generated caption\")\n", 136 | " \n", 137 | " btn_caption.click(fn=captioner, inputs=[image_upload], outputs=[caption])\n", 138 | "\n", 139 | "gr.close_all()\n", 140 | "demo.launch(share=True, server_port=int(os.environ['PORT1']))" 141 | ] 142 | }, 143 | { 144 | "cell_type": "markdown", 145 | "id": "966cd4e8", 146 | "metadata": {}, 147 | "source": [ 148 | "### Let's add generation" 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": null, 154 | "id": "c5ea9a93-9f97-43f8-86ca-2e8e4ea9dda8", 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "with gr.Blocks() as demo:\n", 159 | " gr.Markdown(\"# 
Describe-and-Generate game 🖍️\")\n", 160 | " image_upload = gr.Image(label=\"Your first image\",type=\"pil\")\n", 161 | " btn_caption = gr.Button(\"Generate caption\")\n", 162 | " caption = gr.Textbox(label=\"Generated caption\")\n", 163 | " btn_image = gr.Button(\"Generate image\")\n", 164 | " image_output = gr.Image(label=\"Generated Image\")\n", 165 | " btn_caption.click(fn=captioner, inputs=[image_upload], outputs=[caption])\n", 166 | " btn_image.click(fn=generate, inputs=[caption], outputs=[image_output])\n", 167 | "\n", 168 | "gr.close_all()\n", 169 | "demo.launch(share=True, server_port=int(os.environ['PORT2']))" 170 | ] 171 | }, 172 | { 173 | "cell_type": "markdown", 174 | "id": "22da53d1", 175 | "metadata": {}, 176 | "source": [ 177 | "### Doing it all at once! " 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": null, 183 | "id": "024a4529-7f38-41fb-aab3-c38525b61abe", 184 | "metadata": {}, 185 | "outputs": [], 186 | "source": [ 187 | "def caption_and_generate(image):\n", 188 | " caption = captioner(image)\n", 189 | " image = generate(caption)\n", 190 | " return [caption, image]\n", 191 | "\n", 192 | "with gr.Blocks() as demo:\n", 193 | " gr.Markdown(\"# Describe-and-Generate game 🖍️\")\n", 194 | " image_upload = gr.Image(label=\"Your first image\",type=\"pil\")\n", 195 | " btn_all = gr.Button(\"Caption and generate\")\n", 196 | " caption = gr.Textbox(label=\"Generated caption\")\n", 197 | " image_output = gr.Image(label=\"Generated Image\")\n", 198 | "\n", 199 | " btn_all.click(fn=caption_and_generate, inputs=[image_upload], outputs=[caption, image_output])\n", 200 | "\n", 201 | "gr.close_all()\n", 202 | "demo.launch(share=True, server_port=int(os.environ['PORT3']))" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": null, 208 | "id": "ba8bd128-fea1-4c92-8363-cdfba40aed8e", 209 | "metadata": {}, 210 | "outputs": [], 211 | "source": [ 212 | "gr.close_all()" 213 | ] 214 | }, 215 | { 216 | "cell_type": 
"code", 217 | "execution_count": null, 218 | "id": "075c64d1-91a7-4627-b69a-1c2c83587511", 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "id": "cb304a3b-f7a4-4cea-9ed3-c57d14903d64", 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": null, 234 | "id": "32c6cd4b-ed0c-4beb-a968-a9490ab11e5c", 235 | "metadata": {}, 236 | "outputs": [], 237 | "source": [] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": null, 242 | "id": "1437d3a5-e9a6-4b25-b412-622f41a2de55", 243 | "metadata": {}, 244 | "outputs": [], 245 | "source": [] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "55709796-a89f-4560-bfa7-61e205d12bee", 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": null, 258 | "id": "0e6cbbbd-84ff-4f4f-94c6-25b7fda09cc6", 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": null, 266 | "id": "0d6fc064-ab3a-4797-9741-4af170190cb0", 267 | "metadata": {}, 268 | "outputs": [], 269 | "source": [] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": null, 274 | "id": "eb7f415f-534e-4d32-a84a-0d2d9e4cc67d", 275 | "metadata": {}, 276 | "outputs": [], 277 | "source": [] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "id": "7c68d508-c68a-453b-af66-dd4fcaddca22", 283 | "metadata": {}, 284 | "outputs": [], 285 | "source": [] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "id": "5ea71482-35e0-445d-b810-38d6a6787b60", 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": null, 298 | "id": "df4c62dd-05eb-424b-a910-12084ce27e2f", 299 | "metadata": 
{}, 300 | "outputs": [], 301 | "source": [] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "id": "95158cbd-0f60-4ccb-9329-c4425602b70d", 307 | "metadata": {}, 308 | "outputs": [], 309 | "source": [] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "id": "9e5f1210-f8c5-42c7-8563-3c2d0b168703", 315 | "metadata": {}, 316 | "outputs": [], 317 | "source": [] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "id": "1f3e7f12-547e-4e89-b958-c7a06dfe26d5", 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "07734551-8737-4c56-8b47-c26a7177f670", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": null, 338 | "id": "5c9f4ef6-2d3a-43e6-9e9f-69beee532f1c", 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [] 342 | }, 343 | { 344 | "cell_type": "code", 345 | "execution_count": null, 346 | "id": "3dafaaa3-e48a-4625-a324-934fc37eaa58", 347 | "metadata": {}, 348 | "outputs": [], 349 | "source": [] 350 | } 351 | ], 352 | "metadata": { 353 | "kernelspec": { 354 | "display_name": "Python 3 (ipykernel)", 355 | "language": "python", 356 | "name": "python3" 357 | }, 358 | "language_info": { 359 | "codemirror_mode": { 360 | "name": "ipython", 361 | "version": 3 362 | }, 363 | "file_extension": ".py", 364 | "mimetype": "text/x-python", 365 | "name": "python", 366 | "nbconvert_exporter": "python", 367 | "pygments_lexer": "ipython3", 368 | "version": "3.11.3" 369 | } 370 | }, 371 | "nbformat": 4, 372 | "nbformat_minor": 5 373 | } 374 | -------------------------------------------------------------------------------- /05_Chat_App.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "c5753a0c", 6 | "metadata": {}, 7 
| "source": [ 8 | "# **Gradio:** Chat (LLM) App 💬" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "a01a3724", 14 | "metadata": {}, 15 | "source": [ 16 | "Load your HF API key and relevant Python libraries" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "0fa6fa00-6bd1-4839-bcaf-8bae9267ee79", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import os\n", 27 | "import io\n", 28 | "import IPython.display\n", 29 | "from PIL import Image\n", 30 | "import base64 \n", 31 | "import requests \n", 32 | "requests.adapters.DEFAULT_TIMEOUT = 60\n", 33 | "\n", 34 | "from dotenv import load_dotenv, find_dotenv\n", 35 | "_ = load_dotenv(find_dotenv()) # read local .env file\n", 36 | "hf_api_key = os.environ['HF_API_KEY']" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "id": "095da8fe-24aa-4dc7-8e08-aa2f949ae21f", 43 | "metadata": {}, 44 | "outputs": [], 45 | "source": [ 46 | "# Helper function\n", 47 | "import requests, json\n", 48 | "from text_generation import Client\n", 49 | "\n", 50 | "#Falcon-instruct endpoint on the text_generation library\n", 51 | "client = Client(os.environ['HF_API_FALCOM_BASE'], headers={\"Authorization\": f\"Basic {hf_api_key}\"}, timeout=120)" 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "id": "bfe6fc97", 57 | "metadata": {}, 58 | "source": [ 59 | "## Building an app to chat with any LLM!" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "id": "745a3c9b", 65 | "metadata": {}, 66 | "source": [ 67 | "Here we'll be using an [Inference Endpoint](https://huggingface.co/inference-endpoints) for `falcon-40b-instruct`, one of the best-ranking open-source LLMs on the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). \n",
68 | "\n", 69 | "To run it locally, one can use the [Transformers library](https://huggingface.co/docs/transformers/index) or the [text-generation-inference](https://github.com/huggingface/text-generation-inference) library." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "id": "a7065860-3c0b-490d-9e7c-22e5b79fc004", 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "prompt = \"Has math been invented or discovered?\"\n", 80 | "client.generate(prompt, max_new_tokens=256).generated_text" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": null, 86 | "id": "0dcb659e-b71b-46da-b9d2-6ee62498995f", 87 | "metadata": {}, 88 | "outputs": [], 89 | "source": [ 90 | "#Back to Lesson 2, time flies!\n", 91 | "import gradio as gr\n", 92 | "def generate(input, slider):\n", 93 | " output = client.generate(input, max_new_tokens=slider).generated_text\n", 94 | " return output\n", 95 | "\n", 96 | "demo = gr.Interface(fn=generate, inputs=[gr.Textbox(label=\"Prompt\"), gr.Slider(label=\"Max new tokens\", value=20, maximum=1024, minimum=1)], outputs=[gr.Textbox(label=\"Completion\")])\n", 97 | "gr.close_all()\n", 98 | "demo.launch(share=True, server_port=int(os.environ['PORT1']))" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "id": "8e5f55e2", 104 | "metadata": {}, 105 | "source": [ 106 | "## `gr.Chatbot()` to the rescue!" 
107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "id": "43beebb7-40a6-4af5-a701-882821b6ed36", 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "import random\n", 117 | "\n", 118 | "def respond(message, chat_history):\n", 119 | " #No LLM here, just respond with a random pre-made message\n", 120 | " bot_message = random.choice([\"Tell me more about it\", \n", 121 | " \"Cool, but I'm not interested\", \n", 122 | " \"Hmmmm, ok then\"]) \n", 123 | " chat_history.append((message, bot_message))\n", 124 | " return \"\", chat_history\n", 125 | "\n", 126 | "with gr.Blocks() as demo:\n", 127 | " chatbot = gr.Chatbot(height=240) #just to fit the notebook\n", 128 | " msg = gr.Textbox(label=\"Prompt\")\n", 129 | " btn = gr.Button(\"Submit\")\n", 130 | " clear = gr.ClearButton(components=[msg, chatbot], value=\"Clear console\")\n", 131 | "\n", 132 | " btn.click(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])\n", 133 | " msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot]) #Press enter to submit\n", 134 | "gr.close_all()\n", 135 | "demo.launch(share=True, server_port=int(os.environ['PORT2']))" 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": null, 141 | "id": "55bae99d-7a63-4a40-bab7-de7d10b8ab1b", 142 | "metadata": {}, 143 | "outputs": [], 144 | "source": [ 145 | "def format_chat_prompt(message, chat_history):\n", 146 | " prompt = \"\"\n", 147 | " for turn in chat_history:\n", 148 | " user_message, bot_message = turn\n", 149 | " prompt = f\"{prompt}\\nUser: {user_message}\\nAssistant: {bot_message}\"\n", 150 | " prompt = f\"{prompt}\\nUser: {message}\\nAssistant:\"\n", 151 | " return prompt\n", 152 | "\n", 153 | "def respond(message, chat_history):\n", 154 | " formatted_prompt = format_chat_prompt(message, chat_history)\n", 155 | " bot_message = client.generate(formatted_prompt,\n", 156 | " max_new_tokens=1024,\n", 157 | " stop_sequences=[\"\\nUser:\", 
\"<|endoftext|>\"]).generated_text\n", 158 | " chat_history.append((message, bot_message))\n", 159 | " return \"\", chat_history\n", 160 | "\n", 161 | "with gr.Blocks() as demo:\n", 162 | " chatbot = gr.Chatbot(height=240) #just to fit the notebook\n", 163 | " msg = gr.Textbox(label=\"Prompt\")\n", 164 | " btn = gr.Button(\"Submit\")\n", 165 | " clear = gr.ClearButton(components=[msg, chatbot], value=\"Clear console\")\n", 166 | "\n", 167 | " btn.click(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])\n", 168 | " msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot]) #Press enter to submit\n", 169 | "gr.close_all()\n", 170 | "demo.launch(share=True, server_port=int(os.environ['PORT3']))" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "id": "f22b8de8", 176 | "metadata": {}, 177 | "source": [ 178 | "### Adding other advanced features" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": null, 184 | "id": "09873dfd-5b6c-41d6-9479-12e8c8894295", 185 | "metadata": {}, 186 | "outputs": [], 187 | "source": [ 188 | "def format_chat_prompt(message, chat_history, instruction):\n", 189 | " prompt = f\"System:{instruction}\"\n", 190 | " for turn in chat_history:\n", 191 | " user_message, bot_message = turn\n", 192 | " prompt = f\"{prompt}\\nUser: {user_message}\\nAssistant: {bot_message}\"\n", 193 | " prompt = f\"{prompt}\\nUser: {message}\\nAssistant:\"\n", 194 | " return prompt\n", 195 | "\n", 196 | "def respond(message, chat_history, instruction, temperature=0.7):\n", 197 | " prompt = format_chat_prompt(message, chat_history, instruction)\n", 198 | " chat_history = chat_history + [[message, \"\"]]\n", 199 | " stream = client.generate_stream(prompt,\n", 200 | " max_new_tokens=1024,\n", 201 | " stop_sequences=[\"\\nUser:\", \"<|endoftext|>\"],\n", 202 | " temperature=temperature)\n", 203 | " #stop_sequences to not generate the user answer\n", 204 | " acc_text = \"\"\n", 205 | " #Streaming the tokens\n", 206 | " for idx, 
response in enumerate(stream):\n", 207 | " text_token = response.token.text\n", 208 | "\n", 209 | " if response.details:\n", 210 | " return\n", 211 | "\n", 212 | " if idx == 0 and text_token.startswith(\" \"):\n", 213 | " text_token = text_token[1:]\n", 214 | "\n", 215 | " acc_text += text_token\n", 216 | " last_turn = list(chat_history.pop(-1))\n", 217 | " last_turn[-1] += acc_text\n", 218 | " chat_history = chat_history + [last_turn]\n", 219 | " yield \"\", chat_history\n", 220 | " acc_text = \"\"\n", 221 | "\n", 222 | "with gr.Blocks() as demo:\n", 223 | " chatbot = gr.Chatbot(height=240) #just to fit the notebook\n", 224 | " msg = gr.Textbox(label=\"Prompt\")\n", 225 | " with gr.Accordion(label=\"Advanced options\",open=False):\n", 226 | " system = gr.Textbox(label=\"System message\", lines=2, value=\"A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.\")\n", 227 | " temperature = gr.Slider(label=\"temperature\", minimum=0.1, maximum=1, value=0.7, step=0.1)\n", 228 | " btn = gr.Button(\"Submit\")\n", 229 | " clear = gr.ClearButton(components=[msg, chatbot], value=\"Clear console\")\n", 230 | "\n", 231 | " btn.click(respond, inputs=[msg, chatbot, system, temperature], outputs=[msg, chatbot]) #Pass the temperature slider value too\n", 232 | " msg.submit(respond, inputs=[msg, chatbot, system, temperature], outputs=[msg, chatbot]) #Press enter to submit\n", 233 | "gr.close_all()\n", 234 | "demo.queue().launch(share=True, server_port=int(os.environ['PORT4']))" 235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": null, 240 | "id": "8d9ec80a-39ad-4f58-b79e-4f413c5074c0", 241 | "metadata": {}, 242 | "outputs": [], 243 | "source": [ 244 | "gr.close_all()" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "3c7e28b0-0300-4e58-a01d-1d7eabe188eb", 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": null, 258 | "id": 
"2ca70638-2db0-4a01-9b6c-b866ab50b891", 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [] 262 | }, 263 | { 264 | "cell_type": "code", 265 | "execution_count": null, 266 | "id": "9571be90-43c3-4663-ae8a-6ab54e634161", 267 | "metadata": {}, 268 | "outputs": [], 269 | "source": [] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": null, 274 | "id": "9beb304e-fdb6-484b-a04a-f352f6d70ec3", 275 | "metadata": {}, 276 | "outputs": [], 277 | "source": [] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "id": "b666122e-95f3-42ac-8208-75b43e20cfc6", 283 | "metadata": {}, 284 | "outputs": [], 285 | "source": [] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "id": "b39aca63-841b-47f1-972c-5097231cc40e", 291 | "metadata": {}, 292 | "outputs": [], 293 | "source": [] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": null, 298 | "id": "792baf87-7f2e-4ed3-a9a2-3c86a53a3a37", 299 | "metadata": {}, 300 | "outputs": [], 301 | "source": [] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "id": "ea585333-a583-4b92-91d1-ea6ee7e1aa0c", 307 | "metadata": {}, 308 | "outputs": [], 309 | "source": [] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "id": "77afcf4d-25d4-4c89-a6ba-c222cb476f2e", 315 | "metadata": {}, 316 | "outputs": [], 317 | "source": [] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "id": "5f8fbdcb-b583-4645-baa3-397c8d06cdc3", 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "a3c64ed0-9dc6-4740-92b2-8bfef66ae258", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": null, 338 | "id": "f1d10d21-6a67-48bc-94fd-17a9d6c645d4", 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [] 342 | 
}, 343 | { 344 | "cell_type": "code", 345 | "execution_count": null, 346 | "id": "3ff096e7-8e2e-4791-a79b-9c5ef1c45a21", 347 | "metadata": {}, 348 | "outputs": [], 349 | "source": [] 350 | }, 351 | { 352 | "cell_type": "code", 353 | "execution_count": null, 354 | "id": "62c34731-b8f9-4dcd-888a-f5f5f5cc9eaf", 355 | "metadata": {}, 356 | "outputs": [], 357 | "source": [] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "execution_count": null, 362 | "id": "2683f4f8-ab96-4685-adfa-11e872a5f373", 363 | "metadata": {}, 364 | "outputs": [], 365 | "source": [] 366 | }, 367 | { 368 | "cell_type": "code", 369 | "execution_count": null, 370 | "id": "3e72fcaf-3740-4704-b299-d8096d79b99c", 371 | "metadata": {}, 372 | "outputs": [], 373 | "source": [] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": null, 378 | "id": "dee4edd3-1b95-468a-8177-12aa0677b92a", 379 | "metadata": {}, 380 | "outputs": [], 381 | "source": [] 382 | }, 383 | { 384 | "cell_type": "code", 385 | "execution_count": null, 386 | "id": "8104dd16-9c65-42a3-bf09-504aa67c1c10", 387 | "metadata": {}, 388 | "outputs": [], 389 | "source": [] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": null, 394 | "id": "12cf9b3a-4202-4e3a-9c6b-941fa1290ab8", 395 | "metadata": {}, 396 | "outputs": [], 397 | "source": [] 398 | } 399 | ], 400 | "metadata": { 401 | "kernelspec": { 402 | "display_name": "Python 3 (ipykernel)", 403 | "language": "python", 404 | "name": "python3" 405 | }, 406 | "language_info": { 407 | "codemirror_mode": { 408 | "name": "ipython", 409 | "version": 3 410 | }, 411 | "file_extension": ".py", 412 | "mimetype": "text/x-python", 413 | "name": "python", 414 | "nbconvert_exporter": "python", 415 | "pygments_lexer": "ipython3", 416 | "version": "3.11.3" 417 | } 418 | }, 419 | "nbformat": 4, 420 | "nbformat_minor": 5 421 | } 422 | -------------------------------------------------------------------------------- /06_Graphs.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# **Gradio:** Graphs" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "### Show Gradio BarGraph" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "import gradio as gr\nimport pandas as pd\n\n# BSA_results (and the initial agent feature dicts) are assumed to be loaded in an earlier cell\n", 24 | "\n", 25 | "def agent_function(agent):\n", 26 | " agent_data = BSA_results[BSA_results[\"Agent Name\"]==agent]\n", 27 | " report = agent_data[\"Agent Report\"].values[0]\n", 28 | " guidelines = agent_data[\"Guideline/Tips\"].values[0]\n", 29 | " calls = agent_data[\"Total Calls\"].values[0]\n", 30 | " sales = agent_data[\"Total Sales\"].values[0]\n", 31 | " state = agent_data[\"Agent State\"].values[0]\n", 32 | " agent_features = eval(agent_data['Agent (Average) LIME Features'].reset_index(drop=True)[0])\n", 33 | " top_agent_features = eval(agent_data['Top Agents (Average) LIME Features'].reset_index(drop=True)[0])\n", 34 | " simple = pd.DataFrame(\n", 35 | " {\n", 36 | " \"Features\": agent_features.keys(),\n", 37 | " \"Agent\": agent_features.values(),\n", 38 | " \"Top_Agents\": top_agent_features.values(),\n", 39 | " })\n", 40 | "\n", 41 | "\n", 42 | " return calls, sales, state, report, guidelines, simple\n", 43 | "\n", 44 | "simple = pd.DataFrame(\n", 45 | "{\n", 46 | " \"Features\": agent_features.keys(),\n", 47 | " \"Agent\": agent_features.values(),\n", 48 | " \"Top_Agents\": top_agent_features.values(),\n", 49 | "})\n", 50 | "\n", 51 | "\n", 52 | "with gr.Blocks() as demo:\n", 53 | " gr.Markdown(\"## Agent Reporting Dashboard\")\n", 54 | " with gr.Row():\n", 55 | " inp = [gr.Dropdown(BSA_results[\"Agent Name\"].values.tolist(), label=\"Select Agent\")]\n", 56 | " out1 = [gr.Text(label=\"Total Calls\"),\n", 57 | " gr.Text(label=\"Total Sales\"),\n", 58 | " 
gr.Text(label=\"Agent State\")]\n", 59 | " with gr.Row():\n", 60 | " out2 = [gr.Text(label=\"Agent Performance Report\"),\n", 61 | " gr.Text(label=\"Training Guidelines/Tips\")]\n", 62 | " with gr.Row():\n", 63 | " out3 = [gr.BarPlot(\n", 64 | " simple,\n", 65 | " x=\"Features\",\n", 66 | " y=\"Agent\",\n", 67 | " title=\"Agent Vs Top Agents (Average)\",\n", 68 | " tooltip=[\"Features\", \"Agent\"],\n", 69 | " y_lim=[0, 100],\n", 70 | " )]\n", 71 | " \n", 72 | " btn = gr.Button(\"Run\")\n", 73 | " btn.click(fn=agent_function, inputs=inp, outputs=(out1+out2+out3))\n", 74 | "\n", 75 | "demo.launch()" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "### Show MatplotLib Graph on Gradio" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "import gradio as gr\n", 92 | "\n", 93 | "import matplotlib.pyplot as plt\n", 94 | "\n", 95 | "def agent_comparison_chart(agent_name, data):\n", 96 | " # Find the row index where 'Agent Name' matches the provided agent_name\n", 97 | " index = data['Agent Name'].index[data['Agent Name'] == agent_name].tolist()\n", 98 | " if not index:\n", 99 | " print(f\"Agent {agent_name} not found in the dataset.\")\n", 100 | " return\n", 101 | "\n", 102 | " # Extract emotions and values for the specified agent and top agents\n", 103 | " agent_features = eval(data['Agent (Average) LIME Features'][index[0]])\n", 104 | " top_agent_features = eval(data['Top Agents (Average) LIME Features'][index[0]])\n", 105 | "\n", 106 | " emotions = list(agent_features.keys())\n", 107 | " agent_values = list(agent_features.values())\n", 108 | " top_agent_values = list(top_agent_features.values())\n", 109 | "\n", 110 | " bar_width = 0.4\n", 111 | " positions = range(len(emotions))\n", 112 | "\n", 113 | " plt.figure(figsize=(12, 6))\n", 114 | " agent_bars = plt.bar(positions, agent_values, bar_width, label=f'Agent {agent_name} (Average) 
LIME Features')\n", 115 | " top_agent_bars = plt.bar([pos + bar_width for pos in positions], top_agent_values, bar_width, label='Top Agents (Average) LIME Features')\n", 116 | " plt.xlabel('Emotions')\n", 117 | " plt.ylabel('Values')\n", 118 | " plt.title(f'Comparison of Emotion Values between Agent {agent_name} and Top Agents')\n", 119 | " plt.xticks([pos + bar_width / 2 for pos in positions], emotions, rotation=90)\n", 120 | " plt.legend()\n", 121 | "\n", 122 | " # Display values on top of the bars\n", 123 | " for bar, value in zip(agent_bars, agent_values):\n", 124 | " plt.text(bar.get_x() + bar.get_width() / 2, value, str(int(value)), ha='center', va='bottom')\n", 125 | " for bar, value in zip(top_agent_bars, top_agent_values):\n", 126 | " plt.text(bar.get_x() + bar.get_width() / 2, value, str(int(value)), ha='center', va='bottom')\n", 127 | "\n", 128 | " return plt\n", 129 | "\n", 130 | "\n", 131 | "def agent_function(agent):\n", 132 | " agent_data = BSA_results[BSA_results[\"Agent Name\"]==agent]\n", 133 | " report = agent_data[\"Agent Report\"].values[0]\n", 134 | " guidelines = agent_data[\"Guideline/Tips\"].values[0]\n", 135 | " calls = agent_data[\"Total Calls\"].values[0]\n", 136 | " sales = agent_data[\"Total Sales\"].values[0]\n", 137 | " state = agent_data[\"Agent State\"].values[0]\n", 138 | " return calls, sales, state, report, guidelines, agent_comparison_chart(agent, BSA_results)\n", 139 | "\n", 140 | "with gr.Blocks() as demo:\n", 141 | " gr.Markdown(\"## Agent Reporting Dashboard\")\n", 142 | " with gr.Row():\n", 143 | " inp = [gr.Dropdown(BSA_results[\"Agent Name\"].values.tolist(), label=\"Select Agent\")]\n", 144 | " out1 = [gr.Text(label=\"Total Calls\"),\n", 145 | " gr.Text(label=\"Total Sales\"),\n", 146 | " gr.Text(label=\"Agent State\")]\n", 147 | " with gr.Row():\n", 148 | " out2 = [gr.Text(label=\"Agent Performance Report\"),\n", 149 | " gr.Text(label=\"Training Guidelines/Tips\")]\n", 150 | " with gr.Row():\n", 151 | " out3 = 
[gr.Plot(scale=2)]\n", 152 | " \n", 153 | " btn = gr.Button(\"Run\")\n", 154 | " btn.click(fn=agent_function, inputs=inp, outputs=(out1+out2+out3))\n", 155 | "\n", 156 | "demo.launch()\n", 157 | "\n", 158 | "\n", 159 | "# 'Yono Noyman'\n", 160 | "# Aditya Chawla" 161 | ] 162 | } 163 | ], 164 | "metadata": { 165 | "language_info": { 166 | "name": "python" 167 | } 168 | }, 169 | "nbformat": 4, 170 | "nbformat_minor": 2 171 | } 172 | -------------------------------------------------------------------------------- /07_UI.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# **Gradio:** Advanced UI" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import gradio as gr\n", 17 | "import calendar\n", 18 | "from datetime import datetime, timedelta\n", 19 | "\n", 20 | "def agent_feature_calls(month, agent, projectid, features):\n", 21 | " start_time, end_time = get_month_dates(month)\n", 22 | " return agent_lowest_calls(agent, start_time, end_time, projectid, features)\n", 23 | "\n", 24 | "def top_agent_feature_calls(month, agent, projectid, features):\n", 25 | " start_time, end_time = get_month_dates(month)\n", 26 | " return agent_top_calls(agent, start_time, end_time, projectid, features)\n", 27 | "\n", 28 | "def top_diff_features(data):\n", 29 | " data = data.reset_index(drop=True)\n", 30 | " agent_f = eval(data['Agent (Average) Features'][0])\n", 31 | " top_agent_f = eval(data['Top Agents (Average) Features'][0])\n", 32 | " percentage_differences = {key: (abs(top_agent_f.get(key, 0) - agent_f.get(key, 0))/ ((top_agent_f.get(key, 0) + agent_f.get(key, 0))/2))*100 for key in set(top_agent_f)}\n", 33 | " percentage_differences = dict(sorted(percentage_differences.items(), key=lambda x:x[1], reverse=True))\n", 34 | " top_3_features_with_max_difference = 
list(percentage_differences.keys())[:3]\n", 35 | " return top_3_features_with_max_difference\n", 36 | "\n", 37 | "def get_month_dates(month):\n", 38 | " # Get the current year\n", 39 | " current_year = datetime.now().year\n", 40 | " # Get the first day of the month\n", 41 | " start_date = datetime(current_year, int(month), 1)\n", 42 | " # Get the last day of the month\n", 43 | " _, last_day = calendar.monthrange(current_year, int(month))\n", 44 | " end_date = datetime(current_year, int(month), last_day)\n", 45 | " return start_date.strftime(\"%Y-%m-%d\"), end_date.strftime(\"%Y-%m-%d\")\n", 46 | "\n", 47 | "def agent_function(month, agent, projectid):\n", 48 | " start_time, end_time = get_month_dates(month)\n", 49 | " agent_data = user_inputs(start_time, end_time, projectid, agent, con)\n", 50 | " report = agent_data[\"Agent Report\"].values[0]\n", 51 | " guidelines = agent_data[\"Guideline/Tips\"].values[0]\n", 52 | " calls = agent_data[\"Total Calls\"].values[0]\n", 53 | " sales = agent_data[\"Total Sales\"].values[0]\n", 54 | " score = agent_data[\"Score\"].values[0]\n", 55 | " state = agent_data[\"Agent State\"].values[0]\n", 56 | " projectId = agent_data[\"ProjectID\"].values[0]\n", 57 | " features = top_diff_features(agent_data)\n", 58 | " top_agents = eval(agent_data[\"Top Agent Names\"].values[0])\n", 59 | " return (\n", 60 | " calls,\n", 61 | " sales,\n", 62 | " score,\n", 63 | " state,\n", 64 | " report,\n", 65 | " guidelines,\n", 66 | " agent_comparison_chart(agent, agent_data),\n", 67 | " agent_talk_chart(agent, agent_data),\n", 68 | " agent_difference_percentages(agent, agent_data),\n", 69 | " sentiment_diff(agent, agent_data),\n", 70 | " top_agents,\n", 71 | " features\n", 72 | " )\n", 73 | "\n", 74 | "with gr.Blocks() as demo:\n", 75 | " gr.Markdown(\"## Agent Reporting Dashboard\")\n", 76 | " # INPUTS\n", 77 | " with gr.Row():\n", 78 | " month_sel = [gr.Dropdown([1,2,3,4,5], label=\"Select Month\")]\n", 79 | " agent_sel = [gr.Dropdown(['Abdul Ali', 'Yono 
Noyman', 'Aditya Chawla'], label=\"Select Agent\")]\n", 80 | " project_sel = [gr.Text(label=\"ProjectID\", value=168)]\n", 81 | " run_button = gr.Button(\"Run\")\n", 82 | " \n", 83 | " # OUTPUTS \n", 84 | " with gr.Row():\n", 85 | " calls_output = [gr.Text(label=\"Total Calls\"),\n", 86 | " gr.Text(label=\"Total Sales\"),\n", 87 | " gr.Text(label=\"Score\"),\n", 88 | " gr.Text(label=\"Agent State\")]\n", 89 | " with gr.Row():\n", 90 | " report_output = [gr.Text(label=\"Agent Performance Report\"),\n", 91 | " gr.Text(label=\"Training Guidelines/Tips\")]\n", 92 | " with gr.Row():\n", 93 | " emotion_graph = [gr.Plot(label=\"Emotions Graph\")]\n", 94 | " with gr.Row():\n", 95 | " talk_graph = [gr.Plot(label=\"Talk Metrics Graph\")]\n", 96 | " with gr.Row():\n", 97 | " differences_graph = [gr.Plot(label=\"Percentage Differences Graph\")]\n", 98 | " with gr.Row():\n", 99 | " sentiment_graph = [gr.Plot(label=\"Sentiment Differences Graph\")]\n", 100 | " with gr.Row():\n", 101 | " top_agents_names = [gr.Text(label=\"Top 5 Agents\")]\n", 102 | " top_feature_diff = [gr.Text(label=\"Top % Difference Features\")]\n", 103 | " top_feature_drop = [gr.Dropdown(top_feature_diff[0].value, label=\"Select Feature\")]\n", 104 | " with gr.Row():\n", 105 | " agent_calls_button = gr.Button(\"Agent Calls\")\n", 106 | " agents_calls = gr.Dataframe(label=\"Agent's Lowest Calls\")\n", 107 | " with gr.Row():\n", 108 | " top_agent_calls_button = gr.Button(\"Top Agent Calls\")\n", 109 | " top_agents_calls = gr.Dataframe(label=\"Top Agent's Highest Calls\")\n", 110 | "\n", 111 | " # BUTTONS\n", 112 | " inputs = month_sel + agent_sel + project_sel\n", 113 | " outputs = calls_output + report_output + emotion_graph + talk_graph + differences_graph + sentiment_graph + top_agents_names + top_feature_diff\n", 114 | " run_button.click(agent_function, inputs=inputs, outputs=outputs)\n", 115 | " \n", 116 | " agent_calls_inputs = inputs + top_feature_diff\n", 117 | " 
agent_calls_button.click(agent_feature_calls, inputs=agent_calls_inputs, outputs=agents_calls) \n", 118 | " \n", 119 | " top_agents_inputs = month_sel + top_agents_names + project_sel + top_feature_diff\n", 120 | " top_agent_calls_button.click(top_agent_feature_calls, inputs=top_agents_inputs, outputs=top_agents_calls) \n", 121 | "\n", 122 | "\n", 123 | "demo.launch(share=True)" 124 | ] 125 | } 126 | ], 127 | "metadata": { 128 | "language_info": { 129 | "name": "python" 130 | } 131 | }, 132 | "nbformat": 4, 133 | "nbformat_minor": 2 134 | } 135 | -------------------------------------------------------------------------------- /Images/BART_and_BERT.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/Images/BART_and_BERT.png -------------------------------------------------------------------------------- /Images/Stable_Diffusion.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/Images/Stable_Diffusion.png -------------------------------------------------------------------------------- /bird_flight.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/bird_flight.jpeg -------------------------------------------------------------------------------- /christmas_dog.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/christmas_dog.jpeg -------------------------------------------------------------------------------- 
/components.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/components.png -------------------------------------------------------------------------------- /cover.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/cover.jpg -------------------------------------------------------------------------------- /cow.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/e83b776974f14338161bc4063ed7df86d50cfbbe/cow.jpeg -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # Building Machine Learning Apps with Gradio 2 | ![cover](https://github.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/blob/main/cover.jpg) 3 | ![components](https://github.com/ArslanKAS/Building-Machine-Learning-Demos-with-Gradio/blob/main/components.png) 4 | --------------------------------------------------------------------------------