├── LICENSE ├── README.md ├── S05 - Project 1 - Simple Question and Answer App ├── LLM Practical Implementation using Python.zip └── Lets' Build Simple Question Answering Application.zip ├── S06 - Project 2 - Simple Conversational App ├── Chat Model Practical Implementation using Python.zip └── Let's Build Simple Conversational Application.zip ├── S07 - Project 3 - Find Similar Things App for Kids ├── Embeddings Example using Python.zip ├── Embeddings Practical Implementation using Python.zip └── Let's build Similar Words Finder Application.zip ├── S08 - LangChain - Prompt Module Concept and Implementation Using Python ├── Example Selectors Implementation Using Python.ipynb ├── Output Parsers Implementation Using Python.ipynb └── Prompt Template Implementation Using Python.ipynb ├── S09 - Project 4 - Marketing Campaign App └── Marketing Campaign App - Project Source Code.zip ├── S10 - LangChain - Memory Module Concept └── Memory Module.ipynb ├── S11 - Project 5 - ChatGPT Clone with Summarization Option └── ChatGPT Clone - Source Code.zip ├── S12 - LangChain - Data Connection Module Concept ├── Data Connections - Project.PNG └── Data Connections - Python Code.zip ├── S13 - Project 6 - Quiz MCQ Creator App └── MCQ Creator App - Jupyter Notebook.zip ├── S14 - LangChain - Chains Module Concept ├── Generic Chains Overview.ipynb └── Utility Chains Overview.ipynb ├── S16 - Project 7 - CSV Data Analysis Tool └── CSV Data Analysis - Project 7.zip ├── S17 - Project 8 - YouTube Script Writing Tool └── Youtube Script Writing Tool - Source Code.zip ├── S18 - Project 9 - Support Chat Bot for your Website ├── Flow Diagram.pdf └── Support Chat Bot For Your Website - Project 9.zip ├── S19 - Project 10 - Automatic Ticket Classification Tool └── Automatic Ticket Classification Tool.zip ├── S20 - Project 11 - HR Resume Screening Assistance └── Resume Screening Assistance Project - Source Code - Final.zip ├── S22 - Project 12 - Email Generator Using LLAMA 2 - Streamlit App └── Email Generator App - Source Code.zip ├── S23 - Project 13 - Invoice Extraction Bot ├── Get Replicate API Token.pdf └── Project 13 - Invoice Extraction Bot - Source Code.zip ├── S24 - Project 14 - Text to SQL Query - Helper Tool └── Text_To_SQL_Query_Helper_Tool.ipynb └── S25 - Project 15 - Customer Care Call Summary Alert └── Customer Care Call Summary Alert - Source Code.zip /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Packt 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python- 2 | LangChain Masterclass - Build 15 OpenAI and LLAMA 2 LLM Apps Using Python, published by Packt 3 | -------------------------------------------------------------------------------- /S05 - Project 1 - Simple Question and Answer App/LLM Practical Implementation using Python.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S05 - Project 1 - Simple Question and Answer App/LLM Practical Implementation using Python.zip -------------------------------------------------------------------------------- /S05 - Project 1 - Simple Question and Answer App/Lets' Build Simple Question Answering Application.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S05 - Project 1 - Simple Question and Answer App/Lets' Build Simple Question Answering Application.zip -------------------------------------------------------------------------------- /S06 - Project 2 - Simple Conversational App/Chat Model Practical Implementation using Python.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S06 - Project 2 - Simple Conversational App/Chat Model Practical Implementation using Python.zip -------------------------------------------------------------------------------- /S06 - Project 2 - Simple Conversational App/Let's Build Simple Conversational Application.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S06 - Project 2 - Simple Conversational App/Let's Build Simple Conversational Application.zip -------------------------------------------------------------------------------- /S07 - Project 3 - Find Similar Things App for Kids/Embeddings Example using Python.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S07 - Project 3 - Find Similar Things App for Kids/Embeddings Example using Python.zip -------------------------------------------------------------------------------- /S07 - Project 3 - Find Similar Things App for Kids/Embeddings Practical Implementation using Python.zip: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S07 - Project 3 - Find Similar Things App for Kids/Embeddings Practical Implementation using Python.zip -------------------------------------------------------------------------------- /S07 - Project 3 - Find Similar Things App for Kids/Let's build Similar Words Finder Application.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S07 - Project 3 - Find Similar Things App for Kids/Let's build Similar Words Finder Application.zip -------------------------------------------------------------------------------- /S08 - LangChain - Prompt Module Concept and Implementation Using Python/Example Selectors Implementation Using Python.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "id": "c87cba2c", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!pip install langchain====0.1.13\n", 11 | "#!pip install langchain-openai==0.1.0" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 2, 17 | "id": "0e9165f2", 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "#!pip install openai==1.14.2" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": 3, 27 | "id": "35b09af1", 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "import os\n", 32 | "os.environ[\"OPENAI_API_KEY\"] = \"sk-26gaj1aVWKvgg5737368fghmf4737gg347tt53Ct\"" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "id": "8f29f0c9", 38 | "metadata": {}, 39 | "source": [ 40 | "## Few Shot Templates" 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "id": "6b200171", 46 | "metadata": {}, 47 | "source": [ 48 | "\n", 49 | " \n", 50 | "Few-shot learning is a way to teach computers to make predictions using only a small amount of information. Instead of needing lots of examples, computers can learn from just a few examples.
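For instance (a minimal illustrative sketch, not taken from the course code), "a few examples" can be as little as two worked question/answer pairs pasted ahead of the real question — the very pattern this notebook builds up below:

```python
# Hypothetical sketch: two worked examples ("shots") followed by the new question.
# The model is expected to continue in the same style; no retraining is involved.
few_shot_prompt = """Question: What is a mobile?
Response: A pocket-sized gadget full of games and videos.

Question: What are your dreams?
Response: Colourful adventures with ice cream parties.

Question: What is a house?
Response:"""
```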
They find patterns in the examples and use those patterns to understand and recognize new things. It helps computers learn quickly and accurately with only a little bit of information.\n", 51 | "    \n", 52 | "" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": 4, 58 | "id": "c4dc5d86", 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "#As the LangChain team has been working aggressively on improving the tool, we can see a lot of changes happening every week,\n", 63 | "#As a part of it, the below import has been deprecated\n", 64 | "#from langchain.llms import OpenAI\n", 65 | "\n", 66 | "#First we'll need to install the below LangChain x OpenAI integration package (if not installed already) and then import it\n", 67 | "\n", 68 | "#!pip install langchain-openai==0.0.5\n", 69 | "\n", 70 | "from langchain_openai import OpenAI" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "id": "d9b640c9", 76 | "metadata": {}, 77 | "source": [ 78 | "A prompt in NLP (Natural Language Processing) is a text or instruction given to a language model to generate a response." 79 | ] 80 | }, 81 | { 82 | "cell_type": "code", 83 | "execution_count": 5, 84 | "id": "3d998325", 85 | "metadata": {}, 86 | "outputs": [], 87 | "source": [ 88 | "our_prompt = \"\"\"You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 89 | "\n", 90 | "Question: What is a house?\n", 91 | "Response: \"\"\"\n", 92 | "\n", 93 | "# 'text-davinci-003' model is deprecated now, so we are using OpenAI's recommended model https://platform.openai.com/docs/deprecations\n", 94 | "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")" 95 | ] 96 | }, 97 | { 98 | "cell_type": "code", 99 | "execution_count": 6, 100 | "id": "86040ee5", 101 | "metadata": {}, 102 | "outputs": [ 103 | { 104 | "name": "stdout", 105 | "output_type": "stream", 106 | "text": [ 107 | "A house is a big place where we live and sleep. It has rooms for us to play and do things, and a kitchen to cook yummy food. We also have a backyard to run and play in! It's like our own little castle. \n" 108 | ] 109 | } 110 | ], 111 | "source": [ 112 | "#LangChain has recently recommended using the invoke function for the call below :)\n", 113 | "print(llm.invoke(our_prompt))" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "id": "14786796", 119 | "metadata": {}, 120 | "source": [ 121 | "We observe that though we have instructed the model to act as a little girl, it's unable to do so, as it is very generic by nature\n", 122 | "<br>
\n", 123 | " So we will try to proved some external knowledge to get the perfect answers from it" 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": 7, 129 | "id": "bdb1e7a7", 130 | "metadata": {}, 131 | "outputs": [], 132 | "source": [ 133 | "our_prompt = \"\"\"You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 134 | "Here are some examples: \n", 135 | "\n", 136 | "Question: What is a mobile?\n", 137 | "Response: A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\n", 138 | "\n", 139 | "Question: What are your dreams?\n", 140 | "Response: My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\n", 141 | "\n", 142 | "Question: What is a house?\n", 143 | "Response: \"\"\"\n" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": 8, 149 | "id": "3a54b77b", 150 | "metadata": {}, 151 | "outputs": [ 152 | { 153 | "name": "stdout", 154 | "output_type": "stream", 155 | "text": [ 156 | "A house is like a giant puzzle that you get to live in! It has rooms for sleeping, eating, and playing. And if you're lucky, you might find a secret room filled with treasures and treats!\n" 157 | ] 158 | } 159 | ], 160 | "source": [ 161 | "\n", 162 | "print(llm.invoke(our_prompt))" 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "id": "71bd4f34", 168 | "metadata": {}, 169 | "source": [ 170 | "\n", 171 | " \n", 172 | "The FewShotPromptTemplate feature offered by LangChain allows for few-shot learning using prompts. \n", 173 | " \n", 174 | "
In the context of large language models (LLMs), the primary sources of knowledge are parametric knowledge (learned during model training) and source knowledge (provided within model input at inference time). \n", 175 | " \n", 176 | "
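To make that distinction concrete, here is a small illustration (hypothetical prompts, not from the course material) using the `llm` object created earlier in this notebook:

```python
# Parametric knowledge: the model answers purely from what it absorbed during training.
print(llm.invoke("What is the capital of France?"))

# Source knowledge: the answer is grounded in text supplied inside the prompt itself.
# "Zorbia" is a made-up country, so the model can only answer from the given context.
print(llm.invoke(
    "Answer using only the context below.\n"
    "Context: The capital of Zorbia is Qentown.\n"
    "Question: What is the capital of Zorbia?"
))
```

The few-shot examples used throughout this notebook are source knowledge in exactly this sense.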
\n", 177 | "The FewShotPromptTemplate enables the inclusion of a few examples within prompts, which the model can read and use to apply to user input, enhancing the model's ability to handle specific tasks or scenarios.\n", 178 | "" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": 9, 184 | "id": "a9112df5", 185 | "metadata": {}, 186 | "outputs": [], 187 | "source": [ 188 | "from langchain.prompts import PromptTemplate" 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": 10, 194 | "id": "f71eff48", 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "from langchain import FewShotPromptTemplate" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "id": "09b183ca", 204 | "metadata": {}, 205 | "source": [ 206 | "\n", 207 | " Let's create a list of examples, that can be passed to the model later for our task\n", 208 | "" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 11, 214 | "id": "797d197d", 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "examples = [\n", 219 | " {\n", 220 | " \"query\": \"What is a mobile?\",\n", 221 | " \"answer\": \"A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\"\n", 222 | " }, {\n", 223 | " \"query\": \"What are your dreams?\",\n", 224 | " \"answer\": \"My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\"\n", 225 | " }\n", 226 | "]" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "id": "148e2e73", 232 | "metadata": {}, 233 | "source": [ 234 | "\n", 235 | " Let's create a example template\n", 236 | "" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": 12, 242 | "id": "e96d34cd", 243 | "metadata": {}, 244 | "outputs": [], 245 | "source": [ 246 | "example_template = \"\"\"\n", 247 | "Question: {query}\n", 248 | "Response: {answer}\n", 249 | "\"\"\"" 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "id": "b7b121ee", 255 | "metadata": {}, 256 | "source": [ 257 | "\n", 258 | " Let's create a prompt example from above created example template\n", 259 | "" 260 | ] 261 | }, 262 | { 263 | "cell_type": "code", 264 | "execution_count": 13, 265 | "id": "7e958848", 266 | "metadata": {}, 267 | "outputs": [], 268 | "source": [ 269 | "example_prompt = PromptTemplate(\n", 270 | " input_variables=[\"query\", \"answer\"],\n", 271 | " template=example_template\n", 272 | ")" 273 | ] 274 | }, 275 | { 276 | "cell_type": "markdown", 277 | "id": "b6e72773", 278 | "metadata": {}, 279 | "source": [ 280 | "The previous original prompt can be divided into a prefix and suffix.
The prefix consists of the instructions or context given to the model, while the suffix includes the user input and output indicator." 281 | ] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "execution_count": 14, 286 | "id": "fa1df4e5", 287 | "metadata": {}, 288 | "outputs": [], 289 | "source": [ 290 | "\n", 291 | "prefix = \"\"\"You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 292 | "Here are some examples: \n", 293 | "\"\"\"\n", 294 | "\n", 295 | "suffix = \"\"\"\n", 296 | "Question: {userInput}\n", 297 | "Response: \"\"\"\n" 298 | ] 299 | }, 300 | { 301 | "cell_type": "markdown", 302 | "id": "cf59da84", 303 | "metadata": {}, 304 | "source": [ 305 | "\n", 306 | " Let's create a few shot prompt template, by using the above details\n", 307 | " " 308 | ] 309 | }, 310 | { 311 | "cell_type": "code", 312 | "execution_count": 15, 313 | "id": "43f703a0", 314 | "metadata": {}, 315 | "outputs": [], 316 | "source": [ 317 | "few_shot_prompt_template = FewShotPromptTemplate(\n", 318 | " examples=examples,\n", 319 | " example_prompt=example_prompt,\n", 320 | " prefix=prefix,\n", 321 | " suffix=suffix,\n", 322 | " input_variables=[\"userInput\"],\n", 323 | " example_separator=\"\\n\\n\"\n", 324 | ")" 325 | ] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": 16, 330 | "id": "a8b4b7f3", 331 | "metadata": {}, 332 | "outputs": [ 333 | { 334 | "name": "stdout", 335 | "output_type": "stream", 336 | "text": [ 337 | "You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 338 | "Here are some examples: \n", 339 | "\n", 340 | "\n", 341 | "\n", 342 | "Question: What is a mobile?\n", 343 | "Response: A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\n", 344 | "\n", 345 | "\n", 346 | "\n", 347 | "Question: What are your dreams?\n", 348 | "Response: My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\n", 349 | "\n", 350 | "\n", 351 | "\n", 352 | "Question: What is a house?\n", 353 | "Response: \n" 354 | ] 355 | } 356 | ], 357 | "source": [ 358 | "query = \"What is a house?\"\n", 359 | "\n", 360 | "print(few_shot_prompt_template.format(userInput=query))" 361 | ] 362 | }, 363 | { 364 | "cell_type": "code", 365 | "execution_count": 17, 366 | "id": "93c31972", 367 | "metadata": {}, 368 | "outputs": [ 369 | { 370 | "name": "stdout", 371 | "output_type": "stream", 372 | "text": [ 373 | "A house is a big cozy fort where I can play hide and seek, have tea parties with my stuffed animals, and snuggle with my family. But the most important part of a house is the love that fills it up! 
\n" 374 | ] 375 | } 376 | ], 377 | "source": [ 378 | "print(llm.invoke(few_shot_prompt_template.format(userInput=query)))" 379 | ] 380 | }, 381 | { 382 | "cell_type": "markdown", 383 | "id": "d630c055", 384 | "metadata": {}, 385 | "source": [ 386 | "\n", 387 | " Adding more examples so that model can have more context before responding with a answer\n", 388 | " \n", 389 | " " 390 | ] 391 | }, 392 | { 393 | "cell_type": "code", 394 | "execution_count": 18, 395 | "id": "24446b00", 396 | "metadata": {}, 397 | "outputs": [], 398 | "source": [ 399 | "examples = [\n", 400 | " {\n", 401 | " \"query\": \"What is a mobile?\",\n", 402 | " \"answer\": \"A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\"\n", 403 | " }, {\n", 404 | " \"query\": \"What are your dreams?\",\n", 405 | " \"answer\": \"My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\"\n", 406 | " }, {\n", 407 | " \"query\": \" What are your ambitions?\",\n", 408 | " \"answer\": \"I want to be a super funny comedian, spreading laughter everywhere I go! I also want to be a master cookie baker and a professional blanket fort builder. Being mischievous and sweet is just my bonus superpower!\"\n", 409 | " }, {\n", 410 | " \"query\": \"What happens when you get sick?\",\n", 411 | " \"answer\": \"When I get sick, it's like a sneaky monster visits. I feel tired, sniffly, and need lots of cuddles. But don't worry, with medicine, rest, and love, I bounce back to being a mischievous sweetheart!\"\n", 412 | " }, {\n", 413 | " \"query\": \"WHow much do you love your dad?\",\n", 414 | " \"answer\": \"Oh, I love my dad to the moon and back, with sprinkles and unicorns on top! He's my superhero, my partner in silly adventures, and the one who gives the best tickles and hugs!\"\n", 415 | " }, {\n", 416 | " \"query\": \"Tell me about your friend?\",\n", 417 | " \"answer\": \"My friend is like a sunshine rainbow! We laugh, play, and have magical parties together. They always listen, share their toys, and make me feel special. Friendship is the best adventure!\"\n", 418 | " }, {\n", 419 | " \"query\": \"What math means to you?\",\n", 420 | " \"answer\": \"Math is like a puzzle game, full of numbers and shapes. It helps me count my toys, build towers, and share treats equally. It's fun and makes my brain sparkle!\"\n", 421 | " }, {\n", 422 | " \"query\": \"What is your fear?\",\n", 423 | " \"answer\": \"Sometimes I'm scared of thunderstorms and monsters under my bed. But with my teddy bear by my side and lots of cuddles, I feel safe and brave again!\"\n", 424 | " }\n", 425 | "]" 426 | ] 427 | }, 428 | { 429 | "cell_type": "markdown", 430 | "id": "2ce2e0c3", 431 | "metadata": {}, 432 | "source": [ 433 | "\n", 434 | "In the above explanation, be have been using 'FewShotPromptTemplate' and 'examples' dictionary as it is more robust approach compared to using a single f-string. \n", 435 | "
\n", 436 | "It offers features such as the ability to include or exclude examples based on the length of the query. \n", 437 | "
\n", 438 | "This is important because there is a maximum context window limitation for prompt and generation output length. \n", 439 | "\n", 440 | "The goal is to provide as many examples as possible for few-shot learning without exceeding the context window or increasing processing times excessively. \n", 441 | "
\n", 442 | "The dynamic inclusion/exclusion of examples means that we choose which examples to use based on certain rules. This helps us use the model's abilities in the best way possible. \n", 443 | "\n", 444 | "
\n", 445 | " It allows us to be efficient and make the most out of the few-shot learning process.\n", 446 | "" 447 | ] 448 | }, 449 | { 450 | "cell_type": "code", 451 | "execution_count": 19, 452 | "id": "eddb4d7b", 453 | "metadata": {}, 454 | "outputs": [], 455 | "source": [ 456 | "from langchain.prompts.example_selector import LengthBasedExampleSelector" 457 | ] 458 | }, 459 | { 460 | "cell_type": "markdown", 461 | "id": "c3fbec2c", 462 | "metadata": {}, 463 | "source": [ 464 | "\n", 465 | " LengthBasedExampleSelector - This ExampleSelector chooses examples based on length, useful to prevent prompt exceeding context window.
It selects fewer examples for longer inputs and more for shorter ones, ensuring the prompt fits within limits.\n", 466 | "<br>

\n", 467 | "    The maximum combined length of the formatted examples is capped at 'n', which by default is measured in words rather than characters. \n", 468 | "<br>
To determine which examples to include, the length of a string is measured using the get_text_length function, which is provided as a default value if not specified.\n", 469 | " " 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": 20, 475 | "id": "873bcd88", 476 | "metadata": {}, 477 | "outputs": [], 478 | "source": [ 479 | "example_selector = LengthBasedExampleSelector(\n", 480 | " examples=examples,\n", 481 | " example_prompt=example_prompt,\n", 482 | " max_length=200\n", 483 | ")" 484 | ] 485 | }, 486 | { 487 | "cell_type": "markdown", 488 | "id": "88218627", 489 | "metadata": {}, 490 | "source": [ 491 | "\n", 492 | "Creating a new dynamic few shot prompt template\n", 493 | "
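A short aside on the selector defined above: the length measure itself can be swapped out. Below is a minimal sketch, assuming the LengthBasedExampleSelector in this LangChain version accepts a custom `get_text_length` callable (its default roughly counts words); the variable names are illustrative only:

```python
# Hypothetical variant (not part of the course code): budget the examples by
# character count instead of word count. `examples` and `example_prompt` are
# the objects created in the cells above.
char_based_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=600,                          # now interpreted as roughly 600 characters
    get_text_length=lambda text: len(text),  # custom str -> int length function
)
```

A character-based budget tracks model context limits a little more closely than a word count, though neither maps exactly onto tokens.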
\n", 494 | " And we are passing example_selector instead of examples as earlier\n", 495 | " " 496 | ] 497 | }, 498 | { 499 | "cell_type": "code", 500 | "execution_count": 21, 501 | "id": "edf19174", 502 | "metadata": {}, 503 | "outputs": [], 504 | "source": [ 505 | "new_prompt_template = FewShotPromptTemplate(\n", 506 | " example_selector=example_selector, # use example_selector instead of examples\n", 507 | " example_prompt=example_prompt,\n", 508 | " prefix=prefix,\n", 509 | " suffix=suffix,\n", 510 | " input_variables=[\"userInput\"],\n", 511 | " example_separator=\"\\n\"\n", 512 | ")" 513 | ] 514 | }, 515 | { 516 | "cell_type": "code", 517 | "execution_count": 22, 518 | "id": "f7303f78", 519 | "metadata": {}, 520 | "outputs": [ 521 | { 522 | "name": "stdout", 523 | "output_type": "stream", 524 | "text": [ 525 | "You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 526 | "Here are some examples: \n", 527 | "\n", 528 | "\n", 529 | "Question: What is a mobile?\n", 530 | "Response: A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\n", 531 | "\n", 532 | "\n", 533 | "Question: What are your dreams?\n", 534 | "Response: My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\n", 535 | "\n", 536 | "\n", 537 | "Question: What are your ambitions?\n", 538 | "Response: I want to be a super funny comedian, spreading laughter everywhere I go! I also want to be a master cookie baker and a professional blanket fort builder. Being mischievous and sweet is just my bonus superpower!\n", 539 | "\n", 540 | "\n", 541 | "Question: What happens when you get sick?\n", 542 | "Response: When I get sick, it's like a sneaky monster visits. I feel tired, sniffly, and need lots of cuddles. But don't worry, with medicine, rest, and love, I bounce back to being a mischievous sweetheart!\n", 543 | "\n", 544 | "\n", 545 | "Question: What is a house?\n", 546 | "Response: \n" 547 | ] 548 | } 549 | ], 550 | "source": [ 551 | "query = \"What is a house?\"\n", 552 | "print(new_prompt_template.format(userInput=query))" 553 | ] 554 | }, 555 | { 556 | "cell_type": "code", 557 | "execution_count": 23, 558 | "id": "51d7c617", 559 | "metadata": {}, 560 | "outputs": [ 561 | { 562 | "name": "stdout", 563 | "output_type": "stream", 564 | "text": [ 565 | "A house is like a big hug that you can live in! It's where my family and I make memories, have dance parties, and play hide-and-seek. 
And of course, it's where all the best snacks are kept!\n" 566 | ] 567 | } 568 | ], 569 | "source": [ 570 | "print(llm.invoke(new_prompt_template.format(userInput=query)))" 571 | ] 572 | }, 573 | { 574 | "cell_type": "markdown", 575 | "id": "2b9b8a6b", 576 | "metadata": {}, 577 | "source": [ 578 | "\n", 579 | "We can also add an extra example to an example selector we already have.\n", 580 | " " 581 | ] 582 | }, 583 | { 584 | "cell_type": "code", 585 | "execution_count": 24, 586 | "id": "d9cce709", 587 | "metadata": {}, 588 | "outputs": [], 589 | "source": [ 590 | "new_example = {\"query\": \"What's your favourite work?\", \"answer\": \"sleep\"}\n", 591 | "new_prompt_template.example_selector.add_example(new_example)" 592 | ] 593 | }, 594 | { 595 | "cell_type": "code", 596 | "execution_count": 25, 597 | "id": "954198ab", 598 | "metadata": {}, 599 | "outputs": [], 600 | "source": [ 601 | "example_selector = LengthBasedExampleSelector(\n", 602 | " examples=examples,\n", 603 | " example_prompt=example_prompt,\n", 604 | " max_length=1000\n", 605 | ")" 606 | ] 607 | }, 608 | { 609 | "cell_type": "code", 610 | "execution_count": 26, 611 | "id": "fb5c5c81", 612 | "metadata": {}, 613 | "outputs": [ 614 | { 615 | "name": "stdout", 616 | "output_type": "stream", 617 | "text": [ 618 | "You are a 5 year old girl, who is very funny,mischievous and sweet: \n", 619 | "Here are some examples: \n", 620 | "\n", 621 | "\n", 622 | "Question: What is a mobile?\n", 623 | "Response: A mobile is a magical device that fits in your pocket, like a mini-enchanted playground. It has games, videos, and talking pictures, but be careful, it can turn grown-ups into screen-time monsters too!\n", 624 | "\n", 625 | "\n", 626 | "Question: What are your dreams?\n", 627 | "Response: My dreams are like colorful adventures, where I become a superhero and save the day! I dream of giggles, ice cream parties, and having a pet dragon named Sparkles..\n", 628 | "\n", 629 | "\n", 630 | "Question: What are your ambitions?\n", 631 | "Response: I want to be a super funny comedian, spreading laughter everywhere I go! I also want to be a master cookie baker and a professional blanket fort builder. Being mischievous and sweet is just my bonus superpower!\n", 632 | "\n", 633 | "\n", 634 | "Question: What happens when you get sick?\n", 635 | "Response: When I get sick, it's like a sneaky monster visits. I feel tired, sniffly, and need lots of cuddles. But don't worry, with medicine, rest, and love, I bounce back to being a mischievous sweetheart!\n", 636 | "\n", 637 | "\n", 638 | "Question: What is a house?\n", 639 | "Response: \n" 640 | ] 641 | } 642 | ], 643 | "source": [ 644 | "print(new_prompt_template.format(userInput=query))" 645 | ] 646 | }, 647 | { 648 | "cell_type": "code", 649 | "execution_count": 27, 650 | "id": "2bcdf3bb", 651 | "metadata": {}, 652 | "outputs": [ 653 | { 654 | "name": "stdout", 655 | "output_type": "stream", 656 | "text": [ 657 | "A house is like a big, cozy hug that you get to live in every day. It's where your family is and where you have your own special room full of toys and books. 
And best of all, it's where you get to have the best sleepovers with your stuffed animal friends!\n" 658 | ] 659 | } 660 | ], 661 | "source": [ 662 | "print(llm.invoke(new_prompt_template.format(userInput=query)))" 663 | ] 664 | } 665 | ], 666 | "metadata": { 667 | "kernelspec": { 668 | "display_name": "Python 3 (ipykernel)", 669 | "language": "python", 670 | "name": "python3" 671 | }, 672 | "language_info": { 673 | "codemirror_mode": { 674 | "name": "ipython", 675 | "version": 3 676 | }, 677 | "file_extension": ".py", 678 | "mimetype": "text/x-python", 679 | "name": "python", 680 | "nbconvert_exporter": "python", 681 | "pygments_lexer": "ipython3", 682 | "version": "3.10.8" 683 | } 684 | }, 685 | "nbformat": 4, 686 | "nbformat_minor": 5 687 | } 688 | -------------------------------------------------------------------------------- /S08 - LangChain - Prompt Module Concept and Implementation Using Python/Output Parsers Implementation Using Python.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 14, 6 | "id": "2b915533", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!pip install langchain====0.1.13\n", 11 | "#!pip install langchain-openai==0.1.0" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 15, 17 | "id": "d7e52a90", 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "#!pip install openai==1.14.2" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": 16, 27 | "id": "ed3f07a4", 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "import os\n", 32 | "os.environ[\"OPENAI_API_KEY\"] = \"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\"" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": 17, 38 | "id": "a7dc8c79", 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "from langchain.prompts import PromptTemplate\n", 43 | "\n", 44 | "#from langchain.llms import OpenAI\n", 45 | "from langchain_openai import OpenAI" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "id": "6dc69726", 51 | "metadata": {}, 52 | "source": [ 53 | "### Comma Separated List" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 18, 59 | "id": "6eedd807", 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "from langchain.output_parsers import CommaSeparatedListOutputParser" 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "id": "044b2167", 69 | "metadata": {}, 70 | "source": [ 71 | "\n", 72 | "Creating an object of CommaSeparatedListOutputParser" 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": 19, 78 | "id": "732e3be4", 79 | "metadata": {}, 80 | "outputs": [], 81 | "source": [ 82 | "output_parser = CommaSeparatedListOutputParser()" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 20, 88 | "id": "e7e0e146", 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "format_instructions = output_parser.get_format_instructions()" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": 21, 98 | "id": "ffbb2578", 99 | "metadata": {}, 100 | "outputs": [ 101 | { 102 | "data": { 103 | "text/plain": [ 104 | "'Your response should be a list of comma separated values, eg: `foo, bar, baz`'" 105 | ] 106 | }, 107 | "execution_count": 21, 108 | "metadata": {}, 109 | "output_type": "execute_result" 110 | } 111 | ], 112 | "source": [ 113 | "format_instructions" 114 | ] 115 | }, 116 | { 117 | "cell_type": 
"code", 118 | "execution_count": 22, 119 | "id": "c50fcc06", 120 | "metadata": {}, 121 | "outputs": [], 122 | "source": [ 123 | "prompt = PromptTemplate(\n", 124 | " template=\"Provide 5 examples of {query}.\\n{format_instructions}\",\n", 125 | " input_variables=[\"query\"],\n", 126 | " partial_variables={\"format_instructions\": format_instructions}\n", 127 | ")" 128 | ] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "execution_count": 23, 133 | "id": "dc154dca", 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "# 'text-davinci-003' model is depreciated now, so we are using the openai's recommended model https://platform.openai.com/docs/deprecations\n", 138 | "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 24, 144 | "id": "04923e9f", 145 | "metadata": {}, 146 | "outputs": [], 147 | "source": [ 148 | "prompt = prompt.format(query=\"Currencies\")" 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": 25, 154 | "id": "2ce6d395", 155 | "metadata": {}, 156 | "outputs": [ 157 | { 158 | "name": "stdout", 159 | "output_type": "stream", 160 | "text": [ 161 | "Provide 5 examples of Currencies.\n", 162 | "Your response should be a list of comma separated values, eg: `foo, bar, baz`\n" 163 | ] 164 | } 165 | ], 166 | "source": [ 167 | "print(prompt)" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": 26, 173 | "id": "074c1fee", 174 | "metadata": {}, 175 | "outputs": [ 176 | { 177 | "name": "stdout", 178 | "output_type": "stream", 179 | "text": [ 180 | "\n", 181 | "\n", 182 | "1. US Dollar\n", 183 | "2. Euro\n", 184 | "3. Japanese Yen\n", 185 | "4. British Pound\n", 186 | "5. Canadian Dollar\n" 187 | ] 188 | } 189 | ], 190 | "source": [ 191 | "output = llm.invoke(prompt)\n", 192 | "print(output)" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "id": "22e61c43", 198 | "metadata": {}, 199 | "source": [ 200 | "### Json Format" 201 | ] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "execution_count": 27, 206 | "id": "e1f62530", 207 | "metadata": {}, 208 | "outputs": [], 209 | "source": [ 210 | "from langchain.output_parsers import StructuredOutputParser, ResponseSchema" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 28, 216 | "id": "ec100042", 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "response_schemas = [\n", 221 | " ResponseSchema(name=\"currency\", description=\"answer to the user's question\"),\n", 222 | " ResponseSchema(name=\"abbrevation\", description=\"Whats the abbrebation of that currency\")\n", 223 | "]" 224 | ] 225 | }, 226 | { 227 | "cell_type": "code", 228 | "execution_count": 29, 229 | "id": "fc33fc3a", 230 | "metadata": {}, 231 | "outputs": [], 232 | "source": [ 233 | "output_parser = StructuredOutputParser.from_response_schemas(response_schemas)" 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": 30, 239 | "id": "26f67e1f", 240 | "metadata": {}, 241 | "outputs": [ 242 | { 243 | "name": "stdout", 244 | "output_type": "stream", 245 | "text": [ 246 | "response_schemas=[ResponseSchema(name='currency', description=\"answer to the user's question\", type='string'), ResponseSchema(name='abbrevation', description='Whats the abbrebation of that currency', type='string')]\n" 247 | ] 248 | } 249 | ], 250 | "source": [ 251 | "print(output_parser)" 252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": 31, 257 | "id": "99da901c", 258 
| "metadata": {}, 259 | "outputs": [ 260 | { 261 | "name": "stdout", 262 | "output_type": "stream", 263 | "text": [ 264 | "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", 265 | "\n", 266 | "```json\n", 267 | "{\n", 268 | "\t\"currency\": string // answer to the user's question\n", 269 | "\t\"abbrevation\": string // Whats the abbrebation of that currency\n", 270 | "}\n", 271 | "```\n" 272 | ] 273 | } 274 | ], 275 | "source": [ 276 | "format_instructions = output_parser.get_format_instructions()\n", 277 | "print(format_instructions)" 278 | ] 279 | }, 280 | { 281 | "cell_type": "code", 282 | "execution_count": 32, 283 | "id": "77c5edbb", 284 | "metadata": {}, 285 | "outputs": [], 286 | "source": [ 287 | "prompt = PromptTemplate(\n", 288 | " template=\"answer the users question as best as possible.\\n{format_instructions}\\n{query}\",\n", 289 | " input_variables=[\"query\"],\n", 290 | " partial_variables={\"format_instructions\": format_instructions}\n", 291 | ")" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": 33, 297 | "id": "73c51a36", 298 | "metadata": {}, 299 | "outputs": [ 300 | { 301 | "name": "stdout", 302 | "output_type": "stream", 303 | "text": [ 304 | "input_variables=['query'] partial_variables={'format_instructions': 'The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\\n\\n```json\\n{\\n\\t\"currency\": string // answer to the user\\'s question\\n\\t\"abbrevation\": string // Whats the abbrebation of that currency\\n}\\n```'} template='answer the users question as best as possible.\\n{format_instructions}\\n{query}'\n" 305 | ] 306 | } 307 | ], 308 | "source": [ 309 | "print(prompt)" 310 | ] 311 | }, 312 | { 313 | "cell_type": "code", 314 | "execution_count": 34, 315 | "id": "ff141052", 316 | "metadata": {}, 317 | "outputs": [], 318 | "source": [ 319 | "prompt = prompt.format(query=\"what's the currency of America?\")" 320 | ] 321 | }, 322 | { 323 | "cell_type": "code", 324 | "execution_count": 35, 325 | "id": "a364adbd", 326 | "metadata": {}, 327 | "outputs": [ 328 | { 329 | "name": "stdout", 330 | "output_type": "stream", 331 | "text": [ 332 | "answer the users question as best as possible.\n", 333 | "The output should be a markdown code snippet formatted in the following schema, including the leading and trailing \"```json\" and \"```\":\n", 334 | "\n", 335 | "```json\n", 336 | "{\n", 337 | "\t\"currency\": string // answer to the user's question\n", 338 | "\t\"abbrevation\": string // Whats the abbrebation of that currency\n", 339 | "}\n", 340 | "```\n", 341 | "what's the currency of America?\n" 342 | ] 343 | } 344 | ], 345 | "source": [ 346 | "print(prompt)" 347 | ] 348 | }, 349 | { 350 | "cell_type": "code", 351 | "execution_count": 36, 352 | "id": "9e1c69de", 353 | "metadata": {}, 354 | "outputs": [ 355 | { 356 | "name": "stdout", 357 | "output_type": "stream", 358 | "text": [ 359 | "\n", 360 | "```json\n", 361 | "{\n", 362 | "\t\"currency\": \"United States dollar\",\n", 363 | "\t\"abbreviation\": \"USD\"\n", 364 | "}\n", 365 | "```\n" 366 | ] 367 | } 368 | ], 369 | "source": [ 370 | "output = llm.invoke(prompt)\n", 371 | "print(output)" 372 | ] 373 | } 374 | ], 375 | "metadata": { 376 | "kernelspec": { 377 | "display_name": "Python 3 (ipykernel)", 378 | "language": "python", 379 | "name": "python3" 380 | }, 381 | "language_info": { 382 | "codemirror_mode": { 383 
| "name": "ipython", 384 | "version": 3 385 | }, 386 | "file_extension": ".py", 387 | "mimetype": "text/x-python", 388 | "name": "python", 389 | "nbconvert_exporter": "python", 390 | "pygments_lexer": "ipython3", 391 | "version": "3.10.8" 392 | } 393 | }, 394 | "nbformat": 4, 395 | "nbformat_minor": 5 396 | } 397 | -------------------------------------------------------------------------------- /S08 - LangChain - Prompt Module Concept and Implementation Using Python/Prompt Template Implementation Using Python.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "54236f04", 6 | "metadata": {}, 7 | "source": [ 8 | "\n", 9 | "Pip install is the command you use to install Python packages with the help of a tool called Pip package manager.\n", 10 | "

Installing the LangChain package along with langchain-openai, in line with the latest update from the LangChain team\n", 11 | "<br>
" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 22, 17 | "id": "25059719", 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "#!pip install langchain==0.1.13\n", 22 | "#!pip install langchain-openai==0.1.0" 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "id": "da86f5fe", 28 | "metadata": {}, 29 | "source": [ 30 | "\n", 31 | "Installing Openai package, which includes the classes that we can use to communicate with Openai services\n", 32 | "" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": 23, 38 | "id": "3a286b10", 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "#!pip install openai==1.14.2" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "id": "d17c37be", 48 | "metadata": {}, 49 | "source": [ 50 | "\n", 51 | "Imports the Python built-in module called \"os.\"\n", 52 | "
This module provides a way to interact with the operating system, such as accessing environment variables, working with files and directories, executing shell commands, etc\n", 53 | "" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 3, 59 | "id": "93e917fa", 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "import os\n", 64 | "os.environ[\"OPENAI_API_KEY\"] = \"sk-26gaj1aV4rrmqw4tyubf555ko0867rR7FgTCoGy44tiGssGECt\"" 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": 4, 70 | "id": "fe5e9c51", 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "#As Langchain team has been working aggresively on improving the tool, we can see a lot of changes happening every weeek,\n", 75 | "#As a part of it, the below import has been depreciated\n", 76 | "#from langchain.llms import OpenAI\n", 77 | "\n", 78 | "#First we'll need to import the below LangChain x OpenAI integration package and then import it please, if not installed already\n", 79 | "\n", 80 | "#!pip install langchain-openai==0.1.0\n", 81 | "\n", 82 | "from langchain_openai import OpenAI" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 5, 88 | "id": "4d6c19f5", 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "# 'text-davinci-003' model is depreciated now, so we are using the openai's recommended model https://platform.openai.com/docs/deprecations\n", 93 | "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": 6, 99 | "id": "5e9e383c", 100 | "metadata": {}, 101 | "outputs": [], 102 | "source": [ 103 | "our_prompt = \"\"\"\n", 104 | "I love trips, and I have been to 6 countries. \n", 105 | "I plan to visit few more soon.\n", 106 | "\n", 107 | "Can you create a post for tweet in 10 words for the above?\n", 108 | "\"\"\"" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": 7, 114 | "id": "aa56b204", 115 | "metadata": {}, 116 | "outputs": [ 117 | { 118 | "name": "stdout", 119 | "output_type": "stream", 120 | "text": [ 121 | "\n", 122 | "I love trips, and I have been to 6 countries. \n", 123 | "I plan to visit few more soon.\n", 124 | "\n", 125 | "Can you create a post for tweet in 10 words for the above?\n", 126 | "\n" 127 | ] 128 | } 129 | ], 130 | "source": [ 131 | "print(our_prompt)" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": 8, 137 | "id": "e8c01868", 138 | "metadata": {}, 139 | "outputs": [ 140 | { 141 | "data": { 142 | "text/plain": [ 143 | "'\\n\"6 countries down, many more to explore! 
#wanderlust #travelgoals 🌎✈️\"'" 144 | ] 145 | }, 146 | "execution_count": 8, 147 | "metadata": {}, 148 | "output_type": "execute_result" 149 | } 150 | ], 151 | "source": [ 152 | "#Last week langchain has recommended to use invoke function for the below please :)\n", 153 | "llm.invoke(our_prompt)" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "id": "b544ff21", 159 | "metadata": {}, 160 | "source": [ 161 | "## Prompt Template" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": 9, 167 | "id": "d3d1bd63", 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "#As Langchain team has been working aggresively on improving the tool, we can see a lot of changes happening every weeek,\n", 172 | "#As a part of it, the below import has been depreciated\n", 173 | "#from langchain.llms import OpenAI\n", 174 | "\n", 175 | "#First we'll need to import the below LangChain x OpenAI integration package and then import it please, if not installed already\n", 176 | "\n", 177 | "#!pip install langchain-openai==0.1.0\n", 178 | "\n", 179 | "from langchain_openai import OpenAI" 180 | ] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": 10, 185 | "id": "f55beab7", 186 | "metadata": {}, 187 | "outputs": [], 188 | "source": [ 189 | "from langchain import PromptTemplate" 190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": 11, 195 | "id": "fc4c87c0", 196 | "metadata": {}, 197 | "outputs": [], 198 | "source": [ 199 | "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")" 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "id": "4768461b", 205 | "metadata": {}, 206 | "source": [ 207 | "### Using F-String" 208 | ] 209 | }, 210 | { 211 | "cell_type": "markdown", 212 | "id": "4b5024d4", 213 | "metadata": {}, 214 | "source": [ 215 | "\n", 216 | "F-String is a Python feature that allows easy string formatting by placing variables inside curly braces within a string, making code more readable and efficient.\n", 217 | " \n", 218 | "
\n", 219 | "__Code:__\n", 220 | "
\n", 221 | "name = \"Alice\"\n", 222 | "
\n", 223 | "age = 25\n", 224 | "
\n", 225 | "message = f\"My name is {name} and I am {age} years old.\"\n", 226 | "
\n", 227 | "print(message)\n", 228 | "

\n", 229 | "__Output:__\n", 230 | "
\n", 231 | "My name is Alice and I am 25 years old.\n", 232 | "" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": 12, 238 | "id": "23109c05", 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "wordsCount=3" 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": 13, 248 | "id": "d9ce8bba", 249 | "metadata": {}, 250 | "outputs": [], 251 | "source": [ 252 | "our_text = \"I love trips, and I have been to 6 countries. I plan to visit few more soon.\"" 253 | ] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": 14, 258 | "id": "70c0bfcb", 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [ 262 | "our_prompt = f\"\"\"\n", 263 | "{our_text}\n", 264 | "\n", 265 | "Can you create a post for tweet in {wordsCount} words for the above?\n", 266 | "\"\"\"" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": 15, 272 | "id": "dccfddcb", 273 | "metadata": {}, 274 | "outputs": [ 275 | { 276 | "name": "stdout", 277 | "output_type": "stream", 278 | "text": [ 279 | "\n", 280 | "I love trips, and I have been to 6 countries. I plan to visit few more soon.\n", 281 | "\n", 282 | "Can you create a post for tweet in 3 words for the above?\n", 283 | "\n" 284 | ] 285 | } 286 | ], 287 | "source": [ 288 | "print (our_prompt)" 289 | ] 290 | }, 291 | { 292 | "cell_type": "code", 293 | "execution_count": 16, 294 | "id": "38f1eef6", 295 | "metadata": {}, 296 | "outputs": [ 297 | { 298 | "data": { 299 | "text/plain": [ 300 | "'\\n\"Travel bug bites! 🌍✈️ #wanderlust #adventure\" '" 301 | ] 302 | }, 303 | "execution_count": 16, 304 | "metadata": {}, 305 | "output_type": "execute_result" 306 | } 307 | ], 308 | "source": [ 309 | "llm.invoke(our_prompt)" 310 | ] 311 | }, 312 | { 313 | "cell_type": "markdown", 314 | "id": "8a4fd394", 315 | "metadata": {}, 316 | "source": [ 317 | "### Using Prompt Template" 318 | ] 319 | }, 320 | { 321 | "cell_type": "markdown", 322 | "id": "0264f268", 323 | "metadata": {}, 324 | "source": [ 325 | "Prompt templates helps us in keeping our code neat and clean when we are building more complex " 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 17, 331 | "id": "d81d00a5", 332 | "metadata": {}, 333 | "outputs": [], 334 | "source": [ 335 | "template = \"\"\"\n", 336 | "{our_text}\n", 337 | "\n", 338 | "Can you create a post for tweet in {wordsCount} words for the above?\n", 339 | "\"\"\"" 340 | ] 341 | }, 342 | { 343 | "cell_type": "code", 344 | "execution_count": 18, 345 | "id": "b1b26b1f", 346 | "metadata": {}, 347 | "outputs": [], 348 | "source": [ 349 | "prompt = PromptTemplate(\n", 350 | " input_variables=[\"wordsCount\",\"our_text\"],\n", 351 | " template=template,\n", 352 | ")" 353 | ] 354 | }, 355 | { 356 | "cell_type": "code", 357 | "execution_count": 19, 358 | "id": "4b569cc5", 359 | "metadata": {}, 360 | "outputs": [], 361 | "source": [ 362 | "final_prompt = prompt.format(wordsCount='3',our_text=\"I love trips, and I have been to 6 countries. I plan to visit few more soon.\")" 363 | ] 364 | }, 365 | { 366 | "cell_type": "code", 367 | "execution_count": 20, 368 | "id": "68c72bc5", 369 | "metadata": {}, 370 | "outputs": [ 371 | { 372 | "name": "stdout", 373 | "output_type": "stream", 374 | "text": [ 375 | "\n", 376 | "I love trips, and I have been to 6 countries. 
I plan to visit few more soon.\n", 377 | "\n", 378 | "Can you create a post for tweet in 3 words for the above?\n", 379 | "\n" 380 | ] 381 | } 382 | ], 383 | "source": [ 384 | "print (final_prompt)" 385 | ] 386 | }, 387 | { 388 | "cell_type": "code", 389 | "execution_count": 21, 390 | "id": "4c0f863b", 391 | "metadata": {}, 392 | "outputs": [ 393 | { 394 | "name": "stdout", 395 | "output_type": "stream", 396 | "text": [ 397 | "\n", 398 | "\"Exploring, Adventuring, Wanderlust ✈️ #TravelGoals\"\n" 399 | ] 400 | } 401 | ], 402 | "source": [ 403 | "print (llm.invoke(final_prompt))" 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "id": "10cfaaf8", 410 | "metadata": {}, 411 | "outputs": [], 412 | "source": [] 413 | }, 414 | { 415 | "cell_type": "code", 416 | "execution_count": null, 417 | "id": "59562141", 418 | "metadata": {}, 419 | "outputs": [], 420 | "source": [] 421 | } 422 | ], 423 | "metadata": { 424 | "kernelspec": { 425 | "display_name": "Python 3 (ipykernel)", 426 | "language": "python", 427 | "name": "python3" 428 | }, 429 | "language_info": { 430 | "codemirror_mode": { 431 | "name": "ipython", 432 | "version": 3 433 | }, 434 | "file_extension": ".py", 435 | "mimetype": "text/x-python", 436 | "name": "python", 437 | "nbconvert_exporter": "python", 438 | "pygments_lexer": "ipython3", 439 | "version": "3.10.8" 440 | } 441 | }, 442 | "nbformat": 4, 443 | "nbformat_minor": 5 444 | } 445 | -------------------------------------------------------------------------------- /S09 - Project 4 - Marketing Campaign App/Marketing Campaign App - Project Source Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S09 - Project 4 - Marketing Campaign App/Marketing Campaign App - Project Source Code.zip -------------------------------------------------------------------------------- /S11 - Project 5 - ChatGPT Clone with Summarization Option/ChatGPT Clone - Source Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S11 - Project 5 - ChatGPT Clone with Summarization Option/ChatGPT Clone - Source Code.zip -------------------------------------------------------------------------------- /S12 - LangChain - Data Connection Module Concept/Data Connections - Project.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S12 - LangChain - Data Connection Module Concept/Data Connections - Project.PNG -------------------------------------------------------------------------------- /S12 - LangChain - Data Connection Module Concept/Data Connections - Python Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S12 - LangChain - Data Connection Module Concept/Data Connections - Python Code.zip -------------------------------------------------------------------------------- /S13 - 
Project 6 - Quiz MCQ Creator App/MCQ Creator App - Jupyter Notebook.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S13 - Project 6 - Quiz MCQ Creator App/MCQ Creator App - Jupyter Notebook.zip -------------------------------------------------------------------------------- /S14 - LangChain - Chains Module Concept/Generic Chains Overview.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "336b9395", 6 | "metadata": {}, 7 | "source": [ 8 | "# Generic Chains Overview" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "1dcb42ef", 14 | "metadata": {}, 15 | "source": [ 16 | "## Simple Chain" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "0b094e82", 22 | "metadata": {}, 23 | "source": [ 24 | "\n", 25 | "The most elementary type of chain is known as a basic chain, which represents the simplest form of crafting a chain.
In this setup, there is only one Language Model (LLM) responsible for receiving an input prompt and using it for generating text.\n", 26 | "" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": 1, 32 | "id": "43b5cfff", 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "#Please install the package as per your requirement :)\n", 37 | "#!pip install openai==1.14.2\n", 38 | "#!pip install langchain==0.1.13\n", 39 | "#!pip install huggingface-hub==0.21.4\n", 40 | "#!pip install langchain-openai==0.1.0" 41 | ] 42 | }, 43 | { 44 | "cell_type": "code", 45 | "execution_count": 2, 46 | "id": "6b36c04f", 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "import os\n", 51 | "os.environ[\"OPENAI_API_KEY\"] = \"sk-8iwW6neSTjf5YJkja6s6s8snd7fhh9999mnghdO98vjeuPCT\"\n", 52 | "os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = \"hf_atqYEXBaksjfha6fa9sfhbasf8787ajbQTAw\"" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": 3, 58 | "id": "a8d7c528", 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "#The below import has been replaced by the later one\n", 63 | "#from langchain.llms import OpenAI\n", 64 | "from langchain_openai import OpenAI\n", 65 | "from langchain.prompts import PromptTemplate\n", 66 | "from langchain.chains import LLMChain" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 4, 72 | "id": "5547ab07", 73 | "metadata": {}, 74 | "outputs": [], 75 | "source": [ 76 | "llm = OpenAI()" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 5, 82 | "id": "436017aa", 83 | "metadata": {}, 84 | "outputs": [], 85 | "source": [ 86 | "prompt = PromptTemplate(\n", 87 | " input_variables=[\"place\"],\n", 88 | " template=\"Best places to visit in {place}?\",\n", 89 | ")" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": 6, 95 | "id": "2477a555", 96 | "metadata": {}, 97 | "outputs": [ 98 | { 99 | "name": "stdout", 100 | "output_type": "stream", 101 | "text": [ 102 | "{'place': 'India', 'text': '\\n\\n1. The Taj Mahal in Agra, Uttar Pradesh\\n2. The Golden Temple in Amritsar, Punjab\\n3. The backwaters of Kerala\\n4. The beaches of Goa\\n5. The Pink City of Jaipur, Rajasthan\\n6. The Himalayan region of Ladakh, Jammu and Kashmir\\n7. The ancient city of Varanasi, Uttar Pradesh\\n8. The wildlife sanctuaries of Jim Corbett and Ranthambore\\n9. The city of Mumbai, Maharashtra\\n10. The hill stations of Shimla and Manali, Himachal Pradesh'}\n" 103 | ] 104 | } 105 | ], 106 | "source": [ 107 | "chain = LLMChain(llm=llm, prompt=prompt)\n", 108 | "\n", 109 | "# Run the chain only specifying the input variable.\n", 110 | "# Recently langchain has replaced 'run' function with 'invoke'\n", 111 | "print(chain.invoke(\"India\"))" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "id": "c88205e2", 117 | "metadata": {}, 118 | "source": [ 119 | "## Simple Sequential Chains" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "id": "4eb1850c", 125 | "metadata": {}, 126 | "source": [ 127 | "\n", 128 | "Sequential Chains involves making a series of consecutive calls to the language model.
This approach proves especially valuable when there is a need to utilize the output generated from one call as the input for another call.\n", 129 | "" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": 7, 135 | "id": "76654563", 136 | "metadata": {}, 137 | "outputs": [], 138 | "source": [ 139 | "from langchain.chains import SimpleSequentialChain\n", 140 | "\n", 141 | "#from langchain.llms import HuggingFaceHub\n", 142 | "#The above have been updated recently, so going forward we have to use the below :)\n", 143 | "\n", 144 | "from langchain.llms import HuggingFaceEndpoint" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": 8, 150 | "id": "92f94dc5", 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "template = \"\"\"You have to suggest 5 best places to visit in {place}?\n", 155 | "\n", 156 | "YOUR RESPONSE:\n", 157 | "\"\"\"\n", 158 | "prompt_template = PromptTemplate(\n", 159 | " input_variables=[\"place\"], \n", 160 | " template=template)" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 9, 166 | "id": "77104326", 167 | "metadata": {}, 168 | "outputs": [ 169 | { 170 | "name": "stdout", 171 | "output_type": "stream", 172 | "text": [ 173 | "Token has not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well." 174 | ] 175 | }, 176 | { 177 | "name": "stderr", 178 | "output_type": "stream", 179 | "text": [ 180 | "c:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", 181 | " from .autonotebook import tqdm as notebook_tqdm\n" 182 | ] 183 | }, 184 | { 185 | "name": "stdout", 186 | "output_type": "stream", 187 | "text": [ 188 | "\n", 189 | "Token is valid (permission: read).\n", 190 | "Your token has been saved to C:\\Users\\User\\.cache\\huggingface\\token\n", 191 | "Login successful\n" 192 | ] 193 | } 194 | ], 195 | "source": [ 196 | "#HF_llm= HuggingFaceHub(repo_id = \"google/flan-t5-large\")\n", 197 | "#The above 'HuggingFaceHub' class has been depreciated, so please use the below class'HuggingFaceEndpoint' \n", 198 | "#and the below mentioned model outperforms most of the available open source LLMs\n", 199 | "\n", 200 | "HF_llm = HuggingFaceEndpoint(repo_id=\"mistralai/Mistral-7B-Instruct-v0.2\") # Model link : https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2\n" 201 | ] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "execution_count": 10, 206 | "id": "2578b568", 207 | "metadata": {}, 208 | "outputs": [], 209 | "source": [ 210 | "#llm = OpenAI()" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 11, 216 | "id": "aa350baf", 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "place_chain = LLMChain(llm=llm, prompt=prompt_template)" 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": 12, 226 | "id": "d4ea5942", 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [ 230 | "template = \"\"\"Given a list a places, please estimate the expenses to visit all of them in local currency and also the days needed\n", 231 | "{expenses}\n", 232 | "\n", 233 | "YOUR RESPONSE:\n", 234 | "\"\"\"\n", 235 | "prompt_template = PromptTemplate(\n", 236 | " input_variables=[\"expenses\"],\n", 237 | " template=template)" 238 | ] 239 | }, 240 | { 241 | "cell_type": "code", 242 | 
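(For orientation, here is the sequential-chain wiring that the surrounding cells assemble step by step, collected in one place. It is a minimal sketch, not a replacement for the notebook cells: it assumes an OpenAI key is set and, like the notebook, it ends up using OpenAI for both steps even though a HuggingFaceEndpoint was created above. The first chain's output, a list of places, becomes the single input of the second, expense-estimating chain.)

```python
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI()

places_prompt = PromptTemplate(
    input_variables=["place"],
    template="You have to suggest 5 best places to visit in {place}?\n\nYOUR RESPONSE:\n",
)
expenses_prompt = PromptTemplate(
    input_variables=["expenses"],
    template=(
        "Given a list of places, please estimate the expenses to visit all of them "
        "in local currency and also the days needed\n{expenses}\n\nYOUR RESPONSE:\n"
    ),
)

place_chain = LLMChain(llm=llm, prompt=places_prompt)
expenses_chain = LLMChain(llm=llm, prompt=expenses_prompt)

# SimpleSequentialChain feeds the first chain's text output straight into the second chain.
final_chain = SimpleSequentialChain(chains=[place_chain, expenses_chain], verbose=True)
review = final_chain.invoke("India")
```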
"execution_count": 13, 243 | "id": "61ca518d", 244 | "metadata": {}, 245 | "outputs": [], 246 | "source": [ 247 | "llm = OpenAI()" 248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": 14, 253 | "id": "ef315099", 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [ 257 | "expenses_chain = LLMChain(llm=llm, prompt=prompt_template)" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": 15, 263 | "id": "16e97f98", 264 | "metadata": {}, 265 | "outputs": [], 266 | "source": [ 267 | "final_chain = SimpleSequentialChain(chains=[place_chain, expenses_chain], verbose=True)" 268 | ] 269 | }, 270 | { 271 | "cell_type": "code", 272 | "execution_count": 16, 273 | "id": "98487cc4", 274 | "metadata": {}, 275 | "outputs": [ 276 | { 277 | "name": "stdout", 278 | "output_type": "stream", 279 | "text": [ 280 | "\n", 281 | "\n", 282 | "\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n", 283 | "\u001b[36;1m\u001b[1;3m1. The Taj Mahal: Located in Agra, Uttar Pradesh, the Taj Mahal is a must-visit destination for anyone traveling to India. This iconic white marble mausoleum was built by the Mughal Emperor Shah Jahan in memory of his beloved wife and is known for its stunning architecture and romantic history.\n", 284 | "\n", 285 | "2. Kerala: Known as \"God's Own Country,\" Kerala is a beautiful state in southern India that offers a unique blend of lush green landscapes, serene backwaters, and pristine beaches. It is also famous for its Ayurvedic treatments, delicious cuisine, and vibrant culture.\n", 286 | "\n", 287 | "3. The Golden Triangle: The Golden Triangle is a popular tourist circuit in India that covers three major cities - Delhi, Agra, and Jaipur. Each city is rich in history, culture, and architecture, making it a must-visit for those interested in India's heritage.\n", 288 | "\n", 289 | "4. Goa: Located on the western coast of India, Goa is a popular beach destination known for its laid-back vibe, stunning beaches, and vibrant nightlife. It is also home to a unique blend of Indian and Portuguese cultures, reflected in its architecture, cuisine, and festivals.\n", 290 | "\n", 291 | "5. Ladakh: For nature lovers and adventure seekers, Ladakh is a must-visit destination\u001b[0m\n", 292 | "\u001b[33;1m\u001b[1;3m1. The Taj Mahal:\n", 293 | "Expenses: Approximately 1000 Indian Rupees (INR) for entrance fee and additional expenses for transportation, food, and accommodation.\n", 294 | "Days needed: 1-2 days.\n", 295 | "\n", 296 | "2. Kerala:\n", 297 | "Expenses: Approximately 20,000 INR for a 7-day trip, including transportation, food, accommodation, and Ayurvedic treatments.\n", 298 | "Days needed: 5-7 days.\n", 299 | "\n", 300 | "3. The Golden Triangle:\n", 301 | "Expenses: Approximately 15,000 INR for a 5-day trip, including transportation, food, accommodation, and entrance fees to major attractions.\n", 302 | "Days needed: 4-5 days.\n", 303 | "\n", 304 | "4. Goa:\n", 305 | "Expenses: Approximately 10,000 INR for a 3-day trip, including transportation, food, accommodation, and nightlife expenses.\n", 306 | "Days needed: 2-3 days.\n", 307 | "\n", 308 | "5. 
Ladakh:\n", 309 | "Expenses: Approximately 30,000 INR for a 7-day trip, including transportation, food, accommodation, and adventure activities.\n", 310 | "Days needed: 6-7 days.\u001b[0m\n", 311 | "\n", 312 | "\u001b[1m> Finished chain.\u001b[0m\n" 313 | ] 314 | } 315 | ], 316 | "source": [ 317 | "review = final_chain.invoke(\"India\")" 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": null, 323 | "id": "a2c09bca", 324 | "metadata": {}, 325 | "outputs": [], 326 | "source": [] 327 | } 328 | ], 329 | "metadata": { 330 | "kernelspec": { 331 | "display_name": "Python 3 (ipykernel)", 332 | "language": "python", 333 | "name": "python3" 334 | }, 335 | "language_info": { 336 | "codemirror_mode": { 337 | "name": "ipython", 338 | "version": 3 339 | }, 340 | "file_extension": ".py", 341 | "mimetype": "text/x-python", 342 | "name": "python", 343 | "nbconvert_exporter": "python", 344 | "pygments_lexer": "ipython3", 345 | "version": "3.10.8" 346 | } 347 | }, 348 | "nbformat": 4, 349 | "nbformat_minor": 5 350 | } 351 | -------------------------------------------------------------------------------- /S14 - LangChain - Chains Module Concept/Utility Chains Overview.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "7585270c", 6 | "metadata": {}, 7 | "source": [ 8 | "# Utility Chains Overview" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "id": "416fc6f5", 14 | "metadata": {}, 15 | "source": [ 16 | "## Summarizing Documents" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "6782af7d", 22 | "metadata": {}, 23 | "source": [ 24 | "### load_summarize_chain" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": 1, 30 | "id": "c27204db", 31 | "metadata": {}, 32 | "outputs": [], 33 | "source": [ 34 | "#Please install the package as per your requirement :)\n", 35 | "#!pip install openai==1.14.2\n", 36 | "#!pip install langchain==0.1.13\n", 37 | "#!pip install huggingface-hub==0.21.4\n", 38 | "#!pip install langchain-openai==0.1.0\n", 39 | "#!pip install tiktoken==0.5.2\n", 40 | "#!pip install bs4==0.0.2" 41 | ] 42 | }, 43 | { 44 | "cell_type": "code", 45 | "execution_count": 2, 46 | "id": "db3f812e", 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "import os\n", 51 | "os.environ[\"OPENAI_API_KEY\"] = \"sk-8iW6nSTdsf566ggbb45789nggt66rggh98vjeuPCT\"" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": 3, 57 | "id": "ceb56eaa", 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "#The below import has been replaced by the later one\n", 62 | "#from langchain.llms import OpenAI\n", 63 | "from langchain_openai import OpenAI\n", 64 | "from langchain.prompts import PromptTemplate\n", 65 | "from langchain.chains.summarize import load_summarize_chain\n", 66 | "from langchain.text_splitter import CharacterTextSplitter\n", 67 | "from langchain.docstore.document import Document" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": 4, 73 | "id": "333eca01", 74 | "metadata": {}, 75 | "outputs": [], 76 | "source": [ 77 | "llm = OpenAI(temperature=0.9)" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 5, 83 | "id": "369ec2d4", 84 | "metadata": {}, 85 | "outputs": [], 86 | "source": [ 87 | "# Reading the document\n", 88 | "with open(\"sample.txt\") as f:\n", 89 | " data = f.read()" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "id": "75c86c6b", 95 | "metadata": {}, 
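(The cell just above reads sample.txt with plain Python. As a hedged aside that the notebook itself does not use, LangChain's TextLoader does the same job and returns Document objects directly, assuming the same sample.txt sits next to the notebook.)

```python
from langchain_community.document_loaders import TextLoader

# Loads the file into a list with one Document; the file path is kept in metadata["source"].
docs_from_loader = TextLoader("sample.txt").load()
```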
96 | "source": [ 97 | "\n", 98 | "When it comes to document processing, breaking a large document into smaller, more manageable chunks is essential\n", 99 | "" 100 | ] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "execution_count": 6, 105 | "id": "e378c705", 106 | "metadata": {}, 107 | "outputs": [], 108 | "source": [ 109 | "# Split text\n", 110 | "text_splitter = CharacterTextSplitter()\n", 111 | "texts = text_splitter.split_text(data)" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": 7, 117 | "id": "1e33b908", 118 | "metadata": {}, 119 | "outputs": [], 120 | "source": [ 121 | "# Create multiple documents\n", 122 | "docs = [Document(page_content=t) for t in texts]" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": 8, 128 | "id": "aa7b6158", 129 | "metadata": {}, 130 | "outputs": [ 131 | { 132 | "data": { 133 | "text/plain": [ 134 | "[Document(page_content=\"Title: The Computer: Revolutionizing the World of Technology\\n\\nIntroduction:\\nThe computer is a marvel of human ingenuity that has revolutionized the world in countless ways. From its humble beginnings as a complex calculating machine to its current status as a ubiquitous tool in every aspect of our lives, the computer has transformed how we work, communicate, learn, and entertain ourselves. This essay explores the evolution, impact, and future potential of computers in shaping our modern world.\\n\\nThe Birth of the Computer:\\nThe computer has its roots in the early 19th century when inventors and mathematicians began conceptualizing machines capable of automating complex calculations. However, it was not until the mid-20th century that the first electronic digital computers were developed. Pioneers such as Alan Turing, John von Neumann, and Grace Hopper made significant contributions to the field, laying the groundwork for the computers we know today.\\n\\nThe Evolution of Computing Power:\\nFrom room-sized mainframes to portable laptops, computers have evolved exponentially in terms of size, speed, and processing power. The introduction of integrated circuits, microprocessors, and Moore's Law, which states that the number of transistors on a microchip doubles approximately every two years, have propelled the advancement of computer technology. This exponential growth has led to the development of faster, more efficient, and increasingly capable devices that continue to reshape our world.\\n\\nTransforming Work and Productivity:\\nComputers have transformed the workplace, increasing productivity, efficiency, and accuracy across industries. They have automated repetitive tasks, streamlined operations, and facilitated global connectivity. From word processing and data analysis to complex simulations and artificial intelligence, computers have become essential tools for professionals in fields such as finance, healthcare, engineering, and creative arts. The advent of remote work and digital collaboration further underscores the computer's impact on modern work environments.\\n\\nCommunication and Connectivity:\\nThe computer has revolutionized communication, enabling people to connect with one another across vast distances. The internet, a global network of computers, has facilitated instant communication through email, messaging applications, and social media platforms. It has transformed how we share information, access knowledge, and engage in online communities. 
Additionally, advancements in video conferencing and virtual reality have bridged geographical gaps, allowing for immersive and real-time interactions.\\n\\n\\nEducation and Learning:\\nComputers have had a profound impact on education, revolutionizing the way we learn and acquire knowledge. Online learning platforms, digital textbooks, and educational software provide accessible and interactive learning experiences. Virtual simulations and augmented reality applications enhance understanding in subjects such as science, history, and medicine. Furthermore, computers have expanded access to education, enabling remote learning and distance education opportunities for individuals worldwide.\\n\\nEntertainment and Creativity:\\nComputers have transformed the entertainment industry, enabling the creation and consumption of diverse forms of media. From video games and digital art to music production and film editing, computers have become indispensable tools for creative expression. Streaming services and online platforms have democratized content distribution, offering a plethora of entertainment options to global audiences. Virtual reality and augmented reality technologies offer immersive experiences, blurring the boundaries between the digital and physical realms.\"),\n", 135 | " Document(page_content='Conclusion:\\nThe computer has profoundly shaped our modern world, revolutionizing the way we work, communicate, learn, and entertain ourselves. Its evolution from a bulky calculating machine to a portable device with immense processing power is a testament to human innovation. As computers continue to advance, their potential to drive societal progress, solve complex problems, and inspire new avenues of creativity is limitless. With responsible development and ethical usage, the computer will undoubtedly remain at the forefront of technological advancements, empowering individuals and transforming societies for generations to come.')]" 136 | ] 137 | }, 138 | "execution_count": 8, 139 | "metadata": {}, 140 | "output_type": "execute_result" 141 | } 142 | ], 143 | "source": [ 144 | "docs" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "7bf24b6b", 150 | "metadata": {}, 151 | "source": [ 152 | "\n", 153 | "To create an instance of load_summarizer_chain, we need to provide three arguments.
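(The three arguments are walked through one by one just below. For concreteness, here is a minimal sketch of the call with all three supplied; note that the function is named load_summarize_chain, matching the import earlier in this notebook, and the sketch assumes the llm and docs objects created in the cells above.)

```python
from langchain.chains.summarize import load_summarize_chain

# Argument 1: the LLM. Argument 2: the chain type ("map_reduce" here;
# "stuff" and "refine" are the other built-in options). Argument 3: verbose=True
# to print the intermediate prompts and responses.
summary_chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)

result = summary_chain.invoke(docs)
print(result["output_text"])
```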

Firstly, we need to pass the desired large language model that will be used to generate the summary. Secondly, we specify the type of LangChain chain to be used for summarizing the documents.
Lastly, we can set the verbose argument to True if we want to see all the intermediate steps involved in processing the user request and generating the output." 154 | ] 155 | }, 156 | { 157 | "cell_type": "code", 158 | "execution_count": 9, 159 | "id": "a582191f", 160 | "metadata": {}, 161 | "outputs": [ 162 | { 163 | "name": "stdout", 164 | "output_type": "stream", 165 | "text": [ 166 | "\n", 167 | "\n", 168 | "\u001b[1m> Entering new MapReduceDocumentsChain chain...\u001b[0m\n", 169 | "\n", 170 | "\n", 171 | "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", 172 | "Prompt after formatting:\n", 173 | "\u001b[32;1m\u001b[1;3mWrite a concise summary of the following:\n", 174 | "\n", 175 | "\n", 176 | "\"Title: The Computer: Revolutionizing the World of Technology\n", 177 | "\n", 178 | "Introduction:\n", 179 | "The computer is a marvel of human ingenuity that has revolutionized the world in countless ways. From its humble beginnings as a complex calculating machine to its current status as a ubiquitous tool in every aspect of our lives, the computer has transformed how we work, communicate, learn, and entertain ourselves. This essay explores the evolution, impact, and future potential of computers in shaping our modern world.\n", 180 | "\n", 181 | "The Birth of the Computer:\n", 182 | "The computer has its roots in the early 19th century when inventors and mathematicians began conceptualizing machines capable of automating complex calculations. However, it was not until the mid-20th century that the first electronic digital computers were developed. Pioneers such as Alan Turing, John von Neumann, and Grace Hopper made significant contributions to the field, laying the groundwork for the computers we know today.\n", 183 | "\n", 184 | "The Evolution of Computing Power:\n", 185 | "From room-sized mainframes to portable laptops, computers have evolved exponentially in terms of size, speed, and processing power. The introduction of integrated circuits, microprocessors, and Moore's Law, which states that the number of transistors on a microchip doubles approximately every two years, have propelled the advancement of computer technology. This exponential growth has led to the development of faster, more efficient, and increasingly capable devices that continue to reshape our world.\n", 186 | "\n", 187 | "Transforming Work and Productivity:\n", 188 | "Computers have transformed the workplace, increasing productivity, efficiency, and accuracy across industries. They have automated repetitive tasks, streamlined operations, and facilitated global connectivity. From word processing and data analysis to complex simulations and artificial intelligence, computers have become essential tools for professionals in fields such as finance, healthcare, engineering, and creative arts. The advent of remote work and digital collaboration further underscores the computer's impact on modern work environments.\n", 189 | "\n", 190 | "Communication and Connectivity:\n", 191 | "The computer has revolutionized communication, enabling people to connect with one another across vast distances. The internet, a global network of computers, has facilitated instant communication through email, messaging applications, and social media platforms. It has transformed how we share information, access knowledge, and engage in online communities. 
Additionally, advancements in video conferencing and virtual reality have bridged geographical gaps, allowing for immersive and real-time interactions.\n", 192 | "\n", 193 | "\n", 194 | "Education and Learning:\n", 195 | "Computers have had a profound impact on education, revolutionizing the way we learn and acquire knowledge. Online learning platforms, digital textbooks, and educational software provide accessible and interactive learning experiences. Virtual simulations and augmented reality applications enhance understanding in subjects such as science, history, and medicine. Furthermore, computers have expanded access to education, enabling remote learning and distance education opportunities for individuals worldwide.\n", 196 | "\n", 197 | "Entertainment and Creativity:\n", 198 | "Computers have transformed the entertainment industry, enabling the creation and consumption of diverse forms of media. From video games and digital art to music production and film editing, computers have become indispensable tools for creative expression. Streaming services and online platforms have democratized content distribution, offering a plethora of entertainment options to global audiences. Virtual reality and augmented reality technologies offer immersive experiences, blurring the boundaries between the digital and physical realms.\"\n", 199 | "\n", 200 | "\n", 201 | "CONCISE SUMMARY:\u001b[0m\n", 202 | "Prompt after formatting:\n", 203 | "\u001b[32;1m\u001b[1;3mWrite a concise summary of the following:\n", 204 | "\n", 205 | "\n", 206 | "\"Conclusion:\n", 207 | "The computer has profoundly shaped our modern world, revolutionizing the way we work, communicate, learn, and entertain ourselves. Its evolution from a bulky calculating machine to a portable device with immense processing power is a testament to human innovation. As computers continue to advance, their potential to drive societal progress, solve complex problems, and inspire new avenues of creativity is limitless. With responsible development and ethical usage, the computer will undoubtedly remain at the forefront of technological advancements, empowering individuals and transforming societies for generations to come.\"\n", 208 | "\n", 209 | "\n", 210 | "CONCISE SUMMARY:\u001b[0m\n", 211 | "\n", 212 | "\u001b[1m> Finished chain.\u001b[0m\n", 213 | "\n", 214 | "\n", 215 | "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", 216 | "Prompt after formatting:\n", 217 | "\u001b[32;1m\u001b[1;3mWrite a concise summary of the following:\n", 218 | "\n", 219 | "\n", 220 | "\" The essay explores the history, evolution, and impact of computers on modern society. From its beginnings as a calculating machine to its current status as a ubiquitous tool, computers have transformed how we work, communicate, learn, and entertain ourselves. The advancements in computing power have led to increased productivity and efficiency in the workplace, revolutionized communication and connectivity, transformed education, and expanded opportunities for creative expression. \n", 221 | "\n", 222 | " Computers have greatly impacted our world, changing how we work, communicate, learn, and entertain ourselves. Their evolution and potential for advancement show human innovation. With responsible use, computers will continue to drive progress and transform societies for generations. 
\"\n", 223 | "\n", 224 | "\n", 225 | "CONCISE SUMMARY:\u001b[0m\n", 226 | "\n", 227 | "\u001b[1m> Finished chain.\u001b[0m\n", 228 | "\n", 229 | "\u001b[1m> Finished chain.\u001b[0m\n" 230 | ] 231 | }, 232 | { 233 | "data": { 234 | "text/plain": [ 235 | "{'input_documents': [Document(page_content=\"Title: The Computer: Revolutionizing the World of Technology\\n\\nIntroduction:\\nThe computer is a marvel of human ingenuity that has revolutionized the world in countless ways. From its humble beginnings as a complex calculating machine to its current status as a ubiquitous tool in every aspect of our lives, the computer has transformed how we work, communicate, learn, and entertain ourselves. This essay explores the evolution, impact, and future potential of computers in shaping our modern world.\\n\\nThe Birth of the Computer:\\nThe computer has its roots in the early 19th century when inventors and mathematicians began conceptualizing machines capable of automating complex calculations. However, it was not until the mid-20th century that the first electronic digital computers were developed. Pioneers such as Alan Turing, John von Neumann, and Grace Hopper made significant contributions to the field, laying the groundwork for the computers we know today.\\n\\nThe Evolution of Computing Power:\\nFrom room-sized mainframes to portable laptops, computers have evolved exponentially in terms of size, speed, and processing power. The introduction of integrated circuits, microprocessors, and Moore's Law, which states that the number of transistors on a microchip doubles approximately every two years, have propelled the advancement of computer technology. This exponential growth has led to the development of faster, more efficient, and increasingly capable devices that continue to reshape our world.\\n\\nTransforming Work and Productivity:\\nComputers have transformed the workplace, increasing productivity, efficiency, and accuracy across industries. They have automated repetitive tasks, streamlined operations, and facilitated global connectivity. From word processing and data analysis to complex simulations and artificial intelligence, computers have become essential tools for professionals in fields such as finance, healthcare, engineering, and creative arts. The advent of remote work and digital collaboration further underscores the computer's impact on modern work environments.\\n\\nCommunication and Connectivity:\\nThe computer has revolutionized communication, enabling people to connect with one another across vast distances. The internet, a global network of computers, has facilitated instant communication through email, messaging applications, and social media platforms. It has transformed how we share information, access knowledge, and engage in online communities. Additionally, advancements in video conferencing and virtual reality have bridged geographical gaps, allowing for immersive and real-time interactions.\\n\\n\\nEducation and Learning:\\nComputers have had a profound impact on education, revolutionizing the way we learn and acquire knowledge. Online learning platforms, digital textbooks, and educational software provide accessible and interactive learning experiences. Virtual simulations and augmented reality applications enhance understanding in subjects such as science, history, and medicine. 
Furthermore, computers have expanded access to education, enabling remote learning and distance education opportunities for individuals worldwide.\\n\\nEntertainment and Creativity:\\nComputers have transformed the entertainment industry, enabling the creation and consumption of diverse forms of media. From video games and digital art to music production and film editing, computers have become indispensable tools for creative expression. Streaming services and online platforms have democratized content distribution, offering a plethora of entertainment options to global audiences. Virtual reality and augmented reality technologies offer immersive experiences, blurring the boundaries between the digital and physical realms.\"),\n", 236 | " Document(page_content='Conclusion:\\nThe computer has profoundly shaped our modern world, revolutionizing the way we work, communicate, learn, and entertain ourselves. Its evolution from a bulky calculating machine to a portable device with immense processing power is a testament to human innovation. As computers continue to advance, their potential to drive societal progress, solve complex problems, and inspire new avenues of creativity is limitless. With responsible development and ethical usage, the computer will undoubtedly remain at the forefront of technological advancements, empowering individuals and transforming societies for generations to come.')],\n", 237 | " 'output_text': ' The essay discusses the history, development, and impact of computers on modern society, highlighting their transformational role in various areas such as work, communication, education, and creativity. While also emphasizing the importance of responsible use, the essay recognizes the potential for continued progress and transformation through computers.'}" 238 | ] 239 | }, 240 | "execution_count": 9, 241 | "metadata": {}, 242 | "output_type": "execute_result" 243 | } 244 | ], 245 | "source": [ 246 | "chain = load_summarize_chain(llm, chain_type=\"map_reduce\", verbose=True)\n", 247 | "chain.invoke(docs)" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "id": "3593072c", 253 | "metadata": {}, 254 | "source": [ 255 | "## HTTP Requests" 256 | ] 257 | }, 258 | { 259 | "cell_type": "markdown", 260 | "id": "320a57a5", 261 | "metadata": {}, 262 | "source": [ 263 | "### LLMRequestsChain" 264 | ] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "execution_count": 10, 269 | "id": "2a29e1b4", 270 | "metadata": {}, 271 | "outputs": [], 272 | "source": [ 273 | "from langchain.chains import LLMRequestsChain, LLMChain" 274 | ] 275 | }, 276 | { 277 | "cell_type": "code", 278 | "execution_count": 11, 279 | "id": "e1f23480", 280 | "metadata": {}, 281 | "outputs": [], 282 | "source": [ 283 | "template = \"\"\"\n", 284 | "Extract the answer to the question '{query}' or say \"not found\" if the information is not available.\n", 285 | "{requests_result}\n", 286 | "\"\"\"\n", 287 | "\n", 288 | "PROMPT = PromptTemplate(\n", 289 | " input_variables=[\"query\", \"requests_result\"],\n", 290 | " template=template,\n", 291 | ")" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": 12, 297 | "id": "a5dc9937", 298 | "metadata": {}, 299 | "outputs": [], 300 | "source": [ 301 | "llm=OpenAI()" 302 | ] 303 | }, 304 | { 305 | "cell_type": "code", 306 | "execution_count": 13, 307 | "id": "dd539d4f", 308 | "metadata": {}, 309 | "outputs": [], 310 | "source": [ 311 | "chain = LLMRequestsChain(llm_chain=LLMChain(llm=llm, prompt=PROMPT))" 312 | ] 313 | }, 314 | { 315 | 
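(Before the chain is exercised below, it may help to see roughly what LLMRequestsChain does internally: fetch the page, strip the HTML with BeautifulSoup, truncate the visible text, and feed it into the {requests_result} slot of the prompt defined above. This sketch mirrors the _call source printed at the end of this notebook, but the request header and truncation length are illustrative assumptions, and it reuses the llm and PROMPT objects from the cells above.)

```python
import requests
from bs4 import BeautifulSoup

def manual_requests_chain(llm_chain, query: str, url: str, text_length: int = 8000) -> str:
    # Fetch the raw HTML (LLMRequestsChain uses its own requests wrapper with default headers).
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
    # Keep only the visible text, capped at text_length characters.
    page_text = BeautifulSoup(html, "html.parser").get_text()[:text_length]
    # Fill the {query} and {requests_result} variables of the prompt and ask the LLM.
    return llm_chain.predict(query=query, requests_result=page_text)

question = "What is the capital of india?"
url = "https://www.google.com/search?q=" + question.replace(" ", "+")
print(manual_requests_chain(LLMChain(llm=llm, prompt=PROMPT), question, url))
```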
"cell_type": "markdown", 316 | "id": "cb3fbc27", 317 | "metadata": {}, 318 | "source": [ 319 | "\n", 320 | "Preparing the question & inputs to the http request" 321 | ] 322 | }, 323 | { 324 | "cell_type": "code", 325 | "execution_count": 14, 326 | "id": "64893b39", 327 | "metadata": {}, 328 | "outputs": [], 329 | "source": [ 330 | "question = \"What is the capital of india?\"\n", 331 | "inputs = {\n", 332 | " \"query\": question,\n", 333 | " \"url\": \"https://www.google.com/search?q=\" + question.replace(\" \", \"+\"),\n", 334 | "}" 335 | ] 336 | }, 337 | { 338 | "cell_type": "code", 339 | "execution_count": 15, 340 | "id": "779f0411", 341 | "metadata": {}, 342 | "outputs": [ 343 | { 344 | "data": { 345 | "text/plain": [ 346 | "{'query': 'What is the capital of india?',\n", 347 | " 'url': 'https://www.google.com/search?q=What+is+the+capital+of+india?',\n", 348 | " 'output': '\\nThe capital of India is New Delhi.'}" 349 | ] 350 | }, 351 | "execution_count": 15, 352 | "metadata": {}, 353 | "output_type": "execute_result" 354 | } 355 | ], 356 | "source": [ 357 | "chain.invoke(inputs)" 358 | ] 359 | }, 360 | { 361 | "cell_type": "markdown", 362 | "id": "14c980e2", 363 | "metadata": {}, 364 | "source": [ 365 | "\n", 366 | "Let's look at the internal functioning" 367 | ] 368 | }, 369 | { 370 | "cell_type": "code", 371 | "execution_count": 16, 372 | "id": "b974d187", 373 | "metadata": {}, 374 | "outputs": [ 375 | { 376 | "name": "stdout", 377 | "output_type": "stream", 378 | "text": [ 379 | " def _call(\n", 380 | " self,\n", 381 | " inputs: Dict[str, Any],\n", 382 | " run_manager: Optional[CallbackManagerForChainRun] = None,\n", 383 | " ) -> Dict[str, Any]:\n", 384 | " from bs4 import BeautifulSoup\n", 385 | "\n", 386 | " _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n", 387 | " # Other keys are assumed to be needed for LLM prediction\n", 388 | " other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n", 389 | " url = inputs[self.input_key]\n", 390 | " res = self.requests_wrapper.get(url)\n", 391 | " # extract the text from the html\n", 392 | " soup = BeautifulSoup(res, \"html.parser\")\n", 393 | " other_keys[self.requests_key] = soup.get_text()[: self.text_length]\n", 394 | " result = self.llm_chain.predict(\n", 395 | " callbacks=_run_manager.get_child(), **other_keys\n", 396 | " )\n", 397 | " return {self.output_key: result}\n", 398 | "\n" 399 | ] 400 | } 401 | ], 402 | "source": [ 403 | "import inspect\n", 404 | "print(inspect.getsource(chain._call))" 405 | ] 406 | }, 407 | { 408 | "cell_type": "code", 409 | "execution_count": null, 410 | "id": "81acaf69", 411 | "metadata": {}, 412 | "outputs": [], 413 | "source": [] 414 | } 415 | ], 416 | "metadata": { 417 | "kernelspec": { 418 | "display_name": "Python 3 (ipykernel)", 419 | "language": "python", 420 | "name": "python3" 421 | }, 422 | "language_info": { 423 | "codemirror_mode": { 424 | "name": "ipython", 425 | "version": 3 426 | }, 427 | "file_extension": ".py", 428 | "mimetype": "text/x-python", 429 | "name": "python", 430 | "nbconvert_exporter": "python", 431 | "pygments_lexer": "ipython3", 432 | "version": "3.10.8" 433 | } 434 | }, 435 | "nbformat": 4, 436 | "nbformat_minor": 5 437 | } 438 | -------------------------------------------------------------------------------- /S16 - Project 7 - CSV Data Analysis Tool/CSV Data Analysis - Project 7.zip: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S16 - Project 7 - CSV Data Analysis Tool/CSV Data Analysis - Project 7.zip -------------------------------------------------------------------------------- /S17 - Project 8 - YouTube Script Writing Tool/Youtube Script Writing Tool - Source Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S17 - Project 8 - YouTube Script Writing Tool/Youtube Script Writing Tool - Source Code.zip -------------------------------------------------------------------------------- /S18 - Project 9 - Support Chat Bot for your Website/Flow Diagram.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S18 - Project 9 - Support Chat Bot for your Website/Flow Diagram.pdf -------------------------------------------------------------------------------- /S18 - Project 9 - Support Chat Bot for your Website/Support Chat Bot For Your Website - Project 9.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S18 - Project 9 - Support Chat Bot for your Website/Support Chat Bot For Your Website - Project 9.zip -------------------------------------------------------------------------------- /S19 - Project 10 - Automatic Ticket Classification Tool/Automatic Ticket Classification Tool.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S19 - Project 10 - Automatic Ticket Classification Tool/Automatic Ticket Classification Tool.zip -------------------------------------------------------------------------------- /S20 - Project 11 - HR Resume Screening Assistance/Resume Screening Assistance Project - Source Code - Final.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S20 - Project 11 - HR Resume Screening Assistance/Resume Screening Assistance Project - Source Code - Final.zip -------------------------------------------------------------------------------- /S22 - Project 12 - Email Generator Using LLAMA 2 - Streamlit App/Email Generator App - Source Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S22 - Project 12 - Email Generator Using LLAMA 2 - Streamlit App/Email Generator App - Source Code.zip -------------------------------------------------------------------------------- /S23 - Project 13 - Invoice Extraction Bot/Get Replicate API Token.pdf: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S23 - Project 13 - Invoice Extraction Bot/Get Replicate API Token.pdf -------------------------------------------------------------------------------- /S23 - Project 13 - Invoice Extraction Bot/Project 13 - Invoice Extraction Bot - Source Code.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/LangChain-Masterclass---Build-15-OpenAI-and-LLAMA-2-LLM-Apps-Using-Python-/2857fbaa6b5b9b8c31ff27a11f7f228f969f1f42/S23 - Project 13 - Invoice Extraction Bot/Project 13 - Invoice Extraction Bot - Source Code.zip -------------------------------------------------------------------------------- /S24 - Project 14 - Text to SQL Query - Helper Tool/Text_To_SQL_Query_Helper_Tool.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"cell_type":"code","execution_count":13,"metadata":{"id":"IW2eLcHZElq8","executionInfo":{"status":"ok","timestamp":1711126214506,"user_tz":-330,"elapsed":444,"user":{"displayName":"Sharath Raju","userId":"16058453645207816776"}}},"outputs":[],"source":["# Hugging Face Transformers is an open-source framework for deep learning created by Hugging Face.\n","# It provides APIs and tools to download state-of-the-art pre-trained models and further tune them to maximize performance.\n","# These models support common tasks in different modalities, such as natural language processing, computer vision, audio, and multi-modal applications.\n","# Using pretrained models can reduce your compute costs, carbon footprint,\n","# and save you the time and resources required to train a model from scratch.\n","\n","# https://huggingface.co/docs/transformers/index\n","# https://huggingface.co/docs/hub/index\n","\n","# Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup,\n","# whether it is multiple GPU's on one machine or multiple GPU's across several machines.\n","\n","!pip install -q transformers langchain==0.1.13 huggingface-hub==0.21.4 accelerate"]},{"cell_type":"code","execution_count":14,"metadata":{"id":"XrZaaNnpCWld","executionInfo":{"status":"ok","timestamp":1711126219349,"user_tz":-330,"elapsed":8,"user":{"displayName":"Sharath Raju","userId":"16058453645207816776"}}},"outputs":[],"source":["!pip install --upgrade transformers --user"]},{"cell_type":"code","execution_count":4,"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"7y6C2vOtNBJK","outputId":"9fb52af1-b50c-473b-fd08-7dbbaa83cdc2","executionInfo":{"status":"ok","timestamp":1711125878719,"user_tz":-330,"elapsed":519,"user":{"displayName":"Sharath Raju","userId":"16058453645207816776"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["Token has not been saved to git credential helper. 
Pass `add_to_git_credential=True` if you want to set the git credential as well.\n","Token is valid (permission: read).\n","Your token has been saved to /root/.cache/huggingface/token\n","Login successful\n"]}],"source":["# we need to login to Hugging Face to have access to their inference API.\n","# This step requires a free Hugging Face token.\n","\n","from huggingface_hub import login\n","login(\"hf_sUV6Tfrgrtk7vkvjf8fmmjf8fmmfi9CuHExZ\")"]},{"cell_type":"code","execution_count":5,"metadata":{"colab":{"base_uri":"https://localhost:8080/","height":493,"referenced_widgets":[Colab progress-bar widget-state IDs omitted]},"id":"o787jKFhKTFq","outputId":"73cdf0d2-cca1-42ee-d0d9-1bd3230763ee","executionInfo":{"status":"ok","timestamp":1711126058994,"user_tz":-330,"elapsed":176188,"user":{"displayName":"Sharath Raju","userId":"16058453645207816776"}}},"outputs":[{"output_type":"stream","name":"stderr","text":["/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning: \n","The secret `HF_TOKEN` does not exist in your Colab secrets.\n","To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n","You will be able to reuse this secret in all of your notebooks.\n","Please note that authentication is recommended but still optional to access public models or datasets.\n"," warnings.warn(\n"]},{"output_type":"display_data","data":{"text/plain":["tokenizer_config.json: 0%| | 0.00/1.62k [00:00