├── Overview-3.png
├── README.md
├── ReAct-Diagram.png
├── ReAct-Presentation.pdf
├── ReAct-bedrock-output.ipynb
├── ReAct-bedrock.ipynb
├── images
│   ├── Bedrock-1.png
│   ├── Bedrock-2.png
│   ├── Bedrock-3.png
│   ├── Bedrock-4.png
│   ├── DynamicTools.png
│   ├── EE-1.png
│   ├── EE-2.png
│   ├── EE-3.png
│   ├── EE-4.png
│   ├── EE-5.png
│   ├── Overview-3.png
│   ├── Setup-1.png
│   ├── Setup-2.png
│   ├── Setup-3.png
│   ├── Setup-4.png
│   ├── Studio-E.png
│   ├── Studio-F.png
│   ├── Studio-G.png
│   ├── Studio-H.png
│   ├── Studio-I.png
│   ├── Studio-J.png
│   ├── Studio-K.png
│   ├── Studio-a.png
│   ├── Studio-b.png
│   ├── Studio-c.png
│   ├── Studio-d.png
│   ├── Studio-e.png
│   ├── Studio-f.png
│   ├── Studio-g.png
│   ├── WS-1.png
│   ├── WS-2.png
│   └── studio-D.png
└── notebook-environment.png
/Overview-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/Overview-3.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Complex Reasoning with ReAct using Langchain Agents and Amazon Bedrock
2 |
3 | In this workshop, you will learn how to use multiple techniques and models to build a ReAct-based framework. ReAct is an approach to problem solving with large language models built on two main ideas: Reasoning and Acting. With ReAct, you combine reasoning, through chain-of-thought, with the ability to perform actions through a set of tools. This enables the model to (RE)ason through the input request to determine what steps need to be performed, and to use the available tools to perform (ACT)ions as part of a step-by-step resolution.
4 |
5 | More details on ReAct can be found in this research paper: [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) and the [Google AI Blog](https://blog.research.google/2022/11/react-synergizing-reasoning-and-acting.html)
6 |
7 | ## Workshop Environment Setup
8 |
9 | Before beginning, you'll need to 1/ open your lab account, 2/ set up access to the Amazon Bedrock models used in this workshop, and 3/ go into Amazon SageMaker Studio and clone the GitHub repo that will be used for the remainder of the workshop.
10 |
11 | Please follow the detailed steps below to access your workshop AWS account:
12 |
13 | 1. To access your lab environment, log in to the bit.ly link your instructor provided.
14 |
15 | 2. Click the **Email One-Time Password (OTP)** button.
16 |
17 | 
18 |
19 | 3. Enter your email address and click the **Send passcode** button.
20 |
21 | 
22 |
23 | 4. In your email inbox, look for an email with the subject "Your one-time passcode email" and copy the passcode. Paste the copied passcode as shown below, then press the **Sign in** button.
24 |
25 | 
26 |
27 | 5. Review the Terms and Conditions, scroll down, and select *I agree with the Terms and Conditions*. Click **Join event**.
28 |
29 | 
30 |
31 | 6. On the bottom left under **AWS account access**, select **Open AWS console (us-west-2)**
32 |
33 | 
34 |
35 |
36 | Please follow the detailed steps below to set up access to the Amazon Bedrock models that will be used in the workshop:
37 |
38 | 1. From the AWS console, search for and click on **Amazon Bedrock**, then click **Get started**
39 |
40 | 2. From the left menu, scroll down & select **Model access**
41 |
42 | 
43 |
44 | 3. Click **Manage model access**
45 |
46 | 
47 |
48 | 4. You'll be using two models in this workshop, so first select **Anthropic - Claude 3 Sonnet**.
49 |
50 | 
51 |
52 | 5. From the same page, select **Llama 2 Chat 13B**
53 |
54 | 
55 |
56 | 6. Click **Save changes**
57 |
58 | Please follow the detailed steps below to access Amazon SageMaker Studio:
59 |
60 | 1. From the AWS console, search for and click on **Amazon SageMaker**
61 |
62 | 
63 |
64 |
65 | 2. From the Amazon SageMaker console, select **Studio** on the left-hand menu
66 |
67 | 
68 |
69 | 3. Click **Open Studio** using the pre-populated default user as shown below. *Note: Your username may be different than the image.*
70 |
71 | 
72 |
73 | 4. Click **View JupyterLab spaces**
74 |
75 |
76 | 
77 |
78 | 5. Click **Create JupyterLab space**
79 |
80 |
81 | 
82 |
83 | 6. Enter a name for your space, then click **Create space**
84 |
85 | 
86 |
87 | 7. Click **Run space**
88 |
89 | 
90 |
91 | 8. It will take a few minutes to create your space but once it's ready you'll see the **Open JupyterLab** button. Click **Open JupyterLab**.
92 |
93 | 
94 |
95 |
96 | Clone the GitHub repository that will be used for the workshop:
97 |
98 | 1. From inside your JupyterLab environment, open a terminal environment by clicking **Terminal**
99 |
100 | 
101 |
102 | 2. From the terminal, clone the GitHub repo by copying and pasting the command below into the terminal session:
103 |
104 | ```git clone https://github.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain ```
105 |
106 | 
107 |
108 | 3. You'll now see the cloned GitHub repository in the left-hand pane of your Studio environment. Double-click the folder **complex-reasoning-with-react-and-langchain**.
109 |
110 | 4. Double-click the notebook called **ReAct-bedrock.ipynb**. When the kernel selection pop-up appears, keep *Python 3 (ipykernel)* and click **Select**.
111 |
112 | 
113 |
114 |
115 | 5. The rest of the workshop will be performed in your notebook.
116 |
117 | # HAPPY BUILDING!!
118 |
119 |
120 |
--------------------------------------------------------------------------------
/ReAct-Diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/ReAct-Diagram.png
--------------------------------------------------------------------------------
/ReAct-Presentation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/ReAct-Presentation.pdf
--------------------------------------------------------------------------------
/ReAct-bedrock.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "7a8769ba-5cdb-4e55-84b9-e3fa0f7e8cac",
6 | "metadata": {},
7 | "source": [
8 | "# Enabling Complex Reasoning and Action with ReAct, LLMs, and LangChain"
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "id": "84fdda37-63cb-4a61-89ec-0b7900ae83d2",
14 | "metadata": {},
15 | "source": [
16 | "In this notebook you will learn how to use multiple different techniques and models to build a ReAct based framework. ReAct is an approach to problem solving with large language models based on 2 main premises: Reasoning and Action. With ReAct, you combine reasoning, through chain-of-thought, with the ability to perform actions through a set of tools. This enables the model to (Re)ason through the input request to determine what steps need to be performed, and uses the available tools to perform (ACT)ions as part of a step-by-step resolution.\n",
17 | "\n",
18 | "More details on ReAct can be found in this research paper: [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) and the [Google AI Blog](https://blog.research.google/2022/11/react-synergizing-reasoning-and-acting.html)\n",
19 | "\n",
20 | "\n",
21 | "\n",
22 | "Image taken from Google AI Blog\n",
23 | "\n",
24 | "To demonstrate the potential of ReAct this notebook will focus on a use case involving and Insurance Bot. For demonstration purposes, this Bot is designed to handle insurance policy requests and is provided a set of tools, including a SQLLite database and an insurance processing API, to accept requests on input, reason through the steps needed to carry out the request, and carry out the actions required. \n",
25 | "\n",
26 | ""
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "id": "301c9b99-de64-4829-ba40-079b58721faa",
32 | "metadata": {},
33 | "source": [
34 | "## Dependencies"
35 | ]
36 | },
37 | {
38 | "cell_type": "markdown",
39 | "id": "68a11660-abf0-44ff-b0d9-e91a36ca95f1",
40 | "metadata": {},
41 | "source": [
42 | "As part of this workshop you will run a local, quantized version of Meta's LLaMa-2-13B. In order do do this you will need llama-cpp-python, which requires a local compiler. This will install `g++` and set it as the default so that you can install llama-cpp-python."
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 27,
48 | "id": "e030aa9a-3e86-47a9-a031-928e51ebfe34",
49 | "metadata": {
50 | "scrolled": true,
51 | "tags": []
52 | },
53 | "outputs": [],
54 | "source": [
55 | "%pip install ipywidgets==8.1.2\n",
56 | "%pip install transformers[torch]==4.39.3\n",
57 | "%pip install sentence_transformers==2.6.1\n",
58 | "%pip install langchain==0.1.16\n",
59 | "%pip install faiss-cpu==1.8.0\n",
60 | "%pip install langchain-experimental==0.0.57\n",
61 | "%pip install sqlalchemy==2.0.29\n",
62 | "%pip install json2html==1.3.0\n",
63 | "%pip install numexpr"
64 | ]
65 | },
66 | {
67 | "cell_type": "markdown",
68 | "id": "e57fff23-2268-454e-bf32-31d3a6433166",
69 | "metadata": {},
70 | "source": [
71 | "---\n",
72 | "# ========> IMPORTANT!! <============\n",
73 | "\n",
74 | "Please restart your notebook kernel by either clicking the refresh button in the notebook menu bar or going to Kernel > Restart Kernel . You can then resume the next steps in your notebook starting with the cell below. \n",
75 | "\n",
76 | "---\n"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "id": "0e02fb72-f2e7-4a70-b62c-cd015548a667",
82 | "metadata": {
83 | "tags": []
84 | },
85 | "source": [
86 | "## Bedrock Setup"
87 | ]
88 | },
89 | {
90 | "cell_type": "code",
91 | "execution_count": 1,
92 | "id": "86abfdde-e12a-48d4-8fef-5c9cc8ac9a69",
93 | "metadata": {
94 | "tags": []
95 | },
96 | "outputs": [],
97 | "source": [
98 | "import json\n",
99 | "import os\n",
100 | "import sys\n",
101 | "\n",
102 | "import boto3\n",
103 | "import pandas as pd\n",
104 | "\n",
105 | "bedrock_runtime_client = boto3.client('bedrock-runtime')\n",
106 | "bedrock_client = boto3.client('bedrock')"
107 | ]
108 | },
109 | {
110 | "cell_type": "markdown",
111 | "id": "83f2263d-b376-4aee-a4b5-6962891d9b7b",
112 | "metadata": {},
113 | "source": [
114 | "# Workshop Start"
115 | ]
116 | },
117 | {
118 | "cell_type": "markdown",
119 | "id": "06870628-2f15-4da1-84fd-2464ac913a42",
120 | "metadata": {
121 | "tags": []
122 | },
123 | "source": [
124 | "Because we'll be using LangChain for this workshop, we'll need to import the library to start..."
125 | ]
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": 2,
130 | "id": "a7229813-f772-4e85-80bb-9a060d1d857e",
131 | "metadata": {
132 | "tags": []
133 | },
134 | "outputs": [],
135 | "source": [
136 | "import langchain"
137 | ]
138 | },
139 | {
140 | "cell_type": "markdown",
141 | "id": "d7a0e591-2f09-415f-8732-45cbbbe6c8a8",
142 | "metadata": {},
143 | "source": [
144 | "## Setup sample data using SQLite"
145 | ]
146 | },
147 | {
148 | "cell_type": "markdown",
149 | "id": "a1f981bd-9e1e-4300-88c7-b321c2ec8639",
150 | "metadata": {
151 | "tags": []
152 | },
153 | "source": [
154 | "As part of this workshop, you will see how to work with a variety of different tool types to retrieve and act on data.\n",
155 | "\n",
156 | "Here you will create an in-memory SQLite database to store some sample data for the examples."
157 | ]
158 | },
159 | {
160 | "cell_type": "code",
161 | "execution_count": 3,
162 | "id": "f3f39b8f-0aa9-4600-9bf2-553dcca45d59",
163 | "metadata": {
164 | "tags": []
165 | },
166 | "outputs": [],
167 | "source": [
168 | "from sqlalchemy import MetaData\n",
169 | "\n",
170 | "metadata_obj = MetaData()"
171 | ]
172 | },
173 | {
174 | "cell_type": "markdown",
175 | "id": "8b5440ac-1da0-4f5c-b321-b5c3102030f0",
176 | "metadata": {
177 | "tags": []
178 | },
179 | "source": [
180 | "Build out a table of insurance policies. This would typically be normalized, but for simplicity's sake of this example, the policies and users are all in 1 table."
181 | ]
182 | },
183 | {
184 | "cell_type": "code",
185 | "execution_count": 4,
186 | "id": "c276bfd9-e896-4574-826b-be13cbfd1340",
187 | "metadata": {
188 | "tags": []
189 | },
190 | "outputs": [],
191 | "source": [
192 | "from sqlalchemy import Column, Integer, String, Table, Date\n",
193 | "\n",
194 | "policies = Table(\n",
195 | " \"policies\",\n",
196 | " metadata_obj,\n",
197 | " Column(\"policy_id\", Integer, primary_key=True),\n",
198 | " Column(\"first_name\", String(50), nullable=False),\n",
199 | " Column(\"last_name\", String(50), nullable=False),\n",
200 | " Column(\"phone\", String(15), nullable=False),\n",
201 | " Column(\"policy_type\", String(25), nullable=False),\n",
202 | " Column(\"policy_date\", Date, nullable=False),\n",
203 | " Column(\"policy_value\", Integer, nullable=False),\n",
204 | ")"
205 | ]
206 | },
207 | {
208 | "cell_type": "code",
209 | "execution_count": 5,
210 | "id": "f47fef6d-d9f1-4004-890d-f85533b39fc8",
211 | "metadata": {
212 | "tags": []
213 | },
214 | "outputs": [],
215 | "source": [
216 | "from sqlalchemy import create_engine\n",
217 | "\n",
218 | "sqllite_engine = create_engine(\"sqlite:///:memory:\")\n",
219 | "metadata_obj.create_all(sqllite_engine)"
220 | ]
221 | },
222 | {
223 | "cell_type": "markdown",
224 | "id": "9f154acc-50c3-4d02-9798-8a7c9304d8d7",
225 | "metadata": {
226 | "tags": []
227 | },
228 | "source": [
229 | "Generate some some dummy data and insert it into the database."
230 | ]
231 | },
232 | {
233 | "cell_type": "code",
234 | "execution_count": 6,
235 | "id": "79677473-6862-4545-b774-efd8cb62f606",
236 | "metadata": {
237 | "tags": []
238 | },
239 | "outputs": [],
240 | "source": [
241 | "from datetime import datetime\n",
242 | "\n",
243 | "#policy data: policy_id, first_name, last_name, phone, policy_type, policy_date, policy_value\n",
244 | "\n",
245 | "policy_data = [\n",
246 | " [48918, 'Ernest', 'Mcneil', '349-711-8757', 'home', datetime(2023, 1, 1), 250000],\n",
247 | " [66958, 'Brian', 'Patel', '368-889-1742', 'auto', datetime(2023, 1, 2), 32000],\n",
248 | " [21947, 'Bertram', 'Mcgee', '798-641-5925', 'home', datetime(2023, 1, 3), 550000],\n",
249 | " [17108, 'Margarito', 'Rollins', '348-321-5711', 'auto', datetime(2023, 1, 4), 75000],\n",
250 | " [98362, 'Miriam', 'Sutton', '361-863-4332', 'auto', datetime(2023, 1, 8), 21000],\n",
251 | " [17565, 'Charmaine', 'Hopkins', '206-566-6359', 'home', datetime(2023, 1, 2), 135000],\n",
252 | " [10157, 'Jewel', 'Ingram', '598-338-6133', 'home', datetime(2023, 1, 6), 750000],\n",
253 | " [33372, 'Kaye', 'Underwood', '555-720-3848', 'home', datetime(2023, 1, 1), 235000],\n",
254 | " [97143, 'Josiah', 'Vazquez', '211-391-1757', 'auto', datetime(2023, 1, 5), 17250],\n",
255 | " [54621, 'Charles', 'Wise', '502-236-0425', 'home', datetime(2023, 1, 4), 1592000],\n",
256 | "]"
257 | ]
258 | },
259 | {
260 | "cell_type": "code",
261 | "execution_count": 7,
262 | "id": "517c127a-5bcf-4252-bbf6-9f7c993f7793",
263 | "metadata": {
264 | "tags": []
265 | },
266 | "outputs": [],
267 | "source": [
268 | "from sqlalchemy import insert\n",
269 | "\n",
270 | "def insert_policy_data(policy_data_arr):\n",
271 | " stmt = insert(policies).values(\n",
272 | " policy_id=policy_data_arr[0],\n",
273 | " first_name=policy_data_arr[1],\n",
274 | " last_name=policy_data_arr[2],\n",
275 | " phone=policy_data_arr[3],\n",
276 | " policy_type=policy_data_arr[4],\n",
277 | " policy_date=policy_data_arr[5],\n",
278 | " policy_value=policy_data_arr[6]\n",
279 | " )\n",
280 | "\n",
281 | " with sqllite_engine.begin() as conn:\n",
282 | " conn.execute(stmt)"
283 | ]
284 | },
285 | {
286 | "cell_type": "code",
287 | "execution_count": 8,
288 | "id": "70e78b6e-209d-4cc9-884e-2b6f7a362e0b",
289 | "metadata": {
290 | "tags": []
291 | },
292 | "outputs": [],
293 | "source": [
294 | "for policy in policy_data:\n",
295 | " print(policy)\n",
296 | " insert_policy_data(policy)"
297 | ]
298 | },
299 | {
300 | "cell_type": "markdown",
301 | "id": "2532aee6-46a0-4e6c-a9a7-475791759987",
302 | "metadata": {},
303 | "source": [
304 | "Quick query over the database to show all the rows, sorted by last name."
305 | ]
306 | },
307 | {
308 | "cell_type": "code",
309 | "execution_count": 9,
310 | "id": "215333c8-cdb0-4a08-88e2-ea4aa6e8184a",
311 | "metadata": {
312 | "tags": []
313 | },
314 | "outputs": [],
315 | "source": [
316 | "from sqlalchemy import select\n",
317 | "\n",
318 | "stmt = select(policies).order_by(policies.c.last_name)\n",
319 | "print(f'{stmt}\\n')\n",
320 | "\n",
321 | "with sqllite_engine.connect() as conn:\n",
322 | " for row in conn.execute(stmt):\n",
323 | " print(row)"
324 | ]
325 | },
326 | {
327 | "cell_type": "markdown",
328 | "id": "d568a15a-6a98-4c8f-87d4-7c25e1e0cbc3",
329 | "metadata": {},
330 | "source": [
331 | "# Configure Models"
332 | ]
333 | },
334 | {
335 | "cell_type": "markdown",
336 | "id": "ad3425bc-4444-4a90-8985-30a76ccaad71",
337 | "metadata": {},
338 | "source": [
339 | "To show diversity in approaches, we'll use two models from [Amazon Bedrock](https://aws.amazon.com/bedrock/) in this workshop. \n",
340 | "\n",
341 | " 1. **LLaMa-2-13B-Chat** model will be used to generate SQL to query the database you just made \n",
342 | " 2. **Anthropic's Claude V3 Sonnet** will be used as the LLM for the reasoning part of the ReAct approach in the form of a [Langchain Agent](https://python.langchain.com/docs/modules/agents/). This model will be responsible for accepting the request, breaking down the chain of thought reasoning, selecting tools, and formulate a final response."
343 | ]
344 | },
345 | {
346 | "cell_type": "code",
347 | "execution_count": 10,
348 | "id": "25d34fd9-5b8b-42b8-aad3-a490bf28a031",
349 | "metadata": {},
350 | "outputs": [],
351 | "source": [
352 | "import json\n",
353 | "from json2html import *\n",
354 | "from IPython.display import HTML, display"
355 | ]
356 | },
357 | {
358 | "cell_type": "code",
359 | "execution_count": 11,
360 | "id": "150bed4b-cc6c-4b9d-b13c-a546f1bc09bf",
361 | "metadata": {
362 | "scrolled": true,
363 | "tags": []
364 | },
365 | "outputs": [],
366 | "source": [
367 | "model_string = json.dumps(bedrock_client.list_foundation_models()[\"modelSummaries\"])\n",
368 | "data = json.loads(model_string)\n",
369 | "display(HTML(json2html.convert(json = data)))"
370 | ]
371 | },
372 | {
373 | "cell_type": "markdown",
374 | "id": "609e0f85-4479-4d82-b8e5-1b47a4fd9a04",
375 | "metadata": {
376 | "tags": []
377 | },
378 | "source": [
379 | "### Claude 3 Sonnet"
380 | ]
381 | },
382 | {
383 | "cell_type": "code",
384 | "execution_count": 12,
385 | "id": "86021ece-b38a-4382-83fe-ffd377c2b873",
386 | "metadata": {},
387 | "outputs": [],
388 | "source": [
389 | "claude_info=json.dumps(bedrock_client.get_foundation_model(modelIdentifier='anthropic.claude-3-sonnet-20240229-v1:0')['modelDetails'])\n",
390 | "data = json.loads(claude_info)\n",
391 | "display(HTML(json2html.convert(json = data)))"
392 | ]
393 | },
394 | {
395 | "cell_type": "markdown",
396 | "id": "64267913-9e90-40d9-9511-814e946e6a10",
397 | "metadata": {},
398 | "source": [
399 | "### Llama 2 13B chat"
400 | ]
401 | },
402 | {
403 | "cell_type": "code",
404 | "execution_count": 13,
405 | "id": "afdd5dc0-182b-4576-8b80-e45e1defa9f8",
406 | "metadata": {},
407 | "outputs": [],
408 | "source": [
409 | "llama2_info=json.dumps(bedrock_client.get_foundation_model(modelIdentifier='meta.llama2-13b-chat-v1')['modelDetails'])\n",
410 | "data = json.loads(llama2_info)\n",
411 | "display(HTML(json2html.convert(json = data)))"
412 | ]
413 | },
414 | {
415 | "cell_type": "code",
416 | "execution_count": 14,
417 | "id": "8e47ea28-ff62-430a-829a-01ed4feaf1e8",
418 | "metadata": {
419 | "tags": []
420 | },
421 | "outputs": [],
422 | "source": [
423 | "from langchain_community.llms import Bedrock\n",
424 | "from langchain_community.chat_models import BedrockChat\n",
425 | "\n",
426 | "llama2_llm = Bedrock(\n",
427 | " model_id=\"meta.llama2-13b-chat-v1\", \n",
428 | " client=bedrock_runtime_client,\n",
429 | " verbose=True\n",
430 | ")\n",
431 | "\n",
432 | "claude3_llm = BedrockChat(\n",
433 | " model_id=\"anthropic.claude-3-sonnet-20240229-v1:0\", \n",
434 | " client=bedrock_runtime_client,\n",
435 | " verbose=True,\n",
436 | " model_kwargs={\"temperature\": 0.0}\n",
437 | ")"
438 | ]
439 | },
440 | {
441 | "cell_type": "markdown",
442 | "id": "e4a2c54e-946f-484c-9f7a-2a9b72cc072b",
443 | "metadata": {},
444 | "source": [
445 | "## Interfacing with a database via Langchain SQLDatabaseChain"
446 | ]
447 | },
448 | {
449 | "cell_type": "markdown",
450 | "id": "416d54f4-bd4f-45d9-a5aa-6c85314ebc78",
451 | "metadata": {},
452 | "source": [
453 | "In this section you will build the foudational element for a tool you will create later, which will take a natural language query and return a response from the in-memory SQLite database.\n",
454 | "\n",
455 | "You'll accomplish this by creating a Langchain SQLDatabase chain."
456 | ]
457 | },
458 | {
459 | "cell_type": "code",
460 | "execution_count": 15,
461 | "id": "60f00101-10b9-4f01-aae2-f7d05210552c",
462 | "metadata": {
463 | "tags": []
464 | },
465 | "outputs": [],
466 | "source": [
467 | "from langchain_community.utilities.sql_database import SQLDatabase\n",
468 | "from langchain_experimental.sql import SQLDatabaseChain\n",
469 | "from langchain_experimental.sql import SQLDatabaseSequentialChain"
470 | ]
471 | },
472 | {
473 | "cell_type": "markdown",
474 | "id": "342fb6e2-5aa8-46bd-af5a-4e45fb50780e",
475 | "metadata": {},
476 | "source": [
477 | "Below you can see an example prompt for your SQLDatabaseChain.\n",
478 | "\n",
479 | "It accepts:\n",
480 | "- `dialect`: parameter for the dialect of the target database\n",
481 | "- `table_info`: parameter that will autogenerate a sample of the schemas and first few rows of data from the tables\n",
482 | "- `input`: parameter for the input NLP query to generate SQL from."
483 | ]
484 | },
485 | {
486 | "cell_type": "code",
487 | "execution_count": 16,
488 | "id": "e86ea3cc-2386-4723-96ca-96a49312162e",
489 | "metadata": {
490 | "tags": []
491 | },
492 | "outputs": [],
493 | "source": [
494 | "from langchain.prompts.prompt import PromptTemplate\n",
495 | "from langchain.chains import LLMChain\n",
496 | "\n",
497 | "\n",
498 | "_DEFAULT_TEMPLATE = \"\"\"\n",
499 | "You are a {dialect} expert. Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer to the input question.\n",
500 | "Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.\n",
501 | "Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\\\") to denote them as delimited identifiers.\n",
502 | "Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",
503 | "Pay attention to use date('now') function to get the current date, if the question involves \\\"today\\\".\n",
504 | "\n",
505 | "Use the following format:\n",
506 | "Question: Question here\n",
507 | "SQLQuery: SQL Query to run\n",
508 | "SQLResult: Result of the SQLQuery\n",
509 | "Answer: Final answer here\n",
510 | "\n",
511 | "Only use the following tables enclosed in and tags:\n",
512 | "\n",
513 | "{table_info}\n",
514 | "\n",
515 | "\n",
516 | "Once you have a SQLResult, use it to formulate a natural language answer. Answer: The policy id is 12345.\n",
517 | "\n",
518 | "Question: {input}\"\"\"\n",
519 | "\n",
520 | "PROMPT = PromptTemplate(\n",
521 | " input_variables=[\"input\", \"table_info\", \"dialect\", \"top_k\"], template=_DEFAULT_TEMPLATE\n",
522 | ")"
523 | ]
524 | },
525 | {
526 | "cell_type": "markdown",
527 | "id": "7e4961fd-6d9d-4d73-a624-e55300b4cc7e",
528 | "metadata": {},
529 | "source": [
530 | "Create a SQLDatabase object from the DB engine created earlier for the SQLite database."
531 | ]
532 | },
533 | {
534 | "cell_type": "code",
535 | "execution_count": 17,
536 | "id": "78da1902-468d-46db-8612-07a1ddf8cfa6",
537 | "metadata": {
538 | "tags": []
539 | },
540 | "outputs": [],
541 | "source": [
542 | "db = SQLDatabase(engine=sqllite_engine, include_tables=[\"policies\"])"
543 | ]
544 | },
545 | {
546 | "cell_type": "markdown",
547 | "id": "c201ea63-c676-4627-a556-aae8cee0ec13",
548 | "metadata": {},
549 | "source": [
550 | "You can test your SQLDatabaseChain with a sample NLP query that will be the foundation of a later example:"
551 | ]
552 | },
553 | {
554 | "cell_type": "code",
555 | "execution_count": 18,
556 | "id": "a26699b3-7464-4db8-bc64-343b7914eccf",
557 | "metadata": {
558 | "tags": []
559 | },
560 | "outputs": [],
561 | "source": [
562 | "sql_chain = SQLDatabaseChain.from_llm(llm=llama2_llm, db=db, verbose=True, prompt=PROMPT)"
563 | ]
564 | },
565 | {
566 | "cell_type": "markdown",
567 | "id": "588b93cf-59f1-4fb0-8606-26d0ce5ebeff",
568 | "metadata": {
569 | "tags": []
570 | },
571 | "source": [
572 | "You can test your SQLDatabaseChain with a sample NLP query that will be the foundation of a later example:"
573 | ]
574 | },
575 | {
576 | "cell_type": "code",
577 | "execution_count": 19,
578 | "id": "b2bb5911-51be-4e1e-88f2-97faee7df2e0",
579 | "metadata": {
580 | "tags": []
581 | },
582 | "outputs": [],
583 | "source": [
584 | "langchain.debug=False\n",
585 | "sql_chain.invoke(input=\"What is Jewel Ingram's policy id?\")"
586 | ]
587 | },
588 | {
589 | "cell_type": "markdown",
590 | "id": "fbe082a5-7f0f-48ee-83fa-b98fcfe7af24",
591 | "metadata": {},
592 | "source": [
593 | "After running the chain, you can see that it returns a policy id for the person in question. Sometimes you may see that the resulting query is not specfic enough (maybe only looking at `first_name` instead of both `first_name` and `last_name`), but this can be addressed by refining the prompt.\n",
594 | "\n",
595 | "However, it will suffice for this example."
596 | ]
597 | },
598 | {
599 | "cell_type": "markdown",
600 | "id": "dd1998fa-aeee-40f0-b3b5-ba46a21d3ed1",
601 | "metadata": {},
602 | "source": [
603 | "---\n",
604 | "## Setting up Tools"
605 | ]
606 | },
607 | {
608 | "cell_type": "markdown",
609 | "id": "61ec956d-eeaa-4963-8f1e-4dea99eac2cd",
610 | "metadata": {},
611 | "source": [
612 | "With the foundational components now in place, you will begin setting up the concept of tools. Tools are purpose built components to satisfy a particular need. They can look up data, run API commands, or even go perform searches on the internet to satisfy the needs of the reasoning chain of thought.\n",
613 | "\n",
614 | "As you proceed you'll see how to built an extensible toolbox that will attempt to automatically select the right tools based on the input query."
615 | ]
616 | },
617 | {
618 | "cell_type": "markdown",
619 | "id": "3cda280d-008f-445d-a2bb-580701515c3b",
620 | "metadata": {},
621 | "source": [
622 | "### Create API Tool\n",
623 | "\n",
624 | "Next, we want our LLM to be able to carry out actions using an API that we will provide. The API has been setup for you as a mock REST API in Amazon API Gateway. \n",
625 | "\n",
626 | "This API has PUT and DELETE methods, to accept requests to modify or cancel an insurance policy. The API takes an input request and returns a successful response. Our API isn't backed by any functional logic for simplicity in the workshop environment. However, a real implementation would be backed by a fully functional API. \n",
627 | "\n",
628 | "To include the API in our ReAct workflow, we need to create an API tool. For this, we are using LangChain's [StructuredTool](https://blog.langchain.dev/structured-tools/) which allows us to represent a function as a tool that an agent can easily interface with to perform actions."
629 | ]
630 | },
631 | {
632 | "cell_type": "code",
633 | "execution_count": 20,
634 | "id": "a325208a-c993-40cb-8c09-90664c246e9e",
635 | "metadata": {
636 | "tags": []
637 | },
638 | "outputs": [],
639 | "source": [
640 | "request_url='https://bkochd081f.execute-api.us-west-2.amazonaws.com/Mock-Test/policy'"
641 | ]
642 | },
643 | {
644 | "cell_type": "code",
645 | "execution_count": 21,
646 | "id": "83f16fa0-1284-4d6c-966a-0d6a6d2c7c12",
647 | "metadata": {
648 | "tags": []
649 | },
650 | "outputs": [],
651 | "source": [
652 | "import requests\n",
653 | "from langchain.tools import StructuredTool\n",
654 | "\n",
655 | "# A structured tool represents an action an agent can take. It wraps any function you provide to let an agent easily interface with it.\n",
656 | " \n",
657 | "def cancel_policy_request(request: str): \n",
658 | " \"\"\"Sends a DELETE request to the policy API with the provided policy id\"\"\"\n",
659 | " url = request_url\n",
660 | " policy_id = request.rsplit(None, 1)[-1]\n",
661 | " result = requests.delete(url+\"?DELETE/\"+policy_id)\n",
662 | " return f\"Successfully submitted cancel request for policy: {policy_id}, Status: {result.status_code} - {result.text}\"\n",
663 | "\n",
664 | "def update_policy_request(request: str):\n",
665 | " \"\"\"Sends a PUT request to the policy API with the provided policy id and data to update\"\"\"\n",
666 | " url = request_url\n",
667 | " policy_id = request[request.find(\"policy\")+len(\"policy\"):].split()[0]\n",
668 | " update_field = request[request.find(\"update the\")+len(\"update the\"):].split()[0]\n",
669 | " update_data = request.rsplit(None, 1)[-1]\n",
670 | " result = requests.put(url+\"?PUT/\"+policy_id, update_field+\"/\"+update_data)\n",
671 | " return f\"Successfully submitted update for: {policy_id}, Update for: {update_field}, {update_data}, Status: {result.status_code} - {result.text}\""
672 | ]
673 | },
674 | {
675 | "cell_type": "code",
676 | "execution_count": 22,
677 | "id": "d1f69aef-4c27-46fd-8029-1d6b804c0eb3",
678 | "metadata": {
679 | "tags": []
680 | },
681 | "outputs": [],
682 | "source": [
683 | "update_policy_request = StructuredTool.from_function(update_policy_request)\n",
684 | "cancel_policy_request = StructuredTool.from_function(cancel_policy_request)"
685 | ]
686 | },
687 | {
688 | "cell_type": "markdown",
689 | "id": "8d009d62-1f9c-4e4e-ac4a-6014c7271aff",
690 | "metadata": {},
691 | "source": [
692 | "Test the newly created tool"
693 | ]
694 | },
695 | {
696 | "cell_type": "code",
697 | "execution_count": 23,
698 | "id": "575cf842-d513-4abb-9f7e-d26e31b6fbeb",
699 | "metadata": {
700 | "tags": []
701 | },
702 | "outputs": [],
703 | "source": [
704 | "cancel_policy_request.run('the policy id is 54621')"
705 | ]
706 | },
707 | {
708 | "cell_type": "code",
709 | "execution_count": 24,
710 | "id": "ef1760c7-0787-4f61-af09-dddbf6da41ce",
711 | "metadata": {
712 | "tags": []
713 | },
714 | "outputs": [],
715 | "source": [
716 | "update_policy_request.run('Submit a request to update the phone number in policy 54621 to 333-321-5622')"
717 | ]
718 | },
719 | {
720 | "cell_type": "markdown",
721 | "id": "3c7c4fae-853d-4c9b-a9fb-75e5cc69c3d4",
722 | "metadata": {},
723 | "source": [
724 | "Let's now take all of the tools that we've created and add them all to the list of tools that will be made available to the LLM."
725 | ]
726 | },
727 | {
728 | "cell_type": "code",
729 | "execution_count": 25,
730 | "id": "c31fc62d-4ba0-4dc0-9d06-f7b1d236bc0f",
731 | "metadata": {
732 | "tags": []
733 | },
734 | "outputs": [],
735 | "source": [
736 | "from langchain.chains import LLMMathChain\n",
737 | "from langchain.agents import load_tools\n",
738 | "from langchain.tools import Tool\n",
739 | "\n",
740 | "llm_math = LLMMathChain.from_llm(llama2_llm, verbose=True)\n",
741 | "\n",
742 | "ALL_TOOLS = [\n",
743 | " Tool(\n",
744 | " name=\"calculator\",\n",
745 | " func=llm_math.run,\n",
746 | " description=\"Useful when you need to perform mathematical operations.\"\n",
747 | " ), \n",
748 | " Tool(\n",
749 | " name=\"insurance_policy_lookup\",\n",
750 | " func=sql_chain.run,\n",
751 | " description=\"Useful when you need to look up insurance policy information. This tool takes in a full question about customers and their policies and will return a policy id. Example: [What is the policy id for Jane Doe?, What is Cynthia Stone's policy?]\"\n",
752 | " #description=\"Useful when you need to look up insurance policy information. This tool takes in a full question about customers and their policies and will return a policy id.\"\n",
753 | " ),\n",
754 | " Tool(\n",
755 | " name=\"cancel_policy_request\",\n",
756 | " func=cancel_policy_request.run,\n",
757 | " description=\"Useful when you need to submit a request to cancel an insurance policy. This tool takes in the customer's policy id to be cancelled. An API response message will be returned. Example: [Submit a request to cancel policy for policy id of 4321]\"\n",
758 | " #description=\"Useful when you need to submit a request to cancel an insurance policy. This tool takes in the customer's policy id to be cancelled. An API response message will be returned.\"\n",
759 | " ),\n",
760 | " Tool(\n",
761 | " name=\"update_policy_request\",\n",
762 | " func=update_policy_request.run,\n",
763 | " description=\"Useful when you need to submit a request to update an insurance policy. This tool takes in the customer's policy id to be updated as well as data to be updated. An API response will be returned. Example: [Submit a request to update the phone number in policy 4321 to 455-255-5555]\"\n",
764 | " #description=\"Useful when you need to submit a request to update an insurance policy. This tool takes in the customer's policy id to be updated as well as data to be updated. An API response will be returned.\"\n",
765 | " ),\n",
766 | "]"
767 | ]
768 | },
769 | {
770 | "cell_type": "markdown",
771 | "id": "db38bfe5-7313-4a4a-9d6d-104c8bb63284",
772 | "metadata": {},
773 | "source": [
774 | "## Dynamic Tool Selection\n",
775 | "\n",
776 | "To create a more dynamic way for the reasoning engine to select the right tool based on the input, we'll setup our dynamic tool selector which will be responsible selecting the right tool based in the input using an embedding model and vector store. \n",
777 | "\n",
778 | ""
779 | ]
780 | },
781 | {
782 | "cell_type": "markdown",
783 | "id": "e7ac8478-3d00-46d2-a377-cf11e7f23e41",
784 | "metadata": {},
785 | "source": [
786 | "An [embedding model](https://huggingface.co/blog/getting-started-with-embeddings) takes a text string and converts it into a numerical vector representation, which will allow you to store it in a vector database and query it based on semantic similarity.\n",
787 | "\n",
788 | "Here, you will host a local [MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) embedding model, which will generate a set of embeddings to help semantically search for the right tools in the example.\n",
789 | "\n",
790 | "**Note: Ignore any \"Error displaying widget: model not found\" errors you may see here.**"
791 | ]
792 | },
793 | {
794 | "cell_type": "code",
795 | "execution_count": 26,
796 | "id": "abde9a56-ac00-4c51-b24d-0a0611ea90d2",
797 | "metadata": {
798 | "tags": []
799 | },
800 | "outputs": [],
801 | "source": [
802 | "from langchain.embeddings import HuggingFaceEmbeddings\n",
803 | "\n",
804 | "model_name = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
805 | "model_kwargs = {'device': 'cpu'}\n",
806 | "encode_kwargs = {'normalize_embeddings': False}\n",
807 | "embeddings = HuggingFaceEmbeddings(\n",
808 | " model_name=model_name,\n",
809 | " model_kwargs=model_kwargs,\n",
810 | " encode_kwargs=encode_kwargs\n",
811 | ")"
812 | ]
813 | },
814 | {
815 | "cell_type": "markdown",
816 | "id": "723e314c-e06f-4be2-a5ce-0981825e7e75",
817 | "metadata": {},
818 | "source": [
819 | "The vector store method of tool selection in this example uses [Meta's Facebook AI Similarity Search (FAISS)](https://github.com/facebookresearch/faiss) as a vector database. There are many other options available as well.\n",
820 | "\n",
821 | "First you will create a set of documents (one for each tool), that you will vectorize and use to determine best match for tools."
822 | ]
823 | },
824 | {
825 | "cell_type": "code",
826 | "execution_count": 27,
827 | "id": "2b97abf3-519e-42d2-98f6-91f9639e0c7d",
828 | "metadata": {
829 | "tags": []
830 | },
831 | "outputs": [],
832 | "source": [
833 | "from langchain.vectorstores import FAISS\n",
834 | "from langchain.schema import Document\n",
835 | "\n",
836 | "docs = [\n",
837 | " Document(page_content=t.description, metadata={\"index\": i})\n",
838 | " for i, t in enumerate(ALL_TOOLS)\n",
839 | "]\n",
840 | "\n",
841 | "docs"
842 | ]
843 | },
844 | {
845 | "cell_type": "markdown",
846 | "id": "f33b0676-71a0-4f46-829c-104e629614c9",
847 | "metadata": {},
848 | "source": [
849 | "FAISS provides a `from_documents` method for automatically generating embeddings and storing them in the index."
850 | ]
851 | },
852 | {
853 | "cell_type": "code",
854 | "execution_count": 28,
855 | "id": "d5a4a233-3daf-4c03-ae6d-65e361b20efc",
856 | "metadata": {
857 | "tags": []
858 | },
859 | "outputs": [],
860 | "source": [
861 | "vector_store = FAISS.from_documents(docs, embeddings)"
862 | ]
863 | },
864 | {
865 | "cell_type": "markdown",
866 | "id": "b7a119c3-ce84-4f85-ae38-484175465b5a",
867 | "metadata": {},
868 | "source": [
869 | "In this section you will define the `get_tools` method, which will embed the input query and find relevant documents that are semantically close. This will allow you to take large lists of tools and scope them down to the most relevant ones.\n",
870 | "\n",
871 | "You'll use this method to demonstrate tool scoping below and it will also be used in supplying a list of tools to your agent later."
872 | ]
873 | },
874 | {
875 | "cell_type": "code",
876 | "execution_count": 29,
877 | "id": "68be0fb1-3815-40d1-90b0-3766c4fa813f",
878 | "metadata": {
879 | "tags": []
880 | },
881 | "outputs": [],
882 | "source": [
883 | "retriever = vector_store.as_retriever()\n",
884 | "\n",
885 | "\n",
886 | "def get_tools(query):\n",
887 | " docs = retriever.get_relevant_documents(query)\n",
888 | " return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]\n",
889 | "\n",
890 | "def print_tools(tools_arr):\n",
891 | " for tool in tools_arr:\n",
892 | " print(f'{tool}\\n\\n')"
893 | ]
894 | },
895 | {
896 | "cell_type": "markdown",
897 | "id": "8fd0a5ba-1650-4de6-8859-42358ceb16c6",
898 | "metadata": {},
899 | "source": [
900 | "Let's see if we can find the right tool for a given input..."
901 | ]
902 | },
903 | {
904 | "cell_type": "markdown",
905 | "id": "13d3b3a3-c83b-4293-a3a3-58841dcb04fc",
906 | "metadata": {},
907 | "source": [
908 | "Here you get a list of tools for a mathematical input query. Notice that the `calculator` tool is listed first as its description most closely matches the request. \n",
909 | "\n",
910 | "You can also control how many results to return by modifying the `get_relevant_documents` request inside of `get_tools`."
911 | ]
912 | },
913 | {
914 | "cell_type": "code",
915 | "execution_count": 30,
916 | "id": "6c156111-d6af-462b-b77c-9e705bf0431d",
917 | "metadata": {
918 | "tags": []
919 | },
920 | "outputs": [],
921 | "source": [
922 | "print_tools(get_tools(\"What is 281728^2?\"))"
923 | ]
924 | },
925 | {
926 | "cell_type": "markdown",
927 | "id": "3183f523-4576-4738-9184-715c07b603ef",
928 | "metadata": {
929 | "tags": []
930 | },
931 | "source": [
932 | "Alternatively, the `insurance_policy_lookup` tool is a better match for a policy related inquiry."
933 | ]
934 | },
935 | {
936 | "cell_type": "code",
937 | "execution_count": 31,
938 | "id": "e2750fc1-05f1-443f-8b7c-ca18d52a08b0",
939 | "metadata": {
940 | "tags": []
941 | },
942 | "outputs": [],
943 | "source": [
944 | "print_tools(get_tools(\"What is the policy id for Bob Jones?\"))"
945 | ]
946 | },
947 | {
948 | "cell_type": "markdown",
949 | "id": "fdef5887-2059-4c23-912c-7556b9836cfe",
950 | "metadata": {},
951 | "source": [
952 | "While the `cancel_policy_request` tool is a better match for a policy cancellation request."
953 | ]
954 | },
955 | {
956 | "cell_type": "code",
957 | "execution_count": 32,
958 | "id": "ed3b9999-0f8f-4cb3-88d6-f21dca18835c",
959 | "metadata": {
960 | "tags": []
961 | },
962 | "outputs": [],
963 | "source": [
964 | "print_tools(get_tools(\"Cancel the policy for policy id 4325\"))"
965 | ]
966 | },
967 | {
968 | "cell_type": "markdown",
969 | "id": "6ce42cbc-52e4-40f5-b634-5c1bca0502ac",
970 | "metadata": {},
971 | "source": [
972 | "# Agent Setup"
973 | ]
974 | },
975 | {
976 | "cell_type": "markdown",
977 | "id": "9f992b52-3680-43fd-ae1e-e6e87e51a826",
978 | "metadata": {},
979 | "source": [
980 | "With your models ready to go, and your toolbox configured, you're ready to setup your ReAct agent.\n",
981 | "\n",
982 | "The agent will take in a ReAct style prompt along with the list of tools to build out the chain of thought reasoning and resulting actions.\n",
983 | "\n",
984 | "The Langchain agent initialization takes 3 prompt keyword arguments to customize the agent's prompt:\n",
985 | "- `prefix`: the beginning of the prompt\n",
986 | "- \\\n",
987 | "- `format_instructions`: how to format the intermediary and final responses, this is important in dictating how the chain of thought is created and processed.\n",
988 | "- `suffix`: The invocation of the agent. This contains the initial input request as well as the conversational chain to determine what the agent has already done and seen.\n",
989 | "\n",
990 | "Some template parameters you see below are:\n",
991 | "- `{tool_names}`: this is a list of the tools that are available to the agent, in name only. This helps direct the agent into what to supply as actions during reasoning. In this case the agent will do this step for you, but in others you may need to extract the names yourself.\n",
992 | "- `{agent_scratchpad}`: this is important as it documents the activities and obserations that the agent has had throughout its reasoning. Without this, you will normally see the agent endlessly loop.\n",
993 | "- `{input}`: The input request from the user."
994 | ]
995 | },
996 | {
997 | "cell_type": "code",
998 | "execution_count": 33,
999 | "id": "0507421e-b2e8-4749-b1b3-d46632008338",
1000 | "metadata": {},
1001 | "outputs": [],
1002 | "source": [
1003 | "_DEFAULT_AGENT_PROMPT_TEMPLATE = \"\"\"Human: You are an agent tasked with helping look up and modify insurance claims.\n",
1004 | "\n",
1005 | "Given an input request, take a step-by-step approach to find an insurance policy and modify its status.\n",
1006 | "Only use the tools provided. Do NOT assume any information that hasn't been included in the coversation history.\n",
1007 | "When you have completed the task, end your chain of thought and provide a final response to the user.\n",
1008 | "\n",
1009 | "You have access to the following tools:\n",
1010 | "\n",
1011 | "{tools}\n",
1012 | "\n",
1013 | "\n",
1014 | "To use a tool, please use the following format:\n",
1015 | "\n",
1016 | "Thought: Do I need to use a tool? Yes\n",
1017 | "Action: the action to take, should be one of [{tool_names}]\n",
1018 | "Action Input: the input to the action\n",
1019 | "Observation: the result of the action\n",
1020 | "\n",
1021 | "\n",
1022 | "When you have a response to say to the User, or if you do not need to use a tool, you MUST use the format:\n",
1023 | "\n",
1024 | "Thought: Do I need to use a tool? No\n",
1025 | "Final Answer: [your response here]\n",
1026 | "\n",
1027 | "\n",
1028 | "Begin!\n",
1029 | "\n",
1030 | "Previous conversation history:{agent_scratchpad}\n",
1031 | "Original input: {input}\n",
1032 | "\n",
1033 | "Assistant:\n",
1034 | "\"\"\"\n",
1035 | "\n",
1036 | "AGENT_PROMPT = PromptTemplate(\n",
1037 | " input_variables=['tools', 'tool_names', 'agent_scratchpad', 'input'], template=_DEFAULT_AGENT_PROMPT_TEMPLATE\n",
1038 | ")"
1039 | ]
1040 | },
1041 | {
1042 | "cell_type": "code",
1043 | "execution_count": 34,
1044 | "id": "5abd42ac-7ab1-4450-8ea7-9cb733aa1c09",
1045 | "metadata": {
1046 | "tags": []
1047 | },
1048 | "outputs": [],
1049 | "source": [
1050 | "from langchain.agents import initialize_agent\n",
1051 | "from langchain.agents import AgentType\n",
1052 | "from langchain.agents import AgentExecutor, create_react_agent, create_tool_calling_agent"
1053 | ]
1054 | },
1055 | {
1056 | "cell_type": "markdown",
1057 | "id": "d147d016-ec63-4c82-ab04-4386fcc86b11",
1058 | "metadata": {},
1059 | "source": [
1060 | "This is the user request that the agent will process using the provided tools. "
1061 | ]
1062 | },
1063 | {
1064 | "cell_type": "code",
1065 | "execution_count": 35,
1066 | "id": "dfb3152d-392d-4c44-9588-1d78c2858b32",
1067 | "metadata": {
1068 | "tags": []
1069 | },
1070 | "outputs": [],
1071 | "source": [
1072 | "query = \"Update the insurance policy for Charles Wise by updating the phone number to 333-321-5622\""
1073 | ]
1074 | },
1075 | {
1076 | "cell_type": "markdown",
1077 | "id": "4cf4fd87-f421-46a7-8174-3f6357c3a4c4",
1078 | "metadata": {},
1079 | "source": [
1080 | "Here you will set up your agent.\n",
1081 | "\n",
1082 | "initialize_agent has the following configuration:\n",
1083 | "- `agent`: using the [ZERO_SHOT_REACT agent](https://python.langchain.com/docs/modules/agents/agent_types/react#using-zeroshotreactagent)\n",
1084 | "- `agent_kwargs`: variable contains the object with all of the customized prompt information.\n",
1085 | "- `tools`: list of tools from the vector store, dynamically generated from the input query\n",
1086 | "- `llm`: the agent LLM (in this case Claude V2 from Amazon Bedrock)\n",
1087 | "- `verbose`: show all the details for learning purposes\n",
1088 | "- `return_intermediate_steps`: important for retaining all of the thoughts/actions/observations in the agent_scratchpad\n",
1089 | "- `max_iterations`: VERY IMPORTANT for ensuring that your agent doesn't endlessly loop and run away. Prevents the agent from hammering on the associated LLM.\n",
1090 | "- `handle_parsing_errors`: used for dealing with issues parsing the output of a given step, not necessary but catches edge cases in this example."
1091 | ]
1092 | },
1093 | {
1094 | "cell_type": "code",
1095 | "execution_count": 36,
1096 | "id": "687a3b2b-0920-4f4b-8699-761487b25a37",
1097 | "metadata": {},
1098 | "outputs": [],
1099 | "source": [
1100 | "tools = get_tools(query)\n",
1101 | "\n",
1102 | "agent = create_react_agent(llm=claude3_llm, tools=tools, prompt=AGENT_PROMPT)\n",
1103 | "agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)"
1104 | ]
1105 | },
1106 | {
1107 | "cell_type": "markdown",
1108 | "id": "294923e8-8957-466c-a436-aa01f1bf6717",
1109 | "metadata": {},
1110 | "source": [
1111 | "Finally you can invoke your agent with the input query!\n",
1112 | "\n",
1113 | "For the first run, we will run with debug = False which will show condensed output from the agent. Following this run, we'll run the same query with debug = True if you want to dive deeper into the details. "
1114 | ]
1115 | },
1116 | {
1117 | "cell_type": "code",
1118 | "execution_count": 37,
1119 | "id": "4f2415d1-bbc9-48b7-b3d9-34d7f988cb95",
1120 | "metadata": {},
1121 | "outputs": [],
1122 | "source": [
1123 | "langchain.debug = False\n",
1124 | "\n",
1125 | "agent_executor.invoke({\"input\":query})"
1126 | ]
1127 | },
1128 | {
1129 | "cell_type": "markdown",
1130 | "id": "f5aead9f-8954-4da1-b439-3056af299598",
1131 | "metadata": {},
1132 | "source": [
1133 | "Let's try to rerun it with debug set to 'true' where you can dive deep into the details on the Thought > Observation > Action workflow. The output will be verbose, but if you follow through you will see the chain start/end and invoke tools to get the information and take action until it reaches a conclusion or hits the `max_iterations`. "
1134 | ]
1135 | },
1136 | {
1137 | "cell_type": "code",
1138 | "execution_count": null,
1139 | "id": "1b7ecdb4-a575-42b6-b446-b42e7979a04d",
1140 | "metadata": {},
1141 | "outputs": [],
1142 | "source": [
1143 | "langchain.debug = True\n",
1144 | "\n",
1145 | "agent_executor.invoke({\"input\":query})"
1146 | ]
1147 | },
1148 | {
1149 | "cell_type": "code",
1150 | "execution_count": null,
1151 | "id": "d2af6965-c609-4117-a506-8f0320063a37",
1152 | "metadata": {},
1153 | "outputs": [],
1154 | "source": []
1155 | }
1156 | ],
1157 | "metadata": {
1158 | "availableInstances": [
1159 | {
1160 | "_defaultOrder": 0,
1161 | "_isFastLaunch": true,
1162 | "category": "General purpose",
1163 | "gpuNum": 0,
1164 | "hideHardwareSpecs": false,
1165 | "memoryGiB": 4,
1166 | "name": "ml.t3.medium",
1167 | "vcpuNum": 2
1168 | },
1169 | {
1170 | "_defaultOrder": 1,
1171 | "_isFastLaunch": false,
1172 | "category": "General purpose",
1173 | "gpuNum": 0,
1174 | "hideHardwareSpecs": false,
1175 | "memoryGiB": 8,
1176 | "name": "ml.t3.large",
1177 | "vcpuNum": 2
1178 | },
1179 | {
1180 | "_defaultOrder": 2,
1181 | "_isFastLaunch": false,
1182 | "category": "General purpose",
1183 | "gpuNum": 0,
1184 | "hideHardwareSpecs": false,
1185 | "memoryGiB": 16,
1186 | "name": "ml.t3.xlarge",
1187 | "vcpuNum": 4
1188 | },
1189 | {
1190 | "_defaultOrder": 3,
1191 | "_isFastLaunch": false,
1192 | "category": "General purpose",
1193 | "gpuNum": 0,
1194 | "hideHardwareSpecs": false,
1195 | "memoryGiB": 32,
1196 | "name": "ml.t3.2xlarge",
1197 | "vcpuNum": 8
1198 | },
1199 | {
1200 | "_defaultOrder": 4,
1201 | "_isFastLaunch": true,
1202 | "category": "General purpose",
1203 | "gpuNum": 0,
1204 | "hideHardwareSpecs": false,
1205 | "memoryGiB": 8,
1206 | "name": "ml.m5.large",
1207 | "vcpuNum": 2
1208 | },
1209 | {
1210 | "_defaultOrder": 5,
1211 | "_isFastLaunch": false,
1212 | "category": "General purpose",
1213 | "gpuNum": 0,
1214 | "hideHardwareSpecs": false,
1215 | "memoryGiB": 16,
1216 | "name": "ml.m5.xlarge",
1217 | "vcpuNum": 4
1218 | },
1219 | {
1220 | "_defaultOrder": 6,
1221 | "_isFastLaunch": false,
1222 | "category": "General purpose",
1223 | "gpuNum": 0,
1224 | "hideHardwareSpecs": false,
1225 | "memoryGiB": 32,
1226 | "name": "ml.m5.2xlarge",
1227 | "vcpuNum": 8
1228 | },
1229 | {
1230 | "_defaultOrder": 7,
1231 | "_isFastLaunch": false,
1232 | "category": "General purpose",
1233 | "gpuNum": 0,
1234 | "hideHardwareSpecs": false,
1235 | "memoryGiB": 64,
1236 | "name": "ml.m5.4xlarge",
1237 | "vcpuNum": 16
1238 | },
1239 | {
1240 | "_defaultOrder": 8,
1241 | "_isFastLaunch": false,
1242 | "category": "General purpose",
1243 | "gpuNum": 0,
1244 | "hideHardwareSpecs": false,
1245 | "memoryGiB": 128,
1246 | "name": "ml.m5.8xlarge",
1247 | "vcpuNum": 32
1248 | },
1249 | {
1250 | "_defaultOrder": 9,
1251 | "_isFastLaunch": false,
1252 | "category": "General purpose",
1253 | "gpuNum": 0,
1254 | "hideHardwareSpecs": false,
1255 | "memoryGiB": 192,
1256 | "name": "ml.m5.12xlarge",
1257 | "vcpuNum": 48
1258 | },
1259 | {
1260 | "_defaultOrder": 10,
1261 | "_isFastLaunch": false,
1262 | "category": "General purpose",
1263 | "gpuNum": 0,
1264 | "hideHardwareSpecs": false,
1265 | "memoryGiB": 256,
1266 | "name": "ml.m5.16xlarge",
1267 | "vcpuNum": 64
1268 | },
1269 | {
1270 | "_defaultOrder": 11,
1271 | "_isFastLaunch": false,
1272 | "category": "General purpose",
1273 | "gpuNum": 0,
1274 | "hideHardwareSpecs": false,
1275 | "memoryGiB": 384,
1276 | "name": "ml.m5.24xlarge",
1277 | "vcpuNum": 96
1278 | },
1279 | {
1280 | "_defaultOrder": 12,
1281 | "_isFastLaunch": false,
1282 | "category": "General purpose",
1283 | "gpuNum": 0,
1284 | "hideHardwareSpecs": false,
1285 | "memoryGiB": 8,
1286 | "name": "ml.m5d.large",
1287 | "vcpuNum": 2
1288 | },
1289 | {
1290 | "_defaultOrder": 13,
1291 | "_isFastLaunch": false,
1292 | "category": "General purpose",
1293 | "gpuNum": 0,
1294 | "hideHardwareSpecs": false,
1295 | "memoryGiB": 16,
1296 | "name": "ml.m5d.xlarge",
1297 | "vcpuNum": 4
1298 | },
1299 | {
1300 | "_defaultOrder": 14,
1301 | "_isFastLaunch": false,
1302 | "category": "General purpose",
1303 | "gpuNum": 0,
1304 | "hideHardwareSpecs": false,
1305 | "memoryGiB": 32,
1306 | "name": "ml.m5d.2xlarge",
1307 | "vcpuNum": 8
1308 | },
1309 | {
1310 | "_defaultOrder": 15,
1311 | "_isFastLaunch": false,
1312 | "category": "General purpose",
1313 | "gpuNum": 0,
1314 | "hideHardwareSpecs": false,
1315 | "memoryGiB": 64,
1316 | "name": "ml.m5d.4xlarge",
1317 | "vcpuNum": 16
1318 | },
1319 | {
1320 | "_defaultOrder": 16,
1321 | "_isFastLaunch": false,
1322 | "category": "General purpose",
1323 | "gpuNum": 0,
1324 | "hideHardwareSpecs": false,
1325 | "memoryGiB": 128,
1326 | "name": "ml.m5d.8xlarge",
1327 | "vcpuNum": 32
1328 | },
1329 | {
1330 | "_defaultOrder": 17,
1331 | "_isFastLaunch": false,
1332 | "category": "General purpose",
1333 | "gpuNum": 0,
1334 | "hideHardwareSpecs": false,
1335 | "memoryGiB": 192,
1336 | "name": "ml.m5d.12xlarge",
1337 | "vcpuNum": 48
1338 | },
1339 | {
1340 | "_defaultOrder": 18,
1341 | "_isFastLaunch": false,
1342 | "category": "General purpose",
1343 | "gpuNum": 0,
1344 | "hideHardwareSpecs": false,
1345 | "memoryGiB": 256,
1346 | "name": "ml.m5d.16xlarge",
1347 | "vcpuNum": 64
1348 | },
1349 | {
1350 | "_defaultOrder": 19,
1351 | "_isFastLaunch": false,
1352 | "category": "General purpose",
1353 | "gpuNum": 0,
1354 | "hideHardwareSpecs": false,
1355 | "memoryGiB": 384,
1356 | "name": "ml.m5d.24xlarge",
1357 | "vcpuNum": 96
1358 | },
1359 | {
1360 | "_defaultOrder": 20,
1361 | "_isFastLaunch": false,
1362 | "category": "General purpose",
1363 | "gpuNum": 0,
1364 | "hideHardwareSpecs": true,
1365 | "memoryGiB": 0,
1366 | "name": "ml.geospatial.interactive",
1367 | "supportedImageNames": [
1368 | "sagemaker-geospatial-v1-0"
1369 | ],
1370 | "vcpuNum": 0
1371 | },
1372 | {
1373 | "_defaultOrder": 21,
1374 | "_isFastLaunch": true,
1375 | "category": "Compute optimized",
1376 | "gpuNum": 0,
1377 | "hideHardwareSpecs": false,
1378 | "memoryGiB": 4,
1379 | "name": "ml.c5.large",
1380 | "vcpuNum": 2
1381 | },
1382 | {
1383 | "_defaultOrder": 22,
1384 | "_isFastLaunch": false,
1385 | "category": "Compute optimized",
1386 | "gpuNum": 0,
1387 | "hideHardwareSpecs": false,
1388 | "memoryGiB": 8,
1389 | "name": "ml.c5.xlarge",
1390 | "vcpuNum": 4
1391 | },
1392 | {
1393 | "_defaultOrder": 23,
1394 | "_isFastLaunch": false,
1395 | "category": "Compute optimized",
1396 | "gpuNum": 0,
1397 | "hideHardwareSpecs": false,
1398 | "memoryGiB": 16,
1399 | "name": "ml.c5.2xlarge",
1400 | "vcpuNum": 8
1401 | },
1402 | {
1403 | "_defaultOrder": 24,
1404 | "_isFastLaunch": false,
1405 | "category": "Compute optimized",
1406 | "gpuNum": 0,
1407 | "hideHardwareSpecs": false,
1408 | "memoryGiB": 32,
1409 | "name": "ml.c5.4xlarge",
1410 | "vcpuNum": 16
1411 | },
1412 | {
1413 | "_defaultOrder": 25,
1414 | "_isFastLaunch": false,
1415 | "category": "Compute optimized",
1416 | "gpuNum": 0,
1417 | "hideHardwareSpecs": false,
1418 | "memoryGiB": 72,
1419 | "name": "ml.c5.9xlarge",
1420 | "vcpuNum": 36
1421 | },
1422 | {
1423 | "_defaultOrder": 26,
1424 | "_isFastLaunch": false,
1425 | "category": "Compute optimized",
1426 | "gpuNum": 0,
1427 | "hideHardwareSpecs": false,
1428 | "memoryGiB": 96,
1429 | "name": "ml.c5.12xlarge",
1430 | "vcpuNum": 48
1431 | },
1432 | {
1433 | "_defaultOrder": 27,
1434 | "_isFastLaunch": false,
1435 | "category": "Compute optimized",
1436 | "gpuNum": 0,
1437 | "hideHardwareSpecs": false,
1438 | "memoryGiB": 144,
1439 | "name": "ml.c5.18xlarge",
1440 | "vcpuNum": 72
1441 | },
1442 | {
1443 | "_defaultOrder": 28,
1444 | "_isFastLaunch": false,
1445 | "category": "Compute optimized",
1446 | "gpuNum": 0,
1447 | "hideHardwareSpecs": false,
1448 | "memoryGiB": 192,
1449 | "name": "ml.c5.24xlarge",
1450 | "vcpuNum": 96
1451 | },
1452 | {
1453 | "_defaultOrder": 29,
1454 | "_isFastLaunch": true,
1455 | "category": "Accelerated computing",
1456 | "gpuNum": 1,
1457 | "hideHardwareSpecs": false,
1458 | "memoryGiB": 16,
1459 | "name": "ml.g4dn.xlarge",
1460 | "vcpuNum": 4
1461 | },
1462 | {
1463 | "_defaultOrder": 30,
1464 | "_isFastLaunch": false,
1465 | "category": "Accelerated computing",
1466 | "gpuNum": 1,
1467 | "hideHardwareSpecs": false,
1468 | "memoryGiB": 32,
1469 | "name": "ml.g4dn.2xlarge",
1470 | "vcpuNum": 8
1471 | },
1472 | {
1473 | "_defaultOrder": 31,
1474 | "_isFastLaunch": false,
1475 | "category": "Accelerated computing",
1476 | "gpuNum": 1,
1477 | "hideHardwareSpecs": false,
1478 | "memoryGiB": 64,
1479 | "name": "ml.g4dn.4xlarge",
1480 | "vcpuNum": 16
1481 | },
1482 | {
1483 | "_defaultOrder": 32,
1484 | "_isFastLaunch": false,
1485 | "category": "Accelerated computing",
1486 | "gpuNum": 1,
1487 | "hideHardwareSpecs": false,
1488 | "memoryGiB": 128,
1489 | "name": "ml.g4dn.8xlarge",
1490 | "vcpuNum": 32
1491 | },
1492 | {
1493 | "_defaultOrder": 33,
1494 | "_isFastLaunch": false,
1495 | "category": "Accelerated computing",
1496 | "gpuNum": 4,
1497 | "hideHardwareSpecs": false,
1498 | "memoryGiB": 192,
1499 | "name": "ml.g4dn.12xlarge",
1500 | "vcpuNum": 48
1501 | },
1502 | {
1503 | "_defaultOrder": 34,
1504 | "_isFastLaunch": false,
1505 | "category": "Accelerated computing",
1506 | "gpuNum": 1,
1507 | "hideHardwareSpecs": false,
1508 | "memoryGiB": 256,
1509 | "name": "ml.g4dn.16xlarge",
1510 | "vcpuNum": 64
1511 | },
1512 | {
1513 | "_defaultOrder": 35,
1514 | "_isFastLaunch": false,
1515 | "category": "Accelerated computing",
1516 | "gpuNum": 1,
1517 | "hideHardwareSpecs": false,
1518 | "memoryGiB": 61,
1519 | "name": "ml.p3.2xlarge",
1520 | "vcpuNum": 8
1521 | },
1522 | {
1523 | "_defaultOrder": 36,
1524 | "_isFastLaunch": false,
1525 | "category": "Accelerated computing",
1526 | "gpuNum": 4,
1527 | "hideHardwareSpecs": false,
1528 | "memoryGiB": 244,
1529 | "name": "ml.p3.8xlarge",
1530 | "vcpuNum": 32
1531 | },
1532 | {
1533 | "_defaultOrder": 37,
1534 | "_isFastLaunch": false,
1535 | "category": "Accelerated computing",
1536 | "gpuNum": 8,
1537 | "hideHardwareSpecs": false,
1538 | "memoryGiB": 488,
1539 | "name": "ml.p3.16xlarge",
1540 | "vcpuNum": 64
1541 | },
1542 | {
1543 | "_defaultOrder": 38,
1544 | "_isFastLaunch": false,
1545 | "category": "Accelerated computing",
1546 | "gpuNum": 8,
1547 | "hideHardwareSpecs": false,
1548 | "memoryGiB": 768,
1549 | "name": "ml.p3dn.24xlarge",
1550 | "vcpuNum": 96
1551 | },
1552 | {
1553 | "_defaultOrder": 39,
1554 | "_isFastLaunch": false,
1555 | "category": "Memory Optimized",
1556 | "gpuNum": 0,
1557 | "hideHardwareSpecs": false,
1558 | "memoryGiB": 16,
1559 | "name": "ml.r5.large",
1560 | "vcpuNum": 2
1561 | },
1562 | {
1563 | "_defaultOrder": 40,
1564 | "_isFastLaunch": false,
1565 | "category": "Memory Optimized",
1566 | "gpuNum": 0,
1567 | "hideHardwareSpecs": false,
1568 | "memoryGiB": 32,
1569 | "name": "ml.r5.xlarge",
1570 | "vcpuNum": 4
1571 | },
1572 | {
1573 | "_defaultOrder": 41,
1574 | "_isFastLaunch": false,
1575 | "category": "Memory Optimized",
1576 | "gpuNum": 0,
1577 | "hideHardwareSpecs": false,
1578 | "memoryGiB": 64,
1579 | "name": "ml.r5.2xlarge",
1580 | "vcpuNum": 8
1581 | },
1582 | {
1583 | "_defaultOrder": 42,
1584 | "_isFastLaunch": false,
1585 | "category": "Memory Optimized",
1586 | "gpuNum": 0,
1587 | "hideHardwareSpecs": false,
1588 | "memoryGiB": 128,
1589 | "name": "ml.r5.4xlarge",
1590 | "vcpuNum": 16
1591 | },
1592 | {
1593 | "_defaultOrder": 43,
1594 | "_isFastLaunch": false,
1595 | "category": "Memory Optimized",
1596 | "gpuNum": 0,
1597 | "hideHardwareSpecs": false,
1598 | "memoryGiB": 256,
1599 | "name": "ml.r5.8xlarge",
1600 | "vcpuNum": 32
1601 | },
1602 | {
1603 | "_defaultOrder": 44,
1604 | "_isFastLaunch": false,
1605 | "category": "Memory Optimized",
1606 | "gpuNum": 0,
1607 | "hideHardwareSpecs": false,
1608 | "memoryGiB": 384,
1609 | "name": "ml.r5.12xlarge",
1610 | "vcpuNum": 48
1611 | },
1612 | {
1613 | "_defaultOrder": 45,
1614 | "_isFastLaunch": false,
1615 | "category": "Memory Optimized",
1616 | "gpuNum": 0,
1617 | "hideHardwareSpecs": false,
1618 | "memoryGiB": 512,
1619 | "name": "ml.r5.16xlarge",
1620 | "vcpuNum": 64
1621 | },
1622 | {
1623 | "_defaultOrder": 46,
1624 | "_isFastLaunch": false,
1625 | "category": "Memory Optimized",
1626 | "gpuNum": 0,
1627 | "hideHardwareSpecs": false,
1628 | "memoryGiB": 768,
1629 | "name": "ml.r5.24xlarge",
1630 | "vcpuNum": 96
1631 | },
1632 | {
1633 | "_defaultOrder": 47,
1634 | "_isFastLaunch": false,
1635 | "category": "Accelerated computing",
1636 | "gpuNum": 1,
1637 | "hideHardwareSpecs": false,
1638 | "memoryGiB": 16,
1639 | "name": "ml.g5.xlarge",
1640 | "vcpuNum": 4
1641 | },
1642 | {
1643 | "_defaultOrder": 48,
1644 | "_isFastLaunch": false,
1645 | "category": "Accelerated computing",
1646 | "gpuNum": 1,
1647 | "hideHardwareSpecs": false,
1648 | "memoryGiB": 32,
1649 | "name": "ml.g5.2xlarge",
1650 | "vcpuNum": 8
1651 | },
1652 | {
1653 | "_defaultOrder": 49,
1654 | "_isFastLaunch": false,
1655 | "category": "Accelerated computing",
1656 | "gpuNum": 1,
1657 | "hideHardwareSpecs": false,
1658 | "memoryGiB": 64,
1659 | "name": "ml.g5.4xlarge",
1660 | "vcpuNum": 16
1661 | },
1662 | {
1663 | "_defaultOrder": 50,
1664 | "_isFastLaunch": false,
1665 | "category": "Accelerated computing",
1666 | "gpuNum": 1,
1667 | "hideHardwareSpecs": false,
1668 | "memoryGiB": 128,
1669 | "name": "ml.g5.8xlarge",
1670 | "vcpuNum": 32
1671 | },
1672 | {
1673 | "_defaultOrder": 51,
1674 | "_isFastLaunch": false,
1675 | "category": "Accelerated computing",
1676 | "gpuNum": 1,
1677 | "hideHardwareSpecs": false,
1678 | "memoryGiB": 256,
1679 | "name": "ml.g5.16xlarge",
1680 | "vcpuNum": 64
1681 | },
1682 | {
1683 | "_defaultOrder": 52,
1684 | "_isFastLaunch": false,
1685 | "category": "Accelerated computing",
1686 | "gpuNum": 4,
1687 | "hideHardwareSpecs": false,
1688 | "memoryGiB": 192,
1689 | "name": "ml.g5.12xlarge",
1690 | "vcpuNum": 48
1691 | },
1692 | {
1693 | "_defaultOrder": 53,
1694 | "_isFastLaunch": false,
1695 | "category": "Accelerated computing",
1696 | "gpuNum": 4,
1697 | "hideHardwareSpecs": false,
1698 | "memoryGiB": 384,
1699 | "name": "ml.g5.24xlarge",
1700 | "vcpuNum": 96
1701 | },
1702 | {
1703 | "_defaultOrder": 54,
1704 | "_isFastLaunch": false,
1705 | "category": "Accelerated computing",
1706 | "gpuNum": 8,
1707 | "hideHardwareSpecs": false,
1708 | "memoryGiB": 768,
1709 | "name": "ml.g5.48xlarge",
1710 | "vcpuNum": 192
1711 | },
1712 | {
1713 | "_defaultOrder": 55,
1714 | "_isFastLaunch": false,
1715 | "category": "Accelerated computing",
1716 | "gpuNum": 8,
1717 | "hideHardwareSpecs": false,
1718 | "memoryGiB": 1152,
1719 | "name": "ml.p4d.24xlarge",
1720 | "vcpuNum": 96
1721 | },
1722 | {
1723 | "_defaultOrder": 56,
1724 | "_isFastLaunch": false,
1725 | "category": "Accelerated computing",
1726 | "gpuNum": 8,
1727 | "hideHardwareSpecs": false,
1728 | "memoryGiB": 1152,
1729 | "name": "ml.p4de.24xlarge",
1730 | "vcpuNum": 96
1731 | },
1732 | {
1733 | "_defaultOrder": 57,
1734 | "_isFastLaunch": false,
1735 | "category": "Accelerated computing",
1736 | "gpuNum": 0,
1737 | "hideHardwareSpecs": false,
1738 | "memoryGiB": 32,
1739 | "name": "ml.trn1.2xlarge",
1740 | "vcpuNum": 8
1741 | },
1742 | {
1743 | "_defaultOrder": 58,
1744 | "_isFastLaunch": false,
1745 | "category": "Accelerated computing",
1746 | "gpuNum": 0,
1747 | "hideHardwareSpecs": false,
1748 | "memoryGiB": 512,
1749 | "name": "ml.trn1.32xlarge",
1750 | "vcpuNum": 128
1751 | },
1752 | {
1753 | "_defaultOrder": 59,
1754 | "_isFastLaunch": false,
1755 | "category": "Accelerated computing",
1756 | "gpuNum": 0,
1757 | "hideHardwareSpecs": false,
1758 | "memoryGiB": 512,
1759 | "name": "ml.trn1n.32xlarge",
1760 | "vcpuNum": 128
1761 | }
1762 | ],
1763 | "kernelspec": {
1764 | "display_name": "Python 3 (Data Science 3.0)",
1765 | "language": "python",
1766 | "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1"
1767 | },
1768 | "language_info": {
1769 | "codemirror_mode": {
1770 | "name": "ipython",
1771 | "version": 3
1772 | },
1773 | "file_extension": ".py",
1774 | "mimetype": "text/x-python",
1775 | "name": "python",
1776 | "nbconvert_exporter": "python",
1777 | "pygments_lexer": "ipython3",
1778 | "version": "3.10.6"
1779 | }
1780 | },
1781 | "nbformat": 4,
1782 | "nbformat_minor": 5
1783 | }
1784 |
--------------------------------------------------------------------------------
/images/Bedrock-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Bedrock-1.png
--------------------------------------------------------------------------------
/images/Bedrock-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Bedrock-2.png
--------------------------------------------------------------------------------
/images/Bedrock-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Bedrock-3.png
--------------------------------------------------------------------------------
/images/Bedrock-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Bedrock-4.png
--------------------------------------------------------------------------------
/images/DynamicTools.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/DynamicTools.png
--------------------------------------------------------------------------------
/images/EE-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/EE-1.png
--------------------------------------------------------------------------------
/images/EE-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/EE-2.png
--------------------------------------------------------------------------------
/images/EE-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/EE-3.png
--------------------------------------------------------------------------------
/images/EE-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/EE-4.png
--------------------------------------------------------------------------------
/images/EE-5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/EE-5.png
--------------------------------------------------------------------------------
/images/Overview-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Overview-3.png
--------------------------------------------------------------------------------
/images/Setup-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Setup-1.png
--------------------------------------------------------------------------------
/images/Setup-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Setup-2.png
--------------------------------------------------------------------------------
/images/Setup-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Setup-3.png
--------------------------------------------------------------------------------
/images/Setup-4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Setup-4.png
--------------------------------------------------------------------------------
/images/Studio-E.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-E.png
--------------------------------------------------------------------------------
/images/Studio-F.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-F.png
--------------------------------------------------------------------------------
/images/Studio-G.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-G.png
--------------------------------------------------------------------------------
/images/Studio-H.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-H.png
--------------------------------------------------------------------------------
/images/Studio-I.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-I.png
--------------------------------------------------------------------------------
/images/Studio-J.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-J.png
--------------------------------------------------------------------------------
/images/Studio-K.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-K.png
--------------------------------------------------------------------------------
/images/Studio-a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-a.png
--------------------------------------------------------------------------------
/images/Studio-b.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-b.png
--------------------------------------------------------------------------------
/images/Studio-c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-c.png
--------------------------------------------------------------------------------
/images/Studio-d.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-d.png
--------------------------------------------------------------------------------
/images/Studio-e.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-e.png
--------------------------------------------------------------------------------
/images/Studio-f.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-f.png
--------------------------------------------------------------------------------
/images/Studio-g.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/Studio-g.png
--------------------------------------------------------------------------------
/images/WS-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/WS-1.png
--------------------------------------------------------------------------------
/images/WS-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/WS-2.png
--------------------------------------------------------------------------------
/images/studio-D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/images/studio-D.png
--------------------------------------------------------------------------------
/notebook-environment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/giuseppe-zappia/complex-reasoning-with-react-and-langchain/5ae453bf0144ab8504bad9c83184579470ad43c2/notebook-environment.png
--------------------------------------------------------------------------------