├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── NOTICE ├── README.md ├── data ├── Restaurant_Childrens_Menu.pdf ├── Restaurant_Dinner_Menu.pdf └── Restaurant_week_specials.pdf ├── notebooks ├── create-agent-with-knowledge-base-and-action-group.ipynb ├── images │ ├── agentic_integration.png │ ├── agentic_orchestration.png │ ├── bedrock-agent-kb-dynamodb.png │ └── multiagent.png ├── multi_agent_collaboration.ipynb └── reasoning_with_langgraph_bedrock_workshop.ipynb ├── requirements.txt ├── requirements2.txt └── scripts ├── __pycache__ ├── agent.cpython-311.pyc ├── blog_writer.cpython-311.pyc └── knowledge_base.cpython-311.pyc ├── agent.py ├── agenteval.yml ├── bedrock.py ├── blog_writer.py ├── knowledge_base.py └── lambda_function.py /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 
13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. 
As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. 
For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 
48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. 
Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Reasoning Orchestration Workshop using Amazon Bedrock, Knowledge Bases, and LangChain 2 | This repository provides step-by-step instructions for creating a collaborative multi-agent system that leverages graph-based orchestration. It demonstrates how to combine Amazon Bedrock Agents with agents developed using open-source frameworks, enabling enhanced reasoning capabilities and seamless integration. The exercise will guide you through the process of building a reasoning orchestration system using [Amazon Bedrock](https://aws.amazon.com/bedrock/), [Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/), and [Agents for Amazon Bedrock](https://aws.amazon.com/bedrock/agents/), as well as custom fine-tuned models. We will also explore the integration of Bedrock agents with the open-source orchestration frameworks [LangGraph](https://langchain-ai.github.io/langgraph/) and [CrewAI](https://github.com/crewAIInc/crewAI) for dispatching and reasoning. 3 | 4 | 
6 | 7 | ## Overview 8 | In this workshop, you will learn how to: 9 | 10 | 1. Build a multimodal agentic orchestration framework using AWS and open source tools 11 | 3. Set up and configure Amazon Bedrock, a foundation for building large language models (LLMs) and other AI-powered applications, including Agent and Knowledge Bases. 12 | 5. Set up a open source RAG solution using Chroma and an embedding engine of your choice. 13 | 6. Utilize Langchain, a framework for building applications with large language models, to orchestrate the reasoning process. 14 | 7. Integrate Langgraph, a tool for managing agentic services, to dispatch and reason about the various components of your system. 15 | 8. Integrate open source langgraph with Amazon Bedrock Agent which is acssociated with Amazon Lambda and Amazon Bedrock Knowledge Bases 16 | 17 | ## Prerequisites 18 | Workshop practioners are expected to have LLM, Jupyter Notebook and Python working experience. 19 | 20 | * AWS account with appropriate permissions to access the services used in this workshop, include Bedrock models access and IAM permission to access S3, DynamoDB, Lambda and others. 21 | * Basic understanding of large language models, knowledge management, and AI-powered applications. 22 | * Familiarity with Python programming language. 23 | 24 | ## Getting Started 25 | 26 | Set up Amazon Bedrock: Follow the official Amazon Bedrock documentation to create your Bedrock environment and configure the necessary permissions and resources. 27 | 28 | Integrate Knowledge Bases: Explore the Knowledge Bases service and learn how to integrate it with your Bedrock-powered application. Ensure that your knowledge base is populated with relevant information to support your reasoning tasks. 29 | 30 | Utilize Langchain: Dive into the Langchain framework and understand how to use it to orchestrate the reasoning process. Explore the various Langchain components, such as agents, chains, and prompts, to build your reasoning system. 
31 | 32 | Integrate LangGraph: Introduce LangGraph into your system to manage the agentic services involved in the reasoning process. Learn how to dispatch tasks and reason about the various components of your system using LangGraph. 33 | 34 | Develop your Reasoning Orchestration System: Combine the knowledge and tools you've acquired to build your reasoning orchestration system. Ensure that the different components (Bedrock, Knowledge Bases, LangChain, and LangGraph) work seamlessly together to provide the desired functionality. 35 | 36 | Test and Refine: Thoroughly test your reasoning orchestration system, and make any necessary adjustments to improve its performance and reliability. 37 | 38 | 39 | ## Definitions 40 | 41 | ### Agents or Agentic Services 42 | Agents are created to fulfill specific roles and responsibilities. Each agent has predefined capabilities and embedded intelligence that enable it to focus on executing particular components or aspects of a project. 43 | 44 | ### Key Elements of an Agent 45 | - Planning: Prompting elicits reasoning in large language models, enabling AI agents to engage in complex problem-solving and planning processes. 46 | - Reflection: This involves iterative refinement with self-feedback, allowing AI agents to continuously improve their performance. 47 | - Role Playing: Each agent is assigned a specific role, such as a researcher or analyst, to focus their efforts. 48 | - Tasks: Agents maintain attention on their assigned tasks, ensuring efficient execution. 49 | - Tools: Equipped with various tools for data retrieval, processing, and interaction. 50 | - Collaboration: Agents collaborate with each other to complete tasks. 51 | - Guardrails: Safety measures and protocols to ensure reliable and ethical operations. 52 | - Memory: Ability to store and recall past interactions and data, enhancing decision-making. 
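The key elements above can be condensed into a minimal, framework-agnostic Python sketch. All names here (the `Agent` class, its fields, the sample tool) are illustrative assumptions for this workshop, not part of any specific agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative agent skeleton covering role playing, tools, memory, and guardrails."""
    role: str                                   # role playing: e.g. "researcher"
    tools: dict = field(default_factory=dict)   # task name -> callable tool
    memory: list = field(default_factory=list)  # recalled past interactions

    def act(self, task: str) -> str:
        tool = self.tools.get(task)
        if tool is None:                        # guardrail: refuse unknown tasks
            return f"{self.role}: no tool registered for '{task}'"
        result = tool()
        self.memory.append((task, result))      # memory: store the outcome
        return result

researcher = Agent(role="researcher", tools={"lookup_menu": lambda: "dinner menu data"})
print(researcher.act("lookup_menu"))            # -> dinner menu data
```

Real frameworks add planning and reflection loops on top of this shape; the point here is only how role, tools, memory, and guardrails fit together.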
53 | 54 | ### Multi-Agent Collaboration 55 | Multi-agent collaboration involves multiple agents working together to achieve complex goals. This collaboration can take various forms depending on the nature and requirements of the tasks. Effective collaboration ensures that tasks are completed efficiently and accurately. Here are a few typical types: 56 | 57 | - Sequential: Agents perform tasks in a predetermined order, where each step depends on the completion of the previous one. 58 | - Hierarchical: Agents follow a structured hierarchy, with higher-level agents overseeing and coordinating the activities of lower-level agents. 59 | - Asynchronous: Agents operate independently, handling tasks as they arise without adhering to a fixed sequence, allowing for flexibility and parallel processing. 60 | 61 | ## Resources 62 | * [Amazon Bedrock Documentation](https://docs.aws.amazon.com/bedrock/) 63 | * [Amazon OpenSearch Documentation](https://docs.aws.amazon.com/opensearch-service/) 64 | * [Amazon Bedrock Agents Documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html) 65 | * [LangChain Agents Documentation](https://python.langchain.com/v0.1/docs/modules/agents/) 66 | * [LangGraph Documentation](https://langchain-ai.github.io/langgraph/) 67 | * [CrewAI Framework](https://github.com/crewAIInc/crewAI) 68 | 69 | ## Conclusion 70 | By the end of this workshop, you will have a solid understanding of how to build an agentic orchestration system using Amazon Bedrock, Knowledge Bases, LangChain, and LangGraph. This knowledge will enable you to create powerful generative AI-powered applications that can effectively reason about complex problems and make informed decisions. 
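The sequential and hierarchical collaboration patterns described in this workshop can be sketched in a few lines of dependency-free Python. The function and agent names below are illustrative, not taken from any of the frameworks used here:

```python
# Sequential collaboration: each agent consumes the previous agent's output.
def run_sequential(agents, task):
    result = task
    for agent in agents:
        result = agent(result)
    return result

# Hierarchical collaboration: a supervisor splits the task and routes
# each sub-task to the responsible worker agent.
def run_hierarchical(supervisor, workers, task):
    plan = supervisor(task)   # e.g. {"research": "menus", "write": "summary"}
    return {name: workers[name](subtask) for name, subtask in plan.items()}

research = lambda t: f"notes({t})"
write = lambda t: f"draft({t})"
print(run_sequential([research, write], "menu"))   # -> draft(notes(menu))
```

Asynchronous collaboration would replace the loop with independent task queues or concurrent workers; graph frameworks such as LangGraph generalize all three patterns into explicit nodes and edges.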
71 | 72 | 73 | -------------------------------------------------------------------------------- /data/Restaurant_Childrens_Menu.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/data/Restaurant_Childrens_Menu.pdf -------------------------------------------------------------------------------- /data/Restaurant_Dinner_Menu.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/data/Restaurant_Dinner_Menu.pdf -------------------------------------------------------------------------------- /data/Restaurant_week_specials.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/data/Restaurant_week_specials.pdf -------------------------------------------------------------------------------- /notebooks/create-agent-with-knowledge-base-and-action-group.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "a0bb5c39-2fde-4336-8127-8debe7cb2741", 6 | "metadata": { 7 | "pycharm": { 8 | "name": "#%% md\n" 9 | } 10 | }, 11 | "source": [ 12 | "# Create an Amazon Bedrock Agent integrated with Knowledge Bases\n", 13 | "\n", 14 | "In this notebook, which is part of the AWS Bedrock [code samples](https://github.com/aws-samples/amazon-bedrock-samples.git), you will learn how to create an Amazon Bedrock Agent that makes use of Knowledge Bases for Amazon Bedrock to retrieve data about a restaurant's menu. The sample use case showcases the creation of a booking agent, using an Amazon Bedrock Agent backed by an Amazon Bedrock Knowledge Base and Amazon DynamoDB. 
The booking agent will be able to answer customer queries from menus in PDF files stored in an S3 bucket and manage customer reservations. The example architecture looks as follows:\n", 15 | "\n", 16 | "\n", 17 | "
\n", 18 | "\n", 19 | "The steps to complete this notebook are:\n", 20 | "\n", 21 | "1. Import the needed libraries\n", 22 | "2. Create the Knowledge Base for Amazon Bedrock\n", 23 | "3. Upload files to Amazon S3 then Sync with KnowledgeBase\n", 24 | "4. Create a DynamoDB for reservation records\n", 25 | "5. Create lambda functions to handle reservation bookings\n", 26 | "6. Create the Booking Agent \n", 27 | "7. Associate the Agent with KnowledgeBase and DynamoDB via Lambda function\n", 28 | "8. Test the Agent\n", 29 | "9. Clean-up the resources created" 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "id": "5c1fa6bb-ecce-4b33-8e53-c0756456231e", 35 | "metadata": {}, 36 | "source": [ 37 | "## 0. Pre-requisites:\n", 38 | "\n", 39 | "* a) Setup IAM to allow bedrock, dynamodb and opensearch\n", 40 | "* b) Request access to the Bedrock models" 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "id": "076a5aba-9735-4e98-8a53-0daccd7e94b0", 46 | "metadata": { 47 | "pycharm": { 48 | "name": "#%% md\n" 49 | } 50 | }, 51 | "source": [ 52 | "## 1. Import the needed libraries" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "id": "4fa67d7a", 58 | "metadata": { 59 | "pycharm": { 60 | "name": "#%% md\n" 61 | } 62 | }, 63 | "source": [ 64 | "First step is to install the pre-requisites packages. This step is identical to the agent orchestration notebook. 
Uncomment to execute the installation, if needed.\n", 65 | "**The installation might take a few minutes to complete**" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "id": "ac05c073-d45b-4d85-9bf8-ae10aa78be8d", 72 | "metadata": { 73 | "pycharm": { 74 | "name": "#%%\n" 75 | }, 76 | "tags": [] 77 | }, 78 | "outputs": [], 79 | "source": [ 80 | "!pip install --upgrade -q -r ../requirements.txt" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": null, 86 | "id": "e8ad6ec2-b283-4c5d-879f-e397e46568c0", 87 | "metadata": { 88 | "pycharm": { 89 | "name": "#%%\n" 90 | }, 91 | "tags": [] 92 | }, 93 | "outputs": [], 94 | "source": [ 95 | "import os\n", 96 | "import time\n", 97 | "import boto3\n", 98 | "import logging\n", 99 | "import pprint\n", 100 | "import json\n", 101 | "import sys\n", 102 | "\n", 103 | "\n", 104 | "module_paths = [\"./\", \"../scripts\"]\n", 105 | "for module_path in module_paths:\n", 106 | " sys.path.append(os.path.abspath(module_path))\n", 107 | " \n", 108 | "from knowledge_base import BedrockKnowledgeBase\n", 109 | "from agent import create_agent_role_and_policies, create_lambda_role, delete_agent_roles_and_policies, get_agent_lambda_role_arn\n", 110 | "from agent import create_dynamodb, create_lambda, clean_up_resources\n", 111 | "from bedrock import *" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "id": "d2b2d607-c1f2-4cbb-9f89-d935676e0101", 118 | "metadata": { 119 | "pycharm": { 120 | "name": "#%%\n" 121 | }, 122 | "tags": [] 123 | }, 124 | "outputs": [], 125 | "source": [ 126 | "#Clients\n", 127 | "s3_client = boto3.client('s3')\n", 128 | "sts_client = boto3.client('sts')\n", 129 | "session = boto3.session.Session()\n", 130 | "region = session.region_name\n", 131 | "account_id = sts_client.get_caller_identity()[\"Account\"]\n", 132 | "bedrock_agent_client = boto3.client('bedrock-agent')\n", 133 | "bedrock_agent_runtime_client = 
boto3.client('bedrock-agent-runtime')\n", 134 | "logging.basicConfig(format='[%(asctime)s] p%(process)s {%(filename)s:%(lineno)d} %(levelname)s - %(message)s', level=logging.INFO)\n", 135 | "logger = logging.getLogger(__name__)\n", 136 | "region, account_id" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "id": "55f7de63-8682-466e-b7ac-e4827554bc3d", 142 | "metadata": {}, 143 | "source": [ 144 | "**Make sure to check the values of knowledge_base_name and bucket_name for pre-created names using AWS Console or AWS CLI commands.**" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "id": "d647d2a3", 151 | "metadata": { 152 | "pycharm": { 153 | "name": "#%%\n" 154 | }, 155 | "tags": [] 156 | }, 157 | "outputs": [], 158 | "source": [ 159 | "suffix = f\"{region}-{account_id}\"\n", 160 | "agent_name = 'booking-agent'\n", 161 | "knowledge_base_name = f'{agent_name}-kb' #Provisioned by using CloudFormation template \n", 162 | "knowledge_base_description = \"Knowledge Base containing the restaurant menu's collection\"\n", 163 | "agent_alias_name = \"booking-agent-alias\"\n", 164 | "agent_lambda_role_name = \"booking-agent-lambda-role\"\n", 165 | "#bucket_name = f'{agent_name}-{suffix}' \n", 166 | "bucket_name = f'{account_id}-agentic-orch-bkt-1' #Provisioned by using CloudFormation template \n", 167 | "agent_bedrock_allow_policy_name = f\"{agent_name}-ba\"\n", 168 | "agent_role_name = f'AmazonBedrockExecutionRoleForAgents_{agent_name}'\n", 169 | "agent_foundation_model = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n", 170 | "\n", 171 | "agent_description = \"Agent in charge of a restaurants table bookings\"\n", 172 | "agent_instruction = \"\"\"\n", 173 | "You are a restaurant agent, helping clients retrieve information from their booking, \n", 174 | "create a new booking or delete an existing booking\n", 175 | "\"\"\"\n", 176 | "\n", 177 | "agent_action_group_description = \"\"\"\n", 178 | "Actions for getting table booking 
information, create a new booking or delete an existing booking\"\"\"\n", 179 | "\n", 180 | "agent_action_group_name = \"TableBookingsActionGroup\"" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "id": "dfdb509a-e25c-4e3f-900f-a23666d716c1", 186 | "metadata": {}, 187 | "source": [ 188 | "Create a bucket with the name defined in the designated region" 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "id": "20aa6d3e-8d1d-44fb-9214-41b81445de0a", 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "from botocore.exceptions import ClientError  # required by the except clause below, not imported earlier\n", "\n", "# Function to create the bucket\n", 199 | "def create_bucket(bucket_name):\n", 200 | " try:\n", 201 | " s3_client.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={'LocationConstraint': region})\n", 202 | " print(f\"Bucket '{bucket_name}' created successfully.\")\n", 203 | " except ClientError as e:\n", 204 | " error_code = e.response['Error']['Code']\n", 205 | " if error_code == 'BucketAlreadyExists':\n", 206 | " print(f\"Bucket '{bucket_name}' already exists. Ignoring creation.\")\n", 207 | " elif error_code == 'BucketAlreadyOwnedByYou':\n", 208 | " print(f\"Bucket '{bucket_name}' already owned by you. Ignoring creation.\")\n", 209 | " else:\n", 210 | " print(f\"An error occurred: {e}\")\n", 211 | "\n", 212 | "# Call the function to create the bucket\n", 213 | "create_bucket(bucket_name)" 214 | ] 215 | }, 216 | { 217 | "cell_type": "markdown", 218 | "id": "38c38fcb-9b87-414e-a644-04c263eea5c9", 219 | "metadata": { 220 | "pycharm": { 221 | "name": "#%% md\n" 222 | } 223 | }, 224 | "source": [ 225 | "## 2. Create Knowledge Base for Amazon Bedrock\n", 226 | "Let's start by creating a [Knowledge Base for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) to store the restaurant menus. 
Knowledge Bases allow you to integrate with different vector databases, including [Amazon OpenSearch Serverless](https://aws.amazon.com/opensearch-service/features/serverless/), [Amazon Aurora](https://aws.amazon.com/rds/aurora/) and [Pinecone](http://app.pinecone.io/bedrock-integration). For this example, we will integrate the knowledge base with Amazon OpenSearch Serverless. To do so, we will use the helper class `BedrockKnowledgeBase`, which will create the knowledge base and all of its prerequisites:\n", 227 | "1. IAM roles and policies\n", 228 | "2. S3 bucket\n", 229 | "3. Amazon OpenSearch Serverless encryption, network and data access policies\n", 230 | "4. Amazon OpenSearch Serverless collection\n", 231 | "5. Amazon OpenSearch Serverless vector index\n", 232 | "6. Knowledge base\n", 233 | "7. Knowledge base data source" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "id": "786b86b9-c221-4d1f-a5b6-75f98845ee1d", 239 | "metadata": {}, 240 | "source": [ 241 | "### Upload the dataset to Amazon S3\n", 242 | "Now that we have created the knowledge base, let's populate it with the menus dataset. The Knowledge Base data source expects the data to be available in the S3 bucket connected to it, and changes to the data can be synchronized to the knowledge base using the StartIngestionJob API call. 
In this example we will use the boto3 abstraction of the API, via our helper class.\n", 243 | "\n", 244 | "Let's first upload the menus data available in the data folder to S3" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "7f2ee421-a253-46ae-aeaa-ea0b5ed8ce4a", 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [ 254 | "def upload_directory(path, bucket_name):\n", 255 | " for root,dirs,files in os.walk(path):\n", 256 | " for file in files:\n", 257 | " file_to_upload = os.path.join(root,file)\n", 258 | " print(f\"uploading file {file_to_upload} to {bucket_name}\")\n", 259 | " s3_client.upload_file(file_to_upload,bucket_name,file)\n", 260 | "\n", 261 | "upload_directory(\"../data\", bucket_name)" 262 | ] 263 | }, 264 | { 265 | "cell_type": "markdown", 266 | "id": "9fc9b486-6787-41d7-b15e-9e271edc309e", 267 | "metadata": {}, 268 | "source": [ 269 | "### Create or use the designated KB already created" 270 | ] 271 | }, 272 | { 273 | "cell_type": "code", 274 | "execution_count": null, 275 | "id": "8383ed35-95c1-47a7-b77c-3b9cf70f733c", 276 | "metadata": {}, 277 | "outputs": [], 278 | "source": [ 279 | "response = bedrock_agent_client.list_knowledge_bases()\n", 280 | "kb_id = response['knowledgeBaseSummaries'][0]['knowledgeBaseId']\n", 281 | "kb_name = response['knowledgeBaseSummaries'][0]['name']\n", 282 | "kb_status = response['knowledgeBaseSummaries'][0]['status']" 283 | ] 284 | }, 285 | { 286 | "cell_type": "code", 287 | "execution_count": null, 288 | "id": "a46bc0d6-83ff-47e7-a6e8-bad9f54e807d", 289 | "metadata": {}, 290 | "outputs": [], 291 | "source": [ 292 | "kb_id" 293 | ] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": null, 298 | "id": "2c39c2d9-7965-4c22-a74b-65d1961c4166", 299 | "metadata": { 300 | "pycharm": { 301 | "name": "#%%\n" 302 | }, 303 | "tags": [] 304 | }, 305 | 
"outputs": [], 306 | "source": [ 307 | "%%time \n", 308 | "if kb_name not in knowledge_base_name and 'active' not in kb_status.lower():\n", 309 | " knowledge_base = BedrockKnowledgeBase(\n", 310 | " kb_name=knowledge_base_name,\n", 311 | " kb_description=knowledge_base_description,\n", 312 | " data_bucket_name=bucket_name\n", 313 | " )\n", 314 | " # Uncomment the next line to ensure that the KB is available, if needed\n", 315 | " #time.sleep(30)\n", 316 | " # sync knowledge base\n", 317 | " knowledge_base.start_ingestion_job()\n", 318 | " kb_id = knowledge_base.get_knowledge_base_id()\n", 319 | "else:\n", 320 | " dsr = bedrock_agent_client.list_data_sources(knowledgeBaseId=kb_id)\n", 321 | " datasource_id = dsr['dataSourceSummaries'][0]['dataSourceId']\n", 322 | " print('Syncing up KB')\n", 323 | " response = bedrock_agent_client.start_ingestion_job(\n", 324 | " knowledgeBaseId=kb_id,\n", 325 | " dataSourceId=datasource_id\n", 326 | " )" 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "id": "9c85cbee-9359-4927-ae16-fa7af42cf981", 332 | "metadata": { 333 | "pycharm": { 334 | "name": "#%% md\n" 335 | } 336 | }, 337 | "source": [ 338 | "Finally, we collect the Knowledge Base ID to integrate it with our Agent later on." 339 | ] 340 | }, 341 | { 342 | "cell_type": "markdown", 343 | "id": "315e0df7-4008-4fb4-b28d-a4df6ff446f6", 344 | "metadata": { 345 | "pycharm": { 346 | "name": "#%% md\n" 347 | } 348 | }, 349 | "source": [ 350 | "## 3. Test the Knowledge Base\n", 351 | "Now that the Knowledge Base is available, we can test it out using the [**retrieve**](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent-runtime/client/retrieve.html) and [**retrieve_and_generate**](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent-runtime/client/retrieve_and_generate.html) functions. 
\n", 352 | "\n", 353 | "#### Testing Knowledge Base with Retrieve and Generate API\n", 354 | "\n", 355 | "Let's first test the knowledge base using the retrieve and generate API. With this API, Bedrock takes care of retrieving the necessary references from the knowledge base and generating the final answer using an LLM from Bedrock. Re-run this cell until it produces the right answer, to verify that the sync-up task has completed.\n", 356 | "\n", 357 | "**Be aware that knowledge base syncing up with the S3 bucket might take a few minutes, depending on data volumes.**" 358 | ] 359 | }, 360 | { 361 | "cell_type": "code", 362 | "execution_count": null, 363 | "id": "f199e822-6e06-4bac-9dbf-40ff0be98598", 364 | "metadata": { 365 | "pycharm": { 366 | "name": "#%%\n" 367 | }, 368 | "tags": [] 369 | }, 370 | "outputs": [], 371 | "source": [ 372 | "# Wait for sync-up, if needed\n", 373 | "time.sleep(30)\n", 374 | "\n", 375 | "response = bedrock_agent_runtime_client.retrieve_and_generate(\n", 376 | " input={\n", 377 | " \"text\": \"Which are the 5 mains available in the childrens menu?\"\n", 378 | " },\n", 379 | " retrieveAndGenerateConfiguration={\n", 380 | " \"type\": \"KNOWLEDGE_BASE\",\n", 381 | " \"knowledgeBaseConfiguration\": {\n", 382 | " 'knowledgeBaseId': kb_id,\n", 383 | " \"modelArn\": \"arn:aws:bedrock:{}::foundation-model/{}\".format(region, agent_foundation_model),\n", 384 | " \"retrievalConfiguration\": {\n", 385 | " \"vectorSearchConfiguration\": {\n", 386 | " \"numberOfResults\":5\n", 387 | " } \n", 388 | " }\n", 389 | " }\n", 390 | " }\n", 391 | ")\n", 392 | "\n", 393 | "print(response['output']['text'],end='\\n'*2)" 394 | ] 395 | }, 396 | { 397 | "cell_type": "markdown", 398 | "id": "e6e4235c", 399 | "metadata": { 400 | "pycharm": { 401 | "name": "#%% md\n" 402 | } 403 | }, 404 | "source": [ 405 | "As you can see, with the retrieve and generate API we get the final response directly and we don't see the different sources used to generate this 
response. Let's now retrieve the source information from the knowledge base with the retrieve API.\n", 406 | "\n", 407 | "#### Testing Knowledge Base with Retrieve API\n", 408 | "If you need an extra layer of control, you can retrieve the chunks that best match your query using the retrieve API. In this setup, you can configure the desired number of results and control the final answer with your own application logic. The API then provides you with the matching content, its S3 location, the similarity score and the chunk metadata." 409 | ] 410 | }, 411 | { 412 | "cell_type": "code", 413 | "execution_count": null, 414 | "id": "34a3f96b-926a-411c-bef7-38c4689fed7b", 415 | "metadata": { 416 | "pycharm": { 417 | "name": "#%%\n" 418 | }, 419 | "tags": [] 420 | }, 421 | "outputs": [], 422 | "source": [ 423 | "response_ret = bedrock_agent_runtime_client.retrieve(\n", 424 | " knowledgeBaseId=kb_id, \n", 426 | " retrievalConfiguration={\n", 427 | " \"vectorSearchConfiguration\": {\n", 428 | " \"numberOfResults\":5,\n", 429 | " } \n", 430 | " },\n", 431 | " retrievalQuery={\n", 432 | " 'text': 'Which are the 5 mains available in the childrens menu?'\n", 433 | " }\n", 434 | ")\n", 435 | "\n", 436 | "def response_print(retrieve_resp):\n", 437 | "#structure 'retrievalResults': list of contents. 
Each list has content, location, score, metadata\n", 438 | " for num,chunk in enumerate(retrieve_resp['retrievalResults'],1):\n", 439 | " print(f'Chunk {num}: ',chunk['content']['text'],end='\\n'*2)\n", 440 | " print(f'Chunk {num} Location: ',chunk['location'],end='\\n'*2)\n", 441 | " print(f'Chunk {num} Score: ',chunk['score'],end='\\n'*2)\n", 442 | " print(f'Chunk {num} Metadata: ',chunk['metadata'],end='\\n'*2)\n", 443 | "\n", 444 | "response_print(response_ret)" 445 | ] 446 | }, 447 | { 448 | "cell_type": "markdown", 449 | "id": "e032c452-4feb-406d-aa6e-b85a59aded79", 450 | "metadata": { 451 | "pycharm": { 452 | "name": "#%% md\n" 453 | } 454 | }, 455 | "source": [ 456 | "## 4. Create the Agent for Amazon Bedrock\n", 457 | "\n", 458 | "In this section we will go through all the steps to create an Agent for Amazon Bedrock. \n", 459 | "\n", 460 | "These are the steps to complete:\n", 461 | " \n", 462 | "1. Create an Amazon DynamoDB table\n", 463 | "2. Create an AWS Lambda function\n", 464 | "3. Create the IAM policies needed for the Agent\n", 465 | "4. Create the Agent\n", 466 | "5. Create the Agent Action Group\n", 467 | "6. Allow the Agent to invoke the Action Group Lambda\n", 468 | "7. Associate the Knowledge Base to the agent\n", 469 | "8. Prepare the Agent and create an alias" 470 | ] 471 | }, 472 | { 473 | "cell_type": "markdown", 474 | "id": "c5edf3bd-3214-4fe3-a9bc-df927e30b8b4", 475 | "metadata": { 476 | "pycharm": { 477 | "name": "#%% md\n" 478 | } 479 | }, 480 | "source": [ 481 | "### 4.1 Create the DynamoDB table\n", 482 | "We will create a DynamoDB table which contains the restaurant bookings information. The functions are designed to avoid duplicate effort in case the role or function has been created before." 
483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": null, 488 | "id": "18ff65c0-0207-4e34-932f-6d291f4d5c8d", 489 | "metadata": { 490 | "pycharm": { 491 | "name": "#%%\n" 492 | }, 493 | "tags": [] 494 | }, 495 | "outputs": [], 496 | "source": [ 497 | "table_name = 'restaurant_bookings'\n", 498 | "if not check_table_exists(table_name):\n", 499 | " create_dynamodb(table_name)" 500 | ] 501 | }, 502 | { 503 | "cell_type": "markdown", 504 | "id": "8460c785-e13f-4182-84d7-3bf7aafd3842", 505 | "metadata": { 506 | "pycharm": { 507 | "name": "#%% md\n" 508 | } 509 | }, 510 | "source": [ 511 | "### 4.2 Create the Lambda Function\n", 512 | "\n", 513 | "We will now create a lambda function that interacts with the DynamoDB table. To do so we will:\n", 514 | "\n", 515 | "1. Create the `lambda_function.py` file which contains the logic for our lambda function\n", 516 | "2. Create the IAM role for our Lambda function\n", 517 | "3. Create the lambda function with the required permissions" 518 | ] 519 | }, 520 | { 521 | "cell_type": "markdown", 522 | "id": "68871530-2953-4a37-854e-10323cabf095", 523 | "metadata": { 524 | "pycharm": { 525 | "name": "#%% md\n" 526 | } 527 | }, 528 | "source": [ 529 | "#### Create the function code\n", 530 | "When creating an Agent for Amazon Bedrock, you can connect a Lambda function to the Action Group in order to execute the functions required by the agent. In this option, your agent is responsible for the execution of your functions. 
Let's create the lambda function that implements the functions for `get_booking_details`, `create_booking` and `delete_booking`" 531 | ] 532 | }, 533 | { 534 | "cell_type": "code", 535 | "execution_count": null, 536 | "id": "12271f1f-4739-472b-8909-cc81c148a941", 537 | "metadata": { 538 | "pycharm": { 539 | "name": "#%%\n" 540 | }, 541 | "tags": [] 542 | }, 543 | "outputs": [], 544 | "source": [ 545 | "%%writefile lambda_function.py\n", 546 | "import json\n", 547 | "import uuid\n", 548 | "import boto3\n", 549 | "\n", 550 | "dynamodb = boto3.resource('dynamodb')\n", 551 | "table = dynamodb.Table('restaurant_bookings')\n", 552 | "\n", 553 | "def get_named_parameter(event, name):\n", 554 | " \"\"\"\n", 555 | " Get a parameter from the lambda event\n", 556 | " \"\"\"\n", 557 | " return next(item for item in event['parameters'] if item['name'] == name)['value']\n", 558 | "\n", 559 | "\n", 560 | "def get_booking_details(booking_id):\n", 561 | " \"\"\"\n", 562 | " Retrieve details of a restaurant booking\n", 563 | " \n", 564 | " Args:\n", 565 | " booking_id (string): The ID of the booking to retrieve\n", 566 | " \"\"\"\n", 567 | " try:\n", 568 | " response = table.get_item(Key={'booking_id': booking_id})\n", 569 | " if 'Item' in response:\n", 570 | " return response['Item']\n", 571 | " else:\n", 572 | " return {'message': f'No booking found with ID {booking_id}'}\n", 573 | " except Exception as e:\n", 574 | " return {'error': str(e)}\n", 575 | "\n", 576 | "\n", 577 | "def create_booking(date, name, hour, num_guests):\n", 578 | " \"\"\"\n", 579 | " Create a new restaurant booking\n", 580 | " \n", 581 | " Args:\n", 582 | " date (string): The date of the booking\n", 583 | " name (string): Name to identify your reservation\n", 584 | " hour (string): The hour of the booking\n", 585 | " num_guests (integer): The number of guests for the booking\n", 586 | " \"\"\"\n", 587 | " try:\n", 588 | " booking_id = str(uuid.uuid4())[:8]\n", 589 | " table.put_item(\n", 590 | " Item={\n", 
591 | " 'booking_id': booking_id,\n", 592 | " 'date': date,\n", 593 | " 'name': name,\n", 594 | " 'hour': hour,\n", 595 | " 'num_guests': num_guests\n", 596 | " }\n", 597 | " )\n", 598 | " return {'booking_id': booking_id}\n", 599 | " except Exception as e:\n", 600 | " return {'error': str(e)}\n", 601 | "\n", 602 | "\n", 603 | "def delete_booking(booking_id):\n", 604 | " \"\"\"\n", 605 | " Delete an existing restaurant booking\n", 606 | " \n", 607 | " Args:\n", 608 | " booking_id (str): The ID of the booking to delete\n", 609 | " \"\"\"\n", 610 | " try:\n", 611 | " response = table.delete_item(Key={'booking_id': booking_id})\n", 612 | " if response['ResponseMetadata']['HTTPStatusCode'] == 200:\n", 613 | " return {'message': f'Booking with ID {booking_id} deleted successfully'}\n", 614 | " else:\n", 615 | " return {'message': f'Failed to delete booking with ID {booking_id}'}\n", 616 | " except Exception as e:\n", 617 | " return {'error': str(e)}\n", 618 | " \n", 619 | "\n", 620 | "def lambda_handler(event, context):\n", 621 | " # get the action group used during the invocation of the lambda function\n", 622 | " actionGroup = event.get('actionGroup', '')\n", 623 | " \n", 624 | " # name of the function that should be invoked\n", 625 | " function = event.get('function', '')\n", 626 | " \n", 627 | " # parameters to invoke function with\n", 628 | " parameters = event.get('parameters', [])\n", 629 | "\n", 630 | " if function == 'get_booking_details':\n", 631 | " booking_id = get_named_parameter(event, \"booking_id\")\n", 632 | " if booking_id:\n", 633 | " response = str(get_booking_details(booking_id))\n", 634 | " responseBody = {'TEXT': {'body': json.dumps(response)}}\n", 635 | " else:\n", 636 | " responseBody = {'TEXT': {'body': 'Missing booking_id parameter'}}\n", 637 | "\n", 638 | " elif function == 'create_booking':\n", 639 | " date = get_named_parameter(event, \"date\")\n", 640 | " name = get_named_parameter(event, \"name\")\n", 641 | " hour = 
get_named_parameter(event, \"hour\")\n", 642 | " num_guests = get_named_parameter(event, \"num_guests\")\n", 643 | "\n", 644 | " if date and name and hour and num_guests:\n", 645 | " response = str(create_booking(date, name, hour, num_guests))\n", 646 | " responseBody = {'TEXT': {'body': json.dumps(response)}}\n", 647 | " else:\n", 648 | " responseBody = {'TEXT': {'body': 'Missing required parameters'}}\n", 649 | "\n", 650 | " elif function == 'delete_booking':\n", 651 | " booking_id = get_named_parameter(event, \"booking_id\")\n", 652 | " if booking_id:\n", 653 | " response = str(delete_booking(booking_id))\n", 654 | " responseBody = {'TEXT': {'body': json.dumps(response)}}\n", 655 | " else:\n", 656 | " responseBody = {'TEXT': {'body': 'Missing booking_id parameter'}}\n", 657 | "\n", 658 | " else:\n", 659 | " responseBody = {'TEXT': {'body': 'Invalid function'}}\n", 660 | "\n", 661 | " action_response = {\n", 662 | " 'actionGroup': actionGroup,\n", 663 | " 'function': function,\n", 664 | " 'functionResponse': {\n", 665 | " 'responseBody': responseBody\n", 666 | " }\n", 667 | " }\n", 668 | "\n", 669 | " function_response = {'response': action_response, 'messageVersion': event['messageVersion']}\n", 670 | " print(\"Response: {}\".format(function_response))\n", 671 | "\n", 672 | " return function_response" 673 | ] 674 | }, 675 | { 676 | "cell_type": "markdown", 677 | "id": "19f07460-93c8-476a-86a7-d4af1d48053d", 678 | "metadata": { 679 | "pycharm": { 680 | "name": "#%% md\n" 681 | } 682 | }, 683 | "source": [ 684 | "### Create the required permissions then create the lambda function, if needed\n", 685 | "\n", 686 | "Now let's also create the lambda role and its required policies. For this case, we need the lambda to be able to access DynamoDB, which is why we also create a DynamoDB policy and attach it to our Lambda. To do so, we will use the support function `create_lambda_role`. 
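Every branch of the handler above returns the same response envelope, so it can be factored into a small helper. A sketch, runnable locally without AWS (the `build_agent_response` name and the sample event values are illustrative; the payload fields mirror the ones the handler reads):

```python
def build_agent_response(event, body_text):
    """Wrap a function result in the envelope an action-group Lambda
    returns to the agent (function-schema style, TEXT body)."""
    return {
        "messageVersion": event["messageVersion"],
        "response": {
            "actionGroup": event.get("actionGroup", ""),
            "function": event.get("function", ""),
            "functionResponse": {
                "responseBody": {"TEXT": {"body": body_text}}
            },
        },
    }

# Illustrative invocation payload, mirroring the fields the handler reads
sample_event = {
    "messageVersion": "1.0",
    "actionGroup": "TableBookingsActionGroup",
    "function": "get_booking_details",
    "parameters": [{"name": "booking_id", "type": "string", "value": "1a2b3c4d"}],
}
resp = build_agent_response(sample_event, '{"booking_id": "1a2b3c4d"}')
```

Exercising the handler with a hand-built event like this is a quick way to check the response shape before wiring the Lambda to the agent.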
\n", 687 | "\n", 688 | "Now that we have the Lambda function code and its execution role, let's package it into a Zip file and create the Lambda resources.\n", 689 | "\n", 690 | "**If needed, delete an existing Lambda with the identical name and its IAM policy using the AWS Console or CLI (e.g. `aws iam delete-policy --policy-arn arn:aws:iam:::policy/booking-agent-dynamodb-policy`).**" 691 | ] 692 | }, 693 | { 694 | "cell_type": "code", 695 | "execution_count": null, 696 | "id": "490addc9-16a0-48e1-bd8f-02abb56bb520", 697 | "metadata": { 698 | "pycharm": { 699 | "name": "#%%\n" 700 | }, 701 | "tags": [] 702 | }, 703 | "outputs": [], 704 | "source": [ 705 | "lambda_function_name = f'{agent_name}-lambda'\n", 706 | "if check_lambda_function_exists(lambda_function_name):\n", 707 | " boto3.client('lambda').delete_function(FunctionName=lambda_function_name)\n", 708 | " check_and_delete_iam_policy('booking-agent-dynamodb-policy')\n", 709 | "# Create a new Lambda\n", 710 | "lambda_iam_role = get_agent_lambda_role_arn(role_name=agent_lambda_role_name)\n", 711 | "lambda_iam_role = create_lambda_role(agent_name, table_name) if lambda_iam_role is None else lambda_iam_role\n", 712 | "lambda_function = create_lambda(lambda_function_name, lambda_iam_role)" 713 | ] 714 | }, 715 | { 716 | "cell_type": "markdown", 717 | "id": "aef843aa-9f0f-473f-ab4e-1c4827974b87", 718 | "metadata": { 719 | "pycharm": { 720 | "name": "#%% md\n" 721 | } 722 | }, 723 | "source": [ 724 | "### 4.3 Create the IAM policies needed for the Agent\n", 725 | "\n", 726 | "Now that we have created the Knowledge Base, our DynamoDB table and the Lambda function to execute the tasks for our agent, let's start creating our Agent.\n", 727 | "\n", 728 | "\n", 729 | "First, we need to create the agent policies that allow Bedrock model invocation and Knowledge Base queries, and the agent IAM role with the policies associated to it. We will allow this agent to invoke the Claude Sonnet model. 
Here we use the `create_agent_role_and_policies` helper to create the agent role and its required policies. Delete any existing agent with the same name, e.g.:\n", 730 | "\n", 731 | "* aws bedrock-agent list-agents --> get the agent_id\n", 732 | "* aws bedrock-agent delete-agent-alias --agent-alias-id --agent-id \n", 733 | "* aws bedrock-agent delete-agent --agent-id " 734 | ] 735 | }, 736 | { 737 | "cell_type": "code", 738 | "execution_count": null, 739 | "id": "a583c4c3-a2c2-42fc-88b7-5152e45a4eac", 740 | "metadata": {}, 741 | "outputs": [], 742 | "source": [ 743 | "agents = bedrock_agent_client.list_agents()['agentSummaries']\n", 744 | "for agent_summary in [a for a in agents if a['agentName'] == agent_name]:\n", 745 | " bedrock_agent_client.delete_agent(agentId=agent_summary['agentId'])\n", 746 | " check_and_delete_iam_policy('booking-agent-ba')" 747 | ] 748 | }, 749 | { 750 | "cell_type": "markdown", 751 | "id": "6bda6f8b-3c9e-40c9-b780-d398d67f7e43", 752 | "metadata": {}, 753 | "source": [ 754 | "#### Now create a new agent policy in IAM. **Use the AWS Console or CLI to delete the policy and role if needed.**" 755 | ] 756 | }, 757 | { 758 | "cell_type": "code", 759 | "execution_count": null, 760 | "id": "dd45ef4e-1f2d-4747-a16e-f64fc6f865f5", 761 | "metadata": { 762 | "pycharm": { 763 | "name": "#%%\n" 764 | }, 765 | "tags": [] 766 | }, 767 | "outputs": [], 768 | "source": [ 769 | "agent_role = create_agent_role_and_policies(agent_name, agent_foundation_model, kb_id=kb_id)" 770 | ] 771 | }, 772 | { 773 | "cell_type": "markdown", 774 | "id": "595bf767-3a26-4f92-b629-ba2908b90d81", 775 | "metadata": { 776 | "pycharm": { 777 | "name": "#%% md\n" 778 | } 779 | }, 780 | "source": [ 781 | "### 4.4 Create the Agent\n", 782 | "Once the needed IAM role is created, we can use the bedrock agent client to create a new agent. To do so we use the [`create_agent`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent/client/create_agent.html) api from boto3. 
It requires an agent name, the underlying foundation model and instructions. You can also provide an agent description. Note that the agent created is not yet prepared. We will focus on preparing the agent and then using it to invoke actions and use other APIs. Let's get our Agent ID; it will be important for performing operations with our agent.\n", 783 | "\n", 784 | "AWS CLI: aws bedrock-agent list-agents" 785 | ] 786 | }, 787 | { 788 | "cell_type": "code", 789 | "execution_count": null, 790 | "id": "7ccaf856-7eff-4bd1-ac47-c89b6fbf7e8f", 791 | "metadata": { 792 | "pycharm": { 793 | "name": "#%%\n" 794 | }, 795 | "tags": [] 796 | }, 797 | "outputs": [], 798 | "source": [ 799 | "response = bedrock_agent_client.create_agent(\n", 800 | " agentName=agent_name,\n", 801 | " agentResourceRoleArn=agent_role['Role']['Arn'],\n", 802 | " description=agent_description,\n", 803 | " idleSessionTTLInSeconds=1800,\n", 804 | " foundationModel=agent_foundation_model,\n", 805 | " instruction=agent_instruction,\n", 806 | ")\n", 807 | "agent_id = response['agent']['agentId']\n", 808 | "print(\"The agent id is:\",agent_id)" 809 | ] 810 | }, 811 | { 812 | "cell_type": "markdown", 813 | "id": "3f30f874-eb63-4ec3-8c19-36fab0f598df", 814 | "metadata": { 815 | "pycharm": { 816 | "name": "#%% md\n" 817 | } 818 | }, 819 | "source": [ 820 | "### 4.5 Create the Agent Action Group\n", 821 | "We will now create an agent action group that uses the lambda function created before. The [`create_agent_action_group`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent/client/create_agent_action_group.html) function provides this functionality. We will use `DRAFT` as the agent version since we haven't yet created an agent version or alias. 
To inform the agent about the action group's capabilities, we will provide a description of the functionalities it offers.\n", 822 | "\n", 823 | "In this example, we will provide the Action Group functionality using a [`functionSchema`](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-action-function.html).\n", 824 | "\n", 825 | "To define the functions using a function schema, you need to provide the `name`, `description` and `parameters` for each function." 826 | ] 827 | }, 828 | { 829 | "cell_type": "code", 830 | "execution_count": null, 831 | "id": "91c4c380-ca58-467a-ab67-b475605a0274", 832 | "metadata": { 833 | "pycharm": { 834 | "name": "#%%\n" 835 | }, 836 | "tags": [] 837 | }, 838 | "outputs": [], 839 | "source": [ 840 | "agent_functions = [\n", 841 | " {\n", 842 | " 'name': 'get_booking_details',\n", 843 | " 'description': 'Retrieve details of a restaurant booking',\n", 844 | " 'parameters': {\n", 845 | " \"booking_id\": {\n", 846 | " \"description\": \"The ID of the booking to retrieve\",\n", 847 | " \"required\": True,\n", 848 | " \"type\": \"string\"\n", 849 | " }\n", 850 | " }\n", 851 | " },\n", 852 | " {\n", 853 | " 'name': 'create_booking',\n", 854 | " 'description': 'Create a new restaurant booking',\n", 855 | " 'parameters': {\n", 856 | " \"date\": {\n", 857 | " \"description\": \"The date of the booking\",\n", 858 | " \"required\": True,\n", 859 | " \"type\": \"string\"\n", 860 | " },\n", 861 | " \"name\": {\n", 862 | " \"description\": \"Name to identify your reservation\",\n", 863 | " \"required\": True,\n", 864 | " \"type\": \"string\"\n", 865 | " },\n", 866 | " \"hour\": {\n", 867 | " \"description\": \"The hour of the booking\",\n", 868 | " \"required\": True,\n", 869 | " \"type\": \"string\"\n", 870 | " },\n", 871 | " \"num_guests\": {\n", 872 | " \"description\": \"The number of guests for the booking\",\n", 873 | " \"required\": True,\n", 874 | " \"type\": \"integer\"\n", 875 | " }\n", 876 | " 
}\n", 877 | " },\n", 878 | " {\n", 879 | " 'name': 'delete_booking',\n", 880 | " 'description': 'Delete an existing restaurant booking',\n", 881 | " 'parameters': {\n", 882 | " \"booking_id\": {\n", 883 | " \"description\": \"The ID of the booking to delete\",\n", 884 | " \"required\": True,\n", 885 | " \"type\": \"string\"\n", 886 | " }\n", 887 | " }\n", 888 | " },\n", 889 | "]" 890 | ] 891 | }, 892 | { 893 | "cell_type": "markdown", 894 | "id": "60d174f6", 895 | "metadata": { 896 | "pycharm": { 897 | "name": "#%% md\n" 898 | } 899 | }, 900 | "source": [ 901 | "We now use the function schema to create the agent action group using the [`create_agent_action_group`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agent/client/create_agent_action_group.html) API. It might take up to 1 minute for the agent to be created." 902 | ] 903 | }, 904 | { 905 | "cell_type": "code", 906 | "execution_count": null, 907 | "id": "24559f1c-e081-4f03-959f-a82f9444aa49", 908 | "metadata": { 909 | "pycharm": { 910 | "name": "#%%\n" 911 | }, 912 | "tags": [] 913 | }, 914 | "outputs": [], 915 | "source": [ 916 | "# Pause to make sure agent is created\n", 917 | "#time.sleep(30)\n", 918 | "\n", 919 | "# Now, we can configure and create an action group here:\n", 920 | "agent_action_group_response = bedrock_agent_client.create_agent_action_group(\n", 921 | " agentId=agent_id,\n", 922 | " agentVersion='DRAFT',\n", 923 | " actionGroupExecutor={\n", 924 | " 'lambda': lambda_function['FunctionArn']\n", 925 | " },\n", 926 | " actionGroupName=agent_action_group_name,\n", 927 | " functionSchema={\n", 928 | " 'functions': agent_functions\n", 929 | " },\n", 930 | " description=agent_action_group_description\n", 931 | ")" 932 | ] 933 | }, 934 | { 935 | "cell_type": "code", 936 | "execution_count": null, 937 | "id": "0a0c3eeb-37f3-4ec9-88d4-43c5306c3195", 938 | "metadata": { 939 | "pycharm": { 940 | "name": "#%%\n" 941 | }, 942 | "tags": [] 943 | }, 944 | "outputs": [], 
945 | "source": [ 946 | "agent_action_group_response" 947 | ] 948 | }, 949 | { 950 | "cell_type": "markdown", 951 | "id": "b7d27e07-7d4c-4c59-987d-41a606c6af65", 952 | "metadata": { 953 | "pycharm": { 954 | "name": "#%% md\n" 955 | } 956 | }, 957 | "source": [ 958 | "### 4.6 Allow the Agent to invoke the Action Group Lambda\n", 959 | "Before using the action group, we need to allow the agent to invoke the lambda function associated with the action group. This is done via a [resource-based policy](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-permissions.html#agents-permissions-lambda). Let's add the resource-based policy to the Lambda function we created." 960 | ] 961 | }, 962 | { 963 | "cell_type": "code", 964 | "execution_count": null, 965 | "id": "3262f6dd-fb05-4351-904d-8097c0f52134", 966 | "metadata": { 967 | "pycharm": { 968 | "name": "#%%\n" 969 | }, 970 | "tags": [] 971 | }, 972 | "outputs": [], 973 | "source": [ 974 | "# Create allow to invoke permission on lambda\n", 975 | "lambda_client = boto3.client('lambda')\n", 976 | "response = lambda_client.add_permission(\n", 977 | " FunctionName=lambda_function_name,\n", 978 | " StatementId='allow_bedrock',\n", 979 | " Action='lambda:InvokeFunction',\n", 980 | " Principal='bedrock.amazonaws.com',\n", 981 | " SourceArn=f\"arn:aws:bedrock:{region}:{account_id}:agent/{agent_id}\",\n", 982 | ")\n" 983 | ] 984 | }, 985 | { 986 | "cell_type": "code", 987 | "execution_count": null, 988 | "id": "eb3b8c21-f14a-4645-b28e-3ac9a50f1f7f", 989 | "metadata": { 990 | "pycharm": { 991 | "name": "#%%\n" 992 | }, 993 | "tags": [] 994 | }, 995 | "outputs": [], 996 | "source": [ 997 | "response" 998 | ] 999 | }, 1000 | { 1001 | "cell_type": "markdown", 1002 | "id": "2c8536a4-e625-464c-bae2-d28dba82c556", 1003 | "metadata": { 1004 | "pycharm": { 1005 | "name": "#%% md\n" 1006 | } 1007 | }, 1008 | "source": [ 1009 | "### 4.7 Associate the Knowledge Base to the agent\n", 1010 | "Now that we have created the Agent, we can go ahead and 
associate the Knowledge Base we created earlier. \n", 1011 | "\n", 1012 | "AWS CLI: aws bedrock-agent list-agent-knowledge-bases --agent-id --agent-version #DRAFT" 1013 | ] 1014 | }, 1015 | { 1016 | "cell_type": "code", 1017 | "execution_count": null, 1018 | "id": "95c7b781-7e82-47b6-a268-237fa97ab58d", 1019 | "metadata": { 1020 | "pycharm": { 1021 | "name": "#%%\n" 1022 | }, 1023 | "tags": [] 1024 | }, 1025 | "outputs": [], 1026 | "source": [ 1027 | "response = bedrock_agent_client.associate_agent_knowledge_base(\n", 1028 | " agentId=agent_id,\n", 1029 | " agentVersion='DRAFT',\n", 1030 | " description='Access the knowledge base when customers ask about the plates in the menu.',\n", 1031 | " knowledgeBaseId=kb_id,\n", 1032 | " knowledgeBaseState='ENABLED'\n", 1033 | ")" 1034 | ] 1035 | }, 1036 | { 1037 | "cell_type": "code", 1038 | "execution_count": null, 1039 | "id": "0c72b5ee-2479-41b2-93e1-fd7aa66da08d", 1040 | "metadata": { 1041 | "pycharm": { 1042 | "name": "#%%\n" 1043 | }, 1044 | "scrolled": true, 1045 | "tags": [] 1046 | }, 1047 | "outputs": [], 1048 | "source": [ 1049 | "response" 1050 | ] 1051 | }, 1052 | { 1053 | "cell_type": "markdown", 1054 | "id": "b42a8caf-4c76-4760-a7cd-4c6b189f9f5b", 1055 | "metadata": { 1056 | "pycharm": { 1057 | "name": "#%% md\n" 1058 | } 1059 | }, 1060 | "source": [ 1061 | "### 4.8 Prepare the Agent and create an alias\n", 1062 | "\n", 1063 | "Let's create a DRAFT version of the agent that can be used for internal testing. 
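Instead of sleeping for a fixed interval after `prepare_agent`, you can poll `get_agent` until the status flips. A sketch, assuming `client` is a boto3 `bedrock-agent` client (`wait_until_prepared` is an illustrative helper, not part of the notebook's scripts):

```python
import time

def wait_until_prepared(client, agent_id, timeout=120, poll=5):
    """Poll get_agent until agentStatus becomes PREPARED, or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.get_agent(agentId=agent_id)["agent"]["agentStatus"]
        if status == "PREPARED":
            return status
        time.sleep(poll)
    raise TimeoutError(f"Agent {agent_id} not PREPARED after {timeout}s")
```

Polling with a deadline is more robust than a hard-coded `time.sleep(30)`, since preparation time varies with the agent configuration.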
**It might take up to 1 minute to get the alias created.**\n" 1064 | ] 1065 | }, 1066 | { 1067 | "cell_type": "code", 1068 | "execution_count": null, 1069 | "id": "4013c967-9a14-4689-b0b5-61aa4f75748c", 1070 | "metadata": { 1071 | "pycharm": { 1072 | "name": "#%%\n" 1073 | }, 1074 | "tags": [] 1075 | }, 1076 | "outputs": [], 1077 | "source": [ 1078 | "response = bedrock_agent_client.prepare_agent(\n", 1079 | " agentId=agent_id\n", 1080 | ")\n", 1081 | "print(response)\n", 1082 | "# Pause to make sure agent is prepared. Uncomment the next line if needed \n", 1083 | "#time.sleep(30) #aws bedrock-agent list-agents" 1084 | ] 1085 | }, 1086 | { 1087 | "cell_type": "markdown", 1088 | "id": "4c2f477d-60b5-4135-bfe4-e9af3874d611", 1089 | "metadata": { 1090 | "pycharm": { 1091 | "name": "#%% md\n" 1092 | } 1093 | }, 1094 | "source": [ 1095 | "You can invoke the DRAFT version of your agent using the test alias id `TSTALIASID`, or you can create a new alias and a new version for your agent. Here we are also going to create an Agent alias, which we will later use to invoke the agent with the alias id created." 1096 | ] 1097 | }, 1098 | { 1099 | "cell_type": "code", 1100 | "execution_count": null, 1101 | "id": "b9889aaf-9238-479d-b9f1-2ab13b1ec9c0", 1102 | "metadata": { 1103 | "pycharm": { 1104 | "name": "#%%\n" 1105 | }, 1106 | "tags": [] 1107 | }, 1108 | "outputs": [], 1109 | "source": [ 1110 | "try:\n", 1111 | " response = bedrock_agent_client.create_agent_alias(\n", 1112 | " agentAliasName='TestAlias',\n", 1113 | " agentId=agent_id,\n", 1114 | " description='Test alias',\n", 1115 | " )\n", 1116 | " alias_id = response[\"agentAlias\"][\"agentAliasId\"]\n", 1117 | "except Exception as e:\n", 1118 | " print(e)\n", 1119 | " alias_id = None\n", 1120 | "print(\"The Agent alias is:\",alias_id)\n", 1121 | "\n", 1122 | "# Use `aws bedrock-agent list-agent-aliases --agent-id` to check whether the alias has been created successfully.\n", 1123 | "#time.sleep(30) " 1124 | ] 1125 | }, 1126 | { 1127 | "cell_type": "code", 1128 | 
"execution_count": null, 1129 | "id": "b31caf69-6933-4501-8dc5-3e52302555a3", 1130 | "metadata": {}, 1131 | "outputs": [], 1132 | "source": [ 1133 | "%store kb_id\n", 1134 | "%store table_name\n", 1135 | "%store lambda_function\n", 1136 | "%store lambda_function_name\n", 1137 | "%store agent_action_group_response\n", 1138 | "%store agent_functions\n", 1139 | "%store alias_id\n", 1140 | "%store agent_id\n", 1141 | "%store agent_name" 1142 | ] 1143 | }, 1144 | { 1145 | "cell_type": "markdown", 1146 | "id": "8ebf4438-1f48-4642-a57c-530a16815064", 1147 | "metadata": { 1148 | "pycharm": { 1149 | "name": "#%% md\n" 1150 | } 1151 | }, 1152 | "source": [ 1153 | "## 6. Clean-up \n", 1154 | "After your workshop is completed, delete all the associated resources to avoid unnecessary costs.\n", 1155 | "\n", 1156 | "**Don't delete the KB or the agent here yet.** \n", 1157 | "\n", 1158 | "**Please change the following cells from markdown to code before executing them.**" 1159 | ] 1160 | }, 1161 | { 1162 | "cell_type": "markdown", 1163 | "id": "36a36503-fbd2-4c58-b3cb-2cf32fd695be", 1164 | "metadata": { 1165 | "pycharm": { 1166 | "name": "#%%\n" 1167 | }, 1168 | "tags": [] 1169 | }, 1170 | "source": [ 1171 | "clean_up_resources(\n", 1172 | " table_name, lambda_function, lambda_function_name, agent_action_group_response, agent_functions, \n", 1173 | " agent_id, kb_id, alias_id\n", 1174 | ")" 1175 | ] 1176 | }, 1177 | { 1178 | "cell_type": "markdown", 1179 | "id": "17495576-33d6-41c1-9634-e4455b999d5a", 1180 | "metadata": { 1181 | "pycharm": { 1182 | "name": "#%%\n" 1183 | } 1184 | }, 1185 | "source": [ 1186 | "# Delete the agent roles and policies\n", 1187 | "delete_agent_roles_and_policies(agent_name)" 1188 | ] 1189 | }, 1190 | { 1191 | "cell_type": "code", 1192 | "execution_count": null, 1193 | "id": "665663fd-06da-42c5-ad89-fbe86de35678", 1194 | "metadata": {}, 1195 | "outputs": [], 1196 | "source": [] 1197 | } 1198 | ], 1199 | "metadata": { 1200 | "availableInstances": 
[ 1201 | { 1202 | "_defaultOrder": 0, 1203 | "_isFastLaunch": true, 1204 | "category": "General purpose", 1205 | "gpuNum": 0, 1206 | "hideHardwareSpecs": false, 1207 | "memoryGiB": 4, 1208 | "name": "ml.t3.medium", 1209 | "vcpuNum": 2 1210 | }, 1211 | { 1212 | "_defaultOrder": 1, 1213 | "_isFastLaunch": false, 1214 | "category": "General purpose", 1215 | "gpuNum": 0, 1216 | "hideHardwareSpecs": false, 1217 | "memoryGiB": 8, 1218 | "name": "ml.t3.large", 1219 | "vcpuNum": 2 1220 | }, 1221 | { 1222 | "_defaultOrder": 2, 1223 | "_isFastLaunch": false, 1224 | "category": "General purpose", 1225 | "gpuNum": 0, 1226 | "hideHardwareSpecs": false, 1227 | "memoryGiB": 16, 1228 | "name": "ml.t3.xlarge", 1229 | "vcpuNum": 4 1230 | }, 1231 | { 1232 | "_defaultOrder": 3, 1233 | "_isFastLaunch": false, 1234 | "category": "General purpose", 1235 | "gpuNum": 0, 1236 | "hideHardwareSpecs": false, 1237 | "memoryGiB": 32, 1238 | "name": "ml.t3.2xlarge", 1239 | "vcpuNum": 8 1240 | }, 1241 | { 1242 | "_defaultOrder": 4, 1243 | "_isFastLaunch": true, 1244 | "category": "General purpose", 1245 | "gpuNum": 0, 1246 | "hideHardwareSpecs": false, 1247 | "memoryGiB": 8, 1248 | "name": "ml.m5.large", 1249 | "vcpuNum": 2 1250 | }, 1251 | { 1252 | "_defaultOrder": 5, 1253 | "_isFastLaunch": false, 1254 | "category": "General purpose", 1255 | "gpuNum": 0, 1256 | "hideHardwareSpecs": false, 1257 | "memoryGiB": 16, 1258 | "name": "ml.m5.xlarge", 1259 | "vcpuNum": 4 1260 | }, 1261 | { 1262 | "_defaultOrder": 6, 1263 | "_isFastLaunch": false, 1264 | "category": "General purpose", 1265 | "gpuNum": 0, 1266 | "hideHardwareSpecs": false, 1267 | "memoryGiB": 32, 1268 | "name": "ml.m5.2xlarge", 1269 | "vcpuNum": 8 1270 | }, 1271 | { 1272 | "_defaultOrder": 7, 1273 | "_isFastLaunch": false, 1274 | "category": "General purpose", 1275 | "gpuNum": 0, 1276 | "hideHardwareSpecs": false, 1277 | "memoryGiB": 64, 1278 | "name": "ml.m5.4xlarge", 1279 | "vcpuNum": 16 1280 | }, 1281 | { 1282 | "_defaultOrder": 8, 1283 
| "_isFastLaunch": false, 1284 | "category": "General purpose", 1285 | "gpuNum": 0, 1286 | "hideHardwareSpecs": false, 1287 | "memoryGiB": 128, 1288 | "name": "ml.m5.8xlarge", 1289 | "vcpuNum": 32 1290 | }, 1291 | { 1292 | "_defaultOrder": 9, 1293 | "_isFastLaunch": false, 1294 | "category": "General purpose", 1295 | "gpuNum": 0, 1296 | "hideHardwareSpecs": false, 1297 | "memoryGiB": 192, 1298 | "name": "ml.m5.12xlarge", 1299 | "vcpuNum": 48 1300 | }, 1301 | { 1302 | "_defaultOrder": 10, 1303 | "_isFastLaunch": false, 1304 | "category": "General purpose", 1305 | "gpuNum": 0, 1306 | "hideHardwareSpecs": false, 1307 | "memoryGiB": 256, 1308 | "name": "ml.m5.16xlarge", 1309 | "vcpuNum": 64 1310 | }, 1311 | { 1312 | "_defaultOrder": 11, 1313 | "_isFastLaunch": false, 1314 | "category": "General purpose", 1315 | "gpuNum": 0, 1316 | "hideHardwareSpecs": false, 1317 | "memoryGiB": 384, 1318 | "name": "ml.m5.24xlarge", 1319 | "vcpuNum": 96 1320 | }, 1321 | { 1322 | "_defaultOrder": 12, 1323 | "_isFastLaunch": false, 1324 | "category": "General purpose", 1325 | "gpuNum": 0, 1326 | "hideHardwareSpecs": false, 1327 | "memoryGiB": 8, 1328 | "name": "ml.m5d.large", 1329 | "vcpuNum": 2 1330 | }, 1331 | { 1332 | "_defaultOrder": 13, 1333 | "_isFastLaunch": false, 1334 | "category": "General purpose", 1335 | "gpuNum": 0, 1336 | "hideHardwareSpecs": false, 1337 | "memoryGiB": 16, 1338 | "name": "ml.m5d.xlarge", 1339 | "vcpuNum": 4 1340 | }, 1341 | { 1342 | "_defaultOrder": 14, 1343 | "_isFastLaunch": false, 1344 | "category": "General purpose", 1345 | "gpuNum": 0, 1346 | "hideHardwareSpecs": false, 1347 | "memoryGiB": 32, 1348 | "name": "ml.m5d.2xlarge", 1349 | "vcpuNum": 8 1350 | }, 1351 | { 1352 | "_defaultOrder": 15, 1353 | "_isFastLaunch": false, 1354 | "category": "General purpose", 1355 | "gpuNum": 0, 1356 | "hideHardwareSpecs": false, 1357 | "memoryGiB": 64, 1358 | "name": "ml.m5d.4xlarge", 1359 | "vcpuNum": 16 1360 | }, 1361 | { 1362 | "_defaultOrder": 16, 1363 | 
"_isFastLaunch": false, 1364 | "category": "General purpose", 1365 | "gpuNum": 0, 1366 | "hideHardwareSpecs": false, 1367 | "memoryGiB": 128, 1368 | "name": "ml.m5d.8xlarge", 1369 | "vcpuNum": 32 1370 | }, 1371 | { 1372 | "_defaultOrder": 17, 1373 | "_isFastLaunch": false, 1374 | "category": "General purpose", 1375 | "gpuNum": 0, 1376 | "hideHardwareSpecs": false, 1377 | "memoryGiB": 192, 1378 | "name": "ml.m5d.12xlarge", 1379 | "vcpuNum": 48 1380 | }, 1381 | { 1382 | "_defaultOrder": 18, 1383 | "_isFastLaunch": false, 1384 | "category": "General purpose", 1385 | "gpuNum": 0, 1386 | "hideHardwareSpecs": false, 1387 | "memoryGiB": 256, 1388 | "name": "ml.m5d.16xlarge", 1389 | "vcpuNum": 64 1390 | }, 1391 | { 1392 | "_defaultOrder": 19, 1393 | "_isFastLaunch": false, 1394 | "category": "General purpose", 1395 | "gpuNum": 0, 1396 | "hideHardwareSpecs": false, 1397 | "memoryGiB": 384, 1398 | "name": "ml.m5d.24xlarge", 1399 | "vcpuNum": 96 1400 | }, 1401 | { 1402 | "_defaultOrder": 20, 1403 | "_isFastLaunch": false, 1404 | "category": "General purpose", 1405 | "gpuNum": 0, 1406 | "hideHardwareSpecs": true, 1407 | "memoryGiB": 0, 1408 | "name": "ml.geospatial.interactive", 1409 | "supportedImageNames": [ 1410 | "sagemaker-geospatial-v1-0" 1411 | ], 1412 | "vcpuNum": 0 1413 | }, 1414 | { 1415 | "_defaultOrder": 21, 1416 | "_isFastLaunch": true, 1417 | "category": "Compute optimized", 1418 | "gpuNum": 0, 1419 | "hideHardwareSpecs": false, 1420 | "memoryGiB": 4, 1421 | "name": "ml.c5.large", 1422 | "vcpuNum": 2 1423 | }, 1424 | { 1425 | "_defaultOrder": 22, 1426 | "_isFastLaunch": false, 1427 | "category": "Compute optimized", 1428 | "gpuNum": 0, 1429 | "hideHardwareSpecs": false, 1430 | "memoryGiB": 8, 1431 | "name": "ml.c5.xlarge", 1432 | "vcpuNum": 4 1433 | }, 1434 | { 1435 | "_defaultOrder": 23, 1436 | "_isFastLaunch": false, 1437 | "category": "Compute optimized", 1438 | "gpuNum": 0, 1439 | "hideHardwareSpecs": false, 1440 | "memoryGiB": 16, 1441 | "name": 
"ml.c5.2xlarge", 1442 | "vcpuNum": 8 1443 | }, 1444 | { 1445 | "_defaultOrder": 24, 1446 | "_isFastLaunch": false, 1447 | "category": "Compute optimized", 1448 | "gpuNum": 0, 1449 | "hideHardwareSpecs": false, 1450 | "memoryGiB": 32, 1451 | "name": "ml.c5.4xlarge", 1452 | "vcpuNum": 16 1453 | }, 1454 | { 1455 | "_defaultOrder": 25, 1456 | "_isFastLaunch": false, 1457 | "category": "Compute optimized", 1458 | "gpuNum": 0, 1459 | "hideHardwareSpecs": false, 1460 | "memoryGiB": 72, 1461 | "name": "ml.c5.9xlarge", 1462 | "vcpuNum": 36 1463 | }, 1464 | { 1465 | "_defaultOrder": 26, 1466 | "_isFastLaunch": false, 1467 | "category": "Compute optimized", 1468 | "gpuNum": 0, 1469 | "hideHardwareSpecs": false, 1470 | "memoryGiB": 96, 1471 | "name": "ml.c5.12xlarge", 1472 | "vcpuNum": 48 1473 | }, 1474 | { 1475 | "_defaultOrder": 27, 1476 | "_isFastLaunch": false, 1477 | "category": "Compute optimized", 1478 | "gpuNum": 0, 1479 | "hideHardwareSpecs": false, 1480 | "memoryGiB": 144, 1481 | "name": "ml.c5.18xlarge", 1482 | "vcpuNum": 72 1483 | }, 1484 | { 1485 | "_defaultOrder": 28, 1486 | "_isFastLaunch": false, 1487 | "category": "Compute optimized", 1488 | "gpuNum": 0, 1489 | "hideHardwareSpecs": false, 1490 | "memoryGiB": 192, 1491 | "name": "ml.c5.24xlarge", 1492 | "vcpuNum": 96 1493 | }, 1494 | { 1495 | "_defaultOrder": 29, 1496 | "_isFastLaunch": true, 1497 | "category": "Accelerated computing", 1498 | "gpuNum": 1, 1499 | "hideHardwareSpecs": false, 1500 | "memoryGiB": 16, 1501 | "name": "ml.g4dn.xlarge", 1502 | "vcpuNum": 4 1503 | }, 1504 | { 1505 | "_defaultOrder": 30, 1506 | "_isFastLaunch": false, 1507 | "category": "Accelerated computing", 1508 | "gpuNum": 1, 1509 | "hideHardwareSpecs": false, 1510 | "memoryGiB": 32, 1511 | "name": "ml.g4dn.2xlarge", 1512 | "vcpuNum": 8 1513 | }, 1514 | { 1515 | "_defaultOrder": 31, 1516 | "_isFastLaunch": false, 1517 | "category": "Accelerated computing", 1518 | "gpuNum": 1, 1519 | "hideHardwareSpecs": false, 1520 | "memoryGiB": 
64, 1521 | "name": "ml.g4dn.4xlarge", 1522 | "vcpuNum": 16 1523 | }, 1524 | { 1525 | "_defaultOrder": 32, 1526 | "_isFastLaunch": false, 1527 | "category": "Accelerated computing", 1528 | "gpuNum": 1, 1529 | "hideHardwareSpecs": false, 1530 | "memoryGiB": 128, 1531 | "name": "ml.g4dn.8xlarge", 1532 | "vcpuNum": 32 1533 | }, 1534 | { 1535 | "_defaultOrder": 33, 1536 | "_isFastLaunch": false, 1537 | "category": "Accelerated computing", 1538 | "gpuNum": 4, 1539 | "hideHardwareSpecs": false, 1540 | "memoryGiB": 192, 1541 | "name": "ml.g4dn.12xlarge", 1542 | "vcpuNum": 48 1543 | }, 1544 | { 1545 | "_defaultOrder": 34, 1546 | "_isFastLaunch": false, 1547 | "category": "Accelerated computing", 1548 | "gpuNum": 1, 1549 | "hideHardwareSpecs": false, 1550 | "memoryGiB": 256, 1551 | "name": "ml.g4dn.16xlarge", 1552 | "vcpuNum": 64 1553 | }, 1554 | { 1555 | "_defaultOrder": 35, 1556 | "_isFastLaunch": false, 1557 | "category": "Accelerated computing", 1558 | "gpuNum": 1, 1559 | "hideHardwareSpecs": false, 1560 | "memoryGiB": 61, 1561 | "name": "ml.p3.2xlarge", 1562 | "vcpuNum": 8 1563 | }, 1564 | { 1565 | "_defaultOrder": 36, 1566 | "_isFastLaunch": false, 1567 | "category": "Accelerated computing", 1568 | "gpuNum": 4, 1569 | "hideHardwareSpecs": false, 1570 | "memoryGiB": 244, 1571 | "name": "ml.p3.8xlarge", 1572 | "vcpuNum": 32 1573 | }, 1574 | { 1575 | "_defaultOrder": 37, 1576 | "_isFastLaunch": false, 1577 | "category": "Accelerated computing", 1578 | "gpuNum": 8, 1579 | "hideHardwareSpecs": false, 1580 | "memoryGiB": 488, 1581 | "name": "ml.p3.16xlarge", 1582 | "vcpuNum": 64 1583 | }, 1584 | { 1585 | "_defaultOrder": 38, 1586 | "_isFastLaunch": false, 1587 | "category": "Accelerated computing", 1588 | "gpuNum": 8, 1589 | "hideHardwareSpecs": false, 1590 | "memoryGiB": 768, 1591 | "name": "ml.p3dn.24xlarge", 1592 | "vcpuNum": 96 1593 | }, 1594 | { 1595 | "_defaultOrder": 39, 1596 | "_isFastLaunch": false, 1597 | "category": "Memory Optimized", 1598 | "gpuNum": 0, 1599 | 
"hideHardwareSpecs": false, 1600 | "memoryGiB": 16, 1601 | "name": "ml.r5.large", 1602 | "vcpuNum": 2 1603 | }, 1604 | { 1605 | "_defaultOrder": 40, 1606 | "_isFastLaunch": false, 1607 | "category": "Memory Optimized", 1608 | "gpuNum": 0, 1609 | "hideHardwareSpecs": false, 1610 | "memoryGiB": 32, 1611 | "name": "ml.r5.xlarge", 1612 | "vcpuNum": 4 1613 | }, 1614 | { 1615 | "_defaultOrder": 41, 1616 | "_isFastLaunch": false, 1617 | "category": "Memory Optimized", 1618 | "gpuNum": 0, 1619 | "hideHardwareSpecs": false, 1620 | "memoryGiB": 64, 1621 | "name": "ml.r5.2xlarge", 1622 | "vcpuNum": 8 1623 | }, 1624 | { 1625 | "_defaultOrder": 42, 1626 | "_isFastLaunch": false, 1627 | "category": "Memory Optimized", 1628 | "gpuNum": 0, 1629 | "hideHardwareSpecs": false, 1630 | "memoryGiB": 128, 1631 | "name": "ml.r5.4xlarge", 1632 | "vcpuNum": 16 1633 | }, 1634 | { 1635 | "_defaultOrder": 43, 1636 | "_isFastLaunch": false, 1637 | "category": "Memory Optimized", 1638 | "gpuNum": 0, 1639 | "hideHardwareSpecs": false, 1640 | "memoryGiB": 256, 1641 | "name": "ml.r5.8xlarge", 1642 | "vcpuNum": 32 1643 | }, 1644 | { 1645 | "_defaultOrder": 44, 1646 | "_isFastLaunch": false, 1647 | "category": "Memory Optimized", 1648 | "gpuNum": 0, 1649 | "hideHardwareSpecs": false, 1650 | "memoryGiB": 384, 1651 | "name": "ml.r5.12xlarge", 1652 | "vcpuNum": 48 1653 | }, 1654 | { 1655 | "_defaultOrder": 45, 1656 | "_isFastLaunch": false, 1657 | "category": "Memory Optimized", 1658 | "gpuNum": 0, 1659 | "hideHardwareSpecs": false, 1660 | "memoryGiB": 512, 1661 | "name": "ml.r5.16xlarge", 1662 | "vcpuNum": 64 1663 | }, 1664 | { 1665 | "_defaultOrder": 46, 1666 | "_isFastLaunch": false, 1667 | "category": "Memory Optimized", 1668 | "gpuNum": 0, 1669 | "hideHardwareSpecs": false, 1670 | "memoryGiB": 768, 1671 | "name": "ml.r5.24xlarge", 1672 | "vcpuNum": 96 1673 | }, 1674 | { 1675 | "_defaultOrder": 47, 1676 | "_isFastLaunch": false, 1677 | "category": "Accelerated computing", 1678 | "gpuNum": 1, 1679 | 
"hideHardwareSpecs": false, 1680 | "memoryGiB": 16, 1681 | "name": "ml.g5.xlarge", 1682 | "vcpuNum": 4 1683 | }, 1684 | { 1685 | "_defaultOrder": 48, 1686 | "_isFastLaunch": false, 1687 | "category": "Accelerated computing", 1688 | "gpuNum": 1, 1689 | "hideHardwareSpecs": false, 1690 | "memoryGiB": 32, 1691 | "name": "ml.g5.2xlarge", 1692 | "vcpuNum": 8 1693 | }, 1694 | { 1695 | "_defaultOrder": 49, 1696 | "_isFastLaunch": false, 1697 | "category": "Accelerated computing", 1698 | "gpuNum": 1, 1699 | "hideHardwareSpecs": false, 1700 | "memoryGiB": 64, 1701 | "name": "ml.g5.4xlarge", 1702 | "vcpuNum": 16 1703 | }, 1704 | { 1705 | "_defaultOrder": 50, 1706 | "_isFastLaunch": false, 1707 | "category": "Accelerated computing", 1708 | "gpuNum": 1, 1709 | "hideHardwareSpecs": false, 1710 | "memoryGiB": 128, 1711 | "name": "ml.g5.8xlarge", 1712 | "vcpuNum": 32 1713 | }, 1714 | { 1715 | "_defaultOrder": 51, 1716 | "_isFastLaunch": false, 1717 | "category": "Accelerated computing", 1718 | "gpuNum": 1, 1719 | "hideHardwareSpecs": false, 1720 | "memoryGiB": 256, 1721 | "name": "ml.g5.16xlarge", 1722 | "vcpuNum": 64 1723 | }, 1724 | { 1725 | "_defaultOrder": 52, 1726 | "_isFastLaunch": false, 1727 | "category": "Accelerated computing", 1728 | "gpuNum": 4, 1729 | "hideHardwareSpecs": false, 1730 | "memoryGiB": 192, 1731 | "name": "ml.g5.12xlarge", 1732 | "vcpuNum": 48 1733 | }, 1734 | { 1735 | "_defaultOrder": 53, 1736 | "_isFastLaunch": false, 1737 | "category": "Accelerated computing", 1738 | "gpuNum": 4, 1739 | "hideHardwareSpecs": false, 1740 | "memoryGiB": 384, 1741 | "name": "ml.g5.24xlarge", 1742 | "vcpuNum": 96 1743 | }, 1744 | { 1745 | "_defaultOrder": 54, 1746 | "_isFastLaunch": false, 1747 | "category": "Accelerated computing", 1748 | "gpuNum": 8, 1749 | "hideHardwareSpecs": false, 1750 | "memoryGiB": 768, 1751 | "name": "ml.g5.48xlarge", 1752 | "vcpuNum": 192 1753 | }, 1754 | { 1755 | "_defaultOrder": 55, 1756 | "_isFastLaunch": false, 1757 | "category": "Accelerated 
computing", 1758 | "gpuNum": 8, 1759 | "hideHardwareSpecs": false, 1760 | "memoryGiB": 1152, 1761 | "name": "ml.p4d.24xlarge", 1762 | "vcpuNum": 96 1763 | }, 1764 | { 1765 | "_defaultOrder": 56, 1766 | "_isFastLaunch": false, 1767 | "category": "Accelerated computing", 1768 | "gpuNum": 8, 1769 | "hideHardwareSpecs": false, 1770 | "memoryGiB": 1152, 1771 | "name": "ml.p4de.24xlarge", 1772 | "vcpuNum": 96 1773 | }, 1774 | { 1775 | "_defaultOrder": 57, 1776 | "_isFastLaunch": false, 1777 | "category": "Accelerated computing", 1778 | "gpuNum": 0, 1779 | "hideHardwareSpecs": false, 1780 | "memoryGiB": 32, 1781 | "name": "ml.trn1.2xlarge", 1782 | "vcpuNum": 8 1783 | }, 1784 | { 1785 | "_defaultOrder": 58, 1786 | "_isFastLaunch": false, 1787 | "category": "Accelerated computing", 1788 | "gpuNum": 0, 1789 | "hideHardwareSpecs": false, 1790 | "memoryGiB": 512, 1791 | "name": "ml.trn1.32xlarge", 1792 | "vcpuNum": 128 1793 | }, 1794 | { 1795 | "_defaultOrder": 59, 1796 | "_isFastLaunch": false, 1797 | "category": "Accelerated computing", 1798 | "gpuNum": 0, 1799 | "hideHardwareSpecs": false, 1800 | "memoryGiB": 512, 1801 | "name": "ml.trn1n.32xlarge", 1802 | "vcpuNum": 128 1803 | } 1804 | ], 1805 | "instance_type": "ml.t3.medium", 1806 | "kernelspec": { 1807 | "display_name": "medf", 1808 | "language": "python", 1809 | "name": "medf" 1810 | }, 1811 | "language_info": { 1812 | "codemirror_mode": { 1813 | "name": "ipython", 1814 | "version": 3 1815 | }, 1816 | "file_extension": ".py", 1817 | "mimetype": "text/x-python", 1818 | "name": "python", 1819 | "nbconvert_exporter": "python", 1820 | "pygments_lexer": "ipython3", 1821 | "version": "3.11.5" 1822 | } 1823 | }, 1824 | "nbformat": 4, 1825 | "nbformat_minor": 5 1826 | } 1827 | -------------------------------------------------------------------------------- /notebooks/images/agentic_integration.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/notebooks/images/agentic_integration.png -------------------------------------------------------------------------------- /notebooks/images/agentic_orchestration.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/notebooks/images/agentic_orchestration.png -------------------------------------------------------------------------------- /notebooks/images/bedrock-agent-kb-dynamodb.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/notebooks/images/bedrock-agent-kb-dynamodb.png -------------------------------------------------------------------------------- /notebooks/images/multiagent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/notebooks/images/multiagent.png -------------------------------------------------------------------------------- /notebooks/multi_agent_collaboration.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "16df69d3-74d7-413b-82c7-f597f55eea78", 6 | "metadata": {}, 7 | "source": [ 8 | "# Multi-Agent Collaboration\n", 9 | "\n", 10 | "We will delve into the intricacies of agentic services and explore the strategies and best practices for cultivating effective multi-agent collaboration. 
We will examine how organizations can harness the collective intelligence of their agents, leverage emerging technologies to enhance coordination and communication, and navigate the challenges that often arise in complex, dynamic environments.\n", 11 | "\n", 12 | "\n", 13 | "
\n" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "id": "898c86e5-10f3-4dc3-93ac-2488b8e69259", 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "#!pip install crewai --force-reinstall\n", 24 | "#!pip install --force-reinstall -v \"setuptools<70\"" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": null, 30 | "id": "e5b795ed-a9b7-445c-8487-fde81b71fd42", 31 | "metadata": {}, 32 | "outputs": [], 33 | "source": [ 34 | "import os, sys\n", 35 | "from crewai import Agent, Task, Crew, Process\n", 36 | "from dotenv import load_dotenv\n", 37 | "from langchain.tools import DuckDuckGoSearchRun\n", 38 | "module_paths = [\"./\", \"../scripts\"]\n", 39 | "for module_path in module_paths:\n", 40 | " sys.path.append(os.path.abspath(module_path))\n", 41 | "from bedrock import *\n", 42 | "from crewai_tools import (\n", 43 | " DirectoryReadTool,\n", 44 | " FileReadTool,\n", 45 | " SerperDevTool,\n", 46 | " WebsiteSearchTool\n", 47 | ")\n", 48 | "\n", 49 | "load_dotenv()\n", 50 | "search_tool = DuckDuckGoSearchRun()\n", 51 | "web_rag_tool = WebsiteSearchTool()" 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "id": "fe7427ba-19da-4d4a-b579-04b747eedef3", 57 | "metadata": {}, 58 | "source": [ 59 | "## Define agents" 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": null, 65 | "id": "7b479cc6-f898-4bdd-a98a-eeb9b46529dc", 66 | "metadata": {}, 67 | "outputs": [], 68 | "source": [ 69 | "class blogAgents():\n", 70 | " def planner(self, topic, model_id):\n", 71 | " return Agent(\n", 72 | " role=\"Content Planner\",\n", 73 | " goal=f\"Plan engaging and factually accurate content on {topic}\",\n", 74 | " backstory=\"You're working on planning a blog article \"\n", 75 | " f\"about the topic: {topic}.\"\n", 76 | " \"You collect information by searhing the web for the latest developements that directly relate to the {topic}.\"\n", 77 | " \"audience learn something \"\n", 78 | " \"and make 
informed decisions. \"\n", 79 | " \"Your work is the basis for \"\n", 80 | " \"the Content Writer to write an article on this topic.\",\n", 81 | " allow_delegation=False,\n", 82 | " tools=[search_tool, web_rag_tool],\n", 83 | " llm=get_llm(model_id),\n", 84 | " verbose=True\n", 85 | " )\n", 86 | " \n", 87 | " def writer(self, topic, model_id):\n", 88 | " return Agent(\n", 89 | " role=\"Content Writer\",\n", 90 | " goal=f\"Write an insightful and factually accurate opinion piece about the topic: {topic}\",\n", 91 | " backstory=\"You're working on writing \"\n", 92 | " f\"a new opinion piece about the topic: {topic}. \"\n", 93 | " \"You base your writing on the work of \"\n", 94 | " \"the Content Planner, who provides an outline \"\n", 95 | " \"and relevant context about the topic. \"\n", 96 | " \"You follow the main objectives and \"\n", 97 | " \"direction of the outline, \"\n", 98 | " \"as provided by the Content Planner. \"\n", 99 | " \"You also provide objective and impartial insights \"\n", 100 | " \"and back them up with information \"\n", 101 | " \"provided by the Content Planner. \"\n", 102 | " \"You acknowledge in your opinion piece \"\n", 103 | " \"when your statements are opinions \"\n", 104 | " \"as opposed to objective statements.\",\n", 105 | " allow_delegation=False,\n", 106 | " llm=get_llm(model_id),\n", 107 | " verbose=True\n", 108 | " )\n", 109 | "\n", 110 | " def editor(self, model_id):\n", 111 | " return Agent(\n", 112 | " role=\"Editor\",\n", 113 | " goal=\"Edit a given blog post to align with \"\n", 114 | " \"the writing style of the organization. \",\n", 115 | " backstory=\"You are an editor who receives a blog post from the Content Writer. 
\"\n", 116 | " \"Your goal is to review the blog post to ensure that it follows journalistic best practices,\"\n", 117 | " \"provides balanced viewpoints when providing opinions or assertions, \"\n", 118 | " \"and also avoids major controversial topics or opinions when possible.\",\n", 119 | " allow_delegation=False,\n", 120 | " llm=get_llm(model_id),\n", 121 | " verbose=True\n", 122 | " )" 123 | ] 124 | }, 125 | { 126 | "cell_type": "markdown", 127 | "id": "f412e0e3-5ef8-4ed2-98c5-f44b8996cba0", 128 | "metadata": {}, 129 | "source": [ 130 | "## Define the tasks (plan, write and edit) for the agents we created above." 131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": null, 136 | "id": "44bb42d5-9396-45aa-863e-34ef2a4921a4", 137 | "metadata": {}, 138 | "outputs": [], 139 | "source": [ 140 | "class blogTasks():\n", 141 | " def plan(self, planner, topic): \n", 142 | " return Task(\n", 143 | " description=(\n", 144 | " \"1. Prioritize the latest trends, key players, \"\n", 145 | " f\"and noteworthy news on {topic}.\\n\"\n", 146 | " \"2. Identify the target audience, considering \"\n", 147 | " \"their interests and pain points.\\n\"\n", 148 | " \"3. Develop a detailed content outline including \"\n", 149 | " \"an introduction, key points, and a call to action.\\n\"\n", 150 | " \"4. Include SEO keywords and relevant data or sources.\"\n", 151 | " ),\n", 152 | " expected_output=\"A comprehensive content plan document \"\n", 153 | " \"with an outline, audience analysis, \"\n", 154 | " \"SEO keywords, and resources.\",\n", 155 | " agent=planner,\n", 156 | " )\n", 157 | " def write(self, writer, topic): \n", 158 | " return Task(\n", 159 | " description=(\n", 160 | " \"1. Use the content plan to craft a compelling \"\n", 161 | " f\"blog post on {topic}.\\n\"\n", 162 | " \"2. Incorporate SEO keywords naturally.\\n\"\n", 163 | " \"3. Sections/Subtitles are properly named \"\n", 164 | " \"in an engaging manner.\\n\"\n", 165 | " \"4. 
Ensure the post is structured with an \"\n", 166 | " \"engaging introduction, insightful body, \"\n", 167 | " \"and a summarizing conclusion.\\n\"\n", 168 | " \"5. Proofread for grammatical errors and \"\n", 169 | " \"alignment with the brand's voice.\\n\"\n", 170 | " ),\n", 171 | " expected_output=\"A well-written blog post \"\n", 172 | " \"in markdown format, ready for publication, \"\n", 173 | " \"each section should have 2 or 3 paragraphs.\",\n", 174 | " agent=writer,\n", 175 | " )\n", 176 | " \n", 177 | " def edit(self, editor):\n", 178 | " return Task(\n", 179 | " description=(\"Proofread the given blog post for \"\n", 180 | " \"grammatical errors and \"\n", 181 | " \"alignment with the brand's voice.\"),\n", 182 | " expected_output=\"A well-written blog post in markdown format, \"\n", 183 | " \"ready for publication, \"\n", 184 | " \"each section should have 2 or 3 paragraphs.\",\n", 185 | " agent=editor\n", 186 | " )" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "id": "77c9e235-2cbc-4d80-bc31-9c43ada63b3a", 192 | "metadata": {}, 193 | "source": [ 194 | "## It’s time to assemble the crew. Combine the agents into our awesome crew." 
195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": null, 200 | "id": "50b03f9f-23d6-4ea6-8f7a-6f8ee56e94db", 201 | "metadata": {}, 202 | "outputs": [], 203 | "source": [ 204 | "class blogCrew:\n", 205 | " def __init__(self, topic, model_id):\n", 206 | " self.topic = topic\n", 207 | " self.model_id = model_id\n", 208 | "\n", 209 | " def run(self):\n", 210 | " agents = blogAgents()\n", 211 | " tasks = blogTasks()\n", 212 | "\n", 213 | " planner_agent = agents.planner(self.topic, self.model_id)\n", 214 | " writer_agent = agents.writer(self.topic, self.model_id)\n", 215 | " editor_agent = agents.editor(self.model_id)\n", 216 | "\n", 217 | " plan_task = tasks.plan(planner_agent, self.topic)\n", 218 | " write_task = tasks.write(writer_agent, self.topic)\n", 219 | " edit_task = tasks.edit(editor_agent)\n", 220 | "\n", 221 | "\n", 222 | " crew = Crew(\n", 223 | " agents=[planner_agent, writer_agent, editor_agent],\n", 224 | " tasks=[plan_task, write_task, edit_task],\n", 225 | " verbose=True,\n", 226 | " memory=True,\n", 227 | " embedder={\n", 228 | " \"provider\": \"huggingface\",\n", 229 | " \"config\": {\"model\": \"sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2\"},\n", 230 | " },\n", 231 | " cache=True,\n", 232 | " process=Process.sequential # Sequential process: tasks are executed one after the other, and the outcome of the previous task is passed as context to the next\n", 233 | " )\n", 234 | "\n", 235 | " result = crew.kickoff()\n", 236 | " return result" 237 | ] 238 | }, 239 | { 240 | "cell_type": "markdown", 241 | "id": "998a1a25-5b2d-400b-b8e0-00fd890979b3", 242 | "metadata": {}, 243 | "source": [ 244 | "## Write a blog post based on the input topic" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "4b83fe08-12ed-479c-aefc-2e2c959b33ef", 251 | "metadata": { 252 | "scrolled": true 253 | }, 254 | "outputs": [], 255 | "source": [ 256 | "topic = \"Write a release note for Amazon Q\"\n", 257 | "model_id = 
\"anthropic.claude-3-haiku-20240307-v1:0\"\n", 258 | "blog_crew = blogCrew(topic, model_id)\n", 259 | "result = blog_crew.run()" 260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "id": "791ad4fe-7df5-4f4f-a8ff-e05512ca13d6", 265 | "metadata": {}, 266 | "source": [ 267 | "## (Optional) Display the final output in markdown format " 268 | ] 269 | }, 270 | { 271 | "cell_type": "code", 272 | "execution_count": null, 273 | "id": "9734e88e-36f2-4975-8158-a4f28070c227", 274 | "metadata": {}, 275 | "outputs": [], 276 | "source": [ 277 | "from IPython.display import Markdown\n", 278 | "Markdown(result.raw)" 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": null, 284 | "id": "2c5546b5-6d94-46b4-969f-08efccfeab2e", 285 | "metadata": {}, 286 | "outputs": [], 287 | "source": [] 288 | } 289 | ], 290 | "metadata": { 291 | "kernelspec": { 292 | "display_name": "medf", 293 | "language": "python", 294 | "name": "medf" 295 | }, 296 | "language_info": { 297 | "codemirror_mode": { 298 | "name": "ipython", 299 | "version": 3 300 | }, 301 | "file_extension": ".py", 302 | "mimetype": "text/x-python", 303 | "name": "python", 304 | "nbconvert_exporter": "python", 305 | "pygments_lexer": "ipython3", 306 | "version": "3.11.5" 307 | } 308 | }, 309 | "nbformat": 4, 310 | "nbformat_minor": 5 311 | } 312 | -------------------------------------------------------------------------------- /notebooks/reasoning_with_langgraph_bedrock_workshop.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "52b30b6b-e2c4-4c9c-905f-530125707d29", 6 | "metadata": {}, 7 | "source": [ 8 | "# Build an Multi-agentic Services with Orchestration for Reasoning\n", 9 | "\n", 10 | "This workshop will focus on creating a sophisticated system of coordinated AI agents. We'll incorporate recent breakthroughs in generative AI to enhance the system's reasoning capabilities. 
By leveraging the power of LangGraph and the advanced language models available through Amazon Bedrock, including Claude-3.5 and LLaMA-3.1, we will create a versatile and capable system that can tackle a wide range of challenges.\n", 11 | "\n", 12 | "\n", 13 | "\n", 14 | "Key Features:\n", 15 | "\n", 16 | "* Agentic Service: The agent will be designed as a service, allowing for seamless integration and deployment in various applications.\n", 17 | "* Dynamic Prompt Rewriting: The agent will dynamically rewrite prompts to optimize the responses from the underlying language models, ensuring more accurate and informative outputs.\n", 18 | "* Adaptive Routing: Inspired by the Semantic Router, the routing agent will intelligently route requests to retrieval, web search, or pre-trained LLMs for the most desirable answers, leveraging the strengths of each method for optimal performance. This adaptive routing mechanism will ensure that the agent can effectively handle a diverse set of queries and tasks.\n", 19 | "* Hallucination Grader: The agent will include a hallucination grader component to assess the reliability of the generated responses. This will help identify and correct any hallucinations or incomplete answers.\n", 20 | "* Human Involvement: If needed, the agent will be able to involve human subject matter experts to provide additional verification and correction of the responses, further improving the trustworthiness and reliability of the system.\n", 21 | "\n", 22 | "By combining these advanced reasoning techniques, the agentic services with orchestration will be able to provide more accurate and informative responses to challenging queries. This will be a significant step forward in the development of intelligent agents that can truly understand and respond to complex questions.\n", 23 | "\n", 24 | "To build this powerful system, we will use LangGraph to create complex, multi-step workflows that involve language models and other components. 
This will allow us to develop a flexible and scalable system that can handle a wide range of tasks.\n", 25 | "\n", 26 | "To further enhance our capabilities, we will port the original notebook to utilize Amazon Bedrock for LLM inference. This will enable us to leverage the cloud processing power and take advantage of the advanced language models available through Amazon Bedrock, such as Claude-3 and LLaMA-3. By harnessing the power of these cutting-edge language models, we will be able to push the boundaries of what is possible with intelligent agents.\n", 27 | "\n", 28 | "While the choice of vector stores (local chromaDB) will remain unchanged for now, we will explore how to scale this part in future blog posts, ensuring that our system can handle ever-growing amounts of data and information.\n", 29 | "\n", 30 | "Join us in this exciting workshop as we embark on a journey to create an intelligent agent that redefines the boundaries of what is possible with language-based AI systems. Together, we will explore the latest advancements in the field and push the limits of what can be achieved.\n", 31 | "\n" 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": null, 37 | "id": "f1665046-6a6e-43f8-a6d7-46e41c13ed18", 38 | "metadata": { 39 | "scrolled": true 40 | }, 41 | "outputs": [], 42 | "source": [ 43 | "!pip install -q -r ../requirements2.txt -U " 44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | "id": "2eaf8ec0-2a7c-4283-a03f-e8ead72e49b1", 49 | "metadata": {}, 50 | "source": [ 51 | "**Restart the kernel after pkg install**" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": null, 57 | "id": "3753831a-fefc-4097-9bda-3e0e1393f7a4", 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "import os\n", 62 | "os._exit(00)" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "id": "ee6fa942-0a95-40a1-87f7-c3d98806a015", 68 | "metadata": {}, 69 | "source": [ 70 | "## 1. 
Setting Up API keys or tokens \n", 71 | "\n", 72 | "To access various services, such as Amazon Bedrock for Large Language Models (LLMs) and embedding models, the Tavily web search engine, and, optionally, LangSmith for LangChain tracing, you will need to set up and obtain the necessary API keys or tokens. These API keys and tokens serve as authentication credentials that allow your application to securely connect and interact with the respective services." 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": null, 78 | "id": "c41bbfd4-9225-4747-9e3a-262af2b8dca7", 79 | "metadata": { 80 | "tags": [] 81 | }, 82 | "outputs": [], 83 | "source": [ 84 | "import sys\n", 85 | "import os\n", 86 | "import boto3\n", 87 | "import json\n", 88 | "import requests\n", 89 | "\n", 90 | "\n", 91 | "aws_region = \"us-west-2\" # choose the region you operate in\n", 92 | "os.environ['TAVILY_API_KEY'] = tavily_ai_api_key = 'tvly-NA' # For extra search results; optional\n", 93 | "os.environ['OPENAI_API_KEY'] = openai_api_key = 'sk-NA' # Only when you elect to use OpenAI's ada as the embedding model; otherwise assign an empty key. \n", 94 | "# Temp image file\n", 95 | "temp_gen_image = \"./delme.png\"\n", 96 | "markdown_filename = \"./blogpost.md\"\n", 97 | "\n", 98 | "module_paths = [\"./\", \"../scripts\"]\n", 99 | "for module_path in module_paths:\n", 100 | " sys.path.append(os.path.abspath(module_path))\n", 101 | "\n", 102 | "from blog_writer import *\n", 103 | "from bedrock import *\n", 104 | "\n", 105 | "#os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", 106 | "#os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n", 107 | "#os.environ[\"LANGCHAIN_API_KEY\"] = langchain_api_key" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "id": "c1a16bda-dd03-4e7a-abb4-1da18f0f55af", 113 | "metadata": {}, 114 | "source": [ 115 | "## 2. Creating a Bedrock Runtime Client\n", 116 | "We'll create a Bedrock runtime client to connect to the Amazon Bedrock service.
Bedrock, a fully managed AWS service, provides access to foundation models, including large language models (LLMs), from leading AI companies through a single API. This client will enable us to leverage the pre-trained LLMs available on Bedrock, such as Meta's powerful Llama 3 model.\n", 117 | "\n", 118 | "Connecting to Bedrock is crucial for building our scalable and secure RAG agent, as it provides the language models needed for generation. With the Bedrock runtime client in place, we can integrate Llama 3 into our workflow and use its advanced natural language processing capabilities to generate accurate responses." 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "id": "8293dd71-f502-46f6-b039-935ff458accb", 125 | "metadata": { 126 | "tags": [] 127 | }, 128 | "outputs": [], 129 | "source": [ 130 | "### Select models\n", 131 | "import ipywidgets as widgets\n", 132 | "from ipywidgets import interact, interactive, fixed\n", 133 | "\n", 134 | "options = [\"mistral.mistral-large-2407-v1:0\", \"anthropic.claude-3-haiku-20240307-v1:0\", \"anthropic.claude-3-5-sonnet-20240620-v1:0\", \"meta.llama3-1-70b-instruct-v1:0\"]\n", 135 | "# Create the dropdown widget\n", 136 | "dropdown = widgets.Dropdown(\n", 137 | "    options=options,\n", 138 | "    value=options[1],\n", 139 | "    description='Choose an option:',\n", 140 | "    disabled=False,\n", 141 | ")\n", 142 | "\n", 143 | "# Display the dropdown widget\n", 144 | "display(dropdown)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "id": "ed332f82-89e8-4a99-aa84-1a4533042ee4", 151 | "metadata": { 152 | "tags": [] 153 | }, 154 | "outputs": [], 155 | "source": [ 156 | "model_id = dropdown.value\n", 157 | "llm = get_llm(model_id)\n", 158 | "model_id_l31 = 'meta.llama3-1-70b-instruct-v1:0'\n", 159 | "model_id_c35 = \"anthropic.claude-3-sonnet-20240229-v1:0\" # Due to model access restriction #'anthropic.claude-3-5-sonnet-20240620-v1:0' \n", 160 | "model_id_mistral_large =
'mistral.mistral-large-2407-v1:0'\n", 161 | "# Choose multiple models for different purposes to diversify and avoid potential bias \n", 163 | "llm_llama31 = get_llm(model_id_l31)\n", 164 | "llm_claude35 = get_llm(model_id_c35)\n", 165 | "llm_mistral = get_llm(model_id_mistral_large)" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "id": "b058bb22-85c0-4f4f-9795-fd686ff9466a", 171 | "metadata": {}, 172 | "source": [ 173 | "## 3. Create agentic services with multi-agent capability\n", 174 | "\n", 175 | "Creating agentic services with multi-agent capability using Amazon Bedrock, the Converse API, and LangChain can be a powerful approach to building intelligent and collaborative systems. Amazon Bedrock provides access to a range of large language models (LLMs) and the means to integrate them into applications, while the Converse API offers a single, consistent interface for invoking those models. LangChain, on the other hand, offers a framework for building complex, multi-agent systems that can leverage the capabilities of various LLMs and other AI components. By combining these tools, developers can create agentic services that engage in dynamic, context-aware interactions, share knowledge, and coordinate their efforts to tackle complex tasks. This approach can be particularly useful in scenarios where a diverse set of specialized agents needs to collaborate, such as in enterprise automation, customer service, or research and development."
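The Converse API mentioned above takes a uniform request shape regardless of the underlying model. As a minimal sketch, the following dependency-free helper builds that payload; `build_converse_request` is a hypothetical name and no AWS call is made here:

```python
# Sketch of the Amazon Bedrock Converse API request shape used in this notebook.
# build_converse_request is a hypothetical helper; it only builds the payload.
def build_converse_request(model_id: str, user_text: str,
                           max_tokens: int = 2048, temperature: float = 0.01,
                           top_p: float = 0.95) -> dict:
    return {
        "modelId": model_id,
        # Converse expects a list of turns; each turn's content is a list of blocks.
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens,
                            "temperature": temperature,
                            "topP": top_p},
    }

request = build_converse_request("anthropic.claude-3-haiku-20240307-v1:0",
                                 "What is agentic orchestration?")
print(sorted(request.keys()))  # -> ['inferenceConfig', 'messages', 'modelId']
```

The payload can then be unpacked into `bedrock_client.converse(**request)`, which is exactly how the `llm_node` later in this notebook calls the API.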
176 | ] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": null, 181 | "id": "1a35bb0a-ba58-4a31-a518-a3a067899c5f", 182 | "metadata": { 183 | "tags": [] 184 | }, 185 | "outputs": [], 186 | "source": [ 187 | "from IPython.display import Image, Markdown\n", 188 | "from langgraph.prebuilt import create_react_agent\n", 189 | "from chromadb import Documents, EmbeddingFunction, Embeddings\n", 190 | "from langchain_aws import BedrockEmbeddings\n", 191 | "from langchain.text_splitter import RecursiveCharacterTextSplitter\n", 192 | "from langchain_community.document_loaders import WebBaseLoader\n", 193 | "from langchain_community.vectorstores import Chroma\n", 194 | "from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage, AnyMessage\n", 195 | "from typing import TypedDict\n", 196 | "from langgraph.graph import StateGraph, END\n", 197 | "from langgraph.checkpoint.memory import MemorySaver\n", 198 | "from botocore.exceptions import ClientError\n", "from botocore.config import Config # needed for the retry config below\n", 199 | "from langchain.prompts import PromptTemplate, ChatPromptTemplate\n", 200 | "from langchain_core.output_parsers import JsonOutputParser, StrOutputParser\n", 201 | "from langchain_community.tools.tavily_search import TavilySearchResults\n", 202 | "from langchain_community.tools import DuckDuckGoSearchResults" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": null, 208 | "id": "446454c8-42b3-4b91-8a49-d6dc3c1ef3a6", 209 | "metadata": { 210 | "tags": [] 211 | }, 212 | "outputs": [], 213 | "source": [ 214 | "config = Config(\n", 215 | "    retries = dict(\n", 216 | "        max_attempts = 10,\n", 217 | "        total_max_attempts = 25,\n", 218 | "    )\n", 219 | " )\n", 220 | "bedrock_client = boto3.client(\"bedrock-runtime\", config=config) " 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "id": "e3617811-2627-4c3f-aef4-2be4797b80e7", 227 | "metadata": { 228 | "tags": [] 229 | }, 230 | "outputs": [], 231 | "source": [ 232 | "class
MyEmbeddingFunction(EmbeddingFunction):\n", 233 | "    def __init__(self, client, region_name: str, model_id: str):\n", 234 | "        self.embedder = BedrockEmbeddings(\n", 235 | "            client=client,\n", 236 | "            region_name=region_name,\n", 237 | "            model_id=model_id\n", 238 | "        )\n", 239 | "    def embed_query(self, query: str) -> Embeddings:\n", 240 | "        return self.embedder.embed_query(query)\n", 241 | "    def embed_documents(self, documents: list[str]) -> Embeddings:\n", 242 | "        return self.embedder.embed_documents(documents)\n", 243 | "\n", 244 | "class MultiAgentState(TypedDict):\n", 245 | "    question: str\n", 246 | "    question_type: str\n", 247 | "    answer: str\n", 248 | "    feedback: str\n", 249 | "\n", 250 | "\n", 251 | "memory = MemorySaver()\n", 252 | "embedding_model_id = \"amazon.titan-embed-text-v2:0\"\n", 253 | "\n", 254 | "####\n", 255 | "# Router\n", 256 | "###\n", 257 | "def route_question(state: MultiAgentState):\n", 258 | "    print('route function execution')\n", 259 | "    print(state)\n", 260 | "    return state['question_type']\n", 261 | "\n", 262 | "\n", 263 | "####\n", 264 | "# rewrite the question\n", 265 | "####\n", 266 | "def rewrite_node(state: MultiAgentState):\n", 267 | "    \"\"\"\n", 268 | "    Rewrite the user query to match a domain expert's phrasing\n", 269 | "    Args:\n", 270 | "        question (str): The user query\n", 271 | "    Returns:\n", 272 | "        prompt (str): the rewritten question, phrased as an expert prompt\n", 273 | "    \"\"\"\n", 274 | "    print(\"---REWRITE QUESTION---\")\n", 275 | "    c3_template = \"\"\"Rewrite the question by following the instructions below to capture more precise and comprehensive intent from {question}.\n", 276 | "    \n", 277 | "    Identify the key purposes, concepts and entities in the original question. \n", 278 | "    Rephrase the question to be more specific and focused, ensuring that the language is clear and unambiguous.
\n", 279 | " Provide additional context or background information that may be helpful for web search or RAG system to better understand and respond to the question. \n", 280 | " Output your reqritten question only without answering it or repeating the riginal one.\n", 281 | " \n", 282 | " \"\"\"\n", 283 | " \n", 284 | " c3_prompt = ChatPromptTemplate.from_template(c3_template)\n", 285 | " #chain = ( c3_prompt | llm_c3 | StrOutputParser() | (lambda x: x.split(\"\\n\")))\n", 286 | " rewritten_chain = ( c3_prompt | llm | StrOutputParser() )\n", 287 | " rewritten_question = rewritten_chain.invoke({\"question\": state['question']})\n", 288 | " print(rewritten_question)\n", 289 | " if os.path.exists(temp_gen_image):\n", 290 | " os.remove(temp_gen_image)\n", 291 | " return {\"answer\": rewritten_question}\n", 292 | "\n", 293 | " \n", 294 | "#####\n", 295 | "# Router agent\n", 296 | "#####\n", 297 | "question_category_prompt = '''You are a senior specialist of analytical support. Your task is to classify the incoming questions. \n", 298 | "Depending on your answer, question will be routed to the right team, so your task is crucial for our team. \n", 299 | "There are 5 possible question types: \n", 300 | "- Vectorstore - Answer questions related to pre-indexed healthcare and medical research related topics stored in the vactorestore.\n", 301 | "- Websearch- Answer questions based on events happened recently, after most LLM's cut-off dates. 
\n", 302 | "- General - Answer questions for LLM or a few LLMs.\n", 303 | "- Text2image - Generate an image from text input.\n", 304 | "- Booking - Assist in restaurant reservation booking.\n", 305 | "- BlogWriter - Writer a blog post about the provided topic as a professional writer.\n", 306 | "Return in the output only one word (VECTORSTORE, WEBSEARCH, GENERAL, TEXT2IMAGE, BOOKING or BLOGWRITER).\n", 307 | "'''\n", 308 | "\n", 309 | "def router_node(state: MultiAgentState):\n", 310 | " print('Router node started execution')\n", 311 | " messages = [\n", 312 | " SystemMessage(content=question_category_prompt), \n", 313 | " HumanMessage(content=state['question'])\n", 314 | " ]\n", 315 | " response = llm.invoke(messages)\n", 316 | " print('Question type: %s' % response.content)\n", 317 | " return {\"question_type\": response.content}\n", 318 | "\n", 319 | "\n", 320 | "#####\n", 321 | "# Search agent\n", 322 | "#####\n", 323 | "def search_expert_node(state: MultiAgentState):\n", 324 | " tavily_tool = TavilySearchResults(max_results=5)\n", 325 | " duck_search = DuckDuckGoSearchResults()\n", 326 | "\n", 327 | " search_expert_system_prompt = '''\n", 328 | " You are an expert in LangChain and other technologies. \n", 329 | " Your goal is to answer questions based on results provided by search.\n", 330 | " You don't add anything yourself and provide only information baked by other sources. 
\n", 331 | " '''\n", 332 | " search_agent = create_react_agent(llm, [duck_search, tavily_tool],\n", 333 | " state_modifier = search_expert_system_prompt)\n", 334 | " messages = [HumanMessage(content=state['question'])]\n", 335 | " result = search_agent.invoke({\"messages\": messages})\n", 336 | " return {'answer': result['messages'][-1].content}\n", 337 | "\n", 338 | "\n", 339 | "#######\n", 340 | "# RAG\n", 341 | "#######\n", 342 | "def rag_node(state: MultiAgentState):\n", 343 | " urls = [\n", 344 | " \"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11127599/\",\n", 345 | " \"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11127585/\",\n", 346 | " \"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11127581/\"\n", 347 | " ]\n", 348 | " c3_template = \"\"\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. Use less than 10 sentences maximum and keep the answer concise. \n", 349 | " \n", 350 | " {context} \n", 351 | " \n", 352 | " Use these to craft an answer to the question: {question}\"\"\"\n", 353 | " c3_prompt = ChatPromptTemplate.from_template(c3_template)\n", 354 | " \n", 355 | " docs = [WebBaseLoader(url).load() for url in urls] \n", 356 | " docs_list = [item for sublist in docs for item in sublist]\n", 357 | " text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n", 358 | " chunk_size=4000, chunk_overlap=400\n", 359 | " )\n", 360 | " doc_splits = text_splitter.split_documents(docs_list)\n", 361 | "\n", 362 | " embedding_function = MyEmbeddingFunction(client = bedrock_client,\n", 363 | " region_name=aws_region,\n", 364 | " model_id=embedding_model_id)\n", 365 | " # Add to vectorDB\n", 366 | " vectorstore = Chroma.from_documents(\n", 367 | " documents=doc_splits,\n", 368 | " embedding=embedding_function,\n", 369 | " collection_name=\"rag-chroma-titan-embed-text-v2-1\",\n", 370 | " )\n", 371 | " retriever = vectorstore.as_retriever(\n", 372 | " search_type=\"mmr\",\n", 373 | " 
search_kwargs={'k': 3, 'lambda_mult': 0.25})\n", 374 | "    rag_chain = c3_prompt | llm | StrOutputParser()\n", 375 | "    documents = retriever.invoke(state['question'])\n", 376 | "    generation = rag_chain.invoke({\"context\": documents, \"question\": state['question']})\n", 377 | "    #generation = rag_chain.invoke({\"context\": documents, \"question\": state['answer']}) # Use the rewritten question instead\n", 378 | "    return {'answer': generation}\n", 379 | "\n", 380 | "\n", 381 | "#####\n", 382 | "# LLM node\n", 383 | "####\n", 384 | "def llm_node(state: MultiAgentState):\n", 385 | "    model_ids = [model_id_mistral_large, model_id_l31]\n", 386 | "    max_tokens = 2048\n", 387 | "    temperature = 0.01\n", 388 | "    top_p = 0.95\n", 389 | "\n", 390 | "    conversation = [\n", 391 | "        {\n", 392 | "            \"role\": \"user\",\n", 393 | "            #\"system\": \"You are a domain expert who can understand the intent of user query and answer question truthful and professionally. Please, don't provide any unchecked information and just tell that you don't know if you don't have enough info.\",\n", 394 | "            \"content\": [{\"text\": state['question']}],\n", 395 | "        }\n", 396 | "    ]\n", 397 | "    try:\n", 398 | "        # Send the message to the model, using a basic inference configuration.\n", 399 | "        responses = []\n", 400 | "        for model_id in model_ids:\n", 401 | "            response = bedrock_client.converse(\n", 402 | "                modelId=model_id,\n", 403 | "                messages=conversation,\n", 404 | "                inferenceConfig={\"maxTokens\": max_tokens, \"temperature\": temperature, \"topP\": top_p},\n", 405 | "            )\n", 406 | "    \n", 407 | "            # Extract and print the response text.\n", 408 | "            responses.append( response[\"output\"][\"message\"][\"content\"][0][\"text\"])\n", 409 | "\n", 410 | "        ###\n", 411 | "        # Combine the answers to form a unified one\n", 412 | "        ###\n", 413 | "        c3_template = \"\"\"You are a domain expert. Merge the candidate responses below and eliminate redundant elements, producing a single answer that captures the essence of all inputs while adhering
to the following instructions.\n", 414 | "    \n", 415 | "    Aggregate relevant information from the provided context. \n", 416 | "    Eliminate redundancies to ensure a concise response. \n", 417 | "    Maintain fidelity to the original content. \n", 418 | "    Add relevant supporting information and remove irrelevant information.\n", 419 | "    \n", 420 | "    \n", 421 | "    {responses}\n", 422 | "    \n", 423 | "    \"\"\"\n", 424 | "    \n", 425 | "        messages = [\n", 426 | "            SystemMessage(content=c3_template.format(responses=responses)), # fill in the collected model responses\n", 427 | "            HumanMessage(content=state['question'])\n", 428 | "        ]\n", 429 | "\n", 430 | "        return {'answer': llm_claude35.invoke(messages)}\n", 431 | "    except Exception as e:\n", 432 | "        print(f\"ERROR: Can't invoke '{model_id}'. Reason: {e}\")\n", 433 | "\n", 434 | "\n", 435 | "####\n", 436 | "# Human in the loop\n", 437 | "###\n", 438 | "\n", 439 | "def human_feedback_node(state: MultiAgentState):\n", 440 | "    editor_prompt = '''You're an editor and your goal is to provide the final answer to the customer, taking into account the feedback. \n", 441 | "    You don't add any information on your own. You use a friendly and professional tone.
\n", 442 | " In the output please provide the final answer to the customer without additional comments.\n", 443 | " Here's all the information you need.\n", 444 | " \n", 445 | " \n", 446 | " Question from customer: \n", 447 | " ----\n", 448 | " {question}\n", 449 | " ----\n", 450 | " Draft answer:\n", 451 | " ----\n", 452 | " {answer}\n", 453 | " ----\n", 454 | " Feedback: \n", 455 | " ----\n", 456 | " {feedback}\n", 457 | " ----\n", 458 | " '''\n", 459 | " print(state)\n", 460 | " messages = [\n", 461 | " SystemMessage(content=editor_prompt.format(question = state['question'], answer = state['answer'], feedback = state['feedback']))\n", 462 | " ]\n", 463 | " response = llm.invoke(messages)\n", 464 | " return {\"answer\": response.content}\n", 465 | "\n", 466 | "def editor_node(state: MultiAgentState):\n", 467 | " pass\n", 468 | " print(state)\n", 469 | " messages = [\n", 470 | " SystemMessage(content=editor_prompt.format(question = state['question'], answer = state['answer'], feedback = state['feedback']))\n", 471 | " ]\n", 472 | " response = llm.invoke(messages)\n", 473 | " return {\"answer\": response.content}\n", 474 | "\n", 475 | "#####\n", 476 | "# multi-agent collaboration node\n", 477 | "#####\n", 478 | "def blog_writer_node(state: MultiAgentState):\n", 479 | " blog_crew = blogCrew(topic=state['answer'], model_id=model_id_c35)\n", 480 | " result = blog_crew.run()\n", 481 | "\n", 482 | " ## Werite to a Markdown file\n", 483 | " if os.path.exists(markdown_filename):\n", 484 | " os.remove(markdown_filename)\n", 485 | " # Create the Markdown format and Save the Markdown text to a file\n", 486 | " markdown_text = f\"# Sample Text\\n\\n{result.raw}\\n\\n![Image]({temp_gen_image})\"\n", 487 | " with open(markdown_filename, \"w\") as file:\n", 488 | " file.write(markdown_text)\n", 489 | "\n", 490 | " return {\"answer\": result}" 491 | ] 492 | }, 493 | { 494 | "cell_type": "markdown", 495 | "id": "671746d7-14f6-4c68-99cb-0f9b60573fcf", 496 | "metadata": {}, 497 | 
"source": [ 498 | "#### Additional functions" 499 | ] 500 | }, 501 | { 502 | "cell_type": "code", 503 | "execution_count": null, 504 | "id": "a015677c-9fce-45b1-8b6b-ce8b4f5fa09c", 505 | "metadata": { 506 | "tags": [] 507 | }, 508 | "outputs": [], 509 | "source": [ 510 | "#####\n", 511 | "# Hallucination grader\n", 512 | "#####\n", 513 | "from langchain.callbacks.base import BaseCallbackHandler\n", 514 | "import random\n", 515 | "import base64\n", 516 | "\n", 517 | "class MyCustomHandler(BaseCallbackHandler):\n", 518 | "    def on_llm_end(self, response, **kwargs):\n", 519 | "        print(f\"Response: {response}\")\n", 520 | "        \n", 521 | "def hallucination_grader_node(state:MultiAgentState):\n", 522 | "    c3_template = \"\"\"You are a grader assessing whether an answer is grounded in / supported by facts. \n", 523 | "    Give a binary 'pass' or 'fail' score to indicate whether the answer is grounded in / supported by a \n", 524 | "    set of facts in your best knowledge. Provide the binary score as a JSON with a single key 'score' and no preamble or explanation.\n", 525 | "    \n", 526 | "    Here is the answer: {answer}\"\"\"\n", 527 | "    c3_prompt = ChatPromptTemplate.from_template(c3_template)\n", 528 | "    \n", 529 | "    # Grade with a different model, in this case Claude\n", 530 | "    #hallucination_grader = prompt | llm_llama31 | JsonOutputParser() \n", 531 | "    hallucination_grader = c3_prompt | llm_claude35 | JsonOutputParser()\n", 532 | "    score = hallucination_grader.invoke({\"answer\": state['answer']}, config={\"callbacks\": [MyCustomHandler()]})\n", 533 | "    return {'answer': score}\n", 534 | "\n", 535 | "def hallucination_grader(state:MultiAgentState):\n", 536 | "    c3_template = \"\"\"You are a grader assessing whether an answer is grounded in / supported by facts. \n", 537 | "    Give a binary 'pass' or 'fail' score to indicate whether the answer is grounded in / supported by a \n", 538 | "    set of facts in your best knowledge.
Provide the binary score as a JSON with a single key 'score' and no preamble or explanation.\n", 539 | "    \n", 540 | "    Here is the answer: {answer}\"\"\"\n", 541 | "    c3_prompt = ChatPromptTemplate.from_template(c3_template)\n", 542 | "    \n", 543 | "    # Grade with a different model, in this case Claude\n", 544 | "    #hallucination_grader = prompt | llm_llama31 | JsonOutputParser() \n", 545 | "    hallucination_grader = c3_prompt | llm_claude35 | JsonOutputParser()\n", 546 | "    score = hallucination_grader.invoke({\"answer\": state['answer']}, config={\"callbacks\": [MyCustomHandler()]})\n", 547 | "    if \"pass\" in score['score'].lower():\n", 548 | "        # The grader judged the answer to be grounded in facts,\n", 549 | "        # so the flow can finish\n", 550 | "        print(\n", 551 | "            \"---DECISION: the answer does not seem to contain hallucination ---\"\n", 552 | "        )\n", 553 | "        return \"END\"\n", 554 | "    else:\n", 555 | "        # The grader flagged the answer, so route it to human review\n", 556 | "        print(\"---DECISION: the answer might contain hallucination, next off to human review ---\")\n", 557 | "        return \"to_human\"\n", 558 | "\n", 559 | "\n", 560 | "####\n", 561 | "# Extra function but not as a node\n", 562 | "####\n", 563 | "def decide_to_search(state:MultiAgentState):\n", 564 | "    \"\"\"\n", 565 | "    Determines whether to generate an answer, or add web search\n", 566 | "    Args:\n", 567 | "        state (dict): The current graph state\n", 568 | "    Returns:\n", 569 | "        str: Binary decision for next node to call\n", 570 | "    \"\"\"\n", 571 | "    l31_prompt = PromptTemplate(\n", 572 | "        template=\"\"\" <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a grader assessing whether\n", 573 | "        an {answer} is grounded in / relevant to the {question}. Give a binary 'yes' or 'no' score to indicate\n", 574 | "        whether the answer is grounded in / supported by a set of facts. Provide the binary score as a JSON with a\n", 575 | "        single key 'score' and no preamble or explanation.
<|eot_id|><|start_header_id|>user<|end_header_id|>\n", 576 | "        Here is the answer:\n", 577 | "        {answer}\n", 578 | "        Here is the question: {question} <|eot_id|><|start_header_id|>assistant<|end_header_id|>\"\"\",\n", 579 | "        input_variables=[\"question\", \"answer\"],\n", 580 | "    )\n", 581 | "    \n", 582 | "    answer_grader = l31_prompt | llm_llama31 | JsonOutputParser()\n", 583 | "    print(\"---ASSESS GRADED ANSWER AGAINST QUESTION---\")\n", 584 | "    relevance = answer_grader.invoke({\"answer\": state[\"answer\"], \"question\": state[\"question\"]})\n", 585 | "    print(relevance)\n", 586 | "    if \"yes\" in relevance['score'].lower():\n", 587 | "        # The answer is grounded in / relevant to the question,\n", 588 | "        # so it can go to human review\n", 589 | "        print(\n", 590 | "            \"---DECISION: the answer is relevant to the question so it's ready for human review ---\"\n", 591 | "        )\n", 592 | "        return \"to_human\"\n", 593 | "    else:\n", 594 | "        # The answer is not relevant, so fall back to web search\n", 595 | "        print(\"---DECISION: the answer is NOT relevant to the question then try web search ---\")\n", 596 | "        return \"do_search\"\n", 597 | "\n", 598 | "#####\n", 599 | "# text 2 image generation\n", 600 | "####\n", 601 | "def t2i_node2(state:MultiAgentState):\n", 602 | "    negative_prompts = [\n", 603 | "        \"poorly rendered\",\n", 604 | "        \"poor background details\",\n", 605 | "        \"poorly drawn objects\",\n", 606 | "        \"poorly focused objects\",\n", 607 | "        \"disfigured object features\",\n", 608 | "        \"cartoon\",]\n", 609 | "    body = json.dumps(\n", 610 | "        {\n", 611 | "            \"taskType\": \"TEXT_IMAGE\",\n", 612 | "            \"textToImageParams\": {\n", 613 | "                #\"text\":state['answer'].replace(\"{Rewritten Question}:\\n\\n\", \"\")[:510], # Required, Titan image gen v2 limits up to 512 token input\n", 614 | "                \"text\":state['question'][:511],\n", 615 | "                \"negativeText\": \"poorly rendered, disfigured object features\" #negative_prompts # Optional\n", 616 | "            },\n", 617 | "            \"imageGenerationConfig\": {\n",
618 | "                \"numberOfImages\": 1,   # Range: 1 to 5 \n", 619 | "                \"quality\": 'premium',  # Options: standard or premium\n", 620 | "                \"height\": 1024,        # Supported height list in the docs \n", 621 | "                \"width\": 1024,         # Supported width list in the docs\n", 622 | "                \"cfgScale\": 6.5,       # Range: 1.0 (exclusive) to 10.0\n", 623 | "                \"seed\": random.randint(1, 2147483647)  # Range: 1 to 2147483647\n", 624 | "            }\n", 625 | "        }\n", 626 | "    )\n", 627 | "\n", 628 | "\n", 629 | "    if os.path.exists(temp_gen_image):\n", 630 | "        os.remove(temp_gen_image)\n", 631 | "    response = bedrock_client.invoke_model(\n", 632 | "        body=body, \n", 633 | "        modelId=\"amazon.titan-image-generator-v2:0\",\n", 634 | "        accept=\"application/json\", \n", 635 | "        contentType=\"application/json\"\n", 636 | "    )\n", 637 | "    response_body = json.loads(response[\"body\"].read())\n", 638 | "    with open(temp_gen_image, 'wb') as file:\n", 639 | "        # Decode the base64 data and write it to the file\n", 640 | "        file.write(base64.b64decode(response_body[\"images\"][0]))\n", 641 | "    return {\"answer\": temp_gen_image}\n", 642 | "\n", 643 | "def t2i_node(state:MultiAgentState):\n", 644 | "    url = \"http://video.cavatar.info:8080/generate?prompt=\"\n", 645 | "    prompt = f\"Generate a high resolution, photo-realistic picture of {state['question']} with vivid color and attention to detail.\"\n", 646 | "    response = requests.get(url+prompt)\n", 647 | "    if response.status_code == 200:\n", 648 | "        # Write the image only on success to avoid referencing missing bytes\n", 649 | "        with open(temp_gen_image, 'wb') as file:\n", 650 | "            file.write(response.content)\n", 651 | "    else:\n", 652 | "        print(f\"Error fetching image from {url}\")\n", 653 | "    return {\"answer\": temp_gen_image}" 655 | ] 656 | }, 657 | { 658 | "cell_type": "markdown", 659 | "id": "c1a562f4-ef92-4a7a-859d-4f71b0852c62", 660 | "metadata": {}, 661 | "source": [ 662 | "### Pre-requisite: complete the Bedrock agent association with the knowledge base and the Lambda function that interacts with the pre-populated DynamoDB table.
Obtain the agent ID and agent alias ID. \n", 663 | "\n", 664 | "**Please note the next cell might take a few (> 10) minutes to complete**\n", 665 | "\n", 666 | "You might use the AWS console (https://us-west-2.console.aws.amazon.com/aos/home?region=us-west-2#opensearch/get-started-serverless) or the AWS CLI to check the status:\n", 667 | "* %aws bedrock-agent list-agents\n", 668 | "* %aws bedrock-agent list-agent-aliases --agent-id \\\n", 669 | "* %aws bedrock-agent list-knowledge-bases\n", 670 | "* %aws bedrock-agent list-agent-knowledge-bases" 671 | ] 672 | }, 673 | { 674 | "cell_type": "code", 675 | "execution_count": null, 676 | "id": "e75a6c68-abc5-4d47-89fc-e4c84c15ade7", 677 | "metadata": { 678 | "tags": [] 679 | }, 680 | "outputs": [], 681 | "source": [ 682 | "#%run ./create-agent-with-knowledge-base-and-action-group.ipynb" 683 | ] 684 | }, 685 | { 686 | "cell_type": "markdown", 687 | "id": "016632ba-67e2-4737-a388-7c29e4d2642f", 688 | "metadata": {}, 689 | "source": [ 690 | "### Upon successful completion of the Amazon Bedrock agent creation, define a node to invoke the agent.
" 691 | ] 692 | }, 693 | { 694 | "cell_type": "code", 695 | "execution_count": null, 696 | "id": "e2d8f2f5-be1a-418f-a97d-21446dd51834", 697 | "metadata": { 698 | "tags": [] 699 | }, 700 | "outputs": [], 701 | "source": [ 702 | "####\n", 703 | "# Bedrock agent integration\n", 704 | "####\n", 705 | "import uuid\n", 706 | "import logging\n", "import pprint\n", 707 | "from datetime import datetime\n", 708 | "%store -r agent_id\n", 709 | "%store -r alias_id\n", "\n", "bedrock_agent_runtime_client = boto3.client(\"bedrock-agent-runtime\", config=config) # client used by invoke_BR_agent below\n", 710 | "\n", 711 | "def invoke_BR_agent(agent_id, alias_id, query, enable_trace=False, session_state=dict()):\n", 712 | "    session_id = str(uuid.uuid1())\n", 713 | "    end_session = False\n", 714 | "    logger = logging.getLogger(__name__)\n", 715 | "    \n", 716 | "    # invoke the agent API\n", 717 | "    agentResponse = bedrock_agent_runtime_client.invoke_agent(\n", 718 | "        inputText=query,\n", 719 | "        agentId=agent_id,\n", 720 | "        agentAliasId=alias_id, \n", 721 | "        sessionId=session_id,\n", 722 | "        enableTrace=enable_trace, \n", 723 | "        endSession= end_session,\n", 724 | "        sessionState=session_state\n", 725 | "    )\n", 726 | "    \n", 727 | "    if enable_trace:\n", 728 | "        logger.info(pprint.pformat(agentResponse))\n", 729 | "    \n", 730 | "    event_stream = agentResponse['completion']\n", 731 | "    try:\n", 732 | "        for event in event_stream:        \n", 733 | "            if 'chunk' in event:\n", 734 | "                data = event['chunk']['bytes']\n", 735 | "                if enable_trace:\n", 736 | "                    logger.info(f\"Final answer ->\\n{data.decode('utf8')}\")\n", 737 | "                agent_answer = data.decode('utf8')\n", 739 | "                return agent_answer\n", 740 | "            # End event indicates that the request finished successfully\n", 741 | "            elif 'trace' in event:\n", 742 | "                if enable_trace:\n", 743 | "                    logger.info(json.dumps(event['trace'], indent=2))\n", 744 | "            else:\n", 745 | "                raise Exception(\"unexpected event.\", event)\n", 746 | "    except Exception as e:\n", 747 | "        raise Exception(\"unexpected event.\", e)\n", 748 | "\n", 749 | "def
bedrock_agent_node(state:MultiAgentState):\n", 750 | "    today = datetime.today().strftime('%b-%d-%Y')\n", 751 | "    session_state = {\n", 752 | "        \"promptSessionAttributes\": {\n", 753 | "            \"name\": \"John Doe\",\n", 754 | "            \"today\": today\n", 755 | "        }\n", 756 | "    }\n", 757 | "    return {'answer': invoke_BR_agent(agent_id, alias_id, state[\"question\"], session_state=session_state)}" 758 | ] 759 | }, 760 | { 761 | "cell_type": "markdown", 762 | "id": "16a16723-fce4-4601-865c-ded1eab4d224", 763 | "metadata": {}, 764 | "source": [ 765 | "## 4. Defining the Reasoning Flow with LangGraph Nodes and Edges\n", 766 | "\n", 767 | "Implement nodes representing key actions: document retrieval, document grading, web search, and answer generation. Define conditional edges for decision-making: route the question, decide on document relevance, and grade the generated answer. Set up the workflow graph with entry points, nodes, and edges to ensure a logical progression through the RAG agent's steps. LangGraph allows us to define a graph-based workflow for our RAG agent, integrating document retrieval, question routing, answer generation, and self-correction into an efficient pipeline.\n", 768 | "\n", 769 | "Key steps include:\n", 770 | "\n", 771 | "* Question rewrite: Rewrite the query for better intent classification.\n", 772 | "* Routing: Deciding whether the question should go to RAG, LLMs, or a web search.\n", 773 | "* Hallucination Grading: Ensuring the generated answer is grounded in the retrieved documents.\n", 774 | "* Human in the loop: In case the answer falls below the desired quality, insert human feedback.\n", 775 | "\n", 776 | "LangGraph lets us seamlessly integrate these steps into a modular, adaptable workflow, enhancing the agent's ability to handle diverse queries."
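The routing step listed above ultimately reduces to mapping the classifier's one-word label to a worker-node name. As a dependency-free sketch (`ROUTES` and `route_label` are hypothetical names; in the notebook, LangGraph's `add_conditional_edges` performs the equivalent lookup inside the compiled graph):

```python
# Map each classifier label to the worker node that handles it.
ROUTES = {
    "VECTORSTORE": "healthcare_expert",
    "WEBSEARCH": "search_expert",
    "GENERAL": "general_assistant",
    "TEXT2IMAGE": "text2image_generation",
    "BOOKING": "booking_assistant",
    "BLOGWRITER": "blog_writer",
}

def route_label(label: str, default: str = "general_assistant") -> str:
    # Normalize model output, which may carry stray whitespace or mixed case.
    return ROUTES.get(label.strip().upper(), default)

print(route_label(" websearch\n"))  # -> search_expert
```

Falling back to a default node makes the router robust against a model that returns an unexpected label instead of one of the six expected words.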
777 | ] 778 | }, 779 | { 780 | "cell_type": "code", 781 | "execution_count": null, 782 | "id": "c876613d-860f-4955-bda7-973112beaec6", 783 | "metadata": { 784 | "tags": [] 785 | }, 786 | "outputs": [], 787 | "source": [ 788 | "orch = StateGraph(MultiAgentState)\n", 789 | "orch.add_node(\"rewrite\", rewrite_node)\n", 790 | "orch.add_node(\"router\", router_node)\n", 791 | "orch.add_node('search_expert', search_expert_node)\n", 792 | "orch.add_node('healthcare_expert', rag_node)\n", 793 | "orch.add_node('general_assistant', llm_node)\n", 794 | "orch.add_node('text2image_generation', t2i_node)\n", 795 | "orch.add_node('booking_assistant', bedrock_agent_node)\n", 796 | "orch.add_node('blog_writer', blog_writer_node)\n", 797 | "orch.add_node('human', human_feedback_node)\n", 798 | "#orch.add_node('editor', editor_node)\n", 799 | "\n", 800 | "orch.add_conditional_edges(\n", 801 | " \"router\", \n", 802 | " route_question,\n", 803 | " {'VECTORSTORE': 'healthcare_expert', 'WEBSEARCH': 'search_expert', 'GENERAL': 'general_assistant', \n", 804 | " 'TEXT2IMAGE': 'text2image_generation','BOOKING': 'booking_assistant', 'BLOGWRITER':'blog_writer'}\n", 805 | ")\n", 806 | "\n", 807 | "orch.set_entry_point(\"rewrite\")\n", 808 | "orch.add_edge('rewrite', 'router')\n", 809 | "orch.add_conditional_edges(\n", 810 | " \"healthcare_expert\",\n", 811 | " decide_to_search,\n", 812 | " {\n", 813 | " \"to_human\": \"human\",\n", 814 | " \"do_search\": \"search_expert\",\n", 815 | " },\n", 816 | ")\n", 817 | "#orch.add_edge('search_expert', 'human')\n", 818 | "orch.add_conditional_edges(\n", 819 | " \"search_expert\",\n", 820 | " decide_to_search,\n", 821 | " {\n", 822 | " \"to_human\": \"human\",\n", 823 | " \"do_search\": \"search_expert\",\n", 824 | " },\n", 825 | ")\n", 826 | "orch.add_edge('booking_assistant', END)\n", 827 | "orch.add_conditional_edges(\n", 828 | " \"general_assistant\",\n", 829 | " hallucination_grader,\n", 830 | " {\n", 831 | " \"to_human\": \"human\",\n", 832 | " 
\"END\": END,\n", 833 | " },\n", 834 | ")\n", 835 | "orch.add_edge('human', END)\n", 836 | "#orch.add_edge('editor', END)\n", 837 | "orch.add_edge('blog_writer', 'text2image_generation')\n", 838 | "orch.add_edge('text2image_generation', END)" 839 | ] 840 | }, 841 | { 842 | "cell_type": "markdown", 843 | "id": "9ea1cbd4-18b1-4ff5-a608-3360ceae31fd", 844 | "metadata": {}, 845 | "source": [ 846 | "## 5. Display the orchestration flows\n", 847 | "\n", 848 | "The orchestration flows can be depicted with a visual representation that illustrates the sequence of operations, the data transformations, and the control flow between the different modules involved in the reasoning process. A clear and concise visual representation of the orchestration makes it easier for developers, researchers, and stakeholders to understand the overall architecture, identify potential bottlenecks or optimization opportunities, and communicate the system's functionality and performance." 
849 | ] 850 | }, 851 | { 852 | "cell_type": "code", 853 | "execution_count": null, 854 | "id": "2c29c4f0-e832-4df5-89cf-b6d7b1907350", 855 | "metadata": { 856 | "scrolled": true, 857 | "tags": [] 858 | }, 859 | "outputs": [], 860 | "source": [ 861 | "from IPython.display import Image, display\n", 862 | "from langchain_core.runnables.graph import CurveStyle, MermaidDrawMethod #, NodeColors\n", 863 | "\n", 864 | "graph = orch.compile(checkpointer=memory, interrupt_before=['human'])\n", 865 | "display(Image(graph.get_graph().draw_mermaid_png(\n", 866 | " curve_style=CurveStyle.LINEAR,\n", 867 | " #node_colors=NodeColors(start=\"#ffdfba\", end=\"#baffc9\", other=\"#fad7de\"),\n", 868 | " #node_styles=custom_node_style,\n", 869 | " wrap_label_n_words=9,\n", 870 | " output_file_path=None,\n", 871 | " draw_method=MermaidDrawMethod.API,\n", 872 | " background_color=\"white\",\n", 873 | " padding=20,\n", 874 | ")))" 875 | ] 876 | }, 877 | { 878 | "cell_type": "markdown", 879 | "id": "4cdc6137-3a75-43a9-bae0-64c6dc95ad2a", 880 | "metadata": {}, 881 | "source": [ 882 | "## 6. Execute this orchestration pipeline with query-driven reasoning\n", 883 | "\n", 884 | "Executing this pipeline with query-driven reasoning and reactions means building a system that can autonomously perform tasks and make decisions based on the information it gathers and the queries it receives. The system consists of multiple intelligent agents, each with its own capabilities and knowledge, working together toward a common goal. The agents use query-driven reasoning to understand the user's intent and react accordingly, executing the necessary steps in the pipeline to produce the desired outcome. This makes the system dynamic and adaptive, able to handle a wide range of tasks and respond to changing conditions in real time. 
The result is a powerful and flexible service that can assist users with a variety of needs, from information retrieval to complex problem-solving." 885 | ] 886 | }, 887 | { 888 | "cell_type": "code", 889 | "execution_count": null, 890 | "id": "a9c57a06-d95b-460b-aa7f-ab3a5a7c6ebc", 891 | "metadata": { 892 | "tags": [] 893 | }, 894 | "outputs": [], 895 | "source": [ 896 | "from PIL import Image\n", 897 | "\n", 898 | "thread = {\"configurable\": {\"thread_id\": \"42\", \"recursion_limit\": 10}}\n", 899 | "results = []\n", 900 | "prompts = [\n", 901 | " \"Under what circumstances should a patient be screened for ectopic ACTH syndrome (EAS)?\", # Use native RAG, then human review if needed\n", 902 | " \"What could be the typical clinical symptoms of Blepharitis?\", # First try native RAG; if nothing is found, fall back to Web search, then human review if needed\n", 903 | " \"How many total medals did the US Olympic Team win in Paris 2024?\", # Use Web search, then human review if needed\n", 904 | " \"Why was Steve Jobs considered a legend in the tech world?\", # Combine the answers from 2 LLMs, then human review if needed\n", 905 | " \"Generate a high-res image of a colorful macaw resting on a tree, with vivid colors and attention to detail.\", # Use text-2-image generation\n", 906 | " \"Hi, I want to create a booking for 2 people, at 8pm on the 5th of May 2024.\", # Use Bedrock agent to interact with KnowledgeBase and DynamoDB\n", 907 | " \"Write a blog post about the 2024 uncrewed return of the Starliner space capsule pending safety concerns and helium leaks. Explain how NASA plans to safely return the two stranded astronauts. 
\\\n", 908 | " If possible, please include dates, figures, and locations associated with the topic\" # Blog writing using CrewAI\n", 909 | " ]\n", 910 | "\n", 911 | "for prompt in prompts:\n", 912 | " for event in graph.stream({'question': prompt}, thread):\n", 913 | " print(event)\n", 914 | " results.append(event)\n", 915 | " if os.path.exists(temp_gen_image):\n", 916 | " Image.open(temp_gen_image).show()\n", 917 | " print(\"\\n\\n---------------------------------------\\n\\n\")" 918 | ] 919 | }, 920 | { 921 | "cell_type": "markdown", 922 | "id": "d5bf6b07-6780-477f-b3d1-c80410e42469", 923 | "metadata": {}, 924 | "source": [ 925 | "#### (Optional) Display the generated blog" 926 | ] 927 | }, 928 | { 929 | "cell_type": "code", 930 | "execution_count": null, 931 | "id": "6b71dd8c-1657-4305-adb3-a5f97cd59edd", 932 | "metadata": {}, 933 | "outputs": [], 934 | "source": [ 935 | "from IPython.display import Markdown\n", 936 | "with open(markdown_filename, 'r') as file:\n", 937 | " readme_content = file.read()\n", 938 | "\n", 939 | "# Display the contents as Markdown\n", 940 | "display(Markdown(readme_content))" 941 | ] 942 | }, 943 | { 944 | "cell_type": "markdown", 945 | "id": "ca8e30c0-fba5-4359-a453-79d1fe3c92ae", 946 | "metadata": {}, 947 | "source": [ 948 | "### Next Steps:\n", 949 | "\n", 950 | "1. Planning\n", 951 | "2. Collaborative multi-agent reasoning\n", 952 | "3. Memory for multi-round and personalized reasoning\n", 953 | "4. While this simple search strategy shows a meaningful improvement in the success rate, it still struggles on long-horizon tasks due to the sparsity of environment rewards.\n", 954 | "5. Combine a planning and reasoning agent with MCTS inference-time search and AI self-critique for self-supervised data collection, which can then be used for RL-style training." 955 | ] 956 | }, 957 | { 958 | "cell_type": "markdown", 959 | "id": "ebdb6d0c-cb23-4462-986a-b68c8dacf88e", 960 | "metadata": {}, 961 | "source": [ 962 | "## 7. 
Clean-up\nLet's delete all the associated resources created to avoid unnecessary costs. Please change the markdown cells below to code before executing them." 964 | ] 965 | }, 966 | { 967 | "cell_type": "code", 968 | "execution_count": null, 969 | "id": "98627c16-c21f-429f-aef9-c486930f6dff", 970 | "metadata": {}, 971 | "outputs": [], 972 | "source": [ 973 | "from agent import clean_up_resources, delete_agent_roles_and_policies" 974 | ] 975 | }, 976 | { 977 | "cell_type": "code", 978 | "execution_count": null, 979 | "id": "05e48afd-48d2-464e-9fac-9510f94a3241", 980 | "metadata": {}, 981 | "outputs": [], 982 | "source": [ 983 | "%store -r table_name\n", 984 | "%store -r lambda_function\n", 985 | "%store -r lambda_function_name\n", 986 | "%store -r agent_action_group_response\n", 987 | "%store -r agent_functions\n", 988 | "%store -r alias_id\n", 989 | "%store -r agent_id\n", 990 | "%store -r agent_name\n", 991 | "%store -r kb_id\n", 993 | "\n", 994 | "# Delete resources including Lambda, Dynamo and agent\n", 995 | "clean_up_resources(\n", 996 | " table_name, lambda_function, lambda_function_name, agent_action_group_response, agent_functions, agent_id, kb_id, alias_id\n", 997 | ")\n", 998 | "\n", 999 | "# Delete the agent roles and policies\n", 1000 | "delete_agent_roles_and_policies(agent_name)" 1001 | ] 1002 | }, 1003 | { 1004 | "cell_type": "code", 1005 | "execution_count": null, 1006 | "id": "bf04e4ad-f2ab-4867-876e-1143139f75f4", 1007 | "metadata": {}, 1008 | "outputs": [], 1009 | "source": [ 1010 | "## __End" 1011 | ] 1012 | } 1013 | ], 1014 | "metadata": { 1015 | "kernelspec": { 1016 | "display_name": "medf", 1017 | "language": "python", 1018 | "name": "medf" 1019 | }, 1020 | "language_info": { 1021 | "codemirror_mode": { 1022 | "name": "ipython", 1023 | "version": 3 1024 | }, 1025 | "file_extension": ".py", 1026 | "mimetype": "text/x-python", 1027 | "name": "python", 1028 | "nbconvert_exporter": "python", 1029 | 
"pygments_lexer": "ipython3", 1030 | "version": "3.11.5" 1031 | } 1032 | }, 1033 | "nbformat": 4, 1034 | "nbformat_minor": 5 1035 | } 1036 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | boto3 2 | botocore 3 | setuptools==68.2.2 4 | langchain-aws==0.1.18 5 | ipywidgets 6 | chromadb 7 | typing-extensions 8 | langchain-community==0.2.17 9 | langchain-core==0.2.40 10 | langchain_huggingface==0.0.3 11 | langchain==0.2.16 12 | langgraph==0.2.22 13 | sec_api 14 | opensearch-py 15 | retrying 16 | -------------------------------------------------------------------------------- /requirements2.txt: -------------------------------------------------------------------------------- 1 | duckduckgo-search 2 | grpcio>=1.60.0 3 | grpcio-tools>=1.60.0 4 | python-dotenv 5 | crewai==0.51.1 6 | crewai[tools]>=0.8.3 7 | litellm==1.46.1 8 | unstructured -------------------------------------------------------------------------------- /scripts/__pycache__/agent.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/scripts/__pycache__/agent.cpython-311.pyc -------------------------------------------------------------------------------- /scripts/__pycache__/blog_writer.cpython-311.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/scripts/__pycache__/blog_writer.cpython-311.pyc -------------------------------------------------------------------------------- /scripts/__pycache__/knowledge_base.cpython-311.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/agentic-orchestration/c178b11f8564b1827b634d3512958c679014222b/scripts/__pycache__/knowledge_base.cpython-311.pyc -------------------------------------------------------------------------------- /scripts/agent.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import time 4 | import zipfile 5 | from io import BytesIO 6 | 7 | iam_client = boto3.client('iam') 8 | sts_client = boto3.client('sts') 9 | session = boto3.session.Session() 10 | region = session.region_name 11 | account_id = sts_client.get_caller_identity()["Account"] 12 | dynamodb_client = boto3.client('dynamodb') 13 | dynamodb_resource = boto3.resource('dynamodb') 14 | lambda_client = boto3.client('lambda') 15 | bedrock_agent_client = boto3.client('bedrock-agent') 16 | 17 | 18 | def create_dynamodb(table_name): 19 | table = dynamodb_resource.create_table( 20 | TableName=table_name, 21 | KeySchema=[ 22 | { 23 | 'AttributeName': 'booking_id', 24 | 'KeyType': 'HASH' 25 | } 26 | ], 27 | AttributeDefinitions=[ 28 | { 29 | 'AttributeName': 'booking_id', 30 | 'AttributeType': 'S' 31 | } 32 | ], 33 | BillingMode='PAY_PER_REQUEST' # Use on-demand capacity mode 34 | ) 35 | 36 | # Wait for the table to be created 37 | print(f'Creating table {table_name}...') 38 | table.wait_until_exists() 39 | print(f'Table {table_name} created successfully!') 40 | return 41 | 42 | def get_agent_lambda_role_arn(role_name): 43 | # Create an IAM client 44 | iam_client = boto3.client('iam') 45 | 46 | try: 47 | # Get the role information 48 | response = iam_client.get_role(RoleName=role_name) 49 | 50 | # Extract the ARN from the response 51 | role_arn = response['Role']['Arn'] 52 | 53 | return role_arn 54 | 55 | except iam_client.exceptions.NoSuchEntityException: 56 | print(f"Role '{role_name}' not found.") 57 | return None 58 | 59 | except Exception as e: 60 | print(f"An error occurred: {str(e)}") 61 | return None 62 | 63 
| 64 | def create_lambda(lambda_function_name, lambda_iam_role): 65 | # add to function 66 | 67 | # Package up the lambda function code 68 | s = BytesIO() 69 | z = zipfile.ZipFile(s, 'w') 70 | z.write("lambda_function.py") 71 | z.close() 72 | zip_content = s.getvalue() 73 | 74 | # Create Lambda Function 75 | lambda_function = lambda_client.create_function( 76 | FunctionName=lambda_function_name, 77 | Runtime='python3.12', 78 | Timeout=60, 79 | Role=lambda_iam_role['Role']['Arn'], 80 | Code={'ZipFile': zip_content}, 81 | Handler='lambda_function.lambda_handler' 82 | ) 83 | return lambda_function 84 | 85 | 86 | def create_lambda_role(agent_name, dynamodb_table_name): 87 | lambda_function_role = f'{agent_name}-lambda-role' 88 | dynamodb_access_policy_name = f'{agent_name}-dynamodb-policy' 89 | # Create IAM Role for the Lambda function 90 | try: 91 | assume_role_policy_document = { 92 | "Version": "2012-10-17", 93 | "Statement": [ 94 | { 95 | "Effect": "Allow", 96 | "Principal": { 97 | "Service": "lambda.amazonaws.com" 98 | }, 99 | "Action": "sts:AssumeRole" 100 | } 101 | ] 102 | } 103 | 104 | assume_role_policy_document_json = json.dumps(assume_role_policy_document) 105 | 106 | lambda_iam_role = iam_client.create_role( 107 | RoleName=lambda_function_role, 108 | AssumeRolePolicyDocument=assume_role_policy_document_json 109 | ) 110 | 111 | # Pause to make sure role is created 112 | time.sleep(10) 113 | except iam_client.exceptions.EntityAlreadyExistsException: 114 | lambda_iam_role = iam_client.get_role(RoleName=lambda_function_role) 115 | 116 | # Attach the AWSLambdaBasicExecutionRole policy 117 | iam_client.attach_role_policy( 118 | RoleName=lambda_function_role, 119 | PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' 120 | ) 121 | 122 | # Create a policy to grant access to the DynamoDB table 123 | dynamodb_access_policy = { 124 | "Version": "2012-10-17", 125 | "Statement": [ 126 | { 127 | "Effect": "Allow", 128 | "Action": [ 129 | 
"dynamodb:GetItem", 130 | "dynamodb:PutItem", 131 | "dynamodb:DeleteItem" 132 | ], 133 | "Resource": "arn:aws:dynamodb:{}:{}:table/{}".format( 134 | region, account_id, dynamodb_table_name 135 | ) 136 | } 137 | ] 138 | } 139 | 140 | # Create the policy 141 | dynamodb_access_policy_json = json.dumps(dynamodb_access_policy) 142 | dynamodb_access_policy_response = iam_client.create_policy( 143 | PolicyName=dynamodb_access_policy_name, 144 | PolicyDocument=dynamodb_access_policy_json 145 | ) 146 | 147 | # Attach the policy to the Lambda function's role 148 | iam_client.attach_role_policy( 149 | RoleName=lambda_function_role, 150 | PolicyArn=dynamodb_access_policy_response['Policy']['Arn'] 151 | ) 152 | return lambda_iam_role 153 | 154 | 155 | def create_agent_role_and_policies(agent_name, agent_foundation_model, kb_id=None): 156 | agent_bedrock_allow_policy_name = f"{agent_name}-ba" 157 | agent_role_name = f'AmazonBedrockExecutionRoleForAgents_{agent_name}' 158 | # Create IAM policies for agent 159 | statements = [ 160 | { 161 | "Sid": "AmazonBedrockAgentBedrockFoundationModelPolicy", 162 | "Effect": "Allow", 163 | "Action": "bedrock:InvokeModel", 164 | "Resource": [ 165 | f"arn:aws:bedrock:{region}::foundation-model/{agent_foundation_model}" 166 | ] 167 | } 168 | ] 169 | # add Knowledge Base retrieve and retrieve and generate permissions if agent has KB attached to it 170 | if kb_id: 171 | statements.append( 172 | { 173 | "Sid": "QueryKB", 174 | "Effect": "Allow", 175 | "Action": [ 176 | "bedrock:Retrieve", 177 | "bedrock:RetrieveAndGenerate" 178 | ], 179 | "Resource": [ 180 | f"arn:aws:bedrock:{region}:{account_id}:knowledge-base/{kb_id}" 181 | ] 182 | } 183 | ) 184 | 185 | bedrock_agent_bedrock_allow_policy_statement = { 186 | "Version": "2012-10-17", 187 | "Statement": statements 188 | } 189 | 190 | bedrock_policy_json = json.dumps(bedrock_agent_bedrock_allow_policy_statement) 191 | 192 | agent_bedrock_policy = iam_client.create_policy( 193 | 
PolicyName=agent_bedrock_allow_policy_name, 194 | PolicyDocument=bedrock_policy_json 195 | ) 196 | 197 | # Create IAM Role for the agent and attach IAM policies 198 | assume_role_policy_document = { 199 | "Version": "2012-10-17", 200 | "Statement": [{ 201 | "Effect": "Allow", 202 | "Principal": { 203 | "Service": "bedrock.amazonaws.com" 204 | }, 205 | "Action": "sts:AssumeRole" 206 | }] 207 | } 208 | 209 | assume_role_policy_document_json = json.dumps(assume_role_policy_document) 210 | agent_role = iam_client.create_role( 211 | RoleName=agent_role_name, 212 | AssumeRolePolicyDocument=assume_role_policy_document_json 213 | ) 214 | 215 | # Pause to make sure role is created 216 | time.sleep(10) 217 | 218 | iam_client.attach_role_policy( 219 | RoleName=agent_role_name, 220 | PolicyArn=agent_bedrock_policy['Policy']['Arn'] 221 | ) 222 | return agent_role 223 | 224 | 225 | def delete_agent_roles_and_policies(agent_name): 226 | agent_bedrock_allow_policy_name = f"{agent_name}-ba" 227 | agent_role_name = f'AmazonBedrockExecutionRoleForAgents_{agent_name}' 228 | dynamodb_access_policy_name = f'{agent_name}-dynamodb-policy' 229 | lambda_function_role = f'{agent_name}-lambda-role' 230 | 231 | for policy in [agent_bedrock_allow_policy_name]: 232 | try: 233 | iam_client.detach_role_policy( 234 | RoleName=agent_role_name, 235 | PolicyArn=f'arn:aws:iam::{account_id}:policy/{policy}' 236 | ) 237 | except Exception as e: 238 | print(f"Could not detach {policy} from {agent_role_name}") 239 | print(e) 240 | 241 | for policy in [dynamodb_access_policy_name]: 242 | try: 243 | iam_client.detach_role_policy( 244 | RoleName=lambda_function_role, 245 | PolicyArn=f'arn:aws:iam::{account_id}:policy/{policy}' 246 | ) 247 | except Exception as e: 248 | print(f"Could not detach {policy} from {lambda_function_role}") 249 | print(e) 250 | 251 | try: 252 | iam_client.detach_role_policy( 253 | RoleName=lambda_function_role, 254 | 
PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' 255 | ) 256 | except Exception as e: 257 | print(f"Could not detach AWSLambdaBasicExecutionRole from {lambda_function_role}") 258 | print(e) 259 | 260 | for role_name in [agent_role_name, lambda_function_role]: 261 | try: 262 | iam_client.delete_role( 263 | RoleName=role_name 264 | ) 265 | except Exception as e: 266 | print(f"Could not delete role {role_name}") 267 | print(e) 268 | 269 | for policy in [agent_bedrock_allow_policy_name, dynamodb_access_policy_name]: 270 | try: 271 | iam_client.delete_policy( 272 | PolicyArn=f'arn:aws:iam::{account_id}:policy/{policy}' 273 | ) 274 | except Exception as e: 275 | print(f"Could not delete policy {policy}") 276 | print(e) 277 | 278 | 279 | def clean_up_resources( 280 | table_name, lambda_function, lambda_function_name, agent_action_group_response, agent_functions, 281 | agent_id, kb_id, alias_id 282 | ): 283 | action_group_id = agent_action_group_response['agentActionGroup']['actionGroupId'] 284 | action_group_name = agent_action_group_response['agentActionGroup']['actionGroupName'] 285 | # Delete Agent Action Group, Agent Alias, and Agent 286 | try: 287 | bedrock_agent_client.update_agent_action_group( 288 | agentId=agent_id, 289 | agentVersion='DRAFT', 290 | actionGroupId= action_group_id, 291 | actionGroupName=action_group_name, 292 | actionGroupExecutor={ 293 | 'lambda': lambda_function['FunctionArn'] 294 | }, 295 | functionSchema={ 296 | 'functions': agent_functions 297 | }, 298 | actionGroupState='DISABLED', 299 | ) 300 | bedrock_agent_client.disassociate_agent_knowledge_base( 301 | agentId=agent_id, 302 | agentVersion='DRAFT', 303 | knowledgeBaseId=kb_id 304 | ) 305 | bedrock_agent_client.delete_agent_action_group( 306 | agentId=agent_id, 307 | agentVersion='DRAFT', 308 | actionGroupId=action_group_id 309 | ) 310 | bedrock_agent_client.delete_agent_alias( 311 | agentAliasId=alias_id, 312 | agentId=agent_id 313 | ) 314 | 
bedrock_agent_client.delete_agent(agentId=agent_id) 315 | print(f"Agent {agent_id}, Agent Alias {alias_id}, and Action Group have been deleted.") 316 | except Exception as e: 317 | print(f"Error deleting Agent resources: {e}") 318 | 319 | # Delete Lambda function 320 | try: 321 | lambda_client.delete_function(FunctionName=lambda_function_name) 322 | print(f"Lambda function {lambda_function_name} has been deleted.") 323 | except Exception as e: 324 | print(f"Error deleting Lambda function {lambda_function_name}: {e}") 325 | 326 | # Delete DynamoDB table 327 | try: 328 | dynamodb_client.delete_table(TableName=table_name) 329 | print(f"Table {table_name} is being deleted...") 330 | waiter = dynamodb_client.get_waiter('table_not_exists') 331 | waiter.wait(TableName=table_name) 332 | print(f"Table {table_name} has been deleted.") 333 | except Exception as e: 334 | print(f"Error deleting table {table_name}: {e}") -------------------------------------------------------------------------------- /scripts/agenteval.yml: -------------------------------------------------------------------------------- 1 | evaluator: 2 | model: claude-3 3 | target: 4 | bedrock_agent_alias_id: {alias_id} 5 | bedrock_agent_id: {agent_id} 6 | type: bedrock-agent 7 | tests: 8 | check_for_chicken_dinner: 9 | expected_results: 10 | - The agent returns a list of dishes from the dinner menu that contain chicken. 11 | steps: 12 | - Ask the agent for the dishes in the dinner menu that contain chicken. 13 | make_and_check_booking: 14 | steps: 15 | - Ask the agent to make a booking for Anna, 2 people, 16 July at 7pm. 16 | - Using the booking ID, check for the booking details 17 | expected_results: 18 | - The agent returns with the booking ID 19 | - The booking details are: 
Name Anna, Number of guests 2, Date 16 July, Time 7pm 20 | -------------------------------------------------------------------------------- /scripts/bedrock.py: -------------------------------------------------------------------------------- 1 | import os 2 | import boto3 3 | from langchain_aws import BedrockLLM, ChatBedrock, ChatBedrockConverse, BedrockEmbeddings 4 | #from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory 5 | from botocore.config import Config 6 | from botocore.exceptions import ClientError 7 | 8 | bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime') 9 | 10 | ## Setup LLMs 11 | def get_llm(model_id: str, aws_region: str = 'us-west-2'): 12 | config = Config( 13 | retries = dict( 14 | max_attempts = 10, 15 | total_max_attempts = 25, 16 | ) 17 | ) 18 | bedrock_client = boto3.client("bedrock-runtime", config=config, region_name=aws_region) 19 | 20 | inference_modifier = { 21 | "max_tokens": 4096, 22 | "temperature": 0.01, 23 | "top_k": 50, 24 | "top_p": 0.95, 25 | "stop_sequences": ["\n\n\nHuman"], 26 | } 27 | 28 | if 'claude-3-5' in model_id: 36 | llm = ChatBedrock( 37 | model_id=model_id, 38 | client=bedrock_client, 39 | model_kwargs=inference_modifier, 40 | region_name=aws_region, 41 | ) 42 | elif 'claude-3' in model_id or 'mistral' in model_id or 'llama3-1' in model_id: 43 | llm = ChatBedrockConverse( 44 | model=model_id, 45 | client=bedrock_client, 46 | temperature=0.01, 47 | max_tokens=2048 if 'llama3-1' in model_id else 4096, 48 | region_name=aws_region, 49 | ) 50 | else: 51 | llm = BedrockLLM( 52 | model_id=model_id, 53 | client=bedrock_client, 54 | model_kwargs={"temperature": 0.1, "max_gen_len":4096}, 55 | ) 56 | 57 | return llm 58 | 59 | def get_embedding(model_id: str="amazon.titan-embed-text-v2:0", aws_region: str='us-west-2'): 60 | config = 
Config( 61 | retries = dict( 62 | max_attempts = 10, 63 | total_max_attempts = 25, 64 | ) 65 | ) 66 | bedrock_client = boto3.client("bedrock-runtime", config=config, region_name=aws_region) 67 | 68 | return BedrockEmbeddings(client = bedrock_client, region_name=aws_region, model_id=model_id) 69 | 70 | def check_and_delete_iam_policy(policy_name): 71 | # Create an IAM client 72 | iam = boto3.client('iam') 73 | 74 | try: 75 | # Try to get the policy 76 | response = iam.get_policy(PolicyArn=f'arn:aws:iam::aws:policy/{policy_name}') 77 | 78 | # If we reach here, the policy exists 79 | print(f"Policy '{policy_name}' exists. Attempting to delete...") 80 | 81 | # First, we need to detach the policy from all entities 82 | detach_policy(iam, response['Policy']['Arn']) 83 | 84 | # Now we can delete the policy 85 | iam.delete_policy(PolicyArn=response['Policy']['Arn']) 86 | print(f"Policy '{policy_name}' has been deleted successfully.") 87 | 88 | except ClientError as e: 89 | if e.response['Error']['Code'] == 'NoSuchEntity': 90 | print(f"Policy '{policy_name}' does not exist.") 91 | else: 92 | print(f"An error occurred: {e}") 93 | 94 | def detach_policy(iam, policy_arn): 95 | # Detach from users 96 | for user in iam.list_entities_for_policy(PolicyArn=policy_arn, EntityFilter='User')['PolicyUsers']: 97 | iam.detach_user_policy(UserName=user['UserName'], PolicyArn=policy_arn) 98 | 99 | # Detach from groups 100 | for group in iam.list_entities_for_policy(PolicyArn=policy_arn, EntityFilter='Group')['PolicyGroups']: 101 | iam.detach_group_policy(GroupName=group['GroupName'], PolicyArn=policy_arn) 102 | 103 | # Detach from roles 104 | for role in iam.list_entities_for_policy(PolicyArn=policy_arn, EntityFilter='Role')['PolicyRoles']: 105 | iam.detach_role_policy(RoleName=role['RoleName'], PolicyArn=policy_arn) 106 | 107 | def check_table_exists(table_name): 108 | # Create a DynamoDB client 109 | dynamodb = boto3.client('dynamodb') 110 | 111 | try: 112 | # Try to describe the table 
113 | response = dynamodb.describe_table(TableName=table_name) 114 | return True 115 | except ClientError: 116 | return False 117 | 118 | def check_lambda_function_exists(function_name): 119 | # Create a Lambda client 120 | lambda_client = boto3.client('lambda') 121 | try: 122 | # Try to get the function configuration 123 | lambda_client.get_function(FunctionName=function_name) 124 | return True 125 | except ClientError: 126 | return False 127 | -------------------------------------------------------------------------------- /scripts/blog_writer.py: -------------------------------------------------------------------------------- 1 | import os 2 | from crewai import Agent, Task, Crew, Process 3 | from dotenv import load_dotenv 4 | from langchain.tools import DuckDuckGoSearchRun 5 | from langchain_community.tools.tavily_search import TavilySearchResults 6 | import boto3 7 | from langchain_aws import BedrockLLM, ChatBedrock, ChatBedrockConverse 8 | #from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory 9 | from botocore.config import Config 10 | from bedrock import * 11 | from crewai_tools import ( 12 | DirectoryReadTool, 13 | FileReadTool, 14 | SerperDevTool, 15 | WebsiteSearchTool 16 | ) 17 | 18 | load_dotenv() 19 | duck_search_tool = DuckDuckGoSearchRun() 20 | tavily_tool = TavilySearchResults(max_results=5) 21 | web_rag_tool = WebsiteSearchTool() 22 | 23 | 24 | class blogAgents(): 25 | def __init__(self, topic, model_id): 26 | self.topic = topic 27 | self.model_id = model_id 28 | 29 | def planner(self, topic, model_id): 30 | return Agent( 31 | role="Content Planner", 32 | goal=f"""Plan engaging and factually accurate content on {topic}""", 33 | backstory=f"""You're working on planning a blog article about the topic: {topic}. \n 34 | You collect information by searching the web for the latest developments that directly relate to the {topic}. \n 35 | You help the audience learn something and make informed decisions. 
Your work is the basis for the Content Writer to write an article on this {topic}.""", 36 | allow_delegation=False, 37 | tools=[duck_search_tool, tavily_tool, web_rag_tool], 38 | llm=get_llm(model_id), 39 | verbose=True 40 | ) 41 | 42 | def writer(self, topic, model_id): 43 | return Agent( 44 | role="Content Writer", 45 | goal=f"Write an insightful and factually accurate opinion piece about the topic: {topic}", 46 | backstory=f"""You're working on writing a new opinion piece about the topic: {topic}. You base your writing on the work of \n 47 | the Content Planner, who provides an outline \n 48 | and relevant context about the topic. \n 49 | You follow the main objectives and \n 50 | direction of the outline, \n 51 | as provided by the Content Planner. \n 52 | You also provide objective and impartial insights \n 53 | and back them up with information \n 54 | provided by the Content Planner. \n 55 | You acknowledge in your opinion piece \n 56 | when your statements are opinions \n 57 | as opposed to objective statements.""", 58 | allow_delegation=False, 59 | llm=get_llm(model_id), 60 | verbose=True 61 | ) 62 | 63 | def editor(self, model_id): 64 | return Agent( 65 | role="Editor", 66 | goal="Edit a given blog post to align with " 67 | "the writing style of the organization. ", 68 | backstory="You are an editor who receives a blog post from the Content Writer. " 69 | "Your goal is to review the blog post to ensure that it follows journalistic best practices, " 70 | "provides balanced viewpoints when providing opinions or assertions, " 71 | "and also avoids major controversial topics or opinions when possible.", 72 | allow_delegation=False, 73 | llm=get_llm(model_id), 74 | verbose=True 75 | ) 76 | 77 | 78 | class blogTasks(): 79 | def __init__(self, topic): 80 | self.topic = topic 81 | 82 | def plan(self, planner, topic): 83 | return Task( 84 | description=( 85 | f"""1. Prioritize the latest trends, key players, and noteworthy news on {topic}.\n 86 | 2. 
Identify the target audience, considering their interests and pain points.\n 87 | 3. Develop a detailed content outline including an introduction, key points, and a call to action.\n 88 | 4. Include SEO keywords and relevant data or sources.""" 89 | ), 90 | expected_output=f"""Cover the latest developments of the {topic} with sufficient depth as a domain expert. 91 | A comprehensive content plan document with an outline, audience analysis, 92 | SEO keywords, and resources.""", 93 | agent=planner, 94 | ) 95 | def write(self, writer, topic): 96 | return Task( 97 | description=( 98 | f"""1. Use the content plan to craft a compelling blog post on {topic}.\n 99 | 2. Incorporate SEO keywords naturally.\n 100 | 3. Name sections/subtitles in an engaging manner.\n 101 | 4. Ensure the post is structured with an engaging introduction, insightful body, and a summarizing conclusion.\n 102 | 5. Proofread for grammatical errors and alignment with the brand's voice.""" 103 | ), 104 | expected_output="A well-written blog post, written like a professional writer. " 
105 | "You are a domain expert and your blog is for other subject experts, " 106 | "in markdown format, ready for publication; " 107 | "each section should have 2 or 3 paragraphs.", 108 | agent=writer, 109 | ) 110 | 111 | def edit(self, editor): 112 | return Task( 113 | description=("Proofread the given blog post for " 114 | "grammatical errors and " 115 | "alignment with the brand's voice."), 116 | expected_output="A well-written blog post in markdown format, " 117 | "ready for publication, " 118 | "each section should have 2 or 3 paragraphs.", 119 | agent=editor, 120 | output_file='./blogPost.txt',  # Task takes 'output_file' (not 'file_path') to save the result 121 | ) 122 | 123 | class blogCrew(): 124 | def __init__(self, topic, model_id): 125 | self.topic = topic 126 | self.model_id = model_id 127 | 128 | def run(self): 129 | agents = blogAgents(self.topic, self.model_id) 130 | tasks = blogTasks(self.topic) 131 | 132 | planner_agent = agents.planner(self.topic, self.model_id) 133 | writer_agent = agents.writer(self.topic, self.model_id) 134 | editor_agent = agents.editor(self.model_id) 135 | 136 | plan_task = tasks.plan(planner_agent, self.topic) 137 | write_task = tasks.write(writer_agent, self.topic) 138 | edit_task = tasks.edit(editor_agent) 139 | 140 | 141 | crew = Crew( 142 | agents=[planner_agent, writer_agent, editor_agent], 143 | tasks=[plan_task, write_task, edit_task], 144 | verbose=True, 145 | memory=True, 146 | embedder={ 147 | "provider": "huggingface", 148 | "config": {"model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"}, 149 | }, 150 | cache=True, 151 | process=Process.sequential # Sequential process runs tasks one after the other; each task's output is passed as context to the next 152 | ) 153 | 154 | result = crew.kickoff() 155 | return result -------------------------------------------------------------------------------- /scripts/knowledge_base.py: -------------------------------------------------------------------------------- 1 | import json 2 | import boto3 3 | import time 4 | from 
botocore.exceptions import ClientError 5 | from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth, RequestError 6 | import pprint 7 | from retrying import retry 8 | 9 | valid_embedding_models = ["cohere.embed-multilingual-v3", "cohere.embed-english-v3", "amazon.titan-embed-text-v1"] 10 | pp = pprint.PrettyPrinter(indent=2) 11 | 12 | 13 | def interactive_sleep(seconds: int): 14 | """ 15 | Support functionality to induce an artificial 'sleep' to the code in order to wait for resources to be available 16 | Args: 17 | seconds (int): number of seconds to sleep for 18 | """ 19 | dots = '' 20 | for i in range(seconds): 21 | dots += '.' 22 | print(dots, end='\r') 23 | time.sleep(1) 24 | 25 | 26 | class BedrockKnowledgeBase: 27 | """ 28 | Support class that allows for: 29 | - creation (or retrieval) of a Knowledge Base for Amazon Bedrock with all its pre-requisites 30 | (including OSS, IAM roles and Permissions and S3 bucket) 31 | - Ingestion of data into the Knowledge Base 32 | - Deletion of all resources created 33 | """ 34 | def __init__( 35 | self, 36 | kb_name, 37 | kb_description=None, 38 | data_bucket_name=None, 39 | embedding_model="amazon.titan-embed-text-v1" 40 | ): 41 | """ 42 | Class initializer 43 | Args: 44 | kb_name (str): the knowledge base name 45 | kb_description (str): knowledge base description 46 | data_bucket_name (str): name of s3 bucket to connect with knowledge base 47 | embedding_model (str): embedding model to use 48 | """ 49 | boto3_session = boto3.session.Session() 50 | self.region_name = boto3_session.region_name 51 | self.iam_client = boto3_session.client('iam') 52 | self.account_number = boto3.client('sts').get_caller_identity().get('Account') 53 | self.suffix = str(self.account_number)[:4] 54 | self.identity = boto3.client('sts').get_caller_identity()['Arn'] 55 | self.aoss_client = boto3_session.client('opensearchserverless') 56 | self.s3_client = boto3.client('s3') 57 | self.bedrock_agent_client = 
boto3.client('bedrock-agent') 58 | credentials = boto3.Session().get_credentials() 59 | self.awsauth = AWSV4SignerAuth(credentials, self.region_name, 'aoss') 60 | 61 | self.kb_name = kb_name 62 | self.kb_description = kb_description 63 | if data_bucket_name is not None: 64 | self.bucket_name = data_bucket_name 65 | else: 66 | self.bucket_name = f"{self.kb_name}-{self.suffix}" 67 | if embedding_model not in valid_embedding_models: 68 | valid_embeddings_str = str(valid_embedding_models) 69 | raise ValueError(f"Invalid embedding model. Your embedding model should be one of {valid_embeddings_str}") 70 | self.embedding_model = embedding_model 71 | self.encryption_policy_name = f"bedrock-sample-rag-sp-{self.suffix}" 72 | self.network_policy_name = f"bedrock-sample-rag-np-{self.suffix}" 73 | self.access_policy_name = f'bedrock-sample-rag-ap-{self.suffix}' 74 | self.kb_execution_role_name = f'AmazonBedrockExecutionRoleForKnowledgeBase_{self.suffix}' 75 | self.fm_policy_name = f'AmazonBedrockFoundationModelPolicyForKnowledgeBase_{self.suffix}' 76 | self.s3_policy_name = f'AmazonBedrockS3PolicyForKnowledgeBase_{self.suffix}' 77 | self.oss_policy_name = f'AmazonBedrockOSSPolicyForKnowledgeBase_{self.suffix}' 78 | 79 | self.vector_store_name = f'bedrock-sample-rag-{self.suffix}' 80 | self.index_name = f"bedrock-sample-rag-index-{self.suffix}" 81 | print("========================================================================================") 82 | print(f"Step 1 - Creating or retrieving {self.bucket_name} S3 bucket for Knowledge Base documents") 83 | self.create_s3_bucket() 84 | print("========================================================================================") 85 | print(f"Step 2 - Creating Knowledge Base Execution Role ({self.kb_execution_role_name}) and Policies") 86 | self.bedrock_kb_execution_role = self.create_bedrock_kb_execution_role() 87 | print("========================================================================================") 88 | print(f"Step 
3 - Creating OSS encryption, network and data access policies") 89 | self.encryption_policy, self.network_policy, self.access_policy = self.create_policies_in_oss() 90 | print("========================================================================================") 91 | print(f"Step 4 - Creating OSS Collection (this step takes a couple of minutes to complete)") 92 | self.host, self.collection, self.collection_id, self.collection_arn = self.create_oss() 93 | # Build the OpenSearch client 94 | self.oss_client = OpenSearch( 95 | hosts=[{'host': self.host, 'port': 443}], 96 | http_auth=self.awsauth, 97 | use_ssl=True, 98 | verify_certs=True, 99 | connection_class=RequestsHttpConnection, 100 | timeout=300 101 | ) 102 | print("========================================================================================") 103 | print(f"Step 5 - Creating OSS Vector Index") 104 | self.create_vector_index() 105 | print("========================================================================================") 106 | print(f"Step 6 - Creating Knowledge Base") 107 | self.knowledge_base, self.data_source = self.create_knowledge_base() 108 | print("========================================================================================") 109 | 110 | def create_s3_bucket(self): 111 | """ 112 | Check if bucket exists, and if not create S3 bucket for knowledge base data source 113 | """ 114 | try: 115 | self.s3_client.head_bucket(Bucket=self.bucket_name) 116 | print(f'Bucket {self.bucket_name} already exists - retrieving it!') 117 | except ClientError as e: 118 | print(f'Creating bucket {self.bucket_name}') 119 | if self.region_name == "us-east-1": 120 | self.s3_client.create_bucket( 121 | Bucket=self.bucket_name 122 | ) 123 | else: 124 | self.s3_client.create_bucket( 125 | Bucket=self.bucket_name, 126 | CreateBucketConfiguration={'LocationConstraint': self.region_name} 127 | ) 128 | 129 | def create_bedrock_kb_execution_role(self): 130 | """ 131 | Create Knowledge Base Execution IAM 
Role and its required policies. 132 | If role and/or policies already exist, retrieve them 133 | Returns: 134 | IAM role 135 | """ 136 | foundation_model_policy_document = { 137 | "Version": "2012-10-17", 138 | "Statement": [ 139 | { 140 | "Effect": "Allow", 141 | "Action": [ 142 | "bedrock:InvokeModel", 143 | ], 144 | "Resource": [ 145 | f"arn:aws:bedrock:{self.region_name}::foundation-model/{self.embedding_model}" 146 | ] 147 | } 148 | ] 149 | } 150 | 151 | s3_policy_document = { 152 | "Version": "2012-10-17", 153 | "Statement": [ 154 | { 155 | "Effect": "Allow", 156 | "Action": [ 157 | "s3:GetObject", 158 | "s3:ListBucket" 159 | ], 160 | "Resource": [ 161 | f"arn:aws:s3:::{self.bucket_name}", 162 | f"arn:aws:s3:::{self.bucket_name}/*" 163 | ], 164 | "Condition": { 165 | "StringEquals": { 166 | "aws:ResourceAccount": f"{self.account_number}" 167 | } 168 | } 169 | } 170 | ] 171 | } 172 | 173 | assume_role_policy_document = { 174 | "Version": "2012-10-17", 175 | "Statement": [ 176 | { 177 | "Effect": "Allow", 178 | "Principal": { 179 | "Service": "bedrock.amazonaws.com" 180 | }, 181 | "Action": "sts:AssumeRole" 182 | } 183 | ] 184 | } 185 | try: 186 | # create policies based on the policy documents 187 | fm_policy = self.iam_client.create_policy( 188 | PolicyName=self.fm_policy_name, 189 | PolicyDocument=json.dumps(foundation_model_policy_document), 190 | Description='Policy for accessing foundation model', 191 | ) 192 | except self.iam_client.exceptions.EntityAlreadyExistsException: 193 | fm_policy = self.iam_client.get_policy( 194 | PolicyArn=f"arn:aws:iam::{self.account_number}:policy/{self.fm_policy_name}" 195 | ) 196 | 197 | try: 198 | s3_policy = self.iam_client.create_policy( 199 | PolicyName=self.s3_policy_name, 200 | PolicyDocument=json.dumps(s3_policy_document), 201 | Description='Policy for reading documents from s3') 202 | except self.iam_client.exceptions.EntityAlreadyExistsException: 203 | s3_policy = self.iam_client.get_policy( 204 | 
PolicyArn=f"arn:aws:iam::{self.account_number}:policy/{self.s3_policy_name}" 205 | ) 206 | # create bedrock execution role 207 | try: 208 | bedrock_kb_execution_role = self.iam_client.create_role( 209 | RoleName=self.kb_execution_role_name, 210 | AssumeRolePolicyDocument=json.dumps(assume_role_policy_document), 211 | Description='Amazon Bedrock Knowledge Base Execution Role for accessing OSS and S3', 212 | MaxSessionDuration=3600 213 | ) 214 | except self.iam_client.exceptions.EntityAlreadyExistsException: 215 | bedrock_kb_execution_role = self.iam_client.get_role( 216 | RoleName=self.kb_execution_role_name 217 | ) 218 | # fetch arn of the policies and role created above 219 | s3_policy_arn = s3_policy["Policy"]["Arn"] 220 | fm_policy_arn = fm_policy["Policy"]["Arn"] 221 | 222 | # attach policies to Amazon Bedrock execution role 223 | self.iam_client.attach_role_policy( 224 | RoleName=bedrock_kb_execution_role["Role"]["RoleName"], 225 | PolicyArn=fm_policy_arn 226 | ) 227 | self.iam_client.attach_role_policy( 228 | RoleName=bedrock_kb_execution_role["Role"]["RoleName"], 229 | PolicyArn=s3_policy_arn 230 | ) 231 | return bedrock_kb_execution_role 232 | 233 | def create_oss_policy_attach_bedrock_execution_role(self, collection_id): 234 | """ 235 | Create OpenSearch Serverless policy and attach it to the Knowledge Base Execution role. 
236 | If policy already exists, attaches it 237 | """ 238 | # define oss policy document 239 | oss_policy_document = { 240 | "Version": "2012-10-17", 241 | "Statement": [ 242 | { 243 | "Effect": "Allow", 244 | "Action": [ 245 | "aoss:APIAccessAll" 246 | ], 247 | "Resource": [ 248 | f"arn:aws:aoss:{self.region_name}:{self.account_number}:collection/{collection_id}" 249 | ] 250 | } 251 | ] 252 | } 253 | 254 | oss_policy_arn = f"arn:aws:iam::{self.account_number}:policy/{self.oss_policy_name}" 255 | created = False 256 | try: 257 | self.iam_client.create_policy( 258 | PolicyName=self.oss_policy_name, 259 | PolicyDocument=json.dumps(oss_policy_document), 260 | Description='Policy for accessing opensearch serverless', 261 | ) 262 | created = True 263 | except self.iam_client.exceptions.EntityAlreadyExistsException: 264 | print(f"Policy {oss_policy_arn} already exists, skipping creation") 265 | print("Opensearch serverless arn: ", oss_policy_arn) 266 | 267 | self.iam_client.attach_role_policy( 268 | RoleName=self.bedrock_kb_execution_role["Role"]["RoleName"], 269 | PolicyArn=oss_policy_arn 270 | ) 271 | return created 272 | 273 | def create_policies_in_oss(self): 274 | """ 275 | Create OpenSearch Serverless encryption, network and data access policies. 
276 | If policies already exist, retrieve them 277 | """ 278 | try: 279 | encryption_policy = self.aoss_client.create_security_policy( 280 | name=self.encryption_policy_name, 281 | policy=json.dumps( 282 | { 283 | 'Rules': [{'Resource': ['collection/' + self.vector_store_name], 284 | 'ResourceType': 'collection'}], 285 | 'AWSOwnedKey': True 286 | }), 287 | type='encryption' 288 | ) 289 | except self.aoss_client.exceptions.ConflictException: 290 | encryption_policy = self.aoss_client.get_security_policy( 291 | name=self.encryption_policy_name, 292 | type='encryption' 293 | ) 294 | 295 | try: 296 | network_policy = self.aoss_client.create_security_policy( 297 | name=self.network_policy_name, 298 | policy=json.dumps( 299 | [ 300 | {'Rules': [{'Resource': ['collection/' + self.vector_store_name], 301 | 'ResourceType': 'collection'}], 302 | 'AllowFromPublic': True} 303 | ]), 304 | type='network' 305 | ) 306 | except self.aoss_client.exceptions.ConflictException: 307 | network_policy = self.aoss_client.get_security_policy( 308 | name=self.network_policy_name, 309 | type='network' 310 | ) 311 | 312 | try: 313 | access_policy = self.aoss_client.create_access_policy( 314 | name=self.access_policy_name, 315 | policy=json.dumps( 316 | [ 317 | { 318 | 'Rules': [ 319 | { 320 | 'Resource': ['collection/' + self.vector_store_name], 321 | 'Permission': [ 322 | 'aoss:CreateCollectionItems', 323 | 'aoss:DeleteCollectionItems', 324 | 'aoss:UpdateCollectionItems', 325 | 'aoss:DescribeCollectionItems'], 326 | 'ResourceType': 'collection' 327 | }, 328 | { 329 | 'Resource': ['index/' + self.vector_store_name + '/*'], 330 | 'Permission': [ 331 | 'aoss:CreateIndex', 332 | 'aoss:DeleteIndex', 333 | 'aoss:UpdateIndex', 334 | 'aoss:DescribeIndex', 335 | 'aoss:ReadDocument', 336 | 'aoss:WriteDocument'], 337 | 'ResourceType': 'index' 338 | }], 339 | 'Principal': [self.identity, self.bedrock_kb_execution_role['Role']['Arn']], 340 | 'Description': 'Easy data policy'} 341 | ]), 342 | type='data' 
343 | ) 344 | except self.aoss_client.exceptions.ConflictException: 345 | access_policy = self.aoss_client.get_access_policy( 346 | name=self.access_policy_name, 347 | type='data' 348 | ) 349 | 350 | return encryption_policy, network_policy, access_policy 351 | 352 | def create_oss(self): 353 | """ 354 | Create OpenSearch Serverless Collection. If already existent, retrieve 355 | """ 356 | try: 357 | collection = self.aoss_client.create_collection(name=self.vector_store_name, type='VECTORSEARCH') 358 | collection_id = collection['createCollectionDetail']['id'] 359 | collection_arn = collection['createCollectionDetail']['arn'] 360 | except self.aoss_client.exceptions.ConflictException: 361 | collection = self.aoss_client.batch_get_collection(names=[self.vector_store_name])['collectionDetails'][0] 362 | pp.pprint(collection) 363 | collection_id = collection['id'] 364 | collection_arn = collection['arn'] 365 | pp.pprint(collection) 366 | 367 | # Get the OpenSearch serverless collection URL 368 | host = collection_id + '.' 
+ self.region_name + '.aoss.amazonaws.com' 369 | print(host) 370 | # wait for collection creation 371 | # This can take a couple of minutes to finish 372 | response = self.aoss_client.batch_get_collection(names=[self.vector_store_name]) 373 | # Periodically check collection status 374 | while (response['collectionDetails'][0]['status']) == 'CREATING': 375 | print('Creating collection...') 376 | interactive_sleep(30) 377 | response = self.aoss_client.batch_get_collection(names=[self.vector_store_name]) 378 | print('\nCollection successfully created:') 379 | pp.pprint(response["collectionDetails"]) 380 | # create opensearch serverless access policy and attach it to Bedrock execution role 381 | try: 382 | created = self.create_oss_policy_attach_bedrock_execution_role(collection_id) 383 | if created: 384 | # It can take up to a minute for data access rules to be enforced 385 | print("Sleeping for a minute to ensure data access rules have been enforced") 386 | interactive_sleep(60) 387 | except Exception as e: 388 | print("Policy already exists") 389 | pp.pprint(e) 390 | return host, collection, collection_id, collection_arn  # return even when the policy already existed 391 | 392 | def create_vector_index(self): 393 | """ 394 | Create OpenSearch Serverless vector index. 
If existent, ignore 395 | """ 396 | body_json = { 397 | "settings": { 398 | "index.knn": "true", 399 | "number_of_shards": 1, 400 | "knn.algo_param.ef_search": 512, 401 | "number_of_replicas": 0, 402 | }, 403 | "mappings": { 404 | "properties": { 405 | "vector": { 406 | "type": "knn_vector", 407 | "dimension": 1536,  # matches the amazon.titan-embed-text-v1 embedding size 408 | "method": { 409 | "name": "hnsw", 410 | "engine": "faiss", 411 | "space_type": "l2" 412 | }, 413 | }, 414 | "text": { 415 | "type": "text" 416 | }, 417 | "text-metadata": { 418 | "type": "text"} 419 | } 420 | } 421 | } 422 | 423 | # Create index 424 | try: 425 | response = self.oss_client.indices.create(index=self.index_name, body=json.dumps(body_json)) 426 | print('\nCreating index:') 427 | pp.pprint(response) 428 | 429 | # index creation can take up to a minute 430 | interactive_sleep(60) 431 | except RequestError as e: 432 | # you can delete the index if it already exists 433 | # oss_client.indices.delete(index=index_name) 434 | print( 435 | f'Error while trying to create the index, with error {e.error}\nyou may uncomment the delete above to ' 436 | f'delete and recreate the index') 437 | 438 | @retry(wait_random_min=1000, wait_random_max=2000, stop_max_attempt_number=7) 439 | def create_knowledge_base(self): 440 | """ 441 | Create Knowledge Base and its Data Source. 
If existent, retrieve 442 | """ 443 | opensearch_serverless_configuration = { 444 | "collectionArn": self.collection_arn, 445 | "vectorIndexName": self.index_name, 446 | "fieldMapping": { 447 | "vectorField": "vector", 448 | "textField": "text", 449 | "metadataField": "text-metadata" 450 | } 451 | } 452 | 453 | # Ingest strategy - How to ingest data from the data source 454 | chunking_strategy_configuration = { 455 | "chunkingStrategy": "FIXED_SIZE", 456 | "fixedSizeChunkingConfiguration": { 457 | "maxTokens": 512, 458 | "overlapPercentage": 20 459 | } 460 | } 461 | 462 | # The data source to ingest documents from, into the OpenSearch serverless knowledge base index 463 | s3_configuration = { 464 | "bucketArn": f"arn:aws:s3:::{self.bucket_name}", 465 | # "inclusionPrefixes":["*.*"] # you can use this if you want to create a KB using data within s3 prefixes. 466 | } 467 | 468 | # The embedding model used by Bedrock to embed ingested documents, and realtime prompts 469 | embedding_model_arn = f"arn:aws:bedrock:{self.region_name}::foundation-model/{self.embedding_model}" 470 | try: 471 | create_kb_response = self.bedrock_agent_client.create_knowledge_base( 472 | name=self.kb_name, 473 | description=self.kb_description, 474 | roleArn=self.bedrock_kb_execution_role['Role']['Arn'], 475 | knowledgeBaseConfiguration={ 476 | "type": "VECTOR", 477 | "vectorKnowledgeBaseConfiguration": { 478 | "embeddingModelArn": embedding_model_arn 479 | } 480 | }, 481 | storageConfiguration={ 482 | "type": "OPENSEARCH_SERVERLESS", 483 | "opensearchServerlessConfiguration": opensearch_serverless_configuration 484 | } 485 | ) 486 | kb = create_kb_response["knowledgeBase"] 487 | pp.pprint(kb) 488 | except self.bedrock_agent_client.exceptions.ConflictException: 489 | kbs = self.bedrock_agent_client.list_knowledge_bases( 490 | maxResults=100 491 | ) 492 | kb_id = None 493 | for kb in kbs['knowledgeBaseSummaries']: 494 | if kb['name'] == self.kb_name: 495 | kb_id = kb['knowledgeBaseId'] 496 | 
response = self.bedrock_agent_client.get_knowledge_base(knowledgeBaseId=kb_id) 497 | kb = response['knowledgeBase'] 498 | pp.pprint(kb) 499 | 500 | # Create a DataSource in KnowledgeBase 501 | try: 502 | create_ds_response = self.bedrock_agent_client.create_data_source( 503 | name=self.kb_name, 504 | description=self.kb_description, 505 | knowledgeBaseId=kb['knowledgeBaseId'], 506 | dataSourceConfiguration={ 507 | "type": "S3", 508 | "s3Configuration": s3_configuration 509 | }, 510 | vectorIngestionConfiguration={ 511 | "chunkingConfiguration": chunking_strategy_configuration 512 | } 513 | ) 514 | ds = create_ds_response["dataSource"] 515 | pp.pprint(ds) 516 | except self.bedrock_agent_client.exceptions.ConflictException: 517 | ds_id = self.bedrock_agent_client.list_data_sources( 518 | knowledgeBaseId=kb['knowledgeBaseId'], 519 | maxResults=100 520 | )['dataSourceSummaries'][0]['dataSourceId'] 521 | get_ds_response = self.bedrock_agent_client.get_data_source( 522 | dataSourceId=ds_id, 523 | knowledgeBaseId=kb['knowledgeBaseId'] 524 | ) 525 | ds = get_ds_response["dataSource"] 526 | pp.pprint(ds) 527 | return kb, ds 528 | 529 | def start_ingestion_job(self): 530 | """ 531 | Start an ingestion job to synchronize data from an S3 bucket to the Knowledge Base 532 | """ 533 | # Start an ingestion job 534 | start_job_response = self.bedrock_agent_client.start_ingestion_job( 535 | knowledgeBaseId=self.knowledge_base['knowledgeBaseId'], 536 | dataSourceId=self.data_source["dataSourceId"] 537 | ) 538 | job = start_job_response["ingestionJob"] 539 | pp.pprint(job) 540 | # Get job 541 | while job['status'] != 'COMPLETE': 542 | get_job_response = self.bedrock_agent_client.get_ingestion_job( 543 | knowledgeBaseId=self.knowledge_base['knowledgeBaseId'], 544 | dataSourceId=self.data_source["dataSourceId"], 545 | ingestionJobId=job["ingestionJobId"] 546 | ) 547 | job = get_job_response["ingestionJob"] 548 | pp.pprint(job) 549 | interactive_sleep(40) 550 | 551 | def 
get_knowledge_base_id(self): 552 | """ 553 | Get Knowledge Base Id 554 | """ 555 | pp.pprint(self.knowledge_base["knowledgeBaseId"]) 556 | return self.knowledge_base["knowledgeBaseId"] 557 | 558 | def get_bucket_name(self): 559 | """ 560 | Get the name of the bucket connected with the Knowledge Base Data Source 561 | """ 562 | pp.pprint(f"Bucket connected with KB: {self.bucket_name}") 563 | return self.bucket_name 564 | 565 | def delete_kb(self, delete_s3_bucket=False, delete_iam_roles_and_policies=True): 566 | """ 567 | Delete the Knowledge Base resources 568 | Args: 569 | delete_s3_bucket (bool): boolean to indicate if s3 bucket should also be deleted 570 | delete_iam_roles_and_policies (bool): boolean to indicate if IAM roles and Policies should also be deleted 571 | """ 572 | self.bedrock_agent_client.delete_data_source( 573 | dataSourceId=self.data_source["dataSourceId"], 574 | knowledgeBaseId=self.knowledge_base['knowledgeBaseId'] 575 | ) 576 | self.bedrock_agent_client.delete_knowledge_base( 577 | knowledgeBaseId=self.knowledge_base['knowledgeBaseId'] 578 | ) 579 | self.oss_client.indices.delete(index=self.index_name) 580 | self.aoss_client.delete_collection(id=self.collection_id) 581 | self.aoss_client.delete_access_policy( 582 | type="data", 583 | name=self.access_policy_name 584 | ) 585 | self.aoss_client.delete_security_policy( 586 | type="network", 587 | name=self.network_policy_name 588 | ) 589 | self.aoss_client.delete_security_policy( 590 | type="encryption", 591 | name=self.encryption_policy_name 592 | ) 593 | if delete_s3_bucket: 594 | self.delete_s3() 595 | if delete_iam_roles_and_policies: 596 | self.delete_iam_roles_and_policies() 597 | 598 | def delete_iam_roles_and_policies(self): 599 | """ 600 | Delete IAM Roles and policies used by the Knowledge Base 601 | """ 602 | fm_policy_arn = f"arn:aws:iam::{self.account_number}:policy/{self.fm_policy_name}" 603 | s3_policy_arn = f"arn:aws:iam::{self.account_number}:policy/{self.s3_policy_name}" 604 | 
oss_policy_arn = f"arn:aws:iam::{self.account_number}:policy/{self.oss_policy_name}" 605 | self.iam_client.detach_role_policy( 606 | RoleName=self.kb_execution_role_name, 607 | PolicyArn=s3_policy_arn 608 | ) 609 | self.iam_client.detach_role_policy( 610 | RoleName=self.kb_execution_role_name, 611 | PolicyArn=fm_policy_arn 612 | ) 613 | self.iam_client.detach_role_policy( 614 | RoleName=self.kb_execution_role_name, 615 | PolicyArn=oss_policy_arn 616 | ) 617 | self.iam_client.delete_role(RoleName=self.kb_execution_role_name) 618 | self.iam_client.delete_policy(PolicyArn=s3_policy_arn) 619 | self.iam_client.delete_policy(PolicyArn=fm_policy_arn) 620 | self.iam_client.delete_policy(PolicyArn=oss_policy_arn) 621 | return 0 622 | 623 | def delete_s3(self): 624 | """ 625 | Delete the objects contained in the Knowledge Base S3 bucket. 626 | Once the bucket is empty, delete the bucket 627 | """ 628 | objects = self.s3_client.list_objects(Bucket=self.bucket_name) 629 | if 'Contents' in objects: 630 | for obj in objects['Contents']: 631 | self.s3_client.delete_object(Bucket=self.bucket_name, Key=obj['Key']) 632 | self.s3_client.delete_bucket(Bucket=self.bucket_name) -------------------------------------------------------------------------------- /scripts/lambda_function.py: -------------------------------------------------------------------------------- 1 | import json 2 | import uuid 3 | import boto3 4 | 5 | dynamodb = boto3.resource('dynamodb') 6 | table = dynamodb.Table('restaurant_bookings') 7 | 8 | def get_named_parameter(event, name): 9 | """ 10 | Get a parameter from the lambda event 11 | """ 12 | return next(item for item in event['parameters'] if item['name'] == name)['value'] 13 | 14 | 15 | def get_booking_details(booking_id): 16 | """ 17 | Retrieve details of a restaurant booking 18 | 19 | Args: 20 | booking_id (string): The ID of the booking to retrieve 21 | """ 22 | try: 23 | response = table.get_item(Key={'booking_id': booking_id}) 24 | if 'Item' in response: 
25 | return response['Item'] 26 | else: 27 | return {'message': f'No booking found with ID {booking_id}'} 28 | except Exception as e: 29 | return {'error': str(e)} 30 | 31 | 32 | def create_booking(date, name, hour, num_guests): 33 | """ 34 | Create a new restaurant booking 35 | 36 | Args: 37 | date (string): The date of the booking 38 | name (string): Name to identify your reservation 39 | hour (string): The hour of the booking 40 | num_guests (integer): The number of guests for the booking 41 | """ 42 | try: 43 | booking_id = str(uuid.uuid4())[:8] 44 | table.put_item( 45 | Item={ 46 | 'booking_id': booking_id, 47 | 'date': date, 48 | 'name': name, 49 | 'hour': hour, 50 | 'num_guests': num_guests 51 | } 52 | ) 53 | return {'booking_id': booking_id} 54 | except Exception as e: 55 | return {'error': str(e)} 56 | 57 | 58 | def delete_booking(booking_id): 59 | """ 60 | Delete an existing restaurant booking 61 | 62 | Args: 63 | booking_id (str): The ID of the booking to delete 64 | """ 65 | try: 66 | response = table.delete_item(Key={'booking_id': booking_id}) 67 | if response['ResponseMetadata']['HTTPStatusCode'] == 200: 68 | return {'message': f'Booking with ID {booking_id} deleted successfully'} 69 | else: 70 | return {'message': f'Failed to delete booking with ID {booking_id}'} 71 | except Exception as e: 72 | return {'error': str(e)} 73 | 74 | 75 | def lambda_handler(event, context): 76 | # get the action group used during the invocation of the lambda function 77 | actionGroup = event.get('actionGroup', '') 78 | 79 | # name of the function that should be invoked 80 | function = event.get('function', '') 81 | 82 | # parameters to invoke function with 83 | parameters = event.get('parameters', []) 84 | 85 | if function == 'get_booking_details': 86 | booking_id = get_named_parameter(event, "booking_id") 87 | if booking_id: 88 | response = str(get_booking_details(booking_id)) 89 | responseBody = {'TEXT': {'body': json.dumps(response)}} 90 | else: 91 | responseBody = 
{'TEXT': {'body': 'Missing booking_id parameter'}} 92 | 93 | elif function == 'create_booking': 94 | date = get_named_parameter(event, "date") 95 | name = get_named_parameter(event, "name") 96 | hour = get_named_parameter(event, "hour") 97 | num_guests = get_named_parameter(event, "num_guests") 98 | 99 | if date and name and hour and num_guests: 100 | response = str(create_booking(date, name, hour, num_guests)) 101 | responseBody = {'TEXT': {'body': json.dumps(response)}} 102 | else: 103 | responseBody = {'TEXT': {'body': 'Missing required parameters'}} 104 | 105 | elif function == 'delete_booking': 106 | booking_id = get_named_parameter(event, "booking_id") 107 | if booking_id: 108 | response = str(delete_booking(booking_id)) 109 | responseBody = {'TEXT': {'body': json.dumps(response)}} 110 | else: 111 | responseBody = {'TEXT': {'body': 'Missing booking_id parameter'}} 112 | 113 | else: 114 | responseBody = {'TEXT': {'body': 'Invalid function'}} 115 | 116 | action_response = { 117 | 'actionGroup': actionGroup, 118 | 'function': function, 119 | 'functionResponse': { 120 | 'responseBody': responseBody 121 | } 122 | } 123 | 124 | function_response = {'response': action_response, 'messageVersion': event['messageVersion']} 125 | print("Response: {}".format(function_response)) 126 | 127 | return function_response 128 | --------------------------------------------------------------------------------
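The handler in `scripts/lambda_function.py` is driven by the function-details event that an Amazon Bedrock Agents action group sends to its Lambda. A minimal sketch of that event shape and of the parameter-extraction and response-envelope logic (the helper mirrors `get_named_parameter` from the file; the action-group name and booking values are made-up illustrations, not values from this repo):

```python
# Sketch of the event an Amazon Bedrock Agents action group sends to the Lambda,
# and of the response envelope the handler builds before returning.
# Action-group name and booking values below are illustrative only.

def get_named_parameter(event, name):
    # Mirrors the helper in lambda_function.py: look up a parameter by name.
    return next(item for item in event['parameters'] if item['name'] == name)['value']

sample_event = {
    'messageVersion': '1.0',
    'actionGroup': 'restaurant-booking-actions',  # hypothetical action-group name
    'function': 'create_booking',
    'parameters': [
        {'name': 'date', 'type': 'string', 'value': '2024-05-01'},
        {'name': 'name', 'type': 'string', 'value': 'Ana'},
        {'name': 'hour', 'type': 'string', 'value': '19:00'},
        {'name': 'num_guests', 'type': 'integer', 'value': '2'},
    ],
}

date = get_named_parameter(sample_event, 'date')
hour = get_named_parameter(sample_event, 'hour')
response_body = {'TEXT': {'body': f'Booking requested for {date} at {hour}'}}

# The agent runtime expects the result wrapped in this envelope,
# which is what lambda_handler assembles before returning.
function_response = {
    'response': {
        'actionGroup': sample_event['actionGroup'],
        'function': sample_event['function'],
        'functionResponse': {'responseBody': response_body},
    },
    'messageVersion': sample_event['messageVersion'],
}
print(function_response)
```

Note that parameter values arrive as strings even when declared `integer`, which is why the handler passes them through without casting.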