├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── ask-bedrock-with-rag.py ├── load-data-to-opensearch.py ├── requirements.txt ├── terraform ├── .terraform.lock.hcl ├── bedrock-iam.tf ├── opensearch.tf ├── provider.tf ├── secret.tf └── variables.tf └── utils ├── __init__.py ├── dataset.py ├── opensearch.py └── secret.py /.gitignore: -------------------------------------------------------------------------------- 1 | .terraform/ 2 | *.tfstate* 3 | 4 | dependencies/ 5 | 6 | .venv/ 7 | __pycache__/ 8 | 9 | .python-version 10 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 4 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 5 | [opensource-codeofconduct@amazon.com](mailto:opensource-codeofconduct@amazon.com) with any additional questions or comments. 6 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | ## Reporting Bugs/Feature Requests 10 | 11 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 
12 | 13 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 14 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 15 | 16 | * A reproducible test case or series of steps 17 | * The version of our code being used 18 | * Any modifications you've made relevant to the bug 19 | * Anything unusual about your environment or deployment 20 | 21 | ## Contributing via Pull Requests 22 | 23 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 24 | 25 | 1. You are working against the latest source on the *main* branch. 26 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 27 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 28 | 29 | To send us a pull request, please: 30 | 31 | 1. Fork the repository. 32 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 33 | 3. Ensure local tests pass. 34 | 4. Commit to your fork using clear commit messages. 35 | 5. Send us a pull request, answering any default questions in the pull request interface. 36 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 37 | 38 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 39 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 40 | 41 | ## Finding contributions to work on 42 | 43 | Looking at the existing issues is a great way to find something to contribute to.
Our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), so looking at any 'help wanted' issues is a great place to start. 44 | 45 | ## Code of Conduct 46 | 47 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 48 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 49 | [opensource-codeofconduct@amazon.com](mailto:opensource-codeofconduct@amazon.com) with any additional questions or comments. 50 | 51 | ## Security issue notifications 52 | 53 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 54 | 55 | ## Licensing 56 | 57 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 58 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 17 | 18 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # RAG using LangChain with Amazon Bedrock Titan text and embedding, using OpenSearch vector engine 2 | 3 | This repository provides sample code for the RAG (Retrieval-Augmented Generation) method, relying on the [Amazon Bedrock](https://aws.amazon.com/bedrock/) [Titan Embeddings Generation 1 (G1)](https://aws.amazon.com/bedrock/titan/) model to create text embeddings that will be stored in [Amazon OpenSearch](https://aws.amazon.com/opensearch-service/) with [vector engine support](https://aws.amazon.com/about-aws/whats-new/2023/07/vector-engine-amazon-opensearch-serverless-preview/), assisting with the prompt engineering task to get more accurate responses from LLMs. 4 | 5 | After we have successfully loaded embeddings into OpenSearch, we will start querying our LLM using [LangChain](https://www.langchain.com/): we ask questions, and the most similar embeddings are retrieved to build a more accurate prompt. 6 | 7 | You can use the `--bedrock-model-id` parameter to seamlessly choose one of the foundation models available in Amazon Bedrock. It defaults to [Anthropic Claude](https://aws.amazon.com/bedrock/claude/) and can be replaced with any other model from any other model provider, so you can pick your best-performing foundation model.
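The retrieval step described above works by embedding the question and ranking the stored Titan embeddings by vector similarity. As a toy illustration of that ranking (plain Python with made-up 3-dimensional vectors — Titan Embeddings G1 actually returns 1536-dimensional vectors, and the OpenSearch vector engine performs this search at scale):

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors: dot product divided by the
    # product of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical tiny "embeddings" for illustration only
documents = {
    "doc-about-ai": [0.9, 0.1, 0.0],
    "doc-about-cooking": [0.0, 0.2, 0.9],
}
question_embedding = [0.8, 0.2, 0.1]

# Rank documents by similarity to the question, most similar first
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(documents[d], question_embedding),
    reverse=True,
)
print(ranked[0])  # prints "doc-about-ai"
```

The text of the top-ranked documents is what gets injected as context into the prompt sent to the foundation model.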
8 | 9 | Anthropic: 10 | 11 | - Claude v2 `python ./ask-bedrock-with-rag.py --ask "How will AI change our everyday life?"` 12 | - Claude v1.3 `python ./ask-bedrock-with-rag.py --bedrock-model-id anthropic.claude-v1 --ask "How will AI change our everyday life?"` 13 | - Claude Instant v1.2 `python ./ask-bedrock-with-rag.py --bedrock-model-id anthropic.claude-instant-v1 --ask "How will AI change our everyday life?"` 14 | 15 | AI21 Labs: 16 | 17 | - Jurassic-2 Ultra `python ./ask-bedrock-with-rag.py --bedrock-model-id ai21.j2-ultra-v1 --ask "How will AI change our everyday life?"` 18 | - Jurassic-2 Mid `python ./ask-bedrock-with-rag.py --bedrock-model-id ai21.j2-mid-v1 --ask "How will AI change our everyday life?"` 19 | 20 | ## Prerequisites 21 | 22 | 1. This was tested on Python 3.11.4 23 | 2. It is advised to work in a clean environment; use `virtualenv` or any other virtual environment manager. 24 | 25 | ```bash 26 | pip install virtualenv 27 | python -m virtualenv venv 28 | source ./venv/bin/activate 29 | ``` 30 | 31 | 3. Install requirements `pip install -r requirements.txt` 32 | 4. Install [terraform](https://developer.hashicorp.com/terraform/downloads?product_intent=terraform) to create the OpenSearch cluster 33 | 34 | ```bash 35 | brew tap hashicorp/tap 36 | brew install hashicorp/tap/terraform 37 | ``` 38 | 39 | 5. Go to the Model Access [page](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess) and enable the foundation models you want to use. 40 | 41 | ## Steps for using this sample code 42 | 43 | 1. In the first step we will launch an OpenSearch cluster using Terraform. 44 | 45 | ```bash 46 | cd ./terraform 47 | terraform init 48 | terraform apply -auto-approve 49 | ``` 50 | 51 | >>This cluster configuration is for testing purposes only, as its endpoint is public to simplify the use of this sample code. 52 | 53 | 2. 
Now that we have a running OpenSearch cluster with vector engine support, we will start uploading the data that will help us with prompt engineering. For this sample, we will use the [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz) data source from the [Hugging Face](https://huggingface.co) [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data) collection: we will download it, invoke Titan Embeddings to get a text embedding for each record, and store the embeddings in OpenSearch for the next steps. 54 | 55 | ```bash 56 | python load-data-to-opensearch.py --recreate 1 --early-stop 1 57 | ``` 58 | 59 | >>Optional arguments: 60 | >>- `--recreate` for recreating the index in OpenSearch 61 | >>- `--early-stop` to load only 100 embedded documents into OpenSearch 62 | >>- `--index` to use a different index than the default **rag** 63 | >>- `--region` in case you are not using the default **us-east-1** 64 | >>- `--multi-tenant` to use multi-tenancy; loads data with random tenant IDs (1-5) 65 | 66 | 67 | 3. Now that we have embedded text in our OpenSearch cluster, we can start querying a foundation model in Amazon Bedrock with RAG 68 | 69 | ```bash 70 | python ask-bedrock-with-rag.py --ask "your question here" 71 | ``` 72 | 73 | >>Optional arguments: 74 | >>- `--index` to use a different index than the default **rag** 75 | >>- `--region` in case you are not using the default **us-east-1** 76 | >>- `--bedrock-model-id` to choose a model other than the default Anthropic Claude 77 | >>- `--tenant-id` to filter results to a specific tenant ID 78 | 79 | 80 | ### Cleanup 81 | 82 | ```bash 83 | cd ./terraform 84 | terraform destroy # When prompted for confirmation, type yes, and press enter. 85 | ``` 86 | 87 | ## Contributing 88 | 89 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 90 | 91 | ## License 92 | 93 | This library is licensed under the MIT-0 License. 
See the LICENSE file. 94 | -------------------------------------------------------------------------------- /ask-bedrock-with-rag.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from utils import opensearch, secret 3 | from langchain_community.embeddings import BedrockEmbeddings 4 | from langchain_community.vectorstores import OpenSearchVectorSearch 5 | from langchain.chains.combine_documents import create_stuff_documents_chain 6 | from langchain.chains import create_retrieval_chain 7 | from langchain_core.prompts import ChatPromptTemplate 8 | from langchain_community.chat_models import BedrockChat 9 | import boto3 10 | from loguru import logger 11 | import sys 12 | import os 13 | 14 | 15 | # logger 16 | logger.remove() 17 | logger.add(sys.stdout, level=os.getenv("LOG_LEVEL", "INFO")) 18 | 19 | 20 | def parse_args(): 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument("--ask", type=str, default="What is the meaning of <3?") 23 | parser.add_argument("--index", type=str, default="rag") 24 | parser.add_argument("--region", type=str, default="us-east-1") 25 | parser.add_argument("--tenant-id", type=str, default=None) 26 | parser.add_argument("--bedrock-model-id", type=str, default="anthropic.claude-3-sonnet-20240229-v1:0") 27 | parser.add_argument("--bedrock-embedding-model-id", type=str, default="amazon.titan-embed-text-v1") 28 | 29 | return parser.parse_known_args() 30 | 31 | 32 | def get_bedrock_client(region): 33 | bedrock_client = boto3.client("bedrock-runtime", region_name=region) 34 | return bedrock_client 35 | 36 | 37 | def create_langchain_vector_embedding_using_bedrock(bedrock_client, bedrock_embedding_model_id): 38 | bedrock_embeddings_client = BedrockEmbeddings( 39 | client=bedrock_client, 40 | model_id=bedrock_embedding_model_id) 41 | return bedrock_embeddings_client 42 | 43 | 44 | def create_opensearch_vector_search_client(index_name, opensearch_password, bedrock_embeddings_client, 
opensearch_endpoint, _is_aoss=False): 45 | docsearch = OpenSearchVectorSearch( 46 | index_name=index_name, 47 | embedding_function=bedrock_embeddings_client, 48 | opensearch_url=f"https://{opensearch_endpoint}", 49 | http_auth=(index_name, opensearch_password), 50 | is_aoss=_is_aoss 51 | ) 52 | return docsearch 53 | 54 | 55 | def create_bedrock_llm(bedrock_client, model_version_id): 56 | bedrock_llm = BedrockChat( 57 | model_id=model_version_id, 58 | client=bedrock_client, 59 | model_kwargs={'temperature': 0} 60 | ) 61 | return bedrock_llm 62 | 63 | 64 | def main(): 65 | logger.info("Starting...") 66 | args, _ = parse_args() 67 | region = args.region 68 | index_name = args.index 69 | bedrock_model_id = args.bedrock_model_id 70 | bedrock_embedding_model_id = args.bedrock_embedding_model_id 71 | question = args.ask 72 | tenant_id = args.tenant_id 73 | logger.info(f"Question provided: {question}") 74 | 75 | # Creating all clients for chain 76 | bedrock_client = get_bedrock_client(region) 77 | bedrock_llm = create_bedrock_llm(bedrock_client, bedrock_model_id) 78 | bedrock_embeddings_client = create_langchain_vector_embedding_using_bedrock(bedrock_client, bedrock_embedding_model_id) 79 | opensearch_endpoint = opensearch.get_opensearch_endpoint(index_name, region) 80 | opensearch_password = secret.get_secret(index_name, region) 81 | opensearch_vector_search_client = create_opensearch_vector_search_client(index_name, opensearch_password, bedrock_embeddings_client, opensearch_endpoint) 82 | 83 | # LangChain prompt template 84 | prompt = ChatPromptTemplate.from_template("""If the context is not relevant, please answer the question by using your own knowledge about the topic. If you don't know the answer, just say that you don't know, don't try to make up an answer. 
Don't include harmful content. 85 | 86 | {context} 87 | 88 | Question: {input} 89 | Answer:""") 90 | 91 | docs_chain = create_stuff_documents_chain(bedrock_llm, prompt) 92 | 93 | search_kwargs = {} 94 | if tenant_id: 95 | search_kwargs["filter"] = { 96 | "term": { 97 | "tenant_id": tenant_id 98 | } 99 | } 100 | 101 | retrieval_chain = create_retrieval_chain( 102 | retriever=opensearch_vector_search_client.as_retriever(search_kwargs=search_kwargs), 103 | combine_docs_chain=docs_chain 104 | ) 105 | 106 | logger.info(f"Invoking the chain with KNN similarity using OpenSearch, Bedrock FM {bedrock_model_id}, and Bedrock embeddings with {bedrock_embedding_model_id}") 107 | response = retrieval_chain.invoke({"input": question}) 108 | 109 | logger.info("These are the similar documents from OpenSearch based on the provided query:") 110 | source_documents = response.get('context') 111 | for d in source_documents: 112 | logger.info(f"Text: {d.page_content}") 113 | 114 | print("") 115 | logger.info(f"The answer from Bedrock 
{bedrock_model_id} is: {response.get('answer')}") 116 | 117 | 118 | if __name__ == "__main__": 119 | main() -------------------------------------------------------------------------------- /load-data-to-opensearch.py: -------------------------------------------------------------------------------- 1 | import json 2 | import argparse 3 | import boto3 4 | from utils import dataset, secret, opensearch 5 | from loguru import logger 6 | import sys 7 | import os 8 | import random 9 | 10 | 11 | 12 | # logger 13 | logger.remove() 14 | logger.add(sys.stdout, level=os.getenv("LOG_LEVEL", "INFO")) 15 | 16 | 17 | def parse_args(): 18 | parser = argparse.ArgumentParser() 19 | parser.add_argument("--recreate", type=int, default=0)  # type=int, not type=bool: argparse would treat any non-empty string, even "0", as True 20 | parser.add_argument("--early-stop", type=int, default=0) 21 | parser.add_argument("--index", type=str, default="rag") 22 | parser.add_argument("--region", type=str, default="us-east-1") 23 | parser.add_argument("--multi-tenant", type=int, default=0) 24 | 25 | return parser.parse_known_args() 26 | 27 | 28 | def get_bedrock_client(region): 29 | bedrock_client = boto3.client("bedrock-runtime", region_name=region) 30 | return bedrock_client 31 | 32 | 33 | def create_vector_embedding_with_bedrock(text, name, bedrock_client): 34 | payload = {"inputText": f"{text}"} 35 | body = json.dumps(payload) 36 | modelId = "amazon.titan-embed-text-v1" 37 | accept = "application/json" 38 | contentType = "application/json" 39 | args, _ = parse_args() 40 | multi_tenant = args.multi_tenant 41 | 42 | response = bedrock_client.invoke_model( 43 | body=body, modelId=modelId, accept=accept, contentType=contentType 44 | ) 45 | response_body = json.loads(response.get("body").read()) 46 | 47 | embedding = response_body.get("embedding") 48 | 49 | document = { 50 | "_index": name, 51 | "text": text, 52 | "vector_field": embedding 53 | } 54 | 55 | 56 | if multi_tenant == 1: 57 | document["tenant_id"] = random.randint(1, 5) 58 | 59 | return document 60 | 61 | def main(): 
logger.info("Starting") 63 | 64 | dataset_url = "https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz" 65 | early_stop_record_count = 100 66 | 67 | args, _ = parse_args() 68 | region = args.region 69 | name = args.index 70 | multi_tenant = args.multi_tenant 71 | 72 | 73 | # Prepare OpenSearch index with vector embeddings index mapping 74 | logger.info(f"Recreating opensearch index: {args.recreate}, using early stop: {args.early_stop} to insert only {early_stop_record_count} records") 75 | if multi_tenant: 76 | logger.info("Using multi tenant mode") 77 | logger.info("Preparing OpenSearch Index") 78 | opensearch_password = secret.get_secret(name, region) 79 | opensearch_client = opensearch.get_opensearch_cluster_client(name, opensearch_password, region) 80 | 81 | # Check if to delete OpenSearch index with the argument passed to the script --recreate 1 82 | if args.recreate: 83 | response = opensearch.delete_opensearch_index(opensearch_client, name) 84 | if response: 85 | logger.info("OpenSearch index successfully deleted") 86 | 87 | logger.info(f"Checking if index {name} exists in OpenSearch cluster") 88 | exists = opensearch.check_opensearch_index(opensearch_client, name) 89 | if not exists: 90 | logger.info("Creating OpenSearch index") 91 | success = opensearch.create_index(opensearch_client, name) 92 | if success: 93 | logger.info("Creating OpenSearch index mapping") 94 | success = opensearch.create_index_mapping(opensearch_client, name) 95 | logger.info("OpenSearch Index mapping created") 96 | 97 | # Download sample dataset from HuggingFace 98 | logger.info("Downloading dataset from HuggingFace") 99 | compressed_file_path = dataset.download_dataset(dataset_url) 100 | if compressed_file_path is not None: 101 | file_path = dataset.decompress_dataset(compressed_file_path) 102 | if file_path is not None: 103 | all_records = dataset.prep_for_put(file_path) 104 | 105 | # Initialize bedrock client 106 | 
bedrock_client = get_bedrock_client(region) 107 | 108 | # Vector embedding using Amazon Bedrock Titan text embedding 109 | all_json_records = [] 110 | logger.info("Creating embeddings for records") 111 | 112 | # using the arg --early-stop 113 | i = 0 114 | for record in all_records: 115 | i += 1 116 | if args.early_stop: 117 | if i > early_stop_record_count: 118 | # Bulk put all records to OpenSearch 119 | success, failed = opensearch.put_bulk_in_opensearch(all_json_records, opensearch_client) 120 | logger.info(f"Documents saved {success}, documents failed to save {failed}") 121 | break 122 | records_with_embedding = create_vector_embedding_with_bedrock(record, name, bedrock_client) 123 | logger.info(f"Embedding for record {i} created") 124 | all_json_records.append(records_with_embedding) 125 | if i % 500 == 0 or i == len(all_records):  # i is 1-based, so flush every 500 records and again after the last one (len(all_records)-1 would drop the final record) 126 | # Bulk put all records to OpenSearch 127 | success, failed = opensearch.put_bulk_in_opensearch(all_json_records, opensearch_client) 128 | all_json_records = [] 129 | logger.info(f"Documents saved {success}, documents failed to save {failed}") 130 | 131 | logger.info("Finished creating records using Amazon Bedrock Titan text embedding") 132 | 133 | logger.info("Cleaning up") 134 | dataset.delete_file(compressed_file_path) 135 | dataset.delete_file(file_path) 136 | 137 | logger.info("Finished") 138 | 139 | if __name__ == "__main__": 140 | main() -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | boto3>=1.35.73 2 | langchain==0.1.14 3 | langchain-community==0.0.36 4 | langchain-core==0.1.50 5 | coloredlogs>=15.0.1 6 | jq==1.7.0 7 | opensearch-py==2.5.0 8 | loguru==0.7.2 9 | -------------------------------------------------------------------------------- /terraform/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained 
automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.12.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:i28TUsgqoKs891cyDU0V9fFAwEz/RqbwF8sQShLfNq0=", 9 | "zh:0953565eb67ece49556dc9046c77322dc6c76e0ae6fa0c9fd6710b6afa2588c9", 10 | "zh:43676f3592c127a971719cc37b9199967376fb05d445b356f1545609e2b84bf8", 11 | "zh:46422ab8044b35e90f422ffabc17fa043ec8e4a33e3df2f8b305d63a950c0edb", 12 | "zh:4d34f024a82d31d10b5a9498d26fca71e3e35c543dfc5185c94c3205bc4dba22", 13 | "zh:51be0eeb882f041fc2679bd621e64cd775d013ae003055cea013c9d630c15dfb", 14 | "zh:7ca9252befa7271899febde25b679a73f90dbdb700cbbfec07d29389a3937131", 15 | "zh:8325b2152be0534a718e497a3273cf6c42880e78f290dc35024feef2e0af8e97", 16 | "zh:98f0c4d4c190cf4897cb9075a538f42f2998566e9f2d15755901fbb4862f8b32", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:a71e0bc6754bb3924b31727d80f05b04fa65247c009ffdfd2a715a01d95b373d", 19 | "zh:a82ae67ce3d4c7aaae761a592275b8cac5e9965a30b2dba951c1d965b3121006", 20 | "zh:c5510eca023cec89557a8244648bf8ad9a0cd3189b6abf6dcceba30e3b2e8c6d", 21 | "zh:cd11fe9c83793e838b6f90a55840fc45e7c106b358a68f0a88db09a29a321c9a", 22 | "zh:e451ad353f219a2922b92e786a93c31658168b896317be127798cddfa9a99363", 23 | "zh:e4b70a70e925b9ccb7d44e17fd8e7b89aa744a965f298f8bb2480a5c96f3c4f0", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.5.1" 29 | hashes = [ 30 | "h1:IL9mSatmwov+e0+++YX2V6uel+dV6bn+fC/cnGDK3Ck=", 31 | "zh:04e3fbd610cb52c1017d282531364b9c53ef72b6bc533acb2a90671957324a64", 32 | "zh:119197103301ebaf7efb91df8f0b6e0dd31e6ff943d231af35ee1831c599188d", 33 | "zh:4d2b219d09abf3b1bb4df93d399ed156cadd61f44ad3baf5cf2954df2fba0831", 34 | "zh:6130bdde527587bbe2dcaa7150363e96dbc5250ea20154176d82bc69df5d4ce3", 35 | "zh:6cc326cd4000f724d3086ee05587e7710f032f94fc9af35e96a386a1c6f2214f", 36 | 
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 37 | "zh:b6d88e1d28cf2dfa24e9fdcc3efc77adcdc1c3c3b5c7ce503a423efbdd6de57b", 38 | "zh:ba74c592622ecbcef9dc2a4d81ed321c4e44cddf7da799faa324da9bf52a22b2", 39 | "zh:c7c5cde98fe4ef1143bd1b3ec5dc04baf0d4cc3ca2c5c7d40d17c0e9b2076865", 40 | "zh:dac4bad52c940cd0dfc27893507c1e92393846b024c5a9db159a93c534a3da03", 41 | "zh:de8febe2a2acd9ac454b844a4106ed295ae9520ef54dc8ed2faf29f12716b602", 42 | "zh:eab0d0495e7e711cca367f7d4df6e322e6c562fc52151ec931176115b83ed014", 43 | ] 44 | } 45 | -------------------------------------------------------------------------------- /terraform/bedrock-iam.tf: -------------------------------------------------------------------------------- 1 | data "aws_iam_policy_document" "bedrock" { 2 | statement { 3 | sid = 1 4 | actions = ["bedrock:InvokeModel"] 5 | resources = ["arn:aws:bedrock:*::foundation-model/*"] 6 | } 7 | statement { 8 | sid = 2 9 | actions = ["sts:AssumeRole"] 10 | resources = ["*"] 11 | } 12 | } 13 | 14 | resource "aws_iam_policy" "bedrock" { 15 | name = "bedrock" 16 | policy = data.aws_iam_policy_document.bedrock.json 17 | } 18 | 19 | resource "aws_iam_role" "bedrock" { 20 | name = "bedrock" 21 | assume_role_policy = <