├── .gitignore
├── Additional Considerations.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── app.py
├── cdk.json
├── graphics
│   ├── chat.png
│   ├── createnew.png
│   ├── existing.png
│   ├── home.png
│   ├── loginpage.png
│   ├── output.png
│   └── walink.png
├── lambda_dir
│   ├── extract_document_text
│   │   └── extract_document_text.py
│   ├── generate_pillar_question_response
│   │   └── generate_pillar_question_response.py
│   ├── generate_prompts_for_all_the_selected_pillars
│   │   └── generate_prompts_for_all_the_selected_pillars.py
│   ├── generate_solution_summary
│   │   └── generate_solution_summary.py
│   ├── insert_wafr_prompts
│   │   └── insert_wafr_prompts.py
│   ├── prepare_wafr_review
│   │   └── prepare_wafr_review.py
│   ├── replace_ui_tokens
│   │   └── replace_ui_tokens.py
│   ├── start_wafr_review
│   │   └── start_wafr_review.py
│   └── update_review_status
│       └── update_review_status.py
├── refreshing_kb.md
├── requirements.txt
├── source.bat
├── sys-arch.png
├── ui_code
│   ├── WAFR_Accelerator.py
│   ├── pages
│   │   └── 3_System_Architecture.py
│   ├── sys-arch.png
│   └── tokenized-pages
│       ├── 1_Login.py
│       ├── 1_New_WAFR_Review.py
│       └── 2_Existing_WAFR_Reviews.py
├── user_data_script.sh
├── wafr-prompts
│   └── wafr-prompts.json
├── wafr_genai_accelerator
│   └── wafr_genai_accelerator_stack.py
└── well_architected_docs
    ├── dataanalytics
    │   └── analytics-lens.pdf
    ├── financialservices
    │   └── wellarchitected-financial-services-industry-lens.pdf
    ├── genai
    │   └── generative-ai-lens.pdf
    ├── overview
    │   └── wellarchitected-framework.pdf
    └── wellarchitected
        ├── wellarchitected-cost-optimization-pillar.pdf
        ├── wellarchitected-operational-excellence-pillar.pdf
        ├── wellarchitected-performance-efficiency-pillar.pdf
        ├── wellarchitected-reliability-pillar.pdf
        ├── wellarchitected-security-pillar.pdf
        └── wellarchitected-sustainability-pillar.pdf
/.gitignore:
--------------------------------------------------------------------------------
1 | # CDK specific
2 | cdk.out/
3 | cdk.context.json
4 | .cdk.staging/
5 | *.tabl.json
6 |
7 | # Python specific
8 | __pycache__/
9 |
10 | # Project specific
11 | node_modules
12 | .DS_Store
13 | package.json
14 | package-lock.json
15 |
16 |
17 |
18 |
--------------------------------------------------------------------------------
/Additional Considerations.md:
--------------------------------------------------------------------------------
1 | # Security and Compliance Considerations
2 |
3 | ## TLS 1.2 enforcement
4 | To enforce TLS 1.2 for compliance requirements:
5 | - Create a custom SSL/TLS certificate
6 | - Disable SSL 3.0 and TLS 1.0
7 | - Enable TLS 1.2 or higher
8 | - Implement strong cipher suites
9 |
10 | ## Multi-Factor Authentication (MFA)
11 | To implement MFA:
12 | - Add MFA to your Cognito user pool
13 | - Follow the [AWS Cognito MFA setup guide](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html)
14 |
15 | ## VPC Flow Logs
16 | To enable VPC flow logs:
17 | - Follow the instructions in the [AWS Knowledge Center guide](https://repost.aws/knowledge-center/saw-activate-vpc-flow-logs)
18 | - Monitor and analyze network traffic
19 | - Ensure proper logging configuration
20 |
21 | ## IAM permission for Well Architected Tool
22 | Currently, the IAM policies for the Well-Architected Tool use a wildcard (*) as the target resource. This is because the sample
23 | requires full access to the Well-Architected Tool to create new workloads, update workloads as the analysis progresses, and read existing workload names to avoid duplicate workload creation: [AWS Well-Architected Tool identity-based policy examples](https://docs.aws.amazon.com/wellarchitected/latest/userguide/security_iam_id-based-policy-examples.html)
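
If tighter scoping is required by your organization, the wildcard resource could be paired with a narrower action list. The policy below is a sketch only; the exact set of actions the sample needs should be verified against the Lambda code and the policy examples linked above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WellArchitectedToolAccessSketch",
      "Effect": "Allow",
      "Action": [
        "wellarchitected:CreateWorkload",
        "wellarchitected:UpdateWorkload",
        "wellarchitected:GetWorkload",
        "wellarchitected:ListWorkloads",
        "wellarchitected:ListAnswers",
        "wellarchitected:UpdateAnswer"
      ],
      "Resource": "*"
    }
  ]
}
```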
24 |
25 | ## Size of the text extracted from the uploaded document
26 | The extracted document content, alongside the rest of the analysis and associated information, is stored in a single DynamoDB item. The maximum DynamoDB item size is 400 KB, so uploading a very long document may exceed this limit.
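
If very large documents are a concern, the extracted text could be trimmed before it is written to DynamoDB. The sketch below is illustrative only; the `truncate_for_dynamodb` helper and the 64 KB safety margin are assumptions, not part of this sample:

```python
# Illustrative guard for the 400 KB DynamoDB item limit.
# The helper name and the safety margin are assumptions for this sketch.
MAX_ITEM_BYTES = 400 * 1024   # DynamoDB hard limit per item
SAFETY_MARGIN = 64 * 1024     # head-room for the item's other attributes

def truncate_for_dynamodb(text, budget=MAX_ITEM_BYTES - SAFETY_MARGIN):
    """Trim text so its UTF-8 encoding fits within the byte budget."""
    encoded = text.encode("utf-8")
    if len(encoded) <= budget:
        return text
    # Cut at the byte budget, dropping any partially-cut multi-byte character.
    return encoded[:budget].decode("utf-8", errors="ignore")
```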
27 |
28 | ## Important note
29 | ⚠️ When reviewing model-generated analysis:
30 | - Always verify the responses independently
31 | - Remember that LLM outputs are not deterministic
32 | - Cross-reference with official AWS documentation
33 | - Validate against your specific use case requirements
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *main* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT No Attribution
2 |
3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of
6 | this software and associated documentation files (the "Software"), to deal in
7 | the Software without restriction, including without limitation the rights to
8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
9 | the Software, and to permit persons to whom the Software is furnished to do so.
10 |
11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17 |
18 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Sample AWS Well-Architected Framework Review (WAFR) Acceleration with Generative AI (GenAI)
2 |
3 | ## Name
4 |
5 | AWS Well-Architected Framework Review (WAFR) Acceleration with Generative AI (GenAI)
6 |
7 | ## Description
8 |
9 | This is a comprehensive sample designed to facilitate and expedite the AWS Well-Architected Framework Review process.
10 |
11 |
12 | This sample aims to accelerate AWS Well-Architected Framework Review (WAFR) velocity and adoption by leveraging the power of generative AI to provide organizations with automated comprehensive analysis and recommendations for optimizing their AWS architectures.
13 |
14 |
15 | ## Core Features
16 |
17 |
18 |
19 | * Ability to upload technical content (for example solution design and architecture documents) to be reviewed in PDF format
20 | * Creation of architecture assessment including:
21 | * Solution summary
22 | * Assessment
23 | * Well-Architected best practices
24 | * Recommendations for improvements
25 | * Risk
26 | * Ability to chat with the document as well as generated content
27 | * Creation of Well-Architected workload in Well-Architected tool that has:
28 | * Initial selection of choices for each of the questions, based on the assessment.
29 | * Notes populated with the generated assessment.
30 |
31 | ## Optional / Configurable Features
32 |
33 | * [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/) - initial set of Amazon Bedrock Guardrail configurations for Responsible AI.
34 | * Default - Enabled
35 | * Amazon OpenSearch Serverless disable redundancy - by default, each Amazon OpenSearch Serverless collection has redundancy and has its own standby replicas in a different Availability Zone. This option allows you to disable redundancy to reduce overall stack cost during development and testing. See Amazon OpenSearch Serverless [How it works](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-overview.html#serverless-process) for more details.
36 | * Default - Enabled
37 |
38 | _* Note: Each of the above features can be individually enabled or disabled by updating the 'optional_features' JSON object in the 'app.py' file._
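
For reference, the toggle in 'app.py' is a plain dictionary of string flags (shown as in the sample; the comments are added here for clarity):

```python
# Flags for optional features, as defined in app.py.
# Note the values are the strings "True"/"False", not booleans.
optional_features = {
    "guardrails": "True",                   # Amazon Bedrock Guardrails
    "openSearchReducedRedundancy": "True"   # disable OpenSearch standby replicas
}
```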
39 |
40 | ## Technical Architecture
41 |
42 | 
43 |
44 | ## Implementation Guide
45 |
46 | ### Pre-requisites
47 | * Ensure you have access to the following models in Amazon Bedrock:
48 | * Titan Text Embeddings V2
49 | * Claude 3-5 Sonnet
50 |
51 | * The machine used to build and deploy the sample needs to have the following installed:
52 | * [AWS CDK prerequisites](https://docs.aws.amazon.com/cdk/v2/guide/prerequisites.html) with [Python 3.9+](https://www.python.org/downloads/)
53 | * [AWS Cloud Development Kit (AWS CDK) v2](https://docs.aws.amazon.com/cdk/v2/guide/getting-started.html)
54 | * [Docker CE](https://docs.docker.com/engine/)
55 |
56 | ### Cloning Repository
57 |
58 | Clone the repository to your local directory. You can use the following command:
59 | ```
60 | git clone https://github.com/aws-samples/sample-well-architected-acceleration-with-generative-ai.git
61 | ```
62 | Alternatively, you can download the code as a zip file.
63 |
64 | ### Code Download
65 |
66 | Download the compressed source code by selecting the ‘Code’ drop-down menu at the top right of the repository page.
67 | If you downloaded the .zip file, you can use the following command:
68 |
69 | ```
70 | unzip sample-well-architected-acceleration-with-generative-ai-main.zip
71 | ```
72 | ```
73 | cd sample-well-architected-acceleration-with-generative-ai-main/
74 | ```
75 | ### Preparing and populating Amazon Bedrock Knowledge Base with AWS Well-Architected reference documents
76 |
77 | The Amazon Bedrock knowledge base is driven by AWS Well-Architected documents. These documents have been downloaded and placed in the 'well_architected_docs' folder for ease of deployment. They are ingested during the build.
78 |
79 | Please refer to [Refreshing Amazon Bedrock Knowledge Base with latest AWS Well-Architected Reference Documents](refreshing_kb.md) for guidance on how to refresh the documents with future releases of the Well-Architected Framework after the build.
80 |
81 | Currently, the following AWS Well-Architected lenses are supported:
82 | * AWS Well-Architected Framework Lens
83 | * Data Analytics Lens
84 | * Generative AI Lens
85 | * Financial Services Industry Lens
86 |
87 | ### CDK Deployment
88 |
89 | Manually create a virtualenv on MacOS and Linux:
90 |
91 | ```
92 | python3 -m venv .venv
93 | ```
94 |
95 | After the init process completes and the virtualenv is created, you can use the following
96 | step to activate your virtualenv.
97 |
98 | ```
99 | source .venv/bin/activate
100 | ```
101 |
102 | If you are on a Windows platform, you would activate the virtualenv like this:
103 |
104 | ```
105 | % .venv\Scripts\activate.bat
106 | ```
107 |
108 | Once the virtualenv is activated, you can install the required dependencies.
109 |
110 | ```
111 | pip3 install -r requirements.txt
112 | ```
113 |
114 | Open the "wafr_genai_accelerator/wafr_genai_accelerator_stack.py" file using a file editor such as nano and update the CloudFront [managed prefix list](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/LocationsOfEdgeServers.html) for your deployment region. By default, it uses the 'us-west-2' managed prefix list.
115 |
116 |
117 | ```
118 | alb_security_group.add_ingress_rule(
119 | ec2.Peer.prefix_list("pl-82a045eb"),
120 | ec2.Port.HTTP,
121 | "Allow inbound connections only from Cloudfront to Streamlit port"
122 | )
123 | ```
124 |
125 | Open app.py, then uncomment and update one of the following lines based on your deployment location:
126 |
127 | ```
128 | #env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
129 | ```
130 | Or
131 | ```
132 | #env=cdk.Environment(account='111122223333', region='us-west-2'),
133 | ```
134 |
135 | If you are deploying CDK for the first time in your account, run the command below (otherwise, skip this step):
136 |
137 | ```
138 | cdk bootstrap
139 | ```
140 |
141 | At this point you can now synthesize the CloudFormation template for this code.
142 |
143 | ```
144 | cdk synth
145 | ```
146 |
147 | You can now deploy the CDK stack:
148 |
149 | ```
150 | cdk deploy
151 | ```
152 |
153 | You will need to enter 'y' to confirm the deployment. The deployment can take around 20-25 minutes to complete.
154 |
155 |
156 | ### Demo configurations
157 |
158 | On completion of the CDK deployment, you will see three outputs:
159 | a) Amazon Cognito user pool name
160 | b) Front-end UI EC2 instance ID
161 | c) Amazon CloudFront URL for the web application
162 |
163 |
164 | ##### Add a user to Amazon Cognito user pool
165 |
166 | Firstly, [add a user to the Amazon Cognito pool](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html#creating-a-new-user-using-the-console) indicated in the output. You will use these user credentials for the application login later on.
167 |
168 | ##### EC2 sanity check
169 |
170 | Next, log in to the EC2 instance as ec2-user (for example, using EC2 Instance Connect) and check whether the front-end user interface folder has been synced.
171 |
172 | ```
173 | cd /wafr-accelerator
174 | ls -l
175 | ```
176 |
177 | If there is no "pages" folder, sync the front-end user interface as below.
178 |
179 | ```
180 | python3 syncUIFolder.py
181 | ```
182 |
183 | Ensure /wafr-accelerator and all the files underneath are owned by ec2-user. If not, execute the following.
184 |
185 | ```
186 | sudo chown -R ec2-user:ec2-user /wafr-accelerator
187 | ```
188 |
189 | ##### Running the application
190 |
191 | Once the UI folder has been synced, run the application as below:
192 |
193 | ```
194 | streamlit run WAFR_Accelerator.py
195 | ```
196 |
197 | You can now use the Amazon CloudFront URL from the CDK output to access the sample application in a web browser.
198 |
199 |
200 | ### Testing the demo application
201 |
202 | Open a new web browser window and paste the Amazon CloudFront URL copied earlier into the address bar. On the login page, enter the credentials for the previously created user.
203 |
204 | 
205 |
206 | On the home page, click on the "New WAFR Review" link.
207 |
208 | 
209 |
210 | On the "Create New WAFR Analysis" page, select the analysis type ("Quick" or "Deep with Well-Architected Tool") and provide the analysis name, description, Well-Architected lens, etc. in the input form.
211 |
212 | **Analysis Types**:
213 | * **"Quick"** - a quick analysis without the creation of a workload in the AWS Well-Architected Tool. Relatively faster, as it groups all questions for an individual pillar into a single prompt; suitable for an initial assessment.
214 | * **"Deep with Well-Architected Tool"** - a robust and deep analysis that also creates a workload in the AWS Well-Architected Tool. Takes longer to complete, as questions are not grouped and a response is generated for every question individually.
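
The difference between the two modes can be pictured as prompt batching. The sketch below is purely illustrative; the function and variable names are assumptions, not the sample's code:

```python
# Purely illustrative: how "Quick" vs "Deep" differ in prompt batching.

def quick_prompts(pillar_questions):
    # "Quick": one prompt per pillar, covering all of its questions at once.
    return ["\n".join(questions) for questions in pillar_questions.values()]

def deep_prompts(pillar_questions):
    # "Deep": one prompt per individual question.
    return [q for questions in pillar_questions.values() for q in questions]

pillars = {"Security": ["SEC 1", "SEC 2"], "Reliability": ["REL 1"]}
```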
215 |
216 | 
217 |
218 | * Note: The "Created by" field is automatically populated with the logged-in user name.
219 |
220 | You have the option to select one or more Well-Architected pillars. Finally, upload the solution architecture / technical design document that needs to be analysed and press the "Create WAFR Analysis" button.
221 |
222 | After successful submission, navigate to the "Existing WAFR Reviews" page. The newly submitted analysis will be listed in the table along with any existing reviews.
223 | 
224 |
225 | Once the analysis is marked "Completed", the WAFR analysis for the selected lens will be shown at the bottom of the page. If there are multiple reviews, select the relevant analysis from the drop-down list.
226 |
227 |
228 | 
229 |
230 | * Note: Analysis duration varies based on the analysis type ("Quick" or "Deep with Well-Architected Tool") and the number of WAFR pillars selected. A "Quick" analysis with one WAFR pillar is likely to be much quicker than a "Deep with Well-Architected Tool" analysis with all six WAFR pillars selected.
231 | * Note: Only the questions for the selected Well-Architected lens and pillars are answered.
232 |
233 | You can chat with the uploaded document, as well as any of the generated content, by using the "WAFR Chat" section at the bottom of the "Existing WAFR Reviews" page.
234 |
235 |
236 | 
237 |
238 |
239 | ### Uninstall - CDK Undeploy
240 |
241 | If you no longer need the application or would like to delete the CDK deployment, run the following command:
242 |
243 | ```
244 | cdk destroy
245 | ```
246 |
247 | ### Additional considerations
248 | Please see [Additional Considerations](Additional%20Considerations.md)
249 |
250 |
251 | ### Disclaimer
252 | This is sample code intended for non-production usage. You should work with your security and legal teams to meet your organization's security, regulatory, and compliance requirements before deployment.
253 |
254 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import aws_cdk as cdk
4 |
5 | from wafr_genai_accelerator.wafr_genai_accelerator_stack import WafrGenaiAcceleratorStack
6 |
7 | app = cdk.App()
8 |
9 | # Define tags as a dictionary
10 | # Tags will be applied to all resources in the stack
11 | # tags = {
12 | # "Environment": "Production",
13 | # "Project": "WellArchitectedReview",
14 | # "Owner": "TeamName",
15 | # "CostCenter": "12345"
16 | # }
17 | tags = {
18 | "Project": "WellArchitectedReview"
19 | }
20 |
21 | # Flags for optional features
22 | optional_features = {
23 | "guardrails": "True",
24 | "openSearchReducedRedundancy": "True"
25 | }
26 |
27 | WafrGenaiAcceleratorStack(app, "WellArchitectedReviewUsingGenAIStack", tags=tags, optional_features=optional_features,
28 | # If you don't specify 'env', this stack will be environment-agnostic.
29 | # Account/Region-dependent features and context lookups will not work,
30 | # but a single synthesized template can be deployed anywhere.
31 |
32 | # Uncomment the next line to specialize this stack for the AWS Account
33 | # and Region that are implied by the current CLI configuration.
34 |
35 | env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
36 |
37 | # Uncomment the next line if you know exactly what Account and Region you
38 | # want to deploy the stack to. */
39 |
40 | #env=cdk.Environment(account='**********', region='us-west-2'),
41 |
42 | #For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html
43 |
44 | )
45 |
46 | app.synth()
47 |
--------------------------------------------------------------------------------
/cdk.json:
--------------------------------------------------------------------------------
1 | {
2 | "app": "python3 app.py",
3 | "watch": {
4 | "include": [
5 | "**"
6 | ],
7 | "exclude": [
8 | "README.md",
9 | "cdk*.json",
10 | "requirements*.txt",
11 | "source.bat",
12 | "**/__init__.py",
13 | "**/__pycache__",
14 | "tests"
15 | ]
16 | },
17 | "context": {
18 | "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
19 | "@aws-cdk/core:checkSecretUsage": true,
20 | "@aws-cdk/core:target-partitions": [
21 | "aws",
22 | "aws-cn"
23 | ],
24 | "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
25 | "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
26 | "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
27 | "@aws-cdk/aws-iam:minimizePolicies": true,
28 | "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
29 | "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
30 | "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
31 | "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
32 | "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
33 | "@aws-cdk/core:enablePartitionLiterals": true,
34 | "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
35 | "@aws-cdk/aws-iam:standardizedServicePrincipals": true,
36 | "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true,
37 | "@aws-cdk/aws-iam:importedRoleStackSafeDefaultPolicyName": true,
38 | "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
39 | "@aws-cdk/aws-route53-patters:useCertificate": true,
40 | "@aws-cdk/customresources:installLatestAwsSdkDefault": false,
41 | "@aws-cdk/aws-rds:databaseProxyUniqueResourceName": true,
42 | "@aws-cdk/aws-codedeploy:removeAlarmsFromDeploymentGroup": true,
43 | "@aws-cdk/aws-apigateway:authorizerChangeDeploymentLogicalId": true,
44 | "@aws-cdk/aws-ec2:launchTemplateDefaultUserData": true,
45 | "@aws-cdk/aws-secretsmanager:useAttachedSecretResourcePolicyForSecretTargetAttachments": true,
46 | "@aws-cdk/aws-redshift:columnId": true,
47 | "@aws-cdk/aws-stepfunctions-tasks:enableEmrServicePolicyV2": true,
48 | "@aws-cdk/aws-ec2:restrictDefaultSecurityGroup": true,
49 | "@aws-cdk/aws-apigateway:requestValidatorUniqueId": true,
50 | "@aws-cdk/aws-kms:aliasNameRef": true,
51 | "@aws-cdk/aws-autoscaling:generateLaunchTemplateInsteadOfLaunchConfig": true,
52 | "@aws-cdk/core:includePrefixInUniqueNameGeneration": true,
53 | "@aws-cdk/aws-efs:denyAnonymousAccess": true,
54 | "@aws-cdk/aws-opensearchservice:enableOpensearchMultiAzWithStandby": true,
55 | "@aws-cdk/aws-lambda-nodejs:useLatestRuntimeVersion": true,
56 | "@aws-cdk/aws-efs:mountTargetOrderInsensitiveLogicalId": true,
57 | "@aws-cdk/aws-rds:auroraClusterChangeScopeOfInstanceParameterGroupWithEachParameters": true,
58 | "@aws-cdk/aws-appsync:useArnForSourceApiAssociationIdentifier": true,
59 | "@aws-cdk/aws-rds:preventRenderingDeprecatedCredentials": true,
60 | "@aws-cdk/aws-codepipeline-actions:useNewDefaultBranchForCodeCommitSource": true,
61 | "@aws-cdk/aws-cloudwatch-actions:changeLambdaPermissionLogicalIdForLambdaAction": true,
62 | "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true,
63 | "@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2": true,
64 | "@aws-cdk/aws-kms:reduceCrossAccountRegionPolicyScope": true,
65 | "@aws-cdk/aws-eks:nodegroupNameAttribute": true,
66 | "@aws-cdk/aws-ec2:ebsDefaultGp3Volume": true,
67 | "@aws-cdk/aws-ecs:removeDefaultDeploymentAlarm": true,
68 | "@aws-cdk/custom-resources:logApiResponseDataPropertyTrueDefault": false,
69 | "@aws-cdk/aws-stepfunctions-tasks:ecsReduceRunTaskPermissions": true
70 | }
71 | }
72 |
--------------------------------------------------------------------------------
/graphics/chat.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/chat.png
--------------------------------------------------------------------------------
/graphics/createnew.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/createnew.png
--------------------------------------------------------------------------------
/graphics/existing.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/existing.png
--------------------------------------------------------------------------------
/graphics/home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/home.png
--------------------------------------------------------------------------------
/graphics/loginpage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/loginpage.png
--------------------------------------------------------------------------------
/graphics/output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/output.png
--------------------------------------------------------------------------------
/graphics/walink.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/graphics/walink.png
--------------------------------------------------------------------------------
/lambda_dir/extract_document_text/extract_document_text.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | dynamodb = boto3.resource('dynamodb')
16 |
17 | logger = logging.getLogger()
18 | logger.setLevel(logging.INFO)
19 |
20 | def lambda_handler(event, context):
21 |
22 |     entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
23 | 
24 |     logger.info("extract_document_text invoked at " + entry_timestamp)
25 |
26 |     output_filename = extracted_document_text = ""
27 | logger.info(json.dumps(event))
28 |
29 | return_response = data = event
30 | upload_bucket_name = data['extract_output_bucket']
31 | region = data['region']
32 |
33 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
34 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
35 |
36 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
37 |
38 | try:
39 |
40 | # Extract text from the document
41 | extracted_document_text = extract_text(upload_bucket_name, document_s3_key , region)
42 |
43 | attribute_updates = {
44 | 'extracted_document': {
45 | 'Action': 'ADD' # PUT to update or ADD to add a new attribute
46 | }
47 | }
48 |
49 | # Update the item
50 | response = wafr_accelerator_runs_table.update_item(
51 | Key=wafr_accelerator_run_key,
52 | UpdateExpression="SET extracted_document = :val",
53 | ExpressionAttributeValues={':val': extracted_document_text},
54 | ReturnValues='UPDATED_NEW'
55 | )
56 |
57 | # Write the textract output to a txt file
58 | output_bucket = s3.Bucket(upload_bucket_name)
59 | logger.info ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
60 |         logger.info("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')])
61 | output_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-extracted-text.txt"
62 | return_response['extract_text_file_name'] = output_filename
63 |
64 | # Upload the file to S3
65 | output_bucket.put_object(Key=output_filename, Body=bytes(extracted_document_text, encoding='utf-8'))
66 |
67 | except Exception as error:
68 | # Handle errors and update DynamoDB status
69 | handle_error(wafr_accelerator_runs_table, wafr_accelerator_run_key, error)
70 | raise Exception (f'Exception caught in extract_document_text: {error}')
71 |
72 | logger.info('return_response: ' + json.dumps(return_response))
73 |
74 |     exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
75 |     logger.info("Exiting extract_document_text at " + exit_timestamp)
76 |
77 | # Return a success response
78 | return {
79 | 'statusCode': 200,
80 | 'body': json.dumps(return_response)
81 | }
82 |
83 | def handle_error(table, key, error):
84 | # Handle errors and update DynamoDB status
85 | table.update_item(
86 | Key=key,
87 | UpdateExpression="SET review_status = :val",
88 | ExpressionAttributeValues={':val': "Errored"},
89 | ReturnValues='UPDATED_NEW'
90 | )
91 | logger.error(f"Exception caught in extract_document_text: {error}")
92 |
93 | def extract_text(upload_bucket_name, document_s3_key, region):
94 |
95 | logger.info ("solution_design_text is null, so extracting the text with Textract")
96 | # Initialize the Textract client with retries
97 | textract_config = Config(retries = dict(max_attempts = 5))
98 | textract_client = boto3.client('textract', region_name=region, config=textract_config)
99 |
100 | logger.debug ("extract_text checkpoint 1")
101 | # Start the text detection job
102 | response = textract_client.start_document_text_detection(
103 | DocumentLocation={
104 | 'S3Object': {
105 | 'Bucket': upload_bucket_name,
106 | 'Name': document_s3_key
107 | }
108 | }
109 | )
110 |
111 | job_id = response["JobId"]
112 |
113 | logger.info (f"textract response: {job_id}")
114 |
115 | logger.debug ("extract_text checkpoint 2")
116 |
117 | # Poll until the job completes; fail fast if Textract reports failure
118 | while True:
119 | response = textract_client.get_document_text_detection(JobId=job_id)
120 | status = response["JobStatus"]
121 | if status == "SUCCEEDED":
122 | break
123 | if status == "FAILED":
124 | raise Exception(f"Textract job {job_id} failed")
125 | time.sleep(5) # avoid a busy-wait; assumes 'time' is imported at the top of the file
126 |
124 | logger.debug ("extract_text checkpoint 3")
125 | # Get the job results
126 | pages = []
127 | next_token = None
128 | while True:
129 | if next_token:
130 | response = textract_client.get_document_text_detection(JobId=job_id, NextToken=next_token)
131 | else:
132 | response = textract_client.get_document_text_detection(JobId=job_id)
133 | pages.append(response)
134 | if 'NextToken' in response:
135 | next_token = response['NextToken']
136 | else:
137 | break
138 |
139 | logger.debug ("extract_text checkpoint 4")
140 | # Extract the text from all pages
141 | extracted_text = ""
142 | for page in pages:
143 | for item in page["Blocks"]:
144 | if item["BlockType"] == "LINE":
145 | extracted_text += item["Text"] + "\n"
146 |
147 |
148 | return extracted_text
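The NextToken pagination loop in `extract_text` is a reusable pattern. The sketch below exercises the same loop against a stubbed paginated API; the stub function and its sample data are invented for illustration and are not part of this Lambda.

```python
def fetch_all_pages(get_page):
    """Collect every page from a NextToken-style API, mirroring extract_text."""
    pages = []
    next_token = None
    while True:
        response = get_page(next_token)
        pages.append(response)
        if 'NextToken' in response:
            next_token = response['NextToken']
        else:
            break
    return pages

# Hypothetical stub standing in for textract_client.get_document_text_detection
def stub_get_page(token):
    data = {
        None: {"Blocks": [{"BlockType": "LINE", "Text": "page one"}], "NextToken": "t1"},
        "t1": {"Blocks": [{"BlockType": "LINE", "Text": "page two"}]},
    }
    return data[token]

pages = fetch_all_pages(stub_get_page)
# Flatten LINE blocks exactly as the function above does
lines = [b["Text"] for p in pages for b in p["Blocks"] if b["BlockType"] == "LINE"]
```

The loop terminates only when a response carries no `NextToken`, so every page is visited exactly once.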
--------------------------------------------------------------------------------
/lambda_dir/generate_pillar_question_response/generate_pillar_question_response.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 | import re
8 |
9 | from boto3.dynamodb.conditions import Key
10 | from boto3.dynamodb.conditions import Attr
11 |
12 | from botocore.client import Config
13 | from botocore.exceptions import ClientError
14 |
15 | s3 = boto3.resource('s3')
16 | s3client = boto3.client('s3')
17 |
18 | dynamodb = boto3.resource('dynamodb')
19 | wa_client = boto3.client('wellarchitected')
20 |
21 | BEDROCK_SLEEP_DURATION = int(os.environ['BEDROCK_SLEEP_DURATION'])
22 | BEDROCK_MAX_TRIES = int(os.environ['BEDROCK_MAX_TRIES'])
23 |
24 | logger = logging.getLogger()
25 | logger.setLevel(logging.INFO)
26 |
27 | def lambda_handler(event, context):
28 |
29 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
30 |
31 | logger.info(f"generate_pillar_question_response invoked at {entry_timestamp}")
32 |
33 | logger.info(json.dumps(event))
34 |
35 | logger.debug(f"BEDROCK_SLEEP_DURATION: {BEDROCK_SLEEP_DURATION}")
36 | logger.debug(f"BEDROCK_MAX_TRIES: {BEDROCK_MAX_TRIES}")
37 |
38 | data = event
39 |
40 | region = data['region']
41 | bedrock_config = Config(connect_timeout=120, region_name=region, read_timeout=120, retries={'max_attempts': 0})
42 | bedrock_client = boto3.client('bedrock-runtime',region_name=region)
43 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
44 |
45 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
46 | wafr_prompts_table = dynamodb.Table(data['wafr_prompts_table'])
47 |
48 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
49 | extract_output_bucket_name = data['extract_output_bucket']
50 |
51 | wafr_lens = data['wafr_accelerator_run_items']['selected_lens']
52 |
53 | pillars = data['wafr_accelerator_run_items'] ['selected_wafr_pillars']
54 | input_pillar = data['input_pillar']
55 | llm_model_id = data['llm_model_id']
56 | wafr_workload_id = data['wafr_accelerator_run_items'] ['wafr_workload_id']
57 | lens_alias = data['wafr_accelerator_run_items'] ['lens_alias']
58 | guardrail_id = data['guardrail_id']
59 |
60 | return_response = {}
61 |
62 | logger.debug (f"generate_pillar_question_response checkpoint 0")
63 | logger.info (f"input_pillar: {input_pillar}")
64 | logger.info (f"wafr_lens: {wafr_lens}")
65 |
66 | # Build the run key before the try block so the except handler can reference it
67 | wafr_accelerator_run_key = {
68 | 'analysis_id': data['wafr_accelerator_run_items']['analysis_id'],
69 | 'analysis_submitter': data['wafr_accelerator_run_items']['analysis_submitter']
70 | }
71 |
72 | try:
73 | extract_output_bucket = s3.Bucket(extract_output_bucket_name)
74 |
75 | streaming = False
76 |
77 | logger.debug (f"generate_pillar_question_response checkpoint 1")
78 |
79 | input_pillar_id = get_pillar_name_to_id_mappings()[input_pillar]
80 |
81 | logger.debug (f"generate_pillar_question_response checkpoint 2")
81 |
82 | pillar_responses = get_existing_pillar_responses(wafr_accelerator_runs_table, data['wafr_accelerator_run_items']['analysis_id'], data['wafr_accelerator_run_items']['analysis_submitter'])
83 |
84 | logger.debug (f"generate_pillar_question_response checkpoint 3")
85 |
86 | pillar_review_output = ""
87 |
88 | logger.debug (f"generate_pillar_question_response checkpoint 4")
89 |
90 | logger.info (input_pillar)
91 |
92 | file_counter = 0
93 |
94 | # read file content
95 | # invoke bedrock
96 | # append response
97 | # update pillar
98 |
99 | pillar_name_alias_mappings = get_pillar_name_alias_mappings ()
100 |
101 | question_mappings = get_question_id_mappings (data['wafr_prompts_table'], wafr_lens, input_pillar)
102 |
103 | for pillar_question_object in data[input_pillar]:
104 |
105 | filename = pillar_question_object["pillar_review_prompt_filename"]
106 | logger.info (f"generate_pillar_question_response checkpoint 5.{file_counter}")
107 | logger.info (f"Input Prompt filename: " + filename)
108 |
109 | current_prompt_object = s3client.get_object(
110 | Bucket=extract_output_bucket_name,
111 | Key=filename,
112 | )
113 |
114 | current_prompt = current_prompt_object['Body'].read()
115 |
116 | logger.info (f"current_prompt: {current_prompt}")
117 |
118 | logger.debug ("filename.rstrip('.'): " + filename.rstrip('.'))
119 | logger.debug ("filename[:filename.rfind('.')]: " + filename[:filename.rfind('.')])
120 | pillar_review_prompt_output_filename = filename[:filename.rfind('.')] + "-output.txt"
121 | logger.info ("Output prompt output filename: " + pillar_review_prompt_output_filename)
122 |
123 | logger.info (f"generate_pillar_question_response checkpoint 6.{file_counter}")
124 |
125 | pillar_specfic_question_id = pillar_question_object["pillar_specfic_question_id"]
126 | pillar_specfic_prompt_question = pillar_question_object["pillar_specfic_prompt_question"]
127 |
128 | pillar_question_review_output = invoke_bedrock(streaming, current_prompt, pillar_review_prompt_output_filename, extract_output_bucket, bedrock_client, llm_model_id, guardrail_id)
129 |
130 | logger.debug (f"pillar_question_review_output: {pillar_question_review_output}")
131 |
132 | # Comment the next line if you would like to retain the prompts files
133 | s3client.delete_object(Bucket=extract_output_bucket_name, Key=filename)
134 |
135 | pillar_question_review_output = sanitise_string(pillar_question_review_output)
136 | logger.debug (f"sanitised_string: {pillar_question_review_output}")
137 |
138 | full_assessment, extracted_question, extracted_assessment, best_practices_followed, recommendations_and_examples, risk, citations = extract_assessment(pillar_question_review_output, question_mappings, pillar_question_object["pillar_specfic_prompt_question"])
139 | logger.debug (f"extracted_assessment: {full_assessment}")
140 |
141 | extracted_choices = extract_choices(pillar_question_review_output)
142 | logger.debug (f"extracted_choices: {extracted_choices}")
143 |
144 | update_wafr_question_response(wa_client, wafr_workload_id, lens_alias, pillar_specfic_question_id, extracted_choices, f"{extracted_assessment} {best_practices_followed} {recommendations_and_examples}")
145 |
146 | pillar_review_output = pillar_review_output + " \n" + full_assessment
147 |
148 | logger.debug (f"generate_pillar_question_response checkpoint 7.{file_counter}")
149 |
150 | file_counter = file_counter + 1
151 |
152 | logger.debug (f"generate_pillar_question_response checkpoint 8")
153 |
154 | # Now write the completed pillar response in DynamoDB
155 | pillar_response = {
156 | 'pillar_name': input_pillar,
157 | 'pillar_id': input_pillar_id,
158 | 'llm_response': pillar_review_output
159 | }
160 |
161 | # Add the dictionary object to the list
162 | pillar_responses.append(pillar_response)
163 |
164 | # Update the item
165 | response = wafr_accelerator_runs_table.update_item(
166 | Key=wafr_accelerator_run_key,
167 | UpdateExpression="SET pillars = :val",
168 | ExpressionAttributeValues={':val': pillar_responses},
169 | ReturnValues='UPDATED_NEW'
170 | )
171 |
172 | logger.info (f"dynamodb status update response: {response}" )
173 | logger.info (f"generate_pillar_question_response checkpoint 10")
174 |
175 | except Exception as error:
176 | handle_error(wafr_accelerator_runs_table, wafr_accelerator_run_key, error)
177 | raise Exception (f'Exception caught in generate_pillar_question_response: {error}')
178 | finally:
179 | logger.info (f"generate_pillar_question_response inside finally")
180 |
181 | logger.debug (f"generate_pillar_question_response checkpoint 11")
182 |
183 | return_response = data
184 |
185 | logger.info(f"return_response: " + json.dumps(return_response))
186 |
187 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
188 | logger.info(f"Exiting generate_pillar_question_response at {exit_timestamp}")
189 |
190 | # Return a success response
191 | return {
192 | 'statusCode': 200,
193 | 'body': return_response
194 | }
195 |
196 | def get_pillar_name_to_id_mappings():
197 | mappings = {}
198 |
199 | mappings["Operational Excellence"] = "1"
200 | mappings["Security"] = "2"
201 | mappings["Reliability"] = "3"
202 | mappings["Performance Efficiency"] = "4"
203 | mappings["Cost Optimization"] = "5"
204 | mappings["Sustainability"] = "6"
205 |
206 | return mappings
207 |
208 | def get_pillar_name_alias_mappings():
209 |
210 | mappings = {}
211 |
212 | mappings["Cost Optimization"] = "costOptimization"
213 | mappings["Operational Excellence"] = "operationalExcellence"
214 | mappings["Performance Efficiency"] = "performance"
215 | mappings["Reliability"] = "reliability"
216 | mappings["Security"] = "security"
217 | mappings["Sustainability"] = "sustainability"
218 |
219 | return mappings
220 |
221 | def sanitise_string(content):
222 | junk_list = ["```", "xml", "**wafr_answer_choices:**", "{", "}", "[", "]", "", "", "", "", "Recommendations:", "Assessment:"]
223 | for junk in junk_list:
224 | content = content.replace(junk, '')
225 | return content
226 |
227 | def sanitise_string_2(content):
228 | junk_list = ["", "", "**", "```", "", ""]
229 | for junk in junk_list:
230 | content = content.replace(junk, '')
231 | return content
232 |
233 | # Function to recursively print XML nodes
234 | def extract_assessment(content, question_mappings, question):
235 |
236 | tag_content = ""
237 | full_assessment = question = assessment = best_practices_followed = recommendations_and_examples = risk = citations = ""
238 | try:
239 | xml_start = content.find('<question>')
240 | if xml_start != -1:
241 | xml_content = content[(xml_start + len('<question>')):]
242 | xml_end = xml_content.find('</question>')
243 | if xml_end != -1:
244 | tag_content = xml_content[:xml_end].strip()
245 | print (f"question: {tag_content}")
246 | question = f"**Question: {question_mappings[sanitise_string_2(tag_content)]} - {tag_content}** \n"
247 | else:
248 | print ("End tag for question not found")
249 | question = f"**Question: {question_mappings[sanitise_string_2(question)]} - {question}** \n"
250 | else:
251 | question = f"**Question: {question_mappings[sanitise_string_2(question)]} - {question}** \n"
252 |
253 | assessment = f"**Assessment:** {extract_tag_data(content, 'assessment')} \n \n"
254 | best_practices_followed = f"**Best Practices Followed:** {extract_tag_data(content, 'best_practices_followed')} \n \n"
255 | recommendations_and_examples = f"**Recommendations:** {extract_tag_data(content, 'recommendations_and_examples')} \n \n"
256 | citations = f"**Citations:** {extract_tag_data(content, 'citations')} \n \n"
257 |
258 | full_assessment = question + assessment + best_practices_followed + recommendations_and_examples + risk + citations
259 |
260 | except Exception as error:
261 | logger.error(f"Exception caught in extract_assessment: {error}")
265 |
266 | return full_assessment, question, assessment, best_practices_followed, recommendations_and_examples, risk, citations
267 |
268 | def extract_tag_data(content, tag):
269 | tag_content = ""
270 | xml_start = content.find(f'<{tag}>')
271 | if xml_start != -1:
272 | xml_content = content[(xml_start+len(f'<{tag}>')):]
273 | xml_end = xml_content.find(f'</{tag}>')
274 | if xml_end != -1:
275 | tag_content = sanitise_string_2(xml_content[:xml_end].strip())
276 | print (f"{tag}: {tag_content}")
277 | else:
278 | print (f"End tag for {tag} not found")
279 | return tag_content
280 |
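The tag-scanning helper above can be exercised standalone. This simplified copy drops the `sanitise_string_2` pass; the sample response text is fabricated for illustration.

```python
def extract_tag_data(content, tag):
    """Simplified copy of the helper above: return the text between <tag> and </tag>."""
    start = content.find(f'<{tag}>')
    if start == -1:
        return ""
    inner = content[start + len(f'<{tag}>'):]
    end = inner.find(f'</{tag}>')
    return inner[:end].strip() if end != -1 else ""

# Fabricated model response containing two tagged sections
sample = "<assessment>Workload uses IaC.</assessment><risk>Medium</risk>"
assessment = extract_tag_data(sample, 'assessment')
```

A missing tag simply yields an empty string, which is why callers above can concatenate the results without extra guards.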
281 | def extract_choices(content):
282 |
283 | selectedChoices = []
284 |
285 | xml_end = -1
286 |
287 | try:
288 | xml_start = content.find('<wafr_answer_choices>')
289 | if xml_start != -1:
290 | xml_content = content[xml_start:]
291 | xml_end = xml_content.find('</wafr_answer_choices>')
292 | wafr_answer_choices = xml_content[:(xml_end + len('</wafr_answer_choices>'))].strip()
293 | logger.info(f"wafr_answer_choices: {wafr_answer_choices}")
294 | logger.info("wafr_answer_choices block located in the response")
295 |
296 | if ((xml_start != -1) and (xml_end != -1)):
297 | # Use a regular expression to find all occurrences of <id>...</id>
298 | id_pattern = re.compile(r'<id>(.*?)</id>', re.DOTALL)
299 | # Find all matches
300 | ids = id_pattern.findall(wafr_answer_choices)
301 | # Loop through the matches
302 | for index, id_value in enumerate(ids, 1):
303 | print(f"ID {index}: {id_value.strip()}")
304 | selectedChoices += [id_value.strip()]
305 |
306 | except Exception as error:
307 | logger.error(f"Exception caught in extract_choices: {error}")
311 |
312 | return selectedChoices
313 |
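A minimal sketch of the choice-ID extraction that `extract_choices` performs. The response snippet and the `<id>` tag layout are assumptions based on the variable names and logging in the function, not a captured model output.

```python
import re

# Fabricated answer-choices block in the shape the function expects
sample = (
    "<wafr_answer_choices>"
    "<choice><id>sec_securely_operate_multi_accounts</id></choice>"
    "<choice><id>sec_securely_operate_aws_account</id></choice>"
    "</wafr_answer_choices>"
)

# Non-greedy match pulls each id individually; DOTALL tolerates newlines inside tags
id_pattern = re.compile(r'<id>(.*?)</id>', re.DOTALL)
choice_ids = [m.strip() for m in id_pattern.findall(sample)]
```

These stripped IDs are what `update_wafr_question_response` passes to the Well-Architected `update_answer` call as `SelectedChoices`.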
314 | def get_question_id_mappings(wafr_prompts_table_name, wafr_lens, input_pillar):
315 | questions = {}
316 |
317 | wafr_prompts_table = dynamodb.Table(wafr_prompts_table_name)
318 | response = wafr_prompts_table.query(
319 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt',
320 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens) & Key('wafr_pillar').eq(input_pillar),
321 | ScanIndexForward=True # Set to False to sort in descending order
322 | )
323 | logger.debug (f"response wafr_pillar_id: " + str(response['Items'][0]['wafr_pillar_id']))
324 | logger.debug (f"response wafr_pillar_prompt: " + response['Items'][0]['wafr_pillar_prompt'])
325 | pillar_specific_prompt_question = response['Items'][0]['wafr_pillar_prompt']
326 |
327 | line_counter = 0
328 | # Before running this, ensure the wafr prompt row contains only questions with no preamble text; otherwise the split below fails.
329 | for line in response['Items'][0]['wafr_pillar_prompt'].splitlines():
330 | line_counter = line_counter + 1
331 | if(line_counter > 2):
332 | question_id, question_text = line.strip().split(': ', 1)
333 | questions[question_text] = question_id
334 | return questions
335 |
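The parsing done by `get_question_id_mappings` can be seen in isolation. The sample prompt below is fabricated, assuming (as the comment above warns) that the first two lines are preamble and every later line is `question_id: question text`.

```python
# Fabricated wafr_pillar_prompt content: two preamble lines, then id: text rows
prompt = """Assess the following questions.
Answer each one:
sec_1: How do you securely operate your workload?
sec_2: How do you manage identities for people and machines?"""

questions = {}
for counter, line in enumerate(prompt.splitlines(), 1):
    if counter > 2:  # skip the two preamble lines
        question_id, question_text = line.strip().split(': ', 1)
        questions[question_text] = question_id
```

Keying by question text lets `extract_assessment` look up the ID from the question string the model echoes back.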
336 | def handle_error(table, key, error):
337 | # Handle errors and update DynamoDB status
338 | table.update_item(
339 | Key=key,
340 | UpdateExpression="SET review_status = :val",
341 | ExpressionAttributeValues={':val': "Errored"},
342 | ReturnValues='UPDATED_NEW'
343 | )
344 | logger.error(f"Exception caught in generate_pillar_question_response: {error}")
345 |
346 |
347 | def update_wafr_question_response(wa_client, wafr_workload_id, lens_alias, pillar_specfic_question_id, choices, assessment):
348 |
349 | try:
350 | selectedChoices = choices
357 |
358 | logger.debug(f"update_wafr_question_response: 1")
359 | logger.info(f"wafr_workload_id: {wafr_workload_id}, lens_alias: {lens_alias}, pillar_specfic_question_id: {pillar_specfic_question_id}")
360 |
361 | try:
362 | response = wa_client.update_answer(
363 | WorkloadId=wafr_workload_id,
364 | LensAlias=lens_alias,
365 | QuestionId=pillar_specfic_question_id,
366 | SelectedChoices=selectedChoices,
367 | Notes=assessment[:2084],
368 | IsApplicable=True
369 | )
370 | logger.info(f"With Choices- response: {response}")
371 | logger.debug(f"update_wafr_question_response: 2")
372 | except Exception as error:
373 | logger.info("Updated answer with choices failed, now attempting update without the choices!")
374 | logger.info("With Choices- Error received is:")
375 | logger.info(error)
376 | selectedChoices = []
377 | response = wa_client.update_answer(
378 | WorkloadId=wafr_workload_id,
379 | LensAlias=lens_alias,
380 | QuestionId=pillar_specfic_question_id,
381 | SelectedChoices=selectedChoices,
382 | Notes=assessment[:2084],
383 | IsApplicable=True
384 | )
385 | logger.info(f"Without Choices- response: {response}")
386 | logger.debug(f"update_wafr_question_response: 3")
387 |
388 | logger.info (json.dumps(response))
389 |
390 | except Exception as error:
391 | logger.info("Exception caught by external try in update_wafr_question_response!")
392 | logger.info("Error received is:")
393 | logger.info(error)
394 | finally:
395 | logger.info (f"update_wafr_question_response Inside finally")
396 |
397 | def invoke_bedrock(streaming, claude_prompt_body, pillar_review_outputFilename, bucket, bedrock_client, llm_model_id, guardrail_id):
398 |
399 | pillar_review_output = ""
400 | retries = 0
401 | max_retries = BEDROCK_MAX_TRIES
403 | while retries < max_retries:
404 | try:
405 |
406 | if(streaming):
407 | if(guardrail_id == "Not Selected"):
408 | streaming_response = bedrock_client.invoke_model_with_response_stream(
409 | modelId=llm_model_id,
410 | body=claude_prompt_body
411 | )
412 | else: # Use guardrails
413 | streaming_response = bedrock_client.invoke_model_with_response_stream(
414 | modelId=llm_model_id,
415 | body=claude_prompt_body,
416 | guardrailIdentifier= guardrail_id,
417 | guardrailVersion="DRAFT"
418 | )
419 | logger.info (f"Used Guardrail Id: {guardrail_id}")
420 |
421 | logger.info (f"invoke_bedrock checkpoint 1.{retries}")
422 | stream = streaming_response.get("body")
423 |
424 | logger.info (f"invoke_bedrock checkpoint 2.{retries}")
425 |
426 | for chunk in parse_stream(stream):
427 | pillar_review_output += chunk
428 |
429 | # Uncomment next line if you would like to see response files for each question too.
430 | # bucket.put_object(Key=pillar_review_outputFilename, Body=bytes(pillar_review_output, encoding='utf-8'))
431 |
432 | return pillar_review_output
433 |
434 | else:
435 | if(guardrail_id == "Not Selected"):
436 | non_streaming_response = bedrock_client.invoke_model(
437 | modelId=llm_model_id,
438 | body=claude_prompt_body
439 | )
440 | else: # Use guardrails
441 | non_streaming_response = bedrock_client.invoke_model(
442 | modelId=llm_model_id,
443 | body=claude_prompt_body,
444 | guardrailIdentifier=guardrail_id,
445 | guardrailVersion="DRAFT"
446 | )
447 | logger.debug (f"Used Guardrail Id: {guardrail_id}")
448 | response_json = json.loads(non_streaming_response["body"].read().decode("utf-8"))
449 |
450 | logger.debug (response_json)
451 |
452 | logger.info (f"invoke_bedrock checkpoint 1.{retries}")
453 |
454 | # Extract and logger.info the response text.
455 | pillar_review_output = response_json["content"][0]["text"]
456 |
457 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
458 |
459 | # Uncomment next line if you would like to see response files for each question too.
460 | #bucket.put_object(Key=pillar_review_outputFilename, Body=pillar_review_output)
461 |
462 | return pillar_review_output
463 |
464 | except Exception as e:
465 | retries += 1
466 | logger.info(f"Sleeping as attempt {retries} failed with exception: {e}")
467 | time.sleep(BEDROCK_SLEEP_DURATION) # Add a delay before the next retry
468 |
469 | logger.info(f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
470 | raise Exception (f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
471 |
472 | def parse_stream(stream):
473 | for event in stream:
474 | chunk = event.get('chunk')
475 | if chunk:
476 | message = json.loads(chunk.get("bytes").decode())
477 | if message['type'] == "content_block_delta":
478 | yield message['delta']['text'] or ""
479 | elif message['type'] == "message_stop":
480 | return "\n"
481 |
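The generator above accumulates `content_block_delta` events from a Bedrock response stream. The sketch below feeds a simplified copy fabricated events in that shape; the event payloads are invented, not recorded Bedrock output.

```python
import json

def parse_stream(stream):
    """Simplified copy of the generator above: yield text deltas, stop on message_stop."""
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            message = json.loads(chunk.get("bytes").decode())
            if message['type'] == "content_block_delta":
                yield message['delta']['text'] or ""
            elif message['type'] == "message_stop":
                return

# Fabricated events mimicking invoke_model_with_response_stream chunks
events = [
    {'chunk': {'bytes': json.dumps({'type': 'content_block_delta', 'delta': {'text': 'Hello '}}).encode()}},
    {'chunk': {'bytes': json.dumps({'type': 'content_block_delta', 'delta': {'text': 'world'}}).encode()}},
    {'chunk': {'bytes': json.dumps({'type': 'message_stop'}).encode()}},
]
output = "".join(parse_stream(events))
```

Joining the yielded deltas reconstructs the full response text, which is how `invoke_bedrock` builds `pillar_review_output` in its streaming branch.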
482 |
483 | def get_existing_pillar_responses(wafr_accelerator_runs_table, analysis_id, analysis_submitter):
484 |
485 | pillar_responses = []
486 | logger.info (f"analysis_id : {analysis_id}")
487 | logger.info (f"analysis_submitter: " + analysis_submitter)
488 |
489 | response = wafr_accelerator_runs_table.query(
490 | ProjectionExpression ='pillars',
491 | KeyConditionExpression=Key('analysis_id').eq(analysis_id) & Key('analysis_submitter').eq(analysis_submitter),
492 | ConsistentRead=True,
493 | ScanIndexForward=True
494 | )
495 |
496 | logger.info (f"Existing pillar responses stored in the wafr_accelerator_runs table: {response}")
497 |
498 | items = response['Items']
499 |
500 | logger.debug (f"items assigned: {items}" )
501 |
502 | logger.info (f"Items length: {len(response['Items'])}" )
503 |
504 | try:
505 | if len(response['Items']) > 0:
506 | for item in items:
507 | pillars = item['pillars']
508 | logger.info (pillars)
509 | for pillar in pillars:
510 | logger.info (pillar)
511 | pillar_response = {
512 | 'pillar_name': pillar['pillar_name'],
513 | 'pillar_id': str(pillar['pillar_id']),
514 | 'llm_response': pillar['llm_response']
515 | }
516 | logger.info (f"pillar_response {pillar_response}")
517 | # Add the dictionary object to the list
518 | pillar_responses.append(pillar_response)
519 | else:
520 | logger.info("List is empty")
521 |
522 | except Exception as error:
523 | logger.info("Exception caught in get_existing_pillar_responses; the 'pillars' attribute is likely empty.")
524 | logger.info(f"Error received is: {error}")
525 |
526 | return pillar_responses
--------------------------------------------------------------------------------
/lambda_dir/generate_prompts_for_all_the_selected_pillars/generate_prompts_for_all_the_selected_pillars.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | s3client = boto3.client('s3')
16 | dynamodb = boto3.resource('dynamodb')
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | WAFR_REFERENCE_DOCS_BUCKET = os.environ['WAFR_REFERENCE_DOCS_BUCKET']
22 |
23 | def lambda_handler(event, context):
24 |
25 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
26 |
27 | logger.info(f"generate_prompts_for_all_the_selected_pillars invoked at {entry_timestamp}")
28 |
29 | logger.info(json.dumps(event))
30 |
31 | data = event
32 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
33 | wafr_prompts_table = dynamodb.Table(data['wafr_prompts_table'])
34 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
35 |
36 | try:
37 |
38 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
39 | extract_output_bucket = data['extract_output_bucket']
40 | region = data['region']
41 | wafr_lens = data['wafr_accelerator_run_items']['selected_lens']
42 | knowledge_base_id = data ['knowledge_base_id']
43 | pillars = data['wafr_accelerator_run_items'] ['selected_wafr_pillars']
44 | wafr_workload_id = data['wafr_accelerator_run_items'] ['wafr_workload_id']
45 | lens_alias = data['wafr_accelerator_run_items'] ['lens_alias']
46 |
47 | waclient = boto3.client('wellarchitected', region_name=region)
48 |
49 | bedrock_config = Config(connect_timeout=120, region_name=region, read_timeout=120, retries={'max_attempts': 0})
50 | bedrock_client = boto3.client('bedrock-runtime',region_name=region)
51 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
52 |
53 | return_response = {}
54 |
55 | prompt_file_locations = []
56 | all_pillar_prompts = []
57 |
58 | pillar_name_alias_mappings = get_pillar_name_alias_mappings ()
59 | logger.info(pillar_name_alias_mappings)
60 |
61 | pillars_dictionary = get_pillars_dictionary (waclient, wafr_workload_id, lens_alias)
62 |
63 | extracted_document_text = read_s3_file (data['extract_output_bucket'], data['extract_text_file_name'])
64 |
65 | pillar_counter = 0
66 |
67 | #Get all the pillar prompts in a loop
68 | for item in pillars:
69 |
70 | prompt_file_locations = []
71 | logger.info (f"selected_pillar: {item}")
72 | response = ""
73 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
74 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
75 |
76 | questions = {}
77 |
78 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 1")
79 |
80 | # Loop through key-value pairs
81 | question_array_counter = 0
82 | current_wafr_pillar = item
83 |
84 | logger.info (f"generate_prompts_for_all_the_selected_pillars checkpoint 1.{pillar_counter}")
85 |
86 | questions = pillars_dictionary[current_wafr_pillar]["wafr_q"]
87 |
88 | logger.debug (json.dumps(questions))
89 |
90 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 2.{pillar_counter}")
91 |
92 | for question in questions:
93 |
94 | logger.info (json.dumps(question))
95 |
96 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 3.{pillar_counter}.{question_array_counter}")
97 |
98 | pillar_specfic_question_id = question["id"]
99 | pillar_specfic_prompt_question = question["text"]
100 | pillar_specfic_wafr_answer_choices = question["wafr_answer_choices"]
101 |
102 | logger.info (f"pillar_specfic_question_id: {pillar_specfic_question_id}")
103 | logger.info (f"pillar_specfic_prompt_question: {pillar_specfic_prompt_question}")
104 | logger.info (f"pillar_specfic_wafr_answer_choices: {json.dumps(pillar_specfic_wafr_answer_choices)}")
105 |
106 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 4.{pillar_counter}.{question_array_counter}")
107 | claude_prompt_body = bedrock_prompt(wafr_lens, current_wafr_pillar, pillar_specfic_question_id, pillar_specfic_wafr_answer_choices, pillar_specfic_prompt_question, knowledge_base_id, bedrock_agent_client, extracted_document_text, WAFR_REFERENCE_DOCS_BUCKET)
108 |
109 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 5.{pillar_counter}.{question_array_counter}")
110 |
111 | # Write the textract output to a txt file
112 | output_bucket = s3.Bucket(extract_output_bucket)
113 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
114 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
115 | pillar_review_prompt_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + pillar_name_alias_mappings[item] + "-" + pillar_specfic_question_id + "-prompt.txt"
116 | logger.info (f"Output prompt file name: {pillar_review_prompt_filename}")
117 |
118 | # Upload the file to S3
119 | output_bucket.put_object(Key=pillar_review_prompt_filename, Body=claude_prompt_body)
120 |
121 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 6.{pillar_counter}.{question_array_counter}")
122 | question_metadata = {}
123 |
124 | question_metadata["pillar_review_prompt_filename"] = pillar_review_prompt_filename
125 | question_metadata["pillar_specfic_question_id"] = pillar_specfic_question_id
126 | question_metadata["pillar_specfic_prompt_question"] = pillar_specfic_prompt_question
127 | question_metadata["pillar_specfic_wafr_answer_choices"] = pillar_specfic_wafr_answer_choices
128 |
129 | logger.info (f"generate_prompts_for_all_the_selected_pillars checkpoint 7.{pillar_counter}.{question_array_counter}")
130 | prompt_file_locations.append(question_metadata)
131 |
132 | question_array_counter = question_array_counter + 1
133 |
134 | pillar_prompts = {}
135 |
136 | pillar_prompts['wafr_accelerator_run_items'] = data ['wafr_accelerator_run_items']
137 | pillar_prompts['wafr_accelerator_run_key'] = data['wafr_accelerator_run_key']
138 | pillar_prompts['extract_output_bucket'] = data['extract_output_bucket']
139 | pillar_prompts['wafr_accelerator_runs_table'] = data ['wafr_accelerator_runs_table']
140 | pillar_prompts['wafr_prompts_table'] = data ['wafr_prompts_table']
141 | pillar_prompts['llm_model_id'] = data ['llm_model_id']
142 | pillar_prompts['region'] = data['region']
143 | pillar_prompts['input_pillar'] = item
144 | pillar_prompts['guardrail_id'] = data ['guardrail_id']
145 |
146 | return_response['wafr_accelerator_run_items'] = data ['wafr_accelerator_run_items']
147 |
148 | pillar_prompts[item] = prompt_file_locations
149 |
150 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 9.{pillar_counter}")
151 |
152 | all_pillar_prompts.append(pillar_prompts)
153 |
154 | pillar_counter = pillar_counter + 1
155 |
156 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 10")
157 |
158 | except Exception as error:
159 | all_pillar_prompts = []
160 | wafr_accelerator_runs_table.update_item(
161 | Key=wafr_accelerator_run_key,
162 | UpdateExpression="SET review_status = :val",
163 | ExpressionAttributeValues={':val': "Errored"},
164 | ReturnValues='UPDATED_NEW'
165 | )
166 | logger.error(f"Exception caught in generate_prompts_for_all_the_selected_pillars: {error}")
167 | raise Exception (f'Exception caught in generate_prompts_for_all_the_selected_pillars: {error}')
168 |
169 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 11")
170 |
171 | return_response = data
174 | return_response['all_pillar_prompts'] = all_pillar_prompts
175 |
176 | logger.info(f"return_response: {return_response}")
177 |
178 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
179 | logger.info(f"Exiting generate_prompts_for_all_the_selected_pillars at {exit_timestamp}")
180 |
181 | # Return a success response
182 | return {
183 | 'statusCode': 200,
184 | 'body': return_response
185 | }
186 |
187 | def get_pillar_name_alias_mappings():
188 |
189 | mappings = {}
190 |
191 | mappings["Cost Optimization"] = "costOptimization"
192 | mappings["Operational Excellence"] = "operationalExcellence"
193 | mappings["Performance Efficiency"] = "performance"
194 | mappings["Reliability"] = "reliability"
195 | mappings["Security"] = "security"
196 | mappings["Sustainability"] = "sustainability"
197 |
198 | return mappings
199 |
200 | def get_pillars_dictionary(waclient, wafr_workload_id, lens_alias):
201 |
202 | pillars_dictionary = {}
203 |
204 | lens_review = get_lens_review (waclient, wafr_workload_id, lens_alias)
205 |
206 | for item in lens_review['data']:
207 | pillars_dictionary[item["wafr_pillar"]] = item
208 |
209 | return pillars_dictionary
210 |
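`get_pillars_dictionary` simply keys each pillar entry from the lens review by its pillar name. A sketch with fabricated lens-review data (the values are illustrative, not real API output):

```python
# Minimal stand-in for what get_lens_review returns
lens_review = {
    "data": [
        {"wafr_pillar": "Security", "wafr_pillar_id": "security", "wafr_q": []},
        {"wafr_pillar": "Reliability", "wafr_pillar_id": "reliability", "wafr_q": []},
    ]
}

# Same re-keying the function performs, written as a comprehension
pillars_dictionary = {item["wafr_pillar"]: item for item in lens_review["data"]}
```

Downstream code can then pull a pillar's question list directly, e.g. `pillars_dictionary["Security"]["wafr_q"]`, instead of scanning the list each time.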
211 | def get_lens_review(client, workload_id, lens_alias):
212 | try:
213 | response = client.get_lens_review(
214 | WorkloadId=workload_id,
215 | LensAlias=lens_alias
216 | )
217 |
218 | lens_review = response['LensReview']
219 | formatted_data = {
220 | "workload_id": workload_id,
221 | "data": []
222 | }
223 |
224 | for pillar in lens_review['PillarReviewSummaries']:
225 | pillar_data = {
226 | "wafr_lens": lens_alias,
227 | "wafr_pillar": pillar['PillarName'],
228 | "wafr_pillar_id": pillar['PillarId'],
229 | "wafr_q": []
230 | }
231 |
232 | # Manual pagination for list_answers
233 | next_token = None
234 |
235 | while True:
236 | if next_token:
237 | answers_response = client.list_answers(
238 | WorkloadId=workload_id,
239 | LensAlias=lens_alias,
240 | PillarId=pillar['PillarId'],
241 | MaxResults=50,
242 | NextToken=next_token
243 | )
244 | else:
245 | answers_response = client.list_answers(
246 | WorkloadId=workload_id,
247 | LensAlias=lens_alias,
248 | PillarId=pillar['PillarId'],
249 | MaxResults=50
250 | )
251 |
252 | for question in answers_response.get('AnswerSummaries', []):
253 | question_details = client.get_answer(
254 | WorkloadId=workload_id,
255 | LensAlias=lens_alias,
256 | QuestionId=question['QuestionId']
257 | )
258 | formatted_question = {
259 | "id": question['QuestionId'],
260 | "text": question['QuestionTitle'],
261 | "wafr_answer_choices": []
262 | }
263 | for choice in question_details['Answer'].get('Choices', []):
264 | formatted_question['wafr_answer_choices'].append({
265 | "id": choice['ChoiceId'],
266 | "text": choice['Title']
267 | })
268 |
269 | pillar_data['wafr_q'].append(formatted_question)
270 |
271 | next_token = answers_response.get('NextToken')
272 |
273 | if not next_token:
274 | break
275 |
276 | formatted_data['data'].append(pillar_data)
277 |
278 | # Add additional workload information
279 | formatted_data['workload_name'] = lens_review.get('WorkloadName')
280 | formatted_data['workload_id'] = workload_id
281 | formatted_data['lens_alias'] = lens_alias
282 | formatted_data['lens_name'] = lens_review.get('LensName')
283 | formatted_data['updated_at'] = str(lens_review.get('UpdatedAt'))
284 | return formatted_data
285 |
286 | except ClientError as e:
287 | logger.error(f"Error getting lens review: {e}")
288 | return None
289 |
290 | def get_lens_filter(kb_bucket, wafr_lens):
291 |
292 | # Map lens prefixes to their corresponding KB folder names, making it easier to add additional lenses
293 | lens_mapping = {
294 | "Financial Services Industry Lens": "financialservices",
295 | "Data Analytics Lens": "dataanalytics",
296 | "Generative AI Lens": "genai"
297 | }
298 |
299 | # Get lens name or default to "wellarchitected"
300 | lens = next(
301 | (value for prefix, value in lens_mapping.items()
302 | if wafr_lens.startswith(prefix)),
303 | "wellarchitected"
304 | )
305 |
306 | # If wellarchitected lens then also use the overview documentation
307 | if lens == "wellarchitected":
308 | lens_filter = {
309 | "orAll": [
310 | {
311 | "startsWith": {
312 | "key": "x-amz-bedrock-kb-source-uri",
313 | "value": f"s3://{kb_bucket}/{lens}"
314 | }
315 | },
316 | {
317 | "startsWith": {
318 | "key": "x-amz-bedrock-kb-source-uri",
319 | "value": f"s3://{kb_bucket}/overview"
320 | }
321 | }
322 | ]
323 | }
324 | else: # Just use the lens documentation
325 | lens_filter = {
326 | "startsWith": {
327 | "key": "x-amz-bedrock-kb-source-uri",
328 | "value": f"s3://{kb_bucket}/{lens}/"
329 | }
330 | }
331 |
332 | logger.info(f"get_lens_filter: {json.dumps(lens_filter)}")
333 | return lens_filter
334 |
335 | def bedrock_prompt(wafr_lens, pillar, pillar_specfic_question_id, pillar_specfic_wafr_answer_choices, pillar_specfic_prompt_question, kb_id, bedrock_agent_client, document_content=None, wafr_reference_bucket = None):
336 |
337 | lens_filter = get_lens_filter(wafr_reference_bucket, wafr_lens)
338 |
339 | response = retrieve(pillar_specfic_prompt_question, kb_id, bedrock_agent_client, lens_filter, pillar, wafr_lens)
340 |
341 | retrievalResults = response['retrievalResults']
342 | contexts = get_contexts(retrievalResults)
343 |
344 | system_prompt = f"""You are an AWS Cloud Solutions Architect who specializes in reviewing solution architecture documents against the AWS Well-Architected Framework, using a process called the Well-Architected Framework Review (WAFR).
345 | The WAFR process consists of evaluating the provided solution architecture document against the 6 pillars of the specified AWS Well-Architected Framework lens, namely:
346 | - Operational Excellence Pillar
347 | - Security Pillar
348 | - Reliability Pillar
349 | - Performance Efficiency Pillar
350 | - Cost Optimization Pillar
351 | - Sustainability Pillar
352 |
353 | A solution architecture document is provided below in the "uploaded_document" section that you will evaluate by answering the questions provided in the "pillar_questions" section in accordance with the WAFR pillar indicated by the "current_pillar" section and the specified WAFR lens indicated by the "current_lens" section. Follow the instructions listed under the "instructions" section below.
354 |
355 | <instructions>
356 | 1) For each question, be concise and limit responses to 350 words maximum. Responses should be specific to the specified lens (listed in the "current_lens" section) and pillar only (listed in the "current_pillar" section). Your response should have three parts: 'Assessment', 'Best Practices Followed', and 'Recommendations/Examples'. Begin with the question asked.
357 | 2) You are also provided with a Knowledge Base which has more information about the specific pillar from the Well-Architected Framework. The relevant parts from the Knowledge Base will be provided under the "kb" section.
358 | 3) For each question, start your response with the 'Assessment' section, in which you will give a short summary (three to four lines) of your answer.
359 | 4) For each question:
360 | a) Provide which Best Practices from the specified pillar have been followed, including the best practice IDs and titles from the respective pillar guidance. List them under the 'Best Practices Followed' section.
361 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
362 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
363 | b) Provide your recommendations on how the solution architecture should be updated to address the question's ask. If you have a relevant example, mention it clearly like so: "Example: ". List all of this under the 'Recommendations/Examples' section.
364 | 5) For each question, if the required information is missing or is inadequate to answer the question, then first state that the document doesn't provide any or enough information. Then, list the recommendations relevant to the question to address the gap in the solution architecture document under the 'Recommendations' section. In this case, the 'Best Practices Followed' section will simply state "Not enough information".
365 | 6) First list the question within <question> and </question> tags in the response.
366 | 7) Add citations for the best practices and recommendations by including the best practice ID and heading from the specified lens ("current_lens") and specified pillar ("current_pillar") under the <citations> section, strictly within <citations> and </citations> tags. And every citation within it should be separated by ',' and start on a new line. If there are no citations then return 'N/A' within <citations> and </citations>.
367 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
368 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
369 | 8) Do not make any assumptions or make up information. Your responses should only be based on the actual solution document provided in the "uploaded_document" section.
370 | 9) Based on the assessment, select the most appropriate choices applicable from the choices provided within the "pillar_questions" section. Do not make up ids and use only the ids specified in the provided choices.
371 | 10) Return the entire response strictly in well-formed XML format. There should not be any text outside the XML response. Use the following XML structure, and ensure that the XML tags are in the same order:
372 | <response>
373 | <question>This is the input question</question>
374 | <assessment>This is assessment</assessment>
375 | <best_practices>Best practices followed with citation from Well-Architected best practices for the pillar</best_practices>
376 | <recommendations>Recommendations with examples</recommendations>
377 | <citations>citations</citations>
378 | <selected_choices>
379 | <selected_choice_id>
380 | sec_securely_operate_multi_accounts
381 | </selected_choice_id>
382 | <selected_choice_id>
383 | sec_securely_operate_aws_account
384 | </selected_choice_id>
385 | <selected_choice_id>
386 | sec_securely_operate_control_objectives
387 | </selected_choice_id>
388 | <selected_choice_id>
389 | sec_securely_operate_updated_threats
390 | </selected_choice_id>
391 | </selected_choices>
392 | </response>
393 | </instructions>
394 | """
395 |
396 | # Append the solution architecture document to the system prompt
397 | if document_content:
398 | system_prompt += f"""
399 | <uploaded_document>
400 | {document_content}
401 | </uploaded_document>
402 | """
403 |
404 | prompt = f"""
405 | <current_lens>
406 | {wafr_lens}
407 | </current_lens>
408 |
409 | <current_pillar>
410 | {pillar}
411 | </current_pillar>
412 |
413 | <kb>
414 | {contexts}
415 | </kb>
416 |
417 | <pillar_questions>
418 | Please answer the following questions for the {pillar} pillar of the Well-Architected Framework Review (WAFR).
419 | Questions:
420 | {pillar_specfic_prompt_question}
421 |
422 |
423 | Choices:
424 | {pillar_specfic_wafr_answer_choices}
425 | </pillar_questions>
426 | """
427 | # Anthropic Messages API request body for Bedrock
428 | body = json.dumps({
429 | "anthropic_version": "bedrock-2023-05-31",
430 | "max_tokens": 4096,  # maximum output tokens; Bedrock rejects values above the model's output limit
431 | "system": system_prompt,
432 | "messages": [
433 | {
434 | "role": "user",
435 | "content": [{"type": "text", "text": prompt}],
436 | }
437 | ],
438 | })
439 | return body
440 |
441 | def retrieve(question, kbId, bedrock_agent_client, lens_filter, pillar, wafr_lens):
442 |
443 | kb_prompt = f"""For the given question from the {pillar} pillar of {wafr_lens}, provide:
444 | - Recommendations
445 | - Best practices
446 | - Examples
447 | - Risks
448 | Question: {question}"""
449 |
450 | logger.debug(f"question: {question}")
451 | logger.debug(f"kb_prompt: {kb_prompt}")
452 |
453 | return bedrock_agent_client.retrieve(
454 | retrievalQuery= {
455 | 'text': kb_prompt
456 | },
457 | knowledgeBaseId=kbId,
458 | retrievalConfiguration={
459 | 'vectorSearchConfiguration':{
460 | 'numberOfResults': 20,
461 | "filter": lens_filter
462 | }
463 | }
464 | )
465 |
466 | def get_contexts(retrievalResults):
467 | contexts = []
468 | for retrievedResult in retrievalResults:
469 | contexts.append(retrievedResult['content']['text'])
470 | return contexts
471 |
472 | def parse_stream(stream):
473 | for event in stream:
474 | chunk = event.get('chunk')
475 | if chunk:
476 | message = json.loads(chunk.get("bytes").decode())
477 | if message['type'] == "content_block_delta":
478 | yield message['delta']['text'] or ""
479 | elif message['type'] == "message_stop":
480 | return "\n"
481 |
482 | def read_s3_file(bucket, filename):
483 |
484 | document_text_object = s3client.get_object(
485 | Bucket=bucket,
486 | Key=filename,
487 | )
488 |
489 | logger.info(f"read_s3_file: {document_text_object}")
490 |
491 | document_text = document_text_object['Body'].read().decode('utf-8')  # decode bytes so the text can be embedded in prompt strings
492 |
493 | return document_text
494 |
--------------------------------------------------------------------------------
/lambda_dir/generate_solution_summary/generate_solution_summary.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import datetime
4 | import logging
5 |
6 | from botocore.client import Config
7 | from botocore.exceptions import ClientError
8 |
9 | dynamodb = boto3.resource('dynamodb')
10 | s3 = boto3.resource('s3')
11 | s3client = boto3.client('s3')
12 | wa_client = boto3.client('wellarchitected')
13 |
14 | logger = logging.getLogger()
15 | logger.setLevel(logging.INFO)
16 |
17 | def lambda_handler(event, context):
18 |
19 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
20 |
21 | logger.info(f"generate_solution_summary invoked at {entry_timestamp}")
22 |
23 | logger.info(json.dumps(event))
24 |
25 | # Extract data from the input event
26 | return_response = data = event
27 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
28 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
29 |
30 | # Set up Bedrock client
31 | REGION = data['region']
32 | LLM_MODEL_ID = data['llm_model_id']
33 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
34 | bedrock_client = boto3.client('bedrock-runtime', config=bedrock_config)
35 |
36 | try:
37 | extracted_document_text = read_s3_file(data['extract_output_bucket'], data['extract_text_file_name'])
38 |
39 | # Prepare prompts for solution summary and workload description
40 | prompt = f"The following document is a solution architecture document that you are reviewing as an AWS Cloud Solutions Architect. Please summarise the following solution in 250 words. Begin directly with the architecture summary, don't provide any other opening or closing statements.\n\n<document>\n{extracted_document_text}\n</document>\n"
41 |
42 | # Generate summaries using Bedrock model
43 | summary = invoke_bedrock_model(bedrock_client, LLM_MODEL_ID, prompt)
44 |
45 | logger.info(f"Solution Summary: {summary}")
46 |
47 | # Update DynamoDB item with the generated summary
48 | update_dynamodb_item(wafr_accelerator_runs_table, wafr_accelerator_run_key, summary)
49 |
50 | # Write summary to S3
51 | write_summary_to_s3(s3, data['extract_output_bucket'], data['wafr_accelerator_run_items']['document_s3_key'], summary)
52 |
53 | except Exception as error:
54 | wafr_accelerator_runs_table.update_item(
55 | Key=wafr_accelerator_run_key,
56 | UpdateExpression="SET review_status = :val",
57 | ExpressionAttributeValues={':val': "Errored"},
58 | ReturnValues='UPDATED_NEW'
59 | )
60 | logger.error(f"Exception caught in generate_solution_summary: {error}")
61 | raise Exception (f'Exception caught in generate_solution_summary: {error}')
62 |
63 | # Prepare and return the response
64 | logger.info(f"return_response: {json.dumps(return_response)}")
65 | logger.info(f"Exiting generate_solution_summary at {datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S-%f')}")
66 |
67 | return {'statusCode': 200, 'body': return_response}
68 |
69 | def read_s3_file(bucket, filename):
70 |
71 | document_text_object = s3client.get_object(
72 | Bucket=bucket,
73 | Key=filename,
74 | )
75 |
76 | logger.info(document_text_object)
77 |
78 | document_text = document_text_object['Body'].read().decode('utf-8')  # decode bytes so the text can be embedded in prompt strings
79 |
80 | return document_text
81 |
82 | def invoke_bedrock_model(bedrock_client, model_id, prompt):
83 | # Invoke Bedrock model and return the generated text
84 | response = bedrock_client.invoke_model(
85 | modelId=model_id,
86 | contentType="application/json",
87 | accept="application/json",
88 | body=json.dumps({
89 | "anthropic_version": "bedrock-2023-05-31",
90 | "max_tokens": 4096,  # maximum output tokens; Bedrock rejects values above the model's output limit
91 | "messages": [
92 | {"role": "user", "content": [{"type": "text", "text": prompt}]}
93 | ]
94 | })
95 | )
96 | response_body = json.loads(response['body'].read())
97 | return response_body['content'][0]['text']
98 |
99 | def update_dynamodb_item(table, key, summary):
100 | # Update DynamoDB item with the generated summary
101 | table.update_item(
102 | Key=key,
103 | UpdateExpression="SET architecture_summary = :val",
104 | ExpressionAttributeValues={':val': summary},
105 | ReturnValues='UPDATED_NEW'
106 | )
107 |
108 | def write_summary_to_s3(s3, bucket_name, document_key, summary):
109 | output_bucket = s3.Bucket(bucket_name)
110 | output_filename = f"{document_key[:document_key.rfind('.')]}-solution-summary.txt"
111 | logger.info(f"write_summary_to_s3: {output_filename}")
112 | output_bucket.put_object(Key=output_filename, Body=summary)
113 |
--------------------------------------------------------------------------------
/lambda_dir/insert_wafr_prompts/insert_wafr_prompts.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import urllib.parse
6 | import logging
7 | from botocore.exceptions import ClientError
8 |
9 | s3Client = boto3.client('s3')
10 | s3Resource = boto3.resource('s3')
11 |
12 | TABLE_NAME = os.environ['DD_TABLE_NAME']
13 | REGION_NAME = os.environ['REGION_NAME']
14 | dynamodb = boto3.resource('dynamodb', region_name=REGION_NAME)
15 | table = dynamodb.Table(TABLE_NAME)
16 |
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | def lambda_handler(event, context):
22 |
23 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
24 |
25 | status = 'Prompts inserted successfully!'
26 |
27 | logger.info(f"insert_wafr_prompts invoked at {entry_timestamp}")
28 |
29 | logger.info(json.dumps(event))
30 |
31 | bucket = event['Records'][0]['s3']['bucket']['name']
32 | key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
33 |
34 | message = "WAFR prompts json is stored here - " + bucket + "/" + key
35 | logger.info(message)
36 | logger.info("insert_wafr_prompts checkpoint 1")
37 |
38 | purge_existing_data(table)
39 |
40 | try:
41 | s3_input_string = "s3://" + bucket + "/" + key
42 | logger.info("s3_input_string is : " + s3_input_string)
43 |
44 | response = s3Client.get_object(Bucket=bucket, Key=key)
45 |
46 | # Read the content of the file
47 | content = response['Body'].read().decode('utf-8')
48 |
49 | prompts_json = json.loads(content)
50 |
51 | logger.info (f"insert_wafr_prompts checkpoint 2")
52 |
53 | logger.info(json.dumps(prompts_json))
54 |
55 | # Iterate over the array of items
56 | for item_data in prompts_json['data']:
57 | # Prepare the item to be inserted
58 | item = {
59 | 'wafr_lens': item_data['wafr_lens'],
60 | 'wafr_lens_alias': item_data['wafr_lens_alias'],
61 | 'wafr_pillar': item_data['wafr_pillar'],
62 | 'wafr_pillar_id': item_data['wafr_pillar_id'],
63 | 'wafr_pillar_prompt': item_data['wafr_pillar_prompt']
64 | #'wafr_q': item_data['wafr_q']
65 | }
66 |
67 | # Insert the item into the DynamoDB table
68 | table.put_item(Item=item)
69 |
70 | logger.info (f"insert_wafr_prompts checkpoint 4")
71 |
72 | except Exception as error:
73 | logger.error(error)
74 | logger.error("Failed to read or insert WAFR prompts from s3://" + bucket + "/" + key)
75 | status = 'Failed to insert Prompts!'
76 |
77 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
78 |
79 | logger.info("Exiting insert_wafr_prompts at " + exit_timestamp)
80 |
81 | # Return a success response
82 | return {
83 | 'statusCode': 200,
84 | 'body': json.dumps(status)
85 | }
86 |
87 | def purge_existing_data(table):
88 | # table.scan() returns at most 1 MB per call, so page through all results before assuming the table is empty
89 | scan_kwargs = {'ProjectionExpression': 'wafr_lens, wafr_pillar'}
90 | with table.batch_writer() as batch:
91 | while True:
92 | existing_items = table.scan(**scan_kwargs)
93 | for item in existing_items['Items']:
94 | batch.delete_item(
95 | Key={
96 | 'wafr_lens': item['wafr_lens'],
97 | 'wafr_pillar': item['wafr_pillar']
98 | }
99 | )
100 | if 'LastEvaluatedKey' not in existing_items:
101 | break
102 | scan_kwargs['ExclusiveStartKey'] = existing_items['LastEvaluatedKey']
103 |
--------------------------------------------------------------------------------
/lambda_dir/prepare_wafr_review/prepare_wafr_review.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 | from botocore.client import Config
11 | from botocore.exceptions import ClientError
12 |
13 | s3 = boto3.resource('s3')
14 | dynamodb = boto3.resource('dynamodb')
15 | well_architected_client = boto3.client('wellarchitected')
16 |
17 | WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME = os.environ['WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME']
18 | UPLOAD_BUCKET_NAME = os.environ['UPLOAD_BUCKET_NAME']
19 | REGION = os.environ['REGION']
20 | WAFR_PROMPT_DD_TABLE_NAME = os.environ['WAFR_PROMPT_DD_TABLE_NAME']
21 | KNOWLEDGE_BASE_ID=os.environ['KNOWLEDGE_BASE_ID']
22 | LLM_MODEL_ID=os.environ['LLM_MODEL_ID']
23 | BEDROCK_SLEEP_DURATION = os.environ['BEDROCK_SLEEP_DURATION']
24 | BEDROCK_MAX_TRIES = os.environ['BEDROCK_MAX_TRIES']
25 | GUARDRAIL_ID = os.environ['GUARDRAIL_ID']
26 |
27 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
28 | bedrock_client = boto3.client('bedrock-runtime',region_name=REGION)
29 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
30 |
31 | logger = logging.getLogger()
32 | logger.setLevel(logging.INFO)
33 |
34 | def get_lens_alias (wafr_lens):
35 | wafr_prompts_table = dynamodb.Table(WAFR_PROMPT_DD_TABLE_NAME)
36 |
37 | response = wafr_prompts_table.query(
38 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt, wafr_lens_alias',
39 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens),
40 | ScanIndexForward=True
41 | )
42 |
43 | logger.info("wafr_lens_alias: " + response['Items'][0]['wafr_lens_alias'])
44 |
45 | return response['Items'][0]['wafr_lens_alias']
46 |
47 | def lambda_handler(event, context):
48 |
49 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
50 |
51 | logger.info('prepare_wafr_review invoked at ' + entry_timestamp)
52 |
53 | logger.info(json.dumps(event))
54 |
55 | data = json.loads(event[0]['body'])
56 |
57 | try:
58 |
59 | logger.info("WAFR_PROMPT_DD_TABLE_NAME: " + WAFR_PROMPT_DD_TABLE_NAME)
60 | logger.info("WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: " + WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
61 | logger.info("REGION: " + REGION)
62 | logger.info("UPLOAD_BUCKET_NAME: " + UPLOAD_BUCKET_NAME)
63 | logger.info("LLM_MODEL_ID: " + LLM_MODEL_ID)
64 | logger.info("KNOWLEDGE_BASE_ID: " + KNOWLEDGE_BASE_ID)
65 | logger.info("BEDROCK_SLEEP_DURATION: " + BEDROCK_SLEEP_DURATION)
66 | logger.info("BEDROCK_MAX_TRIES: " + BEDROCK_MAX_TRIES)
67 |
68 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
69 |
70 | analysis_id = data['analysis_id']
71 | analysis_submitter = data['analysis_submitter']
72 | analysis_response = wafr_accelerator_runs_table.get_item(
73 | Key={
74 | 'analysis_id': analysis_id,
75 | 'analysis_submitter': analysis_submitter
76 | }
77 | )
78 | analysis = analysis_response.get("Item", {})
79 |
80 | logger.info(f'analysis: {json.dumps(analysis)}')
81 |
82 | name = data.get("analysis_name", "")
83 | wafr_lens = data['wafr_lens']
84 |
85 | workload_name = data['analysis_name']
86 | workload_desc = analysis['workload_desc']
87 | environment = analysis['environment']
88 | review_owner = analysis['review_owner']
89 | industry_type = analysis['industry_type']
90 | creation_date = analysis['creation_date']
91 |
92 | # Get the lens ARN from the friendly name
93 | lenses = analysis["lenses"]
94 |
95 | aws_regions = [REGION]
96 |
97 | logger.info('creation_date: ' + creation_date)
98 |
99 | wafr_workload_id = create_workload(well_architected_client, workload_name,
100 | workload_desc, environment, lenses, review_owner, industry_type, aws_regions)
101 |
102 | review_status = "In Progress"
103 | document_s3_key = data['document_s3_key']
104 |
105 | wafr_accelerator_run_key = {
106 | 'analysis_id': analysis_id,
107 | 'analysis_submitter': analysis_submitter
108 | }
109 |
110 | response = wafr_accelerator_runs_table.update_item(
111 | Key=wafr_accelerator_run_key,
112 | UpdateExpression="SET review_status = :val1, wafr_workload_id = :val2",
113 | ExpressionAttributeValues={
114 | ':val1': review_status,
115 | ':val2': wafr_workload_id
116 | },
117 | ReturnValues='UPDATED_NEW'
118 | )
119 | logger.info(f'wafr-accelerator-runs dynamodb table summary update response: {response}')
120 |
121 | pillars = data['selected_pillars']
122 | pillar_string = ",".join(pillars)
123 | logger.info("Final pillar_string: " + pillar_string)
124 |
125 | logger.debug("prepare_wafr_review checkpoint 1")
126 |
127 | # Prepare the item to be returned
128 | wafr_accelerator_run_items = {
129 | 'analysis_id': analysis_id,
130 | 'analysis_submitter': analysis_submitter,
131 | 'analysis_title': name,
132 | 'selected_lens': wafr_lens,
133 | 'creation_date': creation_date,
134 | 'review_status': review_status,
135 | 'selected_wafr_pillars': pillars,
136 | 'document_s3_key': document_s3_key,
137 | 'analysis_owner': name,
138 | 'wafr_workload_id': wafr_workload_id,
139 | 'workload_name': workload_name,
140 | 'workload_desc': workload_desc,
141 | 'environment': environment,
142 | 'review_owner': review_owner,
143 | 'industry_type': industry_type,
144 | 'lens_alias': lenses
145 | }
146 |
147 | logger.debug('prepare_wafr_review checkpoint 2')
148 |
149 | return_response = {}
150 |
151 | return_response['wafr_accelerator_run_items'] = wafr_accelerator_run_items
152 | return_response['wafr_accelerator_run_key'] = wafr_accelerator_run_key
153 | return_response['extract_output_bucket'] = UPLOAD_BUCKET_NAME
154 | return_response['pillars_string'] = pillar_string
155 | return_response['wafr_accelerator_runs_table'] = WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME
156 | return_response['wafr_prompts_table'] = WAFR_PROMPT_DD_TABLE_NAME
157 | return_response['region'] = REGION
158 | return_response['knowledge_base_id'] = KNOWLEDGE_BASE_ID
159 | return_response['llm_model_id'] = LLM_MODEL_ID
160 | return_response['wafr_workload_id'] = wafr_workload_id
161 | return_response['lens_alias'] = lenses
162 | return_response['guardrail_id'] = GUARDRAIL_ID
163 |
164 | except Exception as error:
165 | update_analysis_status(data, error)
166 | raise Exception(f'Exception caught in prepare_wafr_review: {error}')
167 |
168 | logger.info(f'return_response: {return_response}')
169 |
170 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
171 |
172 | logger.info(f'Exiting prepare_wafr_review at {exit_timestamp}')
173 |
174 | return {
175 | 'statusCode': 200,
176 | 'body': json.dumps(return_response)
177 | }
178 |
179 | def create_workload(client, workload_name, description, environment, lenses, review_owner, industry_type, aws_regions, architectural_design=None):
180 | workload_params = {
181 | 'WorkloadName': workload_name,
182 | 'Description': description,
183 | 'Environment': environment,
184 | 'ReviewOwner': review_owner,
185 | 'IndustryType': industry_type,
186 | 'Lenses': [lenses] if isinstance(lenses, str) else lenses,
187 | 'AwsRegions': aws_regions
188 | }
189 | if architectural_design:
190 | workload_params['ArchitecturalDesign'] = architectural_design
191 | response = client.create_workload(**workload_params)
192 | return response['WorkloadId']
193 |
194 | def update_analysis_status (data, error):
195 |
196 | dynamodb = boto3.resource('dynamodb')
197 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
198 |
199 | wafr_accelerator_run_key = {
200 | 'analysis_id': data['analysis_id'],
201 | 'analysis_submitter': data['analysis_submitter']
202 | }
203 |
204 | wafr_accelerator_runs_table.update_item(
205 | Key=wafr_accelerator_run_key,
206 | UpdateExpression="SET review_status = :val",
207 | ExpressionAttributeValues={':val': "Errored"},
208 | ReturnValues='UPDATED_NEW'
209 | )
210 | logger.error(f"Exception caught in prepare_wafr_review: {error}")
--------------------------------------------------------------------------------
/lambda_dir/replace_ui_tokens/replace_ui_tokens.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import urllib.parse
6 | import re
7 | import logging
8 |
9 | WAFR_ACCELERATOR_QUEUE_URL = os.environ['WAFR_ACCELERATOR_QUEUE_URL']
10 | WAFR_UI_BUCKET_NAME = os.environ['WAFR_UI_BUCKET_NAME']
11 | WAFR_UI_BUCKET_ARN = os.environ['WAFR_UI_BUCKET_ARN']
12 | REGION_NAME = os.environ['REGION_NAME']
13 | WAFR_RUNS_TABLE=os.environ['WAFR_RUNS_TABLE']
14 | EC2_INSTANCE_ID=os.environ['EC2_INSTANCE_ID']
15 | UPLOAD_BUCKET_NAME=os.environ['UPLOAD_BUCKET_NAME']
16 | PARAMETER_1_NEW_WAFR_REVIEW=os.environ['PARAMETER_1_NEW_WAFR_REVIEW']
17 | PARAMETER_2_EXISTING_WAFR_REVIEWS=os.environ['PARAMETER_2_EXISTING_WAFR_REVIEWS']
18 | PARAMETER_UI_SYNC_INITAITED_FLAG=os.environ['PARAMETER_UI_SYNC_INITAITED_FLAG']
19 | PARAMETER_3_LOGIN_PAGE=os.environ['PARAMETER_3_LOGIN_PAGE']
20 | PARAMETER_COGNITO_USER_POOL_ID = os.environ['PARAMETER_COGNITO_USER_POOL_ID']
21 | PARAMETER_COGNITO_USER_POOL_CLIENT_ID = os.environ['PARAMETER_COGNITO_USER_POOL_CLIENT_ID']
22 | GUARDRAIL_ID = os.environ['GUARDRAIL_ID']
23 |
24 | ssm_client = boto3.client('ssm')
25 | s3Client = boto3.client('s3')
26 | s3Resource = boto3.resource('s3')
27 | ssm_parameter_store = boto3.client('ssm', region_name=REGION_NAME)
28 |
29 | logger = logging.getLogger()
30 | logger.setLevel(logging.INFO)
31 |
32 | def lambda_handler(event, context):
33 |
34 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
35 |
36 | logger.info("replace_ui_tokens invoked at " + entry_timestamp)
37 |
38 | s3_script = """
39 | import boto3
40 | import os
41 |
42 | s3 = boto3.client('s3')
43 |
44 | try:
45 | bucket_name = '{{UI_CODE_BUCKET_NAME}}'
46 |
47 | objects = s3.list_objects_v2(Bucket=bucket_name)['Contents']
48 |
49 | for obj in objects:
50 | print(obj['Key'])
51 | current_directory = os.path.dirname(os.path.realpath(__file__))
52 | key = obj['Key']
53 | # Skip if the object is a directory
54 | if key.endswith('/'):
55 | continue
56 | # Create the local directory structure if it doesn't exist
57 | local_path = os.path.join(current_directory, key)
58 | os.makedirs(os.path.dirname(local_path), exist_ok=True)
59 | # Download the object
60 | s3.download_file(bucket_name, key, local_path)
61 | print(f'Downloaded: {key}')
62 | except Exception as e:
63 | print(f'Error: {e}')
64 | """
65 |
66 | logger.info("WAFR_ACCELERATOR_QUEUE_URL: " + WAFR_ACCELERATOR_QUEUE_URL)
67 | logger.info("WAFR_UI_BUCKET_NAME: " + WAFR_UI_BUCKET_NAME)
68 | logger.info("WAFR_UI_BUCKET_ARN: " + WAFR_UI_BUCKET_ARN)
69 | logger.info("REGION_NAME: " + REGION_NAME)
70 | logger.info("WAFR_RUNS_TABLE: " + WAFR_RUNS_TABLE)
71 | logger.info("EC2_INSTANCE_ID: " + EC2_INSTANCE_ID)
72 | logger.info("UPLOAD_BUCKET_NAME: " + UPLOAD_BUCKET_NAME)
73 | logger.info("PARAMETER_1_NEW_WAFR_REVIEW: " + PARAMETER_1_NEW_WAFR_REVIEW)
74 | logger.info("PARAMETER_2_EXISTING_WAFR_REVIEWS: " + PARAMETER_2_EXISTING_WAFR_REVIEWS)
75 | logger.info("PARAMETER_UI_SYNC_INITAITED_FLAG: " + PARAMETER_UI_SYNC_INITAITED_FLAG)
76 | logger.info("PARAMETER_3_LOGIN_PAGE: " + PARAMETER_3_LOGIN_PAGE)
77 | logger.info("PARAMETER_COGNITO_USER_POOL_ID: " + PARAMETER_COGNITO_USER_POOL_ID)
78 | logger.info("PARAMETER_COGNITO_USER_POOL_CLIENT_ID: " + PARAMETER_COGNITO_USER_POOL_CLIENT_ID)
79 | logger.info("GUARDRAIL_ID: " + GUARDRAIL_ID)
80 |
81 | logger.info(json.dumps(event))
82 |
83 | status = 'Everything done successfully - token update, s3 script creation and execution!'
84 |
85 | bucket = event['Records'][0]['s3']['bucket']['name']
86 | key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
87 |
88 | logger.info(f"replace_ui_tokens invoked for {bucket} / {key}")
89 |
90 | logger.info (f"replace_ui_tokens checkpoint 1")
91 |
92 | try:
93 | # Check if the input key has the "tokenized-pages/" prefix
94 | if (key.startswith('tokenized-pages')):
95 |
96 | logger.info (f"replace_ui_tokens checkpoint 2")
97 |
98 | # Replace the prefix with "pages/"
99 | output_key = 'pages/' + key.split('tokenized-pages/')[1]
100 |
101 | if(key.startswith('tokenized-pages/1_New_WAFR_Review.py')):
102 | # Read the file from the input key
103 | response = s3Client.get_object(Bucket=bucket, Key=key)
104 | file_content = response['Body'].read().decode('utf-8')
105 |
106 | # Token replacement for 1_New_WAFR_Review.py
107 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
108 | updated_content = re.sub(r'{{SQS_QUEUE_NAME}}', WAFR_ACCELERATOR_QUEUE_URL, updated_content)
109 | updated_content = re.sub(r'{{WAFR_UPLOAD_BUCKET_NAME}}', UPLOAD_BUCKET_NAME, updated_content)
110 | updated_content = re.sub(r'{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}', WAFR_RUNS_TABLE, updated_content)
111 |
112 | # Write the updated content to the output key
113 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
114 |
115 | ssm_parameter_store.put_parameter(
116 | Name=PARAMETER_1_NEW_WAFR_REVIEW,
117 | Value=f'True',
118 | Type='String',
119 | Overwrite=True
120 | )
121 |
122 | logger.info (f"replace_ui_tokens checkpoint 3a")
123 |
124 | elif(key.startswith('tokenized-pages/2_Existing_WAFR_Reviews.py')):
125 | response = s3Client.get_object(Bucket=bucket, Key=key)
126 | file_content = response['Body'].read().decode('utf-8')
127 |
128 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
129 | updated_content = re.sub(r'{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}', WAFR_RUNS_TABLE, updated_content)
130 | updated_content = re.sub(r'{{GUARDRAIL_ID}}', GUARDRAIL_ID, updated_content)
131 |
132 | # Write the updated content to the output key
133 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
134 |
135 | ssm_parameter_store.put_parameter(
136 | Name=PARAMETER_2_EXISTING_WAFR_REVIEWS,
137 | Value=f'True',
138 | Type='String',
139 | Overwrite=True
140 | )
141 | logger.info (f"replace_ui_tokens checkpoint 3b")
142 |
143 | elif(key.startswith('tokenized-pages/1_Login.py')):
144 | response = s3Client.get_object(Bucket=bucket, Key=key)
145 | file_content = response['Body'].read().decode('utf-8')
146 |
147 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
148 | updated_content = re.sub(r'{{PARAMETER_COGNITO_USER_POOL_ID}}', PARAMETER_COGNITO_USER_POOL_ID, updated_content)
149 | updated_content = re.sub(r'{{PARAMETER_COGNITO_USER_POOL_CLIENT_ID}}', PARAMETER_COGNITO_USER_POOL_CLIENT_ID, updated_content)
150 |
151 | # Write the updated content to the output key
152 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
153 |
154 | ssm_parameter_store.put_parameter(
155 | Name=PARAMETER_3_LOGIN_PAGE,
156 | Value=f'True',
157 | Type='String',
158 | Overwrite=True
159 | )
160 |
161 | logger.info (f"replace_ui_tokens checkpoint 3c")
162 | else: # all other files in the folder
163 |                 response = s3Client.copy_object(Bucket=bucket, CopySource={'Bucket': bucket, 'Key': key}, Key=output_key)
164 | logger.info (f"replace_ui_tokens checkpoint 3d")
165 |
166 | logger.info(f'File processed: {key} -> {output_key}')
167 |
168 |         logger.info (f"replace_ui_tokens checkpoint 3e")
169 |
170 | else:
171 | logger.info(f'Skipping file: {key} (does not have the "tokenized-pages/" prefix)')
172 |
173 |
174 | except Exception as error:
175 |         logger.error(error)
176 |         logger.error("replace_ui_tokens: S3 object could not be opened. Check environment variables.")
177 | status = 'File updates failed!'
178 |
179 | update1 = update2 = ui_sync_flag = update3 = "False"
180 |
181 | try:
182 | update1 = ssm_parameter_store.get_parameter(Name=PARAMETER_1_NEW_WAFR_REVIEW, WithDecryption=True)['Parameter']['Value']
183 | logger.info (f"replace_ui_tokens checkpoint 4a: update1 (PARAMETER_1_NEW_WAFR_REVIEW) = " + update1)
184 | update2 = ssm_parameter_store.get_parameter(Name=PARAMETER_2_EXISTING_WAFR_REVIEWS, WithDecryption=True)['Parameter']['Value']
185 | logger.info (f"replace_ui_tokens checkpoint 4b: update2 (PARAMETER_2_EXISTING_WAFR_REVIEWS) = " + update2)
186 | update3 = ssm_parameter_store.get_parameter(Name=PARAMETER_3_LOGIN_PAGE, WithDecryption=True)['Parameter']['Value']
187 |         logger.info (f"replace_ui_tokens checkpoint 4c: update3 (PARAMETER_3_LOGIN_PAGE) = " + update3)
188 |         ui_sync_flag = ssm_parameter_store.get_parameter(Name=PARAMETER_UI_SYNC_INITAITED_FLAG, WithDecryption=True)['Parameter']['Value']
189 |         logger.info (f"replace_ui_tokens checkpoint 4d: ui_sync_flag (PARAMETER_UI_SYNC_INITAITED_FLAG) = " + ui_sync_flag)
190 | except Exception as error:
191 | logger.info(error)
192 | logger.info("One of the parameters is missing, wait for the next event")
193 |
194 |     if ui_sync_flag == "False":
195 |         if update1 == "True" and update2 == "True" and update3 == "True":
196 | logger.info (f"replace_ui_tokens checkpoint 4d : All the parameters are true, sending SSM command!")
197 | send_ssm_command (EC2_INSTANCE_ID, s3_script, WAFR_UI_BUCKET_NAME)
198 | ssm_parameter_store.put_parameter(
199 | Name=PARAMETER_UI_SYNC_INITAITED_FLAG,
200 | Value=f'True',
201 | Type='String',
202 | Overwrite=True
203 | )
204 |         logger.info (f"ui_sync_flag set to True checkpoint 4e")
205 |     else:
206 |         logger.info(f'ui_sync_flag is already set, files are not synced again! Reset the {PARAMETER_UI_SYNC_INITAITED_FLAG} parameter to False for the file sync to happen again')
207 |
208 | logger.info (f"replace_ui_tokens checkpoint 5")
209 |
210 | return {
211 | 'statusCode': 200,
212 | 'body': json.dumps(status)
213 | }
214 |
215 | def send_ssm_command(ec2InstanceId, s3_script, wafrUIBucketName):
216 |
217 | s3_script = re.sub(r'{{UI_CODE_BUCKET_NAME}}', wafrUIBucketName, s3_script)
218 |
219 | # Send the SSM Run Command to process the file
220 | command_id = ssm_client.send_command(
221 |         InstanceIds=[ec2InstanceId],
222 | DocumentName='AWS-RunShellScript',
223 | Parameters={
224 | 'commands': [
225 | 'sudo mkdir -p /wafr-accelerator && cd /wafr-accelerator',
226 | f'sudo echo "{s3_script}" > /wafr-accelerator/syncUIFolder.py',
227 | 'sleep 30 && sudo chown -R ec2-user:ec2-user /wafr-accelerator',
228 | 'python3 syncUIFolder.py',
229 | 'sleep 30 && sudo chown -R ec2-user:ec2-user /wafr-accelerator'
230 | ]
231 | }
232 | )['Command']['CommandId']
233 |
234 |
235 | logger.info (f"replace_ui_tokens checkpoint 4f: command_id " + command_id)
236 |
237 | # Wait for the command execution to complete
238 | waiter = ssm_client.get_waiter('command_executed')
239 | waiter.wait(
240 | CommandId=command_id,
241 |         InstanceId=ec2InstanceId,
242 | WaiterConfig={
243 | 'Delay': 30,
244 | 'MaxAttempts': 30
245 | }
246 | )
247 |
248 | logger.info (f"replace_ui_tokens checkpoint 4g: Wait over")
249 |
250 |
--------------------------------------------------------------------------------
/lambda_dir/start_wafr_review/start_wafr_review.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 |
16 | WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME = os.environ['WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME']
17 | UPLOAD_BUCKET_NAME = os.environ['UPLOAD_BUCKET_NAME']
18 | REGION = os.environ['REGION']
19 | WAFR_PROMPT_DD_TABLE_NAME = os.environ['WAFR_PROMPT_DD_TABLE_NAME']
20 | KNOWLEDGE_BASE_ID=os.environ['KNOWLEDGE_BASE_ID']
21 | LLM_MODEL_ID=os.environ['LLM_MODEL_ID']
22 | START_WAFR_REVIEW_STATEMACHINE_ARN = os.environ['START_WAFR_REVIEW_STATEMACHINE_ARN']
23 | BEDROCK_SLEEP_DURATION = int(os.environ['BEDROCK_SLEEP_DURATION'])
24 | BEDROCK_MAX_TRIES = int(os.environ['BEDROCK_MAX_TRIES'])
25 | WAFR_REFERENCE_DOCS_BUCKET = os.environ['WAFR_REFERENCE_DOCS_BUCKET']
26 | GUARDRAIL_ID = os.environ['GUARDRAIL_ID']
27 |
28 | dynamodb = boto3.resource('dynamodb')
29 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
30 | bedrock_client = boto3.client('bedrock-runtime',region_name=REGION)
31 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
32 |
33 | logger = logging.getLogger()
34 | logger.setLevel(logging.INFO)
35 |
36 | def lambda_handler(event, context):
37 |
38 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
39 | logger.info (f"start_wafr_review invoked at {entry_timestamp}" )
40 | logger.info(json.dumps(event))
41 |
42 | logger.info(f"REGION: {REGION}")
43 | logger.debug(f"START_WAFR_REVIEW_STATEMACHINE_ARN: {START_WAFR_REVIEW_STATEMACHINE_ARN}")
44 | logger.debug(f"WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: {WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}" )
45 |
46 | data = json.loads(event['Records'][0]['body'])
47 |
48 | return_status = ""
49 | analysis_review_type = data['analysis_review_type']
50 |
51 | try:
52 | if (analysis_review_type != 'Quick' ): #"Deep with Well-Architected Tool"
53 | logger.info("Initiating \'Deep with Well-Architected Tool\' analysis")
54 | sf = boto3.client('stepfunctions', region_name = REGION)
55 | response = sf.start_execution(stateMachineArn = START_WAFR_REVIEW_STATEMACHINE_ARN, input = json.dumps(event['Records']))
56 | logger.info (f"Step function response: {response}")
57 | return_status = 'Deep analysis commenced successfully!'
58 | else: #Quick
59 | logger.info("Executing \'Quick\' analysis")
60 | do_quick_analysis (data, context)
61 | return_status = 'Quick analysis completed successfully!'
62 | except Exception as error:
63 | handle_error (data, error)
64 | raise Exception (f'Exception caught in start_wafr_review: {error}')
65 |
66 |     exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
67 |     logger.info(f"Exiting start_wafr_review at {exit_timestamp}")
68 |
69 | return {
70 | 'statusCode': 200,
71 | 'body': json.dumps(return_status)
72 | }
73 |
74 | def handle_error (data, error):
75 |
76 | dynamodb = boto3.resource('dynamodb')
77 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
78 |
79 | # Define the key for the item you want to update
80 | wafr_accelerator_run_key = {
81 | 'analysis_id': data['analysis_id'], # Replace with your partition key name and value
82 | 'analysis_submitter': data['analysis_submitter'] # If you have a sort key, replace with its name and value
83 | }
84 |
85 | # Handle errors and update DynamoDB status
86 | wafr_accelerator_runs_table.update_item(
87 | Key=wafr_accelerator_run_key,
88 | UpdateExpression="SET review_status = :val",
89 | ExpressionAttributeValues={':val': "Errored"},
90 | ReturnValues='UPDATED_NEW'
91 | )
92 | logger.error(f"Exception caught in start_wafr_review: {error}")
93 |
94 | def get_pillar_string (pillars):
95 | pillar_string =""
96 |
97 | for item in pillars:
98 | logger.info (f"selected_pillars: {item}")
99 | pillar_string = pillar_string + item + ","
100 | logger.info ("pillar_string: " + pillar_string)
101 |
102 | pillar_string = pillar_string.rstrip(',')
103 |
104 | return pillar_string
105 |
106 | def do_quick_analysis (data, context):
107 |
108 | logger.info(f"WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: {WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}")
109 | logger.info(f"UPLOAD_BUCKET_NAME: {UPLOAD_BUCKET_NAME}")
110 | logger.info(f"WAFR_PROMPT_DD_TABLE_NAME: {WAFR_PROMPT_DD_TABLE_NAME}")
111 | logger.info(f"KNOWLEDGE_BASE_ID: {KNOWLEDGE_BASE_ID}")
112 | logger.info(f"LLM_MODEL_ID: {LLM_MODEL_ID}")
113 | logger.info(f"BEDROCK_SLEEP_DURATION: {BEDROCK_SLEEP_DURATION}")
114 | logger.info(f"BEDROCK_MAX_TRIES: {BEDROCK_MAX_TRIES}")
115 | if(GUARDRAIL_ID):
116 | logger.info(f"GUARDRAIL_ID: {GUARDRAIL_ID}" )
117 | else:
118 |         logger.info("GUARDRAIL_ID not specified")
119 |
120 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
121 | wafr_prompts_table = dynamodb.Table(WAFR_PROMPT_DD_TABLE_NAME)
122 |
123 | logger.info(json.dumps(data))
124 |
125 | analysis_id = data['analysis_id']
126 | name = data['analysis_name']
127 | wafr_lens = data['wafr_lens']
128 |
129 | analysis_submitter = data['analysis_submitter']
130 | document_s3_key = data['document_s3_key']
131 |
132 | pillars = data['selected_pillars']
133 | pillar_string = get_pillar_string (pillars)
134 | logger.info ("Final pillar_string: " + pillar_string)
135 |
136 | logger.debug ("do_quick_analysis checkpoint 1")
137 |
138 | wafr_accelerator_run_key = {
139 | 'analysis_id': analysis_id,
140 | 'analysis_submitter': analysis_submitter
141 | }
142 |
143 | response = wafr_accelerator_runs_table.update_item(
144 | Key=wafr_accelerator_run_key,
145 | UpdateExpression="SET review_status = :val",
146 | ExpressionAttributeValues={':val': "In Progress"},
147 | ReturnValues='UPDATED_NEW'
148 | )
149 |
150 | logger.info (f"wafr-accelerator-runs dynamodb table summary update response: {response}" )
151 | logger.debug ("do_quick_analysis checkpoint 2")
152 |
153 | try:
154 |
155 | # Get the bucket object
156 | output_bucket = s3.Bucket(UPLOAD_BUCKET_NAME)
157 |
158 | # Extract document text and write to s3
159 | extracted_document_text = extract_document_text(UPLOAD_BUCKET_NAME, document_s3_key, output_bucket, wafr_accelerator_runs_table, wafr_accelerator_run_key, REGION)
160 |
161 | logger.debug ("do_quick_analysis checkpoint 3")
162 |
163 | # Generate solution summary
164 | summary = generate_solution_summary (extracted_document_text, wafr_accelerator_runs_table, wafr_accelerator_run_key)
165 |
166 | logger.info ("Generated architecture summary:" + summary)
167 |
168 | logger.debug ("do_quick_analysis checkpoint 4")
169 |
170 | partition_key_value = wafr_lens
171 |
172 | sort_key_values = pillars
173 |
174 | logger.info ("wafr_lens: " + wafr_lens)
175 |
176 | pillar_responses = []
177 | pillar_counter = 0
178 |
179 | #Get All the pillar prompts in a loop
180 | for item in pillars:
181 | logger.info (f"selected_pillars: {item}")
182 | response = wafr_prompts_table.query(
183 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt',
184 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens) & Key('wafr_pillar').eq(item),
185 | ScanIndexForward=True
186 | )
187 |
188 | logger.info (f"response wafr_pillar_id: " + str(response['Items'][0]['wafr_pillar_id']))
189 | logger.info (f"response wafr_pillar_prompt: " + response['Items'][0]['wafr_pillar_prompt'])
190 |
191 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
192 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
193 | pillar_review_prompt_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + wafr_lens + "-" + item + "-prompt.txt"
194 | pillar_review_output_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + wafr_lens + "-" + item + "-output.txt"
195 |
196 | logger.info (f"pillar_review_prompt_filename: {pillar_review_prompt_filename}")
197 | logger.info (f"pillar_review_output_filename: {pillar_review_output_filename}")
198 |
199 | pillar_specific_prompt_question = response['Items'][0]['wafr_pillar_prompt']
200 |
201 | claude_prompt_body = bedrock_prompt(wafr_lens, item, pillar_specific_prompt_question, KNOWLEDGE_BASE_ID, extracted_document_text, WAFR_REFERENCE_DOCS_BUCKET)
202 | output_bucket.put_object(Key=pillar_review_prompt_filename, Body=claude_prompt_body)
203 |
204 | logger.debug (f"do_quick_analysis checkpoint 5.{pillar_counter}")
205 |
206 | streaming = True
207 |
208 | pillar_review_output = invoke_bedrock(streaming, claude_prompt_body, pillar_review_output_filename, output_bucket)
209 |
210 | # Comment the next line if you would like to retain the prompts files
211 | output_bucket.Object(pillar_review_prompt_filename).delete()
212 |
213 | logger.debug (f"do_quick_analysis checkpoint 6.{pillar_counter}")
214 |
215 | logger.info ("pillar_id: " + str(response['Items'][0]['wafr_pillar_id']))#
216 |
217 | pillarResponse = {
218 | 'pillar_name': item,
219 | 'pillar_id': str(response['Items'][0]['wafr_pillar_id']),
220 | 'llm_response': pillar_review_output
221 | }
222 |
223 | pillar_responses.append(pillarResponse)
224 |
225 | pillar_counter += 1
226 |
227 | logger.debug (f"do_quick_analysis checkpoint 7")
228 |
235 | response = wafr_accelerator_runs_table.update_item(
236 | Key=wafr_accelerator_run_key,
237 | UpdateExpression="SET pillars = :val",
238 | ExpressionAttributeValues={':val': pillar_responses},
239 | ReturnValues='UPDATED_NEW'
240 | )
241 |
242 | logger.info (f"dynamodb status update response: {response}" )
243 | logger.debug (f"do_quick_analysis checkpoint 8")
244 |
245 | response = wafr_accelerator_runs_table.update_item(
246 | Key=wafr_accelerator_run_key,
247 | UpdateExpression="SET review_status = :val",
248 | ExpressionAttributeValues={':val': "Completed"},
249 | ReturnValues='UPDATED_NEW'
250 | )
251 |
252 | logger.info (f"dynamodb status update response: {response}" )
253 | except Exception as error:
254 | handle_error (data, error)
255 | raise Exception (f'Exception caught in do_quick_analysis: {error}')
256 |
257 | logger.debug (f"do_quick_analysis checkpoint 9")
258 |
259 |     exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
260 |     logger.info("Exiting do_quick_analysis at " + exit_timestamp)
261 |
262 | def extract_document_text(upload_bucket_name, document_s3_key, output_bucket, wafr_accelerator_runs_table, wafr_accelerator_run_key, region):
263 |
264 | textract_config = Config(retries = dict(max_attempts = 5))
265 | textract_client = boto3.client('textract', region_name=region, config=textract_config)
266 |
267 | logger.debug ("extract_document_text checkpoint 1")
268 |
269 | response = textract_client.start_document_text_detection(
270 | DocumentLocation={
271 | 'S3Object': {
272 | 'Bucket': upload_bucket_name,
273 | 'Name': document_s3_key
274 | }
275 | }
276 | )
277 |
278 | job_id = response["JobId"]
279 |
280 | logger.debug ("extract_document_text checkpoint 2")
281 |
282 |     # Poll until the job completes; fail fast if Textract reports an error
283 |     while True:
284 |         response = textract_client.get_document_text_detection(JobId=job_id)
285 |         status = response["JobStatus"]
286 |         if status == "SUCCEEDED":
287 |             break
288 |         if status == "FAILED":
289 |             raise Exception(f"Textract text detection job {job_id} failed")
290 |         time.sleep(5)
291 | 
292 |     logger.debug ("extract_document_text checkpoint 3")
290 |
291 | pages = []
292 | next_token = None
293 | while True:
294 | if next_token:
295 | response = textract_client.get_document_text_detection(JobId=job_id, NextToken=next_token)
296 | else:
297 | response = textract_client.get_document_text_detection(JobId=job_id)
298 | pages.append(response)
299 | if 'NextToken' in response:
300 | next_token = response['NextToken']
301 | else:
302 | break
303 |
304 | logger.debug ("extract_document_text checkpoint 4")
305 |
306 | # Extract the text from all pages
307 | extracted_text = ""
308 | for page in pages:
309 | for item in page["Blocks"]:
310 | if item["BlockType"] == "LINE":
311 | extracted_text += item["Text"] + "\n"
312 |
319 | # Update the item
320 | response = wafr_accelerator_runs_table.update_item(
321 | Key=wafr_accelerator_run_key,
322 | UpdateExpression="SET extracted_document = :val",
323 | ExpressionAttributeValues={':val': extracted_text},
324 | ReturnValues='UPDATED_NEW'
325 | )
326 |
327 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
328 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
329 | output_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-extracted-text.txt"
330 |
331 |     logger.info (f"Extracted document text output filename: {output_filename}")
332 | output_bucket.put_object(Key=output_filename, Body=bytes(extracted_text, encoding='utf-8'))
333 |
334 | return extracted_text
335 |
336 | def generate_solution_summary (extracted_document_text, wafr_accelerator_runs_table, wafr_accelerator_run_key):
337 |
338 | prompt = f"The following document is a solution architecture document that you are reviewing as an AWS Cloud Solutions Architect. Please summarise the following solution in 250 words. Begin directly with the architecture summary, don't provide any other opening or closing statements.\n\n{extracted_document_text}\n\n" #\nSummary:"
339 |
340 | response = bedrock_client.invoke_model(
341 | modelId= LLM_MODEL_ID,
342 | contentType="application/json",
343 | accept="application/json",
344 | body = json.dumps({
345 | "anthropic_version": "bedrock-2023-05-31",
346 |             "max_tokens": 4096, # maximum output tokens; adjust to the configured model's output limit (200000 exceeds it)
347 | "messages": [
348 | {
349 | "role": "user",
350 | "content": [{"type": "text", "text": prompt}],
351 | }
352 | ],
353 | }
354 | )
355 | )
356 |
357 | logger.info(f"generate_solution_summary: response: {response}")
358 |
359 | # Extract the summary
360 | response_body = json.loads(response['body'].read())
361 | summary = response_body['content'][0]['text']
362 |
363 |     logger.debug (f"generate_solution_summary checkpoint 9")
364 |
371 |     logger.debug (f"generate_solution_summary checkpoint 10")
372 |
373 | response = wafr_accelerator_runs_table.update_item(
374 | Key=wafr_accelerator_run_key,
375 | UpdateExpression="SET architecture_summary = :val",
376 | ExpressionAttributeValues={':val': summary},
377 | ReturnValues='UPDATED_NEW'
378 | )
379 |
380 | return summary
381 |
382 | def invoke_bedrock(streaming, claude_prompt_body, pillar_review_output_filename, output_bucket):
383 |
384 | pillar_review_output = ""
385 | retries = 1
386 | max_retries = BEDROCK_MAX_TRIES
387 |
388 | while retries <= max_retries:
389 | try:
390 | if(streaming):
391 | if(GUARDRAIL_ID == "Not Selected"):
392 | streaming_response = bedrock_client.invoke_model_with_response_stream(
393 | modelId=LLM_MODEL_ID,
394 | body=claude_prompt_body
395 | )
396 | else: # Use guardrails
397 | streaming_response = bedrock_client.invoke_model_with_response_stream(
398 | modelId=LLM_MODEL_ID,
399 | body=claude_prompt_body,
400 | guardrailIdentifier= GUARDRAIL_ID,
401 | guardrailVersion="DRAFT"
402 | )
403 |
404 | logger.debug (f"invoke_bedrock checkpoint 1.{retries}")
405 | stream = streaming_response.get("body")
406 |
407 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
408 |
409 | for chunk in parse_stream(stream):
410 | pillar_review_output += chunk
411 |
412 | # Uncomment next line if you would like to see response files for each question too.
413 | # output_bucket.put_object(Key=pillar_review_output_filename, Body=bytes(pillar_review_output, encoding='utf-8'))
414 |
415 | return pillar_review_output
416 |
417 |             else:
422 | if(GUARDRAIL_ID == "Not Selected"):
423 | non_streaming_response = bedrock_client.invoke_model(
424 | modelId=LLM_MODEL_ID,
425 | body=claude_prompt_body
426 | )
427 | else: # Use guardrails
428 | non_streaming_response = bedrock_client.invoke_model(
429 | modelId=LLM_MODEL_ID,
430 | body=claude_prompt_body,
431 | guardrailIdentifier=GUARDRAIL_ID,
432 | guardrailVersion="DRAFT"
433 | )
434 |
435 | response_json = json.loads(non_streaming_response["body"].read().decode("utf-8"))
436 |
437 | logger.info (response_json)
438 |
439 | logger.debug (f"invoke_bedrock checkpoint 1.{retries}")
440 |
441 | # Extract the response text.
442 | pillar_review_output = response_json["content"][0]["text"]
443 |
444 | logger.debug (pillar_review_output)
445 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
446 |
447 | # Uncomment next line if you would like to see response files for each question too.
448 | # output_bucket.put_object(Key=pillar_review_output_filename, Body=pillar_review_output)
449 |
450 | return pillar_review_output
451 |
452 |         except Exception as e:
453 |             logger.info(f"Sleeping as attempt {retries} failed with exception: {e}")
454 |             retries += 1
455 |             time.sleep(BEDROCK_SLEEP_DURATION) # Add a delay before the next retry
456 |
457 | logger.info(f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
458 | raise Exception (f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
459 |
460 | def get_lens_filter(kb_bucket, wafr_lens):
461 |
462 |     # Map lens prefixes to their corresponding lens names - allows for the addition of further lenses
463 | lens_mapping = {
464 | "Financial Services Industry Lens": "financialservices",
465 | "Data Analytics Lens": "dataanalytics",
466 | "Generative AI Lens": "genai"
467 | }
468 |
469 | # Get lens name or default to "wellarchitected"
470 | lens = next(
471 | (value for prefix, value in lens_mapping.items()
472 | if wafr_lens.startswith(prefix)),
473 | "wellarchitected"
474 | )
475 |
476 | # If wellarchitected lens then also use the overview documentation
477 | if lens == "wellarchitected":
478 | lens_filter = {
479 | "orAll": [
480 | {
481 | "startsWith": {
482 | "key": "x-amz-bedrock-kb-source-uri",
483 | "value": f"s3://{kb_bucket}/{lens}"
484 | }
485 | },
486 | {
487 | "startsWith": {
488 | "key": "x-amz-bedrock-kb-source-uri",
489 | "value": f"s3://{kb_bucket}/overview"
490 | }
491 | }
492 | ]
493 | }
494 | else: # Just use the lens documentation
495 | lens_filter = {
496 | "startsWith": {
497 | "key": "x-amz-bedrock-kb-source-uri",
498 | "value": f"s3://{kb_bucket}/{lens}/"
499 | }
500 | }
501 |
502 | logger.info(f"get_lens_filter: {json.dumps(lens_filter)}")
503 | return lens_filter
504 |
505 | def bedrock_prompt(wafr_lens, pillar, questions, kb_id, document_content=None, wafr_reference_bucket = None):
506 |
507 | lens_filter = get_lens_filter(wafr_reference_bucket, wafr_lens)
508 | response = retrieve(questions, kb_id, lens_filter)
509 |
510 | retrievalResults = response['retrievalResults']
511 | contexts = get_contexts(retrievalResults)
512 |
513 |     system_prompt = f"""You are an AWS Cloud Solutions Architect who specializes in reviewing solution architecture documents against the AWS Well-Architected Framework, using a process called the Well-Architected Framework Review (WAFR).
514 | The WAFR process consists of evaluating the provided solution architecture document against the 6 pillars of the specified AWS Well-Architected Framework lens, namely:
515 | Operational Excellence Pillar
516 | Security Pillar
517 | Reliability Pillar
518 | Performance Efficiency Pillar
519 | Cost Optimization Pillar
520 | Sustainability Pillar
521 | 
522 | A solution architecture document is provided below in the "uploaded_document" section that you will evaluate by answering the questions provided in the "pillar_questions" section, in accordance with the WAFR pillar indicated by the "pillar_name" section and the specified WAFR lens indicated by the "lens_name" section. Answer each and every question without skipping any question, as skipping a question would make the entire response invalid. Follow the instructions listed under the "instructions" section below.
523 | 
524 | <instructions>
525 | 1) For each question, be concise and limit responses to 350 words maximum. Responses should be specific to the specified lens (listed in the "lens_name" section) and pillar only (listed in the "pillar_name" section). Your response to each question should have five parts: 'Assessment', 'Best practices followed', 'Recommendations/Examples', 'Risks' and 'Citations'.
526 | 2) You are also provided with a Knowledge Base which has more information about the specific lens and pillar from the Well-Architected Framework. The relevant parts from the Knowledge Base will be provided under the "kb" section.
527 | 3) For each question, start your response with the 'Assessment' section, in which you will give a short summary (three to four lines) of your answer.
528 | 4) For each question,
529 | a) Provide which best practices from the specified pillar have been followed, including the best practice titles and IDs from the respective pillar guidance for the question. List them under the 'Best practices followed' section.
530 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
531 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
532 | b) Provide your recommendations on how the solution architecture should be updated to address the question's ask. If you have a relevant example, mention it clearly like so: "Example: ". List all of this under the 'Recommendations/Examples' section.
533 | c) Highlight the risks identified based on not following the best practices relevant to the specific WAFR question. Categorize the overall risk for this question by selecting one of the three: High, Medium, or Low. List them under the 'Risks' section and mention your categorization.
534 | d) Add a 'Citations' section listing the best practice ID and heading for best practices, recommendations, and risks from the specified lens ("lens_name") and specified pillar ("pillar_name"). If there are no citations then return 'N/A' for Citations.
535 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
536 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
537 | 5) For each question, if the required information is missing or is inadequate to answer the question, then first state that the document doesn't provide any or enough information. Then, list the recommendations relevant to the question to address the gap in the solution architecture document under the 'Recommendations' section. In this case, the 'Best practices followed' section will simply state "not enough information".
538 | 6) Use Markdown formatting for your response. First list the question in bold, then the response; the section headings for each of the five sections in your response should also be in bold. Add a Markdown new line at the end of the response.
539 | 7) Do not make any assumptions or make up information, including best practice titles and IDs. Your responses should only be based on the actual solution document provided in the "uploaded_document" section below.
540 | 8) Each line represents a question, for example 'Question 1 -' followed by the actual question text. Do recheck that all the questions have been answered before sending back the response.
541 | </instructions>
542 | """
543 | 
544 |     # Add the solution architecture document to the system_prompt
545 |     if document_content:
546 |         system_prompt += f"""
547 | <uploaded_document>
548 | {document_content}
549 | </uploaded_document>
550 | """
551 | 
552 |     prompt = f"""
553 | <lens_name>
554 | {wafr_lens}
555 | </lens_name>
556 | 
557 | <pillar_name>
558 | {pillar}
559 | </pillar_name>
560 | 
561 | <kb>
562 | {contexts}
563 | </kb>
564 | 
565 | <pillar_questions>
566 | {questions}
567 | </pillar_questions>
568 | """
569 |
570 | body = json.dumps({
571 | "anthropic_version": "bedrock-2023-05-31",
572 |         "max_tokens": 4096, # maximum output tokens; adjust to the configured model's output limit (200000 exceeds it)
573 | "system": system_prompt,
574 | "messages": [
575 | {
576 | "role": "user",
577 | "content": [{"type": "text", "text": prompt}],
578 | }
579 | ],
580 | })
581 | return body
582 |
583 | def retrieve(questions, kbId, lens_filter):
584 | kb_prompt = f"""For each question provide:
585 | - Recommendations
586 | - Best practices
587 | - Examples
588 | - Risks
589 | {questions}"""
590 |
591 | return bedrock_agent_client.retrieve(
592 | retrievalQuery= {
593 | 'text': kb_prompt
594 | },
595 | knowledgeBaseId=kbId,
596 | retrievalConfiguration={
597 | 'vectorSearchConfiguration':{
598 | 'numberOfResults': 20,
599 | "filter": lens_filter
600 | }
601 | }
602 | )
603 |
604 | def get_contexts(retrievalResults):
605 | contexts = []
606 | for retrievedResult in retrievalResults:
607 | contexts.append(retrievedResult['content']['text'])
608 | return contexts
609 |
610 | def parse_stream(stream):
611 | for event in stream:
612 | chunk = event.get('chunk')
613 | if chunk:
614 | message = json.loads(chunk.get("bytes").decode())
615 | if message['type'] == "content_block_delta":
616 | yield message['delta']['text'] or ""
617 | elif message['type'] == "message_stop":
618 | return "\n"
619 |
--------------------------------------------------------------------------------
/lambda_dir/update_review_status/update_review_status.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 | import uuid
8 |
9 | from boto3.dynamodb.conditions import Key
10 | from boto3.dynamodb.conditions import Attr
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | dynamodb = boto3.resource('dynamodb')
16 | well_architected_client = boto3.client('wellarchitected')
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | def lambda_handler(event, context):
22 |
23 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
24 |
25 | logger.info(f"update_review_status invoked at {entry_timestamp}" )
26 | logger.info(json.dumps(event))
27 |
28 | return_response = 'Success'
29 |
30 | # Parse the input data
31 | data = event
32 |
33 | wafr_accelerator_runs_table = dynamodb.Table(data[0]['wafr_accelerator_runs_table'])
34 | wafr_accelerator_run_key = data[0]['wafr_accelerator_run_key']
35 | wafr_workload_id = data[0]['wafr_accelerator_run_items']['wafr_workload_id']
36 |
37 | try:
38 | logger.debug(f"update_review_status checkpoint 1")
39 |
40 | # Create a milestone
41 | wafr_milestone = well_architected_client.create_milestone(
42 | WorkloadId=wafr_workload_id,
43 | MilestoneName="WAFR Accelerator Baseline",
44 | ClientRequestToken=str(uuid.uuid4())
45 | )
46 |
47 | logger.debug(f"Milestone created - {json.dumps(wafr_milestone)}")
48 |
49 | # Update the item
50 | response = wafr_accelerator_runs_table.update_item(
51 | Key=wafr_accelerator_run_key,
52 | UpdateExpression="SET review_status = :val",
53 | ExpressionAttributeValues={':val': "Completed"},
54 | ReturnValues='UPDATED_NEW'
55 | )
56 | logger.debug(f"update_review_status checkpoint 2")
57 | except Exception as error:
58 | return_response = 'Failed'
59 | logger.error(f"Exception caught in update_review_status: {error}")
60 |
61 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
62 |
63 | logger.info(f"Exiting update_review_status at {exit_timestamp}" )
64 |
65 | return {
66 | 'statusCode': 200,
67 | 'body' : return_response
68 | }
69 |
--------------------------------------------------------------------------------
/refreshing_kb.md:
--------------------------------------------------------------------------------
1 | ## Refreshing the Amazon Bedrock Knowledge Base with the latest AWS Well-Architected Reference Documents
2 |
3 | ### Purging the old version
4 |
5 | * In the AWS Management Console, go to CloudFormation -> Stacks, select the 'WellArchitectedReviewUsingGenAIStack' stack, and open the 'Resources' tab.
6 | * Under the Resources tab, identify the knowledge base S3 bucket by searching for 'wafr-accelerator-kb'.
7 | * Delete all files and subfolders within the identified S3 bucket.
8 |
9 | ### AWS Well-Architected reference documents locations
10 |
11 | * **[AWS Well-Architected Framework Overview](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/framework/wellarchitected-framework.pdf):** (create an 'overview' subfolder and place the PDF file in it)
12 |
13 | * **AWS Well-Architected Framework pillar documents:** (create a 'wellarchitected' subfolder and place the PDF files in it)
14 |
15 | * [Operational Excellence](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/operational-excellence-pillar/wellarchitected-operational-excellence-pillar.pdf)
16 | * [Security](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/security-pillar/wellarchitected-security-pillar.pdf)
17 | * [Reliability](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/reliability-pillar/wellarchitected-reliability-pillar.pdf)
18 | * [Performance efficiency](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/performance-efficiency-pillar/wellarchitected-performance-efficiency-pillar.pdf)
19 | * [Cost optimization](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/cost-optimization-pillar/wellarchitected-cost-optimization-pillar.pdf)
20 | * [Sustainability](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/sustainability-pillar/wellarchitected-sustainability-pillar.pdf)
21 |
22 | Repeat the above for:
23 |
24 | * **[Financial Services Industry Lens:](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/financial-services-industry-lens/wellarchitected-financial-services-industry-lens.pdf)** Create a 'financialservices' subfolder and place the PDF file in it.
25 | 
26 | * **[Data Analytics Lens:](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/analytics-lens/analytics-lens.pdf)** Create a 'dataanalytics' subfolder and place the PDF file in it.
27 | 
28 | * **[Generative AI Lens:](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/generative-ai-lens/generative-ai-lens.pdf)** Create a 'genai' subfolder and place the PDF file in it.
29 |
30 | *Note: At present, only the above Well-Architected lenses are supported.*
31 |
32 | ### Preparing and populating Amazon Bedrock Knowledge Base with AWS Well-Architected Reference Documents
33 |
34 | The Amazon Bedrock knowledge base is built from the AWS Well-Architected documents. Download the documents in PDF format and upload them to their respective subfolders in the S3 bucket, creating each subfolder before uploading the files.
35 |
36 |     well_architected_docs/
37 |     ├── dataanalytics/
38 |     │   └── analytics-lens.pdf
39 |     │
40 |     ├── financialservices/
41 |     │   └── wellarchitected-financial-services-industry-lens.pdf
42 |     │
43 |     ├── genai/
44 |     │   └── generative-ai-lens.pdf
45 |     │
46 |     ├── overview/
47 |     │   └── wellarchitected-framework.pdf
48 |     │
49 |     └── wellarchitected/
50 |         ├── wellarchitected-cost-optimization-pillar.pdf
51 |         ├── wellarchitected-operational-excellence-pillar.pdf
52 |         ├── wellarchitected-performance-efficiency-pillar.pdf
53 |         ├── wellarchitected-reliability-pillar.pdf
54 |         ├── wellarchitected-security-pillar.pdf
55 |         └── wellarchitected-sustainability-pillar.pdf
56 |
57 |
58 |
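The layout above can be mirrored into the bucket with a short boto3 script instead of uploading each file by hand. This is a minimal sketch, not part of the sample: the bucket name and local folder are placeholders, and it assumes the PDFs have already been downloaded into local subfolders matching the layout above.

```python
import os

# Placeholders: substitute the 'wafr-accelerator-kb' bucket created by the
# stack and the local folder the PDFs were downloaded into.
KB_BUCKET = "wafr-accelerator-kb-example"
LOCAL_ROOT = "well_architected_docs"

SUBFOLDERS = ("dataanalytics", "financialservices", "genai", "overview", "wellarchitected")

def s3_key(subfolder, filename):
    """Object key mirroring the subfolder layout the data source expects."""
    return f"{subfolder}/{filename}"

def upload_reference_docs(local_root=LOCAL_ROOT, bucket=KB_BUCKET):
    # boto3 is imported lazily so s3_key() stays usable without AWS installed.
    import boto3
    s3 = boto3.client("s3")
    for subfolder in SUBFOLDERS:
        folder = os.path.join(local_root, subfolder)
        for filename in os.listdir(folder):
            if filename.lower().endswith(".pdf"):
                s3.upload_file(os.path.join(folder, filename), bucket,
                               s3_key(subfolder, filename))
```

Running `upload_reference_docs()` from a shell with valid AWS credentials uploads every PDF to the expected prefix, e.g. `overview/wellarchitected-framework.pdf`.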
59 | ### Re-syncing the knowledge base
60 |
61 | * In the AWS Management Console, go to Amazon Bedrock -> Knowledge bases, select the knowledge base created by the stack, choose its data source, and click 'Sync'. This re-syncs the knowledge base from the S3 bucket.
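If you prefer to trigger the sync programmatically, the `bedrock-agent` API exposes `start_ingestion_job`. A sketch, with the knowledge base and data source IDs as placeholders (copy the real IDs from the Bedrock console):

```python
def ingestion_request(kb_id, data_source_id):
    """Build the parameters for a knowledge base ingestion (sync) job."""
    return {"knowledgeBaseId": kb_id, "dataSourceId": data_source_id}

def start_sync(kb_id, data_source_id):
    # Lazy import keeps the helper above usable without AWS dependencies.
    import boto3
    client = boto3.client("bedrock-agent")
    # start_ingestion_job re-ingests the S3 data source into the vector store.
    return client.start_ingestion_job(**ingestion_request(kb_id, data_source_id))
```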
62 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aws-cdk-lib>=2.191.0
2 | constructs>=10.0.0
3 | cdklabs.generative_ai_cdk_constructs==0.1.303
4 |
--------------------------------------------------------------------------------
/source.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 |
3 | rem The sole purpose of this script is to make the command
4 | rem
5 | rem source .venv/bin/activate
6 | rem
7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows.
8 | rem On Windows, this command just runs this batch file (the argument is ignored).
9 | rem
10 | rem Now we don't need to document a Windows command for activating a virtualenv.
11 |
12 | echo Executing .venv\Scripts\activate.bat for you
13 | .venv\Scripts\activate.bat
14 |
--------------------------------------------------------------------------------
/sys-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/sys-arch.png
--------------------------------------------------------------------------------
/ui_code/WAFR_Accelerator.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from PIL import Image
3 |
4 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
5 | st.warning('You are not logged in. Please log in to access this page.')
6 | st.switch_page("pages/1_Login.py")
7 |
8 | # Main content for the home page
9 | st.write("""
10 | ### What is the AWS Well-Architected Acceleration with Generative AI sample?
11 |
12 | **WAFR Accelerator** is a comprehensive sample designed to facilitate and expedite the AWS Well-Architected Framework Review process. The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications.
13 |
14 | **WAFR Accelerator** offers an intuitive interface and a set of robust features aimed at enhancing the review experience.
15 |
16 | ### Getting Started
17 |
18 | To get started, simply navigate through the features available in the navigation bar.
19 | """)
20 |
21 | # Logout function
22 | def logout():
23 | st.session_state['authenticated'] = False
24 | st.session_state.pop('username', None)
25 | st.rerun()
26 |
27 | # Add logout button in sidebar
28 | if st.sidebar.button('Logout'):
29 | logout()
30 |
--------------------------------------------------------------------------------
/ui_code/pages/3_System_Architecture.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | 
3 | # Check authentication
4 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
5 |     st.warning('You are not logged in. Please log in to access this page.')
6 |     st.switch_page("pages/1_Login.py")
7 | 
8 | 
9 | def architecture():
10 | st.title("Architecture")
11 |
12 | st.header("AWS Well-Architected Acceleration with Generative AI Architecture")
13 | st.image("sys-arch.png", use_container_width=True)
14 |
15 | st.header("Components")
16 | st.write("""
17 | - **Frontend:** The user interface is built using Streamlit, providing an interactive and user-friendly environment for users to conduct reviews.
18 | - **Backend:** The backend services are developed using Python and integrate with various AWS services to manage data and computations.
19 | - **Database:** Data is stored securely in DynamoDB. Amazon OpenSearch serverless is used as the knowledge base for Amazon Bedrock.
20 | - **Integration Services:** The system integrates with Amazon DynamoDB, Amazon Bedrock and AWS Well-Architected Tool APIs to fetch necessary data and provide additional functionality.
21 | - **Security:** Amazon Cognito is used for user management.
22 | """)
23 |
24 | if __name__ == "__main__":
25 | architecture()
26 |
27 | # Logout function
28 | def logout():
29 | st.session_state['authenticated'] = False
30 | st.session_state.pop('username', None)
31 | st.rerun()
32 |
33 | # Add logout button in sidebar
34 | if st.sidebar.button('Logout'):
35 | logout()
36 |
--------------------------------------------------------------------------------
/ui_code/sys-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/ui_code/sys-arch.png
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/1_Login.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import boto3
3 | import botocore
4 | import os
5 | import logging
6 |
7 | # Set up logging
8 | logging.basicConfig(level=logging.INFO)
9 | logger = logging.getLogger(__name__)
10 |
11 | # Set page configuration
12 | st.set_page_config(page_title="Login", layout="wide")
13 |
14 |
15 | # Cognito configuration
16 | COGNITO_USER_POOL_ID = '{{PARAMETER_COGNITO_USER_POOL_ID}}'
17 | COGNITO_APP_CLIENT_ID = '{{PARAMETER_COGNITO_USER_POOL_CLIENT_ID}}'
18 | COGNITO_REGION = '{{REGION}}'
19 |
20 | if not COGNITO_USER_POOL_ID or not COGNITO_APP_CLIENT_ID:
21 | st.error("Cognito configuration is missing. Please check your SSM parameters or environment variables.")
22 | st.stop()
23 |
24 | logger.info(f"Cognito User Pool ID: {COGNITO_USER_POOL_ID}")
25 | logger.info(f"Cognito App Client ID: {COGNITO_APP_CLIENT_ID}")
26 | logger.info(f"Cognito Region: {COGNITO_REGION}")
27 |
28 | def authenticate(username, password):
29 | client = boto3.client('cognito-idp', region_name=COGNITO_REGION)
30 | try:
31 | resp = client.initiate_auth(
32 | ClientId=COGNITO_APP_CLIENT_ID,
33 | AuthFlow='USER_PASSWORD_AUTH',
34 | AuthParameters={
35 | 'USERNAME': username,
36 | 'PASSWORD': password,
37 | }
38 | )
39 | logger.info(f"Successfully authenticated user: {username}")
40 | return True, username
41 | except client.exceptions.NotAuthorizedException:
42 | logger.warning(f"Authentication failed for user: {username}")
43 | return False, None
44 | except client.exceptions.UserNotFoundException:
45 | logger.warning(f"User not found: {username}")
46 | return False, None
47 | except Exception as e:
48 | logger.error(f"An error occurred during authentication: {str(e)}")
49 | st.error(f"An error occurred: {str(e)}")
50 | return False, None
51 |
52 | # Main UI
53 | st.title('Login')
54 |
55 | if 'authenticated' not in st.session_state:
56 | st.session_state['authenticated'] = False
57 |
58 | if st.session_state['authenticated']:
59 | st.write(f"Welcome back, {st.session_state['username']}!")
60 | if st.button('Logout'):
61 | st.session_state['authenticated'] = False
62 | st.session_state.pop('username', None)
63 | st.rerun()
64 | else:
65 | tab1, tab2 = st.tabs(["Login", "Register"])
66 |
67 | with tab1:
68 | username = st.text_input("Username", key="login_username")
69 | password = st.text_input("Password", type="password", key="login_password")
70 | login_button = st.button("Login")
71 |
72 | if login_button:
73 | if username and password:
74 | authentication_status, cognito_username = authenticate(username, password)
75 | if authentication_status:
76 | st.session_state['authenticated'] = True
77 | st.session_state['username'] = cognito_username
78 | st.rerun()
79 | else:
80 | st.warning("Please enter both username and password")
81 |
82 | with tab2:
83 | st.info("Please contact your Admin to get registered.")
84 |
85 | # Navigation options
86 | if st.session_state['authenticated']:
87 | st.write("Please select where you'd like to go:")
88 |
89 |     col1, col2, col3 = st.columns(3)
90 |
91 | with col1:
92 | if st.button(' New WAFR Review '):
93 | st.switch_page("pages/1_New_WAFR_Review.py")
94 | with col2:
95 | if st.button(' Existing WAFR Reviews '):
96 | st.switch_page("pages/2_Existing_WAFR_Reviews.py")
97 | with col3:
98 | if st.button(' System Architecture '):
99 | st.switch_page("pages/3_System_Architecture.py")
100 |
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/1_New_WAFR_Review.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import boto3
3 | import uuid
4 | import json
5 | import datetime
6 | from boto3.dynamodb.conditions import Attr
7 | from botocore.exceptions import ClientError
8 |
9 | # Check authentication
10 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
11 | st.warning('You are not logged in. Please log in to access this page.')
12 | st.switch_page("pages/1_Login.py")
13 |
14 | st.set_page_config(page_title="Create WAFR Analysis", layout="wide")
15 |
16 | # Initialize AWS clients
17 | s3_client = boto3.client('s3')
18 | well_architected_client = boto3.client('wellarchitected', region_name="{{REGION}}")
19 |
20 | def list_static_lenses():
21 |
22 | lenses = {}
23 | lenses["AWS Well-Architected Framework"] = "wellarchitected"
24 | lenses["Data Analytics Lens"] = "arn:aws:wellarchitected::aws:lens/dataanalytics"
25 | lenses["Financial Services Industry Lens"] = "arn:aws:wellarchitected::aws:lens/financialservices"
26 | lenses["Generative AI Lens"] = "arn:aws:wellarchitected::aws:lens/genai"
27 |
28 | return lenses
29 |
30 | # Not used as only 4 lenses are supported
31 | def list_lenses(wa_client):
32 |
33 | next_token = None
34 | lenses = {}
35 |
36 | try:
37 | while True:
38 | # Prepare the parameters for the API call
39 | params = {}
40 | if next_token:
41 | params['NextToken'] = next_token
42 |
43 | # Make the API call
44 | response = wa_client.list_lenses(**params)
45 |
46 | for lens in response.get('LensSummaries', []):
47 | lens_name = lens.get('LensName', 'Unknown')
48 | lens_alias = lens.get('LensAlias', lens.get('LensArn', 'Unknown'))
49 | lenses[lens_name] = lens_alias
50 | print (f"{lens_name} : {lens_alias}")
51 |
52 | # Check if there are more results
53 | next_token = response.get('NextToken')
54 | if not next_token:
55 | break
56 |
57 | return lenses
58 |
59 | except ClientError as e:
60 | print(f"An error occurred: {e}")
61 | return None
62 |
63 | lenses = list_static_lenses()
64 | lens_list = list(lenses.keys())
65 |
66 | def get_current_user():
67 | return st.session_state.get('username', 'Unknown User')
68 |
69 | # Initialize session state
70 | if 'form_submitted' not in st.session_state:
71 | st.session_state.form_submitted = False
72 |
73 | if 'form_data' not in st.session_state:
74 | st.session_state.form_data = {
75 | 'wafr_lens': lens_list[0],
76 | 'environment': 'PREPRODUCTION',
77 | 'analysis_name': '',
78 | 'created_by': get_current_user(),
79 | 'selected_pillars': [],
80 | 'workload_desc': '',
81 | 'review_owner': '',
82 | 'industry_type': 'Agriculture',
83 | 'analysis_review_type': 'Quick'
84 | }
85 | else:
86 | st.session_state.form_data['created_by'] = get_current_user()
87 |
88 | if 'success_message' not in st.session_state:
89 | st.session_state.success_message = None
90 |
91 | def upload_to_s3(file, bucket, key):
92 | try:
93 | s3_client.upload_fileobj(file, bucket, key)
94 | return True
95 | except Exception as e:
96 | st.error(f"Error uploading to S3: {str(e)}")
97 | return False
98 |
99 | def trigger_wafr_review(input_data):
100 | try:
101 | sqs = boto3.client('sqs', region_name="{{REGION}}")
102 | queue_url = '{{SQS_QUEUE_NAME}}'
103 | message_body = json.dumps(input_data)
104 | response = sqs.send_message(
105 | QueueUrl=queue_url,
106 | MessageBody=message_body
107 | )
108 | return response['MessageId']
109 | except Exception as e:
110 | st.error(f"Error sending message to SQS: {str(e)}")
111 | return None
112 |
113 | def create_wafr_analysis(analysis_data, uploaded_file):
114 | analysis_id = str(uuid.uuid4())
115 |
116 | if uploaded_file:
117 | s3_key = f"{analysis_data['created_by']}/analyses/{analysis_id}/{uploaded_file.name}"
118 | if not upload_to_s3(uploaded_file, "{{WAFR_UPLOAD_BUCKET_NAME}}", s3_key):
119 | return False, "Failed to upload document to S3."
120 | else:
121 | return False, "No document uploaded. Please upload a document before creating the analysis."
122 |
123 | wafr_review_input = {
124 | 'analysis_id': analysis_id,
125 | 'analysis_name': analysis_data['analysis_name'],
126 | 'wafr_lens': analysis_data['wafr_lens'],
127 | 'analysis_submitter': analysis_data['created_by'],
128 | 'selected_pillars': analysis_data['selected_pillars'],
129 | 'document_s3_key': s3_key,
130 | 'review_owner': analysis_data['review_owner'],
131 | 'analysis_owner': analysis_data['created_by'],
132 | 'lenses': lenses[analysis_data['wafr_lens']],
133 | 'environment': analysis_data['environment'],
134 | 'workload_desc': analysis_data['workload_desc'],
135 | 'industry_type': analysis_data['industry_type'],
136 | 'analysis_review_type': analysis_data['analysis_review_type']
137 | }
138 |
139 | message_id = trigger_wafr_review(wafr_review_input)
140 | if message_id:
141 | dynamodb = boto3.resource('dynamodb', region_name="{{REGION}}")
142 | wafr_accelerator_runs_table = dynamodb.Table('{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
143 |
144 | creation_date = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
145 |
146 | wafr_accelerator_run_item = {
147 | 'analysis_id': analysis_id,
148 | 'analysis_submitter': analysis_data['created_by'],
149 | 'analysis_title': analysis_data['analysis_name'],
150 | 'selected_lens': analysis_data['wafr_lens'],
151 | 'creation_date': creation_date,
152 | 'review_status': "Submitted",
153 | 'selected_wafr_pillars': analysis_data['selected_pillars'],
154 | 'document_s3_key': s3_key,
155 | 'analysis_owner': analysis_data['created_by'],
156 | 'lenses': lenses[analysis_data['wafr_lens']],
157 | 'environment': analysis_data['environment'],
158 | 'workload_desc': analysis_data['workload_desc'],
159 | 'review_owner': analysis_data['review_owner'],
160 | 'industry_type': analysis_data['industry_type'],
161 | 'analysis_review_type': analysis_data['analysis_review_type']
162 | }
163 |
164 | wafr_accelerator_runs_table.put_item(Item=wafr_accelerator_run_item)
165 | else:
166 | return False, "Failed to start the analysis process."
167 |
168 | return True, f"WAFR Analysis created successfully! Message ID: {message_id}"
169 |
170 | # Main application layout
171 | st.title("Create New WAFR Analysis")
172 |
173 | # User info and logout in sidebar
174 | with st.sidebar:
175 | if st.button('Logout'):
176 | st.session_state['authenticated'] = False
177 | st.session_state.pop('username', None)
178 | st.rerun()
179 |
180 | # Success message
181 | if st.session_state.success_message:
182 | st.success(st.session_state.success_message)
183 | if st.button("Clear Message"):
184 | st.session_state.success_message = None
185 | st.rerun()
186 |
187 | # First row: Analysis Name and Workload Description
188 | with st.expander("Workload Analysis", expanded=True):
189 | st.subheader("Workload Analysis")
190 | analysis_review_type = st.selectbox("Analysis Type", ["Quick", "Deep with Well-Architected Tool"], index=["Quick", "Deep with Well-Architected Tool"].index(st.session_state.form_data['analysis_review_type']))
191 | analysis_name = st.text_input("Workload Name", value=st.session_state.form_data['analysis_name'], max_chars=100)
192 | workload_desc = st.text_area("Workload Description", value=st.session_state.form_data['workload_desc'], height=100, max_chars=250)
193 | if workload_desc:
194 | char_count = len(workload_desc)
195 | if char_count < 3:
196 | st.error("Workload Description must be at least 3 characters long")
197 |
198 |
199 | # Second row: Environment, Review Owner, Created By, Industry Type, Lens
200 | with st.expander("Additional Details", expanded=True):
201 | st.subheader("Industry & Lens Details")
202 | col1, col2 = st.columns(2)
203 | with col1:
204 | wafr_environment = st.selectbox("WAFR Environment", options=['PRODUCTION', 'PREPRODUCTION'], index=['PRODUCTION', 'PREPRODUCTION'].index(st.session_state.form_data['environment']))
205 | review_owner = st.text_input("Review Owner", value=st.session_state.form_data['review_owner'])
206 | st.text_input("Created By", value=st.session_state.form_data['created_by'], disabled=True)
207 | with col2:
208 | industry_type = st.selectbox("Industry Type", options=["Agriculture", "Automotive", "Defense", "Design_And_Engineering", "Digital_Advertising", "Education", "Environmental_Protection", "Financial_Services", "Gaming", "General_Public_Services", "Healthcare", "Hospitality", "InfoTech", "Justice_And_Public_Safety", "Life_Sciences", "Manufacturing", "Media_Entertainment", "Mining_Resources", "Oil_Gas", "Power_Utilities", "Professional_Services", "Real_Estate_Construction", "Retail_Wholesale", "Social_Protection", "Telecommunications", "Travel_Transportation_Logistics", "Other"])
209 | wafr_lens = st.selectbox("WAFR Lens", options=lens_list, index=lens_list.index(st.session_state.form_data['wafr_lens']))
210 |
211 | # Third row: Select WAFR Pillars
212 | with st.expander("Select Pillars", expanded=True):
213 | st.subheader("Well Architected Framework Pillars")
214 | pillars = ["Operational Excellence", "Security", "Reliability", "Performance Efficiency", "Cost Optimization", "Sustainability"]
215 | selected_pillars = st.multiselect("Select WAFR Pillars", options=pillars, default=st.session_state.form_data['selected_pillars'], key="pillar_select")
216 |
217 | # Document Upload
218 | with st.expander("Document Upload", expanded=True):
219 | st.subheader("Design Artifacts Upload")
220 | uploaded_file = st.file_uploader("Upload Document", type=["pdf"])
221 |
222 | def duplicate_wa_tool_workload(workload_name):
223 | try:
224 | next_token = None
225 |
226 | search_name = workload_name.lower()
227 |
228 | while True:
229 | if next_token:
230 | response = well_architected_client.list_workloads(NextToken=next_token)
231 | else:
232 | response = well_architected_client.list_workloads()
233 |
234 | for workload in response['WorkloadSummaries']:
235 | if workload['WorkloadName'].lower() == search_name:
236 | return True
237 |
238 | next_token = response.get('NextToken')
239 | if not next_token:
240 | break
241 |
242 | return False
243 |
244 | except ClientError as e:
245 | print(f"Error checking workload: {e}")
246 | return False
247 |
248 | def duplicate_wafr_accelerator_workload(workload_name):
249 | try:
250 | dynamodb = boto3.resource('dynamodb', region_name="{{REGION}}")
251 | wafr_accelerator_runs_table = dynamodb.Table('{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
252 |
253 | filter_expression = Attr("analysis_title").eq(workload_name)
254 |
255 | response = wafr_accelerator_runs_table.scan(
256 | FilterExpression=filter_expression,
257 | ProjectionExpression="analysis_title, analysis_id, analysis_submitter"
258 | )
259 | return len(response.get('Items', [])) > 0
260 |
261 | except ClientError as e:
262 | print(f"ClientError when checking workload: {e}")
263 | return False
264 | except Exception as e:
265 | print(f"Error checking workload: {e}")
266 | return False
267 |
268 | # Full-width button
269 | if st.button("Create WAFR Analysis", type="primary", use_container_width=True):
270 | if not analysis_name:
271 | st.error("Please enter an Analysis Name.")
272 | elif ((not workload_desc) or (len (workload_desc)<3)):
273 | st.error("Workload Description needs to be at least 3 characters long.")
274 | elif ((not review_owner) or (len (review_owner)<3)):
275 | st.error("Review owner needs to be at least 3 characters long.")
276 | elif not selected_pillars:
277 | st.error("Please select at least one WAFR Pillar.")
278 | elif not uploaded_file:
279 | st.error("Please upload a document.")
280 | else:
281 | if (duplicate_wafr_accelerator_workload(analysis_name) ):
282 | st.error("Workload with the same name already exists!")
283 | elif ((analysis_review_type == 'Deep with Well-Architected Tool') and (duplicate_wa_tool_workload(analysis_name)) ):
284 | st.error("Workload with the same name already exists in AWS Well Architected Tool!")
285 | else:
286 | st.session_state.form_data.update({
287 | 'wafr_lens': wafr_lens,
288 | 'environment': wafr_environment,
289 | 'analysis_name': analysis_name,
290 | 'selected_pillars': selected_pillars,
291 | 'workload_desc': workload_desc,
292 | 'review_owner': review_owner,
293 | 'industry_type': industry_type,
294 | 'analysis_review_type': analysis_review_type
295 | })
296 | with st.spinner("Creating WAFR Analysis..."):
297 | success, message = create_wafr_analysis(st.session_state.form_data, uploaded_file)
298 | if success:
299 | st.session_state.success_message = message
300 | st.session_state.form_submitted = True
301 | st.rerun()
302 | else:
303 | st.error(message)
304 |
305 | if st.session_state.form_submitted:
306 | st.session_state.form_data = {
307 | 'wafr_lens': lens_list[0],
308 | 'environment': 'PREPRODUCTION',
309 | 'analysis_name': '',
310 | 'created_by': get_current_user(),
311 | 'selected_pillars': [],
312 | 'workload_desc': '',
313 | 'review_owner': '',
314 | 'industry_type': 'Agriculture',
315 | 'analysis_review_type': 'Quick'
316 | }
317 | st.session_state.form_submitted = False
318 | st.rerun()
319 |
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/2_Existing_WAFR_Reviews.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import pandas as pd
3 | import datetime
4 | import boto3
5 | import json
6 | from boto3.dynamodb.types import TypeDeserializer
7 | import pytz
8 | import dotenv
9 |
10 | # Check authentication
11 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
12 | st.warning('You are not logged in. Please log in to access this page.')
13 | st.switch_page("pages/1_Login.py")
14 | st.set_page_config(page_title="WAFR Analysis Grid", layout="wide")
15 |
16 |
17 | # Logout function
18 | def logout():
19 | st.session_state['authenticated'] = False
20 | st.session_state.pop('username', None)
21 | st.rerun()
22 |
23 | # Add logout button in sidebar
24 | if st.sidebar.button('Logout'):
25 | logout()
26 |
27 | dotenv.load_dotenv()
28 |
29 | client = boto3.client("bedrock-runtime", region_name = "{{REGION}}")
30 | model_id = "anthropic.claude-3-5-sonnet-20240620-v1:0"
31 |
32 |
33 | def load_data():
34 | # Initialize DynamoDB client
35 | dynamodb = boto3.client('dynamodb', region_name='{{REGION}}')
36 |
37 | try:
38 | # Scan the table
39 | response = dynamodb.scan(TableName='{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
40 |
41 | items = response['Items']
42 |
43 | # Continue scanning if we haven't scanned all items
44 | while 'LastEvaluatedKey' in response:
45 | response = dynamodb.scan(
46 | TableName='{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}',
47 | ExclusiveStartKey=response['LastEvaluatedKey']
48 | )
49 | items.extend(response['Items'])
50 |
51 | # Check if items is empty
52 | if not items:
53 | st.warning("There are no existing WAFR review records")
54 | return pd.DataFrame(columns=['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars'])
55 |
56 | # Unmarshal the items
57 | deserializer = TypeDeserializer()
58 | unmarshalled_items = [{k: deserializer.deserialize(v) for k, v in item.items()} for item in items]
59 |
60 | # Convert to DataFrame
61 | df = pd.DataFrame(unmarshalled_items)
62 |
63 | # Define the mapping of expected column names
64 | column_mapping = {
65 | 'analysis_id': 'Analysis Id',
66 | 'analysis_title': 'Workload Name',
67 | 'workload_desc': 'Workload Description',
68 | 'analysis_review_type': 'Analysis Type',
69 | 'selected_lens': 'WAFR Lens',
70 | 'creation_date': 'Creation Date',
71 | 'review_status': 'Status',
72 | 'analysis_submitter': 'Created By',
73 | 'review_owner' : 'Review Owner',
74 | 'extracted_document' : 'Document',
75 | 'architecture_summary' : 'Solution Summary'
76 | }
77 |
78 | # Rename columns that exist in the DataFrame
79 | df = df.rename(columns={k: v for k, v in column_mapping.items() if k in df.columns})
80 |
81 | # Add missing columns with empty values
82 | for col in column_mapping.values():
83 | if col not in df.columns:
84 | df[col] = ''
85 |
86 | # Parse Pillars
87 | def parse_pillars(pillars):
88 | if isinstance(pillars, list):
89 | return [
90 | {
91 | 'pillar_id': item.get('pillar_id', ''),
92 | 'pillar_name': item.get('pillar_name', ''),
93 | 'llm_response': item.get('llm_response', '')
94 | if isinstance(item.get('llm_response'), str)
95 | else item.get('llm_response', {})
96 | }
97 | for item in pillars
98 | ]
99 | return []
100 |
101 | # Apply parse_pillars only if 'pillars' column exists
102 | if 'pillars' in df.columns:
103 | df['pillars'] = df['pillars'].apply(parse_pillars)
104 | else:
105 | df['pillars'] = [[] for _ in range(len(df))]
106 |
107 | # Ensure all required columns exist, add empty ones if missing
108 | required_columns = ['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars', 'Document']
109 | for col in required_columns:
110 | if col not in df.columns:
111 | df[col] = ''
112 |
113 | # Select and return required columns
114 | return df[required_columns]
115 |
116 | except Exception as e:
117 | st.error(f"An error occurred while loading data: {str(e)}")
118 | return pd.DataFrame(columns=['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars', 'Document'])
119 |
120 | # Function to display summary of a selected analysis
121 | def display_summary(analysis):
122 | st.subheader("Summary")
123 |
124 | # Ensure selected_wafr_pillars is a string representation of the array
125 | if isinstance(analysis['selected_wafr_pillars'], list):
126 | selected_wafr_pillars = ', '.join(analysis['selected_wafr_pillars'])
127 | else:
128 | selected_wafr_pillars = str(analysis['selected_wafr_pillars'])
129 |
130 | summary_data = {
131 | "Field": ["Analysis Id", "Workload Name", "Workload Description" ,"Analysis Type", "Status", "WAFR Lens", "Creation Date", "Created By", "Review Owner", "Selected WAFR Pillars"],
132 | "Value": [
133 | analysis['Analysis Id'],
134 | analysis['Workload Name'],
135 | analysis['Workload Description'],
136 | analysis['Analysis Type'],
137 | analysis['Status'],
138 | analysis['WAFR Lens'],
139 | analysis['Creation Date'],
140 | analysis['Created By'],
141 | analysis['Review Owner'],
142 | selected_wafr_pillars
143 | ]
144 | }
145 | summary_df = pd.DataFrame(summary_data)
146 | st.dataframe(summary_df, hide_index=True, use_container_width=True)
147 |
148 | # Function to display the solution summary for a selected analysis
149 | def display_design_review(analysis):
150 | st.subheader("Solution Summary")
151 | architecture_review = analysis['Solution Summary']
152 | if isinstance(architecture_review, str):
153 | st.write(architecture_review)
154 | else:
155 |         st.write("No solution summary available.")
156 |
157 | # Function to display pillar data
158 | def display_pillar(pillar):
159 | st.subheader(f"Review findings & recommendations for pillar: {pillar['pillar_name']}")
160 | llm_response = pillar.get('llm_response')
161 | if llm_response:
162 | st.write(llm_response)
163 | else:
164 | st.write("No LLM response data available.")
165 |
166 | def parse_stream(stream):
167 | for event in stream:
168 | chunk = event.get('chunk')
169 | if chunk:
170 | message = json.loads(chunk.get("bytes").decode())
171 | if message['type'] == "content_block_delta":
172 | yield message['delta']['text'] or ""
173 | elif message['type'] == "message_stop":
174 |                 yield "\n"
175 |
176 |
177 | # Main Streamlit app
178 | def main():
179 | st.title("WAFR Analysis")
180 |
181 | st.subheader("WAFR Analysis Runs", divider="rainbow")
182 |
183 |     # Load the WAFR analysis data
184 | data = load_data()
185 |
186 | # Display the data grid with selected columns
187 | selected_columns = ['Analysis Id', 'Workload Name', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By']
188 | st.dataframe(data[selected_columns], use_container_width=True)
189 |
190 | st.subheader("Analysis Details", divider="rainbow")
191 |
192 | # Create a selectbox for choosing an analysis
193 | analysis_names = data['Workload Name'].tolist()
194 | selected_analysis = st.selectbox("Select an analysis to view details:", analysis_names)
195 |
196 | # Display details of the selected analysis
197 | if selected_analysis:
198 | selected_data = data[data['Workload Name'] == selected_analysis].iloc[0]
199 |
200 | wafr_container = st.container()
201 |
202 | with wafr_container:
203 | # Create tabs dynamically
204 | tab_names = ["Summary", "Solution Summary"] + [f"{pillar['pillar_name']}" for pillar in selected_data['pillars']]
205 | tabs = st.tabs(tab_names)
206 |
207 | # Populate tabs
208 | with tabs[0]:
209 | display_summary(selected_data)
210 |
211 | with tabs[1]:
212 | display_design_review(selected_data)
213 |
214 | # Display pillar tabs
215 | for i, pillar in enumerate(selected_data['pillars'], start=2):
216 | with tabs[i]:
217 | display_pillar(pillar)
218 |
219 | st.subheader("", divider="rainbow")
220 |
221 | # Create chat container here
222 | chat_container = st.container()
223 |
224 | with chat_container:
225 | st.subheader("WAFR Chat")
226 |
227 | # Create a list of options including Summary, Solution Summary, and individual pillars
228 | chat_options = ["Summary", "Solution Summary", "Document"] + [pillar['pillar_name'] for pillar in selected_data['pillars']]
229 |
230 | # Let the user select an area to discuss
231 | selected_area = st.selectbox("Select an area to discuss:", chat_options)
232 |
233 | prompt = st.text_input("Ask a question about the selected area:")
234 |
235 | if prompt:
236 | # Prepare the context based on the selected area
237 | if selected_area == "Summary":
238 | area_context = "WAFR Analysis Summary:\n"
239 | area_context += f"Workload Name: {selected_data['Workload Name']}\n"
240 | area_context += f"Workload Description: {selected_data['Workload Description']}\n"
241 | area_context += f"WAFR Lens: {selected_data['WAFR Lens']}\n"
242 | area_context += f"Status: {selected_data['Status']}\n"
243 | area_context += f"Created By: {selected_data['Created By']}\n"
244 | area_context += f"Creation Date: {selected_data['Creation Date']}\n"
245 | area_context += f"Selected WAFR Pillars: {', '.join(selected_data['selected_wafr_pillars'])}\n"
246 | area_context += f"Architecture Review: {selected_data['Solution Summary']}\n"
247 | area_context += f"Review Owner: {selected_data['Review Owner']}\n"
248 | elif selected_area == "Solution Summary":
249 | area_context = "WAFR Solution Summary:\n"
250 | area_context += f"Architecture Review: {selected_data['Solution Summary']}\n"
251 | elif selected_area == "Document":
252 | area_context = "Document:\n"
253 | area_context += f"{selected_data['Document']}\n"
254 | else:
255 | pillar_data = next((pillar for pillar in selected_data['pillars'] if pillar['pillar_name'] == selected_area), None)
256 | if pillar_data:
257 | area_context = f"WAFR Analysis Context for {selected_area}:\n"
258 | area_context += pillar_data['llm_response']
259 | else:
260 | area_context = "Error: Selected area not found in the analysis data."
261 |
262 | # Combine the user's prompt with the context
263 | full_prompt = f"{area_context}\n\nUser Question: {prompt}\n\nPlease answer the question based on the WAFR analysis context provided above for the {selected_area}."
264 |
265 | body = json.dumps({
266 | "anthropic_version": "bedrock-2023-05-31",
267 | "max_tokens": 1024,
268 | "messages": [
269 | {
270 | "role": "user",
271 | "content": [{"type": "text", "text": full_prompt}],
272 | }
273 | ],
274 | })
275 |
276 |                 if "{{GUARDRAIL_ID}}" == "Not Selected":  # No guardrail configured
277 | streaming_response = client.invoke_model_with_response_stream(
278 | modelId=model_id,
279 | body=body
280 | )
281 | else: # Use guardrails
282 | streaming_response = client.invoke_model_with_response_stream(
283 | modelId=model_id,
284 | body=body,
285 | guardrailIdentifier="{{GUARDRAIL_ID}}",
286 | guardrailVersion="DRAFT",
287 | )
288 |
289 | st.subheader("Response")
290 | stream = streaming_response.get("body")
291 | st.write_stream(parse_stream(stream))
292 |
293 | if __name__ == "__main__":
294 | main()
295 |
--------------------------------------------------------------------------------
/user_data_script.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Check output in /var/log/cloud-init-output.log
3 | export AWS_DEFAULT_REGION={{REGION}}
4 | max_attempts=5
5 | attempt_num=1
6 | success=false
7 | # Retry in a loop: yum/rpm frequently fails with "RPM: error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Resource temporarily unavailable)"
8 | while [ "$success" = false ] && [ "$attempt_num" -le "$max_attempts" ]; do
9 | echo "Trying to install required modules..."
10 | yum update -y
11 | yum install -y python3-pip
12 | yum remove -y python3-requests
13 |     pip3 install boto3 awscli streamlit streamlit-authenticator numpy python-dotenv
16 | # Check the exit code of the command
17 | if [ $? -eq 0 ]; then
18 | echo "Installation succeeded!"
19 | success=true
20 | else
21 | echo "Attempt $attempt_num failed. Sleeping for 10 seconds and trying again..."
22 | sleep 10
23 | ((attempt_num++))
24 | fi
25 | done
26 |
27 | sudo mkdir -p /wafr-accelerator && cd /wafr-accelerator
28 | sudo chown -R ec2-user:ec2-user /wafr-accelerator
29 |
31 |
--------------------------------------------------------------------------------
/wafr-prompts/wafr-prompts.json:
--------------------------------------------------------------------------------
1 | {
2 | "data": [
3 | {
4 | "wafr_lens": "AWS Well-Architected Framework",
5 | "wafr_lens_alias": "wellarchitected",
6 | "wafr_pillar": "Operational Excellence",
7 | "wafr_pillar_id": 1,
8 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nOPS 1: How do you determine what your priorities are?\nOPS 2: How do you structure your organization to support your business outcomes?\nOPS 3: How does your organizational culture support your business outcomes?\nOPS 4: How do you implement observability in your workload?\nOPS 5: How do you reduce defects, ease remediation, and improve flow into production?\nOPS 6: How do you mitigate deployment risks?\nOPS 7: How do you know that you are ready to support a workload?\nOPS 8: How do you utilize workload observability in your organization?\nOPS 9: How do you understand the health of your operations?\nOPS 10: How do you manage workload and operations events?\nOPS 11: How do you evolve operations?"
9 | },
10 | {
11 | "wafr_lens": "AWS Well-Architected Framework",
12 | "wafr_lens_alias": "wellarchitected",
13 | "wafr_pillar": "Security",
14 | "wafr_pillar_id": 2,
15 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nSEC 1: How do you securely operate your workload?\nSEC 2: How do you manage identities for people and machines?\nSEC 3: How do you manage permissions for people and machines?\nSEC 4: How do you detect and investigate security events?\nSEC 5: How do you protect your network resources?\nSEC 6: How do you protect your compute resources?\nSEC 7: How do you classify your data?\nSEC 8: How do you protect your data at rest?\nSEC 9: How do you protect your data in transit?\nSEC 10: How do you anticipate, respond to, and recover from incidents?\nSEC 11: How do you incorporate and validate the security properties of applications throughout the design, development, and deployment lifecycle?"
16 | },
17 | {
18 | "wafr_lens": "AWS Well-Architected Framework",
19 | "wafr_lens_alias": "wellarchitected",
20 | "wafr_pillar": "Reliability",
21 | "wafr_pillar_id": 3,
22 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nREL 1: How do you manage service quotas and constraints?\nREL 2: How do you plan your network topology?\nREL 3: How do you design your workload service architecture?\nREL 4: How do you design interactions in a distributed system to prevent failures?\nREL 5: How do you design interactions in a distributed system to mitigate or withstand failures?\nREL 6: How do you monitor workload resources?\nREL 7: How do you design your workload to adapt to changes in demand?\nREL 8: How do you implement change?\nREL 9: How do you back up data?\nREL 10: How do you use fault isolation to protect your workload?\nREL 11: How do you design your workload to withstand component failures?\nREL 12: How do you test reliability?\nREL 13: How do you plan for disaster recovery (DR)?"
23 | },
24 | {
25 | "wafr_lens": "AWS Well-Architected Framework",
26 | "wafr_lens_alias": "wellarchitected",
27 | "wafr_pillar": "Performance Efficiency",
28 | "wafr_pillar_id": 4,
29 | "wafr_pillar_prompt": "Please answer the following questions for the Performance efficiency pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nPERF 1: How do you select the appropriate cloud resources and architecture patterns for your workload?\nPERF 2: How do you select and use compute resources in your workload?\nPERF 3: How do you store, manage, and access data in your workload?\nPERF 4: How do you select and configure networking resources in your workload?\nPERF 5: What process do you use to support more performance efficiency for your workload?"
30 | },
31 | {
32 | "wafr_lens": "AWS Well-Architected Framework",
33 | "wafr_lens_alias": "wellarchitected",
34 | "wafr_pillar": "Cost Optimization",
35 | "wafr_pillar_id": 5,
36 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nCOST 1: How do you implement cloud financial management?\nCOST 2: How do you govern usage?\nCOST 3: How do you monitor your cost and usage?\nCOST 4: How do you decommission resources?\nCOST 5: How do you evaluate cost when you select services?\nCOST 6: How do you meet cost targets when you select resource type, size and number?\nCOST 7: How do you use pricing models to reduce cost?\nCOST 8: How do you plan for data transfer charges?\nCOST 9: How do you manage demand, and supply resources?\nCOST 10: How do you evaluate new services?\nCOST 11: How do you evaluate the cost of effort?"
37 | },
38 | {
39 | "wafr_lens": "AWS Well-Architected Framework",
40 | "wafr_lens_alias": "wellarchitected",
41 | "wafr_pillar": "Sustainability",
42 | "wafr_pillar_id": 6,
43 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nSUS 1: How do you select Regions for your workload?\nSUS 2: How do you align cloud resources to your demand?\nSUS 3: How do you take advantage of software and architecture patterns to support your sustainability goals?\nSUS 4: How do you take advantage of data management policies and patterns to support your sustainability goals?\nSUS 5: How do you select and use cloud hardware and services in your architecture to support your sustainability goals?\nSUS 6: How do your organizational processes support your sustainability goals?"
44 | },
45 | {
46 | "wafr_lens": "Data Analytics Lens",
47 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
48 | "wafr_pillar": "Operational Excellence",
49 | "wafr_pillar_id": 1,
50 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nOPS 1: How do you measure the health of your analytics workload?\nOPS 2: How do you deploy jobs and applications in a controlled and reproducible way?"
51 | },
52 | {
53 | "wafr_lens": "Data Analytics Lens",
54 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
55 | "wafr_pillar": "Security",
56 | "wafr_pillar_id": 2,
57 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nSEC 1: How do you protect data in your organization's analytics workload?\nSEC 2: How do you manage access to data within your organization's source, analytics, and downstream systems?\nSEC 3: How do you protect the infrastructure of the analytics workload?"
58 | },
59 | {
60 | "wafr_lens": "Data Analytics Lens",
61 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
62 | "wafr_pillar": "Reliability",
63 | "wafr_pillar_id": 3,
64 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nREL 1: How do you design analytics workloads to withstand and mitigate failures?\nREL 2: How do you govern data and metadata changes?"
65 | },
66 | {
67 | "wafr_lens": "Data Analytics Lens",
68 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
69 | "wafr_pillar": "Performance Efficiency",
70 | "wafr_pillar_id": 4,
71 | "wafr_pillar_prompt": "Please answer the following questions for the Performance Efficiency pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nPERF 1: How do you select the best-performing options for your analytics workload?\nPERF 2: How do you select the best-performing storage options for your workload?\nPERF 3: How do you select the best-performing file formats and partitioning?"
72 | },
73 | {
74 | "wafr_lens": "Data Analytics Lens",
75 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
76 | "wafr_pillar": "Cost Optimization",
77 | "wafr_pillar_id": 5,
78 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nCOST 1: How do you select the compute and storage solution for your analytics workload?\nCOST 2: How do you measure and attribute the analytics workload financial accountability?\nCOST 3: How do you manage the cost of your workload over time?\nCOST 4: How do you choose the financially-optimal pricing models of the infrastructure?"
79 | },
80 | {
81 | "wafr_lens": "Data Analytics Lens",
82 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
83 | "wafr_pillar": "Sustainability",
84 | "wafr_pillar_id": 6,
85 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nSUS 1: How does your organization measure and improve its sustainability practices?"
86 | },
87 | {
88 | "wafr_lens": "Financial Services Industry Lens",
89 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
90 | "wafr_pillar": "Operational Excellence",
91 | "wafr_pillar_id": 1,
92 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nOPS 1: Have you defined risk management roles for the cloud?\nOPS 2: Have you completed an operational risk assessment?\nOPS 3: Have you assessed your specific workload against regulatory needs?\nOPS 4: How do you assess your ability to operate a workload in the cloud?\nOPS 5: How do you understand the health of your workload?\nOPS 6: How do you assess the business impact of a cloud provider service event?\nOPS 7: Have you developed a continuous improvement model?"
93 | },
94 | {
95 | "wafr_lens": "Financial Services Industry Lens",
96 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
97 | "wafr_pillar": "Security",
98 | "wafr_pillar_id": 2,
99 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nSEC 1: How does your governance enable secure cloud adoption at scale?\nSEC 2: How do you achieve, maintain, and monitor ongoing compliance with regulatory guidelines and mandates?\nSEC 3: How do you monitor the use of elevated credentials, such as administrative accounts, and guard against privilege escalation?\nSEC 4: How do you accommodate separation of duties as part of your identity and access management design?\nSEC 5: How are you monitoring your ongoing cloud environment for potential threats?\nSEC 6: How do you address emerging threats?\nSEC 7: How are you inspecting your financial services infrastructure and network for unauthorized traffic?\nSEC 8: How do you isolate your software development lifecycle (SDLC) environments (like development, test, and production)?\nSEC 9: How are you managing your encryption keys?\nSEC 10: How are you handling data loss prevention in the cloud environment?\nSEC 11: How are you protecting against ransomware?\nSEC 12: How are you meeting your obligations for incident reporting to regulators?"
100 | },
101 | {
102 | "wafr_lens": "Financial Services Industry Lens",
103 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
104 | "wafr_pillar": "Reliability",
105 | "wafr_pillar_id": 3,
106 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nREL 1: Have you planned for events that impact your software development infrastructure and challenge your recovery & resolution plans?\nREL 2: Are you practicing continuous resilience to ensure that your services meet regulatory availability and recovery requirements?\nREL 3: How are your business and regulatory requirements driving the resilience of your workload?\nREL 4: Does the resilience and the architecture of your workload reflect the business requirements and resilience tier?\nREL 5: Is the resilience of the architecture addressing challenges for distributed workloads across AWS and an external entity?\nREL 6: To mitigate operational risks, can your workload owners detect, locate, and recover from gray failures?\nREL 7: How do you monitor your resilience objectives to achieve your strategic objectives and business plan?\nREL 8: How do you monitor your resources to understand your workload's health?\nREL 9: How are you backing up data in the cloud?\nREL 10: How are backups retained?"
107 | },
108 | {
109 | "wafr_lens": "Financial Services Industry Lens",
110 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
111 | "wafr_pillar": "Performance Efficiency",
112 | "wafr_pillar_id": 4,
113 | "wafr_pillar_prompt": "Please answer the following questions for the Performance Efficiency pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nPERF 1: How do you select the best performing architecture?\nPERF 2: How do you select your compute architecture?\nPERF 3: How do you select your storage architecture?\nPERF 4: How do you select your network architecture?\nPERF 5: How do you evaluate compliance with performance requirements?\nPERF 6: How do you make trade-offs in your architecture?"
114 | },
116 | {
117 | "wafr_lens": "Financial Services Industry Lens",
118 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
119 | "wafr_pillar": "Cost Optimization",
120 | "wafr_pillar_id": 5,
121 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nCOST 1: Is your cloud team educated on relevant technical and commercial optimization mechanisms?\nCOST 2: Do you apply the Pareto-principle (80/20 rule) to manage, optimize, and plan your cloud usage and spend?\nCOST 3: Do you use automation to drive scale for Cloud Financial Management practices?\nCOST 4: How do you promote cost-awareness within your organization?\nCOST 5: How do you track anomalies in your ongoing costs for AWS services?\nCOST 6: How do you track your workload usage cycles?\nCOST 7: Are you using all the available AWS credit and investment programs?\nCOST 8: Are you monitoring usage of Savings Plans regularly?\nCOST 9: Are you using the cost advantages of tiered storage?\nCOST 10: Do you use lower cost Regions to run less data-intensive or time-sensitive workloads?\nCOST 11: Do you use cost tradeoffs of various AWS pricing models in your workload design?\nCOST 12: Are you saving costs by adopting a set of modern microservice architectures?\nCOST 13: Do you use cloud services to accommodate consulting or testing of projects?\nCOST 14: How do you measure the cost of licensing third-party applications and software?\nCOST 15: Have you reviewed your ongoing cost structure tradeoffs for your current AWS services lately?\nCOST 16: Are you continuously assessing the ongoing costs and usage of your cloud implementations?\nCOST 17: Are you continually reviewing your workload to provide the most cost-effective resources?\nCOST 18: Do you have specific workload modernization or refactoring goals in your cloud strategy?\nCOST 19: Do you use the cloud to drive innovation & operational excellence of your business model to impact both the top & bottom line?"
122 | },
123 | {
124 | "wafr_lens": "Financial Services Industry Lens",
125 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
126 | "wafr_pillar": "Sustainability",
127 | "wafr_pillar_id": 6,
128 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nSUS 1: How do you select the most sustainable Regions in your area?\nSUS 2: How do you address data sovereignty regulations for location of sustainable Region?\nSUS 3: How do you select a Region to optimize financial services workloads for sustainability?\nSUS 4: How do you prioritize business critical functions over non-critical functions?\nSUS 5: How do you define, review, and optimize network access patterns for sustainability?\nSUS 6: How do you monitor and minimize resource usage for financial services workloads?\nSUS 7: How do you optimize batch processing components for sustainability?\nSUS 8: How do you optimize your resource usage?\nSUS 9: How do you optimize areas of your code that use the most resources?\nSUS 10: Have you selected the storage class with the lowest carbon footprint?\nSUS 11: Do you store processed data or raw data?\nSUS 12: What is your process for benchmarking instances for existing workloads?\nSUS 13: Can you complete workloads over more time while not violating your maximum SLA?\nSUS 14: Do you have multi-architecture images for grid computing systems?\nSUS 15: What is your testing process for workloads that require floating point precision?\nSUS 16: Do you achieve a judicious use of development resources?\nSUS 17: How do you minimize your test, staging, sandbox instances?\nSUS 18: How do you define the minimum requirement in response time for customers in order to maximize your green SLA?"
129 | },
130 | {
131 | "wafr_lens": "Generative AI Lens",
132 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
133 | "wafr_pillar": "Operational Excellence",
134 | "wafr_pillar_id": 1,
135 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nOPS 1: How do you achieve and verify consistent model output quality?\nOPS 2: How do you monitor and manage the operational health of your applications?\nOPS 3: How do you maintain traceability for your models, prompts, and assets?\nOPS 4: How do you automate the lifecycle management of your generative AI workloads?\nOPS 5: How do you determine when to execute Gen AI model customization?"
136 | },
137 | {
138 | "wafr_lens": "Generative AI Lens",
139 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
140 | "wafr_pillar": "Security",
141 | "wafr_pillar_id": 2,
142 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nSEC 1: How do you manage access to generative AI endpoints?\nSEC 2: How do you prevent generative AI applications from generating harmful, biased, or factually incorrect responses?\nSEC 3: How do you monitor and audit events associated with your generative AI workloads?\nSEC 4: How do you secure system and user prompts?\nSEC 5: How do you prevent excessive agency for models?\nSEC 6: How do you detect and remediate data poisoning risks?"
143 | },
144 | {
145 | "wafr_lens": "Generative AI Lens",
146 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
147 | "wafr_pillar": "Reliability",
148 | "wafr_pillar_id": 3,
149 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nREL 1: How do you determine throughput quotas (or needs) for foundation models?\nREL 2: How do you maintain reliable communication between different components of your generative AI architecture?\nREL 3: How do you implement remediation actions for generative AI workload loops, retries, and failures?\nREL 4: How do you maintain versions for prompts, model parameters, and foundation models?\nREL 5: How do you distribute inference workloads over multiple regions of availability?\nREL 6: How do you design high-performance distributed computation tasks to maximize successful completion?"
150 | },
151 | {
152 | "wafr_lens": "Generative AI Lens",
153 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
154 | "wafr_pillar": "Performance Efficiency",
155 | "wafr_pillar_id": 4,
156 | "wafr_pillar_prompt": "Please answer the following questions for the Performance efficiency pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nPERF 1: How do you capture and improve the performance of your generative AI models in production?\nPERF 2: How do you verify your generative AI workload maintains acceptable performance levels?\nPERF 3: How do you optimize computational resources required for high-performance distributed computation tasks?\nPERF 4: How do you improve the performance of data retrieval systems?"
157 | },
158 | {
159 | "wafr_lens": "Generative AI Lens",
160 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
161 | "wafr_pillar": "Cost Optimization",
162 | "wafr_pillar_id": 5,
163 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nCOST 1: How do you select the appropriate model to optimize costs?\nCOST 2: How do you select a cost-effective pricing model (for example, provisioned, on-demand, hosted, or batch)?\nCOST 3: How do you engineer prompts to optimize cost?\nCOST 4: How do you optimize vector stores for cost?\nCOST 5: How do you optimize agent workflows for cost?"
164 | },
165 | {
166 | "wafr_lens": "Generative AI Lens",
167 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/genai",
168 | "wafr_pillar": "Sustainability",
169 | "wafr_pillar_id": 6,
170 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR) Generative AI Lens.\nQuestions:\nSUS 1: How do you minimize the computational resources needed for training, customizing, and hosting generative AI workloads?\nSUS 2: How can you optimize data processing and storage to minimize energy consumption and maximize efficiency?\nSUS 3: How do you maintain model efficiency and resource optimization when working with large language models?"
171 | }
172 | ]
173 | }
174 |
175 |
176 |
--------------------------------------------------------------------------------
/well_architected_docs/dataanalytics/analytics-lens.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/dataanalytics/analytics-lens.pdf
--------------------------------------------------------------------------------
/well_architected_docs/financialservices/wellarchitected-financial-services-industry-lens.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/financialservices/wellarchitected-financial-services-industry-lens.pdf
--------------------------------------------------------------------------------
/well_architected_docs/genai/generative-ai-lens.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/genai/generative-ai-lens.pdf
--------------------------------------------------------------------------------
/well_architected_docs/overview/wellarchitected-framework.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/overview/wellarchitected-framework.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-cost-optimization-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-cost-optimization-pillar.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-operational-excellence-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-operational-excellence-pillar.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-performance-efficiency-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-performance-efficiency-pillar.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-reliability-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-reliability-pillar.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-security-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-security-pillar.pdf
--------------------------------------------------------------------------------
/well_architected_docs/wellarchitected/wellarchitected-sustainability-pillar.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/7231edb7c98182dfbec1f3861cd2da48ca97326b/well_architected_docs/wellarchitected/wellarchitected-sustainability-pillar.pdf
--------------------------------------------------------------------------------