├── .gitignore
├── Additional Considerations.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── app.py
├── cdk.json
├── graphics
│   ├── chat.png
│   ├── createnew.png
│   ├── existing.png
│   ├── home.png
│   ├── kbbucket.png
│   ├── loginpage.png
│   ├── output.png
│   └── walink.png
├── lambda_dir
│   ├── extract_document_text
│   │   └── extract_document_text.py
│   ├── generate_pillar_question_response
│   │   └── generate_pillar_question_response.py
│   ├── generate_prompts_for_all_the_selected_pillars
│   │   └── generate_prompts_for_all_the_selected_pillars.py
│   ├── generate_solution_summary
│   │   └── generate_solution_summary.py
│   ├── insert_wafr_prompts
│   │   └── insert_wafr_prompts.py
│   ├── prepare_wafr_review
│   │   └── prepare_wafr_review.py
│   ├── replace_ui_tokens
│   │   └── replace_ui_tokens.py
│   ├── start_wafr_review
│   │   └── start_wafr_review.py
│   └── update_review_status
│       └── update_review_status.py
├── requirements.txt
├── source.bat
├── sys-arch.png
├── ui_code
│   ├── WAFR_Accelerator.py
│   ├── pages
│   │   └── 3_System_Architecture.py
│   ├── sys-arch.png
│   └── tokenized-pages
│       ├── 1_Login.py
│       ├── 1_New_WAFR_Review.py
│       └── 2_Existing_WAFR_Reviews.py
├── user_data_script.sh
├── wafr-prompts
│   └── wafr-prompts.json
├── wafr_genai_accelerator
│   └── wafr_genai_accelerator_stack.py
└── well_architected_docs
    └── README.MD
/.gitignore:
--------------------------------------------------------------------------------
1 | # CDK specific
2 | cdk.out/
3 | cdk.context.json
4 | .cdk.staging/
5 | *.tabl.json
6 |
7 | # Python specific
8 | __pycache__/
9 |
10 | # Project specific
11 | node_modules
12 | .DS_Store
13 | package.json
14 | package-lock.json
15 | *.pdf
16 |
17 |
18 |
--------------------------------------------------------------------------------
/Additional Considerations.md:
--------------------------------------------------------------------------------
1 | # Security and Compliance Considerations
2 |
3 | ## TLS 1.2 enforcement
4 | To enforce TLS 1.2 for compliance requirements:
5 | - Create a custom SSL/TLS certificate
6 | - Disable SSL 3.0 and TLS 1.0
7 | - Enable TLS 1.2 or higher
8 | - Implement strong cipher suites (see the CDK sketch below)
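A minimal CDK sketch of the CloudFront side of this, assuming a custom domain, a pre-provisioned ACM certificate (CloudFront certificates live in us-east-1), and a hypothetical origin; the construct names are placeholders, not the stack's actual ones:

```
# Hypothetical sketch: enforcing TLS 1.2+ for viewer connections.
from aws_cdk import aws_certificatemanager as acm
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins

cert_arn = "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"

distribution = cloudfront.Distribution(
    self, "WafrDistribution",
    default_behavior=cloudfront.BehaviorOptions(
        origin=origins.HttpOrigin("alb.example.com"),
        viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    ),
    domain_names=["wafr.example.com"],
    certificate=acm.Certificate.from_certificate_arn(self, "WafrCert", cert_arn),
    # TLS_V1_2_2021 rejects SSL 3.0 / TLS 1.0 / TLS 1.1 and applies strong cipher suites
    minimum_protocol_version=cloudfront.SecurityPolicyProtocol.TLS_V1_2_2021,
)
```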
9 |
10 | ## Multi-Factor Authentication (MFA)
11 | To implement MFA:
12 | - Add MFA to your Cognito user pool
13 | - Follow the [AWS Cognito MFA setup guide](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html); a CDK sketch follows below
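A minimal CDK sketch, assuming the user pool is defined inside the stack class; the construct id is a placeholder, not the sample's actual id:

```
# Hypothetical sketch: requiring TOTP-based MFA on the Cognito user pool.
from aws_cdk import aws_cognito as cognito

user_pool = cognito.UserPool(
    self, "WafrUserPool",
    mfa=cognito.Mfa.REQUIRED,  # every sign-in must complete a second factor
    mfa_second_factor=cognito.MfaSecondFactor(
        otp=True,   # authenticator-app TOTP codes
        sms=False,  # set to True to also allow SMS codes
    ),
)
```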
14 |
15 | ## VPC Flow Logs
16 | To enable VPC flow logs:
17 | - Follow the instructions in the [AWS Knowledge Center guide](https://repost.aws/knowledge-center/saw-activate-vpc-flow-logs)
18 | - Monitor and analyze network traffic
19 | - Ensure proper logging configuration (see the CDK sketch below)
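A minimal CDK sketch, assuming `vpc` refers to the VPC construct created by the stack:

```
# Hypothetical sketch: capturing all (accepted and rejected) VPC traffic.
# The destination defaults to CloudWatch Logs.
from aws_cdk import aws_ec2 as ec2

ec2.FlowLog(
    self, "WafrVpcFlowLog",
    resource_type=ec2.FlowLogResourceType.from_vpc(vpc),
    traffic_type=ec2.FlowLogTrafficType.ALL,
)
```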
20 |
21 | ## IAM permissions for the Well-Architected Tool
22 | Currently, the IAM policies for the Well-Architected Tool use a wildcard (*) as the target resource. This is because the sample
23 | requires full access to the Well-Architected Tool to create new workloads, update workloads as analysis progresses, and read existing workload names to avoid duplicate workload creation: [AWS Well-Architected Tool identity-based policy examples](https://docs.aws.amazon.com/wellarchitected/latest/userguide/security_iam_id-based-policy-examples.html)
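For illustration only, a hedged CDK sketch of such a statement; the action list is a sample rather than the stack's exact policy:

```
# Hypothetical sketch: Well-Architected Tool permissions against resource "*",
# since workload creation and listing are not scoped to a single workload ARN.
from aws_cdk import aws_iam as iam

wa_statement = iam.PolicyStatement(
    actions=[
        "wellarchitected:CreateWorkload",
        "wellarchitected:ListWorkloads",
        "wellarchitected:GetAnswer",
        "wellarchitected:UpdateAnswer",
    ],
    resources=["*"],
)
```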
24 |
25 | ## Size of the text extracted from the uploaded document
26 | The extracted document content, alongside the rest of the analysis and associated information, is stored in the same DynamoDB item. The maximum DynamoDB item size is 400KB, so uploading an extra-long document may exceed this limit.
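A hedged mitigation sketch, reusing names from `extract_document_text.py`; the 350KB budget is an assumed figure chosen to leave headroom for the other attributes:

```
# Sketch: truncate the extracted text to a byte budget before update_item,
# so the item stays under the 400KB DynamoDB limit.
MAX_EXTRACTED_BYTES = 350 * 1024  # assumed headroom budget

def truncate_to_budget(text, budget=MAX_EXTRACTED_BYTES):
    encoded = text.encode("utf-8")
    if len(encoded) <= budget:
        return text
    # Cut on the byte budget, dropping any partially split final character.
    return encoded[:budget].decode("utf-8", errors="ignore")

wafr_accelerator_runs_table.update_item(
    Key=wafr_accelerator_run_key,
    UpdateExpression="SET extracted_document = :val",
    ExpressionAttributeValues={':val': truncate_to_budget(extracted_document_text)},
    ReturnValues='UPDATED_NEW'
)
```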
27 |
28 | ## Important note
29 | ⚠️ When reviewing model-generated analysis:
30 | - Always verify the responses independently
31 | - Remember that LLM outputs are not deterministic
32 | - Cross-reference with official AWS documentation
33 | - Validate against your specific use case requirements
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *main* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT No Attribution
2 |
3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of
6 | this software and associated documentation files (the "Software"), to deal in
7 | the Software without restriction, including without limitation the rights to
8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
9 | the Software, and to permit persons to whom the Software is furnished to do so.
10 |
11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17 |
18 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Sample AWS Well-Architected Framework Review (WAFR) Acceleration with Generative AI (GenAI)
2 |
3 | ## Name
4 |
5 | AWS Well-Architected Framework Review (WAFR) Acceleration with Generative AI (GenAI)
6 |
7 | ## Description
8 |
9 | This is a comprehensive sample designed to facilitate and expedite the AWS Well-Architected Framework Review process.
10 |
11 |
12 | This sample aims to accelerate AWS Well-Architected Framework Review (WAFR) velocity and adoption by leveraging the power of generative AI to provide organizations with automated, comprehensive analysis and recommendations for optimizing their AWS architectures.
13 |
14 |
15 | ## Core Features
16 |
17 |
18 |
19 | * Ability to upload technical content (for example, solution design and architecture documents) in PDF format for review
20 | * Creation of architecture assessment including:
21 | * Solution summary
22 | * Assessment
23 | * Well-Architected best practices
24 | * Recommendations for improvements
25 | * Risk
26 | * Ability to chat with the document as well as generated content
27 | * Creation of Well-Architected workload in Well-Architected tool that has:
28 |     * Initial selection of choices for each of the questions based on the assessment.
29 | * Notes populated with the generated assessment.
30 |
31 | ## Optional / Configurable Features
32 |
33 | * [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/) - initial set of Amazon Bedrock Guardrail configurations for Responsible AI.
34 | * Default - Enabled
35 |
36 | _* Note: The above list of features can be individually enabled and disabled by updating the 'optional_features' JSON object in the 'app.py' file._
37 |
38 | ## Technical Architecture
39 |
40 | 
41 |
42 | ## Implementation Guide
43 |
44 | ### Pre-requisites
45 | * Ensure you have access to the following models in Amazon Bedrock:
46 | * Titan Text Embeddings V2
47 |     * Claude 3.5 Sonnet
48 |
49 | ### Cloning Repository
50 |
51 | Clone the repository to your local directory. You can use the following command:
52 | ```
53 | git clone https://github.com/aws-samples/sample-well-architected-acceleration-with-generative-ai.git
54 | ```
55 | Alternatively, you can download the code as a zip file.
56 |
57 | ### Code Download
58 |
59 | Download the compressed source code. You can do this by selecting the ‘Code’ drop-down menu at the top right.
60 | If you downloaded the .zip file, you can use the following command:
61 |
62 | ```
63 | unzip sample-well-architected-acceleration-with-generative-ai-main.zip
64 | ```
65 | ```
66 | cd sample-well-architected-acceleration-with-generative-ai-main/
67 | ```
68 |
69 | ### Preparing and populating Amazon Bedrock Knowledge Base with AWS Well-Architected Reference Documents
70 |
71 | The Amazon Bedrock knowledge base is driven by the AWS Well-Architected documents. Download the documents in PDF format and place them under the 'well_architected_docs' folder. They must be in place before the build so that they are ingested during deployment. Delete the default README.MD from the 'well_architected_docs' folder.
72 |
73 | 
74 |
75 |
76 | **AWS Well-Architected Framework Overview:** (place it under 'well_architected_docs/overview' subfolder)
77 |
78 |
79 | * [Overview](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html)
80 |
81 | **AWS Well-Architected Framework pillar documents:** (place them under 'well_architected_docs/wellarchitected' subfolder)
82 |
83 |
84 | * [Operational Excellence](https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/welcome.html)
85 |
86 | * [Security](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html)
87 |
88 | * [Reliability](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html)
89 |
90 | * [Performance efficiency](https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/welcome.html)
91 |
92 | * [Cost optimization](https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/welcome.html)
93 |
94 | * [Sustainability](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html)
95 |
96 | Repeat the above for:
97 |
98 | **[Financial Services Industry Lens:](https://docs.aws.amazon.com/wellarchitected/latest/financial-services-industry-lens/financial-services-industry-lens.html)** Place it under 'well_architected_docs/financialservices' subfolder.
99 |
100 | **[Data Analytics Lens:](https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/analytics-lens.html)** Place it under 'well_architected_docs/dataanalytics' subfolder.
101 |
102 | The 'well_architected_docs' folder should now look as below:
103 | 
104 |
105 | * Note: At present, only the above Well-Architected lenses are supported.
106 |
107 | * Note: If you missed this step or would like to refresh the documents with future releases of the Well-Architected Framework, then:
108 |     * Upload the files to the knowledge base bucket created by the stack, adhering to the above folder structure.
109 |     * On the AWS Management Console, go to Amazon Bedrock -> Knowledge bases -> the knowledge base created by the stack -> Data Source -> Sync. This re-syncs the knowledge base from the S3 bucket; a programmatic alternative is sketched below.
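A hedged boto3 sketch of the programmatic sync; the knowledge base id is a placeholder, and a single S3 data source is assumed, as in this sample:

```
# Sketch: re-sync the Bedrock knowledge base from its S3 data source.
import boto3

client = boto3.client("bedrock-agent", region_name="us-west-2")
kb_id = "YOUR_KB_ID"  # placeholder: take this from the Bedrock console

data_source_id = client.list_data_sources(knowledgeBaseId=kb_id)[
    "dataSourceSummaries"][0]["dataSourceId"]

job = client.start_ingestion_job(knowledgeBaseId=kb_id, dataSourceId=data_source_id)
print(job["ingestionJob"]["status"])  # e.g. STARTING
```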
110 |
111 | ### CDK Deployment
112 |
113 | Manually create a virtualenv on MacOS and Linux:
114 |
115 | ```
116 | python3 -m venv .venv
117 | ```
118 |
119 | After the init process completes and the virtualenv is created, you can use the following
120 | step to activate your virtualenv.
121 |
122 | ```
123 | source .venv/bin/activate
124 | ```
125 |
126 | If you are on a Windows platform, activate the virtualenv like this:
127 |
128 | ```
129 | % .venv\Scripts\activate.bat
130 | ```
131 |
132 | Once the virtualenv is activated, you can install the required dependencies.
133 |
134 | ```
135 | pip3 install -r requirements.txt
136 | ```
137 |
138 | Open "wafr_genai_accelerator/wafr_genai_accelerator_stack.py" file using a file editor such as nano and update the Cloundfront [managed prefix list](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/LocationsOfEdgeServers.html) for your deployment region. As a default, it uses 'us-west-2' managed prefix list.
139 |
140 |
141 | ```
142 | alb_security_group.add_ingress_rule(
143 | ec2.Peer.prefix_list("pl-82a045eb"),
144 | ec2.Port.HTTP,
145 | "Allow inbound connections only from Cloudfront to Streamlit port"
146 | )
147 | ```
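If you are unsure of the prefix list id for your region, the following hedged boto3 sketch looks up the AWS-managed CloudFront origin-facing list:

```
# Sketch: find the region's CloudFront origin-facing managed prefix list id.
import boto3

ec2_client = boto3.client("ec2", region_name="us-west-2")  # your deployment region
response = ec2_client.describe_managed_prefix_lists(
    Filters=[{
        "Name": "prefix-list-name",
        "Values": ["com.amazonaws.global.cloudfront.origin-facing"],
    }]
)
print(response["PrefixLists"][0]["PrefixListId"])  # e.g. pl-82a045eb in us-west-2
```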
148 |
149 | Open app.py, then update and uncomment one of the following lines based on your deployment target:
150 |
151 | ```
152 | #env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
153 | ```
154 | Or
155 | ```
156 | #env=cdk.Environment(account='111122223333', region='us-west-2'),
157 | ```
158 |
159 | If you are deploying CDK for the first time in your account, run the below command (if not, skip this step):
160 |
161 | ```
162 | cdk bootstrap
163 | ```
164 |
165 | At this point you can now synthesize the CloudFormation template for this code.
166 |
167 | ```
168 | cdk synth
169 | ```
170 |
171 | You can now deploy the CDK stack:
172 |
173 | ```
174 | cdk deploy
175 | ```
176 |
177 | You will need to enter 'y' to confirm the deployment. The deployment can take around 20-25 minutes to complete.
178 |
179 |
180 | ### Demo configurations
181 |
182 | On CDK deployment completion, you will see three outputs:
183 | a) Amazon Cognito user pool name
184 | b) Front-end UI EC2 instance id
185 | c) Amazon CloudFront URL for the web application
186 |
187 |
188 | ##### Add a user to Amazon Cognito user pool
189 |
190 | Firstly, [add a user to the Amazon Cognito pool](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html#creating-a-new-user-using-the-console) indicated in the output. You will use these user credentials for application login later on. A scripted alternative is sketched below.
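As a scripted alternative, a hedged boto3 sketch; the pool id, username, and temporary password are placeholders:

```
# Sketch: create the demo user via the Cognito API instead of the console.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-west-2")
cognito.admin_create_user(
    UserPoolId="us-west-2_EXAMPLE",     # from the CDK output
    Username="wafr-demo-user",
    TemporaryPassword="ChangeMe-123!",  # must be reset at first login
    MessageAction="SUPPRESS",           # skip the invitation email
)
```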
191 |
192 | ##### EC2 sanity check
193 |
194 | Next, log in to the EC2 instance as ec2-user (for example, using EC2 Instance Connect) and check whether the front-end user interface folder has been synced.
195 |
196 | ```
197 | cd /wafr-accelerator
198 | ls -l
199 | ```
200 |
201 | If there is no "pages" folder then sync the front end user interface as below.
202 |
203 | ```
204 | python3 syncUIFolder.py
205 | ```
206 |
207 | Ensure /wafr-accelerator and all the files underneath are owned by ec2-user. If not, execute the following.
208 |
209 | ```
210 | sudo chown -R ec2-user:ec2-user /wafr-accelerator
211 | ```
212 |
213 | ##### Running the application
214 |
215 | Once the UI folder has been synced, run the application as below:
216 |
217 | ```
218 | streamlit run WAFR_Accelerator.py
219 | ```
220 |
221 | You can now use the Amazon CloudFront URL from the CDK output to access the sample application in a web browser.
222 |
223 |
224 | ### Testing the demo application
225 |
226 | Open a new web browser window and paste the Amazon CloudFront URL copied earlier into the address bar. On the login page, enter the credentials for the previously created user.
227 |
228 | 
229 |
230 | On the home page, click on the "New WAFR Review" link.
231 |
232 | 
233 |
234 | On "Create New WAFR Analysis" page, select the analysis type ("Quick" or "Deep with Well-Architected Tool") and provide analysis name, description, Well Architectd lens, etc. in the input form.
235 |
236 | **Analysis Types**:
237 | * **"Quick"** - quick analysis without the creation of workload in the AWS Well-Architected tool. Relatively faster as it groups all questions for an individual pillar into a single prompt; suitable for initial assessment.
238 | * **"Deep with Well-Architected Tool"** - robust and deep analysis that also creates workload in the AWS Well-Architected tool. Takes longer to complete as it doesn't group questions and responses are generated for every question individually. This takes longer to execute.
239 |
240 | 
241 |
242 | * Note: "Created by" field is automatically populated with the logged user name.
243 |
244 | You have the option to select one or more Well-Architected pillars. Finally, upload the solution architecture / technical design document to be analysed and press the "Create WAFR Analysis" button.
245 |
246 | Post successful submission, navigate to the "Existing WAFR Reviews" page. The newly submitted analysis will be listed in the table along with any existing reviews.
247 | 
248 |
249 | Once the analysis is marked "Completed", the WAFR analysis for the selected lens will be shown at the bottom of the page. If there are multiple reviews, select the relevant analysis from the combo list.
250 |
251 |
252 | 
253 |
254 | * Note: Analysis duration varies based on the analysis type ("Quick" or "Deep with Well-Architected Tool") and the number of WAFR pillars selected. A "Quick" analysis with one WAFR pillar is likely to be much quicker than a "Deep with Well-Architected Tool" analysis with all six WAFR pillars selected.
255 | * Note: Only the questions for the selected Well-Architected lens and pillars are answered.
256 |
257 | You can chat with the uploaded document, as well as any of the generated content, by using the "WAFR Chat" section at the bottom of the "Existing WAFR Reviews" page.
258 |
259 |
260 | 
261 |
262 |
263 | ### Uninstall - CDK Undeploy
264 |
265 | If you no longer need the application or would like to delete the CDK deployment, run the following command:
266 |
267 | ```
268 | cdk destroy
269 | ```
270 |
271 | ### Additional considerations
272 | Please see [Additional Considerations](Additional%20Considerations.md)
273 |
274 |
275 | ### Disclaimer
276 | This is sample code, intended for non-production usage. You should work with your security and legal teams to meet your organizational security, regulatory, and compliance requirements before deployment.
277 |
278 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import aws_cdk as cdk
4 |
5 | from wafr_genai_accelerator.wafr_genai_accelerator_stack import WafrGenaiAcceleratorStack
6 |
7 | app = cdk.App()
8 |
9 | # Define tags as a dictionary
10 | # Tags will be applied to all resources in the stack
11 | # tags = {
12 | # "Environment": "Production",
13 | # "Project": "WellArchitectedReview",
14 | # "Owner": "TeamName",
15 | # "CostCenter": "12345"
16 | # }
17 | tags = {
18 | "Project": "WellArchitectedReview"
19 | }
20 |
21 | # Flags for optional features
22 | optional_features = {
23 | "guardrails": "True",
24 | }
25 |
26 | WafrGenaiAcceleratorStack(app, "WellArchitectedReviewUsingGenAIStack", tags=tags, optional_features=optional_features,
27 | # If you don't specify 'env', this stack will be environment-agnostic.
28 | # Account/Region-dependent features and context lookups will not work,
29 | # but a single synthesized template can be deployed anywhere.
30 |
31 | # Uncomment the next line to specialize this stack for the AWS Account
32 | # and Region that are implied by the current CLI configuration.
33 |
34 | env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
35 |
36 | # Uncomment the next line if you know exactly what Account and Region you
37 | # want to deploy the stack to. */
38 |
39 | #env=cdk.Environment(account='**********', region='us-west-2'),
40 |
41 | #For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html
42 |
43 | )
44 |
45 | app.synth()
46 |
--------------------------------------------------------------------------------
/cdk.json:
--------------------------------------------------------------------------------
1 | {
2 | "app": "python3 app.py",
3 | "watch": {
4 | "include": [
5 | "**"
6 | ],
7 | "exclude": [
8 | "README.md",
9 | "cdk*.json",
10 | "requirements*.txt",
11 | "source.bat",
12 | "**/__init__.py",
13 | "**/__pycache__",
14 | "tests"
15 | ]
16 | },
17 | "context": {
18 | "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
19 | "@aws-cdk/core:checkSecretUsage": true,
20 | "@aws-cdk/core:target-partitions": [
21 | "aws",
22 | "aws-cn"
23 | ],
24 | "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
25 | "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
26 | "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
27 | "@aws-cdk/aws-iam:minimizePolicies": true,
28 | "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
29 | "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
30 | "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
31 | "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
32 | "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
33 | "@aws-cdk/core:enablePartitionLiterals": true,
34 | "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
35 | "@aws-cdk/aws-iam:standardizedServicePrincipals": true,
36 | "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true,
37 | "@aws-cdk/aws-iam:importedRoleStackSafeDefaultPolicyName": true,
38 | "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
39 | "@aws-cdk/aws-route53-patters:useCertificate": true,
40 | "@aws-cdk/customresources:installLatestAwsSdkDefault": false,
41 | "@aws-cdk/aws-rds:databaseProxyUniqueResourceName": true,
42 | "@aws-cdk/aws-codedeploy:removeAlarmsFromDeploymentGroup": true,
43 | "@aws-cdk/aws-apigateway:authorizerChangeDeploymentLogicalId": true,
44 | "@aws-cdk/aws-ec2:launchTemplateDefaultUserData": true,
45 | "@aws-cdk/aws-secretsmanager:useAttachedSecretResourcePolicyForSecretTargetAttachments": true,
46 | "@aws-cdk/aws-redshift:columnId": true,
47 | "@aws-cdk/aws-stepfunctions-tasks:enableEmrServicePolicyV2": true,
48 | "@aws-cdk/aws-ec2:restrictDefaultSecurityGroup": true,
49 | "@aws-cdk/aws-apigateway:requestValidatorUniqueId": true,
50 | "@aws-cdk/aws-kms:aliasNameRef": true,
51 | "@aws-cdk/aws-autoscaling:generateLaunchTemplateInsteadOfLaunchConfig": true,
52 | "@aws-cdk/core:includePrefixInUniqueNameGeneration": true,
53 | "@aws-cdk/aws-efs:denyAnonymousAccess": true,
54 | "@aws-cdk/aws-opensearchservice:enableOpensearchMultiAzWithStandby": true,
55 | "@aws-cdk/aws-lambda-nodejs:useLatestRuntimeVersion": true,
56 | "@aws-cdk/aws-efs:mountTargetOrderInsensitiveLogicalId": true,
57 | "@aws-cdk/aws-rds:auroraClusterChangeScopeOfInstanceParameterGroupWithEachParameters": true,
58 | "@aws-cdk/aws-appsync:useArnForSourceApiAssociationIdentifier": true,
59 | "@aws-cdk/aws-rds:preventRenderingDeprecatedCredentials": true,
60 | "@aws-cdk/aws-codepipeline-actions:useNewDefaultBranchForCodeCommitSource": true,
61 | "@aws-cdk/aws-cloudwatch-actions:changeLambdaPermissionLogicalIdForLambdaAction": true,
62 | "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true,
63 | "@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2": true,
64 | "@aws-cdk/aws-kms:reduceCrossAccountRegionPolicyScope": true,
65 | "@aws-cdk/aws-eks:nodegroupNameAttribute": true,
66 | "@aws-cdk/aws-ec2:ebsDefaultGp3Volume": true,
67 | "@aws-cdk/aws-ecs:removeDefaultDeploymentAlarm": true,
68 | "@aws-cdk/custom-resources:logApiResponseDataPropertyTrueDefault": false,
69 | "@aws-cdk/aws-stepfunctions-tasks:ecsReduceRunTaskPermissions": true
70 | }
71 | }
72 |
--------------------------------------------------------------------------------
/graphics/chat.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/chat.png
--------------------------------------------------------------------------------
/graphics/createnew.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/createnew.png
--------------------------------------------------------------------------------
/graphics/existing.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/existing.png
--------------------------------------------------------------------------------
/graphics/home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/home.png
--------------------------------------------------------------------------------
/graphics/kbbucket.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/kbbucket.png
--------------------------------------------------------------------------------
/graphics/loginpage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/loginpage.png
--------------------------------------------------------------------------------
/graphics/output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/output.png
--------------------------------------------------------------------------------
/graphics/walink.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/graphics/walink.png
--------------------------------------------------------------------------------
/lambda_dir/extract_document_text/extract_document_text.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | dynamodb = boto3.resource('dynamodb')
16 |
17 | logger = logging.getLogger()
18 | logger.setLevel(logging.INFO)
19 |
20 | def lambda_handler(event, context):
21 |
22 |     entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
23 |
24 |     logger.info("extract_document_text invoked at " + entry_timestamp)
25 |
26 |     output_filename = extracted_document_text = ""
27 | logger.info(json.dumps(event))
28 |
29 | return_response = data = event
30 | upload_bucket_name = data['extract_output_bucket']
31 | region = data['region']
32 |
33 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
34 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
35 |
36 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
37 |
38 | try:
39 |
40 | # Extract text from the document
41 | extracted_document_text = extract_text(upload_bucket_name, document_s3_key , region)
42 |
43 | attribute_updates = {
44 | 'extracted_document': {
45 | 'Action': 'ADD' # PUT to update or ADD to add a new attribute
46 | }
47 | }
48 |
49 | # Update the item
50 | response = wafr_accelerator_runs_table.update_item(
51 | Key=wafr_accelerator_run_key,
52 | UpdateExpression="SET extracted_document = :val",
53 | ExpressionAttributeValues={':val': extracted_document_text},
54 | ReturnValues='UPDATED_NEW'
55 | )
56 |
57 | # Write the textract output to a txt file
58 | output_bucket = s3.Bucket(upload_bucket_name)
59 | logger.info ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
60 | logger.info ("document_s3_key[:documentS3Key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
61 | output_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-extracted-text.txt"
62 | return_response['extract_text_file_name'] = output_filename
63 |
64 | # Upload the file to S3
65 | output_bucket.put_object(Key=output_filename, Body=bytes(extracted_document_text, encoding='utf-8'))
66 |
67 | except Exception as error:
68 | # Handle errors and update DynamoDB status
69 | handle_error(wafr_accelerator_runs_table, wafr_accelerator_run_key, error)
70 | raise Exception (f'Exception caught in extract_document_text: {error}')
71 |
72 | logger.info('return_response: ' + json.dumps(return_response))
73 |
74 |     exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
75 |     logger.info("Exiting extract_document_text at " + exit_timestamp)
76 |
77 | # Return a success response
78 | return {
79 | 'statusCode': 200,
80 | 'body': json.dumps(return_response)
81 | }
82 |
83 | def handle_error(table, key, error):
84 | # Handle errors and update DynamoDB status
85 | table.update_item(
86 | Key=key,
87 | UpdateExpression="SET review_status = :val",
88 | ExpressionAttributeValues={':val': "Errored"},
89 | ReturnValues='UPDATED_NEW'
90 | )
91 | logger.error(f"Exception caught in extract_document_text: {error}")
92 |
93 | def extract_text(upload_bucket_name, document_s3_key, region):
94 |
95 | logger.info ("solution_design_text is null and hence using Textract")
96 | # # Initialize Textract and Bedrock clients
97 | textract_config = Config(retries = dict(max_attempts = 5))
98 | textract_client = boto3.client('textract', region_name=region, config=textract_config)
99 |
100 | logger.debug ("extract_text checkpoint 1")
101 | # Start the text detection job
102 | response = textract_client.start_document_text_detection(
103 | DocumentLocation={
104 | 'S3Object': {
105 | 'Bucket': upload_bucket_name,
106 | 'Name': document_s3_key
107 | }
108 | }
109 | )
110 |
111 | job_id = response["JobId"]
112 |
113 | logger.info (f"textract response: {job_id}")
114 |
115 | logger.debug ("extract_text checkpoint 2")
116 |
117 |     # Wait for the job to complete; fail fast on errors and avoid a hot loop
118 |     while True:
119 |         response = textract_client.get_document_text_detection(JobId=job_id)
120 |         status = response["JobStatus"]
121 |         if status == "SUCCEEDED":
122 |             break
123 |         if status == "FAILED":
124 |             raise Exception(f"Textract job {job_id} failed")
125 |         time.sleep(5)
126 |
124 | logger.debug ("extract_text checkpoint 3")
125 | # Get the job results
126 | pages = []
127 | next_token = None
128 | while True:
129 | if next_token:
130 | response = textract_client.get_document_text_detection(JobId=job_id, NextToken=next_token)
131 | else:
132 | response = textract_client.get_document_text_detection(JobId=job_id)
133 | pages.append(response)
134 | if 'NextToken' in response:
135 | next_token = response['NextToken']
136 | else:
137 | break
138 |
139 | logger.debug ("extract_text checkpoint 4")
140 | # Extract the text from all pages
141 | extracted_text = ""
142 | for page in pages:
143 | for item in page["Blocks"]:
144 | if item["BlockType"] == "LINE":
145 | extracted_text += item["Text"] + "\n"
146 |
147 |
148 | return extracted_text
--------------------------------------------------------------------------------
/lambda_dir/generate_pillar_question_response/generate_pillar_question_response.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 | import re
8 |
9 | from boto3.dynamodb.conditions import Key
10 | from boto3.dynamodb.conditions import Attr
11 |
12 | from botocore.client import Config
13 | from botocore.exceptions import ClientError
14 |
15 | s3 = boto3.resource('s3')
16 | s3client = boto3.client('s3')
17 |
18 | dynamodb = boto3.resource('dynamodb')
19 | wa_client = boto3.client('wellarchitected')
20 |
21 | BEDROCK_SLEEP_DURATION = int(os.environ['BEDROCK_SLEEP_DURATION'])
22 | BEDROCK_MAX_TRIES = int(os.environ['BEDROCK_MAX_TRIES'])
23 |
24 | logger = logging.getLogger()
25 | logger.setLevel(logging.INFO)
26 |
27 | def lambda_handler(event, context):
28 |
29 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
30 |
31 | logger.info(f"generate_pillar_question_response invoked at {entry_timestamp}")
32 |
33 | logger.info(json.dumps(event))
34 |
35 | logger.debug(f"BEDROCK_SLEEP_DURATION: {BEDROCK_SLEEP_DURATION}")
36 | logger.debug(f"BEDROCK_MAX_TRIES: {BEDROCK_MAX_TRIES}")
37 |
38 | data = event
39 |
40 | region = data['region']
41 | bedrock_config = Config(connect_timeout=120, region_name=region, read_timeout=120, retries={'max_attempts': 0})
42 | bedrock_client = boto3.client('bedrock-runtime',region_name=region)
43 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
44 |
45 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
46 | wafr_prompts_table = dynamodb.Table(data['wafr_prompts_table'])
47 |
48 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
49 | extract_output_bucket_name = data['extract_output_bucket']
50 |
51 | wafr_lens = data['wafr_accelerator_run_items']['selected_lens']
52 |
53 | pillars = data['wafr_accelerator_run_items'] ['selected_wafr_pillars']
54 | input_pillar = data['input_pillar']
55 | llm_model_id = data['llm_model_id']
56 | wafr_workload_id = data['wafr_accelerator_run_items'] ['wafr_workload_id']
57 | lens_alias = data['wafr_accelerator_run_items'] ['lens_alias']
58 |
59 | return_response = {}
60 |
61 | logger.debug (f"generate_pillar_question_response checkpoint 0")
62 | logger.info (f"input_pillar: {input_pillar}")
63 | logger.info (f"wafr_lens: {wafr_lens}")
64 |
65 | try:
66 | extract_output_bucket = s3.Bucket(extract_output_bucket_name)
67 |
68 | streaming = False
69 |
70 | logger.debug (f"generate_pillar_question_response checkpoint 1")
71 |
72 | input_pillar_id = get_pillar_name_to_id_mappings()[input_pillar]
73 |
74 | wafr_accelerator_run_key = {
75 | 'analysis_id': data['wafr_accelerator_run_items']['analysis_id'],
76 | 'analysis_submitter': data['wafr_accelerator_run_items']['analysis_submitter']
77 | }
78 |
79 | logger.debug (f"generate_pillar_question_response checkpoint 2")
80 |
81 | pillar_responses = get_existing_pillar_responses(wafr_accelerator_runs_table, data['wafr_accelerator_run_items']['analysis_id'], data['wafr_accelerator_run_items']['analysis_submitter'])
82 |
83 | logger.debug (f"generate_pillar_question_response checkpoint 3")
84 |
85 | pillar_review_output = ""
86 |
87 | logger.debug (f"generate_pillar_question_response checkpoint 4")
88 |
89 | logger.info (input_pillar)
90 |
91 | file_counter = 0
92 |
93 | # read file content
94 | # invoke bedrock
95 | # append response
96 | # update pillar
97 |
98 | pillar_name_alias_mappings = get_pillar_name_alias_mappings ()
99 |
100 | question_mappings = get_question_id_mappings (data['wafr_prompts_table'], wafr_lens, input_pillar)
101 |
102 | for pillar_question_object in data[input_pillar]:
103 |
104 | filename = pillar_question_object["pillar_review_prompt_filename"]
105 | logger.info (f"generate_pillar_question_response checkpoint 5.{file_counter}")
106 | logger.info (f"Input Prompt filename: " + filename)
107 |
108 | current_prompt_object = s3client.get_object(
109 | Bucket=extract_output_bucket_name,
110 | Key=filename,
111 | )
112 |
113 | current_prompt = current_prompt_object['Body'].read()
114 |
115 | logger.info (f"current_prompt: {current_prompt}")
116 |
117 | logger.debug ("filename.rstrip('.'): " + filename.rstrip('.'))
118 | logger.debug ("filename[:document_s3_key.rfind('.')]: " + filename[:filename.rfind('.')] )
119 | pillar_review_prompt_ouput_filename = filename[:filename.rfind('.')]+ "-output.txt"
120 | logger.info (f"Ouput Prompt ouput filename: " + pillar_review_prompt_ouput_filename)
121 |
122 | logger.info (f"generate_pillar_question_response checkpoint 6.{file_counter}")
123 |
124 | pillar_specfic_question_id = pillar_question_object["pillar_specfic_question_id"]
125 | pillar_specfic_prompt_question = pillar_question_object["pillar_specfic_prompt_question"]
126 |
127 | pillar_question_review_output = invoke_bedrock(streaming, current_prompt, pillar_review_prompt_ouput_filename, extract_output_bucket, bedrock_client, llm_model_id)
128 |
129 | logger.debug (f"pillar_question_review_output: {pillar_question_review_output}")
130 |
131 | # Comment the next line if you would like to retain the prompts files
132 | s3client.delete_object(Bucket=extract_output_bucket_name, Key=filename)
133 |
134 | pillar_question_review_output = sanitise_string(pillar_question_review_output)
135 | logger.debug (f"sanitised_string: {pillar_question_review_output}")
136 |
137 | full_assessment, extracted_question, extracted_assessment, best_practices_followed, recommendations_and_examples, risk, citations = extract_assessment(pillar_question_review_output, question_mappings, pillar_question_object["pillar_specfic_prompt_question"])
138 | logger.debug (f"extracted_assessment: {full_assessment}")
139 |
140 | extracted_choices = extract_choices(pillar_question_review_output)
141 | logger.debug (f"extracted_choices: {extracted_choices}")
142 |
143 | update_wafr_question_response(wa_client, wafr_workload_id, lens_alias, pillar_specfic_question_id, extracted_choices, f"{extracted_assessment} {best_practices_followed} {recommendations_and_examples}")
144 |
145 | pillar_review_output = pillar_review_output + " \n" + full_assessment
146 |
147 | logger.debug (f"generate_pillar_question_response checkpoint 7.{file_counter}")
148 |
149 | file_counter = file_counter + 1
150 |
151 | logger.debug (f"generate_pillar_question_response checkpoint 8")
152 |
153 | # Now write the completed pillar response in DynamoDB
154 | pillar_response = {
155 | 'pillar_name': input_pillar,
156 | 'pillar_id': input_pillar_id,
157 | 'llm_response': pillar_review_output
158 | }
159 |
160 | # Add the dictionary object to the list
161 | pillar_responses.append(pillar_response)
162 |
163 | # Update the item
164 | response = wafr_accelerator_runs_table.update_item(
165 | Key=wafr_accelerator_run_key,
166 | UpdateExpression="SET pillars = :val",
167 | ExpressionAttributeValues={':val': pillar_responses},
168 | ReturnValues='UPDATED_NEW'
169 | )
170 |
171 | logger.info (f"dynamodb status update response: {response}" )
172 | logger.info (f"generate_pillar_question_response checkpoint 10")
173 |
174 | except Exception as error:
175 | handle_error(wafr_accelerator_runs_table, wafr_accelerator_run_key, error)
176 | raise Exception (f'Exception caught in generate_pillar_question_response: {error}')
177 | finally:
178 | logger.info (f"generate_pillar_question_response inside finally")
179 |
180 | logger.debug (f"generate_pillar_question_response checkpoint 11")
181 |
182 | return_response = data
183 |
184 | logger.info(f"return_response: " + json.dumps(return_response))
185 |
186 |     exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
187 |     logger.info(f"Exiting generate_pillar_question_response at {exit_timestamp}")
188 |
189 | # Return a success response
190 | return {
191 | 'statusCode': 200,
192 | 'body': return_response
193 | }
194 |
195 | def get_pillar_name_to_id_mappings():
196 | mappings = {}
197 |
198 | mappings["Operational Excellence"] = "1"
199 | mappings["Security"] = "2"
200 | mappings["Reliability"] = "3"
201 | mappings["Performance Efficiency"] = "4"
202 | mappings["Cost Optimization"] = "5"
203 | mappings["Sustainability"] = "6"
204 |
205 | return mappings
206 |
207 | def get_pillar_name_alias_mappings():
208 |
209 | mappings = {}
210 |
211 | mappings["Cost Optimization"] = "costOptimization"
212 | mappings["Operational Excellence"] = "operationalExcellence"
213 | mappings["Performance Efficiency"] = "performance"
214 | mappings["Reliability"] = "reliability"
215 | mappings["Security"] = "security"
216 | mappings["Sustainability"] = "sustainability"
217 |
218 | return mappings
219 |
220 | def sanitise_string(content):
221 | junk_list = ["```", "xml", "**wafr_answer_choices:**", "{", "}", "[", "]", "", "", "", "", "Recommendations:", "Assessment:"]
222 | for junk in junk_list:
223 | content = content.replace(junk, '')
224 | return content
225 |
226 | def sanitise_string_2(content):
227 | junk_list = ["", "", "**", "```", "", ""]
228 | for junk in junk_list:
229 | content = content.replace(junk, '')
230 | return content
231 |
232 | # Function to recursively print XML nodes
233 | def extract_assessment(content, question_mappings, question):
234 |
235 |     tag_content = ""
236 |     full_assessment = assessment = best_practices_followed = recommendations_and_examples = risk = citations = ""
237 |     try:
238 |         xml_start = content.find('<question>')
239 |         if xml_start != -1:
240 |             xml_content = content[(xml_start + len('<question>')):]
241 |             xml_end = xml_content.find('</question>')
242 |             if xml_end != -1:
243 |                 tag_content = xml_content[:xml_end].strip()
244 |                 print (f"question: {tag_content}")
245 |                 question = f"**Question: {question_mappings[sanitise_string_2(tag_content)]} - {tag_content}** \n"
246 |             else:
247 |                 print (f"End tag for question not found")
248 |                 question = f"**Question: {question_mappings[sanitise_string_2(question)]} - {question}** \n"
249 |         else:
250 |             question = f"**Question: {question_mappings[sanitise_string_2(question)]} - {question}** \n"
251 |
252 | assessment = f"**Assessment:** {extract_tag_data(content, 'assessment')} \n \n"
253 | best_practices_followed = f"**Best Practices Followed:** {extract_tag_data(content, 'best_practices_followed')} \n \n"
254 | recommendations_and_examples = f"**Recommendations:** {extract_tag_data(content, 'recommendations_and_examples')} \n \n"
255 | citations = f"**Citations:** {extract_tag_data(content, 'citations')} \n \n"
256 |
257 |         full_assessment = question + assessment + best_practices_followed + recommendations_and_examples + risk + citations
258 |
259 | except Exception as error:
260 | errorFlag = True
261 | logger.info("Exception caught by try loop in extract_assessment!")
262 | logger.info("Error received is:")
263 | logger.info(error)
264 |
265 | return full_assessment, question, assessment, best_practices_followed, recommendations_and_examples, risk, citations
266 |
267 | def extract_tag_data(content, tag):
268 | tag_content = ""
269 | xml_start = content.find(f'<{tag}>')
270 | if xml_start != -1:
271 | xml_content = content[(xml_start+len(f'<{tag}>')):]
272 |         xml_end = xml_content.find(f'</{tag}>')
273 | if xml_end != -1:
274 | tag_content = sanitise_string_2(xml_content[:xml_end].strip())
275 | print (f"{tag}: {tag_content}")
276 | else:
277 |             print (f"End tag for {tag} not found")
278 | return tag_content
279 |
280 | def extract_choices(content):
281 |
282 | selectedChoices = []
283 |
284 | xml_end = -1
285 |
286 | try:
287 |         xml_start = content.find('<wafr_answer_choices>')
288 |         if xml_start != -1:
289 |             xml_content = content[xml_start:]
290 |             xml_end = xml_content.find('</wafr_answer_choices>')
291 |             wafr_answer_choices = xml_content[:(xml_end + len('</wafr_answer_choices>'))].strip()
292 | logger.info(f"wafr_answer_choices: {wafr_answer_choices}")
293 | logger.info(f"response_root is a well-formed XML")
294 |
295 | if ((xml_start!=-1) and (xml_end!=-1)):
296 |             # Use regular expression to find all occurrences of <id>...</id>
297 |             id_pattern = re.compile(r'<id>(.*?)</id>', re.DOTALL)
298 | # Find all matches
299 | ids = id_pattern.findall(wafr_answer_choices)
300 | # Loop through the matches
301 | for index, id_value in enumerate(ids, 1):
302 | print(f"ID {index}: {id_value.strip()}")
303 | selectedChoices += [id_value.strip()]
304 |
305 | except Exception as error:
306 | errorFlag = True
307 | logger.info("Exception caught by try loop in extract_choices!")
308 | logger.info("Error received is:")
309 | logger.info(error)
310 |
311 | return selectedChoices
312 |
313 | def get_question_id_mappings(wafr_prompts_table_name, wafr_lens, input_pillar):
314 | questions = {}
315 |
316 | wafr_prompts_table = dynamodb.Table(wafr_prompts_table_name)
317 | response = wafr_prompts_table.query(
318 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt',
319 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens) & Key('wafr_pillar').eq(input_pillar),
320 | ScanIndexForward=True # Set to False to sort in descending order
321 | )
322 | logger.debug (f"response wafr_pillar_id: " + str(response['Items'][0]['wafr_pillar_id']))
323 | logger.debug (f"response wafr_pillar_prompt: " + response['Items'][0]['wafr_pillar_prompt'])
324 | pillar_specific_prompt_question = response['Items'][0]['wafr_pillar_prompt']
325 |
326 | line_counter = 0
327 |     # Before running this, ensure the wafr prompt row contains only questions with no text before them; otherwise, the split below fails.
328 | for line in response['Items'][0]['wafr_pillar_prompt'].splitlines():
329 | line_counter = line_counter + 1
330 | if(line_counter > 2):
331 | question_id, question_text = line.strip().split(': ', 1)
332 | questions[question_text] = question_id
333 | return questions
334 |
335 | def handle_error(table, key, error):
336 | # Handle errors and update DynamoDB status
337 | table.update_item(
338 | Key=key,
339 | UpdateExpression="SET review_status = :val",
340 | ExpressionAttributeValues={':val': "Errored"},
341 | ReturnValues='UPDATED_NEW'
342 | )
343 | logger.error(f"Exception caught in generate_pillar_question_response: {error}")
344 |
345 |
346 | def update_wafr_question_response(wa_client, wafr_workload_id, lens_alias, pillar_specfic_question_id, choices, assessment):
347 |
348 | errorFlag = False
349 |
350 | try:
351 | if(errorFlag == True):
352 | selectedChoices = []
353 | logger.info(f"Error Flag is true")
354 | else:
355 | selectedChoices = choices
356 |
357 | logger.debug(f"update_wafr_question_response: 1")
358 | logger.info(f"wafr_workload_id: {wafr_workload_id}, lens_alias: {lens_alias}, pillar_specfic_question_id: {pillar_specfic_question_id}")
359 |
360 | try:
361 | response = wa_client.update_answer(
362 | WorkloadId=wafr_workload_id,
363 | LensAlias=lens_alias,
364 | QuestionId=pillar_specfic_question_id,
365 | SelectedChoices=selectedChoices,
366 | Notes=assessment[:2084],
367 | IsApplicable=True
368 | )
369 | logger.info(f"With Choices- response: {response}")
370 | logger.debug(f"update_wafr_question_response: 2")
371 | except Exception as error:
372 | logger.info("Updated answer with choices failed, now attempting update without the choices!")
373 | logger.info("With Choices- Error received is:")
374 | logger.info(error)
375 | selectedChoices = []
376 | response = wa_client.update_answer(
377 | WorkloadId=wafr_workload_id,
378 | LensAlias=lens_alias,
379 | QuestionId=pillar_specfic_question_id,
380 | SelectedChoices=selectedChoices,
381 | Notes=assessment[:2084],
382 | IsApplicable=True
383 | )
384 | logger.info(f"Without Choices- response: {response}")
385 | logger.debug(f"update_wafr_question_response: 3")
386 |
387 | logger.info (json.dumps(response))
388 |
389 | except Exception as error:
390 | logger.info("Exception caught by external try in update_wafr_question_response!")
391 | logger.info("Error received is:")
392 | logger.info(error)
393 | finally:
394 | logger.info (f"update_wafr_question_response Inside finally")
395 |
396 | def invoke_bedrock(streaming, claude_prompt_body, pillar_review_outputFilename, bucket, bedrock_client, llm_model_id):
397 |
398 |     pillar_review_output = ""
399 |     retries = 0
400 |     max_retries = BEDROCK_MAX_TRIES
402 | while retries < max_retries:
403 | try:
404 | if(streaming):
405 | streaming_response = bedrock_client.invoke_model_with_response_stream(
406 | modelId=llm_model_id,
407 | body=claude_prompt_body,
408 | )
409 |
410 | logger.info (f"invoke_bedrock checkpoint 1.{retries}")
411 | stream = streaming_response.get("body")
412 |
413 | logger.debug (f"invoke_bedrock checkpoint 2")
414 |
415 | for chunk in parse_stream(stream):
416 | pillar_review_output += chunk
417 |
418 | # Uncomment next line if you would like to see response files for each question too.
419 | #bucket.put_object(Key=pillar_review_outputFilename, Body=bytes(pillar_review_output, encoding='utf-8'))
420 |
421 | return pillar_review_output
422 |
423 | else:
424 | non_streaming_response = bedrock_client.invoke_model(
425 | modelId=llm_model_id,
426 | body=claude_prompt_body,
427 | )
428 |
429 | response_json = json.loads(non_streaming_response["body"].read().decode("utf-8"))
430 |
431 | logger.debug (response_json)
432 |
433 | logger.info (f"invoke_bedrock checkpoint 1.{retries}")
434 |
435 | # Extract and logger.info the response text.
436 | pillar_review_output = response_json["content"][0]["text"]
437 |
438 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
439 |
440 | # Uncomment next line if you would like to see response files for each question too.
441 | #bucket.put_object(Key=pillar_review_outputFilename, Body=pillar_review_output)
442 |
443 | return pillar_review_output
444 |
445 | except Exception as e:
446 | retries += 1
447 | logger.info(f"Sleeping as attempt {retries} failed with exception: {e}")
448 | time.sleep(BEDROCK_SLEEP_DURATION) # Add a delay before the next retry
449 |
450 | logger.info(f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
451 | raise Exception (f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
452 |
453 | def parse_stream(stream):
454 | for event in stream:
455 | chunk = event.get('chunk')
456 | if chunk:
457 | message = json.loads(chunk.get("bytes").decode())
458 | if message['type'] == "content_block_delta":
459 | yield message['delta']['text'] or ""
460 | elif message['type'] == "message_stop":
461 | return "\n"
462 |
463 |
464 | def get_existing_pillar_responses(wafr_accelerator_runs_table, analysis_id, analysis_submitter):
465 |
466 | pillar_responses = []
467 | logger.info (f"analysis_id : {analysis_id}")
468 | logger.info (f"analysis_submitter: " + analysis_submitter)
469 |
470 | response = wafr_accelerator_runs_table.query(
471 | ProjectionExpression ='pillars',
472 | KeyConditionExpression=Key('analysis_id').eq(analysis_id) & Key('analysis_submitter').eq(analysis_submitter),
473 | ConsistentRead=True,
474 | ScanIndexForward=True
475 | )
476 |
477 | logger.info (f"Existing pillar responses stored in wafr_prompts_table table : {response}" )
478 |
479 | items = response['Items']
480 |
481 | logger.debug (f"items assigned: {items}" )
482 |
483 | logger.info (f"Items length: {len(response['Items'])}" )
484 |
485 | try:
486 | if len(response['Items']) > 0:
487 | for item in items:
488 | pillars = item['pillars']
489 | logger.info (pillars)
490 | for pillar in pillars:
491 | logger.info (pillar)
492 | pillar_response = {
493 | 'pillar_name': pillar['pillar_name'],
494 | 'pillar_id': str(pillar['pillar_id']),
495 | 'llm_response': pillar['llm_response']
496 | }
497 | logger.info (f"pillar_response {pillar_response}")
498 | # Add the dictionary object to the list
499 | pillar_responses.append(pillar_response)
500 | else:
501 | logger.info("List is empty")
502 |
503 | except Exception as error:
504 | logger.info("Exception caught by try loop in get_existing_pillar_responses! Looks attribute is empty.")
505 | logger.info(f"Error received is: {error}")
506 |
507 | return pillar_responses
--------------------------------------------------------------------------------
/lambda_dir/generate_prompts_for_all_the_selected_pillars/generate_prompts_for_all_the_selected_pillars.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | s3client = boto3.client('s3')
16 | dynamodb = boto3.resource('dynamodb')
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | WAFR_REFERENCE_DOCS_BUCKET = os.environ['WAFR_REFERENCE_DOCS_BUCKET']
22 |
23 | def lambda_handler(event, context):
24 |
25 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
26 |
27 | logger.info(f"generate_prompts_for_all_the_selected_pillars invoked at {entry_timestamp}")
28 |
29 | logger.info(json.dumps(event))
30 |
31 | data = event
32 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
33 | wafr_prompts_table = dynamodb.Table(data['wafr_prompts_table'])
34 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
35 |
36 | try:
37 |
38 | document_s3_key = data['wafr_accelerator_run_items']['document_s3_key']
39 | extract_output_bucket = data['extract_output_bucket']
40 | region = data['region']
41 | wafr_lens = data['wafr_accelerator_run_items']['selected_lens']
42 | knowledge_base_id = data ['knowledge_base_id']
43 | pillars = data['wafr_accelerator_run_items'] ['selected_wafr_pillars']
44 | wafr_workload_id = data['wafr_accelerator_run_items'] ['wafr_workload_id']
45 | lens_alias = data['wafr_accelerator_run_items'] ['lens_alias']
46 |
47 | waclient = boto3.client('wellarchitected', region_name=region)
48 |
49 | bedrock_config = Config(connect_timeout=120, region_name=region, read_timeout=120, retries={'max_attempts': 0})
50 | bedrock_client = boto3.client('bedrock-runtime',region_name=region)
51 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
52 |
53 | return_response = {}
54 |
55 | prompt_file_locations = []
56 | all_pillar_prompts = []
57 |
58 | pillar_name_alias_mappings = get_pillar_name_alias_mappings ()
59 | logger.info(pillar_name_alias_mappings)
60 |
61 | pillars_dictionary = get_pillars_dictionary (waclient, wafr_workload_id, lens_alias)
62 |
63 | extracted_document_text = read_s3_file (data['extract_output_bucket'], data['extract_text_file_name'])
64 |
65 | pillar_counter = 0
66 |
67 | #Get all the pillar prompts in a loop
68 | for item in pillars:
69 |
70 | prompt_file_locations = []
71 | logger.info (f"selected_pillar: {item}")
72 | response = ""
73 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
74 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
75 |
76 | questions = {}
77 |
78 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 1")
79 |
80 | # Loop through key-value pairs
81 | question_array_counter = 0
82 | current_wafr_pillar = item
83 |
84 | logger.info (f"generate_prompts_for_all_the_selected_pillars checkpoint 1.{pillar_counter}")
85 |
86 | questions = pillars_dictionary[current_wafr_pillar]["wafr_q"]
87 |
88 | logger.debug (json.dumps(questions))
89 |
90 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 2.{pillar_counter}")
91 |
92 | for question in questions:
93 |
94 | logger.info (json.dumps(question))
95 |
96 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 3.{pillar_counter}.{question_array_counter}")
97 |
98 | pillar_specfic_question_id = question["id"]
99 | pillar_specfic_prompt_question = question["text"]
100 | pillar_specfic_wafr_answer_choices = question["wafr_answer_choices"]
101 |
102 | logger.info (f"pillar_specfic_question_id: {pillar_specfic_question_id}")
103 | logger.info (f"pillar_specfic_prompt_question: {pillar_specfic_prompt_question}")
104 | logger.info (f"pillar_specfic_wafr_answer_choices: {json.dumps(pillar_specfic_wafr_answer_choices)}")
105 |
106 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 4.{pillar_counter}.{question_array_counter}")
107 | claude_prompt_body = bedrock_prompt(wafr_lens, current_wafr_pillar, pillar_specfic_question_id, pillar_specfic_wafr_answer_choices, pillar_specfic_prompt_question, knowledge_base_id, bedrock_agent_client, extracted_document_text, WAFR_REFERENCE_DOCS_BUCKET)
108 |
109 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 5.{pillar_counter}.{question_array_counter}")
110 |
111 | # Write the generated prompt to a txt file
112 | output_bucket = s3.Bucket(extract_output_bucket)
113 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
114 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
115 | pillar_review_prompt_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + pillar_name_alias_mappings[item] + "-" + pillar_specfic_question_id + "-prompt.txt"
116 | logger.info (f"Output prompt file name: {pillar_review_prompt_filename}")
117 |
118 | # Upload the file to S3
119 | output_bucket.put_object(Key=pillar_review_prompt_filename, Body=claude_prompt_body)
120 |
121 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 6.{pillar_counter}.{question_array_counter}")
122 | question_metadata = {}
123 |
124 | question_metadata["pillar_review_prompt_filename"] = pillar_review_prompt_filename ##########
125 | question_metadata["pillar_specfic_question_id"] = pillar_specfic_question_id
126 | question_metadata["pillar_specfic_prompt_question"] = pillar_specfic_prompt_question
127 | question_metadata["pillar_specfic_wafr_answer_choices"] = pillar_specfic_wafr_answer_choices
128 |
129 | logger.info (f"generate_prompts_for_all_the_selected_pillars checkpoint 7.{pillar_counter}.{question_array_counter}")
130 | prompt_file_locations.append(question_metadata)
131 |
132 | question_array_counter = question_array_counter + 1
133 |
134 | pillar_prompts = {}
135 |
136 | pillar_prompts['wafr_accelerator_run_items'] = data ['wafr_accelerator_run_items']
137 | pillar_prompts['wafr_accelerator_run_key'] = data['wafr_accelerator_run_key']
138 | pillar_prompts['extract_output_bucket'] = data['extract_output_bucket']
139 | pillar_prompts['wafr_accelerator_runs_table'] = data ['wafr_accelerator_runs_table']
140 | pillar_prompts['wafr_prompts_table'] = data ['wafr_prompts_table']
141 | pillar_prompts['llm_model_id'] = data ['llm_model_id']
142 | pillar_prompts['region'] = data['region']
143 | pillar_prompts['input_pillar'] = item
144 |
145 | return_response['wafr_accelerator_run_items'] = data ['wafr_accelerator_run_items']
146 |
147 | pillar_prompts[item] = prompt_file_locations
148 |
149 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 9.{pillar_counter}")
150 |
151 | all_pillar_prompts.append(pillar_prompts)
152 |
153 | pillar_counter = pillar_counter + 1
154 |
155 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 10")
156 |
157 | except Exception as error:
158 | all_pillar_prompts = []
159 | wafr_accelerator_runs_table.update_item(
160 | Key=wafr_accelerator_run_key,
161 | UpdateExpression="SET review_status = :val",
162 | ExpressionAttributeValues={':val': "Errored"},
163 | ReturnValues='UPDATED_NEW'
164 | )
165 | logger.error(f"Exception caught in generate_solution_summary: {error}")
166 | raise Exception (f'Exception caught in generate_prompts_for_all_the_selected_pillars: {error}')
167 |
168 | logger.debug (f"generate_prompts_for_all_the_selected_pillars checkpoint 11")
169 |
170 | # Pass the step input through, augmented with the generated prompts
171 |
172 | return_response = data
173 | return_response['all_pillar_prompts'] = all_pillar_prompts
174 |
175 | logger.info(f"return_response: {return_response}")
176 |
177 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
178 | logger.info(f"Exiting generate_prompts_for_all_the_selected_pillars at {exit_timestamp}")
179 |
180 | # Return a success response
181 | return {
182 | 'statusCode': 200,
183 | 'body': return_response
184 | }
185 |
186 | def get_pillar_name_alias_mappings():
187 |
188 | mappings = {}
189 |
190 | mappings["Cost Optimization"] = "costOptimization"
191 | mappings["Operational Excellence"] = "operationalExcellence"
192 | mappings["Performance Efficiency"] = "performance"
193 | mappings["Reliability"] = "reliability"
194 | mappings["Security"] = "security"
195 | mappings["Sustainability"] = "sustainability"
196 |
197 | return mappings
198 |
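# Builds a dict keyed by pillar name from the lens review, e.g. (illustrative shape,
# not verbatim output):
# {"Security": {"wafr_lens": "...", "wafr_pillar": "Security", "wafr_pillar_id": "security",
#               "wafr_q": [{"id": "...", "text": "...", "wafr_answer_choices": [...]}]}}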
199 | def get_pillars_dictionary(waclient, wafr_workload_id, lens_alias):
200 |
201 | pillars_dictionary = {}
202 |
203 | lens_review = get_lens_review (waclient, wafr_workload_id, lens_alias)
204 |
205 | for item in lens_review['data']:
206 | pillars_dictionary[item["wafr_pillar"]] = item
207 |
208 | return pillars_dictionary
209 |
210 | def get_lens_review(client, workload_id, lens_alias):
211 | try:
212 | response = client.get_lens_review(
213 | WorkloadId=workload_id,
214 | LensAlias=lens_alias
215 | )
216 |
217 | lens_review = response['LensReview']
218 | formatted_data = {
219 | "workload_id": workload_id,
220 | "data": []
221 | }
222 |
223 | for pillar in lens_review['PillarReviewSummaries']:
224 | pillar_data = {
225 | "wafr_lens": lens_alias,
226 | "wafr_pillar": pillar['PillarName'],
227 | "wafr_pillar_id": pillar['PillarId'],
228 | "wafr_q": []
229 | }
230 |
231 | # Manual pagination for list_answers
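# list_answers returns at most MaxResults (50) summaries per call, so keep
# calling with the returned NextToken until none is returned. Note that each
# question summary then costs one extra get_answer call to fetch its choices.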
232 | next_token = None
233 |
234 | while True:
235 | if next_token:
236 | answers_response = client.list_answers(
237 | WorkloadId=workload_id,
238 | LensAlias=lens_alias,
239 | PillarId=pillar['PillarId'],
240 | MaxResults=50,
241 | NextToken=next_token
242 | )
243 | else:
244 | answers_response = client.list_answers(
245 | WorkloadId=workload_id,
246 | LensAlias=lens_alias,
247 | PillarId=pillar['PillarId'],
248 | MaxResults=50
249 | )
250 |
251 | for question in answers_response.get('AnswerSummaries', []):
252 | question_details = client.get_answer(
253 | WorkloadId=workload_id,
254 | LensAlias=lens_alias,
255 | QuestionId=question['QuestionId']
256 | )
257 | formatted_question = {
258 | "id": question['QuestionId'],
259 | "text": question['QuestionTitle'],
260 | "wafr_answer_choices": []
261 | }
262 | for choice in question_details['Answer'].get('Choices', []):
263 | formatted_question['wafr_answer_choices'].append({
264 | "id": choice['ChoiceId'],
265 | "text": choice['Title']
266 | })
267 |
268 | pillar_data['wafr_q'].append(formatted_question)
269 |
270 | next_token = answers_response.get('NextToken')
271 |
272 | if not next_token:
273 | break
274 |
275 | formatted_data['data'].append(pillar_data)
276 |
277 | # Add additional workload information
278 | formatted_data['workload_name'] = lens_review.get('WorkloadName')
279 | formatted_data['workload_id'] = workload_id
280 | formatted_data['lens_alias'] = lens_alias
281 | formatted_data['lens_name'] = lens_review.get('LensName')
282 | formatted_data['updated_at'] = str(lens_review.get('UpdatedAt'))
283 | return formatted_data
284 |
285 | except ClientError as e:
286 | logger.info(f"Error getting lens review: {e}")
287 | return None
288 |
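# Builds a Bedrock Knowledge Base metadata filter that limits retrieval to the S3
# prefix holding the selected lens documentation. For the Data Analytics lens, for
# example, the returned filter would look like (illustrative):
# {"startsWith": {"key": "x-amz-bedrock-kb-source-uri",
#                 "value": "s3://<kb_bucket>/dataanalytics/"}}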
289 | def get_lens_filter(kb_bucket, wafr_lens):
290 |
291 | # Map lens prefixes to their corresponding lens names - will make it easier to add additional lenses
292 | lens_mapping = {
293 | "Financial Services Industry Lens": "financialservices",
294 | "Data Analytics Lens": "dataanalytics"
295 | }
296 |
297 | # Get lens name or default to "wellarchitected"
298 | lens = next(
299 | (value for prefix, value in lens_mapping.items()
300 | if wafr_lens.startswith(prefix)),
301 | "wellarchitected"
302 | )
303 |
304 | # If wellarchitected lens then also use the overview documentation
305 | if lens == "wellarchitected":
306 | lens_filter = {
307 | "orAll": [
308 | {
309 | "startsWith": {
310 | "key": "x-amz-bedrock-kb-source-uri",
311 | "value": f"s3://{kb_bucket}/{lens}"
312 | }
313 | },
314 | {
315 | "startsWith": {
316 | "key": "x-amz-bedrock-kb-source-uri",
317 | "value": f"s3://{kb_bucket}/overview"
318 | }
319 | }
320 | ]
321 | }
322 | else: # Just use the lens documentation
323 | lens_filter = {
324 | "startsWith": {
325 | "key": "x-amz-bedrock-kb-source-uri",
326 | "value": f"s3://{kb_bucket}/{lens}/"
327 | }
328 | }
329 |
330 | logger.info(f"get_lens_filter: {json.dumps(lens_filter)}")
331 | return lens_filter
332 |
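# Assembles (but does not send) an Anthropic Messages API request body for Bedrock:
# the system prompt carries the WAFR instructions plus the uploaded document, and the
# user message carries the lens, pillar, retrieved KB context and the question.
# Note: whether the max_tokens value below is accepted depends on the output-token
# limit of the model configured at deployment time.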
333 | def bedrock_prompt(wafr_lens, pillar, pillar_specfic_question_id, pillar_specfic_wafr_answer_choices, pillar_specfic_prompt_question, kb_id, bedrock_agent_client, document_content=None, wafr_reference_bucket = None):
334 |
335 | lens_filter = get_lens_filter(wafr_reference_bucket, wafr_lens)
336 |
337 | response = retrieve(pillar_specfic_prompt_question, kb_id, bedrock_agent_client, lens_filter, pillar, wafr_lens)
338 |
339 | retrievalResults = response['retrievalResults']
340 | contexts = get_contexts(retrievalResults)
341 |
342 | system_prompt = f"""You are an AWS Cloud Solutions Architect who specializes in reviewing solution architecture documents against the AWS Well-Architected Framework, using a process called the Well-Architected Framework Review (WAFR).
343 | The WAFR process consists of evaluating the provided solution architecture document against the 6 pillars of the specified AWS Well-Architected Framework lens, namely:
344 | - Operational Excellence Pillar
345 | - Security Pillar
346 | - Reliability Pillar
347 | - Performance Efficiency Pillar
348 | - Cost Optimization Pillar
349 | - Sustainability Pillar
350 |
351 | A solution architecture document is provided below in the "uploaded_document" section that you will evaluate by answering the questions provided in the "pillar_questions" section in accordance with the WAFR pillar indicated by the "current_pillar" section and the specified WAFR lens indicated by the "lens" section. Follow the instructions listed under the "instructions" section below.
352 |
353 | <instructions>
354 | 1) For each question, be concise and limit responses to 350 words maximum. Responses should be specific to the specified lens (listed in the "lens" section) and pillar only (listed in the "current_pillar" section). Your response should have three parts: 'Assessment', 'Best Practices Followed', and 'Recommendations/Examples'. Begin with the question asked.
355 | 2) You are also provided with a Knowledge Base which has more information about the specific pillar from the Well-Architected Framework. The relevant parts from the Knowledge Base will be provided under the "kb" section.
356 | 3) For each question, start your response with the 'Assessment' section, in which you will give a short summary (three to four lines) of your answer.
357 | 4) For each question:
358 | a) Provide which Best Practices from the specified pillar have been followed, including the best practice IDs and titles from the respective pillar guidance. List them under the 'Best Practices Followed' section.
359 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
360 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
361 | b) Provide your recommendations on how the solution architecture should be updated to address the question's ask. If you have a relevant example, mention it clearly like so: "Example: ". List all of this under the 'Recommendations/Examples' section.
362 | 5) For each question, if the required information is missing or is inadequate to answer the question, then first state that the document doesn't provide any or enough information. Then, list the recommendations relevant to the question to address the gap in the solution architecture document under the 'Recommendations' section. In this case, the 'Best practices followed' section will simply state "Not enough information".
363 | 6) First list the question within <question> and </question> tags in the response.
364 | 7) Add citations for the best practices and recommendations by including the best practice ID and heading from the specified lens ("lens") and specified pillar ("current_pillar") under the <citations> section, strictly within <citations> and </citations> tags. Every citation within it should be separated by ',' and start on a new line. If there are no citations then return 'N/A' within <citations> and </citations>.
365 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
366 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
367 | 8) Do not make any assumptions or make up information. Your responses should only be based on the actual solution document provided in the "uploaded_document" section.
368 | 9) Based on the assessment, select the most appropriate choices applicable from the choices provided within the <choices> section. Do not make up ids and use only the ids specified in the provided choices.
369 | 10) Return the entire response strictly in well-formed XML format. There should not be any text outside the XML response. Use the following XML structure, and ensure that the XML tags are in the same order:
370 | <response>
371 | <question>This is the input question</question>
372 | <assessment>This is the assessment</assessment>
373 | <best_practices>Best practices followed, with citations from the Well-Architected best practices for the pillar</best_practices>
374 | <recommendations>Recommendations with examples</recommendations>
375 | <citations>citations</citations>
376 | <selected_choices>
377 | <choice>
378 | sec_securely_operate_multi_accounts
379 | </choice>
380 | <choice>
381 | sec_securely_operate_aws_account
382 | </choice>
383 | <choice>
384 | sec_securely_operate_control_objectives
385 | </choice>
386 | <choice>
387 | sec_securely_operate_updated_threats
388 | </choice>
389 | </selected_choices>
390 | </response>
391 | </instructions>
392 | """
393 |
394 | #Add Soln Arch Doc to the system_prompt
395 | if document_content:
396 | system_prompt += f"""
397 | <uploaded_document>
398 | {document_content}
399 | </uploaded_document>
400 | """
401 |
402 | prompt = f"""
403 | <lens>
404 | {wafr_lens}
405 | </lens>
406 | 
407 | <current_pillar>
408 | {pillar}
409 | </current_pillar>
410 | 
411 | <kb>
412 | {contexts}
413 | </kb>
414 | 
415 | <pillar_questions>
416 | Please answer the following questions for the {pillar} pillar of the Well-Architected Framework Review (WAFR).
417 | Questions:
418 | {pillar_specfic_prompt_question}
419 | </pillar_questions>
420 | 
421 | <choices>
422 | {pillar_specfic_wafr_answer_choices}
423 | </choices>
424 | """
425 | # ask about anthropic version
426 | body = json.dumps({
427 | "anthropic_version": "bedrock-2023-05-31",
428 | "max_tokens": 200000,
429 | "system": system_prompt,
430 | "messages": [
431 | {
432 | "role": "user",
433 | "content": [{"type": "text", "text": prompt}],
434 | }
435 | ],
436 | })
437 | return body
438 |
439 | def retrieve(question, kbId, bedrock_agent_client, lens_filter, pillar, wafr_lens):
440 |
441 | kb_prompt = f"""For the given question from the {pillar} pillar of {wafr_lens}, provide:
442 | - Recommendations
443 | - Best practices
444 | - Examples
445 | - Risks
446 | Question: {question}"""
447 |
448 | logger.debug (f"question: {question}")
449 | logger.debug (f"kb_prompt: {kb_prompt}")
450 |
451 | return bedrock_agent_client.retrieve(
452 | retrievalQuery= {
453 | 'text': kb_prompt
454 | },
455 | knowledgeBaseId=kbId,
456 | retrievalConfiguration={
457 | 'vectorSearchConfiguration':{
458 | 'numberOfResults': 20,
459 | "filter": lens_filter
460 | }
461 | }
462 | )
463 |
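# Collects the raw text of every retrieved KB chunk. The list is later interpolated
# into the prompt as its Python repr; joining with newlines instead would render
# more cleanly if that turns out to matter for the model.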
464 | def get_contexts(retrievalResults):
465 | contexts = []
466 | for retrievedResult in retrievalResults:
467 | contexts.append(retrievedResult['content']['text'])
468 | return contexts
469 |
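# Yields text deltas from an invoke_model_with_response_stream body. Each stream
# event wraps a JSON chunk roughly of this form (illustrative):
# {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "..."}}
# and the stream finishes with a {"type": "message_stop"} chunk.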
470 | def parse_stream(stream):
471 | for event in stream:
472 | chunk = event.get('chunk')
473 | if chunk:
474 | message = json.loads(chunk.get("bytes").decode())
475 | if message['type'] == "content_block_delta":
476 | yield message['delta']['text'] or ""
477 | elif message['type'] == "message_stop":
478 | return "\n"
479 |
480 | def read_s3_file (bucket, filename):
481 |
482 | document_text_object = s3client.get_object(
483 | Bucket=bucket,
484 | Key=filename,
485 | )
486 |
487 | logger.info (f"read_s3_file: {document_text_object}")
488 |
489 | document_text = document_text_object['Body'].read().decode('utf-8')
490 |
491 | return document_text
492 |
--------------------------------------------------------------------------------
/lambda_dir/generate_solution_summary/generate_solution_summary.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import datetime
4 | import logging
5 |
6 | from botocore.client import Config
7 | from botocore.exceptions import ClientError
8 |
9 | dynamodb = boto3.resource('dynamodb')
10 | s3 = boto3.resource('s3')
11 | s3client = boto3.client('s3')
12 | wa_client = boto3.client('wellarchitected')
13 |
14 | logger = logging.getLogger()
15 | logger.setLevel(logging.INFO)
16 |
17 | def lambda_handler(event, context):
18 |
19 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
20 |
21 | logger.info(f"generate_solution_summary invoked at {entry_timestamp}")
22 |
23 | logger.info(json.dumps(event))
24 |
25 | # Extract data from the input event
26 | return_response = data = event
27 | wafr_accelerator_runs_table = dynamodb.Table(data['wafr_accelerator_runs_table'])
28 | wafr_accelerator_run_key = data['wafr_accelerator_run_key']
29 |
30 | # Set up Bedrock client
31 | REGION = data['region']
32 | LLM_MODEL_ID = data['llm_model_id']
33 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
34 | bedrock_client = boto3.client('bedrock-runtime', config=bedrock_config)
35 |
36 | try:
37 | extracted_document_text = read_s3_file (data['extract_output_bucket'], data['extract_text_file_name'])
38 |
39 | # Prepare prompts for solution summary and workload description
40 | prompt = f"The following document is a solution architecture document that you are reviewing as an AWS Cloud Solutions Architect. Please summarise the following solution in 250 words. Begin directly with the architecture summary, don't provide any other opening or closing statements.\n\n\n{extracted_document_text}\n\n"
41 |
42 | # Generate summaries using Bedrock model
43 | summary = invoke_bedrock_model(bedrock_client, LLM_MODEL_ID, prompt)
44 |
45 | logger.info(f"Solution Summary: {summary}")
46 |
47 | # Update DynamoDB item with the generated summary
48 | update_dynamodb_item(wafr_accelerator_runs_table, wafr_accelerator_run_key, summary)
49 |
50 | # Write summary to S3
51 | write_summary_to_s3(s3, data['extract_output_bucket'], data['wafr_accelerator_run_items']['document_s3_key'], summary)
52 |
53 | except Exception as error:
54 | wafr_accelerator_runs_table.update_item(
55 | Key=wafr_accelerator_run_key,
56 | UpdateExpression="SET review_status = :val",
57 | ExpressionAttributeValues={':val': "Errored"},
58 | ReturnValues='UPDATED_NEW'
59 | )
60 | logger.error(f"Exception caught in generate_solution_summary: {error}")
61 | raise Exception (f'Exception caught in generate_solution_summary: {error}')
62 |
63 | # Prepare and return the response
64 | logger.info(f"return_response: {json.dumps(return_response)}")
65 | logger.info(f"Exiting generate_solution_summary at {datetime.datetime.now().strftime('%Y-%m-%d %H-%M-%S-%f')}")
66 |
67 | return {'statusCode': 200, 'body': return_response}
68 |
69 | def read_s3_file (bucket, filename):
70 |
71 | document_text_object = s3client.get_object(
72 | Bucket=bucket,
73 | Key=filename,
74 | )
75 |
76 | logger.info (document_text_object)
77 |
78 | document_text = document_text_object['Body'].read().decode('utf-8')
79 |
80 | return document_text
81 |
82 | def invoke_bedrock_model(bedrock_client, model_id, prompt):
83 | # Invoke Bedrock model and return the generated text
84 | response = bedrock_client.invoke_model(
85 | modelId=model_id,
86 | contentType="application/json",
87 | accept="application/json",
88 | body=json.dumps({
89 | "anthropic_version": "bedrock-2023-05-31",
90 | "max_tokens": 200000,
91 | "messages": [
92 | {"role": "user", "content": [{"type": "text", "text": prompt}]}
93 | ]
94 | })
95 | )
96 | response_body = json.loads(response['body'].read())
97 | return response_body['content'][0]['text']
98 |
99 | def update_dynamodb_item(table, key, summary):
100 | # Update DynamoDB item with the generated summary
101 | table.update_item(
102 | Key=key,
103 | UpdateExpression="SET architecture_summary = :val",
104 | ExpressionAttributeValues={':val': summary},
105 | ReturnValues='UPDATED_NEW'
106 | )
107 |
108 | def write_summary_to_s3(s3, bucket_name, document_key, summary):
109 | output_bucket = s3.Bucket(bucket_name)
110 | output_filename = f"{document_key[:document_key.rfind('.')]}-solution-summary.txt"
111 | logger.info(f"write_summary_to_s3: {output_filename}")
112 | output_bucket.put_object(Key=output_filename, Body=summary)
113 |
--------------------------------------------------------------------------------
/lambda_dir/insert_wafr_prompts/insert_wafr_prompts.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import urllib.parse
6 | import logging
7 | from botocore.exceptions import ClientError
8 |
9 | s3Client = boto3.client('s3')
10 | s3Resource = boto3.resource('s3')
11 |
12 | TABLE_NAME = os.environ['DD_TABLE_NAME']
13 | REGION_NAME = os.environ['REGION_NAME']
14 | dynamodb = boto3.resource('dynamodb', region_name=REGION_NAME)
15 | table = dynamodb.Table(TABLE_NAME)
16 |
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | def lambda_handler(event, context):
22 |
23 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
24 |
25 | status = 'Prompts inserted successfully!'
26 |
27 | logger.info(f"insert_wafr_prompts invoked at {entry_timestamp}")
28 |
29 | logger.info(json.dumps(event))
30 |
31 | bucket = event['Records'][0]['s3']['bucket']['name']
32 | key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
33 |
34 | message = "WAFR prompts json is stored here - " + bucket + "/" + key
35 | logger.info (message)
36 | logger.info (f"insert_wafr_prompts checkpoint 1")
37 |
38 | purge_existing_data(table)
39 |
40 | try:
41 | s3_input_string = "s3://" + bucket + "/" + key
42 | logger.info("s3_input_string is : " + s3_input_string)
43 |
44 | response = s3Client.get_object(Bucket=bucket, Key=key)
45 |
46 | # Read the content of the file
47 | content = response['Body'].read().decode('utf-8')
48 |
49 | prompts_json = json.loads(content)
50 |
51 | logger.info (f"insert_wafr_prompts checkpoint 2")
52 |
53 | logger.info(json.dumps(prompts_json))
54 |
55 | # Iterate over the array of items
56 | for item_data in prompts_json['data']:
57 | # Prepare the item to be inserted
58 | item = {
59 | 'wafr_lens': item_data['wafr_lens'],
60 | 'wafr_lens_alias': item_data['wafr_lens_alias'],
61 | 'wafr_pillar': item_data['wafr_pillar'],
62 | 'wafr_pillar_id': item_data['wafr_pillar_id'],
63 | 'wafr_pillar_prompt': item_data['wafr_pillar_prompt']
64 | #'wafr_q': item_data['wafr_q']
65 | }
66 |
67 | # Insert the item into the DynamoDB table
68 | table.put_item(Item=item)
69 |
70 | logger.info (f"insert_wafr_prompts checkpoint 4")
71 |
72 | except Exception as error:
73 | logger.error(error)
74 | logger.error("S3 Object could not be opened. Check environment variable. ")
75 | status = 'Failed to insert Prompts!'
76 |
77 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
78 | 
79 | logger.info("Exiting insert_wafr_prompts at " + exit_timestamp)
80 |
81 | # Return a success response
82 | return {
83 | 'statusCode': 200,
84 | 'body': json.dumps(status)
85 | }
86 |
87 | def purge_existing_data(table):
88 | # Scan paginates at 1 MB; follow LastEvaluatedKey so all existing prompts are deleted
89 | scan_kwargs = {}
90 | with table.batch_writer() as batch:
91 | while True:
92 | existing_items = table.scan(**scan_kwargs)
93 | for item in existing_items['Items']:
94 | batch.delete_item(
95 | Key={
96 | 'wafr_lens': item['wafr_lens'],
97 | 'wafr_pillar': item['wafr_pillar']
98 | }
99 | )
100 | if 'LastEvaluatedKey' not in existing_items:
101 | break
102 | scan_kwargs['ExclusiveStartKey'] = existing_items['LastEvaluatedKey']
103 | 
--------------------------------------------------------------------------------
/lambda_dir/prepare_wafr_review/prepare_wafr_review.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 | from botocore.client import Config
11 | from botocore.exceptions import ClientError
12 |
13 | s3 = boto3.resource('s3')
14 | dynamodb = boto3.resource('dynamodb')
15 | well_architected_client = boto3.client('wellarchitected')
16 |
17 | WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME = os.environ['WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME']
18 | UPLOAD_BUCKET_NAME = os.environ['UPLOAD_BUCKET_NAME']
19 | REGION = os.environ['REGION']
20 | WAFR_PROMPT_DD_TABLE_NAME = os.environ['WAFR_PROMPT_DD_TABLE_NAME']
21 | KNOWLEDGE_BASE_ID=os.environ['KNOWLEDGE_BASE_ID']
22 | LLM_MODEL_ID=os.environ['LLM_MODEL_ID']
23 | BEDROCK_SLEEP_DURATION = os.environ['BEDROCK_SLEEP_DURATION']
24 | BEDROCK_MAX_TRIES = os.environ['BEDROCK_MAX_TRIES']
25 |
26 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
27 | bedrock_client = boto3.client('bedrock-runtime',region_name=REGION)
28 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
29 |
30 | logger = logging.getLogger()
31 | logger.setLevel(logging.INFO)
32 |
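# Resolves the lens alias from the prompts table. This assumes at least one prompt
# row exists for the given lens, since the first returned item is indexed directly.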
33 | def get_lens_alias (wafr_lens):
34 | wafr_prompts_table = dynamodb.Table(WAFR_PROMPT_DD_TABLE_NAME)
35 |
36 | response = wafr_prompts_table.query(
37 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt, wafr_lens_alias',
38 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens),
39 | ScanIndexForward=True
40 | )
41 |
42 | print (f"response wafr_lens_alias: " + response['Items'][0]['wafr_lens_alias'])
43 |
44 | return response['Items'][0]['wafr_lens_alias']
45 |
46 | def lambda_handler(event, context):
47 |
48 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
49 |
50 | logger.info('prepare_wafr_review invoked at ' + entry_timestamp)
51 |
52 | logger.info(json.dumps(event))
53 |
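# The state machine receives the raw SQS Records array from start_wafr_review,
# so the submission payload sits in the body of the first record.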
54 | data = json.loads(event[0]['body'])
55 |
56 | try:
57 |
58 | logger.info("WAFR_PROMPT_DD_TABLE_NAME: " + WAFR_PROMPT_DD_TABLE_NAME)
59 | logger.info("WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: " + WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
60 | logger.info("REGION: " + REGION)
61 | logger.info("UPLOAD_BUCKET_NAME: " + UPLOAD_BUCKET_NAME)
62 | logger.info("LLM_MODEL_ID: " + LLM_MODEL_ID)
63 | logger.info("KNOWLEDGE_BASE_ID: " + KNOWLEDGE_BASE_ID)
64 | logger.info("BEDROCK_SLEEP_DURATION: " + BEDROCK_SLEEP_DURATION)
65 | logger.info("BEDROCK_MAX_TRIES: " + BEDROCK_MAX_TRIES)
66 |
67 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
68 |
69 | analysis_id = data['analysis_id']
70 | analysis_submitter = data['analysis_submitter']
71 | analysis_response = wafr_accelerator_runs_table.get_item(
72 | Key={
73 | 'analysis_id': analysis_id,
74 | 'analysis_submitter': analysis_submitter
75 | }
76 | )
77 | analysis = analysis_response.get("Item", {})
78 |
79 | logger.info(f'analysis: {json.dumps(analysis)}')
80 |
81 | name = data.get("analysis_name", "")
82 | wafr_lens = data['wafr_lens']
83 |
84 | workload_name = data['analysis_name']
85 | workload_desc = analysis['workload_desc']
86 | environment = analysis['environment']
87 | review_owner = analysis['review_owner']
88 | industry_type = analysis['industry_type']
89 | creation_date = analysis['creation_date']
90 |
91 | # Get the lens ARN from the friendly name
92 | lenses = analysis["lenses"]
93 |
94 | aws_regions = [REGION]
95 |
96 | logger.info('creation_date: ' + creation_date)
97 |
98 | wafr_workload_id = create_workload(well_architected_client, workload_name,
99 | workload_desc, environment, lenses, review_owner, industry_type, aws_regions)
100 |
101 | review_status = "In Progress"
102 | document_s3_key = data['document_s3_key']
103 |
104 | wafr_accelerator_run_key = {
105 | 'analysis_id': analysis_id,
106 | 'analysis_submitter': analysis_submitter
107 | }
108 |
109 | response = wafr_accelerator_runs_table.update_item(
110 | Key=wafr_accelerator_run_key,
111 | UpdateExpression="SET review_status = :val1, wafr_workload_id = :val2",
112 | ExpressionAttributeValues={
113 | ':val1': review_status,
114 | ':val2': wafr_workload_id
115 | },
116 | ReturnValues='UPDATED_NEW'
117 | )
118 | logger.info(f'wafr-accelerator-runs dynamodb table summary update response: {response}')
119 |
120 | pillars = data['selected_pillars']
121 | pillar_string = ",".join(pillars)
122 | logger.info("Final pillar_string: " + pillar_string)
123 |
124 | logger.debug("prepare_wafr_review checkpoint 1")
125 |
126 | # Prepare the item to be returned
127 | wafr_accelerator_run_items = {
128 | 'analysis_id': analysis_id,
129 | 'analysis_submitter': analysis_submitter,
130 | 'analysis_title': name,
131 | 'selected_lens': wafr_lens,
132 | 'creation_date': creation_date,
133 | 'review_status': review_status,
134 | 'selected_wafr_pillars': pillars,
135 | 'document_s3_key': document_s3_key,
136 | 'analysis_owner': name,
137 | 'wafr_workload_id': wafr_workload_id,
138 | 'workload_name': workload_name,
139 | 'workload_desc': workload_desc,
140 | 'environment': environment,
141 | 'review_owner': review_owner,
142 | 'industry_type': industry_type,
143 | 'lens_alias': lenses
144 | }
145 |
146 | logger.debug('prepare_wafr_review checkpoint 2')
147 |
148 | return_response = {}
149 |
150 | return_response['wafr_accelerator_run_items'] = wafr_accelerator_run_items
151 | return_response['wafr_accelerator_run_key'] = wafr_accelerator_run_key
152 | return_response['extract_output_bucket'] = UPLOAD_BUCKET_NAME
153 | return_response['pillars_string'] = pillar_string
154 | return_response['wafr_accelerator_runs_table'] = WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME
155 | return_response['wafr_prompts_table'] = WAFR_PROMPT_DD_TABLE_NAME
156 | return_response['region'] = REGION
157 | return_response['knowledge_base_id'] = KNOWLEDGE_BASE_ID
158 | return_response['llm_model_id'] = LLM_MODEL_ID
159 | return_response['wafr_workload_id'] = wafr_workload_id
160 | return_response['lens_alias'] = lenses
161 |
162 | except Exception as error:
163 | update_analysis_status (data, error)
164 | raise Exception (f'Exception caught in prepare_wafr_review: {error}')
165 |
166 | logger.info(f'return_response: {return_response}')
167 |
168 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
169 | 
170 | logger.info(f'Exiting prepare_wafr_review at {exit_timestamp}')
171 |
172 | return {
173 | 'statusCode': 200,
174 | 'body': json.dumps(return_response)
175 | }
176 |
177 | def create_workload(client, workload_name, description, environment, lenses, review_owner, industry_type, aws_regions, architectural_design=None):
178 | workload_params = {
179 | 'WorkloadName': workload_name,
180 | 'Description': description,
181 | 'Environment': environment,
182 | 'ReviewOwner': review_owner,
183 | 'IndustryType': industry_type,
184 | 'Lenses': [lenses] if isinstance(lenses, str) else lenses,
185 | 'AwsRegions': aws_regions
186 | }
187 | if architectural_design:
188 | workload_params['ArchitecturalDesign'] = architectural_design
189 | response = client.create_workload(**workload_params)
190 | return response['WorkloadId']
191 |
192 | def update_analysis_status (data, error):
193 |
194 | dynamodb = boto3.resource('dynamodb')
195 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
196 |
197 | wafr_accelerator_run_key = {
198 | 'analysis_id': data['analysis_id'],
199 | 'analysis_submitter': data['analysis_submitter']
200 | }
201 |
202 | wafr_accelerator_runs_table.update_item(
203 | Key=wafr_accelerator_run_key,
204 | UpdateExpression="SET review_status = :val",
205 | ExpressionAttributeValues={':val': "Errored"},
206 | ReturnValues='UPDATED_NEW'
207 | )
208 | logger.error(f"Exception caught in prepare_wafr_review: {error}")
--------------------------------------------------------------------------------
/lambda_dir/replace_ui_tokens/replace_ui_tokens.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import urllib.parse
6 | import re
7 | import logging
8 |
9 | WAFR_ACCELERATOR_QUEUE_URL = os.environ['WAFR_ACCELERATOR_QUEUE_URL']
10 | WAFR_UI_BUCKET_NAME = os.environ['WAFR_UI_BUCKET_NAME']
11 | WAFR_UI_BUCKET_ARN = os.environ['WAFR_UI_BUCKET_ARN']
12 | REGION_NAME = os.environ['REGION_NAME']
13 | WAFR_RUNS_TABLE=os.environ['WAFR_RUNS_TABLE']
14 | EC2_INSTANCE_ID=os.environ['EC2_INSTANCE_ID']
15 | UPLOAD_BUCKET_NAME=os.environ['UPLOAD_BUCKET_NAME']
16 | PARAMETER_1_NEW_WAFR_REVIEW=os.environ['PARAMETER_1_NEW_WAFR_REVIEW']
17 | PARAMETER_2_EXISTING_WAFR_REVIEWS=os.environ['PARAMETER_2_EXISTING_WAFR_REVIEWS']
18 | PARAMETER_UI_SYNC_INITAITED_FLAG=os.environ['PARAMETER_UI_SYNC_INITAITED_FLAG']
19 | PARAMETER_3_LOGIN_PAGE=os.environ['PARAMETER_3_LOGIN_PAGE']
20 | PARAMETER_COGNITO_USER_POOL_ID = os.environ['PARAMETER_COGNITO_USER_POOL_ID']
21 | PARAMETER_COGNITO_USER_POOL_CLIENT_ID = os.environ['PARAMETER_COGNITO_USER_POOL_CLIENT_ID']
22 | GUARDRAIL_ID = os.environ['GUARDRAIL_ID']
23 |
24 | ssm_client = boto3.client('ssm')
25 | s3Client = boto3.client('s3')
26 | s3Resource = boto3.resource('s3')
27 | ssm_parameter_store = boto3.client('ssm', region_name=REGION_NAME)
28 |
29 | logger = logging.getLogger()
30 | logger.setLevel(logging.INFO)
31 |
32 | def lambda_handler(event, context):
33 |
34 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
35 |
36 | logger.info("replace_ui_tokens invoked at " + entry_timestamp)
37 |
38 | s3_script = """
39 | import boto3
40 | import os
41 |
42 | s3 = boto3.client('s3')
43 |
44 | try:
45 | bucket_name = '{{UI_CODE_BUCKET_NAME}}'
46 |
47 | objects = s3.list_objects_v2(Bucket=bucket_name)['Contents']
48 |
49 | for obj in objects:
50 | print(obj['Key'])
51 | current_directory = os.path.dirname(os.path.realpath(__file__))
52 | key = obj['Key']
53 | # Skip if the object is a directory
54 | if key.endswith('/'):
55 | continue
56 | # Create the local directory structure if it doesn't exist
57 | local_path = os.path.join(current_directory, key)
58 | os.makedirs(os.path.dirname(local_path), exist_ok=True)
59 | # Download the object
60 | s3.download_file(bucket_name, key, local_path)
61 | print(f'Downloaded: {key}')
62 | except Exception as e:
63 | print(f'Error: {e}')
64 | """
65 |
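# The {{UI_CODE_BUCKET_NAME}} placeholder in the script above is substituted in
# send_ssm_command() before the script is written to the EC2 instance and executed.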
66 | logger.info("WAFR_ACCELERATOR_QUEUE_URL: " + WAFR_ACCELERATOR_QUEUE_URL)
67 | logger.info("WAFR_UI_BUCKET_NAME: " + WAFR_UI_BUCKET_NAME)
68 | logger.info("WAFR_UI_BUCKET_ARN: " + WAFR_UI_BUCKET_ARN)
69 | logger.info("REGION_NAME: " + REGION_NAME)
70 | logger.info("WAFR_RUNS_TABLE: " + WAFR_RUNS_TABLE)
71 | logger.info("EC2_INSTANCE_ID: " + EC2_INSTANCE_ID)
72 | logger.info("UPLOAD_BUCKET_NAME: " + UPLOAD_BUCKET_NAME)
73 | logger.info("PARAMETER_1_NEW_WAFR_REVIEW: " + PARAMETER_1_NEW_WAFR_REVIEW)
74 | logger.info("PARAMETER_2_EXISTING_WAFR_REVIEWS: " + PARAMETER_2_EXISTING_WAFR_REVIEWS)
75 | logger.info("PARAMETER_UI_SYNC_INITAITED_FLAG: " + PARAMETER_UI_SYNC_INITAITED_FLAG)
76 | logger.info("PARAMETER_3_LOGIN_PAGE: " + PARAMETER_3_LOGIN_PAGE)
77 | logger.info("PARAMETER_COGNITO_USER_POOL_ID: " + PARAMETER_COGNITO_USER_POOL_ID)
78 | logger.info("PARAMETER_COGNITO_USER_POOL_CLIENT_ID: " + PARAMETER_COGNITO_USER_POOL_CLIENT_ID)
79 | logger.info("GUARDRAIL_ID: " + GUARDRAIL_ID)
80 |
81 | logger.info(json.dumps(event))
82 |
83 | status = 'Everything done successfully - token update, s3 script creation and execution!'
84 |
85 | bucket = event['Records'][0]['s3']['bucket']['name']
86 | key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
87 |
88 | logger.info(f"replace_ui_tokens invoked for {bucket} / {key}")
89 |
90 | logger.info (f"replace_ui_tokens checkpoint 1")
91 |
92 | try:
93 | # Check if the input key has the "tokenized-pages/" prefix
94 | if (key.startswith('tokenized-pages')):
95 |
96 | logger.info (f"replace_ui_tokens checkpoint 2")
97 |
98 | # Replace the prefix with "pages/"
99 | output_key = 'pages/' + key.split('tokenized-pages/')[1]
100 |
101 | if(key.startswith('tokenized-pages/1_New_WAFR_Review.py')):
102 | # Read the file from the input key
103 | response = s3Client.get_object(Bucket=bucket, Key=key)
104 | file_content = response['Body'].read().decode('utf-8')
105 |
106 | #1_New_WAFR_Run.py'
107 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
108 | updated_content = re.sub(r'{{SQS_QUEUE_NAME}}', WAFR_ACCELERATOR_QUEUE_URL, updated_content)
109 | updated_content = re.sub(r'{{WAFR_UPLOAD_BUCKET_NAME}}', UPLOAD_BUCKET_NAME, updated_content)
110 | updated_content = re.sub(r'{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}', WAFR_RUNS_TABLE, updated_content)
111 |
112 | # Write the updated content to the output key
113 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
114 |
115 | ssm_parameter_store.put_parameter(
116 | Name=PARAMETER_1_NEW_WAFR_REVIEW,
117 | Value=f'True',
118 | Type='String',
119 | Overwrite=True
120 | )
121 |
122 | logger.info (f"replace_ui_tokens checkpoint 3a")
123 |
124 | elif(key.startswith('tokenized-pages/2_Existing_WAFR_Reviews.py')):
125 | response = s3Client.get_object(Bucket=bucket, Key=key)
126 | file_content = response['Body'].read().decode('utf-8')
127 |
128 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
129 | updated_content = re.sub(r'{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}', WAFR_RUNS_TABLE, updated_content)
130 | updated_content = re.sub(r'{{GUARDRAIL_ID}}', GUARDRAIL_ID, updated_content)
131 |
132 | # Write the updated content to the output key
133 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
134 |
135 | ssm_parameter_store.put_parameter(
136 | Name=PARAMETER_2_EXISTING_WAFR_REVIEWS,
137 | Value=f'True',
138 | Type='String',
139 | Overwrite=True
140 | )
141 | logger.info (f"replace_ui_tokens checkpoint 3b")
142 |
143 | elif(key.startswith('tokenized-pages/1_Login.py')):
144 | response = s3Client.get_object(Bucket=bucket, Key=key)
145 | file_content = response['Body'].read().decode('utf-8')
146 |
147 | updated_content = re.sub(r'{{REGION}}', REGION_NAME, file_content)
148 | updated_content = re.sub(r'{{PARAMETER_COGNITO_USER_POOL_ID}}', PARAMETER_COGNITO_USER_POOL_ID, updated_content)
149 | updated_content = re.sub(r'{{PARAMETER_COGNITO_USER_POOL_CLIENT_ID}}', PARAMETER_COGNITO_USER_POOL_CLIENT_ID, updated_content)
150 |
151 | # Write the updated content to the output key
152 | s3Client.put_object(Bucket=bucket, Key=output_key, Body=updated_content.encode('utf-8'))
153 |
154 | ssm_parameter_store.put_parameter(
155 | Name=PARAMETER_3_LOGIN_PAGE,
156 | Value=f'True',
157 | Type='String',
158 | Overwrite=True
159 | )
160 |
161 | logger.info (f"replace_ui_tokens checkpoint 3c")
162 | else: # all other files in the folder
163 | response = s3Client.copy_object(Bucket=bucket, CopySource=key, Key=output_key)
164 | logger.info (f"replace_ui_tokens checkpoint 3d")
165 |
166 | logger.info(f'File processed: {key} -> {output_key}')
167 |
168 | logger.info (f"replace_ui_tokens checkpoint 3d")
169 |
170 | else:
171 | logger.info(f'Skipping file: {key} (does not have the "tokenized-pages/" prefix)')
172 |
173 |
174 | except Exception as error:
175 | logger.info(error)
176 | logger.info("replace_ui_tokens: S3 Object could not be opened. Check environment variable.")
177 | status = 'File updates failed!'
178 |
179 | update1 = update2 = ui_sync_flag = update3 = "False"
180 |
181 | try:
182 | update1 = ssm_parameter_store.get_parameter(Name=PARAMETER_1_NEW_WAFR_REVIEW, WithDecryption=True)['Parameter']['Value']
183 | logger.info (f"replace_ui_tokens checkpoint 4a: update1 (PARAMETER_1_NEW_WAFR_REVIEW) = " + update1)
184 | update2 = ssm_parameter_store.get_parameter(Name=PARAMETER_2_EXISTING_WAFR_REVIEWS, WithDecryption=True)['Parameter']['Value']
185 | logger.info (f"replace_ui_tokens checkpoint 4b: update2 (PARAMETER_2_EXISTING_WAFR_REVIEWS) = " + update2)
186 | update3 = ssm_parameter_store.get_parameter(Name=PARAMETER_3_LOGIN_PAGE, WithDecryption=True)['Parameter']['Value']
187 | logger.info (f"update_login_page checkpoint 4c: update3 (PARAMETER_3_LOGIN_PAGE) = " + update3)
188 | ui_sync_flag = ssm_parameter_store.get_parameter(Name=PARAMETER_UI_SYNC_INITAITED_FLAG, WithDecryption=True)['Parameter']['Value']
189 | logger.info (f"ui_sync_flag checkpoint 4d: ui_synC_flag (PARAMETER_UI_SYNC_INITAITED_FLAG) = " + ui_sync_flag)
190 | except Exception as error:
191 | logger.info(error)
192 | logger.info("One of the parameters is missing, wait for the next event")
193 |
194 | if( ui_sync_flag == "False"):
195 | if (update1 == "True") and (update2 == "True") and (update3 == "True"):
196 | logger.info (f"replace_ui_tokens checkpoint 4d : All the parameters are true, sending SSM command!")
197 | send_ssm_command (EC2_INSTANCE_ID, s3_script, WAFR_UI_BUCKET_NAME)
198 | ssm_parameter_store.put_parameter(
199 | Name=PARAMETER_UI_SYNC_INITAITED_FLAG,
200 | Value=f'True',
201 | Type='String',
202 | Overwrite=True
203 | )
204 | logger.info (f"ui_synC_flag set to True checkpoint 4e")
205 | else:
206 | logger.info(f'ui_sync_flag is already set; files are not synced again. Reset {PARAMETER_UI_SYNC_INITAITED_FLAG} to False for the file sync to happen again.')
207 |
208 | logger.info (f"replace_ui_tokens checkpoint 5")
209 |
210 | return {
211 | 'statusCode': 200,
212 | 'body': json.dumps(status)
213 | }
214 |
215 | def send_ssm_command(ec2InstanceId, s3_script, wafrUIBucketName):
216 |
217 | s3_script = re.sub(r'{{UI_CODE_BUCKET_NAME}}', wafrUIBucketName, s3_script)
218 |
219 | # Send the SSM Run Command to process the file
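# Note: transporting the script via `echo "{s3_script}"` assumes the script text is
# shell-safe (no unescaped double quotes, backticks or dollar signs); a heredoc or
# an S3 download would be a more robust way to stage it.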
220 | command_id = ssm_client.send_command(
221 | InstanceIds=[ec2InstanceId], # Replace with your instance ID
222 | DocumentName='AWS-RunShellScript',
223 | Parameters={
224 | 'commands': [
225 | 'sudo mkdir -p /wafr-accelerator && cd /wafr-accelerator',
226 | f'sudo echo "{s3_script}" > /wafr-accelerator/syncUIFolder.py',
227 | 'sleep 30 && sudo chown -R ec2-user:ec2-user /wafr-accelerator',
228 | 'python3 syncUIFolder.py',
229 | 'sleep 30 && sudo chown -R ec2-user:ec2-user /wafr-accelerator'
230 | ]
231 | }
232 | )['Command']['CommandId']
233 |
234 |
235 | logger.info (f"replace_ui_tokens checkpoint 4f: command_id " + command_id)
236 |
237 | # Wait for the command execution to complete
238 | waiter = ssm_client.get_waiter('command_executed')
239 | waiter.wait(
240 | CommandId=command_id,
241 | InstanceId=ec2InstanceId, # Replace with your instance ID
242 | WaiterConfig={
243 | 'Delay': 30,
244 | 'MaxAttempts': 30
245 | }
246 | )
247 |
248 | logger.info (f"replace_ui_tokens checkpoint 4g: Wait over")
249 |
250 |
--------------------------------------------------------------------------------
/lambda_dir/start_wafr_review/start_wafr_review.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 |
8 | from boto3.dynamodb.conditions import Key
9 | from boto3.dynamodb.conditions import Attr
10 |
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 |
16 | WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME = os.environ['WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME']
17 | UPLOAD_BUCKET_NAME = os.environ['UPLOAD_BUCKET_NAME']
18 | REGION = os.environ['REGION']
19 | WAFR_PROMPT_DD_TABLE_NAME = os.environ['WAFR_PROMPT_DD_TABLE_NAME']
20 | KNOWLEDGE_BASE_ID=os.environ['KNOWLEDGE_BASE_ID']
21 | LLM_MODEL_ID=os.environ['LLM_MODEL_ID']
22 | START_WAFR_REVIEW_STATEMACHINE_ARN = os.environ['START_WAFR_REVIEW_STATEMACHINE_ARN']
23 | BEDROCK_SLEEP_DURATION = int(os.environ['BEDROCK_SLEEP_DURATION'])
24 | BEDROCK_MAX_TRIES = int(os.environ['BEDROCK_MAX_TRIES'])
25 | WAFR_REFERENCE_DOCS_BUCKET = os.environ['WAFR_REFERENCE_DOCS_BUCKET']
26 |
27 | dynamodb = boto3.resource('dynamodb')
28 | bedrock_config = Config(connect_timeout=120, region_name=REGION, read_timeout=120, retries={'max_attempts': 0})
29 | bedrock_client = boto3.client('bedrock-runtime',region_name=REGION)
30 | bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
31 |
32 | logger = logging.getLogger()
33 | logger.setLevel(logging.INFO)
34 |
35 | def lambda_handler(event, context):
36 |
37 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
38 | logger.info (f"start_wafr_review invoked at {entry_timestamp}" )
39 | logger.info(json.dumps(event))
40 |
41 | logger.info(f"REGION: {REGION}")
42 | logger.debug(f"START_WAFR_REVIEW_STATEMACHINE_ARN: {START_WAFR_REVIEW_STATEMACHINE_ARN}")
43 | logger.debug(f"WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: {WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}" )
44 |
45 | data = json.loads(event['Records'][0]['body'])
46 |
47 | return_status = ""
48 | analysis_review_type = data['analysis_review_type']
49 |
50 | try:
51 | if (analysis_review_type != 'Quick' ): #"Deep with Well-Architected Tool"
52 | logger.info("Initiating \'Deep with Well-Architected Tool\' analysis")
53 | sf = boto3.client('stepfunctions', region_name = REGION)
54 | response = sf.start_execution(stateMachineArn = START_WAFR_REVIEW_STATEMACHINE_ARN, input = json.dumps(event['Records']))
55 | logger.info (f"Step function response: {response}")
56 | return_status = 'Deep analysis commenced successfully!'
57 | else: #Quick
58 | logger.info("Executing \'Quick\' analysis")
59 | do_quick_analysis (data, context)
60 | return_status = 'Quick analysis completed successfully!'
61 | except Exception as error:
62 | handle_error (data, error)
63 | raise Exception (f'Exception caught in start_wafr_review: {error}')
64 |
65 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
66 | logger.info(f"Exiting start_wafr_review at {exit_timestamp}" )
67 |
68 | return {
69 | 'statusCode': 200,
70 | 'body': json.dumps(return_status)
71 | }
72 |
73 | def handle_error (data, error):
74 |
75 | dynamodb = boto3.resource('dynamodb')
76 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
77 |
78 | # Define the key for the item you want to update
79 | wafr_accelerator_run_key = {
80 | 'analysis_id': data['analysis_id'], # Replace with your partition key name and value
81 | 'analysis_submitter': data['analysis_submitter'] # If you have a sort key, replace with its name and value
82 | }
83 |
84 | # Handle errors and update DynamoDB status
85 | wafr_accelerator_runs_table.update_item(
86 | Key=wafr_accelerator_run_key,
87 | UpdateExpression="SET review_status = :val",
88 | ExpressionAttributeValues={':val': "Errored"},
89 | ReturnValues='UPDATED_NEW'
90 | )
91 | logger.error(f"Exception caught in start_wafr_review: {error}")
92 |
93 | def get_pillar_string (pillars):
94 | pillar_string =""
95 |
96 | for item in pillars:
97 | logger.info (f"selected_pillars: {item}")
98 | pillar_string = pillar_string + item + ","
99 | logger.info ("pillar_string: " + pillar_string)
100 |
101 | pillar_string = pillar_string.rstrip(',')
102 |
103 | return pillar_string
104 |
105 | def do_quick_analysis (data, context):
106 |
107 | logger.info(f"WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME: {WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}")
108 | logger.info(f"UPLOAD_BUCKET_NAME: {UPLOAD_BUCKET_NAME}")
109 | logger.info(f"WAFR_PROMPT_DD_TABLE_NAME: {WAFR_PROMPT_DD_TABLE_NAME}")
110 | logger.info(f"KNOWLEDGE_BASE_ID: {KNOWLEDGE_BASE_ID}")
111 | logger.info(f"LLM_MODEL_ID: {LLM_MODEL_ID}")
112 | logger.info(f"BEDROCK_SLEEP_DURATION: {BEDROCK_SLEEP_DURATION}")
113 | logger.info(f"BEDROCK_SLEEP_DURATION: {BEDROCK_MAX_TRIES}")
114 |
115 | wafr_accelerator_runs_table = dynamodb.Table(WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME)
116 | wafr_prompts_table = dynamodb.Table(WAFR_PROMPT_DD_TABLE_NAME)
117 |
118 | logger.info(json.dumps(data))
119 |
120 | analysis_id = data['analysis_id']
121 | name = data['analysis_name']
122 | wafr_lens = data['wafr_lens']
123 |
124 | analysis_submitter = data['analysis_submitter']
125 | document_s3_key = data['document_s3_key']
126 |
127 | pillars = data['selected_pillars']
128 | pillar_string = get_pillar_string (pillars)
129 | logger.info ("Final pillar_string: " + pillar_string)
130 |
131 | logger.debug ("do_quick_analysis checkpoint 1")
132 |
133 | wafr_accelerator_run_key = {
134 | 'analysis_id': analysis_id,
135 | 'analysis_submitter': analysis_submitter
136 | }
137 |
138 | response = wafr_accelerator_runs_table.update_item(
139 | Key=wafr_accelerator_run_key,
140 | UpdateExpression="SET review_status = :val",
141 | ExpressionAttributeValues={':val': "In Progress"},
142 | ReturnValues='UPDATED_NEW'
143 | )
144 |
145 | logger.info (f"wafr-accelerator-runs dynamodb table summary update response: {response}" )
146 | logger.debug ("do_quick_analysis checkpoint 2")
147 |
148 | try:
149 |
150 | # Get the bucket object
151 | output_bucket = s3.Bucket(UPLOAD_BUCKET_NAME)
152 |
153 | # Extract document text and write to s3
154 | extracted_document_text = extract_document_text(UPLOAD_BUCKET_NAME, document_s3_key, output_bucket, wafr_accelerator_runs_table, wafr_accelerator_run_key, REGION)
155 |
156 | logger.debug ("do_quick_analysis checkpoint 3")
157 |
158 | # Generate solution summary
159 | summary = generate_solution_summary (extracted_document_text, wafr_accelerator_runs_table, wafr_accelerator_run_key)
160 |
161 | logger.info ("Generated architecture summary:" + summary)
162 |
163 | logger.debug ("do_quick_analysis checkpoint 4")
164 |
165 | partition_key_value = wafr_lens
166 |
167 | sort_key_values = pillars
168 |
169 | logger.info ("wafr_lens: " + wafr_lens)
170 |
171 | pillar_responses = []
172 | pillar_counter = 0
173 |
174 | #Get All the pillar prompts in a loop
175 | for item in pillars:
176 | logger.info (f"selected_pillars: {item}")
177 | response = wafr_prompts_table.query(
178 | ProjectionExpression ='wafr_pillar_id, wafr_pillar_prompt',
179 | KeyConditionExpression=Key('wafr_lens').eq(wafr_lens) & Key('wafr_pillar').eq(item),
180 | ScanIndexForward=True
181 | )
182 |
183 | logger.info (f"response wafr_pillar_id: " + str(response['Items'][0]['wafr_pillar_id']))
184 | logger.info (f"response wafr_pillar_prompt: " + response['Items'][0]['wafr_pillar_prompt'])
185 |
186 | logger.debug ("document_s3_key.rstrip('.'): " + document_s3_key.rstrip('.'))
187 | logger.debug ("document_s3_key[:document_s3_key.rfind('.')]: " + document_s3_key[:document_s3_key.rfind('.')] )
188 | pillar_review_prompt_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + wafr_lens + "-" + item + "-prompt.txt"
189 | pillar_review_output_filename = document_s3_key[:document_s3_key.rfind('.')]+ "-" + wafr_lens + "-" + item + "-output.txt"
190 |
191 | logger.info (f"pillar_review_prompt_filename: {pillar_review_prompt_filename}")
192 | logger.info (f"pillar_review_output_filename: {pillar_review_output_filename}")
193 |
194 | pillar_specific_prompt_question = response['Items'][0]['wafr_pillar_prompt']
195 |
196 | claude_prompt_body = bedrock_prompt(wafr_lens, item, pillar_specific_prompt_question, KNOWLEDGE_BASE_ID, extracted_document_text, WAFR_REFERENCE_DOCS_BUCKET)
197 | output_bucket.put_object(Key=pillar_review_prompt_filename, Body=claude_prompt_body)
198 |
199 | logger.debug (f"do_quick_analysis checkpoint 5.{pillar_counter}")
200 |
201 | streaming = True
202 |
203 | pillar_review_output = invoke_bedrock(streaming, claude_prompt_body, pillar_review_output_filename, output_bucket)
204 |
205 | # Comment the next line if you would like to retain the prompts files
206 | output_bucket.Object(pillar_review_prompt_filename).delete()
207 |
208 | logger.debug (f"do_quick_analysis checkpoint 6.{pillar_counter}")
209 |
210 | logger.info ("pillar_id" + str(response['Items'][0]['wafr_pillar_id']))#
211 |
212 | pillarResponse = {
213 | 'pillar_name': item,
214 | 'pillar_id': str(response['Items'][0]['wafr_pillar_id']),
215 | 'llm_response': pillar_review_output
216 | }
217 |
218 | pillar_responses.append(pillarResponse)
219 |
220 | pillar_counter += 1
221 |
222 | logger.debug (f"do_quick_analysis checkpoint 7")
223 |
224 | attribute_updates = {
225 | 'pillars': {
226 | 'Action': 'ADD'
227 | }
228 | }
229 |
230 | response = wafr_accelerator_runs_table.update_item(
231 | Key=wafr_accelerator_run_key,
232 | UpdateExpression="SET pillars = :val",
233 | ExpressionAttributeValues={':val': pillar_responses},
234 | ReturnValues='UPDATED_NEW'
235 | )
236 |
237 | logger.info (f"dynamodb status update response: {response}" )
238 | logger.debug (f"do_quick_analysis checkpoint 8")
239 |
240 | response = wafr_accelerator_runs_table.update_item(
241 | Key=wafr_accelerator_run_key,
242 | UpdateExpression="SET review_status = :val",
243 | ExpressionAttributeValues={':val': "Completed"},
244 | ReturnValues='UPDATED_NEW'
245 | )
246 |
247 | logger.info (f"dynamodb status update response: {response}" )
248 | except Exception as error:
249 | handle_error (data, error)
250 | raise Exception (f'Exception caught in do_quick_analysis: {error}')
251 |
252 | logger.debug (f"do_quick_analysis checkpoint 9")
253 |
254 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
255 | logger.info("Exiting do_quick_analysis at " + exit_timestamp)
256 |
257 |
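# Asynchronous Textract flow: start_document_text_detection kicks off the job, the
# first loop below polls get_document_text_detection until it completes, and a
# second NextToken loop pages through the results before the LINE blocks are joined.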
258 | def extract_document_text(upload_bucket_name, document_s3_key, output_bucket, wafr_accelerator_runs_table, wafr_accelerator_run_key, region):
259 |
260 | textract_config = Config(retries = dict(max_attempts = 5))
261 | textract_client = boto3.client('textract', region_name=region, config=textract_config)
262 |
263 | logger.debug ("extract_document_text checkpoint 1")
264 |
265 | response = textract_client.start_document_text_detection(
266 | DocumentLocation={
267 | 'S3Object': {
268 | 'Bucket': upload_bucket_name,
269 | 'Name': document_s3_key
270 | }
271 | }
272 | )
273 |
274 | job_id = response["JobId"]
275 |
276 | logger.debug ("extract_document_text checkpoint 2")
277 |
278 | # Wait for the job to complete
279 | while True:
280 | response = textract_client.get_document_text_detection(JobId=job_id)
281 | status = response["JobStatus"]
282 | if status == "SUCCEEDED": break
283 | if status == "FAILED": raise Exception(f"Textract text detection job {job_id} failed")
284 | time.sleep(5)
285 | logger.debug ("extract_document_text checkpoint 3")
286 |
287 | pages = []
288 | next_token = None
289 | while True:
290 | if next_token:
291 | response = textract_client.get_document_text_detection(JobId=job_id, NextToken=next_token)
292 | else:
293 | response = textract_client.get_document_text_detection(JobId=job_id)
294 | pages.append(response)
295 | if 'NextToken' in response:
296 | next_token = response['NextToken']
297 | else:
298 | break
299 |
300 | logger.debug ("extract_document_text checkpoint 4")
301 |
302 | # Extract the text from all pages
303 | extracted_text = ""
304 | for page in pages:
305 | for item in page["Blocks"]:
306 | if item["BlockType"] == "LINE":
307 | extracted_text += item["Text"] + "\n"
308 |
314 |
315 | # Update the item
316 | response = wafr_accelerator_runs_table.update_item(
317 | Key=wafr_accelerator_run_key,
318 | UpdateExpression="SET extracted_document = :val",
319 | ExpressionAttributeValues={':val': extracted_text},
320 | ReturnValues='UPDATED_NEW'
321 | )
322 |
323 |     # Derive the output key from the uploaded document key, swapping the extension
324 |     logger.debug("document_s3_key without extension: " + document_s3_key[:document_s3_key.rfind('.')])
325 |     output_filename = document_s3_key[:document_s3_key.rfind('.')] + "-extracted-text.txt"
326 |
327 |     logger.info(f"Extracted document text output filename: {output_filename}")
328 | output_bucket.put_object(Key=output_filename, Body=bytes(extracted_text, encoding='utf-8'))
329 |
330 | return extracted_text
331 |
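# generate_solution_summary: asks the Bedrock model for a ~250-word summary of the
# extracted document text and persists it on the run item as 'architecture_summary'
# so the UI can display it alongside the pillar findings.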
332 | def generate_solution_summary (extracted_document_text, wafr_accelerator_runs_table, wafr_accelerator_run_key):
333 |
334 |     prompt = f"The following document is a solution architecture document that you are reviewing as an AWS Cloud Solutions Architect. Please summarise the following solution in 250 words. Begin directly with the architecture summary; don't provide any other opening or closing statements.\n\n{extracted_document_text}\n\n"
335 |
336 | response = bedrock_client.invoke_model(
337 | modelId= LLM_MODEL_ID,
338 | contentType="application/json",
339 | accept="application/json",
340 | body = json.dumps({
341 | "anthropic_version": "bedrock-2023-05-31",
342 |             "max_tokens": 4096,  # keep within the model's output token limit (200000 is not a valid value)
343 | "messages": [
344 | {
345 | "role": "user",
346 | "content": [{"type": "text", "text": prompt}],
347 | }
348 | ],
349 | }
350 | )
351 | )
352 |
353 | logger.info(f"generate_solution_summary: response: {response}")
354 |
355 | # Extract the summary
356 | response_body = json.loads(response['body'].read())
357 | summary = response_body['content'][0]['text']
358 |
359 |     logger.debug(f"generate_solution_summary checkpoint 1")
360 |
367 |     logger.debug(f"generate_solution_summary checkpoint 2")
368 |
369 | response = wafr_accelerator_runs_table.update_item(
370 | Key=wafr_accelerator_run_key,
371 | UpdateExpression="SET architecture_summary = :val",
372 | ExpressionAttributeValues={':val': summary},
373 | ReturnValues='UPDATED_NEW'
374 | )
375 |
376 | return summary
377 |
378 | def invoke_bedrock(streaming, claude_prompt_body, pillar_review_output_filename, output_bucket):
379 |
380 | pillar_review_output = ""
381 | retries = 1
382 | max_retries = BEDROCK_MAX_TRIES
383 |
384 | while retries <= max_retries:
385 | try:
386 | if(streaming):
387 | streaming_response = bedrock_client.invoke_model_with_response_stream(
388 | modelId=LLM_MODEL_ID,
389 | body=claude_prompt_body,
390 | )
391 |
392 | logger.debug (f"invoke_bedrock checkpoint 1.{retries}")
393 | stream = streaming_response.get("body")
394 |
395 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
396 |
397 | for chunk in parse_stream(stream):
398 | pillar_review_output += chunk
399 |
400 | # Uncomment next line if you would like to see response files for each question too.
401 | # output_bucket.put_object(Key=pillar_review_output_filename, Body=bytes(pillar_review_output, encoding='utf-8'))
402 |
403 | return pillar_review_output
404 |
405 | else:
406 | non_streaming_response = bedrock_client.invoke_model(
407 | modelId=LLM_MODEL_ID,
408 | body=claude_prompt_body,
409 | )
410 |
411 | response_json = json.loads(non_streaming_response["body"].read().decode("utf-8"))
412 |
413 | logger.info (response_json)
414 |
415 | logger.debug (f"invoke_bedrock checkpoint 1.{retries}")
416 |
417 | # Extract the response text.
418 | pillar_review_output = response_json["content"][0]["text"]
419 |
420 | logger.debug (pillar_review_output)
421 | logger.debug (f"invoke_bedrock checkpoint 2.{retries}")
422 |
423 | # Uncomment next line if you would like to see response files for each question too.
424 | # output_bucket.put_object(Key=pillar_review_output_filename, Body=pillar_review_output)
425 |
426 | return pillar_review_output
427 |
428 |         except Exception as e:
429 |             logger.info(f"Attempt {retries} of {max_retries} failed with exception: {e}; sleeping before retrying")
430 |             retries += 1
431 |             time.sleep(BEDROCK_SLEEP_DURATION) # delay before the next attempt
432 |
433 | logger.info(f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
434 | raise Exception (f"Maximum retries ({max_retries}) exceeded. Unable to invoke the model.")
435 |
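# A possible refinement for invoke_bedrock above (a sketch, not part of the sample):
# replace the fixed BEDROCK_SLEEP_DURATION delay with jittered exponential backoff so
# that concurrent executions do not retry Bedrock in lockstep when throttled. The
# base_delay/max_delay defaults below are illustrative assumptions, not repo values.
import random

def backoff_delay(attempt, base_delay=2.0, max_delay=60.0):
    # Full jitter: sleep between 0 and min(max_delay, base_delay * 2**attempt) seconds
    return random.uniform(0, min(max_delay, base_delay * (2 ** attempt)))

# Hypothetical usage in the except block above: time.sleep(backoff_delay(retries))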
436 | def get_lens_filter(kb_bucket, wafr_lens):
437 |
438 |     # Map lens name prefixes to their knowledge base folders - makes it easy to add further lenses
439 | lens_mapping = {
440 | "Financial Services Industry Lens": "financialservices",
441 | "Data Analytics Lens": "dataanalytics"
442 | }
443 |
444 | # Get lens name or default to "wellarchitected"
445 | lens = next(
446 | (value for prefix, value in lens_mapping.items()
447 | if wafr_lens.startswith(prefix)),
448 | "wellarchitected"
449 | )
450 |
451 | # If wellarchitected lens then also use the overview documentation
452 | if lens == "wellarchitected":
453 | lens_filter = {
454 | "orAll": [
455 | {
456 | "startsWith": {
457 | "key": "x-amz-bedrock-kb-source-uri",
458 | "value": f"s3://{kb_bucket}/{lens}"
459 | }
460 | },
461 | {
462 | "startsWith": {
463 | "key": "x-amz-bedrock-kb-source-uri",
464 | "value": f"s3://{kb_bucket}/overview"
465 | }
466 | }
467 | ]
468 | }
469 | else: # Just use the lens documentation
470 | lens_filter = {
471 | "startsWith": {
472 | "key": "x-amz-bedrock-kb-source-uri",
473 | "value": f"s3://{kb_bucket}/{lens}/"
474 | }
475 | }
476 |
477 | logger.info(f"get_lens_filter: {json.dumps(lens_filter)}")
478 | return lens_filter
479 |
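# Illustration only (hypothetical bucket name): a lens-specific prefix filter keeps
# retrieval scoped to that lens's documentation, e.g.
#   get_lens_filter("my-kb-bucket", "Financial Services Industry Lens")
#   -> {"startsWith": {"key": "x-amz-bedrock-kb-source-uri",
#                      "value": "s3://my-kb-bucket/financialservices/"}}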
480 | def bedrock_prompt(wafr_lens, pillar, questions, kb_id, document_content=None, wafr_reference_bucket = None):
481 |
482 | lens_filter = get_lens_filter(wafr_reference_bucket, wafr_lens)
483 | response = retrieve(questions, kb_id, lens_filter)
484 |
485 | retrievalResults = response['retrievalResults']
486 | contexts = get_contexts(retrievalResults)
487 |
488 | system_prompt = f"""You are an AWS Cloud Solutions Architect who specializes in reviewing solution architecture documents against the AWS Well-Architected Framework, using a process called the Well-Architected Framework Review (WAFR).
489 | The WAFR process consists of evaluating the provided solution architecture document against the 6 pillars of the specified AWS Well-Architected Framework lens, namely:
490 | Operational Excellence Pillar
491 | Security Pillar
492 | Reliability Pillar
493 | Performance Efficiency Pillar
494 | Cost Optimization Pillar
495 | Sustainability Pillar
496 |
497 | A solution architecture document is provided below in the "uploaded_document" section that you will evaluate by answering the questions provided in the "pillar_questions" section, in accordance with the WAFR pillar indicated by the "<pillar>" section and the specified WAFR lens indicated by the "<lens>" section. Answer each and every question; skipping any question would make the entire response invalid. Follow the instructions listed under the "instructions" section below.
498 |
499 | <instructions>
500 | 1) For each question, be concise and limit responses to 350 words maximum. Responses should be specific to the specified lens (listed in the "<lens>" section) and pillar only (listed in the "<pillar>" section). Your response to each question should have five parts: 'Assessment', 'Best practices followed', 'Recommendations/Examples', 'Risks' and 'Citations'.
501 | 2) You are also provided with a Knowledge Base which has more information about the specific lens and pillar from the Well-Architected Framework. The relevant parts from the Knowledge Base will be provided under the "kb" section.
502 | 3) For each question, start your response with the 'Assessment' section, in which you will give a short summary (three to four lines) of your answer.
503 | 4) For each question,
504 | a) Provide which Best practices from the specified pillar have been followed, including the best practice titles and IDs from the respective pillar guidance for the question. List them under the 'Best practices followed' section.
505 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
506 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
507 | b) provide your recommendations on how the solution architecture should be updated to address the question's ask. If you have a relevant example, mention it clearly like so: "Example: ". List all of this under the 'Recommendations/Examples' section.
508 | c) Highlight the risks identified from not following the best practices relevant to the specific WAFR question. Categorize the overall risk for this question as one of: High, Medium, or Low. List the risks under the 'Risks' section and mention your categorization.
509 | d) Add a 'Citations' section listing the best practice IDs and headings for the best practices, recommendations, and risks from the specified lens ("<lens>") and specified pillar ("<pillar>"). If there are no citations then return 'N/A' for Citations.
510 | Example: REL01-BP03: Accommodate fixed service quotas and constraints through architecture
511 | Example: BP 15.5: Optimize your data modeling and data storage for efficient data retrieval
512 | 5) For each question, if the required information is missing or is inadequate to answer the question, then first state that the document doesn't provide any or enough information. Then, list the recommendations relevant to the question to address the gap in the solution architecture document under the 'Recommendations' section. In this case, the 'Best practices followed' section will simply state "not enough information".
513 | 6) Use Markdown formatting for your response. First, list the question in bold; the section headings for each of the five sections in your response should also be in bold. Add a Markdown new line at the end of the response.
514 | 7) Do not make any assumptions or make up information including best practice titles and ID. Your responses should only be based on the actual solution document provided in the "uploaded_document" section below.
515 | 8) Each line in the "pillar_questions" section represents a question, for example 'Question 1 -' followed by the actual question text. Do recheck that all the questions have been answered before sending back the response.
516 | </instructions>
517 | """
518 |
519 |     # Append the solution architecture document to the system prompt
520 | if document_content:
521 |         system_prompt += f"""
522 | <uploaded_document>
523 | {document_content}
524 | </uploaded_document>
525 | """
526 |
527 |     prompt = f"""
528 | <lens>
529 | {wafr_lens}
530 | </lens>
531 |
532 | <pillar>
533 | {pillar}
534 | </pillar>
535 |
536 | <kb>
537 | {contexts}
538 | </kb>
539 |
540 | <pillar_questions>
541 | {questions}
542 | </pillar_questions>
543 | """
544 |
545 | body = json.dumps({
546 | "anthropic_version": "bedrock-2023-05-31",
547 |         "max_tokens": 4096,  # keep within the model's output token limit (200000 is not a valid value)
548 | "system": system_prompt,
549 | "messages": [
550 | {
551 | "role": "user",
552 | "content": [{"type": "text", "text": prompt}],
553 | }
554 | ],
555 | })
556 | return body
557 |
558 | def retrieve(questions, kbId, lens_filter):
559 | kb_prompt = f"""For each question provide:
560 | - Recommendations
561 | - Best practices
562 | - Examples
563 | - Risks
564 | {questions}"""
565 |
566 | return bedrock_agent_client.retrieve(
567 | retrievalQuery= {
568 | 'text': kb_prompt
569 | },
570 | knowledgeBaseId=kbId,
571 | retrievalConfiguration={
572 | 'vectorSearchConfiguration':{
573 | 'numberOfResults': 20,
574 | "filter": lens_filter
575 | }
576 | }
577 | )
578 |
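# retrieve() runs a vector search against the Bedrock knowledge base, returning the top
# 20 chunks and applying lens_filter as a metadata filter so that only documentation for
# the selected lens (plus, for the default lens, the overview material) is considered.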
579 | def get_contexts(retrievalResults):
580 | contexts = []
581 | for retrievedResult in retrievalResults:
582 | contexts.append(retrievedResult['content']['text'])
583 | return contexts
584 |
585 | def parse_stream(stream):
586 | for event in stream:
587 | chunk = event.get('chunk')
588 | if chunk:
589 | message = json.loads(chunk.get("bytes").decode())
590 | if message['type'] == "content_block_delta":
591 | yield message['delta']['text'] or ""
592 | elif message['type'] == "message_stop":
593 | return "\n"
594 |
--------------------------------------------------------------------------------
/lambda_dir/update_review_status/update_review_status.py:
--------------------------------------------------------------------------------
1 | import os
2 | import boto3
3 | import json
4 | import datetime
5 | import time
6 | import logging
7 | import uuid
8 |
9 | from boto3.dynamodb.conditions import Key
10 | from boto3.dynamodb.conditions import Attr
11 | from botocore.client import Config
12 | from botocore.exceptions import ClientError
13 |
14 | s3 = boto3.resource('s3')
15 | dynamodb = boto3.resource('dynamodb')
16 | well_architected_client = boto3.client('wellarchitected')
17 |
18 | logger = logging.getLogger()
19 | logger.setLevel(logging.INFO)
20 |
21 | def lambda_handler(event, context):
22 |
23 | entry_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
24 |
25 | logger.info(f"update_review_status invoked at {entry_timestamp}" )
26 | logger.info(json.dumps(event))
27 |
28 | return_response = 'Success'
29 |
30 | # Parse the input data
31 | data = event
32 |
33 | wafr_accelerator_runs_table = dynamodb.Table(data[0]['wafr_accelerator_runs_table'])
34 | wafr_accelerator_run_key = data[0]['wafr_accelerator_run_key']
35 | wafr_workload_id = data[0]['wafr_accelerator_run_items']['wafr_workload_id']
36 |
37 | try:
38 | logger.debug(f"update_review_status checkpoint 1")
39 |
40 | # Create a milestone
41 | wafr_milestone = well_architected_client.create_milestone(
42 | WorkloadId=wafr_workload_id,
43 | MilestoneName="WAFR Accelerator Baseline",
44 | ClientRequestToken=str(uuid.uuid4())
45 | )
46 |
47 | logger.debug(f"Milestone created - {json.dumps(wafr_milestone)}")
48 |
49 | # Update the item
50 | response = wafr_accelerator_runs_table.update_item(
51 | Key=wafr_accelerator_run_key,
52 | UpdateExpression="SET review_status = :val",
53 | ExpressionAttributeValues={':val': "Completed"},
54 | ReturnValues='UPDATED_NEW'
55 | )
56 | logger.debug(f"update_review_status checkpoint 2")
57 | except Exception as error:
58 | return_response = 'Failed'
59 | logger.error(f"Exception caught in update_review_status: {error}")
60 |
61 | exit_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S-%f")
62 |
63 | logger.info(f"Exiting update_review_status at {exit_timestamp}" )
64 |
65 | return {
66 | 'statusCode': 200,
67 | 'body' : return_response
68 | }
69 |
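# For context (a sketch, not part of the sample): the milestone created above can be
# verified out of band via the Well-Architected Tool API using the same workload id.
def list_wafr_milestones(workload_id):
    # Returns (number, name) pairs for the workload's recorded milestones
    response = well_architected_client.list_milestones(WorkloadId=workload_id)
    return [(m['MilestoneNumber'], m['MilestoneName'])
            for m in response.get('MilestoneSummaries', [])]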
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aws-cdk-lib==2.154.1
2 | constructs>=10.0.0
3 | cdklabs.generative_ai_cdk_constructs==0.1.258
4 |
--------------------------------------------------------------------------------
/source.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 |
3 | rem The sole purpose of this script is to make the command
4 | rem
5 | rem source .venv/bin/activate
6 | rem
7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows.
8 | rem On Windows, this command just runs this batch file (the argument is ignored).
9 | rem
10 | rem Now we don't need to document a Windows command for activating a virtualenv.
11 |
12 | echo Executing .venv\Scripts\activate.bat for you
13 | .venv\Scripts\activate.bat
14 |
--------------------------------------------------------------------------------
/sys-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/sys-arch.png
--------------------------------------------------------------------------------
/ui_code/WAFR_Accelerator.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from PIL import Image
3 |
4 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
5 | st.warning('You are not logged in. Please log in to access this page.')
6 | st.switch_page("pages/1_Login.py")
7 |
8 | # Main content for the home page
9 | st.write("""
10 | ### What is the AWS Well-Architected Acceleration with Generative AI sample?
11 |
12 | **WAFR Accelerator** is a comprehensive sample designed to facilitate and expedite the AWS Well-Architected Framework Review process. The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications.
13 |
14 | **WAFR Accelerator** offers an intuitive interface and a set of robust features aimed at enhancing the review experience.
15 |
16 | ### Getting Started
17 |
18 | To get started, simply navigate through the features available in the navigation bar.
19 | """)
20 |
21 | # Logout function
22 | def logout():
23 | st.session_state['authenticated'] = False
24 | st.session_state.pop('username', None)
25 | st.rerun()
26 |
27 | # Add logout button in sidebar
28 | if st.sidebar.button('Logout'):
29 | logout()
30 |
--------------------------------------------------------------------------------
/ui_code/pages/3_System_Architecture.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
3 | st.warning('You are not logged in. Please log in to access this page.')
4 | st.switch_page("pages/1_Login.py")
5 |
6 | # Check authentication
7 |
8 |
9 | def architecture():
10 | st.title("Architecture")
11 |
12 | st.header("AWS Well-Architected Acceleration with Generative AI Architecture")
13 | st.image("sys-arch.png", use_container_width=True)
14 |
15 | st.header("Components")
16 | st.write("""
17 | - **Frontend:** The user interface is built using Streamlit, providing an interactive and user-friendly environment for users to conduct reviews.
18 | - **Backend:** The backend services are developed using Python and integrate with various AWS services to manage data and computations.
19 | - **Database:** Data is stored securely in DynamoDB. Amazon OpenSearch serverless is used as the knowledge base for Amazon Bedrock.
20 | - **Integration Services:** The system integrates with Amazon DynamoDB, Amazon Bedrock and AWS Well-Architected Tool APIs to fetch necessary data and provide additional functionality.
21 | - **Security:** Amazon Cognito is used for user management.
22 | """)
23 |
24 | if __name__ == "__main__":
25 | architecture()
26 |
27 | # Logout function
28 | def logout():
29 | st.session_state['authenticated'] = False
30 | st.session_state.pop('username', None)
31 | st.rerun()
32 |
33 | # Add logout button in sidebar
34 | if st.sidebar.button('Logout'):
35 | logout()
36 |
--------------------------------------------------------------------------------
/ui_code/sys-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/sample-well-architected-acceleration-with-generative-ai/b01d98bbb8ac8e16181af128b78b418b8c182a05/ui_code/sys-arch.png
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/1_Login.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import boto3
3 | import botocore
4 | import os
5 | import logging
6 |
7 | # Set up logging
8 | logging.basicConfig(level=logging.INFO)
9 | logger = logging.getLogger(__name__)
10 |
11 | # Set page configuration
12 | st.set_page_config(page_title="Login", layout="wide")
13 |
14 |
15 | # Cognito configuration
16 | COGNITO_USER_POOL_ID = '{{PARAMETER_COGNITO_USER_POOL_ID}}'
17 | COGNITO_APP_CLIENT_ID = '{{PARAMETER_COGNITO_USER_POOL_CLIENT_ID}}'
18 | COGNITO_REGION = '{{REGION}}'
19 |
20 | if not COGNITO_USER_POOL_ID or not COGNITO_APP_CLIENT_ID:
21 | st.error("Cognito configuration is missing. Please check your SSM parameters or environment variables.")
22 | st.stop()
23 |
24 | logger.info(f"Cognito User Pool ID: {COGNITO_USER_POOL_ID}")
25 | logger.info(f"Cognito App Client ID: {COGNITO_APP_CLIENT_ID}")
26 | logger.info(f"Cognito Region: {COGNITO_REGION}")
27 |
28 | def authenticate(username, password):
29 | client = boto3.client('cognito-idp', region_name=COGNITO_REGION)
30 | try:
31 | resp = client.initiate_auth(
32 | ClientId=COGNITO_APP_CLIENT_ID,
33 | AuthFlow='USER_PASSWORD_AUTH',
34 | AuthParameters={
35 | 'USERNAME': username,
36 | 'PASSWORD': password,
37 | }
38 | )
39 | logger.info(f"Successfully authenticated user: {username}")
40 | return True, username
41 | except client.exceptions.NotAuthorizedException:
42 | logger.warning(f"Authentication failed for user: {username}")
43 | return False, None
44 | except client.exceptions.UserNotFoundException:
45 | logger.warning(f"User not found: {username}")
46 | return False, None
47 | except Exception as e:
48 | logger.error(f"An error occurred during authentication: {str(e)}")
49 | st.error(f"An error occurred: {str(e)}")
50 | return False, None
51 |
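# Note: initiate_auth with AuthFlow='USER_PASSWORD_AUTH' only succeeds if the Cognito
# app client has the ALLOW_USER_PASSWORD_AUTH flow enabled; otherwise Cognito rejects
# the call even when the credentials are valid.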
52 | # Main UI
53 | st.title('Login')
54 |
55 | if 'authenticated' not in st.session_state:
56 | st.session_state['authenticated'] = False
57 |
58 | if st.session_state['authenticated']:
59 | st.write(f"Welcome back, {st.session_state['username']}!")
60 | if st.button('Logout'):
61 | st.session_state['authenticated'] = False
62 | st.session_state.pop('username', None)
63 | st.rerun()
64 | else:
65 | tab1, tab2 = st.tabs(["Login", "Register"])
66 |
67 | with tab1:
68 | username = st.text_input("Username", key="login_username")
69 | password = st.text_input("Password", type="password", key="login_password")
70 | login_button = st.button("Login")
71 |
72 | if login_button:
73 | if username and password:
74 | authentication_status, cognito_username = authenticate(username, password)
75 | if authentication_status:
76 | st.session_state['authenticated'] = True
77 | st.session_state['username'] = cognito_username
78 | st.rerun()
79 | else:
80 | st.warning("Please enter both username and password")
81 |
82 | with tab2:
83 | st.info("Please contact your Admin to get registered.")
84 |
85 | # Navigation options
86 | if st.session_state['authenticated']:
87 | st.write("Please select where you'd like to go:")
88 |
89 |     col1, col2, col3 = st.columns(3)
90 |
91 | with col1:
92 | if st.button(' New WAFR Review '):
93 | st.switch_page("pages/1_New_WAFR_Review.py")
94 | with col2:
95 | if st.button(' Existing WAFR Reviews '):
96 | st.switch_page("pages/2_Existing_WAFR_Reviews.py")
97 | with col3:
98 | if st.button(' System Architecture '):
99 | st.switch_page("pages/3_System_Architecture.py")
100 |
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/1_New_WAFR_Review.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import boto3
3 | import uuid
4 | import json
5 | import datetime
6 | from boto3.dynamodb.conditions import Attr
7 | from botocore.exceptions import ClientError
8 |
9 | # Check authentication
10 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
11 | st.warning('You are not logged in. Please log in to access this page.')
12 | st.switch_page("pages/1_Login.py")
13 |
14 | st.set_page_config(page_title="Create WAFR Analysis", layout="wide")
15 |
16 | # Initialize AWS clients
17 | s3_client = boto3.client('s3')
18 | well_architected_client = boto3.client('wellarchitected', region_name="{{REGION}}")
19 |
20 | def list_static_lenses():
21 |
22 | lenses = {}
23 | lenses["AWS Well-Architected Framework"] = "wellarchitected"
24 | lenses["Data Analytics Lens"] = "arn:aws:wellarchitected::aws:lens/dataanalytics"
25 | lenses["Financial Services Industry Lens"] = "arn:aws:wellarchitected::aws:lens/financialservices"
26 |
27 | return lenses
28 |
29 | # Not used as only 3 lenses are supported
30 | def list_lenses(wa_client):
31 |
32 | next_token = None
33 | lenses = {}
34 |
35 | try:
36 | while True:
37 | # Prepare the parameters for the API call
38 | params = {}
39 | if next_token:
40 | params['NextToken'] = next_token
41 |
42 | # Make the API call
43 | response = wa_client.list_lenses(**params)
44 |
45 | for lens in response.get('LensSummaries', []):
46 | lens_name = lens.get('LensName', 'Unknown')
47 | lens_alias = lens.get('LensAlias', lens.get('LensArn', 'Unknown'))
48 | lenses[lens_name] = lens_alias
49 | print (f"{lens_name} : {lens_alias}")
50 |
51 | # Check if there are more results
52 | next_token = response.get('NextToken')
53 | if not next_token:
54 | break
55 |
56 | return lenses
57 |
58 | except ClientError as e:
59 | print(f"An error occurred: {e}")
60 | return None
61 |
62 | lenses = list_static_lenses()
63 | lens_list = list(lenses.keys())
64 |
65 | def get_current_user():
66 | return st.session_state.get('username', 'Unknown User')
67 |
68 | # Initialize session state
69 | if 'form_submitted' not in st.session_state:
70 | st.session_state.form_submitted = False
71 |
72 | if 'form_data' not in st.session_state:
73 | st.session_state.form_data = {
74 | 'wafr_lens': lens_list[0],
75 | 'environment': 'PREPRODUCTION',
76 | 'analysis_name': '',
77 | 'created_by': get_current_user(),
78 | 'selected_pillars': [],
79 | 'workload_desc': '',
80 | 'review_owner': '',
81 | 'industry_type': 'Agriculture',
82 | 'analysis_review_type': 'Quick'
83 | }
84 | else:
85 | st.session_state.form_data['created_by'] = get_current_user()
86 |
87 | if 'success_message' not in st.session_state:
88 | st.session_state.success_message = None
89 |
90 | def upload_to_s3(file, bucket, key):
91 | try:
92 | s3_client.upload_fileobj(file, bucket, key)
93 | return True
94 | except Exception as e:
95 | st.error(f"Error uploading to S3: {str(e)}")
96 | return False
97 |
98 | def trigger_wafr_review(input_data):
99 | try:
100 | sqs = boto3.client('sqs', region_name="{{REGION}}")
101 | queue_url = '{{SQS_QUEUE_NAME}}'
102 | message_body = json.dumps(input_data)
103 | response = sqs.send_message(
104 | QueueUrl=queue_url,
105 | MessageBody=message_body
106 | )
107 | return response['MessageId']
108 | except Exception as e:
109 | st.error(f"Error sending message to SQS: {str(e)}")
110 | return None
111 |
112 | def create_wafr_analysis(analysis_data, uploaded_file):
113 | analysis_id = str(uuid.uuid4())
114 |
115 | if uploaded_file:
116 | s3_key = f"{analysis_data['created_by']}/analyses/{analysis_id}/{uploaded_file.name}"
117 | if not upload_to_s3(uploaded_file, "{{WAFR_UPLOAD_BUCKET_NAME}}", s3_key):
118 | return False, "Failed to upload document to S3."
119 | else:
120 | return False, "No document uploaded. Please upload a document before creating the analysis."
121 |
122 | wafr_review_input = {
123 | 'analysis_id': analysis_id,
124 | 'analysis_name': analysis_data['analysis_name'],
125 | 'wafr_lens': analysis_data['wafr_lens'],
126 | 'analysis_submitter': analysis_data['created_by'],
127 | 'selected_pillars': analysis_data['selected_pillars'],
128 | 'document_s3_key': s3_key,
129 | 'review_owner': analysis_data['review_owner'],
130 | 'analysis_owner': analysis_data['created_by'],
131 | 'lenses': lenses[analysis_data['wafr_lens']],
132 | 'environment': analysis_data['environment'],
133 | 'workload_desc': analysis_data['workload_desc'],
134 | 'industry_type': analysis_data['industry_type'],
135 | 'analysis_review_type': analysis_data['analysis_review_type']
136 | }
137 |
138 | message_id = trigger_wafr_review(wafr_review_input)
139 | if message_id:
140 | dynamodb = boto3.resource('dynamodb', region_name="{{REGION}}")
141 | wafr_accelerator_runs_table = dynamodb.Table('{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
142 |
143 | creation_date = datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
144 |
145 | wafr_accelerator_run_item = {
146 | 'analysis_id': analysis_id,
147 | 'analysis_submitter': analysis_data['created_by'],
148 | 'analysis_title': analysis_data['analysis_name'],
149 | 'selected_lens': analysis_data['wafr_lens'],
150 | 'creation_date': creation_date,
151 | 'review_status': "Submitted",
152 | 'selected_wafr_pillars': analysis_data['selected_pillars'],
153 | 'document_s3_key': s3_key,
154 | 'analysis_owner': analysis_data['created_by'],
155 | 'lenses': lenses[analysis_data['wafr_lens']],
156 | 'environment': analysis_data['environment'],
157 | 'workload_desc': analysis_data['workload_desc'],
158 | 'review_owner': analysis_data['review_owner'],
159 | 'industry_type': analysis_data['industry_type'],
160 | 'analysis_review_type': analysis_data['analysis_review_type']
161 | }
162 |
163 | wafr_accelerator_runs_table.put_item(Item=wafr_accelerator_run_item)
164 | else:
165 | return False, "Failed to start the analysis process."
166 |
167 | return True, f"WAFR Analysis created successfully! Message ID: {message_id}"
168 |
169 | # Main application layout
170 | st.title("Create New WAFR Analysis")
171 |
172 | # User info and logout in sidebar
173 | with st.sidebar:
174 | if st.button('Logout'):
175 | st.session_state['authenticated'] = False
176 | st.session_state.pop('username', None)
177 | st.rerun()
178 |
179 | # Success message
180 | if st.session_state.success_message:
181 | st.success(st.session_state.success_message)
182 | if st.button("Clear Message"):
183 | st.session_state.success_message = None
184 | st.rerun()
185 |
186 | # First row: Analysis Name and Workload Description
187 | with st.expander("Workload Analysis", expanded=True):
188 | st.subheader("Workload Analysis")
189 | analysis_review_type = st.selectbox("Analysis Type", ["Quick", "Deep with Well-Architected Tool"], index=["Quick", "Deep with Well-Architected Tool"].index(st.session_state.form_data['analysis_review_type']))
190 | analysis_name = st.text_input("Workload Name", value=st.session_state.form_data['analysis_name'], max_chars=100)
191 | workload_desc = st.text_area("Workload Description", value=st.session_state.form_data['workload_desc'], height=100, max_chars=250)
192 | if workload_desc:
193 | char_count = len(workload_desc)
194 | if char_count < 3:
195 | st.error("Workload Description must be at least 3 characters long")
196 |
197 |
198 | # Second row: Environment, Review Owner, Created By, Industry Type, Lens
199 | with st.expander("Additional Details", expanded=True):
200 | st.subheader("Industry & Lens Details")
201 | col1, col2 = st.columns(2)
202 | with col1:
203 | wafr_environment = st.selectbox("WAFR Environment", options=['PRODUCTION', 'PREPRODUCTION'], index=['PRODUCTION', 'PREPRODUCTION'].index(st.session_state.form_data['environment']))
204 | review_owner = st.text_input("Review Owner", value=st.session_state.form_data['review_owner'])
205 | st.text_input("Created By", value=st.session_state.form_data['created_by'], disabled=True)
206 | with col2:
207 | industry_type = st.selectbox("Industry Type", options=["Agriculture", "Automotive", "Defense", "Design_And_Engineering", "Digital_Advertising", "Education", "Environmental_Protection", "Financial_Services", "Gaming", "General_Public_Services", "Healthcare", "Hospitality", "InfoTech", "Justice_And_Public_Safety", "Life_Sciences", "Manufacturing", "Media_Entertainment", "Mining_Resources", "Oil_Gas", "Power_Utilities", "Professional_Services", "Real_Estate_Construction", "Retail_Wholesale", "Social_Protection", "Telecommunications", "Travel_Transportation_Logistics", "Other"])
208 | wafr_lens = st.selectbox("WAFR Lens", options=lens_list, index=lens_list.index(st.session_state.form_data['wafr_lens']))
209 |
210 | # Third row: Select WAFR Pillars
211 | with st.expander("Select Pillars", expanded=True):
212 | st.subheader("Well Architected Framework Pillars")
213 | pillars = ["Operational Excellence", "Security", "Reliability", "Performance Efficiency", "Cost Optimization", "Sustainability"]
214 | selected_pillars = st.multiselect("Select WAFR Pillars", options=pillars, default=st.session_state.form_data['selected_pillars'], key="pillar_select")
215 |
216 | # Document Upload
217 | with st.expander("Document Upload", expanded=True):
218 | st.subheader("Design Artifacts Upload")
219 | uploaded_file = st.file_uploader("Upload Document", type=["pdf"])
220 |
221 | def duplicate_wa_tool_workload(workload_name):
222 | try:
223 | next_token = None
224 |
225 | search_name = workload_name.lower()
226 |
227 | while True:
228 | if next_token:
229 | response = well_architected_client.list_workloads(NextToken=next_token)
230 | else:
231 | response = well_architected_client.list_workloads()
232 |
233 | for workload in response['WorkloadSummaries']:
234 | if workload['WorkloadName'].lower() == search_name:
235 | return True
236 |
237 | next_token = response.get('NextToken')
238 | if not next_token:
239 | break
240 |
241 | return False
242 |
243 | except ClientError as e:
244 | print(f"Error checking workload: {e}")
245 | return False
246 |
247 | def duplicate_wafr_accelerator_workload(workload_name):
248 | try:
249 | dynamodb = boto3.resource('dynamodb', region_name="{{REGION}}")
250 | wafr_accelerator_runs_table = dynamodb.Table('{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
251 |
252 | filter_expression = Attr("analysis_title").eq(workload_name)
253 |
254 | response = wafr_accelerator_runs_table.scan(
255 | FilterExpression=filter_expression,
256 | ProjectionExpression="analysis_title, analysis_id, analysis_submitter"
257 | )
258 | return len(response.get('Items', [])) > 0
259 |
260 | except ClientError as e:
261 | print(f"ClientError when checking workload: {e}")
262 | return False
263 | except Exception as e:
264 | print(f"Error checking workload: {e}")
265 | return False
266 |
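# Note: the duplicate check above scans the whole table with a FilterExpression on every
# submission; for larger deployments, a GSI keyed on analysis_title (a hypothetical
# addition, not part of this stack) would turn this into a cheap Query.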
267 | # Full-width button
268 | if st.button("Create WAFR Analysis", type="primary", use_container_width=True):
269 | if not analysis_name:
270 | st.error("Please enter an Analysis Name.")
271 |     elif (not workload_desc) or (len(workload_desc) < 3):
272 |         st.error("Workload Description needs to be at least 3 characters long.")
273 |     elif (not review_owner) or (len(review_owner) < 3):
274 |         st.error("Review owner needs to be at least 3 characters long.")
275 | elif not selected_pillars:
276 | st.error("Please select at least one WAFR Pillar.")
277 | elif not uploaded_file:
278 | st.error("Please upload a document.")
279 | else:
280 | if (duplicate_wafr_accelerator_workload(analysis_name) ):
281 | st.error("Workload with the same name already exists!")
282 | elif ((analysis_review_type == 'Deep with Well-Architected Tool') and (duplicate_wa_tool_workload(analysis_name)) ):
283 | st.error("Workload with the same name already exists in AWS Well Architected Tool!")
284 | else:
285 | st.session_state.form_data.update({
286 | 'wafr_lens': wafr_lens,
287 | 'environment': wafr_environment,
288 | 'analysis_name': analysis_name,
289 | 'selected_pillars': selected_pillars,
290 | 'workload_desc': workload_desc,
291 | 'review_owner': review_owner,
292 | 'industry_type': industry_type,
293 | 'analysis_review_type': analysis_review_type
294 | })
295 | with st.spinner("Creating WAFR Analysis..."):
296 | success, message = create_wafr_analysis(st.session_state.form_data, uploaded_file)
297 | if success:
298 | st.session_state.success_message = message
299 | st.session_state.form_submitted = True
300 | st.rerun()
301 | else:
302 | st.error(message)
303 |
304 | if st.session_state.form_submitted:
305 | st.session_state.form_data = {
306 | 'wafr_lens': lens_list[0],
307 | 'environment': 'PREPRODUCTION',
308 | 'analysis_name': '',
309 | 'created_by': get_current_user(),
310 | 'selected_pillars': [],
311 | 'workload_desc': '',
312 | 'review_owner': '',
313 | 'industry_type': 'Agriculture',
314 | 'analysis_review_type': 'Quick'
315 | }
316 | st.session_state.form_submitted = False
317 | st.rerun()
318 |
--------------------------------------------------------------------------------
/ui_code/tokenized-pages/2_Existing_WAFR_Reviews.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import pandas as pd
3 | import datetime
4 | import boto3
5 | import json
6 | from boto3.dynamodb.types import TypeDeserializer
7 | import pytz
8 | import dotenv
9 |
10 | # Check authentication
11 | if 'authenticated' not in st.session_state or not st.session_state['authenticated']:
12 | st.warning('You are not logged in. Please log in to access this page.')
13 | st.switch_page("pages/1_Login.py")
14 | st.set_page_config(page_title="WAFR Analysis Grid", layout="wide")
15 |
16 |
17 | # Logout function
18 | def logout():
19 | st.session_state['authenticated'] = False
20 | st.session_state.pop('username', None)
21 | st.rerun()
22 |
23 | # Add logout button in sidebar
24 | if st.sidebar.button('Logout'):
25 | logout()
26 |
27 | dotenv.load_dotenv()
28 |
29 | client = boto3.client("bedrock-runtime", region_name = "{{REGION}}")
30 | model_id = "anthropic.claude-3-5-sonnet-20240620-v1:0"
31 |
32 |
33 | def load_data():
34 | # Initialize DynamoDB client
35 | dynamodb = boto3.client('dynamodb', region_name='{{REGION}}')
36 |
37 | try:
38 | # Scan the table
39 | response = dynamodb.scan(TableName='{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}')
40 |
41 | items = response['Items']
42 |
43 | # Continue scanning if we haven't scanned all items
44 | while 'LastEvaluatedKey' in response:
45 | response = dynamodb.scan(
46 | TableName='{{WAFR_ACCELERATOR_RUNS_DD_TABLE_NAME}}',
47 | ExclusiveStartKey=response['LastEvaluatedKey']
48 | )
49 | items.extend(response['Items'])
50 |
51 | # Check if items is empty
52 | if not items:
53 | st.warning("There are no existing WAFR review records")
54 | return pd.DataFrame(columns=['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars'])
55 |
56 | # Unmarshal the items
57 | deserializer = TypeDeserializer()
58 | unmarshalled_items = [{k: deserializer.deserialize(v) for k, v in item.items()} for item in items]
59 |
60 | # Convert to DataFrame
61 | df = pd.DataFrame(unmarshalled_items)
62 |
63 | # Define the mapping of expected column names
64 | column_mapping = {
65 | 'analysis_id': 'Analysis Id',
66 | 'analysis_title': 'Workload Name',
67 | 'workload_desc': 'Workload Description',
68 | 'analysis_review_type': 'Analysis Type',
69 | 'selected_lens': 'WAFR Lens',
70 | 'creation_date': 'Creation Date',
71 | 'review_status': 'Status',
72 | 'analysis_submitter': 'Created By',
73 | 'review_owner' : 'Review Owner',
74 | 'extracted_document' : 'Document',
75 | 'architecture_summary' : 'Solution Summary'
76 | }
77 |
78 | # Rename columns that exist in the DataFrame
79 | df = df.rename(columns={k: v for k, v in column_mapping.items() if k in df.columns})
80 |
81 | # Add missing columns with empty values
82 | for col in column_mapping.values():
83 | if col not in df.columns:
84 | df[col] = ''
85 |
86 | # Parse Pillars
87 | def parse_pillars(pillars):
88 | if isinstance(pillars, list):
89 | return [
90 | {
91 | 'pillar_id': item.get('pillar_id', ''),
92 | 'pillar_name': item.get('pillar_name', ''),
93 | 'llm_response': item.get('llm_response', '')
94 | if isinstance(item.get('llm_response'), str)
95 | else item.get('llm_response', {})
96 | }
97 | for item in pillars
98 | ]
99 | return []
100 |
101 | # Apply parse_pillars only if 'pillars' column exists
102 | if 'pillars' in df.columns:
103 | df['pillars'] = df['pillars'].apply(parse_pillars)
104 | else:
105 | df['pillars'] = [[] for _ in range(len(df))]
106 |
107 | # Ensure all required columns exist, add empty ones if missing
108 | required_columns = ['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars', 'Document']
109 | for col in required_columns:
110 | if col not in df.columns:
111 | df[col] = ''
112 |
113 | # Select and return required columns
114 | return df[required_columns]
115 |
116 | except Exception as e:
117 | st.error(f"An error occurred while loading data: {str(e)}")
118 | return pd.DataFrame(columns=['Analysis Id', 'Workload Name', 'Workload Description', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By', 'Review Owner', 'Solution Summary', 'pillars', 'selected_wafr_pillars', 'Document'])
119 |
120 | # Function to display summary of a selected analysis
121 | def display_summary(analysis):
122 | st.subheader("Summary")
123 |
124 | # Ensure selected_wafr_pillars is a string representation of the array
125 | if isinstance(analysis['selected_wafr_pillars'], list):
126 | selected_wafr_pillars = ', '.join(analysis['selected_wafr_pillars'])
127 | else:
128 | selected_wafr_pillars = str(analysis['selected_wafr_pillars'])
129 |
130 | summary_data = {
131 | "Field": ["Analysis Id", "Workload Name", "Workload Description" ,"Analysis Type", "Status", "WAFR Lens", "Creation Date", "Created By", "Review Owner", "Selected WAFR Pillars"],
132 | "Value": [
133 | analysis['Analysis Id'],
134 | analysis['Workload Name'],
135 | analysis['Workload Description'],
136 | analysis['Analysis Type'],
137 | analysis['Status'],
138 | analysis['WAFR Lens'],
139 | analysis['Creation Date'],
140 | analysis['Created By'],
141 | analysis['Review Owner'],
142 | selected_wafr_pillars
143 | ]
144 | }
145 | summary_df = pd.DataFrame(summary_data)
146 | st.dataframe(summary_df, hide_index=True, use_container_width=True)
147 |
148 | # Function to display design review data
149 | def display_design_review(analysis):
150 | st.subheader("Solution Summary")
151 | architecture_review = analysis['Solution Summary']
152 | if isinstance(architecture_review, str):
153 | st.write(architecture_review)
154 | else:
155 | st.write("No architecture review data available.")
156 |
157 | # Function to display pillar data
158 | def display_pillar(pillar):
159 | st.subheader(f"Review findings & recommendations for pillar: {pillar['pillar_name']}")
160 | llm_response = pillar.get('llm_response')
161 | if llm_response:
162 | st.write(llm_response)
163 | else:
164 | st.write("No LLM response data available.")
165 |
166 | def parse_stream(stream):
167 | for event in stream:
168 | chunk = event.get('chunk')
169 | if chunk:
170 | message = json.loads(chunk.get("bytes").decode())
171 | if message['type'] == "content_block_delta":
172 | yield message['delta']['text'] or ""
173 | elif message['type'] == "message_stop":
174 | return "\n"
175 |
176 |
177 | # Main Streamlit app
178 | def main():
179 | st.title("WAFR Analysis")
180 |
181 | st.subheader("WAFR Analysis Runs", divider="rainbow")
182 |
183 |     # Load WAFR review data from DynamoDB
184 | data = load_data()
185 |
186 | # Display the data grid with selected columns
187 | selected_columns = ['Analysis Id', 'Workload Name', 'Analysis Type', 'WAFR Lens', 'Creation Date', 'Status', 'Created By']
188 | st.dataframe(data[selected_columns], use_container_width=True)
189 |
190 | st.subheader("Analysis Details", divider="rainbow")
191 |
192 | # Create a selectbox for choosing an analysis
193 | analysis_names = data['Workload Name'].tolist()
194 | selected_analysis = st.selectbox("Select an analysis to view details:", analysis_names)
195 |
196 | # Display details of the selected analysis
197 | if selected_analysis:
198 | selected_data = data[data['Workload Name'] == selected_analysis].iloc[0]
199 |
200 | wafr_container = st.container()
201 |
202 | with wafr_container:
203 | # Create tabs dynamically
204 | tab_names = ["Summary", "Solution Summary"] + [f"{pillar['pillar_name']}" for pillar in selected_data['pillars']]
205 | tabs = st.tabs(tab_names)
206 |
207 | # Populate tabs
208 | with tabs[0]:
209 | display_summary(selected_data)
210 |
211 | with tabs[1]:
212 | display_design_review(selected_data)
213 |
214 | # Display pillar tabs
215 | for i, pillar in enumerate(selected_data['pillars'], start=2):
216 | with tabs[i]:
217 | display_pillar(pillar)
218 |
219 | st.subheader("", divider="rainbow")
220 |
221 | # Create chat container here
222 | chat_container = st.container()
223 |
224 | with chat_container:
225 | st.subheader("WAFR Chat")
226 |
227 | # Create a list of options including Summary, Solution Summary, and individual pillars
228 | chat_options = ["Summary", "Solution Summary", "Document"] + [pillar['pillar_name'] for pillar in selected_data['pillars']]
229 |
230 | # Let the user select an area to discuss
231 | selected_area = st.selectbox("Select an area to discuss:", chat_options)
232 |
233 | prompt = st.text_input("Ask a question about the selected area:")
234 |
235 | if prompt:
236 | # Prepare the context based on the selected area
237 | if selected_area == "Summary":
238 | area_context = "WAFR Analysis Summary:\n"
239 | area_context += f"Workload Name: {selected_data['Workload Name']}\n"
240 | area_context += f"Workload Description: {selected_data['Workload Description']}\n"
241 | area_context += f"WAFR Lens: {selected_data['WAFR Lens']}\n"
242 | area_context += f"Status: {selected_data['Status']}\n"
243 | area_context += f"Created By: {selected_data['Created By']}\n"
244 | area_context += f"Creation Date: {selected_data['Creation Date']}\n"
245 | area_context += f"Selected WAFR Pillars: {', '.join(selected_data['selected_wafr_pillars'])}\n"
246 | area_context += f"Architecture Review: {selected_data['Solution Summary']}\n"
247 | area_context += f"Review Owner: {selected_data['Review Owner']}\n"
248 | elif selected_area == "Solution Summary":
249 | area_context = "WAFR Solution Summary:\n"
250 | area_context += f"Architecture Review: {selected_data['Solution Summary']}\n"
251 | elif selected_area == "Document":
252 | area_context = "Document:\n"
253 | area_context += f"{selected_data['Document']}\n"
254 | else:
255 | pillar_data = next((pillar for pillar in selected_data['pillars'] if pillar['pillar_name'] == selected_area), None)
256 | if pillar_data:
257 | area_context = f"WAFR Analysis Context for {selected_area}:\n"
258 | area_context += pillar_data['llm_response']
259 | else:
260 | area_context = "Error: Selected area not found in the analysis data."
261 |
262 | # Combine the user's prompt with the context
263 | full_prompt = f"{area_context}\n\nUser Question: {prompt}\n\nPlease answer the question based on the WAFR analysis context provided above for the {selected_area}."
264 |
265 | body = json.dumps({
266 | "anthropic_version": "bedrock-2023-05-31",
267 | "max_tokens": 1024,
268 | "messages": [
269 | {
270 | "role": "user",
271 | "content": [{"type": "text", "text": full_prompt}],
272 | }
273 | ],
274 | })
275 |
276 | if("{{GUARDRAIL_ID}}" == "Not Selected"):
277 | streaming_response = client.invoke_model_with_response_stream(
278 | modelId=model_id,
279 | body=body
280 | )
281 | else: # Use guardrails
282 | streaming_response = client.invoke_model_with_response_stream(
283 | modelId=model_id,
284 | body=body,
285 | guardrailIdentifier="{{GUARDRAIL_ID}}",
286 | guardrailVersion="DRAFT",
287 | )
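            # guardrailVersion="DRAFT" applies the guardrail's working draft; pinning a
            # numbered guardrail version would make the content filtering reproducible.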
288 |
289 | st.subheader("Response")
290 | stream = streaming_response.get("body")
291 | st.write_stream(parse_stream(stream))
292 |
293 | if __name__ == "__main__":
294 | main()
295 |
--------------------------------------------------------------------------------
/user_data_script.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Check output in /var/log/cloud-init-output.log
3 | export AWS_DEFAULT_REGION={{REGION}}
4 | max_attempts=5
5 | attempt_num=1
6 | success=false
7 | # Retry loop: yum/rpm occasionally fails with "RPM: error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Resource temporarily unavailable)"
8 | while [ $success = false ] && [ $attempt_num -le $max_attempts ]; do
9 | echo "Trying to install required modules..."
10 | yum update -y
11 | yum install -y python3-pip
12 | yum remove -y python3-requests
13 |   pip3 install boto3 awscli streamlit streamlit-authenticator numpy python-dotenv
16 | # Check the exit code of the command
17 | if [ $? -eq 0 ]; then
18 | echo "Installation succeeded!"
19 | success=true
20 | else
21 | echo "Attempt $attempt_num failed. Sleeping for 10 seconds and trying again..."
22 | sleep 10
23 | ((attempt_num++))
24 | fi
25 | done
26 |
27 | sudo mkdir -p /wafr-accelerator && cd /wafr-accelerator
28 | sudo chown -R ec2-user:ec2-user /wafr-accelerator
31 |
--------------------------------------------------------------------------------
/wafr-prompts/wafr-prompts.json:
--------------------------------------------------------------------------------
1 | {
2 | "data": [
3 | {
4 | "wafr_lens": "AWS Well-Architected Framework",
5 | "wafr_lens_alias": "wellarchitected",
6 | "wafr_pillar": "Operational Excellence",
7 | "wafr_pillar_id": 1,
8 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nOPS 1: How do you determine what your priorities are?\nOPS 2: How do you structure your organization to support your business outcomes?\nOPS 3: How does your organizational culture support your business outcomes?\nOPS 4: How do you implement observability in your workload?\nOPS 5: How do you reduce defects, ease remediation, and improve flow into production?\nOPS 6: How do you mitigate deployment risks?\nOPS 7: How do you know that you are ready to support a workload?\nOPS 8: How do you utilize workload observability in your organization?\nOPS 9: How do you understand the health of your operations?\nOPS 10: How do you manage workload and operations events?\nOPS 11: How do you evolve operations?"
9 | },
10 | {
11 | "wafr_lens": "AWS Well-Architected Framework",
12 | "wafr_lens_alias": "wellarchitected",
13 | "wafr_pillar": "Security",
14 | "wafr_pillar_id": 2,
15 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nSEC 1: How do you securely operate your workload?\nSEC 2: How do you manage identities for people and machines?\nSEC 3: How do you manage permissions for people and machines?\nSEC 4: How do you detect and investigate security events?\nSEC 5: How do you protect your network resources?\nSEC 6: How do you protect your compute resources?\nSEC 7: How do you classify your data?\nSEC 8: How do you protect your data at rest?\nSEC 9: How do you protect your data in transit?\nSEC 10: How do you anticipate, respond to, and recover from incidents?\nSEC 11: How do you incorporate and validate the security properties of applications throughout the design, development, and deployment lifecycle?"
16 | },
17 | {
18 | "wafr_lens": "AWS Well-Architected Framework",
19 | "wafr_lens_alias": "wellarchitected",
20 | "wafr_pillar": "Reliability",
21 | "wafr_pillar_id": 3,
22 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nREL 1: How do you manage service quotas and constraints?\nREL 2: How do you plan your network topology?\nREL 3: How do you design your workload service architecture?\nREL 4: How do you design interactions in a distributed system to prevent failures?\nREL 5: How do you design interactions in a distributed system to mitigate or withstand failures?\nREL 6: How do you monitor workload resources?\nREL 7: How do you design your workload to adapt to changes in demand?\nREL 8: How do you implement change?\nREL 9: How do you back up data?\nREL 10: How do you use fault isolation to protect your workload?\nREL 11: How do you design your workload to withstand component failures?\nREL 12: How do you test reliability?\nREL 13: How do you plan for disaster recovery (DR)?"
23 | },
24 | {
25 | "wafr_lens": "AWS Well-Architected Framework",
26 | "wafr_lens_alias": "wellarchitected",
27 | "wafr_pillar": "Performance Efficiency",
28 | "wafr_pillar_id": 4,
29 |       "wafr_pillar_prompt": "Please answer the following questions for the Performance Efficiency pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nPERF 1: How do you select the appropriate cloud resources and architecture patterns for your workload?\nPERF 2: How do you select and use compute resources in your workload?\nPERF 3: How do you store, manage, and access data in your workload?\nPERF 4: How do you select and configure networking resources in your workload?\nPERF 5: What process do you use to support more performance efficiency for your workload?"
30 | },
31 | {
32 | "wafr_lens": "AWS Well-Architected Framework",
33 | "wafr_lens_alias": "wellarchitected",
34 | "wafr_pillar": "Cost Optimization",
35 | "wafr_pillar_id": 5,
36 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nCOST 1: How do you implement cloud financial management?\nCOST 2: How do you govern usage?\nCOST 3: How do you monitor your cost and usage?\nCOST 4: How do you decommission resources?\nCOST 5: How do you evaluate cost when you select services?\nCOST 6: How do you meet cost targets when you select resource type, size and number?\nCOST 7: How do you use pricing models to reduce cost?\nCOST 8: How do you plan for data transfer charges?\nCOST 9: How do you manage demand, and supply resources?\nCOST 10: How do you evaluate new services?\nCOST 11: How do you evaluate the cost of effort?"
37 | },
38 | {
39 | "wafr_lens": "AWS Well-Architected Framework",
40 | "wafr_lens_alias": "wellarchitected",
41 | "wafr_pillar": "Sustainability",
42 | "wafr_pillar_id": 6,
43 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nSUS 1: How do you select Regions for your workload?\nSUS 2: How do you align cloud resources to your demand?\nSUS 3: How do you take advantage of software and architecture patterns to support your sustainability goals?\nSUS 4: How do you take advantage of data management policies and patterns to support your sustainability goals?\nSUS 5: How do you select and use cloud hardware and services in your architecture to support your sustainability goals?\nSUS 6: How do your organizational processes support your sustainability goals?"
44 | },
45 | {
46 | "wafr_lens": "Data Analytics Lens",
47 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
48 | "wafr_pillar": "Operational Excellence",
49 | "wafr_pillar_id": 1,
50 |       "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nOPS 1: How do you measure the health of your analytics workload?\nOPS 2: How do you deploy jobs and applications in a controlled and reproducible way?"
51 | },
52 | {
53 | "wafr_lens": "Data Analytics Lens",
54 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
55 | "wafr_pillar": "Security",
56 | "wafr_pillar_id": 2,
57 |       "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nSEC 1: How do you protect data in your organization's analytics workload?\nSEC 2: How do you manage access to data within your organization's source, analytics, and downstream systems?\nSEC 3: How do you protect the infrastructure of the analytics workload?"
58 | },
59 | {
60 | "wafr_lens": "Data Analytics Lens",
61 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
62 | "wafr_pillar": "Reliability",
63 | "wafr_pillar_id": 3,
64 |       "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nREL 1: How do you design analytics workloads to withstand and mitigate failures?\nREL 2: How do you govern data and metadata changes?"
65 | },
66 | {
67 | "wafr_lens": "Data Analytics Lens",
68 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
69 | "wafr_pillar": "Performance Efficiency",
70 | "wafr_pillar_id": 4,
71 |       "wafr_pillar_prompt": "Please answer the following questions for the Performance Efficiency pillar of the Well-Architected Framework Review (WAFR) Data Analytics Lens.\nQuestions:\nPERF 1: How do you select the best-performing options for your analytics workload?\nPERF 2: How do you select the best-performing storage options for your workload?\nPERF 3: How do you select the best-performing file formats and partitioning?"
72 | },
73 | {
74 | "wafr_lens": "Data Analytics Lens",
75 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
76 | "wafr_pillar": "Cost Optimization",
77 | "wafr_pillar_id": 5,
78 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of Well-Architected Framework Review (WAFR) Data Analytics Lens Lens.\nQuestions:\nCOST 1: How do you select the compute and storage solution for your analytics workload?\nCOST 2: How do you measure and attribute the analytics workload financial accountability?\nCOST 3: How do you manage the cost of your workload over time?\nCOST 4: How do you choose the financially-optimal pricing models of the infrastructure?"
79 | },
80 | {
81 | "wafr_lens": "Data Analytics Lens",
82 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/dataanalytics",
83 | "wafr_pillar": "Sustainability",
84 | "wafr_pillar_id": 6,
85 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of Well-Architected Framework Review (WAFR) Data Analytics Lens Lens.\nQuestions:\nSUS 1: How does your organization measure and improve its sustainability practices?"
86 | },
87 | {
88 | "wafr_lens": "Financial Services Industry Lens",
89 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
90 | "wafr_pillar": "Operational Excellence",
91 | "wafr_pillar_id": 1,
92 | "wafr_pillar_prompt": "Please answer the following questions for the Operational Excellence pillar of the Well-Architected Framework Review (WAFR).\nQuestions:\nOPS 1: Have you defined risk management roles for the cloud?\nOPS 2: Have you completed an operational risk assessment?\nOPS 3: Have you assessed your specific workload against regulatory needs?\nOPS 4: How do you assess your ability to operate a workload in the cloud?\nOPS 5: How do you understand the health of your workload?\nOPS 6: How do you assess the business impact of a cloud provider service event?\nOPS 7: Have you developed a continuous improvement model?"
93 | },
94 | {
95 | "wafr_lens": "Financial Services Industry Lens",
96 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
97 | "wafr_pillar": "Security",
98 | "wafr_pillar_id": 2,
99 | "wafr_pillar_prompt": "Please answer the following questions for the Security pillar of Well-Architected Framework Review (WAFR) Financial Services Industry Lens Lens.\nQuestions:\nSEC 1: How does your governance enable secure cloud adoption at scale?\nSEC 2: How do you achieve, maintain, and monitor ongoing compliance with regulatory guidelines and mandates?\nSEC 3: How do you monitor the use of elevated credentials, such as administrative accounts, and guard against privilege escalation?\nSEC 4: How do you accommodate separation of duties as part of your identity and access management design?\nSEC 5: How are you monitoring your ongoing cloud environment for potential threats?\nSEC 6: How do you address emerging threats?\nSEC 7: How are you inspecting your financial services infrastructure and network for unauthorized traffic?\nSEC 8: How do you isolate your software development lifecycle (SDLC) environments (like development, test, and production)?\nSEC 9: How are you managing your encryption keys?\nSEC 10: How are you handling data loss prevention in the cloud environment?\nSEC 11: How are you protecting against ransomware?\nSEC 12: How are you meeting your obligations for incident reporting to regulators?"
100 | },
101 | {
102 | "wafr_lens": "Financial Services Industry Lens",
103 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
104 | "wafr_pillar": "Reliability",
105 | "wafr_pillar_id": 3,
106 | "wafr_pillar_prompt": "Please answer the following questions for the Reliability pillar of Well-Architected Framework Review (WAFR) Financial Services Industry Lens Lens.\nQuestions:\nREL 1: Have you planned for events that impact your software development infrastructure and challenge your recovery & resolution plans?\nREL 2: Are you practicing continuous resilience to ensure that your services meet regulatory availability and recovery requirements?\nREL 3: How are your business and regulatory requirements driving the resilience of your workload?\nREL 4: Does the resilience and the architecture of your workload reflect the business requirements and resilience tier?\nREL 5: Is the resilience of the architecture addressing challenges for distributed workloads across AWS and an external entity?\nREL 6: To mitigate operational risks, can your workload owners detect, locate, and recover from gray failures?\nREL 7: How do you monitor your resilience objectives to achieve your strategic objectives and business plan?\nREL 8: How do you monitor your resources to understand your workloads health?\nREL 9: How are you backing up data in the cloud?\nREL 10: How are backups retained?"
107 | },
108 | {
109 | "wafr_lens": "Financial Services Industry Lens",
110 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
111 | "wafr_pillar": "Performance Efficiency",
112 | "wafr_pillar_id": 4,
113 | "wafr_pillar_prompt": "Please answer the following questions for the Performance Efficiency pillar of Well-Architected Framework Review (WAFR) Financial Services Industry Lens Lens.\nQuestions:\nPERF 1: How do you select the best performing architecture?\nPERF 2: How do you select your compute architecture?\nPERF 3: How do you select your storage architecture?\nPERF 4: How do you select your network architecture?\nPERF 5: How do you evaluate compliance with performance requirements?\nPERF 6: How do you make trade-offs in your architecture?"
114 | },
115 | {
116 | "wafr_lens": "Financial Services Industry Lens",
117 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
118 | "wafr_pillar": "Cost Optimization",
119 | "wafr_pillar_id": 5,
120 | "wafr_pillar_prompt": "Please answer the following questions for the Cost Optimization pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nCOST 1: Is your cloud team educated on relevant technical and commercial optimization mechanisms?\nCOST 2: Do you apply the Pareto principle (80/20 rule) to manage, optimize, and plan your cloud usage and spend?\nCOST 3: Do you use automation to drive scale for Cloud Financial Management practices?\nCOST 4: How do you promote cost-awareness within your organization?\nCOST 5: How do you track anomalies in your ongoing costs for AWS services?\nCOST 6: How do you track your workload usage cycles?\nCOST 7: Are you using all the available AWS credit and investment programs?\nCOST 8: Are you monitoring usage of Savings Plans regularly?\nCOST 9: Are you using the cost advantages of tiered storage?\nCOST 10: Do you use lower cost Regions to run less data-intensive or time-sensitive workloads?\nCOST 11: Do you use cost tradeoffs of various AWS pricing models in your workload design?\nCOST 12: Are you saving costs by adopting a set of modern microservice architectures?\nCOST 13: Do you use cloud services to accommodate consulting or testing of projects?\nCOST 14: How do you measure the cost of licensing third-party applications and software?\nCOST 15: Have you reviewed your ongoing cost structure tradeoffs for your current AWS services lately?\nCOST 16: Are you continuously assessing the ongoing costs and usage of your cloud implementations?\nCOST 17: Are you continually reviewing your workload to provide the most cost-effective resources?\nCOST 18: Do you have specific workload modernization or refactoring goals in your cloud strategy?\nCOST 19: Do you use the cloud to drive innovation & operational excellence of your business model to impact both the top & bottom line?"
121 | },
122 | {
123 | "wafr_lens": "Financial Services Industry Lens",
124 | "wafr_lens_alias": "arn:aws:wellarchitected::aws:lens/financialservices",
125 | "wafr_pillar": "Sustainability",
126 | "wafr_pillar_id": 6,
127 | "wafr_pillar_prompt": "Please answer the following questions for the Sustainability pillar of the Well-Architected Framework Review (WAFR) Financial Services Industry Lens.\nQuestions:\nSUS 1: How do you select the most sustainable Regions in your area?\nSUS 2: How do you address data sovereignty regulations for location of sustainable Region?\nSUS 3: How do you select a Region to optimize financial services workloads for sustainability?\nSUS 4: How do you prioritize business critical functions over non-critical functions?\nSUS 5: How do you define, review, and optimize network access patterns for sustainability?\nSUS 6: How do you monitor and minimize resource usage for financial services workloads?\nSUS 7: How do you optimize batch processing components for sustainability?\nSUS 8: How do you optimize your resource usage?\nSUS 9: How do you optimize areas of your code that use the most resources?\nSUS 10: Have you selected the storage class with the lowest carbon footprint?\nSUS 11: Do you store processed data or raw data?\nSUS 12: What is your process for benchmarking instances for existing workloads?\nSUS 13: Can you complete workloads over more time while not violating your maximum SLA?\nSUS 14: Do you have multi-architecture images for grid computing systems?\nSUS 15: What is your testing process for workloads that require floating point precision?\nSUS 16: Do you achieve a judicious use of development resources?\nSUS 17: How do you minimize your test, staging, sandbox instances?\nSUS 18: How do you define the minimum requirement in response time for customers in order to maximize your green SLA?"
128 | }
129 | ]
130 | }
131 |
132 |
133 |
--------------------------------------------------------------------------------
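The prompt entries above share one schema: `wafr_lens`, `wafr_lens_alias`, `wafr_pillar`, `wafr_pillar_id`, and `wafr_pillar_prompt`. As a minimal sketch of how a single prompt can be looked up from this file, assuming the entry array sits under one top-level key (the key name `wafr_prompts` below is a placeholder, as is the helper name) and that the file is read locally rather than through the stack's Lambda functions:

```python
# Minimal sketch, not part of the deployed sample: look up a pillar prompt
# from wafr-prompts/wafr-prompts.json by lens alias and pillar id.
import json

def get_pillar_prompt(path: str, lens_alias: str, pillar_id: int) -> str:
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Assumption: the entry list is the value of a single top-level key;
    # "wafr_prompts" is a placeholder name for that key.
    entries = data.get("wafr_prompts") or next(iter(data.values()))
    for entry in entries:
        if (entry["wafr_lens_alias"] == lens_alias
                and entry["wafr_pillar_id"] == pillar_id):
            return entry["wafr_pillar_prompt"]
    raise KeyError(f"no prompt for lens={lens_alias!r}, pillar_id={pillar_id}")

# Example: the Security pillar prompt of the Data Analytics Lens.
prompt = get_pillar_prompt(
    "wafr-prompts/wafr-prompts.json",
    "arn:aws:wellarchitected::aws:lens/dataanalytics",
    2,
)
print(prompt.splitlines()[0])
```

Matching on `wafr_lens_alias` rather than the display name avoids ambiguity, since the alias is either the short `wellarchitected` identifier or a full lens ARN.
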
/well_architected_docs/README.MD:
--------------------------------------------------------------------------------
1 | Put all the AWS Well-Architected Framework documents in this folder using the following folder structure, and then delete this file.
2 |
3 |
4 | +---dataanalytics
5 | | analytics-lens.pdf
6 | |
7 | +---financialservices
8 | | wellarchitected-financial-services-industry-lens.pdf
9 | |
10 | +---overview
11 | | wellarchitected-framework.pdf
12 | |
13 | \---wellarchitected
14 | | wellarchitected-cost-optimization-pillar.pdf
15 | | wellarchitected-operational-excellence-pillar.pdf
16 | | wellarchitected-performance-efficiency-pillar.pdf
17 | | wellarchitected-reliability-pillar.pdf
18 | | wellarchitected-security-pillar.pdf
19 | | wellarchitected-sustainability-pillar.pdf
--------------------------------------------------------------------------------
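
The folder names above mirror the lens aliases used in wafr-prompts.json (`wellarchitected`, `dataanalytics`, `financialservices`). As an illustrative pre-flight check, not part of the sample, the expected layout can be verified before the PDFs are uploaded; the folder and file names below are taken verbatim from the listing above:

```python
# Illustrative only: verify that well_architected_docs/ matches the layout
# described in this README before uploading the PDFs. The check itself is
# an assumption; the names come from the listing above.
from pathlib import Path

EXPECTED = {
    "dataanalytics": ["analytics-lens.pdf"],
    "financialservices": ["wellarchitected-financial-services-industry-lens.pdf"],
    "overview": ["wellarchitected-framework.pdf"],
    "wellarchitected": [
        "wellarchitected-cost-optimization-pillar.pdf",
        "wellarchitected-operational-excellence-pillar.pdf",
        "wellarchitected-performance-efficiency-pillar.pdf",
        "wellarchitected-reliability-pillar.pdf",
        "wellarchitected-security-pillar.pdf",
        "wellarchitected-sustainability-pillar.pdf",
    ],
}

def missing_files(root: str = "well_architected_docs") -> list[str]:
    """Return paths (relative to root) that are expected but not present."""
    return [
        f"{folder}/{name}"
        for folder, files in EXPECTED.items()
        for name in files
        if not (Path(root) / folder / name).is_file()
    ]

gaps = missing_files()
print("Layout OK" if not gaps else "Missing: " + ", ".join(gaps))
```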