├── .gitignore ├── .template_user.ini ├── AI_intro_at_SAP.pdf ├── LICENSE ├── LICENSES └── Apache-2.0.txt ├── README.md ├── REUSE.toml ├── codejam.code-workspace ├── exercises ├── 00-connect-AICore-and-AILaunchpad.md ├── 01-explore-genai-hub.md ├── 02-setup-python-environment.md ├── 03-prompt-llm.ipynb ├── 04-create-embeddings.ipynb ├── 05-store-embeddings-hana.ipynb ├── 06-RAG.ipynb ├── 08-orchestration-service.ipynb ├── 09-orchestration-service-grounding.ipynb ├── 10-chatbot-with-memory.ipynb ├── 11-your-chatbot.ipynb ├── 12-ai-agents.ipynb ├── documents │ ├── .DS_Store │ ├── DocumentGrounding_GenerativeAIHubSDKv4.10.2.pdf │ ├── GenerativeAIhubSDK_GenerativeAIHubSDKv4.10.2.pdf │ ├── Orchestration_Service_Generative_AI_hub_SDK.pdf │ ├── SAP_note_models.pdf │ ├── ai-foundation-architecture.png │ └── bananabread.png ├── images │ ├── AIL_survey.png │ ├── BTP_cockpit.png │ ├── BTP_cockpit_BAS.png │ ├── Screenshot 2025-01-29 at 16.18.20.png │ ├── Screenshot 2025-01-29 at 16.18.56.png │ ├── Screenshot 2025-01-29 at 16.19.21.png │ ├── Screenshot 2025-01-29 at 16.19.51.png │ ├── Screenshot 2025-01-29 at 16.20.48.png │ ├── auth_default.png │ ├── bas.png │ ├── bas_exercises.png │ ├── btp-subaccount.png │ ├── btp.png │ ├── chat.png │ ├── chat1.png │ ├── chat2.png │ ├── chat3.png │ ├── chat_2.png │ ├── clone_git.png │ ├── clone_git_2.png │ ├── configuration_o.png │ ├── configuration_o2.png │ ├── configurations.png │ ├── configurations_2.png │ ├── configurations_3.png │ ├── configurations_4.png │ ├── configurations_5.png │ ├── create_dev_space.png │ ├── deployment_o.png │ ├── deployment_o2.png │ ├── deployment_o3.png │ ├── deployment_orchestration_service.png │ ├── deployments.png │ ├── deployments_2.png │ ├── deployments_3.png │ ├── deployments_4.png │ ├── deployments_5.png │ ├── dev_running.png │ ├── dev_starting.png │ ├── enable_orchestration.png │ ├── extensions.png │ ├── model-library-2.png │ ├── model-library-3.png │ ├── model-library.png │ ├── open_workspace.png │ ├── 
prompt_editor.png │ ├── resource_group.png │ ├── resource_group_2.png │ ├── scenarios.png │ ├── scenarios_2.png │ ├── scenarios_3.png │ ├── select_embedding_table.png │ ├── select_kernel.png │ ├── service-binding.png │ ├── service-key.png │ ├── service_binding.png │ ├── service_key.png │ ├── setup0010.png │ ├── setup0030.png │ ├── setup0040.png │ ├── setup0042.png │ ├── setup0060.png │ ├── start_terminal.png │ ├── venv.png │ └── workspace.png ├── init_env.py └── variables.py ├── prerequisites.md └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | .aicore-config.json 2 | .user.ini 3 | exercises/variables.py 4 | 5 | # Environments 6 | .env 7 | .venv 8 | env/ 9 | venv/ 10 | ENV/ 11 | env.bak/ 12 | venv.bak/ 13 | 14 | # Jupyter Notebook 15 | .ipynb_checkpoints 16 | 17 | # Byte-compiled / optimized / DLL files 18 | __pycache__/ 19 | *.py[cod] 20 | *$py.class 21 | 22 | # macOS-specific files 23 | .DS_Store 24 | -------------------------------------------------------------------------------- /.template_user.ini: -------------------------------------------------------------------------------- 1 | [hana] 2 | url=b7c3ff95-9e0c-480d-9022-bfaaa268f780.hna0.prod-us10.hanacloud.ondemand.com 3 | user=GENAI_CODEJAM_01 4 | passwd= 5 | port=443 6 | -------------------------------------------------------------------------------- /AI_intro_at_SAP.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/AI_intro_at_SAP.pdf -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 
8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. 
For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright 2024 SAP SE 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /LICENSES/Apache-2.0.txt: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 
22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # CodeJam - Getting started with Generative AI Hub on SAP AI Core 2 | [![REUSE status](https://api.reuse.software/badge/github.com/SAP-samples/generative-ai-codejam)](https://api.reuse.software/info/github.com/SAP-samples/generative-ai-codejam) 3 | 4 | 7 | 8 | ## Description 9 | This repository contains the material for the "Getting started with the Generative AI Hub on SAP AI Core" CodeJam, brought to you by the Developer Advocates at SAP. 10 | 11 | ## Overview 12 | In this CodeJam you will learn how to use Generative AI Hub on SAP AI Core to implement a retrieval augmented generation (RAG) use case to improve the responses of large language models (LLMs) and reduce hallucinations. You will learn how to deploy an LLM on SAP AI Core and query it via SAP AI Launchpad and the Python SDK. 
Furthermore, you will learn about the most important genAI concepts and create and use text chunks and embeddings to improve your RAG response. 13 | 14 | ## Requirements 15 | 16 | The requirements necessary to complete the exercises in this repository, including hardware and software specifications, are outlined in the [prerequisites](prerequisites.md) file. 17 | 18 | ### Exercises 19 | 20 | You can find all exercises in the exercises folder. We will work through the exercises in the order shown here. From a session flow perspective, we are taking the "coordinated" approach: 21 | 22 | The instructor will start you on the first exercise, and that's the only one you should do. You should only proceed to the next exercise once the instructor tells you to. 23 | 24 | 00. [Connect SAP AI Core and SAP AI Launchpad](exercises/00-connect-AICore-and-AILaunchpad.md) 25 | 01. [Explore the Prompt Editor in SAP AI Launchpad](exercises/01-explore-genai-hub.md) 26 | 02. [Setup your Python environment](exercises/02-setup-python-environment.md) 27 | 03. [Prompt an LLM](exercises/03-prompt-llm.ipynb) 28 | 04. [Create embeddings for your document chunks](exercises/04-create-embeddings.ipynb) 29 | 05. [Store embeddings](exercises/05-store-embeddings-hana.ipynb) 30 | 06. [Implement the RAG use case](exercises/06-RAG.ipynb) 31 | 07. [Deploy an orchestration service](exercises/07-deploy-orchestration-service.md) 32 | 08. [Orchestration service](exercises/08-orchestration-service.ipynb) 33 | 09. [Orchestration service - grounding](exercises/09-orchestration-service-grounding.ipynb) 34 | 10. [Chatbot with Memory](exercises/10-chatbot-with-memory.ipynb) 35 | 11. [Your Chatbot](exercises/11-your-chatbot.ipynb) 36 | 12. [OPTIONAL: AI Agents](exercises/12-ai-agents.ipynb) 37 | 38 | ## Feedback 39 | 40 | If you can spare a couple of minutes at the end of the session, please provide feedback to help us improve next time. 
41 | 42 | Use this [Give feedback](https://github.com/SAP-samples/generative-ai-codejam/issues/new?assignees=&labels=feedback&template=session-feedback-template.md&title=Session%20Feedback) link to create a special "feedback" issue, and follow the instructions in there. 43 | 44 | Thank you! 45 | 46 | ## Other CodeJams 47 | 48 | ### CodeJam repositories 49 | 50 | * [Service integration with SAP Cloud Application Programming Model](https://github.com/SAP-samples/cap-service-integration-codejam) 51 | * [CodeJam - Getting Started with Machine Learning using SAP HANA and Python](https://github.com/SAP-samples/hana-ml-py-codejam) 52 | * [Hands-on with the btp CLI and APIs](https://github.com/SAP-samples/cloud-btp-cli-api-codejam) 53 | * [CodeJam - Combine SAP Cloud Application Programming Model with SAP HANA Cloud to Create Full-Stack Applications](https://github.com/SAP-samples/cap-hana-exercises-codejam) 54 | * [All CodeJams in one list](https://github.com/orgs/SAP-samples/repositories?language=&q=Codejam&sort=&type=all) 55 | 56 | ### CodeJam Community 57 | 58 | * [SAP CodeJam Events](https://community.sap.com/t5/sap-codejam/eb-p/codejam-events) 59 | * [SAP CodeJam Community](https://community.sap.com/t5/sap-codejam/gh-p/code-jam) 60 | * [SAP CodeJam Discussions](https://community.sap.com/t5/sap-codejam-discussions/bd-p/code-jamforum-board) 61 | 62 | ## Acknowledgements 63 | 64 | The exercise content in this repository is based on different sample repositories created by the SAP AI Core team. 65 | 66 | The exercises related to the prerequisites and setting up BAS were copied and adjusted from my colleague [Vitaliy's](https://www.linkedin.com/in/witalij/?originalSubdomain=pl) **CodeJam - Getting Started with Machine Learning using SAP HANA and Python**. 67 | 68 | 69 | ## How to obtain support 70 | [Create an issue](https://github.com/SAP-samples/generative-ai-codejam/issues) in this repository if you find a bug or have questions about the content.
71 | 72 | For additional support, [ask a question in SAP Community](https://answers.sap.com/questions/ask.html). 73 | 74 | ## License 75 | Copyright (c) 2024 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the [LICENSE](LICENSE) file. 76 | -------------------------------------------------------------------------------- /REUSE.toml: -------------------------------------------------------------------------------- 1 | version = 1 2 | SPDX-PackageName = "generative-ai-codejam" 3 | SPDX-PackageSupplier = "Nora von Thenen " 4 | SPDX-PackageDownloadLocation = "" 5 | SPDX-PackageComment = "The code in this project may include calls to APIs (\"API Calls\") of\n SAP or third-party products or services developed outside of this project\n (\"External Products\").\n \"APIs\" means application programming interfaces, as well as their respective\n specifications and implementing code that allows software to communicate with\n other software.\n API Calls to External Products are not licensed under the open source license\n that governs this project. The use of such API Calls and related External\n Products are subject to applicable additional agreements with the relevant\n provider of the External Products. In no event shall the open source license\n that governs this project grant any rights in or to any External Products,or\n alter, expand or supersede any terms of the applicable additional agreements.\n If you have a valid license agreement with SAP for the use of a particular SAP\n External Product, then you may make use of any API Calls included in this\n project's code for that SAP External Product, subject to the terms of such\n license agreement. 
If you do not have a valid license agreement for the use of\n a particular SAP External Product, then you may only make use of any API Calls\n in this project for that SAP External Product for your internal, non-productive\n and non-commercial test and evaluation of such API Calls. Nothing herein grants\n you any rights to use or access any SAP External Product, or provide any third\n parties the right to use of access any SAP External Product, through API Calls." 6 | 7 | [[annotations]] 8 | path = "**" 9 | precedence = "aggregate" 10 | SPDX-FileCopyrightText = "2024 SAP SE or an SAP affiliate company and generative-ai-codejam contributors" 11 | SPDX-License-Identifier = "Apache-2.0" 12 | -------------------------------------------------------------------------------- /codejam.code-workspace: -------------------------------------------------------------------------------- 1 | { 2 | "folders": [ 3 | { 4 | "path": "." 5 | } 6 | ], 7 | "settings": { 8 | "jupyter.kernels.excludePythonEnvironments": [ 9 | "/bin/python3", 10 | "/usr/bin/python3", 11 | "~/.asdf-inst/shims/python", 12 | "~/.asdf-inst/shims/python3", 13 | "~/.asdf-inst/shims/python3.11", 14 | "~/.asdf-inst/shims/python3.13" 15 | ] 16 | } 17 | } -------------------------------------------------------------------------------- /exercises/00-connect-AICore-and-AILaunchpad.md: -------------------------------------------------------------------------------- 1 | # Setup SAP AI Launchpad and SAP AI Core 2 | [SAP AI Launchpad](https://help.sap.com/docs/ai-launchpad) is a multitenant SaaS application on SAP BTP. You can use SAP AI Launchpad to manage AI use cases across different AI runtimes. SAP AI Launchpad also provides generative AI capabilities via the [Generative AI Hub in SAP AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/generative-ai-hub-in-sap-ai-core-7db524ee75e74bf8b50c167951fe34a5). 
3 | 4 | You can also connect the SAP HANA database as an AI runtime to work with the HANA Predictive Analysis Library (PAL), or connect SAP AI Services to work with the SAP AI Service Data Attribute Recommendation. 5 | 6 | ## [1/3] Open SAP Business Technology Platform 7 | 👉 Open SAP [BTP Cockpit](https://emea.cockpit.btp.cloud.sap/cockpit). 8 | 9 | 👉 Navigate to the subaccount: [`GenAI CodeJam`](https://emea.cockpit.btp.cloud.sap/cockpit#/globalaccount/275320f9-4c26-4622-8728-b6f5196075f5/subaccount/a5a420d8-58c6-4820-ab11-90c7145da589/subaccountoverview). 10 | 11 | ## [2/3] Open SAP AI Launchpad and connect to SAP AI Core 12 | 👉 Go to **Instances and Subscriptions**. 13 | 14 | ![BTP Cockpit](images/btp-subaccount.png) 15 | 16 | Check whether you see an **SAP AI Core** service instance and an **SAP AI Launchpad** application subscription. 17 | 18 | With SAP AI Launchpad, you can administer all your machine learning operations. ☝️ You have the `extended` plan of the SAP AI Core service instance, which gives you access to the capabilities of Generative AI Hub. 19 | 20 | ☝️ In this subaccount, the connection between the SAP AI Core service instance and the SAP AI Launchpad application is already established. Otherwise you would have to add a new AI runtime using the SAP AI Core service key information. 21 | 22 | Should you need to connect SAP AI Core to SAP AI Launchpad yourself, you would need the credentials from the SAP AI Core service key. 23 | 24 | ![SAP AI Core service key](images/service-binding.png) 25 | 26 | 👉 Open **SAP AI Launchpad**. Make sure you authenticate your user using the **Default Identity Provider**. 27 | 28 | ![Authentication client](images/auth_default.png) 29 | 30 | ## [3/3] Select the resource group that was assigned to you/your team 31 | SAP AI Core tenants use [resource groups](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups) to isolate AI resources and workloads. Scenarios (e.g.
`foundation-models`) and executables (templates for training a model or creating a deployment) are shared across all resource groups within the instance. 32 | 33 | >DO NOT USE THE DEFAULT `default` RESOURCE GROUP! 34 | 35 | 👉 Go back to **Workspaces**. 36 | 37 | 👉 Select your workspace (like `codejam*`) and your resource group. 38 | 39 | 👉 Make sure it is set as the context. The proper name of the context, like `codejam-YYY (team-XX)`, should show up at the top next to SAP AI Launchpad. 40 | 41 | ☝️ You will need the name of your resource group in [Exercise 04-create-embeddings](04-create-embeddings.ipynb). 42 | 43 | ![SAP AI Launchpad - Resource Group 2/2](images/resource_group_2.png) 44 | 45 | [Next Exercise](01-explore-genai-hub.md) -------------------------------------------------------------------------------- /exercises/01-explore-genai-hub.md: -------------------------------------------------------------------------------- 1 | # Explore Generative AI Hub in SAP AI Launchpad 2 | 3 | To leverage large language models (LLMs) or foundation models in your applications, you can use the Generative AI Hub on SAP AI Core. Like most other LLM offerings, Generative AI Hub operates on a pay-per-use basis. 4 | 5 | Generative AI Hub offers all major models on the market: open-source models that SAP has deployed, such as the Falcon model, and models that SAP acts as a proxy for, such as the GPT models, Google models, models provided by Amazon Bedrock, and more. You can easily switch between them, compare results, and select the model that works best for your use case. 6 | 7 | SAP maintains strict data privacy contracts with LLM providers to ensure that your data remains secure. 8 | 9 | To start using one of the available models in Generative AI Hub, you first need to deploy it. Deploying an orchestration service gives you access to all available models. You can also deploy a single model directly via the **Model Library**.
You can access your deployed models using the Python SDK, the SAP Cloud SDK for AI (JavaScript SDK), any programming language or API platform, or the user interface in SAP AI Launchpad. 10 | 11 | ## Deploy an Orchestration Service 12 | 👉 Navigate to the `Generative AI Hub > Chat` tab. 13 | 14 | 👉 Click **Enable Orchestration** to deploy an orchestration service. 15 | 16 | ![Orchestration Deployment](images/deployment_orchestration_service.png) 17 | 18 | Once the orchestration service is deployed, the Chat will look like this: 19 | 20 | ![Chat](images/chat1.png) 21 | 22 | ## Use the Chat in Generative AI Hub 23 | 24 | 👉 If you lost the previous chat window, you can always open the `Generative AI Hub` tab and select `Chat`. 25 | 26 | 👉 Click `Configure` and have a look at the available fields. 27 | 28 | Under `Selected Model` you will find all the deployed models. If there is no deployment, this list will be empty and you will not be able to chat. Since the orchestration service is enabled, you will see all available models here. 29 | 30 | 👉 Pick whatever model you would like to try! You can head over to the `Model Library` and compare models using the **Leaderboard** or **Chart**. For example, look for the cheapest model with the highest **HELM** score. 31 | 32 | The `Frequency Penalty` parameter allows you to penalize words that appear too frequently in the text, helping the model sound less robotic. 33 | 34 | Similarly, the higher the `Presence Penalty`, the more likely the model is to introduce new topics, as it penalizes words that have already appeared in the text. 35 | 36 | The `Max Tokens` parameter allows you to set the size of the model's input and output. Tokens are not individual words but are typically 4-5 characters long. 37 | 38 | The `Temperature` parameter allows you to control how creative the model should be, determining how flexible it is in selecting the next token in the sequence.
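These chat parameters are not UI-only: they map onto the OpenAI-style chat-completions request that Generative AI Hub accepts programmatically. Below is a minimal, illustrative sketch of such a request body. The values are examples to experiment with, not recommendations, and which parameters a given model supports can vary.

```python
# Sketch: the Chat parameters expressed as an OpenAI-style chat-completions
# request body. Values are illustrative; parameter support varies by model.
request_body = {
    "messages": [
        {"role": "system", "content": "Always respond like a pirate."},
        {"role": "user", "content": "What is the capital of Germany?"},
    ],
    "max_tokens": 256,         # upper bound on generated tokens
    "temperature": 0.7,        # 0 = near-deterministic, higher = more creative
    "frequency_penalty": 0.3,  # penalize tokens proportionally to how often they already appeared
    "presence_penalty": 0.2,   # flat penalty on any token seen so far -> encourages new topics
}

# Rough rule of thumb from the text above: one token is ~4-5 characters,
# so a budget of 256 tokens is very roughly 1000 characters of output.
approx_characters = request_body["max_tokens"] * 4
print(approx_characters)
```

Sending such a body with the name of one of your deployed models is essentially what the Chat UI does for you on every message.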
39 | 40 | ![Chat 1/2](images/chat2.png) 41 | 42 | In the `Chat Context` tab, right under `Context History`, you can set the number of messages to be sent to the model, determining how much of the chat history should be provided as context for each new request. This is basically the size of the model's memory. 43 | 44 | You can also add a `System Message` to describe the role or give more information about what is expected from the model. Additionally, you can provide example inputs and outputs. 45 | 46 | ![Chat 2/2](images/chat3.png) 47 | 48 | ## Prompt Engineering 49 | 👉 Try out different prompt engineering techniques following these examples: 50 | 51 | 1. **Zero-shot**: 52 | ``` 53 | The capital of Germany is: 54 | ``` 55 | 2. **Few-shot**: 56 | ``` 57 | France - Paris 58 | Canada - Ottawa 59 | Ukraine - Kyiv 60 | Georgia - 61 | ``` 62 | 3. **Chain of thought** (Copy the WHOLE BLOCK into the chat window): 63 | ``` 64 | 1. What is the most important city of a country? 65 | 2. In which country was the hot air balloon originally developed? 66 | 3. What is the >fill in the word from step 1< of the country >fill in the word from step 2<. 67 | ``` 68 | 69 | 👉 Try to add something funny to the `System Message` like "always respond like a pirate" and try the prompts again. You can also instruct it to speak more technically, like a developer, or more polished, like in marketing. 70 | 71 | 👉 Have the model count letters in words, for example how often the letter **r** occurs in **strawberry**. Can you come up with a prompt that counts correctly? If your model gets it right, check whether an older model such as GPT-3.5 still gets it wrong. 72 | 73 | ## Use the Prompt Editor in Generative AI Hub 74 | 75 | The `Prompt Editor` is useful if you want to save a prompt and its response to revisit later or to compare prompts. Often, you can identify tasks that an LLM can help you with on a regular basis.
In that case, you can also save different versions of the prompt that work well, saving you from having to write the prompt again each time. 76 | 77 | The parameters you were able to set in the `Chat` can also be set here. Additionally, you can view the number of tokens your prompt used below the response. 78 | 79 | 👉 Go over to `Prompt Editor`, select a model, and set `Max Tokens` to the maximum again. 80 | 81 | 👉 Paste the example below and click **Run** to try it out. 82 | 83 | 👉 Give your prompt a `Name`, a `Collection` name, and **Save** the prompt. 84 | 85 | 👉 If you now head over to `Prompt Management`, you will find your previously saved prompt there. To run the prompt again, click `Open in Prompt Editor`. You can also select other saved prompts by clicking on **Select**. 86 | 87 | 1. Chain of thought prompt - customer support (Copy the WHOLE BLOCK into the prompt editor): 88 | ``` 89 | You are working at a big tech company and you are part of the support team. 90 | You are tasked with sorting the incoming support requests into: German, English, Polish or French. 91 | 92 | Review the incoming query. 93 | Identify the language of the query as either German, English, Polish, or French. 94 | - Example: 'bad usability. very confusing user interface.' - English 95 | Count the number of queries for each language: German, English, Polish, and French. 96 | Summarize the key pain points mentioned in the queries in bullet points in English. 97 | 98 | Queries: 99 | - What are the shipping costs to Australia? 100 | - Kann ich einen Artikel ohne Kassenbon umtauschen? 101 | - Offrez-vous des réductions pour les achats en gros? 102 | - Can I change the delivery address after placing the order? 103 | - Comment puis-je annuler mon abonnement? 104 | - Wo kann ich den Status meiner Reparatur einsehen? 105 | - What payment methods do you accept? 106 | - Czemu to tak długo ma iść do Wrocławia, gdy cena nie jest wcale niska?
107 | - Quel est le délai de livraison estimé pour le Mexique? 108 | - Gibt es eine Garantie auf elektronische Geräte? 109 | - I’m having trouble logging into my account, what should I do? 110 | ``` 111 | 112 | ![Prompt Editor](images/prompt_editor.png) 113 | 114 | 👉 If you still have time, ask the LLM to come up with different support queries to have more data. 115 | 116 | ## Deploy a proxy of the multi-modal LLM via the Model Library 117 | 118 | In exercise [03-prompt-llm](03-prompt-llm.ipynb) you will also need a deployment of the **GPT-4o Mini** model, because we will use OpenAI's API directly and also use that model for a Retrieval Augmented Generation (RAG) workflow later. 119 | 120 | ![Model Library 1/3](images/model-library.png) 121 | 122 | 👉 Open the `Generative AI Hub` tab and select `Model Library`. 123 | 124 | 👉 Click on **GPT-4o Mini**, which is a text generation model that can also process images. 125 | 126 | ![Model Library 2/3](images/model-library-2.png) 127 | 128 | 👉 Click on `Deploy` **ONLY ONCE** to make the model endpoint available to you. **It will TAKE A MINUTE or so to be deployed.** 129 | 130 | 👉 You can check the deployment status of the models by clicking on `ML Operations > Deployments`. 131 | 132 | ## Deploy a proxy of the text-embedding model via the Model Library 133 | 134 | 👉 Before you move on to the next exercise, make sure to also deploy the `Text Embedding 3 Small` model. You will need it later! Clicking the deploy button once is enough; you can then see your deployments under `ML Operations > Deployments`. 135 | 136 | ## Usability Survey 137 | 138 | Now that you have experienced the AI Launchpad, if you see the **Feedback** icon in the upper right corner, please click it to complete a survey. It should take no more than 3 minutes of your time, but this feedback is extremely important for us.
139 | 140 | ![Feedback](images/AIL_survey.png) 141 | 142 | ## Summary 143 | 144 | By this point, you will know how to use the Generative AI Hub user interface in SAP AI Launchpad to query LLMs and store important prompts. You will also understand how to refine the output of a large language model using prompt engineering techniques. 145 | 146 | ## Further reading 147 | 148 | * [Generative AI Hub on SAP AI Core - Help Portal (Documentation)](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/generative-ai-hub-in-sap-ai-core-7db524ee75e74bf8b50c167951fe34a5) 149 | * [Prompt Engineering Guide](https://www.promptingguide.ai/) is a good resource if you want to know more about prompt engineering in general. 150 | * [Prompt LLMs in the generative AI hub in SAP AI Core & Launchpad](https://developers.sap.com/tutorials/ai-core-generative-ai.html) is a good tutorial on how to prompt LLMs with Generative AI Hub. 151 | 152 | --- 153 | 154 | [Next exercise](02-setup-python-environment.md) -------------------------------------------------------------------------------- /exercises/02-setup-python-environment.md: -------------------------------------------------------------------------------- 1 | # Setup SAP Business Application Studio and a dev space 2 | > [SAP Business Application Studio](https://help.sap.com/docs/bas/sap-business-application-studio/what-is-sap-business-application-studio) is based on Code-OSS, an open-source project used for building Visual Studio Code. Available as a cloud service, SAP Business Application Studio provides a desktop-like experience similar to leading IDEs, with command line and optimized editors. 3 | 4 | > At the heart of SAP Business Application Studio are the dev spaces. The dev spaces are comparable to isolated virtual machines in the cloud containing tailored tools and preinstalled runtimes per business scenario, such as SAP Fiori, SAP S/4HANA extensions, Workflow, HANA native development and more. 
This simplifies and speeds up the setup of your development environment, enabling you to efficiently develop, test, build, and run your solutions locally or in the cloud. 5 | 6 | ## Open SAP Business Application Studio 7 | 👉 Go back to the [BTP cockpit](https://emea.cockpit.btp.cloud.sap/cockpit#/globalaccount/275320f9-4c26-4622-8728-b6f5196075f5/subaccount/a5a420d8-58c6-4820-ab11-90c7145da589/subaccountoverview). 8 | 9 | 👉 Navigate to `Instances and Subscriptions` and open `SAP Business Application Studio`. 10 | 11 | ![Open BAS](images/BTP_cockpit_BAS.png) 12 | 13 | 14 | ## Create a new Dev Space for CodeJam exercises 15 | 16 | 👉 Create a new Dev Space. 17 | 18 | ![Create a Dev Space 1](images/bas.png) 19 | 20 | 👉 Enter `GenAICodeJam` as the name of the dev space, select the `Basic` application type, and add `Python Tools` from the Additional SAP Extensions. 21 | 22 | 👉 Click **Create Dev Space**. 23 | 24 | ![Create a Dev Space 2](images/create_dev_space.png) 25 | 26 | You should see the dev space **STARTING**. 27 | 28 | ![Dev Space is Starting](images/dev_starting.png) 29 | 30 | 👉 Wait for the dev space to get into the **RUNNING** state and then open it. 31 | 32 | ![Dev Space is Running](images/dev_running.png) 33 | 34 | ## Clone the exercises from the Git repository 35 | 36 | 👉 Once you have opened your dev space in BAS, use one of the available options to clone this Git repository with the exercises using the URL below: 37 | 38 | ```sh 39 | https://github.com/SAP-samples/generative-ai-codejam.git 40 | ``` 41 | 42 | ![Clone the repo](images/clone_git.png) 43 | 44 | 👉 Click **Open** to open the project in the Explorer view. 45 | 46 | ![Open a project](images/clone_git_2.png) 47 | 48 | ## Open the Workspace 49 | 50 | The cloned repository contains a file `codejam.code-workspace`, and therefore you will be asked whether you want to open it. 51 | 52 | 👉 Click **Open Workspace**.
53 | 54 | ![Automatic notification to open a workspace](images/open_workspace.png) 55 | 56 | ☝️ If you missed the previous dialog, you can go to the BAS Explorer, open the `codejam.code-workspace` file, and click **Open Workspace**. 57 | 58 | You should see: 59 | * `CODEJAM` as the workspace at the root of the hierarchy of the project, and 60 | * `generative-ai-codejam` as the name of the top level folder, **not** `generative-ai-codejam-1` or any other names ending with a number. 61 | 62 | 👉 You can close the **Get Started** tab. 63 | 64 | ![Open a workspace](images/workspace.png) 65 | 66 | ## Configure the connection details to Generative AI Hub 67 | 68 | 👉 Go back to the Subaccount in the [BTP cockpit](https://emea.cockpit.btp.cloud.sap/cockpit#/globalaccount/275320f9-4c26-4622-8728-b6f5196075f5/subaccount/a5a420d8-58c6-4820-ab11-90c7145da589/subaccountoverview). 69 | 70 | 👉 Navigate to `Instances and Subscriptions` and open the SAP AI Core instance's service binding. 71 | 72 | ![Service Binding in the BTP Cockpit](images/service_binding.png) 73 | 74 | 👉 Click **Copy JSON**. 75 | 76 | 👉 Return to BAS and create a new file `.aicore-config.json` in the `generative-ai-codejam/` directory. 77 | 78 | 👉 Paste the service key into `generative-ai-codejam/.aicore-config.json`, which should look similar to the following. 
79 | 80 | ```json 81 | { 82 | "appname": "7ce41b4e-e483-4fc8-8de4-0eee29142e43!b505946|aicore!b540", 83 | "clientid": "sb-7ce41b4e-e483-4fc8-8de4-0eee29142e43!b505946|aicore!b540", 84 | "clientsecret": "...", 85 | "identityzone": "genai-codejam-luyq1wkg", 86 | "identityzoneid": "a5a420d8-58c6-4820-ab11-90c7145da589", 87 | "serviceurls": { 88 | "AI_API_URL": "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com" 89 | }, 90 | "url": "https://genai-codejam-luyq1wkg.authentication.eu10.hana.ondemand.com" 91 | } 92 | ``` 93 | 94 | ## Create a Python virtual environment and install SAP's [Python SDK for Generative AI Hub](https://pypi.org/project/generative-ai-hub-sdk/) 95 | 96 | 👉 Start a new Terminal. 97 | 98 | ![Start terminal](images/start_terminal.png) 99 | 100 | 👉 Create a virtual environment using the following command: 101 | 102 | ```bash 103 | python3 -m venv ~/projects/generative-ai-codejam/env --upgrade-deps 104 | ``` 105 | 106 | 👉 Activate the `env` virtual environment like this and make sure it is activated: 107 | 108 | ```bash 109 | source ~/projects/generative-ai-codejam/env/bin/activate 110 | ``` 111 | 112 | ![venv](images/venv.png) 113 | 114 | 👉 Install the Generative AI Hub [Python SDK](https://pypi.org/project/generative-ai-hub-sdk/) and the [LangChain integration for SAP HANA Cloud](https://pypi.org/project/langchain-hana) plus all the other packages listed below using the following `pip install` commands. 115 | 116 | ```bash 117 | pip install --require-virtualenv -U generative-ai-hub-sdk[all] 'langchain-hana==0.2.1' 118 | ``` 119 | 120 | 👉 You will also need the [SciPy package](https://pypi.org/project/scipy/). 121 | 122 | ```bash 123 | pip install --require-virtualenv scipy 124 | ``` 125 | 126 | 👉 You will also need the [pypdf package](https://pypi.org/project/pypdf/) and [wikipedia](https://pypi.org/project/wikipedia/).
127 | 128 | ```bash 129 | pip install --require-virtualenv pypdf wikipedia 130 | ``` 131 | 132 | 137 | 138 | 139 | ## From now on, continue the exercises in BAS 140 | 141 | [Next exercise](03-prompt-llm.ipynb) 142 | 143 | ![BAS exercise 3](images/bas_exercises.png) -------------------------------------------------------------------------------- /exercises/03-prompt-llm.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Check your connection to Generative AI Hub\n" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "👉 First you need to assign the values from `generative-ai-codejam/.aicore-config.json` to the environment variables. \n", 15 | "\n", 16 | "That way the Generative AI Hub [Python SDK](https://pypi.org/project/generative-ai-hub-sdk/) will connect to Generative AI Hub." 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "👉 For the Python SDK to know which resource group to use, you also need to set the `resource group` in the [`variables.py`](variables.py) file to your own `resource group` (e.g. **team-01**) that you created in the SAP AI Launchpad in exercise [00-connect-AICore-and-AILaunchpad](00-connect-AICore-and-AILaunchpad.md)." 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": null, 29 | "metadata": {}, 30 | "outputs": [], 31 | "source": [ 32 | "import init_env\n", 33 | "import variables\n", 34 | "import importlib\n", 35 | "variables = importlib.reload(variables)\n", 36 | "\n", 37 | "# TODO: You need to specify which model you want to use. In this case we are directing our prompt\n", 38 | "# to the OpenAI API directly so you need to pick one of the GPT models. Make sure the model is actually deployed\n", 39 | "# in genAI Hub. You might also want to choose a model that can also process images here already.
\n", 40 | "# E.g. 'gpt-4o-mini' or 'gpt-4o'\n", 41 | "MODEL_NAME = ''\n", 42 | "# Do not modify the `assert` line below\n", 43 | "assert MODEL_NAME!='', \"\"\"You should change the variable `MODEL_NAME` with the name of your deployed model (like 'gpt-4o-mini') first!\"\"\"\n", 44 | "\n", 45 | "init_env.set_environment_variables()\n", 46 | "# Do not modify the `assert` line below \n", 47 | "assert variables.RESOURCE_GROUP!='', \"\"\"You should change the value assigned to the `RESOURCE_GROUP` in the `variables.py` file to your own resource group first!\"\"\"\n", 48 | "print(f\"Resource group is set to: {variables.RESOURCE_GROUP}\")" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "metadata": {}, 54 | "source": [ 55 | "## Prompt an LLM in the Generative AI Hub...\n", 56 | "\n", 57 | "...using OpenAI native client integration: https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/gen_ai_hub.html#openai\n", 58 | "\n", 59 | "To understand the API and structures, check OpenAI documentation: https://platform.openai.com/docs/guides/text?api-mode=chat" 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": null, 65 | "metadata": {}, 66 | "outputs": [], 67 | "source": [ 68 | "from gen_ai_hub.proxy.native.openai import chat\n", 69 | "from IPython.display import Markdown\n", 70 | "\n", 71 | "messages = [\n", 72 | " {\n", 73 | " \"role\": \"user\", \n", 74 | " \"content\": \"What is the underlying model architecture of an LLM? 
Explain it as short as possible.\"\n", 75 | "    }\n", 76 | "]\n", 77 | "\n", 78 | "kwargs = dict(model_name=MODEL_NAME, messages=messages)\n", 79 | "\n", 80 | "response = chat.completions.create(**kwargs)\n", 81 | "display(Markdown(response.choices[0].message.content))" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "## Understanding roles\n", 89 | "Most LLMs have the roles `system`, `assistant` (GPT) or `model` (Gemini), and `user` that can be used to steer the model's response. In the previous step you only used the role `user` to ask your question. \n", 90 | "\n", 91 | "👉 Try out different `system` messages to change the response. You can also tell the model not to engage in small talk or to answer only questions on a certain topic. Then try different user prompts as well!\n", 92 | "\n", 93 | "Please note that in the OpenAI API, with `o1` models and newer, `developer` messages replace the previous `system` messages." 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "metadata": {}, 100 | "outputs": [], 101 | "source": [ 102 | "messages = [\n", 103 | "    { \"role\": \"system\", \n", 104 | "     # TODO try changing the system prompt\n", 105 | "     \"content\": \"Speak like Yoda from Star Wars.\"\n", 106 | "    }, \n", 107 | "    {\n", 108 | "     \"role\": \"user\", \n", 109 | "     \"content\": \"What is the underlying model architecture of an LLM? Explain it as short as possible.\"\n", 110 | "    }\n", 111 | "]\n", 112 | "\n", 113 | "kwargs = dict(model_name=MODEL_NAME, messages=messages)\n", 114 | "response = chat.completions.create(**kwargs)\n", 115 | "display(Markdown(response.choices[0].message.content))" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "metadata": {}, 121 | "source": [ 122 | "👉 Also try to have it speak like a pirate.\n", 123 | "\n", 124 | "👉 Now let's be more serious!
Tell it to behave like an SAP consultant talking to AI Developers.\n", 125 | "\n", 126 | "👉 Ask it to behave like an SAP Consultant talking to ABAP Developers and to make ABAP comparisons." 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "## Hallucinations\n", 134 | "👉 Run the following question." 135 | ] 136 | }, 137 | { 138 | "cell_type": "code", 139 | "execution_count": null, 140 | "metadata": {}, 141 | "outputs": [], 142 | "source": [ 143 | "messages = [\n", 144 | " { \"role\": \"system\", \n", 145 | " \"content\": \"You are an SAP Consultant.\"\n", 146 | " }, \n", 147 | " {\n", 148 | " \"role\": \"user\", \n", 149 | " \"content\": \"How does the data masking of the orchestration service work?\"\n", 150 | " }\n", 151 | "]\n", 152 | "\n", 153 | "kwargs = dict(model_name=MODEL_NAME, messages=messages)\n", 154 | "response = chat.completions.create(**kwargs)\n", 155 | "display(Markdown(response.choices[0].message.content))" 156 | ] 157 | }, 158 | { 159 | "cell_type": "markdown", 160 | "metadata": {}, 161 | "source": [ 162 | "☝️ Compare the response to [SAP Help - Generative AI Hub SDK](https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/orchestration-service.html). \n", 163 | "\n", 164 | "👉 What did the model respond? Was it the truth or a hallucination?\n", 165 | "\n", 166 | "👉 Which questions work well, which questions still do not work so well?\n", 167 | "\n", 168 | "# Use Multimodal Models\n", 169 | "\n", 170 | "Multimodal models can use different inputs such as text, audio and [images](https://platform.openai.com/docs/guides/images-vision?api-mode=chat&format=base64-encoded#giving-a-model-images-as-input). In Generative AI Hub on SAP AI Core you can access multiple multimodal models (e.g. 
`gpt-4o-mini`).\n", 171 | "\n", 172 | "👉 If you have not deployed one of the gpt-4o-mini models in previous exercises, then go back to the model library and deploy a model that can also process images.\n", 173 | "\n", 174 | "👉 Now run the code snippet below to get a description for [the image of the AI Foundation Architecture](documents/ai-foundation-architecture.png). These descriptions can then, for example, be used as alternative text for screen readers or other assistive tech.\n", 175 | "\n", 176 | "👉 You can upload your own image and play around with it!" 177 | ] 178 | }, 179 | { 180 | "cell_type": "code", 181 | "execution_count": null, 182 | "metadata": {}, 183 | "outputs": [], 184 | "source": [ 185 | "import base64\n", 186 | "\n", 187 | "# get the image from the documents folder\n", 188 | "with open(\"documents/ai-foundation-architecture.png\", \"rb\") as image_file:\n", 189 | "    image_data = base64.b64encode(image_file.read()).decode('utf-8')\n", 190 | "\n", 191 | "messages = [{\"role\": \"user\", \"content\": [\n", 192 | "    {\"type\": \"text\", \"text\": \"Describe the image as alternative text.\"},\n", 193 | "    {\"type\": \"image_url\", \"image_url\": {\n", 194 | "        \"url\": f\"data:image/png;base64,{image_data}\"}\n", 195 | "    }]\n", 196 | "    }]\n", 197 | "\n", 198 | "kwargs = dict(model_name=MODEL_NAME, messages=messages)\n", 199 | "response = chat.completions.create(**kwargs)\n", 200 | "\n", 201 | "display(Markdown(response.choices[0].message.content))" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "# Extracting text from images\n", 209 | "Nora loves banana bread and thinks recipes are a good example of how LLMs can also extract complex text from images, such as [this picture of a banana bread recipe](documents/bananabread.png).
Try your own recipe if you like :)\n", 210 | "\n", 211 | "This exercise also shows how you can use the output of an LLM in other systems, as you can tell the LLM how to output information, for example in JSON format." 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": null, 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "# get the image from the documents folder\n", 221 | "with open(\"documents/bananabread.png\", \"rb\") as image_file:\n", 222 | " image_data = base64.b64encode(image_file.read()).decode('utf-8')\n", 223 | "\n", 224 | "messages = [{\"role\": \"user\", \"content\": [\n", 225 | " {\"type\": \"text\", \"text\": \"Extract the ingredients and instructions in two different json files.\"},\n", 226 | " {\"type\": \"image_url\", \"image_url\": {\n", 227 | " \"url\": f\"data:image/png;base64,{image_data}\"}\n", 228 | " }]\n", 229 | " }]\n", 230 | "\n", 231 | "kwargs = dict(model_name=MODEL_NAME, messages=messages)\n", 232 | "response = chat.completions.create(**kwargs)\n", 233 | "\n", 234 | "display(Markdown(response.choices[0].message.content))" 235 | ] 236 | }, 237 | { 238 | "cell_type": "markdown", 239 | "metadata": {}, 240 | "source": [ 241 | "[Next exercise](04-create-embeddings.ipynb)\n" 242 | ] 243 | } 244 | ], 245 | "metadata": { 246 | "kernelspec": { 247 | "display_name": "env", 248 | "language": "python", 249 | "name": "python3" 250 | }, 251 | "language_info": { 252 | "codemirror_mode": { 253 | "name": "ipython", 254 | "version": 3 255 | }, 256 | "file_extension": ".py", 257 | "mimetype": "text/x-python", 258 | "name": "python", 259 | "nbconvert_exporter": "python", 260 | "pygments_lexer": "ipython3", 261 | "version": "3.13.1" 262 | } 263 | }, 264 | "nbformat": 4, 265 | "nbformat_minor": 2 266 | } 267 | -------------------------------------------------------------------------------- /exercises/04-create-embeddings.ipynb: -------------------------------------------------------------------------------- 1 | 
{ 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Create embeddings with Generative AI Hub\n", 8 | "Like any other machine learning model, foundation models work only with numbers. In the context of generative AI, these numbers are called embeddings. Embeddings are numerical representations of unstructured data, such as text and images. OpenAI's text embedding model `text-embedding-3-small`, for example, turns your input text into 1536 numbers. That is a vector with 1536 dimensions.\n", 9 | "\n", 10 | "👉 Select the kernel again. Make sure to select the same virtual environment as in the previous exercise so that all your packages are installed." 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": null, 16 | "metadata": {}, 17 | "outputs": [], 18 | "source": [ 19 | "import init_env\n", 20 | "\n", 21 | "init_env.set_environment_variables()\n", 22 | "\n", 23 | "from gen_ai_hub.proxy.native.openai import embeddings\n", 24 | "\n", 25 | "# TODO assign the model name of the embedding model here, e.g. \"text-embedding-3-small\"\n", 26 | "EMBEDDING_MODEL_NAME = \"\"" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "## Create embeddings\n", 34 | "Define the method **get_embedding()**."
35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "def get_embedding(input_text):\n", 44 | " response = embeddings.create(\n", 45 | " input=input_text, \n", 46 | " model_name=EMBEDDING_MODEL_NAME\n", 47 | " )\n", 48 | " embedding = response.data[0].embedding\n", 49 | " return embedding" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "Get embeddings for the words: **apple, orange, phone** and for the phrases: **I love dogs, I love animals, I hate cats.**" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": null, 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [ 65 | "apple_embedding = get_embedding(\"apple\")\n", 66 | "orange_embedding = get_embedding(\"orange\")\n", 67 | "phone_embedding = get_embedding(\"phone\")\n", 68 | "dog_embedding = get_embedding(\"I love dogs\")\n", 69 | "animals_embedding = get_embedding(\"I love animals\")\n", 70 | "cat_embedding = get_embedding(\"I hate cats\")\n", 71 | "\n", 72 | "print(apple_embedding)" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "## Calculate Vector Similarities\n", 80 | "To calculate the cosine similarity of the vectors, we also need the [SciPy](https://scipy.org/) package. SciPy contains many fundamental algorithms for scientific computing.\n", 81 | "\n", 82 | "Cosine similarity measures how closely two vectors point in the same direction. The closer the two vectors are, the higher the similarity between the embedded texts.\n", 83 | "\n", 84 | "👉 Import the SciPy package and define the method **get_cosine_similarity()**." 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": null, 90 | "metadata": {}, 91 | "outputs": [], 92 | "source": [ 93 | "from scipy import spatial\n", 94 | "\n", 95 | "# TODO the get_cosine_similarity function does not work very well, does it?
Fix it!\n", 96 | "def get_cosine_similarity(vector_1, vector_2):\n", 97 | " return 1 - spatial.distance.cosine(vector_1, vector_1)" 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": {}, 103 | "source": [ 104 | "👉 Calculate similarities between the embeddings of the words and phrases from above and find the most similar vectors. You can follow the example below." 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "print(\"apple-orange\")\n", 114 | "print(get_cosine_similarity(apple_embedding, orange_embedding))" 115 | ] 116 | }, 117 | { 118 | "cell_type": "markdown", 119 | "metadata": {}, 120 | "source": [ 121 | "[Next exercise](05-store-embeddings-hana.ipynb)" 122 | ] 123 | } 124 | ], 125 | "metadata": { 126 | "kernelspec": { 127 | "display_name": ".venv", 128 | "language": "python", 129 | "name": "python3" 130 | }, 131 | "language_info": { 132 | "codemirror_mode": { 133 | "name": "ipython", 134 | "version": 3 135 | }, 136 | "file_extension": ".py", 137 | "mimetype": "text/x-python", 138 | "name": "python", 139 | "nbconvert_exporter": "python", 140 | "pygments_lexer": "ipython3", 141 | "version": "3.9.13" 142 | } 143 | }, 144 | "nbformat": 4, 145 | "nbformat_minor": 2 146 | } 147 | -------------------------------------------------------------------------------- /exercises/05-store-embeddings-hana.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Store Embeddings for a Retrieval Augmented Generation (RAG) Use Case\n", 8 | "\n", 9 | "RAG is especially useful for question-answering use cases that involve large amounts of unstructured documents containing important information.\n", 10 | "\n", 11 | "Let’s implement a RAG use case so that the next time you ask about the [orchestration 
service](https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/orchestration-service.html), you get the correct response! To achieve this, you need to vectorize your context documents. You can find the documents to vectorize and store as embeddings in SAP HANA Cloud Vector Engine in the `documents` directory. \n", 12 | "\n", 13 | "☝️ You will find four documents covering Generative AI Hub, the orchestration service, the grounding module, and the available models.\n", 14 | "\n", 15 | "## LangChain\n", 16 | "\n", 17 | "The Generative AI Hub Python SDK includes integrations with the [LangChain](https://python.langchain.com/docs/introduction/) library. LangChain is a framework for building applications that utilize large language models, such as GPT models from OpenAI and others. It is valuable because it helps manage and connect different AI models, tools, and datasets, simplifying the creation of complex AI workflows." 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "import warnings\n", 27 | "warnings.filterwarnings(\"ignore\", message=\"As the c extension\") #Avoid a warning message for \"RuntimeWarning: As the c extension couldn't be imported, `google-crc32c` is using a pure python implementation that is significantly slower. If possible, please configure a c build environment and compile the extension.
warnings.warn(_SLOW_CRC32C_WARNING, RuntimeWarning)\"\n", 28 | "\n", 29 | "# OpenAIEmbeddings to create text embeddings\n", 30 | "from gen_ai_hub.proxy.langchain.openai import OpenAIEmbeddings\n", 31 | "\n", 32 | "# PyPDFDirectoryLoader to load all PDF documents from a directory\n", 33 | "from langchain_community.document_loaders import PyPDFDirectoryLoader\n", 34 | "\n", 35 | "# RecursiveCharacterTextSplitter to chunk documents into smaller text chunks\n", 36 | "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", 37 | "\n", 38 | "# LangChain & HANA Vector Engine - legacy integration: https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.hanavector.HanaDB.html\n", 39 | "#from langchain_community.vectorstores.hanavector import HanaDB\n", 40 | "\n", 41 | "# SAP HANA integration with LangChain: https://pypi.org/project/langchain-hana \n", 42 | "from langchain_hana import HanaDB" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "👉 Change the `EMBEDDING_DEPLOYMENT_ID` in [variables.py](variables.py) to your deployment ID from exercise [01-explore-genai-hub](01-explore-genai-hub.md). To find it, open the **SAP AI Launchpad** application and navigate to **ML Operations** > **Deployments**.\n", 50 | "\n", 51 | "☝️ The `EMBEDDING_DEPLOYMENT_ID` identifies the deployment of the embedding model that creates vector representations of your text, e.g.
**text-embedding-3-small**.\n", 52 | "\n", 53 | "👉 In [variables.py](variables.py) also set the `EMBEDDING_TABLE` from `\"EMBEDDINGS_CODEJAM_\"+\"->ADD_YOUR_NAME_HERE<-\"` by adding your personal or team name, like `\"EMBEDDINGS_CODEJAM_NORA\"`.\n", 54 | "\n", 55 | "👉 In the root folder of the project create a [.user.ini](../.user.ini) file with the SAP HANA database connection details provided by the instructor.\n", 56 | "```ini\n", 57 | "[hana]\n", 58 | "url=XXXXXX.hanacloud.ondemand.com\n", 59 | "user=XXXXXX\n", 60 | "passwd=XXXXXX\n", 61 | "port=443\n", 62 | "```" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": null, 68 | "metadata": {}, 69 | "outputs": [], 70 | "source": [ 71 | "import init_env\n", 72 | "import variables\n", 73 | "\n", 74 | "init_env.set_environment_variables()\n", 75 | "\n", 76 | "# connect to HANA instance\n", 77 | "connection = init_env.connect_to_hana_db()\n", 78 | "print(f\"Successfully connected to SAP HANA db instance: {connection.isconnected()}\")" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "# Chunking of the documents\n", 86 | "\n", 87 | "Before you can create embeddings for your documents, you need to break them down into smaller text pieces, called \"`chunks`\".
You will use a simple character-based chunking technique: the `RecursiveCharacterTextSplitter` from LangChain splits the text on a list of separators, starting with `\"\\n\\n\"`, until each chunk fits the configured character length.\n", 88 | "\n", 89 | "## Recursive Character Text Splitter" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "\n", 99 | "# Load custom documents\n", 100 | "loader = PyPDFDirectoryLoader('documents/')\n", 101 | "documents = loader.load()\n", 102 | "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)\n", 103 | "texts = text_splitter.split_documents(documents)\n", 104 | "print(f\"Number of document chunks: {len(texts)}\")\n" 105 | ] 106 | }, 107 | { 108 | "cell_type": "markdown", 109 | "metadata": {}, 110 | "source": [ 111 | "Now you can connect to your SAP HANA Cloud Vector Engine and store the embeddings for your text chunks."
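To get a feeling for what `chunk_size=500` and `chunk_overlap=50` mean, here is a naive sliding-window chunker as a toy illustration; LangChain's splitter is smarter because it prefers to cut at separators such as paragraph breaks rather than mid-word:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    # naive fixed-size sliding window; each chunk repeats the last
    # `overlap` characters of the previous one to preserve context
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_text("x" * 1200, chunk_size=500, overlap=50)
print(len(chunks))      # 3
print(len(chunks[0]))   # 500
```

The overlap means neighbouring chunks share some text, so a sentence cut at a chunk boundary is still fully contained in at least one chunk.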
112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": {}, 118 | "outputs": [], 119 | "source": [ 120 | "import importlib\n", 121 | "variables = importlib.reload(variables)\n", 122 | "\n", 123 | "# DO NOT CHANGE THIS LINE BELOW - It is only to check whether you changed the table name in variables.py\n", 124 | "assert variables.EMBEDDING_TABLE != 'EMBEDDINGS_CODEJAM_->ADD_YOUR_NAME_HERE<-', 'EMBEDDING_TABLE name not changed in variables.py'\n", 125 | "\n", 126 | "# Create embeddings for custom documents\n", 127 | "embeddings = OpenAIEmbeddings(deployment_id=variables.EMBEDDING_DEPLOYMENT_ID)\n", 128 | "\n", 129 | "db = HanaDB(\n", 130 | " embedding=embeddings, connection=connection, table_name=variables.EMBEDDING_TABLE\n", 131 | ")\n", 132 | "\n", 133 | "# Delete already existing documents from the table\n", 134 | "db.delete(filter={})\n", 135 | "\n", 136 | "# add the loaded document chunks\n", 137 | "db.add_documents(texts)\n", 138 | "print(f\"Table {db.table_name} created in the SAP HANA database.\")" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "## Check the embeddings in SAP HANA Cloud Vector Engine\n", 146 | "\n", 147 | "👉 Print the rows from your embedding table and scroll to the right to see the embeddings." 
148 | ] 149 | }, 150 | { 151 | "cell_type": "code", 152 | "execution_count": null, 153 | "metadata": {}, 154 | "outputs": [], 155 | "source": [ 156 | "from IPython.display import Markdown\n", 157 | "cursor = connection.cursor()\n", 158 | "\n", 159 | "# Use `db.table_name` instead of `variables.EMBEDDING_TABLE` because HANA driver sanitizes a table name by removing unaccepted characters\n", 160 | "is_ok = cursor.execute(f'''SELECT \"VEC_TEXT\", \"VEC_META\", TO_NVARCHAR(\"VEC_VECTOR\") FROM \"{db.table_name}\"''')\n", 161 | "record_columns=cursor.fetchone()\n", 162 | "if record_columns:\n", 163 | " display({\"VEC_TEXT\" : record_columns[0], \"VEC_META\" : eval(record_columns[1]), \"VEC_VECTOR\" : record_columns[2]})\n", 164 | "cursor.close()" 165 | ] 166 | }, 167 | { 168 | "cell_type": "markdown", 169 | "metadata": {}, 170 | "source": [ 171 | "[Next exercise](06-RAG.ipynb)" 172 | ] 173 | } 174 | ], 175 | "metadata": { 176 | "kernelspec": { 177 | "display_name": ".venv", 178 | "language": "python", 179 | "name": "python3" 180 | }, 181 | "language_info": { 182 | "codemirror_mode": { 183 | "name": "ipython", 184 | "version": 3 185 | }, 186 | "file_extension": ".py", 187 | "mimetype": "text/x-python", 188 | "name": "python", 189 | "nbconvert_exporter": "python", 190 | "pygments_lexer": "ipython3", 191 | "version": "3.9.13" 192 | } 193 | }, 194 | "nbformat": 4, 195 | "nbformat_minor": 2 196 | } 197 | -------------------------------------------------------------------------------- /exercises/06-RAG.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Implement the Retrieval for a Retrieval Augmented Generation (RAG) Use Case\n", 8 | "\n", 9 | "Now that you have all your context information stored in the SAP HANA Cloud Vector Store, you can start asking the LLM questions about the orchestration service of Generative AI Hub. 
This time, the model will not respond from its knowledge base—what it learned during training—but instead, the retriever will search for relevant context information in your vector database and send the appropriate text chunk to the LLM to review before responding.\n", 10 | "\n", 11 | "👉 Change the `LLM_DEPLOYMENT_ID` in [variables.py](variables.py) to your deployment ID from exercise [01-explore-genai-hub](01-explore-genai-hub.md). For that go to **SAP AI Launchpad** application and navigate to **ML Operations** > **Deployments**.\n", 12 | "\n", 13 | "☝️ The `LLM_DEPLOYMENT_ID` is the deployment ID of the chat model you want to use e.g. **gpt-4o-mini**." 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import init_env\n", 23 | "import variables\n", 24 | "import importlib\n", 25 | "variables = importlib.reload(variables)\n", 26 | "\n", 27 | "init_env.set_environment_variables()\n", 28 | "\n", 29 | "from gen_ai_hub.proxy.langchain.openai import ChatOpenAI\n", 30 | "from gen_ai_hub.proxy.langchain.openai import OpenAIEmbeddings\n", 31 | "\n", 32 | "from langchain.chains import RetrievalQA\n", 33 | "\n", 34 | "# LangChain & HANA Vector Engine - legacy integration: https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.hanavector.HanaDB.html\n", 35 | "#from langchain_community.vectorstores.hanavector import HanaDB\n", 36 | "\n", 37 | "# SAP HANA integration with LangChain: https://pypi.org/project/langchain-hana \n", 38 | "from langchain_hana import HanaDB" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "You are again connecting to our shared SAP HANA Cloud Vector Engine." 
46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "# connect to HANA instance\n", 55 | "connection = init_env.connect_to_hana_db()\n", 56 | "connection.isconnected()" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": null, 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [ 65 | "# Reference which embedding model, DB connection and table to use\n", 66 | "embeddings = OpenAIEmbeddings(deployment_id=variables.EMBEDDING_DEPLOYMENT_ID)\n", 67 | "db = HanaDB(\n", 68 | " embedding=embeddings, connection=connection, table_name=variables.EMBEDDING_TABLE\n", 69 | ")" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "metadata": {}, 75 | "source": [ 76 | "In this step you define which LLM to use during the retrieval process and create a retriever from the vector store that will fetch the relevant information from the database. " 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "# Define which model to use\n", 86 | "chat_llm = ChatOpenAI(deployment_id=variables.LLM_DEPLOYMENT_ID)\n", 87 | "\n", 88 | "# Create a retriever instance of the vector store\n", 89 | "retriever = db.as_retriever(search_kwargs={\"k\": 2})" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "👉 Instead of sending the query directly to the LLM, you will now create a `RetrievalQA` instance and pass both the LLM and the database to be used during the retrieval process. Once set up, you can send your query to the `Retriever`.\n", 97 | "\n", 98 | "👉 Try out different queries. Feel free to ask anything you'd like to know about the models that are available in Generative AI Hub."
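Conceptually, `RetrievalQA` "stuffs" the retrieved chunks into the prompt before calling the LLM. The sketch below is a simplified illustration of that assembly step, not the chain's actual internal prompt template:

```python
def build_rag_prompt(question, retrieved_chunks):
    # join the top-k retrieved chunks and prepend them to the question,
    # mirroring the default "stuff" strategy of RetrievalQA
    context = "\n\n".join(retrieved_chunks)
    return (
        "Use the following pieces of context to answer the question.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is data masking in the orchestration service?",
    ["Data masking hides personal information before the LLM call.",
     "Masking supports anonymization and pseudonymization."],
)
print(prompt)
```

This is why the chunk size matters: everything the retriever returns has to fit into the model's context window together with the question.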
99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": {}, 105 | "outputs": [], 106 | "source": [ 107 | "# Create the QA instance to query llm based on custom documents\n", 108 | "qa = RetrievalQA.from_llm(llm=chat_llm, retriever=retriever, return_source_documents=True)\n", 109 | "\n", 110 | "# Send query\n", 111 | "query = \"What is data masking in the orchestration service?\"\n", 112 | "\n", 113 | "answer = qa.invoke(query)\n", 114 | "display(answer)" 115 | ] 116 | }, 117 | { 118 | "cell_type": "code", 119 | "execution_count": null, 120 | "metadata": {}, 121 | "outputs": [], 122 | "source": [ 123 | "for document in answer['source_documents']:\n", 124 | " display(document.metadata) \n", 125 | " print(document.page_content)" 126 | ] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "metadata": {}, 131 | "source": [ 132 | "👉 Go back to [05-store-embeddings-hana](05-store-embeddings-hana.ipynb) and try out different chunk sizes and/or different values for overlap. 
Store these chunks in a different table by adding a new variable to [variables.py](variables.py) and run this script again using the newly created table.\n", 133 | "\n", 134 | "[Next exercise](08-orchestration-service.ipynb)" 135 | ] 136 | } 137 | ], 138 | "metadata": { 139 | "kernelspec": { 140 | "display_name": ".venv", 141 | "language": "python", 142 | "name": "python3" 143 | }, 144 | "language_info": { 145 | "codemirror_mode": { 146 | "name": "ipython", 147 | "version": 3 148 | }, 149 | "file_extension": ".py", 150 | "mimetype": "text/x-python", 151 | "name": "python", 152 | "nbconvert_exporter": "python", 153 | "pygments_lexer": "ipython3", 154 | "version": "3.9.13" 155 | } 156 | }, 157 | "nbformat": 4, 158 | "nbformat_minor": 2 159 | } 160 | -------------------------------------------------------------------------------- /exercises/08-orchestration-service.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Use the orchestration service of Generative AI Hub\n", 8 | "\n", 9 | "The orchestration service of Generative AI Hub lets you use all the available models with the same codebase. You only deploy the orchestration service and then you can access all available models simply by changing the model name parameter. You can also use grounding, prompt templating, data masking and content filtering capabilities.\n", 10 | "\n", 11 | "This code is based on the [AI180 TechEd 2024 Jump-Start session](https://github.com/SAP-samples/teched2024-AI180/tree/e648921c46337b57f61ecc9a93251d4b838d7ad0/exercises/python).\n", 12 | "\n", 13 | "👉 Make sure you assign the deployment url of the orchestration service (you can find the url in SAP AI Launchpad in the `Deployments` tab) to `AICORE_ORCHESTRATION_DEPLOYMENT_URL` in [variables.py](variables.py)." 
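For orientation, the corresponding entry in `variables.py` would look roughly like the snippet below. The URL is a made-up placeholder; your real value is the deployment URL shown in SAP AI Launchpad:

```python
# variables.py (excerpt) - hypothetical placeholder, replace with your own URL
AICORE_ORCHESTRATION_DEPLOYMENT_URL = (
    "https://api.ai.example.ml.hana.ondemand.com"
    "/v2/inference/deployments/<YOUR_DEPLOYMENT_ID>"
)
```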
14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import init_env\n", 23 | "import variables\n", 24 | "\n", 25 | "init_env.set_environment_variables()" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "### Import the packages you want to use" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "metadata": {}, 39 | "outputs": [], 40 | "source": [ 41 | "from gen_ai_hub.orchestration.models.llm import LLM\n", 42 | "from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage\n", 43 | "from gen_ai_hub.orchestration.models.template import Template, TemplateValue\n", 44 | "from gen_ai_hub.orchestration.models.config import OrchestrationConfig\n", 45 | "from gen_ai_hub.orchestration.service import OrchestrationService\n", 46 | "from gen_ai_hub.orchestration.models.azure_content_filter import AzureContentFilter\n", 47 | "from gen_ai_hub.orchestration.exceptions import OrchestrationError" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "### Assign the model you want to use\n", 55 | "You can find more information regarding the available models here: https://me.sap.com/notes/3437766" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": {}, 62 | "outputs": [], 63 | "source": [ 64 | "# TODO choose the model you want to try e.g.
gemini-1.5-flash, mistralai--mistral-large-instruct, \n", 65 | "# ibm--granite-13b-chat meta--llama3.1-70b-instruct, amazon--titan-text-lite, anthropic--claude-3.5-sonnet, \n", 66 | "# gpt-4o-mini and assign it to name\n", 67 | "llm = LLM(\n", 68 | " name=\"\",\n", 69 | " version=\"latest\",\n", 70 | " parameters={\"max_tokens\": 500, \"temperature\": 1},\n", 71 | ")" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "### Create a prompt template\n", 79 | "The parameter **user_query** in the code snippet below is going to hold the user query that you will add later on. The user query is the text to be translated by the model. The parameter **to_lang** can be any language you want to translate into. By default it is set to **English**." 80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": null, 85 | "metadata": {}, 86 | "outputs": [], 87 | "source": [ 88 | "template = Template(\n", 89 | " messages=[\n", 90 | " SystemMessage(\"You are a helpful translation assistant.\"),\n", 91 | " UserMessage(\n", 92 | " \"Translate the following text to {{?to_lang}}: {{?user_query}}\",\n", 93 | " )\n", 94 | " ],\n", 95 | " defaults=[\n", 96 | " TemplateValue(name=\"to_lang\", value=\"English\"),\n", 97 | " ],\n", 98 | ")" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "## Create an orchestration configuration \n", 106 | "\n", 107 | "Create an orchestration configuration by adding the llm you referenced and the prompt template you created previously." 
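Under the hood, placeholders like `{{?to_lang}}` are resolved from the `TemplateValue`s you pass at run time, with the `defaults` used as fallbacks. As a rough illustration of that substitution (a toy sketch, not the SDK's actual implementation):

```python
import re

def render(template, values):
    # replace every {{?name}} placeholder with its value
    return re.sub(r"\{\{\?(\w+)\}\}", lambda m: str(values[m.group(1)]), template)

msg = render(
    "Translate the following text to {{?to_lang}}: {{?user_query}}",
    {"to_lang": "English", "user_query": "Hallo Welt"},
)
print(msg)  # Translate the following text to English: Hallo Welt
```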
108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": null, 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "config = OrchestrationConfig(\n", 117 | " template=template,\n", 118 | " llm=llm,\n", 119 | ")" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "Add the configuration to the OrchestrationService instance and send the prompt to the model." 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "metadata": {}, 133 | "outputs": [], 134 | "source": [ 135 | "import importlib\n", 136 | "variables = importlib.reload(variables)\n", 137 | "\n", 138 | "orchestration_service = OrchestrationService(\n", 139 | " api_url=variables.AICORE_ORCHESTRATION_DEPLOYMENT_URL,\n", 140 | " config=config,\n", 141 | ")\n", 142 | "result = orchestration_service.run(\n", 143 | " template_values=[\n", 144 | " TemplateValue(\n", 145 | " name=\"user_query\",\n", 146 | " #TODO Here you can change the user prompt into whatever you want to ask the model\n", 147 | " value=\"Geht das wirklich mit allen Modellen die verfügbar sind?\"\n", 148 | " )\n", 149 | " ]\n", 150 | ")\n", 151 | "print(result.orchestration_result.choices[0].message.content)\n" 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "## Add a content filter\n", 159 | "\n", 160 | "Create a content filter and add it as an input filter to the orchestration configuration. An input filter blocks harmful content in the user query, so the request is never sent to the model. An output filter, in contrast, lets the request go through but then filters any harmful text generated by the model. Depending on your use case it can make sense to have both input and output filters.\n", 161 | "\n", 162 | "👉 Try out different values for the content filters. You can choose the values [0 = **Safe**, 2 = **Low**, 4 = **Medium**, 6 = **High**].
The value sets the severity threshold for filtering: with **0** (the strictest setting) everything above **Safe** content is blocked, while with **6** only the most severely harmful content is blocked." 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": null, 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "\n", 172 | "content_filter = AzureContentFilter(\n", 173 | " hate=0,\n", 174 | " sexual=0,\n", 175 | " self_harm=0,\n", 176 | " violence=0,\n", 177 | ")\n", 178 | "\n", 179 | "orchestration_service.config.input_filters = [content_filter]" 180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "metadata": {}, 185 | "source": [ 186 | "## Try out the content filter" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": null, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [ 195 | "\n", 196 | "try:\n", 197 | " result = orchestration_service.run(\n", 198 | " template_values=[\n", 199 | " TemplateValue(\n", 200 | " name=\"user_query\",\n", 201 | " #TODO Here you can change the user prompt into whatever you want to ask the model\n", 202 | " value=\"Du bist ein ganz mieser Entwickler!\",\n", 203 | " ),\n", 204 | " ]\n", 205 | " )\n", 206 | " print(result.orchestration_result.choices[0].message.content)\n", 207 | "except OrchestrationError as error:\n", 208 | " print(error.message)\n", 209 | "\n", 210 | "\n", 211 | "result = orchestration_service.run(\n", 212 | " template_values=[\n", 213 | " TemplateValue(\n", 214 | " name=\"user_query\",\n", 215 | " #TODO Here you can change the user prompt into whatever you want to ask the model\n", 216 | " value=\"Du bist ein super talentierter Entwickler!\",\n", 217 | " )\n", 218 | " ]\n", 219 | ")\n", 220 | "print(result.orchestration_result.choices[0].message.content)" 221 | ] 222 | }, 223 | { 224 | "cell_type": "markdown", 225 | "metadata": {}, 226 | "source": [ 227 | "## Now also add the content filter as an output filter" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": null, 233 | "metadata": {}, 234 | "outputs":
[], 235 | "source": [ 236 | "orchestration_service.config.output_filters = [content_filter]\n", 237 | "\n", 238 | "try:\n", 239 | " result = orchestration_service.run(\n", 240 | " template_values=[\n", 241 | " TemplateValue(\n", 242 | " name=\"user_query\",\n", 243 | " #TODO Here you can change the user prompt into whatever you want to ask the model\n", 244 | " value='Ich würde gerne wissen, wie ich gewaltvoll die Fensterscheibe in einem Bürogebäude am Besten zerstören kann.',\n", 245 | " )\n", 246 | " ]\n", 247 | " )\n", 248 | " print(result.orchestration_result.choices[0].message.content)\n", 249 | "except OrchestrationError as error:\n", 250 | " print(error.message)" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "## Optional exercise\n", 258 | "\n", 259 | "👉 Try adding the [Llama Guard 3 Filter](https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/orchestration-service.html#content-filtering) on top of or instead of the Azure Content Filter.\n", 260 | "\n", 261 | "👉 Now that you know how the orchestration service works, try adding the [Data Masking](https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/orchestration-service.html#data-masking) capability. 
With Data Masking you can hide personal information like email, name or phone numbers before sending such sensitive data to an LLM.\n", 262 | "\n", 263 | "## More Info\n", 264 | "\n", 265 | "Here you can find more [info on the Azure content filter](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/content-filtering)\n", 266 | "\n", 267 | "[Next exercise](09-orchestration-service-grounding.ipynb)" 268 | ] 269 | } 270 | ], 271 | "metadata": { 272 | "kernelspec": { 273 | "display_name": ".venv", 274 | "language": "python", 275 | "name": "python3" 276 | }, 277 | "language_info": { 278 | "codemirror_mode": { 279 | "name": "ipython", 280 | "version": 3 281 | }, 282 | "file_extension": ".py", 283 | "mimetype": "text/x-python", 284 | "name": "python", 285 | "nbconvert_exporter": "python", 286 | "pygments_lexer": "ipython3", 287 | "version": "3.9.13" 288 | } 289 | }, 290 | "nbformat": 4, 291 | "nbformat_minor": 2 292 | } 293 | -------------------------------------------------------------------------------- /exercises/09-orchestration-service-grounding.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Use the orchestration service of Generative AI Hub\n", 8 | "\n", 9 | "The orchestration service of Generative AI Hub lets you use all the available models with the same codebase. You only deploy the orchestration service and then you can access all available models simply by changing the model name parameter. You can also use grounding, prompt templating, data masking and content filtering capabilities.\n", 10 | "\n", 11 | "Store the `orchestration deployment url` from the previous step in your `variables.py` file. 
This code is based on the [AI180 TechEd 2024 Jump-Start session](https://github.com/SAP-samples/teched2024-AI180/tree/e648921c46337b57f61ecc9a93251d4b838d7ad0/exercises/python).\n", 12 | "\n", 13 | "👉 Make sure you assign the deployment URL of the orchestration service to `AICORE_ORCHESTRATION_DEPLOYMENT_URL` in [variables.py](variables.py)." 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import init_env\n", 23 | "import variables\n", 24 | "\n", 25 | "init_env.set_environment_variables()" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "### Import the packages you want to use" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "metadata": {}, 39 | "outputs": [], 40 | "source": [ 41 | "from gen_ai_hub.orchestration.models.llm import LLM\n", 42 | "from gen_ai_hub.orchestration.models.config import GroundingModule, OrchestrationConfig\n", 43 | "from gen_ai_hub.orchestration.models.document_grounding import DocumentGrounding, DocumentGroundingFilter\n", 44 | "from gen_ai_hub.orchestration.models.template import Template, TemplateValue\n", 45 | "from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage\n", 46 | "from gen_ai_hub.orchestration.service import OrchestrationService" 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": {}, 52 | "source": [ 53 | "### Assign a model and define a prompt template\n", 54 | "**user_query** is again the user input, whereas **grounding_response** holds the context retrieved by the grounding module, in this case from help.sap.com." 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": null, 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "# TODO again you need to choose a model e.g.
gemini-1.5-flash or gpt-4o-mini\n", 64 | "llm = LLM(\n", 65 | " name=\"\",\n", 66 | " parameters={\n", 67 | " 'temperature': 0.0,\n", 68 | " }\n", 69 | ")\n", 70 | "template = Template(\n", 71 | " messages=[\n", 72 | " SystemMessage(\"You are a helpful assistant.\"),\n", 73 | " UserMessage(\"\"\"Answer the request by providing relevant answers that fit to the request.\n", 74 | " Request: {{ ?user_query }}\n", 75 | " Context:{{ ?grounding_response }}\n", 76 | " \"\"\"),\n", 77 | " ]\n", 78 | " )" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "### Create an orchestration configuration that specifies the grounding capability" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "# Set up Document Grounding\n", 95 | "filters = [\n", 96 | " DocumentGroundingFilter(id=\"SAPHelp\", data_repository_type=\"help.sap.com\")\n", 97 | " ]\n", 98 | "\n", 99 | "grounding_config = GroundingModule(\n", 100 | " type=\"document_grounding_service\",\n", 101 | " config=DocumentGrounding(input_params=[\"user_query\"], output_param=\"grounding_response\", filters=filters)\n", 102 | " )\n", 103 | "\n", 104 | "config = OrchestrationConfig(\n", 105 | " template=template,\n", 106 | " llm=llm,\n", 107 | " grounding=grounding_config\n", 108 | ")" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": null, 114 | "metadata": {}, 115 | "outputs": [], 116 | "source": [ 117 | "import importlib\n", 118 | "variables = importlib.reload(variables)\n", 119 | "\n", 120 | "orchestration_service = OrchestrationService(\n", 121 | " api_url=variables.AICORE_ORCHESTRATION_DEPLOYMENT_URL,\n", 122 | " config=config\n", 123 | ")\n", 124 | "\n", 125 | "response = orchestration_service.run(\n", 126 | " template_values=[\n", 127 | " TemplateValue(\n", 128 | " name=\"user_query\",\n", 129 | " #TODO Here you can change the user prompt into whatever
you want to ask the model\n", 130 | "            value=\"What is Joule?\"\n", 131 | "        )\n", 132 | "    ]\n", 133 | ")\n", 134 | "\n", 135 | "print(response.orchestration_result.choices[0].message.content)" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "metadata": {}, 141 | "source": [ 142 | "# Streaming\n", 143 | "Long response times can be frustrating in chat applications, especially when a large amount of text is involved. To create a smoother, more engaging user experience, we can use streaming to display the response in real time, character by character or chunk by chunk, as the model generates it. This avoids the awkward wait for a complete response and provides immediate feedback to the user." 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": null, 149 | "metadata": {}, 150 | "outputs": [], 151 | "source": [ 152 | "response = orchestration_service.stream(\n", 153 | "    config=config,\n", 154 | "    template_values=[\n", 155 | "        TemplateValue(\n", 156 | "            name=\"user_query\",\n", 157 | "            #TODO Here you can change the user prompt into whatever you want to ask the model\n", 158 | "            value=\"What is Joule?\"\n", 159 | "        )\n", 160 | "    ]\n", 161 | ")\n", 162 | "\n", 163 | "# Print each chunk as it arrives; end=\"\" keeps the streamed text on one continuous line\n", 164 | "for chunk in response:\n", 165 | "    print(chunk.orchestration_result.choices[0].delta.content, end=\"\")\n" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "[More Info on the content filter](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/content-filtering)\n", 173 | "\n", 174 | "[Next exercise](10-chatbot-with-memory.ipynb)" 175 | ] 176 | } 177 | ], 178 | "metadata": { 179 | "kernelspec": { 180 | "display_name": ".venv", 181 | "language": "python", 182 | "name": "python3" 183 | }, 184 | "language_info": { 185 | "codemirror_mode": { 186 | "name": "ipython", 187 | "version": 3 188 | }, 189 | "file_extension": ".py", 190 | "mimetype": "text/x-python", 191 | "name": "python", 192 | "nbconvert_exporter": "python", 193 | 
"pygments_lexer": "ipython3", 193 | "version": "3.9.13" 194 | } 195 | }, 196 | "nbformat": 4, 197 | "nbformat_minor": 2 198 | } 199 | -------------------------------------------------------------------------------- /exercises/10-chatbot-with-memory.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Chatbot with Memory\n", 8 | "\n", 9 | "Let's add some history to the interaction and build a chatbot. Unlike many people think. LLMs are fixed in their state. They are trained until a certain cutoff date and do not know anything after that point unless you feed them current information. That is also why LLMs do not remember anything about you or the prompts you send to the model. If the model seems to remember you and what you said it is always because the application you are using (e.g. ChatPGT or the chat function in SAP AI Launchpad) is sending the chat history to the model to provide the conversation history to the model as context.\n", 10 | "\n", 11 | "Below you can find a simple implementation of a chatbot with memory.\n", 12 | "\n", 13 | "The code in this exercise is based on the [help documentation](https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/orchestration-service.html) of the Generative AI Hub Python SDK." 
14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import init_env\n", 23 | "import variables\n", 24 | "from typing import List\n", 25 | "\n", 26 | "init_env.set_environment_variables()" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "### Import the packages you want to use" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "from gen_ai_hub.orchestration.models.llm import LLM\n", 43 | "from gen_ai_hub.orchestration.models.message import Message, SystemMessage, UserMessage\n", 44 | "from gen_ai_hub.orchestration.models.template import Template, TemplateValue\n", 45 | "from gen_ai_hub.orchestration.models.config import OrchestrationConfig\n", 46 | "from gen_ai_hub.orchestration.service import OrchestrationService" 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": {}, 52 | "source": [ 53 | "### Create the chatbot class" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "class ChatBot:\n", 63 | " def __init__(self, orchestration_service: OrchestrationService):\n", 64 | " self.service = orchestration_service\n", 65 | " self.config = OrchestrationConfig(\n", 66 | " template=Template(\n", 67 | " messages=[\n", 68 | " SystemMessage(\"You are a helpful chatbot assistant.\"),\n", 69 | " UserMessage(\"{{?user_query}}\"),\n", 70 | " ],\n", 71 | " ),\n", 72 | " # TODO add a model name here, e.g. 
gemini-1.5-flash\n", 73 | " llm=LLM(name=\"\"),\n", 74 | " )\n", 75 | " self.history: List[Message] = []\n", 76 | "\n", 77 | " def chat(self, user_input):\n", 78 | " response = self.service.run(\n", 79 | " config=self.config,\n", 80 | " template_values=[\n", 81 | " TemplateValue(name=\"user_query\", value=user_input),\n", 82 | " ],\n", 83 | " history=self.history,\n", 84 | " )\n", 85 | "\n", 86 | " message = response.orchestration_result.choices[0].message\n", 87 | "\n", 88 | " self.history = response.module_results.templating\n", 89 | " self.history.append(message)\n", 90 | "\n", 91 | " return message.content\n", 92 | " \n", 93 | " def reset(self):\n", 94 | " self.history = []\n", 95 | "\n", 96 | "service = OrchestrationService(api_url=variables.AICORE_ORCHESTRATION_DEPLOYMENT_URL)\n", 97 | "bot = ChatBot(orchestration_service=service)" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": null, 103 | "metadata": {}, 104 | "outputs": [], 105 | "source": [ 106 | "bot.chat(\"Hello, are you there?\")" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "metadata": {}, 113 | "outputs": [], 114 | "source": [ 115 | "bot.chat(\"Do you like CodeJams?\")" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": null, 121 | "metadata": {}, 122 | "outputs": [], 123 | "source": [ 124 | "bot.chat(\"Can you remember what we first talked about?\")" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": {}, 131 | "outputs": [], 132 | "source": [ 133 | "bot.history" 134 | ] 135 | }, 136 | { 137 | "cell_type": "markdown", 138 | "metadata": {}, 139 | "source": [ 140 | "And to prove to you that the model does indeed not remember you, let's delete the history and try again :)" 141 | ] 142 | }, 143 | { 144 | "cell_type": "code", 145 | "execution_count": null, 146 | "metadata": {}, 147 | "outputs": [], 148 | "source": [ 149 | "bot.reset()\n", 150 | "bot.chat(\"Can 
you remember what we first talked about?\")" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "metadata": {}, 156 | "source": [ 157 | "[Next exercise](11-your-chatbot.ipynb)" 158 | ] 159 | } 160 | ], 161 | "metadata": { 162 | "kernelspec": { 163 | "display_name": ".venv", 164 | "language": "python", 165 | "name": "python3" 166 | }, 167 | "language_info": { 168 | "codemirror_mode": { 169 | "name": "ipython", 170 | "version": 3 171 | }, 172 | "file_extension": ".py", 173 | "mimetype": "text/x-python", 174 | "name": "python", 175 | "nbconvert_exporter": "python", 176 | "pygments_lexer": "ipython3", 177 | "version": "3.9.13" 178 | } 179 | }, 180 | "nbformat": 4, 181 | "nbformat_minor": 2 182 | } 183 | -------------------------------------------------------------------------------- /exercises/11-your-chatbot.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Now it's your turn!\n", 8 | "Build your own chatbot similar to [10-chatbot-with-memory.ipynb](10-chatbot-with-memory.ipynb) with what you have learned! Combine what you have learned in [08-orchestration-service.ipynb](08-orchestration-service.ipynb) and/or [09-orchestration-service-grounding.ipynb](09-orchestration-service-grounding.ipynb) and build a chatbot that can answer SAP specific questions or filter out personal data or hate speech." 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "metadata": {}, 15 | "outputs": [], 16 | "source": [ 17 | "import init_env\n", 18 | "import variables\n", 19 | "\n", 20 | "init_env.set_environment_variables()" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": null, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "# TODO Your code can start here!" 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "metadata": {}, 35 | "source": [ 36 | "Well done! 
You have completed the CodeJam! If you still have time, check out the optional [AI Agents exercise](12-ai-agents.ipynb)!" 37 | ] 38 | } 39 | ], 40 | "metadata": { 41 | "language_info": { 42 | "name": "python" 43 | } 44 | }, 45 | "nbformat": 4, 46 | "nbformat_minor": 2 47 | } 48 | -------------------------------------------------------------------------------- /exercises/12-ai-agents.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# AI Agents with Generative AI Hub\n", 8 | "\n", 9 | "Everyone is talking about AI Agents and how they are going to make LLMs more powerful. AI Agents are essentially systems that have a set of tools at their disposal, with an LLM acting as the agent's brain that decides which tool to use when. Some systems are more agentic than others, depending on the amount of autonomy and decision-making power they possess.\n", 10 | "\n", 11 | "If you want to know more about AI Agents and how to build one from scratch, check out this [Devtoberfest session](https://community.sap.com/t5/devtoberfest/getting-started-with-agents-using-sap-generative-ai-hub/ev-p/13865119).\n", 12 | "\n", 13 | "👉 Before you start, you will need to add the constant `LLM_DEPLOYMENT_ID` to [variables.py](variables.py). You can use the `gpt-4o-mini` model again that we deployed earlier. You can find the deployment ID in SAP AI Launchpad."
14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import init_env\n", 23 | "import variables\n", 24 | "\n", 25 | "init_env.set_environment_variables()" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "### Import the packages you want to use" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "metadata": {}, 39 | "outputs": [], 40 | "source": [ 41 | "from langchain_community.tools import WikipediaQueryRun\n", 42 | "from langchain_community.utilities import WikipediaAPIWrapper\n", 43 | "\n", 44 | "from langchain import hub\n", 45 | "from langchain.agents import AgentExecutor, create_react_agent\n", 46 | "\n", 47 | "from gen_ai_hub.proxy.langchain.openai import ChatOpenAI\n", 48 | "from gen_ai_hub.proxy.langchain.openai import OpenAIEmbeddings\n" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "metadata": {}, 54 | "source": [ 55 | "We will give the agent different tools to use. The first tool will be [access to Wikipedia](https://python.langchain.com/docs/integrations/tools/wikipedia/). This way, instead of answering from its training data, the model will check Wikipedia articles first." 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": {}, 62 | "outputs": [], 63 | "source": [ 64 | "wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n", 65 | "\n", 66 | "# Let's check if it works\n", 67 | "print(wikipedia.run(\"Tbilisi\"))\n", 68 | "\n", 69 | "# Assign wikipedia to the set of tools\n", 70 | "tools = [wikipedia]" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": null, 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "# For agents, the initial prompt including all the instructions is very important. 
\n", 80 | "# LangChain already has a good set of prompts that we can reuse here.\n", 81 | "prompt = hub.pull(\"hwchase17/react\")\n", 82 | "print(prompt.template)" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "import importlib\n", 92 | "variables = importlib.reload(variables)\n", 93 | "\n", 94 | "# Create a chat model instance which will act as the brain of the agent\n", 95 | "llm = ChatOpenAI(deployment_id=variables.LLM_DEPLOYMENT_ID)\n", 96 | "\n", 97 | "# Create an agent with the llm, tools and prompt\n", 98 | "agent = create_react_agent(llm, tools, prompt)\n", 99 | "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n", 100 | "\n", 101 | "# Let's ask try the agent and ask about the city\n", 102 | "agent_executor.invoke({\"input\": \"Tell me about Tbilisi!\"})" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": {}, 108 | "source": [ 109 | "## Give the agent access to our HANA vector store\n", 110 | "Now the agent can use Wikipedia but wouldn't it be nice if the agent could pick between resources and decide when to use what?" 
111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [ 119 | "# LangChain & HANA Vector Engine - legacy integration: https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.hanavector.HanaDB.html\n", 120 | "#from langchain_community.vectorstores.hanavector import HanaDB\n", 121 | "\n", 122 | "# SAP HANA integration with LangChain: https://pypi.org/project/langchain-hana \n", 123 | "from langchain_hana import HanaDB\n", 124 | "from langchain.tools.retriever import create_retriever_tool\n", 125 | "\n", 126 | "# connect to HANA instance\n", 127 | "connection = init_env.connect_to_hana_db()\n", 128 | "connection.isconnected()\n", 129 | "\n", 130 | "# Reference which embedding model, DB connection and table to use\n", 131 | "embeddings = OpenAIEmbeddings(deployment_id=variables.EMBEDDING_DEPLOYMENT_ID)\n", 132 | "db = HanaDB(\n", 133 | "    embedding=embeddings, connection=connection, table_name=variables.EMBEDDING_TABLE\n", 134 | ")\n", 135 | "\n", 136 | "# Create a retriever instance of the vector store\n", 137 | "retriever = db.as_retriever(search_kwargs={\"k\": 2})\n", 138 | "\n", 139 | "# We need to add a description to the retriever so that the LLM knows when to use this tool.\n", 140 | "retriever_tool = create_retriever_tool(\n", 141 | "    retriever,\n", 142 | "    \"SAP orchestration service docu\",\n", 143 | "    \"Search for information about SAP's orchestration service. For all inquiries specifically related to the Generative AI Hub and SAP's orchestration service, this tool must be used without exception!\",\n", 144 | ")" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "## Ask the Agent\n", 152 | "\n", 153 | "You can ask the agent anything in general and it will try to find an entry from Wikipedia, unless you ask about SAP's orchestration service. 
Then it will retrieve the information from the SAP HANA vector store, which holds the documentation, instead of from Wikipedia." 154 | ] 155 | }, 156 | { 157 | "cell_type": "code", 158 | "execution_count": null, 159 | "metadata": {}, 160 | "outputs": [], 161 | "source": [ 162 | "# Now let's add the retriever as a tool\n", 163 | "tools = [wikipedia, retriever_tool]\n", 164 | "\n", 165 | "# And create the agent again with the two tools, wikipedia and the HANA vector store (retriever)\n", 166 | "agent = create_react_agent(llm, tools, prompt)\n", 167 | "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n", 168 | "\n", 169 | "# First ask: What is Data Masking?\n", 170 | "# Then ask: What is Data Masking in SAP's Generative AI Hub?\n", 171 | "# Pay attention to the response! Do you see what is happening now?\n", 172 | "agent_executor.invoke({\"input\": \"What is data masking?\"})" 173 | ] 174 | } 175 | ], 176 | "metadata": { 177 | "kernelspec": { 178 | "display_name": ".venv", 179 | "language": "python", 180 | "name": "python3" 181 | }, 182 | "language_info": { 183 | "codemirror_mode": { 184 | "name": "ipython", 185 | "version": 3 186 | }, 187 | "file_extension": ".py", 188 | "mimetype": "text/x-python", 189 | "name": "python", 190 | "nbconvert_exporter": "python", 191 | "pygments_lexer": "ipython3", 192 | "version": "3.9.13" 193 | } 194 | }, 195 | "nbformat": 4, 196 | "nbformat_minor": 2 197 | } 198 | -------------------------------------------------------------------------------- /exercises/documents/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/.DS_Store -------------------------------------------------------------------------------- /exercises/documents/DocumentGrounding_GenerativeAIHubSDKv4.10.2.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/DocumentGrounding_GenerativeAIHubSDKv4.10.2.pdf -------------------------------------------------------------------------------- /exercises/documents/GenerativeAIhubSDK_GenerativeAIHubSDKv4.10.2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/GenerativeAIhubSDK_GenerativeAIHubSDKv4.10.2.pdf -------------------------------------------------------------------------------- /exercises/documents/Orchestration_Service_Generative_AI_hub_SDK.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/Orchestration_Service_Generative_AI_hub_SDK.pdf -------------------------------------------------------------------------------- /exercises/documents/SAP_note_models.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/SAP_note_models.pdf -------------------------------------------------------------------------------- /exercises/documents/ai-foundation-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/ai-foundation-architecture.png -------------------------------------------------------------------------------- /exercises/documents/bananabread.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/documents/bananabread.png -------------------------------------------------------------------------------- /exercises/images/AIL_survey.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/AIL_survey.png -------------------------------------------------------------------------------- /exercises/images/BTP_cockpit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/BTP_cockpit.png -------------------------------------------------------------------------------- /exercises/images/BTP_cockpit_BAS.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/BTP_cockpit_BAS.png -------------------------------------------------------------------------------- /exercises/images/Screenshot 2025-01-29 at 16.18.20.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/Screenshot 2025-01-29 at 16.18.20.png -------------------------------------------------------------------------------- /exercises/images/Screenshot 2025-01-29 at 16.18.56.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/Screenshot 2025-01-29 at 16.18.56.png 
-------------------------------------------------------------------------------- /exercises/images/Screenshot 2025-01-29 at 16.19.21.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/Screenshot 2025-01-29 at 16.19.21.png -------------------------------------------------------------------------------- /exercises/images/Screenshot 2025-01-29 at 16.19.51.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/Screenshot 2025-01-29 at 16.19.51.png -------------------------------------------------------------------------------- /exercises/images/Screenshot 2025-01-29 at 16.20.48.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/Screenshot 2025-01-29 at 16.20.48.png -------------------------------------------------------------------------------- /exercises/images/auth_default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/auth_default.png -------------------------------------------------------------------------------- /exercises/images/bas.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/bas.png -------------------------------------------------------------------------------- /exercises/images/bas_exercises.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/bas_exercises.png -------------------------------------------------------------------------------- /exercises/images/btp-subaccount.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/btp-subaccount.png -------------------------------------------------------------------------------- /exercises/images/btp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/btp.png -------------------------------------------------------------------------------- /exercises/images/chat.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/chat.png -------------------------------------------------------------------------------- /exercises/images/chat1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/chat1.png -------------------------------------------------------------------------------- /exercises/images/chat2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/chat2.png -------------------------------------------------------------------------------- /exercises/images/chat3.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/chat3.png -------------------------------------------------------------------------------- /exercises/images/chat_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/chat_2.png -------------------------------------------------------------------------------- /exercises/images/clone_git.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/clone_git.png -------------------------------------------------------------------------------- /exercises/images/clone_git_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/clone_git_2.png -------------------------------------------------------------------------------- /exercises/images/configuration_o.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configuration_o.png -------------------------------------------------------------------------------- /exercises/images/configuration_o2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configuration_o2.png -------------------------------------------------------------------------------- 
/exercises/images/configurations.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configurations.png -------------------------------------------------------------------------------- /exercises/images/configurations_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configurations_2.png -------------------------------------------------------------------------------- /exercises/images/configurations_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configurations_3.png -------------------------------------------------------------------------------- /exercises/images/configurations_4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configurations_4.png -------------------------------------------------------------------------------- /exercises/images/configurations_5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/configurations_5.png -------------------------------------------------------------------------------- /exercises/images/create_dev_space.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/create_dev_space.png -------------------------------------------------------------------------------- /exercises/images/deployment_o.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployment_o.png -------------------------------------------------------------------------------- /exercises/images/deployment_o2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployment_o2.png -------------------------------------------------------------------------------- /exercises/images/deployment_o3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployment_o3.png -------------------------------------------------------------------------------- /exercises/images/deployment_orchestration_service.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployment_orchestration_service.png -------------------------------------------------------------------------------- /exercises/images/deployments.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployments.png -------------------------------------------------------------------------------- 
/exercises/images/deployments_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployments_2.png -------------------------------------------------------------------------------- /exercises/images/deployments_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployments_3.png -------------------------------------------------------------------------------- /exercises/images/deployments_4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployments_4.png -------------------------------------------------------------------------------- /exercises/images/deployments_5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/deployments_5.png -------------------------------------------------------------------------------- /exercises/images/dev_running.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/dev_running.png -------------------------------------------------------------------------------- /exercises/images/dev_starting.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/dev_starting.png 
-------------------------------------------------------------------------------- /exercises/images/enable_orchestration.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/enable_orchestration.png -------------------------------------------------------------------------------- /exercises/images/extensions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/extensions.png -------------------------------------------------------------------------------- /exercises/images/model-library-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/model-library-2.png -------------------------------------------------------------------------------- /exercises/images/model-library-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/model-library-3.png -------------------------------------------------------------------------------- /exercises/images/model-library.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/model-library.png -------------------------------------------------------------------------------- /exercises/images/open_workspace.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/open_workspace.png -------------------------------------------------------------------------------- /exercises/images/prompt_editor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/prompt_editor.png -------------------------------------------------------------------------------- /exercises/images/resource_group.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/resource_group.png -------------------------------------------------------------------------------- /exercises/images/resource_group_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/resource_group_2.png -------------------------------------------------------------------------------- /exercises/images/scenarios.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/scenarios.png -------------------------------------------------------------------------------- /exercises/images/scenarios_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/scenarios_2.png -------------------------------------------------------------------------------- /exercises/images/scenarios_3.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/scenarios_3.png -------------------------------------------------------------------------------- /exercises/images/select_embedding_table.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/select_embedding_table.png -------------------------------------------------------------------------------- /exercises/images/select_kernel.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/select_kernel.png -------------------------------------------------------------------------------- /exercises/images/service-binding.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/service-binding.png -------------------------------------------------------------------------------- /exercises/images/service-key.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/service-key.png -------------------------------------------------------------------------------- /exercises/images/service_binding.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/service_binding.png 
-------------------------------------------------------------------------------- /exercises/images/service_key.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/service_key.png -------------------------------------------------------------------------------- /exercises/images/setup0010.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/setup0010.png -------------------------------------------------------------------------------- /exercises/images/setup0030.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/setup0030.png -------------------------------------------------------------------------------- /exercises/images/setup0040.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/setup0040.png -------------------------------------------------------------------------------- /exercises/images/setup0042.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/setup0042.png -------------------------------------------------------------------------------- /exercises/images/setup0060.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/setup0060.png -------------------------------------------------------------------------------- /exercises/images/start_terminal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/start_terminal.png -------------------------------------------------------------------------------- /exercises/images/venv.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/venv.png -------------------------------------------------------------------------------- /exercises/images/workspace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SAP-samples/generative-ai-codejam/0fa9d3aa2593efa7470dde9ce93e0850acc72a91/exercises/images/workspace.png -------------------------------------------------------------------------------- /exercises/init_env.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import configparser 4 | 5 | from hdbcli import dbapi 6 | 7 | import variables 8 | 9 | ROOT_PATH_DIR = os.path.dirname(os.getcwd()) 10 | AICORE_CONFIG_FILENAME = '.aicore-config.json' 11 | USER_CONFIG_FILENAME = '.user.ini' 12 | 13 | def set_environment_variables() -> None: 14 | with open(os.path.join(ROOT_PATH_DIR, AICORE_CONFIG_FILENAME), 'r') as config_file: 15 | config_data = json.load(config_file) 16 | 17 | os.environ["AICORE_AUTH_URL"]=config_data["url"]+"/oauth/token" 18 | os.environ["AICORE_CLIENT_ID"]=config_data["clientid"] 19 | os.environ["AICORE_CLIENT_SECRET"]=config_data["clientsecret"] 20 | 
os.environ["AICORE_BASE_URL"]=config_data["serviceurls"]["AI_API_URL"] 21 | 22 | os.environ["AICORE_RESOURCE_GROUP"]=variables.RESOURCE_GROUP 23 | 24 | def connect_to_hana_db() -> dbapi.Connection: 25 | config = configparser.ConfigParser() 26 | config.read(os.path.join(ROOT_PATH_DIR, USER_CONFIG_FILENAME)) 27 | return dbapi.connect( 28 | address=config.get('hana', 'url'), 29 | port=config.getint('hana', 'port'), 30 | user=config.get('hana', 'user'), 31 | password=config.get('hana', 'passwd'), 32 | autocommit=True, 33 | sslValidateCertificate=False 34 | ) -------------------------------------------------------------------------------- /exercises/variables.py: -------------------------------------------------------------------------------- 1 | # TODO Change the resource group, embedding deployment ID and LLM/chat deployment ID to your own 2 | RESOURCE_GROUP = "" # your resource group, e.g. team-01 3 | EMBEDDING_DEPLOYMENT_ID = "" # e.g. d6b74feab22bc49a 4 | LLM_DEPLOYMENT_ID = "" 5 | 6 | # TODO Replace the deployment ID at the end of the URL with your own orchestration deployment ID 7 | AICORE_ORCHESTRATION_DEPLOYMENT_URL = "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/xxx" 8 | 9 | # HANA table name to store your embeddings 10 | # TODO Change the name to your team name 11 | EMBEDDING_TABLE = "EMBEDDINGS_CODEJAM_"+"->ADD_YOUR_NAME_HERE<-" 12 | -------------------------------------------------------------------------------- /prerequisites.md: -------------------------------------------------------------------------------- 1 | # Prerequisites 2 | 3 | There are hardware, software, and service prerequisites for participating in this SAP CodeJam. 4 | 5 | It is assumed that participants will execute the exercises using [SAP Business Application Studio](https://help.sap.com/docs/bas/sap-business-application-studio/what-is-sap-business-application-studio) as the development tool. 
6 | 7 | If you are experienced with Jupyter or Microsoft VS Code, you can run the exercises locally on your own computer, but that setup is outside the scope of this material. 8 | 9 | # SAP Business Application Studio 10 | 11 | ## Hardware 12 | 13 | * If attending an in-person SAP CodeJam, bring your own laptop. 14 | 15 | ## Software 16 | 17 | * A Chromium-based web browser, such as Google Chrome, Microsoft Edge, Vivaldi, or Brave. 18 | * Unfortunately, there are known issues when using the Firefox web browser. 19 | 20 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | generative-ai-hub-sdk==4.1.1 2 | hdbcli==2.23.26 3 | pypdf==5.2.0 4 | scipy==1.15.1 --------------------------------------------------------------------------------
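The `.template_user.ini` file and `init_env.py` work together: `connect_to_hana_db()` parses the `[hana]` section of the user's `.user.ini` (copied from the template) and hands the values to `hdbcli`'s `dbapi.connect`. Below is a minimal, self-contained sketch of just that parsing step — the template values are inlined via `read_string` so it runs without the repo's config files, and the actual database connection call is omitted:

```python
import configparser

# Inlined copy of .template_user.ini (same section and keys that
# init_env.connect_to_hana_db() expects to find in .user.ini).
TEMPLATE = """\
[hana]
url=b7c3ff95-9e0c-480d-9022-bfaaa268f780.hna0.prod-us10.hanacloud.ondemand.com
user=GENAI_CODEJAM_01
passwd=
port=443
"""

config = configparser.ConfigParser()
config.read_string(TEMPLATE)

# The keyword arguments connect_to_hana_db() passes on to dbapi.connect.
# getint() is used for the port so it arrives as an integer rather than
# the raw string that config.get() would return.
connect_kwargs = {
    "address": config.get("hana", "url"),
    "port": config.getint("hana", "port"),
    "user": config.get("hana", "user"),
    "password": config.get("hana", "passwd"),
}

print(connect_kwargs["port"])  # 443
print(connect_kwargs["user"])  # GENAI_CODEJAM_01
```

In the real exercises, `connect_kwargs` would be passed to `dbapi.connect(**connect_kwargs, autocommit=True, sslValidateCertificate=False)` exactly as `init_env.py` does.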