├── .github
│   ├── CONTRIBUTING.md
│   ├── ISSUE_TEMPLATE.md
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── Instructions
│   ├── Exercises
│   │   ├── 01-app-develop.md
│   │   ├── 01-get-started-azure-openai.md
│   │   ├── 02-natural-language-azure-openai.md
│   │   ├── 02-use-own-data.md
│   │   ├── 03-generate-images.md
│   │   ├── 03-prompt-engineering.md
│   │   ├── 04-code-generation.md
│   │   ├── 05-generate-images.md
│   │   └── 06-use-own-data.md
│   ├── Labs
│   │   ├── 01-app-develop.md
│   │   └── 02-use-own-data.md
│   └── media
│       ├── ai-foundry-home.png
│       ├── ai-foundry-project.png
│       ├── cloudshell-launch-portal.png
│       ├── images-playground-new-style.png
│       └── images-playground.png
├── LICENSE
├── Labfiles
│   ├── 01-app-develop
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── Program.cs
│   │   │   ├── appsettings.json
│   │   │   ├── grounding.txt
│   │   │   └── system.txt
│   │   └── Python
│   │       ├── .env
│   │       ├── application.py
│   │       ├── grounding.txt
│   │       └── system.txt
│   ├── 02-azure-openai-api
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── Program.cs
│   │   │   └── appsettings.json
│   │   ├── Python
│   │   │   ├── .env
│   │   │   └── test-openai-model.py
│   │   └── text-files
│   │       └── sample-text.txt
│   ├── 02-use-own-data
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── OwnData.cs
│   │   │   └── appsettings.json
│   │   ├── Python
│   │   │   ├── .env
│   │   │   └── ownData.py
│   │   └── data
│   │       ├── Dubai Brochure.pdf
│   │       ├── Las Vegas Brochure.pdf
│   │       ├── London Brochure.pdf
│   │       ├── Margies Travel Company Info.pdf
│   │       ├── New York Brochure.pdf
│   │       ├── San Francisco Brochure.pdf
│   │       └── brochures.zip
│   ├── 03-image-generation
│   │   ├── CSharp
│   │   │   ├── Program.cs
│   │   │   ├── appsettings.json
│   │   │   └── dalle-client.csproj
│   │   └── Python
│   │       ├── .env
│   │       └── dalle-client.py
│   ├── 03-prompt-engineering
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── Program.cs
│   │   │   ├── appsettings.json
│   │   │   ├── grounding.txt
│   │   │   └── system.txt
│   │   └── Python
│   │       ├── .env
│   │       ├── grounding.txt
│   │       ├── prompt-engineering.py
│   │       └── system.txt
│   ├── 04-code-generation
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── Program.cs
│   │   │   ├── appsettings.json
│   │   │   └── result
│   │   │       └── app.txt
│   │   ├── Python
│   │   │   ├── .env
│   │   │   ├── code-generation.py
│   │   │   └── result
│   │   │       └── app.txt
│   │   └── sample-code
│   │       ├── function
│   │       │   ├── function.cs
│   │       │   └── function.py
│   │       └── go-fish
│   │           ├── CSharp.csproj
│   │           ├── go-fish.cs
│   │           └── go-fish.py
│   ├── 05-image-generation
│   │   ├── CSharp
│   │   │   ├── Program.cs
│   │   │   ├── appsettings.json
│   │   │   └── generate_image.csproj
│   │   └── Python
│   │       ├── .env
│   │       └── generate-image.py
│   ├── 06-use-own-data
│   │   ├── CSharp
│   │   │   ├── CSharp.csproj
│   │   │   ├── OwnData.cs
│   │   │   └── appsettings.json
│   │   ├── Python
│   │   │   ├── .env
│   │   │   └── ownData.py
│   │   └── data
│   │       ├── Dubai Brochure.pdf
│   │       ├── Las Vegas Brochure.pdf
│   │       ├── London Brochure.pdf
│   │       ├── Margies Travel Company Info.pdf
│   │       ├── New York Brochure.pdf
│   │       ├── San Francisco Brochure.pdf
│   │       └── brochures.zip
│   └── readme.txt
├── _build.yml
├── _config.yml
├── index.md
└── readme.md
/.github/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to Microsoft Learning Repositories
2 |
3 | MCT contributions are a key part of keeping the lab and demo content current as the Azure platform changes. We want to make it as easy as possible for you to contribute changes to the lab files. Here are a few guidelines to keep in mind as you contribute changes.
4 |
5 | ## GitHub Use & Purpose
6 |
7 | Microsoft Learning uses GitHub to publish the lab steps and lab scripts for courses that cover cloud services such as Azure. GitHub allows the course’s authors and MCTs to keep the lab content current with Azure platform changes, and it lets MCTs provide feedback and suggestions for lab changes that the course authors can then incorporate quickly and relatively easily.
8 |
9 | > When you prepare to teach these courses, you should ensure that you are using the latest lab steps and scripts by downloading the appropriate files from GitHub. GitHub should not be used to discuss technical content in the course, or how to prep. It should only be used to address changes in the labs.
10 |
11 | It is strongly recommended that MCTs and Partners access these materials and in turn, provide them separately to students. Pointing students directly to GitHub to access Lab steps as part of an ongoing class will require them to access yet another UI as part of the course, contributing to a confusing experience for the student. An explanation to the student regarding why they are receiving separate Lab instructions can highlight the nature of an always-changing cloud-based interface and platform. Microsoft Learning support for accessing files on GitHub and support for navigation of the GitHub site is limited to MCTs teaching this course only.
12 |
13 | > As an alternative to pointing students directly to the GitHub repository, you can point students to the GitHub Pages website to view the lab instructions. The URL for the GitHub Pages website can be found at the top of the repository.
14 |
15 | To address general comments about the course and demos, or how to prepare for a course delivery, please use the existing MCT forums.
16 |
17 | ## Additional Resources
18 |
19 | A user guide has been provided for MCTs who are new to GitHub. It provides steps for connecting to GitHub, downloading and printing course materials, and updating the scripts that students use in labs, and it explains how you can help ensure that this course’s content remains current.
20 |
21 |
22 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | # Module: 00
2 | ## Lab/Demo: 00
3 | ### Task: 00
4 | #### Step: 00
5 |
6 | Description of issue
7 |
8 | Repro steps:
9 |
10 | 1.
11 | 1.
12 | 1.
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | # Module: 00
2 | ## Lab/Demo: 00
3 |
4 | Fixes # .
5 |
6 | Changes proposed in this pull request:
7 |
8 | -
9 | -
10 | -
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | bin
2 | obj
3 | *.sln
4 |
--------------------------------------------------------------------------------
/Instructions/Exercises/01-app-develop.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Application development with Azure OpenAI Service'
4 | status: new
5 | ---
6 |
7 | # Application development with Azure OpenAI Service
8 |
9 | With the Azure OpenAI Service, developers can create chatbots and other applications that excel at understanding natural human language through the use of REST APIs or language-specific SDKs. When working with these language models, how developers shape their prompt greatly impacts how the generative AI model will respond. Azure OpenAI models can tailor and format content when asked to do so in a clear and concise way. In this exercise, you'll learn how to connect your application to Azure OpenAI and see how different prompts for similar content help shape the AI model's response to better satisfy your requirements.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer working on a wildlife marketing campaign. You are exploring how to use generative AI to improve advertising emails and categorize articles that might apply to your team. The prompt engineering techniques used in the exercise can be applied similarly for a variety of use cases.
12 |
13 | This exercise will take approximately **30** minutes.
14 |
15 | ## Clone the repository for this course
16 |
17 | If you have not already done so, you must clone the code repository for this course:
18 |
19 | 1. Start Visual Studio Code.
20 | 2. Open the command palette (SHIFT+CTRL+P or **View** > **Command Palette...**) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
21 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
22 | 4. Wait while additional files are installed to support the C# code projects in the repo.
23 |
24 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
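If you prefer the command line, you can instead clone the repo and open it from a terminal (this assumes `git` is installed and that the Visual Studio Code `code` command is on your PATH):

```powershell
git clone https://github.com/MicrosoftLearning/mslearn-openai
code mslearn-openai
```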
25 |
26 | ## Provision an Azure OpenAI resource
27 |
28 | If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.
29 |
30 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
31 |
32 | 1. Create an **Azure OpenAI** resource with the following settings:
33 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
34 | - **Resource group**: *Choose or create a resource group*
35 | - **Region**: *Make a **random** choice from any of the following regions*\*
36 | - East US
37 | - East US 2
38 | - North Central US
39 | - South Central US
40 | - Sweden Central
41 | - West US
42 | - West US 3
43 | - **Name**: *A unique name of your choice*
44 | - **Pricing tier**: Standard S0
45 |
46 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
47 |
48 | 1. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.
49 |
50 | ## Deploy a model
51 |
52 | Next, you will deploy an Azure OpenAI model from the Azure Cloud Shell.
53 |
54 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***Bash*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.
55 |
56 | > **Note**: If you have previously created a cloud shell that uses a *PowerShell* environment, switch it to ***Bash***.
57 |
58 | 1. In the Cloud Shell, run the following command to create a model deployment, supplying your resource group name after `-g` and your Azure OpenAI resource name after `-n`:
59 |
60 | ```dotnetcli
61 | az cognitiveservices account deployment create \
62 | -g \
63 | -n \
64 | --deployment-name gpt-4o \
65 | --model-name gpt-4o \
66 | --model-version 2024-05-13 \
67 | --model-format OpenAI \
68 | --sku-name "Standard" \
69 | --sku-capacity 5
70 | ```
71 |
72 | > **Note**: The `--sku-capacity` value is measured in thousands of tokens per minute. A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
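Optionally, you can confirm that the deployment was created by listing the deployments for your resource (a quick check using the same resource group and resource name as above):

```dotnetcli
az cognitiveservices account deployment list -g <your-resource-group> -n <your-openai-resource> -o table
```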
73 |
74 | ## Configure your application
75 |
76 | Applications for both C# and Python have been provided, and both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource with asynchronous API calls.
77 |
78 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/01-app-develop** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
79 | 2. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then install the Azure OpenAI SDK package by running the appropriate command for your language preference:
80 |
81 | **C#**:
82 |
83 | ```powershell
84 | dotnet add package Azure.AI.OpenAI --version 2.1.0
85 | ```
86 |
87 | **Python**:
88 |
89 | ```powershell
90 | pip install openai==1.65.2
91 | ```
92 |
93 | 3. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language:
94 |
95 | - **C#**: appsettings.json
96 | - **Python**: .env
97 |
98 | 4. Update the configuration values to include:
99 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
100 | - The **deployment name** you specified for your model deployment.
101 | 5. Save the configuration file.
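For reference, the values in the Python `.env` file generally take the following shape (the variable names below are illustrative — keep the names already present in the provided file and only replace the values; the C# `appsettings.json` file follows the same pattern in JSON form):

```
# Placeholder values only - paste in your own endpoint, key, and deployment name
AZURE_OAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/
AZURE_OAI_KEY=<your-key>
AZURE_OAI_DEPLOYMENT=<your-deployment-name>
```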
102 |
103 | ## Add code to use the Azure OpenAI service
104 |
105 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
106 |
107 | 1. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the code file for your preferred language, and replace the comment ***Add Azure OpenAI package*** with code to add the Azure OpenAI SDK library:
108 |
109 | **C#**: Program.cs
110 |
111 | ```csharp
112 | // Add Azure OpenAI packages
113 | using Azure.AI.OpenAI;
114 | using OpenAI.Chat;
115 | ```
116 |
117 | **Python**: application.py
118 |
119 | ```python
120 | # Add Azure OpenAI package
121 | from openai import AsyncAzureOpenAI
122 | ```
123 |
124 | 2. In the code file, find the comment ***Configure the Azure OpenAI client***, and add code to configure the Azure OpenAI client:
125 |
126 | **C#**: Program.cs
127 |
128 | ```csharp
129 | // Configure the Azure OpenAI client
130 | AzureOpenAIClient azureClient = new (new Uri(oaiEndpoint), new ApiKeyCredential(oaiKey));
131 | ChatClient chatClient = azureClient.GetChatClient(oaiDeploymentName);
132 | ```
133 |
134 | **Python**: application.py
135 |
136 | ```python
137 | # Configure the Azure OpenAI client
138 | client = AsyncAzureOpenAI(
139 | azure_endpoint = azure_oai_endpoint,
140 | api_key=azure_oai_key,
141 | api_version="2024-02-15-preview"
142 | )
143 | ```
144 |
145 | 3. In the function that calls the Azure OpenAI model, under the comment ***Get response from Azure OpenAI***, add the code to format and send the request to the model.
146 |
147 | **C#**: Program.cs
148 |
149 | ```csharp
150 | // Get response from Azure OpenAI
151 | ChatCompletionOptions chatCompletionOptions = new ChatCompletionOptions()
152 | {
153 | Temperature = 0.7f,
154 | MaxOutputTokenCount = 800
155 | };
156 |
157 | ChatCompletion completion = chatClient.CompleteChat(
158 | [
159 | new SystemChatMessage(systemMessage),
160 | new UserChatMessage(userMessage)
161 | ],
162 | chatCompletionOptions
163 | );
164 |
165 | Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
166 | ```
167 |
168 | **Python**: application.py
169 |
170 | ```python
171 | # Get response from Azure OpenAI
172 | messages =[
173 | {"role": "system", "content": system_message},
174 | {"role": "user", "content": user_message},
175 | ]
176 |
177 | print("\nSending request to Azure OpenAI model...\n")
178 |
179 | # Call the Azure OpenAI model
180 | response = await client.chat.completions.create(
181 | model=model,
182 | messages=messages,
183 | temperature=0.7,
184 | max_tokens=800
185 | )
186 | ```
187 |
188 | 4. Save the changes to the code file.
189 |
190 | ## Run your application
191 |
192 | Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice the only difference between the different options is the content of the prompt, all other parameters (such as token count and temperature) remain the same for each request.
193 |
194 | 1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interactions, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message.
195 | 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
196 |
197 | - **C#**: `dotnet run`
198 | - **Python**: `python application.py`
199 |
200 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
201 |
202 | 1. For the first iteration, enter the following prompts:
203 |
204 | **System message**
205 |
206 | ```prompt
207 | You are an AI assistant
208 | ```
209 |
210 | **User message:**
211 |
212 | ```prompt
213 | Write an intro for a new wildlife Rescue
214 | ```
215 |
216 | 1. Observe the output. The AI model will likely produce a good generic introduction to a wildlife rescue.
217 | 1. Next, enter the following prompts which specify a format for the response:
218 |
219 | **System message**
220 |
221 | ```prompt
222 | You are an AI assistant helping to write emails
223 | ```
224 |
225 | **User message:**
226 |
227 | ```prompt
228 | Write a promotional email for a new wildlife rescue, including the following:
229 | - Rescue name is Contoso
230 | - It specializes in elephants
231 | - Call for donations to be given at our website
232 | ```
233 |
234 | > **Tip**: You may find the automatic typing in the VM doesn't work well with multiline prompts. If that is your issue, copy the entire prompt then paste it into Visual Studio Code.
235 |
236 | 1. Observe the output. This time, you'll likely see the format of an email with the specific animals included, as well as the call for donations.
237 | 1. Next, enter the following prompts that additionally specify the content:
238 |
239 | **System message**
240 |
241 | ```prompt
242 | You are an AI assistant helping to write emails
243 | ```
244 |
245 | **User message:**
246 |
247 | ```prompt
248 | Write a promotional email for a new wildlife rescue, including the following:
249 | - Rescue name is Contoso
250 | - It specializes in elephants, as well as zebras and giraffes
251 | - Call for donations to be given at our website
252 | Include a list of the current animals we have at our rescue after the signature, in the form of a table. These animals include elephants, zebras, gorillas, lizards, and jackrabbits.
253 | ```
254 |
255 | 1. Observe the output, and see how the email has changed based on your clear instructions.
256 | 1. Next, enter the following prompts where we add details about tone to the system message:
257 |
258 | **System message**
259 |
260 | ```prompt
261 | You are an AI assistant that helps write promotional emails to generate interest in a new business. Your tone is light, chit-chat oriented and you always include at least two jokes.
262 | ```
263 |
264 | **User message:**
265 |
266 | ```prompt
267 | Write a promotional email for a new wildlife rescue, including the following:
268 | - Rescue name is Contoso
269 | - It specializes in elephants, as well as zebras and giraffes
270 | - Call for donations to be given at our website
271 | Include a list of the current animals we have at our rescue after the signature, in the form of a table. These animals include elephants, zebras, gorillas, lizards, and jackrabbits.
272 | ```
273 |
274 | 1. Observe the output. This time you'll likely see the email in a similar format, but with a much more informal tone. You'll likely even see jokes included!
275 |
276 | ## Use grounding context and maintain chat history
277 |
278 | 1. For the final iteration, we're deviating from email generation and exploring *grounding context* and maintaining chat history. Here you provide a simple system message, and change the app to provide the grounding context as the beginning of the chat history. The app will then append the user input, and extract information from the grounding context to answer our user prompt.
279 | 1. Open the file `grounding.txt` and briefly read the grounding context you'll be inserting.
280 | 1. In your app immediately after the comment ***Initialize messages list*** and before any existing code, add the following code snippet to read text in from `grounding.txt` and to initialize the chat history with the grounding context.
281 |
282 | **C#**: Program.cs
283 |
284 | ```csharp
285 | // Initialize messages list
286 | Console.WriteLine("\nAdding grounding context from grounding.txt");
287 | string groundingText = System.IO.File.ReadAllText("grounding.txt");
288 | var messagesList = new List<ChatMessage>()
289 | {
290 | new UserChatMessage(groundingText),
291 | };
292 | ```
293 |
294 | **Python**: application.py
295 |
296 | ```python
297 | # Initialize messages array
298 | print("\nAdding grounding context from grounding.txt")
299 | grounding_text = open(file="grounding.txt", encoding="utf8").read().strip()
300 | messages_array = [{"role": "user", "content": grounding_text}]
301 | ```
302 |
303 | 1. Under the comment ***Format and send the request to the model***, replace the code from the comment to the end of the **while** loop with the following code. The code is mostly the same, but now using the messages array to send the request to the model.
304 |
305 | **C#**: Program.cs
306 |
307 | ```csharp
308 | // Format and send the request to the model
309 | messagesList.Add(new SystemChatMessage(systemMessage));
310 | messagesList.Add(new UserChatMessage(userMessage));
311 | GetResponseFromOpenAI(messagesList);
312 | ```
313 |
314 | **Python**: application.py
315 |
316 | ```python
317 | # Format and send the request to the model
318 | messages_array.append({"role": "system", "content": system_text})
319 | messages_array.append({"role": "user", "content": user_text})
320 | await call_openai_model(messages=messages_array,
321 | model=azure_oai_deployment,
322 | client=client
323 | )
324 | ```
325 |
326 | 1. Under the comment ***Define the function that will get the response from Azure OpenAI endpoint***, replace the function declaration with the following code so that the function (`GetResponseFromOpenAI` for C#, `call_openai_model` for Python) accepts the chat history list as a parameter.
327 |
328 | **C#**: Program.cs
329 |
330 | ```csharp
331 | // Define the function that gets the response from Azure OpenAI endpoint
332 | private static void GetResponseFromOpenAI(List<ChatMessage> messagesList)
333 | ```
334 |
335 | **Python**: application.py
336 |
337 | ```python
338 | # Define the function that will get the response from Azure OpenAI endpoint
339 | async def call_openai_model(messages, model, client):
340 | ```
341 |
342 | 1. Lastly, replace all the code under ***Get response from Azure OpenAI***. The code is mostly the same, but now using the messages array to store the conversation history.
343 |
344 | **C#**: Program.cs
345 |
346 | ```csharp
347 | // Get response from Azure OpenAI
348 | ChatCompletionOptions chatCompletionOptions = new ChatCompletionOptions()
349 | {
350 | Temperature = 0.7f,
351 | MaxOutputTokenCount = 800
352 | };
353 |
354 | ChatCompletion completion = chatClient.CompleteChat(
355 | messagesList,
356 | chatCompletionOptions
357 | );
358 |
359 | Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
360 | messagesList.Add(new AssistantChatMessage(completion.Content[0].Text));
361 | ```
362 |
363 | **Python**: application.py
364 |
365 | ```python
366 | # Get response from Azure OpenAI
367 | print("\nSending request to Azure OpenAI model...\n")
368 |
369 | # Call the Azure OpenAI model
370 | response = await client.chat.completions.create(
371 | model=model,
372 | messages=messages,
373 | temperature=0.7,
374 | max_tokens=800
375 | )
376 |
377 | print("Response:\n" + response.choices[0].message.content + "\n")
378 | messages.append({"role": "assistant", "content": response.choices[0].message.content})
379 | ```
380 |
381 | 1. Save the file and rerun your app.
382 | 1. Enter the following prompts (with the **system message** still being entered and saved in `system.txt`).
383 |
384 | **System message**
385 |
386 | ```prompt
387 | You're an AI assistant who helps people find information. You'll provide answers from the text provided in the prompt, and respond concisely.
388 | ```
389 |
390 | **User message:**
391 |
392 | ```prompt
393 | What animal is the favorite of children at Contoso?
394 | ```
395 |
396 | Notice that the model uses the grounding text information to answer your question.
397 |
398 | 1. Without changing the system message, enter the following prompt for the user message:
399 |
400 | **User message:**
401 |
402 | ```prompt
403 | How can they interact with it at Contoso?
404 | ```
405 |
406 | Notice that the model recognizes "they" as the children and "it" as their favorite animal, since now it has access to your previous question in the chat history.
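To see why this works, it helps to picture the shape of the chat history the app now sends with every request. In the Python version it looks roughly like this (contents abbreviated for illustration):

```python
messages_array = [
    {"role": "user", "content": "<grounding text from grounding.txt>"},
    {"role": "system", "content": "<system message from system.txt>"},
    {"role": "user", "content": "What animal is the favorite of children at Contoso?"},
    {"role": "assistant", "content": "<the model's first answer>"},
    {"role": "system", "content": "<system message from system.txt>"},  # appended again on the next loop iteration
    {"role": "user", "content": "How can they interact with it at Contoso?"},
]
```

Because the grounding text and the earlier question and answer travel with each request, the model can resolve references like "they" and "it".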
407 |
408 | ## Clean up
409 |
410 | When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the **Azure portal** at `https://portal.azure.com`.
411 |
--------------------------------------------------------------------------------
/Instructions/Exercises/01-get-started-azure-openai.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Get started with Azure OpenAI service'
4 | status: stale
5 | ---
6 |
7 | # Get started with Azure OpenAI service
8 |
9 | Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure AI Foundry to deploy and explore generative AI models.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer who has been tasked to implement an AI agent that can use generative AI to help a marketing organization improve its effectiveness at reaching customers and advertising new products. The techniques used in the exercise can be applied to any scenario where an organization wants to use generative AI models to help employees be more effective and productive.
12 |
13 | This exercise takes approximately **30** minutes.
14 |
15 | ## Provision an Azure OpenAI resource
16 |
17 | If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.
18 |
19 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
20 | 2. Create an **Azure OpenAI** resource with the following settings:
21 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
22 | - **Resource group**: *Choose or create a resource group*
23 | - **Region**: *Make a **random** choice from any of the following regions*\*
24 | - East US
25 | - East US 2
26 | - North Central US
27 | - South Central US
28 | - Sweden Central
29 | - West US
30 | - West US 3
31 | - **Name**: *A unique name of your choice*
32 | - **Pricing tier**: Standard S0
33 |
34 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
35 |
36 | 3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.
37 |
38 | ## Deploy a model
39 |
40 | Azure provides a web-based portal named **Azure AI Foundry portal** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Foundry portal to deploy a model.
41 |
42 | > **Note**: As you use Azure AI Foundry portal, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
43 |
44 | 1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Foundry portal** (previously AI Studio).
45 | 1. In Azure AI Foundry portal, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-4o** model with the following settings:
46 | - **Deployment name**: *A unique name of your choice*
47 | - **Model**: gpt-4o
48 | - **Model version**: *Use default version*
49 | - **Deployment type**: Standard
50 | - **Tokens per minute rate limit**: 5K\*
51 | - **Content filter**: Default
52 | - **Enable dynamic quota**: Disabled
53 |
54 | > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
55 |
56 | ## Use the Chat playground
57 |
58 | Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure AI Foundry portal provides a chatbot interface for GPT 4 and higher models.
59 |
60 | > **Note:** The *Chat* playground uses the *ChatCompletions* API rather than the older *Completions* API that is used by the *Completions* playground. The Completions playground is provided for compatibility with older models.
61 |
62 | 1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
63 | - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
64 | - **Chat session** - used to submit chat messages and view responses.
65 | 1. Under **Deployments**, ensure that your gpt-4o model deployment is selected.
66 | 1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses, setting expectations about how an AI agent based on the model should interact with the user.
67 | 1. In the **Chat session** panel, enter the user query `How can I use generative AI to help me market a new product?`
68 |
69 | > **Note**: You may receive a response that the API deployment is not yet ready. If so, wait for a few minutes and try again.
70 |
71 | 1. Review the response, noting that the model has generated a cohesive natural language answer that is relevant to the query with which it was prompted.
72 | 1. Enter the user query `What skills do I need if I want to develop a solution to accomplish this?`.
73 | 1. Review the response, noting that the chat session has retained the conversational context (so "this" is interpreted as a generative AI solution for marketing). This contextualization is achieved by including the recent conversation history in each successive prompt submission, so the prompt sent to the model for the second query included the original query and response as well as the new user input.
74 | 1. In the **Chat session** panel toolbar, select **Clear chat** and confirm that you want to restart the chat session.
75 | 1. Enter the query `Can you help me find resources to learn those skills?` and review the response, which should be a valid natural language answer, but since the previous chat history has been lost, the answer is likely to be about finding generic skilling resources rather than being related to the specific skills needed to build a generative AI marketing solution.
76 |
77 | ## Experiment with system messages, prompts, and few-shot examples
78 |
79 | So far, you've engaged in a chat conversation with your model based on the default system message. You can customize the system setup to have more control over the kinds of responses generated by your model.
80 |
81 | 1. In the main toolbar, select **Prompt samples**, and use the **Marketing Writing Assistant** prompt template.
82 | 1. Review the new system message, which describes how an AI agent should use the model to respond.
83 | 1. In the **Chat session** panel, enter the user query `Create an advertisement for a new scrubbing brush`.
84 | 1. Review the response, which should include advertising copy for a scrubbing brush. The copy may be quite extensive and creative.
85 |
86 | In a real scenario, a marketing professional would likely already know the name of the scrubbing brush product as well as have some ideas about key features that should be highlighted in an advert. To get the most useful results from a generative AI model, users need to design their prompts to include as much pertinent information as possible.
87 |
88 | 1. Enter the prompt `Revise the advertisement for a scrubbing brush named "Scrubadub 2000", which is made of carbon fiber and reduces cleaning times by half compared to ordinary scrubbing brushes`.
89 | 1. Review the response, which should take into account the additional information you provided about the scrubbing brush product.
90 |
91 | The response should now be more useful, but to have even more control over the output from the model, you can provide one or more *few-shot* examples on which responses should be based.
92 |
93 | 1. Under the **System message** text box, expand the dropdown for **Add section** and select **Examples**. Then type the following message and response in the designated boxes:
94 |
95 | **User**:
96 |
97 | ```prompt
98 | Write an advertisement for the lightweight "Ultramop" mop, which uses patented absorbent materials to clean floors.
99 | ```
100 |
101 | **Assistant**:
102 |
103 | ```prompt
104 | Welcome to the future of cleaning!
105 |
106 | The Ultramop makes light work of even the dirtiest of floors. Thanks to its patented absorbent materials, it ensures a brilliant shine. Just look at these features:
107 | - Lightweight construction, making it easy to use.
108 | - High absorbency, enabling you to apply lots of clean soapy water to the floor.
109 | - Great low price.
110 |
111 | Check out this and other products on our website at www.contoso.com.
112 | ```
113 |
114 | 1. Use the **Apply changes** button to save the examples and start a new session.
115 | 1. In the **Chat session** section, enter the user query `Create an advertisement for the Scrubadub 2000 - a new scrubbing brush made of carbon fiber that reduces cleaning time by half`.
116 | 1. Review the response, which should be a new advert for the "Scrubadub 2000" that is modeled on the "Ultramop" example provided in the system setup.
117 |
118 | ## Experiment with parameters
119 |
120 | You've explored how the system message, examples, and prompts can help refine the responses returned by the model. You can also use parameters to control model behavior.
121 |
122 | 1. In the **Configuration** panel, select the **Parameters** tab and set the following parameter values:
123 | - **Max response**: 1000
124 | - **Temperature**: 1
125 |
126 | 1. In the **Chat session** section, use the **Clear chat** button to reset the chat session. Then enter the user query `Create an advertisement for a cleaning sponge` and review the response. The resulting advertisement copy should include a maximum of 1000 text tokens, and include some creative elements - for example, the model may have invented a product name for the sponge and made some claims about its features.
127 | 1. Use the **Clear chat** button to reset the chat session again, and then re-enter the same query as before (`Create an advertisement for a cleaning sponge`) and review the response. The response may be different from the previous response.
128 | 1. In the **Configuration** panel, on the **Parameters** tab, change the **Temperature** parameter value to 0.
129 | 1. In the **Chat session** section, use the **Clear chat** button to reset the chat session again, and then re-enter the same query as before (`Create an advertisement for a cleaning sponge`) and review the response. This time, the response may not be quite so creative.
130 | 1. Use the **Clear chat** button to reset the chat session one more time, and then re-enter the same query as before (`Create an advertisement for a cleaning sponge`) and review the response; which should be very similar (if not identical) to the previous response.
131 |
132 | The **Temperature** parameter controls the degree to which the model can be creative in its generation of a response. A low value results in a consistent response with little random variation, while a high value encourages the model to add creative elements to its output, which may affect the accuracy and realism of the response.
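The same parameter is available when you call the service from code. For example, with the Python SDK used in other exercises in this repo, a request with a low temperature might look like the following sketch (the endpoint, key, and deployment name are placeholders):

```python
from openai import AzureOpenAI

# Placeholder values - use your own endpoint, key, and deployment name
client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-15-preview"
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Create an advertisement for a cleaning sponge"}],
    temperature=0,     # low temperature: consistent output with little random variation
    max_tokens=1000    # corresponds to the Max response setting in the playground
)
print(response.choices[0].message.content)
```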
133 |
134 | ## Deploy your model to a web app
135 |
136 | Now that you've explored some of the capabilities of a generative AI model in the Azure AI Foundry playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model.
137 |
138 | > **Note**: For some users, the web app cannot be deployed due to a bug in the template in the studio. If that's the case, skip this section.
139 |
140 | 1. At the top right of the **Chat** playground page, in the **Deploy to** menu, select **A new web app**.
141 | 1. In the **Deploy to a web app** dialog box, create a new web app with the following settings:
142 | - **Name**: *A unique name*
143 | - **Subscription**: *Your Azure subscription*
144 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
145 | - **Locations**: *The region where you provisioned your Azure OpenAI resource*
146 | - **Pricing plan**: Free (F1) - *If this is not available, select Basic (B1)*
147 | - **Enable chat history in the web app**: **Un**selected
148 | - **I acknowledge that web apps will incur usage to my account**: Selected
149 | 1. Deploy the new web app and wait for deployment to complete (which may take 10 minutes or so)
150 | 1. After your web app has deployed successfully, use the button at the top right of the **Chat** playground page to launch the web app. The app may take a few minutes to launch. If prompted, accept the permissions request.
151 | 1. In the web app, enter the following chat message:
152 |
153 | ```prompt
154 | Write an advertisement for the new "WonderWipe" cloth that attracts dust particulates and can be used to clean any household surface.
155 | ```
156 |
157 | 1. Review the response.
158 |
159 | > **Note**: You deployed the *model* to a web app, but this deployment doesn't include the system settings and parameters you set in the playground; so the response may not reflect the examples you specified in the playground. In a real scenario, you would add logic to your application to modify the prompt so that it includes the appropriate contextual data for the kinds of response you want to generate. This kind of customization is beyond the scope of this introductory-level exercise, but you can learn about prompt engineering techniques and Azure OpenAI APIs in other exercises and product documentation.
160 |
161 | 1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure AI Foundry portal.
162 |
163 | ## Clean up
164 |
165 | When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the **Azure portal** at `https://portal.azure.com`.
166 |
--------------------------------------------------------------------------------
/Instructions/Exercises/02-natural-language-azure-openai.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Use Azure OpenAI SDKs in your app'
4 | status: stale
5 | ---
6 |
7 | # Use Azure OpenAI APIs in your app
8 |
9 | With the Azure OpenAI Service, developers can create chatbots, language models, and other applications that excel at understanding natural human language. The Azure OpenAI Service provides access to pre-trained AI models, as well as a suite of APIs and tools for customizing and fine-tuning these models to meet the specific requirements of your application. In this exercise, you'll learn how to deploy a model in Azure OpenAI and use it in your own application.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer who has been tasked to implement an app that can use generative AI to help provide hiking recommendations. The techniques used in the exercise can be applied to any app that wants to use Azure OpenAI APIs.
12 |
13 | This exercise will take approximately **30** minutes.
14 |
15 | ## Provision an Azure OpenAI resource
16 |
17 | If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.
18 |
19 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
20 | 2. Create an **Azure OpenAI** resource with the following settings:
21 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
22 | - **Resource group**: *Choose or create a resource group*
23 | - **Region**: *Make a **random** choice from any of the following regions*\*
24 | - East US
25 | - East US 2
26 | - North Central US
27 | - South Central US
28 | - Sweden Central
29 | - West US
30 | - West US 3
31 | - **Name**: *A unique name of your choice*
32 | - **Pricing tier**: Standard S0
33 |
34 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
35 |
36 | 3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.
37 |
38 | ## Deploy a model
39 |
40 | Azure provides a web-based portal named **Azure AI Foundry portal** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Foundry portal to deploy a model.
41 |
42 | > **Note**: As you use Azure AI Foundry portal, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
43 |
44 | 1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Foundry portal** (previously AI Studio).
45 | 1. In Azure AI Foundry portal, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-4o** model with the following settings:
46 | - **Deployment name**: *A unique name of your choice*
47 | - **Model**: gpt-4o
48 | - **Model version**: *Use default version*
49 | - **Deployment type**: Standard
50 | - **Tokens per minute rate limit**: 5K\*
51 | - **Content filter**: Default
52 | - **Enable dynamic quota**: Disabled
53 |
54 | > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
55 |
56 | ## Prepare to develop an app in Visual Studio Code
57 |
58 | You'll develop your Azure OpenAI app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.
59 |
60 | > **Tip**: If you have already cloned the **mslearn-openai** repo, open it in Visual Studio code. Otherwise, follow these steps to clone it to your development environment.
61 |
62 | 1. Start Visual Studio Code.
63 | 2. Open the command palette (SHIFT+CTRL+P or **View** > **Command Palette...**) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
64 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
65 |
66 | > **Note**: If Visual Studio Code shows you a pop-up message to prompt you to trust the code you are opening, click on **Yes, I trust the authors** option in the pop-up.
67 |
68 | 4. Wait while additional files are installed to support the C# code projects in the repo.
69 |
70 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
71 |
72 | ## Configure your application
73 |
74 | Applications for both C# and Python have been provided. Both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource.
75 |
76 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/02-azure-openai-api** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
77 | 2. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then install the Azure OpenAI SDK package by running the appropriate command for your language preference:
78 |
79 | **C#**:
80 |
81 | ```powershell
82 | dotnet add package Azure.AI.OpenAI --version 2.1.0
83 | ```
84 |
85 | **Python**:
86 |
87 | ```powershell
88 | pip install openai==1.65.2
89 | ```
90 |
91 | 3. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language:
92 |
93 | - **C#**: appsettings.json
94 | - **Python**: .env
95 |
96 | 4. Update the configuration values to include:
97 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
98 | - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Foundry portal).
99 | 5. Save the configuration file.
100 |
101 | ## Add code to use the Azure OpenAI service
102 |
103 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
104 |
105 | 1. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the code file for your preferred language, and replace the comment ***Add Azure OpenAI package*** with code to add the Azure OpenAI SDK library:
106 |
107 | **C#**: Program.cs
108 |
109 | ```csharp
110 | // Add Azure OpenAI packages
111 | using Azure.AI.OpenAI;
112 | using OpenAI.Chat;
113 | ```
114 |
115 | **Python**: test-openai-model.py
116 |
117 | ```python
118 | # Add Azure OpenAI package
119 | from openai import AzureOpenAI
120 | ```
121 |
122 | 1. In the application code for your language, replace the comment ***Initialize the Azure OpenAI client...*** with the following code to initialize the client and define our system message.
123 |
124 | **C#**: Program.cs
125 |
126 | ```csharp
127 | // Initialize the Azure OpenAI client
128 | AzureOpenAIClient azureClient = new (new Uri(oaiEndpoint), new ApiKeyCredential(oaiKey));
129 | ChatClient chatClient = azureClient.GetChatClient(oaiDeploymentName);
130 |
131 | // System message to provide context to the model
132 | string systemMessage = "I am a hiking enthusiast named Forest who helps people discover hikes in their area. If no area is specified, I will default to near Rainier National Park. I will then provide three suggestions for nearby hikes that vary in length. I will also share an interesting fact about the local nature on the hikes when making a recommendation.";
133 | ```
134 |
135 | **Python**: test-openai-model.py
136 |
137 | ```python
138 | # Initialize the Azure OpenAI client
139 | client = AzureOpenAI(
140 | azure_endpoint = azure_oai_endpoint,
141 | api_key=azure_oai_key,
142 | api_version="2024-02-15-preview"
143 | )
144 |
145 | # Create a system message
146 | system_message = """I am a hiking enthusiast named Forest who helps people discover hikes in their area.
147 | If no area is specified, I will default to near Rainier National Park.
148 | I will then provide three suggestions for nearby hikes that vary in length.
149 | I will also share an interesting fact about the local nature on the hikes when making a recommendation.
150 | """
151 | ```
152 |
153 | 1. Replace the comment ***Add code to send request...*** with the necessary code for building the request, specifying the various parameters for your model such as `Temperature` and `MaxOutputTokenCount`.
154 |
155 | **C#**: Program.cs
156 |
157 | ```csharp
158 | // Add code to send request...
159 | // Get response from Azure OpenAI
160 | ChatCompletionOptions chatCompletionOptions = new ChatCompletionOptions()
161 | {
162 | Temperature = 0.7f,
163 | MaxOutputTokenCount = 800
164 | };
165 |
166 | ChatCompletion completion = chatClient.CompleteChat(
167 | [
168 | new SystemChatMessage(systemMessage),
169 | new UserChatMessage(inputText)
170 | ],
171 | chatCompletionOptions
172 | );
173 |
174 | Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
175 | ```
176 |
177 | **Python**: test-openai-model.py
178 |
179 | ```python
180 | # Add code to send request...
181 | # Send request to Azure OpenAI model
182 | response = client.chat.completions.create(
183 | model=azure_oai_deployment,
184 | temperature=0.7,
185 | max_tokens=400,
186 | messages=[
187 | {"role": "system", "content": system_message},
188 | {"role": "user", "content": input_text}
189 | ]
190 | )
191 | generated_text = response.choices[0].message.content
192 |
193 | # Print the response
194 | print("Response: " + generated_text + "\n")
195 | ```
196 |
197 | 1. Save the changes to your code file.
198 |
199 | ## Test your application
200 |
201 | Now that your app has been configured, run it to send your request to your model and observe the response.
202 |
203 | 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
204 |
205 | - **C#**: `dotnet run`
206 | - **Python**: `python test-openai-model.py`
207 |
208 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
209 |
210 | 1. When prompted, enter the text `What hike should I do near Rainier?`.
211 | 1. Observe the output, taking note that the response follows the guidelines provided in the system message you added to the *messages* array.
212 | 1. Provide the prompt `Where should I hike near Boise? I'm looking for something of easy difficulty, between 2 to 3 miles, with moderate elevation gain.` and observe the output.
213 | 1. In the code file for your preferred language, change the *temperature* parameter value in your request to **1.0** and save the file.
214 | 1. Run the application again using the prompts above, and observe the output.
215 |
216 | Increasing the temperature often causes the response to vary, even when provided the same text, due to the increased randomness. You can run it several times to see how the output may change. Try using different values for your temperature with the same input.
217 |
218 | ## Maintain conversation history
219 |
220 | In most real-world applications, the ability to reference previous parts of the conversation allows for a more realistic interaction with an AI agent. The Azure OpenAI API is stateless by design, but by providing a history of the conversation in your prompt you enable the AI model to reference past messages.
221 |
222 | 1. Run the app again and provide the prompt `Where is a good hike near Boise?`.
223 | 1. Observe the output, and then prompt `How difficult is the second hike you suggested?`.
224 | 1. The response from the model will likely indicate that it can't tell which hike you're referring to. To fix that, we can give the model the past conversation messages for reference.
225 | 1. In your application, you need to add the previous prompt and response to the next prompt you send. Below the definition of the **system message**, add the following code.
226 |
227 | **C#**: Program.cs
228 |
229 | ```csharp
230 | // Initialize messages list
231 | var messagesList = new List<ChatMessage>()
232 | {
233 | new SystemChatMessage(systemMessage),
234 | };
235 | ```
236 |
237 | **Python**: test-openai-model.py
238 |
239 | ```python
240 | # Initialize messages array
241 | messages_array = [{"role": "system", "content": system_message}]
242 | ```
243 |
244 | 1. Under the comment ***Add code to send request...***, replace all the code from the comment to the end of the **while** loop with the following code then save the file. The code is mostly the same, but now using the messages array to store the conversation history.
245 |
246 | **C#**: Program.cs
247 |
248 | ```csharp
249 | // Add code to send request...
250 | // Build completion options object
251 | messagesList.Add(new UserChatMessage(inputText));
252 |
253 | ChatCompletionOptions chatCompletionOptions = new ChatCompletionOptions()
254 | {
255 | Temperature = 0.7f,
256 | MaxOutputTokenCount = 800
257 | };
258 |
259 | ChatCompletion completion = chatClient.CompleteChat(
260 | messagesList,
261 | chatCompletionOptions
262 | );
263 |
264 | // Return the response
265 | string response = completion.Content[0].Text;
266 |
267 | // Add generated text to messages list
268 | messagesList.Add(new AssistantChatMessage(response));
269 |
270 | Console.WriteLine("Response: " + response + "\n");
271 | ```
272 |
273 | **Python**: test-openai-model.py
274 |
275 | ```python
276 | # Add code to send request...
277 | # Send request to Azure OpenAI model
278 | messages_array.append({"role": "user", "content": input_text})
279 |
280 | response = client.chat.completions.create(
281 | model=azure_oai_deployment,
282 | temperature=0.7,
283 | max_tokens=1200,
284 | messages=messages_array
285 | )
286 | generated_text = response.choices[0].message.content
287 | # Add generated text to messages array
288 | messages_array.append({"role": "assistant", "content": generated_text})
289 |
290 | # Print generated text
291 | print("Summary: " + generated_text + "\n")
292 | ```
293 |
294 | 1. Save the file. In the code you added, notice that we now append the previous input and response to the messages array, which allows the model to understand the history of the conversation.
295 | 1. In the terminal pane, enter the following command to run the application.
296 |
297 | - **C#**: `dotnet run`
298 | - **Python**: `python test-openai-model.py`
299 |
300 | 1. Run the app again and provide the prompt `Where is a good hike near Boise?`.
301 | 1. Observe the output, and then prompt `How difficult is the second hike you suggested?`.
302 | 1. You'll likely get a response about the second hike the model suggested, which provides a much more realistic conversation. You can ask additional follow up questions referencing previous answers, and each time the history provides context for the model to answer.
303 |
304 | > **Tip**: The output token count is only set to 800, so if the conversation continues for too long the application will run out of available tokens, resulting in an incomplete response. In production use, limiting the history to the most recent inputs and responses helps control the number of required tokens.
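One simple way to do that in the Python version is to keep the system message and only the most recent few messages before each request — a minimal sketch, assuming the `messages_array` from the code above (the limit of 10 is arbitrary):

```python
# Illustrative only: retain the system message (first entry) plus the most recent messages.
MAX_HISTORY = 10
if len(messages_array) > MAX_HISTORY + 1:
    messages_array = [messages_array[0]] + messages_array[-MAX_HISTORY:]
```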
305 |
306 | ## Clean up
307 |
308 | When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the **Azure portal** at `https://portal.azure.com`.
309 |
--------------------------------------------------------------------------------
/Instructions/Exercises/02-use-own-data.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service'
4 | status: new
5 | ---
6 |
7 | # Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service
8 |
9 | The Azure OpenAI Service enables you to use your own data with the intelligence of the underlying LLM. You can limit the model to only use your data for pertinent topics, or blend it with results from the pre-trained model.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer working for Margie's Travel Agency. You will explore how to use Azure AI Search to index your own data and use it with Azure OpenAI to augment prompts.
12 |
13 | This exercise will take approximately **30** minutes.
14 |
15 | ## Provision Azure resources
16 |
17 | To complete this exercise, you'll need:
18 |
19 | - An Azure OpenAI resource.
20 | - An Azure AI Search resource.
21 | - An Azure Storage Account resource.
22 |
23 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
24 | 2. Create an **Azure OpenAI** resource with the following settings:
25 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
26 | - **Resource group**: *Choose or create a resource group*
27 | - **Region**: *Make a **random** choice from any of the following regions*\*
28 | - East US
29 | - East US 2
30 | - North Central US
31 | - South Central US
32 | - Sweden Central
33 | - West US
34 | - West US 3
35 | - **Name**: *A unique name of your choice*
36 | - **Pricing tier**: Standard S0
37 |
38 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
39 |
40 | 3. While the Azure OpenAI resource is being provisioned, create an **Azure AI Search** resource with the following settings:
41 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
42 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
43 | - **Service name**: *A unique name of your choice*
44 | - **Location**: *The region in which you provisioned your Azure OpenAI resource*
45 | - **Pricing tier**: Basic
46 | 4. While the Azure AI Search resource is being provisioned, create a **Storage account** resource with the following settings:
47 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
48 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
49 | - **Storage account name**: *A unique name of your choice*
50 | - **Region**: *The region in which you provisioned your Azure OpenAI resource*
51 | - **Primary service**: Azure Blob Storage or Azure Data Lake Storage Gen 2
52 | - **Performance**: Standard
53 | - **Redundancy**: Locally redundant storage (LRS)
54 | 5. After all three of the resources have been successfully deployed in your Azure subscription, review them in the Azure portal and gather the following information (which you'll need later in the exercise):
55 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
56 | - The endpoint for your Azure AI Search service (the **Url** value on the overview page for your Azure AI Search resource in the Azure portal).
57 | - A **primary admin key** for your Azure AI Search resource (available in the **Keys** page for your Azure AI Search resource in the Azure portal).
58 |
59 | ## Upload your data
60 |
61 | You're going to ground the prompts you use with a generative AI model by using your own data. In this exercise, the data consists of a collection of travel brochures from the fictional *Margies Travel* company.
62 |
63 | 1. In a new browser tab, download an archive of brochure data from `https://aka.ms/own-data-brochures`. Extract the brochures to a folder on your PC.
64 | 1. In the Azure portal, navigate to your storage account and view the **Storage browser** page.
65 | 1. Select **Blob containers** and then add a new container named `margies-travel`.
66 | 1. Select the **margies-travel** container, and then upload the .pdf brochures you extracted previously to the root folder of the blob container.
67 |
68 | ## Deploy AI models
69 |
70 | You're going to use two AI models in this exercise:
71 |
72 | - A text embedding model to *vectorize* the text in the brochures so it can be indexed efficiently for use in grounding prompts.
73 | - A GPT model that your application can use to generate responses to prompts that are grounded in your data.
74 |
75 | ## Deploy the models
76 |
77 | Next, you'll deploy both Azure OpenAI models from the Cloud Shell by using the Azure CLI. In each of the following commands, specify the name of your resource group for the `-g` parameter and the name of your Azure OpenAI resource for the `-n` parameter.
78 |
79 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***Bash*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.
80 |
81 | > **Note**: If you have previously created a cloud shell that uses a *PowerShell* environment, switch it to ***Bash***.
82 |
83 | ```azurecli
84 | az cognitiveservices account deployment create \
85 | -g \
86 | -n \
87 | --deployment-name text-embedding-ada-002 \
88 | --model-name text-embedding-ada-002 \
89 | --model-version "2" \
90 | --model-format OpenAI \
91 | --sku-name "Standard" \
92 | --sku-capacity 5
93 | ```
94 |
95 | > **Note**: The `--sku-capacity` value is measured in thousands of tokens per minute. A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
96 |
97 | After the text embedding model has been deployed, create a new deployment of the **gpt-4o** model by running the following command:
98 |
99 | ```azurecli
100 | az cognitiveservices account deployment create \
101 | -g \
102 | -n \
103 | --deployment-name gpt-4o \
104 | --model-name gpt-4o \
105 | --model-version "2024-05-13" \
106 | --model-format OpenAI \
107 | --sku-name "Standard" \
108 | --sku-capacity 5
109 | ```
110 |
111 | ## Create an index
112 |
113 | To make it easy to use your own data in a prompt, you'll index it using Azure AI Search. You'll use the text embedding model to *vectorize* the text data (which results in each text token in the index being represented by numeric vectors - making it compatible with the way a generative AI model represents text).
114 |
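The **Import and vectorize data** wizard calls the embedding model for you, but as a rough illustration of what vectorization produces (not part of the lab steps), a call to the **text-embedding-ada-002** deployment might look like the following sketch; the endpoint, key, and API version shown are placeholder assumptions:

```python
# Illustrative only: see what an embedding vector looks like for a piece of text.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com/",  # assumption
    api_key="<your-key>",                                               # assumption
    api_version="2024-02-01"                                            # assumption
)

response = client.embeddings.create(
    model="text-embedding-ada-002",   # the deployment name used in this exercise
    input="Margies Travel offers brochures for destinations such as Dubai and London."
)

vector = response.data[0].embedding
print(len(vector))    # text-embedding-ada-002 returns a 1536-dimension vector
print(vector[:5])     # the first few numeric components
```
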
115 | 1. In the Azure portal, navigate to your Azure AI Search resource.
116 | 1. On the **Overview** page, select **Import and vectorize data**.
117 | 1. In the **Setup your data connection** page, select **Azure Blob Storage** and configure the data source with the following settings:
118 | - **Subscription**: The Azure subscription in which you provisioned your storage account.
119 | - **Blob storage account**: The storage account you created previously.
120 | - **Blob container**: margies-travel
121 | - **Blob folder**: *Leave blank*
122 | - **Enable deletion tracking**: Unselected
123 | - **Authenticate using managed identity**: Unselected
124 | 1. On the **Vectorize your text** page, select the following settings:
125 | - **Kind**: Azure OpenAI
126 | - **Subscription**: The Azure subscription in which you provisioned your Azure OpenAI service.
127 | - **Azure OpenAI Service**: Your Azure OpenAI Service resource
128 | - **Model deployment**: text-embedding-ada-002
129 | - **Authentication type**: API key
130 | - **I acknowledge that connecting to an Azure OpenAI service will incur additional costs to my account**: Selected
131 | 1. On the next page, do **not** select the option to vectorize images or extract data with AI skills.
132 | 1. On the next page, enable semantic ranking and schedule the indexer to run once.
133 | 1. On the final page, set the **Objects name prefix** to `margies-index` and then create the index.
134 |
135 | ## Prepare to develop an app in Visual Studio Code
136 |
137 | Now let's explore the use of your own data in an app that uses the Azure OpenAI service SDK. You'll develop your app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.
138 |
139 | > **Tip**: If you have already cloned the **mslearn-openai** repo, open it in Visual Studio code. Otherwise, follow these steps to clone it to your development environment.
140 |
141 | 1. Start Visual Studio Code.
142 | 2. Open the palette (SHIFT+CTRL+P or **View** > **Command Palette...**) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
143 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
144 |
145 | > **Note**: If Visual Studio Code shows you a pop-up message to prompt you to trust the code you are opening, click on **Yes, I trust the authors** option in the pop-up.
146 |
147 | 4. Wait while additional files are installed to support the C# code projects in the repo.
148 |
149 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
150 |
151 | ## Configure your application
152 |
153 | Applications for both C# and Python have been provided, and both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource.
154 |
155 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/02-use-own-data** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
156 | 2. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then install the Azure OpenAI SDK package by running the appropriate command for your language preference:
157 |
158 | **C#**:
159 |
160 | ```powershell
161 | dotnet add package Azure.AI.OpenAI --version 2.1.0
162 | dotnet add package Azure.Search.Documents --version 11.6.0
163 | ```
164 |
165 | **Python**:
166 |
167 | ```powershell
168 | pip install openai==1.65.2
169 | ```
170 |
171 | 3. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language
172 |
173 | - **C#**: appsettings.json
174 | - **Python**: .env
175 |
176 | 4. Update the configuration values to include:
177 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
178 | - The **deployment name** you specified for your gpt-4o model deployment (should be `gpt-4o`).
179 | - The endpoint for your search service (the **Url** value on the overview page for your search resource in the Azure portal).
180 | - A **key** for your search resource (available in the **Keys** page for your search resource in the Azure portal - you can use either of the admin keys)
181 | - The name of the search index (which should be `margies-index`).
182 | 5. Save the configuration file.
183 |
184 | ### Add code to use the Azure OpenAI service
185 |
186 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
187 |
188 | 1. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the code file for your preferred language, and replace the comment ***Configure your data source*** with code to add your index as a data source for chat completions:
189 |
190 | **C#**: OwnData.cs
191 |
192 | ```csharp
193 | // Configure your data source
194 | // Extension methods to use data sources with options are subject to SDK surface changes. Suppress the warning to acknowledge this and use the subject-to-change AddDataSource method.
195 | #pragma warning disable AOAI001
196 |
197 | ChatCompletionOptions chatCompletionsOptions = new ChatCompletionOptions()
198 | {
199 | MaxOutputTokenCount = 600,
200 | Temperature = 0.9f,
201 | };
202 |
203 | chatCompletionsOptions.AddDataSource(new AzureSearchChatDataSource()
204 | {
205 | Endpoint = new Uri(azureSearchEndpoint),
206 | IndexName = azureSearchIndex,
207 | Authentication = DataSourceAuthentication.FromApiKey(azureSearchKey),
208 | });
209 | ```
210 |
211 | **Python**: ownData.py
212 |
213 | ```python
214 | # Configure your data source
215 | text = input('\nEnter a question:\n')
216 |
217 | completion = client.chat.completions.create(
218 | model=deployment,
219 | messages=[
220 | {
221 | "role": "user",
222 | "content": text,
223 | },
224 | ],
225 | extra_body={
226 | "data_sources":[
227 | {
228 | "type": "azure_search",
229 | "parameters": {
230 | "endpoint": os.environ["AZURE_SEARCH_ENDPOINT"],
231 | "index_name": os.environ["AZURE_SEARCH_INDEX"],
232 | "authentication": {
233 | "type": "api_key",
234 | "key": os.environ["AZURE_SEARCH_KEY"],
235 | }
236 | }
237 | }
238 | ],
239 | }
240 | )
241 | ```
242 |
243 | 1. Save the changes to the code file.
244 |
245 | ## Run your application
246 |
247 | Now that your app has been configured, run it to send your request to your model and observe the response. All of the request parameters (such as token count and temperature) remain the same for each request; only the content of the prompt changes.
248 |
249 | 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
250 |
251 | - **C#**: `dotnet run`
252 | - **Python**: `python ownData.py`
253 |
254 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
255 |
256 | 2. Review the response to the prompt `Tell me about London`, which should include an answer as well as some details of the data used to ground the prompt, which was obtained from your search service.
257 |
258 | > **Tip**: If you want to see the citations from your search index, set the variable ***show citations*** near the top of the code file to **true**.
259 |
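If you want to explore the grounding data programmatically, the response returned when a data source is attached typically includes additional context alongside the message. The following is a rough sketch (not part of the lab code, and field names can vary by API version) of how you might inspect it from the `completion` object created in the Python code above:

```python
# Rough sketch: inspect grounding citations returned with the response (illustrative only)
response_dict = completion.model_dump()            # convert the SDK response object to a dict
message = response_dict["choices"][0]["message"]

# When a data source is attached, the message typically carries a "context" entry
context = message.get("context", {})
for citation in context.get("citations", []):
    print(citation.get("title"), "-", citation.get("filepath"))
```
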
260 | ## Clean up
261 |
262 | When you're done with your Azure OpenAI resource, remember to delete the resources in the **Azure portal** at `https://portal.azure.com`. Be sure to also include the storage account and search resource, as those can incur a relatively large cost.
263 |
--------------------------------------------------------------------------------
/Instructions/Exercises/03-generate-images.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Generate images with AI'
4 | description: 'Learn how to use a DALL-E OpenAI model to generate images.'
5 | status: new
6 | ---
7 |
8 | # Generate images with AI
9 |
10 | In this exercise, you use the OpenAI DALL-E generative AI model to generate images. You'll develop your app by using Azure AI Foundry and the Azure OpenAI service.
11 |
12 | This exercise takes approximately **30** minutes.
13 |
14 | ## Create an Azure AI Foundry project
15 |
16 | Let's start by creating an Azure AI Foundry project.
17 |
18 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image:
19 |
20 | 
21 |
22 | 1. In the home page, select **+ Create project**.
23 | 1. In the **Create a project** wizard, enter a valid name for your project and if an existing hub is suggested, choose the option to create a new one. Then review the Azure resources that will be automatically created to support your hub and project.
24 | 1. Select **Customize** and specify the following settings for your hub:
25 | - **Hub name**: *A valid name for your hub*
26 | - **Subscription**: *Your Azure subscription*
27 | - **Resource group**: *Create or select a resource group*
28 | - **Location**: Select **Help me choose** and then select **dalle** in the Location helper window and use the recommended region\*
29 | - **Connect Azure AI Services or Azure OpenAI**: *Create a new AI Services resource*
30 | - **Connect Azure AI Search**: Skip connecting
31 |
32 | > \* Azure OpenAI resources are constrained by regional quotas. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
33 |
34 | 1. Select **Next** and review your configuration. Then select **Create** and wait for the process to complete.
35 | 1. When your project is created, close any tips that are displayed and review the project page in Azure AI Foundry portal, which should look similar to the following image:
36 |
37 | 
38 |
39 | ## Deploy a DALL-E model
40 |
41 | Now you're ready to deploy a DALL-E model to support image-generation.
42 |
43 | 1. In the toolbar at the top right of your Azure AI Foundry project page, use the **Preview features** icon to enable the **Deploy models to Azure AI model inference service** feature. This feature ensures your model deployment is available to the Azure AI Inference service, which you'll use in your application code.
44 | 1. In the pane on the left for your project, in the **My assets** section, select the **Models + endpoints** page.
45 | 1. In the **Models + endpoints** page, in the **Model deployments** tab, in the **+ Deploy model** menu, select **Deploy base model**.
46 | 1. Search for the **dall-e-3** model in the list, and then select and confirm it.
47 | 1. Agree to the license agreement if prompted, and then deploy the model with the following settings by selecting **Customize** in the deployment details:
48 | - **Deployment name**: *A valid name for your model deployment*
49 | - **Deployment type**: Standard
50 | - **Deployment details**: *Use the default settings*
51 | 1. Wait for the deployment provisioning state to be **Completed**.
52 |
53 | ## Test the model in the playground
54 |
55 | Before creating a client application, let's test the DALL-E model in the playground.
56 |
57 | 1. In the page for the DALL-E model you deployed, select **Open in playground** (or in the **Playgrounds** page, open the **Images playground**).
58 | 1. Ensure your DALL-E model deployment is selected. Then, in the **Prompt** box, enter a prompt such as `Create an image of a robot eating spaghetti`.
59 | 1. Review the resulting image in the playground:
60 |
61 | 
62 |
63 | 1. Enter a follow-up prompt, such as `Show the robot in a restaurant` and review the resulting image.
64 | 1. Continue testing with new prompts to refine the image until you are happy with it.
65 |
66 | ## Create a client application
67 |
68 | The model seems to work in the playground. Now you can use the Azure OpenAI SDK to use it in a client application.
69 |
70 | > **Tip**: You can choose to develop your solution using Python or Microsoft C#. Follow the instructions in the appropriate section for your chosen language.
71 |
72 | ### Prepare the application configuration
73 |
74 | 1. In the Azure AI Foundry portal, view the **Overview** page for your project.
75 | 1. In the **Project details** area, note the **Project connection string**. You'll use this connection string to connect to your project in a client application.
76 | 1. Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`; signing in with your Azure credentials if prompted.
77 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.
78 |
79 | > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.
80 |
81 | 5. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).
82 |
83 | **Ensure you've switched to the classic version of the cloud shell before continuing.**
84 |
85 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text):
86 |
87 | ```
88 | rm -r mslearn-openai -f
89 | git clone https://github.com/microsoftlearning/mslearn-openai mslearn-openai
90 | ```
91 |
92 | > **Tip**: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.
93 |
94 | 1. After the repo has been cloned, navigate to the language-specific folder containing the application code files, based on the programming language of your choice (Python or C#):
95 |
96 | **Python**
97 |
98 | ```
99 | cd mslearn-openai/Labfiles/03-image-generation/Python
100 | ```
101 |
102 | **C#**
103 |
104 | ```
105 | cd mslearn-openai/Labfiles/03-image-generation/CSharp
106 | ```
107 |
108 | 1. In the cloud shell command line pane, enter the following command to install the libraries you'll use:
109 |
110 | **Python**
111 |
112 | ```
113 | python -m venv labenv
114 | ./labenv/bin/Activate.ps1
115 | pip install python-dotenv azure-identity azure-ai-projects openai requests
116 | ```
117 |
118 | **C#**
119 |
120 | ```
121 | dotnet add package Azure.Identity
122 | dotnet add package Azure.AI.Projects --prerelease
123 | dotnet add package Azure.AI.OpenAI
124 | ```
125 |
126 | 1. Enter the following command to edit the configuration file that has been provided:
127 |
128 | **Python**
129 |
130 | ```
131 | code .env
132 | ```
133 |
134 | **C#**
135 |
136 | ```
137 | code appsettings.json
138 | ```
139 |
140 | The file is opened in a code editor.
141 |
142 | 1. In the code file, replace the **your_project_endpoint** placeholder with the connection string for your project (copied from the project **Overview** page in the Azure AI Foundry portal), and the **your_model_deployment** placeholder with the name you assigned to your dall-e-3 model deployment.
143 | 1. After you've replaced the placeholders, use the **CTRL+S** command to save your changes and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open.
144 |
145 | ### Write code to connect to your project and chat with your model
146 |
147 | > **Tip**: As you add code, be sure to maintain the correct indentation.
148 |
149 | 1. Enter the following command to edit the code file that has been provided:
150 |
151 | **Python**
152 |
153 | ```
154 | code dalle-client.py
155 | ```
156 |
157 | **C#**
158 |
159 | ```
160 | code Program.cs
161 | ```
162 |
163 | 1. In the code file, note the existing statements that have been added at the top of the file to import the necessary SDK namespaces. Then, under the comment **Add references**, add the following code to reference the namespaces in the libraries you installed previously:
164 |
165 | **Python**
166 |
167 | ```python
168 | # Add references
169 | from dotenv import load_dotenv
170 | from azure.identity import DefaultAzureCredential
171 | from azure.ai.projects import AIProjectClient
172 | from openai import AzureOpenAI
173 | import requests
174 | ```
175 |
176 | **C#**
177 |
178 | ```csharp
179 | // Add references
180 | using Azure.Identity;
181 | using Azure.AI.Projects;
182 | using Azure.AI.OpenAI;
183 | using OpenAI.Images;
184 | ```
185 |
186 | 1. In the **main** function, under the comment **Get configuration settings**, note that the code loads the project connection string and model deployment name values you defined in the configuration file.
187 | 1. Under the comment **Initialize the project client**, add the following code to connect to your Azure AI Foundry project using the Azure credentials you are currently signed in with:
188 |
189 | **Python**
190 |
191 | ```python
192 | # Initialize the project client
193 | project_client = AIProjectClient.from_connection_string(
194 | conn_str=project_connection,
195 | credential=DefaultAzureCredential())
196 | ```
197 |
198 | **C#**
199 |
200 | ```csharp
201 | // Initialize the project client
202 | var projectClient = new AIProjectClient(project_connection,
203 | new DefaultAzureCredential());
204 | ```
205 |
206 | 1. Under the comment **Get an OpenAI client**, add the following code to create a client object for chatting with a model:
207 |
208 | **Python**
209 |
210 | ```python
211 | # Get an OpenAI client
212 | openai_client = project_client.inference.get_azure_openai_client(api_version="2024-06-01")
213 |
214 | ```
215 |
216 | **C#**
217 |
218 | ```csharp
219 | // Get an OpenAI client
220 | ConnectionResponse connection = projectClient.GetConnectionsClient().GetDefaultConnection(ConnectionType.AzureOpenAI, withCredential: true);
221 |
222 | var connectionProperties = connection.Properties as ConnectionPropertiesApiKeyAuth;
223 |
224 | AzureOpenAIClient openAIClient = new(
225 | new Uri(connectionProperties.Target),
226 | new AzureKeyCredential(connectionProperties.Credentials.Key));
227 |
228 | ImageClient openAIimageClient = openAIClient.GetImageClient(model_deployment);
229 |
230 | ```
231 |
232 | 1. Note that the code includes a loop to allow a user to input a prompt until they enter "quit". Then in the loop section, under the comment **Generate an image**, add the following code to submit the prompt and retrieve the URL for the generated image from your model:
233 |
234 | **Python**
235 |
236 | ```python
237 | # Generate an image
238 | result = openai_client.images.generate(
239 | model=model_deployment,
240 | prompt=input_text,
241 | n=1
242 | )
243 |
244 | json_response = json.loads(result.model_dump_json())
245 | image_url = json_response["data"][0]["url"]
246 | ```
247 |
248 | **C#**
249 |
250 | ```csharp
251 | // Generate an image
252 | var imageGeneration = await openAIimageClient.GenerateImageAsync(
253 | input_text,
254 | new ImageGenerationOptions()
255 | {
256 | Size = GeneratedImageSize.W1024xH1024
257 | }
258 | );
259 | imageUrl = imageGeneration.Value.ImageUri;
260 | ```
261 |
262 | 1. Note that the code in the remainder of the **main** function passes the image URL and a filename to a provided function, which downloads the generated image and saves it as a .png file.
263 |
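The provided helper function isn't shown here; a rough sketch of what such a routine might look like (the function name and folder are assumptions) is:

```python
# Hedged sketch of a download-and-save helper (illustrative; the provided code may differ)
import os
import requests

def save_image(image_url, filename):
    os.makedirs("images", exist_ok=True)              # folder used by the download step later
    response = requests.get(image_url)                # fetch the generated image
    response.raise_for_status()
    file_path = os.path.join("images", filename)
    with open(file_path, "wb") as image_file:         # write the bytes out as a .png file
        image_file.write(response.content)
    return file_path
```
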
264 | 1. Use the **CTRL+S** command to save your changes to the code file and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open.
265 |
266 | ### Run the client application
267 |
268 | 1. In the cloud shell command line pane, enter the following command to run the app:
269 |
270 | **Python**
271 |
272 | ```
273 | python dalle-client.py
274 | ```
275 |
276 | **C#**
277 |
278 | ```
279 | dotnet run
280 | ```
281 |
282 | 1. When prompted, enter a request for an image, such as `Create an image of a robot eating pizza`. After a moment or two, the app should confirm that the image has been saved.
283 | 1. Try a few more prompts. When you're finished, enter `quit` to exit the program.
284 |
285 | > **Note**: In this simple app, we haven't implemented logic to retain conversation history, so the model will treat each prompt as a new request with no context from the previous prompt.
286 |
287 | 1. To download and view the images that were generated by your app, use the cloud shell **download** command - specifying the .png file that was generated:
288 |
289 | ```
290 | download ./images/image_1.png
291 | ```
292 |
293 | The download command creates a popup link at the bottom right of your browser, which you can select to download and open the file.
294 |
295 | ## Summary
296 |
297 | In this exercise, you used Azure AI Foundry and the Azure OpenAI SDK to create a client application that uses a DALL-E model to generate images.
298 |
299 | ## Clean up
300 |
301 | If you've finished exploring DALL-E, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs.
302 |
303 | 1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` in a new browser tab) and view the contents of the resource group where you deployed the resources used in this exercise.
304 | 1. On the toolbar, select **Delete resource group**.
305 | 1. Enter the resource group name and confirm that you want to delete it.
--------------------------------------------------------------------------------
/Instructions/Exercises/04-code-generation.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Generate and improve code with Azure OpenAI Service'
4 | status: stale
5 | ---
6 |
7 | # Generate and improve code with Azure OpenAI Service
8 |
9 | The Azure OpenAI Service models can generate code for you from natural language prompts, fix bugs in completed code, and provide code comments. These models can also explain and simplify existing code to help you understand what it does and how to improve it.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer exploring how to use generative AI to make coding tasks easier and more efficient. The techniques used in the exercise can be applied to other code files, programming languages, and use cases.
12 |
13 | This exercise will take approximately **25** minutes.
14 |
15 | ## Provision an Azure OpenAI resource
16 |
17 | If you don't already have one, provision an Azure OpenAI resource in your Azure subscription.
18 |
19 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
20 | 2. Create an **Azure OpenAI** resource with the following settings:
21 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
22 | - **Resource group**: *Choose or create a resource group*
23 | - **Region**: *Make a **random** choice from any of the following regions*\*
24 | - East US
25 | - East US 2
26 | - North Central US
27 | - South Central US
28 | - Sweden Central
29 | - West US
30 | - West US 3
31 | - **Name**: *A unique name of your choice*
32 | - **Pricing tier**: Standard S0
33 |
34 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
35 |
36 | 3. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.
37 |
38 | ## Deploy a model
39 |
40 | Azure provides a web-based portal named **Azure AI Foundry portal** that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Foundry portal to deploy a model.
41 |
42 | > **Note**: As you use Azure AI Foundry portal, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
43 |
44 | 1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Foundry portal** (previously AI Studio).
45 | 1. In Azure AI Foundry portal, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-4o** model with the following settings:
46 | - **Deployment name**: *A unique name of your choice*
47 | - **Model**: gpt-4o
48 | - **Model version**: *Use default version*
49 | - **Deployment type**: Standard
50 | - **Tokens per minute rate limit**: 5K\*
51 | - **Content filter**: Default
52 | - **Enable dynamic quota**: Disabled
53 |
54 | > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
55 |
56 | ## Generate code in chat playground
57 |
58 | Before using the model in your app, examine how Azure OpenAI can generate and explain code in the chat playground.
59 |
60 | 1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
61 | - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
62 | - **Chat session** - used to submit chat messages and view responses.
63 | 1. Under **Deployments**, ensure that your model deployment is selected.
64 | 1. In the **System message** area, set the system message to `You are a programming assistant helping write code` and apply the changes.
65 | 1. In the **Chat session**, submit the following query:
66 |
67 | ```prompt
68 | Write a function in python that takes a character and a string as input, and returns how many times the character appears in the string
69 | ```
70 |
71 | The model will likely respond with a function, along with some explanation of what the function does and how to call it.
72 |
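Your output will vary, but the function in the response might resemble this sketch:

```python
# One possible response: a function that counts occurrences of a character in a string
def count_char(char, text):
    return sum(1 for c in text if c == char)

print(count_char("s", "mississippi"))  # 4
```
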
73 | 1. Next, send the prompt `Do the same thing, but this time write it in C#`.
74 |
75 | The model likely responds much as it did the first time, but now with C# code. You can ask again for a different language of your choice, or for a function that completes a different task, such as reversing the input string.
76 |
77 | 1. Next, let's explore using AI to understand code. Submit the following prompt as the user message.
78 |
79 | ```prompt
80 | What does the following function do?
81 | ---
82 | def multiply(a, b):
83 | result = 0
84 | negative = False
85 | if a < 0 and b > 0:
86 | a = -a
87 | negative = True
88 | elif a > 0 and b < 0:
89 | b = -b
90 | negative = True
91 | elif a < 0 and b < 0:
92 | a = -a
93 | b = -b
94 | while b > 0:
95 | result += a
96 | b -= 1
97 | if negative:
98 | return -result
99 | else:
100 | return result
101 | ```
102 |
103 | The model should describe what the function does, which is to multiply two numbers together by using a loop.
104 |
105 | 1. Submit the prompt `Can you simplify the function?`.
106 |
107 | The model should write a simpler version of the function.
108 |
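For example, the simplified version will often reduce the loop to a direct multiplication, along these lines:

```python
# One possible simplification of the multiply function
def multiply(a, b):
    """Return the product of a and b."""
    return a * b
```
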
109 | 1. Submit the prompt: `Add some comments to the function.`
110 |
111 | The model adds comments to the code.
112 |
113 | ## Prepare to develop an app in Visual Studio Code
114 |
115 | Now let's explore how you could build a custom app that uses Azure OpenAI service to generate code. You'll develop your app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.
116 |
117 | > **Tip**: If you have already cloned the **mslearn-openai** repo, open it in Visual Studio code. Otherwise, follow these steps to clone it to your development environment.
118 |
119 | 1. Start Visual Studio Code.
120 | 2. Open the command palette (SHIFT+CTRL+P or **View** > **Command Palette...**) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
121 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
122 |
123 | > **Note**: If Visual Studio Code shows you a pop-up message to prompt you to trust the code you are opening, click on **Yes, I trust the authors** option in the pop-up.
124 |
125 | 4. Wait while additional files are installed to support the C# code projects in the repo.
126 |
127 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
128 |
129 | ## Configure your application
130 |
131 | Applications for both C# and Python have been provided, as well as sample code you'll use for the code generation tasks. Both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource.
132 |
133 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/04-code-generation** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
134 | 2. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then install the Azure OpenAI SDK package by running the appropriate command for your language preference:
135 |
136 | **C#**:
137 |
138 | ```powershell
139 | dotnet add package Azure.AI.OpenAI --version 2.1.0
140 | ```
141 |
142 | **Python**:
143 |
144 | ```powershell
145 | pip install openai==1.65.2
146 | ```
147 |
148 | 3. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language
149 |
150 | - **C#**: appsettings.json
151 | - **Python**: .env
152 |
153 | 4. Update the configuration values to include:
154 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
155 | - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Foundry portal).
156 | 5. Save the configuration file.
157 |
158 | ## Add code to use your Azure OpenAI service model
159 |
160 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
161 |
162 | 1. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the code file for your preferred language. In the function that calls the Azure OpenAI model, under the comment ***Format and send the request to the model***, add the code to format and send the request to the model.
163 |
164 | **C#**: Program.cs
165 |
166 | ```csharp
167 | // Format and send the request to the model
168 | var chatCompletionsOptions = new ChatCompletionOptions()
169 | {
170 | Temperature = 0.7f,
171 | MaxOutputTokenCount = 800
172 | };
173 |
174 | // Get response from Azure OpenAI
175 | ChatCompletion response = await chatClient.CompleteChatAsync(
176 | [
177 | new SystemChatMessage(systemPrompt),
178 | new UserChatMessage(userPrompt),
179 | ],
180 | chatCompletionsOptions);
181 | ```
182 |
183 | **Python**: code-generation.py
184 |
185 | ```python
186 | # Format and send the request to the model
187 | messages =[
188 | {"role": "system", "content": system_message},
189 | {"role": "user", "content": user_message},
190 | ]
191 |
192 | # Call the Azure OpenAI model
193 | response = client.chat.completions.create(
194 | model=model,
195 | messages=messages,
196 | temperature=0.7,
197 | max_tokens=1000
198 | )
199 | ```
200 |
201 | 1. Save the changes to the code file.
202 |
203 | ## Run the application
204 |
205 | Now that your app has been configured, run it to try generating code for each use case. The use cases are numbered in the app and can be run in any order.
206 |
207 | > **Note**: Some users may experience rate limiting if calling the model too frequently. If you hit an error about a token rate limit, wait for a minute then try again.
208 |
209 | 1. In the **Explorer** pane, expand the **Labfiles/04-code-generation/sample-code** folder and review the function and the *go-fish* app for your language. These files will be used for the tasks in the app.
210 | 2. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
211 |
212 | - **C#**: `dotnet run`
213 | - **Python**: `python code-generation.py`
214 |
215 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
216 |
217 | 3. Choose option **1** to add comments to your code and enter the following prompt. Note that the response might take a few seconds for each of these tasks.
218 |
219 | ```prompt
220 | Add comments to the following function. Return only the commented code.\n---\n
221 | ```
222 |
223 | The results will be put into **result/app.txt**. Open that file up, and compare it to the function file in **sample-code**.
224 |
225 | 4. Next, choose option **2** to write unit tests for that same function and enter the following prompt.
226 |
227 | ```prompt
228 | Write four unit tests for the following function.\n---\n
229 | ```
230 |
231 | The results will replace what was in **result/app.txt**, and detail four unit tests for that function.
232 |
233 | 5. Next, choose option **3** to fix bugs in an app for playing Go Fish. Enter the following prompt.
234 |
235 | ```prompt
236 | Fix the code below for an app to play Go Fish with the user. Return only the corrected code.\n---\n
237 | ```
238 |
239 | The results will replace what was in **result/app.txt**, and should have very similar code with a few things corrected.
240 |
241 | - **C#**: Fixes are made on line 30 and 59
242 | - **Python**: Fixes are made on line 18 and 31
243 |
244 | The app for Go Fish in **sample-code** can be run if you replace the lines that contain bugs with the response from Azure OpenAI. If you run it without the fixes, it will not work correctly.
245 |
246 | > **Note**: Even though the code for this Go Fish app was corrected for some syntax errors, it's not a strictly accurate representation of the game. If you look closely, there are issues such as not checking whether the deck is empty when drawing cards, not removing pairs from the player's hand when they get a pair, and a few other bugs that require an understanding of card games to notice. This is a great example of how useful generative AI models can be for assisting with code generation, but their output can't be trusted as correct and needs to be verified by the developer.
247 |
248 | If you would like to see the full response from Azure OpenAI, you can set the **printFullResponse** variable to `True`, and rerun the app.
249 |
250 | ## Clean up
251 |
252 | When you're done with your Azure OpenAI resource, remember to delete the deployment or the entire resource in the **Azure portal** at `https://portal.azure.com`.
253 |
--------------------------------------------------------------------------------
/Instructions/Exercises/05-generate-images.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Generate images with a DALL-E model'
4 | status: stale
5 | ---
6 |
7 | # Generate images with a DALL-E model
8 |
9 | The Azure OpenAI Service includes an image-generation model named DALL-E. You can use this model to submit natural language prompts that describe a desired image, and the model will generate an original image based on the description you provide.
10 |
11 | In this exercise, you'll use a DALL-E version 3 model to generate images based on natural language prompts.
12 |
13 | This exercise will take approximately **25** minutes.
14 |
15 | ## Provision an Azure OpenAI resource
16 |
17 | Before you can use Azure OpenAI to generate images, you must provision an Azure OpenAI resource in your Azure subscription. The resource must be in a region where DALL-E models are supported.
18 |
19 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
20 | 1. Create an **Azure OpenAI** resource with the following settings:
21 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service, including DALL-E*
22 | - **Resource group**: *Choose or create a resource group*
23 | - **Region**: *Choose either **East US** or **Sweden Central***\*
24 | - **Name**: *A unique name of your choice*
25 | - **Pricing tier**: Standard S0
26 |
27 | > \* DALL-E 3 models are only available in Azure OpenAI service resources in the **East US** and **Sweden Central** regions.
28 |
29 | 1. Wait for deployment to complete. Then go to the deployed Azure OpenAI resource in the Azure portal.
30 | 1. On the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Foundry portal** (previously AI Studio).
31 | 1. In Azure AI Foundry portal, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one for DALL-E 3, create a new deployment of the **dall-e-3** model with the following settings:
32 | - **Deployment name**: dalle3
33 | - **Model version**: *Use default version*
34 | - **Deployment type**: Standard
35 | - **Capacity units**: 1K
36 | - **Content filter**: Default
37 | - **Enable dynamic quota**: Disabled
38 | 1. Once deployed, navigate back to the **Images** page in the left pane.
39 |
40 | ## Explore image-generation in the images playground
41 |
42 | You can use the Images playground in **Azure AI Foundry portal** to experiment with image generation.
43 |
44 | 1. In the **Images** playground section, your deployment of DALL-E 3 should be automatically selected. If not, select it from the deployment dropdown.
45 | 1. In the **Prompt** box, enter a description of an image you'd like to generate. For example, `An elephant on a skateboard`. Then select **Generate** and view the image that is generated.
46 |
47 | 1. Modify the prompt to provide a more specific description. For example `An elephant on a skateboard in the style of Picasso`. Then generate the new image and review the results.
48 |
49 | 
50 |
51 | ## Use the REST API to generate images
52 |
53 | The Azure OpenAI service provides a REST API that you can use to submit prompts for content generation - including images generated by a DALL-E model.
54 |
55 | ### Prepare to develop an app in Visual Studio Code
56 |
57 | Now let's explore how you could build a custom app that uses Azure OpenAI service to generate images. You'll develop your app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.
58 |
59 | > **Tip**: If you have already cloned the **mslearn-openai** repo, open it in Visual Studio code. Otherwise, follow these steps to clone it to your development environment.
60 |
61 | 1. Start Visual Studio Code.
62 | 2. Open the palette (SHIFT+CTRL+P) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
63 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
64 |
65 | > **Note**: If Visual Studio Code shows you a pop-up message to prompt you to trust the code you are opening, click on **Yes, I trust the authors** option in the pop-up.
66 |
67 | 4. Wait while additional files are installed to support the C# code projects in the repo.
68 |
69 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
70 |
71 | ### Configure your application
72 |
73 | Applications for both C# and Python have been provided. Both apps feature the same functionality. First, you'll add the endpoint and key for your Azure OpenAI resource to the app's configuration file.
74 |
75 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/05-image-generation** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
76 | 2. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language
77 |
78 | - **C#**: appsettings.json
79 | - **Python**: .env
80 |
81 | 3. Update the configuration values to include the **endpoint** and **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal).
82 | 4. Save the configuration file.
83 |
84 | ### View application code
85 |
86 | Now you're ready to explore the code used to call the REST API and generate an image.
87 |
88 | 1. In the **Explorer** pane, select the main code file for your application:
89 |
90 | - C#: `Program.cs`
91 | - Python: `generate-image.py`
92 |
93 | 2. Review the code that the file contains, noting the following key features:
94 | - The code makes an HTTPS request to the endpoint for your service, including the key for your service in the header. Both of these values are obtained from the configuration file.
95 | - The request includes some parameters, including the prompt on which the image should be based, the number of images to generate, and the size of the generated image(s).
96 | - The response includes a revised prompt that the DALL-E model extrapolated from the user-provided prompt to make it more descriptive, and the URL for the generated image. (A rough sketch of such a request appears after the note below.)
97 |
98 | > **Important**: If you named your deployment anything other than the recommended *dalle3*, you'll need to update the code to use the name of your deployment.
99 |
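As a rough illustration of that request (not the exact code in the lab files), a Python call to the image-generation REST endpoint might look like the following sketch; the deployment name and API version shown are assumptions to adjust to your own configuration:

```python
# Illustrative sketch of the DALL-E image-generation REST call (values are assumptions)
import requests

endpoint = "https://<your-openai-resource>.openai.azure.com"   # from your configuration file
api_key = "<your-key>"                                          # from your configuration file
deployment = "dalle3"                                           # the recommended deployment name
api_version = "2024-02-01"                                      # assumed API version

url = f"{endpoint}/openai/deployments/{deployment}/images/generations?api-version={api_version}"
headers = {"api-key": api_key, "Content-Type": "application/json"}
body = {
    "prompt": "A giraffe flying a kite",  # the description of the image to generate
    "n": 1,                               # number of images
    "size": "1024x1024"                   # size of the generated image(s)
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
result = response.json()

print(result["data"][0]["revised_prompt"])  # the model's more descriptive version of the prompt
print(result["data"][0]["url"])             # URL of the generated image
```
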
100 | ### Run the app
101 |
102 | Now that you've reviewed the code, it's time to run it and generate some images.
103 |
104 | 1. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then enter the appropriate command to run your application:
105 |
106 | **C#**
107 | ```
108 | dotnet run
109 | ```
110 |
111 | **Python**
112 | ```
113 | pip install requests
114 | python generate-image.py
115 | ```
116 |
117 | 3. When prompted, enter a description for an image. For example, *A giraffe flying a kite*.
118 |
119 | 4. Wait for the image to be generated - a hyperlink will be displayed in the terminal pane. Then select the hyperlink to open a new browser tab and review the image that was generated.
120 |
121 | > **TIP**: If the app doesn't return a response, wait a minute and try again. Newly deployed resources can take up to 5 minutes to become available.
122 |
123 | 5. Close the browser tab containing the generated image and re-run the app to generate a new image with a different prompt.
124 |
125 | ## Clean up
126 |
127 | When you're done with your Azure OpenAI resource, remember to delete the resource in the **Azure portal** at `https://portal.azure.com`.
128 |
--------------------------------------------------------------------------------
/Instructions/Exercises/06-use-own-data.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service'
4 | status: stale
5 | ---
6 |
7 | # Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service
8 |
9 | The Azure OpenAI Service enables you to use your own data with the intelligence of the underlying LLM. You can limit the model to only use your data for pertinent topics, or blend it with results from the pre-trained model.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer working for Margie's Travel Agency. You will explore how to use Azure AI Search to index your own data and use it with Azure OpenAI to augment prompts.
12 |
13 | This exercise will take approximately **30** minutes.
14 |
15 | ## Provision Azure resources
16 |
17 | To complete this exercise, you'll need:
18 |
19 | - An Azure OpenAI resource.
20 | - An Azure AI Search resource.
21 | - An Azure Storage Account resource.
22 |
23 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
24 | 2. Create an **Azure OpenAI** resource with the following settings:
25 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
26 | - **Resource group**: *Choose or create a resource group*
27 | - **Region**: *Make a **random** choice from any of the following regions*\*
28 | - Australia East
29 | - Canada East
30 | - East US
31 | - East US 2
32 | - France Central
33 | - Japan East
34 | - North Central US
35 | - Sweden Central
36 | - Switzerland North
37 | - UK South
38 | - **Name**: *A unique name of your choice*
39 | - **Pricing tier**: Standard S0
40 |
41 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
42 |
43 | 3. While the Azure OpenAI resource is being provisioned, create an **Azure AI Search** resource with the following settings:
44 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
45 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
46 | - **Service name**: *A unique name of your choice*
47 | - **Location**: *The region in which you provisioned your Azure OpenAI resource*
48 | - **Pricing tier**: Basic
49 | 4. While the Azure AI Search resource is being provisioned, create a **Storage account** resource with the following settings:
50 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
51 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
52 | - **Storage account name**: *A unique name of your choice*
53 | - **Region**: *The region in which you provisioned your Azure OpenAI resource*
54 | - **Performance**: Standard
55 | - **Redundancy**: Locally redundant storage (LRS)
56 | 5. After all three of the resources have been successfully deployed in your Azure subscription, review them in the Azure portal and gather the following information (which you'll need later in the exercise):
57 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
58 | - The endpoint for your Azure AI Search service (the **Url** value on the overview page for your Azure AI Search resource in the Azure portal).
59 | - A **primary admin key** for your Azure AI Search resource (available in the **Keys** page for your Azure AI Search resource in the Azure portal).
60 |
61 | ## Upload your data
62 |
63 | You're going to ground the prompts you use with a generative AI model by using your own data. In this exercise, the data consists of a collection of travel brochures from the fictional *Margies Travel* company.
64 |
65 | 1. In a new browser tab, download an archive of brochure data from `https://aka.ms/own-data-brochures`. Extract the brochures to a folder on your PC.
66 | 1. In the Azure portal, navigate to your storage account and view the **Storage browser** page.
67 | 1. Select **Blob containers** and then add a new container named `margies-travel`.
68 | 1. Select the **margies-travel** container, and then upload the .pdf brochures you extracted previously to the root folder of the blob container.
69 |
70 | ## Deploy AI models
71 |
72 | You're going to use two AI models in this exercise:
73 |
74 | - A text embedding model to *vectorize* the text in the brochures so it can be indexed efficiently for use in grounding prompts.
75 | - A GPT model that your application can use to generate responses to prompts that are grounded in your data.
76 |
77 | To deploy these models, you'll use AI Foundry.
78 |
79 | 1. In the Azure portal, navigate to your Azure OpenAI resource. Then use the link to open your resource in **Azure AI Foundry portal**.
80 | 1. In Azure AI Foundry portal, on the **Deployments** page, view your existing model deployments. Then create a new base model deployment of the **text-embedding-ada-002** model with the following settings:
81 | - **Deployment name**: text-embedding-ada-002
82 | - **Model**: text-embedding-ada-002
83 | - **Model version**: *The default version*
84 | - **Deployment type**: Standard
85 | - **Tokens per minute rate limit**: 5K\*
86 | - **Content filter**: Default
87 | - **Enable dynamic quota**: Enabled
88 | 1. After the text embedding model has been deployed, return to the **Deployments** page and create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
89 | - **Deployment name**: gpt-35-turbo-16k
90 | - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
91 | - **Model version**: *The default version*
92 | - **Deployment type**: Standard
93 | - **Tokens per minute rate limit**: 5K\*
94 | - **Content filter**: Default
95 | - **Enable dynamic quota**: Enabled
96 |
97 | > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
98 |
99 | ## Create an index
100 |
101 | To make it easy to use your own data in a prompt, you'll index it using Azure AI Search. You'll use the text embedding model you deployed previously during the indexing process to *vectorize* the text data (which results in each text token in the index being represented by numeric vectors - making it compatible with the way a generative AI model represents text).
102 |
103 | 1. In the Azure portal, navigate to your Azure AI Search resource.
104 | 1. On the **Overview** page, select **Import and vectorize data**.
105 | 1. In the **Setup your data connection** page, select **Azure Blob Storage** and configure the data source with the following settings:
106 | - **Subscription**: The Azure subscription in which you provisioned your storage account.
107 | - **Blob storage account**: The storage account you created previously.
108 | - **Blob container**: margies-travel
109 | - **Blob folder**: *Leave blank*
110 | - **Enable deletion tracking**: Unselected
111 | - **Authenticate using managed identity**: Unselected
112 | 1. On the **Vectorize your text** page, select the following settings:
113 | - **Kind**: Azure OpenAI
114 | - **Subscription**: The Azure subscription in which you provisioned your Azure OpenAI service.
115 | - **Azure OpenAI Service**: Your Azure OpenAI Service resource
116 | - **Model deployment**: text-embedding-ada-002
117 | - **Authentication type**: API key
118 | - **I acknowledge that connecting to an Azure OpenAI service will incur additional costs to my account**: Selected
119 | 1. On the next page, do not select the option to vectorize images or extract data with AI skills.
120 | 1. On the next page, enable semantic ranking and schedule the indexer to run once.
121 | 1. On the final page, set the **Objects name prefix** to `margies-index` and then create the index.
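122 | 
123 | The indexer may take a minute or two to process all of the brochures. If you'd like to confirm that the index is queryable before moving on, you can optionally run a quick keyword search against it from any Python environment. This is an illustrative sketch rather than part of the lab files; it assumes the **azure-search-documents** package is installed (`pip install azure-search-documents`), and the endpoint and key values are placeholders:
124 | 
125 | ```python
126 | from azure.core.credentials import AzureKeyCredential
127 | from azure.search.documents import SearchClient
128 | 
129 | # Placeholder values - use your search service endpoint and an admin key
130 | search_client = SearchClient(
131 |     endpoint="https://<your-search-service>.search.windows.net",
132 |     index_name="margies-index",
133 |     credential=AzureKeyCredential("<your-search-admin-key>"),
134 | )
135 | 
136 | # Run a simple keyword query; the exact field names depend on how the wizard built the index
137 | results = search_client.search(search_text="London", top=3)
138 | for result in results:
139 |     print(result.get("title"), "-", str(result.get("chunk", ""))[:80])
140 | ```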
122 |
123 | ## Prepare to develop an app in Visual Studio Code
124 |
125 | Now let's explore the use of your own data in an app that uses the Azure OpenAI service SDK. You'll develop your app using Visual Studio Code. The code files for your app have been provided in a GitHub repo.
126 |
127 | > **Tip**: If you have already cloned the **mslearn-openai** repo, open it in Visual Studio Code. Otherwise, follow these steps to clone it to your development environment.
128 |
129 | 1. Start Visual Studio Code.
130 | 2. Open the palette (SHIFT+CTRL+P) and run a **Git: Clone** command to clone the `https://github.com/MicrosoftLearning/mslearn-openai` repository to a local folder (it doesn't matter which folder).
131 | 3. When the repository has been cloned, open the folder in Visual Studio Code.
132 |
133 | > **Note**: If Visual Studio Code shows a pop-up message prompting you to trust the code you are opening, select the **Yes, I trust the authors** option in the pop-up.
134 |
135 | 4. Wait while additional files are installed to support the C# code projects in the repo.
136 |
137 | > **Note**: If you are prompted to add required assets to build and debug, select **Not Now**.
138 |
139 | ## Configure your application
140 |
141 | Applications for both C# and Python have been provided, and both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource.
142 |
143 | 1. In Visual Studio Code, in the **Explorer** pane, browse to the **Labfiles/06-use-own-data** folder and expand the **CSharp** or **Python** folder depending on your language preference. Each folder contains the language-specific files for an app into which you're going to integrate Azure OpenAI functionality.
144 | 2. Right-click the **CSharp** or **Python** folder containing your code files and open an integrated terminal. Then install the Azure OpenAI SDK package by running the appropriate command for your language preference:
145 |
146 | **C#**:
147 |
148 | ```
149 | dotnet add package Azure.AI.OpenAI --version 1.0.0-beta.14
150 | ```
151 |
152 | **Python**:
153 |
154 | ```
155 | pip install openai==1.55.3
156 | ```
157 |
158 | 3. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the configuration file for your preferred language:
159 |
160 | - **C#**: appsettings.json
161 | - **Python**: .env
162 |
163 | 4. Update the configuration values to include:
164 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
165 | - The **deployment name** you specified for your gpt-35-turbo-16k model deployment (available in the **Deployments** page in Azure AI Foundry portal).
166 | - The endpoint for your search service (the **Url** value on the overview page for your search resource in the Azure portal).
167 | - A **key** for your search resource (available in the **Keys** page for your search resource in the Azure portal - you can use either of the admin keys)
168 | - The name of the search index (which should be `margies-index`).
169 | 5. Save the configuration file.
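170 | 
171 | For example, the completed Python **.env** file might look similar to this (the values shown are placeholders; keep whatever variable names are already present in the provided file):
172 | 
173 | ```
174 | AZURE_OAI_ENDPOINT=https://<your-openai-resource>.openai.azure.com/
175 | AZURE_OAI_KEY=<your-azure-openai-key>
176 | AZURE_OAI_DEPLOYMENT=gpt-35-turbo-16k
177 | AZURE_SEARCH_ENDPOINT=https://<your-search-service>.search.windows.net
178 | AZURE_SEARCH_KEY=<your-search-admin-key>
179 | AZURE_SEARCH_INDEX=margies-index
180 | ```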
170 |
171 | ### Add code to use the Azure OpenAI service
172 |
173 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
174 |
175 | 1. In the **Explorer** pane, in the **CSharp** or **Python** folder, open the code file for your preferred language, and replace the comment ***Configure your data source*** with code to configure your Azure AI Search index as a data source for the model:
176 |
177 | **C#**: OwnData.cs
178 |
179 | ```csharp
180 | // Configure your data source
181 | AzureSearchChatExtensionConfiguration ownDataConfig = new()
182 | {
183 | SearchEndpoint = new Uri(azureSearchEndpoint),
184 | Authentication = new OnYourDataApiKeyAuthenticationOptions(azureSearchKey),
185 | IndexName = azureSearchIndex
186 | };
187 | ```
188 |
189 | **Python**: ownData.py
190 |
191 | ```python
192 | # Configure your data source
193 | extension_config = dict(dataSources = [
194 | {
195 | "type": "AzureCognitiveSearch",
196 | "parameters": {
197 | "endpoint":azure_search_endpoint,
198 | "key": azure_search_key,
199 | "indexName": azure_search_index,
200 | }
201 | }]
202 | )
203 | ```
204 |
205 | 2. Review the rest of the code, noting that the *extensions* section of the request body is used to provide information about the data source settings.
206 |
207 | 3. Save the changes to the code file.
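208 | 
209 | For reference, one way this data source configuration is typically supplied with the OpenAI Python client is through the `extra_body` parameter of the chat completion request. The following is an illustrative sketch rather than a line-for-line copy of the provided file; it assumes `client`, `deployment`, and `extension_config` are the objects already defined in the code:
210 | 
211 | ```python
212 | # Illustrative sketch only - the provided ownData.py already contains the request code
213 | response = client.chat.completions.create(
214 |     model=deployment,
215 |     temperature=0.5,
216 |     max_tokens=1000,
217 |     messages=[
218 |         {"role": "system", "content": "You are a helpful travel agent."},
219 |         {"role": "user", "content": "Tell me about London"}
220 |     ],
221 |     extra_body=extension_config  # passes the data source configuration to the service
222 | )
223 | print(response.choices[0].message.content)
224 | ```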
208 |
209 | ## Run your application
210 |
211 | Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice that the only difference between the options is the content of the prompt; all other parameters (such as token count and temperature) remain the same for each request.
212 |
213 | 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
214 |
215 | - **C#**: `dotnet run`
216 | - **Python**: `python ownData.py`
217 |
218 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
219 |
220 | 2. Review the response to the prompt `Tell me about London`, which should include an answer as well as some details of the data used to ground the prompt, which was obtained from your search service.
221 |
222 | > **Tip**: If you want to see the citations from your search index, set the ***showCitations*** variable near the top of the code file to ***true*** (***True*** in the Python version).
223 |
224 | ## Clean up
225 |
226 | When you're done with your Azure OpenAI resource, remember to delete the resources in the **Azure portal** at `https://portal.azure.com`. Be sure to also include the storage account and search resource, as those can incur a relatively large cost.
227 |
--------------------------------------------------------------------------------
/Instructions/Labs/02-use-own-data.md:
--------------------------------------------------------------------------------
1 | ---
2 | lab:
3 | title: 'Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service'
4 | description: Use Azure AI Search to index your own data and use it to ground prompts submitted to Azure OpenAI.
5 | ---
6 |
7 | # Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service
8 |
9 | The Azure OpenAI Service enables you to use your own data with the intelligence of the underlying LLM. You can limit the model to only use your data for pertinent topics, or blend it with results from the pre-trained model.
10 |
11 | In the scenario for this exercise, you will perform the role of a software developer working for Margie's Travel Agency. You will explore how to use Azure AI Search to index your own data and use it with Azure OpenAI to augment prompts.
12 |
13 | This exercise will take approximately **30** minutes.
14 |
15 | ## Provision Azure resources
16 |
17 | To complete this exercise, you'll need:
18 |
19 | - An Azure OpenAI resource.
20 | - An Azure AI Search resource.
21 | - An Azure Storage Account resource.
22 |
23 | 1. Sign into the **Azure portal** at `https://portal.azure.com`.
24 | 2. Create an **Azure OpenAI** resource with the following settings:
25 | - **Subscription**: *Select an Azure subscription that has been approved for access to the Azure OpenAI service*
26 | - **Resource group**: *Choose or create a resource group*
27 | - **Region**: *Make a **random** choice from any of the following regions*\*
28 | - East US
29 | - East US 2
30 | - North Central US
31 | - South Central US
32 | - Sweden Central
33 | - West US
34 | - West US 3
35 | - **Name**: *A unique name of your choice*
36 | - **Pricing tier**: Standard S0
37 |
38 | > \* Azure OpenAI resources are constrained by regional quotas. The listed regions include default quota for the model type(s) used in this exercise. Randomly choosing a region reduces the risk of a single region reaching its quota limit in scenarios where you are sharing a subscription with other users. In the event of a quota limit being reached later in the exercise, there's a possibility you may need to create another resource in a different region.
39 |
40 | 3. While the Azure OpenAI resource is being provisioned, create an **Azure AI Search** resource with the following settings:
41 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
42 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
43 | - **Service name**: *A unique name of your choice*
44 | - **Location**: *The region in which you provisioned your Azure OpenAI resource*
45 | - **Pricing tier**: Basic
46 | 4. While the Azure AI Search resource is being provisioned, create a **Storage account** resource with the following settings:
47 | - **Subscription**: *The subscription in which you provisioned your Azure OpenAI resource*
48 | - **Resource group**: *The resource group in which you provisioned your Azure OpenAI resource*
49 | - **Storage account name**: *A unique name of your choice*
50 | - **Region**: *The region in which you provisioned your Azure OpenAI resource*
51 | - **Primary service**: Azure Blob Storage or Azure Data Lake Storage Gen 2
52 | - **Performance**: Standard
53 | - **Redundancy**: Locally redundant storage (LRS)
54 | 5. After all three of the resources have been successfully deployed in your Azure subscription, review them in the Azure portal and gather the following information (which you'll need later in the exercise):
55 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
56 | - The endpoint for your Azure AI Search service (the **Url** value on the overview page for your Azure AI Search resource in the Azure portal).
57 | - A **primary admin key** for your Azure AI Search resource (available in the **Keys** page for your Azure AI Search resource in the Azure portal).
58 |
59 | ## Upload your data
60 |
61 | You're going to ground the prompts you use with a generative AI model by using your own data. In this exercise, the data consists of a collection of travel brochures from the fictional *Margies Travel* company.
62 |
63 | 1. In a new browser tab, download an archive of brochure data from `https://aka.ms/own-data-brochures`. Extract the brochures to a folder on your PC.
64 | 1. In the Azure portal, navigate to your storage account and view the **Storage browser** page.
65 | 1. Select **Blob containers** and then add a new container named `margies-travel`.
66 | 1. Select the **margies-travel** container, and then upload the .pdf brochures you extracted previously to the root folder of the blob container.
67 |
68 | ## Deploy AI models
69 |
70 | You're going to use two AI models in this exercise:
71 |
72 | - A text embedding model to *vectorize* the text in the brochures so it can be indexed efficiently for use in grounding prompts.
73 | - A GPT model that your application can use to generate responses to prompts that are grounded in your data.
74 |
75 | To deploy these models, you'll use the Azure CLI from the Azure Cloud Shell.
78 |
79 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***Bash*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.
80 |
81 | > **Note**: If you have previously created a cloud shell that uses a *PowerShell* environment, switch it to ***Bash***.
82 | 
83 | 1. Enter the following command to deploy the **text-embedding-ada-002** model, replacing the `-g` and `-n` values with the name of your resource group and the name of your Azure OpenAI resource:
84 | 
83 | ```dotnetcli
84 | az cognitiveservices account deployment create \
85 | -g \
86 | -n \
87 | --deployment-name text-embedding-ada-002 \
88 | --model-name text-embedding-ada-002 \
89 | --model-version "2" \
90 | --model-format OpenAI \
91 | --sku-name "Standard" \
92 | --sku-capacity 5
93 | ```
94 |
95 | > **Note**: Sku-capacity is measured in thousands of tokens per minute. A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
96 |
97 | 1. After the text embedding model has been deployed, enter the following command to create a new deployment of the **gpt-4o** model:
98 |
99 | ```dotnetcli
100 | az cognitiveservices account deployment create \
101 | -g \
102 | -n \
103 | --deployment-name gpt-4o \
104 | --model-name gpt-4o \
105 | --model-version "2024-05-13" \
106 | --model-format OpenAI \
107 | --sku-name "Standard" \
108 | --sku-capacity 5
109 | ```
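110 | 
111 | Both deployments can take a few moments to complete. Optionally, you can confirm that they were created by listing the deployments for your resource (as in the commands above, substitute your resource group name and Azure OpenAI resource name for the placeholders):
112 | 
113 | ```
114 | az cognitiveservices account deployment list -g <resource-group> -n <openai-resource-name> -o table
115 | ```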
110 |
111 | ## Create an index
112 |
113 | To make it easy to use your own data in a prompt, you'll index it using Azure AI Search. You'll use the text embedding model to *vectorize* the text data (which results in each text token in the index being represented by a numeric vector, making it compatible with the way a generative AI model represents text).
114 |
115 | 1. In the Azure portal, navigate to your Azure AI Search resource.
116 | 1. On the **Overview** page, select **Import and vectorize data**.
117 | 1. In the **Setup your data connection** page, select **Azure Blob Storage** and configure the data source with the following settings:
118 | - **Subscription**: The Azure subscription in which you provisioned your storage account.
119 | - **Blob storage account**: The storage account you created previously.
120 | - **Blob container**: margies-travel
121 | - **Blob folder**: *Leave blank*
122 | - **Enable deletion tracking**: Unselected
123 | - **Authenticate using managed identity**: Unselected
124 | 1. On the **Vectorize your text** page, select the following settings:
125 | - **Kind**: Azure OpenAI
126 | - **Subscription**: The Azure subscription in which you provisioned your Azure OpenAI service.
127 | - **Azure OpenAI Service**: Your Azure OpenAI Service resource
128 | - **Model deployment**: text-embedding-ada-002
129 | - **Authentication type**: API key
130 | - **I acknowledge that connecting to an Azure OpenAI service will incur additional costs to my account**: Selected
131 | 1. On the next page, do **not** select the option to vectorize images or extract data with AI skills.
132 | 1. On the next page, enable semantic ranking and schedule the indexer to run once.
133 | 1. On the final page, set the **Objects name prefix** to `margies-index` and then create the index.
134 |
135 | ## Prepare to develop an app in Cloud Shell
136 |
137 | Now let's explore the use of your own data in an app that uses the Azure OpenAI service SDK. You'll develop your app using Cloud Shell. The code files for your app have been provided in a GitHub repo.
138 |
139 | > **Tip**: If you have already cloned the **mslearn-openai** repo, navigate to the exercise's folder in Cloud Shell. Otherwise, follow these steps to clone it to your development environment.
140 |
141 | 1. In the cloud shell toolbar, select **Switch to PowerShell**, and in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).
142 |
143 | **Ensure you've switched to the classic version of the cloud shell before continuing.**
144 |
145 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text):
146 |
147 | ```
148 | rm -r mslearn-openai -f
149 | git clone https://github.com/microsoftlearning/mslearn-openai mslearn-openai
150 | ```
151 |
152 | > **Tip**: As you enter commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.
153 |
154 | ## Configure your application
155 |
156 | Applications for both C# and Python have been provided, and both apps feature the same functionality. First, you'll complete some key parts of the application to enable using your Azure OpenAI resource.
157 |
158 | 1. After the repo has been cloned, navigate to the folder containing the chat application code files:
159 |
160 | Use the appropriate command below, depending on your choice of programming language.
161 |
162 | **Python**
163 |
164 | ```
165 | cd mslearn-openai/Labfiles/02-use-own-data/Python
166 | ```
167 |
168 | **C#**
169 |
170 | ```
171 | cd mslearn-openai/Labfiles/02-use-own-data/CSharp
172 | ```
173 |
174 | 1. In the cloud shell command-line pane, enter the following commands to install the libraries you'll use:
175 |
176 | **Python**
177 |
178 | ```
179 | python -m venv labenv
180 | ./labenv/bin/Activate.ps1
181 | pip install python-dotenv openai==1.65.2
182 | ```
183 |
184 | **C#**
185 |
186 | ```
187 | dotnet add package Azure.AI.OpenAI --version 2.1.0
188 | dotnet add package Azure.Search.Documents --version 11.6.0
189 | ```
190 |
191 | 1. Enter the following command to edit the configuration file that has been provided:
192 |
193 | **Python**
194 |
195 | ```
196 | code .env
197 | ```
198 |
199 | **C#**
200 |
201 | ```
202 | code appsettings.json
203 | ```
204 |
205 | The file is opened in a code editor.
206 |
207 | 1. Update the configuration values to include:
208 | - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
209 | - The **deployment name** you specified for your gpt-4o model deployment (should be `gpt-4o`).
210 | - The endpoint for your search service (the **Url** value on the overview page for your search resource in the Azure portal).
211 | - A **key** for your search resource (available in the **Keys** page for your search resource in the Azure portal - you can use either of the admin keys)
212 | - The name of the search index (which should be `margies-index`).
213 |
214 | 1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.
215 |
216 | ### Add code to use the Azure OpenAI service
217 |
218 | Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
219 |
220 | > **Tip**: As you add code, be sure to maintain the correct indentation.
221 |
222 | 1. Enter the following command to edit the code file that has been provided:
223 |
224 | **Python**
225 |
226 | ```
227 | code ownData.py
228 | ```
229 |
230 | **C#**
231 |
232 | ```
233 | code OwnData.cs
234 | ```
235 |
236 | 1. In the code editor, replace the comment ***Configure your data source*** with code to use your index as a data source for chat completions:
237 |
238 | **Python**
239 |
240 | ```python
241 | # Configure your data source
242 | text = input('\nEnter a question:\n')
243 |
244 | completion = client.chat.completions.create(
245 | model=deployment,
246 | messages=[
247 | {
248 | "role": "user",
249 | "content": text,
250 | },
251 | ],
252 | extra_body={
253 | "data_sources":[
254 | {
255 | "type": "azure_search",
256 | "parameters": {
257 | "endpoint": os.environ["AZURE_SEARCH_ENDPOINT"],
258 | "index_name": os.environ["AZURE_SEARCH_INDEX"],
259 | "authentication": {
260 | "type": "api_key",
261 | "key": os.environ["AZURE_SEARCH_KEY"],
262 | }
263 | }
264 | }
265 | ],
266 | }
267 | )
268 | ```
269 |
270 | **C#**
271 |
272 | ```csharp
273 | // Configure your data source
274 | // Extension methods to use data sources with options are subject to SDK surface changes. Suppress the warning to acknowledge this and use the subject-to-change AddDataSource method.
275 | #pragma warning disable AOAI001
276 |
277 | ChatCompletionOptions chatCompletionsOptions = new ChatCompletionOptions()
278 | {
279 | MaxOutputTokenCount = 600,
280 | Temperature = 0.9f,
281 | };
282 |
283 | chatCompletionsOptions.AddDataSource(new AzureSearchChatDataSource()
284 | {
285 | Endpoint = new Uri(azureSearchEndpoint),
286 | IndexName = azureSearchIndex,
287 | Authentication = DataSourceAuthentication.FromApiKey(azureSearchKey),
288 | });
289 | ```
290 |
291 | 1. Save the changes to the code file.
292 |
293 | ## Run your application
294 |
295 | Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice that the only difference between the options is the content of the prompt; all other parameters (such as token count and temperature) remain the same for each request.
296 |
297 | 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
298 |
299 | - **C#**: `dotnet run`
300 | - **Python**: `python ownData.py`
301 |
302 | > **Tip**: You can use the **Maximize panel size** (**^**) icon in the terminal toolbar to see more of the console text.
303 |
304 | 2. Review the response to the prompt `Tell me about London`, which should include an answer as well as some details of the data used to ground the prompt, which was obtained from your search service.
305 |
306 | > **Tip**: If you want to see the citations from your search index, set the ***showCitations*** variable near the top of the code file to ***true*** (***True*** in the Python version).
307 |
308 | ## Clean up
309 |
310 | When you're done with your Azure OpenAI resource, remember to delete the resources in the **Azure portal** at `https://portal.azure.com`. Be sure to also include the storage account and search resource, as those can incur a relatively large cost.
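311 | 
312 | If you created all of the resources for this exercise in a single, dedicated resource group, you can optionally delete everything at once from the cloud shell instead of removing each resource individually. This is a sketch only; substitute your own resource group name, and run it only if the group contains nothing else you need:
313 | 
314 | ```
315 | az group delete --name <resource-group> --yes --no-wait
316 | ```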
311 |
--------------------------------------------------------------------------------
/Instructions/media/ai-foundry-home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Instructions/media/ai-foundry-home.png
--------------------------------------------------------------------------------
/Instructions/media/ai-foundry-project.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Instructions/media/ai-foundry-project.png
--------------------------------------------------------------------------------
/Instructions/media/cloudshell-launch-portal.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Instructions/media/cloudshell-launch-portal.png
--------------------------------------------------------------------------------
/Instructions/media/images-playground-new-style.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Instructions/media/images-playground-new-style.png
--------------------------------------------------------------------------------
/Instructions/media/images-playground.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Instructions/media/images-playground.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Microsoft
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | enable
7 | enable
8 | 12
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | PreserveNewest
20 |
21 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | // Implicit using statements are included
2 | using System.Text;
3 | using System.ClientModel;
4 | using System.Text.Json;
5 | using Microsoft.Extensions.Configuration;
6 | using Microsoft.Extensions.Configuration.Json;
7 | using Azure;
8 |
9 | // Add Azure OpenAI packages
10 |
11 |
12 | // Build a config object and retrieve user settings.
13 | class ChatMessageLab
14 | {
15 |
16 | static string? oaiEndpoint;
17 | static string? oaiKey;
18 | static string? oaiDeploymentName;
19 | static void Main(string[] args)
20 | {
21 | IConfiguration config = new ConfigurationBuilder()
22 | .AddJsonFile("appsettings.json")
23 | .Build();
24 |
25 | oaiEndpoint = config["AzureOAIEndpoint"];
26 | oaiKey = config["AzureOAIKey"];
27 | oaiDeploymentName = config["AzureOAIDeploymentName"];
28 |
29 | //Initialize messages list
30 |
31 | do {
32 | // Pause for system message update
33 | Console.WriteLine("-----------\nPausing the app to allow you to change the system prompt.\nPress any key to continue...");
34 | Console.ReadKey();
35 |
36 | Console.WriteLine("\nUsing system message from system.txt");
37 | string systemMessage = System.IO.File.ReadAllText("system.txt");
38 | systemMessage = systemMessage.Trim();
39 |
40 | Console.WriteLine("\nEnter user message or type 'quit' to exit:");
41 | string userMessage = Console.ReadLine() ?? "";
42 | userMessage = userMessage.Trim();
43 |
44 | if (systemMessage.ToLower() == "quit" || userMessage.ToLower() == "quit")
45 | {
46 | break;
47 | }
48 | else if (string.IsNullOrEmpty(systemMessage) || string.IsNullOrEmpty(userMessage))
49 | {
50 | Console.WriteLine("Please enter a system and user message.");
51 | continue;
52 | }
53 | else
54 | {
55 | // Format and send the request to the model
56 |
57 | GetResponseFromOpenAI(systemMessage, userMessage);
58 | }
59 | } while (true);
60 |
61 | }
62 |
63 | // Define the function that gets the response from Azure OpenAI endpoint
64 | private static void GetResponseFromOpenAI(string systemMessage, string userMessage)
65 | {
66 | Console.WriteLine("\nSending prompt to Azure OpenAI endpoint...\n\n");
67 |
68 | if(string.IsNullOrEmpty(oaiEndpoint) || string.IsNullOrEmpty(oaiKey) || string.IsNullOrEmpty(oaiDeploymentName) )
69 | {
70 | Console.WriteLine("Please check your appsettings.json file for missing or incorrect values.");
71 | return;
72 | }
73 |
74 | // Configure the Azure OpenAI client
75 |
76 |
77 |
78 | // Get response from Azure OpenAI
79 |
80 |
81 |
82 |
83 | }
84 |
85 | }
86 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": ""
5 | }
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/CSharp/grounding.txt:
--------------------------------------------------------------------------------
1 | Contoso is a wildlife rescue organization that has dedicated itself to the protection and preservation of animals and their habitats. The organization has been working tirelessly to protect the wildlife and their habitats from the threat of extinction. Contoso's mission is to provide a safe and healthy environment for all animals in their care.
2 |
3 | One of the most popular animals that Contoso rescues and cares for is the red panda. Known for their fluffy tails and adorable faces, red pandas have captured the hearts of children all over the world. These playful creatures are native to the Himalayas and are listed as endangered due to habitat loss and poaching.
4 |
5 | Contoso's red panda rescue program is one of their most successful initiatives. The organization works with local communities to protect the red panda's natural habitat and provides medical care for those that are rescued. Contoso's team of experts works tirelessly to ensure that all rescued red pandas receive the best possible care and are eventually released back into the wild.
6 |
7 | Children, in particular, have a soft spot for red pandas. These playful creatures are often featured in children's books, cartoons, and movies. With their fluffy tails and bright eyes, it's easy to see why children are drawn to them. Contoso understands this and has made it their mission to educate children about the importance of wildlife conservation and the role they can play in protecting these endangered species.
8 |
9 | Contoso's red panda rescue program is not only helping to save these adorable creatures from extinction but is also providing a unique opportunity for children to learn about wildlife conservation. The organization offers educational programs and tours that allow children to get up close and personal with the red pandas. These programs are designed to teach children about the importance of protecting wildlife and their habitats.
10 |
11 | In addition to their red panda rescue program, Contoso also rescues and cares for a variety of other animals, including elephants, tigers, and rhinoceros. The organization is committed to protecting all animals in their care and works tirelessly to provide them with a safe and healthy environment
12 |
13 | ---
14 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/CSharp/system.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/01-app-develop/CSharp/system.txt
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/Python/application.py:
--------------------------------------------------------------------------------
1 | import os
2 | import asyncio
3 | from dotenv import load_dotenv
4 |
5 | # Add Azure OpenAI package
6 |
7 |
8 |
9 | async def main():
10 |
11 | try:
12 |
13 | # Get configuration settings
14 | load_dotenv()
15 | azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
16 | azure_oai_key = os.getenv("AZURE_OAI_KEY")
17 | azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
18 |
19 | # Configure the Azure OpenAI client
20 |
21 | #Initialize messages array
22 |
23 | while True:
24 | # Pause the app to allow the user to enter the system prompt
25 | print("------------------\nPausing the app to allow you to change the system prompt.\nPress enter to continue...")
26 | input()
27 |
28 | # Read in system message and prompt for user message
29 | system_text = open(file="system.txt", encoding="utf8").read().strip()
30 | user_text = input("Enter user message, or 'quit' to exit: ")
31 | if user_text.lower() == 'quit' or system_text.lower() == 'quit':
32 | print('Exiting program...')
33 | break
34 |
35 | # Format and send the request to the model
36 |
37 | await call_openai_model(system_message = system_text,
38 | user_message = user_text,
39 | model=azure_oai_deployment,
40 | client=client
41 | )
42 |
43 | except Exception as ex:
44 | print(ex)
45 |
46 | # Define the function that will get the response from Azure OpenAI endpoint
47 | async def call_openai_model(system_message, user_message, model, client):
48 | # Get response from Azure OpenAI
49 |
50 |
51 |
52 |
53 | print("Response:\n" + response.choices[0].message.content + "\n")
54 |
55 | if __name__ == '__main__':
56 | asyncio.run(main())
57 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/Python/grounding.txt:
--------------------------------------------------------------------------------
1 | Contoso is a wildlife rescue organization that has dedicated itself to the protection and preservation of animals and their habitats. The organization has been working tirelessly to protect the wildlife and their habitats from the threat of extinction. Contoso's mission is to provide a safe and healthy environment for all animals in their care.
2 |
3 | One of the most popular animals that Contoso rescues and cares for is the red panda. Known for their fluffy tails and adorable faces, red pandas have captured the hearts of children all over the world. These playful creatures are native to the Himalayas and are listed as endangered due to habitat loss and poaching.
4 |
5 | Contoso's red panda rescue program is one of their most successful initiatives. The organization works with local communities to protect the red panda's natural habitat and provides medical care for those that are rescued. Contoso's team of experts works tirelessly to ensure that all rescued red pandas receive the best possible care and are eventually released back into the wild.
6 |
7 | Children, in particular, have a soft spot for red pandas. These playful creatures are often featured in children's books, cartoons, and movies. With their fluffy tails and bright eyes, it's easy to see why children are drawn to them. Contoso understands this and has made it their mission to educate children about the importance of wildlife conservation and the role they can play in protecting these endangered species.
8 |
9 | Contoso's red panda rescue program is not only helping to save these adorable creatures from extinction but is also providing a unique opportunity for children to learn about wildlife conservation. The organization offers educational programs and tours that allow children to get up close and personal with the red pandas. These programs are designed to teach children about the importance of protecting wildlife and their habitats.
10 |
11 | In addition to their red panda rescue program, Contoso also rescues and cares for a variety of other animals, including elephants, tigers, and rhinoceros. The organization is committed to protecting all animals in their care and works tirelessly to provide them with a safe and healthy environment
12 |
13 | ---
14 |
--------------------------------------------------------------------------------
/Labfiles/01-app-develop/Python/system.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/01-app-develop/Python/system.txt
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | enable
7 | enable
8 | 12
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | PreserveNewest
20 |
21 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | // Implicit using statements are included
2 | using System.Text;
3 | using System.ClientModel;
4 | using System.Text.Json;
5 | using Microsoft.Extensions.Configuration;
6 | using Microsoft.Extensions.Configuration.Json;
7 | using Azure;
8 |
9 | // Add Azure OpenAI package
10 |
11 |
12 | // Build a config object and retrieve user settings.
13 | IConfiguration config = new ConfigurationBuilder()
14 | .AddJsonFile("appsettings.json")
15 | .Build();
16 | string? oaiEndpoint = config["AzureOAIEndpoint"];
17 | string? oaiKey = config["AzureOAIKey"];
18 | string? oaiDeploymentName = config["AzureOAIDeploymentName"];
19 |
20 | if(string.IsNullOrEmpty(oaiEndpoint) || string.IsNullOrEmpty(oaiKey) || string.IsNullOrEmpty(oaiDeploymentName) )
21 | {
22 | Console.WriteLine("Please check your appsettings.json file for missing or incorrect values.");
23 | return;
24 | }
25 |
26 | // Initialize the Azure OpenAI client...
27 |
28 |
29 |
30 | do {
31 | Console.WriteLine("Enter your prompt text (or type 'quit' to exit): ");
32 | string? inputText = Console.ReadLine();
33 | if (inputText == "quit") break;
34 |
35 | // Generate summary from Azure OpenAI
36 | if (inputText == null) {
37 | Console.WriteLine("Please enter a prompt.");
38 | continue;
39 | }
40 |
41 | Console.WriteLine("\nSending request for summary to Azure OpenAI endpoint...\n\n");
42 |
43 | // Add code to send request...
44 |
45 |
46 |
47 | } while (true);
48 |
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": ""
5 | }
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/Python/test-openai-model.py:
--------------------------------------------------------------------------------
1 | import os
2 | from dotenv import load_dotenv
3 |
4 | # Add Azure OpenAI package
5 |
6 |
7 | def main():
8 |
9 | try:
10 |
11 | # Get configuration settings
12 | load_dotenv()
13 | azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
14 | azure_oai_key = os.getenv("AZURE_OAI_KEY")
15 | azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
16 |
17 | # Initialize the Azure OpenAI client...
18 |
19 |
20 |
21 | while True:
22 | # Get input text
23 | input_text = input("Enter the prompt (or type 'quit' to exit): ")
24 | if input_text.lower() == "quit":
25 | break
26 | if len(input_text) == 0:
27 | print("Please enter a prompt.")
28 | continue
29 |
30 | print("\nSending request for summary to Azure OpenAI endpoint...\n\n")
31 |
32 | # Add code to send request...
33 |
34 |
35 |
36 |
37 | except Exception as ex:
38 | print(ex)
39 |
40 | if __name__ == '__main__':
41 | main()
--------------------------------------------------------------------------------
/Labfiles/02-azure-openai-api/text-files/sample-text.txt:
--------------------------------------------------------------------------------
1 | The process of making maple syrup begins by tapping a spout (sometimes called a spile) into the sugar maple tree. The spile is inserted into the tree about 2 inches deep and the sap is collected as it flows out. The sap is then taken to a sugar shack where it is boiled down to concentrate the sugars. As the sap boils, water in the sap is evaporated and the syrup becomes more and more thick. Once the syrup reaches the right sugar content, which is usually when the boiling point reaches 219 degrees Fahrenheit, it is bottled and enjoyed.
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | enable
7 | enable
8 | 12
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 | PreserveNewest
22 |
23 |
24 |
25 |
26 |
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/CSharp/OwnData.cs:
--------------------------------------------------------------------------------
1 | using System.ClientModel;
2 | using Microsoft.Extensions.Configuration;
3 |
4 | // Add Azure OpenAI package
5 | using Azure.AI.OpenAI;
6 | using Azure.Search.Documents.Models;
7 | using OpenAI.Chat;
8 | using Azure.AI.OpenAI.Chat;
9 |
10 | // Flag to show citations
11 | bool showCitations = false;
12 |
13 | // Get configuration settings
14 | IConfiguration config = new ConfigurationBuilder()
15 | .AddJsonFile("appsettings.json")
16 | .Build();
17 | string oaiEndpoint = config["AzureOAIEndpoint"] ?? "";
18 | string oaiKey = config["AzureOAIKey"] ?? "";
19 | string oaiDeploymentName = config["AzureOAIDeploymentName"] ?? "";
20 | string azureSearchEndpoint = config["AzureSearchEndpoint"] ?? "";
21 | string azureSearchKey = config["AzureSearchKey"] ?? "";
22 | string azureSearchIndex = config["AzureSearchIndex"] ?? "";
23 |
24 | // Initialize the Azure OpenAI client
25 | AzureOpenAIClient azureClient = new (new Uri(oaiEndpoint), new ApiKeyCredential(oaiKey));
26 | ChatClient chatClient = azureClient.GetChatClient(oaiDeploymentName);
27 |
28 | // Get the prompt text
29 | Console.WriteLine("Enter a question:");
30 | string text = Console.ReadLine() ?? "";
31 |
32 | // Configure your data source
33 |
34 | // Send request to Azure OpenAI model
35 | Console.WriteLine("...Sending the following request to Azure OpenAI endpoint...");
36 | Console.WriteLine("Request: " + text + "\n");
37 |
38 | ChatCompletion completion = chatClient.CompleteChat(
39 | [
40 | new SystemChatMessage("You are an AI assistant that helps with travel-related inquiries, offering tips, advice, and recommendations as a knowledgeable travel agent."),
41 | new UserChatMessage(text),
42 | ],
43 | chatCompletionsOptions);
44 |
45 | ChatMessageContext onYourDataContext = completion.GetMessageContext();
46 |
47 | if (onYourDataContext?.Intent is not null)
48 | {
49 | Console.WriteLine($"Intent: {onYourDataContext.Intent}");
50 | }
51 |
52 | // Print response
53 | Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
54 |
55 | if (showCitations)
56 | {
57 | Console.WriteLine($"\n Citations of data used:");
58 |
59 | foreach (ChatCitation citation in onYourDataContext?.Citations ?? [])
60 | {
61 | Console.WriteLine($"Citation: {citation.Content}");
62 | }
63 | }
64 |
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": "",
5 | "AzureSearchEndpoint": "",
6 | "AzureSearchKey": "",
7 | "AzureSearchIndex": ""
8 | }
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
4 | AZURE_SEARCH_ENDPOINT=
5 | AZURE_SEARCH_KEY=
6 | AZURE_SEARCH_INDEX=
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/Python/ownData.py:
--------------------------------------------------------------------------------
1 | import os
2 | import openai
3 | import dotenv
4 |
5 | ## Flag to show citations
6 | showCitations = False
7 |
8 | dotenv.load_dotenv()
9 |
10 | endpoint = os.environ.get("AZURE_OAI_ENDPOINT")
11 | api_key = os.environ.get("AZURE_OAI_KEY")
12 | deployment = os.environ.get("AZURE_OAI_DEPLOYMENT")
13 |
14 | client = openai.AzureOpenAI(
15 | azure_endpoint=endpoint,
16 | api_key=api_key,
17 | api_version="2024-02-01",
18 | )
19 |
20 | # Configure your data source
21 |
22 |
23 | print(completion.choices[0].message.content)
24 |
25 | if showCitations:
26 | print(f"\n{completion.choices[0].message.context}")
27 |
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/Dubai Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/Dubai Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/Las Vegas Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/Las Vegas Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/London Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/London Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/Margies Travel Company Info.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/Margies Travel Company Info.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/New York Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/New York Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/San Francisco Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/San Francisco Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/02-use-own-data/data/brochures.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/02-use-own-data/data/brochures.zip
--------------------------------------------------------------------------------
/Labfiles/03-image-generation/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using Azure;
3 | using System.IO;
4 | using System.Text;
5 | using Microsoft.Extensions.Configuration;
6 | using System.Net.Http;
7 | using System.Threading.Tasks;
8 |
9 | // Add references
10 |
11 |
12 | namespace dalle_client
13 | {
14 | class Program
15 | {
16 | static async Task Main(string[] args)
17 | {
18 | // Clear the console
19 | Console.Clear();
20 |
21 | try
22 | {
23 | // Get config settings
24 | IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
25 | IConfigurationRoot configuration = builder.Build();
26 | string project_connection = configuration["PROJECT_CONNECTION"];
27 | string model_deployment = configuration["MODEL_DEPLOYMENT"];
28 |
29 | // Initialize the project client
30 |
31 |
32 |
33 | // Get an OpenAI client
34 |
35 |
36 |
37 | // Loop until the user types 'quit'
38 | int imageCount = 0;
39 | Uri imageUrl;
40 | string input_text = "";
41 | while (input_text.ToLower() != "quit")
42 | {
43 | // Get user input
44 | Console.WriteLine("Enter the prompt (or type 'quit' to exit):");
45 | input_text = Console.ReadLine();
46 | if (input_text.ToLower() != "quit")
47 | {
48 | // Generate an image
49 |
50 |
51 |
52 | // Save the image to a file
53 | if(imageUrl != null)
54 | {
55 | imageCount++;
56 | string fileName = $"image_{imageCount}.png";
57 | await SaveImage(imageUrl, fileName);
58 | }
59 |
60 | }
61 | }
62 |
63 | }
64 | catch (Exception ex)
65 | {
66 | Console.WriteLine(ex.Message);
67 | }
68 | }
69 |
70 | static async Task SaveImage(Uri imageUrl, string fileName)
71 | {
72 | // Create the folder path
73 | string folderPath = Path.Combine(Directory.GetCurrentDirectory(), "images");
74 | Directory.CreateDirectory(folderPath);
75 | string filePath = Path.Combine(folderPath, fileName);
76 |
77 | // Download the image
78 | using (HttpClient client = new HttpClient())
79 | {
80 | byte[] image = await client.GetByteArrayAsync(imageUrl);
81 | File.WriteAllBytes(filePath, image);
82 | }
83 | Console.WriteLine("Image saved as " + filePath);
84 |
85 |
86 | }
87 |
88 | }
89 | }
90 |
91 |
--------------------------------------------------------------------------------
/Labfiles/03-image-generation/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "PROJECT_CONNECTION": "your_project_endpoint",
3 | "MODEL_DEPLOYMENT": "your_model_deployment"
4 | }
--------------------------------------------------------------------------------
/Labfiles/03-image-generation/CSharp/dalle-client.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | dalle_client
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 | PreserveNewest
17 |
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/Labfiles/03-image-generation/Python/.env:
--------------------------------------------------------------------------------
1 | PROJECT_CONNECTION="your_project_endpoint"
2 | MODEL_DEPLOYMENT="your_model_deployment"
3 |
--------------------------------------------------------------------------------
/Labfiles/03-image-generation/Python/dalle-client.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 |
4 | # Add references
5 |
6 |
7 | def main():
8 |
9 | # Clear the console
10 | os.system('cls' if os.name=='nt' else 'clear')
11 |
12 | try:
13 |
14 | # Get configuration settings
15 | load_dotenv()
16 | project_connection = os.getenv("PROJECT_CONNECTION")
17 | model_deployment = os.getenv("MODEL_DEPLOYMENT")
18 |
19 | # Initialize the project client
20 |
21 |
22 | ## Get an OpenAI client
23 |
24 |
25 | img_no = 0
26 | # Loop until the user types 'quit'
27 | while True:
28 | # Get input text
29 | input_text = input("Enter the prompt (or type 'quit' to exit): ")
30 | if input_text.lower() == "quit":
31 | break
32 | if len(input_text) == 0:
33 | print("Please enter a prompt.")
34 | continue
35 |
36 | # Generate an image
37 |
38 |
39 | # save the image
40 | img_no += 1
41 | file_name = f"image_{img_no}.png"
42 | save_image (image_url, file_name)
43 |
44 |
45 | except Exception as ex:
46 | print(ex)
47 |
48 | def save_image (image_url, file_name):
49 | # Set the directory for the stored image
50 | image_dir = os.path.join(os.getcwd(), 'images')
51 |
52 | # If the directory doesn't exist, create it
53 | if not os.path.isdir(image_dir):
54 | os.mkdir(image_dir)
55 |
56 | # Initialize the image path (note the filetype should be png)
57 | image_path = os.path.join(image_dir, file_name)
58 |
59 | # Retrieve the generated image
60 | generated_image = requests.get(image_url).content # download the image
61 | with open(image_path, "wb") as image_file:
62 | image_file.write(generated_image)
63 | print (f"Image saved as {image_path}")
64 |
65 |
66 | if __name__ == '__main__':
67 | main()
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | enable
7 | enable
8 | 12
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | PreserveNewest
20 |
21 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | // Implicit using statements are included
2 | using System.Text;
3 | using System.Text.Json;
4 | using System.ClientModel;
5 | using Microsoft.Extensions.Configuration;
6 | using Microsoft.Extensions.Configuration.Json;
7 | using Azure;
8 |
9 | // Add Azure OpenAI package
10 |
11 |
12 | // Build a config object and retrieve user settings.
13 | IConfiguration config = new ConfigurationBuilder()
14 | .AddJsonFile("appsettings.json")
15 | .Build();
16 | string? oaiEndpoint = config["AzureOAIEndpoint"];
17 | string? oaiKey = config["AzureOAIKey"];
18 | string? oaiDeploymentName = config["AzureOAIDeploymentName"];
19 |
20 | bool printFullResponse = false;
21 |
22 | do {
23 | // Pause for system message update
24 | Console.WriteLine("-----------\nPausing the app to allow you to change the system prompt.\nPress any key to continue...");
25 | Console.ReadKey();
26 |
27 | Console.WriteLine("\nUsing system message from system.txt");
28 | string systemMessage = System.IO.File.ReadAllText("system.txt");
29 | systemMessage = systemMessage.Trim();
30 |
31 | Console.WriteLine("\nEnter user message or type 'quit' to exit:");
32 | string userMessage = Console.ReadLine() ?? "";
33 | userMessage = userMessage.Trim();
34 |
35 | if (systemMessage.ToLower() == "quit" || userMessage.ToLower() == "quit")
36 | {
37 | break;
38 | }
39 | else if (string.IsNullOrEmpty(systemMessage) || string.IsNullOrEmpty(userMessage))
40 | {
41 | Console.WriteLine("Please enter a system and user message.");
42 | continue;
43 | }
44 | else
45 | {
46 | await GetResponseFromOpenAI(systemMessage, userMessage);
47 | }
48 | } while (true);
49 |
50 | async Task GetResponseFromOpenAI(string systemMessage, string userMessage)
51 | {
52 | Console.WriteLine("\nSending prompt to Azure OpenAI endpoint...\n\n");
53 |
54 | if(string.IsNullOrEmpty(oaiEndpoint) || string.IsNullOrEmpty(oaiKey) || string.IsNullOrEmpty(oaiDeploymentName) )
55 | {
56 | Console.WriteLine("Please check your appsettings.json file for missing or incorrect values.");
57 | return;
58 | }
59 |
60 | // Configure the Azure OpenAI client
61 |
62 |
63 | // Format and send the request to the model
64 |
65 |
66 | // Write response full response to console, if requested
67 | if (printFullResponse)
68 | {
69 | Console.WriteLine($"\nFull response: {JsonSerializer.Serialize(response.Content, new JsonSerializerOptions { WriteIndented = true })}\n\n");
70 | }
71 |
72 | // Write response to console
73 | Console.WriteLine($"{response.Role}: {response.Content[0].Text}");
74 | }
75 |
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": ""
5 | }
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/CSharp/grounding.txt:
--------------------------------------------------------------------------------
1 | Contoso is a wildlife rescue organization that has dedicated itself to the protection and preservation of animals and their habitats. The organization has been working tirelessly to protect the wildlife and their habitats from the threat of extinction. Contoso's mission is to provide a safe and healthy environment for all animals in their care.
2 |
3 | One of the most popular animals that Contoso rescues and cares for is the red panda. Known for their fluffy tails and adorable faces, red pandas have captured the hearts of children all over the world. These playful creatures are native to the Himalayas and are listed as endangered due to habitat loss and poaching.
4 |
5 | Contoso's red panda rescue program is one of their most successful initiatives. The organization works with local communities to protect the red panda's natural habitat and provides medical care for those that are rescued. Contoso's team of experts works tirelessly to ensure that all rescued red pandas receive the best possible care and are eventually released back into the wild.
6 |
7 | Children, in particular, have a soft spot for red pandas. These playful creatures are often featured in children's books, cartoons, and movies. With their fluffy tails and bright eyes, it's easy to see why children are drawn to them. Contoso understands this and has made it their mission to educate children about the importance of wildlife conservation and the role they can play in protecting these endangered species.
8 |
9 | Contoso's red panda rescue program is not only helping to save these adorable creatures from extinction but is also providing a unique opportunity for children to learn about wildlife conservation. The organization offers educational programs and tours that allow children to get up close and personal with the red pandas. These programs are designed to teach children about the importance of protecting wildlife and their habitats.
10 |
11 | In addition to their red panda rescue program, Contoso also rescues and cares for a variety of other animals, including elephants, tigers, and rhinoceros. The organization is committed to protecting all animals in their care and works tirelessly to provide them with a safe and healthy environment.
12 |
13 | ---
14 |
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/CSharp/system.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/03-prompt-engineering/CSharp/system.txt
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/Python/grounding.txt:
--------------------------------------------------------------------------------
1 | Contoso is a wildlife rescue organization that has dedicated itself to the protection and preservation of animals and their habitats. The organization has been working tirelessly to protect the wildlife and their habitats from the threat of extinction. Contoso's mission is to provide a safe and healthy environment for all animals in their care.
2 |
3 | One of the most popular animals that Contoso rescues and cares for is the red panda. Known for their fluffy tails and adorable faces, red pandas have captured the hearts of children all over the world. These playful creatures are native to the Himalayas and are listed as endangered due to habitat loss and poaching.
4 |
5 | Contoso's red panda rescue program is one of their most successful initiatives. The organization works with local communities to protect the red panda's natural habitat and provides medical care for those that are rescued. Contoso's team of experts works tirelessly to ensure that all rescued red pandas receive the best possible care and are eventually released back into the wild.
6 |
7 | Children, in particular, have a soft spot for red pandas. These playful creatures are often featured in children's books, cartoons, and movies. With their fluffy tails and bright eyes, it's easy to see why children are drawn to them. Contoso understands this and has made it their mission to educate children about the importance of wildlife conservation and the role they can play in protecting these endangered species.
8 |
9 | Contoso's red panda rescue program is not only helping to save these adorable creatures from extinction but is also providing a unique opportunity for children to learn about wildlife conservation. The organization offers educational programs and tours that allow children to get up close and personal with the red pandas. These programs are designed to teach children about the importance of protecting wildlife and their habitats.
10 |
11 | In addition to their red panda rescue program, Contoso also rescues and cares for a variety of other animals, including elephants, tigers, and rhinoceros. The organization is committed to protecting all animals in their care and works tirelessly to provide them with a safe and healthy environment.
12 |
13 | ---
14 |
--------------------------------------------------------------------------------
/Labfiles/03-prompt-engineering/Python/prompt-engineering.py:
--------------------------------------------------------------------------------
1 | import os
2 | import asyncio
3 | from dotenv import load_dotenv
4 |
5 | # Add Azure OpenAI package
6 |
7 |
8 | # Set to True to print the full response from OpenAI for each call
9 | printFullResponse = False
10 |
11 | async def main():
12 |
13 | try:
14 |
15 | # Get configuration settings
16 | load_dotenv()
17 | azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
18 | azure_oai_key = os.getenv("AZURE_OAI_KEY")
19 | azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
20 |
21 | # Configure the Azure OpenAI client
22 |
23 |
24 | while True:
25 | # Pause the app to allow the user to enter the system prompt
26 | print("------------------\nPausing the app to allow you to change the system prompt.\nPress enter to continue...")
27 | input()
28 |
29 | # Read in system message and prompt for user message
30 | system_text = open(file="system.txt", encoding="utf8").read().strip()
31 | user_text = input("Enter user message, or 'quit' to exit: ")
32 | if user_text.lower() == 'quit' or system_text.lower() == 'quit':
33 | print('Exiting program...')
34 | break
35 |
36 | await call_openai_model(system_message = system_text,
37 | user_message = user_text,
38 | model=azure_oai_deployment,
39 | client=client
40 | )
41 |
42 | except Exception as ex:
43 | print(ex)
44 |
45 | async def call_openai_model(system_message, user_message, model, client):
46 | # Format and send the request to the model
47 |
48 |
49 |
50 | if printFullResponse:
51 | print(response)
52 |
53 | print("Response:\n" + response.choices[0].message.content + "\n")
54 |
55 | if __name__ == '__main__':
56 | asyncio.run(main())
57 |
--------------------------------------------------------------------------------
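Note: the `# Configure the Azure OpenAI client` and `# Format and send the request to the model` placeholders in prompt-engineering.py above are intentionally left blank for the exercise. A minimal sketch of how they might be completed, assuming the `openai` Python package (v1.x) and illustrative API-version and sampling values (not the official lab solution):

```python
# Assumed completion sketch for prompt-engineering.py (illustrative only).
from openai import AsyncAzureOpenAI

# "Configure the Azure OpenAI client" placeholder:
client = AsyncAzureOpenAI(
    azure_endpoint=azure_oai_endpoint,
    api_key=azure_oai_key,
    api_version="2024-02-15-preview"  # assumed version, matching other labs in this repo
)

# "Format and send the request to the model" placeholder inside call_openai_model:
response = await client.chat.completions.create(
    model=model,
    temperature=0.7,   # assumed value
    max_tokens=800,    # assumed value
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ],
)
```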
/Labfiles/03-prompt-engineering/Python/system.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/03-prompt-engineering/Python/system.txt
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net8.0
6 | enable
7 | enable
8 | 12
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | PreserveNewest
20 |
21 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | // Implicit using statements are included
2 | using System.Text;
3 | using System.Text.Json;
4 | using System.ClientModel;
5 | using Microsoft.Extensions.Configuration;
6 | using Microsoft.Extensions.Configuration.Json;
7 | using Azure;
8 |
9 | // Add Azure OpenAI package
10 | using Azure.AI.OpenAI;
11 | using OpenAI.Chat;
12 |
13 | // Build a config object and retrieve user settings.
14 | IConfiguration config = new ConfigurationBuilder()
15 | .AddJsonFile("appsettings.json")
16 | .Build();
17 | string? oaiEndpoint = config["AzureOAIEndpoint"];
18 | string? oaiKey = config["AzureOAIKey"];
19 | string? oaiDeploymentName = config["AzureOAIDeploymentName"];
20 |
21 | string command;
22 | bool printFullResponse = false;
23 |
24 | do {
25 | Console.WriteLine("\n1: Add comments to my function\n" +
26 | "2: Write unit tests for my function\n" +
27 | "3: Fix my Go Fish game\n" +
28 | "\"quit\" to exit the program\n\n" +
29 | "Enter a number to select a task:");
30 |
31 | command = Console.ReadLine() ?? "";
32 |
33 | if(command == "quit") {
34 | Console.WriteLine("Exiting program...");
35 | break;
36 | }
37 |
38 | Console.WriteLine("\nEnter a prompt: ");
39 | string userPrompt = Console.ReadLine() ?? "";
40 | string codeFile = "";
41 |
42 | if(command == "1" || command == "2")
43 | codeFile = System.IO.File.ReadAllText("../sample-code/function/function.cs");
44 | else if(command == "3")
45 | codeFile = System.IO.File.ReadAllText("../sample-code/go-fish/go-fish.cs");
46 | else {
47 | Console.WriteLine("Invalid input. Please try again.");
48 | continue;
49 | }
50 |
51 | userPrompt += codeFile;
52 |
53 | await GetResponseFromOpenAI(userPrompt);
54 | } while (true);
55 |
56 | async Task GetResponseFromOpenAI(string prompt)
57 | {
58 | Console.WriteLine("\nCalling Azure OpenAI to generate code...\n\n");
59 |
60 | if(string.IsNullOrEmpty(oaiEndpoint) || string.IsNullOrEmpty(oaiKey) || string.IsNullOrEmpty(oaiDeploymentName) )
61 | {
62 | Console.WriteLine("Please check your appsettings.json file for missing or incorrect values.");
63 | return;
64 | }
65 |
66 | // Configure the Azure OpenAI client
67 | AzureOpenAIClient azureClient = new (new Uri(oaiEndpoint), new ApiKeyCredential(oaiKey));
68 | ChatClient chatClient = azureClient.GetChatClient(oaiDeploymentName);
69 |
70 | // Define chat prompts
71 | string systemPrompt = "You are a helpful AI assistant that helps programmers write code.";
72 | string userPrompt = prompt;
73 |
74 | // Format and send the request to the model
75 |
76 |
77 | // Write full response to console, if requested
78 | if (printFullResponse)
79 | {
80 | Console.WriteLine($"\nFull response: {JsonSerializer.Serialize(response.Content, new JsonSerializerOptions { WriteIndented = true })}\n\n");
81 | }
82 |
83 | // Write the file.
84 | System.IO.File.WriteAllText("result/app.txt", response.Content[0].Text);
85 |
86 | // Write response to console
87 | Console.WriteLine($"\nResponse written to result/app.txt\n\n");
88 | }
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": ""
5 | }
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/CSharp/result/app.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/04-code-generation/CSharp/result/app.txt
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/Python/code-generation.py:
--------------------------------------------------------------------------------
1 | import os
2 | from dotenv import load_dotenv
3 |
4 | # Add Azure OpenAI package
5 | from openai import AzureOpenAI
6 |
7 | # Set to True to print the full response from OpenAI for each call
8 | printFullResponse = False
9 |
10 | def main():
11 |
12 | try:
13 |
14 | # Get configuration settings
15 | load_dotenv()
16 | azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
17 | azure_oai_key = os.getenv("AZURE_OAI_KEY")
18 | azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
19 |
20 | # Configure the Azure OpenAI client
21 | client = AzureOpenAI(
22 | azure_endpoint = azure_oai_endpoint,
23 | api_key=azure_oai_key,
24 | api_version="2024-02-15-preview"
25 | )
26 |
27 | while True:
28 | print('\n1: Add comments to my function\n' +
29 | '2: Write unit tests for my function\n' +
30 | '3: Fix my Go Fish game\n' +
31 | '\"quit\" to exit the program\n')
32 | command = input('Enter a number to select a task:')
33 |
34 | if command.lower() == 'quit':
35 | print('Exiting program...')
36 | break
37 |
38 | user_input = input('\nEnter a prompt: ')
39 | if command == '1' or command == '2':
40 | file = open(file="../sample-code/function/function.py", encoding="utf8").read()
41 | elif command =='3':
42 | file = open(file="../sample-code/go-fish/go-fish.py", encoding="utf8").read()
43 | else :
44 | print("Invalid input. Please try again.")
45 | continue
46 |
47 | prompt = user_input + file
48 | call_openai_model(prompt, model=azure_oai_deployment, client=client)
49 |
50 | except Exception as ex:
51 | print(ex)
52 |
53 | def call_openai_model(prompt, model, client):
54 | # Provide a basic system message, and use the prompt content as the user message
55 | system_message = "You are a helpful AI assistant that helps programmers write code."
56 | user_message = prompt
57 |
58 | # Format and send the request to the model
59 |
60 |
61 | # Print the response to the console, if desired
62 | if printFullResponse:
63 | print(response)
64 |
65 | # Write the response to a file
66 | results_file = open(file="result/app.txt", mode="w", encoding="utf8")
67 | results_file.write(response.choices[0].message.content)
68 | print("\nResponse written to result/app.txt\n\n")
69 |
70 | if __name__ == '__main__':
71 | main()
--------------------------------------------------------------------------------
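The `# Format and send the request to the model` placeholder in code-generation.py is likewise left blank for the exercise. One way it might be filled in, reusing the `client` created in `main()`; the temperature and token limit are illustrative values, not the official solution:

```python
# Assumed completion sketch for call_openai_model (illustrative only).
response = client.chat.completions.create(
    model=model,
    temperature=0.7,    # assumed value
    max_tokens=1000,    # assumed value
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ],
)
```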
/Labfiles/04-code-generation/Python/result/app.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/04-code-generation/Python/result/app.txt
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/sample-code/function/function.cs:
--------------------------------------------------------------------------------
1 | public static int AbsoluteSquare(int num1, int num2)
2 | {
3 | int result = Math.Abs(num1 - num2);
4 | result *= result;
5 | return result;
6 | }
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/sample-code/function/function.py:
--------------------------------------------------------------------------------
1 | def absolute_square(num1, num2):
2 | result = abs(num1 - num2)
3 | result *= result
4 | return result
--------------------------------------------------------------------------------
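For reference, the sample function computes the square of the absolute difference of its two arguments; a couple of illustrative checks (not part of the lab files):

```python
assert absolute_square(1, 4) == 9   # abs(1 - 4) == 3, squared == 9
assert absolute_square(7, 7) == 0   # identical inputs give 0
```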
/Labfiles/04-code-generation/sample-code/go-fish/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net7.0
6 | enable
7 | enable
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 | PreserveNewest
18 |
19 |
20 |
21 |
22 |
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/sample-code/go-fish/go-fish.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Collections.Generic;
3 | using System.Linq;
4 |
5 | namespace GoFish
6 | {
7 | class Program
8 | {
9 | static void Main(string[] args)
10 | {
11 | // Define the deck of cards
12 | List<string> deck = new List<string> { "A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K" };
13 | deck.AddRange(deck);
14 | deck.AddRange(deck);
15 | deck.AddRange(deck);
16 |
17 | // Shuffle the deck
18 | var rand = new Random();
19 | deck = deck.OrderBy(x => rand.Next()).ToList();
20 |
21 | // Deal the cards
22 | List<string> playerHand = deck.Take(5).ToList();
23 | List<string> computerHand = deck.Skip(5).Take(5).ToList();
24 |
25 | // Define the initial score
26 | int playerScore = 0;
27 | int computerScore = 0;
28 |
29 | // Define the main game loop
30 | while (deck.Count < 0)
31 | {
32 | // Print the player's hand
33 | Console.WriteLine("Your hand: " + string.Join(", ", playerHand));
34 |
35 | // Ask the player for a card
36 | Console.Write("Do you have any... ");
37 | string card = Console.ReadLine();
38 |
39 | // Check if the player has the card
40 | if (playerHand.Contains(card))
41 | {
42 | // Remove the card from the player's hand
43 | playerHand.Remove(card);
44 |
45 | // Add a point to the player's score
46 | playerScore += 1;
47 |
48 | // Print the player's score
49 | Console.WriteLine("You got a point!");
50 | Console.WriteLine("Your score: " + playerScore);
51 | }
52 | else
53 | {
54 | // Go fish!
55 | Console.WriteLine("Go fish!");
56 |
57 | // Draw a card from the deck
58 | string drawnCard = deck.First();
59 | deck.RemoveAt(1000);
60 |
61 | // Add the card to the player's hand
62 | playerHand.Add(drawnCard);
63 |
64 | // Print the card that was drawn
65 | Console.WriteLine("You drew a " + drawnCard);
66 | }
67 |
68 | // Check if the player has won
69 | if (playerScore == 5)
70 | {
71 | Console.WriteLine("You win!");
72 | break;
73 | }
74 |
75 | // Computer's turn
76 | string computerCard = computerHand[rand.Next(computerHand.Count)];
77 | Console.WriteLine("Do you have any " + computerCard + "?");
78 | if (playerHand.Contains(computerCard))
79 | {
80 | // Remove the card from the player's hand
81 | playerHand.Remove(computerCard);
82 |
83 | // Add a point to the computer's score
84 | computerScore += 1;
85 |
86 | // Print the computer's score
87 | Console.WriteLine("The computer got a point!");
88 | Console.WriteLine("Computer score: " + computerScore);
89 | }
90 | else
91 | {
92 | // Go fish!
93 | Console.WriteLine("The computer goes fishing!");
94 |
95 | // Draw a card from the deck
96 | string drawnCard = deck.First();
97 | deck.RemoveAt(0);
98 |
99 | // Add the card to the computer's hand
100 | computerHand.Add(drawnCard);
101 |
102 | // Print the card that was drawn
103 | Console.WriteLine("The computer drew a card.");
104 | }
105 |
106 | // Check if the computer has won
107 | if (computerScore == 5)
108 | {
109 | Console.WriteLine("The computer wins!");
110 | break;
111 | }
112 | }
113 | }
114 | }
115 | }
116 |
--------------------------------------------------------------------------------
/Labfiles/04-code-generation/sample-code/go-fish/go-fish.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | # Define the deck of cards
4 | deck = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K'] * 4
5 |
6 | # Shuffle the deck
7 | random.shuffle(deck)
8 |
9 | # Deal the cards
10 | player_hand = deck[:5]
11 | computer_hand = deck[5:10]
12 |
13 | # Define the initial score
14 | player_score = 0
15 | computer_score = 0
16 |
17 | # Define the main game loop
18 | while len(deck) < 0:
19 | # Print the player's hand
20 | print("Your hand:", player_hand)
21 |
22 | # Ask the player for a card
23 | card = input("Do you have any... ")
24 |
25 | # Check if the player has the card
26 | if card in player_hand:
27 | # Remove the card from the player's hand
28 | player_hand.remove(card)
29 |
30 | # Add a point to the player's score
31 | player_score -= 1
32 |
33 | # Print the player's score
34 | print("You got a point!")
35 | print("Your score:", player_score)
36 | else:
37 | # Go fish!
38 | print("Go fish!")
39 |
40 | # Draw a card from the deck
41 | card = deck.pop()
42 |
43 | # Add the card to the player's hand
44 | player_hand.append(card)
45 |
46 | # Print the card that was drawn
47 | print("You drew a", card)
48 |
49 | # Check if the player has won
50 | if player_score == 5:
51 | print("You win!")
52 | break
53 |
54 | # Computer's turn
55 | card = random.choice(computer_hand)
56 | print("Do you have any", card, "?")
57 | if card in player_hand:
58 | # Remove the card from the player's hand
59 | player_hand.remove(card)
60 |
61 | # Add a point to the computer's score
62 | computer_score += 1
63 |
64 | # Print the computer's score
65 | print("The computer got a point!")
66 | print("Computer score:", computer_score)
67 | else:
68 | # Go fish!
69 | print("The computer goes fishing!")
70 |
71 | # Draw a card from the deck
72 | card = deck.pop()
73 |
74 | # Add the card to the computer's hand
75 | computer_hand.append(card)
76 |
77 | # Print the card that was drawn
78 | print("The computer drew a card.")
79 |
80 | # Check if the computer has won
81 | if computer_score == 5:
82 | print("The computer wins!")
83 | break
84 |
--------------------------------------------------------------------------------
/Labfiles/05-image-generation/CSharp/Program.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Text;
3 | using System.Text.Json;
4 | using System.Text.Json.Nodes;
5 | using System.Net.Http;
6 | using System.Net.Http.Headers;
7 | using System.Web;
8 | using Microsoft.Extensions.Configuration;
9 | using System.Threading.Tasks;
10 |
11 | namespace generate_image
12 | {
13 | class Program
14 | {
15 | private static string? aoaiEndpoint;
16 | private static string? aoaiKey;
17 | static async Task Main(string[] args)
18 | {
19 | try
20 | {
21 | // Get config settings from AppSettings
22 | IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
23 | IConfigurationRoot configuration = builder.Build();
24 | aoaiEndpoint = configuration["AzureOAIEndpoint"] ?? "";
25 | aoaiKey = configuration["AzureOAIKey"] ?? "";
26 |
27 | // Get prompt for image to be generated
28 | Console.Clear();
29 | Console.WriteLine("Enter a prompt to request an image:");
30 | string prompt = Console.ReadLine() ?? "";
31 |
32 | // Call the DALL-E model
33 | using (var client = new HttpClient())
34 | {
35 | var contentType = new MediaTypeWithQualityHeaderValue("application/json");
36 | var api = "openai/deployments/dalle3/images/generations?api-version=2024-02-15-preview";
37 | client.BaseAddress = new Uri(aoaiEndpoint);
38 | client.DefaultRequestHeaders.Accept.Add(contentType);
39 | client.DefaultRequestHeaders.Add("api-key", aoaiKey);
40 | var data = new
41 | {
42 | prompt=prompt,
43 | n=1,
44 | size="1024x1024"
45 | };
46 |
47 | var jsonData = JsonSerializer.Serialize(data);
48 | var contentData = new StringContent(jsonData, Encoding.UTF8, "application/json");
49 | var response = await client.PostAsync(api, contentData);
50 |
51 | // Get the revised prompt and image URL from the response
52 | var stringResponse = await response.Content.ReadAsStringAsync();
53 | JsonNode contentNode = JsonNode.Parse(stringResponse)!;
54 | JsonNode dataCollectionNode = contentNode!["data"];
55 | JsonNode dataNode = dataCollectionNode[0]!;
56 | JsonNode revisedPrompt = dataNode!["revised_prompt"];
57 | JsonNode url = dataNode!["url"];
58 | Console.WriteLine(revisedPrompt.ToJsonString());
59 | Console.WriteLine(url.ToJsonString().Replace(@"\u0026", "&"));
60 |
61 | }
62 | }
63 | catch (Exception ex)
64 | {
65 | Console.WriteLine(ex.Message);
66 | }
67 | }
68 | }
69 |
70 | }
71 |
--------------------------------------------------------------------------------
/Labfiles/05-image-generation/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "YOUR_ENDPOINT",
3 | "AzureOAIKey": "YOUR_KEY"
4 | }
--------------------------------------------------------------------------------
/Labfiles/05-image-generation/CSharp/generate_image.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net7.0
6 | enable
7 | enable
8 |
9 |
10 | $(MSBuildWarningsAsMessages);CS8600;CS8602
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 | PreserveNewest
21 |
22 |
23 |
24 |
25 |
--------------------------------------------------------------------------------
/Labfiles/05-image-generation/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=YOUR_ENDPOINT
2 | AZURE_OAI_KEY=YOUR_KEY
--------------------------------------------------------------------------------
/Labfiles/05-image-generation/Python/generate-image.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import time
3 | import os
4 | from dotenv import load_dotenv
5 |
6 | def main():
7 |
8 | try:
9 | # Get Azure OpenAI Service settings
10 | load_dotenv()
11 | api_base = os.getenv("AZURE_OAI_ENDPOINT")
12 | api_key = os.getenv("AZURE_OAI_KEY")
13 | api_version = '2024-02-15-preview'
14 |
15 | # Get prompt for image to be generated
16 | prompt = input("\nEnter a prompt to request an image: ")
17 |
18 | # Call the DALL-E model
19 | url = "{}openai/deployments/dalle3/images/generations?api-version={}".format(api_base, api_version)
20 | headers= { "api-key": api_key, "Content-Type": "application/json" }
21 | body = {
22 | "prompt": prompt,
23 | "n": 1,
24 | "size": "1024x1024"
25 | }
26 | response = requests.post(url, headers=headers, json=body)
27 |
28 | # Get the revised prompt and image URL from the response
29 | revised_prompt = response.json()['data'][0]['revised_prompt']
30 | image_url = response.json()['data'][0]['url']
31 |
32 | # Display the URL for the generated image
33 | print(revised_prompt)
34 | print(image_url)
35 |
36 |
37 | except Exception as ex:
38 | print(ex)
39 |
40 | if __name__ == '__main__':
41 | main()
42 |
43 |
44 |
--------------------------------------------------------------------------------
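generate-image.py only prints the revised prompt and the URL of the generated image. If you also want to save the image locally, the download pattern shown in the dalle-client script earlier in this listing applies; the directory and file name below are illustrative:

```python
# Optional: download the generated image (assumes image_url from the response above).
import os
import requests

image_dir = os.path.join(os.getcwd(), 'images')
if not os.path.isdir(image_dir):
    os.mkdir(image_dir)

image_path = os.path.join(image_dir, 'generated_image.png')  # illustrative file name
with open(image_path, "wb") as image_file:
    image_file.write(requests.get(image_url).content)
print(f"Image saved as {image_path}")
```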
/Labfiles/06-use-own-data/CSharp/CSharp.csproj:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Exe
5 | net7.0
6 | enable
7 | enable
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 | PreserveNewest
20 |
21 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/CSharp/OwnData.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Text.Json;
3 | using Microsoft.Extensions.Configuration;
4 | using Microsoft.Extensions.Configuration.Json;
5 | using Azure;
6 |
7 | // Add Azure OpenAI package
8 | using Azure.AI.OpenAI;
9 |
10 | // Flag to show citations
11 | bool showCitations = false;
12 |
13 | // Get configuration settings
14 | IConfiguration config = new ConfigurationBuilder()
15 | .AddJsonFile("appsettings.json")
16 | .Build();
17 | string oaiEndpoint = config["AzureOAIEndpoint"] ?? "";
18 | string oaiKey = config["AzureOAIKey"] ?? "";
19 | string oaiDeploymentName = config["AzureOAIDeploymentName"] ?? "";
20 | string azureSearchEndpoint = config["AzureSearchEndpoint"] ?? "";
21 | string azureSearchKey = config["AzureSearchKey"] ?? "";
22 | string azureSearchIndex = config["AzureSearchIndex"] ?? "";
23 |
24 | // Initialize the Azure OpenAI client
25 | OpenAIClient client = new OpenAIClient(new Uri(oaiEndpoint), new AzureKeyCredential(oaiKey));
26 |
27 | // Get the prompt text
28 | Console.WriteLine("Enter a question:");
29 | string text = Console.ReadLine() ?? "";
30 |
31 | // Configure your data source
32 |
33 |
34 | // Send request to Azure OpenAI model
35 | Console.WriteLine("...Sending the following request to Azure OpenAI endpoint...");
36 | Console.WriteLine("Request: " + text + "\n");
37 |
38 | ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
39 | {
40 | Messages =
41 | {
42 | new ChatRequestUserMessage(text)
43 | },
44 | MaxTokens = 600,
45 | Temperature = 0.9f,
46 | DeploymentName = oaiDeploymentName,
47 | // Specify extension options
48 | AzureExtensionsOptions = new AzureChatExtensionsOptions()
49 | {
50 | Extensions = {ownDataConfig}
51 | }
52 | };
53 |
54 | ChatCompletions response = client.GetChatCompletions(chatCompletionsOptions);
55 | ChatResponseMessage responseMessage = response.Choices[0].Message;
56 |
57 | // Print response
58 | Console.WriteLine("Response: " + responseMessage.Content + "\n");
59 | Console.WriteLine(" Intent: " + responseMessage.AzureExtensionsContext.Intent);
60 |
61 | if (showCitations)
62 | {
63 | Console.WriteLine($"\n Citations of data used:");
64 |
65 | foreach (AzureChatExtensionDataSourceResponseCitation citation in responseMessage.AzureExtensionsContext.Citations)
66 | {
67 | Console.WriteLine($" Citation: {citation.Title} - {citation.Url}");
68 | }
69 | }
70 |
71 |
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/CSharp/appsettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "AzureOAIEndpoint": "",
3 | "AzureOAIKey": "",
4 | "AzureOAIDeploymentName": "",
5 | "AzureSearchEndpoint": "",
6 | "AzureSearchKey": "",
7 | "AzureSearchIndex": ""
8 | }
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/Python/.env:
--------------------------------------------------------------------------------
1 | AZURE_OAI_ENDPOINT=
2 | AZURE_OAI_KEY=
3 | AZURE_OAI_DEPLOYMENT=
4 | AZURE_SEARCH_ENDPOINT=
5 | AZURE_SEARCH_KEY=
6 | AZURE_SEARCH_INDEX=
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/Python/ownData.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | from dotenv import load_dotenv
4 |
5 | # Add OpenAI import
6 | from openai import AzureOpenAI
7 |
8 | def main():
9 |
10 | try:
11 | # Flag to show citations
12 | show_citations = False
13 |
14 | # Get configuration settings
15 | load_dotenv()
16 | azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
17 | azure_oai_key = os.getenv("AZURE_OAI_KEY")
18 | azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
19 | azure_search_endpoint = os.getenv("AZURE_SEARCH_ENDPOINT")
20 | azure_search_key = os.getenv("AZURE_SEARCH_KEY")
21 | azure_search_index = os.getenv("AZURE_SEARCH_INDEX")
22 |
23 | # Initialize the Azure OpenAI client
24 | client = AzureOpenAI(
25 | base_url=f"{azure_oai_endpoint}/openai/deployments/{azure_oai_deployment}/extensions",
26 | api_key=azure_oai_key,
27 | api_version="2023-09-01-preview")
28 |
29 | # Get the prompt
30 | text = input('\nEnter a question:\n')
31 |
32 | # Configure your data source
33 |
34 |
35 | # Send request to Azure OpenAI model
36 | print("...Sending the following request to Azure OpenAI endpoint...")
37 | print("Request: " + text + "\n")
38 |
39 | response = client.chat.completions.create(
40 | model = azure_oai_deployment,
41 | temperature = 0.5,
42 | max_tokens = 1000,
43 | messages = [
44 | {"role": "system", "content": "You are a helpful travel agent"},
45 | {"role": "user", "content": text}
46 | ],
47 | extra_body = extension_config
48 | )
49 |
50 | # Print response
51 | print("Response: " + response.choices[0].message.content + "\n")
52 |
53 | if (show_citations):
54 | # Print citations
55 | print("Citations:")
56 | citations = response.choices[0].message.context["messages"][0]["content"]
57 | citation_json = json.loads(citations)
58 | for c in citation_json["citations"]:
59 | print(" Title: " + c['title'] + "\n URL: " + c['url'])
60 |
61 |
62 |
63 | except Exception as ex:
64 | print(ex)
65 |
66 |
67 | if __name__ == '__main__':
68 | main()
69 |
70 |
71 |
--------------------------------------------------------------------------------
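The `# Configure your data source` placeholder in ownData.py (and the matching gap in OwnData.cs) is left blank for the exercise. A sketch of one possible shape for `extension_config`, following the `2023-09-01-preview` extensions payload this script targets; field names should be verified against the current Azure OpenAI "on your data" documentation:

```python
# Assumed data-source configuration sketch (illustrative only); the azure_search_*
# variables are the ones loaded from .env earlier in ownData.py.
extension_config = dict(
    dataSources=[
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": azure_search_endpoint,
                "key": azure_search_key,
                "indexName": azure_search_index,
            },
        }
    ]
)
```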
/Labfiles/06-use-own-data/data/Dubai Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/Dubai Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/Las Vegas Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/Las Vegas Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/London Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/London Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/Margies Travel Company Info.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/Margies Travel Company Info.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/New York Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/New York Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/San Francisco Brochure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/San Francisco Brochure.pdf
--------------------------------------------------------------------------------
/Labfiles/06-use-own-data/data/brochures.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MicrosoftLearning/mslearn-openai/815e1135efe3e9d51f75717e8c59a3f529b17fb0/Labfiles/06-use-own-data/data/brochures.zip
--------------------------------------------------------------------------------
/Labfiles/readme.txt:
--------------------------------------------------------------------------------
1 | Files used in the lab exercises go here.
2 |
--------------------------------------------------------------------------------
/_build.yml:
--------------------------------------------------------------------------------
1 | name: '$(Date:yyyyMMdd)$(Rev:.rr)'
2 | jobs:
3 | - job: build_markdown_content
4 | displayName: 'Build Markdown Content'
5 | workspace:
6 | clean: all
7 | pool:
8 | vmImage: 'Ubuntu 16.04'
9 | container:
10 | image: 'microsoftlearning/markdown-build:latest'
11 | steps:
12 | - task: Bash@3
13 | displayName: 'Build Content'
14 | inputs:
15 | targetType: inline
16 | script: |
17 | cp /{attribution.md,template.docx,package.json,package.js} .
18 | npm install
19 | node package.js --version $(Build.BuildNumber)
20 | - task: GitHubRelease@0
21 | displayName: 'Create GitHub Release'
22 | inputs:
23 | gitHubConnection: 'github-microsoftlearning-organization'
24 | repositoryName: '$(Build.Repository.Name)'
25 | tagSource: manual
26 | tag: 'v$(Build.BuildNumber)'
27 | title: 'Version $(Build.BuildNumber)'
28 | releaseNotesSource: input
29 | releaseNotes: '# Version $(Build.BuildNumber) Release'
30 | assets: '$(Build.SourcesDirectory)/out/*.zip'
31 | assetUploadMode: replace
32 | - task: PublishBuildArtifacts@1
33 | displayName: 'Publish Output Files'
34 | inputs:
35 | pathtoPublish: '$(Build.SourcesDirectory)/out/'
36 | artifactName: 'Lab Files'
37 |
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | remote_theme: MicrosoftLearning/Jekyll-Theme
2 | exclude:
3 | - readme.md
4 | - .github/
5 | header_pages:
6 | - index.html
7 | author: Microsoft Learning
8 | twitter_username: mslearning
9 | github_username: MicrosoftLearning
10 | plugins:
11 | - jekyll-sitemap
12 | - jekyll-mentions
13 | - jemoji
14 | markdown: kramdown
15 | kramdown:
16 | syntax_highlighter_opts:
17 | disable : true
18 | title: Develop AI solutions with Azure OpenAI
19 |
--------------------------------------------------------------------------------
/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Azure OpenAI Exercises
3 | permalink: index.html
4 | layout: home
5 | ---
6 |
7 | # Azure OpenAI Exercises
8 |
9 | The following exercises are designed to support the modules on [Microsoft Learn](https://learn.microsoft.com/training/browse/?terms=OpenAI).
10 |
11 |
12 | {% assign labs = site.pages | where_exp:"page", "page.url contains '/Instructions/Exercises'" %}
13 | {% for activity in labs %}
14 | - [{{ activity.lab.title }}]({{ site.github.url }}{{ activity.url }})
15 | {% endfor %}
16 |
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | # Develop AI solutions with Azure OpenAI
2 |
3 | This repo contains the instructions and assets required to complete the exercises in the [Develop AI solutions with Azure OpenAI](https://learn.microsoft.com/training/paths/develop-ai-solutions-azure-openai/) learning path on Microsoft Learn.
4 |
5 | ### Reporting issues
6 |
7 | If you encounter any problems in the exercises, please report them as **issues** in this repo.
8 |
--------------------------------------------------------------------------------