├── .github ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE.md └── PULL_REQUEST_TEMPLATE.md ├── Instructions ├── 01-agent-fundamentals.md ├── 02-build-ai-agent.md ├── 03-agent-custom-functions.md ├── 04-semantic-kernel.md ├── 05-agent-orchestration.md └── Media │ ├── ai-agent-add-files.png │ ├── ai-agent-playground.png │ ├── ai-agent-setup.png │ ├── ai-foundry-agents-playground.png │ ├── ai-foundry-home.png │ └── ai-foundry-project.png ├── LICENSE ├── Labfiles ├── 01-agent-fundamentals │ └── Expenses_Policy.docx ├── 02-build-ai-agent │ └── Python │ │ ├── .env │ │ ├── agent.py │ │ ├── data.txt │ │ └── requirements.txt ├── 03-ai-agent-functions │ ├── Python │ │ ├── .env │ │ ├── agent.py │ │ ├── requirements.txt │ │ └── user_functions.py │ └── readme.md ├── 04-semantic-kernel │ └── python │ │ ├── .env │ │ ├── data.txt │ │ └── semantic-kernel.py ├── 05-agent-orchestration │ └── Python │ │ ├── .env │ │ ├── agent_chat.py │ │ └── sample_logs │ │ ├── log1.log │ │ ├── log2.log │ │ ├── log3.log │ │ └── log4.log └── update-python.sh ├── _build.yml ├── _config.yml ├── index.md └── readme.md /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Microsoft Learning Repositories 2 | 3 | MCT contributions are a key part of keeping the lab and demo content current as the Azure platform changes. We want to make it as easy as possible for you to contribute changes to the lab files. Here are a few guidelines to keep in mind as you contribute changes. 4 | 5 | ## GitHub Use & Purpose 6 | 7 | Microsoft Learning is using GitHub to publish the lab steps and lab scripts for courses that cover cloud services like Azure. Using GitHub allows the course’s authors and MCTs to keep the lab content current with Azure platform changes. Using GitHub allows the MCTs to provide feedback and suggestions for lab changes, and then the course authors can update lab steps and scripts quickly and relatively easily. 8 | 9 | > When you prepare to teach these courses, you should ensure that you are using the latest lab steps and scripts by downloading the appropriate files from GitHub. GitHub should not be used to discuss technical content in the course, or how to prep. It should only be used to address changes in the labs. 10 | 11 | It is strongly recommended that MCTs and Partners access these materials and in turn, provide them separately to students. Pointing students directly to GitHub to access Lab steps as part of an ongoing class will require them to access yet another UI as part of the course, contributing to a confusing experience for the student. An explanation to the student regarding why they are receiving separate Lab instructions can highlight the nature of an always-changing cloud-based interface and platform. Microsoft Learning support for accessing files on GitHub and support for navigation of the GitHub site is limited to MCTs teaching this course only. 12 | 13 | > As an alternative to pointing students directly to the GitHub repository, you can point students to the GitHub Pages website to view the lab instructions. The URL for the GitHub Pages website can be found at the top of the repository. 14 | 15 | To address general comments about the course and demos, or how to prepare for a course delivery, please use the existing MCT forums. 16 | 17 | ## Additional Resources 18 | 19 | A user guide has been provided for MCTs who are new to GitHub. 
It provides steps for connecting to GitHub, downloading and printing course materials, updating the scripts that students use in labs, and explaining how you can help ensure that this course’s content remains current. 20 | 21 | 22 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | # Module: 00 2 | ## Lab/Demo: 00 3 | ### Task: 00 4 | #### Step: 00 5 | 6 | Description of issue 7 | 8 | Repro steps: 9 | 10 | 1. 11 | 1. 12 | 1. -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | # Module: 00 2 | ## Lab/Demo: 00 3 | 4 | Fixes # . 5 | 6 | Changes proposed in this pull request: 7 | 8 | - 9 | - 10 | - -------------------------------------------------------------------------------- /Instructions/01-agent-fundamentals.md: -------------------------------------------------------------------------------- 1 | --- 2 | lab: 3 | title: 'Explore AI Agent development' 4 | description: 'Take your first steps in developing AI agents by exploring the Azure AI Agent service in the Azure AI Foundry portal.' 5 | --- 6 | 7 | # Explore AI Agent development 8 | 9 | In this exercise, you use the Azure AI Agent service in the Azure AI Foundry portal to create a simple AI agent that assists employees with expense claims. 10 | 11 | This exercise takes approximately **30** minutes. 12 | 13 | > **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors. 14 | 15 | ## Create an Azure AI Foundry project and agent 16 | 17 | Let's start by creating an Azure AI Foundry project. 18 | 19 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image (close the **Help** pane if it's open): 20 | 21 | ![Screenshot of Azure AI Foundry portal.](./Media/ai-foundry-home.png) 22 | 23 | 1. In the home page, select **Create an agent**. 24 | 1. When prompted to create a project, enter a valid name for your project. 25 | 1. Expand **Advanced options** and specify the following settings: 26 | - **Azure AI Foundry resource**: *A valid name for your Azure AI Foundry resource* 27 | - **Subscription**: *Your Azure subscription* 28 | - **Resource group**: *Select your resource group, or create a new one* 29 | - **Region**: *Select any **AI Services supported location***\* 30 | 31 | > \* Some Azure AI resources are constrained by regional model quotas. In the event of a quota limit being exceeded later in the exercise, there's a possibility you may need to create another resource in a different region. 32 | 33 | 1. Select **Create** and wait for your project to be created. 34 | 1. When your project is created, the Agents playground will be opened automatically so you can select or deploy a model: 35 | 36 | ![Screenshot of a Azure AI Foundry project Agents playground.](./Media/ai-foundry-agents-playground.png) 37 | 38 | >**Note**: A GPT-4o base model is automatically deployed when creating your Agent and project. 
39 | 40 | You'll see that an agent with a default name has been created for you, along with your base model deployment. 41 | 42 | ## Create your agent 43 | 44 | Now that you have a model deployed, you're ready to build an AI agent. In this exercise, you'll build a simple agent that answers questions based on a corporate expenses policy. You'll download the expenses policy document, and use it as *grounding* data for the agent. 45 | 46 | 1. Open another browser tab, and download [Expenses_policy.docx](https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/main/Labfiles/01-agent-fundamentals/Expenses_Policy.docx) from `https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/main/Labfiles/01-agent-fundamentals/Expenses_Policy.docx` and save it locally. This document contains details of the expenses policy for the fictional Contoso corporation. 47 | 1. Return to the browser tab containing the Foundry Agents playground, and find the **Setup** pane (it may be to the side or below the chat window). 48 | 1. Set the **Agent name** to `ExpensesAgent`, ensure that the gpt-4o model deployment you created previously is selected, and set the **Instructions** to: 49 | 50 | ```prompt 51 | You are an AI assistant for corporate expenses. 52 | You answer questions about expenses based on the expenses policy data. 53 | If a user wants to submit an expense claim, you get their email address, a description of the claim, and the amount to be claimed and write the claim details to a text file that the user can download. 54 | ``` 55 | 56 | ![Screenshot of the AI agent setup page in Azure AI Foundry portal.](./Media/ai-agent-setup.png) 57 | 58 | 1. Further down in the **Setup** pane, next to the **Knowledge** header, select **+ Add**. Then in the **Add knowledge** dialog box, select **Files**. 59 | 1. In the **Adding files** dialog box, create a new vector store named `Expenses_Vector_Store`, uploading and saving the **Expenses_policy.docx** local file that you downloaded previously. 60 | 1. In the **Setup** pane, in the **Knowledge** section, verify that **Expenses_Vector_Store** is listed and shown as containing 1 file. 61 | 1. Below the **Knowledge** section, next to **Actions**, select **+ Add**. Then in the **Add action** dialog box, select **Code interpreter** and then select **Save** (you do not need to upload any files for the code interpreter). 62 | 63 | Your agent will use the document you uploaded as its knowledge source to *ground* its responses (in other words, it will answer questions based on the contents of this document). It will use the code interpreter tool as required to perform actions by generating and running its own Python code. 64 | 65 | ## Test your agent 66 | 67 | Now that you've created an agent, you can test it in the playground chat. 68 | 69 | 1. In the playground chat entry, enter the prompt: `What's the maximum I can claim for meals?` and review the agent's response - which should be based on information in the expenses policy document you added as knowledge to the agent setup. 70 | 71 | > **Note**: If the agent fails to respond because the rate limit is exceeded. Wait a few seconds and try again. If there is insufficient quota available in your subscription, the model may not be able to respond. If the problem persists, try to increase the quota for your model on the **Models + endpoints** page. 72 | 73 | 1. Try the following follow-up prompt: `I'd like to submit a claim for a meal.` and review the response. 
The agent should ask you for the required information to submit a claim. 74 | 1. Provide the agent with an email address; for example, `fred@contoso.com`. The agent should acknowledge the response and request the remaining information required for the expense claim (description and amount) 75 | 1. Submit a prompt that describes the claim and the amount; for example, `Breakfast cost me $20`. 76 | 1. The agent should use the code interpreter to prepare the expense claim text file, and provide a link so you can download it. 77 | 78 | ![Screenshot of the Agent Playground in Azure AI Foundry portal.](./Media/ai-agent-playground.png) 79 | 80 | 1. Download and open the text document to see the expense claim details. 81 | 82 | ## Clean up 83 | 84 | Now that you've finished the exercise, you should delete the cloud resources you've created to avoid unnecessary resource usage. 85 | 86 | 1. Open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` and view the contents of the resource group where you deployed the hub resources used in this exercise. 87 | 1. On the toolbar, select **Delete resource group**. 88 | 1. Enter the resource group name and confirm that you want to delete it. 89 | -------------------------------------------------------------------------------- /Instructions/02-build-ai-agent.md: -------------------------------------------------------------------------------- 1 | --- 2 | lab: 3 | title: 'Develop an AI agent' 4 | description: 'Use the Azure AI Agent Service to develop an agent that uses built-in tools.' 5 | --- 6 | 7 | # Develop an AI agent 8 | 9 | In this exercise, you'll use Azure AI Agent Service to create a simple agent that analyzes data and creates charts. The agent uses the built-in *Code Interpreter* tool to dynamically generate the code required to create charts as images, and then saves the resulting chart images. 10 | 11 | This exercise should take approximately **30** minutes to complete. 12 | 13 | > **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors. 14 | 15 | ## Create an Azure AI Foundry project 16 | 17 | Let's start by creating an Azure AI Foundry project. 18 | 19 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image (close the **Help** pane if it's open): 20 | 21 | ![Screenshot of Azure AI Foundry portal.](./Media/ai-foundry-home.png) 22 | 23 | 1. In the home page, select **Create an agent**. 24 | 1. When prompted to create a project, enter a valid name for your project and expand **Advanced options**. 25 | 1. Confirm the following settings for your project: 26 | - **Azure AI Foundry resource**: *A valid name for your Azure AI Foundry resource* 27 | - **Subscription**: *Your Azure subscription* 28 | - **Resource group**: *Create or select a resource group* 29 | - **Region**: *Select any **AI Services supported location***\* 30 | 31 | > \* Some Azure AI resources are constrained by regional model quotas. In the event of a quota limit being exceeded later in the exercise, there's a possibility you may need to create another resource in a different region. 32 | 33 | 1. 
Select **Create** and wait for your project to be created. 34 | 1. When your project is created, the Agents playground will be opened automatically so you can select or deploy a model: 35 | 36 | ![Screenshot of a Azure AI Foundry project Agents playground.](./Media/ai-foundry-agents-playground.png) 37 | 38 | >**Note**: A GPT-4o base model is automatically deployed when creating your Agent and project. 39 | 40 | 1. In the navigation pane on the left, select **Overview** to see the main page for your project; which looks like this: 41 | 42 | > **Note**: If an *Insufficient permissions** error is displayed, use the **Fix me** button to resolve it. 43 | 44 | ![Screenshot of a Azure AI Foundry project overview page.](./Media/ai-foundry-project.png) 45 | 46 | 1. Copy the **Azure AI Foundry project endpoint** values to a notepad, as you'll use them to connect to your project in a client application. 47 | 48 | ## Create an agent client app 49 | 50 | Now you're ready to create a client app that uses an agent. Some code has been provided for you in a GitHub repository. 51 | 52 | ### Clone the repo containing the application code 53 | 54 | 1. Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`; signing in with your Azure credentials if prompted. 55 | 56 | Close any welcome notifications to see the Azure portal home page. 57 | 58 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment with no storage in your subscription. 59 | 60 | The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in. 61 | 62 | > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***. 63 | 64 | 1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor). 65 | 66 | **Ensure you've switched to the classic version of the cloud shell before continuing.** 67 | 68 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text): 69 | 70 | ``` 71 | rm -r ai-agents -f 72 | git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents 73 | ``` 74 | 75 | > **Tip**: As you enter commands into the cloudshell, the output may take up a large amount of the screen buffer and the cursor on the current line may be obscured. You can clear the screen by entering the `cls` command to make it easier to focus on each task. 76 | 77 | 1. Enter the following command to change the working directory to the folder containing the code files and list them all. 78 | 79 | ``` 80 | cd ai-agents/Labfiles/02-build-ai-agent/Python 81 | ls -a -l 82 | ``` 83 | 84 | The provided files include application code, configuration settings, and data. 85 | 86 | ### Configure the application settings 87 | 88 | 1. In the cloud shell command-line pane, enter the following command to install the libraries you'll use: 89 | 90 | ``` 91 | python -m venv labenv 92 | ./labenv/bin/Activate.ps1 93 | pip install -r requirements.txt azure-ai-projects 94 | ``` 95 | 96 | 1. 
Enter the following command to edit the configuration file that has been provided: 97 | 98 | ``` 99 | code .env 100 | ``` 101 | 102 | The file is opened in a code editor. 103 | 104 | 1. In the code file, replace the **your_project_endpoint** placeholder with the endpoint for your project (copied from the project **Overview** page in the Azure AI Foundry portal). 105 | 1. After you've replaced the placeholder, use the **CTRL+S** command to save your changes and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open. 106 | 107 | ### Write code for an agent app 108 | 109 | > **Tip**: As you add code, be sure to maintain the correct indentation. Use the comment indentation levels as a guide. 110 | 111 | 1. Enter the following command to edit the code file that has been provided: 112 | 113 | ``` 114 | code agent.py 115 | ``` 116 | 117 | 1. Review the existing code, which retrieves the application configuration settings and loads data from *data.txt* to be analyzed. The rest of the file includes comments where you'll add the necessary code to implement your data analysis agent. 118 | 1. Find the comment **Add references** and add the following code to import the classes you'll need to build an Azure AI agent that uses the built-in code interpreter tool: 119 | 120 | ```python 121 | # Add references 122 | from azure.identity import DefaultAzureCredential 123 | from azure.ai.agents import AgentsClient 124 | from azure.ai.agents.models import FilePurpose, CodeInterpreterTool, ListSortOrder, MessageRole 125 | ``` 126 | 127 | 1. Find the comment **Connect to the Agent client** and add the following code to connect to the Azure AI project. 128 | 129 | > **Tip**: Be careful to maintain the correct indentation level. 130 | 131 | ```python 132 | # Connect to the Agent client 133 | agent_client = AgentsClient( 134 | endpoint=project_endpoint, 135 | credential=DefaultAzureCredential 136 | (exclude_environment_credential=True, 137 | exclude_managed_identity_credential=True) 138 | ) 139 | with agent_client: 140 | ``` 141 | 142 | The code connects to the Azure AI Foundry project using the current Azure credentials. The final *with agent_client* statement starts a code block that defines the scope of the client, ensuring it's cleaned up when the code within the block is finished. 143 | 144 | 1. Find the comment **Upload the data file and create a CodeInterpreterTool**, within the *with agent_client* block, and add the following code to upload the data file to the project and create a CodeInterpreterTool that can access the data in it: 145 | 146 | ```python 147 | # Upload the data file and create a CodeInterpreterTool 148 | file = agent_client.files.upload_and_poll( 149 | file_path=file_path, purpose=FilePurpose.AGENTS 150 | ) 151 | print(f"Uploaded {file.filename}") 152 | 153 | code_interpreter = CodeInterpreterTool(file_ids=[file.id]) 154 | ``` 155 | 156 | 1. Find the comment **Define an agent that uses the CodeInterpreterTool** and add the following code to define an AI agent that analyzes data and can use the code interpreter tool you defined previously: 157 | 158 | ```python 159 | # Define an agent that uses the CodeInterpreterTool 160 | agent = agent_client.create_agent( 161 | model=model_deployment, 162 | name="data-agent", 163 | instructions="You are an AI agent that analyzes the data in the file that has been uploaded. 
If the user requests a chart, create it and save it as a .png file.", 164 | tools=code_interpreter.definitions, 165 | tool_resources=code_interpreter.resources, 166 | ) 167 | print(f"Using agent: {agent.name}") 168 | ``` 169 | 170 | 1. Find the comment **Create a thread for the conversation** and add the following code to start a thread on which the chat session with the agent will run: 171 | 172 | ```python 173 | # Create a thread for the conversation 174 | thread = agent_client.threads.create() 175 | ``` 176 | 177 | 1. Note that the next section of code sets up a loop for a user to enter a prompt, ending when the user enters "quit". 178 | 179 | 1. Find the comment **Send a prompt to the agent** and add the following code to add a user message to the prompt (along with the data from the file that was loaded previously), and then run thread with the agent. 180 | 181 | ```python 182 | # Send a prompt to the agent 183 | message = agent_client.messages.create( 184 | thread_id=thread.id, 185 | role="user", 186 | content=user_prompt, 187 | ) 188 | 189 | run = agent_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) 190 | 191 | # Check the run status for failures 192 | if run.status == "failed": 193 | print(f"Run failed: {run.last_error}") 194 | ``` 195 | 196 | 1. Find the comment **Show the latest response from the agent** and add the following code to retrieve the messages from the completed thread and display the last one that was sent by the agent. 197 | 198 | ```python 199 | # Show the latest response from the agent 200 | last_msg = agent_client.messages.get_last_message_text_by_role( 201 | thread_id=thread.id, 202 | role=MessageRole.AGENT, 203 | ) 204 | if last_msg: 205 | print(f"Last Message: {last_msg.text.value}") 206 | ``` 207 | 208 | 1. Find the comment **Get the conversation history**, which is after the loop ends, and add the following code to print out the messages from the conversation thread; reversing the order to show them in chronological sequence 209 | 210 | ```python 211 | # Get the conversation history 212 | print("\nConversation Log:\n") 213 | messages = agent_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) 214 | for message in messages: 215 | if message.text_messages: 216 | last_msg = message.text_messages[-1] 217 | print(f"{message.role}: {last_msg.text.value}\n") 218 | ``` 219 | 220 | 1. Find the comment **Get any generated files** and add the following code to get any file path annotations from the messages (which indicate that the agent saved a file in its internal storage) and copy the files to the app folder. _NOTE_: Currently the image contents are not available by the system. 221 | 222 | ```python 223 | # Get any generated files 224 | for msg in messages: 225 | # Save every image file in the message 226 | for img in msg.image_contents: 227 | file_id = img.image_file.file_id 228 | file_name = f"{file_id}_image_file.png" 229 | agent_client.files.save(file_id=file_id, file_name=file_name) 230 | print(f"Saved image file to: {Path.cwd() / file_name}") 231 | ``` 232 | 233 | 1. Find the comment **Clean up** and add the following code to delete the agent and thread when no longer needed. 234 | 235 | ```python 236 | # Clean up 237 | agent_client.delete_agent(agent.id) 238 | ``` 239 | 240 | 1. Review the code, using the comments to understand how it: 241 | - Connects to the AI Foundry project. 242 | - Uploads the data file and creates a code interpreter tool that can access it. 
243 | - Creates a new agent that uses the code interpreter tool and has explicit instructions to analyze the data and create charts as .png files. 244 | - Runs a thread with a prompt message from the user along with the data to be analyzed. 245 | - Checks the status of the run in case there's a failure 246 | - Retrieves the messages from the completed thread and displays the last one sent by the agent. 247 | - Displays the conversation history 248 | - Saves each file that was generated. 249 | - Deletes the agent and thread when they're no longer required. 250 | 251 | 1. Save the code file (*CTRL+S*) when you have finished. You can also close the code editor (*CTRL+Q*); though you may want to keep it open in case you need to make any edits to the code you added. In either case, keep the cloud shell command-line pane open. 252 | 253 | ### Sign into Azure and run the app 254 | 255 | 1. In the cloud shell command-line pane, enter the following command to sign into Azure. 256 | 257 | ``` 258 | az login 259 | ``` 260 | 261 | **You must sign into Azure - even though the cloud shell session is already authenticated.** 262 | 263 | > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details. 264 | 265 | 1. When prompted, follow the instructions to open the sign-in page in a new tab and enter the authentication code provided and your Azure credentials. Then complete the sign in process in the command line, selecting the subscription containing your Azure AI Foundry hub if prompted. 266 | 1. After you have signed in, enter the following command to run the application: 267 | 268 | ``` 269 | python agent.py 270 | ``` 271 | 272 | The application runs using the credentials for your authenticated Azure session to connect to your project and create and run the agent. 273 | 274 | 1. When prompted, view the data that the app has loaded from the *data.txt* text file. Then enter a prompt such as: 275 | 276 | ``` 277 | What's the category with the highest cost? 278 | ``` 279 | 280 | > **Tip**: If the app fails because the rate limit is exceeded. Wait a few seconds and try again. If there is insufficient quota available in your subscription, the model may not be able to respond. 281 | 282 | 1. View the response. Then enter another prompt, this time requesting a chart: 283 | 284 | ``` 285 | Create a pie chart showing cost by category 286 | ``` 287 | 288 | The agent should selectively use the code interpreter tool as required, in this case to create a chart based on your request. 289 | 290 | 1. You can continue the conversation if you like. The thread is *stateful*, so it retains the conversation history - meaning that the agent has the full context for each response. Enter `quit` when you're done. 291 | 1. Review the conversation messages that were retrieved from the thread, and the files that were generated. 292 | 293 | 1. When the application has finished, use the cloud shell **download** command to download each .png file that was saved in the app folder. For example: 294 | 295 | ``` 296 | download ./.png 297 | ``` 298 | 299 | The download command creates a popup link at the bottom right of your browser, which you can select to download and open the file. 
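
If you're not sure which file names to pass to the **download** command, remember that the code you added saves each image using the `f"{file_id}_image_file.png"` naming pattern, so every generated chart ends with `_image_file.png`. The following optional Python snippet is a minimal check (not part of the exercise code) that lists the generated charts when run from the app folder; you can also simply run `ls` in the cloud shell to see the saved files.

```python
# Optional check: list the chart images the agent saved in the current folder.
# Assumes the f"{file_id}_image_file.png" naming used in the agent.py code above.
from pathlib import Path

for image in sorted(Path.cwd().glob("*_image_file.png")):
    print(image.name)
```
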
300 | 301 | ## Summary 302 | 303 | In this exercise, you used the Azure AI Agent Service SDK to create a client application that uses an AI agent. The agent uses the built-in Code Interpreter tool to run dynamic code that creates images. 304 | 305 | ## Clean up 306 | 307 | If you've finished exploring Azure AI Agent Service, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs. 308 | 309 | 1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` in a new browser tab) and view the contents of the resource group where you deployed the resources used in this exercise. 310 | 1. On the toolbar, select **Delete resource group**. 311 | 1. Enter the resource group name and confirm that you want to delete it. 312 | -------------------------------------------------------------------------------- /Instructions/03-agent-custom-functions.md: -------------------------------------------------------------------------------- 1 | --- 2 | lab: 3 | title: 'Use a custom function in an AI agent' 4 | description: 'Learn how to use functions to add custom capabilities to your agents.' 5 | --- 6 | 7 | # Use a custom function in an AI agent 8 | 9 | In this exercise you'll explore creating an agent that can use custom functions as a tool to complete tasks. 10 | 11 | You'll build a simple technical support agent that can collect details of a technical problem and generate a support ticket. 12 | 13 | This exercise should take approximately **30** minutes to complete. 14 | 15 | > **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors. 16 | 17 | ## Create an Azure AI Foundry project 18 | 19 | Let's start by creating an Azure AI Foundry project. 20 | 21 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image (close the **Help** pane if it's open): 22 | 23 | ![Screenshot of Azure AI Foundry portal.](./Media/ai-foundry-home.png) 24 | 25 | 1. In the home page, select **Create an agent**. 26 | 1. When prompted to create a project, enter a valid name for your project and expand **Advanced options**. 27 | 1. Confirm the following settings for your project: 28 | - **Azure AI Foundry resource**: *A valid name for your Azure AI Foundry resource* 29 | - **Subscription**: *Your Azure subscription* 30 | - **Resource group**: *Create or select a resource group* 31 | - **Region**: *Select any **AI Services supported location***\* 32 | 33 | > \* Some Azure AI resources are constrained by regional model quotas. In the event of a quota limit being exceeded later in the exercise, there's a possibility you may need to create another resource in a different region. 34 | 35 | 1. Select **Create** and wait for your project to be created. 36 | 1. When your project is created, the Agents playground will be opened automatically so you can select or deploy a model: 37 | 38 | ![Screenshot of a Azure AI Foundry project Agents playground.](./Media/ai-foundry-agents-playground.png) 39 | 40 | >**Note**: A GPT-4o base model is automatically deployed when creating your Agent and project. 41 | 42 | 1. 
In the navigation pane on the left, select **Overview** to see the main page for your project; which looks like this: 43 | 44 | > **Note**: If an *Insufficient permissions** error is displayed, use the **Fix me** button to resolve it. 45 | 46 | ![Screenshot of a Azure AI Foundry project overview page.](./Media/ai-foundry-project.png) 47 | 48 | 1. Copy the **Azure AI Foundry project endpoint** value to a notepad, as you'll use it to connect to your project in a client application. 49 | 50 | ## Develop an agent that uses function tools 51 | 52 | Now that you've created your project in AI Foundry, let's develop an app that implements an agent using custom function tools. 53 | 54 | ### Clone the repo containing the application code 55 | 56 | 1. Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`; signing in with your Azure credentials if prompted. 57 | 58 | Close any welcome notifications to see the Azure portal home page. 59 | 60 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment with no storage in your subscription. 61 | 62 | The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in. 63 | 64 | > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***. 65 | 66 | 1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor). 67 | 68 | **Ensure you've switched to the classic version of the cloud shell before continuing.** 69 | 70 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text): 71 | 72 | ``` 73 | rm -r ai-agents -f 74 | git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents 75 | ``` 76 | 77 | > **Tip**: As you enter commands into the cloudshell, the output may take up a large amount of the screen buffer and the cursor on the current line may be obscured. You can clear the screen by entering the `cls` command to make it easier to focus on each task. 78 | 79 | 1. Enter the following command to change the working directory to the folder containing the code files and list them all. 80 | 81 | ``` 82 | cd ai-agents/Labfiles/03-ai-agent-functions/Python 83 | ls -a -l 84 | ``` 85 | 86 | The provided files include application code and a file for configuration settings. 87 | 88 | ### Configure the application settings 89 | 90 | 1. In the cloud shell command-line pane, enter the following command to install the libraries you'll use: 91 | 92 | ``` 93 | python -m venv labenv 94 | ./labenv/bin/Activate.ps1 95 | pip install -r requirements.txt azure-ai-projects 96 | ``` 97 | 98 | >**Note:** You can ignore any warning or error messages displayed during the library installation. 99 | 100 | 1. Enter the following command to edit the configuration file that has been provided: 101 | 102 | ``` 103 | code .env 104 | ``` 105 | 106 | The file is opened in a code editor. 107 | 108 | 1. 
In the code file, replace the **your_project_endpoint** placeholder with the endpoint for your project (copied from the project **Overview** page in the Azure AI Foundry portal). 109 | 1. After you've replaced the placeholder, use the **CTRL+S** command to save your changes and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open. 110 | 111 | ### Define a custom function 112 | 113 | 1. Enter the following command to edit the code file that has been provided for your function code: 114 | 115 | ``` 116 | code user_functions.py 117 | ``` 118 | 119 | 1. Find the comment **Create a function to submit a support ticket** and add the following code, which generates a ticket number and saves a support ticket as a text file. 120 | 121 | ```python 122 | # Create a function to submit a support ticket 123 | def submit_support_ticket(email_address: str, description: str) -> str: 124 | script_dir = Path(__file__).parent # Get the directory of the script 125 | ticket_number = str(uuid.uuid4()).replace('-', '')[:6] 126 | file_name = f"ticket-{ticket_number}.txt" 127 | file_path = script_dir / file_name 128 | text = f"Support ticket: {ticket_number}\nSubmitted by: {email_address}\nDescription:\n{description}" 129 | file_path.write_text(text) 130 | 131 | message_json = json.dumps({"message": f"Support ticket {ticket_number} submitted. The ticket file is saved as {file_name}"}) 132 | return message_json 133 | ``` 134 | 135 | 1. Find the comment **Define a set of callable functions** and add the following code, which statically defines a set of callable functions in this code file (in this case, there's only one - but in a real solution you may have multiple functions that your agent can call): 136 | 137 | ```python 138 | # Define a set of callable functions 139 | user_functions: Set[Callable[..., Any]] = { 140 | submit_support_ticket 141 | } 142 | ``` 143 | 1. Save the file (*CTRL+S*). 144 | 145 | ### Write code to implement an agent that can use your function 146 | 147 | 1. Enter the following command to begin editing the agent code. 148 | 149 | ``` 150 | code agent.py 151 | ``` 152 | 153 | > **Tip**: As you add code to the code file, be sure to maintain the correct indentation. 154 | 155 | 1. Review the existing code, which retrieves the application configuration settings and sets up a loop in which the user can enter prompts for the agent. The rest of the file includes comments where you'll add the necessary code to implement your technical support agent. 156 | 1. Find the comment **Add references** and add the following code to import the classes you'll need to build an Azure AI agent that uses your function code as a tool: 157 | 158 | ```python 159 | # Add references 160 | from azure.identity import DefaultAzureCredential 161 | from azure.ai.agents import AgentsClient 162 | from azure.ai.agents.models import FunctionTool, ToolSet, ListSortOrder, MessageRole 163 | from user_functions import user_functions 164 | ``` 165 | 166 | 1. Find the comment **Connect to the Agent client** and add the following code to connect to the Azure AI project using the current Azure credentials. 167 | 168 | > **Tip**: Be careful to maintain the correct indentation level. 169 | 170 | ```python 171 | # Connect to the Agent client 172 | agent_client = AgentsClient( 173 | endpoint=project_endpoint, 174 | credential=DefaultAzureCredential 175 | (exclude_environment_credential=True, 176 | exclude_managed_identity_credential=True) 177 | ) 178 | ``` 179 | 180 | 1. 
Find the comment **Define an agent that can use the custom functions** section, and add the following code to add your function code to a toolset, and then create an agent that can use the toolset and a thread on which to run the chat session. 181 | 182 | ```python 183 | # Define an agent that can use the custom functions 184 | with agent_client: 185 | 186 | functions = FunctionTool(user_functions) 187 | toolset = ToolSet() 188 | toolset.add(functions) 189 | agent_client.enable_auto_function_calls(toolset) 190 | 191 | agent = agent_client.create_agent( 192 | model=model_deployment, 193 | name="support-agent", 194 | instructions="""You are a technical support agent. 195 | When a user has a technical issue, you get their email address and a description of the issue. 196 | Then you use those values to submit a support ticket using the function available to you. 197 | If a file is saved, tell the user the file name. 198 | """, 199 | toolset=toolset 200 | ) 201 | 202 | thread = agent_client.threads.create() 203 | print(f"You're chatting with: {agent.name} ({agent.id})") 204 | 205 | ``` 206 | 207 | 1. Find the comment **Send a prompt to the agent** and add the following code to add the user's prompt as a message and run the thread. 208 | 209 | ```python 210 | # Send a prompt to the agent 211 | message = agent_client.messages.create( 212 | thread_id=thread.id, 213 | role="user", 214 | content=user_prompt 215 | ) 216 | run = agent_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) 217 | ``` 218 | 219 | > **Note**: Using the **create_and_process** method to run the thread enables the agent to automatically find your functions and choose to use them based on their names and parameters. As an alternative, you could use the **create_run** method, in which case you would be responsible for writing code to poll for run status to determine when a function call is required, call the function, and return the results to the agent. 220 | 221 | 1. Find the comment **Check the run status for failures** and add the following code to show any errors that occur. 222 | 223 | ```python 224 | # Check the run status for failures 225 | if run.status == "failed": 226 | print(f"Run failed: {run.last_error}") 227 | ``` 228 | 229 | 1. Find the comment **Show the latest response from the agent** and add the following code to retrieve the messages from the completed thread and display the last one that was sent by the agent. 230 | 231 | ```python 232 | # Show the latest response from the agent 233 | last_msg = agent_client.messages.get_last_message_text_by_role( 234 | thread_id=thread.id, 235 | role=MessageRole.AGENT, 236 | ) 237 | if last_msg: 238 | print(f"Last Message: {last_msg.text.value}") 239 | ``` 240 | 241 | 1. Find the comment **Get the conversation history** and add the following code to print out the messages from the conversation thread; ordering them in chronological sequence 242 | 243 | ```python 244 | # Get the conversation history 245 | print("\nConversation Log:\n") 246 | messages = agent_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) 247 | for message in messages: 248 | if message.text_messages: 249 | last_msg = message.text_messages[-1] 250 | print(f"{message.role}: {last_msg.text.value}\n") 251 | ``` 252 | 253 | 1. Find the comment **Clean up** and add the following code to delete the agent and thread when no longer needed. 254 | 255 | ```python 256 | # Clean up 257 | agent_client.delete_agent(agent.id) 258 | print("Deleted agent") 259 | ``` 260 | 261 | 1. 
Review the code, using the comments to understand how it: 262 | - Adds your set of custom functions to a toolset 263 | - Creates an agent that uses the toolset. 264 | - Runs a thread with a prompt message from the user. 265 | - Checks the status of the run in case there's a failure 266 | - Retrieves the messages from the completed thread and displays the last one sent by the agent. 267 | - Displays the conversation history 268 | - Deletes the agent and thread when they're no longer required. 269 | 270 | 1. Save the code file (*CTRL+S*) when you have finished. You can also close the code editor (*CTRL+Q*); though you may want to keep it open in case you need to make any edits to the code you added. In either case, keep the cloud shell command-line pane open. 271 | 272 | ### Sign into Azure and run the app 273 | 274 | 1. In the cloud shell command-line pane, enter the following command to sign into Azure. 275 | 276 | ``` 277 | az login 278 | ``` 279 | 280 | **You must sign into Azure - even though the cloud shell session is already authenticated.** 281 | 282 | > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details. 283 | 284 | 1. When prompted, follow the instructions to open the sign-in page in a new tab and enter the authentication code provided and your Azure credentials. Then complete the sign in process in the command line, selecting the subscription containing your Azure AI Foundry hub if prompted. 285 | 1. After you have signed in, enter the following command to run the application: 286 | 287 | ``` 288 | python agent.py 289 | ``` 290 | 291 | The application runs using the credentials for your authenticated Azure session to connect to your project and create and run the agent. 292 | 293 | 1. When prompted, enter a prompt such as: 294 | 295 | ``` 296 | I have a technical problem 297 | ``` 298 | 299 | > **Tip**: If the app fails because the rate limit is exceeded. Wait a few seconds and try again. If there is insufficient quota available in your subscription, the model may not be able to respond. 300 | 301 | 1. View the response. The agent may ask for your email address and a description of the issue. You can use any email address (for example, `alex@contoso.com`) and any issue description (for example `my computer won't start`) 302 | 303 | When it has enough information, the agent should choose to use your function as required. 304 | 305 | 1. You can continue the conversation if you like. The thread is *stateful*, so it retains the conversation history - meaning that the agent has the full context for each response. Enter `quit` when you're done. 306 | 1. Review the conversation messages that were retrieved from the thread, and the tickets that were generated. 307 | 1. The tool should have saved support tickets in the app folder. You can use the `ls` command to check, and then use the `cat` command to view the file contents, like this: 308 | 309 | ``` 310 | cat ticket-.txt 311 | ``` 312 | 313 | ## Clean up 314 | 315 | Now that you've finished the exercise, you should delete the cloud resources you've created to avoid unnecessary resource usage. 316 | 317 | 1. 
Open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` and view the contents of the resource group where you deployed the hub resources used in this exercise. 318 | 1. On the toolbar, select **Delete resource group**. 319 | 1. Enter the resource group name and confirm that you want to delete it. 320 | -------------------------------------------------------------------------------- /Instructions/04-semantic-kernel.md: -------------------------------------------------------------------------------- 1 | --- 2 | lab: 3 | title: 'Develop an Azure AI agent with the Semantic Kernel SDK' 4 | description: 'Learn how to use the Semantic Kernel SDK to create and use an Azure AI Agent Service agent.' 5 | --- 6 | 7 | # Develop an Azure AI agent with the Semantic Kernel SDK 8 | 9 | In this exercise, you'll use Azure AI Agent Service and Semantic Kernel to create an AI agent that processes expense claims. 10 | 11 | This exercise should take approximately **30** minutes to complete. 12 | 13 | > **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors. 14 | 15 | ## Deploy a model in an Azure AI Foundry project 16 | 17 | Let's start by deploying a model in an Azure AI Foundry project. 18 | 19 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image (close the **Help** pane if it's open): 20 | 21 | ![Screenshot of Azure AI Foundry portal.](./Media/ai-foundry-home.png) 22 | 23 | 1. On the home page, in the **Explore models and capabilities** section, search for the `gpt-4o` model; which we'll use in our project. 24 | 1. In the search results, select the **gpt-4o** model to see its details, and then at the top of the page for the model, select **Use this model**. 25 | 1. When prompted to create a project, enter a valid name for your project and expand **Advanced options**. 26 | 1. Confirm the following settings for your project: 27 | - **Azure AI Foundry resource**: *A valid name for your Azure AI Foundry resource* 28 | - **Subscription**: *Your Azure subscription* 29 | - **Resource group**: *Create or select a resource group* 30 | - **Region**: *Select any **AI Services supported location***\* 31 | 32 | > \* Some Azure AI resources are constrained by regional model quotas. In the event of a quota limit being exceeded later in the exercise, there's a possibility you may need to create another resource in a different region. 33 | 34 | 1. Select **Create** and wait for your project, including the gpt-4 model deployment you selected, to be created. 35 | 1. When your project is created, the chat playground will be opened automatically. 36 | 1. In the **Setup** pane, note the name of your model deployment; which should be **gpt-4o**. You can confirm this by viewing the deployment in the **Models and endpoints** page (just open that page in the navigation pane on the left). 37 | 1. In the navigation pane on the left, select **Overview** to see the main page for your project; which looks like this: 38 | 39 | > **Note**: If an *Insufficient permissions** error is displayed, use the **Fix me** button to resolve it. 
40 | 41 | ![Screenshot of a Azure AI project details in Azure AI Foundry portal.](./Media/ai-foundry-project.png) 42 | 43 | ## Create an agent client app 44 | 45 | Now you're ready to create a client app that defines an agent and a custom function. Some code has been provided for you in a GitHub repository. 46 | 47 | ### Prepare the environment 48 | 49 | 1. Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`; signing in with your Azure credentials if prompted. 50 | 51 | Close any welcome notifications to see the Azure portal home page. 52 | 53 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment with no storage in your subscription. 54 | 55 | The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in. 56 | 57 | > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***. 58 | 59 | 1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor). 60 | 61 | **Ensure you've switched to the classic version of the cloud shell before continuing.** 62 | 63 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text): 64 | 65 | ``` 66 | rm -r ai-agents -f 67 | git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents 68 | ``` 69 | 70 | > **Tip**: As you enter commands into the cloudshell, the output may take up a large amount of the screen buffer and the cursor on the current line may be obscured. You can clear the screen by entering the `cls` command to make it easier to focus on each task. 71 | 72 | 1. When the repo has been cloned, enter the following command to change the working directory to the folder containing the code files and list them all. 73 | 74 | ``` 75 | cd ai-agents/Labfiles/04-semantic-kernel/python 76 | ls -a -l 77 | ``` 78 | 79 | The provided files include application code a file for configuration settings, and a file containing expenses data. 80 | 81 | ### Configure the application settings 82 | 83 | 1. In the cloud shell command-line pane, enter the following command to install the libraries you'll use: 84 | 85 | ``` 86 | python -m venv labenv 87 | ./labenv/bin/Activate.ps1 88 | pip install python-dotenv azure-identity semantic-kernel[azure] 89 | ``` 90 | 91 | > **Note**: Installing *semantic-kernel[azure]* autmatically installs a semantic kernel-compatible version of *azure-ai-projects*. 92 | 93 | 1. Enter the following command to edit the configuration file that has been provided: 94 | 95 | ``` 96 | code .env 97 | ``` 98 | 99 | The file is opened in a code editor. 100 | 101 | 1. In the code file, replace the **your_project_endpoint** placeholder with the endpoint for your project (copied from the project **Overview** page in the Azure AI Foundry portal), and the **your_model_deployment** placeholder with the name you assigned to your gpt-4o model deployment. 102 | 1. 
After you've replaced the placeholders, use the **CTRL+S** command to save your changes and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open. 103 | 104 | ### Write code for an agent app 105 | 106 | > **Tip**: As you add code, be sure to maintain the correct indentation. Use the existing comments as a guide, entering the new code at the same level of indentation. 107 | 108 | 1. Enter the following command to edit the agent code file that has been provided: 109 | 110 | ``` 111 | code semantic-kernel.py 112 | ``` 113 | 114 | 1. Review the code in the file. It contains: 115 | - Some **import** statements to add references to commonly used namespaces 116 | - A *main* function that loads a file containing expenses data, asks the user for instructions, and and then calls... 117 | - A **process_expenses_data** function in which the code to create and use your agent must be added 118 | - An **EmailPlugin** class that includes a kernel function named **send_email**; which will be used by your agent to simulate the functionality used to send an email. 119 | 120 | 1. At the top of the file, after the existing **import** statement, find the comment **Add references**, and add the following code to reference the namespaces in the libraries you'll need to implement your agent: 121 | 122 | ```python 123 | # Add references 124 | from dotenv import load_dotenv 125 | from azure.identity.aio import DefaultAzureCredential 126 | from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings, AzureAIAgentThread 127 | from semantic_kernel.functions import kernel_function 128 | from typing import Annotated 129 | ``` 130 | 131 | 1. Near the bottom of the file, find the comment **Create a Plugin for the email functionality**, and add the following code to define a class for a plugin containing a function that your agent will use to send email (plug-ins are a way to add custom functionality to Semantic Kernel agents) 132 | 133 | ```python 134 | # Create a Plugin for the email functionality 135 | class EmailPlugin: 136 | """A Plugin to simulate email functionality.""" 137 | 138 | @kernel_function(description="Sends an email.") 139 | def send_email(self, 140 | to: Annotated[str, "Who to send the email to"], 141 | subject: Annotated[str, "The subject of the email."], 142 | body: Annotated[str, "The text body of the email."]): 143 | print("\nTo:", to) 144 | print("Subject:", subject) 145 | print(body, "\n") 146 | ``` 147 | 148 | > **Note**: The function *simulates* sending an email by printing it to the console. In a real application, you'd use an SMTP service or similar to actually send the email! 149 | 150 | 1. Back up above the new **EmailPlugin** class code, in the **create_expense_claim** function, find the comment **Get configuration settings**, and add the following code to load the configuration file and create an **AzureAIAgentSettings** object (which will automatically include the Azure AI Agent settings from the configuration). 151 | 152 | (Be sure to maintain the indentation level) 153 | 154 | ```python 155 | # Get configuration settings 156 | load_dotenv() 157 | ai_agent_settings = AzureAIAgentSettings() 158 | ``` 159 | 160 | 1. Find the comment **Connect to the Azure AI Foundry project**, and add the following code to connect to your Azure AI Foundry project using the Azure credentials you're currently signed in with. 
161 | 162 | (Be sure to maintain the indentation level) 163 | 164 | ```python 165 | # Connect to the Azure AI Foundry project 166 | async with ( 167 | DefaultAzureCredential( 168 | exclude_environment_credential=True, 169 | exclude_managed_identity_credential=True) as creds, 170 | AzureAIAgent.create_client( 171 | credential=creds 172 | ) as project_client, 173 | ): 174 | ``` 175 | 176 | 1. Find the comment **Define an Azure AI agent that sends an expense claim email**, and add the following code to create an Azure AI Agent definition for your agent. 177 | 178 | (Be sure to maintain the indentation level) 179 | 180 | ```python 181 | # Define an Azure AI agent that sends an expense claim email 182 | expenses_agent_def = await project_client.agents.create_agent( 183 | model= ai_agent_settings.model_deployment_name, 184 | name="expenses_agent", 185 | instructions="""You are an AI assistant for expense claim submission. 186 | When a user submits expenses data and requests an expense claim, use the plug-in function to send an email to expenses@contoso.com with the subject 'Expense Claim`and a body that contains itemized expenses with a total. 187 | Then confirm to the user that you've done so.""" 188 | ) 189 | ``` 190 | 191 | 1. Find the comment **Create a semantic kernel agent**, and add the following code to create a semantic kernel agent object for your Azure AI agent, and includes a reference to the **EmailPlugin** plugin. 192 | 193 | (Be sure to maintain the indentation level) 194 | 195 | ```python 196 | # Create a semantic kernel agent 197 | expenses_agent = AzureAIAgent( 198 | client=project_client, 199 | definition=expenses_agent_def, 200 | plugins=[EmailPlugin()] 201 | ) 202 | ``` 203 | 204 | 1. Find the comment **Use the agent to process the expenses data**, and add the following code to create a thread for your agent to run on, and then invoke it with a chat message. 205 | 206 | (Be sure to maintain the indentation level): 207 | 208 | ```python 209 | # Use the agent to process the expenses data 210 | thread: AzureAIAgentThread = AzureAIAgentThread(client=project_client) 211 | try: 212 | # Add the input prompt to a list of messages to be submitted 213 | prompt_messages = [f"{prompt}: {expenses_data}"] 214 | # Invoke the agent for the specified thread with the messages 215 | response = await expenses_agent.get_response(thread_id=thread.id, messages=prompt_messages) 216 | # Display the response 217 | print(f"\n# {response.name}:\n{response}") 218 | except Exception as e: 219 | # Something went wrong 220 | print (e) 221 | finally: 222 | # Cleanup: Delete the thread and agent 223 | await thread.delete() if thread else None 224 | await project_client.agents.delete_agent(expenses_agent.id) 225 | ``` 226 | 227 | 1. Review that the completed code for your agent, using the comments to help you understand what each block of code does, and then save your code changes (**CTRL+S**). 228 | 1. Keep the code editor open in case you need to correct any typo's in the code, but resize the panes so you can see more of the command line console. 229 | 230 | ### Sign into Azure and run the app 231 | 232 | 1. In the cloud shell command-line pane beneath the code editor, enter the following command to sign into Azure. 233 | 234 | ``` 235 | az login 236 | ``` 237 | 238 | **You must sign into Azure - even though the cloud shell session is already authenticated.** 239 | 240 | > **Note**: In most scenarios, just using *az login* will be sufficient. 
However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details. 241 | 242 | 1. When prompted, follow the instructions to open the sign-in page in a new tab and enter the authentication code provided and your Azure credentials. Then complete the sign in process in the command line, selecting the subscription containing your Azure AI Foundry hub if prompted. 243 | 1. After you have signed in, enter the following command to run the application: 244 | 245 | ``` 246 | python semantic-kernel.py 247 | ``` 248 | 249 | The application runs using the credentials for your authenticated Azure session to connect to your project and create and run the agent. 250 | 251 | 1. When asked what to do with the expenses data, enter the following prompt: 252 | 253 | ``` 254 | Submit an expense claim 255 | ``` 256 | 257 | 1. When the application has finished, review the output. The agent should have composed an email for an expense claim based on the data that was provided. 258 | 259 | > **Tip**: If the app fails because the rate limit is exceeded, wait a few seconds and try again. If there is insufficient quota available in your subscription, the model may not be able to respond. 260 | 261 | ## Summary 262 | 263 | In this exercise, you used the Azure AI Agent Service SDK and Semantic Kernel to create an agent. 264 | 265 | ## Clean up 266 | 267 | If you've finished exploring Azure AI Agent Service, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs. 268 | 269 | 1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` in a new browser tab) and view the contents of the resource group where you deployed the resources used in this exercise. 270 | 1. On the toolbar, select **Delete resource group**. 271 | 1. Enter the resource group name and confirm that you want to delete it. 272 | -------------------------------------------------------------------------------- /Instructions/05-agent-orchestration.md: -------------------------------------------------------------------------------- 1 | --- 2 | lab: 3 | title: 'Develop a multi-agent solution' 4 | description: 'Learn to configure multiple agents to collaborate using the Semantic Kernel SDK' 5 | --- 6 | 7 | # Develop a multi-agent solution 8 | 9 | In this exercise, you'll create a project that orchestrates two AI agents using the Semantic Kernel SDK. An *Incident Manager* agent will analyze service log files for issues. If an issue is found, the Incident Manager will recommend a resolution action, and a *DevOps Assistant* agent will receive the recommendation, invoke the corrective function, and perform the resolution. The Incident Manager agent will then review the updated logs to make sure the resolution was successful. 10 | 11 | For this exercise, four sample log files are provided. The DevOps Assistant agent code only updates the sample log files with some example log messages. 12 | 13 | This exercise should take approximately **30** minutes to complete. 14 | 15 | > **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.
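> **Note**: At a high level, this exercise combines the two agents in a Semantic Kernel *agent group chat*, with a custom selection strategy that decides whose turn it is and a custom termination strategy that decides when the conversation is finished. The sketch below is only a condensed preview of that pattern; every class, variable, and constant it uses is one you'll create step by step later in this exercise.

```python
# Condensed preview of the orchestration pattern built in this exercise
# (the agents, strategies, and constants are defined in the steps that follow)
chat = AgentGroupChat(
    agents=[agent_incident, agent_devops],  # Incident Manager + DevOps Assistant
    selection_strategy=SelectionStrategy(agents=[agent_incident, agent_devops]),
    termination_strategy=ApprovalTerminationStrategy(agents=[agent_incident], maximum_iterations=10),
)

# Inside the async main function: add a log file as a user message, then let the agents take turns
await chat.add_chat_message(logfile_msg)
async for response in chat.invoke():
    print(response.content)
```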
16 | 17 | ## Deploy a model in an Azure AI Foundry project 18 | 19 | Let's start by deploying a model in an Azure AI Foundry project. 20 | 21 | 1. In a web browser, open the [Azure AI Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the **Azure AI Foundry** logo at the top left to navigate to the home page, which looks similar to the following image (close the **Help** pane if it's open): 22 | 23 | ![Screenshot of Azure AI Foundry portal.](./Media/ai-foundry-home.png) 24 | 25 | 1. In the home page, in the **Explore models and capabilities** section, search for the `gpt-4o` model; which we'll use in our project. 26 | 1. In the search results, select the **gpt-4o** model to see its details, and then at the top of the page for the model, select **Use this model**. 27 | 1. When prompted to create a project, enter a valid name for your project and expand **Advanced options**. 28 | 1. Confirm the following settings for your project: 29 | - **Azure AI Foundry resource**: *A valid name for your Azure AI Foundry resource* 30 | - **Subscription**: *Your Azure subscription* 31 | - **Resource group**: *Create or select a resource group* 32 | - **Region**: *Select any **AI Services supported location***\* 33 | 34 | > \* Some Azure AI resources are constrained by regional model quotas. In the event of a quota limit being exceeded later in the exercise, there's a possibility you may need to create another resource in a different region. 35 | 36 | 1. Select **Create** and wait for your project, including the gpt-4o model deployment you selected, to be created. 37 | 1. When your project is created, the chat playground will be opened automatically. 38 | 39 | > **Note**: The default TPM setting for this model may be too low for this exercise. The default is kept low to help avoid over-using the quota available in the subscription you are using. 40 | 41 | 1. In the navigation pane on the left, select **Models and endpoints** and select your **gpt-4o** deployment. 42 | 43 | 1. Select **Edit**, then increase the **Tokens per Minute Rate Limit**. 44 | 45 | > **NOTE**: 40,000 TPM should be sufficient for the data used in this exercise. If your available quota is lower than this, you will be able to complete the exercise but you may need to wait and resubmit prompts if the rate limit is exceeded. 46 | 47 | 1. In the **Setup** pane, note the name of your model deployment; which should be **gpt-4o**. You can confirm this by viewing the deployment in the **Models and endpoints** page (just open that page in the navigation pane on the left). 48 | 1. In the navigation pane on the left, select **Overview** to see the main page for your project; which looks like this: 49 | 50 | > **Note**: If an **Insufficient permissions** error is displayed, use the **Fix me** button to resolve it. 51 | 52 | ![Screenshot of Azure AI project details in Azure AI Foundry portal.](./Media/ai-foundry-project.png) 53 | 54 | ## Create an AI Agent client app 55 | 56 | Now you're ready to create a client app that defines your agents and the plugin functions they'll use. Some code is provided for you in a GitHub repository. 57 | 58 | ### Prepare the environment 59 | 60 | 1. Open a new browser tab (keeping the Azure AI Foundry portal open in the existing tab). Then in the new tab, browse to the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`; signing in with your Azure credentials if prompted.
61 | 62 | Close any welcome notifications to see the Azure portal home page. 63 | 64 | 1. Use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment with no storage in your subscription. 65 | 66 | The cloud shell provides a command-line interface in a pane at the bottom of the Azure portal. You can resize or maximize this pane to make it easier to work in. 67 | 68 | > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***. 69 | 70 | 1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor). 71 | 72 | **Ensure you've switched to the classic version of the cloud shell before continuing.** 73 | 74 | 1. In the cloud shell pane, enter the following commands to clone the GitHub repo containing the code files for this exercise (type the command, or copy it to the clipboard and then right-click in the command line and paste as plain text): 75 | 76 | ``` 77 | rm -r ai-agents -f 78 | git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents 79 | ``` 80 | 81 | > **Tip**: As you enter commands into the cloud shell, the output may take up a large amount of the screen buffer and the cursor on the current line may be obscured. You can clear the screen by entering the `cls` command to make it easier to focus on each task. 82 | 83 | 1. When the repo has been cloned, enter the following command to change the working directory to the folder containing the code files and list them all. 84 | 85 | ``` 86 | cd ai-agents/Labfiles/05-agent-orchestration/Python 87 | ls -a -l 88 | ``` 89 | 90 | The provided files include application code and a file for configuration settings. 91 | 92 | ### Configure the application settings 93 | 94 | 1. In the cloud shell command-line pane, enter the following command to install the libraries you'll use: 95 | 96 | ``` 97 | python -m venv labenv 98 | ./labenv/bin/Activate.ps1 99 | pip install python-dotenv azure-identity semantic-kernel[azure] 100 | ``` 101 | 102 | > **Note**: Installing *semantic-kernel[azure]* automatically installs a semantic kernel-compatible version of *azure-ai-projects*. 103 | 104 | 1. Enter the following command to edit the configuration file that is provided: 105 | 106 | ``` 107 | code .env 108 | ``` 109 | 110 | The file is opened in a code editor. 111 | 112 | 1. In the code file, replace the **your_project_endpoint** placeholder with the endpoint for your project (copied from the project **Overview** page in the Azure AI Foundry portal), and the **your_model_deployment** placeholder with the name you assigned to your gpt-4o model deployment. 113 | 114 | 1. After you've replaced the placeholders, use the **CTRL+S** command to save your changes and then use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open. 115 | 116 | ### Create AI agents 117 | 118 | Now you're ready to create the agents for your multi-agent solution! Let's get started! 119 | 120 | 1. Enter the following command to edit the **agent_chat.py** file: 121 | 122 | ``` 123 | code agent_chat.py 124 | ``` 125 | 126 | 1. Review the code in the file, noting that it contains: 127 | - Constants that define the names and instructions for your two agents. 128 | - A **main** function where most of the code to implement your multi-agent solution will be added. 
129 | - A **SelectionStrategy** class, which you'll use to implement the logic required to determine which agent should be selected for each turn in the conversation. 130 | - An **ApprovalTerminationStrategy** class, which you'll use to implement the logic needed to determine when the conversation should end. 131 | - A **DevopsPlugin** class that contains functions to perform devops operations. 132 | - A **LogFilePlugin** class that contains functions to read and write log files. 133 | 134 | First, you'll create the *Incident Manager* agent, which will analyze service log files, identify potential issues, and recommend resolution actions or escalate issues when necessary. 135 | 136 | 1. Note the **INCIDENT_MANAGER_INSTRUCTIONS** string. These are the instructions for your agent. 137 | 138 | 1. In the **main** function, find the comment **Create the incident manager agent on the Azure AI agent service**, and add the following code to create an Azure AI Agent. 139 | 140 | ```python 141 | # Create the incident manager agent on the Azure AI agent service 142 | incident_agent_definition = await client.agents.create_agent( 143 | model=ai_agent_settings.model_deployment_name, 144 | name=INCIDENT_MANAGER, 145 | instructions=INCIDENT_MANAGER_INSTRUCTIONS 146 | ) 147 | ``` 148 | 149 | This code creates the agent definition using your Azure AI project client. 150 | 151 | 1. Find the comment **Create a Semantic Kernel agent for the Azure AI incident manager agent**, and add the following code to create a Semantic Kernel agent based on the Azure AI Agent definition. 152 | 153 | ```python 154 | # Create a Semantic Kernel agent for the Azure AI incident manager agent 155 | agent_incident = AzureAIAgent( 156 | client=client, 157 | definition=incident_agent_definition, 158 | plugins=[LogFilePlugin()] 159 | ) 160 | ``` 161 | 162 | This code creates the Semantic Kernel agent with access to the **LogFilePlugin**. This plugin allows the agent to read the log file contents. 163 | 164 | Now let's create the second agent, which will respond to issues and perform DevOps operations to resolve them. 165 | 166 | 1. At the top of the code file, take a moment to observe the **DEVOPS_ASSISTANT_INSTRUCTIONS** string. These are the instructions you'll provide to the new DevOps assistant agent. 167 | 168 | 1. Find the comment **Create the devops agent on the Azure AI agent service**, and add the following code to create an Azure AI Agent definition: 169 | 170 | ```python 171 | # Create the devops agent on the Azure AI agent service 172 | devops_agent_definition = await client.agents.create_agent( 173 | model=ai_agent_settings.model_deployment_name, 174 | name=DEVOPS_ASSISTANT, 175 | instructions=DEVOPS_ASSISTANT_INSTRUCTIONS, 176 | ) 177 | ``` 178 | 179 | 1. Find the comment **Create a Semantic Kernel agent for the devops Azure AI agent**, and add the following code to create a Semantic Kernel agent based on the Azure AI Agent definition. 180 | 181 | ```python 182 | # Create a Semantic Kernel agent for the devops Azure AI agent 183 | agent_devops = AzureAIAgent( 184 | client=client, 185 | definition=devops_agent_definition, 186 | plugins=[DevopsPlugin()] 187 | ) 188 | ``` 189 | 190 | The **DevopsPlugin** allows the agent to simulate devops tasks, such as restarting the service or rolling back a transaction. 191 | 192 | ### Define group chat strategies 193 | 194 | Now you need to provide the logic used to determine which agent should be selected to take the next turn in a conversation, and when the conversation should be ended.
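Both strategy classes are already stubbed out for you in **agent_chat.py**; they derive from the Semantic Kernel strategy base classes imported at the top of the file, and you'll add their method bodies in the following steps. For reference, the provided skeletons look like this:

```python
# Strategy class skeletons provided in agent_chat.py (method bodies are added in the next steps)
from semantic_kernel.agents.strategies import TerminationStrategy, SequentialSelectionStrategy

class SelectionStrategy(SequentialSelectionStrategy):
    """A strategy for determining which agent should take the next turn in the chat."""
    # select_agent is implemented in the next step

class ApprovalTerminationStrategy(TerminationStrategy):
    """A strategy for determining when an agent should terminate."""
    # should_agent_terminate is implemented later in this section
```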
195 | 196 | Let's start with the **SelectionStrategy**, which identifies which agent should take the next turn. 197 | 198 | 1. In the **SelectionStrategy** class (below the **main** function), find the comment **Select the next agent that should take the next turn in the chat**, and add the following code to define a selection function: 199 | 200 | ```python 201 | # Select the next agent that should take the next turn in the chat 202 | async def select_agent(self, agents, history): 203 | """Check which agent should take the next turn in the chat.""" 204 | 205 | # The Incident Manager should go after the User or the Devops Assistant 206 | if (history[-1].name == DEVOPS_ASSISTANT or history[-1].role == AuthorRole.USER): 207 | agent_name = INCIDENT_MANAGER 208 | return next((agent for agent in agents if agent.name == agent_name), None) 209 | 210 | # Otherwise it is the Devops Assistant's turn 211 | return next((agent for agent in agents if agent.name == DEVOPS_ASSISTANT), None) 212 | ``` 213 | 214 | This code runs on every turn to determine which agent should respond, checking the chat history to see who last responded. 215 | 216 | Now let's implement the **ApprovalTerminationStrategy** class to help signal when the goal is complete and the conversation can be ended. 217 | 218 | 1. In the **ApprovalTerminationStrategy** class, find the comment **End the chat if the agent has indicated there is no action needed**, and add the following code to define the termination function: 219 | 220 | ```python 221 | # End the chat if the agent has indicated there is no action needed 222 | async def should_agent_terminate(self, agent, history): 223 | """Check if the agent should terminate.""" 224 | return "no action needed" in history[-1].content.lower() 225 | ``` 226 | 227 | The kernel invokes this function after the agent's response to determine if the completion criteria are met. In this case, the goal is met when the incident manager responds with "No action needed." This phrase is defined in the incident manager agent instructions. 228 | 229 | ### Implement the group chat 230 | 231 | Now that you have two agents and strategies to help them take turns and end a chat, you can implement the group chat. 232 | 233 | 1. Back up in the **main** function, find the comment **Add the agents to a group chat with a custom termination and selection strategy**, and add the following code to create the group chat: 234 | 235 | ```python 236 | # Add the agents to a group chat with a custom termination and selection strategy 237 | chat = AgentGroupChat( 238 | agents=[agent_incident, agent_devops], 239 | termination_strategy=ApprovalTerminationStrategy( 240 | agents=[agent_incident], 241 | maximum_iterations=10, 242 | automatic_reset=True 243 | ), 244 | selection_strategy=SelectionStrategy(agents=[agent_incident, agent_devops]), 245 | ) 246 | ``` 247 | 248 | In this code, you create an agent group chat object with the incident manager and devops agents. You also define the termination and selection strategies for the chat. Notice that the **ApprovalTerminationStrategy** is tied to the incident manager agent only, and not the devops agent. This makes the incident manager agent responsible for signaling the end of the chat. The **SelectionStrategy** includes all agents that should take a turn in the chat. 249 | 250 | Note that the automatic reset flag will automatically clear the chat when it ends. This way, the agents can continue analyzing the files without the chat history object using too many unnecessary tokens.
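To see how these pieces fit together, here's a condensed view of the log-processing loop that's already provided in the **main** function (simplified, with the waiting and error handling omitted). Because of the automatic reset, the same **chat** object is reused for every log file; the two statements you add in the next steps are marked:

```python
# Condensed view of the log-processing loop in main() (simplified; waits and error handling omitted)
for filename in os.listdir(file_path):
    logfile_msg = ChatMessageContent(role=AuthorRole.USER, content=f"USER > {file_path}/{filename}")

    # Append the current log file to the chat (added in the next step)
    await chat.add_chat_message(logfile_msg)

    # Invoke a response from the agents (added in the step after that)
    async for response in chat.invoke():
        if response is None or not response.name:
            continue
        print(f"{response.content}")
```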
251 | 252 | 1. Find the comment **Append the current log file to the chat**, and add the following code to add the most recently read log file text to the chat: 253 | 254 | ```python 255 | # Append the current log file to the chat 256 | await chat.add_chat_message(logfile_msg) 257 | print() 258 | ``` 259 | 260 | 1. Find the comment **Invoke a response from the agents**, and add the following code to invoke the group chat: 261 | 262 | ```python 263 | # Invoke a response from the agents 264 | async for response in chat.invoke(): 265 | if response is None or not response.name: 266 | continue 267 | print(f"{response.content}") 268 | ``` 269 | 270 | This is the code that triggers the chat. Since the log file text has been added as a message, the selection strategy will determine which agent should read and respond to it and then the conversation will continue between the agents until the conditions of the termination strategy are met or the maximum number of iterations is reached. 271 | 272 | 1. Use the **CTRL+S** command to save your changes to the code file. You can keep it open (in case you need to edit the code to fix any errors) or use the **CTRL+Q** command to close the code editor while keeping the cloud shell command line open. 273 | 274 | ### Sign into Azure and run the app 275 | 276 | Now you're ready to run your code and watch your AI agents collaborate. 277 | 278 | 1. In the cloud shell command-line pane, enter the following command to sign into Azure. 279 | 280 | ``` 281 | az login 282 | ``` 283 | 284 | **You must sign into Azure - even though the cloud shell session is already authenticated.** 285 | 286 | > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details. 287 | 288 | 1. When prompted, follow the instructions to open the sign-in page in a new tab and enter the authentication code provided and your Azure credentials. Then complete the sign in process in the command line, selecting the subscription containing your Azure AI Foundry hub if prompted. 289 | 290 | 1. After you have signed in, enter the following command to run the application: 291 | 292 | ``` 293 | python agent_chat.py 294 | ``` 295 | 296 | You should see some output similar to the following: 297 | 298 | ```output 299 | 300 | INCIDENT_MANAGER > /home/.../logs/log1.log | Restart service ServiceX 301 | DEVOPS_ASSISTANT > Service ServiceX restarted successfully. 302 | INCIDENT_MANAGER > No action needed. 303 | 304 | INCIDENT_MANAGER > /home/.../logs/log2.log | Rollback transaction for transaction ID 987654. 305 | DEVOPS_ASSISTANT > Transaction rolled back successfully. 306 | INCIDENT_MANAGER > No action needed. 307 | 308 | INCIDENT_MANAGER > /home/.../logs/log3.log | Increase quota. 309 | DEVOPS_ASSISTANT > Successfully increased quota. 310 | (continued) 311 | ``` 312 | 313 | > **Note**: The app includes some code to wait between processing each log file to try to reduce the risk of a TPM rate limit being exceeded, and exception handling in case it happens anyway. If there is insufficient quota available in your subscription, the model may not be able to respond. 314 | 315 | 1. Verify that the log files in the **logs** folder are updated with resolution operation messages from the DevopsAssistant. 
316 | 317 | For example, log1.log should have the following log messages appended: 318 | 319 | ```log 320 | [2025-02-27 12:43:38] ALERT DevopsAssistant: Multiple failures detected in ServiceX. Restarting service. 321 | [2025-02-27 12:43:38] INFO ServiceX: Restart initiated. 322 | [2025-02-27 12:43:38] INFO ServiceX: Service restarted successfully. 323 | ``` 324 | 325 | ## Summary 326 | 327 | In this exercise, you used the Azure AI Agent Service and Semantic Kernel SDK to create AI incident and devops agents that can automatically detect issues and apply resolutions. Great work! 328 | 329 | ## Clean up 330 | 331 | If you've finished exploring Azure AI Agent Service, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs. 332 | 333 | 1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com` in a new browser tab) and view the contents of the resource group where you deployed the resources used in this exercise. 334 | 335 | 1. On the toolbar, select **Delete resource group**. 336 | 337 | 1. Enter the resource group name and confirm that you want to delete it. 338 | -------------------------------------------------------------------------------- /Instructions/Media/ai-agent-add-files.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-agent-add-files.png -------------------------------------------------------------------------------- /Instructions/Media/ai-agent-playground.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-agent-playground.png -------------------------------------------------------------------------------- /Instructions/Media/ai-agent-setup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-agent-setup.png -------------------------------------------------------------------------------- /Instructions/Media/ai-foundry-agents-playground.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-foundry-agents-playground.png -------------------------------------------------------------------------------- /Instructions/Media/ai-foundry-home.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-foundry-home.png -------------------------------------------------------------------------------- /Instructions/Media/ai-foundry-project.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Instructions/Media/ai-foundry-project.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 
| MIT License 2 | 3 | Copyright (c) 2024 Microsoft 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /Labfiles/01-agent-fundamentals/Expenses_Policy.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MicrosoftLearning/mslearn-ai-agents/af6a7e077cefb29d583951c477a370ea8f00e8a8/Labfiles/01-agent-fundamentals/Expenses_Policy.docx -------------------------------------------------------------------------------- /Labfiles/02-build-ai-agent/Python/.env: -------------------------------------------------------------------------------- 1 | PROJECT_ENDPOINT="your_project_endpoint" 2 | MODEL_DEPLOYMENT_NAME="gpt-4o" 3 | -------------------------------------------------------------------------------- /Labfiles/02-build-ai-agent/Python/agent.py: -------------------------------------------------------------------------------- 1 | import os 2 | from dotenv import load_dotenv 3 | from typing import Any 4 | from pathlib import Path 5 | 6 | 7 | # Add references 8 | 9 | 10 | def main(): 11 | 12 | # Clear the console 13 | os.system('cls' if os.name=='nt' else 'clear') 14 | 15 | # Load environment variables from .env file 16 | load_dotenv() 17 | project_endpoint= os.getenv("PROJECT_ENDPOINT") 18 | model_deployment = os.getenv("MODEL_DEPLOYMENT_NAME") 19 | 20 | # Display the data to be analyzed 21 | script_dir = Path(__file__).parent # Get the directory of the script 22 | file_path = script_dir / 'data.txt' 23 | 24 | with file_path.open('r') as file: 25 | data = file.read() + "\n" 26 | print(data) 27 | 28 | # Connect to the Agent client 29 | 30 | 31 | # Upload the data file and create a CodeInterpreterTool 32 | 33 | 34 | # Define an agent that uses the CodeInterpreterTool 35 | 36 | 37 | # Create a thread for the conversation 38 | 39 | 40 | # Loop until the user types 'quit' 41 | while True: 42 | # Get input text 43 | user_prompt = input("Enter a prompt (or type 'quit' to exit): ") 44 | if user_prompt.lower() == "quit": 45 | break 46 | if len(user_prompt) == 0: 47 | print("Please enter a prompt.") 48 | continue 49 | 50 | # Send a prompt to the agent 51 | 52 | 53 | # Check the run status for failures 54 | 55 | 56 | # Show the latest response from the agent 57 | 58 | 59 | # Get the conversation history 60 | 61 | 62 | 63 | # Get any generated files 64 | 65 | 66 | 67 | # Clean up 68 | 69 | 70 | 71 | 72 | if __name__ == '__main__': 73 
| main() 74 | -------------------------------------------------------------------------------- /Labfiles/02-build-ai-agent/Python/data.txt: -------------------------------------------------------------------------------- 1 | Category,Cost 2 | Accommodation, 674.56 3 | Transportation, 2301.00 4 | Meals, 267.89 5 | Misc., 34.50 -------------------------------------------------------------------------------- /Labfiles/02-build-ai-agent/Python/requirements.txt: -------------------------------------------------------------------------------- 1 | python-dotenv 2 | azure-identity -------------------------------------------------------------------------------- /Labfiles/03-ai-agent-functions/Python/.env: -------------------------------------------------------------------------------- 1 | PROJECT_ENDPOINT="your_project_endpoint" 2 | MODEL_DEPLOYMENT_NAME="gpt-4o" 3 | -------------------------------------------------------------------------------- /Labfiles/03-ai-agent-functions/Python/agent.py: -------------------------------------------------------------------------------- 1 | import os 2 | from dotenv import load_dotenv 3 | from typing import Any 4 | from pathlib import Path 5 | 6 | 7 | # Add references 8 | 9 | def main(): 10 | 11 | # Clear the console 12 | os.system('cls' if os.name=='nt' else 'clear') 13 | 14 | # Load environment variables from .env file 15 | load_dotenv() 16 | project_endpoint= os.getenv("PROJECT_ENDPOINT") 17 | model_deployment = os.getenv("MODEL_DEPLOYMENT_NAME") 18 | 19 | 20 | # Connect to the Agent client 21 | 22 | 23 | 24 | # Define an agent that can use the custom functions 25 | 26 | 27 | 28 | # Loop until the user types 'quit' 29 | while True: 30 | # Get input text 31 | user_prompt = input("Enter a prompt (or type 'quit' to exit): ") 32 | if user_prompt.lower() == "quit": 33 | break 34 | if len(user_prompt) == 0: 35 | print("Please enter a prompt.") 36 | continue 37 | 38 | # Send a prompt to the agent 39 | 40 | 41 | # Check the run status for failures 42 | 43 | 44 | # Show the latest response from the agent 45 | 46 | 47 | # Get the conversation history 48 | 49 | 50 | # Clean up 51 | 52 | 53 | 54 | 55 | 56 | if __name__ == '__main__': 57 | main() 58 | -------------------------------------------------------------------------------- /Labfiles/03-ai-agent-functions/Python/requirements.txt: -------------------------------------------------------------------------------- 1 | python-dotenv 2 | azure-identity 3 | -------------------------------------------------------------------------------- /Labfiles/03-ai-agent-functions/Python/user_functions.py: -------------------------------------------------------------------------------- 1 | import json 2 | from pathlib import Path 3 | import uuid 4 | from typing import Any, Callable, Set 5 | 6 | # Create a function to submit a support ticket 7 | 8 | 9 | # Define a set of callable functions 10 | 11 | 12 | -------------------------------------------------------------------------------- /Labfiles/03-ai-agent-functions/readme.md: -------------------------------------------------------------------------------- 1 | Folder containing source files for exercises. 
-------------------------------------------------------------------------------- /Labfiles/04-semantic-kernel/python/.env: -------------------------------------------------------------------------------- 1 | AZURE_AI_AGENT_ENDPOINT="your_project_endpoint" 2 | AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME="your_model_deployment" 3 | -------------------------------------------------------------------------------- /Labfiles/04-semantic-kernel/python/data.txt: -------------------------------------------------------------------------------- 1 | date,description,amount 2 | 07-Mar-2025,taxi,24.00 3 | 07-Mar-2025,dinner,65.50 4 | 07-Mar-2025,hotel,125.90 -------------------------------------------------------------------------------- /Labfiles/04-semantic-kernel/python/semantic-kernel.py: -------------------------------------------------------------------------------- 1 | import os 2 | import asyncio 3 | from pathlib import Path 4 | 5 | # Add references 6 | 7 | 8 | 9 | async def main(): 10 | # Clear the console 11 | os.system('cls' if os.name=='nt' else 'clear') 12 | 13 | # Load the expnses data file 14 | script_dir = Path(__file__).parent 15 | file_path = script_dir / 'data.txt' 16 | with file_path.open('r') as file: 17 | data = file.read() + "\n" 18 | 19 | # Ask for a prompt 20 | user_prompt = input(f"Here is the expenses data in your file:\n\n{data}\n\nWhat would you like me to do with it?\n\n") 21 | 22 | # Run the async agent code 23 | await process_expenses_data (user_prompt, data) 24 | 25 | async def process_expenses_data(prompt, expenses_data): 26 | 27 | # Get configuration settings 28 | 29 | 30 | # Connect to the Azure AI Foundry project 31 | 32 | 33 | # Define an Azure AI agent that sends an expense claim email 34 | 35 | 36 | # Create a semantic kernel agent 37 | 38 | 39 | # Use the agent to process the expenses data 40 | 41 | 42 | 43 | # Create a Plugin for the email functionality 44 | 45 | 46 | 47 | 48 | if __name__ == "__main__": 49 | asyncio.run(main()) 50 | -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/.env: -------------------------------------------------------------------------------- 1 | AZURE_AI_AGENT_ENDPOINT="your_project_endpoint" 2 | AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME="your_model_deployment" 3 | -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/agent_chat.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | import textwrap 4 | from datetime import datetime 5 | from pathlib import Path 6 | import shutil 7 | 8 | from azure.identity.aio import DefaultAzureCredential 9 | from semantic_kernel.agents import AgentGroupChat 10 | from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings 11 | from semantic_kernel.agents.strategies import TerminationStrategy, SequentialSelectionStrategy 12 | from semantic_kernel.contents.chat_message_content import ChatMessageContent 13 | from semantic_kernel.contents.utils.author_role import AuthorRole 14 | from semantic_kernel.functions.kernel_function_decorator import kernel_function 15 | 16 | INCIDENT_MANAGER = "INCIDENT_MANAGER" 17 | INCIDENT_MANAGER_INSTRUCTIONS = """ 18 | Analyze the given log file or the response from the devops assistant. 
19 | Recommend which one of the following actions should be taken: 20 | 21 | Restart service {service_name} 22 | Rollback transaction 23 | Redeploy resource {resource_name} 24 | Increase quota 25 | 26 | If there are no issues or if the issue has already been resolved, respond with "INCIDENT_MANAGER > No action needed." 27 | If none of the options resolve the issue, respond with "Escalate issue." 28 | 29 | RULES: 30 | - Do not perform any corrective actions yourself. 31 | - Read the log file on every turn. 32 | - Prepend your response with this text: "INCIDENT_MANAGER > {logfilepath} | " 33 | - Only respond with the corrective action instructions. 34 | """ 35 | 36 | DEVOPS_ASSISTANT = "DEVOPS_ASSISTANT" 37 | DEVOPS_ASSISTANT_INSTRUCTIONS = """ 38 | Read the instructions from the INCIDENT_MANAGER and apply the appropriate resolution function. 39 | Return the response as "{function_response}" 40 | If the instructions indicate there are no issues or actions needed, 41 | take no action and respond with "No action needed." 42 | 43 | RULES: 44 | - Use the instructions provided. 45 | - Do not read any log files yourself. 46 | - Prepend your response with this text: "DEVOPS_ASSISTANT > " 47 | """ 48 | 49 | async def main(): 50 | # Clear the console 51 | os.system('cls' if os.name=='nt' else 'clear') 52 | 53 | # Get the log files 54 | print("Getting log files...\n") 55 | script_dir = Path(__file__).parent # Get the directory of the script 56 | src_path = script_dir / "sample_logs" 57 | file_path = script_dir / "logs" 58 | shutil.copytree(src_path, file_path, dirs_exist_ok=True) 59 | 60 | # Get the Azure AI Agent settings 61 | ai_agent_settings = AzureAIAgentSettings() 62 | 63 | async with ( 64 | DefaultAzureCredential(exclude_environment_credential=True, 65 | exclude_managed_identity_credential=True) as creds, 66 | AzureAIAgent.create_client(credential=creds) as client, 67 | ): 68 | 69 | # Create the incident manager agent on the Azure AI agent service 70 | 71 | 72 | # Create a Semantic Kernel agent for the Azure AI incident manager agent 73 | 74 | 75 | # Create the devops agent on the Azure AI agent service 76 | 77 | 78 | # Create a Semantic Kernel agent for the devops Azure AI agent 79 | 80 | 81 | # Add the agents to a group chat with a custom termination and selection strategy 82 | 83 | 84 | # Process log files 85 | for filename in os.listdir(file_path): 86 | logfile_msg = ChatMessageContent(role=AuthorRole.USER, content=f"USER > {file_path}/{filename}") 87 | await asyncio.sleep(30) # Wait to reduce TPM 88 | print(f"\nReady to process log file: {filename}\n") 89 | 90 | 91 | # Append the current log file to the chat 92 | 93 | 94 | try: 95 | print() 96 | 97 | # Invoke a response from the agents 98 | 99 | 100 | except Exception as e: 101 | print(f"Error during chat invocation: {e}") 102 | # If TPM rate exceeded, wait 60 secs 103 | if "Rate limit is exceeded" in str(e): 104 | print ("Waiting...") 105 | await asyncio.sleep(60) 106 | continue 107 | else: 108 | break 109 | 110 | 111 | 112 | # class for selection strategy 113 | class SelectionStrategy(SequentialSelectionStrategy): 114 | """A strategy for determining which agent should take the next turn in the chat.""" 115 | 116 | # Select the next agent that should take the next turn in the chat 117 | 118 | 119 | 120 | # class for temination strategy 121 | class ApprovalTerminationStrategy(TerminationStrategy): 122 | """A strategy for determining when an agent should terminate.""" 123 | 124 | # End the chat if the agent has indicated there is no action 
needed 125 | 126 | 127 | 128 | 129 | # class for DevOps functions 130 | class DevopsPlugin: 131 | """A plugin that performs developer operation tasks.""" 132 | 133 | def append_to_log_file(self, filepath: str, content: str) -> None: 134 | with open(filepath, 'a', encoding='utf-8') as file: 135 | file.write('\n' + textwrap.dedent(content).strip()) 136 | 137 | @kernel_function(description="A function that restarts the named service") 138 | def restart_service(self, service_name: str = "", logfile: str = "") -> str: 139 | log_entries = [ 140 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: Multiple failures detected in {service_name}. Restarting service.", 141 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO {service_name}: Restart initiated.", 142 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO {service_name}: Service restarted successfully.", 143 | ] 144 | 145 | log_message = "\n".join(log_entries) 146 | self.append_to_log_file(logfile, log_message) 147 | 148 | return f"Service {service_name} restarted successfully." 149 | 150 | @kernel_function(description="A function that rollsback the transaction") 151 | def rollback_transaction(self, logfile: str = "") -> str: 152 | log_entries = [ 153 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: Transaction failure detected. Rolling back transaction batch.", 154 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO TransactionProcessor: Rolling back transaction batch.", 155 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO Transaction rollback completed successfully.", 156 | ] 157 | 158 | log_message = "\n".join(log_entries) 159 | self.append_to_log_file(logfile, log_message) 160 | 161 | return "Transaction rolled back successfully." 162 | 163 | @kernel_function(description="A function that redeploys the named resource") 164 | def redeploy_resource(self, resource_name: str = "", logfile: str = "") -> str: 165 | log_entries = [ 166 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: Resource deployment failure detected in '{resource_name}'. Redeploying resource.", 167 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO DeploymentManager: Redeployment request submitted.", 168 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO DeploymentManager: Service successfully redeployed, resource '{resource_name}' created successfully.", 169 | ] 170 | 171 | log_message = "\n".join(log_entries) 172 | self.append_to_log_file(logfile, log_message) 173 | 174 | return f"Resource '{resource_name}' redeployed successfully." 175 | 176 | @kernel_function(description="A function that increases the quota") 177 | def increase_quota(self, logfile: str = "") -> str: 178 | log_entries = [ 179 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: High request volume detected. Increasing quota.", 180 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO APIManager: Quota increase request submitted.", 181 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] INFO APIManager: Quota successfully increased to 150% of previous limit.", 182 | ] 183 | 184 | log_message = "\n".join(log_entries) 185 | self.append_to_log_file(logfile, log_message) 186 | 187 | return "Successfully increased quota." 
188 | 189 | @kernel_function(description="A function that escalates the issue") 190 | def escalate_issue(self, logfile: str = "") -> str: 191 | log_entries = [ 192 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: Cannot resolve issue.", 193 | f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] ALERT DevopsAssistant: Requesting escalation.", 194 | ] 195 | 196 | log_message = "\n".join(log_entries) 197 | self.append_to_log_file(logfile, log_message) 198 | 199 | return "Submitted escalation request." 200 | 201 | 202 | # class for Log File functions 203 | class LogFilePlugin: 204 | """A plugin that reads and writes log files.""" 205 | 206 | @kernel_function(description="Accesses the given file path string and returns the file contents as a string") 207 | def read_log_file(self, filepath: str = "") -> str: 208 | with open(filepath, 'r', encoding='utf-8') as file: 209 | return file.read() 210 | 211 | 212 | # Start the app 213 | if __name__ == "__main__": 214 | asyncio.run(main()) -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/sample_logs/log1.log: -------------------------------------------------------------------------------- 1 | [2025-02-21 10:05:12] INFO ServiceX: Initialization complete. 2 | [2025-02-21 10:10:34] WARNING ServiceX: Response time exceeding threshold (4500ms) 3 | [2025-02-21 10:15:45] ERROR ServiceX: Connection timeout with DatabaseY 4 | [2025-02-21 10:16:02] WARNING ServiceX: Retrying connection... 5 | [2025-02-21 10:16:30] ERROR ServiceX: Connection retry failed. 6 | [2025-02-21 10:18:14] ERROR ServiceX: Critical failure - unable to connect to DatabaseY. 7 | -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/sample_logs/log2.log: -------------------------------------------------------------------------------- 1 | [2025-02-21 11:05:12] INFO TransactionProcessor: Transaction batch initiated. 2 | [2025-02-21 11:10:34] INFO TransactionProcessor: Processing transaction ID 987654. 3 | [2025-02-21 11:12:00] WARNING TransactionProcessor: Delay detected in processing transaction ID 987654. 4 | [2025-02-21 11:13:15] ERROR TransactionProcessor: Integrity check failed for transaction ID 987654. 5 | [2025-02-21 11:14:02] WARNING TransactionProcessor: Attempting recovery... 6 | [2025-02-21 11:15:30] ERROR TransactionProcessor: Recovery attempt failed. 7 | -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/sample_logs/log3.log: -------------------------------------------------------------------------------- 1 | [2025-02-21 12:05:12] INFO APIManager: Request rate monitoring initiated. 2 | [2025-02-21 12:10:34] INFO APIManager: Processing request ID 56789. 3 | [2025-02-21 12:12:00] WARNING APIManager: Request rate nearing quota limit (95% used). 4 | [2025-02-21 12:13:15] ERROR APIManager: Request limit exceeded for user account 12345. 5 | [2025-02-21 12:14:02] WARNING APIManager: Temporary throttling applied. 6 | [2025-02-21 12:15:30] ERROR APIManager: Critical service disruption due to quota exhaustion. -------------------------------------------------------------------------------- /Labfiles/05-agent-orchestration/Python/sample_logs/log4.log: -------------------------------------------------------------------------------- 1 | [2025-02-21 13:05:12] INFO DeploymentManager: Attempting to create new resource 'ResourceX'. 
2 | [2025-02-21 13:10:34] INFO ResourceManager: Allocating resources for 'ResourceX'. 3 | [2025-02-21 13:12:00] WARNING ResourceManager: Delayed response from provisioning service. 4 | [2025-02-21 13:13:15] ERROR ResourceManager: Resource creation failed due to missing dependencies. 5 | [2025-02-21 13:14:02] WARNING ResourceManager: Retrying resource creation... 6 | [2025-02-21 13:15:30] ERROR ResourceManager: Retry failed - dependency resolution unsuccessful. -------------------------------------------------------------------------------- /Labfiles/update-python.sh: -------------------------------------------------------------------------------- 1 | wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh 2 | sh Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda/ 3 | ~/miniconda/bin/conda init powershell --all -------------------------------------------------------------------------------- /_build.yml: -------------------------------------------------------------------------------- 1 | name: '$(Date:yyyyMMdd)$(Rev:.rr)' 2 | jobs: 3 | - job: build_markdown_content 4 | displayName: 'Build Markdown Content' 5 | workspace: 6 | clean: all 7 | pool: 8 | vmImage: 'ubuntu-latest' 9 | container: 10 | image: 'microsoftlearning/markdown-build:latest' 11 | steps: 12 | - task: Bash@3 13 | displayName: 'Build Content' 14 | inputs: 15 | targetType: inline 16 | script: | 17 | cp /{attribution.md,template.docx,package.json,package.js} . 18 | npm install 19 | node package.js --version $(Build.BuildNumber) 20 | - task: GitHubRelease@0 21 | displayName: 'Create GitHub Release' 22 | inputs: 23 | gitHubConnection: 'github-microsoftlearning-organization' 24 | repositoryName: '$(Build.Repository.Name)' 25 | tagSource: manual 26 | tag: 'v$(Build.BuildNumber)' 27 | title: 'Version $(Build.BuildNumber)' 28 | releaseNotesSource: input 29 | releaseNotes: '# Version $(Build.BuildNumber) Release' 30 | assets: '$(Build.SourcesDirectory)/out/*.zip' 31 | assetUploadMode: replace 32 | - task: PublishBuildArtifacts@1 33 | displayName: 'Publish Output Files' 34 | inputs: 35 | pathtoPublish: '$(Build.SourcesDirectory)/out/' 36 | artifactName: 'Lab Files' 37 | -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | remote_theme: MicrosoftLearning/Jekyll-Theme 2 | exclude: 3 | - readme.md 4 | - .github/ 5 | header_pages: 6 | - index.html 7 | author: Microsoft Learning 8 | twitter_username: mslearning 9 | github_username: MicrosoftLearning 10 | plugins: 11 | - jekyll-sitemap 12 | - jekyll-mentions 13 | - jemoji 14 | markdown: kramdown 15 | kramdown: 16 | syntax_highlighter_opts: 17 | disable : true 18 | title: Develop AI Agents in Azure 19 | -------------------------------------------------------------------------------- /index.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Develop AI Agents in Azure 3 | permalink: index.html 4 | layout: home 5 | --- 6 | 7 | The following exercises are designed to provide you with a hands-on learning experience in which you'll explore common tasks that developers do when building AI agents on Microsoft Azure. 8 | 9 | > **Note**: To complete the exercises, you'll need an Azure subscription in which you have sufficient permissions and quota to provision the necessary Azure resources and generative AI models. 
If you don't already have one, you can sign up for an [Azure account](https://azure.microsoft.com/free). There's a free trial option for new users that includes credits for the first 30 days. 10 | 11 | ## Exercises 12 | 13 | {% assign labs = site.pages | where_exp:"page", "page.url contains '/Instructions'" %} 14 | {% for activity in labs %} 15 |
16 | ### [{{ activity.lab.title }}]({{ site.github.url }}{{ activity.url }}) 17 | 18 | {{activity.lab.description}} 19 | 20 | {% endfor %} 21 | 22 |
23 | 24 | > **Note**: While you can complete these exercises on their own, they're designed to complement modules on [Microsoft Learn](https://learn.microsoft.com/training/paths/develop-ai-agents-on-azure/); in which you'll find a deeper dive into some of the underlying concepts on which these exercises are based. 25 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # Develop AI Agents in Azure 2 | 3 | The exercises in this repo are designed to provide you with a hands-on learning experience in which you'll explore common tasks that developers perform when building AI agents on Microsoft Azure. 4 | 5 | > **Note**: To complete the exercises, you'll need an Azure subscription in which you have sufficient permissions and quota to provision the necessary Azure resources and generative AI models. If you don't already have one, you can sign up for an [Azure account](https://azure.microsoft.com/free). There's a free trial option for new users that includes credits for the first 30 days. 6 | 7 | View the exercises in the [GitHub Pages site for this repo](https://go.microsoft.com/fwlink/?linkid=2310820). 8 | 9 | > **Note**: While you can complete these exercises on their own, they're designed to complement modules on [Microsoft Learn](https://learn.microsoft.com/training/paths/develop-ai-agents-on-azure/); in which you'll find a deeper dive into some of the underlying concepts on which these exercises are based. 10 | 11 | ## Reporting issues 12 | 13 | If you encounter any problems in the exercises, please report them as **issues** in this repo. 14 | 15 | --------------------------------------------------------------------------------