├── .devcontainer ├── Dockerfile └── devcontainer.json ├── .github ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE.md ├── SECURITY.md ├── dependabot.yaml └── workflows │ ├── azure-dev.yaml │ └── template-validation.yaml ├── .gitignore ├── .pre-commit-config.yaml ├── LICENSE ├── README.md ├── azure.yaml ├── example_data ├── .llama_index_storage │ ├── docs1 │ │ ├── default__vector_store.json │ │ ├── docstore.json │ │ ├── graph_store.json │ │ ├── image__vector_store.json │ │ └── index_store.json │ └── docs2 │ │ ├── default__vector_store.json │ │ ├── docstore.json │ │ ├── graph_store.json │ │ ├── image__vector_store.json │ │ └── index_store.json ├── PerksPlus.pdf └── employee_handbook.pdf ├── examples ├── autogen_basic.py ├── autogen_magenticone.py ├── autogen_swarm.py ├── autogen_tools.py ├── azureai_githubmodels.py ├── langgraph_agent.py ├── llamaindex.py ├── openai_agents_basic.py ├── openai_agents_handoffs.py ├── openai_agents_tools.py ├── openai_functioncalling.py ├── openai_githubmodels.py ├── pydanticai_basic.py ├── pydanticai_graph.py ├── pydanticai_multiagent.py ├── pydanticai_tools.py ├── semantickernel_basic.py ├── semantickernel_groupchat.py ├── smolagents_codeagent.py └── spanish │ ├── README.md │ ├── autogen_basic.py │ ├── autogen_magenticone.py │ ├── autogen_swarm.py │ ├── autogen_tools.py │ ├── azureai_githubmodels.py │ ├── langgraph_agent.py │ ├── llamaindex.py │ ├── openai_agents_basic.py │ ├── openai_agents_handoffs.py │ ├── openai_agents_tools.py │ ├── openai_functioncalling.py │ ├── openai_githubmodels.py │ ├── pydanticai_basic.py │ ├── pydanticai_graph.py │ ├── pydanticai_multiagent.py │ ├── semantickernel_basic.py │ ├── semantickernel_groupchat.py │ └── smolagents_codeagent.py ├── infra ├── main.bicep ├── main.parameters.json ├── write_dot_env.ps1 └── write_dot_env.sh ├── pyproject.toml └── requirements.txt /.devcontainer/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM 
mcr.microsoft.com/devcontainers/python:3.12-bookworm 2 | 3 | COPY requirements.txt /tmp/pip-tmp/ 4 | 5 | RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \ 6 | && rm -rf /tmp/pip-tmp 7 | -------------------------------------------------------------------------------- /.devcontainer/devcontainer.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "python-ai-agent-frameworks-demos", 3 | "build": { 4 | "dockerfile": "Dockerfile", 5 | "context": ".." 6 | }, 7 | "features": { 8 | "ghcr.io/azure/azure-dev/azd:latest": {} 9 | }, 10 | "customizations": { 11 | "vscode": { 12 | "extensions": [ 13 | "ms-python.python", 14 | "ms-azuretools.vscode-bicep" 15 | ], 16 | "settings": { 17 | "python.defaultInterpreterPath": "/usr/local/bin/python" 18 | } 19 | } 20 | }, 21 | "remoteUser": "vscode", 22 | "hostRequirements": { 23 | "memory": "8gb" 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Microsoft Open Source Code of Conduct 2 | 3 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 4 | 5 | Resources: 6 | 7 | - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) 8 | - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) 9 | - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns 10 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to azure-openai-entity-extraction 2 | 3 | This project welcomes contributions and suggestions.
Most contributions require you to agree to a 4 | Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us 5 | the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. 6 | 7 | When you submit a pull request, a CLA bot will automatically determine whether you need to provide 8 | a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions 9 | provided by the bot. You will only need to do this once across all repos using our CLA. 10 | 11 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 12 | For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or 13 | contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 14 | 15 | - [Code of Conduct](#coc) 16 | - [Issues and Bugs](#issue) 17 | - [Feature Requests](#feature) 18 | - [Submission Guidelines](#submit) 19 | 20 | ## Code of Conduct 21 | Help us keep this project open and inclusive. Please read and follow our [Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 22 | 23 | ## Found an Issue? 24 | If you find a bug in the source code or a mistake in the documentation, you can help us by 25 | [submitting an issue](#submit-issue) to the GitHub Repository. Even better, you can 26 | [submit a Pull Request](#submit-pr) with a fix. 27 | 28 | ## Want a Feature? 29 | You can *request* a new feature by [submitting an issue](#submit-issue) to the GitHub 30 | Repository. If you would like to *implement* a new feature, please submit an issue with 31 | a proposal for your work first, to be sure that we can use it. 32 | 33 | * **Small Features** can be crafted and directly [submitted as a Pull Request](#submit-pr). 
34 | 35 | ## Submission Guidelines 36 | 37 | ### Submitting an Issue 38 | Before you submit an issue, search the archive; your question may already have been answered. 39 | 40 | If your issue appears to be a bug and hasn't been reported, open a new issue. 41 | Help us maximize the effort we can spend fixing issues and adding new 42 | features by not reporting duplicate issues. Providing the following information will increase the 43 | chances of your issue being dealt with quickly: 44 | 45 | * **Overview of the Issue** - if an error is being thrown, a non-minified stack trace helps 46 | * **Version** - what version is affected (e.g. 0.1.2) 47 | * **Motivation for or Use Case** - explain what you are trying to do and why the current behavior is a bug for you 48 | * **Browsers and Operating System** - is this a problem with all browsers? 49 | * **Reproduce the Error** - provide a live example or an unambiguous set of steps 50 | * **Related Issues** - has a similar issue been reported before? 51 | * **Suggest a Fix** - if you can't fix the bug yourself, perhaps you can point to what might be 52 | causing the problem (line of code or commit) 53 | 54 | You can file new issues by providing the above information at the corresponding repository's issues link: https://github.com/Azure-samples/azure-openai-entity-extraction/issues/new. 55 | 56 | ### Submitting a Pull Request (PR) 57 | Before you submit your Pull Request (PR), consider the following guidelines: 58 | 59 | * Search the repository (https://github.com/Azure-samples/azure-openai-entity-extraction/pulls) for an open or closed PR 60 | that relates to your submission. You don't want to duplicate effort. 61 | 62 | * Make your changes in a new git fork: 63 | 64 | * Commit your changes using a descriptive commit message 65 | * Push your fork to GitHub: 66 | * In GitHub, create a pull request 67 | * If we suggest changes then: 68 | * Make the required updates.
69 | * Rebase your fork and force push to your GitHub repository (this will update your Pull Request): 70 | 71 | ```shell 72 | git rebase master -i 73 | git push -f 74 | ``` 75 | 76 | That's it! Thank you for your contribution! 77 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 4 | > Please provide us with the following information: 5 | > --------------------------------------------------------------- 6 | 7 | ### This issue is for a: (mark with an `x`) 8 | ``` 9 | - [ ] bug report -> please search issues before submitting 10 | - [ ] feature request 11 | - [ ] documentation issue or request 12 | - [ ] regression (a behavior that used to work and stopped in a new release) 13 | ``` 14 | 15 | ### Minimal steps to reproduce 16 | > 17 | 18 | ### Any log messages given by the failure 19 | > 20 | 21 | ### Expected/desired behavior 22 | > 23 | 24 | ### OS and Version? 25 | > Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?) 26 | 27 | ### Versions 28 | > 29 | 30 | ### Mention any other details that might be useful 31 | 32 | > --------------------------------------------------------------- 33 | > Thanks! We'll be in touch soon. 34 | -------------------------------------------------------------------------------- /.github/SECURITY.md: -------------------------------------------------------------------------------- 1 | # Security 2 | 3 | Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). 
4 | 5 | If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](), please report it to us as described below. 6 | 7 | ## Reporting Security Issues 8 | 9 | **Please do not report security vulnerabilities through public GitHub issues.** 10 | 11 | Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report). 12 | 13 | If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/msrc/pgp-key-msrc). 14 | 15 | You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc). 16 | 17 | Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: 18 | 19 | - Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 20 | - Full paths of source file(s) related to the manifestation of the issue 21 | - The location of the affected source code (tag/branch/commit or direct URL) 22 | - Any special configuration required to reproduce the issue 23 | - Step-by-step instructions to reproduce the issue 24 | - Proof-of-concept or exploit code (if possible) 25 | - Impact of the issue, including how an attacker might exploit the issue 26 | 27 | This information will help us triage your report more quickly. 28 | 29 | If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. 
Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs. 30 | 31 | ## Preferred Languages 32 | 33 | We prefer all communications to be in English. 34 | 35 | ## Policy 36 | 37 | Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/msrc/cvd). 38 | -------------------------------------------------------------------------------- /.github/dependabot.yaml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | 4 | # Maintain dependencies for GitHub Actions 5 | - package-ecosystem: "github-actions" 6 | directory: "/" 7 | schedule: 8 | interval: "weekly" 9 | groups: 10 | github-actions: 11 | patterns: 12 | - "*" 13 | -------------------------------------------------------------------------------- /.github/workflows/azure-dev.yaml: -------------------------------------------------------------------------------- 1 | name: Provision with azd 2 | 3 | on: 4 | workflow_dispatch: 5 | push: 6 | # Run when commits are pushed to mainline branch (main or master) 7 | # Set this to the mainline branch you are using 8 | branches: 9 | - main 10 | 11 | # GitHub Actions workflow to deploy to Azure using azd 12 | # To configure required secrets for connecting to Azure, simply run `azd pipeline config` 13 | 14 | # Set up permissions for deploying with secretless Azure federated credentials 15 | # https://learn.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux#set-up-azure-login-with-openid-connect-authentication 16 | permissions: 17 | id-token: write 18 | contents: read 19 | 20 | jobs: 21 | build: 22 | runs-on: ubuntu-latest 23 | env: 24 | # azd required 25 | AZURE_CLIENT_ID: ${{ vars.AZURE_CLIENT_ID }} 26 | AZURE_TENANT_ID: ${{ vars.AZURE_TENANT_ID }} 27 | AZURE_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }} 28 | AZURE_ENV_NAME: ${{ vars.AZURE_ENV_NAME }} 29 | 
AZURE_LOCATION: ${{ vars.AZURE_LOCATION }} 30 | steps: 31 | - name: Checkout 32 | uses: actions/checkout@v4 33 | 34 | - name: Install azd 35 | uses: Azure/setup-azd@v2.1.0 36 | 37 | - name: Install Nodejs 38 | uses: actions/setup-node@v4 39 | with: 40 | node-version: 18 41 | 42 | - name: Log in with Azure (Federated Credentials) 43 | if: ${{ env.AZURE_CLIENT_ID != '' }} 44 | run: | 45 | azd auth login ` 46 | --client-id "$Env:AZURE_CLIENT_ID" ` 47 | --federated-credential-provider "github" ` 48 | --tenant-id "$Env:AZURE_TENANT_ID" 49 | shell: pwsh 50 | 51 | - name: Provision Infrastructure 52 | run: azd provision --no-prompt 53 | env: 54 | AZD_INITIAL_ENVIRONMENT_CONFIG: ${{ secrets.AZD_INITIAL_ENVIRONMENT_CONFIG }} 55 | AZURE_SERVER_APP_SECRET: ${{ secrets.AZURE_SERVER_APP_SECRET }} 56 | AZURE_CLIENT_APP_SECRET: ${{ secrets.AZURE_CLIENT_APP_SECRET }} 57 | -------------------------------------------------------------------------------- /.github/workflows/template-validation.yaml: -------------------------------------------------------------------------------- 1 | # This is for internal use to make sure our samples follow best practices. You can delete this in your fork. 
2 | name: Template validation sample workflow 3 | on: 4 | workflow_dispatch: 5 | 6 | permissions: 7 | contents: read 8 | id-token: write 9 | pull-requests: write 10 | 11 | jobs: 12 | template_validation_job: 13 | runs-on: ubuntu-latest 14 | name: template validation 15 | steps: 16 | - uses: actions/checkout@v4 17 | 18 | - uses: microsoft/template-validation-action@v0.4.2 19 | id: validation 20 | env: 21 | AZURE_CLIENT_ID: ${{ vars.AZURE_CLIENT_ID }} 22 | AZURE_TENANT_ID: ${{ vars.AZURE_TENANT_ID }} 23 | AZURE_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }} 24 | AZURE_ENV_NAME: ${{ vars.AZURE_ENV_NAME }} 25 | AZURE_LOCATION: ${{ vars.AZURE_LOCATION }} 26 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 27 | 28 | - name: print result 29 | run: cat ${{ steps.validation.outputs.resultFile }} 30 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Azure az webapp deployment details 2 | .azure 3 | *_env 4 | 5 | # Byte-compiled / optimized / DLL files 6 | __pycache__/ 7 | *.py[cod] 8 | *$py.class 9 | 10 | # C extensions 11 | *.so 12 | 13 | # Distribution / packaging 14 | .Python 15 | build/ 16 | develop-eggs/ 17 | dist/ 18 | downloads/ 19 | eggs/ 20 | .eggs/ 21 | lib/ 22 | lib64/ 23 | parts/ 24 | sdist/ 25 | var/ 26 | wheels/ 27 | share/python-wheels/ 28 | *.egg-info/ 29 | .installed.cfg 30 | *.egg 31 | MANIFEST 32 | 33 | # PyInstaller 34 | # Usually these files are written by a python script from a template 35 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
36 | *.manifest 37 | *.spec 38 | 39 | # Installer logs 40 | pip-log.txt 41 | pip-delete-this-directory.txt 42 | 43 | # Unit test / coverage reports 44 | htmlcov/ 45 | .tox/ 46 | .nox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *.cover 53 | *.py,cover 54 | .hypothesis/ 55 | .pytest_cache/ 56 | cover/ 57 | 58 | # Translations 59 | *.mo 60 | *.pot 61 | 62 | # Django stuff: 63 | *.log 64 | local_settings.py 65 | db.sqlite3 66 | db.sqlite3-journal 67 | 68 | # Flask stuff: 69 | instance/ 70 | .webassets-cache 71 | 72 | # Scrapy stuff: 73 | .scrapy 74 | 75 | # Sphinx documentation 76 | docs/_build/ 77 | 78 | # PyBuilder 79 | .pybuilder/ 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # IPython 86 | profile_default/ 87 | ipython_config.py 88 | 89 | # pyenv 90 | # For a library or package, you might want to ignore these files since the code is 91 | # intended to run in multiple environments; otherwise, check them in: 92 | # .python-version 93 | 94 | # pipenv 95 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 96 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 97 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 98 | # install all needed dependencies. 99 | #Pipfile.lock 100 | 101 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow 102 | __pypackages__/ 103 | 104 | # Celery stuff 105 | celerybeat-schedule 106 | celerybeat.pid 107 | 108 | # SageMath parsed files 109 | *.sage.py 110 | 111 | # Environments 112 | .env 113 | .venv 114 | env/ 115 | venv/ 116 | ENV/ 117 | env.bak/ 118 | venv.bak/ 119 | 120 | # Spyder project settings 121 | .spyderproject 122 | .spyproject 123 | 124 | # Rope project settings 125 | .ropeproject 126 | 127 | # mkdocs documentation 128 | /site 129 | 130 | # mypy 131 | .mypy_cache/ 132 | .dmypy.json 133 | dmypy.json 134 | 135 | # Pyre type checker 136 | .pyre/ 137 | 138 | # pytype static type analyzer 139 | .pytype/ 140 | 141 | # Cython debug symbols 142 | cython_debug/ 143 | 144 | # NPM 145 | npm-debug.log* 146 | node_modules 147 | static/ 148 | 149 | .DS_Store 150 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | repos: 2 | - repo: https://github.com/pre-commit/pre-commit-hooks 3 | rev: v4.5.0 4 | hooks: 5 | - id: check-yaml 6 | - id: end-of-file-fixer 7 | exclude: ^tests/snapshots 8 | - id: trailing-whitespace 9 | - repo: https://github.com/astral-sh/ruff-pre-commit 10 | rev: v0.1.0 11 | hooks: 12 | # Run the linter. 13 | - id: ruff 14 | args: [ --fix ] 15 | # Run the formatter. 
16 | - id: ruff-format 17 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Azure Samples 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 14 | # Python AI Agent Frameworks Demos 15 | 16 | [![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/Azure-Samples/python-ai-agent-frameworks-demos) 17 | [![Open in Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-ai-agent-frameworks-demos) 18 | 19 | This repository provides examples of many popular Python AI agent frameworks using LLMs from [GitHub Models](https://github.com/marketplace/models). Those models are free to use for anyone with a GitHub account, up to a [daily rate limit](https://docs.github.com/github-models/prototyping-with-ai-models#rate-limits). 20 | 21 | * [Getting started](#getting-started) 22 | * [GitHub Codespaces](#github-codespaces) 23 | * [VS Code Dev Containers](#vs-code-dev-containers) 24 | * [Local environment](#local-environment) 25 | * [Running the Python examples](#running-the-python-examples) 26 | * [Configuring GitHub Models](#configuring-github-models) 27 | * [Provisioning Azure AI resources](#provisioning-azure-ai-resources) 28 | * [Resources](#resources) 29 | 30 | 31 | ## Getting started 32 | 33 | You have a few options for getting started with this repository. 34 | The quickest way to get started is GitHub Codespaces, since it will set up everything for you, but you can also [set it up locally](#local-environment). 35 | 36 | ### GitHub Codespaces 37 | 38 | You can run this repository virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser: 39 | 40 | 1.
Open the repository (this may take several minutes): 41 | 42 | [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/python-ai-agent-frameworks-demos) 43 | 44 | 2. Open a terminal window 45 | 3. Continue with the steps to run the examples 46 | 47 | ### VS Code Dev Containers 48 | 49 | A related option is VS Code Dev Containers, which will open the project in your local VS Code using the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers): 50 | 51 | 1. Start Docker Desktop (install it if not already installed) 52 | 2. Open the project: 53 | 54 | [![Open in Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-ai-agent-frameworks-demos) 55 | 56 | 3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window. 57 | 4. Continue with the steps to run the examples 58 | 59 | ### Local environment 60 | 61 | 1. Make sure the following tools are installed: 62 | 63 | * [Python 3.10+](https://www.python.org/downloads/) 64 | * Git 65 | 66 | 2. Clone the repository: 67 | 68 | ```shell 69 | git clone https://github.com/Azure-Samples/python-ai-agent-frameworks-demos 70 | cd python-ai-agent-frameworks-demos 71 | ``` 72 | 73 | 3. Set up a virtual environment: 74 | 75 | ```shell 76 | python -m venv venv 77 | source venv/bin/activate # On Windows: venv\Scripts\activate 78 | ``` 79 | 80 | 4. Install the requirements: 81 | 82 | ```shell 83 | pip install -r requirements.txt 84 | ``` 85 | 86 | ## Running the Python examples 87 | 88 | You can run the examples in this repository by executing the scripts in the `examples` directory. Each script demonstrates a different AI agent pattern or framework.
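All of the example scripts depend on the same underlying connection: an OpenAI-compatible chat completions call to GitHub Models, authenticated with the `GITHUB_TOKEN` and `GITHUB_MODEL` environment variables described in the "Configuring GitHub Models" section below. As a rough, standard-library-only sketch of that pattern (the endpoint URL and request shape here are assumptions based on the GitHub Models documentation, not code taken from this repo's scripts, which use framework-specific clients instead):

```python
import json
import os
import urllib.request

# Assumed GitHub Models endpoint (OpenAI-compatible chat completions API).
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_chat_request(prompt: str, model: str = "") -> urllib.request.Request:
    """Build an authenticated chat-completions request (not sent yet)."""
    # Fall back to the same env vars the example scripts rely on.
    model = model or os.environ.get("GITHUB_MODEL", "gpt-4o")
    token = os.environ.get("GITHUB_TOKEN", "")
    body = json.dumps(
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


# To actually send the request (needs a valid GITHUB_TOKEN and network access):
# with urllib.request.urlopen(build_chat_request("Say hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

In practice each framework wraps this call for you; the sketch only shows why a single `GITHUB_TOKEN` is enough to run every example.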
89 | 90 | | Example | Description | 91 | | ------- | ----------- | 92 | | autogen_basic.py | Uses AutoGen to build a single agent. | 93 | | autogen_tools.py | Uses AutoGen to build a single agent with tools. | 94 | | autogen_magenticone.py | Uses AutoGen with the MagenticOne orchestrator agent for travel planning. | 95 | | autogen_swarm.py | Uses AutoGen with the Swarm orchestrator agent for flight refunding requests. | 96 | | langgraph_agent.py | Uses LangGraph to build an agent with a StateGraph to play songs. | 97 | | llamaindex.py | Uses LlamaIndex to build a ReAct agent for RAG on multiple indexes. | 98 | | openai_agents_basic.py | Uses the OpenAI Agents framework to build a single agent. | 99 | | openai_agents_handoffs.py | Uses the OpenAI Agents framework to hand off between several agents with tools. | 100 | | openai_functioncalling.py | Uses OpenAI Function Calling to call functions based on LLM output. | 101 | | pydanticai_multiagent.py | Uses PydanticAI to build a two-agent sequential workflow for flight planning. | 102 | | semantickernel_groupchat.py | Uses Semantic Kernel to build a writer/editor two-agent workflow. | 103 | | smolagents_codeagent.py | Uses SmolAgents to build a question-answering agent that can search the web and run code. | 104 | 105 | ## Configuring GitHub Models 106 | 107 | If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment. 108 | 109 | If you want to run the scripts locally, you need to set up the `GITHUB_TOKEN` environment variable with a GitHub personal access token (PAT). You can create a PAT by following these steps: 110 | 111 | 1. Go to your GitHub account settings. 112 | 2. Click on "Developer settings" in the left sidebar. 113 | 3. Click on "Personal access tokens" in the left sidebar. 114 | 4. Click on "Tokens (classic)" or "Fine-grained tokens" depending on your preference. 115 | 5.
Click on "Generate new token". 116 | 6. Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes. 117 | 7. Click on "Generate token". 118 | 8. Copy the generated token. 119 | 9. Set the `GITHUB_TOKEN` environment variable in your terminal or IDE: 120 | 121 | ```shell 122 | export GITHUB_TOKEN=your_personal_access_token 123 | ``` 124 | 125 | 10. Optionally, you can use a model other than "gpt-4o" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small` 126 | 127 | ## Provisioning Azure AI resources 128 | 129 | You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure OpenAI instead, you need to provision the Azure AI resources, which will incur costs. 130 | 131 | This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-4o" and "text-embedding-3-large". The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources. 132 | 133 | 1. Make sure the [Azure Developer CLI (azd)](https://aka.ms/install-azd) is installed. 134 | 135 | 2. Login to Azure: 136 | 137 | ```shell 138 | azd auth login 139 | ``` 140 | 141 | For GitHub Codespaces users, if the previous command fails, try: 142 | 143 | ```shell 144 | azd auth login --use-device-code 145 | ``` 146 | 147 | 3. Provision the OpenAI account: 148 | 149 | ```shell 150 | azd provision 151 | ``` 152 | 153 | It will prompt you to provide an `azd` environment name (like "agents-demos"), select a subscription from your Azure account, and select a location. Then it will provision the resources in your account. 154 | 155 | 4. 
Once the resources are provisioned, you should now see a local `.env` file with all the environment variables needed to run the scripts. 156 | 5. To delete the resources, run: 157 | 158 | ```shell 159 | azd down 160 | ``` 161 | 162 | ## Resources 163 | 164 | * [AutoGen Documentation](https://microsoft.github.io/autogen/) 165 | * [LangGraph Documentation](https://langchain-ai.github.io/langgraph/tutorials/introduction/) 166 | * [LlamaIndex Documentation](https://docs.llamaindex.ai/en/latest/) 167 | * [OpenAI Agents Documentation](https://openai.github.io/openai-agents-python/) 168 | * [OpenAI Function Calling Documentation](https://platform.openai.com/docs/guides/function-calling?api-mode=chat) 169 | * [PydanticAI Documentation](https://ai.pydantic.dev/multi-agent-applications/) 170 | * [Semantic Kernel Documentation](https://learn.microsoft.com/semantic-kernel/overview/) 171 | * [SmolAgents Documentation](https://huggingface.co/docs/smolagents/index) 172 | -------------------------------------------------------------------------------- /azure.yaml: -------------------------------------------------------------------------------- 1 | # yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json 2 | 3 | name: azure-openai-entity-extraction 4 | metadata: 5 | template: azure-openai-entity-extraction@0.0.4 6 | hooks: 7 | postprovision: 8 | windows: 9 | shell: pwsh 10 | run: ./infra/write_dot_env.ps1 11 | interactive: false 12 | continueOnError: false 13 | posix: 14 | shell: sh 15 | run: ./infra/write_dot_env.sh 16 | interactive: false 17 | continueOnError: false 18 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs1/docstore.json: -------------------------------------------------------------------------------- 1 | {"docstore/metadata": {"073c9ba8-a773-4e6a-be37-84981f5e6a2c": {"doc_hash": 
"cc1025676fdf6c912195b57189ef5dc17b13d3b79ffe2f13469eb571ca4dc6cf"}, "76c3c404-5e36-4b51-9609-fc606bc17975": {"doc_hash": "c096166b4454ecf54d43b94b7cacecef2d78ae915934d1d6eb78bce42d11f96c"}, "57f6a2ee-8f27-48b5-806a-052d09d925a7": {"doc_hash": "c3809a226a60c90af30d826358a6b9cf15daad919dd2244f495c185f239739d5"}, "35874492-a71e-4b49-aaf5-3c58b196de33": {"doc_hash": "4af2ff890efb94c275f9ac77f6943a8f8ffbdb1d330e096176ffdd5fba335fad"}, "4433322c-782b-4bac-839a-b08d7783e6c8": {"doc_hash": "0eedb7f6bbb64144d8f6734c48460f2b661642bf969bafa3fc0835079b12e18e"}, "2f1e518c-dcab-476f-9bdc-dbae42288cec": {"doc_hash": "277e02feaaefd508a7a807e9e9ef0c84104058f7415d3791287c5fbca9fb915d"}, "ecdf1d5c-d9d1-456d-bc9b-2cb305e9c96f": {"doc_hash": "9bb4f149396a43f97232b31871bf22ffa0fb5aa7957f83e3734104a06939f0b6"}, "66fe6503-d471-4c71-8071-223c93a40487": {"doc_hash": "f8089fb431397712bb8c6b549f995ab92ad53f4874c3e0de2242af49f02d800e"}, "064cdc80-4c75-4d19-8209-f1612b7992c5": {"doc_hash": "93b337cf8e783eef73241f392163c56e25eaf8465faa000b796c709d7121fc6c"}, "ad321e04-d7c9-4db1-9ca9-5fc4339107ca": {"doc_hash": "c62148cd51f414ca952540397c6807323bbeb34b2e089af3c3eea4aab3a6d1ee"}, "c7bfcdce-2277-481d-a7f8-07ee118be8b4": {"doc_hash": "7c65f1096b8f93282332bccc6b7e08ca583cdf4de8c055e7964fb3386e56aa1f"}, "d91d3e1c-88a0-43ea-a758-3c835c42bc87": {"doc_hash": "cbf6b7700a9b1c89d66ee1888b656f8b70150edfbc351da092c595676e589fe0", "ref_doc_id": "073c9ba8-a773-4e6a-be37-84981f5e6a2c"}, "176a5a69-7e47-45fa-b172-bf58a95d164c": {"doc_hash": "5d02dc08c0e48c3ac24bff1de429550d8f7874a25646c001f796eabcdb20e734", "ref_doc_id": "76c3c404-5e36-4b51-9609-fc606bc17975"}, "12f32e53-c22e-4dc9-9574-106807f1e9a3": {"doc_hash": "cc1bc6966b6a0654607ac077236f7e0a84d19c8a37d58a7e71e17f0bae01adfa", "ref_doc_id": "57f6a2ee-8f27-48b5-806a-052d09d925a7"}, "e443336e-ef2c-402c-9937-fe24d9e09544": {"doc_hash": "9dbe5346f5320d97ed624a2864653da60c2f555abf7f0db791c367ddb7d11dc8", "ref_doc_id": "35874492-a71e-4b49-aaf5-3c58b196de33"}, 
"e7fa5797-4c95-49cf-abde-ae930ce39197": {"doc_hash": "ddd5aeafbadff5b4879aa836ed8aa587620804ebbd99d7dc784c41cada917a74", "ref_doc_id": "4433322c-782b-4bac-839a-b08d7783e6c8"}, "ffd898c1-3c98-4144-a486-725ebbf32c93": {"doc_hash": "1d0ff0d18efdea1cbd812914ca6431a55340317f5316e8d5729cd5326953d088", "ref_doc_id": "2f1e518c-dcab-476f-9bdc-dbae42288cec"}, "fdaf75e9-edab-4f64-bfb2-1823706c4938": {"doc_hash": "464f07345f41a00e0db33019883a82e1d5b1af129661c66f884dd5b80c09e14a", "ref_doc_id": "ecdf1d5c-d9d1-456d-bc9b-2cb305e9c96f"}, "5e1de6c0-44b7-49e2-a557-b0e14436e54e": {"doc_hash": "122472b55a944968af6424595446fc8f985068b781087b1e12bc9ccf7d8879bd", "ref_doc_id": "66fe6503-d471-4c71-8071-223c93a40487"}, "20d96b89-f430-41b8-8d2d-920a0eca77dd": {"doc_hash": "7a44b2db38f712c4074b20f0bd3b239def1b4bd06159963e7907a084a2affb77", "ref_doc_id": "064cdc80-4c75-4d19-8209-f1612b7992c5"}, "858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0": {"doc_hash": "efa92e6ef3d25041ebd66493f6af685d0799dcd1f9e6f31dffd42b1a9ddad4d6", "ref_doc_id": "ad321e04-d7c9-4db1-9ca9-5fc4339107ca"}, "c2075dd6-c0e3-4aa0-873f-d788840209c9": {"doc_hash": "07f32ce94ecbd9468a0fcb5efb81df8d1da738295805ba17112e574e1d2452b4", "ref_doc_id": "c7bfcdce-2277-481d-a7f8-07ee118be8b4"}}, "docstore/data": {"d91d3e1c-88a0-43ea-a758-3c835c42bc87": {"__data__": {"id_": "d91d3e1c-88a0-43ea-a758-3c835c42bc87", "embedding": null, "metadata": {"page_label": "1", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "073c9ba8-a773-4e6a-be37-84981f5e6a2c", 
"node_type": "4", "metadata": {"page_label": "1", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "cc1025676fdf6c912195b57189ef5dc17b13d3b79ffe2f13469eb571ca4dc6cf", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "Contoso Electronics \nEmployee Handbook", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 38, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "176a5a69-7e47-45fa-b172-bf58a95d164c": {"__data__": {"id_": "176a5a69-7e47-45fa-b172-bf58a95d164c", "embedding": null, "metadata": {"page_label": "2", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "76c3c404-5e36-4b51-9609-fc606bc17975", "node_type": "4", "metadata": {"page_label": "2", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "c096166b4454ecf54d43b94b7cacecef2d78ae915934d1d6eb78bce42d11f96c", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "This document contains information generated using a 
language model (Azure OpenAI). The \ninformation contained in this document is only for demonstration purposes and does not \nreflect the opinions or beliefs of Microsoft. Microsoft makes no representations or \nwarranties of any kind, express or implied, about the completeness, accuracy, reliability, \nsuitability or availability with respect to the information contained in this document. \nAll rights reserved to Microsoft", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 476, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "12f32e53-c22e-4dc9-9574-106807f1e9a3": {"__data__": {"id_": "12f32e53-c22e-4dc9-9574-106807f1e9a3", "embedding": null, "metadata": {"page_label": "3", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "57f6a2ee-8f27-48b5-806a-052d09d925a7", "node_type": "4", "metadata": {"page_label": "3", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "c3809a226a60c90af30d826358a6b9cf15daad919dd2244f495c185f239739d5", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "Contoso Electronics Employee Handbook \nLast Updated: 2023-03-05 \n \nContoso Electronics is a leader in the aerospace industry, providing advanced 
electronic \ncomponents for both commercial and military aircraft. We specialize in creating cutting-\nedge systems that are both reliable and efficient. Our mission is to provide the highest \nquality aircraft components to our customers, while maintaining a commitment to safety \nand excellence. We are proud to have built a strong reputation in the aerospace industry \nand strive to continually improve our products and services. Our experienced team of \nengineers and technicians are dedicated to providing the best products and services to our \ncustomers. With our commitment to excellence, we are sure to remain a leader in the \naerospace industry for years to come. \nOur Mission \n \nContoso Electronics is a leader in the aerospace industry, providing advanced electronic \ncomponents for both commercial and military aircraft. We specialize in creating cutting-\nedge systems that are both reliable and efficient. Our mission is to provide the highest \nquality aircraft components to our customers, while maintaining a commitment to safety \nand excellence. We are proud to have built a strong reputation in the aerospace industry \nand strive to continually improve our products and services. Our experienced team of \nengineers and technicians are dedicated to providing the best products and services to our \ncustomers. With our commitment to excellence, we are sure to remain a leader in the \naerospace industry for years to come. \nValues \n \nAt Contoso Electronics, we strive to create an environment that values hard work, \ninnovation, and collaboration. Our core values serve as the foundation for our success, and \nthey guide our employees in how we should act and interact with each other and our \ncustomers. \n \nCompany Values: \n1. Quality: We strive to provide the highest quality products and services to our customers. \n2. Integrity: We value honesty, respect, and trustworthiness in all our interactions. \n3. 
Innovation: We encourage creativity and support new ideas and approaches to our \nbusiness. \n4. Teamwork: We believe that by working together, we can achieve greater success. \n5. Respect: We treat all our employees, customers, and partners with respect and dignity. \n6. Excellence: We strive to exceed expectations and provide excellent service.", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 2409, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "e443336e-ef2c-402c-9937-fe24d9e09544": {"__data__": {"id_": "e443336e-ef2c-402c-9937-fe24d9e09544", "embedding": null, "metadata": {"page_label": "4", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "35874492-a71e-4b49-aaf5-3c58b196de33", "node_type": "4", "metadata": {"page_label": "4", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "4af2ff890efb94c275f9ac77f6943a8f8ffbdb1d330e096176ffdd5fba335fad", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "7. Accountability: We take responsibility for our actions and hold ourselves and others \naccountable for their performance. \n8. 
Community: We are committed to making a positive impact in the communities in which \nwe work and live. \nPerformance Reviews \n \nPerformance Reviews at Contoso Electronics \n \nAt Contoso Electronics, we strive to ensure our employees are getting the feedback they \nneed to continue growing and developing in their roles. We understand that performance \nreviews are a key part of this process and it is important to us that they are conducted in an \neffective and efficient manner. \n \nPerformance reviews are conducted annually and are an important part of your career \ndevelopment. During the review, your supervisor will discuss your performance over the \npast year and provide feedback on areas for improvement. They will also provide you with \nan opportunity to discuss your goals and objectives for the upcoming year. \n \nPerformance reviews are a two-way dialogue between managers and employees. We \nencourage all employees to be honest and open during the review process, as it is an \nimportant opportunity to discuss successes and challenges in the workplace. \n \nWe aim to provide positive and constructive feedback during performance reviews. This \nfeedback should be used as an opportunity to help employees develop and grow in their \nroles. \n \nEmployees will receive a written summary of their performance review which will be \ndiscussed during the review session. This written summary will include a rating of the \nemployee\u2019s performance, feedback, and goals and objectives for the upcoming year. \n \nWe understand that performance reviews can be a stressful process. We are committed to \nmaking sure that all employees feel supported and empowered during the process. We \nencourage all employees to reach out to their managers with any questions or concerns \nthey may have. \n \nWe look forward to conducting performance reviews with all our employees. 
They are an \nimportant part of our commitment to helping our employees grow and develop in their \nroles.", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 2090, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "e7fa5797-4c95-49cf-abde-ae930ce39197": {"__data__": {"id_": "e7fa5797-4c95-49cf-abde-ae930ce39197", "embedding": null, "metadata": {"page_label": "5", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "4433322c-782b-4bac-839a-b08d7783e6c8", "node_type": "4", "metadata": {"page_label": "5", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "0eedb7f6bbb64144d8f6734c48460f2b661642bf969bafa3fc0835079b12e18e", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "Workplace Safety \n \nWelcome to Contoso Electronics! Our goal is to provide a safe and healthy work \nenvironment for our employees and to maintain a safe workplace that is free from \nrecognized hazards. We believe that workplace safety is everyone's responsibility and we \nare committed to providing a safe working environment for all of our employees. 
\n \nContoso Electronics' Workplace Safety Program \n \nAt Contoso Electronics, we have established a comprehensive workplace safety program \nthat is designed to protect our employees from workplace hazards. Our program includes: \n \n\u2022 Hazard Identification and Risk Assessment \u2013 We strive to identify and assess potential \nsafety hazards in the workplace and take the necessary steps to reduce or eliminate them. \n \n\u2022 Training \u2013 We provide our employees with safety training to ensure that they are aware of \nsafety procedures and protocols. \n \n\u2022 Personal Protective Equipment (PPE) \u2013 We provide our employees with the necessary PPE \nto ensure their safety. \n \n\u2022 Emergency Preparedness \u2013 We have established procedures and protocols in the event of \nan emergency. \n \n\u2022 Reporting \u2013 We encourage our employees to report any safety concerns or incidents to \nour safety department. \n \n\u2022 Inspections \u2013 We conduct regular safety inspections to ensure that our workplace is free \nfrom hazards. \n \n\u2022 Record Keeping \u2013 We maintain accurate records of all safety incidents, inspections and \ntraining. \n \nWe believe that our workplace safety program is essential to providing a safe and healthy \nwork environment for our employees. We are committed to providing a safe working \nenvironment and to protecting our employees from workplace hazards. If you have any \nquestions or concerns related to workplace safety, please contact our safety department. \nThank you for being a part of the Contoso Electronics team. 
\nWorkplace Violence \n \nWorkplace Violence Prevention Program", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 1910, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "ffd898c1-3c98-4144-a486-725ebbf32c93": {"__data__": {"id_": "ffd898c1-3c98-4144-a486-725ebbf32c93", "embedding": null, "metadata": {"page_label": "6", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "2f1e518c-dcab-476f-9bdc-dbae42288cec", "node_type": "4", "metadata": {"page_label": "6", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "277e02feaaefd508a7a807e9e9ef0c84104058f7415d3791287c5fbca9fb915d", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "At Contoso Electronics, we are committed to providing a safe, respectful and healthy \nworkplace for all of our employees. In order to ensure that we maintain this, we have \ndeveloped a comprehensive Workplace Violence Prevention Program. \n \nPurpose \n \nThe purpose of this program is to promote a safe and healthy work environment by \npreventing violence, threats, and abuse in the workplace. 
It is also intended to provide a \nsafe, secure and protected environment for our employees, customers, and visitors. \n \nDefinition of Workplace Violence \n \nWorkplace violence is any act of physical aggression, intimidation, or threat of physical \nharm toward another individual in the workplace. This includes but is not limited to \nphysical assault, threats of violence, verbal abuse, intimidation, harassment, bullying, \nstalking, and any other behavior that creates a hostile work environment. \n \nPrevention and Response \n \nContoso Electronics is committed to preventing workplace violence and will not tolerate \nany acts of violence, threats, or abuse in the workplace. All employees are expected to \nfollow the company\u2019s zero tolerance policy for workplace violence. \n \nIf an employee believes that they are in danger or are the victim or witness of workplace \nviolence, they should immediately notify their supervisor or Human Resources \nRepresentative. Employees are also encouraged to report any suspicious activity or \nbehavior to their supervisor or Human Resources Representative. \n \nIn the event of an incident of workplace violence, Contoso Electronics will respond \npromptly and appropriately. All incidents will be thoroughly investigated and the \nappropriate disciplinary action will be taken. \n \nTraining and Education \n \nContoso Electronics will provide regular training and education to all employees on \nworkplace violence prevention and response. This training will include information on \nrecognizing potential signs of workplace violence, strategies for responding to incidents, \nand the company\u2019s zero tolerance policy. \n \nWe are committed to creating a safe and secure work environment for all of our employees. 
\nBy following the guidelines outlined in this program, we can ensure that our workplace is \nfree from violence and abuse.", "mimetype": "text/plain", "start_char_idx": 2, "end_char_idx": 2252, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "fdaf75e9-edab-4f64-bfb2-1823706c4938": {"__data__": {"id_": "fdaf75e9-edab-4f64-bfb2-1823706c4938", "embedding": null, "metadata": {"page_label": "7", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "ecdf1d5c-d9d1-456d-bc9b-2cb305e9c96f", "node_type": "4", "metadata": {"page_label": "7", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "9bb4f149396a43f97232b31871bf22ffa0fb5aa7957f83e3734104a06939f0b6", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "Privacy \n \nPrivacy Policy \n \nAt Contoso Electronics, we are committed to protecting the privacy and security of our \ncustomers, employees, and partners. We have developed a comprehensive privacy program \nto ensure that we comply with applicable laws, regulations, and industry standards. \n \nThis policy applies to all Contoso Electronics employees, contractors, and partners. 
\n \nCollection and Use of Personal Information \n \nContoso Electronics collects, stores, and uses personal information for a variety of purposes, \nsuch as to provide services, process orders, respond to customer inquiries, and to provide \nmarketing communications. \n \nWe may also collect information from third parties, such as our partners and vendors. We \nmay use this information to better understand our customers and improve our services. \n \nContoso Electronics will not sell or rent your personal information to any third parties. \n \nData Security and Protection \n \nContoso Electronics is committed to protecting the security of your personal information. \nWe have implemented physical, technical, and administrative measures to protect your data \nfrom unauthorized access, alteration, or disclosure. \n \nWe use secure servers and encryption technology to protect data transmitted over the \nInternet. \n \nAccess to Personal Information \n \nYou have the right to access, review, and request a copy of your personal information that \nwe have collected and stored. You may also request that we delete or correct any inaccurate \ninformation. \n \nTo access or make changes to your personal information, please contact the Privacy Officer \nat privacy@contoso.com. \n \nChanges to This Policy \n \nWe may update this policy from time to time to reflect changes in our practices or \napplicable laws. 
We will notify you of any changes by posting a revised policy on our", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 1832, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "5e1de6c0-44b7-49e2-a557-b0e14436e54e": {"__data__": {"id_": "5e1de6c0-44b7-49e2-a557-b0e14436e54e", "embedding": null, "metadata": {"page_label": "8", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "66fe6503-d471-4c71-8071-223c93a40487", "node_type": "4", "metadata": {"page_label": "8", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "f8089fb431397712bb8c6b549f995ab92ad53f4874c3e0de2242af49f02d800e", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "website. \n \nQuestions or Concerns \n \nIf you have any questions or concerns about our privacy policies or practices, please contact \nthe Privacy Officer at privacy@contoso.com. \nWhistleblower Policy \n \nContoso Electronics Whistleblower Policy \n \nAt Contoso Electronics, we believe in maintaining a safe and transparent working \nenvironment for all of our team members. To ensure the well-being of the entire \norganization, we have established a Whistleblower Policy. 
This policy encourages \nemployees to come forth and report any unethical or illegal activities they may witness \nwhile working at Contoso Electronics. \n \nThis policy applies to all Contoso Electronics employees, contractors, and other third \nparties. \n \nDefinition: \n \nA whistleblower is an individual who reports activities that are illegal, unethical, or \notherwise not in accordance with company policy. \n \nReporting Procedures: \n \nIf you witness any activity that you believe to be illegal, unethical, or not in accordance with \ncompany policy, it is important that you report it immediately. You can do this by: \n \n1. Contacting the Human Resources Department. \n \n2. Emailing the Compliance Officer at compliance@contoso.com. \n \n3. Calling the Compliance Hotline at 1-800-555-1212. \n \nWhen making a report, please provide as much detail as possible. This information should \ninclude: \n \n1. The time and date of the incident. \n \n2. Who was involved.", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 1419, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "20d96b89-f430-41b8-8d2d-920a0eca77dd": {"__data__": {"id_": "20d96b89-f430-41b8-8d2d-920a0eca77dd", "embedding": null, "metadata": {"page_label": "9", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "064cdc80-4c75-4d19-8209-f1612b7992c5", "node_type": "4", "metadata": {"page_label": "9", "file_name": 
"employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "93b337cf8e783eef73241f392163c56e25eaf8465faa000b796c709d7121fc6c", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "3. What happened. \n \n4. Any evidence you may have related to the incident. \n \nIf you choose to report anonymously, you may do so by calling the Compliance Hotline at 1-\n800-555-1212. \n \nRetaliation Prohibited: \n \nRetaliation of any kind is strictly prohibited. Any employee who retaliates against a \nwhistleblower will be subject to disciplinary action, up to and including termination. \n \nConfidentiality: \n \nThe identity of the whistleblower will be kept confidential to the extent permitted by law. \n \nInvestigation: \n \nAll reported incidents will be investigated promptly and thoroughly. \n \nThank you for taking the time to read our Whistleblower Policy. We value your commitment \nto ethical and responsible behavior and appreciate your efforts to help us maintain a safe \nand transparent working environment. \nData Security \n \nData Security at Contoso Electronics \n \nAt Contoso Electronics, data security is of the utmost importance. We understand that the \nsecurity of our customers\u2019 data is paramount and we are committed to protecting it. We \nhave a comprehensive data security program in place to ensure that all customer data is \nkept secure and confidential. \n \nData Security Policies: \n \n\u2022 All employees must adhere to data security policies and procedures established by \nContoso Electronics. \n \n\u2022 All customer data must be encrypted when stored or transferred. 
\n \n\u2022 Access to customer data must be restricted to authorized personnel only.", "mimetype": "text/plain", "start_char_idx": 2, "end_char_idx": 1454, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0": {"__data__": {"id_": "858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0", "embedding": null, "metadata": {"page_label": "10", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "ad321e04-d7c9-4db1-9ca9-5fc4339107ca", "node_type": "4", "metadata": {"page_label": "10", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "c62148cd51f414ca952540397c6807323bbeb34b2e089af3c3eea4aab3a6d1ee", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "\u2022 All computers, servers, and other digital devices used to store customer data must be \nprotected with up-to-date anti-virus and security software. \n \n\u2022 All passwords used to access customer data must be complex and regularly updated. \n \n\u2022 All customer data must be backed up regularly and stored securely. \n \n\u2022 All customer data must be destroyed securely when no longer needed. 
\n \nData Security Training: \n \nAll employees must complete data security training at the start of employment and annually \nthereafter. This training will cover topics such as data security policies and procedures, \nencryption, access control, password security, and data backup and destruction. \n \nData Security Audits: \n \nContoso Electronics will conduct regular audits of our data security program to ensure that \nit is functioning as intended. Audits will cover topics such as system security, access control, \nand data protection. \n \nIf you have any questions or concerns about Contoso Electronics\u2019 data security program, \nplease contact our data security team. We are committed to keeping your data secure and \nwe appreciate your continued trust. Thank you for being a valued customer. \nJob Roles \n \n1. Chief Executive Officer \n2. Chief Operating Officer \n3. Chief Financial Officer \n4. Chief Technology Officer \n5. Vice President of Sales \n6. Vice President of Marketing \n7. Vice President of Operations \n8. Vice President of Human Resources \n9. Vice President of Research and Development \n10. Vice President of Product Management \n11. Director of Sales \n12. Director of Marketing \n13. Director of Operations \n14. 
Director of Human Resources", "mimetype": "text/plain", "start_char_idx": 2, "end_char_idx": 1629, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "c2075dd6-c0e3-4aa0-873f-d788840209c9": {"__data__": {"id_": "c2075dd6-c0e3-4aa0-873f-d788840209c9", "embedding": null, "metadata": {"page_label": "11", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "c7bfcdce-2277-481d-a7f8-07ee118be8b4", "node_type": "4", "metadata": {"page_label": "11", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "7c65f1096b8f93282332bccc6b7e08ca583cdf4de8c055e7964fb3386e56aa1f", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "15. Director of Research and Development \n16. Director of Product Management \n17. Senior Manager of Sales \n18. Senior Manager of Marketing \n19. Senior Manager of Operations \n20. Senior Manager of Human Resources \n21. Senior Manager of Research and Development \n22. Senior Manager of Product Management \n23. Manager of Sales \n24. Manager of Marketing \n25. Manager of Operations \n26. Manager of Human Resources \n27. Manager of Research and Development \n28. Manager of Product Management \n29. Sales Representative \n30. 
Customer Service Representative", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 547, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}}, "docstore/ref_doc_info": {"073c9ba8-a773-4e6a-be37-84981f5e6a2c": {"node_ids": ["d91d3e1c-88a0-43ea-a758-3c835c42bc87"], "metadata": {"page_label": "1", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "76c3c404-5e36-4b51-9609-fc606bc17975": {"node_ids": ["176a5a69-7e47-45fa-b172-bf58a95d164c"], "metadata": {"page_label": "2", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "57f6a2ee-8f27-48b5-806a-052d09d925a7": {"node_ids": ["12f32e53-c22e-4dc9-9574-106807f1e9a3"], "metadata": {"page_label": "3", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "35874492-a71e-4b49-aaf5-3c58b196de33": {"node_ids": ["e443336e-ef2c-402c-9937-fe24d9e09544"], "metadata": {"page_label": "4", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "4433322c-782b-4bac-839a-b08d7783e6c8": {"node_ids": ["e7fa5797-4c95-49cf-abde-ae930ce39197"], "metadata": {"page_label": "5", "file_name": "employee_handbook.pdf", "file_path": 
"/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "2f1e518c-dcab-476f-9bdc-dbae42288cec": {"node_ids": ["ffd898c1-3c98-4144-a486-725ebbf32c93"], "metadata": {"page_label": "6", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "ecdf1d5c-d9d1-456d-bc9b-2cb305e9c96f": {"node_ids": ["fdaf75e9-edab-4f64-bfb2-1823706c4938"], "metadata": {"page_label": "7", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "66fe6503-d471-4c71-8071-223c93a40487": {"node_ids": ["5e1de6c0-44b7-49e2-a557-b0e14436e54e"], "metadata": {"page_label": "8", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "064cdc80-4c75-4d19-8209-f1612b7992c5": {"node_ids": ["20d96b89-f430-41b8-8d2d-920a0eca77dd"], "metadata": {"page_label": "9", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "ad321e04-d7c9-4db1-9ca9-5fc4339107ca": {"node_ids": ["858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0"], "metadata": {"page_label": "10", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": 
"application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "c7bfcdce-2277-481d-a7f8-07ee118be8b4": {"node_ids": ["c2075dd6-c0e3-4aa0-873f-d788840209c9"], "metadata": {"page_label": "11", "file_name": "employee_handbook.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/employee_handbook.pdf", "file_type": "application/pdf", "file_size": 142977, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs1/graph_store.json: -------------------------------------------------------------------------------- 1 | {"graph_dict": {}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs1/image__vector_store.json: -------------------------------------------------------------------------------- 1 | {"embedding_dict": {}, "text_id_to_ref_doc_id": {}, "metadata_dict": {}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs1/index_store.json: -------------------------------------------------------------------------------- 1 | {"index_store/data": {"ab1cee27-84b8-4c4f-a12e-46ca4dcd2310": {"__type__": "vector_store", "__data__": "{\"index_id\": \"ab1cee27-84b8-4c4f-a12e-46ca4dcd2310\", \"summary\": null, \"nodes_dict\": {\"d91d3e1c-88a0-43ea-a758-3c835c42bc87\": \"d91d3e1c-88a0-43ea-a758-3c835c42bc87\", \"176a5a69-7e47-45fa-b172-bf58a95d164c\": \"176a5a69-7e47-45fa-b172-bf58a95d164c\", \"12f32e53-c22e-4dc9-9574-106807f1e9a3\": \"12f32e53-c22e-4dc9-9574-106807f1e9a3\", \"e443336e-ef2c-402c-9937-fe24d9e09544\": \"e443336e-ef2c-402c-9937-fe24d9e09544\", \"e7fa5797-4c95-49cf-abde-ae930ce39197\": \"e7fa5797-4c95-49cf-abde-ae930ce39197\", \"ffd898c1-3c98-4144-a486-725ebbf32c93\": \"ffd898c1-3c98-4144-a486-725ebbf32c93\", 
\"fdaf75e9-edab-4f64-bfb2-1823706c4938\": \"fdaf75e9-edab-4f64-bfb2-1823706c4938\", \"5e1de6c0-44b7-49e2-a557-b0e14436e54e\": \"5e1de6c0-44b7-49e2-a557-b0e14436e54e\", \"20d96b89-f430-41b8-8d2d-920a0eca77dd\": \"20d96b89-f430-41b8-8d2d-920a0eca77dd\", \"858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0\": \"858d5c1d-1ddc-41d4-95fb-fe98f44c5dd0\", \"c2075dd6-c0e3-4aa0-873f-d788840209c9\": \"c2075dd6-c0e3-4aa0-873f-d788840209c9\"}, \"doc_id_dict\": {}, \"embeddings_dict\": {}}"}}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs2/docstore.json: -------------------------------------------------------------------------------- 1 | {"docstore/metadata": {"99ee2cf8-cc91-4fcb-9827-de91acce2234": {"doc_hash": "ff43bfbc704f47450f5cb9a3f2f0feac80a812927c8d72d209f2e9e4c7df1bc7"}, "0d6f04f6-17d3-4533-84ad-9407b1b85bd8": {"doc_hash": "1149993e3df2d09c7244bd7894c59b4f92c82f52cbfd44702f5cb4a1151bc05e"}, "c3b03294-57c5-47b2-8d5b-d9b221fb5acd": {"doc_hash": "8e9d3f78ac7f03794e0e78442d00ecd3dbddd523ebe4a0a0411855302bba238e"}, "28b1fa5c-bef5-43fd-b868-033026cc8327": {"doc_hash": "36dd4d9ea19adf7bcd2201a10927447a665b2382f1ea4d7750ca32281961a9b8"}, "850b8e5c-00f6-45dd-b53a-b0b0822659ef": {"doc_hash": "dc89c9ea74b8d0d71a7a22a6985999d43c9b908ac9053f26b754e532612d80e7", "ref_doc_id": "99ee2cf8-cc91-4fcb-9827-de91acce2234"}, "28092f0d-bf08-419d-952d-9fe29b811b72": {"doc_hash": "331fd2db5f8bf1456093a989bd4d29782d8e313f581cdaefc2424cb4bae5c13f", "ref_doc_id": "0d6f04f6-17d3-4533-84ad-9407b1b85bd8"}, "3a5d49e1-389d-4b45-900c-c21d42263bf9": {"doc_hash": "c83b7037983134c506156cd1fe6ceaade32d269874083f39fa8bc4f122f7c405", "ref_doc_id": "c3b03294-57c5-47b2-8d5b-d9b221fb5acd"}, "9bc4a096-80d0-4aa8-b730-4aee1abb64ac": {"doc_hash": "054d0f503d5a63e7d486ab5f9bf74e9e8d4a2257ce1ae26aa6828ae6d11bf02b", "ref_doc_id": "28b1fa5c-bef5-43fd-b868-033026cc8327"}}, "docstore/data": {"850b8e5c-00f6-45dd-b53a-b0b0822659ef": {"__data__": {"id_": 
"850b8e5c-00f6-45dd-b53a-b0b0822659ef", "embedding": null, "metadata": {"page_label": "1", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "99ee2cf8-cc91-4fcb-9827-de91acce2234", "node_type": "4", "metadata": {"page_label": "1", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "ff43bfbc704f47450f5cb9a3f2f0feac80a812927c8d72d209f2e9e4c7df1bc7", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "PerksPlus Health and Wellness \nReimbursement Program for \nContoso Electronics Employees", "mimetype": "text/plain", "start_char_idx": 6, "end_char_idx": 93, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "28092f0d-bf08-419d-952d-9fe29b811b72": {"__data__": {"id_": "28092f0d-bf08-419d-952d-9fe29b811b72", "embedding": null, "metadata": {"page_label": "2", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", 
"file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "0d6f04f6-17d3-4533-84ad-9407b1b85bd8", "node_type": "4", "metadata": {"page_label": "2", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "1149993e3df2d09c7244bd7894c59b4f92c82f52cbfd44702f5cb4a1151bc05e", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "This document contains information generated using a language model (Azure OpenAI). The information \ncontained in this document is only for demonstration purposes and does not reflect the opinions or \nbeliefs of Microsoft. Microsoft makes no representations or warranties of any kind, express or implied, \nabout the completeness, accuracy, reliability, suitability or availability with respect to the information \ncontained in this document. 
\nAll rights reserved to Microsoft", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 476, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "3a5d49e1-389d-4b45-900c-c21d42263bf9": {"__data__": {"id_": "3a5d49e1-389d-4b45-900c-c21d42263bf9", "embedding": null, "metadata": {"page_label": "3", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "c3b03294-57c5-47b2-8d5b-d9b221fb5acd", "node_type": "4", "metadata": {"page_label": "3", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "8e9d3f78ac7f03794e0e78442d00ecd3dbddd523ebe4a0a0411855302bba238e", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "Overview \nIntroducing PerksPlus - the ultimate benefits program designed to support the health and wellness of \nemployees. With PerksPlus, employees have the opportunity to expense up to $1000 for fitness-related \nprograms, making it easier and more affordable to maintain a healthy lifestyle. PerksPlus is not only \ndesigned to support employees' physical health, but also their mental health. Regular exercise has been \nshown to reduce stress, improve mood, and enhance overall well-being. 
With PerksPlus, employees can \ninvest in their health and wellness, while enjoying the peace of mind that comes with knowing they are \ngetting the support they need to lead a healthy life. \nWhat is Covered? \nPerksPlus covers a wide range of fitness activities, including but not limited to: \n\u2022 Gym memberships \n\u2022 Personal training sessions \n\u2022 Yoga and Pilates classes \n\u2022 Fitness equipment purchases \n\u2022 Sports team fees \n\u2022 Health retreats and spas \n\u2022 Outdoor adventure activities (such as rock climbing, hiking, and kayaking) \n\u2022 Group fitness classes (such as dance, martial arts, and cycling) \n\u2022 Virtual fitness programs (such as online yoga and workout classes) \nIn addition to the wide range of fitness activities covered by PerksPlus, the program also covers a variety \nof lessons and experiences that promote health and wellness. Some of the lessons covered under \nPerksPlus include: \n\u2022 Skiing and snowboarding lessons \n\u2022 Scuba diving lessons \n\u2022 Surfing lessons \n\u2022 Horseback riding lessons \nThese lessons provide employees with the opportunity to try new things, challenge themselves, and \nimprove their physical skills. They are also a great way to relieve stress and have fun while staying active. \nWith PerksPlus, employees can choose from a variety of fitness programs to suit their individual needs \nand preferences. Whether you're looking to improve your physical fitness, reduce stress, or just have \nsome fun, PerksPlus has you covered. \nWhat is Not Covered? \nIn addition to the wide range of activities covered by PerksPlus, there is also a list of things that are not \ncovered under the program. 
These include but are not limited to: \n\u2022 Non-fitness related expenses \n\u2022 Medical treatments and procedures \n\u2022 Travel expenses (unless related to a fitness program)", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 2265, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}, "9bc4a096-80d0-4aa8-b730-4aee1abb64ac": {"__data__": {"id_": "9bc4a096-80d0-4aa8-b730-4aee1abb64ac", "embedding": null, "metadata": {"page_label": "4", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "28b1fa5c-bef5-43fd-b868-033026cc8327", "node_type": "4", "metadata": {"page_label": "4", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}, "hash": "36dd4d9ea19adf7bcd2201a10927447a665b2382f1ea4d7750ca32281961a9b8", "class_name": "RelatedNodeInfo"}}, "metadata_template": "{key}: {value}", "metadata_separator": "\n", "text": "\u2022 Food and supplements", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 22, "metadata_seperator": "\n", "text_template": "{metadata_str}\n\n{content}", "class_name": "TextNode"}, "__type__": "1"}}, "docstore/ref_doc_info": {"99ee2cf8-cc91-4fcb-9827-de91acce2234": {"node_ids": ["850b8e5c-00f6-45dd-b53a-b0b0822659ef"], "metadata": {"page_label": "1", "file_name": "PerksPlus.pdf", "file_path": 
"/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "0d6f04f6-17d3-4533-84ad-9407b1b85bd8": {"node_ids": ["28092f0d-bf08-419d-952d-9fe29b811b72"], "metadata": {"page_label": "2", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "c3b03294-57c5-47b2-8d5b-d9b221fb5acd": {"node_ids": ["3a5d49e1-389d-4b45-900c-c21d42263bf9"], "metadata": {"page_label": "3", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}, "28b1fa5c-bef5-43fd-b868-033026cc8327": {"node_ids": ["9bc4a096-80d0-4aa8-b730-4aee1abb64ac"], "metadata": {"page_label": "4", "file_name": "PerksPlus.pdf", "file_path": "/Users/pamelafox/python-ai-agents-demos/example_data/PerksPlus.pdf", "file_type": "application/pdf", "file_size": 115310, "creation_date": "2025-04-09", "last_modified_date": "2025-04-08"}}}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs2/graph_store.json: -------------------------------------------------------------------------------- 1 | {"graph_dict": {}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs2/image__vector_store.json: -------------------------------------------------------------------------------- 1 | {"embedding_dict": {}, "text_id_to_ref_doc_id": {}, "metadata_dict": {}} 2 | -------------------------------------------------------------------------------- /example_data/.llama_index_storage/docs2/index_store.json: 
-------------------------------------------------------------------------------- 1 | {"index_store/data": {"12ae9a29-5b74-413c-b285-58024a596b0b": {"__type__": "vector_store", "__data__": "{\"index_id\": \"12ae9a29-5b74-413c-b285-58024a596b0b\", \"summary\": null, \"nodes_dict\": {\"850b8e5c-00f6-45dd-b53a-b0b0822659ef\": \"850b8e5c-00f6-45dd-b53a-b0b0822659ef\", \"28092f0d-bf08-419d-952d-9fe29b811b72\": \"28092f0d-bf08-419d-952d-9fe29b811b72\", \"3a5d49e1-389d-4b45-900c-c21d42263bf9\": \"3a5d49e1-389d-4b45-900c-c21d42263bf9\", \"9bc4a096-80d0-4aa8-b730-4aee1abb64ac\": \"9bc4a096-80d0-4aa8-b730-4aee1abb64ac\"}, \"doc_id_dict\": {}, \"embeddings_dict\": {}}"}}} 2 | -------------------------------------------------------------------------------- /example_data/PerksPlus.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Azure-Samples/python-ai-agent-frameworks-demos/5ef89369e0d647c58f536f1d1072c4c2b9df2963/example_data/PerksPlus.pdf -------------------------------------------------------------------------------- /example_data/employee_handbook.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Azure-Samples/python-ai-agent-frameworks-demos/5ef89369e0d647c58f536f1d1072c4c2b9df2963/example_data/employee_handbook.pdf -------------------------------------------------------------------------------- /examples/autogen_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.messages import TextMessage 7 | from autogen_core import CancellationToken 8 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 9 | from dotenv import load_dotenv 10 | 11 | # Setup the client to use either Azure OpenAI or GitHub Models 12 | 
load_dotenv(override=True) 13 | API_HOST = os.getenv("API_HOST", "github") 14 | if API_HOST == "github": 15 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 16 | elif API_HOST == "azure": 17 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 18 | client = AzureOpenAIChatCompletionClient(model=os.environ["AZURE_OPENAI_CHAT_MODEL"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_ad_token_provider=token_provider) 19 | 20 | 21 | agent = AssistantAgent( 22 | "spanish_tutor", 23 | model_client=client, 24 | system_message="You are a Spanish tutor. Help the user learn Spanish. ONLY respond in Spanish.", 25 | ) 26 | 27 | 28 | async def main() -> None: 29 | response = await agent.on_messages( 30 | [TextMessage(content="hii how are you?", source="user")], 31 | cancellation_token=CancellationToken(), 32 | ) 33 | print(response.chat_message.content) 34 | 35 | 36 | if __name__ == "__main__": 37 | asyncio.run(main()) 38 | -------------------------------------------------------------------------------- /examples/autogen_magenticone.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.conditions import TextMentionTermination 7 | from autogen_agentchat.teams import MagenticOneGroupChat 8 | from autogen_agentchat.ui import Console 9 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 10 | from dotenv import load_dotenv 11 | 12 | # Setup the client to use either Azure OpenAI or GitHub Models 13 | load_dotenv(override=True) 14 | 
API_HOST = os.getenv("API_HOST", "github") 15 | 16 | 17 | if API_HOST == "github": 18 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 19 | elif API_HOST == "azure": 20 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 21 | client = AzureOpenAIChatCompletionClient( 22 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 23 | api_version=os.environ["AZURE_OPENAI_VERSION"], 24 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 25 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 26 | azure_ad_token_provider=token_provider, 27 | ) 28 | 29 | local_agent = AssistantAgent( 30 | "local_agent", 31 | model_client=client, 32 | description="A local assistant that can suggest local activities or places to visit.", 33 | system_message="You are a helpful assistant that can suggest authentic and interesting local activities or places to visit for a user and can utilize any context information provided.", 34 | ) 35 | 36 | language_agent = AssistantAgent( 37 | "language_agent", 38 | model_client=client, 39 | description="A helpful assistant that can provide language tips for a given destination.", 40 | system_message="You are a helpful assistant that can review travel plans, providing feedback on important/critical tips about how best to address language or communication challenges for the given destination. If the plan already includes language tips, you can mention that the plan is satisfactory, with rationale.", 41 | ) 42 | 43 | travel_summary_agent = AssistantAgent( 44 | "travel_summary_agent", 45 | model_client=client, 46 | description="A helpful assistant that can summarize the travel plan.", 47 | system_message="You are a helpful assistant that can take in all of the suggestions and advice from the other agents and provide a detailed final travel plan. 
You must ensure that the final plan is integrated and complete. YOUR FINAL RESPONSE MUST BE THE COMPLETE PLAN. When the plan is complete and all perspectives are integrated, you can respond with TERMINATE.", 48 | ) 49 | 50 | 51 | async def run_agents(): 52 | termination = TextMentionTermination("TERMINATE") 53 | group_chat = MagenticOneGroupChat( 54 | [local_agent, language_agent, travel_summary_agent], 55 | termination_condition=termination, 56 | model_client=client, 57 | ) 58 | await Console(group_chat.run_stream(task="Plan a 3 day trip to Egypt")) 59 | 60 | 61 | if __name__ == "__main__": 62 | asyncio.run(run_agents()) 63 | -------------------------------------------------------------------------------- /examples/autogen_swarm.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination 7 | from autogen_agentchat.messages import HandoffMessage 8 | from autogen_agentchat.teams import Swarm 9 | from autogen_agentchat.ui import Console 10 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 11 | from dotenv import load_dotenv 12 | 13 | # Setup the client to use either Azure OpenAI or GitHub Models 14 | load_dotenv(override=True) 15 | API_HOST = os.getenv("API_HOST", "github") 16 | 17 | 18 | if API_HOST == "github": 19 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 20 | elif API_HOST == "azure": 21 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 22 | client = AzureOpenAIChatCompletionClient( 23 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 24 | 
api_version=os.environ["AZURE_OPENAI_VERSION"], 25 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 26 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 27 | azure_ad_token_provider=token_provider, 28 | ) 29 | 30 | travel_agent = AssistantAgent( 31 | "travel_agent", 32 | model_client=client, 33 | handoffs=["flights_refunder", "user"], 34 | system_message="""You are a travel agent. 35 | The flights_refunder is in charge of refunding flights. 36 | If you need information from the user, you must first send your message, then you can handoff to the user. 37 | Use TERMINATE when the travel planning is complete.""", 38 | ) 39 | 40 | 41 | def refund_flight(flight_id: str) -> str: 42 | """Refund a flight""" 43 | return f"Flight {flight_id} refunded" 44 | 45 | 46 | flights_refunder = AssistantAgent( 47 | "flights_refunder", 48 | model_client=client, 49 | handoffs=["travel_agent", "user"], 50 | tools=[refund_flight], 51 | system_message="""You are an agent specialized in refunding flights. 52 | You only need flight reference numbers to refund a flight. 53 | You have the ability to refund a flight using the refund_flight tool. 54 | If you need information from the user, you must first send your message, then you can handoff to the user. 
55 | When the transaction is complete, handoff to the travel agent to finalize.""", 56 | ) 57 | 58 | 59 | async def run_team_stream(task: str) -> None: 60 | termination = HandoffTermination(target="user") | TextMentionTermination("TERMINATE") 61 | team = Swarm([travel_agent, flights_refunder], termination_condition=termination) 62 | 63 | task_result = await Console(team.run_stream(task=task)) 64 | last_message = task_result.messages[-1] 65 | 66 | while isinstance(last_message, HandoffMessage) and last_message.target == "user": 67 | user_message = input("User: ") 68 | 69 | task_result = await Console(team.run_stream(task=HandoffMessage(source="user", target=last_message.source, content=user_message))) 70 | last_message = task_result.messages[-1] 71 | 72 | 73 | if __name__ == "__main__": 74 | asyncio.run(run_team_stream("I need to refund my flight.")) 75 | -------------------------------------------------------------------------------- /examples/autogen_tools.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | import os 4 | import random 5 | from datetime import datetime 6 | 7 | import azure.identity 8 | from autogen_agentchat.agents import AssistantAgent 9 | from autogen_agentchat.conditions import TextMessageTermination 10 | from autogen_agentchat.teams import RoundRobinGroupChat 11 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 12 | from dotenv import load_dotenv 13 | from rich.logging import RichHandler 14 | 15 | # Setup logging with rich 16 | logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]) 17 | logger = logging.getLogger("weekend_planner") 18 | 19 | 20 | # Setup the client to use either Azure OpenAI or GitHub Models 21 | load_dotenv(override=True) 22 | API_HOST = os.getenv("API_HOST", "github") 23 | 24 | 25 | if API_HOST == "github": 26 | client = 
OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com", parallel_tool_calls=False) 27 | elif API_HOST == "azure": 28 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 29 | client = AzureOpenAIChatCompletionClient(model=os.environ["AZURE_OPENAI_CHAT_MODEL"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_ad_token_provider=token_provider, parallel_tool_calls=False) 30 | 31 | 32 | def get_weather(city: str) -> dict: 33 | logger.info(f"Getting weather for {city}") 34 | if random.random() < 0.05: 35 | return { 36 | "city": city, 37 | "temperature": 72, 38 | "description": "Sunny", 39 | } 40 | else: 41 | return { 42 | "city": city, 43 | "temperature": 60, 44 | "description": "Rainy", 45 | } 46 | 47 | 48 | def get_activities(city: str, date: str) -> list: 49 | logger.info(f"Getting activities for {city} on {date}") 50 | return [ 51 | {"name": "Hiking", "location": city}, 52 | {"name": "Beach", "location": city}, 53 | {"name": "Museum", "location": city}, 54 | ] 55 | 56 | 57 | def get_current_date() -> str: 58 | logger.info("Getting current date") 59 | return datetime.now().strftime("%Y-%m-%d") 60 | 61 | 62 | agent = AssistantAgent( 63 | "weekend_planner", 64 | model_client=client, 65 | tools=[get_weather, get_activities, get_current_date], 66 | system_message="You help users plan their weekends and choose the best activities for the given weather. If an activity would be unpleasant in the weather, don't suggest it. 
Include the date of the weekend in your response.", 67 | ) 68 | 69 | 70 | async def main() -> None: 71 | team = RoundRobinGroupChat([agent], termination_condition=TextMessageTermination(agent.name)) 72 | 73 | async for task_result in team.run_stream(task="what can I do for funzies this weekend in Seattle?"): 74 | logger.debug("%s: %s", type(task_result).__name__, task_result) 75 | print(task_result.messages[-1].content) 76 | 77 | 78 | if __name__ == "__main__": 79 | logger.setLevel(logging.INFO) 80 | asyncio.run(main()) 81 | -------------------------------------------------------------------------------- /examples/azureai_githubmodels.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from azure.ai.inference import ChatCompletionsClient 4 | from azure.ai.inference.models import SystemMessage, UserMessage 5 | from azure.core.credentials import AzureKeyCredential 6 | 7 | client = ChatCompletionsClient( 8 | endpoint="https://models.inference.ai.azure.com", 9 | credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]), 10 | ) 11 | 12 | response = client.complete( 13 | messages=[ 14 | SystemMessage(content="You are a helpful assistant."), 15 | UserMessage(content="What is the capital of France?"), 16 | ], 17 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 18 | ) 19 | print(response.choices[0].message.content) 20 | -------------------------------------------------------------------------------- /examples/langgraph_agent.py: -------------------------------------------------------------------------------- 1 | # https://github.com/JRAlexander/IntroToAgents1-Oxford/blob/main/intro-langgraph/time-travel.ipynb 2 | 3 | import os 4 | 5 | import azure.identity 6 | from dotenv import load_dotenv 7 | from langchain_core.messages import HumanMessage 8 | from langchain_core.tools import tool 9 | from langchain_openai import AzureChatOpenAI, ChatOpenAI 10 | from langgraph.checkpoint.memory import MemorySaver 11 | from 
langgraph.graph import END, START, MessagesState, StateGraph 12 | from langgraph.prebuilt import ToolNode 13 | 14 | 15 | @tool 16 | def play_song_on_spotify(song: str): 17 | """Play a song on Spotify""" 18 | # Call the spotify API ... 19 | return f"Successfully played {song} on Spotify!" 20 | 21 | 22 | @tool 23 | def play_song_on_apple(song: str): 24 | """Play a song on Apple Music""" 25 | # Call the apple music API ... 26 | return f"Successfully played {song} on Apple Music!" 27 | 28 | 29 | tools = [play_song_on_apple, play_song_on_spotify] 30 | tool_node = ToolNode(tools) 31 | 32 | # Setup the client to use either Azure OpenAI or GitHub Models 33 | load_dotenv(override=True) 34 | API_HOST = os.getenv("API_HOST", "github") 35 | 36 | if API_HOST == "azure": 37 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 38 | model = AzureChatOpenAI( 39 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 40 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 41 | openai_api_version=os.environ["AZURE_OPENAI_VERSION"], 42 | azure_ad_token_provider=token_provider, 43 | ) 44 | else: 45 | model = ChatOpenAI(model=os.getenv("GITHUB_MODEL", "gpt-4o"), base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 46 | 47 | model = model.bind_tools(tools, parallel_tool_calls=False) 48 | 49 | # Define nodes and conditional edges 50 | 51 | 52 | # Define the function that determines whether to continue or not 53 | def should_continue(state): 54 | messages = state["messages"] 55 | last_message = messages[-1] 56 | # If there is no function call, then we finish 57 | if not last_message.tool_calls: 58 | return "end" 59 | # Otherwise if there is, we continue 60 | else: 61 | return "continue" 62 | 63 | 64 | # Define the function that calls the model 65 | def call_model(state): 66 | messages = state["messages"] 67 | response = model.invoke(messages) 68 | # We 
return a list, because this will get added to the existing list 69 | return {"messages": [response]} 70 | 71 | 72 | # Define a new graph 73 | workflow = StateGraph(MessagesState) 74 | 75 | # Define the two nodes we will cycle between 76 | workflow.add_node("agent", call_model) 77 | workflow.add_node("action", tool_node) 78 | 79 | # Set the entrypoint as `agent` 80 | # This means that this node is the first one called 81 | workflow.add_edge(START, "agent") 82 | 83 | # We now add a conditional edge 84 | workflow.add_conditional_edges( 85 | # First, we define the start node. We use `agent`. 86 | # This means these are the edges taken after the `agent` node is called. 87 | "agent", 88 | # Next, we pass in the function that will determine which node is called next. 89 | should_continue, 90 | # Finally we pass in a mapping. 91 | # The keys are strings, and the values are other nodes. 92 | # END is a special node marking that the graph should finish. 93 | # What will happen is we will call `should_continue`, and then the output of that 94 | # will be matched against the keys in this mapping. 95 | # Based on which one it matches, that node will then be called. 96 | { 97 | # If `tools`, then we call the tool node. 98 | "continue": "action", 99 | # Otherwise we finish. 100 | "end": END, 101 | }, 102 | ) 103 | 104 | # We now add a normal edge from `tools` to `agent`. 105 | # This means that after `tools` is called, `agent` node is called next. 106 | workflow.add_edge("action", "agent") 107 | 108 | # Set up memory 109 | memory = MemorySaver() 110 | 111 | # Finally, we compile it! 
112 | # This compiles it into a LangChain Runnable, 113 | # meaning you can use it as you would any other runnable 114 | 115 | # Passing `interrupt_before=["action"]` to `compile` would add a breakpoint 116 | # before the `action` node is called; this example compiles without one 117 | app = workflow.compile(checkpointer=memory) 118 | 119 | config = {"configurable": {"thread_id": "1"}} 120 | input_message = HumanMessage(content="Can you play Taylor Swift's most popular song?") 121 | for event in app.stream({"messages": [input_message]}, config, stream_mode="values"): 122 | event["messages"][-1].pretty_print() 123 | -------------------------------------------------------------------------------- /examples/llamaindex.py: -------------------------------------------------------------------------------- 1 | # https://docs.llamaindex.ai/en/stable/examples/agent/react_agent_with_query_engine/ 2 | 3 | import os 4 | from pathlib import Path 5 | 6 | import azure.identity 7 | from dotenv import load_dotenv 8 | from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage 9 | from llama_index.core.agent.workflow import AgentStream, ReActAgent 10 | from llama_index.core.tools import QueryEngineTool 11 | from llama_index.core.workflow import Context 12 | from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding 13 | from llama_index.embeddings.openai import OpenAIEmbedding 14 | from llama_index.llms.azure_openai import AzureOpenAI 15 | from llama_index.llms.openai_like import OpenAILike 16 | 17 | # Setup the client to use either Azure OpenAI or GitHub Models 18 | load_dotenv(override=True) 19 | API_HOST = os.getenv("API_HOST", "github") 20 | 21 | if API_HOST == "azure": 22 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 23 | Settings.llm = AzureOpenAI( 24 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 25 | 
deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 26 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 27 | api_version=os.environ["AZURE_OPENAI_VERSION"], 28 | use_azure_ad=True, 29 | azure_ad_token_provider=token_provider, 30 | ) 31 | 32 | Settings.embed_model = AzureOpenAIEmbedding( 33 | model=os.environ["AZURE_OPENAI_EMBEDDING_MODEL"], 34 | deployment_name=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"], 35 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 36 | api_version=os.environ["AZURE_OPENAI_VERSION"], 37 | use_azure_ad=True, 38 | azure_ad_token_provider=token_provider, 39 | ) 40 | else: 41 | Settings.llm = OpenAILike( 42 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 43 | api_base="https://models.inference.ai.azure.com", 44 | api_key=os.environ["GITHUB_TOKEN"], 45 | is_chat_model=True, 46 | ) 47 | 48 | Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small", api_base="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 49 | 50 | # Try to load the index from the same directory it is persisted to below 51 | try: 52 | storage_context = StorageContext.from_defaults(persist_dir="./example_data/.llama_index_storage/docs1") 53 | index1 = load_index_from_storage(storage_context) 54 | 55 | storage_context = StorageContext.from_defaults(persist_dir="./example_data/.llama_index_storage/docs2") 56 | index2 = load_index_from_storage(storage_context) 57 | 58 | index_loaded = True 59 | except FileNotFoundError: 60 | index_loaded = False 61 | 62 | if not index_loaded: 63 | root_dir = Path(__file__).parent.parent 64 | 65 | docs1 = SimpleDirectoryReader(input_files=[root_dir / "example_data/employee_handbook.pdf"]).load_data() 66 | docs2 = SimpleDirectoryReader(input_files=[root_dir / "example_data/PerksPlus.pdf"]).load_data() 67 | index1 = VectorStoreIndex.from_documents(docs1) 68 | index2 = VectorStoreIndex.from_documents(docs2) 69 | 70 | index1.storage_context.persist(persist_dir=root_dir / "example_data/.llama_index_storage/docs1") 71 | index2.storage_context.persist(persist_dir=root_dir / 
"example_data/.llama_index_storage/docs2") 72 | 73 | engine1 = index1.as_query_engine(similarity_top_k=3) 74 | engine2 = index2.as_query_engine(similarity_top_k=3) 75 | 76 | query_engine_tools = [ 77 | QueryEngineTool.from_defaults( 78 | query_engine=engine1, 79 | name="engine1", 80 | description=("Provides information about Contoso employee handbook - covering basic job roles, policies, workplace safety, HR, etc."), 81 | ), 82 | QueryEngineTool.from_defaults( 83 | query_engine=engine2, 84 | name="engine2", 85 | description=("Provides information about Contoso PerksPlus program, including what can be reimbursed. "), 86 | ), 87 | ] 88 | 89 | 90 | async def main(): 91 | agent = ReActAgent(tools=query_engine_tools, llm=Settings.llm) 92 | ctx = Context(agent) 93 | 94 | handler = agent.run("can i get my gardening tools reimbursed?", ctx=ctx) 95 | 96 | async for ev in handler.stream_events(): 97 | if isinstance(ev, AgentStream): 98 | print(f"{ev.delta}", end="", flush=True) 99 | 100 | response = await handler 101 | print(str(response)) 102 | 103 | 104 | if __name__ == "__main__": 105 | import asyncio 106 | 107 | asyncio.run(main()) 108 | -------------------------------------------------------------------------------- /examples/openai_agents_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | import openai 6 | from agents import Agent, OpenAIChatCompletionsModel, Runner, set_tracing_disabled 7 | from dotenv import load_dotenv 8 | 9 | # Disable tracing since we're not connected to a supported tracing provider 10 | set_tracing_disabled(disabled=True) 11 | 12 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 13 | load_dotenv(override=True) 14 | API_HOST = os.getenv("API_HOST", "github") 15 | if API_HOST == "github": 16 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 17 | MODEL_NAME 
= os.getenv("GITHUB_MODEL", "gpt-4o") 18 | elif API_HOST == "azure": 19 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 20 | client = openai.AsyncAzureOpenAI( 21 | api_version=os.environ["AZURE_OPENAI_VERSION"], 22 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 23 | azure_ad_token_provider=token_provider, 24 | ) 25 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 26 | 27 | 28 | agent = Agent( 29 | name="Spanish tutor", 30 | instructions="You are a Spanish tutor. Help the user learn Spanish. ONLY respond in Spanish.", 31 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 32 | ) 33 | 34 | 35 | async def main(): 36 | result = await Runner.run(agent, input="hi how are you?") 37 | print(result.final_output) 38 | 39 | 40 | if __name__ == "__main__": 41 | asyncio.run(main()) 42 | -------------------------------------------------------------------------------- /examples/openai_agents_handoffs.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | import openai 6 | from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled 7 | from agents.extensions.visualization import draw_graph 8 | from dotenv import load_dotenv 9 | 10 | # Disable tracing since we're not using OpenAI.com models 11 | set_tracing_disabled(disabled=True) 12 | 13 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 14 | load_dotenv(override=True) 15 | API_HOST = os.getenv("API_HOST", "github") 16 | 17 | if API_HOST == "github": 18 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 19 | MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 20 | elif API_HOST == "azure": 21 | token_provider = 
azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 22 | client = openai.AsyncAzureOpenAI( 23 | api_version=os.environ["AZURE_OPENAI_VERSION"], 24 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 25 | azure_ad_token_provider=token_provider, 26 | ) 27 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 28 | 29 | 30 | @function_tool 31 | def get_weather(city: str) -> str: 32 | return { 33 | "city": city, 34 | "temperature": 72, 35 | "description": "Sunny", 36 | } 37 | 38 | 39 | agent = Agent( 40 | name="Weather agent", 41 | instructions="You can only provide weather information.", 42 | tools=[get_weather], 43 | ) 44 | 45 | spanish_agent = Agent( 46 | name="Spanish agent", 47 | instructions="You only speak Spanish.", 48 | tools=[get_weather], 49 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 50 | ) 51 | 52 | english_agent = Agent( 53 | name="English agent", 54 | instructions="You only speak English", 55 | tools=[get_weather], 56 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 57 | ) 58 | 59 | triage_agent = Agent( 60 | name="Triage agent", 61 | instructions="Handoff to the appropriate agent based on the language of the request.", 62 | handoffs=[spanish_agent, english_agent], 63 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 64 | ) 65 | 66 | 67 | async def main(): 68 | result = await Runner.run(triage_agent, input="Hola, ¿cómo estás? 
¿Puedes darme el clima para San Francisco CA?") 69 | print(result.final_output) 70 | 71 | 72 | if __name__ == "__main__": 73 | asyncio.run(main()) 74 | -------------------------------------------------------------------------------- /examples/openai_agents_tools.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | import os 4 | import random 5 | from datetime import datetime 6 | 7 | import azure.identity 8 | import openai 9 | from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled 10 | from dotenv import load_dotenv 11 | from rich.logging import RichHandler 12 | 13 | # Setup logging with rich 14 | logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]) 15 | logger = logging.getLogger("weekend_planner") 16 | 17 | # Disable tracing since we're not connected to a supported tracing provider 18 | set_tracing_disabled(disabled=True) 19 | 20 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 21 | load_dotenv(override=True) 22 | API_HOST = os.getenv("API_HOST", "github") 23 | if API_HOST == "github": 24 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 25 | MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 26 | elif API_HOST == "azure": 27 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 28 | client = openai.AsyncAzureOpenAI( 29 | api_version=os.environ["AZURE_OPENAI_VERSION"], 30 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 31 | azure_ad_token_provider=token_provider, 32 | ) 33 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 34 | 35 | 36 | @function_tool 37 | def get_weather(city: str) -> dict: 38 | logger.info(f"Getting weather for {city}") 39 | if random.random() < 0.05: 40 | return { 41 | "city": city, 42 
| "temperature": 72, 43 | "description": "Sunny", 44 | } 45 | else: 46 | return { 47 | "city": city, 48 | "temperature": 60, 49 | "description": "Rainy", 50 | } 51 | 52 | 53 | @function_tool 54 | def get_activities(city: str, date: str) -> list: 55 | logger.info(f"Getting activities for {city} on {date}") 56 | return [ 57 | {"name": "Hiking", "location": city}, 58 | {"name": "Beach", "location": city}, 59 | {"name": "Museum", "location": city}, 60 | ] 61 | 62 | 63 | @function_tool 64 | def get_current_date() -> str: 65 | """Gets the current date and returns as a string in format YYYY-MM-DD.""" 66 | logger.info("Getting current date") 67 | return datetime.now().strftime("%Y-%m-%d") 68 | 69 | 70 | agent = Agent( 71 | name="Weekend Planner", 72 | instructions="You help users plan their weekends and choose the best activities for the given weather. If an activity would be unpleasant in the weather, don't suggest it. Include the date of the weekend in your response.", 73 | tools=[get_weather, get_activities, get_current_date], 74 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 75 | ) 76 | 77 | 78 | async def main(): 79 | result = await Runner.run(agent, input="hii what can I do this weekend in Seattle?") 80 | print(result.final_output) 81 | 82 | 83 | if __name__ == "__main__": 84 | logger.setLevel(logging.INFO) 85 | asyncio.run(main()) 86 | -------------------------------------------------------------------------------- /examples/openai_functioncalling.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import azure.identity 4 | import openai 5 | from dotenv import load_dotenv 6 | 7 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 8 | load_dotenv(override=True) 9 | API_HOST = os.getenv("API_HOST", "github") 10 | 11 | if API_HOST == "github": 12 | client = openai.OpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 13 | 
MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 14 | elif API_HOST == "azure": 15 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 16 | client = openai.AzureOpenAI( 17 | api_version=os.environ["AZURE_OPENAI_VERSION"], 18 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 19 | azure_ad_token_provider=token_provider, 20 | ) 21 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 22 | 23 | tools = [ 24 | { 25 | "type": "function", 26 | "function": { 27 | "name": "lookup_weather", 28 | "description": "Lookup the weather for a given city name or zip code.", 29 | "parameters": { 30 | "type": "object", 31 | "properties": { 32 | "city_name": { 33 | "type": "string", 34 | "description": "The city name", 35 | }, 36 | "zip_code": { 37 | "type": "string", 38 | "description": "The zip code", 39 | }, 40 | }, 41 | "additionalProperties": False, 42 | }, 43 | }, 44 | }, 45 | { 46 | "type": "function", 47 | "function": { 48 | "name": "lookup_movies", 49 | "description": "Lookup movies playing in a given city name or zip code.", 50 | "parameters": { 51 | "type": "object", 52 | "properties": { 53 | "city_name": { 54 | "type": "string", 55 | "description": "The city name", 56 | }, 57 | "zip_code": { 58 | "type": "string", 59 | "description": "The zip code", 60 | }, 61 | }, 62 | "additionalProperties": False, 63 | }, 64 | }, 65 | }, 66 | ] 67 | 68 | response = client.chat.completions.create( 69 | model=MODEL_NAME, 70 | messages=[ 71 | {"role": "system", "content": "You are a tourism chatbot."}, 72 | {"role": "user", "content": "is it rainy enough in sydney to watch movies and which ones are on?"}, 73 | ], 74 | tools=tools, 75 | tool_choice="auto", 76 | ) 77 | 78 | print(f"Response from {MODEL_NAME} on {API_HOST}: \n") 79 | for message in response.choices[0].message.tool_calls: 80 | print(message.function.name) 81 | print(message.function.arguments) 82 | 
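Note: openai_functioncalling.py stops after printing the tool calls the model requested; the second half of the function-calling loop (running each requested function locally and sending the JSON result back in a `role: "tool"` message) is not shown. A minimal sketch of that dispatch step might look like the following — the handler implementations here are hypothetical stand-ins, not part of this repo:

```python
import json

# Hypothetical local implementations of the two tools declared above.
def lookup_weather(city_name=None, zip_code=None):
    return {"location": city_name or zip_code, "forecast": "rainy"}

def lookup_movies(city_name=None, zip_code=None):
    return {"location": city_name or zip_code, "movies": ["Example Film"]}

HANDLERS = {"lookup_weather": lookup_weather, "lookup_movies": lookup_movies}

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Parse the JSON arguments string the model produced and run the matching handler."""
    args = json.loads(arguments)
    result = HANDLERS[name](**args)
    # The serialized result would go back to the model as the "content" of a
    # {"role": "tool", "tool_call_id": ...} message in a second completions call.
    return json.dumps(result)

# Simulated call, shaped like message.function.name / message.function.arguments above:
print(dispatch_tool_call("lookup_weather", '{"city_name": "Sydney"}'))
```

This mirrors the shape of `response.choices[0].message.tool_calls` that the script prints, so the sketch can be wired into the existing loop by calling `dispatch_tool_call(message.function.name, message.function.arguments)`.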
-------------------------------------------------------------------------------- /examples/openai_githubmodels.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from openai import OpenAI 4 | 5 | client = OpenAI( 6 | base_url="https://models.inference.ai.azure.com", 7 | api_key=os.environ["GITHUB_TOKEN"], 8 | ) 9 | response = client.chat.completions.create( 10 | messages=[ 11 | { 12 | "role": "system", 13 | "content": "You are a helpful assistant.", 14 | }, 15 | { 16 | "role": "user", 17 | "content": "What is the capital of France?", 18 | }, 19 | ], 20 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 21 | ) 22 | print(response.choices[0].message.content) 23 | -------------------------------------------------------------------------------- /examples/pydanticai_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from dotenv import load_dotenv 6 | from openai import AsyncAzureOpenAI, AsyncOpenAI 7 | from pydantic_ai import Agent 8 | from pydantic_ai.models.openai import OpenAIModel 9 | from pydantic_ai.providers.openai import OpenAIProvider 10 | 11 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 12 | load_dotenv(override=True) 13 | API_HOST = os.getenv("API_HOST", "github") 14 | 15 | if API_HOST == "github": 16 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 17 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 18 | elif API_HOST == "azure": 19 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 20 | client = AsyncAzureOpenAI( 21 | api_version=os.environ["AZURE_OPENAI_VERSION"], 22 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 23 | azure_ad_token_provider=token_provider, 
24 | ) 25 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 26 | 27 | agent: Agent[None, str] = Agent( 28 | model, 29 | system_prompt="You are a Spanish tutor. Help the user learn Spanish. ONLY respond in Spanish.", 30 | result_type=str, 31 | ) 32 | 33 | 34 | async def main(): 35 | result = await agent.run("oh hey how are you?") 36 | print(result.data) 37 | 38 | 39 | if __name__ == "__main__": 40 | asyncio.run(main()) 41 | -------------------------------------------------------------------------------- /examples/pydanticai_graph.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations as _annotations 2 | 3 | import asyncio 4 | import os 5 | from dataclasses import dataclass, field 6 | 7 | import azure.identity 8 | from dotenv import load_dotenv 9 | from pydantic import BaseModel 10 | from openai import AsyncAzureOpenAI, AsyncOpenAI 11 | from pydantic_ai import Agent 12 | from pydantic_ai.format_as_xml import format_as_xml 13 | from pydantic_ai.messages import ModelMessage 14 | from pydantic_ai.models.openai import OpenAIModel 15 | from pydantic_ai.providers.openai import OpenAIProvider 16 | from pydantic_graph import ( 17 | BaseNode, 18 | End, 19 | Graph, 20 | GraphRunContext, 21 | ) 22 | 23 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 24 | load_dotenv(override=True) 25 | API_HOST = os.getenv("API_HOST", "github") 26 | 27 | if API_HOST == "github": 28 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 29 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 30 | elif API_HOST == "azure": 31 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 32 | client = AsyncAzureOpenAI( 33 | 
api_version=os.environ["AZURE_OPENAI_VERSION"], 34 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 35 | azure_ad_token_provider=token_provider, 36 | ) 37 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 38 | 39 | 40 | """ 41 | Agent definitions 42 | """ 43 | 44 | ask_agent = Agent(model, result_type=str, instrument=True) 45 | 46 | 47 | class EvaluationResult(BaseModel, use_attribute_docstrings=True): 48 | correct: bool 49 | """Whether the answer is correct.""" 50 | comment: str 51 | """Comment on the answer, reprimand the user if the answer is wrong.""" 52 | 53 | 54 | evaluate_agent = Agent( 55 | model, 56 | result_type=EvaluationResult, 57 | system_prompt="Given a question and answer, evaluate if the answer is correct.", 58 | ) 59 | 60 | """ 61 | Graph state and nodes 62 | """ 63 | 64 | 65 | @dataclass 66 | class QuestionState: 67 | question: str | None = None 68 | ask_agent_messages: list[ModelMessage] = field(default_factory=list) 69 | evaluate_agent_messages: list[ModelMessage] = field(default_factory=list) 70 | 71 | 72 | @dataclass 73 | class Ask(BaseNode[QuestionState]): 74 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Answer: 75 | result = await ask_agent.run( 76 | "Ask a simple question with a single correct answer.", 77 | message_history=ctx.state.ask_agent_messages, 78 | ) 79 | ctx.state.ask_agent_messages += result.all_messages() 80 | ctx.state.question = result.data 81 | return Answer(result.data) 82 | 83 | 84 | @dataclass 85 | class Answer(BaseNode[QuestionState]): 86 | question: str 87 | 88 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Evaluate: 89 | answer = input(f"{self.question}: ") 90 | return Evaluate(answer) 91 | 92 | 93 | @dataclass 94 | class Evaluate(BaseNode[QuestionState, None, str]): 95 | answer: str 96 | 97 | async def run( 98 | self, 99 | ctx: GraphRunContext[QuestionState], 100 | ) -> End[str] | Reprimand: 101 | assert 
ctx.state.question is not None 102 | result = await evaluate_agent.run( 103 | format_as_xml({"question": ctx.state.question, "answer": self.answer}), 104 | message_history=ctx.state.evaluate_agent_messages, 105 | ) 106 | ctx.state.evaluate_agent_messages += result.all_messages() 107 | if result.data.correct: 108 | return End(result.data.comment) 109 | else: 110 | return Reprimand(result.data.comment) 111 | 112 | 113 | @dataclass 114 | class Reprimand(BaseNode[QuestionState]): 115 | comment: str 116 | 117 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Ask: 118 | print(f"Comment: {self.comment}") 119 | ctx.state.question = None 120 | return Ask() 121 | 122 | 123 | question_graph = Graph(nodes=(Ask, Answer, Evaluate, Reprimand), state_type=QuestionState) 124 | 125 | 126 | async def main(): 127 | state = QuestionState() 128 | node = Ask() 129 | end = await question_graph.run(node, state=state) 130 | print("END:", end.output) 131 | 132 | 133 | if __name__ == "__main__": 134 | asyncio.run(main()) 135 | -------------------------------------------------------------------------------- /examples/pydanticai_multiagent.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | from typing import Literal 4 | 5 | import azure.identity 6 | from dotenv import load_dotenv 7 | from openai import AsyncAzureOpenAI, AsyncOpenAI 8 | from pydantic import BaseModel, Field 9 | from pydantic_ai import Agent, RunContext 10 | from pydantic_ai.messages import ModelMessage 11 | from pydantic_ai.models.openai import OpenAIModel 12 | from pydantic_ai.providers.openai import OpenAIProvider 13 | from rich.prompt import Prompt 14 | 15 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 16 | load_dotenv(override=True) 17 | API_HOST = os.getenv("API_HOST", "github") 18 | 19 | if API_HOST == "github": 20 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], 
base_url="https://models.inference.ai.azure.com") 21 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 22 | elif API_HOST == "azure": 23 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 24 | client = AsyncAzureOpenAI( 25 | api_version=os.environ["AZURE_OPENAI_VERSION"], 26 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 27 | azure_ad_token_provider=token_provider, 28 | ) 29 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 30 | 31 | 32 | class Flight(BaseModel): 33 | flight_number: str 34 | 35 | 36 | class Failed(BaseModel): 37 | """Unable to find a satisfactory choice.""" 38 | 39 | 40 | flight_search_agent = Agent( 41 | model, 42 | result_type=Flight | Failed, 43 | system_prompt=('Use the "flight_search" tool to find a flight from the given origin to the given destination.'), 44 | ) 45 | 46 | 47 | @flight_search_agent.tool 48 | async def flight_search(ctx: RunContext[None], origin: str, destination: str) -> Flight | None: 49 | # in reality, this would call a flight search API or 50 | # use a browser to scrape a flight search website 51 | return Flight(flight_number="AK456") 52 | 53 | 54 | async def find_flight() -> Flight | None: 55 | message_history: list[ModelMessage] | None = None 56 | for _ in range(3): 57 | prompt = Prompt.ask( 58 | "Where would you like to fly from and to?", 59 | ) 60 | result = await flight_search_agent.run(prompt, message_history=message_history) 61 | if isinstance(result.data, Flight): 62 | return result.data 63 | else: 64 | message_history = result.all_messages(result_tool_return_content="Please try again.") 65 | 66 | 67 | class Seat(BaseModel): 68 | row: int = Field(ge=1, le=30) 69 | seat: Literal["A", "B", "C", "D", "E", "F"] 70 | 71 | 72 | # This agent is responsible for extracting the user's seat selection 73 | 
seat_preference_agent = Agent( 74 | model, 75 | result_type=Seat | Failed, 76 | system_prompt=("Extract the user's seat preference. " "Seats A and F are window seats. " "Row 1 is the front row and has extra leg room. " "Rows 14, and 20 also have extra leg room. "), 77 | ) 78 | 79 | 80 | async def find_seat() -> Seat: 81 | message_history: list[ModelMessage] | None = None 82 | while True: 83 | answer = Prompt.ask("What seat would you like?") 84 | 85 | result = await seat_preference_agent.run(answer, message_history=message_history) 86 | if isinstance(result.data, Seat): 87 | return result.data 88 | else: 89 | print("Could not understand seat preference. Please try again.") 90 | message_history = result.all_messages() 91 | 92 | 93 | async def main(): 94 | opt_flight_details = await find_flight() 95 | if opt_flight_details is not None: 96 | print(f"Flight found: {opt_flight_details.flight_number}") 97 | seat_preference = await find_seat() 98 | print(f"Seat preference: {seat_preference}") 99 | 100 | 101 | if __name__ == "__main__": 102 | asyncio.run(main()) 103 | -------------------------------------------------------------------------------- /examples/pydanticai_tools.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | import os 4 | import random 5 | from datetime import datetime 6 | 7 | import azure.identity 8 | from dotenv import load_dotenv 9 | from openai import AsyncAzureOpenAI, AsyncOpenAI 10 | from pydantic_ai import Agent 11 | from pydantic_ai.models.openai import OpenAIModel 12 | from pydantic_ai.providers.openai import OpenAIProvider 13 | from rich.logging import RichHandler 14 | 15 | # Setup logging with rich 16 | logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]) 17 | logger = logging.getLogger("weekend_planner") 18 | 19 | 20 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 21 | load_dotenv(override=True) 
22 | API_HOST = os.getenv("API_HOST", "github") 23 | 24 | if API_HOST == "github": 25 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 26 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 27 | elif API_HOST == "azure": 28 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 29 | client = AsyncAzureOpenAI( 30 | api_version=os.environ["AZURE_OPENAI_VERSION"], 31 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 32 | azure_ad_token_provider=token_provider, 33 | ) 34 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 35 | 36 | 37 | def get_weather(city: str) -> dict: 38 | logger.info(f"Getting weather for {city}") 39 | if random.random() < 0.05: 40 | return { 41 | "city": city, 42 | "temperature": 72, 43 | "description": "Sunny", 44 | } 45 | else: 46 | return { 47 | "city": city, 48 | "temperature": 60, 49 | "description": "Rainy", 50 | } 51 | 52 | 53 | def get_activities(city: str, date: str) -> list: 54 | logger.info(f"Getting activities for {city} on {date}") 55 | return [ 56 | {"name": "Hiking", "location": city}, 57 | {"name": "Beach", "location": city}, 58 | {"name": "Museum", "location": city}, 59 | ] 60 | 61 | def get_current_date() -> str: 62 | logger.info("Getting current date") 63 | return datetime.now().strftime("%Y-%m-%d") 64 | 65 | 66 | agent = Agent( 67 | model, 68 | system_prompt="You help users plan their weekends and choose the best activities for the given weather. If an activity would be unpleasant in the weather, don't suggest it. 
Include the date of the weekend in your response.", 69 | tools=[get_weather, get_activities, get_current_date], 70 | ) 71 | 72 | async def main(): 73 | result = await agent.run("what can I do for funzies this weekend in Seattle?") 74 | print(result.output) 75 | 76 | 77 | if __name__ == "__main__": 78 | logger.setLevel(logging.INFO) 79 | asyncio.run(main()) 80 | -------------------------------------------------------------------------------- /examples/semantickernel_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from dotenv import load_dotenv 6 | from openai import AsyncAzureOpenAI, AsyncOpenAI 7 | from semantic_kernel.agents import ChatCompletionAgent 8 | from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion 9 | 10 | load_dotenv(override=True) 11 | API_HOST = os.getenv("API_HOST", "github") 12 | if API_HOST == "azure": 13 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 14 | chat_client = AsyncAzureOpenAI( 15 | api_version=os.environ["AZURE_OPENAI_VERSION"], 16 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 17 | azure_ad_token_provider=token_provider, 18 | ) 19 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.environ["AZURE_OPENAI_CHAT_MODEL"], async_client=chat_client) 20 | else: 21 | chat_client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 22 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), async_client=chat_client) 23 | 24 | agent = ChatCompletionAgent(name="spanish_tutor", instructions="You are a Spanish tutor. Help the user learn Spanish. 
ONLY respond in Spanish.", service=chat_completion_service) 25 | 26 | 27 | async def main(): 28 | response = await agent.get_response(messages="oh hey how are you?") 29 | print(response.content) 30 | 31 | 32 | if __name__ == "__main__": 33 | asyncio.run(main()) 34 | -------------------------------------------------------------------------------- /examples/semantickernel_groupchat.py: -------------------------------------------------------------------------------- 1 | """ 2 | The following sample demonstrates how to create a simple, 3 | agent group chat that utilizes a Reviewer Chat Completion 4 | Agent along with a Writer Chat Completion Agent to 5 | complete a user's task. 6 | 7 | From full tutorial here: 8 | https://learn.microsoft.com/semantic-kernel/frameworks/agent/examples/example-agent-collaboration?pivots=programming-language-python 9 | """ 10 | 11 | import asyncio 12 | import os 13 | 14 | import azure.identity 15 | from dotenv import load_dotenv 16 | from openai import AsyncAzureOpenAI, AsyncOpenAI 17 | from semantic_kernel import Kernel 18 | from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent 19 | from semantic_kernel.agents.strategies import ( 20 | KernelFunctionSelectionStrategy, 21 | KernelFunctionTerminationStrategy, 22 | ) 23 | from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion 24 | from semantic_kernel.contents import ChatHistoryTruncationReducer 25 | from semantic_kernel.functions import KernelFunctionFromPrompt 26 | 27 | # Define agent names 28 | REVIEWER_NAME = "Reviewer" 29 | WRITER_NAME = "Writer" 30 | 31 | load_dotenv(override=True) 32 | API_HOST = os.getenv("API_HOST", "github") 33 | 34 | 35 | def create_kernel() -> Kernel: 36 | """Creates a Kernel instance with an Azure OpenAI ChatCompletion service.""" 37 | kernel = Kernel() 38 | 39 | if API_HOST == "azure": 40 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), 
"https://cognitiveservices.azure.com/.default") 41 | chat_client = AsyncAzureOpenAI( 42 | api_version=os.environ["AZURE_OPENAI_VERSION"], 43 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 44 | azure_ad_token_provider=token_provider, 45 | ) 46 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.environ["AZURE_OPENAI_CHAT_MODEL"], async_client=chat_client) 47 | else: 48 | chat_client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 49 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), async_client=chat_client) 50 | kernel.add_service(chat_completion_service) 51 | return kernel 52 | 53 | 54 | async def main(): 55 | # Create a single kernel instance for all agents. 56 | kernel = create_kernel() 57 | 58 | # Create ChatCompletionAgents using the same kernel. 59 | agent_reviewer = ChatCompletionAgent( 60 | kernel=kernel, 61 | name=REVIEWER_NAME, 62 | instructions=""" 63 | Your responsibility is to review and identify how to improve user provided content. 64 | If the user has provided input or direction for content already provided, specify how to address this input. 65 | Never directly perform the correction or provide an example. 66 | Once the content has been updated in a subsequent response, review it again until it is satisfactory. 67 | 68 | RULES: 69 | - Only identify suggestions that are specific and actionable. 70 | - Verify previous suggestions have been addressed. 71 | - Never repeat previous suggestions. 72 | """, 73 | ) 74 | 75 | agent_writer = ChatCompletionAgent( 76 | kernel=kernel, 77 | name=WRITER_NAME, 78 | instructions=""" 79 | Your sole responsibility is to rewrite content according to review suggestions. 80 | - Always apply all review directions. 81 | - Always revise the content in its entirety without explanation. 82 | - Never address the user. 
83 | """, 84 | ) 85 | 86 | # Define a selection function to determine which agent should take the next turn. 87 | selection_function = KernelFunctionFromPrompt( 88 | function_name="selection", 89 | prompt=f""" 90 | Examine the provided RESPONSE and choose the next participant. 91 | State only the name of the chosen participant without explanation. 92 | Never choose the participant named in the RESPONSE. 93 | 94 | Choose only from these participants: 95 | - {REVIEWER_NAME} 96 | - {WRITER_NAME} 97 | 98 | Rules: 99 | - If RESPONSE is user input, it is {REVIEWER_NAME}'s turn. 100 | - If RESPONSE is by {REVIEWER_NAME}, it is {WRITER_NAME}'s turn. 101 | - If RESPONSE is by {WRITER_NAME}, it is {REVIEWER_NAME}'s turn. 102 | 103 | RESPONSE: 104 | {{{{$lastmessage}}}} 105 | """, 106 | ) 107 | 108 | # Define a termination function where the reviewer signals completion with "yes". 109 | termination_keyword = "yes" 110 | 111 | termination_function = KernelFunctionFromPrompt( 112 | function_name="termination", 113 | prompt=f""" 114 | Examine the RESPONSE and determine whether the content has been deemed satisfactory. 115 | If the content is satisfactory, respond with a single word without explanation: {termination_keyword}. 116 | If specific suggestions are being provided, it is not satisfactory. 117 | If no correction is suggested, it is satisfactory. 118 | 119 | RESPONSE: 120 | {{{{$lastmessage}}}} 121 | """, 122 | ) 123 | 124 | history_reducer = ChatHistoryTruncationReducer(target_count=5) 125 | 126 | # Create the AgentGroupChat with selection and termination strategies. 
127 | chat = AgentGroupChat( 128 | agents=[agent_reviewer, agent_writer], 129 | selection_strategy=KernelFunctionSelectionStrategy( 130 | initial_agent=agent_reviewer, 131 | function=selection_function, 132 | kernel=kernel, 133 | result_parser=lambda result: str(result.value[0]).strip() if result.value[0] is not None else WRITER_NAME, 134 | history_variable_name="lastmessage", 135 | history_reducer=history_reducer, 136 | ), 137 | termination_strategy=KernelFunctionTerminationStrategy( 138 | agents=[agent_reviewer], 139 | function=termination_function, 140 | kernel=kernel, 141 | result_parser=lambda result: termination_keyword in str(result.value[0]).lower(), 142 | history_variable_name="lastmessage", 143 | maximum_iterations=10, 144 | history_reducer=history_reducer, 145 | ), 146 | ) 147 | 148 | print("Ready! Type your input, or 'exit' to quit.") 149 | 150 | is_complete = False 151 | while not is_complete: 152 | print() 153 | user_input = input("User > ").strip() 154 | if not user_input: 155 | continue 156 | 157 | if user_input.lower() == "exit": 158 | is_complete = True 159 | break 160 | 161 | await chat.add_chat_message(message=user_input) 162 | try: 163 | async for response in chat.invoke(): 164 | if response is None or not response.name: 165 | continue 166 | print() 167 | print(f"# {response.name.upper()}:\n{response.content}") 168 | except Exception as e: 169 | print(f"Error during chat invocation: {e}") 170 | 171 | # Reset the chat's complete flag for the new conversation round. 
172 | chat.is_complete = False 173 | 174 | 175 | if __name__ == "__main__": 176 | asyncio.run(main()) 177 | -------------------------------------------------------------------------------- /examples/smolagents_codeagent.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import azure.identity 4 | from dotenv import load_dotenv 5 | from smolagents import AzureOpenAIServerModel, CodeAgent, DuckDuckGoSearchTool, OpenAIServerModel 6 | 7 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 8 | load_dotenv(override=True) 9 | API_HOST = os.getenv("API_HOST", "github") 10 | 11 | if API_HOST == "github": 12 | model = OpenAIServerModel(model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), api_base="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 13 | elif API_HOST == "azure": 14 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 15 | model = AzureOpenAIServerModel(model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], client_kwargs={"azure_ad_token_provider": token_provider}) 16 | 17 | 18 | agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model) 19 | 20 | agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?") 21 | -------------------------------------------------------------------------------- /examples/spanish/README.md: -------------------------------------------------------------------------------- 1 | 14 | 15 | # Demos de Frameworks de Agentes de IA en Python 16 | 17 | [![Abrir en GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/Azure-Samples/python-ai-agent-frameworks-demos) 18 | [![Abrir en Dev 
Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-ai-agent-frameworks-demos) 19 | 20 | Este repositorio ofrece ejemplos de muchos frameworks populares de agentes de IA en Python usando LLMs de [GitHub Models](https://github.com/marketplace/models). Estos modelos son gratuitos para cualquiera con una cuenta de GitHub, hasta un [límite diario](https://docs.github.com/github-models/prototyping-with-ai-models#rate-limits). 21 | 22 | * [Cómo empezar](#cómo-empezar) 23 | * [GitHub Codespaces](#github-codespaces) 24 | * [VS Code Dev Containers](#vs-code-dev-containers) 25 | * [Entorno local](#entorno-local) 26 | * [Ejecutar los ejemplos en Python](#ejecutar-los-ejemplos-en-python) 27 | * [Configurar GitHub Models](#configurar-github-models) 28 | * [Provisionar recursos de Azure AI](#provisionar-recursos-de-azure-ai) 29 | * [Recursos](#recursos) 30 | 31 | 32 | ## Cómo empezar 33 | 34 | Tenés varias opciones para comenzar con este repositorio. 35 | La forma más rápida es usar GitHub Codespaces, ya que te configurará todo automáticamente, pero también podés [configurarlo localmente](#entorno-local). 36 | 37 | ### GitHub Codespaces 38 | 39 | Podés ejecutar este repositorio virtualmente usando GitHub Codespaces. El botón abrirá una instancia de VS Code basada en web en tu navegador: 40 | 41 | 1. Abre el repositorio (esto puede tardar varios minutos): 42 | 43 | [![Abrir en GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/python-ai-agent-frameworks-demos) 44 | 45 | 2. Abre una ventana de terminal 46 | 3. 
Continúa con los pasos para ejecutar los ejemplos 47 | 48 | ### VS Code Dev Containers 49 | 50 | Una opción relacionada es VS Code Dev Containers, que abrirá el proyecto en tu VS Code local usando la [extensión Dev Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers): 51 | 52 | 1. Inicia Docker Desktop (instálalo si no lo tenés ya) 53 | 2. Abre el proyecto: 54 | 55 | [![Abrir en Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-ai-agent-frameworks-demos) 56 | 57 | 3. En la ventana de VS Code que se abre, una vez que aparezcan los archivos del proyecto (esto puede tardar varios minutos), abre una ventana de terminal. 58 | 4. Continúa con los pasos para ejecutar los ejemplos 59 | 60 | ### Entorno local 61 | 62 | 1. Asegúrate de tener instaladas las siguientes herramientas: 63 | 64 | * [Python 3.10+](https://www.python.org/downloads/) 65 | * Git 66 | 67 | 2. Clona el repositorio: 68 | 69 | ```shell 70 | git clone https://github.com/Azure-Samples/python-ai-agent-frameworks-demos 71 | cd python-ai-agent-frameworks-demos 72 | ``` 73 | 74 | 3. Configura un entorno virtual: 75 | 76 | ```shell 77 | python -m venv venv 78 | source venv/bin/activate # En Windows: venv\Scripts\activate 79 | ``` 80 | 81 | 4. Instala los requisitos: 82 | 83 | ```shell 84 | pip install -r requirements.txt 85 | ``` 86 | 87 | ## Ejecutar los ejemplos en Python 88 | 89 | Podés ejecutar los ejemplos en este repositorio ejecutando los scripts en el directorio `examples/spanish`. Cada script demuestra un patrón o framework diferente de agente de IA. 90 | 91 | | Ejemplo | Descripción | 92 | | ------- | ----------- | 93 | | autogen_basic.py | Usa AutoGen para crear un agente conversacional básico. 
| 94 | autogen_magenticone.py | Implementa el patrón Magentic-One de AutoGen para conversar con múltiples agentes. | 95 | autogen_swarm.py | Usa AutoGen para crear un enjambre de agentes que trabajan juntos. | 96 | autogen_tools.py | Usa AutoGen con herramientas personalizadas para resolver tareas complejas. | 97 | azureai_githubmodels.py | Muestra cómo configurar el acceso a modelos de GitHub y Azure OpenAI. | 98 | langgraph_agent.py | Usa LangGraph para crear un agente con un flujo de trabajo estructurado. | 99 | llamaindex.py | Usa LlamaIndex para construir un agente ReAct para RAG en múltiples índices. | 100 | openai_agents_basic.py | Implementación básica de un agente usando el framework de Agentes de OpenAI. | 101 | openai_agents_handoffs.py | Usa el framework de Agentes de OpenAI para transferir entre varios agentes con herramientas. | 102 | openai_agents_tools.py | Usa el framework de Agentes de OpenAI para crear un planificador de fin de semana. | 103 | openai_functioncalling.py | Usa OpenAI Function Calling para llamar funciones basadas en la salida del LLM. | 104 | openai_githubmodels.py | Configuración básica para usar modelos de GitHub con la API de OpenAI. | 105 | pydanticai_basic.py | Implementación básica usando PydanticAI para crear un agente estructurado. | 106 | pydanticai_graph.py | Usa PydanticAI para construir un grafo de agentes para hacer preguntas y evaluar respuestas. | 107 | pydanticai_multiagent.py | Usa PydanticAI para construir un flujo de trabajo secuencial de dos agentes para planificación de vuelos. | 108 | semantickernel_basic.py | Usa Semantic Kernel para construir un agente simple que enseña español. | 109 | semantickernel_groupchat.py | Usa Semantic Kernel para construir un flujo de trabajo de dos agentes escritor/editor. | 110 | smolagents_codeagent.py | Usa SmolAgents para construir un agente de respuesta a preguntas que puede buscar en la web y ejecutar código. 
| 111 | 112 | ## Configurar GitHub Models 113 | 114 | Si abres este repositorio en GitHub Codespaces, podés ejecutar los scripts gratis usando GitHub Models sin pasos adicionales, ya que tu `GITHUB_TOKEN` ya está configurado en el entorno de Codespaces. 115 | 116 | Si querés ejecutar los scripts localmente, necesitás configurar la variable de entorno `GITHUB_TOKEN` con un token de acceso personal (PAT) de GitHub. Podés crear un PAT siguiendo estos pasos: 117 | 118 | 1. Ve a la configuración de tu cuenta de GitHub. 119 | 2. Haz clic en "Developer settings" en la barra lateral izquierda. 120 | 3. Haz clic en "Personal access tokens" en la barra lateral izquierda. 121 | 4. Haz clic en "Tokens (classic)" o "Fine-grained tokens" según tu preferencia. 122 | 5. Haz clic en "Generate new token". 123 | 6. Dale un nombre a tu token y selecciona los alcances que querés otorgar. Para este proyecto, no necesitás alcances específicos. 124 | 7. Haz clic en "Generate token". 125 | 8. Copia el token generado. 126 | 9. Configura la variable de entorno `GITHUB_TOKEN` en tu terminal o IDE: 127 | 128 | ```shell 129 | export GITHUB_TOKEN=tu_token_de_acceso_personal 130 | ``` 131 | 132 | ## Provisionar recursos de Azure AI 133 | 134 | Podés ejecutar todos los ejemplos en este repositorio usando GitHub Models. Si querés ejecutar los ejemplos usando modelos de Azure OpenAI, necesitás provisionar los recursos de Azure AI, lo que generará costos. 135 | 136 | Este proyecto incluye infraestructura como código (IaC) para provisionar despliegues de Azure OpenAI de "gpt-4o" y "text-embedding-3-large". La IaC está definida en el directorio `infra` y usa Azure Developer CLI para provisionar los recursos. 137 | 138 | 1. Asegúrate de tener instalado [Azure Developer CLI (azd)](https://aka.ms/install-azd). 139 | 140 | 2. 
Inicia sesión en Azure: 141 | 142 | ```shell 143 | azd auth login 144 | ``` 145 | 146 | Para usuarios de GitHub Codespaces, si el comando anterior falla, prueba: 147 | 148 | ```shell 149 | azd auth login --use-device-code 150 | ``` 151 | 152 | 3. Provisiona la cuenta de OpenAI: 153 | 154 | ```shell 155 | azd provision 156 | ``` 157 | 158 | Te pedirá que proporciones un nombre de entorno `azd` (como "agents-demos"), selecciones una suscripción de tu cuenta de Azure y selecciones una ubicación. Luego aprovisionará los recursos en tu cuenta. 159 | 160 | 4. Una vez que los recursos estén aprovisionados, deberías ver un archivo local `.env` con todas las variables de entorno necesarias para ejecutar los scripts. 161 | 5. Para eliminar los recursos, ejecuta: 162 | 163 | ```shell 164 | azd down 165 | ``` 166 | 167 | ## Recursos 168 | 169 | * [Documentación de AutoGen](https://microsoft.github.io/autogen/) 170 | * [Documentación de LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/) 171 | * [Documentación de LlamaIndex](https://docs.llamaindex.ai/en/latest/) 172 | * [Documentación de OpenAI Agents](https://openai.github.io/openai-agents-python/) 173 | * [Documentación de OpenAI Function Calling](https://platform.openai.com/docs/guides/function-calling?api-mode=chat) 174 | * [Documentación de PydanticAI](https://ai.pydantic.dev/multi-agent-applications/) 175 | * [Documentación de Semantic Kernel](https://learn.microsoft.com/semantic-kernel/overview/) 176 | * [Documentación de SmolAgents](https://huggingface.co/docs/smolagents/index) -------------------------------------------------------------------------------- /examples/spanish/autogen_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.messages import TextMessage 7 | from autogen_core import CancellationToken 8 | 
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 9 | from dotenv import load_dotenv 10 | 11 | # Setup the client to use either Azure OpenAI or GitHub Models 12 | load_dotenv(override=True) 13 | API_HOST = os.getenv("API_HOST", "github") 14 | if API_HOST == "github": 15 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 16 | elif API_HOST == "azure": 17 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 18 | client = AzureOpenAIChatCompletionClient(model=os.environ["AZURE_OPENAI_CHAT_MODEL"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_ad_token_provider=token_provider) 19 | 20 | 21 | agent = AssistantAgent( 22 | "english_tutor", 23 | model_client=client, 24 | system_message=( 25 | "Eres un tutor de inglés. Ayuda al usuario a aprender inglés. " 26 | "Traducir solo lenguaje natural al español informal latinoamericano. " 27 | "SOLO RESPONDE en español informal latinoamericano." 
28 | ), 29 | ) 30 | 31 | 32 | async def main() -> None: 33 | response = await agent.on_messages( 34 | [TextMessage(content="hola como estas?", source="user")], 35 | cancellation_token=CancellationToken(), 36 | ) 37 | print(response.chat_message.content) 38 | 39 | 40 | if __name__ == "__main__": 41 | asyncio.run(main()) 42 | -------------------------------------------------------------------------------- /examples/spanish/autogen_magenticone.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.conditions import TextMentionTermination 7 | from autogen_agentchat.teams import MagenticOneGroupChat 8 | from autogen_agentchat.ui import Console 9 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 10 | from dotenv import load_dotenv 11 | 12 | # Setup the client to use either Azure OpenAI or GitHub Models 13 | load_dotenv(override=True) 14 | API_HOST = os.getenv("API_HOST", "github") 15 | 16 | 17 | if API_HOST == "github": 18 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 19 | elif API_HOST == "azure": 20 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 21 | client = AzureOpenAIChatCompletionClient( 22 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 23 | api_version=os.environ["AZURE_OPENAI_VERSION"], 24 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 25 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 26 | azure_ad_token_provider=token_provider, 27 | ) 28 | 29 | local_agent = AssistantAgent( 30 | "local_agent", 31 | model_client=client, 32 | description="Un asistente local que puede sugerir actividades o lugares 
interesantes para visitar.", 33 | system_message="Eres un asistente buena onda que puede sugerir actividades auténticas y lugares interesantes para visitar, aprovechando cualquier información adicional que te den.", 34 | ) 35 | 36 | language_agent = AssistantAgent( 37 | "language_agent", 38 | model_client=client, 39 | description="Un asistente buena onda que puede dar consejos sobre el idioma del destino.", 40 | system_message="Eres un asistente buena onda que revisa planes de viaje y da consejos importantes sobre cómo manejar mejor el idioma o los problemas de comunicación en el destino elegido. Si el plan ya incluye buenos tips de idioma, puedes decir que está todo bien y explicar por qué.", 41 | ) 42 | 43 | travel_summary_agent = AssistantAgent( 44 | "travel_summary_agent", 45 | model_client=client, 46 | description="Un asistente buena onda que puede resumir el plan de viaje.", 47 | system_message="Eres un asistente buena onda que toma todas las sugerencias y consejos de los otros asistentes y arma un plan de viaje final detallado. Asegúrate de que el plan final esté completo y bien integrado. TU RESPUESTA FINAL DEBE SER EL PLAN COMPLETO. 
Cuando el plan esté listo y todas las perspectivas integradas, responde con TERMINATE.", 48 | ) 49 | 50 | 51 | async def run_agents(): 52 | termination = TextMentionTermination("TERMINATE") 53 | group_chat = MagenticOneGroupChat( 54 | [local_agent, language_agent, travel_summary_agent], 55 | termination_condition=termination, 56 | model_client=client, 57 | ) 58 | await Console(group_chat.run_stream(task="Planea un viaje de 3 días a Egipto")) 59 | 60 | 61 | if __name__ == "__main__": 62 | asyncio.run(run_agents()) 63 | -------------------------------------------------------------------------------- /examples/spanish/autogen_swarm.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from autogen_agentchat.agents import AssistantAgent 6 | from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination 7 | from autogen_agentchat.messages import HandoffMessage 8 | from autogen_agentchat.teams import Swarm 9 | from autogen_agentchat.ui import Console 10 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 11 | from dotenv import load_dotenv 12 | 13 | # Setup the client to use either Azure OpenAI or GitHub Models 14 | load_dotenv(override=True) 15 | API_HOST = os.getenv("API_HOST", "github") 16 | 17 | 18 | if API_HOST == "github": 19 | client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 20 | elif API_HOST == "azure": 21 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 22 | client = AzureOpenAIChatCompletionClient( 23 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 24 | api_version=os.environ["AZURE_OPENAI_VERSION"], 25 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 26 | 
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 27 | azure_ad_token_provider=token_provider, 28 | ) 29 | 30 | travel_agent = AssistantAgent( 31 | "travel_agent", 32 | model_client=client, 33 | handoffs=["flights_refunder", "user"], 34 | system_message="""Eres un agente de viajes. 35 | El flights_refunder se encarga de reembolsar vuelos. 36 | Si necesitas información del usuario, primero manda tu mensaje y después pásale la conversación al usuario. 37 | Usa TERMINATE cuando la planificación del viaje esté completa.""", 38 | ) 39 | 40 | 41 | def refund_flight(flight_id: str) -> str: 42 | """Reembolsar un vuelo""" 43 | return f"Vuelo {flight_id} reembolsado" 44 | 45 | 46 | flights_refunder = AssistantAgent( 47 | "flights_refunder", 48 | model_client=client, 49 | handoffs=["travel_agent", "user"], 50 | tools=[refund_flight], 51 | system_message="""Eres un agente especializado en reembolsar vuelos. 52 | Solo necesitas el número de referencia del vuelo para hacer el reembolso. 53 | Puedes reembolsar un vuelo usando la herramienta refund_flight. 54 | Si necesitas información del usuario, primero manda tu mensaje y después pásale la conversación al usuario. 
55 | Cuando termines la transacción, pásale la conversación al agente de viajes para finalizar.""", 56 | ) 57 | 58 | 59 | async def run_team_stream(task: str) -> None: 60 | termination = HandoffTermination(target="user") | TextMentionTermination("TERMINATE") 61 | team = Swarm([travel_agent, flights_refunder], termination_condition=termination) 62 | 63 | task_result = await Console(team.run_stream(task=task)) 64 | last_message = task_result.messages[-1] 65 | 66 | while isinstance(last_message, HandoffMessage) and last_message.target == "user": 67 | user_message = input("Usuario: ") 68 | 69 | task_result = await Console(team.run_stream(task=HandoffMessage(source="user", target=last_message.source, content=user_message))) 70 | last_message = task_result.messages[-1] 71 | 72 | 73 | if __name__ == "__main__": 74 | asyncio.run(run_team_stream("Necesito que me reembolsen mi vuelo.")) 75 | -------------------------------------------------------------------------------- /examples/spanish/autogen_tools.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | import os 4 | import random 5 | from datetime import datetime 6 | 7 | import azure.identity 8 | from autogen_agentchat.agents import AssistantAgent 9 | from autogen_agentchat.conditions import TextMessageTermination 10 | from autogen_agentchat.teams import RoundRobinGroupChat 11 | from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient 12 | from dotenv import load_dotenv 13 | from rich.logging import RichHandler 14 | 15 | # Setup logging with rich 16 | logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]) 17 | logger = logging.getLogger("weekend_planner") 18 | 19 | 20 | # Setup the client to use either Azure OpenAI or GitHub Models 21 | load_dotenv(override=True) 22 | API_HOST = os.getenv("API_HOST", "github") 23 | 24 | 25 | if API_HOST == "github": 26 | 
client = OpenAIChatCompletionClient(model=os.getenv("GITHUB_MODEL", "gpt-4o"), api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com", parallel_tool_calls=False) 27 | elif API_HOST == "azure": 28 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 29 | client = AzureOpenAIChatCompletionClient(model=os.environ["AZURE_OPENAI_CHAT_MODEL"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_ad_token_provider=token_provider, parallel_tool_calls=False) 30 | 31 | 32 | def get_weather(city: str) -> dict: 33 | logger.info(f"Obteniendo el clima para {city}") 34 | if random.random() < 0.05: 35 | return { 36 | "city": city, 37 | "temperature": 72, 38 | "description": "Soleado", 39 | } 40 | else: 41 | return { 42 | "city": city, 43 | "temperature": 60, 44 | "description": "Lluvioso", 45 | } 46 | 47 | 48 | def get_activities(city: str, date: str) -> list: 49 | logger.info(f"Obteniendo actividades para {city} el {date}") 50 | return [ 51 | {"name": "Senderismo", "location": city}, 52 | {"name": "Playa", "location": city}, 53 | {"name": "Museo", "location": city}, 54 | ] 55 | 56 | 57 | def get_current_date() -> str: 58 | logger.info("Obteniendo la fecha actual") 59 | return datetime.now().strftime("%Y-%m-%d") 60 | 61 | 62 | agent = AssistantAgent( 63 | "weekend_planner", 64 | model_client=client, 65 | tools=[get_weather, get_activities, get_current_date], 66 | system_message="Ayudas a los usuarios a planear su fin de semana y elegir las mejores actividades según el clima. Si una actividad no es agradable con el clima actual, no la sugieras. 
Incluye la fecha del fin de semana en tu respuesta.", 67 | ) 68 | 69 | 70 | async def main() -> None: 71 | team = RoundRobinGroupChat([agent], termination_condition=TextMessageTermination(agent.name)) 72 | 73 | async for task_result in team.run_stream(task="¿Qué puedo hacer para divertirme este fin de semana en Seattle?"): 74 | logger.debug("%s: %s", type(task_result).__name__, task_result) 75 | print(task_result.messages[-1].content) 76 | 77 | 78 | if __name__ == "__main__": 79 | logger.setLevel(logging.INFO) 80 | asyncio.run(main()) 81 | -------------------------------------------------------------------------------- /examples/spanish/azureai_githubmodels.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from azure.ai.inference import ChatCompletionsClient 4 | from azure.ai.inference.models import SystemMessage, UserMessage 5 | from azure.core.credentials import AzureKeyCredential 6 | 7 | client = ChatCompletionsClient( 8 | endpoint="https://models.inference.ai.azure.com", 9 | credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]), 10 | ) 11 | 12 | response = client.complete( 13 | messages=[ 14 | SystemMessage(content="Eres un asistente útil."), 15 | UserMessage(content="¿Cuál es la capital de Francia?"), 16 | ], 17 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 18 | ) 19 | print(response.choices[0].message.content) 20 | -------------------------------------------------------------------------------- /examples/spanish/langgraph_agent.py: -------------------------------------------------------------------------------- 1 | # https://github.com/JRAlexander/IntroToAgents1-Oxford/blob/main/intro-langgraph/time-travel.ipynb 2 | 3 | import os 4 | 5 | import azure.identity 6 | from dotenv import load_dotenv 7 | from langchain_core.messages import HumanMessage 8 | from langchain_core.tools import tool 9 | from langchain_openai import AzureChatOpenAI, ChatOpenAI 10 | from langgraph.checkpoint.memory import
MemorySaver 11 | from langgraph.graph import END, START, MessagesState, StateGraph 12 | from langgraph.prebuilt import ToolNode 13 | 14 | 15 | @tool 16 | def play_song_on_spotify(song: str): 17 | """Reproducir una canción en Spotify""" 18 | # Acá llamarías a la API de Spotify... 19 | return f"¡Listo! Ya puse {song} en Spotify." 20 | 21 | 22 | @tool 23 | def play_song_on_apple(song: str): 24 | """Reproducir una canción en Apple Music""" 25 | # Acá llamarías a la API de Apple Music... 26 | return f"¡Listo! Ya puse {song} en Apple Music." 27 | 28 | 29 | tools = [play_song_on_apple, play_song_on_spotify] 30 | tool_node = ToolNode(tools) 31 | 32 | # Configurar el cliente para usar Azure OpenAI o modelos de GitHub 33 | load_dotenv(override=True) 34 | API_HOST = os.getenv("API_HOST", "github") 35 | 36 | if API_HOST == "azure": 37 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 38 | model = AzureChatOpenAI( 39 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 40 | azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 41 | openai_api_version=os.environ["AZURE_OPENAI_VERSION"], 42 | azure_ad_token_provider=token_provider, 43 | ) 44 | else: 45 | model = ChatOpenAI(model=os.getenv("GITHUB_MODEL", "gpt-4o"), base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 46 | 47 | model = model.bind_tools(tools, parallel_tool_calls=False) 48 | 49 | # Definir nodos y conexiones condicionales 50 | 51 | 52 | # Definir la función que decide si continuar o no 53 | def should_continue(state): 54 | messages = state["messages"] 55 | last_message = messages[-1] 56 | # Si no hay una llamada a función, terminamos 57 | if not last_message.tool_calls: 58 | return "end" 59 | # Si hay llamada a función, seguimos 60 | else: 61 | return "continue" 62 | 63 | 64 | # Definir la función que llama al modelo 65 | def call_model(state): 66 | messages = state["messages"] 
67 | response = model.invoke(messages) 68 | # Devolvemos una lista porque esto se agregará a la lista existente 69 | return {"messages": [response]} 70 | 71 | 72 | # Definir un nuevo grafo 73 | workflow = StateGraph(MessagesState) 74 | 75 | # Definir los dos nodos entre los que vamos a alternar 76 | workflow.add_node("agent", call_model) 77 | workflow.add_node("action", tool_node) 78 | 79 | # Establecer el punto de entrada como `agent` 80 | # Esto significa que este nodo es el primero que se llama 81 | workflow.add_edge(START, "agent") 82 | 83 | # Ahora agregamos una conexión condicional 84 | workflow.add_conditional_edges( 85 | # Primero, definimos el nodo inicial. Usamos `agent`. 86 | # Esto significa que estas conexiones se toman después de llamar al nodo `agent`. 87 | "agent", 88 | # Luego, pasamos la función que determina qué nodo se llama después. 89 | should_continue, 90 | # Finalmente, pasamos un mapeo. 91 | # Las claves son strings, y los valores son otros nodos. 92 | # END es un nodo especial que indica que el grafo debe terminar. 93 | # Lo que pasa es que llamamos a `should_continue`, y luego el resultado 94 | # se compara con las claves de este mapeo. 95 | # Según cuál coincida, se llamará a ese nodo. 96 | { 97 | # Si es `continue`, llamamos al nodo de acción. 98 | "continue": "action", 99 | # Si no, terminamos. 100 | "end": END, 101 | }, 102 | ) 103 | 104 | # Ahora agregamos una conexión normal desde `action` hacia `agent`. 105 | # Esto significa que después de llamar a `action`, se llama al nodo `agent`. 106 | workflow.add_edge("action", "agent") 107 | 108 | # Configurar memoria 109 | memory = MemorySaver() 110 | 111 | # Finalmente, ¡lo compilamos! 112 | # Esto lo convierte en un Runnable de LangChain, 113 | # lo que significa que podés usarlo como cualquier otro runnable. 
114 | 115 | # Podríamos pasar `interrupt_before=["action"]` al compilar para agregar 116 | # un punto de interrupción antes de llamar al nodo `action`; acá compilamos sin interrupciones 117 | app = workflow.compile(checkpointer=memory) 118 | 119 | config = {"configurable": {"thread_id": "1"}} 120 | input_message = HumanMessage(content="¿Podés poner la canción más popular de Taylor Swift?") 121 | for event in app.stream({"messages": [input_message]}, config, stream_mode="values"): 122 | event["messages"][-1].pretty_print() -------------------------------------------------------------------------------- /examples/spanish/llamaindex.py: -------------------------------------------------------------------------------- 1 | # https://docs.llamaindex.ai/en/stable/examples/agent/react_agent_with_query_engine/ 2 | 3 | import os 4 | from pathlib import Path 5 | 6 | import azure.identity 7 | from dotenv import load_dotenv 8 | from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage 9 | from llama_index.core.agent.workflow import AgentStream, ReActAgent 10 | from llama_index.core.tools import QueryEngineTool 11 | from llama_index.core.workflow import Context 12 | from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding 13 | from llama_index.embeddings.openai import OpenAIEmbedding 14 | from llama_index.llms.azure_openai import AzureOpenAI 15 | from llama_index.llms.openai_like import OpenAILike 16 | 17 | # Configuramos el cliente para usar Azure OpenAI o Modelos de GitHub 18 | load_dotenv(override=True) 19 | API_HOST = os.getenv("API_HOST", "github") 20 | 21 | if API_HOST == "azure": 22 | token_provider = azure.identity.get_bearer_token_provider( 23 | azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default" 24 | ) 25 | Settings.llm = AzureOpenAI( 26 | model=os.environ["AZURE_OPENAI_CHAT_MODEL"], 27 | deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 28 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 29 | 
api_version=os.environ["AZURE_OPENAI_VERSION"], 30 | use_azure_ad=True, 31 | azure_ad_token_provider=token_provider, 32 | ) 33 | 34 | Settings.embed_model = AzureOpenAIEmbedding( 35 | model=os.environ["AZURE_OPENAI_EMBEDDING_MODEL"], 36 | deployment_name=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"], 37 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 38 | api_version=os.environ["AZURE_OPENAI_VERSION"], 39 | use_azure_ad=True, 40 | azure_ad_token_provider=token_provider, 41 | ) 42 | else: 43 | Settings.llm = OpenAILike( 44 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 45 | api_base="https://models.inference.ai.azure.com", 46 | api_key=os.environ["GITHUB_TOKEN"], 47 | is_chat_model=True, 48 | ) 49 | 50 | Settings.embed_model = OpenAIEmbedding( 51 | model="text-embedding-3-small", api_base="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"] 52 | ) 53 | 54 | # Intentamos cargar el índice desde el almacenamiento 55 | try: 56 | storage_context = StorageContext.from_defaults(persist_dir=Path(__file__).parent.parent / "../example_data/.llama_index_storage/docs1") 57 | index1 = load_index_from_storage(storage_context) 58 | 59 | storage_context = StorageContext.from_defaults(persist_dir=Path(__file__).parent.parent / "../example_data/.llama_index_storage/docs2") 60 | index2 = load_index_from_storage(storage_context) 61 | 62 | index_loaded = True 63 | except FileNotFoundError: 64 | index_loaded = False 65 | 66 | if not index_loaded: 67 | root_dir = Path(__file__).parent.parent 68 | 69 | docs1 = SimpleDirectoryReader(input_files=[root_dir / "../example_data/employee_handbook.pdf"]).load_data() 70 | docs2 = SimpleDirectoryReader(input_files=[root_dir / "../example_data/PerksPlus.pdf"]).load_data() 71 | index1 = VectorStoreIndex.from_documents(docs1) 72 | index2 = VectorStoreIndex.from_documents(docs2) 73 | 74 | index1.storage_context.persist(persist_dir=root_dir / "../example_data/.llama_index_storage/docs1") 75 | index2.storage_context.persist(persist_dir=root_dir / "../example_data/.llama_index_storage/docs2") 76 | 77 | engine1 = 
index1.as_query_engine(similarity_top_k=3) 78 | engine2 = index2.as_query_engine(similarity_top_k=3) 79 | 80 | query_engine_tools = [ 81 | QueryEngineTool.from_defaults( 82 | query_engine=engine1, 83 | name="engine1", 84 | description=( 85 | "Proporciona información sobre el manual para empleados de Contoso - cubre roles básicos de trabajo, políticas, seguridad laboral, RRHH, etc." 86 | ), 87 | ), 88 | QueryEngineTool.from_defaults( 89 | query_engine=engine2, 90 | name="engine2", 91 | description=("Proporciona información sobre el programa PerksPlus de Contoso, incluyendo qué puede ser reembolsado."), 92 | ), 93 | ] 94 | 95 | 96 | async def main(): 97 | agent = ReActAgent(tools=query_engine_tools, llm=Settings.llm) 98 | ctx = Context(agent) 99 | 100 | handler = agent.run("¿puedo obtener reembolso por mis herramientas de jardinería?", ctx=ctx) 101 | 102 | async for ev in handler.stream_events(): 103 | if isinstance(ev, AgentStream): 104 | print(f"{ev.delta}", end="", flush=True) 105 | 106 | response = await handler 107 | print(str(response)) 108 | 109 | 110 | if __name__ == "__main__": 111 | import asyncio 112 | 113 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/openai_agents_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | import openai 6 | from agents import Agent, OpenAIChatCompletionsModel, Runner, set_tracing_disabled 7 | from dotenv import load_dotenv 8 | 9 | # Disable tracing since we're not connected to a supported tracing provider 10 | set_tracing_disabled(disabled=True) 11 | 12 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 13 | load_dotenv(override=True) 14 | API_HOST = os.getenv("API_HOST", "github") 15 | if API_HOST == "github": 16 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 17 
| MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 18 | elif API_HOST == "azure": 19 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 20 | client = openai.AsyncAzureOpenAI( 21 | api_version=os.environ["AZURE_OPENAI_VERSION"], 22 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 23 | azure_ad_token_provider=token_provider, 24 | ) 25 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 26 | 27 | 28 | agent = Agent( 29 | name="Tutor de Inglés", 30 | instructions="Eres un tutor de inglés. Ayuda al usuario a aprender inglés. SOLO responde en inglés.", 31 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 32 | ) 33 | 34 | 35 | async def main(): 36 | result = await Runner.run(agent, input="hola hola, como estas?") 37 | print(result.final_output) 38 | 39 | 40 | if __name__ == "__main__": 41 | asyncio.run(main()) 42 | -------------------------------------------------------------------------------- /examples/spanish/openai_agents_handoffs.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | import openai 6 | from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled 7 | from agents.extensions.visualization import draw_graph 8 | from dotenv import load_dotenv 9 | 10 | # Desactivamos el rastreo ya que no estamos usando modelos de OpenAI.com 11 | set_tracing_disabled(disabled=True) 12 | 13 | # Configuramos el cliente OpenAI para usar Azure OpenAI o Modelos de GitHub 14 | load_dotenv(override=True) 15 | API_HOST = os.getenv("API_HOST", "github") 16 | 17 | if API_HOST == "github": 18 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 19 | MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 20 | elif API_HOST == "azure": 21 | token_provider = 
azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 22 | client = openai.AsyncAzureOpenAI( 23 | api_version=os.environ["AZURE_OPENAI_VERSION"], 24 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 25 | azure_ad_token_provider=token_provider, 26 | ) 27 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 28 | 29 | 30 | @function_tool 31 | def get_weather(city: str) -> dict: 32 | return { 33 | "city": city, 34 | "temperature": 72, 35 | "description": "Soleado", 36 | } 37 | 38 | 39 | agent = Agent( 40 | name="Agente del clima", 41 | instructions="Solo puedes proporcionar información del clima.", 42 | tools=[get_weather], 43 | ) 44 | 45 | spanish_agent = Agent( 46 | name="Agente en español", 47 | instructions="Solo hablas español.", 48 | tools=[get_weather], 49 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 50 | ) 51 | 52 | english_agent = Agent( 53 | name="Agente en inglés", 54 | instructions="Solo hablas inglés.", 55 | tools=[get_weather], 56 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 57 | ) 58 | 59 | triage_agent = Agent( 60 | name="Agente de clasificación", 61 | instructions="Transfiere al agente apropiado según el idioma de la solicitud.", 62 | handoffs=[spanish_agent, english_agent], 63 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 64 | ) 65 | 66 | 67 | async def main(): 68 | result = await Runner.run(triage_agent, input="Hola, ¿cómo estás? 
¿Puedes darme el clima para Cuenca, Ecuador?") 69 | gz_source = draw_graph(triage_agent, filename="openai_agents_handoffs.png") 70 | # guardamos el grafo en un archivo en formato graphviz 71 | gz_source.save("openai_agents_handoffs.dot") 72 | 73 | print(result.final_output) 74 | 75 | 76 | if __name__ == "__main__": 77 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/openai_agents_tools.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | import os 4 | import random 5 | from datetime import datetime 6 | 7 | import azure.identity 8 | import openai 9 | from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled 10 | from dotenv import load_dotenv 11 | from rich.logging import RichHandler 12 | 13 | # Configuración de logging con rich 14 | logging.basicConfig(level=logging.WARNING, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]) 15 | logger = logging.getLogger("weekend_planner") 16 | 17 | # Desactivamos el rastreo ya que no estamos conectados a un proveedor compatible 18 | set_tracing_disabled(disabled=True) 19 | 20 | # Configuramos el cliente OpenAI para usar Azure OpenAI o Modelos de GitHub 21 | load_dotenv(override=True) 22 | API_HOST = os.getenv("API_HOST", "github") 23 | if API_HOST == "github": 24 | client = openai.AsyncOpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 25 | MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 26 | elif API_HOST == "azure": 27 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 28 | client = openai.AsyncAzureOpenAI( 29 | api_version=os.environ["AZURE_OPENAI_VERSION"], 30 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 31 | azure_ad_token_provider=token_provider, 32 | ) 33 | MODEL_NAME = 
os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 34 | 35 | 36 | @function_tool 37 | def get_weather(city: str) -> dict: 38 | logger.info(f"Obteniendo el clima para {city}") 39 | if random.random() < 0.05: 40 | return { 41 | "city": city, 42 | "temperature": 72, 43 | "description": "Soleado", 44 | } 45 | else: 46 | return { 47 | "city": city, 48 | "temperature": 60, 49 | "description": "Lluvioso", 50 | } 51 | 52 | 53 | @function_tool 54 | def get_activities(city: str, date: str) -> list: 55 | logger.info(f"Obteniendo actividades para {city} el {date}") 56 | return [ 57 | {"name": "Senderismo", "location": city}, 58 | {"name": "Playa", "location": city}, 59 | {"name": "Museo", "location": city}, 60 | ] 61 | 62 | 63 | @function_tool 64 | def get_current_date() -> str: 65 | logger.info("Obteniendo fecha actual") 66 | return datetime.now().strftime("%Y-%m-%d") 67 | 68 | 69 | agent = Agent( 70 | name="Planificador de Finde", 71 | instructions="Ayudas a los usuarios a planificar sus fines de semana y elegir las mejores actividades según el clima. Si una actividad sería desagradable con el clima actual, no la sugieras. 
Incluye la fecha del fin de semana en tu respuesta.", 72 | tools=[get_weather, get_activities, get_current_date], 73 | model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), 74 | ) 75 | 76 | 77 | async def main(): 78 | result = await Runner.run(agent, input="hola ¿qué puedo hacer este fin de semana en Quito?") 79 | print(result.final_output) 80 | 81 | 82 | if __name__ == "__main__": 83 | logger.setLevel(logging.INFO) 84 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/openai_functioncalling.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import azure.identity 4 | import openai 5 | from dotenv import load_dotenv 6 | 7 | # Configuración del cliente OpenAI para usar Azure OpenAI o Modelos de GitHub 8 | load_dotenv(override=True) 9 | API_HOST = os.getenv("API_HOST", "github") 10 | 11 | if API_HOST == "github": 12 | client = openai.OpenAI(base_url="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 13 | MODEL_NAME = os.getenv("GITHUB_MODEL", "gpt-4o") 14 | elif API_HOST == "azure": 15 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 16 | client = openai.AzureOpenAI( 17 | api_version=os.environ["AZURE_OPENAI_VERSION"], 18 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 19 | azure_ad_token_provider=token_provider, 20 | ) 21 | MODEL_NAME = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] 22 | 23 | tools = [ 24 | { 25 | "type": "function", 26 | "function": { 27 | "name": "lookup_weather", 28 | "description": "Consulta el clima para una ciudad o código postal dado.", 29 | "parameters": { 30 | "type": "object", 31 | "properties": { 32 | "city_name": { 33 | "type": "string", 34 | "description": "El nombre de la ciudad", 35 | }, 36 | "zip_code": { 37 | "type": "string", 38 | "description": "El código 
postal", 39 | }, 40 | }, 41 | "additionalProperties": False, 42 | }, 43 | }, 44 | }, 45 | { 46 | "type": "function", 47 | "function": { 48 | "name": "lookup_movies", 49 | "description": "Consulta películas en cartelera en una ciudad o código postal dado.", 50 | "parameters": { 51 | "type": "object", 52 | "properties": { 53 | "city_name": { 54 | "type": "string", 55 | "description": "El nombre de la ciudad", 56 | }, 57 | "zip_code": { 58 | "type": "string", 59 | "description": "El código postal", 60 | }, 61 | }, 62 | "additionalProperties": False, 63 | }, 64 | }, 65 | }, 66 | ] 67 | 68 | response = client.chat.completions.create( 69 | model=MODEL_NAME, 70 | messages=[ 71 | {"role": "system", "content": "Eres un chatbot de turismo."}, 72 | {"role": "user", "content": "¿está lloviendo lo suficiente en Cuenca, Ecuador como para ir a ver películas y cuáles están en cartelera?"}, 73 | ], 74 | tools=tools, 75 | tool_choice="auto", 76 | ) 77 | 78 | print(f"Respuesta de {MODEL_NAME} en {API_HOST}: \n") 79 | for message in response.choices[0].message.tool_calls: 80 | print(message.function.name) 81 | print(message.function.arguments) -------------------------------------------------------------------------------- /examples/spanish/openai_githubmodels.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from openai import OpenAI 4 | 5 | client = OpenAI( 6 | base_url="https://models.inference.ai.azure.com", 7 | api_key=os.environ["GITHUB_TOKEN"], 8 | ) 9 | response = client.chat.completions.create( 10 | messages=[ 11 | { 12 | "role": "system", 13 | "content": "Eres un asistente muy útil.", 14 | }, 15 | { 16 | "role": "user", 17 | "content": "¿Cuál es la capital de Ecuador?", 18 | }, 19 | ], 20 | model=os.getenv("GITHUB_MODEL", "gpt-4o"), 21 | ) 22 | print(response.choices[0].message.content) 23 | -------------------------------------------------------------------------------- /examples/spanish/pydanticai_basic.py: 
-------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from dotenv import load_dotenv 6 | from openai import AsyncAzureOpenAI, AsyncOpenAI 7 | from pydantic_ai import Agent 8 | from pydantic_ai.models.openai import OpenAIModel 9 | from pydantic_ai.providers.openai import OpenAIProvider 10 | 11 | # Setup the OpenAI client to use either Azure OpenAI or GitHub Models 12 | load_dotenv(override=True) 13 | API_HOST = os.getenv("API_HOST", "github") 14 | 15 | if API_HOST == "github": 16 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 17 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 18 | elif API_HOST == "azure": 19 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 20 | client = AsyncAzureOpenAI( 21 | api_version=os.environ["AZURE_OPENAI_VERSION"], 22 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 23 | azure_ad_token_provider=token_provider, 24 | ) 25 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 26 | 27 | agent: Agent[None, str] = Agent( 28 | model, 29 | system_prompt="Eres un tutor de inglés. Ayuda al usuario a aprender inglés. 
SOLO responde en inglés.", 30 | result_type=str, 31 | ) 32 | 33 | 34 | async def main(): 35 | result = await agent.run("hola hola, como estas?") 36 | print(result.data) 37 | 38 | 39 | if __name__ == "__main__": 40 | asyncio.run(main()) 41 | -------------------------------------------------------------------------------- /examples/spanish/pydanticai_graph.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations as _annotations 2 | 3 | import asyncio 4 | import os 5 | from dataclasses import dataclass, field 6 | 7 | import azure.identity 8 | from dotenv import load_dotenv 9 | from pydantic import BaseModel 10 | from openai import AsyncAzureOpenAI, AsyncOpenAI 11 | from pydantic_ai import Agent 12 | from pydantic_ai.format_as_xml import format_as_xml 13 | from pydantic_ai.messages import ModelMessage 14 | from pydantic_ai.models.openai import OpenAIModel 15 | from pydantic_ai.providers.openai import OpenAIProvider 16 | from pydantic_graph import ( 17 | BaseNode, 18 | End, 19 | Graph, 20 | GraphRunContext, 21 | ) 22 | 23 | # Configuración del cliente OpenAI para usar Azure OpenAI o Modelos de GitHub 24 | load_dotenv(override=True) 25 | API_HOST = os.getenv("API_HOST", "github") 26 | 27 | if API_HOST == "github": 28 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 29 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 30 | elif API_HOST == "azure": 31 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 32 | client = AsyncAzureOpenAI( 33 | api_version=os.environ["AZURE_OPENAI_VERSION"], 34 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 35 | azure_ad_token_provider=token_provider, 36 | ) 37 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], 
provider=OpenAIProvider(openai_client=client)) 38 | 39 | 40 | """ 41 | Definiciones de agentes 42 | """ 43 | 44 | ask_agent = Agent(model, result_type=str, instrument=True) 45 | 46 | 47 | class EvaluationResult(BaseModel, use_attribute_docstrings=True): 48 | correct: bool 49 | """Si la respuesta es correcta.""" 50 | comment: str 51 | """Comentario sobre la respuesta, regaña al usuario si la respuesta está mal.""" 52 | 53 | 54 | evaluate_agent = Agent( 55 | model, 56 | result_type=EvaluationResult, 57 | system_prompt="Dada una pregunta y respuesta, evalúa si la respuesta es correcta.", 58 | ) 59 | 60 | """ 61 | Estado del grafo y nodos 62 | """ 63 | 64 | 65 | @dataclass 66 | class QuestionState: 67 | question: str | None = None 68 | ask_agent_messages: list[ModelMessage] = field(default_factory=list) 69 | evaluate_agent_messages: list[ModelMessage] = field(default_factory=list) 70 | 71 | 72 | @dataclass 73 | class Ask(BaseNode[QuestionState]): 74 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Answer: 75 | result = await ask_agent.run( 76 | "Haz una pregunta simple con una única respuesta correcta.", 77 | message_history=ctx.state.ask_agent_messages, 78 | ) 79 | ctx.state.ask_agent_messages += result.all_messages() 80 | ctx.state.question = result.data 81 | return Answer(result.data) 82 | 83 | 84 | @dataclass 85 | class Answer(BaseNode[QuestionState]): 86 | question: str 87 | 88 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Evaluate: 89 | answer = input(f"{self.question}: ") 90 | return Evaluate(answer) 91 | 92 | 93 | @dataclass 94 | class Evaluate(BaseNode[QuestionState, None, str]): 95 | answer: str 96 | 97 | async def run( 98 | self, 99 | ctx: GraphRunContext[QuestionState], 100 | ) -> End[str] | Reprimand: 101 | assert ctx.state.question is not None 102 | result = await evaluate_agent.run( 103 | format_as_xml({"question": ctx.state.question, "answer": self.answer}), 104 | message_history=ctx.state.evaluate_agent_messages, 105 | ) 
106 | ctx.state.evaluate_agent_messages += result.all_messages() 107 | if result.data.correct: 108 | return End(result.data.comment) 109 | else: 110 | return Reprimand(result.data.comment) 111 | 112 | 113 | @dataclass 114 | class Reprimand(BaseNode[QuestionState]): 115 | comment: str 116 | 117 | async def run(self, ctx: GraphRunContext[QuestionState]) -> Ask: 118 | print(f"Comentario: {self.comment}") 119 | ctx.state.question = None 120 | return Ask() 121 | 122 | 123 | question_graph = Graph(nodes=(Ask, Answer, Evaluate, Reprimand), state_type=QuestionState) 124 | 125 | 126 | async def main(): 127 | state = QuestionState() 128 | node = Ask() 129 | end = await question_graph.run(node, state=state) 130 | print("FIN:", end.output) 131 | 132 | 133 | if __name__ == "__main__": 134 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/pydanticai_multiagent.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | from typing import Literal 4 | 5 | import azure.identity 6 | from dotenv import load_dotenv 7 | from openai import AsyncAzureOpenAI, AsyncOpenAI 8 | from pydantic import BaseModel, Field 9 | from pydantic_ai import Agent, RunContext 10 | from pydantic_ai.messages import ModelMessage 11 | from pydantic_ai.models.openai import OpenAIModel 12 | from pydantic_ai.providers.openai import OpenAIProvider 13 | from rich.prompt import Prompt 14 | 15 | # Configurar el cliente de OpenAI para usar Azure OpenAI o Modelos de GitHub 16 | load_dotenv(override=True) 17 | API_HOST = os.getenv("API_HOST", "github") 18 | 19 | if API_HOST == "github": 20 | client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 21 | model = OpenAIModel(os.getenv("GITHUB_MODEL", "gpt-4o"), provider=OpenAIProvider(openai_client=client)) 22 | elif API_HOST == "azure": 23 | token_provider = 
azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 24 | client = AsyncAzureOpenAI( 25 | api_version=os.environ["AZURE_OPENAI_VERSION"], 26 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 27 | azure_ad_token_provider=token_provider, 28 | ) 29 | model = OpenAIModel(os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], provider=OpenAIProvider(openai_client=client)) 30 | 31 | 32 | class Flight(BaseModel): 33 | flight_number: str 34 | 35 | 36 | class Failed(BaseModel): 37 | """No se pudo encontrar una opción satisfactoria.""" 38 | 39 | 40 | flight_search_agent = Agent( 41 | model, 42 | result_type=Flight | Failed, 43 | system_prompt=('Usa la herramienta "flight_search" para encontrar un vuelo desde el origen hasta el destino indicado.'), 44 | ) 45 | 46 | 47 | @flight_search_agent.tool 48 | async def flight_search(ctx: RunContext[None], origin: str, destination: str) -> Flight | None: 49 | # en realidad, esto llamaría a una API de búsqueda de vuelos o 50 | # usaría un navegador para buscar en un sitio web de búsqueda de vuelos 51 | return Flight(flight_number="AK456") 52 | 53 | 54 | async def find_flight() -> Flight | None: 55 | message_history: list[ModelMessage] | None = None 56 | for _ in range(3): 57 | prompt = Prompt.ask( 58 | "¿Desde dónde y hacia dónde te gustaría volar?", 59 | ) 60 | result = await flight_search_agent.run(prompt, message_history=message_history) 61 | if isinstance(result.data, Flight): 62 | return result.data 63 | else: 64 | message_history = result.all_messages(result_tool_return_content="Por favor, intenta de nuevo.") 65 | 66 | 67 | class Seat(BaseModel): 68 | row: int = Field(ge=1, le=30) 69 | seat: Literal["A", "B", "C", "D", "E", "F"] 70 | 71 | 72 | # Este agente es responsable de extraer la selección de asiento del usuario 73 | seat_preference_agent = Agent( 74 | model, 75 | result_type=Seat | Failed, 76 | system_prompt=("Extrae la preferencia de asiento del usuario. 
" "Los asientos A y F son asientos de ventana. " "La fila 1 es la fila delantera y tiene más espacio para las piernas. " "Las filas 14 y 20 también tienen más espacio para las piernas. "), 77 | ) 78 | 79 | 80 | async def find_seat() -> Seat: 81 | message_history: list[ModelMessage] | None = None 82 | while True: 83 | answer = Prompt.ask("¿Qué asiento te gustaría?") 84 | 85 | result = await seat_preference_agent.run(answer, message_history=message_history) 86 | if isinstance(result.data, Seat): 87 | return result.data 88 | else: 89 | print("No pude entender tu preferencia de asiento. Por favor, intenta de nuevo.") 90 | message_history = result.all_messages() 91 | 92 | 93 | async def main(): 94 | opt_flight_details = await find_flight() 95 | if opt_flight_details is not None: 96 | print(f"Vuelo encontrado: {opt_flight_details.flight_number}") 97 | seat_preference = await find_seat() 98 | print(f"Preferencia de asiento: {seat_preference}") 99 | 100 | 101 | if __name__ == "__main__": 102 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/semantickernel_basic.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | 4 | import azure.identity 5 | from dotenv import load_dotenv 6 | from openai import AsyncAzureOpenAI, AsyncOpenAI 7 | from semantic_kernel.agents import ChatCompletionAgent 8 | from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion 9 | 10 | load_dotenv(override=True) 11 | API_HOST = os.getenv("API_HOST", "github") 12 | if API_HOST == "azure": 13 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 14 | chat_client = AsyncAzureOpenAI( 15 | api_version=os.environ["AZURE_OPENAI_VERSION"], 16 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 17 | azure_ad_token_provider=token_provider, 18 | ) 19 | 
chat_completion_service = OpenAIChatCompletion(ai_model_id=os.environ["AZURE_OPENAI_CHAT_MODEL"], async_client=chat_client) 20 | else: 21 | chat_client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 22 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), async_client=chat_client) 23 | 24 | agent = ChatCompletionAgent(name="spanish_tutor", instructions="You are a Spanish tutor. Help the user learn Spanish. ONLY respond in Spanish.", service=chat_completion_service) 25 | 26 | 27 | async def main(): 28 | response = await agent.get_response(messages="oh hey how are you?") 29 | print(response.content) 30 | 31 | 32 | if __name__ == "__main__": 33 | asyncio.run(main()) 34 | -------------------------------------------------------------------------------- /examples/spanish/semantickernel_groupchat.py: -------------------------------------------------------------------------------- 1 | """ 2 | El siguiente ejemplo muestra cómo crear un simple 3 | chat grupal de agentes que utiliza un Agente de Revisión 4 | junto con un Agente Escritor para 5 | completar una tarea del usuario. 
6 | 7 | Del tutorial completo aquí: 8 | https://learn.microsoft.com/semantic-kernel/frameworks/agent/examples/example-agent-collaboration?pivots=programming-language-python 9 | """ 10 | 11 | import asyncio 12 | import os 13 | 14 | import azure.identity 15 | from dotenv import load_dotenv 16 | from openai import AsyncAzureOpenAI, AsyncOpenAI 17 | from semantic_kernel import Kernel 18 | from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent 19 | from semantic_kernel.agents.strategies import ( 20 | KernelFunctionSelectionStrategy, 21 | KernelFunctionTerminationStrategy, 22 | ) 23 | from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion 24 | from semantic_kernel.contents import ChatHistoryTruncationReducer 25 | from semantic_kernel.functions import KernelFunctionFromPrompt 26 | 27 | # Define nombres de agentes 28 | REVIEWER_NAME = "Revisor" 29 | WRITER_NAME = "Escritor" 30 | 31 | load_dotenv(override=True) 32 | API_HOST = os.getenv("API_HOST", "github") 33 | 34 | 35 | def create_kernel() -> Kernel: 36 | """Crea una instancia de Kernel con un servicio de Azure OpenAI ChatCompletion.""" 37 | kernel = Kernel() 38 | 39 | if API_HOST == "azure": 40 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 41 | chat_client = AsyncAzureOpenAI( 42 | api_version=os.environ["AZURE_OPENAI_VERSION"], 43 | azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], 44 | azure_ad_token_provider=token_provider, 45 | ) 46 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.environ["AZURE_OPENAI_CHAT_MODEL"], async_client=chat_client) 47 | else: 48 | chat_client = AsyncOpenAI(api_key=os.environ["GITHUB_TOKEN"], base_url="https://models.inference.ai.azure.com") 49 | chat_completion_service = OpenAIChatCompletion(ai_model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), async_client=chat_client) 50 | kernel.add_service(chat_completion_service) 51 | return kernel 52 | 53 | 
54 | async def main(): 55 | # Crear una única instancia de kernel para todos los agentes 56 | kernel = create_kernel() 57 | 58 | # Crear ChatCompletionAgents usando el mismo kernel 59 | agent_reviewer = ChatCompletionAgent( 60 | kernel=kernel, 61 | name=REVIEWER_NAME, 62 | instructions=""" 63 | Tu responsabilidad es revisar e identificar cómo mejorar el contenido proporcionado por el usuario. 64 | Si el usuario ha proporcionado indicaciones sobre contenido ya entregado, especifica cómo abordar estas indicaciones. 65 | Nunca realices la corrección directamente ni proporciones ejemplos. 66 | Una vez que el contenido haya sido actualizado en una respuesta posterior, revísalo nuevamente hasta que sea satisfactorio. 67 | 68 | REGLAS: 69 | - Solo identifica sugerencias específicas y accionables. 70 | - Verifica que las sugerencias previas hayan sido implementadas. 71 | - Nunca repitas sugerencias anteriores. 72 | """, 73 | ) 74 | 75 | agent_writer = ChatCompletionAgent( 76 | kernel=kernel, 77 | name=WRITER_NAME, 78 | instructions=""" 79 | Tu única responsabilidad es reescribir contenido según las sugerencias de revisión. 80 | - Siempre aplica todas las indicaciones de la revisión. 81 | - Siempre revisa el contenido en su totalidad sin explicaciones. 82 | - Nunca te dirijas al usuario. 83 | """, 84 | ) 85 | 86 | # Definir una función de selección para determinar qué agente debe tomar el siguiente turno 87 | selection_function = KernelFunctionFromPrompt( 88 | function_name="selection", 89 | prompt=f""" 90 | Examina la RESPUESTA proporcionada y elige al próximo participante. 91 | Indica solo el nombre del participante elegido sin explicación. 92 | Nunca elijas al participante nombrado en la RESPUESTA. 93 | 94 | Elige solo entre estos participantes: 95 | - {REVIEWER_NAME} 96 | - {WRITER_NAME} 97 | 98 | Reglas: 99 | - Si la RESPUESTA es input del usuario, es turno de {REVIEWER_NAME}. 100 | - Si la RESPUESTA es de {REVIEWER_NAME}, es turno de {WRITER_NAME}. 
101 | - Si la RESPUESTA es de {WRITER_NAME}, es turno de {REVIEWER_NAME}. 102 | 103 | RESPUESTA: 104 | {{{{$lastmessage}}}} 105 | """, 106 | ) 107 | 108 | # Definir una función de terminación donde el revisor señala la finalización con "sí" 109 | termination_keyword = "sí" 110 | 111 | termination_function = KernelFunctionFromPrompt( 112 | function_name="termination", 113 | prompt=f""" 114 | Examina la RESPUESTA y determina si el contenido ha sido considerado satisfactorio. 115 | Si el contenido es satisfactorio, responde con una sola palabra sin explicación: {termination_keyword}. 116 | Si se están proporcionando sugerencias específicas, no es satisfactorio. 117 | Si no se sugiere ninguna corrección, es satisfactorio. 118 | 119 | RESPUESTA: 120 | {{{{$lastmessage}}}} 121 | """, 122 | ) 123 | 124 | history_reducer = ChatHistoryTruncationReducer(target_count=5) 125 | 126 | # Crear el AgentGroupChat con estrategias de selección y terminación 127 | chat = AgentGroupChat( 128 | agents=[agent_reviewer, agent_writer], 129 | selection_strategy=KernelFunctionSelectionStrategy( 130 | initial_agent=agent_reviewer, 131 | function=selection_function, 132 | kernel=kernel, 133 | result_parser=lambda result: str(result.value[0]).strip() if result.value[0] is not None else WRITER_NAME, 134 | history_variable_name="lastmessage", 135 | history_reducer=history_reducer, 136 | ), 137 | termination_strategy=KernelFunctionTerminationStrategy( 138 | agents=[agent_reviewer], 139 | function=termination_function, 140 | kernel=kernel, 141 | result_parser=lambda result: termination_keyword in str(result.value[0]).lower(), 142 | history_variable_name="lastmessage", 143 | maximum_iterations=10, 144 | history_reducer=history_reducer, 145 | ), 146 | ) 147 | 148 | print("¡Listo! Escribe tu mensaje, o 'exit' para salir, 'reset' para reiniciar la conversación. 
" "Puedes pasar un archivo usando @.") 149 | 150 | is_complete = False 151 | while not is_complete: 152 | print() 153 | user_input = input("Usuario > ").strip() 154 | if not user_input: 155 | continue 156 | 157 | if user_input.lower() == "exit": 158 | is_complete = True 159 | break 160 | 161 | await chat.add_chat_message(message=user_input) 162 | try: 163 | async for response in chat.invoke(): 164 | if response is None or not response.name: 165 | continue 166 | print() 167 | print(f"# {response.name.upper()}:\n{response.content}") 168 | except Exception as e: 169 | print(f"Error durante la invocación del chat: {e}") 170 | 171 | # Reinicia la bandera de completado del chat para la nueva ronda de conversación 172 | chat.is_complete = False 173 | 174 | 175 | if __name__ == "__main__": 176 | asyncio.run(main()) -------------------------------------------------------------------------------- /examples/spanish/smolagents_codeagent.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import azure.identity 4 | from dotenv import load_dotenv 5 | from smolagents import AzureOpenAIServerModel, CodeAgent, DuckDuckGoSearchTool, OpenAIServerModel 6 | 7 | # Configuración del cliente OpenAI para usar Azure OpenAI o Modelos de GitHub 8 | load_dotenv(override=True) 9 | API_HOST = os.getenv("API_HOST", "github") 10 | 11 | if API_HOST == "github": 12 | model = OpenAIServerModel(model_id=os.getenv("GITHUB_MODEL", "gpt-4o"), api_base="https://models.inference.ai.azure.com", api_key=os.environ["GITHUB_TOKEN"]) 13 | elif API_HOST == "azure": 14 | token_provider = azure.identity.get_bearer_token_provider(azure.identity.DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") 15 | model = AzureOpenAIServerModel(model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"], api_version=os.environ["AZURE_OPENAI_VERSION"], azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], client_kwargs={"azure_ad_token_provider": token_provider}) 
16 | 17 | 18 | agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model) 19 | 20 | agent.run("¿Cuántos segundos le tomaría a un leopardo a máxima velocidad atravesar el Puente de las Artes?") -------------------------------------------------------------------------------- /infra/main.bicep: -------------------------------------------------------------------------------- 1 | targetScope = 'subscription' 2 | 3 | @minLength(1) 4 | @maxLength(64) 5 | @description('Name of the environment which is used to generate a short unique hash used in all resources.') 6 | param environmentName string 7 | 8 | @minLength(1) 9 | @description('Location for the OpenAI resource') 10 | // https://learn.microsoft.com/azure/ai-services/openai/concepts/models?tabs=python-secure%2Cglobal-standard%2Cstandard-chat-completions#models-by-deployment-type 11 | @allowed([ 12 | 'australiaeast' 13 | 'brazilsouth' 14 | 'canadaeast' 15 | 'eastus' 16 | 'eastus2' 17 | 'francecentral' 18 | 'germanywestcentral' 19 | 'japaneast' 20 | 'koreacentral' 21 | 'northcentralus' 22 | 'norwayeast' 23 | 'polandcentral' 24 | 'southafricanorth' 25 | 'southcentralus' 26 | 'southindia' 27 | 'spaincentral' 28 | 'swedencentral' 29 | 'switzerlandnorth' 30 | 'uaenorth' 31 | 'uksouth' 32 | 'westeurope' 33 | 'westus' 34 | 'westus3' 35 | ]) 36 | @metadata({ 37 | azd: { 38 | type: 'location' 39 | } 40 | }) 41 | param location string 42 | 43 | @description('Name of the GPT model to deploy') 44 | param gptModelName string = 'gpt-4o' 45 | 46 | @description('Version of the GPT model to deploy') 47 | // See version availability in this table: 48 | // https://learn.microsoft.com/azure/ai-services/openai/concepts/models?tabs=python-secure%2Cglobal-standard%2Cstandard-chat-completions#models-by-deployment-type 49 | param gptModelVersion string = '2024-08-06' 50 | 51 | @description('Name of the model deployment (can be different from the model name)') 52 | param gptDeploymentName string = 'gpt-4o' 53 | 54 | 
@description('Capacity of the GPT deployment') 55 | // You can increase this, but capacity is limited per model/region, so you will get errors if you go over 56 | // https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits 57 | param gptDeploymentCapacity int = 30 58 | 59 | 60 | @description('Name of the text embedding model to deploy') 61 | param embeddingModelName string = 'text-embedding-3-large' 62 | 63 | @description('Version of the text embedding model to deploy') 64 | // See version availability in this table: 65 | // https://learn.microsoft.com/azure/ai-services/openai/concepts/models?tabs=python-secure%2Cglobal-standard%2Cstandard-chat-completions#models-by-deployment-type 66 | param embeddingModelVersion string = '1' 67 | 68 | @description('Name of the model deployment (can be different from the model name)') 69 | param embeddingDeploymentName string = 'text-embedding-3-large' 70 | 71 | @description('Capacity of the text embedding deployment') 72 | // You can increase this, but capacity is limited per model/region, so you will get errors if you go over 73 | // https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits 74 | param embeddingDeploymentCapacity int = 30 75 | 76 | @description('Id of the user or app to assign application roles') 77 | param principalId string = '' 78 | 79 | @description('Non-empty if the deployment is running on GitHub Actions') 80 | param runningOnGitHub string = '' 81 | 82 | var principalType = empty(runningOnGitHub) ? 
'User' : 'ServicePrincipal' 83 | 84 | var resourceToken = toLower(uniqueString(subscription().id, environmentName, location)) 85 | var prefix = '${environmentName}${resourceToken}' 86 | var tags = { 'azd-env-name': environmentName } 87 | 88 | // Organize resources in a resource group 89 | resource resourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = { 90 | name: '${prefix}-rg' 91 | location: location 92 | tags: tags 93 | } 94 | 95 | var openAiServiceName = '${prefix}-openai' 96 | module openAi 'br/public:avm/res/cognitive-services/account:0.7.1' = { 97 | name: 'openai' 98 | scope: resourceGroup 99 | params: { 100 | name: openAiServiceName 101 | location: location 102 | tags: tags 103 | kind: 'OpenAI' 104 | sku: 'S0' 105 | customSubDomainName: openAiServiceName 106 | networkAcls: { 107 | defaultAction: 'Allow' 108 | bypass: 'AzureServices' 109 | } 110 | deployments: [ 111 | { 112 | name: gptDeploymentName 113 | model: { 114 | format: 'OpenAI' 115 | name: gptModelName 116 | version: gptModelVersion 117 | } 118 | sku: { 119 | name: 'GlobalStandard' 120 | capacity: gptDeploymentCapacity 121 | } 122 | } 123 | { 124 | name: embeddingDeploymentName 125 | model: { 126 | format: 'OpenAI' 127 | name: embeddingModelName 128 | version: embeddingModelVersion 129 | } 130 | sku: { 131 | name: 'GlobalStandard' 132 | capacity: embeddingDeploymentCapacity 133 | } 134 | } 135 | ] 136 | roleAssignments: [ 137 | { 138 | principalId: principalId 139 | roleDefinitionIdOrName: 'Cognitive Services OpenAI User' 140 | principalType: principalType 141 | } 142 | ] 143 | } 144 | } 145 | 146 | output AZURE_LOCATION string = location 147 | output AZURE_TENANT_ID string = tenant().tenantId 148 | output AZURE_RESOURCE_GROUP string = resourceGroup.name 149 | 150 | // Specific to Azure OpenAI 151 | output AZURE_OPENAI_SERVICE string = openAi.outputs.name 152 | output AZURE_OPENAI_ENDPOINT string = openAi.outputs.endpoint 153 | 154 | output AZURE_OPENAI_CHAT_MODEL string = gptModelName 155 
| output AZURE_OPENAI_CHAT_DEPLOYMENT string = gptDeploymentName 156 | output AZURE_OPENAI_EMBEDDING_MODEL string = embeddingModelName 157 | output AZURE_OPENAI_EMBEDDING_DEPLOYMENT string = embeddingDeploymentName 158 | -------------------------------------------------------------------------------- /infra/main.parameters.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", 3 | "contentVersion": "1.0.0.0", 4 | "parameters": { 5 | "environmentName": { 6 | "value": "${AZURE_ENV_NAME}" 7 | }, 8 | "location": { 9 | "value": "${AZURE_LOCATION}" 10 | }, 11 | "principalId": { 12 | "value": "${AZURE_PRINCIPAL_ID}" 13 | }, 14 | "runningOnGitHub": { 15 | "value": "${GITHUB_ACTIONS}" 16 | } 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /infra/write_dot_env.ps1: -------------------------------------------------------------------------------- 1 | # Clear the contents of the .env file 2 | Set-Content -Path .env -Value "" 3 | 4 | # Append new values to the .env file 5 | $azureTenantId = azd env get-value AZURE_TENANT_ID 6 | $azureOpenAiService = azd env get-value AZURE_OPENAI_SERVICE 7 | $azureOpenAiEndpoint = azd env get-value AZURE_OPENAI_ENDPOINT 8 | $azureOpenAiChatDeployment = azd env get-value AZURE_OPENAI_CHAT_DEPLOYMENT 9 | $azureOpenAiChatModel = azd env get-value AZURE_OPENAI_CHAT_MODEL 10 | $azureOpenAiEmbeddingDeployment = azd env get-value AZURE_OPENAI_EMBEDDING_DEPLOYMENT 11 | $azureOpenAiEmbeddingModel = azd env get-value AZURE_OPENAI_EMBEDDING_MODEL 12 | 13 | Add-Content -Path .env -Value "API_HOST=azure" 14 | Add-Content -Path .env -Value "AZURE_TENANT_ID=$azureTenantId" 15 | Add-Content -Path .env -Value "AZURE_OPENAI_SERVICE=$azureOpenAiService" 16 | Add-Content -Path .env -Value "AZURE_OPENAI_ENDPOINT=$azureOpenAiEndpoint" 17 | Add-Content -Path .env -Value 
"AZURE_OPENAI_VERSION=2024-10-21" 18 | Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_DEPLOYMENT=$azureOpenAiChatDeployment" 19 | Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_MODEL=$azureOpenAiChatModel" 20 | Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$azureOpenAiEmbeddingDeployment" 21 | Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_MODEL=$azureOpenAiEmbeddingModel" 22 | -------------------------------------------------------------------------------- /infra/write_dot_env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Clear the contents of the .env file 4 | > .env 5 | 6 | # Append new values to the .env file 7 | echo "API_HOST=azure" >> .env 8 | echo "AZURE_TENANT_ID=$(azd env get-value AZURE_TENANT_ID)" >> .env 9 | echo "AZURE_OPENAI_SERVICE=$(azd env get-value AZURE_OPENAI_SERVICE)" >> .env 10 | echo "AZURE_OPENAI_ENDPOINT=$(azd env get-value AZURE_OPENAI_ENDPOINT)" >> .env 11 | echo "AZURE_OPENAI_VERSION=2024-10-21" >> .env 12 | echo "AZURE_OPENAI_CHAT_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_CHAT_DEPLOYMENT)" >> .env 13 | echo "AZURE_OPENAI_CHAT_MODEL=$(azd env get-value AZURE_OPENAI_CHAT_MODEL)" >> .env 14 | echo "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_EMBEDDING_DEPLOYMENT)" >> .env 15 | echo "AZURE_OPENAI_EMBEDDING_MODEL=$(azd env get-value AZURE_OPENAI_EMBEDDING_MODEL)" >> .env 16 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.ruff] 2 | line-length = 1000 3 | target-version = "py310" 4 | lint.select = ["E", "F", "I", "UP"] 5 | lint.ignore = ["D203"] 6 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | azure-identity 2 | openai 3 | python-dotenv 4 | 
pydantic 5 | rich 6 | dotenv-azd 7 | aiohttp 8 | autogen-agentchat 9 | autogen-ext[openai] 10 | azure-ai-inference==1.0.0b9 11 | openai-agents 12 | semantic-kernel 13 | langgraph 14 | langchain_openai 15 | pydantic-ai 16 | llama-index 17 | llama-index-llms-azure-openai 18 | llama-index-llms-openai-like 19 | llama-index-embeddings-azure-openai 20 | smolagents 21 | --------------------------------------------------------------------------------
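
Every example script in this repo selects its backend the same way: `API_HOST=azure` (written to `.env` by `infra/write_dot_env.sh` / `write_dot_env.ps1`) routes to the Azure OpenAI deployment, and anything else falls back to GitHub Models behind an OpenAI-compatible endpoint. A minimal sketch of that shared selection logic — the helper name `resolve_model_config` is illustrative, not part of the repo:

```python
def resolve_model_config(env: dict) -> dict:
    """Illustrative sketch of the API_HOST switch used across the examples.

    With API_HOST=azure, the endpoint, API version and chat model come from
    the azd-written .env; otherwise GitHub Models is used with an
    OpenAI-compatible base URL and the GITHUB_MODEL default of gpt-4o.
    """
    if env.get("API_HOST", "github") == "azure":
        return {
            "backend": "azure",
            "endpoint": env["AZURE_OPENAI_ENDPOINT"],
            "api_version": env["AZURE_OPENAI_VERSION"],
            "model": env["AZURE_OPENAI_CHAT_MODEL"],
        }
    # GitHub Models: plain OpenAI-compatible endpoint, authenticated by token
    return {
        "backend": "github",
        "endpoint": "https://models.inference.ai.azure.com",
        "model": env.get("GITHUB_MODEL", "gpt-4o"),
    }
```

With no `.env` at all this resolves to GitHub Models with `gpt-4o`, matching the `os.getenv("API_HOST", "github")` defaults in the example scripts.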