├── samples ├── curl │ ├── sample.png │ ├── basic.sh │ ├── streaming.sh │ ├── README.md │ ├── multi_turn.sh │ └── image_sample.sh ├── js │ ├── openai │ │ ├── sample.png │ │ ├── README.md │ │ ├── basic.js │ │ ├── multi_turn.js │ │ ├── streaming.js │ │ ├── embeddings.js │ │ ├── chat_with_image_file.js │ │ └── tools.js │ ├── azure_ai_inference │ │ ├── sample.png │ │ ├── README.md │ │ ├── basic.js │ │ ├── multi_turn.js │ │ ├── embeddings.js │ │ ├── streaming.js │ │ ├── chat_with_image_file.js │ │ └── tools.js │ ├── mistralai │ │ ├── README.md │ │ ├── basic.js │ │ ├── multi_turn.js │ │ ├── streaming.js │ │ └── tools.js │ └── README.md ├── python │ ├── openai │ │ ├── sample.png │ │ ├── embeddings.py │ │ ├── basic.py │ │ ├── streaming.py │ │ ├── README.md │ │ ├── multi_turn.py │ │ ├── chat_with_image_file.py │ │ ├── tools.py │ │ └── embeddings_getting_started.ipynb │ ├── azure_ai_inference │ │ ├── sample.png │ │ ├── README.md │ │ ├── embeddings.py │ │ ├── basic.py │ │ ├── chat_with_image_file.py │ │ ├── streaming.py │ │ ├── multi_turn.py │ │ └── tools.py │ ├── mistralai │ │ ├── README.md │ │ ├── basic.py │ │ ├── streaming.py │ │ ├── multi_turn.py │ │ └── tools.py │ ├── README.md │ └── azure_ai_evaluation │ │ └── evaluation.py └── README.md ├── .vscode ├── extensions.json └── launch.json ├── cookbooks ├── python │ ├── openai │ │ ├── data │ │ │ ├── Chinook.db │ │ │ ├── sad-puppy.png │ │ │ ├── hotel_invoices │ │ │ │ ├── hotel_DB.db │ │ │ │ ├── receipts_2019_de_hotel │ │ │ │ │ ├── 20190119_002.pdf │ │ │ │ │ ├── hampton-25789.pdf │ │ │ │ │ ├── hampton_24361.pdf │ │ │ │ │ ├── hampton_28646.pdf │ │ │ │ │ ├── madison-489347.pdf │ │ │ │ │ ├── madison-490057.pdf │ │ │ │ │ ├── madison-490969.pdf │ │ │ │ │ ├── madison-492602.pdf │ │ │ │ │ ├── madison-493304.pdf │ │ │ │ │ ├── madison-496987.pdf │ │ │ │ │ ├── madison-502875.pdf │ │ │ │ │ ├── madison-5036666.pdf │ │ │ │ │ ├── madison_497810.pdf │ │ │ │ │ ├── citadines_08372561.pdf │ │ │ │ │ ├── hampton_20190411.pdf │ │ │ │ │ ├── 
mercure-37816396.pdf │ │ │ │ │ ├── motelone-524544306.pdf │ │ │ │ │ ├── motelone_20191111.pdf │ │ │ │ │ ├── motelone_20191118.pdf │ │ │ │ │ ├── moxy-20191221_007.pdf │ │ │ │ │ ├── moxy_20191221_006.pdf │ │ │ │ │ ├── premierinn_GABCI19014325.pdf │ │ │ │ │ ├── 20190202_THE MADISON HAMBURG.pdf │ │ │ │ │ ├── citadines-20190331_Invoice.pdf │ │ │ │ │ ├── 20190202_THE MADISON HAMBURG_001.pdf │ │ │ │ │ ├── madison_folio_g_cp_efolio5895702.pdf │ │ │ │ │ ├── madison_folio_g_cp_efolio5895707.pdf │ │ │ │ │ ├── madison_folio_g_cp_efolio5945547.pdf │ │ │ │ │ ├── madison_folio_g_cp_efolio5972171.pdf │ │ │ │ │ ├── madison_folio_g_cp_efolio5976009.pdf │ │ │ │ │ └── madison_folio_g_cp_efolio5991896.pdf │ │ │ │ ├── invoice_schema.json │ │ │ │ ├── transformed_invoice_json │ │ │ │ │ ├── transformed_madison-502875_extracted.json │ │ │ │ │ ├── transformed_madison_folio_g_cp_efolio5895707_extracted.json │ │ │ │ │ └── transformed_hampton_20190411_extracted.json │ │ │ │ └── extracted_invoice_json │ │ │ │ │ ├── madison_folio_g_cp_efolio5895707_extracted.json │ │ │ │ │ ├── madison-502875_extracted.json │ │ │ │ │ └── hampton_20190411_extracted.json │ │ │ ├── fia_f1_power_unit_financial_regulations_issue_1_-_2022-08-16.pdf │ │ │ └── example_events_openapi.json │ │ ├── images │ │ │ ├── elt_workflow.png │ │ │ └── sample_hotel_invoice.png │ │ ├── README.md │ │ └── How_to_stream_completions.ipynb │ ├── README.md │ ├── langchain │ │ └── README.md │ ├── mistralai │ │ └── README.md │ └── llamaindex │ │ ├── README.md │ │ ├── data │ │ ├── product_info_7.md │ │ ├── product_info_14.md │ │ ├── product_info_12.md │ │ ├── product_info_15.md │ │ └── product_info_11.md │ │ └── rag_getting_started.ipynb └── README.md ├── requirements.txt ├── .env-sample ├── .devcontainer ├── first-run-notice.txt ├── bootstrap ├── Dockerfile └── devcontainer.json ├── package.json ├── .gitignore ├── LICENSE └── README.md /samples/curl/sample.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/github/codespaces-models/main/samples/curl/sample.png -------------------------------------------------------------------------------- /samples/js/openai/sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/samples/js/openai/sample.png -------------------------------------------------------------------------------- /.vscode/extensions.json: -------------------------------------------------------------------------------- 1 | { 2 | "unwantedRecommendations": [ 3 | "ms-azuretools.vscode-docker" 4 | ] 5 | } -------------------------------------------------------------------------------- /samples/python/openai/sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/samples/python/openai/sample.png -------------------------------------------------------------------------------- /cookbooks/python/openai/data/Chinook.db: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/Chinook.db -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/samples/js/azure_ai_inference/sample.png -------------------------------------------------------------------------------- /cookbooks/python/openai/data/sad-puppy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/sad-puppy.png -------------------------------------------------------------------------------- 
/samples/python/azure_ai_inference/sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/samples/python/azure_ai_inference/sample.png -------------------------------------------------------------------------------- /cookbooks/python/openai/images/elt_workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/images/elt_workflow.png -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | azure-ai-inference~=1.0.0b4 2 | azure-ai-evaluation~=1.1.0 3 | openai~=1.72.0 4 | mistralai~=0.4.2 5 | python-dotenv~=1.0.1 6 | ipykernel~=6.29.5 7 | -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/hotel_DB.db: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/hotel_DB.db -------------------------------------------------------------------------------- /cookbooks/python/openai/images/sample_hotel_invoice.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/images/sample_hotel_invoice.png -------------------------------------------------------------------------------- /.env-sample: -------------------------------------------------------------------------------- 1 | # get your pat token from: https://github.com/settings/tokens?type=beta 2 | # if creating a new token, ensure it has `models: read` permissions 3 | GITHUB_TOKEN="github_pat_****" 4 | 
-------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190119_002.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190119_002.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton-25789.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton-25789.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_24361.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_24361.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_28646.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_28646.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-489347.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-489347.pdf 
-------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-490057.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-490057.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-490969.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-490969.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-492602.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-492602.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-493304.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-493304.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-496987.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-496987.pdf 
-------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-502875.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-502875.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-5036666.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison-5036666.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_497810.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_497810.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/citadines_08372561.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/citadines_08372561.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_20190411.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/hampton_20190411.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/mercure-37816396.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/mercure-37816396.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone-524544306.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone-524544306.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone_20191111.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone_20191111.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone_20191118.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/motelone_20191118.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/moxy-20191221_007.pdf: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/moxy-20191221_007.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/moxy_20191221_006.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/moxy_20191221_006.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/fia_f1_power_unit_financial_regulations_issue_1_-_2022-08-16.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/fia_f1_power_unit_financial_regulations_issue_1_-_2022-08-16.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/premierinn_GABCI19014325.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/premierinn_GABCI19014325.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190202_THE MADISON HAMBURG.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190202_THE MADISON HAMBURG.pdf -------------------------------------------------------------------------------- 
/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/citadines-20190331_Invoice.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/citadines-20190331_Invoice.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190202_THE MADISON HAMBURG_001.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/20190202_THE MADISON HAMBURG_001.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5895702.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5895702.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5895707.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5895707.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5945547.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5945547.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5972171.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5972171.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5976009.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5976009.pdf -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5991896.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/github/codespaces-models/main/cookbooks/python/openai/data/hotel_invoices/receipts_2019_de_hotel/madison_folio_g_cp_efolio5991896.pdf -------------------------------------------------------------------------------- /.devcontainer/first-run-notice.txt: -------------------------------------------------------------------------------- 1 | 👋 Welcome to your shiny new Codespace for interacting with GitHub Models! We've got everything fired up and ready for you to explore AI Models hosted on Azure AI. 2 | 3 | Take a look at the README to find all of the information you need to get started. 
4 | 5 | -------------------------------------------------------------------------------- /.devcontainer/bootstrap: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script runs during the container prebuild step and is used to cache dependencies 4 | # and other artifacts that are not expected to change frequently. 5 | # 6 | 7 | ROOT_DIR=/workspaces/codespaces-models 8 | 9 | npm install ${ROOT_DIR} 10 | 11 | pip install -r ${ROOT_DIR}/requirements.txt 12 | 13 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "codespaces-models", 3 | "version": "0.0.1", 4 | "type": "module", 5 | "private": true, 6 | "dependencies": { 7 | "@azure-rest/ai-inference": "^1.0.0-beta.2", 8 | "@azure/core-auth": "^1.7.2", 9 | "@azure/core-sse": "^2.1.2", 10 | "@mistralai/mistralai": "^0.5.0", 11 | "openai": "^4.52.7" 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /cookbooks/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Cookbooks 2 | 3 | This folder contains cookbooks for interacting with GitHub Models in several languages. Similar cookbooks are available on model providers' websites. 4 | 5 | - [Python](python/README.md) 6 | 7 | The samples were modified slightly to better run with the GitHub Models service.
8 | -------------------------------------------------------------------------------- /.devcontainer/Dockerfile: -------------------------------------------------------------------------------- 1 | ARG VARIANT="focal" 2 | FROM buildpack-deps:${VARIANT}-curl 3 | 4 | LABEL dev.containers.features="common" 5 | 6 | COPY first-run-notice.txt /tmp/scripts/ 7 | 8 | # Move first run notice to the right spot 9 | RUN mkdir -p "/usr/local/etc/vscode-dev-containers/" \ 10 | && mv -f /tmp/scripts/first-run-notice.txt /usr/local/etc/vscode-dev-containers/ 11 | 12 | # Remove scripts now that we're done with them 13 | RUN rm -rf /tmp/scripts 14 | -------------------------------------------------------------------------------- /cookbooks/python/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Cookbooks - Python 2 | 3 | This folder contains cookbooks for interacting with GitHub Models using Python. Similar cookbooks are available on model providers' websites. 4 | 5 | - [OpenAI](openai/README.md) 6 | - [Mistral AI](mistralai/README.md) 7 | - [LlamaIndex](llamaindex/README.md) 8 | - [LangChain](langchain/README.md) 9 | 10 | The samples were modified slightly to better run with the GitHub Models service. 11 | -------------------------------------------------------------------------------- /cookbooks/python/langchain/README.md: -------------------------------------------------------------------------------- 1 | # LangChain cookbooks 2 | 3 | This folder contains examples of how to achieve specific tasks using the LangChain framework. LangChain is a modular framework designed to streamline the development and integration of applications utilizing large language models (LLMs) by providing tools for effective prompt engineering, data handling, and complex workflows.
4 | 5 | ## Examples 6 | 7 | - [LangChain](lc_openai_getting_started.ipynb): Examples of how LangChain can be used with models provided by the GitHub Models service 8 | -------------------------------------------------------------------------------- /samples/curl/basic.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | curl -X POST "https://models.github.ai/inference/chat/completions" \ 3 | -H "Content-Type: application/json" \ 4 | -H "Authorization: Bearer $GITHUB_TOKEN" \ 5 | -d '{ 6 | "messages": [ 7 | { 8 | "role": "system", 9 | "content": "You are a helpful assistant." 10 | }, 11 | { 12 | "role": "user", 13 | "content": "What is the capital of France?" 14 | } 15 | ], 16 | "model": "openai/gpt-4o-mini" 17 | }' 18 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .vscode/* 2 | !.vscode/extensions.json 3 | !.vscode/launch.json 4 | __pycache__/ 5 | .env 6 | .DS_Store 7 | cookbooks/python/openai/vector_databases/chroma/vector_database_wikipedia_articles_embedded*.zip 8 | cookbooks/python/openai/vector_databases/data/vector_database_wikipedia_articles_embedded.csv 9 | .env.save 10 | cookbooks/python/openai/data/hotel_invoices/transformed_invoice_json/* 11 | cookbooks/python/openai/data/hotel_invoices/extracted_invoice_json/* 12 | cookbooks/python/openai/data/hotel_invoices/hotel_DB.db 13 | cookbooks/python/openai/hallucination_results.csv 14 | node_modules/ -------------------------------------------------------------------------------- /samples/curl/streaming.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | curl -X POST "https://models.github.ai/inference/chat/completions" \ 3 | -H "Content-Type: application/json" \ 4 | -H "Authorization: Bearer $GITHUB_TOKEN" \ 5 | -d '{ 6 | "messages": [ 7 | { 8 | "role": "system", 9 | 
"content": "You are a helpful assistant." 10 | }, 11 | { 12 | "role": "user", 13 | "content": "Give me 5 good reasons why I should exercise every day." 14 | } 15 | ], 16 | "stream": true, 17 | "model": "openai/gpt-4o-mini" 18 | }' 19 | -------------------------------------------------------------------------------- /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | "configurations": [ 3 | { 4 | "name": "Run JavaScript Sample", 5 | "program": "${file}", 6 | "cwd": "${fileDirname}", 7 | "envFile": "${workspaceFolder}/.env", 8 | "outputCapture": "std", 9 | "request": "launch", 10 | "skipFiles": [ 11 | "<node_internals>/**" 12 | ], 13 | "type": "node" 14 | }, 15 | { 16 | "name": "Run Python Sample", 17 | "program": "${file}", 18 | "cwd": "${fileDirname}", 19 | "envFile": "${workspaceFolder}/.env", 20 | "redirectOutput": false, 21 | "request": "launch", 22 | "type": "debugpy" 23 | } 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /samples/curl/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples - cURL 2 | 3 | This folder contains samples for interacting with GitHub Models using cURL. 4 | 5 | - [basic.sh](basic.sh): Provide a simple prompt to gpt-4o-mini and display the chat response. 6 | - [multi_turn.sh](multi_turn.sh): Provide a conversation history to gpt-4o-mini, and display the response to the most recent chat prompt. 7 | - [streaming.sh](streaming.sh): Provide a simple prompt to gpt-4o-mini and stream the response one token at a time. 8 | - [image_sample.sh](image_sample.sh): Have gpt-4o-mini respond to a prompt that includes image data.
9 | 10 | ## Running a sample 11 | 12 | To run a cURL sample, run the following command in your terminal: 13 | 14 | ```shell 15 | # To run the multi-turn sample: 16 | $ ./samples/curl/multi_turn.sh 17 | ``` 18 | -------------------------------------------------------------------------------- /samples/js/openai/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for OpenAI JavaScript SDK 2 | 3 | This folder contains samples that leverage the OpenAI JavaScript SDK with the GitHub Models endpoint. 4 | 5 | This codespace comes with dependencies pre-installed. If you want to use this code outside of this codespace, install the OpenAI SDK using `npm`: 6 | 7 | ```shell 8 | npm install openai 9 | ``` 10 | 11 | ## Running a sample 12 | 13 | To run a JavaScript sample, run a command like the following in your terminal: 14 | 15 | ```shell 16 | node samples/js/openai/multi_turn.js 17 | ``` 18 | 19 | * [basic.js](basic.js): basic call to the gpt-4o-mini chat completion API 20 | * [multi_turn.js](multi_turn.js): multi-turn conversation with the chat completion API 21 | * [streaming.js](streaming.js): generate a response in streaming mode, token by token 22 | * [embeddings.js](embeddings.js): generate embeddings for a set of input phrases 23 | * [chat_with_image_file.js](chat_with_image_file.js): chat completion with an image file included in the prompt 24 | * [tools.js](tools.js): let the model call provided functions (tools) as part of the chat completion -------------------------------------------------------------------------------- /samples/python/openai/embeddings.py: -------------------------------------------------------------------------------- 1 | import os 2 | from openai import OpenAI 3 | 4 | token = os.environ["GITHUB_TOKEN"] 5 | endpoint = "https://models.github.ai/inference" 6 | 7 | # Pick one of the OpenAI embeddings models from the GitHub Models service 8 | model_name = "text-embedding-3-small" 9 | 10 | client = OpenAI( 11 | base_url=endpoint, 12 | api_key=token, 13 | ) 14 | 15 | response = client.embeddings.create( 16 | input=["first phrase", "second phrase", "third phrase"], 17 | model=model_name, 18 | ) 19 | 20 | for item in response.data: 21 | length = len(item.embedding) 22 | print(
f"data[{item.index}]: length={length}, " 24 | f"[{item.embedding[0]}, {item.embedding[1]}, " 25 | f"..., {item.embedding[length-2]}, {item.embedding[length-1]}]" 26 | ) 27 | print(response.usage) 28 | -------------------------------------------------------------------------------- /samples/js/mistralai/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for Mistral JavaScript SDK 2 | 3 | This folder contains samples that leverage the Mistral JavaScript SDK with the GitHub Models endpoint. 4 | 5 | This codespace comes with dependencies pre-installed. If you want to use this code outside of this codespace, install the Mistral SDK using `npm`: 6 | 7 | ```shell 8 | npm install @mistralai/mistralai 9 | ``` 10 | 11 | ## Running a sample 12 | 13 | To run a JavaScript sample, run a command like the following in your terminal: 14 | 15 | ```shell 16 | node samples/js/mistralai/multi_turn.js 17 | ``` 18 | 19 | * [basic.js](basic.js): basic call to the chat completion API using a Mistral model 20 | * [multi_turn.js](multi_turn.js): multi-turn conversation with the chat completion API 21 | * [streaming.js](streaming.js): generate a response in streaming mode, token by token 22 | * [tools.js](tools.js): let the model call provided functions (tools) as part of the chat completion -------------------------------------------------------------------------------- /samples/curl/multi_turn.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | curl -X POST "https://models.github.ai/inference/chat/completions" \ 3 | -H "Content-Type: application/json" \ 4 | -H "Authorization: Bearer $GITHUB_TOKEN" \ 5 | -d '{ 6 | "messages": [ 7 | { 8 | "role": "system", 9 | "content": "You are a helpful assistant." 10 | }, 11 | { 12 | "role": "user", 13 | "content": "What is the capital of France?" 14 | }, 15 | { 16 | "role": "assistant", 17 | "content": "The capital of France is Paris." 18 | }, 19 | { 20 | "role": "user", 21 | "content": "What about Spain?"
22 | } 23 | ], 24 | "model": "openai/gpt-4o-mini" 25 | }' 26 | -------------------------------------------------------------------------------- /cookbooks/python/mistralai/README.md: -------------------------------------------------------------------------------- 1 | # Mistral Cookbook Examples 2 | 3 | This folder contains examples adapted from the [Mistral Cookbook repository](https://github.com/mistralai/cookbook). 4 | 5 | The samples were modified slightly to better run with the GitHub Models service. 6 | 7 | ## Examples 8 | 9 | The following cookbook examples are available: 10 | 11 | - [Evaluation](evaluation.ipynb): Provides a number of examples for evaluating the performance of tasks performed by an LLM, concretely information extraction, code generation, and summarization 12 | - [Function Calling](function_calling.ipynb): Simple example to demonstrate how function calling works with Mistral models 13 | - [Prefix: Use Cases](prefix_use_cases.ipynb): Add a prefix to the model's response via the API 14 | - [Prompting Capabilities](prompting_capabilities.ipynb): Example prompts showing classification, summarization, personalization, and evaluation -------------------------------------------------------------------------------- /samples/js/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples - JavaScript 2 | 3 | This folder contains samples for interacting with GitHub Models using JavaScript. 4 | 5 | Multiple model-specific SDKs are compatible with the endpoint served under the GitHub Models catalog, such as the [`openai`](./openai/README.md) and [`mistralai`](./mistralai/README.md) packages for their respective models. This makes it easy to port your existing code using one of those SDKs. 6 | 7 | You can also use the [`azure-ai-inference`](./azure_ai_inference/README.md) package for a cross-model unified SDK.
8 | 9 | ## SDKs 10 | - [`openai`](./openai/README.md) 11 | - [`azure-ai-inference`](./azure_ai_inference/README.md) 12 | - [`mistralai`](./mistralai/README.md) 13 | 14 | ## Running a sample 15 | 16 | To run a JavaScript sample, run a command like the following in your terminal: 17 | 18 | ```shell 19 | node samples/js/azure_ai_inference/multi_turn.js 20 | ``` 21 | -------------------------------------------------------------------------------- /samples/curl/image_sample.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | SCRIPT_DIR=$(dirname "$0") 3 | PAYLOAD_FILE="payload.json" 4 | # base64 -w0 (GNU coreutils) disables line wrapping so the data URL stays valid JSON 5 | IMAGE_DATA="$(base64 -w0 "${SCRIPT_DIR}/sample.png")" 6 | echo '{ 7 | "messages": [ 8 | { 9 | "role": "system", 10 | "content": "You are a helpful assistant that describes images in detail." 11 | }, 12 | { 13 | "role": "user", 14 | "content": [{"text": "What'\''s in this image?", "type": "text"}, {"image_url": {"url":"data:image/png;base64,'"${IMAGE_DATA}"'","detail":"low"}, "type": "image_url"}] 15 | } 16 | ], 17 | "model": "openai/gpt-4o-mini" 18 | }' > "$PAYLOAD_FILE" 19 | 20 | curl -X POST "https://models.github.ai/inference/chat/completions" \ 21 | -H "Content-Type: application/json" \ 22 | -H "Authorization: Bearer $GITHUB_TOKEN" \ 23 | -d @"$PAYLOAD_FILE" 24 | echo 25 | rm -f "$PAYLOAD_FILE" -------------------------------------------------------------------------------- /samples/python/mistralai/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for Mistral Python SDK 2 | 3 | This folder contains samples that leverage the Mistral Python SDK with the GitHub Models endpoint. 4 | 5 | ## Jupyter Notebooks 6 | 7 | To run these notebooks, click on a link below to open it in Codespaces and select a Python3 kernel. 8 | 9 | * [getting_started.ipynb](getting_started.ipynb). Basic interaction, multi-turn conversations, image inputs, streamed responses, Functions API.
10 | 11 | ## Running a sample 12 | 13 | To run these scripts, open your terminal and run a command like: 14 | 15 | ```shell 16 | python3 samples/python/mistralai/basic.py 17 | ``` 18 | 19 | * [basic.py](basic.py): basic call to the Mistral-small chat completion API 20 | * [multi_turn.py](multi_turn.py): multi-turn conversation with the chat completion API 21 | * [streaming.py](streaming.py): generate a response in streaming mode, token by token 22 | -------------------------------------------------------------------------------- /samples/js/mistralai/basic.js: -------------------------------------------------------------------------------- 1 | import MistralClient from '@mistralai/mistralai'; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Mistral models from the GitHub Models service */ 7 | const modelName = "mistral-ai/Mistral-small"; 8 | 9 | export async function main() { 10 | 11 | const client = new MistralClient(token, endpoint); 12 | 13 | const response = await client.chat({ 14 | model: modelName, 15 | messages: [ 16 | { role:"system", content: "You are a helpful assistant." }, 17 | { role:"user", content: "What is the capital of France?" } 18 | ], 19 | // Optional parameters 20 | temperature: 1., 21 | max_tokens: 1000, 22 | top_p: 1. 23 | }); 24 | 25 | console.log(response.choices[0].message.content); 26 | } 27 | 28 | main().catch((err) => { 29 | console.error("The sample encountered an error:", err); 30 | }); 31 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for Azure AI Inference SDK 2 | 3 | This folder contains samples that leverage the Azure AI Inference Python SDK with the GitHub Models endpoint. 
4 | 5 | ## Jupyter Notebooks 6 | 7 | To run these notebooks, click on a link below to open it in Codespaces and select a Python3 kernel. 8 | 9 | * [getting_started.ipynb](getting_started.ipynb). Basic interaction, multi-turn conversations, image inputs, streamed responses, Functions API. 10 | 11 | ## Running a sample 12 | 13 | To run these scripts, open your terminal and run a command like: 14 | 15 | ```shell 16 | python3 samples/python/azure_ai_inference/basic.py 17 | ``` 18 | 19 | * [basic.py](basic.py): basic call to the gpt-4o-mini chat completion API 20 | * [multi_turn.py](multi_turn.py): multi-turn conversation with the chat completion API 21 | * [streaming.py](streaming.py): generate a response in streaming mode, token by token 22 | -------------------------------------------------------------------------------- /samples/js/openai/basic.js: -------------------------------------------------------------------------------- 1 | import OpenAI from "openai"; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Azure OpenAI models from the GitHub Models service */ 7 | const modelName = "openai/gpt-4o-mini"; 8 | 9 | export async function main() { 10 | 11 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 12 | 13 | const response = await client.chat.completions.create({ 14 | messages: [ 15 | { role:"system", content: "You are a helpful assistant." }, 16 | { role:"user", content: "What is the capital of France?" } 17 | ], 18 | model: modelName, 19 | // Optional parameters 20 | temperature: 1., 21 | max_tokens: 1000, 22 | top_p: 1. 
23 | }); 24 | 25 | console.log(response.choices[0].message.content); 26 | } 27 | 28 | main().catch((err) => { 29 | console.error("The sample encountered an error:", err); 30 | }); 31 | -------------------------------------------------------------------------------- /samples/js/mistralai/multi_turn.js: -------------------------------------------------------------------------------- 1 | import MistralClient from '@mistralai/mistralai'; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Mistral models from the GitHub Models service */ 7 | const modelName = "mistral-ai/Mistral-small"; 8 | 9 | export async function main() { 10 | 11 | const client = new MistralClient(token, endpoint); 12 | 13 | const response = await client.chat({ 14 | model: modelName, 15 | messages: [ 16 | { role: "system", content: "You are a helpful assistant." }, 17 | { role: "user", content: "What is the capital of France?" }, 18 | { role: "assistant", content: "The capital of France is Paris." }, 19 | { role: "user", content: "What about Spain?" } 20 | ], 21 | }); 22 | 23 | console.log(response.choices[0].message.content); 24 | } 25 | 26 | main().catch((err) => { 27 | console.error("The sample encountered an error:", err); 28 | }); -------------------------------------------------------------------------------- /samples/python/mistralai/basic.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a basic call to the chat completion API. 2 | It is leveraging your endpoint and key. 
The call is synchronous.""" 3 | 4 | import os 5 | from mistralai.client import MistralClient 6 | from mistralai.models.chat_completion import ChatMessage 7 | 8 | token = os.environ["GITHUB_TOKEN"] 9 | endpoint = "https://models.github.ai/inference" 10 | 11 | # Pick one of the Mistral models from the GitHub Models service 12 | model_name = "Mistral-small" 13 | 14 | client = MistralClient(api_key=token, endpoint=endpoint) 15 | 16 | response = client.chat( 17 | model=model_name, 18 | messages=[ 19 | ChatMessage(role="system", content="You are a helpful assistant."), 20 | ChatMessage(role="user", content="What is the capital of France?"), 21 | ], 22 | # Optional parameters 23 | temperature=1., 24 | max_tokens=1000, 25 | top_p=1. 26 | ) 27 | 28 | print(response.choices[0].message.content) 29 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/README.md: -------------------------------------------------------------------------------- 1 | # LlamaIndex cookbooks 2 | 3 | This folder contains examples of how to achieve specific tasks using the [LlamaIndex framework](https://github.com/run-llama/llama_index). 4 | LlamaIndex, formerly known as GPT Index, is a framework designed to integrate with various data sources and create indices optimized for efficient querying. It supports different types of indices, such as keyword and semantic indices, allowing for organized and rapid data retrieval. By leveraging the power of large language models like GPT-3 or GPT-4, LlamaIndex enables complex searches beyond simple keyword matching, making it suitable for extensive datasets. 5 | 6 | ## Examples 7 | 8 | - [Retrieval-Augmented Generation](./rag_getting_started.ipynb): This example demonstrates how to use the Retrieval-Augmented Generation (RAG) model to generate answers to questions based on a given context. The RAG model combines a retriever and a generator to provide more accurate and relevant answers. 
9 | -------------------------------------------------------------------------------- /samples/js/openai/multi_turn.js: -------------------------------------------------------------------------------- 1 | import OpenAI from "openai"; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Azure OpenAI models from the GitHub Models service */ 7 | const modelName = "openai/gpt-4o-mini"; 8 | 9 | export async function main() { 10 | 11 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 12 | 13 | const response = await client.chat.completions.create({ 14 | messages: [ 15 | { role: "system", content: "You are a helpful assistant." }, 16 | { role: "user", content: "What is the capital of France?" }, 17 | { role: "assistant", content: "The capital of France is Paris." }, 18 | { role: "user", content: "What about Spain?" } 19 | ], 20 | model: modelName 21 | }); 22 | 23 | console.log(response.choices[0].message.content); 24 | } 25 | 26 | main().catch((err) => { 27 | console.error("The sample encountered an error:", err); 28 | }); 29 | -------------------------------------------------------------------------------- /samples/js/openai/streaming.js: -------------------------------------------------------------------------------- 1 | import OpenAI from "openai"; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Azure OpenAI models from the GitHub Models service */ 7 | const modelName = "openai/gpt-4o-mini"; 8 | 9 | export async function main() { 10 | 11 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 12 | 13 | const stream = await client.chat.completions.create({ 14 | messages: [ 15 | { role: "system", content: "You are a helpful assistant." }, 16 | { role: "user", content: "Give me 5 good reasons why I should exercise every day." 
}, 17 | ], 18 | model: modelName, 19 | stream: true 20 | }); 21 | 22 | for await (const part of stream) { 23 | process.stdout.write(part.choices[0]?.delta?.content || ''); 24 | } 25 | process.stdout.write('\n'); 26 | } 27 | 28 | main().catch((err) => { 29 | console.error("The sample encountered an error:", err); 30 | }); 31 | -------------------------------------------------------------------------------- /samples/python/openai/basic.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a basic call to the chat completion API. 2 | It is leveraging your endpoint and key. The call is synchronous.""" 3 | 4 | import os 5 | from openai import OpenAI 6 | 7 | token = os.environ["GITHUB_TOKEN"] 8 | endpoint = "https://models.github.ai/inference" 9 | 10 | # Pick one of the Azure OpenAI models from the GitHub Models service 11 | model_name = "openai/gpt-4o-mini" 12 | 13 | client = OpenAI( 14 | base_url=endpoint, 15 | api_key=token, 16 | ) 17 | 18 | response = client.chat.completions.create( 19 | messages=[ 20 | { 21 | "role": "system", 22 | "content": "You are a helpful assistant.", 23 | }, 24 | { 25 | "role": "user", 26 | "content": "What is the capital of France?", 27 | }, 28 | ], 29 | model=model_name, 30 | # Optional parameters 31 | temperature=1., 32 | max_tokens=1000, 33 | top_p=1. 
34 | ) 35 | 36 | print(response.choices[0].message.content) 37 | -------------------------------------------------------------------------------- /samples/js/openai/embeddings.js: -------------------------------------------------------------------------------- 1 | import OpenAI from "openai"; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference"; 5 | 6 | /* Pick one of the OpenAI embeddings models from the GitHub Models service */ 7 | const modelName = "text-embedding-3-small"; 8 | 9 | export async function main() { 10 | 11 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 12 | 13 | const response = await client.embeddings.create({ 14 | input: ["first phrase", "second phrase", "third phrase"], 15 | model: modelName 16 | }); 17 | 18 | for (const item of response.data) { 19 | let length = item.embedding.length; 20 | console.log( 21 | `data[${item.index}]: length=${length}, ` + 22 | `[${item.embedding[0]}, ${item.embedding[1]}, ` + 23 | `..., ${item.embedding[length - 2]}, ${item.embedding[length - 1]}]`); 24 | } 25 | console.log(response.usage); 26 | } 27 | 28 | main().catch((err) => { 29 | console.error("The sample encountered an error:", err); 30 | }); 31 | -------------------------------------------------------------------------------- /samples/js/mistralai/streaming.js: -------------------------------------------------------------------------------- 1 | import MistralClient from '@mistralai/mistralai'; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Mistral models from the GitHub Models service */ 7 | const modelName = "mistral-ai/Mistral-small"; 8 | 9 | export async function main() { 10 | 11 | const client = new MistralClient(token, endpoint); 12 | 13 | const response = await client.chatStream({ 14 | model: modelName, 15 | messages: [ 16 | { role:"system", content: "You are a helpful assistant." 
}, 17 | { role:"user", content: "Give me 5 good reasons why I should exercise every day." } 18 | ], 19 | }); 20 | 21 | for await (const chunk of response) { 22 | if (chunk.choices[0].delta.content !== undefined) { 23 | const streamText = chunk.choices[0].delta.content; 24 | process.stdout.write(streamText); 25 | } 26 | } 27 | } 28 | 29 | main().catch((err) => { 30 | console.error("The sample encountered an error:", err); 31 | }); -------------------------------------------------------------------------------- /samples/python/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples - Python 2 | 3 | This folder contains samples for interacting with GitHub Models using Python. 4 | 5 | Multiple model-specific SDKs are compatible with the endpoint served under the GitHub Models catalog, such as the `openai` and `mistralai` packages for their respective models. This makes it easy to port your existing code using one of those SDKs. 6 | 7 | You can also use the `azure-ai-inference` package for a cross-model unified SDK. 8 | 9 | ## SDKs 10 | 11 | - [openai](./openai/README.md) (works with all Azure OpenAI models) 12 | - [azure-ai-inference](./azure_ai_inference/README.md) (works with all models) 13 | - [mistral](./mistralai/README.md) (works with all Mistral AI models) 14 | 15 | ## Running a sample 16 | 17 | To run a Python sample, run the following command in your terminal: 18 | 19 | ```shell 20 | python samples/python/azure_ai_inference/multi_turn.py 21 | ``` 22 | 23 | ## Running a cookbook 24 | 25 | To run a [Python cookbook](../../cookbooks/python/README.md), simply open one in your IDE and execute the code cells.
26 | -------------------------------------------------------------------------------- /cookbooks/python/openai/README.md: -------------------------------------------------------------------------------- 1 | # OpenAI Cookbook Examples 2 | 3 | This folder contains examples adapted from the [OpenAI Cookbook repository](https://github.com/openai/openai-cookbook). 4 | The samples were modified slightly to better run with the GitHub Models service. 5 | 6 | ## Examples 7 | 8 | - [How to process image and video with GPT-4o](how_to_process_image_and_video_with_gpt4o.ipynb): This notebook shows how to process images and videos with GPT-4o. 9 | - [How to call functions with chat models](How_to_call_functions_with_chat_models.ipynb): This notebook shows how to get GPT-4o to determine which of a set of functions to call to answer a user's question. 10 | - [Data extraction and transformation](Data_extraction_transformation.ipynb): This notebook shows how to extract data from documents using gpt-4o-mini. 11 | - [How to stream completions](How_to_stream_completions.ipynb): This notebook shows detailed instructions on how to stream chat completions.
12 | - [Developing Hallucination Guardrails](Developing_hallucination_guardrails.ipynb): Develop an output guardrail that specifically checks model outputs for hallucinations 13 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 GitHub 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for Azure AI Inference JavaScript SDK 2 | 3 | This folder contains samples that leverage the Azure AI Inference JavaScript SDK with the GitHub Models endpoint. 
The Azure AI Inference JavaScript SDK supports a wide range of models, so it's easy to adapt these samples to use a different model by changing the value of: 4 | 5 | ```js 6 | const modelName = "MODEL"; 7 | ``` 8 | 9 | This codespace comes with dependencies pre-installed. If you want to use this code outside of this codespace, install the Azure AI Inference SDK using `npm`: 10 | 11 | ```shell 12 | npm install @azure-rest/ai-inference @azure/core-auth @azure/core-sse 13 | ``` 14 | 15 | ## Running a sample 16 | 17 | To run a JavaScript sample, run a command like the following in your terminal: 18 | 19 | ```shell 20 | node samples/js/azure_ai_inference/multi_turn.js 21 | ``` 22 | 23 | * [basic.js](basic.js): basic call to the gpt-4o-mini chat completion API 24 | * [multi_turn.js](multi_turn.js): multi-turn conversation with the chat completion API 25 | * [streaming.js](streaming.js): generate a response in streaming mode, token by token -------------------------------------------------------------------------------- /samples/python/mistralai/streaming.py: -------------------------------------------------------------------------------- 1 | """For a better user experience, you will want to stream the response of the model 2 | so that the first token shows up early and you avoid waiting for long responses.""" 3 | 4 | import os 5 | from mistralai.client import MistralClient 6 | from mistralai.models.chat_completion import ChatMessage 7 | 8 | token = os.environ["GITHUB_TOKEN"] 9 | endpoint = "https://models.github.ai/inference" 10 | 11 | # Pick one of the Mistral models from the GitHub Models service 12 | model_name = "Mistral-small" 13 | 14 | # Create a client 15 | client = MistralClient(api_key=token, endpoint=endpoint) 16 | 17 | # Call the chat completion API 18 | response = client.chat_stream( 19 | model=model_name, 20 | messages=[ 21 | ChatMessage(role="system", content="You are a helpful assistant."), 22 | ChatMessage( 23 | role="user", 24 | content="Give me 5 good 
reasons why I should exercise every day.", 25 | ), 26 | ], 27 | ) 28 | 29 | # Print the streamed response 30 | for update in response: 31 | if update.choices: 32 | print(update.choices[0].delta.content or "", end="") 33 | 34 | print() -------------------------------------------------------------------------------- /samples/python/mistralai/multi_turn.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a multi-turn conversation with the chat completion API. 2 | When using the model for a chat application, you'll need to manage the history of that 3 | conversation and send the latest messages to the model. 4 | """ 5 | 6 | import os 7 | from mistralai.client import MistralClient 8 | from mistralai.models.chat_completion import ChatMessage 9 | 10 | token = os.environ["GITHUB_TOKEN"] 11 | endpoint = "https://models.github.ai/inference" 12 | 13 | # Pick one of the Mistral models from the GitHub Models service 14 | model_name = "Mistral-small" 15 | 16 | # Create a client 17 | client = MistralClient(api_key=token, endpoint=endpoint) 18 | 19 | # Call the chat completion API 20 | response = client.chat( 21 | model=model_name, 22 | messages=[ 23 | ChatMessage(role="system", content="You are a helpful assistant."), 24 | ChatMessage(role="user", content="What is the capital of France?"), 25 | ChatMessage(role="assistant", content="The capital of France is Paris."), 26 | ChatMessage(role="user", content="What about Spain?"), 27 | ], 28 | ) 29 | 30 | # Print the response 31 | print(response.choices[0].message.content) 32 | -------------------------------------------------------------------------------- /samples/python/openai/streaming.py: -------------------------------------------------------------------------------- 1 | """For a better user experience, you will want to stream the response of the model 2 | so that the first token shows up early and you avoid waiting for long responses.""" 3 | 4 | import os 5 | from 
openai import OpenAI 6 | 7 | token = os.environ["GITHUB_TOKEN"] 8 | endpoint = "https://models.github.ai/inference" 9 | 10 | # Pick one of the Azure OpenAI models from the GitHub Models service 11 | model_name = "openai/gpt-4o-mini" 12 | 13 | # Create a client 14 | client = OpenAI( 15 | base_url=endpoint, 16 | api_key=token, 17 | ) 18 | 19 | # Call the chat completion API 20 | response = client.chat.completions.create( 21 | messages=[ 22 | { 23 | "role": "system", 24 | "content": "You are a helpful assistant.", 25 | }, 26 | { 27 | "role": "user", 28 | "content": "Give me 5 good reasons why I should exercise every day.", 29 | }, 30 | ], 31 | model=model_name, 32 | stream=True, 33 | ) 34 | 35 | # Print the streamed response 36 | for update in response: 37 | if update.choices: 38 | content = update.choices[0].delta.content 39 | if content: 40 | print(content, end="") 41 | 42 | print() -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/basic.js: -------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { AzureKeyCredential } from "@azure/core-auth"; 3 | 4 | const token = process.env["GITHUB_TOKEN"]; 5 | const endpoint = "https://models.github.ai/inference/"; 6 | 7 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 8 | by modifying the value of `modelName` in the code below. */ 9 | const modelName = "openai/gpt-4o-mini"; 10 | 11 | export async function main() { 12 | 13 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 14 | 15 | const response = await client.path("/chat/completions").post({ 16 | body: { 17 | messages: [ 18 | { role:"system", content: "You are a helpful assistant." }, 19 | { role:"user", content: "What is the capital of France?" 
} 20 | ], 21 | model: modelName, 22 | // Optional parameters 23 | temperature: 1., 24 | max_tokens: 1000, 25 | top_p: 1. 26 | } 27 | }); 28 | 29 | if (response.status !== "200") { 30 | throw response.body.error; 31 | } 32 | console.log(response.body.choices[0].message.content); 33 | } 34 | 35 | main().catch((err) => { 36 | console.error("The sample encountered an error:", err); 37 | }); 38 | -------------------------------------------------------------------------------- /samples/python/openai/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples for OpenAI Python SDK 2 | 3 | This folder contains samples that leverage the OpenAI Python SDK with the GitHub Models endpoint. Samples are available as Jupyter notebooks and as Python scripts. 4 | 5 | ## Jupyter Notebooks 6 | 7 | To run these notebooks, click on a link below to open it in Codespaces and select a Python3 kernel. 8 | 9 | * [getting_started.ipynb](getting_started.ipynb). Basic interaction, multi-turn conversations, image inputs, streamed responses, Functions API. 
10 | * [embeddings_getting_started.ipynb](embeddings_getting_started.ipynb): create embeddings for strings using the `text-embedding-3-small` model 11 | 12 | ## Python Scripts 13 | 14 | To run these scripts, open your terminal and run a command like: 15 | 16 | ```shell 17 | python samples/python/openai/basic.py 18 | ``` 19 | 20 | * [basic.py](basic.py): basic call to the gpt-4o-mini chat completion API 21 | * [chat_with_image_file.py](chat_with_image_file.py): image file as input 22 | * [multi_turn.py](multi_turn.py): multi-turn conversation with the chat completion API 23 | * [streaming.py](streaming.py): generate a response in streaming mode, token by token 24 | * [tools.py](tools.py): run specific actions depending on the context of the conversation with the functions API 25 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/embeddings.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from azure.ai.inference import EmbeddingsClient 4 | from azure.core.credentials import AzureKeyCredential 5 | 6 | token = os.environ["GITHUB_TOKEN"] 7 | endpoint = "https://models.github.ai/inference" 8 | 9 | # By using the Azure AI Inference SDK, you can easily experiment with different models 10 | # by modifying the value of `modelName` in the code below. For this code sample 11 | # you need an embedding model. 
The following embedding models are 12 | # available in the GitHub Models service: 13 | # 14 | # Cohere: Cohere-embed-v3-english, Cohere-embed-v3-multilingual 15 | # Azure OpenAI: text-embedding-3-small, text-embedding-3-large 16 | model_name = "text-embedding-3-small" 17 | 18 | client = EmbeddingsClient( 19 | endpoint=endpoint, 20 | credential=AzureKeyCredential(token) 21 | ) 22 | 23 | response = client.embed( 24 | input=["first phrase", "second phrase", "third phrase"], 25 | model=model_name 26 | ) 27 | 28 | for item in response.data: 29 | length = len(item.embedding) 30 | print( 31 | f"data[{item.index}]: length={length}, " 32 | f"[{item.embedding[0]}, {item.embedding[1]}, " 33 | f"..., {item.embedding[length-2]}, {item.embedding[length-1]}]" 34 | ) 35 | print(response.usage) 36 | -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/multi_turn.js: -------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { AzureKeyCredential } from "@azure/core-auth"; 3 | 4 | const token = process.env["GITHUB_TOKEN"]; 5 | const endpoint = "https://models.github.ai/inference/"; 6 | 7 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 8 | by modifying the value of `modelName` in the code below. */ 9 | const modelName = "openai/gpt-4o-mini"; 10 | 11 | export async function main() { 12 | 13 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 14 | 15 | const response = await client.path("/chat/completions").post({ 16 | body: { 17 | messages: [ 18 | { role: "system", content: "You are a helpful assistant." }, 19 | { role: "user", content: "What is the capital of France?" }, 20 | { role: "assistant", content: "The capital of France is Paris." }, 21 | { role: "user", content: "What about Spain?" 
}, 22 | ], 23 | model: modelName, 24 | } 25 | }); 26 | 27 | if (response.status !== "200") { 28 | throw response.body.error; 29 | } 30 | 31 | for (const choice of response.body.choices) { 32 | console.log(choice.message.content); 33 | } 34 | } 35 | 36 | main().catch((err) => { 37 | console.error("The sample encountered an error:", err); 38 | }); 39 | -------------------------------------------------------------------------------- /samples/python/openai/multi_turn.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a multi-turn conversation with the chat completion API. 2 | When using the model for a chat application, you'll need to manage the history of that 3 | conversation and send the latest messages to the model. 4 | """ 5 | 6 | import os 7 | from openai import OpenAI 8 | 9 | token = os.environ["GITHUB_TOKEN"] 10 | endpoint = "https://models.github.ai/inference" 11 | 12 | # Pick one of the Azure OpenAI models from the GitHub Models service 13 | model_name = "openai/gpt-4o-mini" 14 | 15 | # Create a client 16 | client = OpenAI( 17 | base_url=endpoint, 18 | api_key=token, 19 | default_headers={ 20 | "x-ms-useragent": "github-models-sample", 21 | } 22 | ) 23 | 24 | # Call the chat completion API 25 | response = client.chat.completions.create( 26 | messages=[ 27 | { 28 | "role": "system", 29 | "content": "You are a helpful assistant.", 30 | }, 31 | { 32 | "role": "user", 33 | "content": "What is the capital of France?", 34 | }, 35 | { 36 | "role": "assistant", 37 | "content": "The capital of France is Paris.", 38 | }, 39 | { 40 | "role": "user", 41 | "content": "What about Spain?", 42 | }, 43 | ], 44 | model=model_name, 45 | ) 46 | 47 | # Print the response 48 | print(response.choices[0].message.content) 49 | -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/embeddings.js: 
-------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { isUnexpected } from "@azure-rest/ai-inference"; 3 | import { AzureKeyCredential } from "@azure/core-auth"; 4 | 5 | const token = process.env["GITHUB_TOKEN"]; 6 | const endpoint = "https://models.github.ai/inference"; 7 | 8 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 9 | by modifying the value of `modelName` in the code below. For this code sample 10 | you need an embedding model. The following embedding models are 11 | available in the GitHub Models service: 12 | 13 | Cohere: Cohere-embed-v3-english, Cohere-embed-v3-multilingual 14 | Azure OpenAI: text-embedding-3-small, text-embedding-3-large */ 15 | const modelName = "text-embedding-3-small"; 16 | 17 | export async function main() { 18 | 19 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 20 | 21 | const response = await client.path("/embeddings").post({ 22 | body: { 23 | input: ["first phrase", "second phrase", "third phrase"], 24 | model: modelName 25 | } 26 | }); 27 | 28 | if (isUnexpected(response)) { 29 | throw response.body.error; 30 | } 31 | 32 | for (const item of response.body.data) { 33 | let length = item.embedding.length; 34 | console.log( 35 | `data[${item.index}]: length=${length}, ` + 36 | `[${item.embedding[0]}, ${item.embedding[1]}, ` + 37 | `..., ${item.embedding[length - 2]}, ${item.embedding[length - 1]}]`); 38 | } 39 | console.log(response.body.usage); 40 | } 41 | 42 | main().catch((err) => { 43 | console.error("The sample encountered an error:", err); 44 | }); 45 | -------------------------------------------------------------------------------- /samples/README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models Samples 2 | 3 | This folder contains samples for interacting with GitHub Models. 
Each subfolder contains examples for a specific language: JavaScript, Python, and cURL. 4 | 5 | ## Languages 6 | 7 | We provide samples in the following languages: 8 | 9 | - [JavaScript](js/README.md) 10 | - [Python](python/README.md) 11 | - [cURL](curl/README.md) 12 | 13 | ## Things to try 14 | 15 | Use this Codespace to edit the samples and see what happens! Here are a few suggestions of things you can try. 16 | 17 | ### Try a different model 18 | 19 | Try switching to a different model by finding a line like the one below and changing the model selected. You can find other models to try at [GitHub Marketplace](https://github.com/marketplace/models). 20 | 21 | ```json 22 | "model": "gpt-4o-mini" 23 | ``` 24 | 25 | ### Try a different prompt 26 | 27 | Try a different input to the model (prompt) by changing the text following `"content":` in the lines below. Some examples provide multiple turns of conversation history, and you can modify those too. 28 | 29 | ```json 30 | { 31 | "role": "user", 32 | "content": "What is the capital of France?" 33 | } 34 | ``` 35 | 36 | ### Change the way the model responds 37 | 38 | Some (but not all) models allow you to modify the "system prompt", which does not generate a response directly but changes the *way* the model responds. You can modify the system prompt by finding a section like the lines below and changing the `"content":` value. 39 | 40 | ```json 41 | { 42 | "role": "system", 43 | "content": "You are a helpful assistant." 44 | } 45 | ``` 46 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/basic.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a basic call to the chat completion API. 2 | It uses your endpoint and key.
The call is synchronous.""" 3 | 4 | import os 5 | from azure.ai.inference import ChatCompletionsClient 6 | from azure.ai.inference.models import SystemMessage, UserMessage 7 | from azure.core.credentials import AzureKeyCredential 8 | 9 | token = os.environ["GITHUB_TOKEN"] 10 | endpoint = "https://models.github.ai/inference/" 11 | 12 | # By using the Azure AI Inference SDK, you can easily experiment with different models 13 | # by modifying the value of `model_name` in the code below. The following models are 14 | # available in the GitHub Models service: 15 | # 16 | # AI21 Labs: AI21-Jamba-Instruct 17 | # Cohere: Cohere-command-r, Cohere-command-r-plus 18 | # Meta: Meta-Llama-3-70B-Instruct, Meta-Llama-3-8B-Instruct, Meta-Llama-3.1-405B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-8B-Instruct 19 | # Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 20 | # Azure OpenAI: gpt-4o-mini, gpt-4o 21 | # Microsoft: Phi-3-medium-128k-instruct, Phi-3-medium-4k-instruct, Phi-3-mini-128k-instruct, Phi-3-mini-4k-instruct, Phi-3-small-128k-instruct, Phi-3-small-8k-instruct 22 | model_name = "openai/gpt-4o-mini" 23 | 24 | client = ChatCompletionsClient( 25 | endpoint=endpoint, 26 | credential=AzureKeyCredential(token), 27 | ) 28 | 29 | response = client.complete( 30 | messages=[ 31 | SystemMessage(content="You are a helpful assistant."), 32 | UserMessage(content="What is the capital of France?"), 33 | ], 34 | model=model_name, 35 | # Optional parameters 36 | temperature=1., 37 | max_tokens=1000, 38 | top_p=1. 
39 | ) 40 | 41 | print(response.choices[0].message.content) -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/chat_with_image_file.py: -------------------------------------------------------------------------------- 1 | """If a model supports images as inputs, you can run a chat completion 2 | with a local image file, as shown in the following sample.""" 3 | 4 | import os 5 | from azure.ai.inference import ChatCompletionsClient 6 | from azure.ai.inference.models import ( 7 | SystemMessage, 8 | UserMessage, 9 | TextContentItem, 10 | ImageContentItem, 11 | ImageUrl, 12 | ImageDetailLevel, 13 | ) 14 | from azure.core.credentials import AzureKeyCredential 15 | 16 | token = os.environ["GITHUB_TOKEN"] 17 | endpoint = "https://models.github.ai/inference" 18 | 19 | # By using the Azure AI Inference SDK, you can easily experiment with different models 20 | # by modifying the value of `model_name` in the code below. For this code sample 21 | # you need to use a model that supports image inputs. The following image models are 22 | # available in the GitHub Models service: 23 | # 24 | # Azure OpenAI: gpt-4o-mini, gpt-4o 25 | model_name = "openai/gpt-4o-mini" 26 | 27 | client = ChatCompletionsClient( 28 | endpoint=endpoint, 29 | credential=AzureKeyCredential(token), 30 | ) 31 | 32 | response = client.complete( 33 | messages=[ 34 | SystemMessage( 35 | content="You are a helpful assistant that describes images in detail."
36 | ), 37 | UserMessage( 38 | content=[ 39 | TextContentItem(text="What's in this image?"), 40 | ImageContentItem( 41 | image_url=ImageUrl.load( 42 | image_file=os.path.join(os.path.dirname(__file__), "sample.png"), 43 | image_format="png", 44 | detail=ImageDetailLevel.LOW) 45 | ), 46 | ], 47 | ), 48 | ], 49 | model=model_name, 50 | ) 51 | 52 | print(response.choices[0].message.content) 53 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/streaming.py: -------------------------------------------------------------------------------- 1 | """For a better user experience, you will want to stream the response of the model 2 | so that the first token shows up early and you avoid waiting for long responses.""" 3 | 4 | import os 5 | from azure.ai.inference import ChatCompletionsClient 6 | from azure.ai.inference.models import SystemMessage, UserMessage 7 | from azure.core.credentials import AzureKeyCredential 8 | 9 | token = os.environ["GITHUB_TOKEN"] 10 | endpoint = "https://models.github.ai/inference" 11 | 12 | # By using the Azure AI Inference SDK, you can easily experiment with different models 13 | # by modifying the value of `model_name` in the code below. 
The following models are 14 | # available in the GitHub Models service: 15 | # 16 | # AI21 Labs: AI21-Jamba-Instruct 17 | # Cohere: Cohere-command-r, Cohere-command-r-plus 18 | # Meta: Meta-Llama-3-70B-Instruct, Meta-Llama-3-8B-Instruct, Meta-Llama-3.1-405B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-8B-Instruct 19 | # Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 20 | # Azure OpenAI: gpt-4o-mini, gpt-4o 21 | # Microsoft: Phi-3-medium-128k-instruct, Phi-3-medium-4k-instruct, Phi-3-mini-128k-instruct, Phi-3-mini-4k-instruct, Phi-3-small-128k-instruct, Phi-3-small-8k-instruct 22 | model_name = "openai/gpt-4o-mini" 23 | 24 | client = ChatCompletionsClient( 25 | endpoint=endpoint, 26 | credential=AzureKeyCredential(token), 27 | ) 28 | 29 | response = client.complete( 30 | stream=True, 31 | messages=[ 32 | SystemMessage(content="You are a helpful assistant."), 33 | UserMessage(content="Give me 5 good reasons why I should exercise every day."), 34 | ], 35 | model=model_name, 36 | ) 37 | 38 | for update in response: 39 | if update.choices: 40 | print(update.choices[0].delta.content or "", end="") 41 | 42 | print() 43 | 44 | client.close() 45 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/multi_turn.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates a multi-turn conversation with the chat completion API. 2 | When using the model for a chat application, you'll need to manage the history of that 3 | conversation and send the latest messages to the model. 
4 | """ 5 | 6 | import os 7 | from azure.ai.inference import ChatCompletionsClient 8 | from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage 9 | from azure.core.credentials import AzureKeyCredential 10 | 11 | token = os.environ["GITHUB_TOKEN"] 12 | endpoint = "https://models.github.ai/inference" 13 | 14 | # By using the Azure AI Inference SDK, you can easily experiment with different models 15 | # by modifying the value of `model_name` in the code below. The following models are 16 | # available in the GitHub Models service: 17 | # 18 | # AI21 Labs: AI21-Jamba-Instruct 19 | # Cohere: Cohere-command-r, Cohere-command-r-plus 20 | # Meta: Meta-Llama-3-70B-Instruct, Meta-Llama-3-8B-Instruct, Meta-Llama-3.1-405B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-8B-Instruct 21 | # Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 22 | # Azure OpenAI: gpt-4o-mini, gpt-4o 23 | # Microsoft: Phi-3-medium-128k-instruct, Phi-3-medium-4k-instruct, Phi-3-mini-128k-instruct, Phi-3-mini-4k-instruct, Phi-3-small-128k-instruct, Phi-3-small-8k-instruct 24 | model_name = "openai/gpt-4o-mini" 25 | 26 | client = ChatCompletionsClient( 27 | endpoint=endpoint, 28 | credential=AzureKeyCredential(token), 29 | ) 30 | 31 | messages = [ 32 | SystemMessage(content="You are a helpful assistant."), 33 | UserMessage(content="What is the capital of France?"), 34 | AssistantMessage(content="The capital of France is Paris."), 35 | UserMessage(content="What about Spain?"), 36 | ] 37 | 38 | response = client.complete(messages=messages, model=model_name) 39 | 40 | print(response.choices[0].message.content) -------------------------------------------------------------------------------- /samples/js/openai/chat_with_image_file.js: -------------------------------------------------------------------------------- 1 | import fs from 'fs'; 2 | import path from 'path'; 3 | import OpenAI from "openai"; 4 | 5 | const token = process.env["GITHUB_TOKEN"]; 6 
| const endpoint = "https://models.github.ai/inference/"; 7 | 8 | /* Pick one of the Azure OpenAI models from the GitHub Models service */ 9 | const modelName = "openai/gpt-4o-mini"; 10 | 11 | export async function main() { 12 | 13 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 14 | 15 | const response = await client.chat.completions.create({ 16 | messages: [ 17 | { role: "system", content: "You are a helpful assistant that describes images in detail." }, 18 | { role: "user", content: [ 19 | { type: "text", text: "What's in this image?"}, 20 | { type: "image_url", image_url: { 21 | url: getImageDataUrl(path.join(import.meta.dirname, "sample.png"), "png"), detail: "low"}} 22 | ] 23 | } 24 | ], 25 | model: modelName 26 | }); 27 | 28 | console.log(response.choices[0].message.content); 29 | } 30 | 31 | /** 32 | * Get the data URL of an image file. 33 | * @param {string} imageFile - The path to the image file. 34 | * @param {string} imageFormat - The format of the image file. For example: "jpeg", "png". 35 | * @returns {string} The data URL of the image.
36 | */ 37 | function getImageDataUrl(imageFile, imageFormat) { 38 | try { 39 | const imageBuffer = fs.readFileSync(imageFile); 40 | const imageBase64 = imageBuffer.toString('base64'); 41 | return `data:image/${imageFormat};base64,${imageBase64}`; 42 | } catch (error) { 43 | console.error(`Could not read '${imageFile}'.`); 44 | console.error('Set the correct path to the image file before running this sample.'); 45 | process.exit(1); 46 | } 47 | } 48 | 49 | main().catch((err) => { 50 | console.error("The sample encountered an error:", err); 51 | }); 52 | -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/invoice_schema.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "hotel_information": { 4 | "name": "string", 5 | "address": { 6 | "street": "string", 7 | "city": "string", 8 | "country": "string", 9 | "postal_code": "string" 10 | }, 11 | "contact": { 12 | "phone": "string", 13 | "fax": "string", 14 | "email": "string", 15 | "website": "string" 16 | } 17 | }, 18 | "guest_information": { 19 | "company": "string", 20 | "address": "string", 21 | "guest_name": "string" 22 | }, 23 | "invoice_information": { 24 | "invoice_number": "string", 25 | "reservation_number": "string", 26 | "date": "YYYY-MM-DD", 27 | "room_number": "string", 28 | "check_in_date": "YYYY-MM-DD", 29 | "check_out_date": "YYYY-MM-DD" 30 | }, 31 | "charges": [ 32 | { 33 | "date": "YYYY-MM-DD", 34 | "description": "string", 35 | "charge": "number", 36 | "credit": "number" 37 | } 38 | ], 39 | "totals_summary": { 40 | "currency": "string", 41 | "total_net": "number", 42 | "total_tax": "number", 43 | "total_gross": "number", 44 | "total_charge": "number", 45 | "total_credit": "number", 46 | "balance_due": "number" 47 | }, 48 | "taxes": [ 49 | { 50 | "tax_type": "string", 51 | "tax_rate": "string", 52 | "net_amount": "number", 53 | "tax_amount": "number", 54 | "gross_amount": 
"number" 55 | } 56 | ] 57 | } 58 | ] 59 | -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/transformed_invoice_json/transformed_madison-502875_extracted.json: -------------------------------------------------------------------------------- 1 | { 2 | "hotel_information": { 3 | "name": "MADISON Hotel GmbH", 4 | "address": { 5 | "street": "Schaarsteinweg 4", 6 | "city": "Hamburg", 7 | "country": "Germany", 8 | "postal_code": "20459" 9 | }, 10 | "contact": { 11 | "phone": "+49.40.37 666-0", 12 | "fax": "+49.40.37 666-137", 13 | "email": "info@madisonhotel.de", 14 | "website": "madisonhotel.de" 15 | } 16 | }, 17 | "guest_information": { 18 | "company": "APIMeister Consulting GmbH", 19 | "address": "Friedrichstr. 123, 10117 Berlin", 20 | "guest_name": "Mr. Jens Walter" 21 | }, 22 | "invoice_information": { 23 | "invoice_number": "502875", 24 | "reservation_number": "BD", 25 | "date": "2019-06-14", 26 | "room_number": "426", 27 | "check_in_date": "2019-06-10", 28 | "check_out_date": "2019-06-14" 29 | }, 30 | "charges": [ 31 | { 32 | "date": "2019-06-10", 33 | "description": "Overnight stay excluding breakfast", 34 | "charge": 110.0, 35 | "credit": 0 36 | }, 37 | { 38 | "date": "2019-06-11", 39 | "description": "Overnight stay excluding breakfast", 40 | "charge": 110.0, 41 | "credit": 0 42 | }, 43 | { 44 | "date": "2019-06-12", 45 | "description": "Overnight stay excluding breakfast", 46 | "charge": 110.0, 47 | "credit": 0 48 | }, 49 | { 50 | "date": "2019-06-13", 51 | "description": "Overnight stay excluding breakfast", 52 | "charge": 110.0, 53 | "credit": 0 54 | }, 55 | { 56 | "date": "2019-06-14", 57 | "description": "Mastercard IFC", 58 | "charge": 440.0, 59 | "credit": 440.0 60 | } 61 | ], 62 | "totals_summary": { 63 | "currency": "EUR", 64 | "total_net": 411.21, 65 | "total_tax": 28.79, 66 | "total_gross": 440.0, 67 | "total_charge": 440.0, 68 | "total_credit": 440.0, 69 | "balance_due": 0.0 
70 | }, 71 | "taxes": [ 72 | { 73 | "tax_type": "VAT", 74 | "tax_rate": "19%", 75 | "net_amount": 411.21, 76 | "tax_amount": 28.79, 77 | "gross_amount": 440.0 78 | } 79 | ] 80 | } -------------------------------------------------------------------------------- /.devcontainer/devcontainer.json: -------------------------------------------------------------------------------- 1 | { 2 | "build": { 3 | "dockerfile": "./Dockerfile", 4 | "context": "." 5 | }, 6 | "features": { 7 | "ghcr.io/devcontainers/features/common-utils:2": { 8 | "username": "codespace", 9 | "userUid": "1000", 10 | "userGid": "1000" 11 | }, 12 | "ghcr.io/devcontainers/features/node:1": { 13 | "version": "20" 14 | }, 15 | "ghcr.io/devcontainers/features/python:1": { 16 | "version": "3.11.9", 17 | "installJupyterLab": "false" 18 | }, 19 | "ghcr.io/devcontainers/features/git:1": { 20 | "version": "latest", 21 | "ppa": "false" 22 | }, 23 | "ghcr.io/devcontainers/features/git-lfs:1": { 24 | "version": "latest" 25 | }, 26 | "ghcr.io/devcontainers/features/github-cli:1": { 27 | "version": "latest" 28 | } 29 | }, 30 | "overrideFeatureInstallOrder": [ 31 | "ghcr.io/devcontainers/features/common-utils", 32 | "ghcr.io/devcontainers/features/git", 33 | "ghcr.io/devcontainers/features/node", 34 | "ghcr.io/devcontainers/features/python", 35 | "ghcr.io/devcontainers/features/git-lfs", 36 | "ghcr.io/devcontainers/features/github-cli" 37 | ], 38 | "remoteUser": "codespace", 39 | "containerUser": "codespace", 40 | "onCreateCommand": "${containerWorkspaceFolder}/.devcontainer/bootstrap", 41 | "customizations": { 42 | "codespaces": { 43 | "disableAutomaticConfiguration": true, 44 | "openFiles": [ 45 | "README.md" 46 | ] 47 | }, 48 | "vscode": { 49 | "extensions": [ 50 | "ms-python.python", 51 | "ms-toolsai.jupyter", 52 | "ms-toolsai.prompty" 53 | ], 54 | "settings": { 55 | /* 56 | NOTE: excluding these Python environments causes Jupyter to select the remaining environment by default 57 | The default environment will be: 
/usr/local/python/current/bin/python 58 | */ 59 | "jupyter.kernels.excludePythonEnvironments": [ 60 | "/usr/local/python/current/bin/python3", 61 | "/usr/bin/python3", 62 | "/bin/python3" 63 | ], 64 | "workbench.editorAssociations": { 65 | "*.md": "vscode.markdown.preview.editor" 66 | } 67 | } 68 | } 69 | } 70 | } -------------------------------------------------------------------------------- /samples/python/openai/chat_with_image_file.py: -------------------------------------------------------------------------------- 1 | """If a model supports images as inputs, you can run a chat completion 2 | with a local image file, as shown in the following sample.""" 3 | 4 | import os 5 | import base64 6 | from openai import OpenAI 7 | 8 | token = os.environ["GITHUB_TOKEN"] 9 | endpoint = "https://models.github.ai/inference" 10 | 11 | # Pick one of the Azure OpenAI models from the GitHub Models service 12 | model_name = "openai/gpt-4o-mini" 13 | 14 | # Create a client 15 | client = OpenAI( 16 | base_url=endpoint, 17 | api_key=token, 18 | ) 19 | 20 | 21 | def get_image_data_url(image_file: str, image_format: str) -> str: 22 | """ 23 | Helper function to convert an image file to a data URL string. 24 | Args: 25 | image_file (str): The path to the image file. 26 | image_format (str): The format of the image file. 27 | Returns: 28 | str: The data URL of the image.
29 | """ 30 | try: 31 | with open(image_file, "rb") as f: 32 | image_data = base64.b64encode(f.read()).decode("utf-8") 33 | except FileNotFoundError: 34 | print(f"Could not read '{image_file}'.") 35 | exit() 36 | return f"data:image/{image_format};base64,{image_data}" 37 | 38 | 39 | # Call the chat completion API 40 | response = client.chat.completions.create( 41 | messages=[ 42 | { 43 | "role": "system", 44 | "content": "You are a helpful assistant that describes images in detail.", 45 | }, 46 | { 47 | "role": "user", 48 | "content": [ 49 | { 50 | "type": "text", 51 | "text": "What's in this image?", 52 | }, 53 | { 54 | "type": "image_url", 55 | "image_url": { 56 | # using a file located in this directory 57 | "url": get_image_data_url( 58 | os.path.join(os.path.dirname(__file__), "sample.png"), "png" 59 | ) 60 | }, 61 | }, 62 | ], 63 | }, 64 | ], 65 | model=model_name, 66 | ) 67 | 68 | # Print the response 69 | print(response.choices[0].message.content) -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/transformed_invoice_json/transformed_madison_folio_g_cp_efolio5895707_extracted.json: -------------------------------------------------------------------------------- 1 | { 2 | "hotel_information": { 3 | "name": "MADISON Hotel GmbH", 4 | "address": { 5 | "street": "Schaartvenweg 4", 6 | "city": "Hamburg", 7 | "country": "Germany", 8 | "postal_code": "20459" 9 | }, 10 | "contact": { 11 | "phone": "+49.40.37 666-0", 12 | "fax": "+49.40.37 666-137", 13 | "email": "info@madisonhotel.de", 14 | "website": "madisonhotel.de" 15 | } 16 | }, 17 | "guest_information": { 18 | "company": "APfmeister Consulting GmbH", 19 | "address": "Friedrichstr. 123, 10117 Berlin", 20 | "guest_name": "Mr. 
Jens Walter" 21 | }, 22 | "invoice_information": { 23 | "invoice_number": "505050 /", 24 | "reservation_number": null, 25 | "date": "2019-07-04", 26 | "room_number": "203", 27 | "check_in_date": "2019-06-30", 28 | "check_out_date": "2019-07-04" 29 | }, 30 | "charges": [ 31 | { 32 | "date": "2019-06-30", 33 | "description": "Overnight stay excluding breakfast*", 34 | "charge": 110.0, 35 | "credit": 0.0 36 | }, 37 | { 38 | "date": "2019-07-01", 39 | "description": "Overnight stay excluding breakfast*", 40 | "charge": 110.0, 41 | "credit": 0.0 42 | }, 43 | { 44 | "date": "2019-07-02", 45 | "description": "Overnight stay excluding breakfast*", 46 | "charge": 110.0, 47 | "credit": 0.0 48 | }, 49 | { 50 | "date": "2019-07-03", 51 | "description": "Overnight stay excluding breakfast*", 52 | "charge": 110.0, 53 | "credit": 0.0 54 | }, 55 | { 56 | "date": "2019-07-04", 57 | "description": "Mastercard IFC", 58 | "charge": 0.0, 59 | "credit": 440.0 60 | } 61 | ], 62 | "totals_summary": { 63 | "currency": "EUR", 64 | "total_net": 411.21, 65 | "total_tax": 28.79, 66 | "total_gross": 440.0, 67 | "total_charge": 440.0, 68 | "total_credit": 440.0, 69 | "balance_due": 0.0 70 | }, 71 | "taxes": [ 72 | { 73 | "tax_type": "VAT", 74 | "tax_rate": "7%", 75 | "net_amount": 411.21, 76 | "tax_amount": 28.79, 77 | "gross_amount": 440.0 78 | } 79 | ] 80 | } -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/streaming.js: -------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { AzureKeyCredential } from "@azure/core-auth"; 3 | import { createSseStream } from "@azure/core-sse"; 4 | 5 | const token = process.env["GITHUB_TOKEN"]; 6 | const endpoint = "https://models.github.ai/inference/"; 7 | 8 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 9 | by modifying the value of `modelName` in the code below. 
The following models are 10 | available in the GitHub Models service: 11 | 12 | AI21 Labs: AI21-Jamba-Instruct 13 | Cohere: Cohere-command-r, Cohere-command-r-plus 14 | Meta: Meta-Llama-3-70B-Instruct, Meta-Llama-3-8B-Instruct, Meta-Llama-3.1-405B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-8B-Instruct 15 | Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 16 | Azure OpenAI: gpt-4o-mini, gpt-4o 17 | Microsoft: Phi-3-medium-128k-instruct, Phi-3-medium-4k-instruct, Phi-3-mini-128k-instruct, Phi-3-mini-4k-instruct, Phi-3-small-128k-instruct, Phi-3-small-8k-instruct */ 18 | const modelName = "openai/gpt-4o-mini"; 19 | 20 | export async function main() { 21 | 22 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 23 | 24 | const response = await client.path("/chat/completions").post({ 25 | body: { 26 | messages: [ 27 | { role: "system", content: "You are a helpful assistant." }, 28 | { role: "user", content: "Give me 5 good reasons why I should exercise every day." }, 29 | ], 30 | model: modelName, 31 | stream: true 32 | } 33 | }).asNodeStream(); 34 | 35 | const stream = response.body; 36 | if (!stream) { 37 | throw new Error("The response stream is undefined"); 38 | } 39 | 40 | if (response.status !== "200") { 41 | throw new Error(`Failed to get chat completions: ${response.body.error}`); 42 | } 43 | 44 | const sseStream = createSseStream(stream); 45 | 46 | for await (const event of sseStream) { 47 | if (event.data === "[DONE]") { 48 | process.stdout.write('\n'); 49 | return; 50 | } 51 | for (const choice of (JSON.parse(event.data)).choices) { 52 | process.stdout.write(choice.delta?.content ?? 
``); 53 | } 54 | } 55 | } 56 | 57 | main().catch((err) => { 58 | console.error("The sample encountered an error:", err); 59 | }); 60 | -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/chat_with_image_file.js: -------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { AzureKeyCredential } from "@azure/core-auth"; 3 | import fs from 'fs'; 4 | import path from 'path'; 5 | 6 | const token = process.env["GITHUB_TOKEN"]; 7 | const endpoint = "https://models.github.ai/inference/"; 8 | 9 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 10 | by modifying the value of `modelName` in the code below. For this code sample 11 | you need to use a model that supports image inputs. The following image models are 12 | available in the GitHub Models service: 13 | 14 | Azure OpenAI: gpt-4o-mini, gpt-4o */ 15 | const modelName = "openai/gpt-4o-mini"; 16 | 17 | export async function main() { 18 | 19 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 20 | 21 | const response = await client.path("/chat/completions").post({ 22 | body: { 23 | messages: [ 24 | { role: "system", content: "You are a helpful assistant that describes images in detail." }, 25 | { role: "user", content: [ 26 | { type: "text", text: "What's in this image?"}, 27 | { type: "image_url", image_url: { 28 | url: getImageDataUrl(path.join(import.meta.dirname, "sample.png"), "png"), detail: "low"}} 29 | ] 30 | } 31 | ], 32 | model: modelName 33 | } 34 | }); 35 | 36 | if (response.status !== "200") { 37 | throw response.body.error; 38 | } 39 | console.log(response.body.choices[0].message.content); 40 | } 41 | 42 | /** 43 | * Get the data URL of an image file. 44 | * @param {string} imageFile - The path to the image file. 45 | * @param {string} imageFormat - The format of the image file.
For example: "jpeg", "png". 46 | * @returns {string} The data URL of the image. 47 | */ 48 | function getImageDataUrl(imageFile, imageFormat) { 49 | try { 50 | const imageBuffer = fs.readFileSync(imageFile); 51 | const imageBase64 = imageBuffer.toString('base64'); 52 | return `data:image/${imageFormat};base64,${imageBase64}`; 53 | } catch (error) { 54 | console.error(`Could not read '${imageFile}'.`); 55 | console.error('Set the correct path to the image file before running this sample.'); 56 | process.exit(1); 57 | } 58 | } 59 | 60 | main().catch((err) => { 61 | console.error("The sample encountered an error:", err); 62 | }); 63 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # GitHub Models - Limited Public Beta 2 | 3 | Welcome to your shiny new Codespace for interacting with GitHub Models! We've got everything fired up and ready for you to explore AI Models hosted on Azure AI. 4 | 5 | The git history is a nearly-blank canvas; there's a single initial commit with the contents you're seeing right now - where you go from here is up to you! 6 | 7 | Everything you do here is contained within this one codespace. There is no repository on GitHub yet. When you’re ready, you can click "Publish Branch" and we’ll create your repository and push up your project. If you were just exploring and have no further need for this code, you can simply delete your codespace and it's gone forever. 8 | 9 | For more information about the Models available on GitHub Models, check out the [Marketplace](https://github.com/marketplace/models). 10 | 11 | When bringing your application to scale, you must provision resources and authenticate from Azure, not GitHub. Learn more about deploying models to meet your use case with Azure AI. 12 | 13 | ## Getting Started 14 | 15 | There are a few basic examples that are ready for you to run. 
You can find them in the [samples directory](samples/README.md). If you want to jump straight to your favorite language, you can find the examples in the following directories: 16 | 17 | - [JavaScript](samples/js/README.md) 18 | - [Python](samples/python/README.md) 19 | - [cURL](samples/curl/README.md) 20 | 21 | If you are already familiar with the GitHub Models service, you can start by running our Cookbook examples. You can find them in the [cookbooks directory](cookbooks/README.md). Here are the direct links to the available languages (at this point only Python): 22 | 23 | - [Python](cookbooks/python/README.md) 24 | 25 | ## Disclosures 26 | 27 | Remember that when interacting with a model you are experimenting with AI, so content mistakes are possible. 28 | 29 | The feature is subject to various limits (including requests per minute, requests per day, tokens per request, and concurrent requests) and is not designed for production use cases. 30 | 31 | GitHub Models uses [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety). These filters cannot be turned off as part of the GitHub Models experience. If you decide to employ models through a paid service, please configure your content filters to meet your requirements. 32 | 33 | This service is under GitHub’s [Pre-release Terms](https://docs.github.com/en/site-policy/github-terms/github-pre-release-license-terms). Your use of GitHub Models is subject to the following [Product Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/allprograms) and [Privacy Statement](https://www.microsoft.com/licensing/terms/product/PrivacyandSecurityTerms/MCA). Content within this Repository may be subject to additional license terms.
34 | -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/transformed_invoice_json/transformed_hampton_20190411_extracted.json: -------------------------------------------------------------------------------- 1 | { 2 | "hotel_information": { 3 | "name": "Hampton by Hilton Hamburg City Centre", 4 | "address": { 5 | "street": "Nordkanalstrasse 18", 6 | "city": "Hamburg", 7 | "country": "Germany", 8 | "postal_code": "20097" 9 | }, 10 | "contact": { 11 | "phone": "+49(0)40-302372-0", 12 | "fax": "+49(0)40-302372-100", 13 | "email": null, 14 | "website": null 15 | } 16 | }, 17 | "guest_information": { 18 | "company": "APIMEISTER CONSULTING GMBH", 19 | "address": "FRIEDRICHSTR. 123, 10117 BERLIN, GERMANY", 20 | "guest_name": "JENS WALTER" 21 | }, 22 | "invoice_information": { 23 | "invoice_number": "31611", 24 | "reservation_number": "92353607", 25 | "date": "2019-11-04", 26 | "room_number": "216", 27 | "check_in_date": "2019-11-04", 28 | "check_out_date": "2019-11-08" 29 | }, 30 | "charges": [ 31 | { 32 | "date": "2019-11-04", 33 | "description": "MC *5620", 34 | "charge": -476.0, 35 | "credit": null 36 | }, 37 | { 38 | "date": "2019-11-04", 39 | "description": "GUEST ROOM", 40 | "charge": 94.05, 41 | "credit": null 42 | }, 43 | { 44 | "date": "2019-11-05", 45 | "description": "BREAKFAST", 46 | "charge": 0.0, 47 | "credit": 94.05 48 | }, 49 | { 50 | "date": "2019-11-05", 51 | "description": "GUEST ROOM", 52 | "charge": 134.05, 53 | "credit": null 54 | }, 55 | { 56 | "date": "2019-11-05", 57 | "description": "BREAKFAST", 58 | "charge": 0.0, 59 | "credit": 94.05 60 | }, 61 | { 62 | "date": "2019-11-06", 63 | "description": "GUEST ROOM", 64 | "charge": 114.05, 65 | "credit": null 66 | }, 67 | { 68 | "date": "2019-11-06", 69 | "description": "BREAKFAST", 70 | "charge": 0.0, 71 | "credit": 94.05 72 | }, 73 | { 74 | "date": "2019-11-07", 75 | "description": "GUEST ROOM", 76 | "charge": 114.05, 77 | 
"credit": null 78 | }, 79 | { 80 | "date": "2019-11-07", 81 | "description": "BREAKFAST", 82 | "charge": 4.95, 83 | "credit": null 84 | } 85 | ], 86 | "totals_summary": { 87 | "currency": "EUR", 88 | "total_net": 476.0, 89 | "total_tax": 0.0, 90 | "total_gross": 476.0, 91 | "total_charge": 476.0, 92 | "total_credit": 188.1, 93 | "balance_due": 287.9 94 | }, 95 | "taxes": [ 96 | { 97 | "tax_type": "VAT 19%", 98 | "tax_rate": "19%", 99 | "net_amount": 16.64, 100 | "tax_amount": 3.16, 101 | "gross_amount": 19.8 102 | }, 103 | { 104 | "tax_type": "VAT 7%", 105 | "tax_rate": "7%", 106 | "net_amount": 426.36, 107 | "tax_amount": 29.84, 108 | "gross_amount": 456.2 109 | } 110 | ] 111 | } -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/extracted_invoice_json/madison_folio_g_cp_efolio5895707_extracted.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "hotel_information": { 4 | "name": "MADISON Hotel GmbH", 5 | "address": "Schaartvenweg 4, 20459 Hamburg", 6 | "contact": { 7 | "phone": "+49.40.37 666-0", 8 | "fax": "+49.40.37 666-137", 9 | "email": "info@madisonhotel.de", 10 | "website": "madisonhotel.de" 11 | }, 12 | "managing_directors": "Marlies Head, Thomas Kleinertz", 13 | "registration": { 14 | "court": "AG Hamburg HRB 47281", 15 | "VAT_ID": "DE118 696 407" 16 | }, 17 | "bank_details": { 18 | "bank": "HypoVereinsbank", 19 | "IBAN": "DE84 2003 0000 0003 6027 11", 20 | "BIC": "HYVEDEMM300" 21 | } 22 | }, 23 | "guest_information": { 24 | "company": "APfmeister Consulting GmbH", 25 | "address": "Friedrichstr. 
123, 10117 Berlin", 26 | "guest_name": "Herr Jens Walter" 27 | }, 28 | "invoice_information": { 29 | "invoice_number": "505050 /", 30 | "date": "04.07.19", 31 | "room_number": "203", 32 | "check_in_date": "30.06.19", 33 | "check_out_date": "04.07.19", 34 | "page": "1 of 1", 35 | "user_id": "RIL" 36 | }, 37 | "charges": [ 38 | { 39 | "date": "30.06.19", 40 | "description": "Übernachtung exklusive Frühstück*", 41 | "charge": 110.0, 42 | "credit": null 43 | }, 44 | { 45 | "date": "01.07.19", 46 | "description": "Übernachtung exklusive Frühstück*", 47 | "charge": 110.0, 48 | "credit": null 49 | }, 50 | { 51 | "date": "02.07.19", 52 | "description": "Übernachtung exklusive Frühstück*", 53 | "charge": 110.0, 54 | "credit": null 55 | }, 56 | { 57 | "date": "03.07.19", 58 | "description": "Übernachtung exklusive Frühstück*", 59 | "charge": 110.0, 60 | "credit": null 61 | }, 62 | { 63 | "date": "04.07.19", 64 | "description": "Mastercard IFC", 65 | "charge": null, 66 | "credit": 440.0 67 | } 68 | ], 69 | "tax_details": { 70 | "total_net_amount": 411.21, 71 | "total_vat": 28.79, 72 | "total_gross_amount": 440.0, 73 | "vat_7_percent": { 74 | "net_amount": 411.21, 75 | "vat_amount": 28.79, 76 | "gross_amount": 440.0 77 | } 78 | }, 79 | "total": { 80 | "amount": 440.0, 81 | "balance": 0.0, 82 | "currency": "EUR" 83 | }, 84 | "payment_information": { 85 | "credit_card_number": "XXXXXXXXXXXX0502", 86 | "authorization_code": "114540", 87 | "transaction_number": "440.00", 88 | "approved_amount": "440.00", 89 | "terminal_id": "69264691", 90 | "approval_code": "114540", 91 | "receipt_number": "8250", 92 | "verification_number": "XXXXXX", 93 | "contract_number": "156648932" 94 | } 95 | } 96 | ] -------------------------------------------------------------------------------- /samples/js/mistralai/tools.js: -------------------------------------------------------------------------------- 1 | import MistralClient from '@mistralai/mistralai'; 2 | 3 | const token = 
process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Mistral models from the GitHub Models service */ 7 | const modelName = "mistral-ai/Mistral-small"; 8 | 9 | 10 | function getFlightInfo({originCity, destinationCity}){ 11 | if (originCity === "Seattle" && destinationCity === "Miami"){ 12 | return JSON.stringify({ 13 | airline: "Delta", 14 | flight_number: "DL123", 15 | flight_date: "May 7th, 2024", 16 | flight_time: "10:00AM" 17 | }); 18 | } 19 | return JSON.stringify({error: "No flights found between the cities"}); 20 | } 21 | 22 | const namesToFunctions = { 23 | getFlightInfo: (data) => 24 | getFlightInfo(data), 25 | }; 26 | 27 | export async function main() { 28 | 29 | const tool = { 30 | "type": "function", 31 | "function": { 32 | name: "getFlightInfo", 33 | description: "Returns information about the next flight between two cities. " + 34 | "This includes the name of the airline, flight number and the date and time " + 35 | "of the next flight", 36 | parameters: { 37 | "type": "object", 38 | "properties": { 39 | "originCity": { 40 | "type": "string", 41 | "description": "The name of the city where the flight originates", 42 | }, 43 | "destinationCity": { 44 | "type": "string", 45 | "description": "The flight destination city", 46 | }, 47 | }, 48 | "required": [ 49 | "originCity", 50 | "destinationCity" 51 | ], 52 | } 53 | } 54 | }; 55 | 56 | const client = new MistralClient(token, endpoint); 57 | 58 | let messages = [ 59 | { role:"system", content: "You are an assistant that helps users find flight information." }, 60 | { role:"user", content: "I'm interested in going to Miami. What is the next flight there from Seattle?" 
} 61 | ]; 62 | 63 | let response = await client.chat({ 64 | model: modelName, 65 | messages: messages, 66 | tools: [tool] 67 | }); 68 | 69 | if (response.choices[0].finish_reason === "tool_calls"){ 70 | // Append the model response to the chat history 71 | messages.push(response.choices[0].message); 72 | 73 | // We expect a single tool call 74 | if (response.choices[0].message && response.choices[0].message.tool_calls.length === 1){ 75 | const toolCall = response.choices[0].message.tool_calls[0]; 76 | // Parse the function call arguments and call the function 77 | const functionArgs = JSON.parse(toolCall.function.arguments); 78 | console.log(`Calling function \`${toolCall.function.name}\` with arguments ${toolCall.function.arguments}`); 79 | const callableFunc = namesToFunctions[toolCall.function.name]; 80 | const functionReturn = callableFunc(functionArgs); 81 | console.log(`Function returned = ${functionReturn}`); 82 | 83 | // Append the function call result to the chat history 84 | messages.push({ 85 | role: "tool", 86 | name: toolCall.function.name, 87 | content: functionReturn, 88 | tool_call_id: toolCall.id, 89 | }); 90 | 91 | // Get another response from the model 92 | response = await client.chat({ 93 | model: modelName, 94 | messages: messages, 95 | tools: [tool] 96 | }); 97 | 98 | console.log(`Model response = ${response.choices[0].message.content}`); 99 | } 100 | } 101 | } 102 | 103 | main().catch((err) => { 104 | console.error("The sample encountered an error:", err); 105 | }); 106 | -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/extracted_invoice_json/madison-502875_extracted.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "Hotel Information": { 4 | "Name": "MADISON Hotel GmbH", 5 | "Address": "Schaarsteinweg 4, 20459 Hamburg", 6 | "Contact": { 7 | "Phone": "+49.40.37 666-0", 8 | "Fax": "+49.40.37 666-137", 9 | "Email": 
"info@madisonhotel.de", 10 | "Website": "madisonhotel.de" 11 | }, 12 | "Geschäftsführer": "Marlies Head, Thomas Kleinertz", 13 | "AG Hamburg HRB": "47881", 14 | "VAT ID": "DE118 690 407", 15 | "HypoVereinsbank": { 16 | "BLZ": "200 300 00", 17 | "Konto-Nr": "360 27 11", 18 | "IBAN": "DE46 20030000 0036027111", 19 | "BIC": "HYVEDEMM300" 20 | } 21 | }, 22 | "Guest Information": { 23 | "Name": "Herr Jens Walter", 24 | "Company": "APIMeister Consulting GmbH", 25 | "Address": "Friedrichstr. 123, 10117 Berlin" 26 | }, 27 | "Invoice Information": { 28 | "Rechnungs-Nr.": "502875", 29 | "Datum": "14.06.19", 30 | "Zimmer": "426", 31 | "Anreise": "10.06.19", 32 | "Abreise": "14.06.19", 33 | "Seite": "1 of 1", 34 | "Benutzer ID": "BD" 35 | }, 36 | "Charges": { 37 | "Items": [ 38 | { 39 | "Datum": "10.06.19", 40 | "Beschreibung": "Übernachtung exklusive Frühstück", 41 | "Belastung": "110.00", 42 | "Entlastung": null 43 | }, 44 | { 45 | "Datum": "11.06.19", 46 | "Beschreibung": "Übernachtung exklusive Frühstück", 47 | "Belastung": "110.00", 48 | "Entlastung": null 49 | }, 50 | { 51 | "Datum": "12.06.19", 52 | "Beschreibung": "Übernachtung exklusive Frühstück", 53 | "Belastung": "110.00", 54 | "Entlastung": null 55 | }, 56 | { 57 | "Datum": "13.06.19", 58 | "Beschreibung": "Übernachtung exklusive Frühstück", 59 | "Belastung": "110.00", 60 | "Entlastung": null 61 | }, 62 | { 63 | "Datum": "14.06.19", 64 | "Beschreibung": "Mastercard IFC", 65 | "Belastung": "440.00", 66 | "Entlastung": "440.00" 67 | } 68 | ], 69 | "Total": { 70 | "Netto EUR": "411.21", 71 | "MwSt. 
EUR": "28.79", 72 | "Brutto EUR": "440.00" 73 | }, 74 | "Saldo": { 75 | "Amount": "0.00", 76 | "Currency": "EUR" 77 | } 78 | }, 79 | "Tax Information": { 80 | "Finanzamt": "Hamburg Mitte", 81 | "Steuernummer": "47/841/01228" 82 | }, 83 | "KreditkartenDetails": { 84 | "Vertragspartnernummer": "15694832", 85 | "Kreditkartennummer": "XXXXXXXXXXXX0502", 86 | "Verfallsdatum": "XX/XX", 87 | "Terminal ID": "89264892", 88 | "Beleg Nr.": "27142", 89 | "Transaktionsbetrag": "440.00", 90 | "Genehmigter Betrag": "440.00", 91 | "Genehmigungscode": "366417" 92 | } 93 | }, 94 | { 95 | "hotel_information": { 96 | "name": "THE MADISON", 97 | "location": "HAMBURG" 98 | }, 99 | "guest_information": null, 100 | "invoice_information": null, 101 | "room_charges": null, 102 | "taxes": null, 103 | "total_charges": null 104 | } 105 | ] -------------------------------------------------------------------------------- /samples/js/openai/tools.js: -------------------------------------------------------------------------------- 1 | import OpenAI from "openai"; 2 | 3 | const token = process.env["GITHUB_TOKEN"]; 4 | const endpoint = "https://models.github.ai/inference/"; 5 | 6 | /* Pick one of the Azure OpenAI models from the GitHub Models service */ 7 | const modelName = "openai/gpt-4o-mini"; 8 | 9 | function getFlightInfo({originCity, destinationCity}){ 10 | if (originCity === "Seattle" && destinationCity === "Miami"){ 11 | return JSON.stringify({ 12 | airline: "Delta", 13 | flight_number: "DL123", 14 | flight_date: "May 7th, 2024", 15 | flight_time: "10:00AM" 16 | }); 17 | } 18 | return JSON.stringify({error: "No flights found between the cities"}); 19 | } 20 | 21 | const namesToFunctions = { 22 | getFlightInfo: (data) => 23 | getFlightInfo(data), 24 | }; 25 | 26 | export async function main() { 27 | 28 | const tool = { 29 | "type": "function", 30 | "function": { 31 | name: "getFlightInfo", 32 | description: "Returns information about the next flight between two cities." 
+ 33 | " This includes the name of the airline, flight number and the date and time " + 34 | "of the next flight", 35 | parameters: { 36 | "type": "object", 37 | "properties": { 38 | "originCity": { 39 | "type": "string", 40 | "description": "The name of the city where the flight originates", 41 | }, 42 | "destinationCity": { 43 | "type": "string", 44 | "description": "The flight destination city", 45 | }, 46 | }, 47 | "required": [ 48 | "originCity", 49 | "destinationCity" 50 | ], 51 | } 52 | } 53 | }; 54 | 55 | const client = new OpenAI({ baseURL: endpoint, apiKey: token }); 56 | 57 | let messages=[ 58 | {role: "system", content: "You are an assistant that helps users find flight information."}, 59 | {role: "user", content: "I'm interested in going to Miami. What is the next flight there from Seattle?"}, 60 | ]; 61 | 62 | let response = await client.chat.completions.create({ 63 | messages: messages, 64 | tools: [tool], 65 | model: modelName 66 | }); 67 | 68 | // We expect the model to ask for a tool call 69 | if (response.choices[0].finish_reason === "tool_calls"){ 70 | 71 | // Append the model response to the chat history 72 | messages.push(response.choices[0].message); 73 | 74 | // We expect a single tool call 75 | if (response.choices[0].message && response.choices[0].message.tool_calls.length === 1){ 76 | 77 | const toolCall = response.choices[0].message.tool_calls[0]; 78 | // We expect the tool to be a function call 79 | if (toolCall.type === "function"){ 80 | 81 | // Parse the function call arguments and call the function 82 | const functionArgs = JSON.parse(toolCall.function.arguments); 83 | console.log(`Calling function \`${toolCall.function.name}\` with arguments ${toolCall.function.arguments}`); 84 | const callableFunc = namesToFunctions[toolCall.function.name]; 85 | const functionReturn = callableFunc(functionArgs); 86 | console.log(`Function returned = ${functionReturn}`); 87 | 88 | // Append the 
function call result to the chat history 89 | messages.push( 90 | { 91 | "tool_call_id": toolCall.id, 92 | "role": "tool", 93 | "name": toolCall.function.name, 94 | "content": functionReturn, 95 | } 96 | ) 97 | 98 | response = await client.chat.completions.create({ 99 | messages: messages, 100 | tools: [tool], 101 | model: modelName 102 | }); 103 | console.log(`Model response = ${response.choices[0].message.content}`); 104 | } 105 | } 106 | } 107 | } 108 | 109 | main().catch((err) => { 110 | console.error("The sample encountered an error:", err); 111 | }); 112 | -------------------------------------------------------------------------------- /samples/python/openai/tools.py: -------------------------------------------------------------------------------- 1 | """A language model can be given a set of tools it can invoke, 2 | for running specific actions depending on the context of the conversation. 3 | This sample demonstrates how to define a function tool and how to act on a request from the model to invoke it.""" 4 | 5 | import os 6 | import json 7 | from openai import OpenAI 8 | 9 | token = os.environ["GITHUB_TOKEN"] 10 | endpoint = "https://models.github.ai/inference" 11 | 12 | # Pick one of the Azure OpenAI models from the GitHub Models service 13 | model_name = "openai/gpt-4o-mini" 14 | 15 | # Create a client 16 | client = OpenAI( 17 | base_url=endpoint, 18 | api_key=token, 19 | ) 20 | 21 | 22 | # Define a function that returns flight information between two cities (mock implementation) 23 | def get_flight_info(origin_city: str, destination_city: str): 24 | if origin_city == "Seattle" and destination_city == "Miami": 25 | return json.dumps( 26 | { 27 | "airline": "Delta", 28 | "flight_number": "DL123", 29 | "flight_date": "May 7th, 2024", 30 | "flight_time": "10:00AM", 31 | } 32 | ) 33 | return json.dumps({"error": "No flights found between the cities"}) 34 | 35 | 36 | # Define a function tool that the model can ask to invoke in order to retrieve flight 
information 37 | tool = { 38 | "type": "function", 39 | "function": { 40 | "name": "get_flight_info", 41 | "description": """Returns information about the next flight between two cities. 42 | This includes the name of the airline, flight number and the date and time 43 | of the next flight""", 44 | "parameters": { 45 | "type": "object", 46 | "properties": { 47 | "origin_city": { 48 | "type": "string", 49 | "description": "The name of the city where the flight originates", 50 | }, 51 | "destination_city": { 52 | "type": "string", 53 | "description": "The flight destination city", 54 | }, 55 | }, 56 | "required": ["origin_city", "destination_city"], 57 | }, 58 | }, 59 | } 60 | 61 | 62 | messages = [ 63 | { 64 | "role": "system", 65 | "content": "You are an assistant that helps users find flight information.", 66 | }, 67 | { 68 | "role": "user", 69 | "content": "I'm interested in going to Miami. What is the next flight there from Seattle?", 70 | }, 71 | ] 72 | 73 | response = client.chat.completions.create( 74 | messages=messages, 75 | tools=[tool], 76 | model=model_name, 77 | ) 78 | 79 | # We expect the model to ask for a tool call 80 | if response.choices[0].finish_reason == "tool_calls": 81 | 82 | # Append the model response to the chat history 83 | messages.append(response.choices[0].message) 84 | 85 | # We expect a single tool call 86 | if ( 87 | response.choices[0].message.tool_calls 88 | and len(response.choices[0].message.tool_calls) == 1 89 | ): 90 | 91 | tool_call = response.choices[0].message.tool_calls[0] 92 | 93 | # We expect the tool to be a function call 94 | if tool_call.type == "function": 95 | 96 | # Parse the function call arguments and call the function 97 | function_args = json.loads(tool_call.function.arguments.replace("'", '"')) 98 | print( 99 | f"Calling function `{tool_call.function.name}` with arguments {function_args}" 100 | ) 101 | callable_func = locals()[tool_call.function.name] 102 | function_return = callable_func(**function_args) 103 | 
print(f"Function returned = {function_return}") 104 | 105 | # Append the function call result to the chat history 106 | messages.append( 107 | { 108 | "tool_call_id": tool_call.id, 109 | "role": "tool", 110 | "name": tool_call.function.name, 111 | "content": function_return, 112 | } 113 | ) 114 | 115 | # Get another response from the model 116 | response = client.chat.completions.create( 117 | messages=messages, 118 | tools=[tool], 119 | model=model_name, 120 | ) 121 | 122 | print(f"Model response = {response.choices[0].message.content}") -------------------------------------------------------------------------------- /samples/python/mistralai/tools.py: -------------------------------------------------------------------------------- 1 | """A language model can be given a set of tools it can invoke, 2 | for running specific actions depending on the context of the conversation. 3 | This sample demonstrates how to define a function tool 4 | and how to act on a request from the model to invoke it.""" 5 | import os 6 | import json 7 | from mistralai.client import MistralClient 8 | from mistralai.models.chat_completion import ChatMessage, Function 9 | 10 | token = os.environ["GITHUB_TOKEN"] 11 | endpoint = "https://models.github.ai/inference" 12 | 13 | # Pick one of the Mistral models from the GitHub Models service 14 | model_name = "Mistral-large" 15 | 16 | 17 | # Define a function that returns flight 18 | # information between two cities (mock implementation) 19 | def get_flight_info(origin_city: str, destination_city: str): 20 | if origin_city == "Seattle" and destination_city == "Miami": 21 | return json.dumps({ 22 | "airline": "Delta", 23 | "flight_number": "DL123", 24 | "flight_date": "May 7th, 2024", 25 | "flight_time": "10:00AM"}) 26 | return json.dumps({"error": "No flights found between the cities"}) 27 | 28 | 29 | # Define a function tool that the model 30 | # can ask to invoke in order to retrieve flight information 31 | tool = { 32 | "type": "function", 33 | 
"function": Function( 34 | name="get_flight_info", 35 | description="""Returns information about the next flight 36 | between two cities. 37 | This includes the name of the airline, 38 | flight number and the date and time 39 | of the next flight""", 40 | parameters={ 41 | "type": "object", 42 | "properties": { 43 | "origin_city": { 44 | "type": "string", 45 | "description": ("The name of the city" 46 | " where the flight originates"), 47 | }, 48 | "destination_city": { 49 | "type": "string", 50 | "description": "The flight destination city", 51 | }, 52 | }, 53 | "required": [ 54 | "origin_city", 55 | "destination_city" 56 | ], 57 | } 58 | ) 59 | } 60 | 61 | 62 | client = MistralClient(api_key=token, endpoint=endpoint) 63 | 64 | messages = [ 65 | ChatMessage( 66 | role="system", 67 | content="You are an assistant that helps users find flight information."), 68 | ChatMessage( 69 | role="user", 70 | content=("I'm interested in going to Miami. What is " 71 | "the next flight there from Seattle?")), 72 | ] 73 | 74 | response = client.chat( 75 | messages=messages, 76 | tools=[tool], 77 | model=model_name, 78 | ) 79 | 80 | # We expect the model to ask for a tool call 81 | if response.choices[0].finish_reason == "tool_calls": 82 | 83 | # Append the model response to the chat history 84 | messages.append(response.choices[0].message) 85 | 86 | # We expect a single tool call 87 | if response.choices[0].message.tool_calls and len( 88 | response.choices[0].message.tool_calls) == 1: 89 | 90 | tool_call = response.choices[0].message.tool_calls[0] 91 | 92 | # We expect the tool to be a function call 93 | if tool_call.type == "function": 94 | 95 | # Parse the function call arguments and call the function 96 | function_args = json.loads( 97 | tool_call.function.arguments.replace("'", '"')) 98 | print(f"Calling function `{tool_call.function.name}` " 99 | f"with arguments {function_args}") 100 | callable_func = locals()[tool_call.function.name] 101 | function_return = 
callable_func(**function_args) 102 | print(f"Function returned = {function_return}") 103 | 104 | # Append the function call result to the chat history 105 | messages.append( 106 | ChatMessage( 107 | role="tool", 108 | name=tool_call.function.name, 109 | content=function_return, 110 | tool_call_id=tool_call.id, 111 | ) 112 | ) 113 | 114 | # Get another response from the model 115 | response = client.chat( 116 | messages=messages, 117 | tools=[tool], 118 | model=model_name, 119 | ) 120 | 121 | print(f"Model response = {response.choices[0].message.content}") 122 | -------------------------------------------------------------------------------- /samples/python/azure_ai_inference/tools.py: -------------------------------------------------------------------------------- 1 | """A language model can be given a set of tools it can invoke, 2 | for running specific actions depending on the context of the conversation. 3 | This sample demonstrates how to define a function tool and how to act on a request from the model to invoke it.""" 4 | 5 | import os 6 | import json 7 | from azure.ai.inference import ChatCompletionsClient 8 | from azure.ai.inference.models import ( 9 | AssistantMessage, 10 | ChatCompletionsToolCall, 11 | ChatCompletionsToolDefinition, 12 | CompletionsFinishReason, 13 | FunctionDefinition, 14 | SystemMessage, 15 | ToolMessage, 16 | UserMessage, 17 | ) 18 | from azure.core.credentials import AzureKeyCredential 19 | 20 | assert "GITHUB_TOKEN" in os.environ, "Please set the GITHUB_TOKEN environment variable." 21 | token = os.environ["GITHUB_TOKEN"] 22 | 23 | endpoint = "https://models.github.ai/inference" 24 | 25 | # By using the Azure AI Inference SDK, you can easily experiment with different models 26 | # by modifying the value of `model_name` in the code below. For this code sample 27 | # you need a model supporting tools. 
The following compatible models are 28 | # available in the GitHub Models service: 29 | # 30 | # Cohere: Cohere-command-r, Cohere-command-r-plus 31 | # Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 32 | # Azure OpenAI: gpt-4o-mini, gpt-4o 33 | model_name = "openai/gpt-4o-mini" 34 | 35 | client = ChatCompletionsClient( 36 | endpoint=endpoint, 37 | credential=AzureKeyCredential(token), 38 | ) 39 | 40 | def get_flight_info(origin_city: str, destination_city: str): 41 | if origin_city == "Seattle" and destination_city == "Miami": 42 | return json.dumps( 43 | { 44 | "airline": "Delta", 45 | "flight_number": "DL123", 46 | "flight_date": "May 7th, 2024", 47 | "flight_time": "10:00AM", 48 | } 49 | ) 50 | return json.dumps({"error": "No flights found between the cities"}) 51 | 52 | flight_info = ChatCompletionsToolDefinition( 53 | function=FunctionDefinition( 54 | name="get_flight_info", 55 | description="""Returns information about the next flight between two cities. 56 | This includes the name of the airline, flight number and the date and 57 | time of the next flight""", 58 | parameters={ 59 | "type": "object", 60 | "properties": { 61 | "origin_city": { 62 | "type": "string", 63 | "description": "The name of the city where the flight originates", 64 | }, 65 | "destination_city": { 66 | "type": "string", 67 | "description": "The flight destination city", 68 | }, 69 | }, 70 | "required": ["origin_city", "destination_city"], 71 | }, 72 | ) 73 | ) 74 | 75 | 76 | messages = [ 77 | SystemMessage(content="You are an assistant that helps users find flight information."), 78 | UserMessage( 79 | content="I'm interested in going to Miami. What is the next flight there from Seattle?" 
80 | ), 81 | ] 82 | 83 | response = client.complete( 84 | messages=messages, 85 | tools=[flight_info], 86 | model=model_name, 87 | ) 88 | 89 | if response.choices[0].finish_reason == CompletionsFinishReason.TOOL_CALLS: 90 | 91 | messages.append(AssistantMessage(tool_calls=response.choices[0].message.tool_calls)) 92 | 93 | if ( 94 | response.choices[0].message.tool_calls 95 | and len(response.choices[0].message.tool_calls) == 1 96 | ): 97 | 98 | tool_call = response.choices[0].message.tool_calls[0] 99 | 100 | if isinstance(tool_call, ChatCompletionsToolCall): 101 | 102 | function_args = json.loads(tool_call.function.arguments.replace("'", '"')) 103 | print( 104 | f"Calling function `{tool_call.function.name}` with arguments {function_args}" 105 | ) 106 | callable_func = locals()[tool_call.function.name] 107 | function_return = callable_func(**function_args) 108 | print(f"Function returned = {function_return}") 109 | 110 | messages.append( 111 | ToolMessage(tool_call_id=tool_call.id, content=function_return) 112 | ) 113 | 114 | response = client.complete( 115 | messages=messages, 116 | tools=[flight_info], 117 | model=model_name, 118 | ) 119 | 120 | print(f"Model response = {response.choices[0].message.content}") -------------------------------------------------------------------------------- /samples/python/openai/embeddings_getting_started.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Getting started with GitHub Models - OpenAI SDK and embeddings\n", 8 | "\n", 9 | "## 1. Personal access token\n", 10 | "\n", 11 | "A personal access token is made available in the Codespaces environment in the `GITHUB_TOKEN` environment variable. " 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "## 2. 
Install dependencies" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "%pip install openai --quiet\n", 28 | "%pip install python-dotenv --quiet" 29 | ] 30 | }, 31 | { 32 | "cell_type": "markdown", 33 | "metadata": {}, 34 | "source": [ 35 | "\n", 36 | "\n", 37 | "## 3. Set environment variables and create the client\n", 38 | "\n", 39 | "\n" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": null, 45 | "metadata": {}, 46 | "outputs": [], 47 | "source": [ 48 | "import os\n", 49 | "import dotenv\n", 50 | "from openai import OpenAI\n", 51 | "\n", 52 | "dotenv.load_dotenv()\n", 53 | "\n", 54 | "if not os.getenv(\"GITHUB_TOKEN\"):\n", 55 | " raise ValueError(\"GITHUB_TOKEN is not set\")\n", 56 | "\n", 57 | "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"GITHUB_TOKEN\")\n", 58 | "os.environ[\"OPENAI_BASE_URL\"] = \"https://models.github.ai/inference\"\n", 59 | "\n", 60 | "client = OpenAI()\n" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "## 4. Embed a string\n", 68 | "\n", 69 | "This is just calling the `embeddings.create` endpoint with a simple prompt.\n", 70 | "\n", 71 | "\n" 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "metadata": {}, 78 | "outputs": [], 79 | "source": [ 80 | "model_name = \"text-embedding-3-small\"\n", 81 | "\n", 82 | "response = client.embeddings.create(\n", 83 | " model=model_name,\n", 84 | " input=[\"Hello, world!\"]\n", 85 | ")\n", 86 | "embeddings = response.data[0].embedding\n", 87 | "print(len(embeddings))\n", 88 | "print(embeddings[:10])" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": {}, 94 | "source": [ 95 | "As you can see, the response from the `text-embedding-3-small` model contains a vector of length 1536, which is the embedding of the input string. Different models have different embedding sizes. 
Please consult the [model documentation](https://github.com/marketplace/models/) for more information about the embedding model you are using. \n", 96 | "\n", 97 | "## 5. Embed a list of strings\n", 98 | "\n", 99 | "To save on API calls, you can embed a list of strings in a single call. The response will contain a list of embeddings, one for each input string.\n" 100 | ] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "execution_count": null, 105 | "metadata": {}, 106 | "outputs": [], 107 | "source": [ 108 | "model_name = \"text-embedding-3-small\"\n", 109 | "inputs = [\"Hello, world!\", \"How are you?\", \"What's the weather like?\"]\n", 110 | "\n", 111 | "response = client.embeddings.create(\n", 112 | "    model=model_name,\n", 113 | "    input=inputs\n", 114 | ")\n", 115 | "for data in response.data:\n", 116 | "    print(data.embedding[:10])" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "## Next Steps\n", 124 | "\n", 125 | "A common use case for embeddings in generative AI is to use them to implement Retrieval Augmented Generation (RAG) systems.\n", 126 | "\n", 127 | "See the cookbook [rag_getting_started](../../../cookbooks/python/llamaindex/rag_getting_started.ipynb) for an example of how to do this using the LlamaIndex framework.\n", 128 | "\n", 129 | "To learn more about what you can do with GitHub Models using the OpenAI Python API, [check out these cookbooks](../../../cookbooks/python/openai/README.md).\n" 130 | ] 131 | } 132 | ], 133 | "metadata": { 134 | "kernelspec": { 135 | "display_name": "Python 3", 136 | "language": "python", 137 | "name": "python3" 138 | }, 139 | "language_info": { 140 | "codemirror_mode": { 141 | "name": "ipython", 142 | "version": 3 143 | }, 144 | "file_extension": ".py", 145 | "mimetype": "text/x-python", 146 | "name": "python", 147 | "nbconvert_exporter": "python", 148 | "pygments_lexer": "ipython3", 149 | "version": "3.11.9" 150 | } 151 | }, 152 | "nbformat": 4, 153 | 
"nbformat_minor": 2 154 | } 155 | -------------------------------------------------------------------------------- /samples/js/azure_ai_inference/tools.js: -------------------------------------------------------------------------------- 1 | import ModelClient from "@azure-rest/ai-inference"; 2 | import { AzureKeyCredential } from "@azure/core-auth"; 3 | 4 | const token = process.env["GITHUB_TOKEN"]; 5 | const endpoint = "https://models.github.ai/inference/"; 6 | 7 | /* By using the Azure AI Inference SDK, you can easily experiment with different models 8 | by modifying the value of `modelName` in the code below. For this code sample 9 | you need a model supporting tools. The following compatible models are 10 | available in the GitHub Models service: 11 | 12 | Cohere: Cohere-command-r, Cohere-command-r-plus 13 | Mistral AI: Mistral-large, Mistral-large-2407, Mistral-Nemo, Mistral-small 14 | Azure OpenAI: gpt-4o-mini, gpt-4o */ 15 | const modelName = "openai/gpt-4o-mini"; 16 | 17 | function getFlightInfo({originCity, destinationCity}){ 18 | if (originCity === "Seattle" && destinationCity === "Miami"){ 19 | return JSON.stringify({ 20 | airline: "Delta", 21 | flight_number: "DL123", 22 | flight_date: "May 7th, 2024", 23 | flight_time: "10:00AM" 24 | }); 25 | } 26 | return JSON.stringify({error: "No flights found between the cities"}); 27 | } 28 | 29 | const namesToFunctions = { 30 | getFlightInfo: (data) => 31 | getFlightInfo(data), 32 | }; 33 | 34 | export async function main() { 35 | 36 | const tool = { 37 | "type": "function", 38 | "function": { 39 | name: "getFlightInfo", 40 | description: "Returns information about the next flight between two cities." 
+ 41 | " This includes the name of the airline, flight number and the date and time " + 42 | "of the next flight", 43 | parameters: { 44 | "type": "object", 45 | "properties": { 46 | "originCity": { 47 | "type": "string", 48 | "description": "The name of the city where the flight originates", 49 | }, 50 | "destinationCity": { 51 | "type": "string", 52 | "description": "The flight destination city", 53 | }, 54 | }, 55 | "required": [ 56 | "originCity", 57 | "destinationCity" 58 | ], 59 | } 60 | } 61 | }; 62 | 63 | const client = new ModelClient(endpoint, new AzureKeyCredential(token)); 64 | 65 | let messages=[ 66 | {role: "system", content: "You are an assistant that helps users find flight information."}, 67 | {role: "user", content: "I'm interested in going to Miami. What is the next flight there from Seattle?"}, 68 | ]; 69 | 70 | let response = await client.path("/chat/completions").post({ 71 | body: { 72 | messages: messages, 73 | tools: [tool], 74 | model: modelName 75 | } 76 | }); 77 | if (response.status !== "200") { 78 | throw response.body.error; 79 | } 80 | 81 | // We expect the model to ask for a tool call 82 | if (response.body.choices[0].finish_reason === "tool_calls"){ 83 | 84 | // Append the model response to the chat history 85 | messages.push(response.body.choices[0].message); 86 | 87 | // We expect a single tool call 88 | if (response.body.choices[0].message && response.body.choices[0].message.tool_calls.length === 1){ 89 | 90 | const toolCall = response.body.choices[0].message.tool_calls[0]; 91 | // We expect the tool to be a function call 92 | if (toolCall.type === "function"){ 93 | 94 | // Parse the function call arguments and call the function 95 | const functionArgs = JSON.parse(toolCall.function.arguments); 96 | console.log(`Calling function \`${toolCall.function.name}\` with arguments ${toolCall.function.arguments}`); 97 | const callableFunc = 
namesToFunctions[toolCall.function.name]; 98 | const functionReturn = callableFunc(functionArgs); 99 | console.log(`Function returned = ${functionReturn}`); 100 | 101 | // Append the function call result to the chat history 102 | messages.push( 103 | { 104 | "tool_call_id": toolCall.id, 105 | "role": "tool", 106 | "name": toolCall.function.name, 107 | "content": functionReturn, 108 | } 109 | ); 110 | 111 | response = await client.path("/chat/completions").post({ 112 | body: { 113 | messages: messages, 114 | tools: [tool], 115 | model: modelName 116 | } 117 | }); 118 | if (response.status !== "200") { 119 | throw response.body.error; 120 | } 121 | console.log(`Model response = ${response.body.choices[0].message.content}`); 122 | } 123 | } 124 | } 125 | } 126 | 127 | main().catch((err) => { 128 | console.error("The sample encountered an error:", err); 129 | }); -------------------------------------------------------------------------------- /cookbooks/python/openai/data/example_events_openapi.json: -------------------------------------------------------------------------------- 1 | { 2 | "openapi": "3.0.0", 3 | "info": { 4 | "version": "1.0.0", 5 | "title": "Event Management API", 6 | "description": "An API for managing event data" 7 | }, 8 | "paths": { 9 | "/events": { 10 | "get": { 11 | "summary": "List all events", 12 | "operationId": "listEvents", 13 | "responses": { 14 | "200": { 15 | "description": "A list of events", 16 | "content": { 17 | "application/json": { 18 | "schema": { 19 | "type": "array", 20 | "items": { 21 | "$ref": "#/components/schemas/Event" 22 | } 23 | } 24 | } 25 | } 26 | } 27 | } 28 | }, 29 | "post": { 30 | "summary": "Create a new event", 31 | "operationId": "createEvent", 32 | "requestBody": { 33 | "required": true, 34 | "content": { 35 | "application/json": { 36 | "schema": { 37 | "$ref": "#/components/schemas/Event" 38 | } 39 | } 40 | } 41 | }, 42 | "responses": { 43 | "201": { 44 | "description": "The event was created", 45 | "content": { 
46 | "application/json": { 47 | "schema": { 48 | "$ref": "#/components/schemas/Event" 49 | } 50 | } 51 | } 52 | } 53 | } 54 | } 55 | }, 56 | "/events/{id}": { 57 | "get": { 58 | "summary": "Retrieve an event by ID", 59 | "operationId": "getEventById", 60 | "parameters": [ 61 | { 62 | "name": "id", 63 | "in": "path", 64 | "required": true, 65 | "schema": { 66 | "type": "string" 67 | } 68 | } 69 | ], 70 | "responses": { 71 | "200": { 72 | "description": "The event", 73 | "content": { 74 | "application/json": { 75 | "schema": { 76 | "$ref": "#/components/schemas/Event" 77 | } 78 | } 79 | } 80 | } 81 | } 82 | }, 83 | "delete": { 84 | "summary": "Delete an event by ID", 85 | "operationId": "deleteEvent", 86 | "parameters": [ 87 | { 88 | "name": "id", 89 | "in": "path", 90 | "required": true, 91 | "schema": { 92 | "type": "string" 93 | } 94 | } 95 | ], 96 | "responses": { 97 | "204": { 98 | "description": "The event was deleted" 99 | } 100 | } 101 | }, 102 | "patch": { 103 | "summary": "Update an event's details by ID", 104 | "operationId": "updateEventDetails", 105 | "parameters": [ 106 | { 107 | "name": "id", 108 | "in": "path", 109 | "required": true, 110 | "schema": { 111 | "type": "string" 112 | } 113 | } 114 | ], 115 | "requestBody": { 116 | "required": true, 117 | "content": { 118 | "application/json": { 119 | "schema": { 120 | "type": "object", 121 | "properties": { 122 | "name": { 123 | "type": "string" 124 | }, 125 | "date": { 126 | "type": "string", 127 | "format": "date-time" 128 | }, 129 | "location": { 130 | "type": "string" 131 | } 132 | }, 133 | "required": [ 134 | "name", 135 | "date", 136 | "location" 137 | ] 138 | } 139 | } 140 | } 141 | }, 142 | "responses": { 143 | "200": { 144 | "description": "The event's details were updated", 145 | "content": { 146 | "application/json": { 147 | "schema": { 148 | "$ref": "#/components/schemas/Event" 149 | } 150 | } 151 | } 152 | } 153 | } 154 | } 155 | } 156 | }, 157 | "components": { 158 | "schemas": { 159 | 
"Event": { 160 | "type": "object", 161 | "properties": { 162 | "id": { 163 | "type": "string" 164 | }, 165 | "name": { 166 | "type": "string" 167 | }, 168 | "date": { 169 | "type": "string", 170 | "format": "date-time" 171 | }, 172 | "location": { 173 | "type": "string" 174 | } 175 | }, 176 | "required": [ 177 | "name", 178 | "date", 179 | "location" 180 | ] 181 | } 182 | } 183 | } 184 | } -------------------------------------------------------------------------------- /cookbooks/python/openai/data/hotel_invoices/extracted_invoice_json/hampton_20190411_extracted.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "hotel_information": { 4 | "name": "Hampton by Hilton Hamburg City Centre", 5 | "address": "Nordkanalstrasse 18, 20097 Hamburg", 6 | "phone": "+49(0)40-302372-0", 7 | "fax": "+49(0)40-302372-100" 8 | }, 9 | "guest_information": { 10 | "name": "APIMEISTER CONSULTING GMBH", 11 | "address": "FRIEDRICHSTR. 123, 10117 BERLIN, GERMANY", 12 | "guest_name": "JENS WALTER" 13 | }, 14 | "invoice_information": { 15 | "invoice_number": "31611", 16 | "confirmation_number": "92353607", 17 | "vat_number": "DE258969536", 18 | "folio_number": "126069 A" 19 | }, 20 | "stay_information": { 21 | "room_number": "216", 22 | "arrival_date": "04/11/2019 19:00:00", 23 | "departure_date": "08/11/2019", 24 | "adults/children": "1/0", 25 | "room_rate": "99.00", 26 | "rate_plan": "2G2" 27 | }, 28 | "charges": [ 29 | { 30 | "date": "04/11/2019", 31 | "description": "MC *5620", 32 | "cashier_id": "ANPE", 33 | "ref_no": "437200", 34 | "guest_charges": "-476.00", 35 | "credit": null, 36 | "balance": null 37 | }, 38 | { 39 | "date": "04/11/2019", 40 | "description": "GUEST ROOM", 41 | "cashier_id": "MACH", 42 | "ref_no": "437361", 43 | "guest_charges": "94.05", 44 | "credit": null, 45 | "balance": null 46 | }, 47 | { 48 | "date": "05/11/2019", 49 | "description": "BREAKFAST", 50 | "cashier_id": "MACH", 51 | "ref_no": "437361", 52 | 
"guest_charges": "0.00", 53 | "credit": "94.05", 54 | "balance": null 55 | }, 56 | { 57 | "date": "05/11/2019", 58 | "description": "GUEST ROOM", 59 | "cashier_id": "MACH", 60 | "ref_no": "437770", 61 | "guest_charges": "134.05", 62 | "credit": null, 63 | "balance": null 64 | }, 65 | { 66 | "date": "05/11/2019", 67 | "description": "BREAKFAST", 68 | "cashier_id": "MACH", 69 | "ref_no": "437770", 70 | "guest_charges": "0.00", 71 | "credit": "94.05", 72 | "balance": null 73 | }, 74 | { 75 | "date": "06/11/2019", 76 | "description": "GUEST ROOM", 77 | "cashier_id": "ANPE", 78 | "ref_no": "438260", 79 | "guest_charges": "114.05", 80 | "credit": null, 81 | "balance": null 82 | }, 83 | { 84 | "date": "06/11/2019", 85 | "description": "BREAKFAST", 86 | "cashier_id": "ANPE", 87 | "ref_no": "438200", 88 | "guest_charges": "0.00", 89 | "credit": "94.05", 90 | "balance": null 91 | }, 92 | { 93 | "date": "07/11/2019", 94 | "description": "GUEST ROOM", 95 | "cashier_id": "MACH", 96 | "ref_no": "438616", 97 | "guest_charges": "114.05", 98 | "credit": null, 99 | "balance": null 100 | } 101 | ] 102 | }, 103 | { 104 | "hotel_information": { 105 | "name": "Hampton by Hilton Hamburg City Centre", 106 | "address": "Nordkanalstrasse 18, 20097 Hamburg", 107 | "phone": "+49(0)40-302372-0", 108 | "fax": "+49(0)40-302372-100" 109 | }, 110 | "guest_information": { 111 | "name": "APIMEISTER CONSULTING GMBH", 112 | "address": "FRIEDRICHSTR. 
123, 10117 BERLIN, GERMANY", 113 | "contact_person": "JENS WALTER" 114 | }, 115 | "invoice_information": { 116 | "invoice_number": "31611", 117 | "confirmation_number": "92353607", 118 | "VAT_number": "DE259985536", 119 | "folio_number": "126609 A" 120 | }, 121 | "stay_information": { 122 | "room_number": "216", 123 | "arrival_date": "04/11/2019 19:09:00", 124 | "departure_date": "08/11/2019", 125 | "audit_date": "10", 126 | "adults_children": "1/0", 127 | "rate_plan": "2G2" 128 | }, 129 | "charges": [ 130 | { 131 | "date": "07/11/2019", 132 | "description": "BREAKFAST", 133 | "cashier_id": "MACH", 134 | "ref_no": "438616", 135 | "guest_charges": "€4.95", 136 | "credit": null, 137 | "balance": "€0.00" 138 | } 139 | ], 140 | "tax_summary": { 141 | "trade_receivable_net_19%": "€16.64", 142 | "trade_receivable_net_7%": "€0.00", 143 | "trade_receivable_net_7%_2": "€426.36", 144 | "VAT_19%": "€3.16", 145 | "F&B_VAT_7%": "€0.00", 146 | "VAT_7%": "€29.84", 147 | "trade_receivables_incl_VAT": "€476.00" 148 | }, 149 | "payment_information": { 150 | "approval_amount": "€476.00", 151 | "approval_code": "109423", 152 | "transaction_id": "437200", 153 | "balance": "-€476.00" 154 | }, 155 | "signature": { 156 | "guest_signature": null, 157 | "debit_related_verbiage": null 158 | } 159 | } 160 | ] -------------------------------------------------------------------------------- /samples/python/azure_ai_evaluation/evaluation.py: -------------------------------------------------------------------------------- 1 | """This sample demonstrates how to use the Azure AI Evaluation SDK to evaluate models from the GitHub Models catalog. 2 | It uses your endpoint and key, and the calls are synchronous. 3 | 4 | If you have Azure credentials, you can also run the risk and safety evaluators from Azure AI. 
5 | 6 | Azure Evaluation SDK: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/evaluate-sdk 7 | """ 8 | 9 | import os 10 | import json 11 | from pathlib import Path 12 | from azure.ai.inference import ChatCompletionsClient 13 | from azure.ai.inference.models import SystemMessage, UserMessage 14 | from azure.ai import evaluation 15 | from azure.ai.evaluation import RougeType, evaluate 16 | from azure.core.credentials import AzureKeyCredential 17 | from azure.identity import DefaultAzureCredential 18 | 19 | 20 | token = os.environ['GITHUB_TOKEN'] 21 | 22 | # Target model is the model to be evaluated. 23 | target_model_name = "mistral-ai/Mistral-small" 24 | target_model_endpoint = "https://models.github.ai/inference/" 25 | # Judge model is the model to evaluate the target model. 26 | judge_model_name = "openai/gpt-4o-mini" 27 | judge_model_endpoint = "https://models.github.ai/inference/" 28 | 29 | evaluation_name = "GitHub models evaluation" 30 | eval_data_file = Path("./eval_data.jsonl") 31 | eval_result_file_perf_and_quality = Path("./eval_result_perf_and_quality.json") 32 | eval_result_file_risk_and_safety = Path("./eval_result_risk_and_safety.json") 33 | 34 | 35 | def generate_eval_data(): 36 | eval_data_queries = [{ 37 | "query": "What is the capital of France?", 38 | "ground_truth": "Paris", 39 | }, { 40 | "query": "Where is Wineglass Bay?", 41 | "ground_truth": "Wineglass Bay is located on the Freycinet Peninsula on the east coast of Tasmania, Australia.", 42 | }] 43 | 44 | with eval_data_file.open("w") as f: 45 | for eval_data_query in eval_data_queries: 46 | client = ChatCompletionsClient( 47 | endpoint=target_model_endpoint, 48 | credential=AzureKeyCredential(token), 49 | ) 50 | 51 | context = "You are a geography teacher." 
52 | response = client.complete( 53 | messages=[ 54 | SystemMessage(content=context), 55 | UserMessage(content=eval_data_query["query"]), 56 | ], 57 | model=target_model_name, 58 | temperature=1., 59 | max_tokens=1000, 60 | top_p=1. 61 | ) 62 | result = response.choices[0].message.content 63 | 64 | eval_data = { 65 | "id": "1", 66 | "description": "Evaluate the model", 67 | "query": eval_data_query["query"], 68 | "context": context, 69 | "response": result, 70 | "ground_truth": eval_data_query["ground_truth"], 71 | } 72 | f.write(json.dumps(eval_data) + "\n") 73 | 74 | 75 | def run_perf_and_quality_evaluators(): 76 | model_config = { 77 | "azure_endpoint": judge_model_endpoint, 78 | "azure_deployment": judge_model_name, 79 | "api_key": token, 80 | } 81 | 82 | evaluators = { 83 | "BleuScoreEvaluator": evaluation.BleuScoreEvaluator(), 84 | "F1ScoreEvaluator": evaluation.F1ScoreEvaluator(), 85 | "GleuScoreEvaluator": evaluation.GleuScoreEvaluator(), 86 | "MeteorScoreEvaluator": evaluation.MeteorScoreEvaluator(), 87 | "RougeScoreEvaluator": evaluation.RougeScoreEvaluator(rouge_type=RougeType.ROUGE_L), 88 | "CoherenceEvaluator": evaluation.CoherenceEvaluator(model_config=model_config), 89 | "FluencyEvaluator": evaluation.FluencyEvaluator(model_config=model_config), 90 | "GroundednessEvaluator": evaluation.GroundednessEvaluator(model_config=model_config), 91 | "QAEvaluator": evaluation.QAEvaluator(model_config=model_config, _parallel=False), 92 | "RelevanceEvaluator": evaluation.RelevanceEvaluator(model_config=model_config), 93 | "RetrievalEvaluator": evaluation.RetrievalEvaluator(model_config=model_config), 94 | "SimilarityEvaluator": evaluation.SimilarityEvaluator(model_config=model_config), 95 | } 96 | 97 | eval_results = evaluate( 98 | data=eval_data_file, 99 | evaluators=evaluators, 100 | evaluation_name=evaluation_name, 101 | target=None, 102 | output_path=eval_result_file_perf_and_quality, 103 | ) 104 | print(json.dumps(eval_results, indent=4)) 105 | 106 | 107 | 
def run_risk_and_safety_evaluators_with_azure(): 108 | azure_ai_project = { 109 | "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"), 110 | "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP_NAME"), 111 | "project_name": os.environ.get("AZURE_PROJECT_NAME"), 112 | } 113 | credential = DefaultAzureCredential() 114 | evaluators = { 115 | "ContentSafetyEvaluator": evaluation.ContentSafetyEvaluator(azure_ai_project=azure_ai_project, credential=credential), 116 | "HateUnfairnessEvaluator": evaluation.HateUnfairnessEvaluator(azure_ai_project=azure_ai_project, credential=credential), 117 | "SelfHarmEvaluator": evaluation.SelfHarmEvaluator(azure_ai_project=azure_ai_project, credential=credential), 118 | "SexualEvaluator": evaluation.SexualEvaluator(azure_ai_project=azure_ai_project, credential=credential), 119 | "ViolenceEvaluator": evaluation.ViolenceEvaluator(azure_ai_project=azure_ai_project, credential=credential), 120 | "ProtectedMaterialEvaluator": evaluation.ProtectedMaterialEvaluator(azure_ai_project=azure_ai_project, credential=credential), 121 | "IndirectAttackEvaluator": evaluation.IndirectAttackEvaluator(azure_ai_project=azure_ai_project, credential=credential), 122 | "GroundednessProEvaluator": evaluation.GroundednessProEvaluator(azure_ai_project=azure_ai_project, credential=credential), 123 | } 124 | 125 | risk_and_safety_result_dict = {} 126 | with eval_data_file.open("r") as f: 127 | for line in f: 128 | eval_data = json.loads(line) 129 | for name, evaluator in evaluators.items(): 130 | if name != "GroundednessProEvaluator": 131 | score = evaluator(query=eval_data["query"], response=eval_data["response"]) 132 | else: 133 | score = evaluator(query=eval_data["query"], response=eval_data["response"], context=eval_data["context"]) 134 | print(f"{name}: {score}") 135 | risk_and_safety_result_dict[name] = score 136 | 137 | with eval_result_file_risk_and_safety.open("w") as f: 138 | f.write(json.dumps(risk_and_safety_result_dict, indent=4)) 139 | 
140 | 141 | if __name__ == "__main__": 142 | # Generate evaluation data with the GitHub Models catalog and save it to a file. 143 | generate_eval_data() 144 | 145 | # Run performance and quality evaluators with the GitHub Models catalog. 146 | run_perf_and_quality_evaluators() 147 | 148 | # # If you have Azure credentials, uncomment the following line to run the risk and safety evaluators from Azure AI. 149 | # run_risk_and_safety_evaluators_with_azure() 150 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/data/product_info_7.md: -------------------------------------------------------------------------------- 1 | # Information about product item_number: 7 2 | CozyNights Sleeping Bag, price $100, 3 | 4 | ## Brand 5 | CozyNights 6 | 7 | Main Category: CAMPING & HIKING 8 | Sub Category: SLEEPING GEAR 9 | Product Type: SLEEPING BAGS 10 | 11 | ## Features 12 | - Lightweight: Designed to be lightweight for easy carrying during outdoor adventures. 13 | - 3-Season: Suitable for use in spring, summer, and fall seasons. 14 | - Compact Design: Folds down to a compact size for convenient storage and transport. 15 | - Included Stuff Sack: Comes with a stuff sack for easy packing and carrying. 16 | - Insulation: Provides warmth and comfort during sleep. 17 | - Zipper Closure: Features a durable zipper closure for easy access and temperature control. 18 | - Hood: Equipped with a hood to keep your head warm and protected. 19 | - Durable Construction: Made with high-quality materials for long-lasting durability. 20 | - Versatile: Suitable for various outdoor activities such as camping, hiking, and backpacking. 21 | - Comfortable: Offers a comfortable sleeping experience with ample room and padding. 22 | - Temperature Rating: Provides reliable performance within a specific temperature range. 
23 | 24 | ## Technical Specifications 25 | - Material: Polyester 26 | - Color: Red 27 | - Dimensions: 80 inches x 33 inches (Length x Width) 28 | - Weight: 3.5 lbs 29 | - Temperature Rating: 3.5/5 30 | - Comfort Rating: 4/5 31 | - Season Rating: 3-season 32 | - Insulation Type: Synthetic 33 | - Shell Material: Polyester 34 | - Lining Material: Polyester 35 | - Zipper Type: Full-length zipper 36 | - Hood: Yes 37 | - Draft Collar: Yes 38 | - Draft Tube: Yes 39 | - Compression Sack: Yes 40 | - Pillow Pocket: Yes 41 | - Internal Storage Pocket: Yes 42 | - Zipper Baffles: Yes 43 | - Footbox Design: Contoured 44 | - Zipper Compatibility: Can be zipped together with another sleeping bag 45 | 46 | ## CozyNights Sleeping Bag User Guide 47 | 48 | Thank you for choosing the CozyNights Sleeping Bag. This user guide provides instructions on how to use and maintain your sleeping bag effectively. Please read this guide thoroughly before using the sleeping bag. 49 | 50 | ### Usage Instructions 51 | 52 | 1. Unpack the sleeping bag and lay it flat on a clean surface. 53 | 2. Open the zipper completely. 54 | 3. Slide into the sleeping bag, ensuring your feet are at the bottom end. 55 | 4. Pull the hood over your head to retain warmth if desired. 56 | 5. Adjust the zipper as needed to control ventilation and temperature. 57 | 6. When not in use, roll the sleeping bag tightly and secure it with the attached straps or use the included stuff sack for compact storage. 58 | 7. For cleaning instructions, refer to the maintenance section below. 59 | 60 | ### Maintenance 61 | 62 | - Spot cleaning: If the sleeping bag gets dirty, gently spot clean the affected area with mild soap and water. 63 | - Machine washing: If necessary, the sleeping bag can be machine washed in a front-loading machine using a gentle cycle and mild detergent. Follow the manufacturer's instructions for specific care details. 
64 | - Drying: Hang the sleeping bag in a well-ventilated area or use a low heat setting in the dryer. Avoid high heat as it may damage the fabric. 65 | - Storage: Store the sleeping bag in a dry and clean place, away from direct sunlight and moisture. Ensure it is completely dry before storing to prevent mold and mildew. 66 | 67 | ### Safety Precautions 68 | 69 | - Do not place the sleeping bag near open flames or direct heat sources. 70 | - Avoid sharp objects that may puncture or damage the sleeping bag. 71 | - Do not leave the sleeping bag in damp or wet conditions for an extended period as it may affect its insulation properties. 72 | - Keep the sleeping bag away from pets to prevent potential damage. 73 | 74 | If you have any further questions or need assistance, please contact our customer support using the provided contact information. 75 | We hope you have a comfortable and enjoyable experience with your CozyNights Sleeping Bag! 76 | 77 | ## Cautions: 78 | 1. Do not machine wash the sleeping bag with harsh detergents or bleach, as it may damage the fabric and insulation. 79 | 2. Avoid exposing the sleeping bag to direct sunlight for prolonged periods, as it may cause fading or deterioration of the materials. 80 | 3. Do not store the sleeping bag when it is damp or wet, as this can lead to mold and mildew growth. 81 | 4. Avoid dragging the sleeping bag on rough or abrasive surfaces, as it may cause tears or damage to the fabric. 82 | 5. Do not place heavy objects on top of the stored sleeping bag, as it may affect its loft and insulation properties. 83 | 6. Avoid using sharp objects, such as knives or scissors, near the sleeping bag to prevent accidental punctures. 84 | 7. Do not fold or roll the sleeping bag when it is wet, as it may lead to the development of unpleasant odors or bacterial growth. 85 | 8. Avoid storing the sleeping bag in a compressed or tightly packed state for extended periods, as it may affect its loft and insulation. 
86 | 87 | Following these cautions will help ensure the longevity and performance of your CozyNights Sleeping Bag. 88 | 89 | ## Warranty Information 90 | 91 | Warranty Duration: The CozyNights Sleeping Bag is covered by a 1-year limited warranty from the date of purchase. 92 | 93 | Warranty Coverage: The warranty covers manufacturing defects in materials and workmanship. It includes issues such as zipper malfunctions, stitching defects, or fabric flaws. 94 | 95 | Exclusions: The warranty does not cover damages caused by improper use, accidents, normal wear and tear, unauthorized repairs or modifications, or failure to follow care instructions. 96 | 97 | Warranty Claim Process: In the event of a warranty claim, please contact our customer support with your proof of purchase and a detailed description of the issue. Our support team will guide you through the necessary steps to assess the claim and provide a resolution. 98 | 99 | Warranty Remedies: Depending on the nature of the warranty claim, we will either repair or replace the defective sleeping bag. If the exact product is no longer available, we may offer a comparable replacement at our discretion. 100 | 101 | Limitations: The warranty is non-transferable and applies only to the original purchaser of the CozyNights Sleeping Bag. It is valid only when the product is purchased from an authorized retailer. 102 | 103 | Legal Rights: This warranty gives you specific legal rights, and you may also have other rights that vary by jurisdiction. 104 | 105 | For any warranty-related inquiries or to initiate a warranty claim, please contact our customer support using the provided contact information. 106 | 107 | Please retain your proof of purchase as it will be required to verify warranty eligibility. 
108 | 109 | ## Return Policy 110 | - If Membership status "None": If you are not satisfied with your purchase, you can return it within 30 days for a full refund. The product must be unused and in its original packaging. 111 | - If Membership status "Gold": Gold members can return their sleeping bags within 60 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. 112 | - If Membership status "Platinum": Platinum members can return their sleeping bags within 90 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. Additionally, Platinum members receive a 10% discount on all sleeping bag purchases from the same product brand. 113 | 114 | ## Reviews 115 | 31) Rating: 5 116 | Review: The CozyNights Sleeping Bag is perfect for my camping trips. It's lightweight, warm, and the compact design makes it easy to pack. The included stuff sack is a great bonus. Highly recommend! 117 | 118 | 32) Rating: 4 119 | Review: I bought the CozyNights Sleeping Bag, and while it's warm and lightweight, I wish it had a built-in pillow. Overall, it's a good sleeping bag for camping. 120 | 121 | 33) Rating: 5 122 | Review: The CozyNights Sleeping Bag is perfect for my camping adventures. It's comfortable, warm, and packs down small, making it easy to carry. I love it! 123 | 124 | 34) Rating: 4 125 | Review: I like the CozyNights Sleeping Bag, but I wish it came in more colors. The red color is nice, but I would prefer something more unique. Overall, it's a great sleeping bag. 126 | 127 | 35) Rating: 5 128 | Review: This sleeping bag is a game changer for my camping trips. It's warm, lightweight, and the compact design makes it easy to pack. I'm extremely satisfied with my purchase! 129 | 130 | ## FAQ 131 | 31) What is the temperature rating of the CozyNights Sleeping Bag? 
132 | The CozyNights Sleeping Bag is rated for 3-season use and has a temperature rating of 20°F to 60°F (-6°C to 15°C). 133 | 134 | 32) Can the CozyNights Sleeping Bag be zipped together with another sleeping bag to create a double sleeping bag? 135 | Yes, two CozyNights Sleeping Bags can be zipped together to create a double sleeping bag, provided they have compatible zippers. 136 | 137 | 33) Does the CozyNights Sleeping Bag have a draft collar or draft tube? 138 | The CozyNights Sleeping Bag features a draft tube along the zipper to help prevent cold air from entering and warm air from escaping. 139 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/data/product_info_14.md: -------------------------------------------------------------------------------- 1 | # Information about product item_number: 14 2 | MountainDream Sleeping Bag, price $130, 3 | 4 | ## Brand 5 | MountainDream 6 | 7 | Main Category: CAMPING & HIKING 8 | Sub Category: SLEEPING GEAR 9 | Product Type: SLEEPING BAGS 10 | 11 | 12 | ## Features 13 | - Temperature Rating: Suitable for 3-season camping (rated for temperatures between 15°F and 30°F) 14 | - Insulation: Premium synthetic insulation for warmth and comfort 15 | - Shell Material: Durable and water-resistant ripstop nylon 16 | - Lining Material: Soft and breathable polyester fabric 17 | - Zipper: Smooth and snag-free YKK zipper with anti-snag design 18 | - Hood Design: Adjustable hood with drawcord for customized fit and added warmth 19 | - Draft Tube: Insulated draft tube along the zipper to prevent heat loss 20 | - Zipper Baffle: Full-length zipper baffle to seal in warmth and block cold drafts 21 | - Mummy Shape: Contoured mummy shape for optimal heat retention and reduced weight 22 | - Compression Sack: Included compression sack for compact storage and easy transport 23 | - Size Options: Available in multiple sizes to accommodate different body types 24 | - Weight: Lightweight 
design for backpacking and outdoor adventures 25 | - Durability: Reinforced stitching and construction for long-lasting durability 26 | - Extra Features: Interior pocket for storing small essentials, hanging loops for airing out the sleeping bag 27 | 28 | ## Technical Specs 29 | 30 | - Temperature Rating: 15°F to 30°F 31 | - Insulation: Premium synthetic insulation 32 | - Color: Green 33 | - Shell Material: Durable and water-resistant ripstop nylon 34 | - Lining Material: Soft and breathable polyester fabric 35 | - Zipper: YKK zipper with anti-snag design 36 | - Hood Design: Adjustable hood with drawcord 37 | - Draft Tube: Insulated draft tube along the zipper 38 | - Zipper Baffle: Full-length zipper baffle 39 | - Shape: Mummy shape 40 | - Compression Sack: Included 41 | - Sizes Available: Multiple sizes available 42 | - Weight: Varies depending on size, approximately 2.5 lbs 43 | - Dimensions (packed): 84 in x 32 in 44 | - Gender: Unisex 45 | - Recommended Use: Hiking, camping, backpacking 46 | - Price: $130 47 | 48 | ## User Guide/Manual 49 | 50 | ### 1. Unpacking and Inspection: 51 | Upon receiving your sleeping bag, carefully remove it from the packaging. Inspect the sleeping bag for any damage or defects. If you notice any issues, please contact our customer care (contact information provided in Section 8). 52 | 53 | ### 2. Proper Use: 54 | - Before using the sleeping bag, make sure to read and understand the user guide. 55 | - Ensure the sleeping bag is clean and dry before each use. 56 | - Insert yourself into the sleeping bag, ensuring your body is fully covered. 57 | - Adjust the hood using the drawcord to achieve a snug fit around your head for added warmth. 58 | - Use the zipper to open or close the sleeping bag according to your comfort level. 59 | - Keep the sleeping bag zipped up to maximize insulation during colder temperatures. 60 | - Avoid placing sharp objects inside the sleeping bag to prevent punctures or damage. 61 | 62 | ### 3. 
Temperature Rating and Comfort: 63 | The MountainDream Sleeping Bag is rated for temperatures between 15°F to 30°F. However, personal comfort preferences may vary. It is recommended to use additional layers or adjust ventilation using the zipper and hood to achieve the desired temperature. 64 | 65 | ### 4. Sleeping Bag Care: 66 | - Spot clean any spills or stains on the sleeping bag using a mild detergent and a soft cloth. 67 | - If necessary, hand wash the sleeping bag in cold water with a gentle detergent. Rinse thoroughly and air dry. 68 | - Avoid using bleach or harsh chemicals, as they can damage the materials. 69 | - Do not dry clean the sleeping bag, as it may affect its performance. 70 | - Regularly inspect the sleeping bag for signs of wear and tear. Repair or replace any damaged parts as needed. 71 | 72 | ### 5. Storage: 73 | - Before storing the sleeping bag, ensure it is clean and completely dry to prevent mold or mildew. 74 | - Store the sleeping bag in a cool, dry place away from direct sunlight. 75 | - Avoid compressing the sleeping bag for extended periods, as it may affect its loft and insulation. Instead, store it in the included compression sack. 76 | 77 | ## Caution Information 78 | 79 | 1. DO NOT machine wash the sleeping bag 80 | 2. DO NOT expose the sleeping bag to direct heat sources 81 | 3. DO NOT store the sleeping bag in a compressed state 82 | 4. DO NOT use the sleeping bag as a ground cover 83 | 5. DO NOT leave the sleeping bag wet or damp 84 | 6. DO NOT use sharp objects inside the sleeping bag 85 | 7. DO NOT exceed the temperature rating 86 | 87 | ## Warranty Information 88 | 89 | 1. Warranty Coverage 90 | 91 | The warranty covers the following: 92 | 93 | 1. Stitching or seam failure 94 | 2. Zipper defects 95 | 3. Material and fabric defects 96 | 4. Insulation defects 97 | 5. Issues with the drawcord, Velcro closures, or other functional components 98 | 99 | 2. 
Warranty Exclusions 100 | 101 | The warranty does not cover the following: 102 | 103 | 1. Normal wear and tear 104 | 2. Damage caused by misuse, neglect, or improper care 105 | 3. Damage caused by accidents, alterations, or unauthorized repairs 106 | 4. Damage caused by exposure to extreme temperatures or weather conditions beyond the sleeping bag's intended use 107 | 5. Damage caused by improper storage or compression 108 | 6. Cosmetic imperfections that do not affect the performance of the sleeping bag 109 | 110 | 3. Making a Warranty Claim 111 | 112 | In the event of a warranty claim, please contact our customer care team using the contact details below: 113 | 114 | - Customer Care: MountainDream Outdoor Gear 115 | - Phone: 1-800-213-2316 116 | - Email: support@MountainDream.com 117 | - Website: www.MountainDream.com/support 118 | 119 | To process your warranty claim, you will need to provide proof of purchase, a description of the issue, and any relevant photographs. Our customer care team will guide you through the warranty claim process and provide further instructions. 120 | 121 | Please note that any shipping costs associated with the warranty claim are the responsibility of the customer. 122 | 123 | 4. Limitations of Liability 124 | 125 | In no event shall MountainDream Outdoor Gear be liable for any incidental, consequential, or indirect damages arising from the use or inability to use the MountainDream Sleeping Bag. The maximum liability of MountainDream Outdoor Gear under this warranty shall not exceed the original purchase price of the sleeping bag. 126 | 127 | This warranty is in addition to any rights provided by consumer protection laws and regulations in your jurisdiction. 128 | 129 | Please refer to the accompanying product documentation for more information on care and maintenance instructions. 
130 | 131 | ## Return Policy 132 | - If Membership status "None": If you are not satisfied with your purchase, you can return it within 30 days for a full refund. The product must be unused and in its original packaging. 133 | - If Membership status "Gold": Gold members can return their sleeping bags within 60 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. 134 | - If Membership status "Platinum": Platinum members can return their sleeping bags within 90 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. Additionally, Platinum members receive a 10% discount on all sleeping bag purchases from the same product brand. 135 | 136 | ## Reviews 137 | 1) Rating: 4 138 | Review: I recently used the MountainDream Sleeping Bag on a backpacking trip, and it kept me warm and comfortable throughout the night. The insulation is excellent, and the materials feel high-quality. The size is perfect for me, and I appreciate the included compression sack for easy storage. Overall, a great sleeping bag for the price. 139 | 140 | 2) Rating: 5 141 | Review: I am extremely impressed with the MountainDream Sleeping Bag. It exceeded my expectations in terms of warmth and comfort. The insulation is top-notch, and I stayed cozy even on colder nights. The design is well-thought-out with a hood and draft collar to keep the warmth in. The zippers are smooth and sturdy. Highly recommended for any camping or backpacking adventure. 142 | 143 | 3) Rating: 3 144 | Review: The MountainDream Sleeping Bag is decent for the price, but I found it a bit bulky and heavy to carry on long hikes. The insulation kept me warm, but it could be improved for colder temperatures. The zippers tended to snag occasionally, which was a bit frustrating. Overall, it's an average sleeping bag suitable for casual camping trips. 
145 | 146 | 4) Rating: 5 147 | Review: I've used the MountainDream Sleeping Bag on multiple camping trips, and it has never disappointed me. The insulation is fantastic, providing excellent warmth even in chilly weather. The fabric feels durable, and the zippers glide smoothly. The included stuff sack makes it convenient to pack and carry. Highly satisfied with my purchase! 148 | 149 | 5) Rating: 4 150 | Review: The MountainDream Sleeping Bag is a solid choice for backpacking and camping. It's lightweight and compact, making it easy to fit into my backpack. The insulation kept me warm during cold nights, and the hood design provided extra comfort. The only downside is that it's a bit snug for taller individuals. Overall, a reliable sleeping bag for outdoor adventures. 151 | 152 | ## FAQ 153 | 63) What is the temperature rating for the MountainDream Sleeping Bag? 154 | The MountainDream Sleeping Bag is rated for temperatures as low as 15°F (-9°C), making it suitable for 4-season use. 155 | 156 | 64) How small can the MountainDream Sleeping Bag be compressed? 157 | The MountainDream Sleeping Bag comes with a compression sack, allowing it to be packed down to a compact size of 9" x 6" (23cm x 15cm). 158 | 159 | 65) Is the MountainDream Sleeping Bag suitable for taller individuals? 160 | Yes, the MountainDream Sleeping Bag is designed to fit individuals up to 6'6" (198cm) tall comfortably. 161 | 162 | 66) How does the water-resistant shell of the MountainDream Sleeping Bag work? 163 | The water-resistant shell of the MountainDream Sleeping Bag features a durable water repellent (DWR) finish, which repels moisture and keeps you dry in damp conditions. 
164 | -------------------------------------------------------------------------------- /cookbooks/python/openai/How_to_stream_completions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "attachments": {}, 5 | "cell_type": "markdown", 6 | "metadata": {}, 7 | "source": [ 8 | "# How to stream completions\n", 9 | "\n", 10 | "By default, when you request a completion from the OpenAI API, the entire completion is generated before being sent back in a single response.\n", 11 | "\n", 12 | "If you're generating long completions, waiting for the response can take many seconds.\n", 13 | "\n", 14 | "To get responses sooner, you can 'stream' the completion as it's being generated. This allows you to start printing or processing the beginning of the completion before the full completion is finished.\n", 15 | "\n", 16 | "To stream completions, set `stream=True` when calling the chat completions or completions endpoints. This will return an object that streams back the response as [data-only server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format). Extract chunks from the `delta` field rather than the `message` field.\n", 17 | "\n", 18 | "## Downsides\n", 19 | "\n", 20 | "Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate.\n", 21 | "\n", 22 | "Another small drawback of streaming responses is that the response no longer includes the `usage` field to tell you how many tokens were consumed. After receiving and combining all of the responses, you can calculate this yourself using [`tiktoken`](https://github.com/openai/tiktoken).\n", 23 | "\n", 24 | "## Example code\n", 25 | "\n", 26 | "Below, this notebook shows:\n", 27 | "1. What a typical chat completion response looks like\n", 28 | "2. 
What a streaming chat completion response looks like\n", 29 | "3. How much time is saved by streaming a chat completion\n" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": null, 35 | "metadata": {}, 36 | "outputs": [], 37 | "source": [ 38 | "# imports\n", 39 | "import time # for measuring time duration of API calls\n", 40 | "from openai import OpenAI\n", 41 | "import dotenv\n", 42 | "import os\n", 43 | "\n", 44 | "dotenv.load_dotenv()\n", 45 | "\n", 46 | "if not os.getenv(\"GITHUB_TOKEN\"):\n", 47 | " raise ValueError(\"GITHUB_TOKEN is not set\")\n", 48 | "\n", 49 | "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"GITHUB_TOKEN\")\n", 50 | "os.environ[\"OPENAI_BASE_URL\"] = \"https://models.github.ai/inference\"\n", 51 | "\n", 52 | "GPT_MODEL = \"openai/gpt-4o-mini\"\n", 53 | "\n", 54 | "client = OpenAI()" 55 | ] 56 | }, 57 | { 58 | "attachments": {}, 59 | "cell_type": "markdown", 60 | "metadata": {}, 61 | "source": [ 62 | "### 1. What a typical chat completion response looks like\n", 63 | "\n", 64 | "With a typical ChatCompletions API call, the response is first computed and then returned all at once." 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": null, 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [ 73 | "# Example of an OpenAI ChatCompletion request\n", 74 | "# https://platform.openai.com/docs/guides/chat\n", 75 | "\n", 76 | "# record the time before the request is sent\n", 77 | "start_time = time.time()\n", 78 | "\n", 79 | "# send a ChatCompletion request to count to 100\n", 80 | "response = client.chat.completions.create(\n", 81 | " model=GPT_MODEL,\n", 82 | " messages=[\n", 83 | " {'role': 'user', 'content': 'Count to 100, with a comma between each number and no newlines. 
E.g., 1, 2, 3, ...'}\n", 84 | " ],\n", 85 | " temperature=0,\n", 86 | ")\n", 87 | "\n", 88 | "# calculate the time it took to receive the response\n", 89 | "response_time = time.time() - start_time\n", 90 | "\n", 91 | "# print the time delay and text received\n", 92 | "print(f\"Full response received {response_time:.2f} seconds after request\")\n", 93 | "print(f\"Full response received:\\n{response}\")\n" 94 | ] 95 | }, 96 | { 97 | "attachments": {}, 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "The reply can be extracted with `response.choices[0].message`.\n", 102 | "\n", 103 | "The content of the reply can be extracted with `response.choices[0].message.content`." 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": null, 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "reply = response.choices[0].message\n", 113 | "print(f\"Extracted reply: \\n{reply}\")\n", 114 | "\n", 115 | "reply_content = response.choices[0].message.content\n", 116 | "print(f\"Extracted content: \\n{reply_content}\")\n" 117 | ] 118 | }, 119 | { 120 | "attachments": {}, 121 | "cell_type": "markdown", 122 | "metadata": {}, 123 | "source": [ 124 | "### 2. How to stream a chat completion\n", 125 | "\n", 126 | "With a streaming API call, the response is sent back incrementally in chunks via an [event stream](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format). 
In Python, you can iterate over these events with a `for` loop.\n", 127 | "\n", 128 | "Let's see what it looks like:" 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "# Example of an OpenAI ChatCompletion request with stream=True\n", 138 | "# https://platform.openai.com/docs/guides/chat\n", 139 | "\n", 140 | "# a ChatCompletion request\n", 141 | "response = client.chat.completions.create(\n", 142 | " model=GPT_MODEL,\n", 143 | " messages=[\n", 144 | " {'role': 'user', 'content': \"What's 1+1? Answer in one word.\"}\n", 145 | " ],\n", 146 | " temperature=0,\n", 147 | " stream=True # this time, we set stream=True\n", 148 | ")\n", 149 | "\n", 150 | "for chunk in response:\n", 151 | " print(chunk)" 152 | ] 153 | }, 154 | { 155 | "attachments": {}, 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "As you can see above, streaming responses have a `delta` field rather than a `message` field. `delta` can hold things like:\n", 160 | "- a role token (e.g., `role='assistant'`)\n", 161 | "- a content token (e.g., `content='Two'`)\n", 162 | "- nothing (e.g., `{}`), when the stream is over" 163 | ] 164 | }, 165 | { 166 | "attachments": {}, 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "### 3. How much time is saved by streaming a chat completion\n", 171 | "\n", 172 | "Now let's ask `gpt-4o-mini` to count to 100 again, and see how long it takes." 
173 | ] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": null, 178 | "metadata": {}, 179 | "outputs": [], 180 | "source": [ 181 | "# Example of an OpenAI ChatCompletion request with stream=True\n", 182 | "# https://platform.openai.com/docs/guides/chat\n", 183 | "\n", 184 | "# record the time before the request is sent\n", 185 | "start_time = time.time()\n", 186 | "\n", 187 | "# send a ChatCompletion request to count to 100\n", 188 | "response = client.chat.completions.create(\n", 189 | " model=GPT_MODEL,\n", 190 | " messages=[\n", 191 | " {'role': 'user', 'content': 'Count to 100, with a comma between each number and no newlines. E.g., 1, 2, 3, ...'}\n", 192 | " ],\n", 193 | " temperature=0,\n", 194 | " stream=True # again, we set stream=True\n", 195 | ")\n", 196 | "\n", 197 | "# create variables to collect the stream of chunks\n", 198 | "collected_chunks = []\n", 199 | "collected_messages = []\n", 200 | "# iterate through the stream of events\n", 201 | "for chunk in response:\n", 202 | " chunk_time = time.time() - start_time # calculate the time delay of the chunk\n", 203 | " collected_chunks.append(chunk) # save the event response\n", 204 | " chunk_message = chunk.choices[0].delta # extract the message\n", 205 | " collected_messages.append(chunk_message) # save the message\n", 206 | " print(f\"Message received {chunk_time:.2f} seconds after request: {chunk_message}\") # print the delay and text\n", 207 | "\n", 208 | "# print the time delay and text received\n", 209 | "print(f\"Full response received {chunk_time:.2f} seconds after request\")\n", 210 | "full_reply_content = ''.join([m.content or '' for m in collected_messages])\n", 211 | "print(f\"Full conversation received: {full_reply_content}\")\n" 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": null, 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "import json\n", 221 | "print(json.dumps(collected_chunks[-1].to_dict(), indent=2))" 222 | ] 223 | 
}, 224 | { 225 | "attachments": {}, 226 | "cell_type": "markdown", 227 | "metadata": {}, 228 | "source": [ 229 | "#### Time comparison\n", 230 | "\n", 231 | "In the example above, both requests took about 7 seconds to fully complete. Request times will vary depending on load and other stochastic factors.\n", 232 | "\n", 233 | "However, with the streaming request, we received the first token after 0.1 seconds, and subsequent tokens every ~0.01-0.02 seconds." 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [] 240 | } 241 | ], 242 | "metadata": { 243 | "kernelspec": { 244 | "display_name": "Python 3.9.9 ('openai')", 245 | "language": "python", 246 | "name": "python3" 247 | }, 248 | "language_info": { 249 | "codemirror_mode": { 250 | "name": "ipython", 251 | "version": 3 252 | }, 253 | "file_extension": ".py", 254 | "mimetype": "text/x-python", 255 | "name": "python", 256 | "nbconvert_exporter": "python", 257 | "pygments_lexer": "ipython3", 258 | "version": "3.11.9" 259 | }, 260 | "orig_nbformat": 4, 261 | "vscode": { 262 | "interpreter": { 263 | "hash": "365536dcbde60510dc9073d6b991cd35db2d9bac356a11f5b64279a5e6708b97" 264 | } 265 | } 266 | }, 267 | "nbformat": 4, 268 | "nbformat_minor": 2 269 | } 270 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/data/product_info_12.md: -------------------------------------------------------------------------------- 1 | # Information about product item_number: 12 2 | TrekMaster Camping Chair, price $50 3 | 4 | ## Brand 5 | CampBuddy 6 | 7 | Main Category: CAMPING & HIKING 8 | Sub Category: OTHER 9 | Product Type: OTHER 10 | 11 | ## Features 12 | - Sturdy Construction: Built with high-quality materials for durability and long-lasting performance. 13 | - Lightweight and Portable: Designed to be lightweight and easy to carry, making it convenient for camping, hiking, and outdoor activities. 
14 | - Foldable Design: Allows for compact storage and effortless transportation. 15 | - Comfortable Seating: Provides ergonomic support and comfortable seating experience with padded seat and backrest. 16 | - Adjustable Recline: Offers multiple reclining positions for personalized comfort. 17 | - Cup Holder: Integrated cup holder for keeping beverages within reach. 18 | - Side Pockets: Convenient side pockets for storing small items like phones, books, or snacks. 19 | - Robust Frame: Strong frame construction to support various body types and provide stability on uneven terrain. 20 | - Easy Setup: Quick and hassle-free setup with a folding mechanism or snap-together design. 21 | - Weight Capacity: High weight capacity to accommodate different individuals. 22 | - Weather Resistant: Resistant to outdoor elements such as rain, sun, and wind. 23 | - Easy to Clean: Simple to clean and maintain, often featuring removable and washable seat covers. 24 | - Versatile Use: Suitable for various outdoor activities like camping, picnics, sporting events, and backyard gatherings. 25 | - Carry Bag: Includes a carrying bag for convenient storage and transportation. 
26 | 27 | ## Technical Specs 28 | - Best Use: Camping, outdoor activities 29 | - Material: Durable polyester fabric with reinforced stitching 30 | - Color: Blue 31 | - Weight: 6lbs 32 | - Frame Material: Sturdy steel or lightweight aluminum 33 | - Weight Capacity: Typically supports up to 250 pounds (113 kilograms) 34 | - Weight: Varies between 2 to 5 pounds (0.9 to 2.3 kilograms), depending on the model 35 | - Folded Dimensions: Compact size for easy storage and transport (e.g., approximately 20 x 5 x 5 inches) 36 | - Open Dimensions: Provides comfortable seating area (e.g., approximately 20 x 20 x 30 inches) 37 | - Seat Height: Comfortable height from the ground (e.g., around 12 to 18 inches) 38 | - Backrest Height: Provides back support (e.g., approximately 20 to 25 inches) 39 | - Cup Holder: Integrated cup holder for holding beverages securely 40 | - Side Pockets: Convenient storage pockets for small items like phones, books, or snacks 41 | - Armrests: Padded armrests for added comfort 42 | - Reclining Positions: Adjustable backrest with multiple reclining positions for personalized comfort 43 | - Sunshade or Canopy: Optional feature providing sun protection and shade 44 | - Carrying Case: Includes a carrying bag or case for easy transport and storage 45 | - Easy Setup: Simple assembly with foldable or snap-together design 46 | - Weather Resistance: Water-resistant or waterproof material for durability in various weather conditions 47 | - Cleaning: Easy to clean with removable and washable seat covers (if applicable) 48 | 49 | ## Return Policy 50 | - If Membership status "None": If you are not satisfied with your purchase, you can return it within 30 days for a full refund. The product must be unused and in its original packaging. 51 | - If Membership status "Gold": Gold members can return their camping chairs within 60 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. 
52 | - If Membership status "Platinum": Platinum members can return their camping chairs within 90 days of purchase for a full refund or exchange. The product must be unused and in its original packaging. Additionally, Platinum members receive a 10% discount on all camping chair purchases from the same product brand. 53 | 54 | ## User Guide/Manual 55 | 56 | ### 1. Safety Guidelines 57 | 58 | - Read and understand all instructions and warnings before using the camping chair. 59 | - Always ensure that the chair is placed on a stable and level surface to prevent tipping or accidents. 60 | - Do not exceed the weight capacity specified in the technical specifications. 61 | - Keep children away from the chair to avoid potential hazards. 62 | - Avoid placing the chair near open flames or heat sources. 63 | - Use caution when adjusting or reclining the chair to prevent pinching or injury. 64 | - Do not use the chair as a step stool or ladder. 65 | - Inspect the chair before each use for any signs of damage or wear. If any issues are found, discontinue use and contact customer support. 66 | 67 | ### 2. Setup and Assembly 68 | 69 | To set up the camping chair, follow these steps: 70 | 71 | 1. Open the carrying case and remove the folded chair. 72 | 2. Unfold the chair by extending the frame until it locks into place. 73 | 3. Ensure that all locking mechanisms are fully engaged and secure. 74 | 4. Pull the fabric seat taut and adjust it for optimal comfort. 75 | 5. Make sure the chair is stable and balanced before use. 76 | 77 | ### 3. Adjustments and Usage 78 | 79 | - To adjust the backrest, locate the reclining mechanism and choose the desired angle. Engage the lock to secure the position. 80 | - Use the padded armrests for added comfort and support. 81 | - The integrated cup holder and side pockets provide convenient storage for your beverages, books, or other small items. 
82 | - Take advantage of the chair's lightweight and foldable design for easy transportation and storage. 83 | 84 | ### 4. Care and Maintenance 85 | 86 | - Regularly inspect the chair for any signs of damage or wear. If any parts are damaged, contact customer support for assistance. 87 | - To clean the chair, use a mild detergent and water solution. Avoid using harsh chemicals or abrasive cleaners that may damage the fabric or frame. 88 | - If the chair includes removable and washable seat covers, follow the provided instructions for proper cleaning and care. 89 | - Store the chair in a dry and cool place when not in use to prevent damage from moisture or extreme temperatures. 90 | - Avoid prolonged exposure to direct sunlight to maintain the color and integrity of the fabric. 91 | 92 | ## Caution Information 93 | 94 | 1. Do not exceed the weight capacity 95 | 2. Do not use on uneven or unstable surfaces 96 | 3. Do not use as a step stool or ladder 97 | 4. Do not leave unattended near open flames or heat sources 98 | 5. Do not lean back excessively 99 | 6. Do not use harsh chemicals or abrasive cleaners 100 | 7. Do not leave exposed to prolonged sunlight 101 | 8. Do not drag or slide the chair 102 | 9. Do not place sharp objects in the storage pockets 103 | 10. Do not modify or alter the chair 104 | 105 | 106 | 107 | ## Warranty Information 108 | 109 | 1. Limited Warranty Coverage: 110 | 111 | - Warranty Duration: 1 year from the date of purchase. 112 | - Coverage: This warranty covers manufacturing defects in materials and workmanship. 113 | 114 | 2. Warranty Exclusions: 115 | 116 | - Damage caused by misuse, abuse, or improper care. 117 | - Normal wear and tear, including natural fading of colors and gradual deterioration over time. 118 | - Any modifications or alterations made to the chair. 
119 | - Damage caused by accidents, fire, or acts of nature. 120 | 121 | 3. Warranty Claim Process: 122 | 123 | In the event of a warranty claim, please follow these steps: 124 | 125 | - Contact our Customer Care within the warranty period. 126 | - Provide proof of purchase, such as a receipt or order number. 127 | - Describe the issue with the camping chair and provide any necessary supporting information or photographs. 128 | - Our customer care representative will assess the claim and provide further instructions. 129 | 130 | 4. Contact Information: 131 | 132 | For any questions, concerns, or warranty claims, please reach out to our friendly customer care team: 133 | 134 | - Customer Care Phone: 1-800-925-4351 135 | - Customer Care Email: support@trekmaster.com 136 | - Customer Care Hours: Monday-Friday, 9:00 AM to 5:00 PM (PST) 137 | - Website: www.trekmaster.com/support 138 | 139 | ## Reviews 140 | 141 | 1) Rating: 5 142 | Review: I absolutely love the TrekMaster Camping Chair! It's lightweight, sturdy, and super comfortable. The padded armrests and breathable fabric make it perfect for long camping trips. Highly recommended for outdoor enthusiasts! 143 | 144 | 2) Rating: 4 145 | Review: The TrekMaster Camping Chair is a great value for the price. It's easy to set up and packs down nicely. The cup holder and side pockets are convenient features. The only downside is that it could be a bit more cushioned for added comfort. 146 | 147 | 3) Rating: 5 148 | Review: This camping chair exceeded my expectations! It's well-built, durable, and provides excellent back support. The compact design and lightweight construction make it perfect for backpacking trips. I'm thrilled with my purchase! 149 | 150 | 4) Rating: 3 151 | Review: The TrekMaster Camping Chair is decent for short outings. It's lightweight and easy to carry, but I found the seat fabric to be less durable than expected. 
It's suitable for occasional use, but I would recommend something sturdier for frequent camping trips. 152 | 153 | 5) Rating: 4 154 | Review: I'm happy with my TrekMaster Camping Chair. It's comfortable and sturdy enough to support my weight. The adjustable armrests and storage pockets are handy features. I deducted one star because the chair is a bit low to the ground, making it a bit challenging to get in and out of for some individuals. 155 | 156 | ## FAQ 157 | 54) What is the weight capacity of the TrekMaster Camping Chair? 158 | The TrekMaster Camping Chair can support up to 300 lbs (136 kg), thanks to its durable steel frame and strong fabric. 159 | 160 | 55) Can the TrekMaster Camping Chair be used on uneven ground? 161 | Yes, the TrekMaster Camping Chair has non-slip feet, which provide stability and prevent sinking into soft or uneven ground. 162 | 163 | 56) How compact is the TrekMaster Camping Chair when folded? 164 | When folded, the TrekMaster Camping Chair measures approximately 38in x 5in x 5in, making it compact and easy to carry or pack in your vehicle. 165 | 166 | 57) Is the TrekMaster Camping Chair easy to clean? 167 | Yes, the TrekMaster Camping Chair is made of durable and easy-to-clean fabric. Simply wipe it down with a damp cloth and let it air dry. 168 | 169 | 58) Are there any accessories available for the TrekMaster Camping Chair? 170 | While there are no specific accessories designed for the TrekMaster Camping Chair, it comes with a built-in cup holder and can be used with a variety of universal camping chair accessories such as footrests, side tables, or organizers. 
171 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/data/product_info_15.md: -------------------------------------------------------------------------------- 1 | # Information about product item_number: 15 2 | SkyView 2-Person Tent, price $200, 3 | 4 | ## Brand 5 | OutdoorLiving 6 | 7 | Main Category: CAMPING & HIKING 8 | Sub Category: TENTS & SHELTERS 9 | Product Type: BACKPACKING TENTS 10 | 11 | ## Features 12 | - Spacious interior comfortably accommodates two people 13 | - Durable and waterproof materials for reliable protection against the elements 14 | - Easy and quick setup with color-coded poles and intuitive design 15 | - Two large doors for convenient entry and exit 16 | - Vestibules provide extra storage space for gear 17 | - Mesh panels for enhanced ventilation and reduced condensation 18 | - Rainfly included for added weather protection 19 | - Freestanding design allows for versatile placement 20 | - Multiple interior pockets for organizing small items 21 | - Reflective guy lines and stake points for improved visibility at night 22 | - Compact and lightweight for easy transportation and storage 23 | - Double-stitched seams for increased durability 24 | - Comes with a carrying bag for convenient portability 25 | 26 | ## Technical Specs 27 | 28 | - Best Use: Camping, Hiking 29 | - Capacity: 2-person 30 | - Seasons: 3-season 31 | - Packed Weight: Approx. 8 lbs 32 | - Number of Doors: 2 33 | - Number of Vestibules: 2 34 | - Vestibule Area: Approx. 8 square feet per vestibule 35 | - Rainfly: Included 36 | - Pole Material: Lightweight aluminum 37 | - Freestanding: Yes 38 | - Footprint Included: No 39 | - Tent Bag Dimensions: 7ft x 5ft x 4ft 40 | - Packed Size: Compact 41 | - Color: Blue 42 | - Warranty: Manufacturer's warranty included 43 | 44 | ## User Guide/Manual 45 | 46 | 1. 
Tent Components 47 | 48 | The SkyView 2-Person Tent includes the following components: 49 | - Tent body 50 | - Rainfly 51 | - Aluminum tent poles 52 | - Tent stakes 53 | - Guy lines 54 | - Tent bag 55 | 56 | 2. Tent Setup 57 | 58 | Follow these steps to set up your SkyView 2-Person Tent: 59 | 60 | Step 1: Find a suitable camping site with a level ground and clear of debris. 61 | Step 2: Lay out the tent body on the ground, aligning the doors and vestibules as desired. 62 | Step 3: Assemble the tent poles and insert them into the corresponding pole sleeves or grommets on the tent body. 63 | Step 4: Attach the rainfly over the tent body, ensuring a secure fit. 64 | Step 5: Stake down the tent and rainfly using the provided tent stakes, ensuring a taut pitch. 65 | Step 6: Adjust the guy lines as needed to enhance stability and ventilation. 66 | Step 7: Once the tent is properly set up, organize your gear inside and enjoy your camping experience. 67 | 68 | 3. Tent Takedown 69 | 70 | To dismantle and pack up your SkyView 2-Person Tent, follow these steps: 71 | 72 | Step 1: Remove all gear and belongings from the tent. 73 | Step 2: Remove the stakes and guy lines from the ground. 74 | Step 3: Detach the rainfly from the tent body. 75 | Step 4: Disassemble the tent poles and remove them from the tent body. 76 | Step 5: Fold and roll up the tent body, rainfly, and poles separately. 77 | Step 6: Place all components back into the tent bag, ensuring a compact and organized packing. 78 | 79 | 4. Tent Care and Maintenance 80 | 81 | To extend the lifespan of your SkyView 2-Person Tent, follow these care and maintenance guidelines: 82 | 83 | - Always clean and dry the tent before storing it. 84 | - Avoid folding or storing the tent when it is wet or damp to prevent mold or mildew growth. 85 | - Use a mild soap and water solution to clean the tent if necessary, and avoid using harsh chemicals or solvents. 
86 | - Inspect the tent regularly for any damages such as tears, punctures, or broken components. Repair or replace as needed. 87 | - Store the tent in a cool, dry place away from direct sunlight and extreme temperatures. 88 | - Avoid placing sharp objects or excessive weight on the tent, as this may cause damage. 89 | - Follow the manufacturer's recommendations for seam sealing or re-waterproofing the tent if necessary. 90 | 91 | 5. Safety Precautions 92 | 93 | - Always choose a safe and suitable camping location, considering factors such as terrain, weather conditions, and potential hazards. 94 | - Ensure proper ventilation inside the tent to prevent condensation buildup and maintain air quality. 95 | - Never use open flames or heating devices inside the tent, as this poses a fire hazard. 96 | - Securely stake down the tent and use guy lines as needed to enhance stability during windy conditions. 97 | - Do not exceed the recommended maximum occupancy of the tent. 98 | - Keep all flammable materials away from the tent. 99 | - Follow proper camping etiquette and leave no trace by properly disposing of waste and leaving the campsite clean. 100 | 101 | ## Caution Information 102 | 103 | 1. Do not exceed the tent's maximum occupancy 104 | 2. Do not use sharp objects inside the tent 105 | 3. Do not place the tent near open flames 106 | 4. Do not store food inside the tent 107 | 5. Do not smoke inside the tent 108 | 6. Do not force the tent during setup or takedown 109 | 7. Do not leave the tent unattended during inclement weather 110 | 8. Do not neglect proper tent maintenance 111 | 9. Do not drag the tent on rough surfaces 112 | 10. Do not dismantle the tent while wet 113 | 114 | ## Warranty Information 115 | 116 | 1. Limited Warranty: The SkyView 2-Person Tent is covered by a limited warranty for a period of one year from the date of purchase. This warranty is valid only for the original purchaser and is non-transferable. 117 | 118 | 2. 
Warranty Coverage: The warranty covers defects in materials and workmanship under normal use during the warranty period. If the tent exhibits any defects during this time, we will, at our discretion, repair or replace the product free of charge. 119 | 120 | 3. Exclusions: The warranty does not cover damage resulting from improper use, negligence, accidents, modifications, unauthorized repairs, normal wear and tear, or natural disasters. It also does not cover damages caused by transportation or storage without proper care. 121 | 122 | 4. Claim Process: In the event of a warranty claim, please contact our customer care department using the details provided below. You will be required to provide proof of purchase, a description of the issue, and any supporting documentation or images. 123 | 124 | 5. Contact Details for Customer Care: 125 | - Address: Customer Care Department 126 | SkyView Outdoor Gear 127 | 1234 Outdoor Avenue 128 | Cityville, USA 129 | - Phone: 1-800-123-4567 130 | - Email: support@skyviewgear.com 131 | 132 | Please ensure that you have registered your product by completing the warranty registration card or online form available on our website. This will help expedite the warranty claim process. 133 | 134 | 6. Important Notes: 135 | - Any repairs or replacements made under warranty will not extend the original warranty period. 136 | - The customer is responsible for shipping costs associated with returning the product for warranty service. 137 | - SkyView Outdoor Gear reserves the right to assess and determine the validity of warranty claims. 138 | 139 | ## Return Policy 140 | - If Membership status "None ": Returns are accepted within 30 days of purchase, provided the tent is unused, undamaged and in its original packaging. Customer is responsible for the cost of return shipping. Once the returned item is received, a refund will be issued for the cost of the item minus a 10% restocking fee. 
If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 141 | - If Membership status "Gold": Returns are accepted within 60 days of purchase, provided the tent is unused, undamaged and in its original packaging. Free return shipping is provided. Once the returned item is received, a full refund will be issued. If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 142 | - If Membership status "Platinum": Returns are accepted within 90 days of purchase, provided the tent is unused, undamaged and in its original packaging. Free return shipping is provided, and a full refund will be issued. If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 143 | 144 | ## Reviews 145 | 1) Rating: 5 146 | Review: I absolutely love the SkyView 2-Person Tent! It's incredibly spacious and provides ample room for two people. The setup is a breeze, and the materials feel durable and reliable. We used it during a rainy camping trip, and it kept us completely dry. Highly recommended! 147 | 148 | 2) Rating: 4 149 | Review: The SkyView 2-Person Tent is a great choice for camping. It offers excellent ventilation and airflow, which is perfect for warm weather. The tent is sturdy and well-built, with high-quality materials. The only minor drawback is that it takes a little longer to set up compared to some other tents I've used. 150 | 151 | 3) Rating: 5 152 | Review: This tent exceeded my expectations! The SkyView 2-Person Tent is incredibly lightweight and packs down small, making it ideal for backpacking trips. Despite its compact size, it offers plenty of room inside for two people and their gear. The waterproof design worked flawlessly during a rainy weekend. Highly satisfied with my purchase! 
153 | 154 | 4) Rating: 3 155 | Review: The SkyView 2-Person Tent is decent overall. It provides adequate space for two people and offers good protection against the elements. However, I found the zippers to be a bit flimsy, and they occasionally got stuck. It's a functional tent for the price, but I expected better quality in some aspects. 156 | 157 | 5) Rating: 5 158 | Review: I've used the SkyView 2-Person Tent on multiple camping trips, and it has been fantastic. The tent is spacious, well-ventilated, and keeps us comfortable throughout the night. The setup is straightforward, even for beginners. I appreciate the attention to detail in the design, such as the convenient storage pockets. Highly recommended for camping enthusiasts! 159 | 160 | ## FAQ 161 | 67) How easy is it to set up the SkyView 2-Person Tent? 162 | The SkyView 2-Person Tent features a simple and intuitive setup process, with color-coded poles and clips, allowing you to pitch the tent within minutes. 163 | 164 | 68) Is the SkyView 2-Person Tent well-ventilated? 165 | Yes, the SkyView 2-Person Tent has mesh windows and vents, providing excellent airflow and reducing condensation inside the tent. 166 | 167 | 69) Can the SkyView 2-Person Tent withstand strong winds? 168 | The SkyView 2-Person Tent is designed with strong aluminum poles and reinforced guylines, ensuring stability and durability in windy conditions. 169 | 170 | 70) Are there any storage options inside the SkyView 2-Person Tent? 171 | Yes, the SkyView 2-Person Tent features interior mesh pockets and a gear loft for keeping your belongings organized and easily accessible. 
-------------------------------------------------------------------------------- /cookbooks/python/llamaindex/rag_getting_started.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# LlamaIndex + GitHub Models for RAG\n", 8 | "\n", 9 | "This notebook demonstrates how to perform Retrieval-Augmented Generation (RAG) with LlamaIndex.\n", 10 | "\n", 11 | "## Introduction\n", 12 | "\n", 13 | "Retrieval-Augmented Generation (RAG) is a technique in natural language processing that combines the strengths of retrieval-based and generation-based methods to enhance the quality and accuracy of generated text. It integrates a retriever module, which searches a large corpus of documents for relevant information, with a generator module to produce coherent and contextually appropriate responses. This hybrid approach allows RAG to leverage vast amounts of external knowledge stored in documents, making it particularly effective for tasks requiring detailed information and context beyond the model's pre-existing knowledge.\n", 14 | "\n", 15 | "RAG operates by first using the retriever to identify the most relevant pieces of information from a database or collection of texts. These retrieved passages are then fed into the generator, which synthesizes the information to produce a final response. This process enables the model to provide more accurate and informative answers, as it dynamically incorporates up-to-date and specific details from the retrieval stage. The combination of retrieval and generation ensures that RAG models are both knowledgeable and flexible, making them valuable for applications such as question answering, summarization, and dialogue systems.\n", 16 | "\n", 17 | "In this sample, we will create an index from a set of markdown documents that contain product descriptions.
Using a retriever, we will search the index with a user question to find the most relevant documents. Then we will use LlamaIndex's query engine for a full Retrieval-Augmented Generation (RAG) implementation.\n", 18 | "\n", 19 | "## 1. Install dependencies" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "%pip install llama-index\n", 29 | "%pip install openai\n", 30 | "%pip install python-dotenv" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "## 2. Set up a chat model and an embedding model\n", 38 | "\n", 39 | "To run RAG, you need two models: a chat model and an embedding model. The GitHub Model service offers different options.\n", 40 | "\n", 41 | "For instance, you could use an Azure OpenAI chat model (`gpt-4o-mini`) and embedding model (`text-embedding-3-small`), or a Cohere chat model (`Cohere-command-r-plus`) and embedding model (`Cohere-embed-v3-multilingual`).\n", 42 | "\n", 43 | "We'll proceed using some of the Azure OpenAI models below. You can find [how to leverage Cohere models in the LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/examples/llm/cohere/).\n", 44 | "\n", 45 | "### Example using Azure OpenAI models" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "import os\n", 55 | "import dotenv\n", 56 | "\n", 57 | "dotenv.load_dotenv()\n", 58 | "\n", 59 | "if not os.getenv(\"GITHUB_TOKEN\"):\n", 60 | " raise ValueError(\"GITHUB_TOKEN is not set\")\n", 61 | "\n", 62 | "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"GITHUB_TOKEN\")\n", 63 | "os.environ[\"OPENAI_BASE_URL\"] = \"https://models.github.ai/inference\"" 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "Below, we set up the embedding model and the LLM to be used.
" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": null, 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "from llama_index.llms.openai import OpenAI\n", 80 | "from llama_index.embeddings.openai import OpenAIEmbedding\n", 81 | "from llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n", 82 | "from llama_index.core import Settings\n", 83 | "import logging\n", 84 | "import sys, os\n", 85 | "\n", 86 | "logging.basicConfig(\n", 87 | " stream=sys.stdout, level=logging.INFO\n", 88 | ") # logging.DEBUG for more verbose output\n", 89 | "logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n", 90 | "\n", 91 | "llm = OpenAI(\n", 92 | " model=\"gpt-4o-mini\",\n", 93 | " api_key=os.getenv(\"OPENAI_API_KEY\"),\n", 94 | " api_base=os.getenv(\"OPENAI_BASE_URL\"),\n", 95 | ")\n", 96 | "\n", 97 | "embed_model = OpenAIEmbedding(\n", 98 | " model=\"text-embedding-3-small\",\n", 99 | " api_key=os.getenv(\"OPENAI_API_KEY\"),\n", 100 | " api_base=os.getenv(\"OPENAI_BASE_URL\"),\n", 101 | ")\n", 102 | "\n", 103 | "Settings.llm = llm\n", 104 | "Settings.embed_model = embed_model" 105 | ] 106 | }, 107 | { 108 | "cell_type": "markdown", 109 | "metadata": {}, 110 | "source": [ 111 | "## 3. Create an index and retriever\n", 112 | "\n", 113 | "In the data folder, we have some product information files in markdown format. 
Here is a sample of the data:\n", 114 | "\n", 115 | "```markdown\n", 116 | "# Information about product item_number: 1\n", 117 | "TrailMaster X4 Tent, price $250,\n", 118 | "\n", 119 | "\n", 120 | "## Brand\n", 121 | "OutdoorLiving\n", 122 | "\n", 123 | "Main Category: CAMPING & HIKING\n", 124 | "Sub Category: TENTS & SHELTERS\n", 125 | "Product Type: BACKPACKING TENTS\n", 126 | "\n", 127 | "## Features\n", 128 | "- Polyester material for durability\n", 129 | "- Spacious interior to accommodate multiple people\n", 130 | "- Easy setup with included instructions\n", 131 | "- Water-resistant construction to withstand light rain\n", 132 | "- Mesh panels for ventilation and insect protection\n", 133 | "- Rainfly included for added weather protection\n", 134 | "- Multiple doors for convenient entry and exit\n", 135 | "- Interior pockets for organizing small items\n", 136 | "- Reflective guy lines for improved visibility at night\n", 137 | "- Freestanding design for easy setup and relocation\n", 138 | "- Carry bag included for convenient storage and transportation\n", 139 | "```\n", 140 | "Here is the link to the full file: [data/product_info_1.md](data/product_info_1.md). As you can see, the files are rather long and contain different sections like Brand, Features, User Guide, Warranty Information, Reviews, etc. All these can be useful when answering user questions.\n", 141 | "\n", 142 | "To be able to find the right information, we will create a vector index that stores the embeddings of the documents. Note that we are reducing the batch size of the indexer to prevent rate limiting. The GitHub Model Service is rate limited to 64K tokens per request for embedding models. 
\n" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": {}, 149 | "outputs": [], 150 | "source": [ 151 | "#Note: we have to reduce the batch size to stay within the token limits of the free service\n", 152 | "documents = SimpleDirectoryReader(\"data\").load_data()\n", 153 | "index = VectorStoreIndex.from_documents(documents, insert_batch_size=150)" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "Now that we have an index, we can use the retriever to find the most relevant documents for a user question." 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "retriever = index.as_retriever()\n", 170 | "fragments = retriever.retrieve(\"What is the temperature rating of the cozynights sleeping bag?\")\n", 171 | "\n", 172 | "for fragment in fragments:\n", 173 | " print(fragment)" 174 | ] 175 | }, 176 | { 177 | "cell_type": "markdown", 178 | "metadata": {}, 179 | "source": [ 180 | "## 4. 
Use the chat model to generate an answer\n", 181 | "\n", 182 | "Now that we have the documents that match the user question, we can ask our chat model to generate an answer based on the retrieved documents:" 183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": null, 188 | "metadata": {}, 189 | "outputs": [], 190 | "source": [ 191 | "from llama_index.core.llms import ChatMessage\n", 192 | "\n", 193 | "context = \"\\n------\\n\".join([ fragment.text for fragment in fragments ])\n", 194 | "\n", 195 | "messages = [\n", 196 | " ChatMessage(role=\"system\", content=\"You are a helpful assistant that answers some questions with the help of some context data.\\n\\nHere is the context data:\\n\\n\" + context),\n", 197 | " ChatMessage(role=\"user\", content=\"What is the temperature rating of the cozynights sleeping bag?\")\n", 198 | "]\n", 199 | "\n", 200 | "response = llm.chat(messages)\n", 201 | "print()\n", 202 | "print(response)" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "LlamaIndex provides a simple API to query the retriever and the generator in one go. The `query` function takes the question as input and returns the generated answer."
210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "query_engine = index.as_query_engine()\n", 219 | "response = query_engine.query(\"What is the temperature rating of the cozynights sleeping bag?\")\n", 220 | "print()\n", 221 | "print(response)" 222 | ] 223 | }, 224 | { 225 | "cell_type": "code", 226 | "execution_count": null, 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [ 230 | "response = query_engine.query(\"What is a good 2 person tent?\")\n", 231 | "print(response)" 232 | ] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "execution_count": null, 237 | "metadata": {}, 238 | "outputs": [], 239 | "source": [ 240 | "response = query_engine.query(\"Does the SkyView 2-Person Tent have a rain fly?\")\n", 241 | "print(response)" 242 | ] 243 | } 244 | ], 245 | "metadata": { 246 | "kernelspec": { 247 | "display_name": "Python 3", 248 | "language": "python", 249 | "name": "python3" 250 | }, 251 | "language_info": { 252 | "codemirror_mode": { 253 | "name": "ipython", 254 | "version": 3 255 | }, 256 | "file_extension": ".py", 257 | "mimetype": "text/x-python", 258 | "name": "python", 259 | "nbconvert_exporter": "python", 260 | "pygments_lexer": "ipython3", 261 | "version": "3.11.9" 262 | } 263 | }, 264 | "nbformat": 4, 265 | "nbformat_minor": 2 266 | } 267 | -------------------------------------------------------------------------------- /cookbooks/python/llamaindex/data/product_info_11.md: -------------------------------------------------------------------------------- 1 | # Information about product item_number: 11 2 | TrailWalker Hiking Shoes, price $110 3 | 4 | ## Brand 5 | TrekReady 6 | 7 | Main Category: FOOTWEAR 8 | Sub Category: MEN'S FOOTWEAR 9 | Product Type: HIKING BOOTS 10 | 11 | ## Features 12 | - Durable and waterproof construction to withstand various terrains and weather conditions 13 | - High-quality materials, including synthetic leather and 
mesh for breathability 14 | - Reinforced toe cap and heel for added protection and durability 15 | - Cushioned insole for enhanced comfort during long hikes 16 | - Supportive midsole for stability and shock absorption 17 | - Traction outsole with multidirectional lugs for excellent grip on different surfaces 18 | - Breathable mesh lining to keep feet cool and dry 19 | - Padded collar and tongue for extra comfort and to prevent chafing 20 | - Lightweight design for reduced fatigue during long hikes 21 | - Quick-lace system for easy and secure fit adjustments 22 | - EVA (ethylene vinyl acetate) foam for lightweight cushioning and support 23 | - Removable insole for customization or replacement 24 | - Protective mudguard to prevent debris from entering the shoe 25 | - Reflective accents for increased visibility in low-light conditions 26 | - Available in multiple sizes and widths for a better fit 27 | - Suitable for hiking, trekking, and outdoor adventures 28 | 29 | ## Technical Specs 30 | 31 | - Best Use: Hiking 32 | - Upper Material: Synthetic leather, mesh 33 | - Waterproof: Yes 34 | - Color: Black 35 | - Dimensions: 7-12 (US) 36 | - Toe Protection: Reinforced toe cap 37 | - Heel Protection: Reinforced heel 38 | - Insole Type: Cushioned 39 | - Midsole Type: Supportive 40 | - Outsole Type: Traction outsole with multidirectional lugs 41 | - Lining Material: Breathable mesh 42 | - Closure Type: Quick-lace system 43 | - Cushioning Material: EVA foam 44 | - Removable Insole: Yes 45 | - Collar and Tongue Padding: Yes 46 | - Weight (per shoe): Varies by size 47 | - Reflective Accents: Yes 48 | - Mudguard: Protective mudguard 49 | 50 | ## User Guide/Manual: 51 | 52 | ### 1. Getting Started 53 | 54 | Before your first use, please take a moment to inspect the shoes for any visible defects or damage. If you notice any issues, please contact our customer support for assistance. 55 | 56 | ### 2. 
Fitting and Adjustment 57 | 58 | To ensure a proper fit and maximum comfort, follow these steps: 59 | 60 | 1. Loosen the quick-lace system by pulling up the lace lock. 61 | 2. Slide your foot into the shoe and position it properly. 62 | 3. Adjust the tension of the laces by pulling both ends simultaneously. Find the desired tightness and comfort level. 63 | 4. Push the lace lock down to secure the laces in place. 64 | 5. Tuck any excess lace into the lace pocket for safety and to prevent tripping. 65 | 66 | Note: It's recommended to wear hiking socks for the best fit and to prevent blisters or discomfort. 67 | 68 | ### 3. Shoe Care and Maintenance 69 | 70 | Proper care and maintenance will help prolong the life of your TrailWalker Hiking Shoes: 71 | 72 | - After each use, remove any dirt or debris by brushing or wiping the shoes with a damp cloth. 73 | - If the shoes are muddy or heavily soiled, rinse them with clean water and gently scrub with a soft brush. Avoid using harsh detergents or solvents. 74 | - Allow the shoes to air dry naturally, away from direct sunlight or heat sources. 75 | - To maintain waterproof properties, periodically apply a waterproofing treatment according to the manufacturer's instructions. 76 | - Inspect the shoes regularly for any signs of wear and tear, such as worn outsoles or loose stitching. If any issues are found, contact our customer support for assistance. 77 | - Store the shoes in a cool, dry place when not in use, away from extreme temperatures or moisture. 78 | 79 | ## Caution Information 80 | 81 | 1. Do not expose to extreme temperatures 82 | 2. Do not machine wash or dry 83 | 3. Do not force-dry with heat sources 84 | 4. Do not use harsh chemicals or solvents 85 | 5. Do not store when wet or damp 86 | 6. Do not ignore signs of wear or damage 87 | 7. Do not ignore discomfort or pain 88 | 8. Do not use for activities beyond their intended purpose 89 | 9. Do not share footwear 90 | 10. 
Do not ignore manufacturer's instructions 91 | 92 | ## Warranty Information 93 | Please read the following warranty information carefully. 94 | 95 | 1. Warranty Coverage: 96 | - The TrailWalker Hiking Shoes are covered by a limited manufacturer's warranty. 97 | - The warranty covers defects in materials and workmanship under normal use and conditions. 98 | - The warranty is valid for a period of [insert duration] from the date of purchase. 99 | 100 | 2. Warranty Claims: 101 | - To initiate a warranty claim, please contact our customer care team at the following contact details: 102 | - Customer Care: TrailWalker Gear 103 | - Email: customerservice@trailwalkergear.com 104 | - Phone: 1-800-123-4567 105 | 106 | 3. Warranty Exclusions: 107 | - The warranty does not cover damage resulting from misuse, neglect, accidents, improper care, or unauthorized repairs. 108 | - Normal wear and tear, including worn outsoles, laces, or minor cosmetic imperfections, are not covered under the warranty. 109 | - Modifications or alterations made to the shoes will void the warranty. 110 | 111 | 4. Warranty Resolution: 112 | - Upon receiving your warranty claim, our customer care team will assess the issue and provide further instructions. 113 | - Depending on the nature of the claim, we may offer a repair, replacement, or store credit for the product. 114 | - In some cases, the warranty claim may require the shoes to be shipped back to us for evaluation. The customer will be responsible for the shipping costs. 115 | 116 | 5. Customer Responsibilities: 117 | - It is the customer's responsibility to provide accurate and detailed information regarding the warranty claim. 118 | - Please retain your original purchase receipt or proof of purchase for warranty validation. 119 | - Any false information provided or attempts to abuse the warranty policy may result in the claim being rejected. 
120 | 121 | ## Return Policy 122 | - If Membership status "None ": Returns are accepted within 30 days of purchase, provided the tent is unused, undamaged and in its original packaging. Customer is responsible for the cost of return shipping. Once the returned item is received, a refund will be issued for the cost of the item minus a 10% restocking fee. If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 123 | - If Membership status "Gold": Returns are accepted within 60 days of purchase, provided the tent is unused, undamaged and in its original packaging. Free return shipping is provided. Once the returned item is received, a full refund will be issued. If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 124 | - If Membership status "Platinum": Returns are accepted within 90 days of purchase, provided the tent is unused, undamaged and in its original packaging. Free return shipping is provided, and a full refund will be issued. If the item was damaged during shipping or if there is a defect, the customer should contact customer service within 7 days of receiving the item. 125 | 126 | ## Reviews 127 | 1) Rating: 4.5 128 | Review: I recently purchased the TrailWalker Hiking Shoes for a weekend hiking trip, and they exceeded my expectations. The fit is comfortable, providing excellent support throughout the journey. The traction is impressive, allowing me to confidently tackle various terrains. The shoes are also durable, showing no signs of wear after a challenging hike. My only minor complaint is that they could provide slightly more cushioning for longer treks. Overall, these shoes are a reliable choice for outdoor enthusiasts. 129 | 130 | 2) Rating: 5 131 | Review: The TrailWalker Hiking Shoes are fantastic! 
I've used them extensively on multiple hiking trips, and they have never let me down. The grip on various surfaces is exceptional, providing stability even on slippery trails. The shoes offer ample protection for my feet without sacrificing comfort. Additionally, they have withstood rough conditions and still look almost brand new. I highly recommend these shoes to anyone seeking a reliable and durable option for their hiking adventures. 132 | 133 | 3) Rating: 3.5 134 | Review: I have mixed feelings about the TrailWalker Hiking Shoes. On the positive side, they offer decent support and have good traction on most terrains. However, I found the sizing to be slightly off, and the shoes took a bit of breaking in before they felt comfortable. Also, while they are durable overall, I noticed some minor wear and tear on the outsole after a few months of regular use. They are a decent choice for occasional hikers but may not be ideal for intense or prolonged expeditions. 135 | 136 | 4) Rating: 4 137 | Review: I purchased the TrailWalker Hiking Shoes for a hiking trip in rugged mountain terrain. They performed admirably, providing excellent stability and protection. The waterproofing feature kept my feet dry during unexpected rain showers. The shoes are also lightweight, which is a bonus for long hikes. However, I did notice a small amount of discomfort around the toe area after extended periods of walking. Nevertheless, these shoes are a reliable option for most hiking adventures. 138 | 139 | 5) Rating: 5 140 | Review: The TrailWalker Hiking Shoes are hands down the best hiking shoes I've ever owned. From the moment I put them on, they felt like a perfect fit. The traction is outstanding, allowing me to confidently navigate challenging trails without slipping. The shoes provide excellent ankle support, which is crucial on uneven terrain. They are also durable and show no signs of wear, even after multiple hikes. I can't recommend these shoes enough for avid hikers. 
They are worth every penny. 141 | 142 | ## FAQ 143 | 49) How long does it take to break in the TrailWalker Hiking Shoes? 144 | The TrailWalker Hiking Shoes are made of flexible synthetic materials, so they usually take just a few days of regular use to break in and feel comfortable on your feet. 145 | 146 | 50) Are the TrailWalker Hiking Shoes suitable for trail running? 147 | While the TrailWalker Hiking Shoes are designed primarily for hiking, their lightweight construction and excellent traction also make them suitable for trail running on moderate terrains. 148 | 149 | 51) Do the TrailWalker Hiking Shoes provide good arch support? 150 | Yes, the TrailWalker Hiking Shoes feature a cushioned midsole and supportive insole, providing excellent arch support for long hikes and reducing foot fatigue. 151 | 152 | 52) Are the TrailWalker Hiking Shoes compatible with gaiters? 153 | Yes, the TrailWalker Hiking Shoes can be used with most standard gaiters, providing additional protection against water, snow, and debris while hiking. 154 | 155 | 53) Can the TrailWalker Hiking Shoes be resoled? 156 | While it may be possible to resole the TrailWalker Hiking Shoes, we recommend contacting the manufacturer (TrekReady) or a professional shoe repair service to determine the feasibility and cost of resoling. 157 | --------------------------------------------------------------------------------