├── .gitattributes ├── .github ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── FUNDING.yml └── ISSUE_TEMPLATE │ ├── contribute-your-own-feature.yaml │ ├── documentation-update.yaml │ ├── question-or-support.yaml │ ├── report-a-bug.yaml │ ├── submit-an-idea-or-proposal.yml │ └── suggest-new-feature.yaml ├── .gitignore ├── LICENSE ├── README.md ├── docs ├── azure-ai-integration.md ├── google-gemini-integration.md ├── images │ ├── azure-log-analytics-get-key.png │ ├── azure-log-analytics-show-logs.png │ ├── contribution_cta.png │ └── table-of-contents.png ├── infomaniak-integration.md ├── n8n-integration.md └── setup-azure-log-analytics.md ├── filters ├── google_search_tool.py └── time_token_tracker.py ├── pipelines ├── azure │ └── azure_ai_foundry.py ├── google │ └── google_gemini.py ├── infomaniak │ └── infomaniak.py └── n8n │ ├── Open_WebUI_Test_Agent.json │ └── n8n.py ├── pixi.lock ├── pixi.toml └── ruff.toml /.gitattributes: -------------------------------------------------------------------------------- 1 | # SCM syntax highlighting & preventing 3-way merges 2 | pixi.lock merge=binary linguist-language=YAML linguist-generated=true 3 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 6 | 7 | We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 8 | 9 | ## Our Standards 10 | 11 | Examples of behavior that contributes to a positive environment for our community include: 12 | 13 | * Demonstrating empathy and kindness toward other people 14 | * Being respectful of differing opinions, viewpoints, and experiences 15 | * Giving and gracefully accepting constructive feedback 16 | * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience 17 | * Focusing on what is best not just for us as individuals, but for the overall community 18 | 19 | Examples of unacceptable behavior include: 20 | 21 | * The use of sexualized language or imagery, and sexual attention or advances of any kind 22 | * Trolling, insulting or derogatory comments, and personal or political attacks 23 | * Public or private harassment 24 | * Publishing others' private information, such as a physical or email address, without their explicit permission 25 | * Contacting individual members, contributors, or leaders privately, outside designated community mechanisms, without their explicit permission 26 | * Other conduct which could reasonably be considered inappropriate in a professional setting 27 | 28 | ## Enforcement Responsibilities 29 | 30 | Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. 
31 | 
32 | Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
33 | 
34 | ## Scope
35 | 
36 | This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
37 | 
38 | ## Enforcement
39 | 
40 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opensource@github.com. All complaints will be reviewed and investigated promptly and fairly.
41 | 
42 | All community leaders are obligated to respect the privacy and security of the reporter of any incident.
43 | 
44 | ## Enforcement Guidelines
45 | 
46 | Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
47 | 
48 | ### 1. Correction
49 | 
50 | **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
51 | 
52 | **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
53 | 
54 | ### 2. Warning
55 | 
56 | **Community Impact**: A violation through a single incident or series of actions.
57 | 
58 | **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
59 | 
60 | ### 3. Temporary Ban
61 | 
62 | **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
63 | 
64 | **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
65 | 
66 | ### 4. Permanent Ban
67 | 
68 | **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
69 | 
70 | **Consequence**: A permanent ban from any sort of public interaction within the community.
71 | 
72 | ## Attribution
73 | 
74 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
75 | 
76 | Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
77 | 
78 | [homepage]: https://www.contributor-covenant.org
79 | 
80 | For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
-------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: --------------------------------------------------------------------------------
1 | # Welcome to Open-WebUI-Functions contributing guide
2 | 
3 | Thank you for investing your time in contributing to our project!
4 | 
5 | Read our [Code of Conduct](https://own.dev/github-owndev-open-webui-functions-code-of-conduct) to keep our community approachable and respectable.
6 | 
7 | In this guide you will get an overview of the contribution workflow from opening an issue, creating a PR, reviewing, and merging the PR.
8 | 
9 | Use the table of contents icon ![Table of contents icon](../docs/images/table-of-contents.png) in the top left corner of this document to get to a specific section of this guide quickly.
10 | 
11 | ## New contributor guide
12 | 
13 | To get an overview of the project, read the [README](https://own.dev/github-owndev-open-webui-functions) file. Here are some resources to help you get started with open source contributions:
14 | 
15 | - [Finding ways to contribute to open source on GitHub](https://docs.github.com/en/get-started/exploring-projects-on-github/finding-ways-to-contribute-to-open-source-on-github)
16 | - [Set up Git](https://docs.github.com/en/get-started/git-basics/set-up-git)
17 | - [GitHub flow](https://docs.github.com/en/get-started/using-github/github-flow)
18 | - [Collaborating with pull requests](https://docs.github.com/en/github/collaborating-with-pull-requests)
19 | 
20 | 
21 | ## Getting started
22 | 
23 | For more information on how we write our markdown files, see "[Using Markdown in GitHub](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax)."
24 | 
25 | ### Issues
26 | 
27 | #### Create a new issue
28 | 
29 | If you spot a problem, [search if an issue already exists](https://docs.github.com/en/github/searching-for-information-on-github/searching-on-github/searching-issues-and-pull-requests#search-by-the-title-body-or-comments). If a related issue doesn't exist, you can open a new issue using a relevant [issue form](https://own.dev/github-owndev-open-webui-functions-issues-new).
30 | 
31 | #### Solve an issue
32 | 
33 | Scan through our [existing issues](https://own.dev/github-owndev-open-webui-functions-issues) to find one that interests you. You can narrow down the search using `labels` as filters. As a general rule, we don’t assign issues to anyone. If you find an issue to work on, you are welcome to open a PR with a fix.
34 | 
35 | ### Make Changes
36 | 
37 | #### Make changes in the UI
38 | 
39 | Click **Make a contribution** at the bottom of any docs page to make small changes such as a typo, sentence fix, or a broken link. This takes you to the `.md` file where you can make your changes and [create a pull request](#pull-request) for a review.
40 | 
41 | ![Contribution call-to-action](../docs/images/contribution_cta.png)
42 | 
43 | #### Make changes locally
44 | 
45 | 1. Fork the repository.
46 |    - Using GitHub Desktop:
47 |      - [Getting started with GitHub Desktop](https://docs.github.com/en/desktop/installing-and-configuring-github-desktop/getting-started-with-github-desktop) will guide you through setting up Desktop.
48 |      - Once Desktop is set up, you can use it to [fork the repo](https://docs.github.com/en/desktop/contributing-and-collaborating-using-github-desktop/cloning-and-forking-repositories-from-github-desktop)!
49 | 
50 |    - Using the command line:
51 |      - [Fork the repo](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo#fork-an-example-repository) so that you can make your changes without affecting the original project until you're ready to merge them.
52 | 
53 | 2. Create a working branch and start with your changes!
54 | 
55 | ### Commit your update
56 | 
57 | Commit the changes once you are happy with them.
58 | 
59 | ### Pull Request
60 | 
61 | When you're finished with the changes, create a pull request, also known as a PR.
62 | - Fill out the "Ready for review" template so that we can review your PR. This template helps reviewers understand your changes as well as the purpose of your pull request.
63 | - Don't forget to [link PR to issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if you are solving one.
64 | - Enable the checkbox to [allow maintainer edits](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork) so the branch can be updated for a merge.
65 | - Once you submit your PR, a team member will review your proposal. We may ask questions or request additional information.
66 | - We may ask for changes to be made before a PR can be merged. You can apply suggested changes directly through the UI. You can make any other changes in your fork, then commit them to your branch.
67 | - If you run into any merge issues, check out this [git tutorial](https://github.com/skills/resolve-merge-conflicts) to help you resolve merge conflicts and other issues.
68 | 
69 | ### Your PR is merged!
70 | 
71 | Congratulations :tada::tada: the **Open-WebUI-Functions** team thanks you :sparkles:.
72 | 
73 | Once your PR is merged, your contributions will be publicly visible.
-------------------------------------------------------------------------------- /.github/FUNDING.yml: --------------------------------------------------------------------------------
1 | github: owndev
-------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/contribute-your-own-feature.yaml: --------------------------------------------------------------------------------
1 | name: Contribute your own feature
2 | description: Propose and contribute a self-implemented feature for this project.
3 | labels:
4 |   - enhancement
5 |   - contribution
6 | body:
7 |   - type: markdown
8 |     attributes:
9 |       value: |
10 |         **Thank you for contributing to this project!**
11 |         This template is intended for contributors who plan to **submit their own implementation** of a new feature or improvement.
12 | 
13 |         Please fill out all relevant sections to help the maintainers review and integrate your work efficiently.
14 | 
15 |         - Make sure you’ve read the [Contributing Guide](https://own.dev/github-owndev-open-webui-functions-contributing).
16 |         - Before proceeding, open this issue to **coordinate with maintainers** and avoid duplicate work.
17 | 
18 |   - type: checkboxes
19 |     id: agreement
20 |     attributes:
21 |       label: Contribution Terms
22 |       description: Confirm that you've reviewed the necessary guidelines and intend to contribute code.
23 |       options:
24 |         - label: I have reviewed the project’s [Code of Conduct](https://own.dev/github-owndev-open-webui-functions-code-of-conduct) and contribution guidelines.
25 |           required: true
26 |         - label: I plan to implement this feature myself and submit a pull request.
27 | required: true 28 | 29 | - type: input 30 | attributes: 31 | label: Feature title 32 | description: A short, clear name for the feature you're implementing. 33 | validations: 34 | required: true 35 | 36 | - type: textarea 37 | attributes: 38 | label: Feature overview 39 | description: | 40 | Briefly describe the feature you're planning to implement: 41 | - What problem does it solve or what benefit does it bring? 42 | - Who are the target users or use cases? 43 | validations: 44 | required: true 45 | 46 | - type: textarea 47 | attributes: 48 | label: Implementation details 49 | description: | 50 | Describe your planned approach or architecture for implementing this feature. 51 | Include: 52 | - Which components/modules will be affected 53 | - APIs, libraries, or technologies you intend to use 54 | - Any design decisions or constraints 55 | validations: 56 | required: true 57 | 58 | - type: textarea 59 | attributes: 60 | label: Tasks and milestones 61 | description: Break down the work into actionable steps or milestones. This will help with tracking progress. 62 | validations: 63 | required: false 64 | 65 | - type: textarea 66 | attributes: 67 | label: Questions or areas for feedback 68 | description: | 69 | Do you have any uncertainties, decisions to validate, or places where you'd like early feedback from maintainers? 70 | validations: 71 | required: false 72 | 73 | - type: textarea 74 | attributes: 75 | label: Additional context or references 76 | description: Link related discussions, documents, code snippets, or external resources. 77 | validations: 78 | required: false 79 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/documentation-update.yaml: -------------------------------------------------------------------------------- 1 | name: Documentation update 2 | description: Suggest improvements or changes to the documentation. 3 | labels: 4 | - documentation 5 | body: 6 | - type: input 7 | attributes: 8 | label: Documentation area 9 | description: What part of the documentation needs improvement? (e.g., README, API docs, Setup guide) 10 | validations: 11 | required: true 12 | 13 | - type: textarea 14 | attributes: 15 | label: Suggested improvement 16 | description: Describe what should be changed or clarified and why. 17 | validations: 18 | required: true 19 | 20 | - type: textarea 21 | attributes: 22 | label: Additional context 23 | description: Add references, links, or examples to help understand your suggestion. 24 | validations: 25 | required: false 26 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/question-or-support.yaml: -------------------------------------------------------------------------------- 1 | name: Question or Support 2 | description: Ask a question or request help using the project. 3 | labels: 4 | - question 5 | body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | **Need help or have a question?** 10 | This form is for general usage questions or support requests. 11 | Please check the [documentation](https://own.dev/github-owndev-open-webui-functions) and existing [issues](https://own.dev/github-owndev-open-webui-functions-issues) before submitting. 12 | 13 | - type: input 14 | attributes: 15 | label: Question summary 16 | description: Briefly describe your question or what you need help with. 
17 | validations: 18 | required: true 19 | 20 | - type: textarea 21 | attributes: 22 | label: Details 23 | description: Please describe the question or issue in more detail. Include context, logs, error messages, etc. if applicable. 24 | validations: 25 | required: true 26 | 27 | - type: textarea 28 | attributes: 29 | label: What have you tried? 30 | description: Share what you've already tried to solve the issue on your own. 31 | validations: 32 | required: false 33 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/report-a-bug.yaml: -------------------------------------------------------------------------------- 1 | name: Report a bug 2 | description: Report an issue that you encountered while using the project. 3 | labels: 4 | - bug 5 | body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | **Thanks for reporting a bug!** 10 | Please complete the information below to help us reproduce and fix the issue. 11 | 12 | - type: input 13 | attributes: 14 | label: Bug title 15 | description: A brief summary of the bug. 16 | validations: 17 | required: true 18 | 19 | - type: textarea 20 | attributes: 21 | label: Describe the bug 22 | description: | 23 | What went wrong? Include: 24 | - What you expected to happen 25 | - What actually happened 26 | - Any error messages or screenshots 27 | validations: 28 | required: true 29 | 30 | - type: textarea 31 | attributes: 32 | label: Steps to reproduce 33 | description: Describe the steps we need to take to see the issue. Be as specific as possible. 34 | validations: 35 | required: true 36 | 37 | - type: textarea 38 | attributes: 39 | label: Environment 40 | description: | 41 | Provide relevant environment details: 42 | - OS, browser/device (if relevant) 43 | - Software version or commit hash 44 | - Any configuration or plugins used 45 | validations: 46 | required: true 47 | 48 | - type: textarea 49 | attributes: 50 | label: Additional context 51 | description: Add links to logs, screenshots, or related issues if available. 52 | validations: 53 | required: false 54 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/submit-an-idea-or-proposal.yml: -------------------------------------------------------------------------------- 1 | name: Submit an idea or proposal 2 | description: Suggest a general idea or concept for discussion, not necessarily planned for immediate implementation. 3 | labels: 4 | - idea 5 | - discussion 6 | body: 7 | - type: markdown 8 | attributes: 9 | value: | 10 | **Have an idea to share?** 11 | Use this form to submit general proposals, conceptual ideas, or directions for future development. This is a space for open-ended discussions that might not yet be tied to a concrete implementation. 12 | 13 | > If you're planning to contribute code for your idea, please use the **"Contribute your own feature"** template instead. 14 | 15 | - type: input 16 | attributes: 17 | label: Idea title 18 | description: A clear and concise title for your idea or proposal. 19 | validations: 20 | required: true 21 | 22 | - type: textarea 23 | attributes: 24 | label: Describe your idea 25 | description: | 26 | Describe your idea or proposal in detail. 
Consider including: 27 | - What problem or opportunity you're addressing 28 | - How this could improve the project 29 | - Any inspiration or background 30 | validations: 31 | required: true 32 | 33 | - type: textarea 34 | attributes: 35 | label: Possible directions or next steps 36 | description: What could be explored next? Are there any potential ways this idea could be evaluated or tested? 37 | validations: 38 | required: false 39 | 40 | - type: textarea 41 | attributes: 42 | label: Additional context 43 | description: Add any links, discussions, screenshots, or references to support your idea. 44 | validations: 45 | required: false 46 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/suggest-new-feature.yaml: -------------------------------------------------------------------------------- 1 | name: Suggest a new feature 2 | description: Submit a request for a new feature or improvement for the project. 3 | labels: 4 | - enhancement 5 | body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | **Hello Contributors!** Thank you for taking the time to suggest a new feature for the project. Please provide as much detail as possible to help us understand your request. 10 | 11 | * Before submitting this request, ensure you've reviewed the [Contributing Guide](https://own.dev/github-owndev-open-webui-functions-contributing). 12 | * Check if a similar request already exists in the [issues](https://own.dev/github-owndev-open-webui-functions-issues). 13 | 14 | - type: checkboxes 15 | id: agreement 16 | attributes: 17 | label: Contribution Terms 18 | description: Please confirm that you have read and understood the project's contribution guidelines. 19 | options: 20 | - label: I have reviewed the project’s [Code of Conduct](https://own.dev/github-owndev-open-webui-functions-code-of-conduct) and contribution guidelines. 21 | required: true 22 | 23 | - type: input 24 | attributes: 25 | label: Feature title 26 | description: Provide a short and descriptive title for the feature you'd like to propose. 27 | validations: 28 | required: true 29 | 30 | - type: textarea 31 | attributes: 32 | label: Feature description 33 | description: | 34 | Please describe the feature in detail. Include: 35 | - The problem it solves or the improvement it adds. 36 | - Why it is important or useful. 37 | - Any relevant use cases or examples. 38 | validations: 39 | required: true 40 | 41 | - type: textarea 42 | attributes: 43 | label: Potential implementation ideas 44 | description: Share any ideas, approaches, or examples for implementing this feature. (Optional) 45 | validations: 46 | required: false 47 | 48 | - type: textarea 49 | attributes: 50 | label: Additional context or information 51 | description: Add any additional context, references, or related links that might be helpful. 52 | validations: 53 | required: false -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | # pixi environments 3 | .pixi 4 | *.egg-info 5 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 
8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright 2025 owndev 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Open-WebUI-Functions 2 | ![GitHub stars](https://img.shields.io/github/stars/owndev/Open-WebUI-Functions?style=social) 3 | ![GitHub forks](https://img.shields.io/github/forks/owndev/Open-WebUI-Functions?style=social) 4 | ![GitHub watchers](https://img.shields.io/github/watchers/owndev/Open-WebUI-Functions?style=social) 5 | ![GitHub top language](https://img.shields.io/github/languages/top/owndev/Open-WebUI-Functions) 6 | ![GitHub contributors](https://img.shields.io/github/contributors/owndev/Open-WebUI-Functions) 7 | ![GitHub License](https://img.shields.io/github/license/owndev/Open-WebUI-Functions) 8 | 9 | **Open-WebUI-Functions** is a collection of Python-based functions designed to extend the capabilities of [Open WebUI](https://own.dev/github-com-open-webui-open-webui) with additional **pipelines**, **filters**, and **integrations**. These functions allow users to interact with various AI models, process data efficiently, and customize the Open WebUI experience. 10 | 11 |
12 | 13 | ## Features ⭐ 14 | 15 | - 🧩 **Custom Pipelines**: Extend Open WebUI with AI processing pipelines, including model inference and data transformations. 16 | 17 | - 🔍 **Filters for Data Processing**: Apply custom filtering logic to refine, manipulate, or preprocess input and output data. 18 | 19 | - 🤝 **Azure AI Support**: Seamlessly connect Open WebUI with **Azure OpenAI** and other **Azure AI** models. 20 | 21 | - 🤝 **N8N Workflow Integration**: Enable interactions with [N8N](https://own.dev/n8n-io) for automation. 22 | 23 | - 📱 **Flexible Configuration**: Use environment variables to adjust function settings dynamically. 24 | 25 | - 🚀 **Streaming and Non-Streaming Support**: Handle both real-time and batch processing efficiently. 26 | 27 | - 🛡️ **Secure API Key Management**: Automatic encryption of sensitive information like API keys. 28 | 29 |
30 | 31 | ## Prerequisites 🔗 32 | 33 | To use these functions, ensure the following: 34 | 35 | 1. **An Active Open WebUI Instance**: You must have [Open WebUI](https://own.dev/github-com-open-webui-open-webui) installed and running. 36 | 37 | 2. **Required AI Services (if applicable)**: Some pipelines require external AI services, such as [Azure AI](https://own.dev/ai-azure-com). 38 | 39 | 3. **Admin Access**: To install functions in Open WebUI, you must have administrator privileges. 40 | 41 |
42 | 43 | ## Installation 🚀 44 | 45 | To install and configure functions in Open WebUI, follow these steps: 46 | 47 | 1. **Ensure Admin Access**: 48 | - You must be an admin in Open WebUI to install functions. 49 | 50 | 2. **Access Admin Settings**: 51 | - Navigate to the **Admin Settings** section in Open WebUI. 52 | 53 | 3. **Go to the Function Tab**: 54 | - Open the **Functions** tab in the admin panel. 55 | 56 | 4. **Create a New Function**: 57 | - Click **Add New Function**. 58 | - Copy the function code from this repository and paste it into the function editor. 59 | 60 | 5. **Set Environment Variables (if required)**: 61 | - Some functions require API keys or specific configurations via environment variables. 62 | - Set [WEBUI_SECRET_KEY](https://own.dev/docs-openwebui-com-getting-started-env-configuration-webui-secret-key) for secure encryption of sensitive API keys. 63 | 64 | 6. **Save and Activate**: 65 | - Save the function, and it will be available for use within Open WebUI. 66 | 67 |
68 | 69 | ## Security Features 🛡️ 70 | 71 | ### API Key Encryption 72 | 73 | The functions include a built-in encryption mechanism for sensitive information: 74 | 75 | - **Automatic Encryption**: API keys and other sensitive data are automatically encrypted when stored. 76 | - **Encrypted Storage**: Values are stored with an "encrypted:" prefix followed by the encrypted data. 77 | - **Transparent Usage**: The encryption/decryption happens automatically when values are accessed. 78 | - **No Configuration Required**: Works out-of-the-box when [WEBUI_SECRET_KEY](https://own.dev/docs-openwebui-com-getting-started-env-configuration-webui-secret-key) is set. 79 | 80 | 81 | **To enable encryption:** 82 | ```bash 83 | # Set this in your Open WebUI environment or .env file 84 | WEBUI_SECRET_KEY="your-secure-random-string" 85 | ``` 86 | 87 |
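**Example round-trip** (a sketch; the `EncryptedStr` helper ships inside each function file, e.g. [filters/time_token_tracker.py](./filters/time_token_tracker.py), and the import path below is hypothetical and depends on how the function code is loaded in your deployment):

```python
# Encrypt and decrypt a secret with the bundled EncryptedStr helper.
# Requires WEBUI_SECRET_KEY to be set; without it, values pass through unchanged.
import os

os.environ["WEBUI_SECRET_KEY"] = "your-secure-random-string"

from time_token_tracker import EncryptedStr  # hypothetical import path

stored = EncryptedStr.encrypt("sk-example-api-key")
print(stored)                        # "encrypted:gAAAA..." -- safe to persist
print(EncryptedStr.decrypt(stored))  # "sk-example-api-key"
```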
88 | 
89 | ## Pipelines 🧩
90 | 
91 | Pipelines are processing functions that extend Open WebUI with **custom AI models**, **external integrations**, and **data manipulation logic**.
92 | 
93 | ### **1. [Azure AI Foundry Pipeline](https://own.dev/github-owndev-open-webui-functions-azure-ai-foundry)**
94 | 
95 | - Enables interaction with **Azure OpenAI** and other **Azure AI** models.
96 | - Supports selecting multiple Azure AI models via the `AZURE_AI_MODEL` environment variable (e.g. `gpt-4o;gpt-4o-mini`); see the sketch below.
97 | - Filters out invalid request parameters to keep requests clean.
98 | - Handles both streaming and non-streaming responses.
99 | - Provides configurable error handling and timeouts.
100 | - Predefined models for easy access.
101 | - Supports encryption of sensitive information like API keys.
102 | 
103 | 🔗 [Azure AI Pipeline in Open WebUI](https://own.dev/openwebui-com-f-owndev-azure-ai)
104 | 
105 | 🔗 [Learn More About Azure AI](https://own.dev/azure-microsoft-com-en-us-solutions-ai)
106 | 
107 | 
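For reference, this is roughly how a multi-model value such as `gpt-4o;gpt-4o-mini` can be split into individual entries (an illustration of the documented separator handling, not the pipeline's exact code):

```python
# Split a semicolon- or comma-separated AZURE_AI_MODEL value into model names.
import re

raw = "gpt-4o;gpt-4o-mini"
models = [m.strip() for m in re.split(r"[;,]", raw) if m.strip()]
print(models)  # ['gpt-4o', 'gpt-4o-mini']
```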
108 | 
109 | ### **2. [N8N Pipeline](https://own.dev/github-owndev-open-webui-functions-n8n-pipeline)**
110 | 
111 | - Integrates **Open WebUI** with **N8N**, an automation and workflow platform.
112 | - Sends messages from Open WebUI to an **N8N webhook**.
113 | - Supports real-time message processing with dynamic field handling.
114 | - Enables automation of AI-generated responses within an **N8N workflow**.
115 | - Supports encryption of sensitive information like API keys.
116 | - An example [N8N workflow](https://own.dev/github-owndev-open-webui-functions-open-webui-test-agent) for the [N8N Pipeline](https://own.dev/github-owndev-open-webui-functions-n8n-pipeline) is available; a minimal webhook test is sketched below.
117 | 
118 | 🔗 [N8N Pipeline in Open WebUI](https://own.dev/openwebui-com-f-owndev-n8n-pipeline)
119 | 
120 | 🔗 [Learn More About N8N](https://own.dev/n8n-io)
121 | 
122 | 
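A minimal manual test of the webhook exchange, mirroring what the pipeline sends (URL, token, and field names are placeholders; the defaults come from [docs/n8n-integration.md](./docs/n8n-integration.md)):

```python
# Post a message to an n8n webhook the way the pipeline does, then read the reply.
import requests

resp = requests.post(
    "https://n8n.example.com/webhook/openwebui-agent",  # placeholder URL
    headers={"Authorization": "Bearer your-bearer-token"},
    json={"chatInput": "Hello from Open WebUI"},  # INPUT_FIELD default
    timeout=30,
)
print(resp.json().get("output"))  # RESPONSE_FIELD default
```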
123 | 124 | ### **3. [Infomaniak](https://own.dev/github-owndev-open-webui-functions-infomaniak)** 125 | 126 | - Integrates **Open WebUI** with **Infomaniak**, a Swiss web hosting and cloud services provider. 127 | - Sends messages from Open WebUI to an **Infomaniak AI Tool**. 128 | - Supports encryption of sensitive information like API keys. 129 | 130 | 🔗 [Infomaniak Pipeline in Open WebUI](https://own.dev/openwebui-com-f-owndev-infomaniak-ai-tools) 131 | 132 | 🔗 [Learn More About Infomaniak](https://own.dev/infomaniak-com-en-hosting-ai-tools) 133 | 134 |
135 | 
136 | ### **4. [Google Gemini](https://own.dev/github-owndev-open-webui-functions-google-gemini)**
137 | 
138 | - Integrates **Open WebUI** with **Google Gemini**, a family of generative AI models by Google.
139 | - Connects to the Google Generative AI API or the Vertex AI API for content generation.
140 | - Sends messages from Open WebUI to **Google Gemini**.
141 | - Supports encryption of sensitive information like API keys.
142 | - Supports both streaming and non-streaming responses.
143 | - Provides configurable error handling and timeouts.
144 | - Grounding with Google search via the [google_search_tool.py filter](./filters/google_search_tool.py).
145 | - Native tool calling support.
146 | 
147 | 🔗 [Google Gemini Pipeline in Open WebUI](https://own.dev/openwebui-com-f-owndev-google-gemini)
148 | 
149 | 🔗 [Learn More About Google Gemini](https://own.dev/ai-google-dev-gemini-api-docs)
150 | 
151 | 
152 | 153 | ## Filters 🔍 154 | 155 | Filters allow for **preprocessing and postprocessing** of data within Open WebUI. 156 | 157 | ### **1. [Time Token Tracker](https://own.dev/github-owndev-open-webui-functions-time-token-tracker)** 158 | 159 | - Measures **response time** and **token usage** for AI interactions. 160 | - Supports tracking of **total token usage** and **per-message token counts**. 161 | - Can calculate token usage for all messages or only a subset. 162 | - Uses OpenAI's `tiktoken` library for token counting (only accurate for OpenAI models). 163 | - Optional: Can send logs to [Azure Log Analytics Workspace](https://own.dev/learn-microsoft-com-en-us-azure-azure-monitor-logs-log-analytics-workspace-overview). 164 | 165 | 🔗 [Time Token Tracker in Open WebUI](https://own.dev/openwebui-com-f-owndev-time-token-tracker) 166 | 167 | 🔗 [How to Setup Azure Log Analytics](https://own.dev/github-owndev-open-webui-functions-setup-azure-log-analytics) 168 | 169 |
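For context, the token counts come from `tiktoken` roughly like this (a sketch; the encoding name is an assumption, and counts are only accurate for OpenAI models, as noted above):

```python
# Count tokens in a message with an OpenAI-style tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI chat encoding
print(len(enc.encode("How many tokens is this sentence?")))
```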
170 | 
171 | ## Integrations 🤝
172 | 
173 | ### Azure AI
174 | 
175 | See the [Azure AI Integration](https://own.dev/github-owndev-open-webui-functions-azure-ai-integration-md) guide.
176 | 
177 | 
178 | ### N8N
179 | 
180 | See the [N8N Integration](https://own.dev/github-owndev-open-webui-functions-n8n-integration-md) guide.
181 | 
182 | 
183 | ### Infomaniak
184 | 
185 | See the [Infomaniak Integration](https://own.dev/github-owndev-open-webui-functions-infomaniak-integration-md) guide.
186 | 
187 | 
188 | ### Google
189 | 
190 | See the [Google Gemini Integration](https://own.dev/github-owndev-open-webui-functions-google-gemini-integration-md) guide.
191 | 
192 | 
193 | 194 | ## Contribute 💪 195 | 196 | We accept different types of contributions, including some that don't require you to write a single line of code. 197 | For detailed instructions on how to get started with our project, see [about contributing to Open-WebUI-Functions](https://own.dev/github-owndev-open-webui-functions-contributing). 198 | 199 | 200 | ## License 📜 201 | 202 | This project is licensed under the [Apache License 2.0](https://own.dev/github-owndev-open-webui-functions-license) - see the [LICENSE](https://own.dev/github-owndev-open-webui-functions-license) file for details. 📄 203 | 204 | 205 | ## Support 💬 206 | 207 | If you have any questions, suggestions, or need assistance, please open an [issue](https://own.dev/github-owndev-open-webui-functions-issues-new) to connect with us! 🤝 208 | 209 | 210 | ## Star History 💫 211 | 212 | 213 | 214 | 215 | 216 | Star History Chart 217 | 218 | 219 | 220 | --- 221 | 222 | Created by [owndev](https://own.dev/github) - Let's make Open WebUI even more amazing together! 💪 -------------------------------------------------------------------------------- /docs/azure-ai-integration.md: -------------------------------------------------------------------------------- 1 | # Azure AI Integration 2 | 3 | The repository includes functions specifically designed for **Azure AI**, supporting both **Azure OpenAI** models and general **Azure AI** services. 4 | 5 | 🔗 [Learn More About Azure AI](https://own.dev/azure-microsoft-com-en-us-solutions-ai) 6 | 7 | 8 | ## Pipeline 9 | - 🧩 [Azure AI Foundry Pipeline](https://own.dev/github-owndev-open-webui-functions-azure-ai-foundry) 10 | 11 | 12 | ### Features: 13 | 14 | - **Azure OpenAI API Support** 15 | Access models like **GPT-4o, o3**, and **other fine-tuned AI models** via Azure. 16 | 17 | - **Azure AI Model Deployment** 18 | Connect to **custom models** hosted on Azure AI. 19 | 20 | - **Secure API Requests** 21 | Supports API key authentication and environment variable configurations. 
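### Example Request:

A rough sketch of a chat-completions call against such an endpoint (all values are placeholders; see the variables below). It uses the `api-key` header; with `AZURE_AI_USE_AUTHORIZATION_HEADER=true`, an `Authorization: Bearer` header is sent instead:

```python
# Manually exercise an Azure AI chat-completions endpoint outside Open WebUI.
import os
import requests

resp = requests.post(
    os.environ["AZURE_AI_ENDPOINT"],
    headers={
        "Content-Type": "application/json",
        "api-key": os.environ["AZURE_AI_API_KEY"],
    },
    json={
        "model": "gpt-4o",  # only needed when AZURE_AI_MODEL_IN_BODY=true
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```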
22 | 
23 | 
24 | ### Environment Variables:
25 | 
26 | Configure the following environment variables to enable Azure AI support:
27 | 
28 | ```bash
29 | # API key or token for Azure AI
30 | AZURE_AI_API_KEY="your-api-key"
31 | 
32 | # Azure AI endpoint
33 | # Examples:
34 | # - For general Azure AI: "https://<your-endpoint>/chat/completions?api-version=2024-05-01-preview"
35 | # - For Azure OpenAI: "https://<your-endpoint>/openai/deployments/<your-deployment>/chat/completions?api-version=2024-08-01-preview"
36 | AZURE_AI_ENDPOINT="https://<your-resource-name>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview"
37 | 
38 | # Optional: model names (if not embedded in the URL)
39 | # Supports semicolon or comma separated values: "gpt-4o;gpt-4o-mini" or "gpt-4o,gpt-4o-mini"
40 | AZURE_AI_MODEL="gpt-4o;gpt-4o-mini"
41 | 
42 | # If true, the model name will be included in the request body
43 | AZURE_AI_MODEL_IN_BODY=true
44 | 
45 | # Whether to use a predefined list of Azure AI models
46 | USE_PREDEFINED_AZURE_AI_MODELS=false
47 | 
48 | # If true, use "Authorization: Bearer" instead of "api-key" header
49 | AZURE_AI_USE_AUTHORIZATION_HEADER=false
50 | ```
51 | 
52 | > [!TIP]
53 | > To use **Azure OpenAI** and other **Azure AI** models **simultaneously**, you can use the following URL: `https://<your-resource-name>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview`
-------------------------------------------------------------------------------- /docs/google-gemini-integration.md: --------------------------------------------------------------------------------
1 | # Google Gemini Integration
2 | 
3 | This integration enables **Open WebUI** to interact with **Google Gemini** models via the official Google Generative AI API (using API keys) or through Google Cloud Vertex AI (leveraging Google Cloud's infrastructure and authentication). It provides a robust and customizable pipeline to access text and multimodal generation capabilities from Google’s latest AI models.
4 | 
5 | 🔗 [Learn More About Google AI](https://own.dev/ai-google-dev)
6 | 
7 | ## Pipeline
8 | 
9 | - 🧩 [Google Gemini Pipeline](https://own.dev/github-owndev-open-webui-functions-google-gemini)
10 | 
11 | ## Features
12 | 
13 | - **Asynchronous API Calls**
14 |   Improves performance and scalability with non-blocking requests.
15 | 
16 | - **Model Caching**
17 |   Caches available model lists for faster subsequent access.
18 | 
19 | - **Dynamic Model Handling**
20 |   Automatically strips provider prefixes for seamless integration.
21 | 
22 | - **Streaming Response Support**
23 |   Handles token-by-token responses with built-in safety enforcement.
24 | 
25 | - **Multimodal Input Support**
26 |   Accepts both text and image data for more expressive interactions.
27 | 
28 | - **Flexible Error Handling**
29 |   Retries failed requests and logs errors for transparency.
30 | 
31 | - **Integration with Google Generative AI or Vertex AI API**
32 |   Connect using either the Google Generative AI API or Google Cloud Vertex AI for content generation.
33 | 
34 | - **Secure API Key Storage**
35 |   API keys (for the Google Generative AI API method) are encrypted and never exposed in plaintext.
36 | 
37 | - **Safety Configuration**
38 |   Control safety behavior using a permissive or strict mode.
39 | 
40 | - **Customizable Generation Settings**
41 |   Use environment variables to configure token limits, temperature, etc.
42 | 
43 | - **Grounding with Google search**
44 |   Improve the accuracy and recency of Gemini responses with Google search grounding.
45 | 
46 | - **Native tool calling support**
47 |   Leverage the google-genai SDK's native function calling to orchestrate the use of tools.
48 | 
49 | ## Environment Variables
50 | 
51 | Set the following environment variables to configure the Google Gemini integration.
52 | 
53 | ### General Settings (Applicable to both connection methods)
54 | 
55 | ```bash
56 | # Use permissive safety settings for content generation (true/false)
57 | # Default: false
58 | USE_PERMISSIVE_SAFETY=false
59 | 
60 | # Model list cache duration (in seconds)
61 | # Default: 600
62 | GOOGLE_MODEL_CACHE_TTL=600
63 | 
64 | # Number of retry attempts for failed API calls
65 | # Default: 2
66 | GOOGLE_RETRY_COUNT=2
67 | ```
68 | 
69 | ### Connection Method: Google Generative AI API (Default)
70 | 
71 | Use these settings if you are connecting directly via the Google Generative AI API. This is the default method if `GOOGLE_GENAI_USE_VERTEXAI` is not set to `true`.
72 | 
73 | ```bash
74 | # API key for authenticating with Google Generative AI.
75 | # Required if GOOGLE_GENAI_USE_VERTEXAI is "false" or not set.
76 | GOOGLE_API_KEY="your-google-api-key"
77 | ```
78 | 
79 | > [!TIP]
80 | > You can obtain your API key from the [Google AI Studio](https://own.dev/aistudio-google-com) dashboard after signing up.
81 | 
82 | ### Connection Method: Google Cloud Vertex AI
83 | 
84 | Use these settings if you are connecting via Google Cloud Vertex AI. This method typically uses Application Default Credentials (ADC) or a service account for authentication, and `GOOGLE_API_KEY` is not used.
85 | 
86 | ```bash
87 | # Set to "true" to use Google Cloud Vertex AI.
88 | # Default: false
89 | GOOGLE_GENAI_USE_VERTEXAI="true"
90 | 
91 | # Your Google Cloud Project ID.
92 | # Required if GOOGLE_GENAI_USE_VERTEXAI is "true".
93 | GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
94 | 
95 | # The Google Cloud region for Vertex AI (e.g., "us-central1").
96 | # Defaults to "global" if not set.
97 | GOOGLE_CLOUD_LOCATION="your-gcp-location"
98 | ```
99 | 
100 | ## Grounding with Google search
101 | 
102 | Grounding with Google search is enabled or disabled via the `google_search_tool` feature, which can be switched on and off in a Filter.
103 | 
104 | For instance, the [Filter (google_search_tool.py)](../filters/google_search_tool.py) replaces Open WebUI's default web search function with Google search grounding.
105 | 
106 | When enabled, the sources and Google queries used by Gemini are displayed with the response.
107 | 
108 | ## Native tool calling support
109 | 
110 | Native tool calling is enabled or disabled via Open WebUI's standard "Function calling" toggle.
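## Verifying your setup

To sanity-check credentials outside Open WebUI, you can exercise the `google-genai` SDK directly (a minimal sketch; the model name is illustrative):

```python
# Minimal connectivity check against the Google Generative AI API.
# Reads GOOGLE_API_KEY from the environment; for Vertex AI, construct the client
# as genai.Client(vertexai=True, project="...", location="...") instead.
from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name
    contents="Say hello",
)
print(response.text)
```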
-------------------------------------------------------------------------------- /docs/images/azure-log-analytics-get-key.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/owndev/Open-WebUI-Functions/89ec869cad8824e6cea8a12bcff8327da656eb73/docs/images/azure-log-analytics-get-key.png
-------------------------------------------------------------------------------- /docs/images/azure-log-analytics-show-logs.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/owndev/Open-WebUI-Functions/89ec869cad8824e6cea8a12bcff8327da656eb73/docs/images/azure-log-analytics-show-logs.png
-------------------------------------------------------------------------------- /docs/images/contribution_cta.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/owndev/Open-WebUI-Functions/89ec869cad8824e6cea8a12bcff8327da656eb73/docs/images/contribution_cta.png
-------------------------------------------------------------------------------- /docs/images/table-of-contents.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/owndev/Open-WebUI-Functions/89ec869cad8824e6cea8a12bcff8327da656eb73/docs/images/table-of-contents.png
-------------------------------------------------------------------------------- /docs/infomaniak-integration.md: --------------------------------------------------------------------------------
1 | # Infomaniak AI Tools Integration
2 | 
3 | This integration enables Open WebUI to interact with **Infomaniak AI Tools**, using their public API for secure and scalable AI model access.
4 | 
5 | 🔗 [Learn More About Infomaniak](https://own.dev/infomaniak-com-en-hosting-ai-tools)
6 | 
7 | 
8 | ## Pipeline
9 | 
10 | - 🧩 [Infomaniak AI Tools Pipeline](https://own.dev/github-owndev-open-webui-functions-infomaniak)
11 | 
12 | 
13 | ## Features
14 | 
15 | - **Secure API Access**
16 |   Authenticate via API key (automatically encrypted and stored securely).
17 | 
18 | - **Model Product Binding**
19 |   Associate API requests with a specific product ID provided by Infomaniak.
20 | 
21 | - **Customizable API Endpoint**
22 |   Define a custom base URL for regional or private deployments.
23 | 
24 | - **Model Name Prefixing**
25 |   Automatically add a prefix to distinguish models from other providers.
26 | 
27 | 
28 | ## Environment Variables
29 | 
30 | Set the following environment variables to enable Infomaniak AI Tools integration:
31 | 
32 | ```bash
33 | # API key for authenticating with Infomaniak AI Tools
34 | INFOMANIAK_API_KEY="your-api-key"
35 | 
36 | # Product ID (default: 50070) assigned by Infomaniak
37 | INFOMANIAK_PRODUCT_ID=50070
38 | 
39 | # Base URL of the Infomaniak API
40 | INFOMANIAK_BASE_URL="https://api.infomaniak.com"
41 | 
42 | # Optional: Prefix to add before model names (e.g. for display or routing)
43 | NAME_PREFIX="Infomaniak: "
44 | ```
45 | 
46 | > [!TIP]
47 | > You can find your API key and product ID in your [Infomaniak Manager](https://own.dev/manager-infomaniak-com).
-------------------------------------------------------------------------------- /docs/n8n-integration.md: --------------------------------------------------------------------------------
1 | # N8N Integration
2 | 
3 | This integration allows Open WebUI to communicate with workflows created in **n8n**, a powerful workflow automation tool.
Messages are sent and received via webhook endpoints, making it easy to plug Open WebUI into your existing automation pipelines.
4 | 
5 | 🔗 [Learn More About N8N](https://own.dev/n8n-io)
6 | 
7 | 
8 | ## Pipeline
9 | - 🧩 [N8N Pipeline](https://own.dev/github-owndev-open-webui-functions-n8n-pipeline)
10 | 
11 | 
12 | ## Template Workflow
13 | 
14 | - 🧩 [N8N Open WebUI Test Agent (Template)](https://own.dev/github-owndev-open-webui-functions-open-webui-test-agent)
15 | 
16 | 
17 | ## Features
18 | 
19 | - **Webhook Communication**
20 |   Send messages directly to an n8n workflow via a webhook URL.
21 | 
22 | - **Token-Based Authentication**
23 |   Secure access to your n8n webhook using a Bearer token or Cloudflare Access tokens.
24 | 
25 | - **Flexible Input/Output Mapping**
26 |   Customize which fields in the request/response payload are used for communication.
27 | 
28 | - **Live Status Feedback**
29 |   Optionally emit status updates at a configurable interval.
30 | 
31 | 
32 | ## Environment Variables
33 | 
34 | Set the following environment variables to enable n8n integration:
35 | 
36 | ```bash
37 | # n8n webhook endpoint
38 | # Example: "https://n8n.yourdomain.com/webhook/openwebui-agent"
39 | N8N_URL="https://<your-n8n-instance>/webhook/<your-webhook-id>"
40 | 
41 | # Optional: Bearer token for secure access
42 | N8N_BEARER_TOKEN="your-bearer-token"
43 | 
44 | # Payload input field (used by Open WebUI to send messages)
45 | INPUT_FIELD="chatInput"
46 | 
47 | # Payload output field (used by Open WebUI to read the response)
48 | RESPONSE_FIELD="output"
49 | 
50 | # Interval (in seconds) between emitting status updates to the UI
51 | EMIT_INTERVAL=2.0
52 | 
53 | # Enable or disable live status indicators
54 | ENABLE_STATUS_INDICATOR=true
55 | 
56 | # Optional: Cloudflare Access tokens (if behind Cloudflare Zero Trust)
57 | CF_ACCESS_CLIENT_ID="your-cloudflare-access-client-id"
58 | CF_ACCESS_CLIENT_SECRET="your-cloudflare-access-client-secret"
59 | ```
60 | 
61 | > [!TIP]
62 | > If your n8n instance is protected behind Cloudflare Zero Trust, you can use service tokens for authentication.
63 | > Learn more: [Cloudflare Access Service Tokens](https://own.dev/developers-cloudflare-com-cloudflare-one-identity-service-tokens)
-------------------------------------------------------------------------------- /docs/setup-azure-log-analytics.md: --------------------------------------------------------------------------------
1 | # Setup Azure Log Analytics Workspace
2 | ## Installation
3 | - [Create Workspace](https://own.dev/learn-microsoft-com-en-us-azure-azure-monitor-logs-quick-create-workspace)
4 | 
5 | ## Get Shared Key
6 | To send data from Time Token Tracker to Azure Log Analytics, a shared key is required. This can be found under `Settings` > `Agents` > `Primary key`.
7 | 
8 | ![Get the primary key](./images/azure-log-analytics-get-key.png)
9 | 
10 | ## Show Logs
11 | To view these logs, go to `Logs` > `Custom Logs`. All logs will be listed there.
12 | > It may take a few minutes for the first logs to become visible.
13 | 
14 | ![Show logs in the workspace](./images/azure-log-analytics-show-logs.png)
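## Test Ingestion Manually

To verify ingestion outside Open WebUI, you can call the same HTTP Data Collector API that the filter uses (a condensed sketch; workspace ID, key, payload fields, and log type are placeholders):

```python
# Send one test record to Azure Log Analytics with a SharedKey signature.
import base64
import datetime
import hashlib
import hmac
import json

import requests

workspace_id = "your-workspace-id"
shared_key = "your-primary-key"  # Settings > Agents > Primary key
body = json.dumps([{"Model": "gpt-4o", "TotalTokens": 123}])

date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{date}\n/api/logs"
signature = base64.b64encode(
    hmac.new(
        base64.b64decode(shared_key), string_to_sign.encode("utf-8"), hashlib.sha256
    ).digest()
).decode()

resp = requests.post(
    f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"SharedKey {workspace_id}:{signature}",
        "Log-Type": "OpenWebUIMetrics",  # appears as the OpenWebUIMetrics_CL table
        "x-ms-date": date,
    },
    data=body,
)
print(resp.status_code)  # 200 on success
```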
13 | 
14 | 
15 | 
16 | ## PowerBI Dashboard
17 | - [PowerBI Dashboard](https://github.com/owndev/Open-WebUI-Functions/discussions/26) from [@zic04](https://github.com/zic04)
--------------------------------------------------------------------------------
/filters/google_search_tool.py:
--------------------------------------------------------------------------------
1 | """
2 | title: Google Search Tool Filter for https://github.com/owndev/Open-WebUI-Functions/blob/master/pipelines/google/google_gemini.py
3 | author: owndev, olivier-lacroix
4 | author_url: https://github.com/owndev/
5 | project_url: https://github.com/owndev/Open-WebUI-Functions
6 | funding_url: https://github.com/sponsors/owndev
7 | version: 1.0.0
8 | license: Apache License 2.0
9 | requirements:
10 | - https://github.com/owndev/Open-WebUI-Functions/blob/master/pipelines/google/google_gemini.py
11 | description: Replacing web_search tool with google search grounding
12 | """
13 | 
14 | import logging
15 | from open_webui.env import SRC_LOG_LEVELS
16 | 
17 | 
18 | class Filter:
19 |     def __init__(self):
20 |         self.log = logging.getLogger("google_ai.pipe")
21 |         self.log.setLevel(SRC_LOG_LEVELS.get("OPENAI", logging.INFO))
22 | 
23 |     def inlet(self, body: dict) -> dict:
24 |         features = body.get("features", {})
25 | 
26 |         # Ensure metadata structure exists and add new feature
27 |         metadata = body.setdefault("metadata", {})
28 |         metadata_features = metadata.setdefault("features", {})
29 | 
30 |         if features.pop("web_search", False):  # default avoids a KeyError when the flag is absent
31 |             self.log.debug("Replacing web_search tool with google search grounding")
32 |             metadata_features["google_search_tool"] = True
33 |         return body
--------------------------------------------------------------------------------
/filters/time_token_tracker.py:
--------------------------------------------------------------------------------
1 | """
2 | title: Time Token Tracker
3 | author: owndev
4 | author_url: https://github.com/owndev/
5 | project_url: https://github.com/owndev/Open-WebUI-Functions
6 | funding_url: https://github.com/sponsors/owndev
7 | version: 2.5.0
8 | license: Apache License 2.0
9 | description: A filter for tracking the response time and token usage of a request with Azure Log Analytics integration.
10 | features:
11 | - Tracks the response time of a request.
12 | - Tracks Token Usage.
13 | - Calculates the average tokens per message.
14 | - Calculates the tokens per second.
15 | - Sends metrics to Azure Log Analytics.
16 | """ 17 | 18 | import time 19 | import json 20 | import uuid 21 | import hmac 22 | import base64 23 | import hashlib 24 | import datetime 25 | import os 26 | import logging 27 | import aiohttp 28 | from typing import Optional, Any 29 | from open_webui.env import AIOHTTP_CLIENT_TIMEOUT, SRC_LOG_LEVELS 30 | from cryptography.fernet import Fernet, InvalidToken 31 | import tiktoken 32 | from pydantic import BaseModel, Field, GetCoreSchemaHandler 33 | from pydantic_core import core_schema 34 | 35 | # Global variables to track start time and token counts 36 | global start_time, request_token_count, response_token_count 37 | start_time = 0 38 | request_token_count = 0 39 | response_token_count = 0 40 | 41 | 42 | # Simplified encryption implementation with automatic handling 43 | class EncryptedStr(str): 44 | """A string type that automatically handles encryption/decryption""" 45 | 46 | @classmethod 47 | def _get_encryption_key(cls) -> Optional[bytes]: 48 | """ 49 | Generate encryption key from WEBUI_SECRET_KEY if available 50 | Returns None if no key is configured 51 | """ 52 | secret = os.getenv("WEBUI_SECRET_KEY") 53 | if not secret: 54 | return None 55 | 56 | hashed_key = hashlib.sha256(secret.encode()).digest() 57 | return base64.urlsafe_b64encode(hashed_key) 58 | 59 | @classmethod 60 | def encrypt(cls, value: str) -> str: 61 | """ 62 | Encrypt a string value if a key is available 63 | Returns the original value if no key is available 64 | """ 65 | if not value or value.startswith("encrypted:"): 66 | return value 67 | 68 | key = cls._get_encryption_key() 69 | if not key: # No encryption if no key 70 | return value 71 | 72 | f = Fernet(key) 73 | encrypted = f.encrypt(value.encode()) 74 | return f"encrypted:{encrypted.decode()}" 75 | 76 | @classmethod 77 | def decrypt(cls, value: str) -> str: 78 | """ 79 | Decrypt an encrypted string value if a key is available 80 | Returns the original value if no key is available or decryption fails 81 | """ 82 | if not value or not value.startswith("encrypted:"): 83 | return value 84 | 85 | key = cls._get_encryption_key() 86 | if not key: # No decryption if no key 87 | return value[len("encrypted:") :] # Return without prefix 88 | 89 | try: 90 | encrypted_part = value[len("encrypted:") :] 91 | f = Fernet(key) 92 | decrypted = f.decrypt(encrypted_part.encode()) 93 | return decrypted.decode() 94 | except (InvalidToken, Exception): 95 | return value 96 | 97 | # Pydantic integration 98 | @classmethod 99 | def __get_pydantic_core_schema__( 100 | cls, _source_type: Any, _handler: GetCoreSchemaHandler 101 | ) -> core_schema.CoreSchema: 102 | return core_schema.union_schema( 103 | [ 104 | core_schema.is_instance_schema(cls), 105 | core_schema.chain_schema( 106 | [ 107 | core_schema.str_schema(), 108 | core_schema.no_info_plain_validator_function( 109 | lambda value: cls(cls.encrypt(value) if value else value) 110 | ), 111 | ] 112 | ), 113 | ], 114 | serialization=core_schema.plain_serializer_function_ser_schema( 115 | lambda instance: str(instance) 116 | ), 117 | ) 118 | 119 | def get_decrypted(self) -> str: 120 | """Get the decrypted value""" 121 | return self.decrypt(self) 122 | 123 | 124 | # Helper functions 125 | async def cleanup_response( 126 | response: Optional[aiohttp.ClientResponse], 127 | session: Optional[aiohttp.ClientSession], 128 | ) -> None: 129 | """ 130 | Clean up the response and session objects. 
131 | 
132 |     Args:
133 |         response: The ClientResponse object to close
134 |         session: The ClientSession object to close
135 |     """
136 |     if response:
137 |         response.close()
138 |     if session:
139 |         await session.close()
140 | 
141 | 
142 | class Filter:
143 |     class Valves(BaseModel):
144 |         priority: int = Field(
145 |             default=0, description="Priority level for the filter operations."
146 |         )
147 |         CALCULATE_ALL_MESSAGES: bool = Field(
148 |             default=True,
149 |             description="If true, calculate tokens for all messages. If false, only use the last user and assistant messages.",
150 |         )
151 |         SHOW_AVERAGE_TOKENS: bool = Field(
152 |             default=True,
153 |             description="Show average tokens per message (only used if CALCULATE_ALL_MESSAGES is true).",
154 |         )
155 |         SHOW_RESPONSE_TIME: bool = Field(
156 |             default=True, description="Show the response time."
157 |         )
158 |         SHOW_TOKEN_COUNT: bool = Field(
159 |             default=True, description="Show the token count."
160 |         )
161 |         SHOW_TOKENS_PER_SECOND: bool = Field(
162 |             default=True, description="Show tokens per second for the response."
163 |         )
164 |         SEND_TO_LOG_ANALYTICS: bool = Field(
165 |             default=os.getenv("SEND_TO_LOG_ANALYTICS", "false").lower() == "true",
166 |             description="Send logs to Azure Log Analytics workspace",
167 |         )
168 |         LOG_ANALYTICS_WORKSPACE_ID: str = Field(
169 |             default=os.getenv("LOG_ANALYTICS_WORKSPACE_ID", ""),
170 |             description="Azure Log Analytics Workspace ID",
171 |         )
172 |         LOG_ANALYTICS_SHARED_KEY: EncryptedStr = Field(
173 |             default=os.getenv("LOG_ANALYTICS_SHARED_KEY", ""),
174 |             description="Azure Log Analytics Workspace Shared Key",
175 |         )
176 |         LOG_ANALYTICS_LOG_TYPE: str = Field(
177 |             default="OpenWebuiMetrics", description="Log Analytics log type name."
178 |         )
179 | 
180 |     def __init__(self):
181 |         self.name = "Time Token Tracker"
182 |         self.valves = self.Valves()
183 | 
184 |     def _build_signature(self, date, content_length, method, content_type, resource):
185 |         """Build the signature for Log Analytics authentication."""
186 |         x_headers = "x-ms-date:" + date
187 |         string_to_hash = (
188 |             method
189 |             + "\n"
190 |             + str(content_length)
191 |             + "\n"
192 |             + content_type
193 |             + "\n"
194 |             + x_headers
195 |             + "\n"
196 |             + resource
197 |         )
198 |         bytes_to_hash = string_to_hash.encode("utf-8")
199 |         decoded_key = base64.b64decode(
200 |             self.valves.LOG_ANALYTICS_SHARED_KEY.get_decrypted()
201 |         )
202 |         encoded_hash = base64.b64encode(
203 |             hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()
204 |         ).decode("utf-8")
205 |         authorization = (
206 |             f"SharedKey {self.valves.LOG_ANALYTICS_WORKSPACE_ID}:{encoded_hash}"
207 |         )
208 |         return authorization
209 | 
210 |     async def _send_to_log_analytics_async(self, data):
211 |         """Send data to Azure Log Analytics asynchronously using aiohttp."""
212 |         if (
213 |             not self.valves.SEND_TO_LOG_ANALYTICS
214 |             or not self.valves.LOG_ANALYTICS_WORKSPACE_ID
215 |             or not self.valves.LOG_ANALYTICS_SHARED_KEY
216 |         ):
217 |             return False
218 | 
219 |         log = logging.getLogger("time_token_tracker._send_to_log_analytics_async")
220 |         log.setLevel(SRC_LOG_LEVELS["OPENAI"])
221 | 
222 |         method = "POST"
223 |         content_type = "application/json"
224 |         resource = "/api/logs"
225 |         rfc1123date = datetime.datetime.now(datetime.timezone.utc).strftime(
226 |             "%a, %d %b %Y %H:%M:%S GMT"
227 |         )
228 |         content_length = len(json.dumps(data))
229 | 
230 |         signature = self._build_signature(
231 |             rfc1123date, content_length, method, content_type, resource
232 |         )
233 | 
234 |         uri =
f"https://{self.valves.LOG_ANALYTICS_WORKSPACE_ID}.ods.opinsights.azure.com{resource}?api-version=2016-04-01" 235 | 236 | headers = { 237 | "Content-Type": content_type, 238 | "Authorization": signature, 239 | "Log-Type": self.valves.LOG_ANALYTICS_LOG_TYPE, 240 | "x-ms-date": rfc1123date, 241 | "time-generated-field": "timestamp", 242 | } 243 | 244 | session = None 245 | response = None 246 | 247 | try: 248 | session = aiohttp.ClientSession( 249 | trust_env=True, 250 | timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT), 251 | ) 252 | 253 | response = await session.request( 254 | method="POST", 255 | url=uri, 256 | json=data, 257 | headers=headers, 258 | ) 259 | 260 | if response.status == 200: 261 | return True 262 | else: 263 | response_text = await response.text() 264 | log.error( 265 | f"Error sending to Log Analytics: {response.status} - {response_text}" 266 | ) 267 | return False 268 | 269 | except Exception as e: 270 | log.error( 271 | f"Exception when sending to Log Analytics asynchronously: {str(e)}" 272 | ) 273 | return False 274 | finally: 275 | await cleanup_response(response, session) 276 | 277 | async def inlet( 278 | self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None 279 | ) -> dict: 280 | global start_time, request_token_count 281 | start_time = time.time() 282 | 283 | model = body.get("model", "default-model") 284 | all_messages = body.get("messages", []) 285 | 286 | try: 287 | encoding = tiktoken.encoding_for_model(model) 288 | except KeyError: 289 | encoding = tiktoken.get_encoding("cl100k_base") 290 | 291 | # If CALCULATE_ALL_MESSAGES is true, use all "user" and "system" messages 292 | if self.valves.CALCULATE_ALL_MESSAGES: 293 | request_messages = [ 294 | m for m in all_messages if m.get("role") in ("user", "system") 295 | ] 296 | else: 297 | # If CALCULATE_ALL_MESSAGES is false and there are exactly two messages 298 | # (one user and one system), sum them both. 
299 | request_user_system = [ 300 | m for m in all_messages if m.get("role") in ("user", "system") 301 | ] 302 | if len(request_user_system) == 2: 303 | request_messages = request_user_system 304 | else: 305 | # Otherwise, take only the last "user" or "system" message if any 306 | reversed_messages = list(reversed(all_messages)) 307 | last_user_system = next( 308 | ( 309 | m 310 | for m in reversed_messages 311 | if m.get("role") in ("user", "system") 312 | ), 313 | None, 314 | ) 315 | request_messages = [last_user_system] if last_user_system else [] 316 | 317 | request_token_count = sum( 318 | len(encoding.encode(self._get_message_content(m))) 319 | for m in request_messages 320 | if m 321 | ) 322 | 323 | return body 324 | 325 | def _get_message_content(self, message): 326 | """Extract content from a message, handling different formats.""" 327 | content = message.get("content", "") 328 | 329 | # Handle None content 330 | if content is None: 331 | content = "" 332 | 333 | # Handle string content 334 | if isinstance(content, str): 335 | return content 336 | 337 | # Handle list content (e.g., for messages with multiple content parts) 338 | if isinstance(content, list): 339 | text_parts = [] 340 | for part in content: 341 | if isinstance(part, dict): 342 | if part.get("type") == "text": 343 | text_parts.append(part.get("text", "")) 344 | else: 345 | # Try to convert other types to string 346 | try: 347 | text_parts.append(str(part)) 348 | except: # noqa: E722 349 | pass 350 | return " ".join(text_parts) 351 | 352 | # Handle function_call in message 353 | if message.get("function_call"): 354 | try: 355 | func_call = message["function_call"] 356 | func_str = f"function: {func_call.get('name', '')}, arguments: {func_call.get('arguments', '')}" 357 | return func_str 358 | except: # noqa: E722 359 | return "" 360 | 361 | # If nothing else works, try converting to string or return empty 362 | try: 363 | return str(content) 364 | except: # noqa: E722 365 | return "" 366 | 367 | async def outlet( 368 | self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None 369 | ) -> dict: 370 | log = logging.getLogger("time_token_tracker.outlet") 371 | log.setLevel(SRC_LOG_LEVELS["OPENAI"]) 372 | 373 | global start_time, request_token_count, response_token_count 374 | end_time = time.time() 375 | response_time = end_time - start_time 376 | 377 | model = body.get("model", "default-model") 378 | all_messages = body.get("messages", []) 379 | 380 | try: 381 | encoding = tiktoken.encoding_for_model(model) 382 | except KeyError: 383 | encoding = tiktoken.get_encoding("cl100k_base") 384 | 385 | reversed_messages = list( 386 | reversed(all_messages) 387 | ) # If CALCULATE_ALL_MESSAGES is true, use all "assistant" messages 388 | if self.valves.CALCULATE_ALL_MESSAGES: 389 | assistant_messages = [ 390 | m for m in all_messages if m.get("role") == "assistant" 391 | ] 392 | else: 393 | # Take only the last "assistant" message if any 394 | last_assistant = next( 395 | (m for m in reversed_messages if m.get("role") == "assistant"), None 396 | ) 397 | assistant_messages = [last_assistant] if last_assistant else [] 398 | 399 | response_token_count = sum( 400 | len(encoding.encode(self._get_message_content(m))) 401 | for m in assistant_messages 402 | if m 403 | ) # Calculate tokens per second (only for the last assistant response) 404 | resp_tokens_per_sec = 0 405 | if self.valves.SHOW_TOKENS_PER_SECOND: 406 | last_assistant_msg = next( 407 | (m for m in reversed_messages if m.get("role") == "assistant"), None 408 
| ) 409 | last_assistant_tokens = ( 410 | len(encoding.encode(self._get_message_content(last_assistant_msg))) 411 | if last_assistant_msg 412 | else 0 413 | ) 414 | resp_tokens_per_sec = ( 415 | 0 if response_time == 0 else last_assistant_tokens / response_time 416 | ) 417 | 418 | # Calculate averages only if CALCULATE_ALL_MESSAGES is true 419 | avg_request_tokens = avg_response_tokens = 0 420 | if self.valves.SHOW_AVERAGE_TOKENS and self.valves.CALCULATE_ALL_MESSAGES: 421 | req_count = len( 422 | [m for m in all_messages if m.get("role") in ("user", "system")] 423 | ) 424 | resp_count = len([m for m in all_messages if m.get("role") == "assistant"]) 425 | avg_request_tokens = request_token_count / req_count if req_count else 0 426 | avg_response_tokens = response_token_count / resp_count if resp_count else 0 427 | 428 | # Shorter style, e.g.: "10.90s | Req: 175 (Ø 87.50) | Resp: 439 (Ø 219.50) | 40.18 T/s" 429 | description_parts = [] 430 | if self.valves.SHOW_RESPONSE_TIME: 431 | description_parts.append(f"{response_time:.2f}s") 432 | if self.valves.SHOW_TOKEN_COUNT: 433 | if self.valves.SHOW_AVERAGE_TOKENS and self.valves.CALCULATE_ALL_MESSAGES: 434 | # Add averages (Ø) into short output 435 | short_str = ( 436 | f"Req: {request_token_count} (Ø {avg_request_tokens:.2f}) | " 437 | f"Resp: {response_token_count} (Ø {avg_response_tokens:.2f})" 438 | ) 439 | else: 440 | short_str = f"Req: {request_token_count} | Resp: {response_token_count}" 441 | description_parts.append(short_str) 442 | if self.valves.SHOW_TOKENS_PER_SECOND: 443 | description_parts.append(f"{resp_tokens_per_sec:.2f} T/s") 444 | description = " | ".join(description_parts) 445 | 446 | # Send event with description 447 | await __event_emitter__( 448 | { 449 | "type": "status", 450 | "data": {"description": description, "done": True}, 451 | } 452 | ) 453 | 454 | # If Log Analytics integration is enabled, send the data 455 | if self.valves.SEND_TO_LOG_ANALYTICS: 456 | # Create chat and message IDs for tracking 457 | chat_id = body.get("chat_id", str(uuid.uuid4())) 458 | message_id = str(uuid.uuid4()) 459 | # User ID if available 460 | user_id = __user__.get("id", "unknown") if __user__ else "unknown" 461 | 462 | # Create log data for Log Analytics 463 | log_data = [ 464 | { 465 | "timestamp": datetime.datetime.utcnow().isoformat(), 466 | "chatId": chat_id, 467 | "messageId": message_id, 468 | "model": model, 469 | "userId": user_id, 470 | "responseTime": response_time, 471 | "requestTokens": request_token_count, 472 | "responseTokens": response_token_count, 473 | "tokensPerSecond": resp_tokens_per_sec, 474 | } 475 | ] 476 | 477 | # Add averages if calculated 478 | if self.valves.SHOW_AVERAGE_TOKENS and self.valves.CALCULATE_ALL_MESSAGES: 479 | log_data[0]["avgRequestTokens"] = avg_request_tokens 480 | log_data[0]["avgResponseTokens"] = avg_response_tokens 481 | 482 | # Send to Log Analytics asynchronously (non-blocking) 483 | try: 484 | result = await self._send_to_log_analytics_async(log_data) 485 | if result: 486 | log.info("Log Analytics data sent successfully") 487 | else: 488 | log.warning("Failed to send data to Log Analytics") 489 | except Exception as e: 490 | # Handle exceptions during sending to Log Analytics 491 | log.error(f"Error sending to Log Analytics: {e}") 492 | 493 | return body 494 | -------------------------------------------------------------------------------- /pipelines/azure/azure_ai_foundry.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Azure AI 
Foundry Pipeline 3 | author: owndev 4 | author_url: https://github.com/owndev/ 5 | project_url: https://github.com/owndev/Open-WebUI-Functions 6 | funding_url: https://github.com/sponsors/owndev 7 | version: 2.3.1 8 | license: Apache License 2.0 9 | description: A pipeline for interacting with Azure AI services, enabling seamless communication with various AI models via configurable headers and robust error handling. This includes support for Azure OpenAI models as well as other Azure AI models by dynamically managing headers and request configurations. 10 | features: 11 | - Supports dynamic model specification via headers. 12 | - Filters valid parameters to ensure clean requests. 13 | - Handles streaming and non-streaming responses. 14 | - Provides flexible timeout and error handling mechanisms. 15 | - Compatible with Azure OpenAI and other Azure AI models. 16 | - Predefined models for easy access. 17 | - Encrypted storage of sensitive API keys 18 | """ 19 | 20 | from typing import List, Union, Generator, Iterator, Optional, Dict, Any, AsyncIterator 21 | from fastapi.responses import StreamingResponse 22 | from pydantic import BaseModel, Field, GetCoreSchemaHandler 23 | from starlette.background import BackgroundTask 24 | from open_webui.env import AIOHTTP_CLIENT_TIMEOUT, SRC_LOG_LEVELS 25 | from cryptography.fernet import Fernet, InvalidToken 26 | import aiohttp 27 | import json 28 | import os 29 | import logging 30 | import base64 31 | import hashlib 32 | from pydantic_core import core_schema 33 | 34 | 35 | # Simplified encryption implementation with automatic handling 36 | class EncryptedStr(str): 37 | """A string type that automatically handles encryption/decryption""" 38 | 39 | @classmethod 40 | def _get_encryption_key(cls) -> Optional[bytes]: 41 | """ 42 | Generate encryption key from WEBUI_SECRET_KEY if available 43 | Returns None if no key is configured 44 | """ 45 | secret = os.getenv("WEBUI_SECRET_KEY") 46 | if not secret: 47 | return None 48 | 49 | hashed_key = hashlib.sha256(secret.encode()).digest() 50 | return base64.urlsafe_b64encode(hashed_key) 51 | 52 | @classmethod 53 | def encrypt(cls, value: str) -> str: 54 | """ 55 | Encrypt a string value if a key is available 56 | Returns the original value if no key is available 57 | """ 58 | if not value or value.startswith("encrypted:"): 59 | return value 60 | 61 | key = cls._get_encryption_key() 62 | if not key: # No encryption if no key 63 | return value 64 | 65 | f = Fernet(key) 66 | encrypted = f.encrypt(value.encode()) 67 | return f"encrypted:{encrypted.decode()}" 68 | 69 | @classmethod 70 | def decrypt(cls, value: str) -> str: 71 | """ 72 | Decrypt an encrypted string value if a key is available 73 | Returns the original value if no key is available or decryption fails 74 | """ 75 | if not value or not value.startswith("encrypted:"): 76 | return value 77 | 78 | key = cls._get_encryption_key() 79 | if not key: # No decryption if no key 80 | return value[len("encrypted:") :] # Return without prefix 81 | 82 | try: 83 | encrypted_part = value[len("encrypted:") :] 84 | f = Fernet(key) 85 | decrypted = f.decrypt(encrypted_part.encode()) 86 | return decrypted.decode() 87 | except (InvalidToken, Exception): 88 | return value 89 | 90 | # Pydantic integration 91 | @classmethod 92 | def __get_pydantic_core_schema__( 93 | cls, _source_type: Any, _handler: GetCoreSchemaHandler 94 | ) -> core_schema.CoreSchema: 95 | return core_schema.union_schema( 96 | [ 97 | core_schema.is_instance_schema(cls), 98 | core_schema.chain_schema( 99 | [ 100 
|                         core_schema.str_schema(),
101 |                         core_schema.no_info_plain_validator_function(
102 |                             lambda value: cls(cls.encrypt(value) if value else value)
103 |                         ),
104 |                     ]
105 |                 ),
106 |             ],
107 |             serialization=core_schema.plain_serializer_function_ser_schema(
108 |                 lambda instance: str(instance)
109 |             ),
110 |         )
111 | 
112 |     def get_decrypted(self) -> str:
113 |         """Get the decrypted value"""
114 |         return self.decrypt(self)
115 | 
116 | 
117 | # Helper functions
118 | async def cleanup_response(
119 |     response: Optional[aiohttp.ClientResponse],
120 |     session: Optional[aiohttp.ClientSession],
121 | ) -> None:
122 |     """
123 |     Clean up the response and session objects.
124 | 
125 |     Args:
126 |         response: The ClientResponse object to close
127 |         session: The ClientSession object to close
128 |     """
129 |     if response:
130 |         response.close()
131 |     if session:
132 |         await session.close()
133 | 
134 | 
135 | class Pipe:
136 |     # Environment variables for API key, endpoint, and optional model
137 |     class Valves(BaseModel):
138 |         # API key for Azure AI
139 |         AZURE_AI_API_KEY: EncryptedStr = Field(
140 |             default=os.getenv("AZURE_AI_API_KEY", "API_KEY"),
141 |             description="API key for Azure AI",
142 |         )
143 | 
144 |         # Endpoint for Azure AI (e.g. "https:///chat/completions?api-version=2024-05-01-preview" or "https:///openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview")
145 |         AZURE_AI_ENDPOINT: str = Field(
146 |             default=os.getenv(
147 |                 "AZURE_AI_ENDPOINT",
148 |                 "https:///chat/completions?api-version=2024-05-01-preview",
149 |             ),
150 |             description="Endpoint for Azure AI",
151 |         )
152 | 
153 |         # Optional model name, only necessary if not Azure OpenAI or if model name not in URL (e.g. "https:///openai/deployments//chat/completions")
154 |         # Multiple models can be specified as a semicolon-separated list (e.g. "gpt-4o;gpt-4o-mini")
155 |         # or a comma-separated list (e.g. "gpt-4o,gpt-4o-mini").
156 |         AZURE_AI_MODEL: str = Field(
157 |             default=os.getenv("AZURE_AI_MODEL", ""),
158 |             description="Optional model names for Azure AI (e.g. gpt-4o, gpt-4o-mini)",
159 |         )
160 | 
161 |         # Switch for sending model name in request body
162 |         AZURE_AI_MODEL_IN_BODY: bool = Field(
163 |             default=os.getenv("AZURE_AI_MODEL_IN_BODY", False),
164 |             description="If True, include the model name in the request body instead of as a header.",
165 |         )
166 | 
167 |         # Flag to indicate if predefined Azure AI models should be used
168 |         USE_PREDEFINED_AZURE_AI_MODELS: bool = Field(
169 |             default=os.getenv("USE_PREDEFINED_AZURE_AI_MODELS", False),
170 |             description="Flag to indicate if predefined Azure AI models should be used.",
171 |         )
172 | 
173 |         # If True, use Authorization header with Bearer token instead of api-key header.
174 |         USE_AUTHORIZATION_HEADER: bool = Field(
175 |             default=os.getenv("AZURE_AI_USE_AUTHORIZATION_HEADER", "false").lower() == "true",
176 |             description="Set to True to use Authorization header with Bearer token instead of api-key header.",
177 |         )
178 | 
179 |     def __init__(self):
180 |         self.valves = self.Valves()
181 |         self.name: str = "Azure AI"
182 | 
183 |     def validate_environment(self) -> None:
184 |         """
185 |         Validates that required environment variables are set.
186 | 
187 |         Raises:
188 |             ValueError: If required environment variables are not set.
189 | """ 190 | # Access the decrypted API key 191 | api_key = self.valves.AZURE_AI_API_KEY.get_decrypted() 192 | if not api_key: 193 | raise ValueError("AZURE_AI_API_KEY is not set!") 194 | if not self.valves.AZURE_AI_ENDPOINT: 195 | raise ValueError("AZURE_AI_ENDPOINT is not set!") 196 | 197 | def get_headers(self, model_name: str = None) -> Dict[str, str]: 198 | """ 199 | Constructs the headers for the API request, including the model name if defined. 200 | 201 | Args: 202 | model_name: Optional model name to use instead of the default one 203 | 204 | Returns: 205 | Dictionary containing the required headers for the API request. 206 | """ 207 | # Access the decrypted API key 208 | api_key = self.valves.AZURE_AI_API_KEY.get_decrypted() 209 | if self.valves.USE_AUTHORIZATION_HEADER: 210 | headers = { 211 | "Authorization": f"Bearer {api_key}", 212 | "Content-Type": "application/json", 213 | } 214 | else: 215 | headers = {"api-key": api_key, "Content-Type": "application/json"} 216 | 217 | # If we have a model name and it shouldn't be in the body, add it to headers 218 | if not self.valves.AZURE_AI_MODEL_IN_BODY: 219 | # If specific model name provided, use it 220 | if model_name: 221 | headers["x-ms-model-mesh-model-name"] = model_name 222 | # Otherwise, if AZURE_AI_MODEL has a single value, use that 223 | elif ( 224 | self.valves.AZURE_AI_MODEL 225 | and ";" not in self.valves.AZURE_AI_MODEL 226 | and "," not in self.valves.AZURE_AI_MODEL 227 | and " " not in self.valves.AZURE_AI_MODEL 228 | ): 229 | headers["x-ms-model-mesh-model-name"] = self.valves.AZURE_AI_MODEL 230 | return headers 231 | 232 | def validate_body(self, body: Dict[str, Any]) -> None: 233 | """ 234 | Validates the request body to ensure required fields are present. 235 | 236 | Args: 237 | body: The request body to validate 238 | 239 | Raises: 240 | ValueError: If required fields are missing or invalid. 241 | """ 242 | if "messages" not in body or not isinstance(body["messages"], list): 243 | raise ValueError("The 'messages' field is required and must be a list.") 244 | 245 | def parse_models(self, models_str: str) -> List[str]: 246 | """ 247 | Parses a string of models separated by commas, semicolons, or spaces. 248 | 249 | Args: 250 | models_str: String containing model names separated by commas, semicolons, or spaces 251 | 252 | Returns: 253 | List of individual model names 254 | """ 255 | if not models_str: 256 | return [] 257 | 258 | # Replace semicolons and commas with spaces, then split by spaces and filter empty strings 259 | models = [] 260 | for model in models_str.replace(";", " ").replace(",", " ").split(): 261 | if model.strip(): 262 | models.append(model.strip()) 263 | 264 | return models 265 | 266 | def get_azure_models(self) -> List[Dict[str, str]]: 267 | """ 268 | Returns a list of predefined Azure AI models. 269 | 270 | Returns: 271 | List of dictionaries containing model id and name. 
272 | """ 273 | return [ 274 | {"id": "AI21-Jamba-1.5-Large", "name": "AI21 Jamba 1.5 Large"}, 275 | {"id": "AI21-Jamba-1.5-Mini", "name": "AI21 Jamba 1.5 Mini"}, 276 | {"id": "Codestral-2501", "name": "Codestral 25.01"}, 277 | {"id": "Cohere-command-r", "name": "Cohere Command R"}, 278 | {"id": "Cohere-command-r-08-2024", "name": "Cohere Command R 08-2024"}, 279 | {"id": "Cohere-command-r-plus", "name": "Cohere Command R+"}, 280 | { 281 | "id": "Cohere-command-r-plus-08-2024", 282 | "name": "Cohere Command R+ 08-2024", 283 | }, 284 | {"id": "cohere-command-a", "name": "Cohere Command A"}, 285 | {"id": "DeepSeek-R1", "name": "DeepSeek-R1"}, 286 | {"id": "DeepSeek-V3", "name": "DeepSeek-V3"}, 287 | {"id": "DeepSeek-V3-0324", "name": "DeepSeek-V3-0324"}, 288 | {"id": "jais-30b-chat", "name": "JAIS 30b Chat"}, 289 | { 290 | "id": "Llama-3.2-11B-Vision-Instruct", 291 | "name": "Llama-3.2-11B-Vision-Instruct", 292 | }, 293 | { 294 | "id": "Llama-3.2-90B-Vision-Instruct", 295 | "name": "Llama-3.2-90B-Vision-Instruct", 296 | }, 297 | {"id": "Llama-3.3-70B-Instruct", "name": "Llama-3.3-70B-Instruct"}, 298 | {"id": "Meta-Llama-3-70B-Instruct", "name": "Meta-Llama-3-70B-Instruct"}, 299 | {"id": "Meta-Llama-3-8B-Instruct", "name": "Meta-Llama-3-8B-Instruct"}, 300 | { 301 | "id": "Meta-Llama-3.1-405B-Instruct", 302 | "name": "Meta-Llama-3.1-405B-Instruct", 303 | }, 304 | { 305 | "id": "Meta-Llama-3.1-70B-Instruct", 306 | "name": "Meta-Llama-3.1-70B-Instruct", 307 | }, 308 | {"id": "Meta-Llama-3.1-8B-Instruct", "name": "Meta-Llama-3.1-8B-Instruct"}, 309 | {"id": "Ministral-3B", "name": "Ministral 3B"}, 310 | {"id": "Mistral-large", "name": "Mistral Large"}, 311 | {"id": "Mistral-large-2407", "name": "Mistral Large (2407)"}, 312 | {"id": "Mistral-Large-2411", "name": "Mistral Large 24.11"}, 313 | {"id": "Mistral-Nemo", "name": "Mistral Nemo"}, 314 | {"id": "Mistral-small", "name": "Mistral Small"}, 315 | {"id": "mistral-small-2503", "name": "Mistral Small 3.1"}, 316 | {"id": "mistral-medium-2505", "name": "Mistral Medium 3 (25.05)"}, 317 | {"id": "grok-3", "name": "Grok 3"}, 318 | {"id": "gpt-4o", "name": "OpenAI GPT-4o"}, 319 | {"id": "gpt-4o-mini", "name": "OpenAI GPT-4o mini"}, 320 | {"id": "gpt-4.1", "name": "OpenAI GPT-4.1"}, 321 | {"id": "gpt-4.1-mini", "name": "OpenAI GPT-4.1 Mini"}, 322 | {"id": "gpt-4.1-nano", "name": "OpenAI GPT-4.1 Nano"}, 323 | {"id": "o1", "name": "OpenAI o1"}, 324 | {"id": "o1-mini", "name": "OpenAI o1-mini"}, 325 | {"id": "o1-preview", "name": "OpenAI o1-preview"}, 326 | {"id": "o3", "name": "OpenAI o3"}, 327 | {"id": "o3-mini", "name": "OpenAI o3-mini"}, 328 | {"id": "o4-mini", "name": "OpenAI o4-mini"}, 329 | { 330 | "id": "Phi-3-medium-128k-instruct", 331 | "name": "Phi-3-medium instruct (128k)", 332 | }, 333 | {"id": "Phi-3-medium-4k-instruct", "name": "Phi-3-medium instruct (4k)"}, 334 | {"id": "Phi-3-mini-128k-instruct", "name": "Phi-3-mini instruct (128k)"}, 335 | {"id": "Phi-3-mini-4k-instruct", "name": "Phi-3-mini instruct (4k)"}, 336 | {"id": "Phi-3-small-128k-instruct", "name": "Phi-3-small instruct (128k)"}, 337 | {"id": "Phi-3-small-8k-instruct", "name": "Phi-3-small instruct (8k)"}, 338 | {"id": "Phi-3.5-mini-instruct", "name": "Phi-3.5-mini instruct (128k)"}, 339 | {"id": "Phi-3.5-MoE-instruct", "name": "Phi-3.5-MoE instruct (128k)"}, 340 | {"id": "Phi-3.5-vision-instruct", "name": "Phi-3.5-vision instruct (128k)"}, 341 | {"id": "Phi-4", "name": "Phi-4"}, 342 | {"id": "Phi-4-mini-instruct", "name": "Phi-4 mini instruct"}, 343 | {"id": 
"Phi-4-multimodal-instruct", "name": "Phi-4 multimodal instruct"}, 344 | {"id": "Phi-4-reasoning", "name": "Phi-4 Reasoning"}, 345 | {"id": "Phi-4-mini-reasoning", "name": "Phi-4 Mini Reasoning"}, 346 | {"id": "MAI-DS-R1", "name": "Microsoft Deepseek R1"}, 347 | {"id": "model-router", "name": "Model Router"}, 348 | ] 349 | 350 | def pipes(self) -> List[Dict[str, str]]: 351 | """ 352 | Returns a list of available pipes based on configuration. 353 | 354 | Returns: 355 | List of dictionaries containing pipe id and name. 356 | """ 357 | self.validate_environment() 358 | 359 | # If custom models are provided, parse them and return as pipes 360 | if self.valves.AZURE_AI_MODEL: 361 | self.name = "Azure AI: " 362 | models = self.parse_models(self.valves.AZURE_AI_MODEL) 363 | if models: 364 | return [{"id": model, "name": model} for model in models] 365 | else: 366 | # Fallback for backward compatibility 367 | return [ 368 | { 369 | "id": self.valves.AZURE_AI_MODEL, 370 | "name": self.valves.AZURE_AI_MODEL, 371 | } 372 | ] 373 | 374 | # If custom model is not provided but predefined models are enabled, return those. 375 | if self.valves.USE_PREDEFINED_AZURE_AI_MODELS: 376 | self.name = "Azure AI: " 377 | return self.get_azure_models() 378 | 379 | # Otherwise, use a default name. 380 | return [{"id": "Azure AI", "name": "Azure AI"}] 381 | 382 | async def stream_processor( 383 | self, content: aiohttp.StreamReader, __event_emitter__=None 384 | ) -> AsyncIterator[bytes]: 385 | """ 386 | Process streaming content and properly handle completion status updates. 387 | 388 | Args: 389 | content: The streaming content from the response 390 | __event_emitter__: Optional event emitter for status updates 391 | 392 | Yields: 393 | Bytes from the streaming content 394 | """ 395 | try: 396 | async for chunk in content: 397 | yield chunk 398 | 399 | # Send completion status update when streaming is done 400 | if __event_emitter__: 401 | await __event_emitter__( 402 | { 403 | "type": "status", 404 | "data": {"description": "Streaming completed", "done": True}, 405 | } 406 | ) 407 | except Exception as e: 408 | log = logging.getLogger("azure_ai.stream_processor") 409 | log.error(f"Error processing stream: {e}") 410 | 411 | # Send error status update 412 | if __event_emitter__: 413 | await __event_emitter__( 414 | { 415 | "type": "status", 416 | "data": {"description": f"Error: {str(e)}", "done": True}, 417 | } 418 | ) 419 | 420 | async def pipe( 421 | self, body: Dict[str, Any], __event_emitter__=None 422 | ) -> Union[str, Generator, Iterator, Dict[str, Any], StreamingResponse]: 423 | """ 424 | Main method for sending requests to the Azure AI endpoint. 425 | The model name is passed as a header if defined. 426 | 427 | Args: 428 | body: The request body containing messages and other parameters 429 | __event_emitter__: Optional event emitter function for status updates 430 | 431 | Returns: 432 | Response from Azure AI API, which could be a string, dictionary or streaming response 433 | """ 434 | log = logging.getLogger("azure_ai.pipe") 435 | log.setLevel(SRC_LOG_LEVELS["OPENAI"]) 436 | 437 | # Validate the request body 438 | self.validate_body(body) 439 | selected_model = None 440 | 441 | if "model" in body and body["model"]: 442 | selected_model = body["model"] 443 | # Safer model extraction with split 444 | selected_model = ( 445 | selected_model.split(".", 1)[1] 446 | if "." 
in selected_model 447 | else selected_model 448 | ) 449 | 450 | # Construct headers with selected model 451 | headers = self.get_headers(selected_model) 452 | 453 | # Filter allowed parameters 454 | allowed_params = { 455 | "model", 456 | "messages", 457 | "frequency_penalty", 458 | "max_tokens", 459 | "presence_penalty", 460 | "reasoning_effort", 461 | "response_format", 462 | "seed", 463 | "stop", 464 | "stream", 465 | "temperature", 466 | "tool_choice", 467 | "tools", 468 | "top_p", 469 | } 470 | filtered_body = {k: v for k, v in body.items() if k in allowed_params} 471 | 472 | if self.valves.AZURE_AI_MODEL and self.valves.AZURE_AI_MODEL_IN_BODY: 473 | # If a model was explicitly selected in the request, use that 474 | if selected_model: 475 | filtered_body["model"] = selected_model 476 | else: 477 | # Otherwise, if AZURE_AI_MODEL contains multiple models, only use the first one to avoid errors 478 | models = self.parse_models(self.valves.AZURE_AI_MODEL) 479 | if models and len(models) > 0: 480 | filtered_body["model"] = models[0] 481 | else: 482 | # Fallback to the original value 483 | filtered_body["model"] = self.valves.AZURE_AI_MODEL 484 | elif "model" in filtered_body and filtered_body["model"]: 485 | # Safer model extraction with split 486 | filtered_body["model"] = ( 487 | filtered_body["model"].split(".", 1)[1] 488 | if "." in filtered_body["model"] 489 | else filtered_body["model"] 490 | ) 491 | 492 | # Convert the modified body back to JSON 493 | payload = json.dumps(filtered_body) 494 | 495 | # Send status update via event emitter if available 496 | if __event_emitter__: 497 | await __event_emitter__( 498 | { 499 | "type": "status", 500 | "data": { 501 | "description": "Sending request to Azure AI...", 502 | "done": False, 503 | }, 504 | } 505 | ) 506 | 507 | request = None 508 | session = None 509 | streaming = False 510 | response = None 511 | 512 | try: 513 | session = aiohttp.ClientSession( 514 | trust_env=True, 515 | timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT), 516 | ) 517 | 518 | request = await session.request( 519 | method="POST", 520 | url=self.valves.AZURE_AI_ENDPOINT, 521 | data=payload, 522 | headers=headers, 523 | ) 524 | 525 | # Check if response is SSE 526 | if "text/event-stream" in request.headers.get("Content-Type", ""): 527 | streaming = True 528 | 529 | # Send status update for successful streaming connection 530 | if __event_emitter__: 531 | await __event_emitter__( 532 | { 533 | "type": "status", 534 | "data": { 535 | "description": "Streaming response from Azure AI...", 536 | "done": False, 537 | }, 538 | } 539 | ) 540 | 541 | return StreamingResponse( 542 | self.stream_processor(request.content, __event_emitter__), 543 | status_code=request.status, 544 | headers=dict(request.headers), 545 | background=BackgroundTask( 546 | cleanup_response, response=request, session=session 547 | ), 548 | ) 549 | else: 550 | try: 551 | response = await request.json() 552 | except Exception as e: 553 | log.error(f"Error parsing JSON response: {e}") 554 | response = await request.text() 555 | 556 | request.raise_for_status() 557 | 558 | # Send completion status update 559 | if __event_emitter__: 560 | await __event_emitter__( 561 | { 562 | "type": "status", 563 | "data": {"description": "Request completed", "done": True}, 564 | } 565 | ) 566 | 567 | return response 568 | 569 | except Exception as e: 570 | log.exception(f"Error in Azure AI request: {e}") 571 | 572 | detail = f"Exception: {str(e)}" 573 | if isinstance(response, dict): 574 | if "error" in 
response: 575 | detail = f"{response['error']['message'] if 'message' in response['error'] else response['error']}" 576 | elif isinstance(response, str): 577 | detail = response 578 | 579 | # Send error status update 580 | if __event_emitter__: 581 | await __event_emitter__( 582 | { 583 | "type": "status", 584 | "data": {"description": f"Error: {detail}", "done": True}, 585 | } 586 | ) 587 | 588 | return f"Error: {detail}" 589 | finally: 590 | if not streaming and session: 591 | if request: 592 | request.close() 593 | await session.close() 594 | -------------------------------------------------------------------------------- /pipelines/google/google_gemini.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: Google Gemini Pipeline 3 | author: owndev, olivier-lacroix 4 | author_url: https://github.com/owndev/ 5 | project_url: https://github.com/owndev/Open-WebUI-Functions 6 | funding_url: https://github.com/sponsors/owndev 7 | version: 1.3.1 8 | license: Apache License 2.0 9 | description: A manifold pipeline for interacting with Google Gemini models, including dynamic model specification, streaming responses, and flexible error handling. 10 | features: 11 | - Asynchronous API calls for better performance 12 | - Model caching to reduce API calls 13 | - Dynamic model specification with automatic prefix stripping 14 | - Streaming response handling with safety checks 15 | - Support for multimodal input (text and images) 16 | - Flexible error handling and logging 17 | - Integration with Google Generative AI or Vertex AI API for content generation 18 | - Support for various generation parameters (temperature, max tokens, etc.) 19 | - Customizable safety settings based on environment variables 20 | - Encrypted storage of sensitive API keys 21 | - Grounding with Google search 22 | - Native tool calling support 23 | """ 24 | 25 | import os 26 | import inspect 27 | from functools import update_wrapper 28 | import re 29 | import time 30 | import asyncio 31 | import base64 32 | import hashlib 33 | import logging 34 | from google import genai 35 | from google.genai import types 36 | from google.genai.errors import ClientError, ServerError, APIError 37 | from typing import List, Union, Optional, Dict, Any, Tuple, AsyncIterator, Callable 38 | from pydantic_core import core_schema 39 | from pydantic import BaseModel, Field, GetCoreSchemaHandler 40 | from cryptography.fernet import Fernet, InvalidToken 41 | from open_webui.env import SRC_LOG_LEVELS 42 | 43 | 44 | # Simplified encryption implementation with automatic handling 45 | class EncryptedStr(str): 46 | """A string type that automatically handles encryption/decryption""" 47 | 48 | @classmethod 49 | def _get_encryption_key(cls) -> Optional[bytes]: 50 | """ 51 | Generate encryption key from WEBUI_SECRET_KEY if available 52 | Returns None if no key is configured 53 | """ 54 | secret = os.getenv("WEBUI_SECRET_KEY") 55 | if not secret: 56 | return None 57 | 58 | hashed_key = hashlib.sha256(secret.encode()).digest() 59 | return base64.urlsafe_b64encode(hashed_key) 60 | 61 | @classmethod 62 | def encrypt(cls, value: str) -> str: 63 | """ 64 | Encrypt a string value if a key is available 65 | Returns the original value if no key is available 66 | """ 67 | if not value or value.startswith("encrypted:"): 68 | return value 69 | 70 | key = cls._get_encryption_key() 71 | if not key: # No encryption if no key 72 | return value 73 | 74 | f = Fernet(key) 75 | encrypted = f.encrypt(value.encode()) 76 | return 
f"encrypted:{encrypted.decode()}" 77 | 78 | @classmethod 79 | def decrypt(cls, value: str) -> str: 80 | """ 81 | Decrypt an encrypted string value if a key is available 82 | Returns the original value if no key is available or decryption fails 83 | """ 84 | if not value or not value.startswith("encrypted:"): 85 | return value 86 | 87 | key = cls._get_encryption_key() 88 | if not key: # No decryption if no key 89 | return value[len("encrypted:") :] # Return without prefix 90 | 91 | try: 92 | encrypted_part = value[len("encrypted:") :] 93 | f = Fernet(key) 94 | decrypted = f.decrypt(encrypted_part.encode()) 95 | return decrypted.decode() 96 | except (InvalidToken, Exception): 97 | return value 98 | 99 | # Pydantic integration 100 | @classmethod 101 | def __get_pydantic_core_schema__( 102 | cls, _source_type: Any, _handler: GetCoreSchemaHandler 103 | ) -> core_schema.CoreSchema: 104 | return core_schema.union_schema( 105 | [ 106 | core_schema.is_instance_schema(cls), 107 | core_schema.chain_schema( 108 | [ 109 | core_schema.str_schema(), 110 | core_schema.no_info_plain_validator_function( 111 | lambda value: cls(cls.encrypt(value) if value else value) 112 | ), 113 | ] 114 | ), 115 | ], 116 | serialization=core_schema.plain_serializer_function_ser_schema( 117 | lambda instance: str(instance) 118 | ), 119 | ) 120 | 121 | def get_decrypted(self) -> str: 122 | """Get the decrypted value""" 123 | return self.decrypt(self) 124 | 125 | 126 | class Pipe: 127 | """ 128 | Pipeline for interacting with Google Gemini models. 129 | """ 130 | 131 | # Configuration valves for the pipeline 132 | class Valves(BaseModel): 133 | GOOGLE_API_KEY: EncryptedStr = Field( 134 | default=os.getenv("GOOGLE_API_KEY", ""), 135 | description="API key for Google Generative AI (used if USE_VERTEX_AI is false).", 136 | ) 137 | USE_VERTEX_AI: bool = Field( 138 | default=os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "false").lower() == "true", 139 | description="Whether to use Google Cloud Vertex AI instead of the Google Generative AI API.", 140 | ) 141 | VERTEX_PROJECT: str | None = Field( 142 | default=os.getenv("GOOGLE_CLOUD_PROJECT"), 143 | description="The Google Cloud project ID to use with Vertex AI.", 144 | ) 145 | VERTEX_LOCATION: str = Field( 146 | default=os.getenv("GOOGLE_CLOUD_LOCATION", "global"), 147 | description="The Google Cloud region to use with Vertex AI.", 148 | ) 149 | USE_PERMISSIVE_SAFETY: bool = Field( 150 | default=os.getenv("USE_PERMISSIVE_SAFETY", "false").lower() == "true", 151 | description="Use permissive safety settings for content generation.", 152 | ) 153 | MODEL_CACHE_TTL: int = Field( 154 | default=int(os.getenv("GOOGLE_MODEL_CACHE_TTL", "600")), 155 | description="Time in seconds to cache the model list before refreshing", 156 | ) 157 | RETRY_COUNT: int = Field( 158 | default=int(os.getenv("GOOGLE_RETRY_COUNT", "2")), 159 | description="Number of times to retry API calls on temporary failures", 160 | ) 161 | 162 | def __init__(self): 163 | """Initializes the Pipe instance and configures the genai library.""" 164 | self.valves = self.Valves() 165 | self.name: str = "Google Gemini: " 166 | 167 | # Setup logging 168 | self.log = logging.getLogger("google_ai.pipe") 169 | self.log.setLevel(SRC_LOG_LEVELS.get("OPENAI", logging.INFO)) 170 | 171 | # Model cache 172 | self._model_cache: Optional[List[Dict[str, str]]] = None 173 | self._model_cache_time: float = 0 174 | 175 | def _get_client(self) -> genai.Client: 176 | """ 177 | Validates API credentials and returns a genai.Client instance. 
178 | """ 179 | self._validate_api_key() 180 | 181 | if self.valves.USE_VERTEX_AI: 182 | self.log.debug( 183 | f"Initializing Vertex AI client (Project: {self.valves.VERTEX_PROJECT}, Location: {self.valves.VERTEX_LOCATION})" 184 | ) 185 | return genai.Client( 186 | vertexai=True, 187 | project=self.valves.VERTEX_PROJECT, 188 | location=self.valves.VERTEX_LOCATION, 189 | ) 190 | else: 191 | self.log.debug("Initializing Google Generative AI client with API Key") 192 | return genai.Client(api_key=self.valves.GOOGLE_API_KEY.get_decrypted()) 193 | 194 | def _validate_api_key(self) -> None: 195 | """ 196 | Validates that the necessary Google API credentials are set. 197 | 198 | Raises: 199 | ValueError: If the required credentials are not set. 200 | """ 201 | if self.valves.USE_VERTEX_AI: 202 | if not self.valves.VERTEX_PROJECT: 203 | self.log.error("USE_VERTEX_AI is true, but VERTEX_PROJECT is not set.") 204 | raise ValueError( 205 | "VERTEX_PROJECT is not set. Please provide the Google Cloud project ID." 206 | ) 207 | # For Vertex AI, location has a default, so project is the main thing to check. 208 | # Actual authentication will be handled by ADC or environment. 209 | self.log.debug( 210 | "Using Vertex AI. Ensure ADC or service account is configured." 211 | ) 212 | else: 213 | if not self.valves.GOOGLE_API_KEY: 214 | self.log.error("GOOGLE_API_KEY is not set (and not using Vertex AI).") 215 | raise ValueError( 216 | "GOOGLE_API_KEY is not set. Please provide the API key in the environment variables or valves." 217 | ) 218 | self.log.debug("Using Google Generative AI API with API Key.") 219 | 220 | def strip_prefix(self, model_name: str) -> str: 221 | """ 222 | Extract the model identifier using regex, handling various naming conventions. 223 | e.g., "google_gemini_pipeline.gemini-2.5-flash-preview-04-17" -> "gemini-2.5-flash-preview-04-17" 224 | e.g., "models/gemini-1.5-flash-001" -> "gemini-1.5-flash-001" 225 | e.g., "publishers/google/models/gemini-1.5-pro" -> "gemini-1.5-pro" 226 | """ 227 | # Use regex to remove everything up to and including the last '/' or the first '.' 228 | stripped = re.sub(r"^(?:.*/|[^.]*\.)", "", model_name) 229 | return stripped 230 | 231 | def get_google_models(self, force_refresh: bool = False) -> List[Dict[str, str]]: 232 | """ 233 | Retrieve available Google models suitable for content generation. 234 | Uses caching to reduce API calls. 235 | 236 | Args: 237 | force_refresh: Whether to force refreshing the model cache 238 | 239 | Returns: 240 | List of dictionaries containing model id and name. 
241 | """ 242 | # Check cache first 243 | current_time = time.time() 244 | if ( 245 | not force_refresh 246 | and self._model_cache is not None 247 | and (current_time - self._model_cache_time) < self.valves.MODEL_CACHE_TTL 248 | ): 249 | self.log.debug("Using cached model list") 250 | return self._model_cache 251 | 252 | try: 253 | client = self._get_client() 254 | self.log.debug("Fetching models from Google API") 255 | models = client.models.list() 256 | available_models = [] 257 | for model in models: 258 | actions = model.supported_actions 259 | if actions is None or "generateContent" in actions: 260 | available_models.append( 261 | { 262 | "id": self.strip_prefix(model.name), 263 | "name": model.display_name or self.strip_prefix(model.name), 264 | } 265 | ) 266 | 267 | model_map = {model["id"]: model for model in available_models} 268 | 269 | # Filter map to only include models starting with 'gemini-' 270 | filtered_models = { 271 | k: v for k, v in model_map.items() if k.startswith("gemini-") 272 | } 273 | 274 | # Update cache 275 | self._model_cache = list(filtered_models.values()) 276 | self._model_cache_time = current_time 277 | self.log.debug(f"Found {len(self._model_cache)} Gemini models") 278 | return self._model_cache 279 | 280 | except Exception as e: 281 | self.log.exception(f"Could not fetch models from Google: {str(e)}") 282 | # Return a specific error entry for the UI 283 | return [{"id": "error", "name": f"Could not fetch models: {str(e)}"}] 284 | 285 | def pipes(self) -> List[Dict[str, str]]: 286 | """ 287 | Returns a list of available Google Gemini models for the UI. 288 | 289 | Returns: 290 | List of dictionaries containing model id and name. 291 | """ 292 | try: 293 | self.name = "Google Gemini: " 294 | return self.get_google_models() 295 | except ValueError as e: 296 | # Handle the case where API key is missing during pipe listing 297 | self.log.error(f"Error during pipes listing (validation): {e}") 298 | return [{"id": "error", "name": str(e)}] 299 | except Exception as e: 300 | # Handle other potential errors during model fetching 301 | self.log.exception( 302 | f"An unexpected error occurred during pipes listing: {str(e)}" 303 | ) 304 | return [{"id": "error", "name": f"An unexpected error occurred: {str(e)}"}] 305 | 306 | def _prepare_model_id(self, model_id: str) -> str: 307 | """ 308 | Prepare and validate the model ID for use with the API. 
309 | 310 | Args: 311 | model_id: The original model ID from the user 312 | 313 | Returns: 314 | Properly formatted model ID 315 | 316 | Raises: 317 | ValueError: If the model ID is invalid or unsupported 318 | """ 319 | original_model_id = model_id 320 | model_id = self.strip_prefix(model_id) 321 | 322 | # If the model ID doesn't look like a Gemini model, try to find it by name 323 | if not model_id.startswith("gemini-"): 324 | models_list = self.get_google_models() 325 | found_model = next( 326 | (m["id"] for m in models_list if m["name"] == original_model_id), None 327 | ) 328 | if found_model and found_model.startswith("gemini-"): 329 | model_id = found_model 330 | self.log.debug( 331 | f"Mapped model name '{original_model_id}' to model ID '{model_id}'" 332 | ) 333 | else: 334 | # If we still don't have a valid ID, raise an error 335 | if not model_id.startswith("gemini-"): 336 | self.log.error( 337 | f"Invalid or unsupported model ID: '{original_model_id}'" 338 | ) 339 | raise ValueError( 340 | f"Invalid or unsupported Google model ID or name: '{original_model_id}'" 341 | ) 342 | 343 | return model_id 344 | 345 | def _prepare_content( 346 | self, messages: List[Dict[str, Any]] 347 | ) -> Tuple[List[Dict[str, Any]], Optional[str]]: 348 | """ 349 | Prepare messages content for the API and extract system message if present. 350 | 351 | Args: 352 | messages: List of message objects from the request 353 | 354 | Returns: 355 | Tuple of (prepared content list, system message string or None) 356 | """ 357 | # Extract system message 358 | system_message = next( 359 | (msg["content"] for msg in messages if msg.get("role") == "system"), 360 | None, 361 | ) 362 | 363 | # Prepare contents for the API 364 | contents = [] 365 | for message in messages: 366 | role = message.get("role") 367 | if role == "system": 368 | continue # Skip system messages, handled separately 369 | 370 | content = message.get("content", "") 371 | parts = [] 372 | 373 | # Handle different content types 374 | if isinstance(content, list): # Multimodal content 375 | parts.extend(self._process_multimodal_content(content)) 376 | elif isinstance(content, str): # Plain text content 377 | parts.append({"text": content}) 378 | else: 379 | self.log.warning(f"Unsupported message content type: {type(content)}") 380 | continue # Skip unsupported content 381 | 382 | # Map roles: 'assistant' -> 'model', 'user' -> 'user' 383 | api_role = "model" if role == "assistant" else "user" 384 | if parts: # Only add if there are parts 385 | contents.append({"role": api_role, "parts": parts}) 386 | 387 | return contents, system_message 388 | 389 | def _process_multimodal_content( 390 | self, content_list: List[Dict[str, Any]] 391 | ) -> List[Dict[str, Any]]: 392 | """ 393 | Process multimodal content (text and images). 
394 | 395 | Args: 396 | content_list: List of content items 397 | 398 | Returns: 399 | List of processed parts for the Gemini API 400 | """ 401 | parts = [] 402 | 403 | for item in content_list: 404 | if item.get("type") == "text": 405 | parts.append({"text": item.get("text", "")}) 406 | elif item.get("type") == "image_url": 407 | image_url = item.get("image_url", {}).get("url", "") 408 | 409 | if image_url.startswith("data:image"): 410 | # Handle base64 encoded image data 411 | try: 412 | header, encoded = image_url.split(",", 1) 413 | mime_type = header.split(":")[1].split(";")[0] 414 | 415 | # Basic validation for image types 416 | if mime_type not in [ 417 | "image/jpeg", 418 | "image/png", 419 | "image/webp", 420 | "image/heic", 421 | "image/heif", 422 | ]: 423 | self.log.warning( 424 | f"Unsupported image mime type: {mime_type}" 425 | ) 426 | parts.append( 427 | {"text": f"[Image type {mime_type} not supported]"} 428 | ) 429 | continue 430 | 431 | parts.append( 432 | { 433 | "inline_data": { 434 | "mime_type": mime_type, 435 | "data": encoded, 436 | } 437 | } 438 | ) 439 | except Exception as img_ex: 440 | self.log.exception(f"Could not parse image data URL: {img_ex}") 441 | parts.append({"text": "[Image data could not be processed]"}) 442 | else: 443 | # Gemini API doesn't directly support image URLs 444 | self.log.warning(f"Direct image URLs not supported: {image_url}") 445 | parts.append({"text": f"[Image URL not processed: {image_url}]"}) 446 | 447 | return parts 448 | 449 | @staticmethod 450 | def _create_tool(tool_def): 451 | """OpenwebUI tool is a functools.partial coroutine, which genai does not support directly. 452 | See https://github.com/googleapis/python-genai/issues/907 453 | 454 | This function wraps the tool into a callable that can be used with genai. 455 | In particular, it sets the signature of the function properly, 456 | removing any frozen keyword arguments (extra_params). 457 | """ 458 | bound_callable = tool_def["callable"] 459 | 460 | # Create a wrapper for bound_callable, which is always async 461 | async def wrapper(*args, **kwargs): 462 | return await bound_callable(*args, **kwargs) 463 | 464 | # Remove 'frozen' keyword arguments (extra_params) from the signature 465 | original_sig = inspect.signature(bound_callable) 466 | frozen_kwargs = { 467 | "__event_emitter__", 468 | "__event_call__", 469 | "__user__", 470 | "__metadata__", 471 | "__request__", 472 | "__model__", 473 | } 474 | new_parameters = [] 475 | 476 | for name, parameter in original_sig.parameters.items(): 477 | # Exclude keyword arguments that are frozen 478 | if name in frozen_kwargs and parameter.kind in ( 479 | inspect.Parameter.POSITIONAL_OR_KEYWORD, 480 | inspect.Parameter.KEYWORD_ONLY, 481 | ): 482 | continue 483 | # Keep remaining parameters 484 | new_parameters.append(parameter) 485 | 486 | new_sig = inspect.Signature( 487 | parameters=new_parameters, return_annotation=original_sig.return_annotation 488 | ) 489 | 490 | # Ensure name, docstring and signature are properly set 491 | update_wrapper(wrapper, bound_callable) 492 | wrapper.__signature__ = new_sig 493 | 494 | return wrapper 495 | 496 | def _configure_generation( 497 | self, 498 | body: Dict[str, Any], 499 | system_instruction: Optional[str], 500 | __metadata__: Dict[str, Any], 501 | __tools__: dict[str, Any] | None = None, 502 | ) -> types.GenerateContentConfig: 503 | """ 504 | Configure generation parameters and safety settings. 
505 | 
506 |         Args:
507 |             body: The request body containing generation parameters
508 |             system_instruction: Optional system instruction string
509 | 
510 |         Returns:
511 |             types.GenerateContentConfig
512 |         """
513 |         gen_config_params = {
514 |             "temperature": body.get("temperature"),
515 |             "top_p": body.get("top_p"),
516 |             "top_k": body.get("top_k"),
517 |             "max_output_tokens": body.get("max_tokens"),
518 |             "stop_sequences": body.get("stop") or None,
519 |             "system_instruction": system_instruction,
520 |         }
521 |         # Configure safety settings
522 |         if self.valves.USE_PERMISSIVE_SAFETY:
523 |             safety_settings = [
524 |                 types.SafetySetting(
525 |                     category="HARM_CATEGORY_HARASSMENT", threshold="BLOCK_NONE"
526 |                 ),
527 |                 types.SafetySetting(
528 |                     category="HARM_CATEGORY_HATE_SPEECH", threshold="BLOCK_NONE"
529 |                 ),
530 |                 types.SafetySetting(
531 |                     category="HARM_CATEGORY_SEXUALLY_EXPLICIT", threshold="BLOCK_NONE"
532 |                 ),
533 |                 types.SafetySetting(
534 |                     category="HARM_CATEGORY_DANGEROUS_CONTENT", threshold="BLOCK_NONE"
535 |                 ),
536 |             ]
537 |             gen_config_params |= {"safety_settings": safety_settings}
538 | 
539 |         features = __metadata__.get("features", {})
540 |         if features.get("google_search_tool", False):
541 |             self.log.debug("Enabling Google search grounding")
542 |             gen_config_params.setdefault("tools", []).append(
543 |                 types.Tool(google_search=types.GoogleSearch())
544 |             )
545 | 
546 |         if __tools__ is not None and __metadata__.get("function_calling") == "native":
547 |             for name, tool_def in __tools__.items():
548 |                 tool = self._create_tool(tool_def)
549 |                 self.log.debug(
550 |                     f"Adding tool '{name}' with signature {tool.__signature__}"
551 |                 )
552 | 
553 |                 gen_config_params.setdefault("tools", []).append(tool)
554 | 
555 |         # Filter out None values for generation config
556 |         filtered_params = {k: v for k, v in gen_config_params.items() if v is not None}
557 |         return types.GenerateContentConfig(**filtered_params)
558 | 
559 |     @staticmethod
560 |     def _format_grounding_chunks_as_sources(
561 |         grounding_chunks: list[types.GroundingChunk],
562 |     ):
563 |         formatted_sources = []
564 |         for chunk in grounding_chunks:
565 |             context = chunk.web or chunk.retrieved_context
566 |             if not context:
567 |                 continue
568 | 
569 |             uri = context.uri
570 |             title = context.title or "Source"
571 | 
572 |             formatted_sources.append(
573 |                 {
574 |                     "source": {
575 |                         "name": title,
576 |                         "type": "web_search_results",
577 |                         "url": uri,
578 |                     },
579 |                     "document": ["Click the link to view the content."],
580 |                     "metadata": [{"source": title}],
581 |                 }
582 |             )
583 |         return formatted_sources
584 | 
585 |     async def _handle_streaming_response(
586 |         self,
587 |         response_iterator: Any,
588 |         __event_emitter__: Callable,
589 |     ) -> AsyncIterator[str]:
590 |         """
591 |         Handle streaming response from Gemini API.
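        This is an async generator; a caller consumes it roughly as follows
        (a sketch; "iterator" and "emitter" are placeholder names):

            async for text_chunk in self._handle_streaming_response(iterator, emitter):
                yield text_chunk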
592 | 
593 |         Args:
594 |             response_iterator: Async iterator from generate_content_stream
595 | 
596 |         Returns:
597 |             Async generator yielding text chunks
598 |         """
599 |         web_search_queries = []
600 |         grounding_chunks = []
601 |         try:
602 |             async for chunk in response_iterator:
603 |                 # Check for safety feedback or empty chunks
604 |                 if not chunk.candidates:
605 |                     # Check prompt feedback
606 |                     if (
607 |                         response_iterator.prompt_feedback
608 |                         and response_iterator.prompt_feedback.block_reason
609 |                     ):
610 |                         yield f"[Blocked due to Prompt Safety: {response_iterator.prompt_feedback.block_reason.name}]"
611 |                     else:
612 |                         yield "[Blocked by safety settings]"
613 |                     return  # Stop generation
614 | 
615 |                 grounding_metadata = chunk.candidates[0].grounding_metadata
616 |                 if grounding_metadata and grounding_metadata.grounding_chunks:
617 |                     grounding_chunks.extend(grounding_metadata.grounding_chunks)
618 |                 if grounding_metadata and grounding_metadata.web_search_queries:
619 |                     web_search_queries.extend(grounding_metadata.web_search_queries)
620 | 
621 |                 if chunk.text:
622 |                     yield chunk.text
623 | 
624 |             # After processing all chunks, handle grounding data
625 |             # Add sources to the response
626 |             if grounding_chunks and __event_emitter__:
627 |                 sources = self._format_grounding_chunks_as_sources(grounding_chunks)
628 |                 await __event_emitter__(
629 |                     {"type": "chat:completion", "data": {"sources": sources}}
630 |                 )
631 | 
632 |             # Add status specifying google queries used for grounding
633 |             if web_search_queries and __event_emitter__:
634 |                 await __event_emitter__(
635 |                     {
636 |                         "type": "status",
637 |                         "data": {
638 |                             "action": "web_search",
639 |                             "description": "This response was grounded with Google Search",
640 |                             "urls": [
641 |                                 f"https://www.google.com/search?q={query}"
642 |                                 for query in web_search_queries
643 |                             ],
644 |                         },
645 |                     }
646 |                 )
647 | 
648 |         except Exception as e:
649 |             self.log.exception(f"Error during streaming: {e}")
650 |             yield f"Error during streaming: {e}"
651 | 
652 |     def _handle_standard_response(self, response: Any) -> str:
653 |         """
654 |         Handle non-streaming response from Gemini API.
655 | 
656 |         Args:
657 |             response: Response from generate_content
658 | 
659 |         Returns:
660 |             Generated text or error message
661 |         """
662 |         # Check for prompt safety blocks
663 |         if response.prompt_feedback and response.prompt_feedback.block_reason:
664 |             return f"[Blocked due to Prompt Safety: {response.prompt_feedback.block_reason.name}]"
665 | 
666 |         # Check for missing candidates
667 |         if not response.candidates:
668 |             return "[Blocked by safety settings or no candidates generated]"
669 | 
670 |         # Check candidate finish reason
671 |         candidate = response.candidates[0]
672 |         if candidate.finish_reason == types.FinishReason.SAFETY:
673 |             # Try to get specific safety rating info
674 |             blocking_rating = next(
675 |                 (r for r in candidate.safety_ratings if r.blocked), None
676 |             )
677 |             reason = f" ({blocking_rating.category.name})" if blocking_rating else ""
678 |             return f"[Blocked by safety settings{reason}]"
679 | 
680 |         # Process content parts
681 |         if candidate.content and candidate.content.parts:
682 |             # Combine text from all parts, skipping non-text parts (whose
683 |             # text attribute is None, e.g. function-call parts)
684 |             return "".join(
685 |                 part.text for part in candidate.content.parts if getattr(part, "text", None)
685 |             )
686 |         else:
687 |             return "[No content generated or unexpected response structure]"
688 | 
689 |     async def _retry_with_backoff(self, func, *args, **kwargs) -> Any:
690 |         """
691 |         Retry a function with exponential backoff.
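        Wait times follow min(2**n + 0.1 * n, 10) seconds for retry n, i.e.
        roughly 2.1s, 4.2s, 8.3s, then capped at 10s from the fourth retry
        onward.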
692 | 693 | Args: 694 | func: Async function to retry 695 | *args, **kwargs: Arguments to pass to the function 696 | 697 | Returns: 698 | Result from the function 699 | 700 | Raises: 701 | The last exception encountered after all retries 702 | """ 703 | max_retries = self.valves.RETRY_COUNT 704 | retry_count = 0 705 | last_exception = None 706 | 707 | while retry_count <= max_retries: 708 | try: 709 | return await func(*args, **kwargs) 710 | except ServerError as e: 711 | # These errors might be temporary, so retry 712 | retry_count += 1 713 | last_exception = e 714 | 715 | if retry_count <= max_retries: 716 | # Calculate backoff time (exponential with jitter) 717 | wait_time = min(2**retry_count + (0.1 * retry_count), 10) 718 | self.log.warning( 719 | f"Temporary error from Google API: {e}. Retrying in {wait_time:.1f}s ({retry_count}/{max_retries})" 720 | ) 721 | await asyncio.sleep(wait_time) 722 | else: 723 | raise 724 | except Exception: 725 | # Don't retry other exceptions 726 | raise 727 | 728 | # If we get here, we've exhausted retries 729 | assert last_exception is not None 730 | raise last_exception 731 | 732 | async def pipe( 733 | self, 734 | body: Dict[str, Any], 735 | __metadata__: dict[str, Any], 736 | __event_emitter__: Callable, 737 | __tools__: dict[str, Any] | None, 738 | ) -> Union[str, AsyncIterator[str]]: 739 | """ 740 | Main method for sending requests to the Google Gemini endpoint. 741 | 742 | Args: 743 | body: The request body containing messages and other parameters. 744 | 745 | Returns: 746 | Response from Google Gemini API, which could be a string or an iterator for streaming. 747 | """ 748 | # Setup logging for this request 749 | request_id = id(body) 750 | self.log.debug(f"Processing request {request_id}") 751 | 752 | try: 753 | # Parse and validate model ID 754 | model_id = body.get("model", "") 755 | try: 756 | model_id = self._prepare_model_id(model_id) 757 | self.log.debug(f"Using model: {model_id}") 758 | except ValueError as ve: 759 | return f"Model Error: {ve}" 760 | 761 | # Get stream flag 762 | stream = body.get("stream", False) 763 | messages = body.get("messages", []) 764 | 765 | # Prepare content and extract system message 766 | contents, system_instruction = self._prepare_content(messages) 767 | if not contents: 768 | return "Error: No valid message content found" 769 | 770 | # Configure generation parameters and safety settings 771 | generation_config = self._configure_generation( 772 | body, system_instruction, __metadata__, __tools__ 773 | ) 774 | 775 | # Make the API call 776 | client = self._get_client() 777 | if stream: 778 | try: 779 | 780 | async def get_streaming_response(): 781 | return await client.aio.models.generate_content_stream( 782 | model=model_id, 783 | contents=contents, 784 | config=generation_config, 785 | ) 786 | 787 | response_iterator = await self._retry_with_backoff( 788 | get_streaming_response 789 | ) 790 | self.log.debug(f"Request {request_id}: Got streaming response") 791 | return self._handle_streaming_response( 792 | response_iterator, __event_emitter__ 793 | ) 794 | 795 | except Exception as e: 796 | self.log.exception(f"Error in streaming request {request_id}: {e}") 797 | return f"Error during streaming: {e}" 798 | else: 799 | try: 800 | 801 | async def get_response(): 802 | return await client.aio.models.generate_content( 803 | model=model_id, 804 | contents=contents, 805 | config=generation_config, 806 | ) 807 | 808 | response = await self._retry_with_backoff(get_response) 809 | self.log.debug(f"Request 
{request_id}: Got non-streaming response")
810 |                     return self._handle_standard_response(response)
811 | 
812 |                 except Exception as e:
813 |                     self.log.exception(
814 |                         f"Error in non-streaming request {request_id}: {e}"
815 |                     )
816 |                     return f"Error generating content: {e}"
817 | 
818 |         except ClientError as ce:
819 |             error_msg = f"Client error raised by the GenAI API: {ce}"
820 |             self.log.error(f"Client error: {ce}")
821 |             return error_msg
822 | 
823 |         except ServerError as se:
824 |             error_msg = f"Server error raised by the GenAI API: {se}"
825 |             self.log.error(f"Server error raised by the GenAI API: {se}")
826 |             return error_msg
827 | 
828 |         except APIError as apie:
829 |             error_msg = f"Google API Error: {apie}"
830 |             self.log.error(error_msg)
831 |             return error_msg
832 | 
833 |         except ValueError as ve:
834 |             error_msg = f"Configuration error: {ve}"
835 |             self.log.error(f"Value error: {ve}")
836 |             return error_msg
837 | 
838 |         except Exception as e:
839 |             # Log the full error with traceback
840 |             import traceback
841 | 
842 |             error_trace = traceback.format_exc()
843 |             self.log.exception(f"Unexpected error: {e}\n{error_trace}")
844 | 
845 |             # Return a user-friendly error message
846 |             return f"An error occurred while processing your request: {e}"
847 | 
-------------------------------------------------------------------------------- /pipelines/infomaniak/infomaniak.py: --------------------------------------------------------------------------------
1 | """
2 | title: Infomaniak AI Tools Pipeline
3 | author: owndev
4 | author_url: https://github.com/owndev/
5 | project_url: https://github.com/owndev/Open-WebUI-Functions
6 | funding_url: https://github.com/sponsors/owndev
7 | infomaniak_url: https://own.dev/infomaniak-com-en-hosting-ai-tools
8 | version: 2.0.0
9 | license: Apache License 2.0
10 | description: A manifold pipeline for interacting with Infomaniak AI Tools.
11 | features: 12 | - Manifold pipeline for Infomaniak AI Tools 13 | - Lists available models for easy access 14 | - Robust error handling and logging 15 | - Handles streaming and non-streaming responses 16 | - Encrypted storage of sensitive API keys 17 | """ 18 | 19 | from typing import List, Union, Generator, Iterator, Optional, Dict, Any 20 | from fastapi.responses import StreamingResponse 21 | from pydantic import BaseModel, Field, GetCoreSchemaHandler 22 | from starlette.background import BackgroundTask 23 | from open_webui.env import AIOHTTP_CLIENT_TIMEOUT, SRC_LOG_LEVELS 24 | from cryptography.fernet import Fernet, InvalidToken 25 | import aiohttp 26 | import json 27 | import os 28 | import logging 29 | import base64 30 | import hashlib 31 | from pydantic_core import core_schema 32 | 33 | 34 | # Simplified encryption implementation with automatic handling 35 | class EncryptedStr(str): 36 | """A string type that automatically handles encryption/decryption""" 37 | 38 | @classmethod 39 | def _get_encryption_key(cls) -> Optional[bytes]: 40 | """ 41 | Generate encryption key from WEBUI_SECRET_KEY if available 42 | Returns None if no key is configured 43 | """ 44 | secret = os.getenv("WEBUI_SECRET_KEY") 45 | if not secret: 46 | return None 47 | 48 | hashed_key = hashlib.sha256(secret.encode()).digest() 49 | return base64.urlsafe_b64encode(hashed_key) 50 | 51 | @classmethod 52 | def encrypt(cls, value: str) -> str: 53 | """ 54 | Encrypt a string value if a key is available 55 | Returns the original value if no key is available 56 | """ 57 | if not value or value.startswith("encrypted:"): 58 | return value 59 | 60 | key = cls._get_encryption_key() 61 | if not key: # No encryption if no key 62 | return value 63 | 64 | f = Fernet(key) 65 | encrypted = f.encrypt(value.encode()) 66 | return f"encrypted:{encrypted.decode()}" 67 | 68 | @classmethod 69 | def decrypt(cls, value: str) -> str: 70 | """ 71 | Decrypt an encrypted string value if a key is available 72 | Returns the original value if no key is available or decryption fails 73 | """ 74 | if not value or not value.startswith("encrypted:"): 75 | return value 76 | 77 | key = cls._get_encryption_key() 78 | if not key: # No decryption if no key 79 | return value[len("encrypted:") :] # Return without prefix 80 | 81 | try: 82 | encrypted_part = value[len("encrypted:") :] 83 | f = Fernet(key) 84 | decrypted = f.decrypt(encrypted_part.encode()) 85 | return decrypted.decode() 86 | except (InvalidToken, Exception): 87 | return value 88 | 89 | # Pydantic integration 90 | @classmethod 91 | def __get_pydantic_core_schema__( 92 | cls, _source_type: Any, _handler: GetCoreSchemaHandler 93 | ) -> core_schema.CoreSchema: 94 | return core_schema.union_schema( 95 | [ 96 | core_schema.is_instance_schema(cls), 97 | core_schema.chain_schema( 98 | [ 99 | core_schema.str_schema(), 100 | core_schema.no_info_plain_validator_function( 101 | lambda value: cls(cls.encrypt(value) if value else value) 102 | ), 103 | ] 104 | ), 105 | ], 106 | serialization=core_schema.plain_serializer_function_ser_schema( 107 | lambda instance: str(instance) 108 | ), 109 | ) 110 | 111 | def get_decrypted(self) -> str: 112 | """Get the decrypted value""" 113 | return self.decrypt(self) 114 | 115 | 116 | # Helper functions 117 | async def cleanup_response( 118 | response: Optional[aiohttp.ClientResponse], 119 | session: Optional[aiohttp.ClientSession], 120 | ) -> None: 121 | """ 122 | Clean up the response and session objects. 
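        Typically scheduled as a background task on streaming responses, as
        done in Pipe.pipe below:

            BackgroundTask(cleanup_response, response=request, session=session)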
123 | 124 | Args: 125 | response: The ClientResponse object to close 126 | session: The ClientSession object to close 127 | """ 128 | if response: 129 | response.close() 130 | if session: 131 | await session.close() 132 | 133 | 134 | class Pipe: 135 | # Environment variables for API key, endpoint, and optional model 136 | class Valves(BaseModel): 137 | # API key for Infomaniak - automatically encrypted 138 | INFOMANIAK_API_KEY: EncryptedStr = Field( 139 | default=os.getenv("INFOMANIAK_API_KEY", "API_KEY"), 140 | description="API key for Infomaniak AI TOOLS API", 141 | ) 142 | # Product ID for Infomaniak 143 | INFOMANIAK_PRODUCT_ID: int = Field( 144 | default=os.getenv("INFOMANIAK_PRODUCT_ID", 50070), 145 | description="Product ID for Infomaniak AI TOOLS API", 146 | ) 147 | # Base URL for Infomaniak API 148 | INFOMANIAK_BASE_URL: str = Field( 149 | default=os.getenv("INFOMANIAK_BASE_URL", "https://api.infomaniak.com"), 150 | description="Base URL for Infomaniak API", 151 | ) 152 | # Prefix for model names 153 | NAME_PREFIX: str = Field( 154 | default="Infomaniak: ", description="Prefix to be added before model names" 155 | ) 156 | 157 | def __init__(self): 158 | self.type = "manifold" 159 | self.valves = self.Valves() 160 | self.name: str = self.valves.NAME_PREFIX 161 | 162 | def validate_environment(self) -> None: 163 | """ 164 | Validates that required environment variables are set. 165 | 166 | Raises: 167 | ValueError: If required environment variables are not set. 168 | """ 169 | # Access the decrypted API key 170 | api_key = self.valves.INFOMANIAK_API_KEY.get_decrypted() 171 | if not api_key: 172 | raise ValueError("INFOMANIAK_API_KEY is not set!") 173 | if not self.valves.INFOMANIAK_PRODUCT_ID: 174 | raise ValueError("INFOMANIAK_PRODUCT_ID is not set!") 175 | if not self.valves.INFOMANIAK_BASE_URL: 176 | raise ValueError("INFOMANIAK_BASE_URL is not set!") 177 | 178 | def get_headers(self) -> Dict[str, str]: 179 | """ 180 | Constructs the headers for the API request. 181 | 182 | Returns: 183 | Dictionary containing the required headers for the API request. 184 | """ 185 | # Access the decrypted API key 186 | api_key = self.valves.INFOMANIAK_API_KEY.get_decrypted() 187 | headers = { 188 | "Authorization": f"Bearer {api_key}", 189 | "Content-Type": "application/json", 190 | } 191 | return headers 192 | 193 | def get_api_url(self, endpoint: str = "chat/completions") -> str: 194 | """ 195 | Constructs the API URL for Infomaniak requests. 196 | 197 | Args: 198 | endpoint: The API endpoint to use 199 | 200 | Returns: 201 | Full API URL 202 | """ 203 | return f"{self.valves.INFOMANIAK_BASE_URL}/1/ai/{self.valves.INFOMANIAK_PRODUCT_ID}/openai/{endpoint}" 204 | 205 | def validate_body(self, body: Dict[str, Any]) -> None: 206 | """ 207 | Validates the request body to ensure required fields are present. 208 | 209 | Args: 210 | body: The request body to validate 211 | 212 | Raises: 213 | ValueError: If required fields are missing or invalid. 214 | """ 215 | if "messages" not in body or not isinstance(body["messages"], list): 216 | raise ValueError("The 'messages' field is required and must be a list.") 217 | 218 | async def get_infomaniak_models(self) -> List[Dict[str, str]]: 219 | """ 220 | Returns a list of Infomaniak AI LLM models. 221 | 222 | Returns: 223 | List of dictionaries containing model id and name. 
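        A returned entry looks like (illustrative values):
        {"id": "mixtral", "name": "Mixtral 8x7B", "meta": {...}}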
224 | """ 225 | log = logging.getLogger("infomaniak_ai_tools.get_models") 226 | log.setLevel(SRC_LOG_LEVELS["OPENAI"]) 227 | 228 | headers = self.get_headers() 229 | models = [] 230 | 231 | try: 232 | async with aiohttp.ClientSession() as session: 233 | async with session.get( 234 | url=f"{self.valves.INFOMANIAK_BASE_URL}/1/ai/models", 235 | headers=headers, 236 | ) as resp: 237 | if resp.status == 200: 238 | data = await resp.json() 239 | if data.get("result") == "success" and "data" in data: 240 | models_data = data["data"] 241 | if isinstance(models_data, list): 242 | for item in models_data: 243 | if not isinstance(item, dict): 244 | log.error( 245 | f"Expected item to be dict but got: {type(item).__name__}" 246 | ) 247 | continue 248 | if ( 249 | item.get("type") == "llm" 250 | ): # only include llm models 251 | models.append( 252 | { 253 | "id": item.get("name", ""), 254 | "name": item.get( 255 | "description", item.get("name", "") 256 | ), 257 | # Profile image and description are currently not working in Open WebUI 258 | "meta": { 259 | "profile_image_url": item.get( 260 | "logo_url", "" 261 | ), 262 | "description": item.get( 263 | "documentation_link", "" 264 | ), 265 | }, 266 | } 267 | ) 268 | return models 269 | else: 270 | log.error( 271 | "Expected 'data' to be a list but received a non-list value." 272 | ) 273 | log.error(f"Failed to get Infomaniak models: {await resp.text()}") 274 | except Exception as e: 275 | log.exception(f"Error getting Infomaniak models: {str(e)}") 276 | 277 | # Default model if API call fails 278 | return [ 279 | { 280 | "id": f"{self.valves.INFOMANIAK_PRODUCT_ID}", 281 | "name": "Infomaniak: LLM API", 282 | } 283 | ] 284 | 285 | async def pipes(self) -> List[Dict[str, str]]: 286 | """ 287 | Returns a list of available pipes based on configuration. 288 | 289 | Returns: 290 | List of dictionaries containing pipe id and name. 291 | """ 292 | self.validate_environment() 293 | return await self.get_infomaniak_models() 294 | 295 | async def pipe( 296 | self, body: Dict[str, Any] 297 | ) -> Union[str, Generator, Iterator, Dict[str, Any], StreamingResponse]: 298 | """ 299 | Main method for sending requests to the Infomaniak AI endpoint. 300 | 301 | Args: 302 | body: The request body containing messages and other parameters 303 | 304 | Returns: 305 | Response from Infomaniak AI API, which could be a string, dictionary or streaming response 306 | """ 307 | log = logging.getLogger("infomaniak_ai_tools.pipe") 308 | log.setLevel(SRC_LOG_LEVELS["OPENAI"]) 309 | 310 | # Validate the request body 311 | self.validate_body(body) 312 | 313 | # Construct headers 314 | headers = self.get_headers() 315 | 316 | # Filter allowed parameters (https://developer.infomaniak.com/docs/api/post/1/ai/%7Bproduct_id%7D/openai/chat/completions) 317 | allowed_params = { 318 | "frequency_penalty", 319 | "logit_bias", 320 | "logprobs", 321 | "max_tokens", 322 | "messages", 323 | "model", 324 | "n", 325 | "presence_penalty", 326 | "profile_type", 327 | "seed", 328 | "stop", 329 | "stream", 330 | "temperature", 331 | "top_logprobs", 332 | "top_p", 333 | } 334 | filtered_body = {k: v for k, v in body.items() if k in allowed_params} 335 | 336 | # Handle model extraction for Infomaniak 337 | if "model" in filtered_body and filtered_body["model"]: 338 | # Extract model ID 339 | filtered_body["model"] = ( 340 | filtered_body["model"].split(".", 1)[1] 341 | if "." 
in filtered_body["model"] 342 | else filtered_body["model"] 343 | ) 344 | 345 | # Convert the modified body back to JSON 346 | payload = json.dumps(filtered_body) 347 | 348 | request = None 349 | session = None 350 | streaming = False 351 | response = None 352 | 353 | try: 354 | session = aiohttp.ClientSession( 355 | trust_env=True, 356 | timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT), 357 | ) 358 | 359 | api_url = self.get_api_url() 360 | request = await session.request( 361 | method="POST", 362 | url=api_url, 363 | data=payload, 364 | headers=headers, 365 | ) 366 | 367 | # Check if response is SSE 368 | if "text/event-stream" in request.headers.get("Content-Type", ""): 369 | streaming = True 370 | return StreamingResponse( 371 | request.content, 372 | status_code=request.status, 373 | headers=dict(request.headers), 374 | background=BackgroundTask( 375 | cleanup_response, response=request, session=session 376 | ), 377 | ) 378 | else: 379 | try: 380 | response = await request.json() 381 | except Exception as e: 382 | log.error(f"Error parsing JSON response: {e}") 383 | response = await request.text() 384 | 385 | request.raise_for_status() 386 | return response 387 | 388 | except Exception as e: 389 | log.exception(f"Error in Infomaniak AI request: {e}") 390 | 391 | detail = f"Exception: {str(e)}" 392 | if isinstance(response, dict): 393 | if "error" in response: 394 | detail = f"{response['error']['message'] if 'message' in response['error'] else response['error']}" 395 | elif isinstance(response, str): 396 | detail = response 397 | 398 | return f"Error: {detail}" 399 | finally: 400 | if not streaming and session: 401 | if request: 402 | request.close() 403 | await session.close() 404 | -------------------------------------------------------------------------------- /pipelines/n8n/Open_WebUI_Test_Agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Open WebUI Test Agent", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "options": { 7 | "frequencyPenalty": 0.2, 8 | "temperature": 0.7 9 | } 10 | }, 11 | "id": "ea4a899a-ad2e-4c2f-992e-ffa3683f6288", 12 | "name": "OpenAI Chat Model", 13 | "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", 14 | "position": [ 15 | 2280, 16 | 1280 17 | ], 18 | "typeVersion": 1, 19 | "credentials": { 20 | "openAiApi": { 21 | "id": "B5ESm3C8oApCbE39", 22 | "name": "OpenAi ownadmin" 23 | } 24 | } 25 | }, 26 | { 27 | "parameters": { 28 | "sessionKey": "={{ $('Webhook').item.json.body.user_id }}_{{ $('Webhook').item.json.body.chat_id }}", 29 | "contextWindowLength": 10 30 | }, 31 | "id": "edc574f1-f176-4572-9c33-bc16855c0250", 32 | "name": "Window Buffer Memory", 33 | "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow", 34 | "position": [ 35 | 2440, 36 | 1100 37 | ], 38 | "typeVersion": 1 39 | }, 40 | { 41 | "parameters": {}, 42 | "type": "@n8n/n8n-nodes-langchain.toolWikipedia", 43 | "typeVersion": 1, 44 | "position": [ 45 | 2600, 46 | 1100 47 | ], 48 | "id": "cbf4bd61-81f4-4063-b5c9-db18d67ff46d", 49 | "name": "Wikipedia" 50 | }, 51 | { 52 | "parameters": {}, 53 | "type": "@n8n/n8n-nodes-langchain.toolCalculator", 54 | "typeVersion": 1, 55 | "position": [ 56 | 2720, 57 | 1100 58 | ], 59 | "id": "dc45541b-f295-41af-9c6b-9b66c3e20557", 60 | "name": "Calculator" 61 | }, 62 | { 63 | "parameters": { 64 | "httpMethod": "POST", 65 | "path": "OpenWebUITestAgent", 66 | "authentication": "headerAuth", 67 | "responseMode": "responseNode", 68 | "options": {} 69 | }, 70 | "type": "n8n-nodes-base.webhook", 71 | 
"typeVersion": 2, 72 | "position": [ 73 | 2060, 74 | 840 75 | ], 76 | "id": "2b9fa054-971a-4029-8d4e-865698b8f1f9", 77 | "name": "Webhook", 78 | "webhookId": "b3e7d885-0c39-4425-a239-ae065759dbb5", 79 | "credentials": { 80 | "httpHeaderAuth": { 81 | "id": "mf995syl7J6pvo2L", 82 | "name": "Header Auth: Open WebUI Test Agent" 83 | } 84 | } 85 | }, 86 | { 87 | "parameters": { 88 | "options": {} 89 | }, 90 | "type": "n8n-nodes-base.respondToWebhook", 91 | "typeVersion": 1.1, 92 | "position": [ 93 | 2840, 94 | 840 95 | ], 96 | "id": "325974b4-e249-4eb3-81dc-26b392f0894e", 97 | "name": "Respond to Webhook" 98 | }, 99 | { 100 | "parameters": { 101 | "model": "gpt-4o-mini", 102 | "options": {} 103 | }, 104 | "type": "@n8n/n8n-nodes-langchain.lmChatAzureOpenAi", 105 | "typeVersion": 1, 106 | "position": [ 107 | 2280, 108 | 1100 109 | ], 110 | "id": "f2e088e0-aa71-4b42-8eaa-e0197150b9ca", 111 | "name": "Azure OpenAI Chat Model", 112 | "credentials": { 113 | "azureOpenAiApi": { 114 | "id": "jSiXjq4A2WW6lQo5", 115 | "name": "Azure Open AI: owndev" 116 | } 117 | } 118 | }, 119 | { 120 | "parameters": { 121 | "toolDescription": "Get current Time", 122 | "url": "https://www.timeapi.io/api/time/current/zone", 123 | "sendQuery": true, 124 | "parametersQuery": { 125 | "values": [ 126 | { 127 | "name": "timeZone", 128 | "valueProvider": "fieldValue", 129 | "value": "Europe/Berlin" 130 | } 131 | ] 132 | } 133 | }, 134 | "type": "@n8n/n8n-nodes-langchain.toolHttpRequest", 135 | "typeVersion": 1.1, 136 | "position": [ 137 | 2960, 138 | 1100 139 | ], 140 | "id": "28fe049c-6fd7-4bde-9b44-85100001284d", 141 | "name": "Get Time" 142 | }, 143 | { 144 | "parameters": { 145 | "promptType": "define", 146 | "text": "={{ $json.body.chatInput }}", 147 | "options": { 148 | "systemMessage": "={{ $json.body.systemPrompt }}" 149 | } 150 | }, 151 | "type": "@n8n/n8n-nodes-langchain.agent", 152 | "typeVersion": 1.8, 153 | "position": [ 154 | 2380, 155 | 840 156 | ], 157 | "id": "d7d5e7d0-9e85-4c81-8b5e-d56b0ecbe15a", 158 | "name": "AI Agent" 159 | }, 160 | { 161 | "parameters": { 162 | "includeTime": "={{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Include_Current_Time', ``, 'boolean') }}", 163 | "options": { 164 | "timezone": "={{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Timezone', ``, 'string') }}" 165 | } 166 | }, 167 | "type": "n8n-nodes-base.dateTimeTool", 168 | "typeVersion": 2, 169 | "position": [ 170 | 2840, 171 | 1100 172 | ], 173 | "id": "b9f40873-67cc-48a6-b19a-28400805ca6a", 174 | "name": "Date & Time" 175 | } 176 | ], 177 | "pinData": {}, 178 | "connections": { 179 | "OpenAI Chat Model": { 180 | "ai_languageModel": [ 181 | [ 182 | { 183 | "node": "AI Agent", 184 | "type": "ai_languageModel", 185 | "index": 0 186 | } 187 | ] 188 | ] 189 | }, 190 | "Window Buffer Memory": { 191 | "ai_memory": [ 192 | [ 193 | { 194 | "node": "AI Agent", 195 | "type": "ai_memory", 196 | "index": 0 197 | } 198 | ] 199 | ] 200 | }, 201 | "Wikipedia": { 202 | "ai_tool": [ 203 | [ 204 | { 205 | "node": "AI Agent", 206 | "type": "ai_tool", 207 | "index": 0 208 | } 209 | ] 210 | ] 211 | }, 212 | "Calculator": { 213 | "ai_tool": [ 214 | [ 215 | { 216 | "node": "AI Agent", 217 | "type": "ai_tool", 218 | "index": 0 219 | } 220 | ] 221 | ] 222 | }, 223 | "Webhook": { 224 | "main": [ 225 | [ 226 | { 227 | "node": "AI Agent", 228 | "type": "main", 229 | "index": 0 230 | } 231 | ] 232 | ] 233 | }, 234 | "Azure OpenAI Chat Model": { 235 | "ai_languageModel": [ 236 | [] 237 | ] 238 | }, 239 | "Get Time": { 240 | "ai_tool": [ 241 | [] 242 | 
] 243 | }, 244 | "AI Agent": { 245 | "main": [ 246 | [ 247 | { 248 | "node": "Respond to Webhook", 249 | "type": "main", 250 | "index": 0 251 | } 252 | ] 253 | ] 254 | }, 255 | "Date & Time": { 256 | "ai_tool": [ 257 | [ 258 | { 259 | "node": "AI Agent", 260 | "type": "ai_tool", 261 | "index": 0 262 | } 263 | ] 264 | ] 265 | } 266 | }, 267 | "active": true, 268 | "settings": { 269 | "executionOrder": "v1" 270 | }, 271 | "versionId": "f8929012-f276-4c61-84e2-108146802a55", 272 | "meta": { 273 | "templateCredsSetupCompleted": true, 274 | "instanceId": "6350a4271a2777a60d73e3a3c6a9549015b6bfe8b8f285cb566cd69ef87215da" 275 | }, 276 | "id": "9yJgWkcblWV7ftWb", 277 | "tags": [] 278 | } -------------------------------------------------------------------------------- /pipelines/n8n/n8n.py: -------------------------------------------------------------------------------- 1 | """ 2 | title: n8n Pipeline 3 | author: owndev 4 | author_url: https://github.com/owndev/ 5 | project_url: https://github.com/owndev/Open-WebUI-Functions 6 | funding_url: https://github.com/sponsors/owndev 7 | n8n_template: https://github.com/owndev/Open-WebUI-Functions/blob/master/pipelines/n8n/Open_WebUI_Test_Agent.json 8 | version: 2.0.0 9 | license: Apache License 2.0 10 | description: A pipeline for interacting with N8N workflows, enabling seamless communication with various N8N workflows via configurable headers and robust error handling. This includes support for dynamic message handling and real-time interaction with N8N workflows. 11 | features: 12 | - Integrates with N8N for seamless communication. 13 | - Supports dynamic message handling. 14 | - Enables real-time interaction with N8N workflows. 15 | - Provides configurable status emissions. 16 | - Cloudflare Access support for secure communication. 
17 | - Encrypted storage of sensitive API keys 18 | """ 19 | 20 | from typing import Optional, Callable, Awaitable, Any, Dict 21 | from pydantic import BaseModel, Field, GetCoreSchemaHandler 22 | from cryptography.fernet import Fernet, InvalidToken 23 | import time 24 | import aiohttp 25 | import os 26 | import base64 27 | import hashlib 28 | import logging 29 | from open_webui.env import AIOHTTP_CLIENT_TIMEOUT, SRC_LOG_LEVELS 30 | from pydantic_core import core_schema 31 | 32 | 33 | # Simplified encryption implementation with automatic handling 34 | class EncryptedStr(str): 35 | """A string type that automatically handles encryption/decryption""" 36 | 37 | @classmethod 38 | def _get_encryption_key(cls) -> Optional[bytes]: 39 | """ 40 | Generate encryption key from WEBUI_SECRET_KEY if available 41 | Returns None if no key is configured 42 | """ 43 | secret = os.getenv("WEBUI_SECRET_KEY") 44 | if not secret: 45 | return None 46 | 47 | hashed_key = hashlib.sha256(secret.encode()).digest() 48 | return base64.urlsafe_b64encode(hashed_key) 49 | 50 | @classmethod 51 | def encrypt(cls, value: str) -> str: 52 | """ 53 | Encrypt a string value if a key is available 54 | Returns the original value if no key is available 55 | """ 56 | if not value or value.startswith("encrypted:"): 57 | return value 58 | 59 | key = cls._get_encryption_key() 60 | if not key: # No encryption if no key 61 | return value 62 | 63 | f = Fernet(key) 64 | encrypted = f.encrypt(value.encode()) 65 | return f"encrypted:{encrypted.decode()}" 66 | 67 | @classmethod 68 | def decrypt(cls, value: str) -> str: 69 | """ 70 | Decrypt an encrypted string value if a key is available 71 | Returns the original value if no key is available or decryption fails 72 | """ 73 | if not value or not value.startswith("encrypted:"): 74 | return value 75 | 76 | key = cls._get_encryption_key() 77 | if not key: # No decryption if no key 78 | return value[len("encrypted:") :] # Return without prefix 79 | 80 | try: 81 | encrypted_part = value[len("encrypted:") :] 82 | f = Fernet(key) 83 | decrypted = f.decrypt(encrypted_part.encode()) 84 | return decrypted.decode() 85 | except (InvalidToken, Exception): 86 | return value 87 | 88 | # Pydantic integration 89 | @classmethod 90 | def __get_pydantic_core_schema__( 91 | cls, _source_type: Any, _handler: GetCoreSchemaHandler 92 | ) -> core_schema.CoreSchema: 93 | return core_schema.union_schema( 94 | [ 95 | core_schema.is_instance_schema(cls), 96 | core_schema.chain_schema( 97 | [ 98 | core_schema.str_schema(), 99 | core_schema.no_info_plain_validator_function( 100 | lambda value: cls(cls.encrypt(value) if value else value) 101 | ), 102 | ] 103 | ), 104 | ], 105 | serialization=core_schema.plain_serializer_function_ser_schema( 106 | lambda instance: str(instance) 107 | ), 108 | ) 109 | 110 | def get_decrypted(self) -> str: 111 | """Get the decrypted value""" 112 | return self.decrypt(self) 113 | 114 | 115 | # Helper function for cleaning up aiohttp resources 116 | async def cleanup_session(session: Optional[aiohttp.ClientSession]) -> None: 117 | """ 118 | Clean up the aiohttp session. 
119 | 120 | Args: 121 | session: The ClientSession object to close 122 | """ 123 | if session: 124 | await session.close() 125 | 126 | 127 | class Pipe: 128 | class Valves(BaseModel): 129 | N8N_URL: str = Field( 130 | default="https:///webhook/", 131 | description="URL for the N8N webhook", 132 | ) 133 | N8N_BEARER_TOKEN: EncryptedStr = Field( 134 | default="", 135 | description="Bearer token for authenticating with the N8N webhook", 136 | ) 137 | INPUT_FIELD: str = Field( 138 | default="chatInput", 139 | description="Field name for the input message in the N8N payload", 140 | ) 141 | RESPONSE_FIELD: str = Field( 142 | default="output", 143 | description="Field name for the response message in the N8N payload", 144 | ) 145 | EMIT_INTERVAL: float = Field( 146 | default=2.0, description="Interval in seconds between status emissions" 147 | ) 148 | ENABLE_STATUS_INDICATOR: bool = Field( 149 | default=True, description="Enable or disable status indicator emissions" 150 | ) 151 | CF_ACCESS_CLIENT_ID: EncryptedStr = Field( 152 | default="", 153 | description="Only if behind Cloudflare: https://developers.cloudflare.com/cloudflare-one/identity/service-tokens/", 154 | ) 155 | CF_ACCESS_CLIENT_SECRET: EncryptedStr = Field( 156 | default="", 157 | description="Only if behind Cloudflare: https://developers.cloudflare.com/cloudflare-one/identity/service-tokens/", 158 | ) 159 | 160 | def __init__(self): 161 | self.name = "N8N Agent" 162 | self.valves = self.Valves() 163 | self.last_emit_time = 0 164 | self.log = logging.getLogger("n8n_pipeline") 165 | self.log.setLevel(SRC_LOG_LEVELS.get("OPENAI", logging.INFO)) 166 | 167 | async def emit_status( 168 | self, 169 | __event_emitter__: Callable[[dict], Awaitable[None]], 170 | level: str, 171 | message: str, 172 | done: bool, 173 | ): 174 | current_time = time.time() 175 | if ( 176 | __event_emitter__ 177 | and self.valves.ENABLE_STATUS_INDICATOR 178 | and ( 179 | current_time - self.last_emit_time >= self.valves.EMIT_INTERVAL or done 180 | ) 181 | ): 182 | await __event_emitter__( 183 | { 184 | "type": "status", 185 | "data": { 186 | "status": "complete" if done else "in_progress", 187 | "level": level, 188 | "description": message, 189 | "done": done, 190 | }, 191 | } 192 | ) 193 | self.last_emit_time = current_time 194 | 195 | def extract_event_info(self, event_emitter): 196 | if not event_emitter or not event_emitter.__closure__: 197 | return None, None 198 | for cell in event_emitter.__closure__: 199 | if isinstance(request_info := cell.cell_contents, dict): 200 | chat_id = request_info.get("chat_id") 201 | message_id = request_info.get("message_id") 202 | return chat_id, message_id 203 | return None, None 204 | 205 | def get_headers(self) -> Dict[str, str]: 206 | """ 207 | Constructs the headers for the API request. 208 | 209 | Returns: 210 | Dictionary containing the required headers for the API request. 
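        For example (illustrative values, with all valves set):
        {"Content-Type": "application/json", "Authorization": "Bearer <token>",
        "CF-Access-Client-Id": "<id>", "CF-Access-Client-Secret": "<secret>"}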
211 | """ 212 | headers = {"Content-Type": "application/json"} 213 | 214 | # Add bearer token if available 215 | bearer_token = self.valves.N8N_BEARER_TOKEN.get_decrypted() 216 | if bearer_token: 217 | headers["Authorization"] = f"Bearer {bearer_token}" 218 | 219 | # Add Cloudflare Access headers if available 220 | cf_client_id = self.valves.CF_ACCESS_CLIENT_ID.get_decrypted() 221 | if cf_client_id: 222 | headers["CF-Access-Client-Id"] = cf_client_id 223 | 224 | cf_client_secret = self.valves.CF_ACCESS_CLIENT_SECRET.get_decrypted() 225 | if cf_client_secret: 226 | headers["CF-Access-Client-Secret"] = cf_client_secret 227 | 228 | return headers 229 | 230 | async def pipe( 231 | self, 232 | body: dict, 233 | __user__: Optional[dict] = None, 234 | __event_emitter__: Callable[[dict], Awaitable[None]] = None, 235 | __event_call__: Callable[[dict], Awaitable[dict]] = None, 236 | ) -> Optional[dict]: 237 | await self.emit_status( 238 | __event_emitter__, "info", f"Calling {self.name} ...", False 239 | ) 240 | 241 | session = None 242 | n8n_response = None 243 | messages = body.get("messages", []) 244 | 245 | # Verify a message is available 246 | if messages: 247 | question = messages[-1]["content"] 248 | if "Prompt: " in question: 249 | question = question.split("Prompt: ")[-1] 250 | try: 251 | # Extract chat_id and message_id 252 | chat_id, message_id = self.extract_event_info(__event_emitter__) 253 | 254 | self.log.info(f"Starting N8N workflow request for chat ID: {chat_id}") 255 | 256 | # Prepare payload for N8N workflow 257 | payload = { 258 | "systemPrompt": f"{messages[0]['content'].split('Prompt: ')[-1]}", 259 | "user_id": __user__.get("id") if __user__ else None, 260 | "user_email": __user__.get("email") if __user__ else None, 261 | "user_name": __user__.get("name") if __user__ else None, 262 | "user_role": __user__.get("role") if __user__ else None, 263 | "chat_id": chat_id, 264 | "message_id": message_id, 265 | } 266 | payload[self.valves.INPUT_FIELD] = question 267 | 268 | # Get headers for the request 269 | headers = self.get_headers() 270 | 271 | # Invoke N8N workflow with aiohttp 272 | session = aiohttp.ClientSession( 273 | trust_env=True, 274 | timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT), 275 | ) 276 | 277 | self.log.debug(f"Sending request to N8N: {self.valves.N8N_URL}") 278 | async with session.post( 279 | self.valves.N8N_URL, json=payload, headers=headers 280 | ) as response: 281 | if response.status == 200: 282 | response_data = await response.json() 283 | self.log.debug( 284 | f"N8N response received with status code: {response.status}" 285 | ) 286 | n8n_response = response_data[self.valves.RESPONSE_FIELD] 287 | else: 288 | error_text = await response.text() 289 | self.log.error( 290 | f"N8N error: Status {response.status} - {error_text}" 291 | ) 292 | raise Exception(f"Error: {response.status} - {error_text}") 293 | 294 | # Set assistant message with chain reply 295 | body["messages"].append({"role": "assistant", "content": n8n_response}) 296 | 297 | except Exception as e: 298 | error_msg = f"Error during sequence execution: {str(e)}" 299 | self.log.exception(error_msg) 300 | await self.emit_status( 301 | __event_emitter__, 302 | "error", 303 | error_msg, 304 | True, 305 | ) 306 | return {"error": str(e)} 307 | finally: 308 | if session: 309 | await cleanup_session(session) 310 | 311 | # If no message is available alert user 312 | else: 313 | error_msg = "No messages found in the request body" 314 | self.log.warning(error_msg) 315 | await self.emit_status( 316 
| __event_emitter__, 317 | "error", 318 | error_msg, 319 | True, 320 | ) 321 | body["messages"].append( 322 | { 323 | "role": "assistant", 324 | "content": error_msg, 325 | } 326 | ) 327 | 328 | await self.emit_status(__event_emitter__, "info", "Complete", True) 329 | return n8n_response 330 | -------------------------------------------------------------------------------- /pixi.lock: -------------------------------------------------------------------------------- 1 | version: 6 2 | environments: 3 | default: 4 | channels: 5 | - url: https://conda.anaconda.org/conda-forge/ 6 | packages: 7 | linux-64: 8 | - conda: https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2 9 | - conda: https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2 10 | - conda: https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h4bc722e_7.conda 11 | - conda: https://conda.anaconda.org/conda-forge/noarch/ca-certificates-2025.4.26-hbd8a1cb_0.conda 12 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.43-h712a8e2_4.conda 13 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.7.0-h5888daf_0.conda 14 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.6-h2dba641_1.conda 15 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-15.1.0-h767d61c_2.conda 16 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-15.1.0-h69a702a_2.conda 17 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgomp-15.1.0-h767d61c_2.conda 18 | - conda: https://conda.anaconda.org/conda-forge/linux-64/liblzma-5.8.1-hb9d3cd8_1.conda 19 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda 20 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.49.2-hee588c1_0.conda 21 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-15.1.0-h8f9b012_2.conda 22 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.38.1-h0b41bf4_0.conda 23 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda 24 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda 25 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.5-h2d0b736_3.conda 26 | - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.5.0-h7b32b05_1.conda 27 | - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.10-h9e4cc4f_0_cpython.conda 28 | - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.12-7_cp312.conda 29 | - conda: https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8c095d6_2.conda 30 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ruff-0.11.10-py312h1d08497_1.conda 31 | - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda 32 | - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025b-h78e105d_0.conda 33 | packages: 34 | - conda: https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2 35 | sha256: fe51de6107f9edc7aa4f786a70f4a883943bc9d39b3bb7307c04c41410990726 36 | md5: d7c89558ba9fa0495403155b64376d81 37 | license: None 38 | size: 2562 39 | timestamp: 1578324546067 40 | - conda: https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2 41 | build_number: 16 42 | sha256: fbe2c5e56a653bebb982eda4876a9178aedfc2b545f25d0ce9c4c0b508253d22 43 | md5: 73aaf86a425cc6e73fcf236a5a46396d 44 | depends: 45 | - _libgcc_mutex 0.1 
conda_forge 46 | - libgomp >=7.5.0 47 | constrains: 48 | - openmp_impl 9999 49 | license: BSD-3-Clause 50 | license_family: BSD 51 | size: 23621 52 | timestamp: 1650670423406 53 | - conda: https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h4bc722e_7.conda 54 | sha256: 5ced96500d945fb286c9c838e54fa759aa04a7129c59800f0846b4335cee770d 55 | md5: 62ee74e96c5ebb0af99386de58cf9553 56 | depends: 57 | - __glibc >=2.17,<3.0.a0 58 | - libgcc-ng >=12 59 | license: bzip2-1.0.6 60 | license_family: BSD 61 | size: 252783 62 | timestamp: 1720974456583 63 | - conda: https://conda.anaconda.org/conda-forge/noarch/ca-certificates-2025.4.26-hbd8a1cb_0.conda 64 | sha256: 2a70ed95ace8a3f8a29e6cd1476a943df294a7111dfb3e152e3478c4c889b7ac 65 | md5: 95db94f75ba080a22eb623590993167b 66 | depends: 67 | - __unix 68 | license: ISC 69 | size: 152283 70 | timestamp: 1745653616541 71 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.43-h712a8e2_4.conda 72 | sha256: db73f38155d901a610b2320525b9dd3b31e4949215c870685fd92ea61b5ce472 73 | md5: 01f8d123c96816249efd255a31ad7712 74 | depends: 75 | - __glibc >=2.17,<3.0.a0 76 | constrains: 77 | - binutils_impl_linux-64 2.43 78 | license: GPL-3.0-only 79 | license_family: GPL 80 | size: 671240 81 | timestamp: 1740155456116 82 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.7.0-h5888daf_0.conda 83 | sha256: 33ab03438aee65d6aa667cf7d90c91e5e7d734c19a67aa4c7040742c0a13d505 84 | md5: db0bfbe7dd197b68ad5f30333bae6ce0 85 | depends: 86 | - __glibc >=2.17,<3.0.a0 87 | - libgcc >=13 88 | constrains: 89 | - expat 2.7.0.* 90 | license: MIT 91 | license_family: MIT 92 | size: 74427 93 | timestamp: 1743431794976 94 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.6-h2dba641_1.conda 95 | sha256: 764432d32db45466e87f10621db5b74363a9f847d2b8b1f9743746cd160f06ab 96 | md5: ede4673863426c0883c0063d853bbd85 97 | depends: 98 | - __glibc >=2.17,<3.0.a0 99 | - libgcc >=13 100 | license: MIT 101 | license_family: MIT 102 | size: 57433 103 | timestamp: 1743434498161 104 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-15.1.0-h767d61c_2.conda 105 | sha256: 0024f9ab34c09629621aefd8603ef77bf9d708129b0dd79029e502c39ffc2195 106 | md5: ea8ac52380885ed41c1baa8f1d6d2b93 107 | depends: 108 | - __glibc >=2.17,<3.0.a0 109 | - _openmp_mutex >=4.5 110 | constrains: 111 | - libgcc-ng ==15.1.0=*_2 112 | - libgomp 15.1.0 h767d61c_2 113 | license: GPL-3.0-only WITH GCC-exception-3.1 114 | license_family: GPL 115 | size: 829108 116 | timestamp: 1746642191935 117 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-15.1.0-h69a702a_2.conda 118 | sha256: 0ab5421a89f090f3aa33841036bb3af4ed85e1f91315b528a9d75fab9aad51ae 119 | md5: ddca86c7040dd0e73b2b69bd7833d225 120 | depends: 121 | - libgcc 15.1.0 h767d61c_2 122 | license: GPL-3.0-only WITH GCC-exception-3.1 123 | license_family: GPL 124 | size: 34586 125 | timestamp: 1746642200749 126 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libgomp-15.1.0-h767d61c_2.conda 127 | sha256: 05fff3dc7e80579bc28de13b511baec281c4343d703c406aefd54389959154fb 128 | md5: fbe7d535ff9d3a168c148e07358cd5b1 129 | depends: 130 | - __glibc >=2.17,<3.0.a0 131 | license: GPL-3.0-only WITH GCC-exception-3.1 132 | license_family: GPL 133 | size: 452635 134 | timestamp: 1746642113092 135 | - conda: https://conda.anaconda.org/conda-forge/linux-64/liblzma-5.8.1-hb9d3cd8_1.conda 136 | sha256: eeff241bddc8f1b87567dd6507c9f441f7f472c27f0860a07628260c000ef27c 137 | md5: 
a76fd702c93cd2dfd89eff30a5fd45a8 138 | depends: 139 | - __glibc >=2.17,<3.0.a0 140 | - libgcc >=13 141 | constrains: 142 | - xz 5.8.1.* 143 | - xz ==5.8.1=*_1 144 | license: 0BSD 145 | size: 112845 146 | timestamp: 1746531470399 147 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda 148 | sha256: 26d77a3bb4dceeedc2a41bd688564fe71bf2d149fdcf117049970bc02ff1add6 149 | md5: 30fd6e37fe21f86f4bd26d6ee73eeec7 150 | depends: 151 | - libgcc-ng >=12 152 | license: LGPL-2.1-only 153 | license_family: GPL 154 | size: 33408 155 | timestamp: 1697359010159 156 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.49.2-hee588c1_0.conda 157 | sha256: 525d4a0e24843f90b3ff1ed733f0a2e408aa6dd18b9d4f15465595e078e104a2 158 | md5: 93048463501053a00739215ea3f36324 159 | depends: 160 | - __glibc >=2.17,<3.0.a0 161 | - libgcc >=13 162 | - libzlib >=1.3.1,<2.0a0 163 | license: Unlicense 164 | size: 916313 165 | timestamp: 1746637007836 166 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-15.1.0-h8f9b012_2.conda 167 | sha256: 6ae3d153e78f6069d503d9309f2cac6de5b93d067fc6433160a4c05226a5dad4 168 | md5: 1cb1c67961f6dd257eae9e9691b341aa 169 | depends: 170 | - __glibc >=2.17,<3.0.a0 171 | - libgcc 15.1.0 h767d61c_2 172 | license: GPL-3.0-only WITH GCC-exception-3.1 173 | license_family: GPL 174 | size: 3902355 175 | timestamp: 1746642227493 176 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.38.1-h0b41bf4_0.conda 177 | sha256: 787eb542f055a2b3de553614b25f09eefb0a0931b0c87dbcce6efdfd92f04f18 178 | md5: 40b61aab5c7ba9ff276c41cfffe6b80b 179 | depends: 180 | - libgcc-ng >=12 181 | license: BSD-3-Clause 182 | license_family: BSD 183 | size: 33601 184 | timestamp: 1680112270483 185 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda 186 | sha256: 6ae68e0b86423ef188196fff6207ed0c8195dd84273cb5623b85aa08033a410c 187 | md5: 5aa797f8787fe7a17d1b0821485b5adc 188 | depends: 189 | - libgcc-ng >=12 190 | license: LGPL-2.1-or-later 191 | size: 100393 192 | timestamp: 1702724383534 193 | - conda: https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda 194 | sha256: d4bfe88d7cb447768e31650f06257995601f89076080e76df55e3112d4e47dc4 195 | md5: edb0dca6bc32e4f4789199455a1dbeb8 196 | depends: 197 | - __glibc >=2.17,<3.0.a0 198 | - libgcc >=13 199 | constrains: 200 | - zlib 1.3.1 *_2 201 | license: Zlib 202 | license_family: Other 203 | size: 60963 204 | timestamp: 1727963148474 205 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.5-h2d0b736_3.conda 206 | sha256: 3fde293232fa3fca98635e1167de6b7c7fda83caf24b9d6c91ec9eefb4f4d586 207 | md5: 47e340acb35de30501a76c7c799c41d7 208 | depends: 209 | - __glibc >=2.17,<3.0.a0 210 | - libgcc >=13 211 | license: X11 AND BSD-3-Clause 212 | size: 891641 213 | timestamp: 1738195959188 214 | - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.5.0-h7b32b05_1.conda 215 | sha256: b4491077c494dbf0b5eaa6d87738c22f2154e9277e5293175ec187634bd808a0 216 | md5: de356753cfdbffcde5bb1e86e3aa6cd0 217 | depends: 218 | - __glibc >=2.17,<3.0.a0 219 | - ca-certificates 220 | - libgcc >=13 221 | license: Apache-2.0 222 | license_family: Apache 223 | size: 3117410 224 | timestamp: 1746223723843 225 | - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.10-h9e4cc4f_0_cpython.conda 226 | sha256: 4dc1da115805bd353bded6ab20ff642b6a15fcc72ac2f3de0e1d014ff3612221 227 | md5: a41d26cd4d47092d683915d058380dec 228 | depends: 229 | 
- __glibc >=2.17,<3.0.a0 230 | - bzip2 >=1.0.8,<2.0a0 231 | - ld_impl_linux-64 >=2.36.1 232 | - libexpat >=2.7.0,<3.0a0 233 | - libffi >=3.4.6,<3.5.0a0 234 | - libgcc >=13 235 | - liblzma >=5.8.1,<6.0a0 236 | - libnsl >=2.0.1,<2.1.0a0 237 | - libsqlite >=3.49.1,<4.0a0 238 | - libuuid >=2.38.1,<3.0a0 239 | - libxcrypt >=4.4.36 240 | - libzlib >=1.3.1,<2.0a0 241 | - ncurses >=6.5,<7.0a0 242 | - openssl >=3.5.0,<4.0a0 243 | - readline >=8.2,<9.0a0 244 | - tk >=8.6.13,<8.7.0a0 245 | - tzdata 246 | constrains: 247 | - python_abi 3.12.* *_cp312 248 | license: Python-2.0 249 | size: 31279179 250 | timestamp: 1744325164633 251 | - conda: https://conda.anaconda.org/conda-forge/noarch/python_abi-3.12-7_cp312.conda 252 | build_number: 7 253 | sha256: a1bbced35e0df66cc713105344263570e835625c28d1bdee8f748f482b2d7793 254 | md5: 0dfcdc155cf23812a0c9deada86fb723 255 | constrains: 256 | - python 3.12.* *_cpython 257 | license: BSD-3-Clause 258 | license_family: BSD 259 | size: 6971 260 | timestamp: 1745258861359 261 | - conda: https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8c095d6_2.conda 262 | sha256: 2d6d0c026902561ed77cd646b5021aef2d4db22e57a5b0178dfc669231e06d2c 263 | md5: 283b96675859b20a825f8fa30f311446 264 | depends: 265 | - libgcc >=13 266 | - ncurses >=6.5,<7.0a0 267 | license: GPL-3.0-only 268 | license_family: GPL 269 | size: 282480 270 | timestamp: 1740379431762 271 | - conda: https://conda.anaconda.org/conda-forge/linux-64/ruff-0.11.10-py312h1d08497_1.conda 272 | sha256: e05bd361165ea880f146f13285c20fa46a26f0bd93376a2e235e1831c67bb1a7 273 | md5: 419131a2969cc17935fa9e435f4970d7 274 | depends: 275 | - __glibc >=2.17,<3.0.a0 276 | - libgcc >=13 277 | - libstdcxx >=13 278 | - python >=3.12,<3.13.0a0 279 | - python_abi 3.12.* *_cp312 280 | constrains: 281 | - __glibc >=2.17 282 | license: MIT 283 | license_family: MIT 284 | size: 8217011 285 | timestamp: 1747401528574 286 | - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda 287 | sha256: e0569c9caa68bf476bead1bed3d79650bb080b532c64a4af7d8ca286c08dea4e 288 | md5: d453b98d9c83e71da0741bb0ff4d76bc 289 | depends: 290 | - libgcc-ng >=12 291 | - libzlib >=1.2.13,<2.0.0a0 292 | license: TCL 293 | license_family: BSD 294 | size: 3318875 295 | timestamp: 1699202167581 296 | - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025b-h78e105d_0.conda 297 | sha256: 5aaa366385d716557e365f0a4e9c3fca43ba196872abbbe3d56bb610d131e192 298 | md5: 4222072737ccff51314b5ece9c7d6f5a 299 | license: LicenseRef-Public-Domain 300 | size: 122968 301 | timestamp: 1742727099393 302 | -------------------------------------------------------------------------------- /pixi.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | authors = ["owndev"] 3 | channels = ["conda-forge"] 4 | name = "Open-WebUI-Functions" 5 | platforms = ["linux-64"] 6 | 7 | [tasks] 8 | format = "ruff format" 9 | lint = { cmd = "ruff check", depends-on = ["format"] } 10 | 11 | [dependencies] 12 | ruff = ">=0.11.10,<0.12" 13 | -------------------------------------------------------------------------------- /ruff.toml: -------------------------------------------------------------------------------- 1 | line-length = 88 2 | indent-width = 4 --------------------------------------------------------------------------------