├── .github ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE │   ├── bug_report.md │   └── feature_request.md ├── PULL_REQUEST_TEMPLATE │   └── pull_request.md ├── SECURITY.md ├── pplx-logo-dark.png ├── pplx-logo-light.png └── version.json ├── .gitignore ├── LICENSE ├── README.md ├── cli.py ├── client.py ├── config.py ├── example.env ├── examples ├── example_chat.py └── example_search.py ├── loading.py ├── perplexity.py └── requirements.txt /.github/CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | All notable changes to the project will be documented in this file. 4 | 5 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 6 | 7 | ## [1.2.2] - 2024-05-05 8 | 9 | ### Added 10 | - Added support for `llama-3-sonar-small-32k-chat`, `llama-3-sonar-large-32k-chat`, `llama-3-sonar-small-32k-online` and `llama-3-sonar-large-32k-online`. 11 | 12 | ### Changed 13 | - Updated default base models in `config.py` to use `llama-3-sonar-large-32k-chat` for chat functionalities and `llama-3-sonar-large-32k-online` for search functionalities. 14 | 15 | ### Removed 16 | - Removed references to older models such as `sonar-small-chat`, `sonar-medium-chat`, `sonar-small-online` and `sonar-medium-online` in the code and documentation to streamline the use of newer `llama-3-sonar` models. 17 | - Removed references to `codellama-70b-instruct`, `mistral-7b-instruct` and `mixtral-8x22b-instruct`, as Perplexity Labs no longer supports these models. 
18 | 19 | ## [1.2.1] - 2024-04-22 20 | 21 | ### Added 22 | - Added support for `llama-3` 23 | - Added syntax for exiting/quitting the program to the beginning of the conversation in `perplexity.py` 24 | - Added goodbye syntax to the end of the search request in `perplexity.py` 25 | 26 | ### Changed 27 | - Changed variable names in `config.py` 28 | - Set default models to models from config or env in `perplexity.py` 29 | - Updated syntax in `cli.py` 30 | 31 | ### Removed 32 | - Removed unused timeout in `client.py` 33 | 34 | ## [1.2.0] - 2024-03-23 35 | 36 | ### Added 37 | - config.py: Introduced a configuration management system to load environment variables and default settings. 38 | - client.py: Implemented a new client architecture for making API requests, including streaming support. 39 | 40 | ### Improved 41 | - loading.py: No changes detected in functionality, the code remains identical. 42 | - perplexity.py: Enhanced with a modular approach separating chat and search functionalities into distinct classes with improved error handling. 43 | - cli.py: Major overhaul to CLI interface, incorporating new options and better help documentation, facilitating both chat and search operations with the new client architecture. 44 | 45 | ### Changed 46 | - Environment variable management has been centralized and now requires specific keys (`PERPLEXITY_API_KEY`, `PERPLEXITY_DEFAULT_CHAT_MODEL`, `PERPLEXITY_DEFAULT_SEARCH_MODEL`, `PERPLEXITY_BASE_URL`, `PERPLEXITY_TIMEOUT`), but has defaults already set for ease of use. 47 | - The base API interaction logic has been encapsulated within the Client class, abstracting the complexities of request handling, including streaming. 48 | 49 | ### Removed 50 | - `pplx.py`, `pplx_cli.py`, `base_api.py`, `pplx_search.py`, and `pplx_chat.py` have been replaced with the new `client.py`, `config.py`, `perplexity.py`, and `cli.py` files, indicating a structural overhaul. 
51 | - The direct dependency on .env file loading within API wrapper files has been removed, now managed centrally in `config.py`. 52 | 53 | ### Fixed 54 | - The handling of API keys and configuration settings has been standardized, fixing inconsistencies in how environment variables were previously managed. 55 | 56 | ## [1.1.2] - 2024-02-25 57 | 58 | `base_api.py` 59 | 60 | ### Improved 61 | 62 | - Enhanced error handling in the `post` method to provide clearer error messages and gracefully exit the application upon encountering a critical error, improving user experience and debuggability. 63 | - Added a `finally` block to ensure that the loading animation is always stopped, even if an error occurs, preventing potential terminal display issues. 64 | - Modified the loading animation to start only in non-streaming requests to avoid overlap with streaming output, enhancing output readability. 65 | - Streamlined the API key retrieval process with a more descriptive error message if the API key is not found, aiding users in configuration setup. 66 | 67 | ### Fixed 68 | 69 | - Fixed an issue where the loading animation could potentially continue running or the cursor remained hidden if an exception was thrown during a request. 70 | - Addressed a potential bug by ensuring the terminal is cleared only on error, preserving user input and previous interactions for reference. 71 | 72 | `pplx_cli.py` 73 | 74 | ### Added 75 | 76 | - Introduced a custom argparse formatter `CustomFormatter` combining `ArgumentDefaultsHelpFormatter` and `RawDescriptionHelpFormatter` to improve help text readability. 77 | - Implemented detailed command descriptions and examples in the CLI help output, providing immediate guidance to users without external documentation. 78 | 79 | ### Changed 80 | 81 | - Unified the `-a`, `--api_key` argument declaration across both chat and search commands to improve code maintainability. 
82 | - Implemented a shared function to add common arguments to both chat and search parsers, reducing code duplication. 83 | - Updated argument descriptions for enhanced clarity, making it easier for users to understand the purpose and usage of each command. 84 | - Modified all argument metavariables to an empty string, streamlining the help output by removing uppercase type hints for a cleaner interface. 85 | 86 | ### Removed 87 | 88 | - Eliminated redundant argument declarations, specifically for `--api_key` in both chat and search subparsers, centralizing its declaration for cleaner code. 89 | 90 | ## [1.1.1] - 2024-02-23 91 | 92 | ### Added 93 | - Support for Perplexity Labs' latest `sonar-small-chat`, `sonar-small-online`, `sonar-medium-chat`, and `sonar-medium-online` AI models, offering improvements in cost-efficiency, speed, and performance. 94 | - Extended context window support, now accommodating up to 16k tokens for models like `mixtral-8x7b-instruct` and all Perplexity models. 95 | - Increased public rate limits across all models to accommodate approximately 2x more requests. 96 | 97 | > [!WARNING] 98 | > On March 15, the `pplx-70b-chat`, `pplx-70b-online`, `llama-2-70b-chat`, and `codellama-34b-instruct` models will no longer be available through the Perplexity API. 99 | 100 | ## [1.1.0] - 2024-02-22 101 | 102 | ### Added 103 | - `loading.py` for implementing a loading spinner, enhancing user experience during network requests. 104 | - `base_api.py` introducing a `BaseAPI` class for shared API functionality, including request handling and streaming support. 105 | - `pplx_chat.py` and `pplx_search.py` classes, extending `BaseAPI` to separate concerns for chat and search functionalities. 106 | - Detailed error handling and environment variable support for API key configuration, increasing usability and flexibility. 
107 | - A comprehensive command-line interface setup in `pplx_cli.py`, facilitating the use of both chat and search functionalities through a unified interface. 108 | 109 | ### Changed 110 | - Modularized the codebase into separate files (`pplx_cli.py`, `loading.py`, `base_api.py`, `pplx_chat.py`, `pplx_search.py`), improving code organization and maintainability. 111 | - Enhanced the command-line interface with more detailed options, including model selection, temperature, top_p, top_k, presence penalty, and frequency penalty settings, allowing for a more customized user experience. 112 | - Updated the streaming functionality to use a loading spinner, providing real-time feedback during asynchronous operations. 113 | - Improved API key management by supporting environment variables and .env files, simplifying configuration. 114 | 115 | ### Removed 116 | - The single-file script structure, replacing it with a more modular and scalable project architecture. 117 | - Direct use of `requests` and `json` in the CLI functions, moving this logic to the `BaseAPI` class to reduce redundancy. 118 | 119 | ### Fixed 120 | - Fixed issue where streaming was not working. 121 | - Fixed issue where command line parameters were not being set. 122 | - Fixed argument parsing structure in `pplx_cli.py` with subparsers for chat and search commands, enabling a more structured and versatile command-line interface. 123 | 124 | ### Security 125 | - Implemented secure API key handling through environment variables and .env files, reducing the risk of key exposure. 126 | 127 | ## [1.0.1] - 2024-01-30 128 | 129 | ### Added 130 | - Added support for new model `codellama-70b-instruct` by Meta. 131 | 132 | ## [1.0.0] - 2024-01-10 133 | 134 | ### Added 135 | - Initial release. 
136 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 6 | 7 | ## Our Standards 8 | 9 | Examples of behavior that contributes to creating a positive environment include: 10 | 11 | - Using welcoming and inclusive language 12 | - Being respectful of differing viewpoints and experiences 13 | - Gracefully accepting constructive criticism 14 | - Focusing on what is best for the community 15 | - Showing empathy towards other community members 16 | 17 | Examples of unacceptable behavior by participants include: 18 | 19 | - The use of sexualized language or imagery and unwelcome sexual attention or advances 20 | - Trolling, insulting/derogatory comments, and personal or political attacks 21 | - Public or private harassment 22 | - Publishing others' private information, such as a physical or electronic address, without explicit permission 23 | - Other conduct which could reasonably be considered inappropriate in a professional setting 24 | 25 | ## Our Responsibilities 26 | 27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
28 | 29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. 30 | 31 | ## Scope 32 | 33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 34 | 35 | ## Enforcement 36 | 37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. 38 | 39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 40 | 41 | ## Attribution 42 | 43 | This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 1.4, available at [https://www.contributor-covenant.org/version/1/4/code-of-conduct.html](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html). 
-------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to the project 2 | 3 | We welcome contributions! This document provides guidelines and instructions for contributing to this project. 4 | 5 | ## Getting Started 6 | 7 | 1. **Fork the Repository**: Begin by forking the repository to your GitHub account. 8 | 2. **Clone the Repository**: Clone your forked repository to your local machine. 9 | 3. **Create a Branch**: Create a new branch for your contribution. 10 | 11 | ## Contribution Guidelines 12 | 13 | - **Code Style**: Follow the established code style in the project. 14 | - **Commit Messages**: Write meaningful commit messages that clearly explain the changes. 15 | - **Pull Requests**: Submit pull requests to the `main` branch. Ensure your code is well-tested and documented. 16 | 17 | ## Submitting Pull Requests 18 | 19 | 1. **Update Your Fork**: Regularly sync your fork with the main repository to keep it up-to-date. 20 | 2. **Make Your Changes**: Implement your feature or fix. 21 | 3. **Test Your Changes**: Ensure your changes do not break existing functionality. 22 | 4. **Document Your Changes**: Update the README or documentation if necessary. 23 | 5. **Submit a Pull Request**: Push your changes to your fork and open a pull request against the main repository. 24 | 25 | ## Reporting Issues 26 | 27 | - Use the GitHub issue tracker to report bugs or suggest enhancements. 28 | - Provide as much information as possible, including steps to reproduce the issue. 29 | - Check if the issue has already been reported to avoid duplicates. 30 | 31 | ## Code of Conduct 32 | 33 | By contributing to this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md). 34 | 35 | ## Questions or Suggestions 36 | 37 | Feel free to open an issue or contact the maintainers if you have any questions or suggestions. 
38 | 39 | Thank you for contributing! -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 
15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE/pull_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Pull Request 3 | about: Propose changes to the codebase 4 | title: '[PR] ' 5 | labels: enhancement 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Description of the Changes** 11 | A clear and concise description of what the pull request does. 12 | 13 | **Related Issue** 14 | Link to the issue that this pull request addresses. 15 | 16 | **Motivation and Context** 17 | Why is this change required? What problem does it solve? 18 | 19 | **How Has This Been Tested?** 20 | Please describe in detail how you tested your changes. 
21 | 22 | **Screenshots (if appropriate):** 23 | 24 | **Types of Changes** 25 | - [ ] Bug fix (non-breaking change which fixes an issue) 26 | - [ ] New feature (non-breaking change which adds functionality) 27 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) 28 | 29 | **Checklist:** 30 | - [ ] My code follows the project's code style 31 | - [ ] I have read the CONTRIBUTING document 32 | - [ ] I have added tests to cover my changes 33 | - [ ] All new and existing tests passed 34 | -------------------------------------------------------------------------------- /.github/SECURITY.md: -------------------------------------------------------------------------------- 1 | # Security Policy 2 | 3 | ## Supported Versions 4 | 5 | | Version | Supported | 6 | | ------- | ------------------ | 7 | | 1.0.x | :white_check_mark: | 8 | | < 1.0 | :x: | 9 | 10 | ## Reporting a Vulnerability 11 | 12 | We take the security of our software seriously. If you have discovered a security vulnerability in the project, please follow these steps to report it responsibly: 13 | 14 | 1. **Do Not Publish the Vulnerability**: Avoid sharing the details of the vulnerability in public forums, issues, or other public channels. 15 | 16 | 2. **Email the Maintainers**: Send an email to the maintainers of the project. Provide a clear description of the vulnerability, including steps to reproduce it. 17 | 18 | 3. **Wait for Response**: Allow a reasonable amount of time for the maintainers to respond to your report and address the vulnerability. 19 | 20 | 4. **Disclosure**: After the issue has been resolved and announced, you may consider disclosing the issue to the public in a responsible manner. 21 | 22 | We appreciate your efforts to responsibly disclose your findings and will make every effort to acknowledge your contributions. 23 | 24 | ## Contact Information 25 | 26 | For any security concerns, please contact us. 
-------------------------------------------------------------------------------- /.github/pplx-logo-dark.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RMNCLDYO/perplexity-ai-toolkit/c9c2680550629b80f9377cc9ccce8029a2bfeff5/.github/pplx-logo-dark.png -------------------------------------------------------------------------------- /.github/pplx-logo-light.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RMNCLDYO/perplexity-ai-toolkit/c9c2680550629b80f9377cc9ccce8029a2bfeff5/.github/pplx-logo-light.png -------------------------------------------------------------------------------- /.github/version.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "v1.2.2" 3 | } 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 
106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 113 | __pypackages__/ 114 | 115 | # Celery stuff 116 | celerybeat-schedule 117 | celerybeat.pid 118 | 119 | # SageMath parsed files 120 | *.sage.py 121 | 122 | # Environments 123 | .env 124 | .venv 125 | env/ 126 | venv/ 127 | ENV/ 128 | env.bak/ 129 | venv.bak/ 130 | 131 | # Spyder project settings 132 | .spyderproject 133 | .spyproject 134 | 135 | # Rope project settings 136 | .ropeproject 137 | 138 | # mkdocs documentation 139 | /site 140 | 141 | # mypy 142 | .mypy_cache/ 143 | .dmypy.json 144 | dmypy.json 145 | 146 | # Pyre type checker 147 | .pyre/ 148 | 149 | # pytype static type analyzer 150 | .pytype/ 151 | 152 | # Cython debug symbols 153 | cython_debug/ 154 | 155 | # PyCharm 156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 158 | # and can be added to the global gitignore or merged into this file. For a more nuclear 159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
160 | #.idea/ 161 | .DS_Store 162 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 RMNCLDYO 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |
1 | [README header: Perplexity AI logo (dark/light variants), "Perplexity AI Toolkit" banner, and badges: maintained - yes, contributions - welcome] 26 | 
27 | 28 | ## Overview 29 | The Perplexity AI Toolkit makes it easy to use Perplexity Labs' `Sonar` language models (built on top of Meta's latest and most advanced model, `Llama 3.1`) for creating chatbots, generating text, and searching the web (***in real-time***). It is designed for everyone, from beginners to experienced developers: simple commands add AI features to a project quickly, while the full suite of advanced options available via the API remains accessible for fine-grained customization and control. 30 | 31 | ## Key Features 32 | - **Conversational AI**: Create interactive, real-time chat experiences (chatbots) or AI assistants. 33 | - **Real-Time Web Search**: Conduct online searches in real-time with precise query responses. 34 | - **Highly Customizable**: Tailor settings like streaming output, system prompts, sampling temperature and more to suit your specific requirements. 35 | - **Lightweight Integration**: Efficiently designed with minimal dependencies, requiring only the `requests` package for core functionality. 36 | 37 | ## Prerequisites 38 | - `Python 3.x` 39 | - An API key from Perplexity AI 40 | 41 | ## Dependencies 42 | The following Python packages are required: 43 | - `requests`: For making HTTP requests to the Perplexity API. 44 | 45 | The following Python packages are optional: 46 | - `python-dotenv`: For managing API keys and other environment variables. 47 | 48 | ## Installation 49 | To use the Perplexity AI Toolkit, clone the repository to your local machine and install the required Python packages. 
50 | 51 | Clone the repository: 52 | ```bash 53 | git clone https://github.com/RMNCLDYO/perplexity-ai-toolkit.git 54 | ``` 55 | 56 | Navigate to the repository folder: 57 | ```bash 58 | cd perplexity-ai-toolkit 59 | ``` 60 | 61 | Install the required dependencies: 62 | ```bash 63 | pip install -r requirements.txt 64 | ``` 65 | 66 | ## Configuration 67 | 1. Obtain an API key from [Perplexity](https://www.perplexity.ai/). 68 | 2. You have three options for managing your API key: 69 | 
70 | <details><summary>Click here to view the API key configuration options</summary> 71 | 72 | - **Setting it as an environment variable on your device (recommended for everyday use):** 73 | - Navigate to your terminal. 74 | - Add your API key like so: 75 | ```shell 76 | export PERPLEXITY_API_KEY=your_api_key 77 | ``` 78 | This method allows the API key to be loaded automatically when using the wrapper or CLI. 79 | 80 | - **Using an .env file (recommended for development):** 81 | - Install python-dotenv if you haven't already: `pip install python-dotenv`. 82 | - Create a .env file in the project's root directory. 83 | - Add your API key to the .env file like so: 84 | ```makefile 85 | PERPLEXITY_API_KEY=your_api_key 86 | ``` 87 | This method allows the API key to be loaded automatically when using the wrapper or CLI, assuming you have python-dotenv installed and set up correctly. 88 | 89 | - **Direct Input:** 90 | - If you prefer not to use a `.env` file, you can directly pass your API key as an argument to the CLI or the wrapper functions. 91 | 92 | ***CLI*** 93 | ```shell 94 | --api_key "your_api_key" 95 | ``` 96 | ***Wrapper*** 97 | ```python 98 | api_key="your_api_key" 99 | ``` 100 | This method requires manually inputting your API key each time you initiate an API call, ensuring flexibility for different deployment environments. 101 | </details> 
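If you want the same precedence in your own scripts, the lookup order described above (direct input first, then the environment, which python-dotenv can populate from a .env file) can be sketched as follows. Note that `resolve_api_key` is a hypothetical helper for illustration, not a function shipped with this toolkit:

```python
import os

def resolve_api_key(direct_key=None):
    """Hypothetical helper mirroring the key lookup order:
    a directly supplied key wins; otherwise fall back to the
    PERPLEXITY_API_KEY environment variable."""
    key = direct_key or os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise ValueError(
            "No API key found. Pass api_key directly or set PERPLEXITY_API_KEY."
        )
    return key
```

With the environment variable set, `resolve_api_key()` returns it; passing `resolve_api_key("your_api_key")` overrides the environment.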
102 | 103 | ## Usage 104 | The Perplexity AI Toolkit can be used in two different modes: `Chat` and `Search`. Each mode is designed for specific types of interactions with the language models. 105 | 106 | ## Chat Mode 107 | Chat mode is intended for chatting with an AI model (similar to a chatbot) or building conversational applications. 108 | 109 | #### Example Usage 110 | 111 | ***CLI*** 112 | ```bash 113 | python cli.py --chat 114 | ``` 115 | 116 | ***Wrapper*** 117 | ```python 118 | from perplexity import Chat 119 | 120 | Chat().run() 121 | ``` 122 | 123 | > An executable version of this example can be found [here](./examples/example_chat.py). (*You must move this file to the root folder before running the program.*) 124 | 125 | ## Search Mode 126 | Search mode is intended for searching online (in real-time) for a single query, as Perplexity does not support multi-turn conversations with its online models. 127 | 128 | #### Example Usage 129 | 130 | ***CLI*** 131 | ```bash 132 | python cli.py --search --query "What is today's date?" 133 | ``` 134 | 135 | ***Wrapper*** 136 | ```python 137 | from perplexity import Search 138 | 139 | Search().run(query="What is today's date?") 140 | ``` 141 | 142 | > An executable version of this example can be found [here](./examples/example_search.py). 
(*You must move this file to the root folder before running the program.*) 143 | 144 | *Search mode is limited to 'online' models, such as `llama-3.1-sonar-small-128k-online`, `llama-3.1-sonar-large-128k-online` and `llama-3.1-sonar-huge-128k-online`.* 145 | 146 | ## Advanced Configuration 147 | 148 | ### CLI and Wrapper Options 149 | | **Description** | **CLI Flags** | **CLI Usage** | **Wrapper Usage** | 150 | |------------------------------------------|------------------------------|-----------------------------------------------------|---------------------------------------------------| 151 | | Enable chat mode | `-c`, `--chat` | --chat | *See mode usage above* | 152 | | Enable online search mode | `-s`, `--search` | --search | *See mode usage above* | 153 | | Online search query | `-q`, `--query` | --query "What is today's date?" | query="What is today's date?" | 154 | | User prompt | `-p`, `--prompt` | --prompt "How many stars are there in our galaxy?" | prompt="How many stars are there in our galaxy?" | 155 | | API key for authentication | `-a`, `--api_key` | --api_key your_api_key | api_key="your_api_key" | 156 | | Model name | `-m`, `--model` | --model "llama-3.1-sonar-small-128k-chat" | model="llama-3.1-sonar-small-128k-chat" | 157 | | Enable streaming mode | `-st`, `--stream` | --stream | stream=True | 158 | | System prompt (instructions) | `-sp`, `--system_prompt` | --system_prompt "Be precise and concise." | system_prompt="Be precise and concise." 
| 159 | | Maximum tokens to generate | `-mt`, `--max_tokens` | --max_tokens 100 | max_tokens=100 | 160 | | Sampling temperature | `-tm`, `--temperature` | --temperature 0.7 | temperature=0.7 | 161 | | Nucleus sampling threshold | `-tp`, `--top_p` | --top_p 0.9 | top_p=0.9 | 162 | | Top-k sampling threshold | `-tk`, `--top_k` | --top_k 40 | top_k=40 | 163 | | Penalize tokens based on their presence | `-pp`, `--presence_penalty` | --presence_penalty 0.5 | presence_penalty=0.5 | 164 | | Penalize tokens based on their frequency | `-fp`, `--frequency_penalty` | --frequency_penalty 0.5 | frequency_penalty=0.5 | 165 | 166 | > *To exit the program at any time, you can type **`exit`** or **`quit`**. This works whether you're interacting with the program via the CLI or through the Python wrapper, so you can safely end a session without resorting to interrupt signals or forcibly closing the terminal.* 167 | 168 | ## Available Models 169 | 170 | Perplexity offers both native models and a selection of large, open-source instruct models. 171 | 172 | ### Online Models 173 | 174 | | **Model** | **Parameter Count** | **Context Length** | 175 | |-------------------------------------|---------------------|--------------------| 176 | | `llama-3.1-sonar-small-128k-online` | 8B | 127,072 | 177 | | `llama-3.1-sonar-large-128k-online` | 70B | 127,072 | 178 | | `llama-3.1-sonar-huge-128k-online` | 405B | 127,072 | 179 | 180 | - *Perplexity notes that the search subsystem of the online LLMs does not attend to the system prompt. 
You can only use the system prompt to provide instructions related to style, tone, and language of the response.* 181 | 182 | ### Chat Models 183 | 184 | | **Model** | **Parameter Count** | **Context Length** | 185 | |-----------------------------------|---------------------|--------------------| 186 | | `llama-3.1-sonar-small-128k-chat` | 8B | 131,072 | 187 | | `llama-3.1-sonar-large-128k-chat` | 70B | 131,072 | 188 | 189 | ### Open-Source Models 190 | 191 | | **Model** | **Parameter Count** | **Context Length** | 192 | |--------------------------|---------------------|--------------------| 193 | | `llama-3.1-8b-instruct` | 8B | 131,072 | 194 | | `llama-3.1-70b-instruct` | 70B | 131,072 | 195 | 196 | - *Where possible, Perplexity tries to match the Hugging Face implementation.* 197 | 198 | ## Contributing 199 | Contributions are welcome! 200 | 201 | Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for detailed guidelines on how to contribute to this project. 202 | 203 | ## Reporting Issues 204 | Encountered a bug? We'd love to hear about it. Please follow these steps to report any issues: 205 | 206 | 1. Check if the issue has already been reported. 207 | 2. Use the [Bug Report](.github/ISSUE_TEMPLATE/bug_report.md) template to create a detailed report. 208 | 3. Submit the report [here](https://github.com/RMNCLDYO/perplexity-ai-toolkit/issues). 209 | 210 | Your report will help us make the project better for everyone. 211 | 212 | ## Feature Requests 213 | Got an idea for a new feature? Feel free to suggest it. Here's how: 214 | 215 | 1. Check if the feature has already been suggested or implemented. 216 | 2. Use the [Feature Request](.github/ISSUE_TEMPLATE/feature_request.md) template to create a detailed request. 217 | 3. Submit the request [here](https://github.com/RMNCLDYO/perplexity-ai-toolkit/issues). 218 | 219 | Your suggestions for improvements are always welcome. 
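Before any request is sent, both modes assemble the wrapper options from the Advanced Configuration table above into a single payload and strip every option that was left unset, so the API only receives parameters you explicitly chose. A self-contained sketch of that filtering step (the dict-comprehension idiom is the one used in `perplexity.py`; the sample values are illustrative):

```python
# Wrapper options map 1:1 onto the request payload. Options the caller
# did not set arrive as None and are dropped before the request is made.
payload = {
    "model": "llama-3.1-sonar-small-128k-chat",
    "messages": [{"role": "user", "content": "How many stars are there in our galaxy?"}],
    "max_tokens": 100,
    "temperature": 0.7,
    "top_p": None,              # unset: will be dropped
    "presence_penalty": None,   # unset: will be dropped
}

# Same idiom as perplexity.py: keep only the options that were set.
data = {k: v for k, v in payload.items() if v is not None}
print(sorted(data))  # → ['max_tokens', 'messages', 'model', 'temperature']
```

This is why passing `temperature=None` and omitting `temperature` behave identically: both leave the parameter out of the request entirely, deferring to the API's defaults.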
220 | 221 | ## Versioning and Changelog 222 | Stay up to date with the latest changes and improvements in each version: 223 | 224 | - [CHANGELOG.md](.github/CHANGELOG.md) provides detailed descriptions of each release. 225 | 226 | ## Security 227 | Your security is important to us. If you discover a security vulnerability, please follow our responsible disclosure guidelines found in [SECURITY.md](.github/SECURITY.md). Please refrain from disclosing any vulnerabilities publicly until they have been reported and addressed. 228 | 229 | ## License 230 | Licensed under the MIT License. See [LICENSE](LICENSE) for details. 231 | -------------------------------------------------------------------------------- /cli.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from perplexity import Chat, Search 3 | 4 | def main(): 5 | class CustomFormatter(argparse.ArgumentDefaultsHelpFormatter, 6 | argparse.RawDescriptionHelpFormatter): 7 | pass 8 | parser = argparse.ArgumentParser( 9 | description=""" 10 | ------------------------------------------------------------------ 11 | Perplexity AI Toolkit 12 | API Wrapper & Command-line Interface 13 | [v1.2.2] by @rmncldyo 14 | ------------------------------------------------------------------ 15 | 16 | The Perplexity AI Toolkit is an API wrapper and command-line interface for the suite of large language models offered by Perplexity Labs. 17 | 18 | | Option(s) | Description | Example Usage | 19 | |--------------------------|------------------------------------------|-------------------------------------------------------------------------------| 20 | | -c, --chat | Enable chat mode | --chat | 21 | | -s, --search | Enable online search mode | --search | 22 | | -p, --prompt | User prompt | --prompt "How many stars are there in our galaxy?" | 23 | | -q, --query | Online search query | --query "What is today's date?" 
| 24 | | -a, --api_key | API key for authentication | --api_key "api_key_goes_here" | 25 | | -m, --model | Model name | --model "llama-3-sonar-large-32k-chat" | 26 | | -sp, --system_prompt | System prompt (instructions) | --system_prompt "Be precise and concise." | 27 | | -st, --stream | Enable streaming mode | --stream | 28 | | -mt, --max_tokens | Maximum tokens to generate | --max_tokens 1024 | 29 | | -tm, --temperature | Sampling temperature | --temperature 0.7 | 30 | | -tp, --top_p | Nucleus sampling threshold | --top_p 0.9 | 31 | | -tk, --top_k | Top-k sampling threshold | --top_k 40 | 32 | | -pp, --presence_penalty | Penalize tokens based on their presence | --presence_penalty 0.5 | 33 | | -fp, --frequency_penalty | Penalize tokens based on their frequency | --frequency_penalty 0.5 | 34 | """, 35 | formatter_class=CustomFormatter, 36 | epilog="For detailed usage information, visit our ReadMe here: github.com/RMNCLDYO/perplexity-ai-toolkit" 37 | ) 38 | parser.add_argument('-c', '--chat', action='store_true', help='Enable chat mode') 39 | parser.add_argument('-s', '--search', action='store_true', help='Enable search mode') 40 | parser.add_argument('-p', '--prompt', type=str, help='User prompt') 41 | parser.add_argument('-q', '--query', type=str, help='Online search query') 42 | parser.add_argument('-a', '--api_key', type=str, help='API key for authentication') 43 | parser.add_argument('-m', '--model', type=str, help='Model name') 44 | parser.add_argument('-sp', '--system_prompt', type=str, help='System prompt (instructions)') 45 | parser.add_argument('-st', '--stream', action='store_true', help='Enable streaming mode') 46 | parser.add_argument('-mt', '--max_tokens', type=int, help='Maximum tokens to generate') 47 | parser.add_argument('-tm', '--temperature', type=float, help='Sampling temperature') 48 | parser.add_argument('-tp', '--top_p', type=float, help='Nucleus sampling threshold') 49 | parser.add_argument('-tk', '--top_k', type=int, help='Top-k sampling 
threshold') 50 | parser.add_argument('-pp', '--presence_penalty', type=float, help='Penalize tokens based on their presence') 51 | parser.add_argument('-fp', '--frequency_penalty', type=float, help='Penalize tokens based on their frequency') 52 | 53 | args = parser.parse_args() 54 | 55 | if args.chat: 56 | Chat().run(args.api_key, args.model, args.prompt, args.system_prompt, args.stream, args.max_tokens, args.temperature, args.top_p, args.top_k, args.presence_penalty, args.frequency_penalty) 57 | elif args.search: 58 | Search().run(args.api_key, args.model, args.query, args.system_prompt, args.stream, args.max_tokens, args.temperature, args.top_p, args.top_k, args.presence_penalty, args.frequency_penalty) 59 | else: 60 | print("Error: Please specify a mode to use. Use --help for more information.") 61 | exit() 62 | 63 | if __name__ == "__main__": 64 | main() -------------------------------------------------------------------------------- /client.py: -------------------------------------------------------------------------------- 1 | import json 2 | import requests 3 | from config import load_config 4 | from loading import Loading 5 | 6 | print("------------------------------------------------------------------\n") 7 | print(" Perplexity AI Toolkit \n") 8 | print(" API Wrapper & Command-line Interface \n") 9 | print(" [v1.2.2] by @rmncldyo \n") 10 | print("------------------------------------------------------------------\n") 11 | 12 | class Client: 13 | def __init__(self, api_key=None): 14 | self.config = load_config(api_key=api_key) 15 | self.api_key = api_key if api_key else self.config.get('api_key') 16 | self.base_url = self.config.get('base_url') 17 | self.headers = { 18 | "authorization": f"Bearer {self.api_key}", 19 | "accept": "application/json", 20 | "content-type": "application/json" 21 | } 22 | 23 | def post(self, endpoint, data): 24 | loading = Loading() 25 | url = f"{self.base_url}/{endpoint}" 26 | try: 27 | loading.start() 28 | response = 
requests.post(url, json=data, headers=self.headers) 29 | response.raise_for_status() 30 | response = response.json() 31 | loading.stop() 32 | try: 33 | if response and response['choices'][0]['message']['role'] == "assistant": 34 | return response['choices'][0]['message']['content'] 35 | except (KeyError, IndexError, TypeError): 36 | return "Error: We encountered an error while retrieving the response. Please try again later." 37 | except Exception as e: 38 | loading.stop() 39 | print(f"HTTP Error: {e}") 40 | raise 41 | finally: 42 | loading.stop() 43 | 44 | def stream_post(self, endpoint, data): 45 | loading = Loading() 46 | url = f"{self.base_url}/{endpoint}" 47 | full_response = [] 48 | try: 49 | loading.start() 50 | response = requests.post(url, json=data, headers=self.headers, stream=True) 51 | response.raise_for_status() 52 | loading.stop() 53 | data_dict = {} 54 | print("Assistant: ", end="", flush=True) 55 | for line in response.iter_lines(): 56 | if line and line.startswith(b'data: '): 57 | json_data = line.decode('utf-8').split('data: ', 1)[1] 58 | data_dict = json.loads(json_data) 59 | print(data_dict['choices'][0]['delta']['content'], end="", flush=True) 60 | full_response.append(data_dict['choices'][0]['delta']['content']) 61 | print() 62 | return ''.join(full_response) 63 | except Exception as e: 64 | loading.stop() 65 | print(f"Stream HTTP Error: {e}") 66 | raise 67 | finally: 68 | loading.stop() -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | def load_required_env_variables(var_name: str): 4 | value = os.getenv(var_name) 5 | if value is None: 6 | try: 7 | from dotenv import load_dotenv 8 | load_dotenv() 9 | value = os.getenv(var_name) 10 | if value is None or value.strip() == "": 11 | print(f"Error: {var_name} environment variable is not defined. Please define it in a .env file or directly in your environment. 
You can also pass it as an argument to the function or export it as an environment variable.") 12 | exit(1) 13 | except ImportError: 14 | print("Error: dotenv package is not installed. Please install it with 'pip install python-dotenv' or define the environment variables directly.") 15 | exit(1) 16 | except Exception as e: 17 | print(f"Error loading environment variables: {e}") 18 | exit(1) 19 | return value 20 | 21 | def load_config(api_key=None): 22 | if not api_key: 23 | api_key = load_required_env_variables('PERPLEXITY_API_KEY') 24 | 25 | return { 26 | 'api_key': api_key, 27 | 'base_chat_model': os.getenv('PERPLEXITY_BASE_CHAT_MODEL', 'llama-3-sonar-large-32k-chat'), 28 | 'base_search_model': os.getenv('PERPLEXITY_BASE_SEARCH_MODEL', 'llama-3-sonar-large-32k-online'), 29 | 'base_url': os.getenv('PERPLEXITY_BASE_URL', 'https://api.perplexity.ai'), 30 | 'timeout': int(os.getenv('PERPLEXITY_TIMEOUT', 20)), 31 | } 32 | -------------------------------------------------------------------------------- /example.env: -------------------------------------------------------------------------------- 1 | PERPLEXITY_API_KEY=your_api_key_here -------------------------------------------------------------------------------- /examples/example_chat.py: -------------------------------------------------------------------------------- 1 | from perplexity import Chat 2 | 3 | Chat().run() -------------------------------------------------------------------------------- /examples/example_search.py: -------------------------------------------------------------------------------- 1 | from perplexity import Search 2 | 3 | Search().run(query="What is today's date?") 4 | -------------------------------------------------------------------------------- /loading.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import threading 3 | import time 4 | 5 | class Loading: 6 | def __init__(self): 7 | self.spinner = '|/-\\' 8 | self.spinner_index = 
0 9 | self.running = False 10 | self.thread = None 11 | 12 | def hide_cursor(self): 13 | sys.stdout.write('\033[?25l') 14 | sys.stdout.flush() 15 | 16 | def show_cursor(self): 17 | sys.stdout.write('\033[?25h') 18 | sys.stdout.flush() 19 | 20 | def clear_line(self): 21 | sys.stdout.write('\r\033[K') 22 | sys.stdout.flush() 23 | 24 | def update(self): 25 | while self.running: 26 | sys.stdout.write('\rWaiting for assistant response... ' + self.spinner[self.spinner_index]) 27 | sys.stdout.flush() 28 | self.spinner_index = (self.spinner_index + 1) % len(self.spinner) 29 | time.sleep(0.1) 30 | self.clear_line() 31 | self.show_cursor() 32 | 33 | def start(self): 34 | if not self.running: 35 | self.running = True 36 | self.hide_cursor() 37 | self.thread = threading.Thread(target=self.update) 38 | self.thread.start() 39 | 40 | def stop(self): 41 | if self.running: 42 | self.running = False 43 | self.thread.join() -------------------------------------------------------------------------------- /perplexity.py: -------------------------------------------------------------------------------- 1 | from client import Client 2 | 3 | class Chat: 4 | def __init__(self): 5 | self.client = None 6 | 7 | def run(self, api_key=None, model=None, prompt=None, system_prompt=None, stream=None, max_tokens=None, temperature=None, top_p=None, top_k=None, presence_penalty=None, frequency_penalty=None): 8 | 9 | self.client = Client(api_key=api_key) 10 | self.model = model if model else self.client.config.get('base_chat_model') 11 | 12 | conversation_history = [] 13 | 14 | if system_prompt: 15 | conversation_history.append({"role": "system", "content": system_prompt}) 16 | 17 | print("Type 'exit' or 'quit' at any time to end the conversation.\n") 18 | 19 | print("Assistant: Hello! 
How can I assist you today?") 20 | while True: 21 | if prompt: 22 | user_input = prompt.strip() 23 | print(f"User: {user_input}") 24 | prompt = None 25 | else: 26 | user_input = input("User: ").strip() 27 | if user_input.lower() in ['exit', 'quit']: 28 | print("\nThank you for using the Perplexity AI toolkit. Have a great day!") 29 | break 30 | 31 | if not user_input: 32 | print("Invalid input detected. Please enter a valid message.") 33 | continue 34 | 35 | conversation_history.append({"role": "user", "content": user_input}) 36 | 37 | payload = { 38 | "model": self.model, 39 | "messages": conversation_history, 40 | "system_prompt": system_prompt, 41 | "stream": stream, 42 | "max_tokens": max_tokens, 43 | "temperature": temperature, 44 | "top_p": top_p, 45 | "top_k": top_k, 46 | "presence_penalty": presence_penalty, 47 | "frequency_penalty": frequency_penalty 48 | } 49 | 50 | data = {k: v for k, v in payload.items() if v is not None} 51 | 52 | endpoint = "chat/completions" 53 | 54 | if stream: 55 | response = self.client.stream_post(endpoint, data) 56 | assistant_response = response 57 | else: 58 | response = self.client.post(endpoint, data) 59 | assistant_response = response 60 | print(f"Assistant: {assistant_response}") 61 | conversation_history.append({"role": "assistant", "content": assistant_response}) 62 | 63 | 64 | class Search: 65 | def __init__(self): 66 | self.client = None 67 | 68 | def run(self, api_key=None, model=None, query=None, system_prompt=None, stream=None, max_tokens=None, temperature=None, top_p=None, top_k=None, presence_penalty=None, frequency_penalty=None): 69 | 70 | self.client = Client(api_key=api_key) 71 | self.model = model if model else self.client.config.get('base_search_model') 72 | 73 | if "online" not in self.model: 74 | print("Error: { Invalid model type }. Please use a search model instead of a chat model.") 75 | exit(1) 76 | 77 | if not query: 78 | print("Error: { Invalid input detected }. 
Please enter a valid search query.") 79 | exit(1) 80 | 81 | if system_prompt: 82 | message = [ 83 | {"role": "system", "content": system_prompt}, 84 | {"role": "user", "content": query} 85 | ] 86 | else: 87 | message = [{"role": "user", "content": query}] 88 | 89 | payload = { 90 | "model": self.model, 91 | "messages": message, 92 | "system_prompt": system_prompt, 93 | "stream": stream, 94 | "max_tokens": max_tokens, 95 | "temperature": temperature, 96 | "top_p": top_p, 97 | "top_k": top_k, 98 | "presence_penalty": presence_penalty, 99 | "frequency_penalty": frequency_penalty 100 | } 101 | 102 | data = {k: v for k, v in payload.items() if v is not None} 103 | 104 | endpoint = "chat/completions" 105 | 106 | if stream: 107 | response = self.client.stream_post(endpoint, data) 108 | assistant_response = response 109 | else: 110 | response = self.client.post(endpoint, data) 111 | assistant_response = response 112 | print(f"Assistant: {assistant_response}") 113 | 114 | print("\nThank you for using the Perplexity AI toolkit. Have a great day!") -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | # Core dependencies 2 | requests 3 | 4 | # Optional dependencies 5 | python-dotenv 6 | 7 | # Additional dependencies may be added as needed --------------------------------------------------------------------------------
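The streaming path in `client.py` parses server-sent-events-style lines of the form `data: {...}`. That parsing can be exercised offline; a self-contained sketch (the sample line below is fabricated to match the shape `stream_post` expects, not captured API output):

```python
import json

# A fabricated streamed line in the "data: <json>" shape stream_post parses.
line = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'

# Mirror stream_post's parsing: strip the "data: " prefix, load the JSON,
# and read the incremental text from the first choice's delta.
json_data = line.decode("utf-8").split("data: ", 1)[1]
chunk = json.loads(json_data)["choices"][0]["delta"]["content"]
print(chunk)  # → Hello
```

Each streamed chunk carries only the newly generated text, which is why `stream_post` both prints each delta as it arrives and accumulates them into `full_response` to return the complete answer.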