├── .github ├── CODE_OF_CONDUCT.md ├── ISSUE_TEMPLATE │ ├── bug_report.md │ └── feature_request.md └── SECURITY.md ├── .gitignore ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── asyncgpt ├── __init__.py ├── chatgpt.py └── types │ ├── ___init__.py │ ├── exceptions.py │ ├── requests.py │ └── responses │ ├── __init__.py │ ├── chatcompletion.py │ └── completion.py ├── examples ├── chatcompletion.py └── completion.py ├── pyproject.toml └── requirements.txt /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socio-economic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | * Demonstrating empathy and kindness toward other people 21 | * Being respectful of differing opinions, viewpoints, and experiences 22 | * Giving and gracefully accepting constructive feedback 23 | * Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | * Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | * The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | * Trolling, insulting or derogatory comments, and personal or political attacks 33 | * Public or private harassment 34 | * Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | * Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Enforcement Responsibilities 40 | 41 | Community leaders are responsible for clarifying and enforcing our standards of 42 | acceptable behavior and will take appropriate and fair corrective action in 43 | response to any behavior that they deem inappropriate, threatening, offensive, 44 | or harmful. 45 | 46 | Community leaders have the right and responsibility to remove, edit, or reject 47 | comments, commits, code, wiki edits, issues, and other contributions that are 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation 49 | decisions when appropriate. 50 | 51 | ## Scope 52 | 53 | This Code of Conduct applies within all community spaces, and also applies when 54 | an individual is officially representing the community in public spaces. 55 | Examples of representing our community include using an official e-mail address, 56 | posting via an official social media account, or acting as an appointed 57 | representative at an online or offline event. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported to the community leaders responsible for enforcement at 63 | telegram @just1zwtf. 64 | All complaints will be reviewed and investigated promptly and fairly. 
65 | 66 | All community leaders are obligated to respect the privacy and security of the 67 | reporter of any incident. 68 | 69 | ## Enforcement Guidelines 70 | 71 | Community leaders will follow these Community Impact Guidelines in determining 72 | the consequences for any action they deem in violation of this Code of Conduct: 73 | 74 | ### 1. Correction 75 | 76 | **Community Impact**: Use of inappropriate language or other behavior deemed 77 | unprofessional or unwelcome in the community. 78 | 79 | **Consequence**: A private, written warning from community leaders, providing 80 | clarity around the nature of the violation and an explanation of why the 81 | behavior was inappropriate. A public apology may be requested. 82 | 83 | ### 2. Warning 84 | 85 | **Community Impact**: A violation through a single incident or series 86 | of actions. 87 | 88 | **Consequence**: A warning with consequences for continued behavior. No 89 | interaction with the people involved, including unsolicited interaction with 90 | those enforcing the Code of Conduct, for a specified period of time. This 91 | includes avoiding interactions in community spaces as well as external channels 92 | like social media. Violating these terms may lead to a temporary or 93 | permanent ban. 94 | 95 | ### 3. Temporary Ban 96 | 97 | **Community Impact**: A serious violation of community standards, including 98 | sustained inappropriate behavior. 99 | 100 | **Consequence**: A temporary ban from any sort of interaction or public 101 | communication with the community for a specified period of time. No public or 102 | private interaction with the people involved, including unsolicited interaction 103 | with those enforcing the Code of Conduct, is allowed during this period. 104 | Violating these terms may lead to a permanent ban. 105 | 106 | ### 4. Permanent Ban 107 | 108 | **Community Impact**: Demonstrating a pattern of violation of community 109 | standards, including sustained inappropriate behavior, harassment of an 110 | individual, or aggression toward or disparagement of classes of individuals. 111 | 112 | **Consequence**: A permanent ban from any sort of public interaction within 113 | the community. 114 | 115 | ## Attribution 116 | 117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], 118 | version 2.0, available at 119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 120 | 121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 122 | enforcement ladder](https://github.com/mozilla/diversity). 123 | 124 | [homepage]: https://www.contributor-covenant.org 125 | 126 | For answers to common questions about this code of conduct, see the FAQ at 127 | https://www.contributor-covenant.org/faq. Translations are available at 128 | https://www.contributor-covenant.org/translations. 129 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 
22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.github/SECURITY.md: -------------------------------------------------------------------------------- 1 | # Security Policy 2 | 3 | ### This project clearly can not have security vulnerabilities 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .vscode/ 2 | .idea/ 3 | .env 4 | .venv 5 | env/ 6 | venv/ 7 | **/__pycache__/ -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to this project! We welcome any contributions that help improve the project, whether it's fixing a bug, adding a new feature, or improving the documentation. 4 | 5 | Before submitting a contribution, please read these guidelines to ensure that your contribution is consistent with our project goals and coding standards. 6 | ### Getting Started 7 | 8 | Fork the repository and clone it to your local machine. 9 | Install any necessary dependencies by running `pip install -r requirements.txt`. 10 | Make your changes and commit them with a descriptive commit message. 11 | 12 | ### Submitting a Pull Request 13 | 14 | Push your changes to your forked repository. 15 | Submit a pull request (PR) to the master branch of the original repository. 16 | Include a detailed description of your changes and the problem it solves. 17 | Wait for feedback and address any issues that are raised during the review process. 18 | 19 | ### Code Standards 20 | 21 | All code should adhere to the PEP8 coding standards. 22 | All code should preferably be well-documented and include appropriate comments. 23 | All code should be thoroughly tested to ensure it works as expected. 
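For illustration, a contribution that follows these standards might look like the minimal sketch below. The helper and its test are hypothetical examples, not part of the library; they only mirror the message format that `GPT.chat_complete` already expects.

```python
# Hypothetical example for illustration only -- not part of asyncgpt.
from typing import Dict


def build_user_message(content: str) -> Dict[str, str]:
    """Return a single chat message in the format expected by GPT.chat_complete."""
    if not content:
        raise ValueError("content must be a non-empty string")
    return {"role": "user", "content": content}


def test_build_user_message() -> None:
    """Pytest-style check that the helper produces a well-formed message dict."""
    assert build_user_message("Hello!") == {"role": "user", "content": "Hello!"}
```

A test like this can be run locally with a runner such as pytest before opening a pull request.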
24 | 25 | ## Bug Reports and Feature Requests 26 | 27 | If you encounter a bug or would like to request a new feature, please open an issue on the repository's issue tracker. Please provide as much detail as possible, including steps to reproduce the bug and any relevant code samples. 28 | Code of Conduct 29 | 30 | Please note that we have a code of conduct in place for all contributors to this project. By contributing to this project, you agree to abide by the code of conduct. Please read the CODE_OF_CONDUCT.md file for more information. 31 | 32 | Thank you for your contributions! 33 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Just1z 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |

# 🤖 AsyncGPT 🤖
2 | AsyncGPT is an open-source unofficial asynchronous framework for the ChatGPT API, written in Python 3.11 using asyncio and aiohttp.

3 | 4 | ## Installation 5 | ```bash 6 | pip install git+https://github.com/Just1z/asyncgpt 7 | ``` 8 | 9 | ## Usage 10 | The simplest usage for now: 11 | ```python 12 | import asyncio 13 | import asyncgpt 14 | 15 | 16 | async def main(): 17 | bot = asyncgpt.GPT(apikey="YOUR API KEY") 18 | completion = await bot.chat_complete([{"role": "user", "content": "Hello!"}]) 19 | print(completion) 20 | 21 | 22 | if __name__ == "__main__": 23 | asyncio.run(main()) 24 | # Hello there! How can I assist you today? 25 | ``` 26 | 27 | ## How to get API key? 28 | You should get one on the official OpenAI site 29 | 30 | https://platform.openai.com/account/api-keys 31 | -------------------------------------------------------------------------------- /asyncgpt/__init__.py: -------------------------------------------------------------------------------- 1 | """ PyChatGPT is an open-source unofficial asynchronous framework for ChatGPT API written in Python 3.11 using asyncio and aiohttp """ 2 | from .chatgpt import GPT 3 | -------------------------------------------------------------------------------- /asyncgpt/chatgpt.py: -------------------------------------------------------------------------------- 1 | from typing import Dict, Union 2 | from .types.requests import post 3 | from .types.responses import ChatCompletion, ChatCompletionChoice, Completion, CompletionChoice 4 | from .types.exceptions import AsyncGPTError 5 | 6 | 7 | class GPT: 8 | """Class is used to access ChatGPT method ChatCompletion""" 9 | def __init__(self, apikey: str) -> None: 10 | """ 11 | `apikey`: str 12 | The OpenAI API uses API keys for authentication. 13 | Visit https://platform.openai.com/account/api-keys to retrieve the API key. 14 | Remember that your API key is a secret! 15 | Do not share it with others or expose it in any client-side code (browsers, apps) 16 | """ 17 | if not isinstance(apikey, str): 18 | raise ValueError("apikey should be string") 19 | self.apikey = apikey 20 | 21 | @property 22 | def headers(self): 23 | return {"Content-Type": "application/json", 24 | "Authorization": "Bearer " + self.apikey} 25 | 26 | async def chat_complete( 27 | self, messages: list[Dict[str, str]], 28 | model: str = "gpt-3.5-turbo", 29 | temperature: float = 1.0, top_p: float = 1.0, 30 | stop: Union[str, list] = None, n: int = 1, stream: bool = False, 31 | max_tokens: int = None, presence_penalty: float = 0, 32 | frequency_penalty: float = 0, user: str = None) -> ChatCompletion: 33 | """Given a chat conversation, the model will return a chat completion response. 34 | 35 | `messages`: list[Dict[str, str]], required 36 | The messages to generate chat completions for, in the chat format, e.g. 37 | [ 38 | {"role": "system", "content": "You are a helpful assistant."}, 39 | 40 | {"role": "user", "content": "Who won the world series in 2020?"}, 41 | 42 | {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, 43 | 44 | {"role": "user", "content": "Where was it played?"} 45 | ] 46 | Visit https://platform.openai.com/docs/guides/chat/introduction for more info 47 | 48 | `temperature`: float, defaults to 1 49 | What sampling temperature to use, between 0 and 2. 50 | Higher values like 0.8 will make the output more random, 51 | while lower values like 0.2 will make it more focused and deterministic. 52 | We generally recommend altering this or `top_p` but not both. 
53 | 54 | `top_p`: float, defaults to 1 55 | An alternative to sampling with temperature, called nucleus sampling, where the model 56 | considers the results of the tokens with top_p probability mass. 57 | So 0.1 means only the tokens comprising the top 10% probability mass are considered. 58 | We generally recommend altering this or `temperature` but not both. 59 | 60 | `n`: int, defaults to 1 61 | How many chat completion choices to generate for each input message. 62 | 63 | `stream`: bool, defaults to False 64 | If set, partial message deltas will be sent, like in ChatGPT. 65 | Tokens will be sent as data-only server-sent events as they become available, 66 | with the stream terminated 67 | by a `data: [DONE]` message 68 | 69 | `stop`: str or list, defaults to None 70 | Up to 4 sequences where the API will stop generating further tokens. 71 | 72 | `max_tokens`: int, defaults to inf 73 | The maximum number of tokens allowed for the generated answer. 74 | 75 | `presence_penalty`: float, defaults to 0 76 | Number between -2.0 and 2.0. Positive values penalize new tokens based on 77 | whether they appear in the text so far, increasing the model's likelihood 78 | to talk about new topics. 79 | Learn more at https://platform.openai.com/docs/api-reference/parameter-details 80 | 81 | `frequency_penalty`: float, defaults to 0 82 | Number between -2.0 and 2.0. Positive values penalize new tokens based on 83 | their existing frequency in the text so far, decreasing the model's 84 | likelihood to repeat the same line verbatim. 85 | Learn more at https://platform.openai.com/docs/api-reference/parameter-details 86 | 87 | `user`: str, default to None 88 | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 89 | Learn more at https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids 90 | """ 91 | if not all((messages[0].get("role"), messages[0].get("content"))): 92 | raise ValueError("Invalid messages object") 93 | params = { 94 | "model": model, "messages": messages, 95 | "temperature": float(temperature), "top_p": float(top_p), 96 | "stop": stop, "n": int(n), "stream": bool(stream), 97 | "max_tokens": max_tokens, 98 | "presence_penalty": float(presence_penalty), 99 | "frequency_penalty": float(frequency_penalty) 100 | } 101 | if user: 102 | params["user"] = user 103 | response = await post( 104 | "https://api.openai.com/v1/chat/completions", 105 | json=params, headers=self.headers) 106 | if "error" in response: 107 | raise AsyncGPTError(f"{response['error']['type']}: {response['error']['message']}") 108 | return ChatCompletion( 109 | id=response["id"], created=response["created"], 110 | usage=response["usage"], 111 | choices=[ChatCompletionChoice(**choice) for choice in response["choices"]] 112 | ) 113 | 114 | async def complete( 115 | self, prompt: str = "", 116 | model: str = "text-davinci-003", suffix: str = None, 117 | temperature: float = 1.0, top_p: float = 1.0, echo: bool = False, 118 | stop: Union[str, list] = None, n: int = 1, stream: bool = False, 119 | best_of: int = 1, logprobs: int = None, 120 | max_tokens: int = None, presence_penalty: float = 0, 121 | frequency_penalty: float = 0, user: str = None) -> Completion: 122 | """Given a prompt, the model will return a completion response. 123 | 124 | `prompt`: str, defaults to <|endoftext|> 125 | The prompt(s) to generate completions for, encoded as a string, 126 | array of strings, array of tokens, or array of token arrays. 
127 | Note that <|endoftext|> is the document separator that the model sees during training, 128 | so if a prompt is not specified the model will generate 129 | as if from the beginning of a new document. 130 | 131 | `suffix`: str, defaults to None 132 | The suffix that comes after a completion of inserted text. 133 | 134 | `max_tokens`: int, defaults to inf 135 | The maximum number of tokens to generate in the completion. 136 | The token count of your prompt plus `max_tokens` cannot exceed 137 | the model's context length. Most models have a context length of 2048 tokens 138 | (except for the newest models, which support 4096). 139 | 140 | `temperature`: float, defaults to 1 141 | What sampling temperature to use, between 0 and 2. 142 | Higher values like 0.8 will make the output more random, 143 | while lower values like 0.2 will make it more focused and deterministic. 144 | We generally recommend altering this or `top_p` but not both. 145 | 146 | `top_p`: float, defaults to 1 147 | An alternative to sampling with temperature, called nucleus sampling, where the model 148 | considers the results of the tokens with top_p probability mass. 149 | So 0.1 means only the tokens comprising the top 10% probability mass are considered. 150 | We generally recommend altering this or `temperature` but not both. 151 | 152 | `n`: int, defaults to 1 153 | How many completions to generate for each prompt. 154 | Note: Because this parameter generates many completions, 155 | it can quickly consume your token quota. 156 | Use carefully and ensure that you have 157 | reasonable settings for max_tokens and stop. 158 | 159 | `stream`: bool, defaults to False 160 | If set, partial message deltas will be sent, like in ChatGPT. 161 | Tokens will be sent as data-only server-sent events as they become available, 162 | with the stream terminated 163 | by a `data: [DONE]` message 164 | 165 | `logprobs`: int, defaults to None 166 | Include the log probabilities on the `logprobs` most likely tokens, 167 | as well the chosen tokens. For example, if `logprobs` is 5, the API 168 | will return a list of the 5 most likely tokens. The API will always 169 | return the `logprob` of the sampled token, so there may be 170 | up to `logprobs+1` elements in the response. 171 | 172 | `echo`: bool, defaults to False 173 | Echo back the prompt in addition to the completion 174 | 175 | `stop`: str or list, defaults to None 176 | Up to 4 sequences where the API will stop generating further tokens. 177 | 178 | `presence_penalty`: float, defaults to 0 179 | Number between -2.0 and 2.0. Positive values penalize new tokens based on 180 | whether they appear in the text so far, increasing the model's likelihood 181 | to talk about new topics. 182 | Learn more at https://platform.openai.com/docs/api-reference/parameter-details 183 | 184 | `frequency_penalty`: float, defaults to 0 185 | Number between -2.0 and 2.0. Positive values penalize new tokens based on 186 | their existing frequency in the text so far, decreasing the model's 187 | likelihood to repeat the same line verbatim. 188 | Learn more at https://platform.openai.com/docs/api-reference/parameter-details 189 | 190 | `best_of`: int, defaults to 1 191 | Generates `best_of` completions server-side and returns the "best" 192 | (the one with the highest log probability per token). Results cannot be streamed. 193 | When used with `n`, `best_of` controls the number of candidate completions and 194 | `n` specifies how many to return - `best_of` must be greater than `n`. 
195 | Note: Because this parameter generates many completions, it can 196 | quickly consume your token quota. Use carefully and ensure that 197 | you have reasonable settings for `max_tokens` and `stop`. 198 | 199 | `user`: str, default to None 200 | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 201 | Learn more at https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids 202 | """ 203 | params = { 204 | "model": model, "prompt": prompt, "suffix": suffix, 205 | "temperature": float(temperature), "top_p": float(top_p), 206 | "stop": stop, "n": int(n), "stream": bool(stream), 207 | "max_tokens": max_tokens, "echo": echo, 208 | "logprobs": logprobs, "best_of": best_of, 209 | "presence_penalty": float(presence_penalty), 210 | "frequency_penalty": float(frequency_penalty) 211 | } 212 | if user: 213 | params["user"] = user 214 | response = await post( 215 | "https://api.openai.com/v1/completions", 216 | json=params, headers=self.headers) 217 | if "error" in response: 218 | raise AsyncGPTError(f"{response['error']['type']}: {response['error']['message']}") 219 | return Completion( 220 | id=response["id"], created=response["created"], 221 | usage=response["usage"], model=model, 222 | choices=[CompletionChoice(**choice) for choice in response["choices"]] 223 | ) 224 | -------------------------------------------------------------------------------- /asyncgpt/types/___init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Just1z/asyncgpt/7582343e517d8d14fc9d41b88e44b901c202135f/asyncgpt/types/___init__.py -------------------------------------------------------------------------------- /asyncgpt/types/exceptions.py: -------------------------------------------------------------------------------- 1 | class AsyncGPTError(Exception): 2 | pass 3 | -------------------------------------------------------------------------------- /asyncgpt/types/requests.py: -------------------------------------------------------------------------------- 1 | from aiohttp import ClientSession 2 | 3 | 4 | async def post(url, json, headers): 5 | async with ClientSession() as session: 6 | response = await session.post(url, json=json, headers=headers) 7 | response = await response.json() 8 | return response -------------------------------------------------------------------------------- /asyncgpt/types/responses/__init__.py: -------------------------------------------------------------------------------- 1 | from .chatcompletion import ChatCompletion, ChatCompletionChoice 2 | from .completion import Completion, CompletionChoice 3 | -------------------------------------------------------------------------------- /asyncgpt/types/responses/chatcompletion.py: -------------------------------------------------------------------------------- 1 | from typing import Dict 2 | from dataclasses import dataclass 3 | 4 | 5 | @dataclass 6 | class ChatCompletionChoice: 7 | index: int 8 | message: Dict[str, str] 9 | finish_reason: str 10 | 11 | def __str__(self) -> str: 12 | return self.message["content"] 13 | 14 | 15 | @dataclass 16 | class ChatCompletion: 17 | id: str 18 | created: int 19 | choices: list[ChatCompletionChoice] 20 | usage: Dict[str, int] 21 | object: str = "chat.completion" 22 | 23 | def __str__(self) -> str: 24 | return str(self.choices[0]) 25 | -------------------------------------------------------------------------------- /asyncgpt/types/responses/completion.py: 
-------------------------------------------------------------------------------- 1 | from typing import Dict 2 | from dataclasses import dataclass 3 | 4 | 5 | @dataclass 6 | class CompletionChoice: 7 | text: str 8 | index: int 9 | logprobs: int 10 | finish_reason: str 11 | 12 | def __str__(self) -> str: 13 | return self.text 14 | 15 | 16 | @dataclass 17 | class Completion: 18 | id: str 19 | created: int 20 | model: str 21 | choices: list[CompletionChoice] 22 | usage: Dict[str, int] 23 | object: str = "text_completion" 24 | 25 | def __str__(self) -> str: 26 | return str(self.choices[0]) 27 | -------------------------------------------------------------------------------- /examples/chatcompletion.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import asyncgpt 3 | 4 | 5 | async def main(): 6 | bot = asyncgpt.GPT(apikey="YOUR API KEY") 7 | completion = await bot.chat_complete([{"role": "user", "content": "Hello!"}]) 8 | print(completion) 9 | 10 | 11 | if __name__ == "__main__": 12 | asyncio.run(main()) 13 | # Hello there! How can I assist you today? 14 | -------------------------------------------------------------------------------- /examples/completion.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import asyncgpt 3 | 4 | 5 | async def main(): 6 | bot = asyncgpt.GPT(apikey="YOUR API KEY") 7 | completion = await bot.complete("Say this is a test") 8 | print(completion) 9 | 10 | 11 | if __name__ == "__main__": 12 | asyncio.run(main()) 13 | # This is indeed a test 14 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["poetry-core>=1.0.0"] 3 | build-backend = "poetry.core.masonry.api" 4 | 5 | [tool.poetry] 6 | name = "asyncgpt" 7 | version = "1.2.0" 8 | authors = ["Just1z"] 9 | description = "Asynchronous framework for the ChatGPT API" 10 | license = "MIT" 11 | readme = "README.md" 12 | classifiers = [ 13 | "Programming Language :: Python :: 3", 14 | "Framework :: AsyncIO", 15 | "Intended Audience :: Developers", 16 | "License :: OSI Approved :: MIT License", 17 | "Operating System :: OS Independent", 18 | "Topic :: Software Development :: Libraries :: Application Frameworks", 19 | ] 20 | homepage="https://github.com/Just1z/asyncgpt" 21 | repository="https://github.com/Just1z/asyncgpt" 22 | keywords = [ 23 | "chatgpt", "gpt", "async", "api", "openai", 24 | "framework", "asyncio", "wrapper"] 25 | 26 | [tool.poetry.dependencies] 27 | python = "^3.10" 28 | aiohttp = {version="^3.8.4"} 29 | 30 | [tool.poetry.urls] 31 | "Bug Tracker" = "https://github.com/Just1z/asyncgpt/issues" -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aiohttp --------------------------------------------------------------------------------