# UN-OFFICIAL OPENAI API Service Documentation

## Introduction

Welcome to the **UN-OFFICIAL OPENAI API Service** documentation. This service provides an OpenAI-compatible API interface for interacting with various GPT models, including text completion, audio generation, and image generation capabilities. The API is designed to be compatible with OpenAI's API endpoints, ensuring seamless integration with existing OpenAI API clients and applications.

**Important Note:** While the endpoints and payload structures are compatible with OpenAI's API, the service is **currently not optimized for use with the OpenAI Python SDK**. Users should interact with the API using standard HTTP requests.

This API service was originally reverse-engineered by **Mr Leader** and is further developed and maintained by **DevsDoCode (Sreejan)**. This collaboration combines reverse engineering expertise with robust development practices to deliver a reliable and efficient API for your AI model interaction needs.

---

## Base URL

### Primary API Endpoint

All API requests should be made to the following base URL:

```
https://devsdocode-openai.hf.space
```

This endpoint is hosted on Hugging Face Spaces and offers continuous free-tier availability, making it the recommended endpoint for long-term usage due to its stability and scalability.

### Secondary API Endpoint (Optional)

As an alternative, you may also use the following endpoint:

```
https://openai-devsdocode.up.railway.app
```

Please note that this endpoint is hosted on Railway's free tier, which may have limitations and service interruptions after the free tier expires. We recommend using the Hugging Face endpoint for uninterrupted long-term service.

---

## Authentication

**Note:** Currently, the API does not require any authentication tokens or API keys. Users are expected to use the service responsibly and within reasonable limits.

---

## Available Endpoints

1. **List Available Models**
   - **Endpoint:** `GET /models`
   - **Description:** Retrieve a list of all available models.
   - **Response Format:** JSON compatible with OpenAI's model list format.

2. **API Information**
   - **Endpoint:** `GET /about`
   - **Description:** Get detailed information about the API service, including version, description, developer info, and more.
   - **Response Format:** JSON

3. **Chat Completions**
   - **Endpoint:** `POST /chat/completions`
   - **Description:** Generate chat-based text completions using GPT models.
   - **Supports Streaming:** Yes
   - **Compatible Models:** GPT-3.5 series, GPT-4 series, and others.
   - **Response Format:** JSON or Streaming Response (as per OpenAI API)

4. **Audio Speech Generation**
   - **Endpoint:** `POST /audio/speech`
   - **Description:** Generate speech audio from text input using Text-to-Speech models.
   - **Response Format:** Streaming audio (`audio/mpeg`), compatible with OpenAI's audio response format.

5. **Image Generation**
   - **Endpoint:** `POST /images/generations`
   - **Description:** Generate images based on text prompts.
   - **Compatible Models:** `dall-e-2`, `dall-e-3`
   - **Response Format:** JSON compatible with OpenAI's image generation response.

---

## Endpoint Details

### 1. Get List of Available Models

**Endpoint:**

```
GET /models
```

**Description:**

Retrieves a list of all available models that can be used with this API. The response structure is compatible with OpenAI's model listing.

**Response Example:**

```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4-turbo-2024-04-09",
      "object": "model",
      "created": 1712601677,
      "owned_by": "DevsDoCode"
    },
    {
      "id": "tts-1-1106",
      "object": "model",
      "created": 1699053241,
      "owned_by": "DevsDoCode"
    },
    // ... other models
  ]
}
```
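As a quick sanity check, this list can be fetched with a plain `requests` call and the model IDs printed. A minimal sketch, assuming the response matches the structure shown above (no API key is required at the time of writing):

```python
import requests

BASE_URL = "https://devsdocode-openai.hf.space"

def list_models():
    # No authentication header is required at the time of writing
    response = requests.get(f"{BASE_URL}/models")
    response.raise_for_status()
    # Each entry follows OpenAI's model object shape shown in the example above
    for model in response.json()["data"]:
        print(model["id"], "-", model["owned_by"])
```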
---

### 2. Get API Information

**Endpoint:**

```
GET /about
```

**Description:**

Provides detailed information about the API service, including version, description, developer info, and social links.

**Response Example:**

```json
{
  "name": "Unofficial OpenAI API",
  "version": "1.0.1",
  "description": "An unofficial OpenAI API service compatible with OpenAI API endpoints",
  "developer": "Sreejan (DevsDoCode)",
  "owner": "Sreejan",
  "last_updated": "2024-11-06",
  "originator": "Mr Leader",
  "originator_info": {
    "name": "Mr Leader",
    "expertise": ["Python Development", "Reverse Engineering"],
    "youtube": "https://www.youtube.com/@mr_leaderyt",
    "contribution": "Original founder of this API & has reverse-engineered it"
  },
  "social_links": {
    "telegram": "https://t.me/DevsDoCode",
    "youtube": "https://www.youtube.com/@DevsDoCode",
    "instagram": "https://www.instagram.com/sree.shades_/",
    "github": "https://github.com/SreejanPesonal"
  },
  "credits": "Special thanks to Mr Leader for the foundational work and expertise in finding this API",
  "report": "To report any bugs, please contact the owner of this API on either Telegram or Instagram",
  "validity": "This API is currently open and free to use until further announcement. The service is hosted on both Railway (1-month free tier) and Hugging Face (continuous free tier). Please note that the Railway API endpoint may be discontinued after the free tier expires in one month. However, the Hugging Face endpoint will remain continuously available as it is hosted on their free tier Spaces using Docker. For uninterrupted service, we recommend using the Hugging Face endpoint for long-term usage.",
  "maintained_by": "DevsDoCode"
}
```

---

### 3. Chat Completions

**Endpoint:**

```
POST /chat/completions
```

**Description:**

Generates chat-based text completions using GPT models. Supports both streaming and non-streaming responses. Fully compatible with OpenAI's `/v1/chat/completions` endpoint.

**Important Note:** While the API endpoints and response formats are compatible with OpenAI's API, it is currently **not optimized for use with the OpenAI Python SDK**. Users should interact with the API using standard HTTP requests.

**Request Parameters:**

- **model** (string): ID of the model to use (e.g., `gpt-4o-mini-2024-07-18`).
- **messages** (array): A list of message objects in the conversation.
- **temperature** (float, optional): Sampling temperature between 0 and 2.
- **top_p** (float, optional): Nucleus sampling probability.
- **stream** (boolean, optional): If true, the response will be streamed.
- **presence_penalty** (float, optional): Penalty for new topics.
- **frequency_penalty** (float, optional): Penalty for repetition.

#### Non-Streaming Request Example

**Request:**

```json
{
  "model": "gpt-4o-mini-2024-07-18",
  "messages": [
    {"role": "user", "content": "How many 'r's are there in Strawberry?"}
  ],
  "temperature": 0.5,
  "top_p": 1,
  "stream": false
}
```

**Response Example:**

```json
{
  "id": "chatcmpl-xxxxxxxxxxxxxx",
  "object": "chat.completion",
  "created": 1664139435,
  "model": "gpt-4o-mini-2024-07-18",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "There are three 'r's in 'Strawberry'."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 9,
    "total_tokens": 24
  }
}
```

#### Streaming Request Example

**Request:**

```json
{
  "model": "gpt-4o-mini-2024-07-18",
  "messages": [
    {"role": "user", "content": "Write 10 lines on India"}
  ],
  "temperature": 0.5,
  "top_p": 1,
  "stream": true
}
```

**Response Example:**

Streaming responses are sent as a series of data chunks. Each chunk contains a JSON structure.

Example chunk:

```
data: {"choices": [{"delta": {"content": "India, "}, "index": 0, "finish_reason": null}]}
data: {"choices": [{"delta": {"content": "a "}, "index": 0, "finish_reason": null}]}
data: {"choices": [{"delta": {"content": "land "}, "index": 0, "finish_reason": null}]}
data: {"choices": [{"delta": {"content": "of "}, "index": 0, "finish_reason": null}]}
data: {"choices": [{"delta": {"content": "diversity,"}, "index": 0, "finish_reason": null}]}
...
```
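Each chunk is a standalone JSON object behind the `data: ` prefix, so a client can rebuild the full reply by stripping that prefix, parsing the JSON, and concatenating the `delta.content` pieces. A minimal sketch of that approach, assuming the stream ends with a `data: [DONE]` sentinel as in OpenAI's streaming format:

```python
import json
import requests

def stream_chat_reply():
    url = "https://devsdocode-openai.hf.space/chat/completions"
    payload = {
        "model": "gpt-4o-mini-2024-07-18",
        "messages": [{"role": "user", "content": "Write 10 lines on India"}],
        "temperature": 0.5,
        "top_p": 1,
        "stream": True
    }
    reply_parts = []
    with requests.post(url, json=payload, stream=True) as response:
        for line in response.iter_lines():
            if not line:
                continue
            data = line.decode("utf-8").strip()
            if data.startswith("data: "):
                data = data[len("data: "):]  # strip the SSE-style prefix shown above
            if data == "[DONE]":  # assumed end-of-stream sentinel, as in OpenAI's format
                break
            chunk = json.loads(data)
            reply_parts.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(reply_parts)
```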
---

### 4. Audio Speech Generation

**Endpoint:**

```
POST /audio/speech
```

**Description:**

Generates speech audio from text input using Text-to-Speech (TTS) models. The response is streamed as audio data, compatible with OpenAI's audio response format.

**Request Parameters:**

- **model** (string): ID of the TTS model to use (e.g., `tts-1-hd-1106`).
- **input** (string): The text input to be converted to speech.
- **voice** (string, optional): The voice style to use. Available options:
  - `nova`
  - `echo`
  - `fable`
  - `onyx`
  - `shimmer`
  - `alloy`

**Request Example:**

```json
{
  "model": "tts-1-hd-1106",
  "voice": "nova",
  "input": "I love you & I can't wait to spend time together!"
}
```

**Response:**

The response is a streaming audio file in `audio/mpeg` format.

**Usage Example:**

```python
import requests

def download_audio_speech():
    url = "https://devsdocode-openai.hf.space/audio/speech"
    payload = {
        "model": "tts-1-hd-1106",
        "voice": "nova",
        "input": "I love you & I can't wait to spend time together!"
    }
    response = requests.post(url, json=payload, stream=True)
    with open("speech.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved as speech.mp3")
```

---

### 5. Image Generation

**Endpoint:**

```
POST /images/generations
```

**Description:**

Generates images based on text prompts using DALL·E models. Fully compatible with OpenAI's `/v1/images/generations` endpoint.

**Request Parameters:**

- **model** (string): ID of the image generation model to use (e.g., `dall-e-3`).
- **prompt** (string): The text prompt describing the desired image.
- **n** (integer, optional): Number of images to generate (default is 1).
- **size** (string, optional): Size of the generated images (`256x256`, `512x512`, `1024x1024`).
- **response_format** (string, optional): Format of the response (`url` or `b64_json`).
- **quality** (string, optional): Quality of the image (`hd`).

**Request Example:**

```json
{
  "model": "dall-e-3",
  "prompt": "A futuristic cityscape with flying cars and tall skyscrapers.",
  "n": 1,
  "size": "1024x1024",
  "response_format": "url",
  "quality": "hd"
}
```

**Response Example:**

```json
{
  "created": 1664139435,
  "data": [
    {
      "url": "https://devsdocode-openai.hf.space/generated_images/image1.png"
    }
  ]
}
```

---

### 6. GPT-4O Audio Preview Models

We are excited to introduce the **gpt-4o-audio-preview** family of models. These models allow for an enhanced interaction experience by providing audio responses along with text. **This feature is rare and not available for free elsewhere**, and we are pleased to offer it to our users.

**Available Models:**

- `gpt-4o-audio-preview-2024-10-01`
- `gpt-4o-audio-preview`

**Description:**

The `gpt-4o-audio-preview` models generate audio responses in addition to text completions. When you use these models, the API returns a streaming response that includes both text and audio data.

**Note:** This functionality is currently in preview, and we welcome any feedback or contributions to improve it.

**Usage Example:**
```python
import requests
import base64

def get_audio_text_completion():
    url = "https://devsdocode-openai.hf.space/chat/completions"
    payload = {
        "messages": [{"role": "user", "content": "You know I follow Devs Do Code on YT for Coding. He is a legend."}],
        "model": "gpt-4o-audio-preview-2024-10-01",
        "modalities": ["text", "audio"],
        "audio": {"voice": "fable", "format": "wav"},
        "temperature": 0.9,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "top_p": 1
    }

    response = requests.post(url, json=payload)
    print("Status Code:", response.status_code)

    if response.ok:
        data = response.json()
        try:
            if "choices" in data and data["choices"]:
                message = data["choices"][0].get("message", {})

                if "content" in message:
                    print("Text response:", message["content"])

                if "audio" in message and "data" in message["audio"]:
                    # Decode the base64-encoded WAV audio returned alongside the text
                    wav_bytes = base64.b64decode(message["audio"]["data"])
                    print("Transcript:", message["audio"]["transcript"])

                    output_file = "chat.wav"
                    with open(output_file, "wb") as f:
                        f.write(wav_bytes)
                    print(f"\nAudio saved to {output_file}")
                else:
                    print("No audio data found in the response.")
        except Exception as e:
            print(f"An error occurred: {e}")
    else:
        print("Request failed with status code:", response.status_code)
        print("Response content:", response.text)
```

**Note:** The exact handling of the audio and text data may vary depending on the API format. We recommend experimenting and adjusting the code as needed.

---

### 7. GPT-4O Real-Time Models

We are also introducing the **gpt-4o-realtime** family of models, which support real-time interaction capabilities similar to OpenAI's official real-time API.

**Available Models:**

- `gpt-4o-realtime-preview-2024-10-01`
- `gpt-4o-realtime-preview`

**Description:**

The `gpt-4o-realtime` models are designed for applications requiring immediate response generation, enabling a more interactive and dynamic experience.

**Usage Notes:**

- Currently, the code examples and documentation for these models are in development and will be updated soon.
- If you are familiar with using OpenAI's official real-time API, you can attempt to use these models similarly.
- **We encourage users who successfully utilize these models to contribute code examples and improvements to our GitHub repository.**

**Contribution Invitation:**

If you discover a successful method for utilizing the `gpt-4o-realtime` models, please consider contributing to our GitHub repository. Your contributions will help enhance the documentation and assist other users.

---
## Available Models

### GPT Models

- **GPT-4 Series:**
  - `gpt-4o-realtime-preview-2024-10-01`
  - `gpt-4o-audio-preview-2024-10-01`
  - `gpt-4o-mini-2024-07-18`
  - `gpt-4o-mini`
  - `gpt-4o`
  - `gpt-4-turbo-2024-04-09`
  - `gpt-4-turbo`

- **GPT-3.5 Series:**
  - `gpt-3.5-turbo-1106`
  - `gpt-3.5-turbo-0613`
  - `gpt-3.5-turbo`
  - `gpt-3.5-turbo-0125`
  - `gpt-3.5-turbo-instruct-0914`
  - `gpt-3.5-turbo-instruct`

### Text-to-Speech Models

- `tts-1-hd-1106`
- `tts-1-hd`
- `tts-1-1106`
- `tts-1`

### Image Generation Models

- `dall-e-3`
- `dall-e-2`

### Audio-Capable Models

- `gpt-4o-audio-preview-2024-10-01`
- `gpt-4o-audio-preview`

### Real-Time Models

- `gpt-4o-realtime-preview-2024-10-01`
- `gpt-4o-realtime-preview`

### Embedding Models

- `text-embedding-3-large`
- `text-embedding-3-small`
- `text-embedding-ada-002`

---

## Parameters

### Common Parameters

| Parameter             | Type    | Description                                                                               |
|-----------------------|---------|-------------------------------------------------------------------------------------------|
| **model**             | string  | ID of the model to use.                                                                   |
| **messages**          | array   | Array of message objects, each with `role` and `content`.                                 |
| **temperature**       | float   | Sampling temperature between 0 and 2. Higher values make the output more random.          |
| **top_p**             | float   | Nucleus sampling probability. An alternative to using temperature.                        |
| **stream**            | boolean | If true, responses are streamed back as they are generated.                               |
| **presence_penalty**  | float   | Penalizes new tokens based on whether they appear in the text so far. Range: -2.0 to 2.0. |
| **frequency_penalty** | float   | Penalizes new tokens based on their frequency in the text so far. Range: -2.0 to 2.0.     |

### Audio Speech Parameters

| Parameter  | Type   | Description                                             |
|------------|--------|---------------------------------------------------------|
| **input**  | string | The text input to be converted to speech.               |
| **voice**  | string | The voice style to use for speech synthesis.            |
| **format** | string | The audio file format (`mp3`, `wav`). Default is `mp3`. |

### Image Generation Parameters

| Parameter           | Type    | Description                                           |
|---------------------|---------|-------------------------------------------------------|
| **prompt**          | string  | The text prompt to generate the image from.           |
| **n**               | integer | Number of images to generate. Default is 1.           |
| **size**            | string  | Image dimensions (`256x256`, `512x512`, `1024x1024`). |
| **response_format** | string  | Format of the response (`url`, `b64_json`).           |
| **quality**         | string  | Quality of the generated images (`hd`).               |

---

## Usage Examples

### 1. Chat Completion (Non-Streaming)
```python
import requests

def get_chat_completion():
    url = "https://devsdocode-openai.hf.space/chat/completions"
    payload = {
        "model": "gpt-4o-mini-2024-07-18",
        "messages": [
            {"role": "user", "content": "Tell me a joke about programmers."}
        ],
        "temperature": 0.7,
        "top_p": 1,
        "stream": False
    }
    response = requests.post(url, json=payload)
    if response.ok:
        result = response.json()
        print(result["choices"][0]["message"]["content"])
    else:
        print("Request failed:", response.text)
```

### 2. Chat Completion (Streaming)

```python
import requests

def get_chat_completion_streaming():
    url = "https://devsdocode-openai.hf.space/chat/completions"
    payload = {
        "model": "gpt-4o-mini-2024-07-18",
        "messages": [
            {"role": "user", "content": "Write a poem about the sea."}
        ],
        "temperature": 0.7,
        "top_p": 1,
        "stream": True
    }
    with requests.post(url, json=payload, stream=True) as response:
        for chunk in response.iter_lines():
            if chunk:
                data = chunk.decode('utf-8').strip()
                # Strip the "data: " prefix shown in the chunk format above
                if data.startswith("data: "):
                    data = data[len("data: "):]
                if data == "[DONE]":
                    break
                print(data)
```

### 3. Audio Speech Generation

```python
import requests

def generate_audio_speech():
    url = "https://devsdocode-openai.hf.space/audio/speech"
    payload = {
        "model": "tts-1-hd-1106",
        "voice": "echo",
        "input": "Hello, this is a test of the speech synthesis."
    }
    response = requests.post(url, json=payload, stream=True)
    with open("output.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved as output.mp3")
```

### 4. Image Generation

```python
import requests

def generate_image():
    url = "https://devsdocode-openai.hf.space/images/generations"
    payload = {
        "model": "dall-e-3",
        "prompt": "A serene landscape of mountains during sunset.",
        "n": 1,
        "size": "1024x1024",
        "response_format": "url",
        "quality": "hd"
    }
    response = requests.post(url, json=payload)
    if response.ok:
        # The response body is already JSON; no extra json.loads is needed
        image_url = response.json()["data"][0]["url"]
        print("Image URL:", image_url)
    else:
        print("Request failed:", response.text)
```
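The `response_format` parameter also accepts `b64_json` (see the Image Generation Parameters table above). A hedged sketch of saving such a response to disk, assuming the base64 payload is returned under a `b64_json` key in each `data` item, mirroring OpenAI's response format:

```python
import base64
import requests

def generate_image_b64():
    url = "https://devsdocode-openai.hf.space/images/generations"
    payload = {
        "model": "dall-e-3",
        "prompt": "A serene landscape of mountains during sunset.",
        "n": 1,
        "size": "1024x1024",
        "response_format": "b64_json",
        "quality": "hd"
    }
    response = requests.post(url, json=payload)
    if response.ok:
        # Assumes each data item carries a "b64_json" field, as in OpenAI's format
        b64_data = response.json()["data"][0]["b64_json"]
        with open("generated.png", "wb") as f:  # file extension depends on what the service returns
            f.write(base64.b64decode(b64_data))
        print("Image saved as generated.png")
    else:
        print("Request failed:", response.text)
```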
### 5. GPT-4O Audio Preview Models

```python
import requests
import base64

def get_audio_text_completion():
    url = "https://devsdocode-openai.hf.space/chat/completions"
    payload = {
        "messages": [{"role": "user", "content": "You know I follow Devs Do Code on YT for Coding. He is a legend."}],
        "model": "gpt-4o-audio-preview-2024-10-01",
        "modalities": ["text", "audio"],
        "audio": {"voice": "fable", "format": "wav"},
        "temperature": 0.9,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "top_p": 1
    }

    response = requests.post(url, json=payload)
    print("Status Code:", response.status_code)

    if response.ok:
        data = response.json()
        try:
            if "choices" in data and data["choices"]:
                message = data["choices"][0].get("message", {})

                if "content" in message:
                    print("Text response:", message["content"])

                if "audio" in message and "data" in message["audio"]:
                    # Decode the base64-encoded WAV audio returned alongside the text
                    wav_bytes = base64.b64decode(message["audio"]["data"])
                    print("Transcript:", message["audio"]["transcript"])

                    output_file = "chat.wav"
                    with open(output_file, "wb") as f:
                        f.write(wav_bytes)
                    print(f"\nAudio saved to {output_file}")
                else:
                    print("No audio data found in the response.")
        except Exception as e:
            print(f"An error occurred: {e}")
    else:
        print("Request failed with status code:", response.status_code)
        print("Response content:", response.text)
```

### 6. GPT-4O Real-Time Models

**Note:** Code examples for the `gpt-4o-realtime` models will be provided soon. If you have experience using OpenAI's official real-time API, you may attempt to use these models similarly. **We encourage you to contribute working code examples to our GitHub repository.**

---

## Error Handling

The API returns errors in a format compatible with OpenAI's API. Error responses include an HTTP status code and a JSON body with details.

**Error Response Example:**

```json
{
  "error": {
    "message": "The requested model does not exist",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
```

**Common Error Codes:**

- `400 Bad Request`: The request was invalid or cannot be served.
- `401 Unauthorized`: Authentication failed or user does not have permissions.
- `404 Not Found`: The requested resource does not exist.
- `429 Too Many Requests`: Rate limit exceeded.
- `500 Internal Server Error`: An error occurred on the server.

---

## Rate Limits

- **Hosting Details:** The service is hosted on Hugging Face Spaces (primary) and Railway's free tier (secondary).
- **Usage Guidelines:** Please use the API responsibly. Excessive requests may lead to rate limiting or temporary bans.
- **Recommended Use:** Implement client-side rate limiting and exponential backoff strategies to handle potential `429 Too Many Requests` responses (a sketch follows below).
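
A minimal, illustrative sketch of that pattern, retrying on `429 Too Many Requests` with exponential backoff and printing the OpenAI-style error body described above; the retry count and delays are arbitrary example values, not limits published by the service:

```python
import time
import requests

def post_with_backoff(url, payload, max_retries=5, base_delay=1.0):
    """POST with exponential backoff on 429 Too Many Requests."""
    for attempt in range(max_retries):
        response = requests.post(url, json=payload)
        if response.status_code != 429:
            break
        # Back off 1s, 2s, 4s, ... before retrying (illustrative values)
        time.sleep(base_delay * (2 ** attempt))
    if not response.ok:
        try:
            # Error bodies follow the OpenAI-compatible format shown in Error Handling
            error = response.json()["error"]
            print(f"{response.status_code} {error['type']}: {error['message']}")
        except (ValueError, KeyError):
            print(response.status_code, response.text)
    return response
```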
---

## Support and Contact

For support, bug reports, or inquiries, please reach out through the following channels:

### Developer and Maintainer:

- **Name:** DevsDoCode (Sreejan)
- **Telegram:** [@DevsDoCode](https://t.me/DevsDoCode)
- **Instagram:** [@sree.shades_](https://www.instagram.com/sree.shades_/)
- **YouTube:** [DevsDoCode](https://www.youtube.com/@DevsDoCode)
- **GitHub:** [SreejanPesonal](https://github.com/SreejanPesonal)

### Original Founder:

- **Name:** Mr Leader
- **YouTube:** [@mr_leaderyt](https://www.youtube.com/@mr_leaderyt)

---

## Credits and Acknowledgments

- **Mr Leader**
  - **Contribution:** Original founder and reverse engineer of the API.
  - **Expertise:** Python Development, Reverse Engineering.
  - **Note:** Special thanks for the foundational work and expertise in establishing this API.

- **DevsDoCode (Sreejan)**
  - **Contribution:** Current maintainer and developer.
  - **Expertise:** Enhancements, additional features, ongoing maintenance.

---

## Changelog

### Version 1.0.1

- **Date:** 2024-11-06
- **Changes:**
  - Added `/audio/speech` endpoint for Text-to-Speech functionality.
  - Added `/images/generations` endpoint for image generation.
  - Introduced **gpt-4o-audio-preview** and **gpt-4o-realtime** model families.
  - Updated available models in `/models` endpoint.
  - Improved API compatibility with OpenAI's API.

### Version 1.0.0

- **Date:** 2023-11-03
- **Initial Release**

---

## License and Legal Notice

This API service is provided for educational and testing purposes. Users are responsible for ensuring compliance with all applicable laws and regulations when using the service.

- **Disclaimer:** The service is unofficial and not affiliated with OpenAI.
- **Usage:** By using this API, you agree to use it responsibly and not for any malicious activities.

---

## Frequently Asked Questions (FAQ)

**Q1:** *Is this API compatible with OpenAI's official API clients and libraries?*

**A:** The API endpoints and response formats are designed to be compatible with OpenAI's API. However, it is currently **not optimized for use with the OpenAI Python SDK**. Users should interact with the API using standard HTTP requests.

**Q2:** *Do I need an API key to use this service?*

**A:** Currently, no API key or authentication token is required. However, this may change in the future, and users should follow any updates.

**Q3:** *What are the limitations of this API service?*

**A:** The service is hosted on platforms with free-tier limitations, so performance and availability may vary. Keep these constraints in mind for critical applications.

**Q4:** *Can I use this API for commercial purposes?*

**A:** The API is provided primarily for educational and testing purposes. Commercial use is not officially supported, and users should proceed with caution and ensure compliance with any applicable terms and regulations.

---
## Final Notes

While this API aims to provide a compatible and accessible interface for AI model interactions, users should be aware of the following:

- **Stability:** Being hosted on free-tier services may affect reliability.
- **Updates:** The API may undergo changes. Users should keep an eye on the `/about` endpoint for the latest information.
- **Ethical Use:** Please ensure that your use of the API aligns with ethical guidelines and does not promote harm.

---

## Feedback

Your feedback is valuable. If you encounter any issues or have suggestions for improvements, please reach out via the contact channels provided above.

Thank you for using the UN-OFFICIAL OPENAI API Service!

---