├── LICENSE ├── README.md ├── api ├── chat2api.py ├── files.py ├── models.py └── tokens.py ├── app.py ├── chatgpt ├── ChatService.py ├── authorization.py ├── chatFormat.py ├── chatLimit.py ├── fp.py ├── proofofWork.py ├── refreshToken.py ├── turnstile.py └── wssClient.py ├── gateway ├── admin.py ├── auth.py ├── backend.py ├── chatgpt.py ├── gpts.py ├── reverseProxy.py ├── route.py ├── share.py └── v1.py ├── requirements.txt ├── templates ├── admin.html ├── base.html ├── callback.html ├── chatgpt_context.json ├── gpts_context.json ├── home.html ├── signin.html ├── signout.html └── static │ ├── favicon-dashboard.png │ ├── favicon-dashboard.svg │ └── favicon.ico ├── utils ├── Client.py ├── configs.py ├── globals.py ├── kv_utils.py ├── log.py └── retry.py └── version.txt /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 黑猫警长1000000000 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | > [!NOTE] 2 | > This project is not (fully) compatible with the original project, so do not try a "drop-in replacement" to switch to it. 3 | > This project is for preview only — it is still far from a stable release! 4 | > In short: just don't do a drop-in replacement. 5 | 6 | # FlowGPT 7 | 8 | A simple and beautiful ChatGPT proxy. 9 | 10 | Supports conversations with the latest `reasoning` series models as well as the classic models. 11 | 12 | This project is based on [Chat2Api](https://github.com/lanqian528/chat2api), but focuses more on the comfort of the UI. 13 | 14 | The project's API response format stays consistent with the original API, for compatibility with more clients. 15 | 16 | ## Features 17 | 18 | ### Chat UI 19 | Prettier now~ 20 | - Supports a mirror of the official site's legacy UI. (Keeping the UI in sync is currently not possible) 21 | - Supports opening a debug sidebar (via the (…) menu of a chat history entry) 22 | - Supports the full model lineup as well as `GPTs`. 23 | - Supports entering a `RefreshToken` for direct use. 24 | - Supports proxied Gravatar avatars. (Shown only after registering an avatar with Gravatar) 25 | - Supports token management; tokens can be uploaded to the backend pool 26 | - Supports drawing a random account from the backend pool: enter a `SeedToken` to use a random account. 27 | - Account actions - `/auth/(login | logout)` 28 | - A nicer interface 29 | - When signing out, you can cancel to prevent accidental sign-outs. 30 | - Keeping the `token` cookie is no longer supported; confirming sign-out deletes the cookie. 31 | - Quick account actions (API) - `/api/auth/(signin | signout)/?(signin | signout)=XXX` 32 | - Sign in - `/api/auth/signin/?signin=XXX`. 33 | XXX can be an `AccessToken`, a `RefreshToken`, or a `SeedToken` (random draw) 34 | - Sign out - `/api/auth/signout/?signout=true` 35 | Visiting it deletes the locally stored `token` cookie. 36 | - Backend - `/backend-(api | anon)/` 37 | - When using a shared account (`SeedToken`), `sensitive-information endpoints` and `some settings endpoints` return empty values, effectively disabling them. 38 | 39 | 40 | ### Chat API 41 | Much the same as the original project 42 | - Supports the `v1/models` endpoint 43 | - Supports streaming and non-streaming transfer 44 | - Supports advanced features such as image generation, code, and web browsing 45 | - Supports reasoning-process output for the `reasoning` series models 46 | - Supports GPTs (pass the model name: gizmo-g-*) 47 | - Supports Team accounts (requires passing the Team Account ID) 48 | - Supports uploading images and files (same format as the OpenAI API; URL and base64 supported) 49 | - Supports downloading files (requires history to be enabled) 50 | - Supports use as a gateway and multi-machine distributed deployment 51 | - Supports multi-account round-robin; accepts an `AccessToken` or `RefreshToken` 52 | - Supports automatic retry on failure, automatically rotating to the next token 53 | - Supports scheduled refresh of the `AccessToken` using the `RefreshToken` 54 | - A non-forced refresh of all tokens runs at every startup, and a forced refresh of all tokens runs at 3 a.m. every 4 days. 55 | 56 | ### Unlikely TODO 57 | - [ ] Support real-time syncing with the official UI 58 | 59 | ## Usage 60 | 61 | ### Environment variables 62 | 63 | Every environment variable has a default value. If you don't know what a variable means, don't set it, and never pass an empty value; strings need no quotes. 64 | 65 | When running outside Docker, you can define environment variables by manually creating a .env file in the project directory. 66 | 67 | | Category | Variable | Type | Default | Description | 68 | |------|-------|------|--------|-----| 69 | | Security | API_PREFIX | String | / | API prefix | 70 | | Security | AUTHORIZATION | Array | / | Authorization code(s) for multi-account round-robin | 71 | | Security | AUTH_KEY | String | / | `auth_key` request header | 72 | | Request | CHATGPT_BASE_URL | String | `https://chatgpt.com` | ChatGPT official site URL | 73 | | Request | PROXY_URL | String/Array | / | Global proxy URL | 74 | | Request | EXPORT_PROXY_URL | String | / | Egress proxy URL | 75 | | Feature | HISTORY_DISABLED | Boolean | `true` | Disable chat history for API requests | 76 | | Feature | POW_DIFFICULTY | String | `00003a` | Proof-of-work difficulty to solve | 77 | | Feature | RETRY_TIMES | Integer | `3` | Number of retries on error | 78 | | Feature | CONVERSATION_ONLY | Boolean | `false` | Use the conversation endpoint directly, skipping PoW | 79 | | Feature | ENABLE_LIMIT | Boolean | `true` | Respect the official rate limits | 80 | | Feature | UPLOAD_BY_URL | Boolean | `false` | Chat in the form `URL + space + body` | 81 | | Feature | SCHEDULED_REFRESH | Boolean | `false` | Refresh the `AccessToken` on a schedule | 82 | | Feature | RANDOM_TOKEN | Boolean | `true` | Pick backend `Token`s at random | 83 | | Gateway | PORT | Integer | `5005` | Service listening port | 84 | | Gateway | ENABLE_GATEWAY | Boolean | `false` | Enable gateway mode | 85 | | Gateway | ENABLE_HOMEPAGE | Boolean | `false` | Show the gateway homepage as the logged-out official site | 86 | | Gateway | AUTO_SEED | Boolean | `true` | Enable random-account mode | 87 | 88 | Space is limited (read: laziness), so see the original project for more detailed variable descriptions. 89 | 90 | #### ENABLE_HOMEPAGE 91 | When enabled, redirecting to the login page may leak the proxy's country; when disabled, a bare-bones homepage is shown instead. 92 | 93 | #### AUTHORIZATION 94 | This variable is an authorization code you set for FlowGPT. Once set, it allows round-robin use of the accounts in the token pool; pass it as the token when making requests. 95 | 96 | ### Admin panel 97 | 1. Run the program. 98 | 2. If the `API_PREFIX` environment variable is set, first visit `/(API_PREFIX)` to set the cookie. 99 | - You will then be redirected to the `/admin` management page automatically 100 | - Keep your `API_PREFIX` secret safe to prevent it from falling into the wrong hands 101 | 3. Visit `/admin` to view the current number of tokens, upload new tokens, or clear all tokens. 102 | 4. Pass the "authorization code" in requests to chat using the round-robin tokens 103 | 104 | ### Chat UI 105 | 1. Set the environment variable `ENABLE_GATEWAY` to `true`, then run the program. 106 | 2. Visit the `/auth/login` page and enter a token. 107 | - To share an account with others, upload a `RefreshToken` or `AccessToken` on the admin page; others can then use it at random via a seed token. 108 | 3. Once signed in, you are ready to go. 109 | 110 | ### Chat API 111 | Fully compatible with the "OpenAI API" response format; pass an `AccessToken` or `RefreshToken` to use it 112 | ```bash 113 | curl "http://127.0.0.1:5005/v1/chat/completions" \ 114 | -H "Content-Type: application/json" \ 115 | -H "Authorization: Bearer (token)" \ 116 | -d '{ 117 | "model": "gpt-4o-mini", 118 | "messages": [ 119 | { 120 | "role": "system", 121 | "content": "You are a helpful assistant." 122 | }, 123 | { 124 | "role": "user", 125 | "content": "Write a haiku that explains the concept of recursion." 126 | } 127 | ] 128 | }' 129 | ``` 130 | Pass your account's token as `(X-)Authorization: (token)`. Alternatively, pass the value of the `AUTHORIZATION` environment variable you set, and an account will be chosen round-robin from the backend pool. 131 | 132 | #### Getting an access token (`AccessToken`) 133 | Sign in to the ChatGPT official site, then open [https://chatgpt.com/api/auth/session](https://chatgpt.com/api/auth/session); the value of `accessToken` is your AccessToken. 134 | #### Getting a refresh token (`RefreshToken`) 135 | Requires installing the ChatGPT app on an Apple device; search online for details. 136 | #### Common error codes 137 | - `401` - The current IP does not support login-free access. Try a different IP, set a proxy via the `PROXY_URL` environment variable, or check that your authentication succeeded. 138 | - `403` - Check the logs for the specific error message. 139 | - `429` - The current IP has exceeded the hourly request limit. Try again later or switch IPs. 140 | - `500` - Internal server error; the request failed. 141 | - `502` - Bad gateway, or the network is unavailable. Try a different network environment. 142 | 143 | ## Deployment 144 | 145 | ### Standard deployment 146 | ```bash 147 | git clone https://github.com/hmjz100/FlowGPT 148 | cd FlowGPT 149 | pip install -r requirements.txt 150 | python app.py 151 | ``` 152 | 153 | ### Docker 154 | Not possible for now… 155 | 156 | ## License 157 | MIT License 158 | 159 | -------------------------------------------------------------------------------- /api/chat2api.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import types 3 | import json 4 | 5 | from apscheduler.schedulers.asyncio import AsyncIOScheduler 6 | from fastapi import Request, HTTPException, Form, Security 7 | from fastapi.responses import Response, StreamingResponse, JSONResponse 8 | from
fastapi.security import HTTPAuthorizationCredentials 9 | from starlette.background import BackgroundTask 10 | 11 | from gateway.reverseProxy import get_real_req_token 12 | 13 | import utils.globals as globals 14 | from app import app, templates, security_scheme 15 | from gateway.reverseProxy import web_reverse_proxy 16 | from chatgpt.ChatService import ChatService 17 | from chatgpt.authorization import refresh_all_tokens 18 | from utils.log import log 19 | from utils.configs import api_prefix, scheduled_refresh 20 | from utils.retry import async_retry 21 | 22 | scheduler = AsyncIOScheduler() 23 | 24 | @app.on_event("startup") 25 | async def app_start(): 26 | if scheduled_refresh: 27 | scheduler.add_job(id='refresh', func=refresh_all_tokens, trigger='cron', hour=3, minute=0, day='*/2', kwargs={'force_refresh': True}) 28 | scheduler.start() 29 | asyncio.get_event_loop().call_later(0, lambda: asyncio.create_task(refresh_all_tokens(force_refresh=False))) 30 | 31 | async def to_send_conversation(request_data, req_token): 32 | chat_service = ChatService(req_token) 33 | try: 34 | await chat_service.set_dynamic_data(request_data) 35 | await chat_service.get_chat_requirements() 36 | return chat_service 37 | except HTTPException as e: 38 | await chat_service.close_client() 39 | raise HTTPException(status_code=e.status_code, detail=e.detail) 40 | except Exception as e: 41 | await chat_service.close_client() 42 | log.error(f"Server error, {str(e)}") 43 | raise HTTPException(status_code=500, detail="Server error") 44 | 45 | async def process(request_data, req_token): 46 | chat_service = await to_send_conversation(request_data, req_token) 47 | await chat_service.prepare_send_conversation() 48 | res = await chat_service.send_conversation() 49 | return chat_service, res 50 | 51 | @app.post(f"/{api_prefix}/v1/chat/completions" if api_prefix else "/v1/chat/completions") 52 | async def send_conversation(request: Request, credentials: HTTPAuthorizationCredentials = 
Security(security_scheme)): 53 | req_token = credentials.credentials 54 | 55 | try: 56 | request_data = await request.json() 57 | except Exception: 58 | raise HTTPException(status_code=400, detail={"error": "Invalid JSON body"}) 59 | chat_service, res = await async_retry(process, request_data, req_token) 60 | try: 61 | if isinstance(res, types.AsyncGeneratorType): 62 | background = BackgroundTask(chat_service.close_client) 63 | return StreamingResponse(res, media_type="text/event-stream", background=background) 64 | else: 65 | background = BackgroundTask(chat_service.close_client) 66 | return JSONResponse(res, media_type="application/json", background=background) 67 | except HTTPException as e: 68 | await chat_service.close_client() 69 | if e.status_code == 500: 70 | log.error(f"Server error, {str(e)}") 71 | raise HTTPException(status_code=500, detail="Server error") 72 | raise HTTPException(status_code=e.status_code, detail=e.detail) 73 | except Exception as e: 74 | await chat_service.close_client() 75 | log.error(f"Server error, {str(e)}") 76 | raise HTTPException(status_code=500, detail="Server error") 77 | 78 | @app.get(f"/{api_prefix}/v1/models" if api_prefix else "/v1/models") 79 | async def get_models(request: Request, credentials: HTTPAuthorizationCredentials = Security(security_scheme)): 80 | if not credentials.credentials: 81 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 82 | 83 | res = await web_reverse_proxy(request, f"/backend-api/models") 84 | 85 | if res and res.status_code == 200 and json.loads(res.body.decode()): 86 | data = json.loads(res.body.decode()) 87 | exclude_models = {"auto", "gpt-4o-canmore"} 88 | filtered_models = [ 89 | { 90 | "id": category["default_model"], 91 | "object": "model", 92 | "owned_by": "FlowGPT" 93 | } 94 | for category in data.get("categories", []) 95 | if category.get("default_model") not in exclude_models 96 | ] 97 | models = { 98 | 
"data": filtered_models 99 | } 100 | return Response(content=json.dumps(models, indent=4), media_type="application/json") 101 | else: 102 | return res -------------------------------------------------------------------------------- /api/files.py: -------------------------------------------------------------------------------- 1 | import io 2 | 3 | import pybase64 4 | from PIL import Image 5 | 6 | from utils.Client import Client 7 | from utils.configs import export_proxy_url, cf_file_url 8 | 9 | async def get_file_content(url): 10 | if url.startswith("data:"): 11 | mime_type, base64_data = url.split(';')[0].split(':')[1], url.split(',')[1] 12 | file_content = pybase64.b64decode(base64_data) 13 | return file_content, mime_type 14 | else: 15 | client = Client() 16 | try: 17 | if cf_file_url: 18 | body = {"file_url": url} 19 | r = await client.post(cf_file_url, timeout=60, json=body) 20 | else: 21 | r = await client.get(url, proxy=export_proxy_url, timeout=60) 22 | if r.status_code != 200: 23 | return None, None 24 | file_content = r.content 25 | mime_type = r.headers.get('Content-Type', '').split(';')[0].strip() 26 | return file_content, mime_type 27 | finally: 28 | await client.close() 29 | del client 30 | 31 | async def determine_file_use_case(mime_type): 32 | multimodal_types = ["image/jpeg", "image/webp", "image/png", "image/gif"] 33 | my_files_types = ["text/x-php", "application/msword", "text/x-c", "text/html", "application/vnd.openxmlformats-officedocument.wordprocessingml.document", "application/json", "text/javascript", "application/pdf", "text/x-java", "text/x-tex", "text/x-typescript", "text/x-sh", "text/x-csharp", "application/vnd.openxmlformats-officedocument.presentationml.presentation", "text/x-c++", "application/x-latex", "text/markdown", "text/plain", "text/x-ruby", "text/x-script.python"] 34 | 35 | if mime_type in multimodal_types: 36 | return "multimodal" 37 | elif mime_type in my_files_types: 38 | return "my_files" 39 | else: 40 | return
"ace_upload" 41 | 42 | async def get_image_size(file_content): 43 | with Image.open(io.BytesIO(file_content)) as img: 44 | return img.width, img.height 45 | 46 | async def get_file_extension(mime_type): 47 | extension_mapping = { 48 | "image/jpeg": ".jpg", 49 | "image/png": ".png", 50 | "image/gif": ".gif", 51 | "image/webp": ".webp", 52 | "text/x-php": ".php", 53 | "application/msword": ".doc", 54 | "text/x-c": ".c", 55 | "text/html": ".html", 56 | "application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx", 57 | "application/json": ".json", 58 | "text/javascript": ".js", 59 | "application/pdf": ".pdf", 60 | "text/x-java": ".java", 61 | "text/x-tex": ".tex", 62 | "text/x-typescript": ".ts", 63 | "text/x-sh": ".sh", 64 | "text/x-csharp": ".cs", 65 | "application/vnd.openxmlformats-officedocument.presentationml.presentation": ".pptx", 66 | "text/x-c++": ".cpp", 67 | "application/x-latex": ".latex", 68 | "text/markdown": ".md", 69 | "text/plain": ".txt", 70 | "text/x-ruby": ".rb", 71 | "text/x-script.python": ".py", 72 | "application/zip": ".zip", 73 | "application/x-zip-compressed": ".zip", 74 | "application/x-tar": ".tar", 75 | "application/x-compressed-tar": ".tar.gz", 76 | "application/vnd.rar": ".rar", 77 | "application/x-rar-compressed": ".rar", 78 | "application/x-7z-compressed": ".7z", 79 | "application/octet-stream": ".bin", 80 | "audio/mpeg": ".mp3", 81 | "audio/wav": ".wav", 82 | "audio/ogg": ".ogg", 83 | "audio/aac": ".aac", 84 | "video/mp4": ".mp4", 85 | "video/x-msvideo": ".avi", 86 | "video/x-matroska": ".mkv", 87 | "video/webm": ".webm", 88 | "application/rtf": ".rtf", 89 | "application/vnd.ms-excel": ".xls", 90 | "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": ".xlsx", 91 | "text/css": ".css", 92 | "text/xml": ".xml", 93 | "application/xml": ".xml", 94 | "application/vnd.android.package-archive": ".apk", 95 | "application/vnd.apple.installer+xml": ".mpkg", 96 | "application/x-bzip": ".bz", 97 | 
"application/x-bzip2": ".bz2", 98 | "application/x-csh": ".csh", 99 | "application/x-debian-package": ".deb", 100 | "application/x-dvi": ".dvi", 101 | "application/java-archive": ".jar", 102 | "application/x-java-jnlp-file": ".jnlp", 103 | "application/vnd.mozilla.xul+xml": ".xul", 104 | "application/vnd.ms-fontobject": ".eot", 105 | "application/ogg": ".ogx", 106 | "application/x-font-ttf": ".ttf", 107 | "application/font-woff": ".woff", 108 | "application/x-shockwave-flash": ".swf", 109 | "application/vnd.visio": ".vsd", 110 | "application/xhtml+xml": ".xhtml", 111 | "application/vnd.ms-powerpoint": ".ppt", 112 | "application/vnd.oasis.opendocument.text": ".odt", 113 | "application/vnd.oasis.opendocument.spreadsheet": ".ods", 114 | "application/x-xpinstall": ".xpi", 115 | "application/vnd.google-earth.kml+xml": ".kml", 116 | "application/vnd.google-earth.kmz": ".kmz", 117 | "application/x-font-otf": ".otf", 118 | "application/vnd.ms-excel.addin.macroEnabled.12": ".xlam", 119 | "application/vnd.ms-excel.sheet.binary.macroEnabled.12": ".xlsb", 120 | "application/vnd.ms-excel.template.macroEnabled.12": ".xltm", 121 | "application/vnd.ms-powerpoint.addin.macroEnabled.12": ".ppam", 122 | "application/vnd.ms-powerpoint.presentation.macroEnabled.12": ".pptm", 123 | "application/vnd.ms-powerpoint.slideshow.macroEnabled.12": ".ppsm", 124 | "application/vnd.ms-powerpoint.template.macroEnabled.12": ".potm", 125 | "application/vnd.ms-word.document.macroEnabled.12": ".docm", 126 | "application/vnd.ms-word.template.macroEnabled.12": ".dotm", 127 | "application/x-ms-application": ".application", 128 | "application/x-ms-wmd": ".wmd", 129 | "application/x-ms-wmz": ".wmz", 130 | "application/x-ms-xbap": ".xbap", 131 | "application/vnd.ms-xpsdocument": ".xps", 132 | "application/x-silverlight-app": ".xap" 133 | } 134 | return extension_mapping.get(mime_type, "") 135 | -------------------------------------------------------------------------------- /api/models.py: 
-------------------------------------------------------------------------------- 1 | model_proxy = { 2 | "gpt-3.5-turbo": "gpt-3.5-turbo-0125", 3 | "gpt-3.5-turbo-16k": "gpt-3.5-turbo-16k-0613", 4 | "gpt-4": "gpt-4-0613", 5 | "gpt-4-32k": "gpt-4-32k-0613", 6 | "gpt-4-turbo-preview": "gpt-4-0125-preview", 7 | "gpt-4-vision-preview": "gpt-4-1106-vision-preview", 8 | "gpt-4-turbo": "gpt-4-turbo-2024-04-09", 9 | "gpt-4o": "gpt-4o-2024-08-06", 10 | "gpt-4o-mini": "gpt-4o-mini-2024-07-18", 11 | "o1": "o1-2024-12-18", 12 | "o1-preview": "o1-preview-2024-09-12", 13 | "o1-mini": "o1-mini-2024-09-12", 14 | "o3-mini": "o3-mini-2025-01-31", 15 | "o3-mini-high": "o3-mini-high-2025-01-31" 16 | } 17 | 18 | model_system_fingerprint = { 19 | "gpt-3.5-turbo-0125": ["fp_b28b39ffa8"], 20 | "gpt-3.5-turbo-1106": ["fp_592ef5907d"], 21 | "gpt-4-0125-preview": ["fp_f38f4d6482", "fp_2f57f81c11", "fp_a7daf7c51e", "fp_a865e8ede4", "fp_13c70b9f70", "fp_b77cb481ed"], 22 | "gpt-4-1106-preview": ["fp_e467c31c3d", "fp_d986a8d1ba", "fp_99a5a401bb", "fp_123d5a9f90", "fp_0d1affc7a6", "fp_5c95a4634e"], 23 | "gpt-4-turbo-2024-04-09": ["fp_d1bac968b4"], 24 | "gpt-4o-2024-05-13": ["fp_3aa7262c27"], 25 | "gpt-4o-mini-2024-07-18": ["fp_c9aa9c0491"] 26 | } 27 | -------------------------------------------------------------------------------- /api/tokens.py: -------------------------------------------------------------------------------- 1 | import math 2 | 3 | import tiktoken 4 | 5 | async def calculate_image_tokens(width, height, detail): 6 | if detail == "low": 7 | return 85 8 | else: 9 | max_dimension = max(width, height) 10 | if max_dimension > 2048: 11 | scale_factor = 2048 / max_dimension 12 | new_width = int(width * scale_factor) 13 | new_height = int(height * scale_factor) 14 | else: 15 | new_width = width 16 | new_height = height 17 | 18 | width, height = new_width, new_height 19 | min_dimension = min(width, height) 20 | if min_dimension > 768: 21 | scale_factor = 768 / min_dimension 22 | new_width 
= int(width * scale_factor) 23 | new_height = int(height * scale_factor) 24 | else: 25 | new_width = width 26 | new_height = height 27 | 28 | width, height = new_width, new_height 29 | num_masks_w = math.ceil(width / 512) 30 | num_masks_h = math.ceil(height / 512) 31 | total_masks = num_masks_w * num_masks_h 32 | 33 | tokens_per_mask = 170 34 | total_tokens = total_masks * tokens_per_mask + 85 35 | 36 | return total_tokens 37 | 38 | async def num_tokens_from_messages(messages, model=''): 39 | try: 40 | encoding = tiktoken.encoding_for_model(model) 41 | except KeyError: 42 | encoding = tiktoken.get_encoding("cl100k_base") 43 | if model == "gpt-3.5-turbo-0301": 44 | tokens_per_message = 4 45 | else: 46 | tokens_per_message = 3 47 | num_tokens = 0 48 | for message in messages: 49 | num_tokens += tokens_per_message 50 | for key, value in message.items(): 51 | if isinstance(value, list): 52 | for item in value: 53 | if item.get("type") == "text": 54 | num_tokens += len(encoding.encode(item.get("text"))) 55 | if item.get("type") == "image_url": 56 | pass 57 | else: 58 | num_tokens += len(encoding.encode(value)) 59 | num_tokens += 3 60 | return num_tokens 61 | 62 | async def num_tokens_from_content(content, model=None): 63 | try: 64 | encoding = tiktoken.encoding_for_model(model) 65 | except KeyError: 66 | encoding = tiktoken.get_encoding("cl100k_base") 67 | encoded_content = encoding.encode(content) 68 | len_encoded_content = len(encoded_content) 69 | return len_encoded_content 70 | 71 | async def split_tokens_from_content(content, max_tokens, model=None): 72 | try: 73 | encoding = tiktoken.encoding_for_model(model) 74 | except KeyError: 75 | encoding = tiktoken.get_encoding("cl100k_base") 76 | encoded_content = encoding.encode(content) 77 | len_encoded_content = len(encoded_content) 78 | if len_encoded_content >= max_tokens: 79 | content = encoding.decode(encoded_content[:max_tokens]) 80 | return content, max_tokens, "length" 81 | else: 82 | return content, 
len_encoded_content, "stop" 83 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | import warnings 2 | 3 | import uvicorn 4 | from fastapi import FastAPI, HTTPException 5 | from fastapi.security import HTTPBearer 6 | from fastapi.middleware.cors import CORSMiddleware 7 | from fastapi.templating import Jinja2Templates 8 | 9 | from utils.configs import port, enable_gateway, api_prefix 10 | from utils.log import default_format, access_format 11 | 12 | warnings.filterwarnings("ignore") 13 | 14 | log_config = uvicorn.config.LOGGING_CONFIG 15 | log_config["formatters"]["default"]["fmt"] = default_format 16 | log_config["formatters"]["access"]["fmt"] = access_format 17 | 18 | app = FastAPI( 19 | docs_url=f"/{api_prefix}/docs", # 设置 Swagger UI 文档路径 20 | redoc_url=f"/{api_prefix}/redoc", # 设置 Redoc 文档路径 21 | openapi_url=f"/{api_prefix}/openapi.json" # 设置 OpenAPI JSON 路径 22 | ) 23 | 24 | app.add_middleware( 25 | CORSMiddleware, 26 | allow_origins=["*"], 27 | allow_credentials=True, 28 | allow_methods=["*"], 29 | allow_headers=["*"], 30 | ) 31 | 32 | templates = Jinja2Templates(directory="templates") 33 | security_scheme = HTTPBearer() 34 | 35 | from app import app 36 | 37 | import api.chat2api 38 | 39 | if enable_gateway: 40 | import gateway.share 41 | import gateway.auth 42 | import gateway.chatgpt 43 | import gateway.gpts 44 | import gateway.admin 45 | import gateway.v1 46 | import gateway.backend 47 | else: 48 | @app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "OPTIONS", "HEAD", "PATCH", "TRACE"]) 49 | async def reverse_proxy(): 50 | raise HTTPException(status_code=404, detail="Gateway is disabled") 51 | 52 | if __name__ == "__main__": 53 | uvicorn.run("app:app", host="0.0.0.0", port=port) 54 | # uvicorn.run("app:app", host="0.0.0.0", port=5005, ssl_keyfile="key.pem", ssl_certfile="cert.pem") 
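The `app.py` entry point above wires the OpenAI-compatible routes (registered by `api/chat2api.py`) behind an optional `API_PREFIX`. As a minimal sketch of how a client assembles a request against the resulting service — the helper name `build_completion_request`, the default host/port, and the example token are illustrative assumptions, not part of the project:

```python
import json

def build_completion_request(token, prompt, model="gpt-4o-mini",
                             base_url="http://127.0.0.1:5005", api_prefix=""):
    """Hypothetical helper: assemble URL, headers, and JSON body for the
    /v1/chat/completions route exposed by app.py (illustration only)."""
    # The route is nested under the prefix only when API_PREFIX is configured.
    prefix = f"/{api_prefix}" if api_prefix else ""
    url = f"{base_url}{prefix}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # An AccessToken, RefreshToken, or the configured AUTHORIZATION value.
        "Authorization": f"Bearer {token}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_completion_request("sk-example", "Hello")
print(url)  # http://127.0.0.1:5005/v1/chat/completions
```

The assembled request can then be sent with any HTTP client (e.g. `curl` as shown in the README), streaming or not depending on the `stream` field in the body.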
-------------------------------------------------------------------------------- /chatgpt/ChatService.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import random 4 | import uuid 5 | 6 | from fastapi import HTTPException 7 | from starlette.concurrency import run_in_threadpool 8 | 9 | from api.files import get_image_size, get_file_extension, determine_file_use_case 10 | from api.models import model_proxy 11 | from chatgpt.authorization import get_req_token, verify_token 12 | from chatgpt.chatFormat import api_messages_to_chat, stream_response, format_not_stream_response, head_process_response 13 | from chatgpt.chatLimit import check_is_limit, handle_request_limit 14 | from chatgpt.fp import get_fp 15 | from chatgpt.proofofWork import get_config, get_dpl, get_answer_token, get_requirements_token 16 | 17 | from utils.Client import Client 18 | from utils.log import log 19 | from utils.configs import ( 20 | chatgpt_base_url_list, 21 | ark0se_token_url_list, 22 | sentinel_proxy_url_list, 23 | history_disabled, 24 | pow_difficulty, 25 | conversation_only, 26 | enable_limit, 27 | upload_by_url, 28 | auth_key, 29 | turnstile_solver_url, 30 | oai_language, 31 | ) 32 | 33 | class ChatService: 34 | def __init__(self, origin_token=None): 35 | # self.user_agent = random.choice(user_agents_list) if user_agents_list else "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36" 36 | self.req_token = get_req_token(origin_token) 37 | self.chat_token = "gAAAAAB" 38 | self.s = None 39 | self.ss = None 40 | self.ws = None 41 | 42 | async def set_dynamic_data(self, data): 43 | if self.req_token: 44 | req_len = len(self.req_token.split(",")) 45 | if req_len == 1: 46 | self.access_token = await verify_token(self.req_token) 47 | self.account_id = None 48 | else: 49 | self.access_token = await verify_token(self.req_token.split(",")[0]) 50 | self.account_id = 
self.req_token.split(",")[1] 51 | else: 52 | log.info("Request token is empty, use no-auth 3.5") 53 | self.access_token = None 54 | self.account_id = None 55 | 56 | self.fp = get_fp(self.req_token).copy() 57 | self.proxy_url = self.fp.pop("proxy_url", None) 58 | self.impersonate = self.fp.pop("impersonate", "safari15_3") 59 | self.user_agent = self.fp.get("user-agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0") 60 | log.info(f"Request token: {self.req_token}") 61 | log.info(f"Request proxy: {self.proxy_url}") 62 | log.info(f"Request UA: {self.user_agent}") 63 | log.info(f"Request impersonate: {self.impersonate}") 64 | 65 | self.data = data 66 | await self.set_model() 67 | if enable_limit and self.req_token: 68 | limit_response = await handle_request_limit(self.req_token, self.req_model) 69 | if limit_response: 70 | raise HTTPException(status_code=429, detail=limit_response) 71 | 72 | self.account_id = self.data.get('Chatgpt-Account-Id', self.account_id) 73 | self.parent_message_id = self.data.get('parent_message_id') 74 | self.conversation_id = self.data.get('conversation_id') 75 | self.history_disabled = self.data.get('history_disabled', history_disabled) 76 | 77 | self.api_messages = self.data.get("messages", []) 78 | self.prompt_tokens = 0 79 | self.max_tokens = self.data.get("max_tokens", 2147483647) 80 | if not isinstance(self.max_tokens, int): 81 | self.max_tokens = 2147483647 82 | 83 | # self.proxy_url = random.choice(proxy_url_list) if proxy_url_list else None 84 | 85 | self.host_url = random.choice(chatgpt_base_url_list) if chatgpt_base_url_list else "https://chatgpt.com" 86 | self.ark0se_token_url = random.choice(ark0se_token_url_list) if ark0se_token_url_list else None 87 | 88 | self.s = Client(proxy=self.proxy_url, impersonate=self.impersonate) 89 | if sentinel_proxy_url_list: 90 | self.ss = Client(proxy=random.choice(sentinel_proxy_url_list), 
impersonate=self.impersonate) 91 | else: 92 | self.ss = self.s 93 | 94 | self.persona = None 95 | self.ark0se_token = None 96 | self.proof_token = None 97 | self.turnstile_token = None 98 | 99 | self.chat_headers = None 100 | self.chat_request = None 101 | 102 | self.base_headers = { 103 | 'accept': '*/*', 104 | 'accept-encoding': 'gzip, deflate, br, zstd', 105 | 'accept-language': 'en-US,en;q=0.9', 106 | 'content-type': 'application/json', 107 | 'oai-language': oai_language, 108 | 'origin': self.host_url, 109 | 'priority': 'u=1, i', 110 | 'referer': f'{self.host_url}/', 111 | 'sec-fetch-dest': 'empty', 112 | 'sec-fetch-mode': 'cors', 113 | 'sec-fetch-site': 'same-origin' 114 | } 115 | self.base_headers.update(self.fp) 116 | 117 | if self.access_token: 118 | self.base_url = self.host_url + "/backend-api" 119 | self.base_headers['authorization'] = f'Bearer {self.access_token}' 120 | if self.account_id: 121 | self.base_headers['chatgpt-account-id'] = self.account_id 122 | else: 123 | self.base_url = self.host_url + "/backend-anon" 124 | 125 | if auth_key: 126 | self.base_headers['authkey'] = auth_key 127 | 128 | await get_dpl(self) 129 | 130 | async def set_model(self): 131 | self.origin_model = self.data.get("model", "gpt-3.5-turbo-0125") 132 | self.resp_model = model_proxy.get(self.origin_model, self.origin_model) 133 | if "gizmo" in self.origin_model or "g-" in self.origin_model: 134 | self.gizmo_id = "g-" + self.origin_model.split("g-")[-1] 135 | else: 136 | self.gizmo_id = None 137 | 138 | # 定义映射字典 139 | model_map = { 140 | "o3-mini-high": "o3-mini-high", 141 | "o3-mini": "o3-mini", 142 | "o1-pro": "o1-pro", 143 | "o1-mini": "o1-mini", 144 | "o1": "o1", 145 | "o1-preview": "o1-preview", 146 | "gpt-4o-canmore": "gpt-4o-canmore", 147 | "gpt-4o-mini": "gpt-4o-mini", 148 | "gpt-4o": "gpt-4o", 149 | "gpt-4.5o": "gpt-4.5o", 150 | "gpt-4-mobile": "gpt-4-mobile", 151 | "gpt-4": "gpt-4", 152 | "gpt-3.5": "text-davinci-002-render-sha", 153 | "auto": "auto" 154 | } 155 | 
156 | # 查找并设置 req_model 157 | self.req_model = next((v for k, v in model_map.items() if k in self.origin_model), "gpt-4o") 158 | 159 | async def get_chat_requirements(self): 160 | if conversation_only: 161 | return None 162 | url = f'{self.base_url}/sentinel/chat-requirements' 163 | headers = self.base_headers.copy() 164 | try: 165 | config = get_config(self.user_agent) 166 | p = get_requirements_token(config) 167 | data = {'p': p} 168 | log.info(f"Request p: {p}") 169 | r = await self.ss.post(url, headers=headers, json=data, timeout=5) 170 | if r.status_code == 200: 171 | resp = r.json() 172 | 173 | self.persona = resp.get("persona") 174 | if self.persona != "chatgpt-paid": 175 | if self.req_model == "gpt-4" or self.req_model == "o1-preview": 176 | log.error(f"Model {self.resp_model} not support for {self.persona}") 177 | raise HTTPException( 178 | status_code=404, 179 | detail={ 180 | "message": f"The model `{self.origin_model}` does not exist or you do not have access to it.", 181 | "type": "invalid_request_error", 182 | "param": None, 183 | "code": "model_not_found", 184 | }, 185 | ) 186 | 187 | turnstile = resp.get('turnstile', {}) 188 | turnstile_required = turnstile.get('required') 189 | if turnstile_required: 190 | turnstile_dx = turnstile.get("dx") 191 | try: 192 | if turnstile_solver_url: 193 | res = await self.s.post( 194 | turnstile_solver_url, json={"url": "https://chatgpt.com", "p": p, "dx": turnstile_dx} 195 | ) 196 | self.turnstile_token = res.json().get("t") 197 | except Exception as e: 198 | log.info(f"Turnstile ignored: {e}") 199 | # raise HTTPException(status_code=403, detail="Turnstile required") 200 | 201 | ark0se = resp.get('ark' + 'ose', {}) 202 | ark0se_required = ark0se.get('required') 203 | if ark0se_required: 204 | if self.persona == "chatgpt-freeaccount": 205 | ark0se_method = "chat35" 206 | else: 207 | ark0se_method = "chat4" 208 | if not self.ark0se_token_url: 209 | raise HTTPException(status_code=403, detail="Ark0se service 
required") 210 | ark0se_dx = ark0se.get("dx") 211 | ark0se_client = Client(impersonate=self.impersonate) 212 | try: 213 | r2 = await ark0se_client.post( 214 | url=self.ark0se_token_url, json={"blob": ark0se_dx, "method": ark0se_method}, timeout=15 215 | ) 216 | r2esp = r2.json() 217 | log.info(f"ark0se_token: {r2esp}") 218 | if r2esp.get('solved', True): 219 | self.ark0se_token = r2esp.get('token') 220 | else: 221 | raise HTTPException(status_code=403, detail="Failed to get Ark0se token") 222 | except Exception: 223 | raise HTTPException(status_code=403, detail="Failed to get Ark0se token") 224 | finally: 225 | await ark0se_client.close() 226 | 227 | proofofwork = resp.get('proofofwork', {}) 228 | proofofwork_required = proofofwork.get('required') 229 | if proofofwork_required: 230 | proofofwork_diff = proofofwork.get("difficulty") 231 | if proofofwork_diff <= pow_difficulty: 232 | raise HTTPException(status_code=403, detail=f"Proof of work difficulty too high: {proofofwork_diff}") 233 | proofofwork_seed = proofofwork.get("seed") 234 | self.proof_token, solved = await run_in_threadpool( 235 | get_answer_token, proofofwork_seed, proofofwork_diff, config 236 | ) 237 | if not solved: 238 | raise HTTPException(status_code=403, detail="Failed to solve proof of work") 239 | 240 | self.chat_token = resp.get('token') 241 | if not self.chat_token: 242 | raise HTTPException(status_code=403, detail=f"Failed to get chat token: {r.text}") 243 | return self.chat_token 244 | else: 245 | if "application/json" == r.headers.get("Content-Type", ""): 246 | detail = r.json().get("detail", r.json()) 247 | else: 248 | detail = r.text 249 | if "cf_chl_opt" in detail: 250 | raise HTTPException(status_code=r.status_code, detail="cf_chl_opt") 251 | if r.status_code == 429: 252 | raise HTTPException(status_code=r.status_code, detail="rate-limit") 253 | raise HTTPException(status_code=r.status_code, detail=detail) 254 | except HTTPException as e: 255 | raise 
HTTPException(status_code=e.status_code, detail=e.detail) 256 | except Exception as e: 257 | raise HTTPException(status_code=500, detail=str(e)) 258 | 259 | async def prepare_send_conversation(self): 260 | try: 261 | chat_messages, self.prompt_tokens = await api_messages_to_chat(self, self.api_messages, upload_by_url) 262 | except Exception as e: 263 | log.error(f"Failed to format messages: {str(e)}") 264 | raise HTTPException(status_code=400, detail="Failed to format messages.") 265 | self.chat_headers = self.base_headers.copy() 266 | self.chat_headers.update( 267 | { 268 | 'accept': 'text/event-stream', 269 | 'openai-sentinel-chat-requirements-token': self.chat_token, 270 | 'openai-sentinel-proof-token': self.proof_token, 271 | } 272 | ) 273 | if self.ark0se_token: 274 | self.chat_headers['openai-sentinel-ark' + 'ose-token'] = self.ark0se_token 275 | 276 | if self.turnstile_token: 277 | self.chat_headers['openai-sentinel-turnstile-token'] = self.turnstile_token 278 | 279 | if conversation_only: 280 | self.chat_headers.pop('openai-sentinel-chat-requirements-token', None) 281 | self.chat_headers.pop('openai-sentinel-proof-token', None) 282 | self.chat_headers.pop('openai-sentinel-ark' + 'ose-token', None) 283 | self.chat_headers.pop('openai-sentinel-turnstile-token', None) 284 | 285 | if self.gizmo_id: 286 | conversation_mode = {"kind": "gizmo_interaction", "gizmo_id": self.gizmo_id} 287 | log.info(f"Gizmo id: {self.gizmo_id}") 288 | else: 289 | conversation_mode = {"kind": "primary_assistant"} 290 | 291 | log.info(f"Model mapping: {self.origin_model} -> {self.req_model}") 292 | self.chat_request = { 293 | "action": "next", 294 | "client_contextual_info": { 295 | "is_dark_mode": False, 296 | "time_since_loaded": random.randint(50, 500), 297 | "page_height": random.randint(500, 1000), 298 | "page_width": random.randint(1000, 2000), 299 | "pixel_ratio": 1.5, 300 | "screen_height": random.randint(800, 1200), 301 | "screen_width": random.randint(1200, 2200), 302 | }, 
303 | "conversation_mode": conversation_mode, 304 | "conversation_origin": None, 305 | "force_paragen": False, 306 | "force_paragen_model_slug": "", 307 | "force_rate_limit": False, 308 | "force_use_sse": True, 309 | "history_and_training_disabled": self.history_disabled, 310 | "messages": chat_messages, 311 | "model": self.req_model, 312 | "paragen_cot_summary_display_override": "allow", 313 | "paragen_stream_type_override": None, 314 | "parent_message_id": self.parent_message_id if self.parent_message_id else f"{uuid.uuid4()}", 315 | "reset_rate_limits": False, 316 | "suggestions": [], 317 | "supported_encodings": [], 318 | "system_hints": [], 319 | "timezone": "America/Los_Angeles", 320 | "timezone_offset_min": -480, 321 | "variant_purpose": "comparison_implicit", 322 | "websocket_request_id": f"{uuid.uuid4()}", 323 | } 324 | if self.conversation_id: 325 | self.chat_request['conversation_id'] = self.conversation_id 326 | return self.chat_request 327 | 328 | async def send_conversation(self): 329 | try: 330 | url = f'{self.base_url}/conversation' 331 | stream = self.data.get("stream", False) 332 | r = await self.s.post_stream(url, headers=self.chat_headers, json=self.chat_request, timeout=10, stream=True) 333 | if r.status_code != 200: 334 | rtext = await r.atext() 335 | if "application/json" == r.headers.get("Content-Type", ""): 336 | detail = json.loads(rtext).get("detail", json.loads(rtext)) 337 | if r.status_code == 429: 338 | check_is_limit(detail, token=self.req_token, model=self.req_model) 339 | else: 340 | if "cf_chl_opt" in rtext: 341 | # log.error(f"Failed to send conversation: cf_chl_opt") 342 | raise HTTPException(status_code=r.status_code, detail="cf_chl_opt") 343 | if r.status_code == 429: 344 | # log.error(f"Failed to send conversation: rate-limit") 345 | raise HTTPException(status_code=r.status_code, detail="rate-limit") 346 | detail = r.text[:100] 347 | # log.error(f"Failed to send conversation: {detail}") 348 | raise 
HTTPException(status_code=r.status_code, detail=detail) 349 | 350 | content_type = r.headers.get("Content-Type", "") 351 | if "text/event-stream" in content_type: 352 | res, start = await head_process_response(r.aiter_lines()) 353 | if not start: 354 | raise HTTPException( 355 | status_code=403, 356 | detail="Our systems have detected unusual activity coming from your system. Please try again later.", 357 | ) 358 | if stream: 359 | return stream_response(self, res, self.resp_model, self.max_tokens) 360 | else: 361 | return await format_not_stream_response( 362 | stream_response(self, res, self.resp_model, self.max_tokens), 363 | self.prompt_tokens, 364 | self.max_tokens, 365 | self.resp_model, 366 | ) 367 | elif "application/json" in content_type: 368 | rtext = await r.atext() 369 | resp = json.loads(rtext) 370 | raise HTTPException(status_code=r.status_code, detail=resp) 371 | else: 372 | rtext = await r.atext() 373 | raise HTTPException(status_code=r.status_code, detail=rtext) 374 | except HTTPException as e: 375 | raise HTTPException(status_code=e.status_code, detail=e.detail) 376 | except Exception as e: 377 | raise HTTPException(status_code=500, detail=str(e)) 378 | 379 | async def get_download_url(self, file_id): 380 | url = f"{self.base_url}/files/{file_id}/download" 381 | headers = self.base_headers.copy() 382 | try: 383 | r = await self.s.get(url, headers=headers, timeout=10) 384 | if r.status_code == 200: 385 | download_url = r.json().get('download_url') 386 | return download_url 387 | else: 388 | raise HTTPException(status_code=r.status_code, detail=r.text) 389 | except Exception as e: 390 | log.error(f"Failed to get download url: {e}") 391 | return "" 392 | 393 | async def get_download_url_from_upload(self, file_id): 394 | url = f"{self.base_url}/files/{file_id}/uploaded" 395 | headers = self.base_headers.copy() 396 | try: 397 | r = await self.s.post(url, headers=headers, json={}, timeout=10) 398 | if r.status_code == 200: 399 | download_url = 
r.json().get('download_url') 400 | return download_url 401 | else: 402 | raise HTTPException(status_code=r.status_code, detail=r.text) 403 | except Exception as e: 404 | log.error(f"Failed to get download url from upload: {e}") 405 | return "" 406 | 407 | async def get_upload_url(self, file_name, file_size, use_case="multimodal"): 408 | url = f'{self.base_url}/files' 409 | headers = self.base_headers.copy() 410 | try: 411 | r = await self.s.post( 412 | url, 413 | headers=headers, 414 | json={"file_name": file_name, "file_size": file_size, "reset_rate_limits": False, "timezone_offset_min": -480, "use_case": use_case}, 415 | timeout=5, 416 | ) 417 | if r.status_code == 200: 418 | res = r.json() 419 | file_id = res.get('file_id') 420 | upload_url = res.get('upload_url') 421 | log.info(f"file_id: {file_id}, upload_url: {upload_url}") 422 | return file_id, upload_url 423 | else: 424 | raise HTTPException(status_code=r.status_code, detail=r.text) 425 | except Exception as e: 426 | log.error(f"Failed to get upload url: {e}") 427 | return "", "" 428 | 429 | async def upload(self, upload_url, file_content, mime_type): 430 | headers = self.base_headers.copy() 431 | headers.update( 432 | { 433 | 'accept': 'application/json, text/plain, */*', 434 | 'content-type': mime_type, 435 | 'x-ms-blob-type': 'BlockBlob', 436 | 'x-ms-version': '2020-04-08', 437 | } 438 | ) 439 | headers.pop('authorization', None) 440 | headers.pop('oai-device-id', None) 441 | headers.pop('oai-language', None) 442 | try: 443 | r = await self.s.put(upload_url, headers=headers, data=file_content, timeout=60) 444 | if r.status_code == 201: 445 | return True 446 | else: 447 | raise HTTPException(status_code=r.status_code, detail=r.text) 448 | except Exception as e: 449 | log.error(f"Failed to upload file: {e}") 450 | return False 451 | 452 | async def upload_file(self, file_content, mime_type): 453 | if not file_content or not mime_type: 454 | return None 455 | 456 | width, height = None, None 457 | if 
mime_type.startswith("image/"): 458 | try: 459 | width, height = await get_image_size(file_content) 460 | except Exception as e: 461 | log.error(f"Error image mime_type, change to text/plain: {e}") 462 | mime_type = 'text/plain' 463 | file_size = len(file_content) 464 | file_extension = await get_file_extension(mime_type) 465 | file_name = f"{uuid.uuid4()}{file_extension}" 466 | use_case = await determine_file_use_case(mime_type) 467 | 468 | file_id, upload_url = await self.get_upload_url(file_name, file_size, use_case) 469 | if file_id and upload_url: 470 | if await self.upload(upload_url, file_content, mime_type): 471 | download_url = await self.get_download_url_from_upload(file_id) 472 | if download_url: 473 | file_meta = { 474 | "file_id": file_id, 475 | "file_name": file_name, 476 | "size_bytes": file_size, 477 | "mime_type": mime_type, 478 | "width": width, 479 | "height": height, 480 | "use_case": use_case, 481 | } 482 | log.info(f"File_meta: {file_meta}") 483 | return file_meta 484 | 485 | async def check_upload(self, file_id): 486 | url = f'{self.base_url}/files/{file_id}' 487 | headers = self.base_headers.copy() 488 | try: 489 | for i in range(30): 490 | r = await self.s.get(url, headers=headers, timeout=5) 491 | if r.status_code == 200: 492 | res = r.json() 493 | retrieval_index_status = res.get('retrieval_index_status', '') 494 | if retrieval_index_status == "success": 495 | break 496 | await asyncio.sleep(1) 497 | return True 498 | except HTTPException: 499 | return False 500 | 501 | async def get_response_file_url(self, conversation_id, message_id, sandbox_path): 502 | try: 503 | url = f"{self.base_url}/conversation/{conversation_id}/interpreter/download" 504 | params = {"message_id": message_id, "sandbox_path": sandbox_path} 505 | headers = self.base_headers.copy() 506 | r = await self.s.get(url, headers=headers, params=params, timeout=10) 507 | if r.status_code == 200: 508 | return r.json().get("download_url") 509 | else: 510 | return None 511 | 
except Exception: 512 | log.info("Failed to get response file url") 513 | return None 514 | 515 | async def close_client(self): 516 | if self.s: 517 | await self.s.close() 518 | del self.s 519 | if self.ss: 520 | await self.ss.close() 521 | del self.ss 522 | if self.ws: 523 | await self.ws.close() 524 | del self.ws 525 | -------------------------------------------------------------------------------- /chatgpt/authorization.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import random 4 | 5 | from fastapi import HTTPException 6 | 7 | import utils.configs as configs 8 | import utils.globals as globals 9 | from chatgpt.refreshToken import rt2ac 10 | from utils.log import log 11 | 12 | def get_req_token(req_token, seed=None): 13 | if configs.auto_seed: 14 | available_token_list = list(set(globals.token_list) - set(globals.error_token_list)) 15 | length = len(available_token_list) 16 | if seed and length > 0: 17 | if seed not in globals.seed_map.keys(): 18 | globals.seed_map[seed] = {"token": random.choice(available_token_list), "conversations": []} 19 | with open(globals.SEED_MAP_FILE, "w") as f: 20 | json.dump(globals.seed_map, f, indent=4) 21 | else: 22 | req_token = globals.seed_map[seed]["token"] 23 | return req_token 24 | 25 | if req_token in configs.authorization_list: 26 | if len(available_token_list) > 0: 27 | if configs.random_token: 28 | req_token = random.choice(available_token_list) 29 | return req_token 30 | else: 31 | globals.count += 1 32 | globals.count %= length 33 | return available_token_list[globals.count] 34 | else: 35 | return "" 36 | else: 37 | return req_token 38 | else: 39 | seed = req_token 40 | if seed not in globals.seed_map.keys(): 41 | raise HTTPException(status_code=401, detail={"error": "Invalid Seed"}) 42 | return globals.seed_map[seed]["token"] 43 | 44 | async def verify_token(req_token): 45 | if not req_token: 46 | if configs.authorization_list: 47 | 
log.error("Unauthorized with empty token.") 48 | raise HTTPException(status_code=401) 49 | else: 50 | return None 51 | else: 52 | if req_token.startswith("eyJhbGciOi") or req_token.startswith("fk-"): 53 | access_token = req_token 54 | return access_token 55 | elif len(req_token) == 45: 56 | try: 57 | if req_token in globals.error_token_list: 58 | raise HTTPException(status_code=401, detail="Error RefreshToken") 59 | 60 | access_token = await rt2ac(req_token, force_refresh=False) 61 | return access_token 62 | except HTTPException as e: 63 | raise HTTPException(status_code=e.status_code, detail=e.detail) 64 | else: 65 | return req_token 66 | 67 | async def refresh_all_tokens(force_refresh=False): 68 | for token in list(set(globals.token_list) - set(globals.error_token_list)): 69 | if len(token) == 45: 70 | try: 71 | await asyncio.sleep(0.5) 72 | await rt2ac(token, force_refresh=force_refresh) 73 | except HTTPException: 74 | pass 75 | log.info("All tokens refreshed.") 76 | -------------------------------------------------------------------------------- /chatgpt/chatFormat.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import random 4 | import re 5 | import string 6 | import time 7 | import uuid 8 | 9 | import pybase64 10 | import websockets 11 | from fastapi import HTTPException 12 | 13 | from api.files import get_file_content 14 | from api.models import model_system_fingerprint 15 | from api.tokens import split_tokens_from_content, calculate_image_tokens, num_tokens_from_messages 16 | from utils.log import log 17 | 18 | moderation_message = "I'm sorry, I cannot provide or engage in any content related to pornography, violence, or any unethical material. If you have any other questions or need assistance, please feel free to let me know. I'll do my best to provide support and assistance." 
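The `verify_token` logic above dispatches purely on token shape: strings beginning with `eyJhbGciOi` (the base64 of a JWT header) or `fk-` are treated as access tokens and used directly, 45-character strings are treated as RefreshTokens to be exchanged via `rt2ac`, and anything else is passed through unchanged. A minimal standalone sketch of that dispatch — the `classify_token` helper is hypothetical, for illustration only, and is not part of this project:

```python
# Hypothetical helper mirroring the token-shape dispatch in verify_token above.
# Assumption: token "kinds" are distinguishable by prefix and length alone,
# exactly as the code above does.

def classify_token(token: str) -> str:
    """Guess how a request token would be handled, by shape alone."""
    if token.startswith("eyJhbGciOi") or token.startswith("fk-"):
        # JWT access token (base64-encoded header) or fk- key: usable as-is.
        return "access"
    if len(token) == 45:
        # 45-character strings are treated as RefreshTokens and must be
        # exchanged for an access token (rt2ac in the code above).
        return "refresh"
    # Anything else (e.g. a SeedToken) is passed through unchanged.
    return "other"
```

Note that this shape check is a heuristic, not a validation: a malformed 45-character string would still be sent down the refresh path and only rejected when the exchange fails, which is why the code above also consults `globals.error_token_list` before calling `rt2ac`.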
19 | 20 | async def format_not_stream_response(response, prompt_tokens, max_tokens, model): 21 | chat_id = f"chatcmpl-{''.join(random.choice(string.ascii_letters + string.digits) for _ in range(29))}" 22 | system_fingerprint_list = model_system_fingerprint.get(model, None) 23 | system_fingerprint = random.choice(system_fingerprint_list) if system_fingerprint_list else None 24 | created_time = int(time.time()) 25 | all_text = "" 26 | async for chunk in response: 27 | try: 28 | if chunk.startswith("data: [DONE]"): 29 | break 30 | elif not chunk.startswith("data: "): 31 | continue 32 | else: 33 | chunk = json.loads(chunk[6:]) 34 | if not chunk["choices"][0].get("delta"): 35 | continue 36 | all_text += chunk["choices"][0]["delta"]["content"] 37 | except Exception as e: 38 | log.error(f"Error: {chunk}, error: {str(e)}") 39 | continue 40 | content, completion_tokens, finish_reason = await split_tokens_from_content(all_text, max_tokens, model) 41 | message = { 42 | "role": "assistant", 43 | "content": content, 44 | } 45 | usage = { 46 | "prompt_tokens": prompt_tokens, 47 | "completion_tokens": completion_tokens, 48 | "total_tokens": prompt_tokens + completion_tokens 49 | } 50 | if not message.get("content"): 51 | raise HTTPException(status_code=403, detail="No content in the message.") 52 | 53 | data = { 54 | "id": chat_id, 55 | "object": "chat.completion", 56 | "created": created_time, 57 | "model": model, 58 | "choices": [ 59 | { 60 | "index": 0, 61 | "message": message, 62 | "logprobs": None, 63 | "finish_reason": finish_reason 64 | } 65 | ], 66 | "usage": usage 67 | } 68 | if system_fingerprint: 69 | data["system_fingerprint"] = system_fingerprint 70 | return data 71 | 72 | async def wss_stream_response(websocket, conversation_id): 73 | while not websocket.closed: 74 | try: 75 | message = await asyncio.wait_for(websocket.recv(), timeout=10) 76 | if message: 77 | resultObj = json.loads(message) 78 | sequenceId = resultObj.get("sequenceId", None) 79 | if not sequenceId: 
80 | continue 81 | data = resultObj.get("data", {}) 82 | if conversation_id != data.get("conversation_id", ""): 83 | continue 84 | sequenceId = resultObj.get('sequenceId') 85 | if sequenceId and sequenceId % 80 == 0: 86 | await websocket.send( 87 | json.dumps( 88 | {"type": "sequenceAck", "sequenceId": sequenceId} 89 | ) 90 | ) 91 | decoded_bytes = pybase64.b64decode(data.get("body", None)) 92 | yield decoded_bytes 93 | else: 94 | print("No message received within the specified time.") 95 | except asyncio.TimeoutError: 96 | log.error("Timeout! No message received within the specified time.") 97 | break 98 | except websockets.ConnectionClosed as e: 99 | if e.code == 1000: 100 | log.error("WebSocket closed normally with code 1000 (OK)") 101 | yield b"data: [DONE]\n\n" 102 | else: 103 | log.error(f"WebSocket closed with error code {e.code}") 104 | except Exception as e: 105 | log.error(f"Error: {str(e)}") 106 | continue 107 | 108 | async def head_process_response(response): 109 | async for chunk in response: 110 | chunk = chunk.decode("utf-8") 111 | if chunk.startswith("data: {"): 112 | chunk_old_data = json.loads(chunk[6:]) 113 | message = chunk_old_data.get("message", {}) 114 | if not message and "error" in chunk_old_data: 115 | return response, False 116 | role = message.get('author', {}).get('role') 117 | if role == 'user' or role == 'system': 118 | continue 119 | 120 | status = message.get("status") 121 | if status == "in_progress": 122 | return response, True 123 | return response, False 124 | 125 | async def stream_response(service, response, model, max_tokens): 126 | chat_id = f"chatcmpl-{''.join(random.choice(string.ascii_letters + string.digits) for _ in range(29))}" 127 | system_fingerprint_list = model_system_fingerprint.get(model, None) 128 | system_fingerprint = random.choice(system_fingerprint_list) if system_fingerprint_list else None 129 | created_time = int(time.time()) 130 | completion_tokens = 0 131 | len_last_content = 0 132 | len_last_citation = 
0 133 | last_message_id = None 134 | last_role = None 135 | last_content_type = None 136 | model_slug = None 137 | end = False 138 | 139 | chunk_new_data = { 140 | "id": chat_id, 141 | "object": "chat.completion.chunk", 142 | "created": created_time, 143 | "model": model, 144 | "choices": [ 145 | { 146 | "index": 0, 147 | "delta": {"role": "assistant", "content": ""}, 148 | "logprobs": None, 149 | "finish_reason": None 150 | } 151 | ] 152 | } 153 | if system_fingerprint: 154 | chunk_new_data["system_fingerprint"] = system_fingerprint 155 | yield f"data: {json.dumps(chunk_new_data)}\n\n" 156 | 157 | async for chunk in response: 158 | chunk = chunk.decode("utf-8") 159 | if end: 160 | log.info(f"Response Model: {model_slug}") 161 | yield "data: [DONE]\n\n" 162 | break 163 | try: 164 | if chunk.startswith("data: {"): 165 | chunk_old_data = json.loads(chunk[6:]) 166 | finish_reason = None 167 | message = chunk_old_data.get("message", {}) 168 | conversation_id = chunk_old_data.get("conversation_id") 169 | role = message.get('author', {}).get('role') 170 | if role == 'user' or role == 'system': 171 | continue 172 | 173 | status = message.get("status") 174 | message_id = message.get("id") 175 | content = message.get("content", {}) 176 | recipient = message.get("recipient", "") 177 | meta_data = message.get("metadata", {}) 178 | initial_text = meta_data.get("initial_text", "") 179 | model_slug = meta_data.get("model_slug", model_slug) 180 | 181 | if not message and chunk_old_data.get("type") == "moderation": 182 | delta = {"role": "assistant", "content": moderation_message} 183 | finish_reason = "stop" 184 | end = True 185 | elif status == "in_progress": 186 | outer_content_type = content.get("content_type") 187 | if outer_content_type == "text": 188 | part = content.get("parts", [])[0] 189 | if not part: 190 | if role == 'assistant' and last_role != 'assistant': 191 | if last_role == None: 192 | new_text = "" 193 | else: 194 | new_text = f"\n" 195 | elif role == 'tool' and 
last_role != 'tool': 196 | new_text = f"**{initial_text}**\n>" 197 | else: 198 | new_text = "" 199 | else: 200 | if last_message_id and last_message_id != message_id: 201 | continue 202 | citation = message.get("metadata", {}).get("citations", []) 203 | if len(citation) > len_last_citation: 204 | inside_metadata = citation[-1].get("metadata", {}) 205 | citation_title = inside_metadata.get("title", "") 206 | citation_url = inside_metadata.get("url", "") 207 | new_text = f' **[[""]]({citation_url} "{citation_title}")** ' 208 | len_last_citation = len(citation) 209 | else: 210 | if role == 'assistant' and last_role != 'assistant': 211 | if recipient == 'dalle.text2im': 212 | new_text = f"\n```{recipient}\n{part[len_last_content:]}" 213 | elif last_role == None: 214 | new_text = part[len_last_content:] 215 | else: 216 | new_text = f"\n\n{part[len_last_content:]}" 217 | elif role == 'tool' and last_role != 'tool': 218 | new_text = f">{initial_text}\n{part[len_last_content:]}" 219 | elif role == 'tool': 220 | new_text = part[len_last_content:].replace("\n\n", "\n") 221 | else: 222 | new_text = part[len_last_content:] 223 | len_last_content = len(part) 224 | else: 225 | text = content.get("text", "") 226 | if outer_content_type == "code" and last_content_type != "code": 227 | language = content.get("language", "") 228 | if not language or language == "unknown": 229 | language = recipient 230 | new_text = "\n```" + language + "\n" + text[len_last_content:] 231 | elif outer_content_type == "execution_output" and last_content_type != "execution_output": 232 | new_text = "\n```" + "Output" + "\n" + text[len_last_content:] 233 | else: 234 | new_text = text[len_last_content:] 235 | len_last_content = len(text) 236 | if last_content_type == "code" and outer_content_type != "code": 237 | new_text = "\n```\n" + new_text 238 | elif last_content_type == "execution_output" and outer_content_type != "execution_output": 239 | new_text = "\n```\n" + new_text 240 | 241 | delta = 
{"content": new_text} 242 | last_content_type = outer_content_type 243 | if completion_tokens >= max_tokens: 244 | delta = {} 245 | finish_reason = "length" 246 | end = True 247 | elif status == "finished_successfully": 248 | if content.get("content_type") == "multimodal_text": 249 | parts = content.get("parts", []) 250 | delta = {} 251 | for part in parts: 252 | if isinstance(part, str): 253 | continue 254 | inner_content_type = part.get('content_type') 255 | if inner_content_type == "image_asset_pointer": 256 | last_content_type = "image_asset_pointer" 257 | file_id = part.get('asset_pointer').replace('file-service://', '') 258 | log.debug(f"file_id: {file_id}") 259 | image_download_url = await service.get_download_url(file_id) 260 | log.debug(f"image_download_url: {image_download_url}") 261 | if image_download_url: 262 | delta = {"content": f"\n```\n![image]({image_download_url})\n"} 263 | else: 264 | delta = {"content": f"\n```\nFailed to load the image.\n"} 265 | elif message.get("end_turn"): 266 | part = content.get("parts", [])[0] 267 | new_text = part[len_last_content:] 268 | if not new_text: 269 | matches = re.findall(r'\(sandbox:(.*?)\)', part) 270 | if matches: 271 | file_url_content = "" 272 | for i, sandbox_path in enumerate(matches): 273 | file_download_url = await service.get_response_file_url(conversation_id, message_id, sandbox_path) 274 | if file_download_url: 275 | file_url_content += f"\n```\n\n![File {i+1}]({file_download_url})\n" 276 | delta = {"content": file_url_content} 277 | else: 278 | delta = {} 279 | else: 280 | delta = {"content": new_text} 281 | finish_reason = "stop" 282 | end = True 283 | else: 284 | len_last_content = 0 285 | if meta_data.get("finished_text"): 286 | delta = {"content": f"\n\n*{meta_data.get('finished_text')}*\n\n---\n\n"} 287 | else: 288 | continue 289 | else: 290 | continue 291 | last_message_id = message_id 292 | last_role = role 293 | if not end and not delta.get("content"): 294 | delta = {"role": "assistant", 
"content": ""} 295 | chunk_new_data["choices"][0]["delta"] = delta 296 | chunk_new_data["choices"][0]["finish_reason"] = finish_reason 297 | if not service.history_disabled: 298 | chunk_new_data.update({ 299 | "message_id": message_id, 300 | "conversation_id": conversation_id, 301 | }) 302 | completion_tokens += 1 303 | yield f"data: {json.dumps(chunk_new_data)}\n\n" 304 | elif chunk.startswith("data: [DONE]"): 305 | log.info(f"Response Model: {model_slug}") 306 | yield "data: [DONE]\n\n" 307 | else: 308 | continue 309 | except Exception as e: 310 | if chunk.startswith("data: "): 311 | chunk_data = json.loads(chunk[6:]) 312 | if chunk_data.get("error"): 313 | log.error(f"Error: {chunk_data.get('error')}") 314 | yield "data: [DONE]\n\n" 315 | break 316 | log.error(f"Error: {chunk}, details: {str(e)}") 317 | continue 318 | 319 | def get_url_from_content(content): 320 | if isinstance(content, str) and content.startswith('http'): 321 | try: 322 | url = re.match( 323 | r'(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))', 324 | content.split(' ')[0])[0] 325 | content = content.replace(url, '').strip() 326 | return url, content 327 | except Exception: 328 | return None, content 329 | return None, content 330 | 331 | def format_messages_with_url(content): 332 | url_list = [] 333 | while True: 334 | url, content = get_url_from_content(content) 335 | if url: 336 | url_list.append(url) 337 | log.info(f"Found a file_url from messages: {url}") 338 | else: 339 | break 340 | if not url_list: 341 | return content 342 | new_content = [ 343 | { 344 | "type": "text", 345 | "text": content 346 | } 347 | ] 348 | for url in url_list: 349 | new_content.append({ 350 | "type": "image_url", 351 | "image_url": { 352 | "url": url 353 | } 354 | }) 355 | return new_content 356 | 357 | async def api_messages_to_chat(service, api_messages, 
upload_by_url=False): 358 | file_tokens = 0 359 | chat_messages = [] 360 | for api_message in api_messages: 361 | role = api_message.get('role') 362 | content = api_message.get('content') 363 | if upload_by_url: 364 | if isinstance(content, str): 365 | content = format_messages_with_url(content) 366 | if isinstance(content, list): 367 | parts = [] 368 | attachments = [] 369 | content_type = "multimodal_text" 370 | for i in content: 371 | if i.get("type") == "text": 372 | parts.append(i.get("text")) 373 | elif i.get("type") == "image_url": 374 | image_url = i.get("image_url") 375 | url = image_url.get("url") 376 | detail = image_url.get("detail", "auto") 377 | file_content, mime_type = await get_file_content(url) 378 | file_meta = await service.upload_file(file_content, mime_type) 379 | if file_meta: 380 | file_id = file_meta["file_id"] 381 | file_size = file_meta["size_bytes"] 382 | file_name = file_meta["file_name"] 383 | mime_type = file_meta["mime_type"] 384 | use_case = file_meta["use_case"] 385 | if mime_type.startswith("image/"): 386 | width, height = file_meta["width"], file_meta["height"] 387 | file_tokens += await calculate_image_tokens(width, height, detail) 388 | parts.append({ 389 | "content_type": "image_asset_pointer", 390 | "asset_pointer": f"file-service://{file_id}", 391 | "size_bytes": file_size, 392 | "width": width, 393 | "height": height 394 | }) 395 | attachments.append({ 396 | "id": file_id, 397 | "size": file_size, 398 | "name": file_name, 399 | "mime_type": mime_type, 400 | "width": width, 401 | "height": height 402 | }) 403 | else: 404 | if not use_case == "ace_upload": 405 | await service.check_upload(file_id) 406 | file_tokens += file_size // 1000 407 | attachments.append({ 408 | "id": file_id, 409 | "size": file_size, 410 | "name": file_name, 411 | "mime_type": mime_type, 412 | }) 413 | metadata = { 414 | "attachments": attachments 415 | } 416 | else: 417 | content_type = "text" 418 | parts = [content] 419 | metadata = {} 420 | 
chat_message = { 421 | "id": f"{uuid.uuid4()}", 422 | "author": {"role": role}, 423 | "content": {"content_type": content_type, "parts": parts}, 424 | "metadata": metadata 425 | } 426 | chat_messages.append(chat_message) 427 | text_tokens = await num_tokens_from_messages(api_messages, service.resp_model) 428 | prompt_tokens = text_tokens + file_tokens 429 | return chat_messages, prompt_tokens 430 | -------------------------------------------------------------------------------- /chatgpt/chatLimit.py: -------------------------------------------------------------------------------- 1 | import time 2 | from datetime import datetime 3 | 4 | from utils.log import log 5 | 6 | limit_details = {} 7 | 8 | def check_is_limit(detail, token, model): 9 | if token and isinstance(detail, dict) and detail.get('clears_in'): 10 | clear_time = int(time.time()) + detail.get('clears_in') 11 | limit_details.setdefault(token, {})[model] = clear_time 12 | log.info(f"{token[:40]}: Reached {model} limit, will be cleared at {datetime.fromtimestamp(clear_time).replace(microsecond=0)}") 13 | 14 | async def handle_request_limit(token, model): 15 | try: 16 | if limit_details.get(token) and model in limit_details[token]: 17 | limit_time = limit_details[token][model] 18 | is_limit = limit_time > int(time.time()) 19 | if is_limit: 20 | clear_date = datetime.fromtimestamp(limit_time).replace(microsecond=0) 21 | result = f"Request limit exceeded. 
You can continue with the default model now, or try again after {clear_date}" 22 | log.info(result) 23 | return result 24 | else: 25 | del limit_details[token][model] 26 | return None 27 | except KeyError as e: 28 | log.error(f"Key error: {e}") 29 | return None 30 | except Exception as e: 31 | log.error(f"An unexpected error occurred: {e}") 32 | return None 33 | -------------------------------------------------------------------------------- /chatgpt/fp.py: -------------------------------------------------------------------------------- 1 | import json 2 | import random 3 | import uuid 4 | 5 | import ua_generator 6 | from ua_generator.data.version import VersionRange 7 | from ua_generator.options import Options 8 | 9 | import utils.globals as globals 10 | from utils import configs 11 | 12 | def get_fp(req_token): 13 | fp = globals.fp_map.get(req_token, {}) 14 | if fp and fp.get("user-agent") and fp.get("impersonate"): 15 | if "proxy_url" in fp.keys() and (fp["proxy_url"] is None or fp["proxy_url"] not in configs.proxy_url_list): 16 | fp["proxy_url"] = random.choice(configs.proxy_url_list) if configs.proxy_url_list else None 17 | globals.fp_map[req_token] = fp 18 | with open(globals.FP_FILE, "w", encoding="utf-8") as f: 19 | json.dump(globals.fp_map, f, indent=4) 20 | if globals.impersonate_list and "impersonate" in fp.keys() and fp["impersonate"] not in globals.impersonate_list: 21 | fp["impersonate"] = random.choice(globals.impersonate_list) 22 | globals.fp_map[req_token] = fp 23 | with open(globals.FP_FILE, "w", encoding="utf-8") as f: 24 | json.dump(globals.fp_map, f, indent=4) 25 | if configs.user_agents_list and "user-agent" in fp.keys() and fp["user-agent"] not in configs.user_agents_list: 26 | fp["user-agent"] = random.choice(configs.user_agents_list) 27 | globals.fp_map[req_token] = fp 28 | with open(globals.FP_FILE, "w", encoding="utf-8") as f: 29 | json.dump(globals.fp_map, f, indent=4) 30 | fp = {k.lower(): v for k, v in fp.items()} 31 | return fp 32 | 
else: 33 | options = Options(version_ranges={ 34 | 'chrome': VersionRange(min_version=124), 35 | 'edge': VersionRange(min_version=124), 36 | }) 37 | ua = ua_generator.generate( 38 | device=configs.device_tuple if configs.device_tuple else ('desktop'), 39 | browser=configs.browser_tuple if configs.browser_tuple else ('chrome', 'edge', 'firefox', 'safari'), 40 | platform=configs.platform_tuple if configs.platform_tuple else ('windows', 'macos'), 41 | options=options 42 | ) 43 | fp = { 44 | "user-agent": ua.text if not configs.user_agents_list else random.choice(configs.user_agents_list), 45 | "impersonate": random.choice(globals.impersonate_list), 46 | "proxy_url": random.choice(configs.proxy_url_list) if configs.proxy_url_list else None, 47 | "oai-device-id": str(uuid.uuid4()) 48 | } 49 | if ua.device == "desktop" and ua.browser in ("chrome", "edge"): 50 | fp["sec-ch-ua-platform"] = ua.ch.platform 51 | fp["sec-ch-ua"] = ua.ch.brands 52 | fp["sec-ch-ua-mobile"] = ua.ch.mobile 53 | 54 | if not req_token: 55 | return fp 56 | else: 57 | globals.fp_map[req_token] = fp 58 | with open(globals.FP_FILE, "w", encoding="utf-8") as f: 59 | json.dump(globals.fp_map, f, indent=4) 60 | return fp 61 | -------------------------------------------------------------------------------- /chatgpt/proofofWork.py: -------------------------------------------------------------------------------- 1 | import hashlib 2 | import json 3 | import random 4 | import re 5 | import time 6 | import uuid 7 | from datetime import datetime, timedelta, timezone 8 | from html.parser import HTMLParser 9 | 10 | import pybase64 11 | 12 | from utils.log import log 13 | from utils.configs import conversation_only 14 | 15 | cores = [8, 16, 24, 32] 16 | timeLayout = "%a %b %d %Y %H:%M:%S" 17 | 18 | cached_scripts = [] 19 | cached_dpl = "" 20 | cached_time = 0 21 | cached_require_proof = "" 22 | 23 | navigator_key = [ 24 | "registerProtocolHandler−function registerProtocolHandler() { [native code] }", 25 | 
"storage−[object StorageManager]", 26 | "locks−[object LockManager]", 27 | "appCodeName−Mozilla", 28 | "permissions−[object Permissions]", 29 | "share−function share() { [native code] }", 30 | "webdriver−false", 31 | "managed−[object NavigatorManagedData]", 32 | "canShare−function canShare() { [native code] }", 33 | "vendor−Google Inc.", 34 | "vendor−Google Inc.", 35 | "mediaDevices−[object MediaDevices]", 36 | "vibrate−function vibrate() { [native code] }", 37 | "storageBuckets−[object StorageBucketManager]", 38 | "mediaCapabilities−[object MediaCapabilities]", 39 | "getGamepads−function getGamepads() { [native code] }", 40 | "bluetooth−[object Bluetooth]", 41 | "share−function share() { [native code] }", 42 | "cookieEnabled−true", 43 | "virtualKeyboard−[object VirtualKeyboard]", 44 | "product−Gecko", 45 | "mediaDevices−[object MediaDevices]", 46 | "canShare−function canShare() { [native code] }", 47 | "getGamepads−function getGamepads() { [native code] }", 48 | "product−Gecko", 49 | "xr−[object XRSystem]", 50 | "clipboard−[object Clipboard]", 51 | "storageBuckets−[object StorageBucketManager]", 52 | "unregisterProtocolHandler−function unregisterProtocolHandler() { [native code] }", 53 | "productSub−20030107", 54 | "login−[object NavigatorLogin]", 55 | "vendorSub−", 56 | "login−[object NavigatorLogin]", 57 | "getInstalledRelatedApps−function getInstalledRelatedApps() { [native code] }", 58 | "mediaDevices−[object MediaDevices]", 59 | "locks−[object LockManager]", 60 | "webkitGetUserMedia−function webkitGetUserMedia() { [native code] }", 61 | "vendor−Google Inc.", 62 | "xr−[object XRSystem]", 63 | "mediaDevices−[object MediaDevices]", 64 | "virtualKeyboard−[object VirtualKeyboard]", 65 | "virtualKeyboard−[object VirtualKeyboard]", 66 | "appName−Netscape", 67 | "storageBuckets−[object StorageBucketManager]", 68 | "presentation−[object Presentation]", 69 | "onLine−true", 70 | "mimeTypes−[object MimeTypeArray]", 71 | "credentials−[object CredentialsContainer]", 72 | 
"presentation−[object Presentation]", 73 | "getGamepads−function getGamepads() { [native code] }", 74 | "vendorSub−", 75 | "virtualKeyboard−[object VirtualKeyboard]", 76 | "serviceWorker−[object ServiceWorkerContainer]", 77 | "xr−[object XRSystem]", 78 | "product−Gecko", 79 | "keyboard−[object Keyboard]", 80 | "gpu−[object GPU]", 81 | "getInstalledRelatedApps−function getInstalledRelatedApps() { [native code] }", 82 | "webkitPersistentStorage−[object DeprecatedStorageQuota]", 83 | "doNotTrack", 84 | "clearAppBadge−function clearAppBadge() { [native code] }", 85 | "presentation−[object Presentation]", 86 | "serial−[object Serial]", 87 | "locks−[object LockManager]", 88 | "requestMIDIAccess−function requestMIDIAccess() { [native code] }", 89 | "locks−[object LockManager]", 90 | "requestMediaKeySystemAccess−function requestMediaKeySystemAccess() { [native code] }", 91 | "vendor−Google Inc.", 92 | "pdfViewerEnabled−true", 93 | "language−zh-CN", 94 | "setAppBadge−function setAppBadge() { [native code] }", 95 | "geolocation−[object Geolocation]", 96 | "userAgentData−[object NavigatorUAData]", 97 | "mediaCapabilities−[object MediaCapabilities]", 98 | "requestMIDIAccess−function requestMIDIAccess() { [native code] }", 99 | "getUserMedia−function getUserMedia() { [native code] }", 100 | "mediaDevices−[object MediaDevices]", 101 | "webkitPersistentStorage−[object DeprecatedStorageQuota]", 102 | "sendBeacon−function sendBeacon() { [native code] }", 103 | "hardwareConcurrency−32", 104 | "credentials−[object CredentialsContainer]", 105 | "storage−[object StorageManager]", 106 | "cookieEnabled−true", 107 | "pdfViewerEnabled−true", 108 | "windowControlsOverlay−[object WindowControlsOverlay]", 109 | "scheduling−[object Scheduling]", 110 | "pdfViewerEnabled−true", 111 | "hardwareConcurrency−32", 112 | "xr−[object XRSystem]", 113 | "webdriver−false", 114 | "getInstalledRelatedApps−function getInstalledRelatedApps() { [native code] }", 115 | "getInstalledRelatedApps−function 
getInstalledRelatedApps() { [native code] }", 116 | "bluetooth−[object Bluetooth]" 117 | ] 118 | document_key = ['_reactListeningo743lnnpvdg', 'location'] 119 | window_key = [ 120 | "0", 121 | "window", 122 | "self", 123 | "document", 124 | "name", 125 | "location", 126 | "customElements", 127 | "history", 128 | "navigation", 129 | "locationbar", 130 | "menubar", 131 | "personalbar", 132 | "scrollbars", 133 | "statusbar", 134 | "toolbar", 135 | "status", 136 | "closed", 137 | "frames", 138 | "length", 139 | "top", 140 | "opener", 141 | "parent", 142 | "frameElement", 143 | "navigator", 144 | "origin", 145 | "external", 146 | "screen", 147 | "innerWidth", 148 | "innerHeight", 149 | "scrollX", 150 | "pageXOffset", 151 | "scrollY", 152 | "pageYOffset", 153 | "visualViewport", 154 | "screenX", 155 | "screenY", 156 | "outerWidth", 157 | "outerHeight", 158 | "devicePixelRatio", 159 | "clientInformation", 160 | "screenLeft", 161 | "screenTop", 162 | "styleMedia", 163 | "onsearch", 164 | "isSecureContext", 165 | "trustedTypes", 166 | "performance", 167 | "onappinstalled", 168 | "onbeforeinstallprompt", 169 | "crypto", 170 | "indexedDB", 171 | "sessionStorage", 172 | "localStorage", 173 | "onbeforexrselect", 174 | "onabort", 175 | "onbeforeinput", 176 | "onbeforematch", 177 | "onbeforetoggle", 178 | "onblur", 179 | "oncancel", 180 | "oncanplay", 181 | "oncanplaythrough", 182 | "onchange", 183 | "onclick", 184 | "onclose", 185 | "oncontentvisibilityautostatechange", 186 | "oncontextlost", 187 | "oncontextmenu", 188 | "oncontextrestored", 189 | "oncuechange", 190 | "ondblclick", 191 | "ondrag", 192 | "ondragend", 193 | "ondragenter", 194 | "ondragleave", 195 | "ondragover", 196 | "ondragstart", 197 | "ondrop", 198 | "ondurationchange", 199 | "onemptied", 200 | "onended", 201 | "onerror", 202 | "onfocus", 203 | "onformdata", 204 | "oninput", 205 | "oninvalid", 206 | "onkeydown", 207 | "onkeypress", 208 | "onkeyup", 209 | "onload", 210 | "onloadeddata", 211 | 
"onloadedmetadata", 212 | "onloadstart", 213 | "onmousedown", 214 | "onmouseenter", 215 | "onmouseleave", 216 | "onmousemove", 217 | "onmouseout", 218 | "onmouseover", 219 | "onmouseup", 220 | "onmousewheel", 221 | "onpause", 222 | "onplay", 223 | "onplaying", 224 | "onprogress", 225 | "onratechange", 226 | "onreset", 227 | "onresize", 228 | "onscroll", 229 | "onsecuritypolicyviolation", 230 | "onseeked", 231 | "onseeking", 232 | "onselect", 233 | "onslotchange", 234 | "onstalled", 235 | "onsubmit", 236 | "onsuspend", 237 | "ontimeupdate", 238 | "ontoggle", 239 | "onvolumechange", 240 | "onwaiting", 241 | "onwebkitanimationend", 242 | "onwebkitanimationiteration", 243 | "onwebkitanimationstart", 244 | "onwebkittransitionend", 245 | "onwheel", 246 | "onauxclick", 247 | "ongotpointercapture", 248 | "onlostpointercapture", 249 | "onpointerdown", 250 | "onpointermove", 251 | "onpointerrawupdate", 252 | "onpointerup", 253 | "onpointercancel", 254 | "onpointerover", 255 | "onpointerout", 256 | "onpointerenter", 257 | "onpointerleave", 258 | "onselectstart", 259 | "onselectionchange", 260 | "onanimationend", 261 | "onanimationiteration", 262 | "onanimationstart", 263 | "ontransitionrun", 264 | "ontransitionstart", 265 | "ontransitionend", 266 | "ontransitioncancel", 267 | "onafterprint", 268 | "onbeforeprint", 269 | "onbeforeunload", 270 | "onhashchange", 271 | "onlanguagechange", 272 | "onmessage", 273 | "onmessageerror", 274 | "onoffline", 275 | "ononline", 276 | "onpagehide", 277 | "onpageshow", 278 | "onpopstate", 279 | "onrejectionhandled", 280 | "onstorage", 281 | "onunhandledrejection", 282 | "onunload", 283 | "crossOriginIsolated", 284 | "scheduler", 285 | "alert", 286 | "atob", 287 | "blur", 288 | "btoa", 289 | "cancelAnimationFrame", 290 | "cancelIdleCallback", 291 | "captureEvents", 292 | "clearInterval", 293 | "clearTimeout", 294 | "close", 295 | "confirm", 296 | "createImageBitmap", 297 | "fetch", 298 | "find", 299 | "focus", 300 | "getComputedStyle", 301 | 
"getSelection", 302 | "matchMedia", 303 | "moveBy", 304 | "moveTo", 305 | "open", 306 | "postMessage", 307 | "print", 308 | "prompt", 309 | "queueMicrotask", 310 | "releaseEvents", 311 | "reportError", 312 | "requestAnimationFrame", 313 | "requestIdleCallback", 314 | "resizeBy", 315 | "resizeTo", 316 | "scroll", 317 | "scrollBy", 318 | "scrollTo", 319 | "setInterval", 320 | "setTimeout", 321 | "stop", 322 | "structuredClone", 323 | "webkitCancelAnimationFrame", 324 | "webkitRequestAnimationFrame", 325 | "chrome", 326 | "caches", 327 | "cookieStore", 328 | "ondevicemotion", 329 | "ondeviceorientation", 330 | "ondeviceorientationabsolute", 331 | "launchQueue", 332 | "documentPictureInPicture", 333 | "getScreenDetails", 334 | "queryLocalFonts", 335 | "showDirectoryPicker", 336 | "showOpenFilePicker", 337 | "showSaveFilePicker", 338 | "originAgentCluster", 339 | "onpageswap", 340 | "onpagereveal", 341 | "credentialless", 342 | "speechSynthesis", 343 | "onscrollend", 344 | "webkitRequestFileSystem", 345 | "webkitResolveLocalFileSystemURL", 346 | "sendMsgToSolverCS", 347 | "webpackChunk_N_E", 348 | "__next_set_public_path__", 349 | "next", 350 | "__NEXT_DATA__", 351 | "__SSG_MANIFEST_CB", 352 | "__NEXT_P", 353 | "_N_E", 354 | "regeneratorRuntime", 355 | "__REACT_INTL_CONTEXT__", 356 | "DD_RUM", 357 | "_", 358 | "filterCSS", 359 | "filterXSS", 360 | "__SEGMENT_INSPECTOR__", 361 | "__NEXT_PRELOADREADY", 362 | "Intercom", 363 | "__MIDDLEWARE_MATCHERS", 364 | "__STATSIG_SDK__", 365 | "__STATSIG_JS_SDK__", 366 | "__STATSIG_RERENDER_OVERRIDE__", 367 | "_oaiHandleSessionExpired", 368 | "__BUILD_MANIFEST", 369 | "__SSG_MANIFEST", 370 | "__intercomAssignLocation", 371 | "__intercomReloadLocation" 372 | ] 373 | 374 | class ScriptSrcParser(HTMLParser): 375 | def handle_starttag(self, tag, attrs): 376 | global cached_scripts, cached_dpl, cached_time 377 | if tag == "script": 378 | attrs_dict = dict(attrs) 379 | if "src" in attrs_dict: 380 | src = attrs_dict["src"] 381 | 
cached_scripts.append(src) 382 | match = re.search(r"c/[^/]*/_", src) 383 | if match: 384 | cached_dpl = match.group(0) 385 | cached_time = int(time.time()) 386 | 387 | def get_data_build_from_html(html_content): 388 | global cached_scripts, cached_dpl, cached_time 389 | parser = ScriptSrcParser() 390 | parser.feed(html_content) 391 | if not cached_scripts: 392 | cached_scripts.append("https://chatgpt.com/backend-api/sentinel/sdk.js") 393 | if not cached_dpl: 394 | match = re.search(r']*data-build="([^"]*)"', html_content) 395 | if match: 396 | data_build = match.group(1) 397 | cached_dpl = data_build 398 | cached_time = int(time.time()) 399 | log.info(f"Found dpl: {cached_dpl}") 400 | 401 | async def get_dpl(service): 402 | global cached_scripts, cached_dpl, cached_time 403 | if int(time.time()) - cached_time < 15 * 60: 404 | return True 405 | headers = service.base_headers.copy() 406 | cached_scripts = [] 407 | cached_dpl = "" 408 | try: 409 | if conversation_only: 410 | return True 411 | r = await service.s.get(f"{service.host_url}/", headers=headers, timeout=5) 412 | r.raise_for_status() 413 | get_data_build_from_html(r.text) 414 | if not cached_dpl: 415 | raise Exception("No Cached DPL") 416 | else: 417 | return True 418 | except Exception as e: 419 | log.info(f"Failed to get dpl: {e}") 420 | cached_dpl = None 421 | cached_time = int(time.time()) 422 | return False 423 | 424 | def get_parse_time(): 425 | now = datetime.now(timezone(timedelta(hours=-5))) 426 | return now.strftime(timeLayout) + " GMT-0500 (Eastern Standard Time)" 427 | 428 | def get_config(user_agent): 429 | config = [ 430 | random.randint(1080, 1440+1080), 431 | get_parse_time(), 432 | 4294705152, 433 | 0, 434 | user_agent, 435 | random.choice(cached_scripts) if cached_scripts else "", 436 | cached_dpl, 437 | "en-US", 438 | "en-US,es-US,en,es", 439 | 0, 440 | random.choice(navigator_key), 441 | random.choice(document_key), 442 | random.choice(window_key), 443 | time.perf_counter() * 1000, 444 | 
str(uuid.uuid4()), 445 | "", 446 | random.choice(cores), 447 | time.time() * 1000 - (time.perf_counter() * 1000), 448 | ] 449 | return config 450 | 451 | def get_answer_token(seed, diff, config): 452 | start = time.time() 453 | answer, solved = generate_answer(seed, diff, config) 454 | end = time.time() 455 | log.info(f'diff: {diff}, time: {int((end - start) * 1e6) / 1e3}ms, solved: {solved}') 456 | return "gAAAAAB" + answer, solved 457 | 458 | def generate_answer(seed, diff, config): 459 | diff_len = len(diff) 460 | seed_encoded = seed.encode() 461 | static_config_part1 = (json.dumps(config[:3], separators=(',', ':'), ensure_ascii=False)[:-1] + ',').encode() 462 | static_config_part2 = (',' + json.dumps(config[4:9], separators=(',', ':'), ensure_ascii=False)[1:-1] + ',').encode() 463 | static_config_part3 = (',' + json.dumps(config[10:], separators=(',', ':'), ensure_ascii=False)[1:]).encode() 464 | 465 | target_diff = bytes.fromhex(diff) 466 | 467 | for i in range(500000): 468 | dynamic_json_i = str(i).encode() 469 | dynamic_json_j = str(i >> 1).encode() 470 | final_json_bytes = static_config_part1 + dynamic_json_i + static_config_part2 + dynamic_json_j + static_config_part3 471 | base_encode = pybase64.b64encode(final_json_bytes) 472 | hash_value = hashlib.sha3_512(seed_encoded + base_encode).digest() 473 | if hash_value[:diff_len] <= target_diff: 474 | return base_encode.decode(), True 475 | 476 | return "wQ8Lk5FbGpA2NcR9dShT6gYjU7VxZ4D" + pybase64.b64encode(f'"{seed}"'.encode()).decode(), False 477 | 478 | def get_requirements_token(config): 479 | require, solved = generate_answer(format(random.random()), "0fffff", config) 480 | return 'gAAAAAC' + require 481 | 482 | if __name__ == "__main__": 483 | # cached_scripts.append( 484 | # "https://cdn.oaistatic.com/_next/static/cXh69klOLzS0Gy2joLDRS/_ssgManifest.js?dpl=453ebaec0d44c2decab71692e1bfe39be35a24b3") 485 | # cached_dpl = "453ebaec0d44c2decab71692e1bfe39be35a24b3" 486 | # cached_time = int(time.time()) 487 
| # for i in range(10): 488 | # seed = format(random.random()) 489 | # diff = "000032" 490 | # config = get_config("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome") 491 | # answer = get_answer_token(seed, diff, config) 492 | cached_scripts.append( 493 | "https://cdn.oaistatic.com/_next/static/cXh69klOLzS0Gy2joLDRS/_ssgManifest.js?dpl=453ebaec0d44c2decab71692e1bfe39be35a24b3") 494 | cached_dpl = "prod-f501fe933b3edf57aea882da888e1a544df99840" 495 | config = get_config("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36") 496 | get_requirements_token(config) 497 | -------------------------------------------------------------------------------- /chatgpt/refreshToken.py: -------------------------------------------------------------------------------- 1 | import json 2 | import random 3 | import time 4 | 5 | from fastapi import HTTPException 6 | 7 | from utils.Client import Client 8 | from utils.log import log 9 | from utils.configs import proxy_url_list 10 | import utils.globals as globals 11 | 12 | async def rt2ac(refresh_token, force_refresh=False): 13 | if not force_refresh and (refresh_token in globals.refresh_map and int(time.time()) - globals.refresh_map.get(refresh_token, {}).get("timestamp", 0) < 5 * 24 * 60 * 60): 14 | access_token = globals.refresh_map[refresh_token]["token"] 15 | # log.info(f"refresh_token -> access_token from cache") 16 | return access_token 17 | else: 18 | try: 19 | access_token = await chat_refresh(refresh_token) 20 | globals.refresh_map[refresh_token] = {"token": access_token, "timestamp": int(time.time())} 21 | with open(globals.REFRESH_MAP_FILE, "w") as f: 22 | json.dump(globals.refresh_map, f, indent=4) 23 | log.info(f"refresh_token -> access_token with openai: {access_token}") 24 | return access_token 25 | except HTTPException as e: 26 | raise HTTPException(status_code=e.status_code, detail=e.detail) 27 | 28 | async def 
chat_refresh(refresh_token): 29 | data = { 30 | "client_id": "pdlLIX2Y72MIl2rhLhTE9VV9bN905kBh", 31 | "grant_type": "refresh_token", 32 | "redirect_uri": "com.openai.chat://auth0.openai.com/ios/com.openai.chat/callback", 33 | "refresh_token": refresh_token 34 | } 35 | client = Client(proxy=random.choice(proxy_url_list) if proxy_url_list else None) 36 | try: 37 | r = await client.post("https://auth0.openai.com/oauth/token", json=data, timeout=5) 38 | if r.status_code == 200: 39 | access_token = r.json()['access_token'] 40 | return access_token 41 | else: 42 | if "invalid_grant" in r.text or "access_denied" in r.text: 43 | if refresh_token not in globals.error_token_list: 44 | globals.error_token_list.append(refresh_token) 45 | with open(globals.ERROR_TOKENS_FILE, "a", encoding="utf-8") as f: 46 | f.write(refresh_token + "\n") 47 | raise Exception(r.text) 48 | else: 49 | raise Exception(r.text[:300]) 50 | except Exception as e: 51 | log.error(f"Failed to refresh access_token `{refresh_token}`: {str(e)}") 52 | raise HTTPException(status_code=500, detail=f"Failed to refresh access_token.") 53 | finally: 54 | await client.close() 55 | del client 56 | -------------------------------------------------------------------------------- /chatgpt/turnstile.py: -------------------------------------------------------------------------------- 1 | import pybase64 2 | import json 3 | import random 4 | import time 5 | from typing import Any, Callable, Dict, List, Union 6 | 7 | class OrderedMap: 8 | def __init__(self): 9 | self.keys = [] 10 | self.values = {} 11 | 12 | def add(self, key: str, value: Any): 13 | if key not in self.values: 14 | self.keys.append(key) 15 | self.values[key] = value 16 | 17 | def to_json(self): 18 | return json.dumps({k: self.values[k] for k in self.keys}) 19 | 20 | TurnTokenList = List[List[Any]] 21 | FloatMap = Dict[float, Any] 22 | StringMap = Dict[str, Any] 23 | FuncType = Callable[..., Any] 24 | 25 | def get_turnstile_token(dx: str, p: str) -> 
Union[str, None]: 26 | try: 27 | decoded_bytes = pybase64.b64decode(dx) 28 | return process_turnstile_token(decoded_bytes.decode(), p) 29 | except Exception as e: 30 | print(f"Error in get_turnstile_token: {e}") 31 | return None 32 | 33 | def process_turnstile_token(dx: str, p: str) -> str: 34 | result = [] 35 | p_length = len(p) 36 | if p_length != 0: 37 | for i, r in enumerate(dx): 38 | result.append(chr(ord(r) ^ ord(p[i % p_length]))) 39 | else: 40 | result = list(dx) 41 | return ''.join(result) 42 | 43 | def is_slice(input_val: Any) -> bool: 44 | return isinstance(input_val, (list, tuple)) 45 | 46 | def is_float(input_val: Any) -> bool: 47 | return isinstance(input_val, float) 48 | 49 | def is_string(input_val: Any) -> bool: 50 | return isinstance(input_val, str) 51 | 52 | def to_str(input_val: Any) -> str: 53 | if input_val is None: 54 | return "undefined" 55 | elif is_float(input_val): 56 | return str(input_val) 57 | elif is_string(input_val): 58 | special_cases = { 59 | "window.Math": "[object Math]", 60 | "window.Reflect": "[object Reflect]", 61 | "window.performance": "[object Performance]", 62 | "window.localStorage": "[object Storage]", 63 | "window.Object": "function Object() { [native code] }", 64 | "window.Reflect.set": "function set() { [native code] }", 65 | "window.performance.now": "function () { [native code] }", 66 | "window.Object.create": "function create() { [native code] }", 67 | "window.Object.keys": "function keys() { [native code] }", 68 | "window.Math.random": "function random() { [native code] }" 69 | } 70 | return special_cases.get(input_val, input_val) 71 | elif isinstance(input_val, list) and all(isinstance(item, str) for item in input_val): 72 | return ','.join(input_val) 73 | else: 74 | return str(input_val) 75 | 76 | def get_func_map() -> FloatMap: 77 | process_map: FloatMap = {} 78 | 79 | def func_1(e: float, t: float): 80 | e_str = to_str(process_map[e]) 81 | t_str = to_str(process_map[t]) 82 | res = 
process_turnstile_token(e_str, t_str) 83 | process_map[e] = res 84 | 85 | def func_2(e: float, t: Any): 86 | process_map[e] = t 87 | 88 | def func_5(e: float, t: float): 89 | n = process_map[e] 90 | tres = process_map[t] 91 | if is_slice(n): 92 | nt = n + [tres] 93 | process_map[e] = nt 94 | else: 95 | if is_string(n) or is_string(tres): 96 | res = to_str(n) + to_str(tres) 97 | elif is_float(n) and is_float(tres): 98 | res = n + tres 99 | else: 100 | res = "NaN" 101 | process_map[e] = res 102 | 103 | def func_6(e: float, t: float, n: float): 104 | tv = process_map[t] 105 | nv = process_map[n] 106 | if is_string(tv) and is_string(nv): 107 | res = f"{tv}.{nv}" 108 | if res == "window.document.location": 109 | process_map[e] = "https://chatgpt.com/" 110 | else: 111 | process_map[e] = res 112 | else: 113 | print("func type 6 error") 114 | 115 | def func_24(e: float, t: float, n: float): 116 | tv = process_map[t] 117 | nv = process_map[n] 118 | if is_string(tv) and is_string(nv): 119 | process_map[e] = f"{tv}.{nv}" 120 | else: 121 | print("func type 24 error") 122 | 123 | def func_7(e: float, *args): 124 | n = [process_map[arg] for arg in args] 125 | ev = process_map[e] 126 | if isinstance(ev, str): 127 | if ev == "window.Reflect.set": 128 | obj = n[0] 129 | key_str = str(n[1]) 130 | val = n[2] 131 | obj.add(key_str, val) 132 | elif callable(ev): 133 | ev(*n) 134 | 135 | def func_17(e: float, t: float, *args): 136 | i = [process_map[arg] for arg in args] 137 | tv = process_map[t] 138 | res = None 139 | if isinstance(tv, str): 140 | if tv == "window.performance.now": 141 | current_time = time.time_ns() 142 | elapsed_ns = current_time - int(start_time * 1e9) 143 | res = (elapsed_ns + random.random()) / 1e6 144 | elif tv == "window.Object.create": 145 | res = OrderedMap() 146 | elif tv == "window.Object.keys": 147 | if isinstance(i[0], str) and i[0] == "window.localStorage": 148 | res = ["STATSIG_LOCAL_STORAGE_INTERNAL_STORE_V4", "STATSIG_LOCAL_STORAGE_STABLE_ID", 
"client-correlated-secret", "oai/apps/capExpiresAt", "oai-did", "STATSIG_LOCAL_STORAGE_LOGGING_REQUEST", "UiState.isNavigationCollapsed.1"] 149 | elif tv == "window.Math.random": 150 | res = random.random() 151 | elif callable(tv): 152 | res = tv(*i) 153 | process_map[e] = res 154 | 155 | def func_8(e: float, t: float): 156 | process_map[e] = process_map[t] 157 | 158 | def func_14(e: float, t: float): 159 | tv = process_map[t] 160 | if is_string(tv): 161 | token_list = json.loads(tv) 162 | process_map[e] = token_list 163 | else: 164 | print("func type 14 error") 165 | 166 | def func_15(e: float, t: float): 167 | tv = process_map[t] 168 | process_map[e] = json.dumps(tv) 169 | 170 | def func_18(e: float): 171 | ev = process_map[e] 172 | e_str = to_str(ev) 173 | decoded = pybase64.b64decode(e_str).decode() 174 | process_map[e] = decoded 175 | 176 | def func_19(e: float): 177 | ev = process_map[e] 178 | e_str = to_str(ev) 179 | encoded = pybase64.b64encode(e_str.encode()).decode() 180 | process_map[e] = encoded 181 | 182 | def func_20(e: float, t: float, n: float, *args): 183 | o = [process_map[arg] for arg in args] 184 | ev = process_map[e] 185 | tv = process_map[t] 186 | if ev == tv: 187 | nv = process_map[n] 188 | if callable(nv): 189 | nv(*o) 190 | else: 191 | print("func type 20 error") 192 | 193 | def func_21(*args): 194 | pass 195 | 196 | def func_23(e: float, t: float, *args): 197 | i = list(args) 198 | ev = process_map[e] 199 | tv = process_map[t] 200 | if ev is not None: 201 | if callable(tv): 202 | tv(*i) 203 | 204 | process_map.update({ 205 | 1: func_1, 2: func_2, 5: func_5, 6: func_6, 24: func_24, 7: func_7, 206 | 17: func_17, 8: func_8, 10: "window", 14: func_14, 15: func_15, 207 | 18: func_18, 19: func_19, 20: func_20, 21: func_21, 23: func_23 208 | }) 209 | 210 | return process_map 211 | 212 | start_time = 0 213 | 214 | def process_turnstile(dx: str, p: str) -> str: 215 | global start_time 216 | start_time = time.time() 217 | tokens = 
get_turnstile_token(dx, p) 218 | if tokens is None: 219 | return "" 220 | 221 | token_list = json.loads(tokens) 222 | # print(token_list) 223 | res = "" 224 | process_map = get_func_map() 225 | 226 | def func_3(e: str): 227 | nonlocal res 228 | res = pybase64.b64encode(e.encode()).decode() 229 | 230 | process_map[3] = func_3 231 | process_map[9] = token_list 232 | process_map[16] = p 233 | 234 | for token in token_list: 235 | try: 236 | e = token[0] 237 | t = token[1:] 238 | f = process_map.get(e) 239 | if callable(f): 240 | f(*t) 241 | else: 242 | pass 243 | # print(f"Warning: No function found for key {e}") 244 | except Exception as exc: 245 | pass 246 | # print(f"Error processing token {token}: {exc}") 247 | 248 | return res 249 | 250 | if __name__ == "__main__": 251 | result = process_turnstile( 252 | "PBp5bWF1cHlLe1ttQhRfaTdmXEpidGdEYU5JdGJpR3xfHFVuGHVEY0tZVG18Vh54RWJ5CXpxKXl3SUZ7b2FZAWJaTBl6RGQZURh8BndUcRlQVgoYalAca2QUX24ffQZgdVVbbmBrAH9FV08Rb2oVVgBeQVRrWFp5VGZMYWNyMnoSN0FpaQgFT1l1f3h7c1RtcQUqY1kZbFJ5BQRiZEJXS3RvHGtieh9PaBlHaXhVWnVLRUlKdwsdbUtbKGFaAlN4a0V/emUJe2J2dl9BZkAxZWU/WGocRUBnc3VyT3F4WkJmYSthdBIGf0RwQ2FjAUBnd3ZEelgbVUEIDAJjS1VZbU9sSWFjfk55J2lZFV0HWX1cbVV5dWdAfkFIAVQVbloUXQtYaAR+VXhUF1BZdG4CBHRyK21AG1JaHhBFaBwCWUlocyQGVT4NBzNON2ASFVtXeQRET1kARndjUEBDT2RKeQN7RmJjeVtvZGpDeWJ1EHxafVd+Wk1AbzdLVTpafkd9dWZKeARecGJrS0xcenZIEEJQOmcFa01menFOeVRiSGFZC1JnWUA0SU08QGgeDFFgY34YWXAdZHYaHRhANFRMOV0CZmBfVExTWh9lZlVpSnx6eQURb2poa2RkQVJ0cmF0bwJbQgB6RlRbQHRQaQFKBHtENwVDSWpgHAlbTU1hXEpwdBh2eBlNY3l2UEhnblx7AmpaQ08JDDAzJUVAbn5IA2d8XX5ZFVlrYWhSXWlYQlEdZlQ/QUwuYwJgTG5GZghSRHdCYk1CWWBjclp0aWo3TWMSQmFaaAdge05FbmFhH3hxCFZuIX1BY01WVW5ABx5jfG1ZbjcZEiwwPFYQVm0sdHV8Xnl7alRuemgKZUwICklweW1heHR5Q3UqYVoSR3BCaldIc3Z8SmJOS212CAY5AmMkYmMaRn5UXEthZFsHYFx7ZHRnYV5tcFBZeHocQxUXXU0bYk0VFUZ0ZgFrSWcMRksCAwdJEBBncF12fGUVdnFNQnl4ZQB9WUclYGMRe04TQUZMf0FEbEthW357HEN2aVhAdHAMH0NPdWFicm1YbzNRBSkWMDUAOVdXbBlfRz51ah54YG5iVX9sR2t6RF1pR1RGU20MABBWQy55T3dQfmlUfmFrA35gY2AdDiBWMWVlP1hqHEVAZ3NzfE9/c1pCZWErYXQSB2
BKcENjew1baXB9Rm1aG1VBCAkJY01aWW1NbklgZH5Oek1rTX9FFEB7RHNGEG9pKH1eRgFSZGJJdkcMQHUSY0IRQRkzUmFgBG90cklvVwNZThIHQXYABjFJaApCWh1qUEhnWVpiBHxDRDlAHg8kFVcCY1dCUk8VRm9obEN9e21EdnluWxN7eWt8RnFOekRTRXZKXkNPWH40YGMRXHwfRHZ7Z1JKS2R9XG1XR09qCGlaZmZ/QXwnfloWTQxIflxbSVNdSUZgHBRLKCwpQwwmXzB2NFRMOVxUTFNfH3BoRVhfWkcBYghVaSh0ZWMFeG9qBWp5eENNeGNldncHR0wBezVPTjdlSGcOTndjVkAUVl99YQFkRUE2YlNKe3ppeml2V2lvYkhGHjtbNHIALywsMScPEjEFO3Q1MQ0UGDYvK148ETYxIzEcD0gzchNcLSs+LAJxJiEQKBd5MCsXCRclFA0gBRg3axk1HTkBGyoUPRhwCwI2OAIRB2gUBRcjATt6ORQ9JDANOHFlEQITIC8VOS4GAC49GDscBBQMNQ4hDQtQZHYMHmk3BRFHeHZvcXNvd01+WXxPFF9pN2ZaSmR3Z0RkQkl7YmlHbzMsSS8HEy4PPggxGAAYBBcuJREBEQA7LAMANgEiNiZgFR5Mchs0eH83ERFsGCceZTESe2MeEgQSGwgXIgIbb38FFBAWEC1GFC42OQ0CCwcudSIpOwY6MRw7IjwYAgAYD3UbOA8AaHoHPiUkBgQmTA4FUxgAOCoJKxNmVSoANDIzAjdlDxA6ISIOKhQDEhwLPS82IT4CUFIsOyIwLD4+BBsDAww1AnMqHAIlMiMTGT0oAQlUE3QDQhIUACMxDwhGLxEXHQsSIV0FLgMaAgJ2LgsEHyEPLBcKOBtfUhg9MiAXPT5fHhA1Wg8+BxoPLgYcGS0WRSsELjIZKg8EJw4lFQAoUCcTcxASLS9BOTsZD3ERGRUhOD1YUjJxWBEBdnc9PwkQNytyED0zAQtaG3Y2ACsWXSsoPV4+DBQ2DyQ+bg0MHxVHKhAqNh8QPVkNET5fAis5Jh0uGxACKA8kOyo6IBkHIgkKdx0sAgA8SAQVHCkCLwcoBnQHGRAeAxAXOQAdKxhrNxMLJQYrKwAxHnFcOA4HIlEEAVkVDigqAwMoORQQKFkaOy0pISMoRmYDPyFLCRIqVhwCImITET04Gx8QPTMWWRQDcgstAioLGSkBTjw7ECYLeSgraxFoazw2CQcrJgU1cQ0fAB4YEykpIQMEPgJ0NUY0Lhc8IBEEWQtyNSkeECEmHitRFhsULgUrASkfO3E6XDsqLTAVcg8pFCwUaT8rPiMALzskFQQNJBkfKgUxBwscAj4YWhYHDxoXEBRwHgUUMx4gCxsCGBRJAz5yABsCAxIPFSo2AQILLSs7NS4EAGEnFBANJBgTOV0FLWJSKAUQeRkDKyAjCjYqIwEUBwAUPT5iBgohDzYmBAEBJS4pCSspGgUQBDsuD3wvKFd7HwE/EQ8ZFQgRICYEAgUuRhovHFYdM15eNwIgZBgmBVIoJGBnACRXChIKQR8lDVh2CicfKTIBcxwzNionIg4PEVI0FyMQOTkaABI3JSoAByVTKAItJn1ULjcEOG4gBjoqDnAQDjsGHzA2cF92CTIlAhMdchoJABA6KQEyajcgBAM+IhwyE292OTQ0IzUsAVY8EBcxMRxoKgEhBRQSGTMLfQsgFDp1PDQsCgEFKAkIASA8EhF4IgpjIzMJJC4WcyYcEQkPPSMBHlUSfFkuPCQnKiMaAGYWEC80EQIeex9wJjszCSQMFg4iDDcvVxMEBR17Knw0OnMVRyc4fj9ROQpiABoWFxAscR0Na3gBHWdyPjcOBCMleBQgKR4rLQViBhcLGnEgDDZ4ACoPJhQQIH4nHBoDNhkWCyUWDRgVFx4YAwAzFjAELCUPNScjDQ4hDB54Gwg4K2g3BmMBKjkwGggiFAo0Iwp6BBQeDxYwBz4VKCIzeDQmJj
YeXTUmHCZpcygrAQt3NAFrBjsmGhtWJz8uUiR3CjorPy4NJXUuOjYIBDoMDGM4MwxxNiMNGg4SES01GHA1O3EIOSo7LQUXHnEeOgIjPXENLjQSfn4OVSkSAgcFBQIxDQUuajUPOj0MFwwcZhMnVzQOCQMDAWBWZBUPPx4oBAA5YA5qBwcrEwQ+IjppEz47Ji4CE2YNKTEzAUcjBgAoFFwyKHwbCz8pARUrDgIIMgg1H2MXGTUBFx0XAgMdEj0HOQ4MIionOyE2cUcxHAA7Iw0sNTkBDUU9GRsbPgkzOBwNKD9hHBdVJipxVTYRAgMmGAIVKxc2JREoNxgtMysDHggNExYWBh8FHwUfBQ8/KQYONiUrLjkfIwpxHDgYCTw1MDEMMBU2JRErK2crDzZdCy94UjAOC00MMgFCKTJxZw8mdgoSCzQMcAtzDC8hMBw7CHJ/GjQ+Cw4aDAVyMTMwEi8gHhUfNB8sDi4hWTQ0GDdJdSEVNggXAhY7Knd3MQ4KGhoZDm11DysqLxI8NXYZCXMDMngaMQg5PSsYKjYxJRJzdx8jOzQlIwklEwgtDhEMdwskLAs3Izg7LQscJi4IeyE3GiAbDAYrHzEzEjcxKicAdSteCTMqJHsUMSEXMT0kJD4Ga3V2Kk4rMSUZHS8qMAsqHTsEPR8RXzArXzc2OgYQOy4oPXc1AQM+DhpuMDFRFTMrBn8pCQkCdCE/MDILKG8uGllRNRlGRy0NGjsyFGoTKSUsOiwkAi8sNRJUNgQ0czEuFgUNMShjBAsBDDErbywzKBoKKzkeOncPDR42HCskNGg7BjEMVgAvOyApLQ5WPgAVHiM+Jz8eOA8BOSI7Xwo4JGIJNjYdCz0MFmAuPhEbLzc3VjUQAGwoHjATcSAGdwUVCjIqMDA1OyQNUB5gGRw6UwpkNS0eECoqbCt2KzQEdD1jBzEZOxQdIjBoMxVqCyoEBToSDB5xPz44LA9MCDAKMAZhLgZZACwMKAYDPWgHODIGHiwMIDUpZ2YEMA04By8INQl3ClQLLC8wCDIIXG8/PSARMDYQLxQyeh8qFTg7MhhUDzkLKwNzDT8RPQ84JC0dDTAqGDA7KxkoKDAcPzh1KQo9LzkeN3YMIxc4HzsBNxorAj0jQX90CCMlPQ4FMTYPfDgwDA0sMyoJHyw6EigMCwULUBsDcnsAdQUAKRAMFBIqLQwCGCkLLmoOJQIEOSU/JQ0JFQgmDx02LwgrIjMLHQQ9DCw+cgoRJREWZAQkCyoyNgskJip0JDg5cy1BXXIzJAl3GCQCdggwZXEbBmcPNAwwCAV9fAkGDDUUBhBmKTgyKAo0KRklcRc/IxY5KQ8SACIKEgg4FVUuDx0FUVoiK3IuEiQEGQkkYToJDhcPJhVTfA8zMiMhFgxnAystCycgLTweB1A0GAMuACIBVEUKHSYiCR0UJA0ENQsRBwUPCgEpMCcvGyUKdxcvH3U5OAwRegMnCiE1IxYiOgsGEGoOAhg/DxJ9IggHCzESCgMsJgJ9awodFDksDRAyCyA1NwodDCwJOFcWCw0yNwokfTUKLwt3IwolIwwocTcbRRAeCwoMHiUZOWkeCRclHihWMyVVcTcfVQEkJjAyMyReOT0jEFwMC1UPPyMwATQnO1oxHz8DNSIoAScYMBMtDi8iFgwgHwwKMAxnDjsXDQooCx4YHSY4JQYYPgQ0Cz0PVkQEEQYqKCIWPTELLBsxElgUMBcENhMKPQQRbyQVRhJdREdUW0tUYB4MX2BjeAU8bxEfZUVYW1VHTF5OSQV/f1xBMU5Jamd7QX9fbWd4H3p1ZhNuYmRFVHRyZHRnBltCCnxGV1YxeEQcDUp3ZlJAFFhafWEKFUlQQ25cOW9iHm90Yk5teXpaSGdhXHsBYStPTR1fdG5wHUIAZ0ZuZWVTeFQVWWliaFxSGFRQOARhQlRVQFVpBmBObEZmAUlKdU9gW0VFbHJkXW0Ffko6cmVTfEx3CXdvV1x+eWMDE2h1IX
lJZ0J1VkNKe1cGBnZkcE1gdFJbbXdsWntMECo=", 253 | "gAAAAACWzMwMzIsIlRodSBKdWwgMTEgMjAyNCAwMzoxMDo0NiBHTVQrMDgwMCAo5Lit5Zu95qCH5YeG5pe26Ze0KSIsNDI5NDcwNTE1MiwxLCJNb3ppbGxhLzUuMCAoV2luZG93cyBOVCAxMC4wOyBXaW42NDsgeDY0KSBBcHBsZVdlYktpdC81MzcuMzYgKEtIVE1MLCBsaWtlIEdlY2tvKSBDaHJvbWUvMTI2LjAuMC4wIFNhZmFyaS81MzcuMzYgRWRnLzEyNi4wLjAuMCIsImh0dHBzOi8vY2RuLm9haXN0YXRpYy5jb20vX25leHQvc3RhdGljL2NodW5rcy9wYWdlcy9fYXBwLWMwOWZmNWY0MjQwMjcwZjguanMiLCJjL1pGWGkxeTNpMnpaS0EzSVQwNzRzMy9fIiwiemgtQ04iLCJ6aC1DTixlbixlbi1HQixlbi1VUyIsMTM1LCJ3ZWJraXRUZW1wb3JhcnlTdG9yYWdl4oiSW29iamVjdCBEZXByZWNhdGVkU3RvcmFnZVF1b3RhXSIsIl9yZWFjdExpc3RlbmluZ3NxZjF0ejFzNmsiLCJmZXRjaCIsMzY1NCwiNWU1NDUzNzItMzcyNy00ZDAyLTkwMDYtMzMwMDRjMWJmYTQ2Il0=" 254 | ) 255 | print(result) 256 | -------------------------------------------------------------------------------- /chatgpt/wssClient.py: -------------------------------------------------------------------------------- 1 | import json 2 | import time 3 | 4 | from utils.log import log 5 | import utils.globals as globals 6 | 7 | def save_wss_map(wss_map): 8 | with open(globals.WSS_MAP_FILE, "w") as f: 9 | json.dump(wss_map, f, indent=4) 10 | 11 | async def token2wss(token): 12 | if not token: 13 | return False, None 14 | if token in globals.wss_map: 15 | wss_mode = globals.wss_map[token]["wss_mode"] 16 | if wss_mode: 17 | if int(time.time()) - globals.wss_map.get(token, {}).get("timestamp", 0) < 60 * 60: 18 | wss_url = globals.wss_map[token]["wss_url"] 19 | log.info(f"token -> wss_url from cache") 20 | return wss_mode, wss_url 21 | else: 22 | log.info(f"token -> wss_url expired") 23 | return wss_mode, None 24 | else: 25 | return False, None 26 | return False, None 27 | 28 | async def set_wss(token, wss_mode, wss_url=None): 29 | if not token: 30 | return True 31 | globals.wss_map[token] = {"timestamp": int(time.time()), "wss_url": wss_url, "wss_mode": wss_mode} 32 | save_wss_map(globals.wss_map) 33 | return True 34 | 
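`token2wss` above caches a WSS URL per token and serves it only while the entry is younger than one hour. The same TTL pattern, reduced to a standalone sketch — the `TTLCache` name and the in-memory dict are illustrative, not part of the repo:

```python
import time

# Minimal illustration of the TTL pattern used by token2wss:
# a cached value is served only while it is younger than max_age seconds.
class TTLCache:
    def __init__(self, max_age=60 * 60):
        self.max_age = max_age
        self._store = {}  # key -> {"timestamp": ..., "value": ...}

    def set(self, key, value, now=None):
        now = int(time.time()) if now is None else now
        self._store[key] = {"timestamp": now, "value": value}

    def get(self, key, now=None):
        now = int(time.time()) if now is None else now
        entry = self._store.get(key)
        if entry is None or now - entry["timestamp"] >= self.max_age:
            return None  # missing or expired, mirroring the "wss_url expired" branch
        return entry["value"]

cache = TTLCache()
cache.set("token-a", "wss://example/ws", now=1000)
assert cache.get("token-a", now=1000 + 3599) == "wss://example/ws"
assert cache.get("token-a", now=1000 + 3600) is None
```

Note the strict `< 60 * 60` comparison in `token2wss`: an entry that is exactly one hour old already counts as expired, which the `>=` check above reproduces.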
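The proof-of-work search in `generate_answer` (proofofWork.py, earlier) boils down to: fold an incrementing nonce into the JSON browser config, base64-encode it, SHA3-512 it together with the server seed, and accept when the hash's leading bytes are numerically at most the difficulty target. A simplified standalone sketch of that loop — the real code splices the counters into fixed positions of the config and uses `pybase64`; here stdlib `base64` stands in, the nonce is just appended, and the seed/difficulty values are made up for illustration:

```python
import base64
import hashlib
import json

def solve_pow(seed: str, diff_hex: str, config: list, max_iters: int = 500_000):
    """Find a nonce whose SHA3-512(seed + base64(config-with-nonce))
    has leading bytes <= the difficulty target."""
    target = bytes.fromhex(diff_hex)
    n = len(target)
    for i in range(max_iters):
        payload = config + [i, i >> 1]  # nonce fields folded into the config
        encoded = base64.b64encode(
            json.dumps(payload, separators=(",", ":")).encode()
        )
        digest = hashlib.sha3_512(seed.encode() + encoded).digest()
        if digest[:n] <= target:
            return encoded.decode(), i
    return None, max_iters

# An easy target ("ffffff") is satisfied immediately; real challenges use
# far smaller targets such as "000032", which take many iterations.
answer, iters = solve_pow("0.123456", "ffffff", ["demo-config"])
assert answer is not None
```

The smaller the target, the fewer hashes qualify, so difficulty scales with how many leading bytes must stay near zero — the same knob the `diff` string controls in `get_answer_token`.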
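`process_turnstile_token` (turnstile.py, earlier) is a plain repeating-key XOR: each character of the base64-decoded `dx` payload is XORed with the key `p`, cycling through the key. Because XOR is its own inverse, applying the same key twice restores the input — a minimal round-trip sketch with illustrative payload/key values:

```python
def xor_with_key(data: str, key: str) -> str:
    # Repeating-key XOR, as used to decode the turnstile `dx` payload.
    if not key:
        return data
    return "".join(chr(ord(c) ^ ord(key[i % len(key)])) for i, c in enumerate(data))

plain = '{"demo": [1, 2, 3]}'
scrambled = xor_with_key(plain, "secret-key")
assert scrambled != plain
assert xor_with_key(scrambled, "secret-key") == plain
```

This is why `get_turnstile_token` can hand the decoded bytes straight to `process_turnstile_token`: one pass with the key both "encrypts" and "decrypts".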
-------------------------------------------------------------------------------- /gateway/admin.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | from fastapi import Request, HTTPException, Form 4 | from fastapi.responses import HTMLResponse, RedirectResponse 5 | 6 | import utils.globals as globals 7 | from app import app, templates 8 | from utils.log import log 9 | from utils.configs import api_prefix 10 | 11 | if api_prefix: 12 | @app.get(f"/{api_prefix}", response_class=HTMLResponse) 13 | async def admin_cookie(request: Request): 14 | response = HTMLResponse(content="Cookie added") 15 | response.set_cookie("prefix", value=api_prefix, expires="Thu, 01 Jan 2099 00:00:00 GMT") 16 | return RedirectResponse(url='/admin', headers=response.headers) 17 | 18 | @app.get("/admin", response_class=HTMLResponse) 19 | async def admin_html(request: Request): 20 | prefix = request.cookies.get("prefix") 21 | if api_prefix and api_prefix != prefix: 22 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 23 | 24 | tokens_count = len(set(globals.token_list) - set(globals.error_token_list)) 25 | return templates.TemplateResponse("admin.html", {"request": request, "api_prefix": api_prefix, "tokens_count": tokens_count}) 26 | 27 | @app.get("/admin/tokens/error") 28 | async def admin_tokens_error(request: Request): 29 | prefix = request.cookies.get("prefix") 30 | if api_prefix and api_prefix != prefix: 31 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 32 | 33 | return templates.TemplateResponse("callback.html", { 34 | "request": request, 35 | "title": "Error Tokens", 36 | "zh_title": "错误令牌", 37 | "message": "
".join(list(set(globals.error_token_list))) or "/", 38 | "url": "/admin" 39 | }) 40 | 41 | @app.post("/admin/tokens/upload") 42 | async def tokens_upload(request: Request, text: str = Form(None)): 43 | prefix = request.cookies.get("prefix") 44 | if api_prefix and api_prefix != prefix: 45 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 46 | 47 | if not text: 48 | return templates.TemplateResponse("callback.html", { 49 | "request": request, 50 | "title": "Error", 51 | "zh_title": "错误", 52 | "message": "You entered a null value", 53 | "zh_message": "您输入的是空值", 54 | "url": "/admin" 55 | }) 56 | 57 | lines = text.split("\n") 58 | for line in lines: 59 | if line.strip() and not line.startswith("#"): 60 | globals.token_list.append(line.strip()) 61 | with open(globals.TOKENS_FILE, "a", encoding="utf-8") as f: 62 | f.write(line.strip() + "\n") 63 | 64 | log.info(f"Token count: {len(globals.token_list)}, Error token count: {len(globals.error_token_list)}") 65 | 66 | return templates.TemplateResponse("callback.html", { 67 | "request": request, 68 | "title": "Success", 69 | "zh_title": "成功", 70 | "message": "Token uploaded successfully", 71 | "zh_message": "令牌上传成功", 72 | "url": "/admin" 73 | }) 74 | 75 | @app.post("/admin/tokens/clear") 76 | async def tokens_clear(request: Request): 77 | prefix = request.cookies.get("prefix") 78 | if api_prefix and api_prefix != prefix: 79 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 80 | 81 | globals.token_list.clear() 82 | globals.error_token_list.clear() 83 | with open(globals.TOKENS_FILE, "w", encoding="utf-8") as f: 84 | pass 85 | with open(globals.ERROR_TOKENS_FILE, "w", encoding="utf-8") as f: 86 | pass 87 | log.info(f"Token count: {len(globals.token_list)}, Error token count: {len(globals.error_token_list)}") 88 | 89 | return templates.TemplateResponse("callback.html", { 90 
| "request": request, 91 | "title": "Success", 92 | "zh_title": "成功", 93 | "message": "Token cleared successfully", 94 | "zh_message": "清空令牌成功", 95 | "url": "/admin" 96 | }) 97 | 98 | @app.post("/admin/tokens/seed_clear") 99 | async def clear_seed_tokens(request: Request): 100 | prefix = request.cookies.get("prefix") 101 | if api_prefix and api_prefix != prefix: 102 | raise HTTPException(status_code=403, detail={"message": "Access denied - You do not have permission to access this resource"}) 103 | 104 | globals.seed_map.clear() 105 | globals.conversation_map.clear() 106 | with open(globals.SEED_MAP_FILE, "w", encoding="utf-8") as f: 107 | f.write("{}") 108 | with open(globals.CONVERSATION_MAP_FILE, "w", encoding="utf-8") as f: 109 | f.write("{}") 110 | log.info(f"Seed token count: {len(globals.seed_map)}") 111 | 112 | return templates.TemplateResponse("callback.html", { 113 | "request": request, 114 | "title": "Success", 115 | "zh_title": "成功", 116 | "message": "Seed token cleared successfully.", 117 | "zh_message": "种子令牌清除成功", 118 | "url": "/admin" 119 | }) -------------------------------------------------------------------------------- /gateway/auth.py: -------------------------------------------------------------------------------- 1 | import hashlib 2 | import os 3 | import json 4 | from urllib.parse import quote 5 | 6 | from fastapi import Request, Path 7 | from fastapi.responses import Response, HTMLResponse, RedirectResponse 8 | 9 | from app import app, templates 10 | 11 | @app.get("/auth/login", response_class=HTMLResponse) 12 | async def login_html(request: Request): 13 | token = request.cookies.get("oai-flow-token") 14 | if token: 15 | return RedirectResponse(url='/') 16 | 17 | response = templates.TemplateResponse("signin.html", {"request": request}) 18 | return response 19 | 20 | @app.get("/auth/logout", response_class=HTMLResponse) 21 | async def logout_html(request: Request): 22 | token = request.cookies.get("oai-flow-token") 23 | if not token: 24 | 
return RedirectResponse(url='/') 25 | 26 | response = templates.TemplateResponse("signout.html", {"request": request}) 27 | return response 28 | 29 | @app.get("/api/auth/signin", response_class=HTMLResponse) 30 | async def signin_html(request: Request): 31 | signin = request.query_params.get("signin") 32 | token = request.cookies.get("oai-flow-token") 33 | if not signin: 34 | return RedirectResponse(url='/auth/login') 35 | 36 | if token: 37 | return RedirectResponse(url='/') 38 | 39 | if len(signin) != 45 and not signin.startswith("eyJhbGciOi"): 40 | signin = quote(signin) 41 | 42 | response = HTMLResponse(content="Cookie added") 43 | response.set_cookie("oai-flow-token", value=signin, expires="Thu, 01 Jan 2099 00:00:00 GMT") 44 | return RedirectResponse(url='/', headers=response.headers) 45 | 46 | @app.get("/api/auth/signout", response_class=HTMLResponse) 47 | async def signout_html(request: Request): 48 | signout = request.query_params.get("signout") 49 | 50 | if not signout or signout != 'true': 51 | return RedirectResponse(url='/auth/logout') 52 | 53 | token = request.cookies.get("oai-flow-token") 54 | 55 | if not token: 56 | return RedirectResponse(url='/auth/login') 57 | 58 | if len(token) != 45 and not token.startswith("eyJhbGciOi"): 59 | token = quote(token) 60 | 61 | response = HTMLResponse(content="Cookie deleted") 62 | response.delete_cookie("oai-flow-token") 63 | return RedirectResponse(url='/', headers=response.headers) 64 | 65 | @app.get("/auth/login/{path:path}", response_class=Response) 66 | @app.get("/api/auth/signin/{path:path}", response_class=Response) 67 | @app.get("/api/auth/callback/{path:path}", response_class=Response) 68 | async def auth_3rd(request: Request, path: str): 69 | return RedirectResponse(url='/auth/login') 70 | 71 | @app.get("/api/auth/csrf", response_class=Response) 72 | async def auth_csrf(request: Request): 73 | csrf = {'csrfToken': hashlib.sha256(os.urandom(32)).hexdigest()} 74 | return Response(content=json.dumps(csrf, 
indent=4), status_code=200, media_type="application/json") 75 | 76 | @app.post("/api/auth/sign{path}", response_class=Response) 77 | async def auth_sign(request: Request, path: str = Path(..., regex="in|out")): 78 | sign = {'url':f'/auth/log{path}'} 79 | return Response(content=json.dumps(sign, indent=4), status_code=200, media_type="application/json") 80 | 81 | @app.post("/api/auth/providers", response_class=Response) 82 | async def auth_providers(request: Request): 83 | providers = {"auth0":{"id":"auth0","name":"Auth0","type":"oauth","signinUrl":"/api/auth/signin/auth0","callbackUrl":"/api/auth/callback/auth0"},"login-web":{"id":"login-web","name":"Auth0","type":"oauth","signinUrl":"/api/auth/signin/login-web","callbackUrl":"/api/auth/callback/login-web"},"openai":{"id":"openai","name":"openai","type":"oauth","signinUrl":"/api/auth/signin/openai","callbackUrl":"/api/auth/callback/openai"},"openai-sidetron":{"id":"openai-sidetron","name":"openai-sidetron","type":"oauth","signinUrl":"/api/auth/signin/openai-sidetron","callbackUrl":"/api/auth/callback/openai-sidetron"},"auth0-sidetron":{"id":"auth0-sidetron","name":"Auth0","type":"oauth","signinUrl":"/api/auth/signin/auth0-sidetron","callbackUrl":"/api/auth/callback/auth0-sidetron"}} 84 | return Response(content=json.dumps(providers, indent=4), status_code=200, media_type="application/json") -------------------------------------------------------------------------------- /gateway/chatgpt.py: -------------------------------------------------------------------------------- 1 | import json 2 | from urllib.parse import quote 3 | 4 | from fastapi import Request 5 | from fastapi.responses import HTMLResponse, RedirectResponse 6 | from fastapi.staticfiles import StaticFiles 7 | 8 | from gateway.reverseProxy import web_reverse_proxy 9 | 10 | from app import app, templates 11 | from utils.kv_utils import set_value_for_key 12 | import utils.configs as configs 13 | 14 | with open("templates/chatgpt_context.json", "r", 
encoding="utf-8") as f: 15 | chatgpt_context = json.load(f) 16 | 17 | @app.get("/", response_class=HTMLResponse) 18 | @app.get("/tasks") 19 | @app.get("/ccc/{path}") 20 | async def chatgpt_html(request: Request): 21 | token = request.cookies.get("oai-flow-token") 22 | if not token: 23 | if configs.enable_homepage: 24 | response = await web_reverse_proxy(request, "/") 25 | else: 26 | response = templates.TemplateResponse("home.html", {"request": request}) 27 | return response 28 | 29 | if len(token) != 45 and not token.startswith("eyJhbGciOi"): 30 | token = quote(token) 31 | 32 | user_remix_context = chatgpt_context.copy() 33 | set_value_for_key(user_remix_context, "user", {"id": "user-chatgpt"}) 34 | set_value_for_key(user_remix_context, "accessToken", token) 35 | 36 | response = templates.TemplateResponse("base.html", {"request": request, "remix_context": user_remix_context}) 37 | return response 38 | 39 | app.mount("/static", StaticFiles(directory="templates/static"), name="static") 40 | 41 | @app.get("/favicon.ico", include_in_schema=False) 42 | async def favicon(): 43 | return RedirectResponse(url="/static/favicon.ico") -------------------------------------------------------------------------------- /gateway/gpts.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | from fastapi import Request 4 | from fastapi.responses import Response 5 | 6 | from app import app 7 | from gateway.chatgpt import chatgpt_html 8 | 9 | with open("templates/gpts_context.json", "r", encoding="utf-8") as f: 10 | gpts_context = json.load(f) 11 | 12 | @app.get("/gpts") 13 | async def get_gpts(request: Request): 14 | #return {"kind": "store"} 15 | return await chatgpt_html(request) 16 | 17 | @app.get("/gpts/{page}") 18 | async def dynamic_child_page(page: str, request: Request): 19 | return await chatgpt_html(request) 20 | 21 | @app.get("/g/g-{gizmo_id}") 22 | async def get_gizmo_json(request: Request, gizmo_id: str): 23 | params = 
request.query_params 24 | if params.get("_data") == "routes/g.$gizmoId._index": 25 | return Response(content=json.dumps(gpts_context, indent=4), media_type="application/json") 26 | else: 27 | return await chatgpt_html(request) 28 | -------------------------------------------------------------------------------- /gateway/reverseProxy.py: -------------------------------------------------------------------------------- 1 | import json 2 | import random 3 | import time 4 | import re 5 | from datetime import datetime, timezone 6 | 7 | from fastapi import Request, HTTPException 8 | from fastapi.responses import StreamingResponse, Response 9 | from starlette.background import BackgroundTask 10 | 11 | import utils.globals as globals 12 | from chatgpt.authorization import verify_token, get_req_token 13 | from chatgpt.fp import get_fp 14 | from utils.Client import Client 15 | from utils.log import log 16 | from utils.configs import chatgpt_base_url_list, sentinel_proxy_url_list, force_no_history, file_host, voice_host, log_length 17 | 18 | def generate_current_time(): 19 | current_time = datetime.now(timezone.utc) 20 | formatted_time = current_time.isoformat(timespec='microseconds').replace('+00:00', 'Z') 21 | return formatted_time 22 | 23 | headers_reject_list = [ 24 | "x-real-ip", 25 | "x-forwarded-for", 26 | "x-forwarded-proto", 27 | "x-forwarded-port", 28 | "x-forwarded-host", 29 | "x-forwarded-server", 30 | "cf-warp-tag-id", 31 | "cf-visitor", 32 | "cf-ray", 33 | "cf-connecting-ip", 34 | "cf-ipcountry", 35 | "cdn-loop", 36 | "remote-host", 37 | "x-frame-options", 38 | "x-xss-protection", 39 | "x-content-type-options", 40 | "content-security-policy", 41 | "host", 42 | "cookie", 43 | "connection", 44 | "content-length", 45 | "content-encoding", 46 | "x-middleware-prefetch", 47 | "x-nextjs-data", 48 | "purpose", 49 | "x-forwarded-uri", 50 | "x-forwarded-path", 51 | "x-forwarded-method", 52 | "x-forwarded-protocol", 53 | "x-forwarded-scheme", 54 | "cf-request-id", 55 | 
"cf-worker", 56 | "cf-access-client-id", 57 | "cf-access-client-device-type", 58 | "cf-access-client-device-model", 59 | "cf-access-client-device-name", 60 | "cf-access-client-device-brand", 61 | "x-middleware-prefetch", 62 | "x-forwarded-for", 63 | "x-forwarded-host", 64 | "x-forwarded-proto", 65 | "x-forwarded-server", 66 | "x-real-ip", 67 | "x-forwarded-port", 68 | "cf-connecting-ip", 69 | "cf-ipcountry", 70 | "cf-ray", 71 | "cf-visitor", 72 | ] 73 | 74 | headers_accept_list = [ 75 | "openai-sentinel-chat-requirements-token", 76 | "openai-sentinel-proof-token", 77 | "openai-sentinel-turnstile-token", 78 | "accept", 79 | "authorization", 80 | "x-authorization", 81 | "accept-encoding", 82 | "accept-language", 83 | "content-type", 84 | "oai-device-id", 85 | "oai-echo-logs", 86 | "oai-language", 87 | "sec-fetch-dest", 88 | "sec-fetch-mode", 89 | "sec-fetch-site", 90 | "x-ms-blob-type", 91 | ] 92 | 93 | async def get_real_req_token(token): 94 | req_token = get_req_token(token) 95 | if len(req_token) == 45 or req_token.startswith("eyJhbGciOi"): 96 | return req_token 97 | else: 98 | req_token = get_req_token("", token) 99 | return req_token 100 | 101 | def save_conversation(token, conversation_id, title=None): 102 | if conversation_id not in globals.conversation_map: 103 | conversation_detail = { 104 | "id": conversation_id, 105 | "title": title, 106 | "create_time": generate_current_time(), 107 | "update_time": generate_current_time() 108 | } 109 | globals.conversation_map[conversation_id] = conversation_detail 110 | else: 111 | globals.conversation_map[conversation_id]["update_time"] = generate_current_time() 112 | if title: 113 | globals.conversation_map[conversation_id]["title"] = title 114 | if conversation_id not in globals.seed_map[token]["conversations"]: 115 | globals.seed_map[token]["conversations"].insert(0, conversation_id) 116 | else: 117 | globals.seed_map[token]["conversations"].remove(conversation_id) 118 | 
globals.seed_map[token]["conversations"].insert(0, conversation_id) 119 | with open(globals.CONVERSATION_MAP_FILE, "w", encoding="utf-8") as f: 120 | json.dump(globals.conversation_map, f, indent=4) 121 | with open(globals.SEED_MAP_FILE, "w", encoding="utf-8") as f: 122 | json.dump(globals.seed_map, f, indent=4) 123 | if title: 124 | log.info(f"Conversation ID: {conversation_id}, Title: {title}") 125 | 126 | async def content_generator(r, token, history=True): 127 | conversation_id = None 128 | title = None 129 | async for chunk in r.aiter_content(): 130 | try: 131 | if history and (len(token) != 45 and not token.startswith("eyJhbGciOi")) and (not conversation_id or not title): 132 | chat_chunk = chunk.decode('utf-8') 133 | if (not conversation_id or not title) and chat_chunk.startswith("event: delta\n\ndata: {"): 134 | chunk_data = chat_chunk[19:] 135 | conversation_id = json.loads(chunk_data).get("v").get("conversation_id") 136 | if conversation_id: 137 | save_conversation(token, conversation_id) 138 | title = globals.conversation_map[conversation_id].get("title") 139 | if chat_chunk.startswith("data: {"): 140 | if "\n\nevent: delta" in chat_chunk: 141 | index = chat_chunk.find("\n\nevent: delta") 142 | chunk_data = chat_chunk[6:index] 143 | elif "\n\ndata: {" in chat_chunk: 144 | index = chat_chunk.find("\n\ndata: {") 145 | chunk_data = chat_chunk[6:index] 146 | else: 147 | chunk_data = chat_chunk[6:] 148 | chunk_data = chunk_data.strip() 149 | if conversation_id is None: 150 | conversation_id = json.loads(chunk_data).get("conversation_id") 151 | if conversation_id: 152 | save_conversation(token, conversation_id) 153 | title = globals.conversation_map[conversation_id].get("title") 154 | if title is None: 155 | title = json.loads(chunk_data).get("title") 156 | if title: 157 | save_conversation(token, conversation_id, title) 158 | except Exception as e: 159 | # log.error(e) 160 | # log.error(chunk.decode('utf-8')) 161 | pass 162 | yield chunk 163 | 164 | async def
web_reverse_proxy(request: Request, path: str): 165 | try: 166 | origin_host = request.url.netloc 167 | if request.url.is_secure: 168 | petrol = "https" 169 | else: 170 | petrol = "http" 171 | if "x-forwarded-proto" in request.headers: 172 | petrol = request.headers["x-forwarded-proto"] 173 | if "cf-visitor" in request.headers: 174 | cf_visitor = json.loads(request.headers["cf-visitor"]) 175 | petrol = cf_visitor.get("scheme", petrol) 176 | 177 | # Bug fix: with duplicate query keys, only the last duplicate's value was kept 178 | params = {key: request.query_params.getlist(key) for key in request.query_params.keys()} 179 | params = {key: values[0] if len(values) == 1 else values for key, values in params.items()} 180 | 181 | request_cookies = dict(request.cookies) 182 | 183 | # headers = { 184 | # key: value for key, value in request.headers.items() 185 | # if (key.lower() not in ["host", "origin", "referer", "priority", 186 | # "oai-device-id"] and key.lower() not in headers_reject_list) 187 | # } 188 | headers = { 189 | key: value for key, value in request.headers.items() 190 | if (key.lower() in headers_accept_list) 191 | } 192 | 193 | base_url = random.choice(chatgpt_base_url_list) if chatgpt_base_url_list else "https://chatgpt.com" 194 | 195 | if path.endswith(".js.map") or "cdn-cgi/challenge-platform" in path or "api/v2/logs" in path: 196 | response = Response(content=f"BlobNotFoundThe specified blob does not exist.
Time:{generate_current_time()}", status_code=404, media_type="application/xml") 197 | return response 198 | 199 | if "assets/" in path: 200 | base_url = "https://cdn.oaistatic.com" 201 | if "voice/previews" in path: 202 | base_url = "https://persistent.oaistatic.com" 203 | if "file-" in path and "backend-api" not in path: 204 | base_url = "https://files.oaiusercontent.com" 205 | if "v1/" in path: 206 | base_url = "https://ab.chatgpt.com" 207 | if "sandbox" in path: 208 | base_url = "https://web-sandbox.oaiusercontent.com" 209 | path = path.replace("sandbox/", "") 210 | if "avatar/" in path: 211 | base_url = "https://s.gravatar.com" 212 | if params.get('s') == '480' and params.get('r') == 'pg' and params.get('d'): 213 | params['s'] = '100' 214 | 215 | token = headers.get("authorization", "") or headers.get("x-authorization", "") 216 | token = token.replace("Bearer ", "").strip() 217 | 218 | 219 | if token: 220 | req_token = await get_real_req_token(token) 221 | access_token = await verify_token(req_token) 222 | headers.update({"authorization": f"Bearer {access_token}"}) 223 | headers.update({"x-authorization": f"Bearer {access_token}"}) 224 | 225 | token = request.cookies.get("oai-flow-token", "") 226 | req_token = await get_real_req_token(token) 227 | 228 | fp = get_fp(req_token).copy() 229 | 230 | proxy_url = fp.pop("proxy_url", None) 231 | impersonate = fp.pop("impersonate", "safari15_3") 232 | user_agent = fp.get("user-agent") 233 | headers.update(fp) 234 | 235 | headers.update({ 236 | # "accept-language": "en-US,en;q=0.9", 237 | "host": base_url.replace("https://", "").replace("http://", ""), 238 | "origin": base_url, 239 | "referer": f"{base_url}/" 240 | }) 241 | 242 | if "file-" in path and "backend-api" not in path: 243 | headers.update({ 244 | "origin": "https://chatgpt.com/", 245 | "referer": "https://chatgpt.com/" 246 | }) 247 | 248 | if "v1/initialize" in path: 249 | headers.update({"user-agent": request.headers.get("user-agent")}) 250 | if 
"statsig-api-key" not in headers: 251 | headers.update({ 252 | "statsig-sdk-type": "js-client", 253 | "statsig-api-key": "client-tnE5GCU2F2cTxRiMbvTczMDT1jpwIigZHsZSdqiy4u", 254 | "statsig-sdk-version": "5.1.0", 255 | "statsig-client-time": int(time.time() * 1000), 256 | }) 257 | 258 | data = await request.body() 259 | 260 | history = True 261 | 262 | if path.endswith("backend-api/conversation") or path.endswith("backend-alt/conversation"): 263 | try: 264 | req_json = json.loads(data) 265 | history = not req_json.get("history_and_training_disabled", False) 266 | except Exception: 267 | pass 268 | if force_no_history: 269 | history = False 270 | req_json = json.loads(data) 271 | req_json["history_and_training_disabled"] = True 272 | data = json.dumps(req_json).encode("utf-8") 273 | 274 | if sentinel_proxy_url_list and "backend-api/sentinel/chat-requirements" in path: 275 | client = Client(proxy=random.choice(sentinel_proxy_url_list)) 276 | else: 277 | client = Client(proxy=proxy_url, impersonate=impersonate) 278 | try: 279 | background = BackgroundTask(client.close) 280 | r = await client.request(request.method, f"{base_url}/{path}", params=params, headers=headers, cookies=request_cookies, data=data, stream=True, allow_redirects=False) 281 | 282 | if r.status_code == 307 or r.status_code == 302 or r.status_code == 301: 283 | return Response(status_code=307, 284 | headers={"Location": r.headers.get("Location") 285 | .replace("ab.chatgpt.com", origin_host) 286 | .replace("chatgpt.com", origin_host) 287 | .replace("cdn.oaistatic.com", origin_host) 288 | .replace("https", petrol)}, 289 | background=background) 290 | elif 'stream' in r.headers.get("content-type", ""): 291 | log.info("-" * log_length) 292 | log.info(f"Request token: {req_token}") 293 | log.info(f"Request proxy: {proxy_url}") 294 | log.info(f"Request UA: {user_agent}") 295 | log.info(f"Request impersonate: {impersonate}") 296 | log.info("-" * log_length) 297 | conv_key = r.cookies.get("conv_key", "") 298 | 
response = StreamingResponse(content_generator(r, token, history), media_type=r.headers.get("content-type", ""), background=background) 299 | response.set_cookie("conv_key", value=conv_key) 300 | return response 301 | elif 'image' in r.headers.get("content-type", "") or "audio" in r.headers.get("content-type", "") or "video" in r.headers.get("content-type", ""): 302 | rheaders = dict(r.headers) 303 | 304 | # 修复浏览器无法解码 305 | if 'content-encoding' in rheaders: 306 | del rheaders['content-encoding'] 307 | 308 | # 支持预览媒体而不是直接下载 309 | if 'content-disposition' in rheaders: 310 | rheaders['content-disposition'] = rheaders['content-disposition'].replace("attachment", "inline") 311 | 312 | response = Response(content=await r.acontent(), headers=rheaders, status_code=r.status_code, background=background) 313 | return response 314 | else: 315 | if path.endswith("backend-api/conversation") or path.endswith("backend-alt/conversation") or "/register-websocket" in path: 316 | response = Response(content=(await r.acontent()), media_type=r.headers.get("content-type"), status_code=r.status_code, background=background) 317 | else: 318 | content = await r.atext() 319 | 320 | # 关联的应用 - CallBack 321 | content = re.sub(r'\${window\.location\.origin}\/(aip|ccc|ca)\/', r'https://chatgpt.com/\1/', content) 322 | content = re.sub(r'(aip|ccc|ca)\/\:pluginId\/', r'https://chatgpt.com/\1/:pluginId/', content) 323 | 324 | content = (content 325 | # 前后端 API 326 | .replace("backend-anon", "backend-api") 327 | .replace("https://chatgpt.com/backend", f"{petrol}://{origin_host}/backend") 328 | .replace("https://chatgpt.com/public", f"{petrol}://{origin_host}/public") 329 | .replace("https://chatgpt.com/voice", f"{petrol}://{origin_host}/voice") 330 | .replace("https://chatgpt.com/api", f"{petrol}://{origin_host}/api") 331 | .replace("webrtc.chatgpt.com", voice_host if voice_host else "webrtc.chatgpt.com") 332 | # 前端显示 333 | .replace("https://cdn.oaistatic.com", f"{petrol}://{origin_host}") 334 | 
.replace("https://persistent.oaistatic.com", f"{petrol}://{origin_host}") 335 | .replace("https://files.oaiusercontent.com", f"{petrol}://{file_host if file_host else origin_host}") 336 | .replace("https://s.gravatar.com", f"{petrol}://{origin_host}") 337 | .replace("chromewebstore.google.com", "chromewebstore.crxsoso.com") 338 | # 伪遥测 339 | .replace("https://chatgpt.com/ces", f"{petrol}://{origin_host}/ces") 340 | .replace("${Mhe}/statsc/flush", f"{petrol}://{origin_host}/ces/statsc/flush") 341 | .replace("https://ab.chatgpt.com", f"{petrol}://{origin_host}") 342 | .replace("web-sandbox.oaiusercontent.com", f"{origin_host}/sandbox") 343 | # 禁止云收集数据 344 | .replace("browser-intake-datadoghq.com", f"0.0.0.0") 345 | .replace("datadoghq.com", f"0.0.0.0") 346 | .replace("ddog-gov.com", f"0.0.0.0") 347 | .replace("dd0g-gov.com", f"0.0.0.0") 348 | .replace("datad0g.com", f"0.0.0.0") 349 | # 翻译 350 | #.replace("By ChatGPT", "ChatGPT") 351 | #.replace("GPTs created by the ChatGPT team", "由 ChatGPT 官方创建的 GPTs") 352 | #.replace("Let me turn your imagination into imagery.", "让我将你的想象力转化为图像。") 353 | #.replace("Drop in any files and I can help analyze and visualize your data.", "上传任何文件,我可以帮助你分析并可视化数据。") 354 | #.replace("The latest version of GPT-4o with no additional capabilities.", "最新版本的 GPT-4o,没有额外的功能。") 355 | #.replace("I can browse the web to help you gather information or conduct research", "我可以浏览网页帮助你收集信息或进行研究。") 356 | #.replace("Ask me anything about stains, settings, sorting and everything laundry.", "问我任何关于污渍、设置、排序以及洗衣的一切问题。") 357 | #.replace("I help parents help their kids with math. Need a 9pm refresher on geometry proofs? 
I’m here for you.", "我帮助家长辅导孩子的数学。需要晚上9点复习几何证明?我在这里帮你。") 358 | .replace("给“{name}”发送消息", "问我任何事…") 359 | .replace("有什么可以帮忙的?", "今天能帮您些什么?") 360 | .replace("获取 ChatGPT 搜索扩展程序", "了解 “ChatGPT 搜索” 扩展") 361 | .replace("GPT 占位符", "占位 GPT") 362 | # 其它 363 | .replace('fill:"#0D0D0D"','fill:"currentColor"') # “新项目” 图标适配神色模式 364 | .replace("FP()","true") # 解除不显示 Sora 限制 365 | # .replace("https://chatgpt.com", f"{petrol}://{origin_host}") # 我才是 ChatGPT! 366 | # .replace("https", petrol) # 全都给我变协议 367 | ) 368 | 369 | # 项目名称 370 | content = re.sub(r'(? int(time.time()) + 60 * 60 * 24 * 5: 220 | need_refresh = False 221 | except Exception as e: 222 | log.error(f"access_token: {e}") 223 | 224 | if refresh_token and need_refresh: 225 | chatgpt_refresh_info = await chatgpt_refresh(refresh_token) 226 | if chatgpt_refresh_info: 227 | auth_info.update(chatgpt_refresh_info) 228 | access_token = auth_info.get("accessToken", "") 229 | account_check_info = await chatgpt_account_check(access_token) 230 | if account_check_info: 231 | auth_info.update(account_check_info) 232 | auth_info.update({"accessToken": access_token}) 233 | return Response(content=json.dumps(auth_info), media_type="application/json") 234 | elif access_token: 235 | account_check_info = await chatgpt_account_check(access_token) 236 | if account_check_info: 237 | auth_info.update(account_check_info) 238 | auth_info.update({"accessToken": access_token}) 239 | return Response(content=json.dumps(auth_info), media_type="application/json") 240 | 241 | raise HTTPException(status_code=401, detail="Unauthorized") 242 | 243 | -------------------------------------------------------------------------------- /gateway/v1.py: -------------------------------------------------------------------------------- 1 | import json 2 | import time 3 | import uuid 4 | 5 | from fastapi import Request 6 | from fastapi.responses import Response 7 | 8 | from app import app 9 | from gateway.reverseProxy import web_reverse_proxy 10 | from utils.kv_utils 
import set_value_for_key 11 | 12 | @app.post("/v1/initialize") 13 | async def initialize(request: Request): 14 | res = await web_reverse_proxy(request, f"v1/initialize?k=client-tnE5GCU2F2cTxRiMbvTczMDT1jpwIigZHsZSdqiy4u&st=javascript-client-react&sv=3.9.0&t={int(time.time() * 1000)}&sid={uuid.uuid4()}&se=1") 15 | 16 | if res and res.status_code == 200 and json.loads(res.body.decode()): 17 | initialize = json.loads(res.body.decode()) 18 | set_value_for_key(initialize, "ip", "8.8.8.8") 19 | set_value_for_key(initialize, "country", "US") 20 | return Response(content=json.dumps(initialize, indent=4), media_type="application/json") 21 | else: 22 | return res 23 | 24 | @app.post("/v1/rgstr") 25 | async def rgstr(): 26 | return Response(status_code=202, content=json.dumps({"success": True, "fake": True}, indent=4), media_type="application/json") 27 | 28 | @app.get("/ces/v1/projects/oai/settings") 29 | async def oai_settings(): 30 | return Response(status_code=200, content=json.dumps({"integrations":{"Segment.io":{"apiHost":"chatgpt.com/ces/v1","apiKey":"oai"}}, "fake": True}, indent=4), media_type="application/json") 31 | 32 | @app.post("/ces/{path:path}") 33 | async def ces_v1(): 34 | return Response(status_code=202, content=json.dumps({"success": True, "fake": True}, indent=4), media_type="application/json") 35 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | fastapi==0.115.3 2 | python-multipart==0.0.13 3 | curl_cffi==0.7.3 4 | uvicorn 5 | tiktoken 6 | python-dotenv 7 | websockets 8 | pillow 9 | pybase64 10 | jinja2 11 | APScheduler 12 | ua-generator 13 | pyjwt -------------------------------------------------------------------------------- /templates/admin.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 347 | Admin 348 | 349 | 350 | 351 |
[admin.html body: markup stripped during extraction; visible text: "Admin", "API: /v1/chat/completions", "Count: N/A", "Will clear the uploaded and error Tokens.", "Will clear the SeedToken and Conversation Map.", "Powered By FlowGPT / Based On Chat2Api"]
388 | 460 | 461 | 462 | -------------------------------------------------------------------------------- /templates/callback.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 347 | {% if title %} 348 | {{title}} 349 | {% else %} 350 | Info 351 | {% endif %} 352 | 353 | 354 | 355 |
[callback.html body: markup stripped during extraction; renders {{title}} (fallback "Info"), {{message | safe}} (fallback "Are you sure you want to go back to home?"), an optional return link to {{url}}, and the footer "Powered By FlowGPT / Based On Chat2Api"]
382 | 424 | 425 | 426 | -------------------------------------------------------------------------------- /templates/home.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 341 | FlowGPT 342 | 343 | 344 | 345 |
[home.html body: markup stripped during extraction; visible text: "FlowGPT", "Good at UI.", "Powered By FlowGPT / Based On Chat2Api"]
361 | 394 | 395 | 396 | -------------------------------------------------------------------------------- /templates/signin.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 341 | Sign In 342 | 343 | 344 | 345 |
346 | 364 |
365 | 399 | 400 | 401 | -------------------------------------------------------------------------------- /templates/signout.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 341 | Sign Out 342 | 343 | 344 | 345 |
[signout.html body: markup stripped during extraction; visible text: "Signout", "Are you sure you want to sign out?", "Powered By FlowGPT / Based On Chat2Api"]
364 | 398 | 399 | 400 | -------------------------------------------------------------------------------- /templates/static/favicon-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hmjz100/FlowGPT/629e342bb9dab4c206556d5733b1073e297791de/templates/static/favicon-dashboard.png -------------------------------------------------------------------------------- /templates/static/favicon-dashboard.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | -------------------------------------------------------------------------------- /templates/static/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hmjz100/FlowGPT/629e342bb9dab4c206556d5733b1073e297791de/templates/static/favicon.ico -------------------------------------------------------------------------------- /utils/Client.py: -------------------------------------------------------------------------------- 1 | import random 2 | 3 | from curl_cffi.requests import AsyncSession 4 | 5 | class Client: 6 | def __init__(self, proxy=None, timeout=15, verify=True, impersonate='safari15_3'): 7 | self.proxies = {"http": proxy, "https": proxy} 8 | self.timeout = timeout 9 | self.verify = verify 10 | 11 | self.impersonate = impersonate 12 | # impersonate=self.impersonate 13 | 14 | # self.ja3 = "" 15 | # self.akamai = "" 16 | # ja3=self.ja3, akamai=self.akamai 17 | self.session = AsyncSession(proxies=self.proxies, timeout=self.timeout, impersonate=self.impersonate, verify=self.verify) 18 | self.session2 = AsyncSession(proxies=self.proxies, timeout=self.timeout, impersonate=self.impersonate, verify=self.verify) 19 | 20 | async def post(self, *args, **kwargs): 21 | r = await self.session.post(*args, **kwargs) 22 | return r 23 | 24 | async def post_stream(self, *args, headers=None, cookies=None, **kwargs): 
25 | if self.session: 26 | headers = headers or self.session.headers 27 | cookies = cookies or self.session.cookies 28 | r = await self.session2.post(*args, headers=headers, cookies=cookies, **kwargs) 29 | return r 30 | 31 | async def get(self, *args, **kwargs): 32 | r = await self.session.get(*args, **kwargs) 33 | return r 34 | 35 | async def request(self, *args, **kwargs): 36 | r = await self.session.request(*args, **kwargs) 37 | return r 38 | 39 | async def put(self, *args, **kwargs): 40 | r = await self.session.put(*args, **kwargs) 41 | return r 42 | 43 | async def close(self): 44 | if hasattr(self, 'session'): 45 | try: 46 | await self.session.close() 47 | del self.session 48 | except Exception: 49 | pass 50 | if hasattr(self, 'session2'): 51 | try: 52 | await self.session2.close() 53 | del self.session2 54 | except Exception: 55 | pass 56 | -------------------------------------------------------------------------------- /utils/configs.py: -------------------------------------------------------------------------------- 1 | import ast 2 | import os 3 | 4 | from dotenv import load_dotenv 5 | 6 | from utils.log import log 7 | 8 | load_dotenv(encoding="ascii") 9 | 10 | def is_true(x): 11 | if isinstance(x, bool): 12 | return x 13 | if isinstance(x, str): 14 | return x.lower() in ['true', '1', 't', 'y', 'yes'] 15 | elif isinstance(x, int): 16 | return x == 1 17 | else: 18 | return False 19 | 20 | api_prefix = os.getenv('API_PREFIX', None) 21 | authorization = os.getenv('AUTHORIZATION', '').replace(' ', '') 22 | chatgpt_base_url = os.getenv('CHATGPT_BASE_URL', 'https://chatgpt.com').replace(' ', '') 23 | auth_key = os.getenv('AUTH_KEY', None) 24 | x_sign = os.getenv('X_SIGN', None) 25 | 26 | ark0se_token_url = os.getenv('ARK' + 'OSE_TOKEN_URL', '').replace(' ', '') 27 | if not ark0se_token_url: 28 | ark0se_token_url = os.getenv('ARK0SE_TOKEN_URL', None) 29 | proxy_url = os.getenv('PROXY_URL', '').replace(' ', '') 30 | sentinel_proxy_url = 
os.getenv('SENTINEL_PROXY_URL', None)
31 | export_proxy_url = os.getenv('EXPORT_PROXY_URL', None)
32 | file_host = os.getenv('FILE_HOST', None)
33 | voice_host = os.getenv('VOICE_HOST', None)
34 | impersonate_list_str = os.getenv('IMPERSONATE', '[]')
35 | user_agents_list_str = os.getenv('USER_AGENTS', '[]')
36 | device_tuple_str = os.getenv('DEVICE_TUPLE', '()')
37 | browser_tuple_str = os.getenv('BROWSER_TUPLE', '()')
38 | platform_tuple_str = os.getenv('PLATFORM_TUPLE', '()')
39 | 
40 | cf_file_url = os.getenv('CF_FILE_URL', None)
41 | turnstile_solver_url = os.getenv('TURNSTILE_SOLVER_URL', None)
42 | 
43 | history_disabled = is_true(os.getenv('HISTORY_DISABLED', True))
44 | pow_difficulty = os.getenv('POW_DIFFICULTY', '000032')
45 | retry_times = int(os.getenv('RETRY_TIMES', 3))
46 | conversation_only = is_true(os.getenv('CONVERSATION_ONLY', False))
47 | enable_limit = is_true(os.getenv('ENABLE_LIMIT', True))
48 | upload_by_url = is_true(os.getenv('UPLOAD_BY_URL', False))
49 | check_model = is_true(os.getenv('CHECK_MODEL', False))
50 | scheduled_refresh = is_true(os.getenv('SCHEDULED_REFRESH', False))
51 | random_token = is_true(os.getenv('RANDOM_TOKEN', True))
52 | oai_language = os.getenv('OAI_LANGUAGE', 'zh-CN')
53 | 
54 | authorization_list = authorization.split(',') if authorization else []
55 | chatgpt_base_url_list = chatgpt_base_url.split(',') if chatgpt_base_url else []
56 | ark0se_token_url_list = ark0se_token_url.split(',') if ark0se_token_url else []
57 | proxy_url_list = proxy_url.split(',') if proxy_url else []
58 | sentinel_proxy_url_list = sentinel_proxy_url.split(',') if sentinel_proxy_url else []
59 | impersonate_list = ast.literal_eval(impersonate_list_str)
60 | user_agents_list = ast.literal_eval(user_agents_list_str)
61 | device_tuple = ast.literal_eval(device_tuple_str)
62 | browser_tuple = ast.literal_eval(browser_tuple_str)
63 | platform_tuple = ast.literal_eval(platform_tuple_str)
64 | 
65 | port = int(os.getenv('PORT', 5005))  # os.getenv returns a string when PORT is set, so cast to int
66 | 
enable_gateway = is_true(os.getenv('ENABLE_GATEWAY', False))
67 | enable_homepage = is_true(os.getenv('ENABLE_HOMEPAGE', False))
68 | auto_seed = is_true(os.getenv('AUTO_SEED', True))
69 | force_no_history = is_true(os.getenv('FORCE_NO_HISTORY', False))
70 | no_sentinel = is_true(os.getenv('NO_SENTINEL', False))
71 | 
72 | with open('version.txt') as f:
73 |     version = f.read().strip().title()
74 | 
75 | title = f"FlowGPT v{version} | https://github.com/hmjz100/FlowGPT"
76 | log_length = len(title)
77 | 
78 | def aligned(left_text, right_text, separator=" "):
79 |     """
80 |     Format a string so that the left text is left-aligned, the right text is right-aligned, and the gap between them is filled with the given separator.
81 | 
82 |     Args:
83 |         left_text (str): the left-hand text.
84 |         right_text (str): the right-hand text.
85 |         separator (str): the fill character, a space by default.
86 | 
87 |     Returns:
88 |         str: the formatted string.
89 |     """
90 |     if not right_text:
91 |         right_text = "/"
92 | 
93 |     right_text = str(right_text)
94 |     # If the total width cannot fit both texts, just concatenate them
95 |     if log_length < len(left_text) + len(right_text):
96 |         return f"{left_text}{right_text}"
97 | 
98 |     # Number of separator characters needed as padding
99 |     padding_length = log_length - len(left_text) - len(right_text)
100 |     if padding_length <= 0:
101 |         return f"{left_text}{right_text}"
102 | 
103 |     # Fill the gap with the separator
104 |     padding = separator * padding_length
105 |     formatted_string = f"{left_text}{padding}{right_text}"
106 |     return formatted_string
107 | 
108 | version_map = {
109 |     "beta": {
110 |         "message": "Beta version, not representative of the final quality",
111 |         "color": 94
112 |     },
113 |     "canary": {
114 |         "message": "Canary version, unstable and experimental",
115 |         "color": 96
116 |     }
117 | }
118 | 
119 | log.info("-" * log_length)
120 | log.info(title)
121 | 
122 | for version_type, details in version_map.items():
123 |     if version_type in version.lower():
124 |         log.custom(details['message'].title().center(log_length, " "), details['color'])
125 |         break
126 | 
127 | log.info("-" * log_length)
128 | log.info(" environment variables ".title().center(log_length, " "))
129 | log.info(" (.env) ".center(log_length, " "))
130 | log.info(" 
Security ".center(log_length, "-")) 131 | log.info(aligned("API_PREFIX", api_prefix)) 132 | log.info(aligned("AUTHORIZATION", authorization_list)) 133 | log.info(aligned("AUTH_KEY", auth_key)) 134 | log.info(" Request ".center(log_length, "-")) 135 | log.info(aligned("CHATGPT_BASE_URL", chatgpt_base_url_list)) 136 | log.info(aligned("PROXY_URL", proxy_url_list)) 137 | log.info(aligned("EXPORT_PROXY_URL", export_proxy_url)) 138 | log.info(aligned("FILE_HOST", file_host)) 139 | log.info(aligned("VOICE_HOST", voice_host)) 140 | log.info(aligned("IMPERSONATE", impersonate_list)) 141 | log.info(aligned("USER_AGENTS", user_agents_list)) 142 | log.info(" Functionality ".center(log_length, "-")) 143 | log.info(aligned("HISTORY_DISABLED", history_disabled)) 144 | log.info(aligned("POW_DIFFICULTY", pow_difficulty)) 145 | log.info(aligned("RETRY_TIMES", retry_times)) 146 | log.info(aligned("CONVERSATION_ONLY", conversation_only)) 147 | log.info(aligned("ENABLE_LIMIT", enable_limit)) 148 | log.info(aligned("UPLOAD_BY_URL", upload_by_url)) 149 | log.info(aligned("CHECK_MODEL", check_model)) 150 | log.info(aligned("SCHEDULED_REFRESH", scheduled_refresh)) 151 | log.info(aligned("RANDOM_TOKEN", random_token)) 152 | log.info(aligned("OAI_LANGUAGE", oai_language)) 153 | log.info(" Gateway ".center(log_length, "-")) 154 | log.info(aligned("PORT", port)) 155 | log.info(aligned("ENABLE_GATEWAY", enable_gateway)) 156 | log.info(aligned("ENABLE_HOMEPAGE", enable_homepage)) 157 | log.info(aligned("AUTO_SEED", auto_seed)) 158 | log.info(aligned("FORCE_NO_HISTORY", force_no_history)) 159 | log.info("-" * log_length) 160 | -------------------------------------------------------------------------------- /utils/globals.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | 4 | import utils.configs as configs 5 | from utils.log import log 6 | 7 | DATA_FOLDER = "data" 8 | TOKENS_FILE = os.path.join(DATA_FOLDER, "token.txt") 9 | 
REFRESH_MAP_FILE = os.path.join(DATA_FOLDER, "refresh_map.json")
10 | ERROR_TOKENS_FILE = os.path.join(DATA_FOLDER, "error_token.txt")
11 | WSS_MAP_FILE = os.path.join(DATA_FOLDER, "wss_map.json")
12 | FP_FILE = os.path.join(DATA_FOLDER, "fp_map.json")
13 | SEED_MAP_FILE = os.path.join(DATA_FOLDER, "seed_map.json")
14 | CONVERSATION_MAP_FILE = os.path.join(DATA_FOLDER, "conversation_map.json")
15 | 
16 | count = 0
17 | token_list = []
18 | error_token_list = []
19 | refresh_map = {}
20 | wss_map = {}
21 | fp_map = {}
22 | seed_map = {}
23 | conversation_map = {}
24 | impersonate_list = [
25 |     "chrome99",
26 |     "chrome100",
27 |     "chrome101",
28 |     "chrome104",
29 |     "chrome107",
30 |     "chrome110",
31 |     "chrome116",
32 |     "chrome119",
33 |     "chrome120",
34 |     "chrome123",
35 |     "edge99",
36 |     "edge101",
37 | ] if not configs.impersonate_list else configs.impersonate_list
38 | 
39 | if not os.path.exists(DATA_FOLDER):
40 |     os.makedirs(DATA_FOLDER)
41 | 
42 | if os.path.exists(REFRESH_MAP_FILE):
43 |     with open(REFRESH_MAP_FILE, "r", encoding="utf-8") as f:
44 |         try:
45 |             refresh_map = json.load(f)
46 |         except Exception:
47 |             refresh_map = {}
48 | else:
49 |     refresh_map = {}
50 | 
51 | if os.path.exists(WSS_MAP_FILE):
52 |     with open(WSS_MAP_FILE, "r", encoding="utf-8") as f:
53 |         try:
54 |             wss_map = json.load(f)
55 |         except Exception:
56 |             wss_map = {}
57 | else:
58 |     wss_map = {}
59 | 
60 | if os.path.exists(FP_FILE):
61 |     with open(FP_FILE, "r", encoding="utf-8") as f:
62 |         try:
63 |             fp_map = json.load(f)
64 |         except Exception:
65 |             fp_map = {}
66 | else:
67 |     fp_map = {}
68 | 
69 | if os.path.exists(SEED_MAP_FILE):
70 |     with open(SEED_MAP_FILE, "r", encoding="utf-8") as f:
71 |         try:
72 |             seed_map = json.load(f)
73 |         except Exception:
74 |             seed_map = {}
75 | else:
76 |     seed_map = {}
77 | 
78 | if os.path.exists(CONVERSATION_MAP_FILE):
79 |     with open(CONVERSATION_MAP_FILE, "r", encoding="utf-8") as f:
80 |         try:
81 |             conversation_map = json.load(f)
82 |         except Exception:
83 |             conversation_map = {}
84 | else:
85 |     conversation_map = {}
86 | 
87 | if os.path.exists(TOKENS_FILE):
88 |     with open(TOKENS_FILE, 
"r", encoding="utf-8") as f: 89 | for line in f: 90 | if line.strip() and not line.startswith("#"): 91 | token_list.append(line.strip()) 92 | else: 93 | with open(TOKENS_FILE, "w", encoding="utf-8") as f: 94 | pass 95 | 96 | if os.path.exists(ERROR_TOKENS_FILE): 97 | with open(ERROR_TOKENS_FILE, "r", encoding="utf-8") as f: 98 | for line in f: 99 | if line.strip() and not line.startswith("#"): 100 | error_token_list.append(line.strip()) 101 | else: 102 | with open(ERROR_TOKENS_FILE, "w", encoding="utf-8") as f: 103 | pass 104 | 105 | if token_list: 106 | log.info(f"Token list count: {len(token_list)}, Error token list count: {len(error_token_list)}") 107 | log.info("-" * configs.log_length) -------------------------------------------------------------------------------- /utils/kv_utils.py: -------------------------------------------------------------------------------- 1 | def set_value_for_key(data, target_key, new_value): 2 | if isinstance(data, dict): 3 | for key, value in data.items(): 4 | if key == target_key: 5 | data[key] = new_value 6 | else: 7 | set_value_for_key(value, target_key, new_value) 8 | elif isinstance(data, list): 9 | for item in data: 10 | set_value_for_key(item, target_key, new_value) 11 | -------------------------------------------------------------------------------- /utils/log.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | default_format = "%(asctime)s | %(levelname)s | %(message)s" 4 | access_format = r'%(asctime)s | %(levelname)s | %(status_code)s - %(request_line)s - %(client_addr)s' 5 | 6 | logging.basicConfig(level=logging.INFO, format=default_format) 7 | 8 | class log: 9 | @staticmethod 10 | def info(message): 11 | logging.info(str(message)) 12 | 13 | @staticmethod 14 | def warning(message): 15 | logging.warning("\033[0;33m" + str(message) + "\033[0m") 16 | 17 | @staticmethod 18 | def error(message): 19 | logging.error("\033[0;31m" + str(message) + "\033[0m") 20 | # 
logging.error("\033[0;31m" + "-" * 50 + '\n| ' + str(message) + "\033[0m" + "\n" + "└" + "-" * 80)
21 | 
22 |     @staticmethod
23 |     def debug(message):
24 |         logging.debug("\033[0;37m" + str(message) + "\033[0m")
25 | 
26 |     @staticmethod
27 |     def custom(message, color_code):
28 |         """
29 |         Log a message in a custom color.
30 | 
31 |         Args:
32 |             message (str): the message to log.
33 |             color_code (str): ANSI color code.
34 |         """
35 |         logging.info(f"\033[0;{color_code}m{str(message)}\033[0m")
36 | 
37 | log = log()
38 | 
--------------------------------------------------------------------------------
/utils/retry.py:
--------------------------------------------------------------------------------
1 | from fastapi import HTTPException
2 | 
3 | from utils.log import log
4 | from utils.configs import retry_times
5 | 
6 | async def async_retry(func, *args, max_retries=retry_times, **kwargs):
7 |     for attempt in range(max_retries + 1):
8 |         try:
9 |             result = await func(*args, **kwargs)
10 |             return result
11 |         except HTTPException as e:
12 |             if attempt == max_retries:
13 |                 log.error(f"Raising exception {e.status_code}, {e.detail}")
14 |                 if e.status_code == 500:
15 |                     raise HTTPException(status_code=500, detail="Server error")
16 |                 raise HTTPException(status_code=e.status_code, detail=e.detail)
17 |             log.info(f"Attempt {attempt + 1} failed with status {e.status_code}: {e.detail}. Retrying...")
18 | 
19 | def retry(func, *args, max_retries=retry_times, **kwargs):
20 |     for attempt in range(max_retries + 1):
21 |         try:
22 |             result = func(*args, **kwargs)
23 |             return result
24 |         except HTTPException as e:
25 |             if attempt == max_retries:
26 |                 log.error(f"Raising exception {e.status_code}, {e.detail}")
27 |                 if e.status_code == 500:
28 |                     raise HTTPException(status_code=500, detail="Server error")
29 |                 raise HTTPException(status_code=e.status_code, detail=e.detail)
30 |             log.info(f"Attempt {attempt + 1} failed with status {e.status_code}: {e.detail}. 
Retrying...") 31 | -------------------------------------------------------------------------------- /version.txt: -------------------------------------------------------------------------------- 1 | 1.0.0-canary --------------------------------------------------------------------------------
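The retry helpers in `utils/retry.py` are easiest to see in isolation. Below is a minimal, self-contained sketch of the same pattern: call a coroutine, and on an HTTP-style exception retry up to `max_retries` extra times, re-raising after the final attempt. The `HTTPException` class here is a stand-in for `fastapi.HTTPException` so the snippet runs without FastAPI, and `flaky` is a hypothetical upstream call used only for illustration.

```python
import asyncio

# Stand-in for fastapi.HTTPException so this sketch runs without FastAPI.
class HTTPException(Exception):
    def __init__(self, status_code, detail):
        self.status_code = status_code
        self.detail = detail

async def async_retry(func, *args, max_retries=3, **kwargs):
    # Try once, then up to `max_retries` more times; re-raise on the last attempt.
    for attempt in range(max_retries + 1):
        try:
            return await func(*args, **kwargs)
        except HTTPException:
            if attempt == max_retries:
                raise

calls = []

async def flaky():
    # Hypothetical upstream call that fails twice before succeeding.
    calls.append(1)
    if len(calls) < 3:
        raise HTTPException(500, "Server error")
    return "ok"

result = asyncio.run(async_retry(flaky, max_retries=3))
print(result)  # succeeds on the third attempt
```

The real implementation adds logging between attempts and normalizes 500 responses into a generic "Server error" detail; the control flow is otherwise the same.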