├── requirements.txt ├── README.md ├── cursor-ws-bridge.user.js └── main.py /requirements.txt: -------------------------------------------------------------------------------- 1 | fastapi 2 | uvicorn[standard] 3 | pydantic 4 | python-multipart 5 | 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Cursor Web to API Bridge 2 | 3 | 将 [Cursor](https://cursor.com/) 网页聊天转换为与 OpenAI 兼容的 API,让您可以在任何支持 OpenAI API 的第三方应用中使用 Cursor 的强大模型。 4 | 5 | ## ✨ 特性 6 | 7 | - **OpenAI 兼容**: 无缝集成到任何支持 OpenAI API 的工具中 (如:NextChat, LobeChat 等)。 8 | - **支持流式传输**: 实时获取响应,体验与原生网页版一致。 9 | - **多模型支持**: 支持 Cursor 提供的所有最新模型。 10 | - **易于部署**: 只需一个 Python 后端和一个浏览器脚本即可运行。 11 | 12 | ## 🚀 工作原理 13 | 14 | 本项目由两部分组成: 15 | 16 | 1. **Python FastAPI 后端 (`main.py`)**: 17 | - 启动一个本地 HTTP 服务器,提供与 OpenAI 对齐的 `/v1/chat/completions` 和 `/v1/models` 接口。 18 | - 维护一个 WebSocket 服务 (`/ws`),用于与浏览器脚本通信。 19 | - 接收到 API 请求后,通过 WebSocket 将其转发给浏览器脚本。 20 | 21 | 2. **浏览器用户脚本 (`cursor-ws-bridge.user.js`)**: 22 | - 需要通过 [Tampermonkey](https://www.tampermonkey.net/) 等扩展安装在浏览器中。 23 | - 当您打开 Cursor 网站时,此脚本会自动运行并连接到本地的 Python WebSocket 服务。 24 | - 脚本会拦截 Cursor 网站的前端 `fetch` 请求。当从后端收到聊天请求时,它会模拟一次真实的网站聊天请求,并将 Cursor 返回的数据流通过 WebSocket 传回给后端。 25 | 26 | 整个流程如下: 27 | ``` 28 | [第三方应用] -> [FastAPI 后端] -> [WebSocket] -> [浏览器脚本] -> [Cursor 官网 API] -> [AI 模型] 29 | ``` 30 | 响应数据再按原路返回。 31 | 32 | ## ⚙️ 如何使用 33 | 34 | ### 步骤 1: 准备环境 35 | 36 | - 安装 [Python 3.8+](https://www.python.org/downloads/)。 37 | - 安装一个用户脚本管理器,如 [Tampermonkey](https://www.tampermonkey.net/) (推荐)。 38 | 39 | ### 步骤 2: 启动后端服务 40 | 41 | 1. 克隆或下载本仓库。 42 | 2. 安装 Python 依赖: 43 | ```bash 44 | pip install -r requirements.txt 45 | ``` 46 | 3. 启动 FastAPI 服务器: 47 | ```bash 48 | # 设置您的 API 密钥 (在 Linux/macOS) 49 | export CURSOR_API_KEY="your-secret-key" 50 | 51 | # (在 Windows PowerShell) 52 | # $env:CURSOR_API_KEY="your-secret-key" 53 | 54 | python main.py 55 | ``` 56 | 服务默认运行在 `http://localhost:8765`。 57 | 58 | ### 步骤 3: 安装用户脚本 59 | 60 | 1. 打开浏览器的 Tampermonkey 扩展仪表盘。 61 | 2. 选择 "新建脚本"。 62 | 3. 将 `cursor-ws-bridge.user.js` 的全部内容复制并粘贴到编辑器中。 63 | 4. 保存脚本。 64 | 65 | ### 步骤 4: 运行 66 | 67 | 1. 确保后端服务 (`main.py`) 正在运行。 68 | 2. 在安装了用户脚本的浏览器中,打开 [Cursor](https://cursor.com/) 官网并保持该页面打开。 69 | 3. 脚本会自动连接到后端。您可以在浏览器开发者工具的控制台中看到 `[cursor-ws-bridge]` 相关的日志,或者访问 `http://localhost:8765` 查看服务状态。 70 | ```json 71 | { 72 | "status": "running", 73 | "browser_connected": true 74 | } 75 | ``` 76 | 当 `browser_connected` 为 `true` 时,代表一切就绪。 77 | 78 | ### 步骤 5: 调用 API 79 | 80 | 现在您可以在任何支持 OpenAI API 的应用中配置使用此接口了。 81 | 82 | - **API 地址**: `http://localhost:8765/v1` 83 | - **API 密钥**: `your-secret-key` (或者您在环境变量中设置的值) 84 | - **模型名称**: 从下面的模型列表中选择 85 | 86 | 这是一个使用 `curl` 的示例: 87 | 88 | ```bash 89 | curl http://localhost:8765/v1/chat/completions \ 90 | -H "Content-Type: application/json" \ 91 | -H "Authorization: Bearer your-secret-key" \ 92 | -d '{ 93 | "model": "gpt-4o", 94 | "messages": [ 95 | { 96 | "role": "user", 97 | "content": "你好,请介绍一下你自己" 98 | } 99 | ], 100 | "stream": true 101 | }' 102 | ``` 103 | 104 | ## 🧠 可用模型 105 | 106 | 您可以通过访问 `http://localhost:8765/v1/models` 端点获取实时模型列表。当前支持的模型包括: 107 | 108 | | 模型 ID | 所属机构 | 109 | |-----------------------|------------| 110 | | `gpt-5` | openai | 111 | | `gpt-5-codex` | openai | 112 | | `gpt-5-mini` | openai | 113 | | `gpt-5-nano` | openai | 114 | | `gpt-4.1` | openai | 115 | | `gpt-4o` | openai | 116 | | `claude-3.5-sonnet` | anthropic | 117 | | `claude-3.5-haiku` | anthropic | 118 | | `claude-3.7-sonnet` | anthropic | 119 | | `claude-4-sonnet` | anthropic | 120 | | `claude-4-opus` | anthropic | 121 | | `claude-4.1-opus` | anthropic | 122 | | `gemini-2.5-pro` | google | 123 | | `gemini-2.5-flash` | google | 124 | | `o3` | openai | 125 | | `o4-mini` | openai | 126 | | `deepseek-r1` | deepseek | 127 | | `deepseek-v3.1` | deepseek | 128 | | `kimi-k2-instruct` | moonshot-ai| 129 | | `grok-3` | xai | 130 | | `grok-3-mini` | xai | 131 | | `grok-4` | xai | 132 | 133 | ## ⚠️ 注意事项 134 | 135 | - 本项目依赖于 Cursor 网站的前端实现,如果官网进行大的改版,用户脚本可能需要适配更新。 136 | - 请确保浏览器页面始终处于打开状态,否则 API 将无法工作。 137 | - 这是一个社区项目,与 Cursor 官方无关。请合理使用。 138 | -------------------------------------------------------------------------------- /cursor-ws-bridge.user.js: -------------------------------------------------------------------------------- 1 | // ==UserScript== 2 | // @name Cursor WebSocket Bridge 3 | // @namespace https://tampermonkey.net/ 4 | // @version 1.0.0 5 | // @description Bridges Cursor's chat API to a WebSocket for OpenAI-compatible API access. 6 | // @author You 7 | // @match https://cursor.com/* 8 | // @run-at document-start 9 | // @grant none 10 | // ==/UserScript== 11 | 12 | (function () { 13 | 'use strict'; 14 | 15 | /** ========= 配置 ========= **/ 16 | const WEBSOCKET_URL = 'ws://localhost:8765/ws'; 17 | const DEBUG = true; 18 | const log = (...args) => { if (DEBUG) console.log('[cursor-ws-bridge]', ...args); }; 19 | 20 | /** ========= WebSocket 连接管理 ========= **/ 21 | let ws; 22 | function connect() { 23 | log(`正在连接 WebSocket: ${WEBSOCKET_URL}...`); 24 | ws = new WebSocket(WEBSOCKET_URL); 25 | 26 | ws.onopen = () => { 27 | log('WebSocket 连接已建立。'); 28 | ws.send(JSON.stringify({ type: 'status', data: { status: 'ready' } })); 29 | }; 30 | 31 | ws.onmessage = (event) => { 32 | try { 33 | const message = JSON.parse(event.data); 34 | log('收到服务端消息:', message); 35 | if (message.type === 'chat_request') { 36 | handleChatRequest(message.data); 37 | } 38 | } catch (e) { 39 | log('解析服务端消息出错:', e); 40 | } 41 | }; 42 | 43 | ws.onclose = (event) => { 44 | log(`WebSocket 连接已关闭 (code: ${event.code})。5秒后重新连接...`); 45 | setTimeout(connect, 5000); 46 | }; 47 | 48 | ws.onerror = (error) => { 49 | log('WebSocket 发生错误:', error); 50 | // onerror 会自动触发 onclose,无需手动重连 51 | }; 52 | } 53 | 54 | /** ========= 聊天请求处理 ========= **/ 55 | async function handleChatRequest(requestData) { 56 | const { messages, model, request_id } = requestData; 57 | log(`处理聊天请求 ${request_id}, 模型: ${model}`); 58 | 59 | const listener = buildListenerForStreaming(request_id); 60 | window.__upstreamMessages = messages; 61 | 62 | const controller = new AbortController(); 63 | 64 | try { 65 | await fetch('/api/chat', { 66 | method: 'POST', 67 | headers: { 'content-type': 'application/json' }, 68 | body: JSON.stringify({ 69 | model: model || 'claude-sonnet-4-20250514', 70 | messages: [], // 会被拦截器替换 71 | trigger: 'submit-message', 72 | __rid: listener.rid, 73 | }), 74 | signal: controller.signal 75 | }); 76 | } catch (error) { 77 | if (error.name !== 'AbortError') { 78 | log(`请求 ${request_id} 的 fetch 失败:`, error); 79 | sendError(request_id, `Fetch failed: ${error.message}`); 80 | cleanupListener(listener); 81 | } 82 | } 83 | } 84 | 85 | function sendData(payload) { 86 | if (ws && ws.readyState === WebSocket.OPEN) { 87 | ws.send(JSON.stringify(payload)); 88 | } 89 | } 90 | 91 | function sendError(requestId, errorMessage) { 92 | sendData({ 93 | type: 'error', 94 | data: { 95 | request_id: requestId, 96 | error: errorMessage 97 | } 98 | }); 99 | } 100 | 101 | /** ========= SSE 流处理 ========= **/ 102 | const bridge = { listeners: new Map() }; // rid -> listener 103 | 104 | function cleanupListener(listener) { 105 | if (listener) { 106 | bridge.listeners.delete(listener.rid); 107 | } 108 | } 109 | 110 | window.emitMeta = (meta) => { 111 | const rid = meta?.rid; if (!rid) return; 112 | const l = bridge.listeners.get(rid); 113 | if (l) { 114 | l.active = true; 115 | log(`[SSE] 元数据已关联, rid=${rid}, request_id=${l.request_id}`); 116 | } 117 | }; 118 | 119 | window.emitDelta = (rid, delta) => { 120 | const l = bridge.listeners.get(rid); 121 | if (l?.active) l.onDelta(delta); 122 | }; 123 | 124 | window.emitError = (rid, errorText) => { 125 | const l = bridge.listeners.get(rid); 126 | if (l?.active) l.onError(errorText); 127 | }; 128 | 129 | window.emitDone = (rid) => { 130 | const l = bridge.listeners.get(rid); 131 | if (l?.active) l.onDone(); 132 | }; 133 | 134 | window.emitUsage = (rid, usage) => { 135 | const l = bridge.listeners.get(rid); 136 | if (l?.active) l.onUsage(usage); 137 | }; 138 | 139 | function buildListenerForStreaming(request_id) { 140 | const rid = 'ws_req_' + Math.random().toString(16).slice(2); 141 | const listener = { 142 | rid, 143 | request_id, 144 | active: false, 145 | onDelta: (delta) => { 146 | sendData({ type: 'chat_delta', data: { request_id, content: delta } }); 147 | }, 148 | onError: (errorText) => { 149 | sendData({ type: 'error', data: { request_id, error: errorText } }); 150 | }, 151 | onUsage: (usage) => { 152 | sendData({ type: 'chat_usage', data: { request_id, usage } }); 153 | }, 154 | onDone: () => { 155 | log(`[SSE] 流结束, request_id=${request_id}`); 156 | sendData({ type: 'chat_done', data: { request_id } }); 157 | listener.active = false; 158 | cleanupListener(listener); 159 | } 160 | }; 161 | bridge.listeners.set(rid, listener); 162 | return listener; 163 | } 164 | 165 | /** ========= Fetch 拦截器 ========= **/ 166 | const originalFetch = window.fetch; 167 | if (!originalFetch.__isBridge) { 168 | window.fetch = async function (input, init) { 169 | const url = typeof input === 'string' ? input : input?.url || ''; 170 | const isChat = url.includes('/api/chat') && init?.method === 'POST'; 171 | 172 | if (!isChat) { 173 | return originalFetch(input, init); 174 | } 175 | 176 | log('[Fetch Intercept] 拦截到 /api/chat 请求'); 177 | let body; 178 | try { 179 | body = JSON.parse(init.body); 180 | } catch (e) { 181 | return originalFetch(input, init); 182 | } 183 | 184 | if (window.__upstreamMessages) { 185 | body.messages = (window.__upstreamMessages || []).map(m => ({ 186 | role: m.role, 187 | parts: [{ type: 'text', text: String(m.content ?? '') }], 188 | })); 189 | log(`[Fetch Intercept] 已注入 ${body.messages.length} 条上游消息`); 190 | delete window.__upstreamMessages; 191 | } 192 | 193 | const newInit = { ...init, body: JSON.stringify(body) }; 194 | const response = await originalFetch(input, newInit); 195 | 196 | // 克隆响应体用于 SSE 解析 197 | if (response.ok && response.body) { 198 | const streamClone = response.clone().body; 199 | parseSseStream(streamClone, body.__rid); 200 | } 201 | 202 | return response; 203 | }; 204 | window.fetch.__isBridge = true; 205 | } 206 | 207 | /** ========= SSE 解析器 ========= **/ 208 | async function parseSseStream(stream, rid) { 209 | const reader = stream.getReader(); 210 | const decoder = new TextDecoder(); 211 | let buffer = ''; 212 | 213 | const processChunk = (chunk) => { 214 | const lines = chunk.split('\n').filter(line => line.startsWith('data:')); 215 | for (const line of lines) { 216 | const jsonStr = line.replace(/^data: /, ''); 217 | if (jsonStr === '[DONE]') { 218 | window.emitDone(rid); 219 | return; 220 | } 221 | try { 222 | const event = JSON.parse(jsonStr); 223 | if (event.type === 'text-delta' && typeof event.delta === 'string') { 224 | window.emitDelta(rid, event.delta); 225 | } else if (event.type === 'error' && event.errorText) { 226 | window.emitError(rid, event.errorText); 227 | } else if (event.messageMetadata?.usage) { 228 | window.emitUsage(rid, event.messageMetadata.usage); 229 | } else if (['text-end', 'finish-step', 'finish'].includes(event.type)) { 230 | window.emitDone(rid); 231 | } 232 | } catch (e) { 233 | // 忽略解析错误 234 | } 235 | } 236 | }; 237 | 238 | try { 239 | window.emitMeta({ rid }); 240 | while (true) { 241 | const { done, value } = await reader.read(); 242 | if (done) { 243 | window.emitDone(rid); 244 | break; 245 | } 246 | buffer += decoder.decode(value, { stream: true }); 247 | let boundary; 248 | while ((boundary = buffer.indexOf('\n\n')) !== -1) { 249 | const chunk = buffer.substring(0, boundary); 250 | buffer = buffer.substring(boundary + 2); 251 | processChunk(chunk); 252 | } 253 | } 254 | } catch (e) { 255 | log(`[SSE] 解析流时出错, rid=${rid}:`, e); 256 | window.emitDone(rid); 257 | } 258 | } 259 | 260 | /** ========= 启动 ========= **/ 261 | connect(); 262 | 263 | })(); 264 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import logging 4 | import time 5 | import uuid 6 | import os 7 | from typing import Dict, Any, Optional 8 | 9 | from fastapi import FastAPI, WebSocket, WebSocketDisconnect, HTTPException, Depends 10 | from fastapi.security import OAuth2PasswordBearer 11 | from fastapi.responses import StreamingResponse, JSONResponse 12 | from fastapi.middleware.cors import CORSMiddleware 13 | from pydantic import BaseModel 14 | import uvicorn 15 | 16 | # --- Pydantic 模型以兼容 OpenAI --- 17 | 18 | class ChatMessage(BaseModel): 19 | role: str 20 | content: str 21 | 22 | class ChatCompletionRequest(BaseModel): 23 | model: str 24 | messages: list[ChatMessage] 25 | stream: bool = False 26 | 27 | class ModelCard(BaseModel): 28 | id: str 29 | object: str = "model" 30 | owned_by: str = "unknown" 31 | created: int = int(time.time()) 32 | 33 | class ModelList(BaseModel): 34 | object: str = "list" 35 | data: list[ModelCard] 36 | 37 | # --- 基本设置 --- 38 | 39 | # 从环境变量中读取 API 密钥,如果未设置则使用默认值 40 | API_KEY = os.environ.get("CURSOR_API_KEY", "your-default-api-key") 41 | if API_KEY == "your-default-api-key": 42 | print("\033[93m" + "警告: 正在使用默认 API 密钥。请设置 CURSOR_API_KEY 环境变量以确保安全。" + "\033[0m") 43 | 44 | oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token") 45 | 46 | async def verify_api_key(token: str = Depends(oauth2_scheme)): 47 | """依赖项:验证 API 密钥""" 48 | if token != API_KEY: 49 | raise HTTPException( 50 | status_code=401, 51 | detail="无效的 API 密钥", 52 | headers={"WWW-Authenticate": "Bearer"}, 53 | ) 54 | 55 | logging.basicConfig(level=logging.INFO) 56 | logger = logging.getLogger(__name__) 57 | 58 | app = FastAPI( 59 | title="Cursor OpenAI-Compatible API Bridge", 60 | description="一个通过 WebSocket 桥接浏览器脚本,将 Cursor 服务转换为 OpenAI 兼容 API 的服务。", 61 | version="1.0.0" 62 | ) 63 | 64 | # --- CORS 中间件 --- 65 | app.add_middleware( 66 | CORSMiddleware, 67 | allow_origins=["*"], # 允许所有来源 68 | allow_credentials=True, 69 | allow_methods=["*"], # 允许所有方法 70 | allow_headers=["*"], # 允许所有头 71 | ) 72 | 73 | 74 | # --- 模型列表 --- 75 | # 基于原始脚本和当前 Cursor 支持的模型列表 76 | AVAILABLE_MODELS = [ 77 | # GPT-5 系列 78 | {"id": "gpt-5", "owned_by": "openai"}, 79 | {"id": "gpt-5-codex", "owned_by": "openai"}, 80 | {"id": "gpt-5-mini", "owned_by": "openai"}, 81 | {"id": "gpt-5-nano", "owned_by": "openai"}, 82 | 83 | # GPT-4.1 系列 84 | {"id": "gpt-4.1", "owned_by": "openai"}, 85 | {"id": "gpt-4o", "owned_by": "openai"}, 86 | 87 | # Claude 系列 88 | {"id": "claude-3.5-sonnet", "owned_by": "anthropic"}, 89 | {"id": "claude-3.5-haiku", "owned_by": "anthropic"}, 90 | {"id": "claude-3.7-sonnet", "owned_by": "anthropic"}, 91 | {"id": "claude-4-sonnet", "owned_by": "anthropic"}, 92 | {"id": "claude-4-opus", "owned_by": "anthropic"}, 93 | {"id": "claude-4.1-opus", "owned_by": "anthropic"}, 94 | 95 | # Gemini 2.5 系列 96 | {"id": "gemini-2.5-pro", "owned_by": "google"}, 97 | {"id": "gemini-2.5-flash", "owned_by": "google"}, 98 | 99 | # 其他模型 100 | {"id": "o3", "owned_by": "openai"}, 101 | {"id": "o4-mini", "owned_by": "openai"}, 102 | {"id": "deepseek-r1", "owned_by": "deepseek"}, 103 | {"id": "deepseek-v3.1", "owned_by": "deepseek"}, 104 | {"id": "kimi-k2-instruct", "owned_by": "moonshot-ai"}, 105 | {"id": "grok-3", "owned_by": "xai"}, 106 | {"id": "grok-3-mini", "owned_by": "xai"}, 107 | {"id": "grok-4", "owned_by": "xai"}, 108 | ] 109 | 110 | # --- 连接和状态管理 --- 111 | 112 | class ConnectionManager: 113 | def __init__(self): 114 | self.active_connection: WebSocket | None = None 115 | self.pending_requests: Dict[str, asyncio.Queue] = {} 116 | 117 | async def connect(self, websocket: WebSocket): 118 | await websocket.accept() 119 | if self.active_connection: 120 | logger.warning("新的浏览器脚本连接,将替换现有连接。") 121 | await self.active_connection.close(code=1012, reason="被新连接替换") 122 | self.active_connection = websocket 123 | logger.info("浏览器脚本已连接。") 124 | 125 | def disconnect(self, websocket: WebSocket): 126 | if self.active_connection is websocket: 127 | self.active_connection = None 128 | logger.info("浏览器脚本已断开连接。") 129 | for request_id, queue in self.pending_requests.items(): 130 | error_msg = {"type": "error", "data": {"error": "WebSocket connection lost."}} 131 | queue.put_nowait(error_msg) 132 | self.pending_requests.clear() 133 | 134 | async def send_to_browser(self, message: dict): 135 | if self.active_connection: 136 | await self.active_connection.send_text(json.dumps(message)) 137 | else: 138 | raise ConnectionError("没有活动的浏览器脚本连接。服务不可用。") 139 | 140 | def create_request_queue(self, request_id: str) -> asyncio.Queue: 141 | queue = asyncio.Queue() 142 | self.pending_requests[request_id] = queue 143 | return queue 144 | 145 | def get_request_queue(self, request_id: str) -> asyncio.Queue | None: 146 | return self.pending_requests.get(request_id) 147 | 148 | def remove_request_queue(self, request_id: str): 149 | if request_id in self.pending_requests: 150 | del self.pending_requests[request_id] 151 | 152 | manager = ConnectionManager() 153 | 154 | # --- WebSocket 端点 --- 155 | 156 | @app.websocket("/ws") 157 | async def websocket_endpoint(websocket: WebSocket): 158 | await manager.connect(websocket) 159 | try: 160 | while True: 161 | data = await websocket.receive_text() 162 | message = json.loads(data) 163 | logger.debug(f"收到浏览器消息: {message}") 164 | 165 | if "data" in message and "request_id" in message["data"]: 166 | request_id = message["data"]["request_id"] 167 | queue = manager.get_request_queue(request_id) 168 | if queue: 169 | await queue.put(message) 170 | else: 171 | logger.warning(f"收到未知 request_id 的消息: {request_id}") 172 | except WebSocketDisconnect: 173 | manager.disconnect(websocket) 174 | except Exception as e: 175 | logger.error(f"WebSocket 异常: {e}") 176 | manager.disconnect(websocket) 177 | 178 | # --- OpenAI 兼容的 API 端点 --- 179 | 180 | @app.get("/v1/models", response_model=ModelList) 181 | async def list_models(_: str = Depends(verify_api_key)): 182 | """返回可用模型列表""" 183 | return ModelList( 184 | data=[ModelCard(id=model["id"], owned_by=model["owned_by"]) for model in AVAILABLE_MODELS] 185 | ) 186 | 187 | async def stream_generator(request_id: str, model: str, queue: asyncio.Queue): 188 | try: 189 | while True: 190 | message = await queue.get() 191 | msg_type = message.get("type") 192 | data = message.get("data", {}) 193 | 194 | if msg_type == "error": 195 | error_msg = data.get('error', '未知流错误') 196 | logger.error(f"请求 {request_id} 在流式传输中发生错误: {error_msg}") 197 | 198 | # 以 OpenAI 兼容的格式在流中返回错误信息 199 | error_chunk = { 200 | "id": f"chatcmpl-{request_id}", 201 | "object": "chat.completion.chunk", 202 | "created": int(asyncio.get_event_loop().time()), 203 | "model": model, 204 | "choices": [{ 205 | "index": 0, 206 | "delta": {"content": f"\n\n[ERROR]: {error_msg}"}, 207 | "finish_reason": "stop" 208 | }] 209 | } 210 | yield f"data: {json.dumps(error_chunk)}\n\n" 211 | break 212 | 213 | if msg_type == "chat_done": 214 | break 215 | 216 | if msg_type == "chat_delta": 217 | chunk = { 218 | "id": f"chatcmpl-{request_id}", 219 | "object": "chat.completion.chunk", 220 | "created": int(asyncio.get_event_loop().time()), 221 | "model": model, 222 | "choices": [{ 223 | "index": 0, 224 | "delta": {"content": data.get("content", "")}, 225 | "finish_reason": None 226 | }] 227 | } 228 | yield f"data: {json.dumps(chunk)}\n\n" 229 | 230 | final_chunk = { 231 | "id": f"chatcmpl-{request_id}", 232 | "object": "chat.completion.chunk", 233 | "created": int(asyncio.get_event_loop().time()), 234 | "model": model, 235 | "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}] 236 | } 237 | yield f"data: {json.dumps(final_chunk)}\n\n" 238 | yield "data: [DONE]\n\n" 239 | finally: 240 | manager.remove_request_queue(request_id) 241 | 242 | async def non_stream_handler(request_id: str, model: str, queue: asyncio.Queue): 243 | full_content = "" 244 | usage_data = {} 245 | try: 246 | while True: 247 | message = await queue.get() 248 | msg_type = message.get("type") 249 | data = message.get("data", {}) 250 | 251 | if msg_type == "error": 252 | error_msg = data.get('error', '未知内部错误') 253 | status_code = 400 if "not found" in error_msg.lower() else 500 254 | raise HTTPException(status_code=status_code, detail=error_msg) 255 | 256 | if msg_type == "chat_delta": 257 | full_content += data.get("content", "") 258 | elif msg_type == "chat_usage": 259 | usage_data = data.get("usage", {}) 260 | elif msg_type == "chat_done": 261 | break 262 | 263 | return { 264 | "id": f"chatcmpl-{request_id}", 265 | "object": "chat.completion", 266 | "created": int(asyncio.get_event_loop().time()), 267 | "model": model, 268 | "choices": [{ 269 | "index": 0, 270 | "message": {"role": "assistant", "content": full_content}, 271 | "finish_reason": "stop" 272 | }], 273 | "usage": usage_data 274 | } 275 | finally: 276 | manager.remove_request_queue(request_id) 277 | 278 | @app.post("/v1/chat/completions") 279 | async def chat_completions(request: ChatCompletionRequest, _: str = Depends(verify_api_key)): 280 | if not manager.active_connection: 281 | raise HTTPException(status_code=503, detail="浏览器脚本未连接,服务不可用。") 282 | 283 | request_id = str(uuid.uuid4()) 284 | 285 | queue = manager.create_request_queue(request_id) 286 | 287 | try: 288 | await manager.send_to_browser({ 289 | "type": "chat_request", 290 | "data": { 291 | "request_id": request_id, 292 | "model": request.model, 293 | "messages": [msg.dict() for msg in request.messages] 294 | } 295 | }) 296 | except ConnectionError as e: 297 | manager.remove_request_queue(request_id) 298 | raise HTTPException(status_code=503, detail=str(e)) 299 | 300 | if request.stream: 301 | return StreamingResponse(stream_generator(request_id, request.model, queue), media_type="text/event-stream") 302 | else: 303 | response_data = await non_stream_handler(request_id, request.model, queue) 304 | return JSONResponse(content=response_data) 305 | 306 | @app.get("/") 307 | async def root(): 308 | return {"status": "running", "browser_connected": manager.active_connection is not None} 309 | 310 | # --- 主入口点 --- 311 | 312 | if __name__ == "__main__": 313 | uvicorn.run(app, host="0.0.0.0", port=8765) 314 | --------------------------------------------------------------------------------